Raw formats can have 12, 14, or 16 bits depending on the readout precision of the sensor. DR depends on what you mean: per-pixel or normalized to area. Per-pixel DR can be horrible, but with enough pixels your area-normalized DR can be very high. Think of a 1-bit-per-pixel ADC, but for the area you’re interested in you have a million pixels.
This is a hypothetical upper bound.
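To make that hypothetical concrete, here’s a tiny Python sketch (purely illustrative; it assumes sensor noise acts as a natural dither for the 1-bit comparator): averaging a million 1-bit readings over an area recovers a value with far more than 1 bit of precision.

```python
import numpy as np

rng = np.random.default_rng(0)
true_level = 0.3172                       # hypothetical scene radiance in [0, 1]
# Each pixel is a 1-bit ADC; with noise acting as dither, the comparator
# fires with probability equal to the true level.
pixels = rng.random(1_000_000) < true_level
print(pixels.mean())                      # area average ~0.317, far beyond 1-bit per-pixel precision
```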
With 16-bit half-float in OpenEXR, for example, you can cover 30 f-stops of dynamic range with 1024 steps per f-stop. If that’s not enough, there is 32-bit float. So for all intents and purposes, for any sensor technology and capture technique (stacking etc.), calculation precision (and storage precision, if you take care) is, while not infinite, more than good enough.
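If anyone wants to check those numbers, they fall straight out of the half-float layout (5 exponent bits, 10 mantissa bits); a quick sanity check in Python, nothing darktable-specific:

```python
import numpy as np

f16 = np.finfo(np.float16)
stops = np.log2(float(f16.max) / float(f16.tiny))  # smallest normal to max value: ~30 f-stops
steps_per_stop = 2 ** 10                           # 10 mantissa bits -> 1024 values per f-stop
print(round(stops), steps_per_stop)                # -> 30 1024
```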
I’m more of the opinion that image processing algorithms are always imperfect.
That is the reason why there are multiple demosaicing, denoising, and chroma-preservation options, and so on.
Coming back to this topic: we don’t have to include every filmic-style tonemapper ever invented in darktable (there are too many), but this one is really promising.
I’d like to see it in a next darktable release.
I’d like to see the crosstalk desaturation and resaturation too; we have so many options in the chroma-preservation drop-down menu, we could add it as an alternative to the simple RGB curve.
It’s a hack, a compromise, exactly like RGB-ratio preserving and every other module.
I’m not some kind of RGB purist; sometimes the RGB-ratio-preserving option is the best choice, and if it weren’t implemented I would have requested it from the developers.
I have to point out that I don’t like it when something is hidden from the user and undocumented.
For example, look at the GIMP color balance: https://docs.gimp.org/en/images/menus/colors/color-balance-dialog.png
There is a preserve luminance option; it’s up to the user to decide whether to check it. This doesn’t happen in darktable.
I have the same issue with the gamut compression inside filmic: I’m fine with it, but sometimes it’s better not to use it.
Please don’t hide important options from the GUI.
I agree that the comments should stay correct in all circumstances. There is no need to send harsh comments to others.
On my side I have not yet tested this new module. Given the amount of exchange between Aurélien and Jandren and the corresponding adjustments made, I’m pretty sure that this module will finally find its place in dt’s set of modules.
Again, I have not tested it, but if this module gives good final results with far fewer controls, I suppose it will be a nice alternative to the more advanced filmic.
Once a person actually takes the time to learn filmic and its rules/flow, the results are much more predictable on various images in various conditions.
I haven’t tested sigmoid yet, but the premise seems to aim for simplicity and robustness in every condition.
And if I’m not much mistaken, the sigmoid stuff could be part of filmic, since filmic does more than just tonemap.
I think in the scene it’s infinite, and I guess some day the captured number will be greater than 65000, though never infinite, so I suppose there is a degree of future-proofing??? You may be raising a good point though: is that “future proofing” in any way impacting the way data in a 0-65000 window is processed? I always thought the idea was to make it work no matter the physical bounds, theoretically up to infinity??
@priort and @MarcoNex the zero to infinity part does not introduce any extra complexity, it’s just the case that the function supports this range. The curve converges towards the user-defined display white and black targets with contrast defining the rate of convergence. The kind of free lunch you sometimes can get.
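To illustrate what that convergence looks like, here’s a toy version of the idea in Python (my own sketch, not the actual module code; parameter names are made up for the example): no matter how far the scene-referred input grows, the output never exceeds the display white target, and contrast only changes how fast it gets there.

```python
import numpy as np

def log_logistic_tone(x, contrast=1.5, display_white=1.0, display_black=0.0, grey=0.1845):
    """Toy scene->display mapping: tends to display_black as x -> 0 and to
    display_white as x -> infinity; contrast sets the rate of convergence."""
    t = (np.maximum(x, 1e-9) / grey) ** contrast
    return display_black + (display_white - display_black) * t / (1.0 + t)

scene = np.array([0.0, 0.1845, 10.0, 1e6])      # unbounded scene-referred values
print(log_logistic_tone(scene))                 # stays within [display_black, display_white]
print(log_logistic_tone(scene, contrast=3.0))   # higher contrast converges faster
```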
@johnny-bit I have taken the time to learn filmic (now I know its source code as well :sweat_smile:) so I’m trying to take some learnings from that with me when exploring this approach. Please try what I have done. Begin with the Python-based page I made so others can explore how the tone curve behaves! https://share.streamlit.io/jandren/tone-curve-explorer Then compile the PR and run some of your own pictures through it.
On the topic of integrating it in the filmic module, let’s wait and see what the consensus is later; it’s easier for me to continue this development in a separate module for now. It reduces the risk of merge conflicts and I can destroy things however I like.
@Pascal_Obry Thanks for standing up for a positive environment! I only want this to be merged if it meets and exceeds all darktable requirements for what a scene-to-display mapping should do. If it doesn’t, well, that’s that. What I do expect is some good feedback, testing, and constructive critique; really no use in trying otherwise.
The data coming from the camera may be bounded to whatever the bit-depth of the camera can encode (and the sensor can provide), but while processing an image in Darktable, that range can change, through use of any number of operations. Having the range unbounded ensures that you can’t accidentally “cut off” the top range of your data by e.g. increasing exposure in one module, then reducing contrast in another. If you were operating with limited numbers, you’d be clipping highlights all the time, by accident.
…and of course there’re also “true” HDR images with dynamic ranges way above what 16-bit integers can encode.
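A toy numerical illustration of that accidental cut-off (just a sketch; the two operations stand in for an exposure module followed by a contrast/tone module, they are not darktable’s actual code):

```python
import numpy as np

scene = np.array([0.02, 0.18, 0.9, 1.4])     # hypothetical scene-referred pixel values

exposed = scene * 2 ** 2                     # module 1: +2 EV exposure

# A bounded pipeline would clamp to [0, 1] after every module, destroying
# the distinction between the two brightest pixels before module 2 runs.
bounded = np.clip(exposed, 0.0, 1.0)

def reduce_contrast(x, factor=0.25, pivot=0.18):
    """Module 2: pull values towards middle grey, enough to bring highlights back below 1.0."""
    return pivot * (x / pivot) ** factor

print(reduce_contrast(bounded))   # clipped highlights come out identical -> detail lost for good
print(reduce_contrast(exposed))   # unbounded floats keep them distinct -> detail recoverable
```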
Could I maybe ask you for some help?
I’m continuously trying out the sigmoid mapping on my pictures in search of one where the sigmoid does not work, but I’m not finding any. It would be awesome if you could share some pictures where you feel the filmic module really saved your butt, as it would be interesting to see whether the skewed log-logistic can deliver at the same level!
I have found one problem so far: some blown-out highlights can be hard to push all the way to display white with the sigmoid, as they aren’t as bright as they should be. I’m not too worried about that though, as it highlights the need for an improved highlight-reconstruction workflow in darktable rather than a problem with the actual mapping.
Here are three pictures for you to look at while waiting for more coding updates: IMG_5719.CR2 (22.1 MB)
Thanks, @eyedear, and lovely that you want to test it!
You do not need to merge it, just check out the branch. There are some suggestions at the beginning of this thread; I would suggest just doing a fresh clone of my fork for the least amount of git work: GitHub - jandren/darktable at sigmoid_tone_mapping
As for compiling and using, please install as a dev version as it will fuck up your database otherwise. See the darktable main Readme.
And then just fire away! Note that the parameters aren’t fully orthogonal yet, so you will need to readjust the contrast after adjusting the skew to maintain the contrast from before. Looking forward to your comments, especially if you find images where it struggles! Weird edge cases are my favorites. Seriously, those are the ones where you find bugs and problems!
Of course it is always possible to fine-tune curves and improve the interface, but let me add something to this discussion that has not been touched on earlier. Filmic has for the first time forged a real and important link between the camera and the software. The mid-grey (18%) that the camera perceives is used as the default fulcrum for filmic processing. This is an important concept that should be maintained.
Of course the results from filmic are not always perfect, but generally it works exceptionally well and certainly is a vast improvement over the hit-and-miss results that a tone curve may produce, particularly for those with less experience.
Filmic on the ‘scene’ tab is very simple and can produce good results with a large percentage of input images. Other tabs allow for more complex situations. Let’s not provide so many ‘front-end’ options that users are once again required to design a custom curve for each and every image.
I get your concern that introducing another module might cause confusion, but I think it should be fine as long as they are consistent in their main principles. I’m, for example, staying true to the fixed middle grey at 18% with the experiments I’m doing here. I definitely encourage you to try out what I have so far if you know how to compile darktable yourself. It’s really hard to show the effect with just images here on the forum!
I think the advantage of using a sigmoid-like function such as the skewed log-logistic is that it will always return a “proper” curve regardless of user settings. This should dramatically reduce the risk of hit-and-miss experiences.
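That property is easy to sanity-check numerically. The little sketch below uses my own illustrative parameterization of a skewed log-logistic (not necessarily the module’s exact formula) and just confirms the resulting curve stays monotonic and under display white for a whole grid of contrast and skew settings:

```python
import numpy as np

def skewed_log_logistic(x, contrast, skew, grey=0.1845):
    # Illustrative skewed log-logistic: monotonic and bounded in (0, 1)
    # for any positive contrast and skew.
    t = (x / grey) ** contrast
    return (t / (1.0 + t)) ** skew

x = np.linspace(1e-6, 100.0, 10_000)
for contrast in (0.5, 1.5, 3.0):
    for skew in (0.5, 1.0, 2.0):
        y = skewed_log_logistic(x, contrast, skew)
        # Always a "proper" curve: never decreasing, never above display white.
        assert np.all(np.diff(y) >= 0) and y.max() < 1.0
print("all parameter combinations gave a monotonic, bounded curve")
```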
darktable is about having options. There’re also a couple of denoising modules, so if there’s a good solution why not integrate it? Especially when this is not just requested to be implemented by developers focussing on different concepts but provided as a pull request…
The photos you’ve showcased with Sigmoid look excellent which makes me want to give it a shot to compare with Filmic.
I’m just not ready to spend hours trying to set up my system to compile Darktable. Could you, or someone else, provide either a Mac (Catalina) or Windows dev build?
I absolutely agree.
Why not a global “scene tonemapping” (or any other name, fwiw) with a drop-down: filmic, magic (aka sigmoid), etc.?
As much as I love filmic, an alternative is always good! And even if it’s not theoretically perfect for HDR displays, which are yet to come and/or be handled by Linux, let’s enjoy it for the next 5 years or so!
current master + vectorscope + diffuse + sigmoid: darktable-3.5.0+1347~g31bd9f27d3.dmg
Built with SDK 10.15.
Not recommended for productive work since it contains WIP modules.
Back up your darktable config directory, since there’s a database update and no way back.
because it’s not that simple - the couple of sliders in filmic aren’t there just to annoy users
darktable keeps old modules for compatibility, so if you need to provide a new “scene tonemapping v25” to adapt to the latest theoretical conditions that have become practical (remember Bill Gates: “640 kB ought to be enough for anybody.”), you also have to maintain v24, v23, … and keep them in the codebase …
I can’t speak to the math and physics behind the two modules and how theoretically robust either is, but having used filmic RGB on thousands of images in the past year, I’m very impressed that I’m getting better-looking images in less time with Sigmoid. I have watched all of Aurélien’s videos and read his major posts on filmic, so I feel confident I am using it correctly, but it’s the number one module that causes me frustration.
Flipping between the scene and look tabs is a royal pain, and I’m noticing that I have to touch at least 4 sliders across both tabs in filmic to get approximately the same result as what I’m getting in Sigmoid. Another problem I often encounter with filmic is that the mid-tones saturation tends to push nasty oranges into my images. I regularly need to go back to filmic and knock it down, and then I have to go back to color balance and resaturate. I’m not seeing those colors in Sigmoid. It’s all a major time suck to spend my life in filmic, keeping it from causing weird issues later in the pipeline, when I really just want to be focusing on the creative edits that make my clients happy.
As someone who has a fledgling photography business, I’m finding that editing speed is just as important as quality. Filmic is one of the major reasons I often think about switching to Capture One or Lightroom, but most of the rest of Darktable is fairly good in terms of speed and quality. Again, I cannot speak to the math and physics of Sigmoid vs. Filmic, but if they are accomplishing essentially the same thing, and one is both faster and tends to give better images to my tastes, then for me the choice is obvious.
FYI, that (and a few other factors) was why I dumped darktable for RawTherapee. My productivity skyrocketed immediately. darktable is the main reason my family began joking about my camera having “write only memory”… I’ve been avoiding weighing in on this whole conversation, but my general sentiment has been similar to yours.
I hope that @jandren has better luck in his attempts to fix this pain point. I personally gave up. I wasted a month digging through the bowels of dt to try and fix the pain points in my workflow, but it became clear that I had the options of either switching to other software or constantly maintaining a personal fork.