There have been several calls to put sigmoidal tone mapping, which is just a different tone mapper, into the filmic module, but nobody has done so yet.
Ya, I wasn't even trying to make a counter-argument, just following on from your comment, for which I really didn't have any good background or grounding, so I was just curious to see what comments would follow...
This, IMHO, is where the simple "do this before that" advice breaks down. Now that we have the ability to mess with the order of operations in darktable, a whole 'nother layer of knowledge is needed to understand which orders are better than others, and which orders will just muck things up.
The whole generic raw processing pipeline has two distinct phases: 1) BD - Before Demosaic, and 2) AD - After Demosaic. In BD, the image data isn't really renderable, but there are things that are usually done here, like black subtraction and white balance (for you RT folk, I do WB once, before demosaic). At the act of demosaic, the image data is transformed into an encoding that can now be rendered on RGB media, but the data is still what we call "linear", or "energy-linear", or "Scene-Referred". It is here that processing starts to be discretionary, but there's yet another rubicon to be considered: departing Scene-Referred. This is where the first non-linear tone curve is applied, destroying the original energy relationship of the measured image values.
It used to be de rigueur for software to immediately apply a lifting tone curve after demosaicing, so the operator could view their image nicely and feel good about the software. But then the software would allow one to apply color and tone manipulation on top of that initial tone curve, and that's where the problems were introduced. So, it's not that one particular tool should go before another; it's that one should do any color editing before the image data is yanked out of linear, and preferably that linear departure should be as close to the last thing done as possible. This is the basis for the whole migration of darktable to scene-referred editing: putting as much of the editing as possible before the departure from linear.
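To make the ordering argument concrete, here is a minimal sketch (not darktable code; the gamma curve is just a stand-in for filmic/basecurve and the values are made up) of why an exposure-type edit behaves differently before and after the departure from linear:

```python
import numpy as np

def tone_curve(x, gamma=1.0 / 2.2):
    """A simple non-linear 'display' curve, a stand-in for filmic/basecurve."""
    return np.clip(x, 0.0, None) ** gamma

scene = np.array([0.09, 0.18, 0.36])   # three patches, 1 EV apart, scene-linear

# Scene-referred order: push exposure +1 EV in linear, then apply the curve.
linear_first = tone_curve(scene * 2.0)

# Display-referred order: apply the curve first, then try the same +1 EV push.
curve_first = np.clip(tone_curve(scene) * 2.0, 0.0, 1.0)

print(linear_first)  # the EV spacing between patches survives the curve
print(curve_first)   # the top patch clips and the tonal relationships are lost
```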
My picayune point was that the tone curve you manipulate in the darktable UI is not the only one applied to make the final rendition. The export/display ICC profiles have one too, and it is applied after, and on top of, the output of filmic/sigmoid/whatever. So, what you're messing with in filmic/sigmoid is the place between linear and the start of the rendition transform...
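For example, an sRGB export adds the standard piecewise sRGB transfer curve on top of whatever the tone mapper produced. A quick sketch of that extra curve (illustrative only, not the actual ICC machinery):

```python
import numpy as np

def srgb_oetf(linear):
    """Encode linear display values in [0, 1] with the standard sRGB transfer curve."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

tonemapped = np.array([0.0, 0.18, 0.5, 1.0])  # output of filmic/sigmoid/whatever
print(srgb_oetf(tonemapped))                  # the extra curve the export profile adds
```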
I think there are two different concepts to which this could apply:
- Make it part of the filmic module itself.
- Make a "meta" module that gathers all 3 (or more) transforms into display dynamic range into one common module, but keeps them separate such that only one is active at a time.
The second would be just a UX thing, and technically they would be three different modules (I consider base curve to be number 3). Having such a meta module makes sense IMHO, as you typically have only one display transform, and you decide between filmic, basecurve and this approach. It would unclutter darktable's UI a bit without reducing flexibility. In this meta module, you just select filmic and get access to all of filmic's settings/UI, or you select basecurve and get basecurve's UI, etc. Only one of the three would of course be active: the one that is selected. For the rare cases where you need 2 basecurves and 1 filmic module, you could just use the typical module mechanisms.
IMHO, 2 makes sense; 1 does not, as it would complicate filmic.
Edit: The pipeline position could be the same for all three, or, if it is really required, the default pipeline position could change when another method is selected, unless the user has manually shifted the pipeline position, which would override the default.
Thanks for the great exposition of the general workflow.
I knew it more or less in general terms, but your clear explanation is great for refreshing concepts.
But the question remains the same: why can't you just substitute filmic's tone mapping for another one like sigmoid (or upcoming ones)?
All it should do is map [0, inf] to [0, 255]. There are many ways of doing that (probably only a few make sense), but it is a bit like selecting one map projection or another: the best one depends on the use you are going to make of the map and the part of the Earth you are interested in.
If one does not suit you, you can use another.
If it is just tone mapping it does not seem problematic to substitute that tone mapping curve.
After that you can do color adaptation for the display.
To do its job in the best possible way, the tone mapper should not change hue or saturation, or should do so as little as possible.
But reading the answers, it seems that filmic does more than just tone mapping.
Indeed I had the feeling that it takes on too much: it tries to recover highlights, and there are options to preserve colors in "clipped" highlights...
Shouldn't those other operations be separated from the tone mapping task?
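As a rough illustration of "map [0, inf] to the display range": a generic sigmoidal tone mapper can be written in a few lines. This is only a sketch of the shape of the problem, not the code of darktable's sigmoid or filmic modules; the grey anchors and contrast value are made-up parameters:

```python
import numpy as np

def sigmoid_tonemap(x, grey_in=0.18, grey_out=0.1845, contrast=1.5):
    """Map scene-linear x in [0, inf) to display-referred [0, 1), anchoring grey."""
    x = np.maximum(x, 0.0)
    # pick k so that grey_in lands exactly on grey_out
    k = grey_in ** contrast * (1.0 - grey_out) / grey_out
    return x ** contrast / (x ** contrast + k)

scene = np.array([0.0, 0.18, 1.0, 4.0, 100.0])  # values well above 1.0 are fine
print(sigmoid_tonemap(scene))                    # everything lands in [0, 1)
```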
No reason you can't, except that the pull request hasn't been accepted...
Sometimes when developing a high-DR scene, like one with shadows at sunset, I'll delete my default filmic curve and replace it with 1) a loggamma curve, which lifts the shadows into a milky midrange, and 2) a regular control-point curve that I twist and skew to make the image look nice low-to-high. It's usually S-shaped, to pull the shadows back down and keep the highlights from washing to white. That's the fundamental problem with a filmic-shaped curve: the shoulder at the top can't be flattened too much, even if sometimes it should be...
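A rough numerical sketch of that two-curve recipe (purely illustrative; the log range, the control points and the use of a piecewise-linear curve instead of a spline are all my own assumptions):

```python
import numpy as np

def log_lift(x, range_ev=6.0, grey=0.18):
    """Log curve: spread +/- range_ev/2 stops around middle grey over [0, 1]."""
    x = np.maximum(x, 1e-6)
    return np.clip(np.log2(x / grey) / range_ev + 0.5, 0.0, 1.0)

def hand_curve(y):
    """S-shaped control-point curve; piecewise linear here, a spline in practice."""
    xs = (0.0, 0.25, 0.50, 0.80, 1.0)   # input control points
    ys = (0.0, 0.15, 0.50, 0.90, 1.0)   # pull shadows back down, protect the top
    return np.interp(y, xs, ys)

scene = np.array([0.01, 0.05, 0.18, 1.0, 3.0])
print(hand_curve(log_lift(scene)))       # shadows lifted by the log, reshaped by the S-curve
```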
Well, yes, it is a small detail xD
But there are some arguments about that, saying that it has some collateral effects and does not integrate well with other new modules.
But if it is at the end of the chain and just does tone mapping, it cannot have collateral effects on other modules.
Well, I will try to compile darktable and then integrate sigmoid into it.
That will be the way to test things and modules that are not in the mainline, as there is no plugin mechanism in darktable.
Yes, that is exactly my feeling with filmic sometimes: it renders all your highlights (skies) into very little space at the output.
That is why having an alternative way of mapping values is great.
I want to remind people that there are only 2.45 EV of available DR between middle-grey and peak SDR white. Whatever tone mapping method you choose, skies are going to be compressed in display because they have a lot more than 3 EV DR on the scene. Which, by the way, is consistent with human vision.
Let's get some reference points:
- Middle grey is the luminance of a grey patch at about 20% reflectance, and records at 18.45% linear luminance in display-referred SDR,
- the color checker's reflective white records at 92% linear luminance in display-referred SDR.
What this means is that, at a 1:1 contrast ratio over reflective surfaces, you are left with the upper 8% of the SDR display range to squeeze in all the pseudo-emissive clouds and light sources. If you increase contrast in reflective areas, then reflective white will register above 92% and will eat into the emissive range accordingly.
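A quick back-of-the-envelope check of those numbers (using the 18.45% and 92% figures quoted above):

```python
import math

middle_grey      = 0.1845  # display-referred linear luminance of middle grey (SDR)
reflective_white = 0.92    # display-referred linear luminance of reflective white

print(math.log2(1.0 / middle_grey))       # ~2.44 EV from middle grey to peak SDR white
print(math.log2(1.0 / reflective_white))  # ~0.12 EV left above reflective white
print(1.0 - reflective_white)             # the "upper 8%" of the SDR display range
```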
So much for dramatic skies. We simply have no room for them in SDR.
The other option is to pull down the middle grey level, which means darkening the picture globally, and that's not what you want to do.
Wishful thinking changes nothing about this fact, until HDR displays become mainstream. Different tone mapping strategies will yield different curvatures and slopes, but if you anchor black, grey and white and derive a smooth transform between them, you will get mostly the same results.
Getting more contrast in highlights needs to be achieved by local tone mapping, at the risk of breaking luminance relationships between sky and ground (light sources are expected to be brighter than reflective surfaces) and introducing the usual HDR creepiness.
All of that wouldn't be a problem if people hadn't got used to overdone, tasteless skies born of happy slider pushing. Remember, aesthetics are acquired.
Photography and videography are technically bound arts, and if the display can't display what you want it to display, no matter how famous and praised you are, you need to reconsider your creative choices.
Of course you are right, and maybe skies in photos are not as natural as we remember.
Or maybe we do not remember the sky the same way our eyes see and capture it, as vision is not just a matter of physical light and "eye" response, but a matter of interpretation in our mind.
When we see a sunset we don't capture the whole scene at a glance; we concentrate on different parts, move our eyes all over the scene while they adapt to different levels of light, and keep it in our memory afterwards.
But it is not that important to discuss whether what filmic or other mapping tools produce is more or less real, or whether the skies in many photos are not so real.
Photography is not just capturing the real scene, but an artistic interpretation of it.
It is usual to lower light in the sky to enhance details, make selections and change color a bit.
Is it real? Surely not, but reality is boring, and maybe your sky was not at its best at the moment of the photo, and you want to "enhance" it a bit.
The problem is not that filmic compresses highlights; it is expected to do that, any simple contrast curve does it, and that is the point of filmic.
The problem is that it is not easy to revert it, even with curves and parametric selections, or using the tone equalizer (it seems to work best) or color balance RGB to lower the highlights.
I don't know exactly why it gets so difficult.
In other programs (due to the non-linearity of their workflow?) it is easier.
That's because it is a local tone mapping issue and you try to tackle it with global tone mapping methods. Wrong approach, wrong tools.
Try dodging and burning; it has worked since circa 1860 to solve this. Tone EQ is one way of doing it, masked exposure is another, i.e. decrease exposure selectively on a region.
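In scene-linear terms, masked exposure is just a per-pixel gain weighted by the mask. A small sketch (illustrative only; the gradient mask and the -1 EV value are arbitrary choices):

```python
import numpy as np

def masked_exposure(image, mask, ev):
    """Scale masked pixels by 2**ev; mask is 0..1, image is scene-linear RGB."""
    gain = 2.0 ** (ev * mask)       # gain fades out smoothly with the mask
    return image * gain[..., None]  # same gain applied to all three channels

# e.g. pull a bright sky down by one stop with a vertical gradient mask
h, w = 4, 6
image = np.full((h, w, 3), 0.8)
mask = np.linspace(1.0, 0.0, h)[:, None] * np.ones((h, w))  # 1 at the top, 0 at the bottom
darker_sky = masked_exposure(image, mask, ev=-1.0)
print(darker_sky[:, 0, 0])  # top rows darkened, bottom rows untouched
```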
Yes that is what I am trying now, using exposure to reduce it a bit (with masks) and tone eq.
It improves things, no doubt about it.
You might also want to check out the recent work in RT... I have not had time, but it sounds interesting.
Log Encoding
The code used in this part of RawTherapee is similar to:
- The Log Tone Mapping module in ART, designed by Alberto Griggio.
- The Filmic module in darktable, designed by Aurélien Pierre.
Both are inspired by the work on logarithmic coding developed by the Academy Color Encoding System (ACES).
The algorithm is based on a 3-step process:
- The first step for a given image (HDR or otherwise) involves calculating the deviation from the theoretical mean gray value (18% gray) of the darkest blacks and the brightest whites. This is expressed in photographic Ev units (luminosity index, which is related to the brightness of the scene). The black and white Ev values, along with the average or mean luminance of the scene (Yb%), are used by the algorithm (either automatically or with manual override) to modify the balance of the RGB values, thereby reducing contrasts, enhancing shadows and reducing highlights, without overly distorting the image rendering.
- In the second and third steps, the data is manually corrected by the user to increase local contrast (which has been reduced by the "Log" conversion) and adjust the viewing conditions for the intended output device.
https://rawpedia.rawtherapee.com/CIECAM02#The_algorithm_is_based_on_a_3-step_process:
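A rough sketch of what step 1 above amounts to numerically: estimating black and white Ev relative to 18% grey from the scene-linear luminance. This is not RawTherapee's actual code; the percentile choice and the synthetic test data are my own assumptions:

```python
import numpy as np

def scene_ev_range(luminance, grey=0.18, low_pct=0.5, high_pct=99.5):
    """Return (black_ev, white_ev) relative to 18% grey from luminance percentiles."""
    lum = np.clip(luminance, 1e-6, None)
    black = np.percentile(lum, low_pct)
    white = np.percentile(lum, high_pct)
    return np.log2(black / grey), np.log2(white / grey)

# synthetic scene-linear luminance centred on middle grey, just for demonstration
lum = np.random.lognormal(mean=np.log(0.18), sigma=1.2, size=100_000)
black_ev, white_ev = scene_ev_range(lum)
print(black_ev, white_ev)  # roughly symmetric EV range around middle grey
```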
As primarily an RT user, I've used Local adjustments > Log encoding quite a lot, building mainly from dev.
Despite the explosion of sliders in the RT interface, the decoupling of the log and the curve makes it easier to control, IMHO. It does suffer, when used as a full-image edit, from destroying highlights in clouds! I frequently have to add an excluding spot over highlights to bring back detail. A very different issue than the darktable one, but perhaps they are related? In RT it seems to sort of undo any highlight recovery and flatten it to a dull colour. It's my pet peeve with RT Log at the moment.
For those interested, I basically use:
- Log strength
- Mean luminance (under viewing conditions; I (ab)use this to raise shadows)
- Contrast
The above are in the order I tweak them.
If someone has the time, it would be great to have a "this is how you get dramatic skies in darktable 3.8" tutorial.
Basically a RAW file shot during daytime with some blue skies + nice clouds, a mild exposure adjustment with a gradient mask, then diffuse and sharpen and finally some color boosting to taste.
It really got much easier in 3.8.
Well, I had a look at RT and ART.
They are good, but not the kind of program I was looking for.
No tools to manage a photo collection, slow screen refreshing when you change a slider and no feedback while you are moving it, a cluttered interface...
But in my tests they do a better job of clipped-highlight recovery, at least when recovering the color of zones with one or two clipped channels.
And it is easier to get pleasant results in highlights, maybe due to the more classical non-linear approach to developing.
In general I far prefer darktable; for me it only has these drawbacks: poor clipped-highlight recovery when you need to reconstruct color, and it is harder to get pleasant results in highlights when you want to expand them into the midtones.
It's super easy. Use a new instance of the colour calibration module with, for example, a gradient mask, darken the blue channel and compensate by lightening the red channel:
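Numerically, that suggestion boils down to a masked per-channel gain. A toy sketch (the gain values and gradient mask are made up, and this is not the colour calibration module's actual maths):

```python
import numpy as np

def masked_channel_gain(image, mask, gains=(1.10, 1.00, 0.85)):
    """Blend per-channel (R, G, B) gains in proportionally to the mask (0..1)."""
    gains = np.asarray(gains)
    per_pixel = 1.0 + mask[..., None] * (gains - 1.0)  # identity where mask is 0
    return image * per_pixel

h, w = 4, 6
sky = np.full((h, w, 3), 0.6)
mask = np.linspace(1.0, 0.0, h)[:, None] * np.ones((h, w))  # strongest at the top
dramatic = masked_channel_gain(sky, mask)  # blue pulled down, red pushed up, at the top
```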
Yes that would be great.
Personally I don't need "dramatic", but if you can get good dramatic skies, you can always be less aggressive.
And I would add: use as few modules as possible, and use a sample photo with some sky highlights clipped in one or two channels in a coloured zone, to see how best to use the highlight reconstruction module and filmic.
Probably a sunset or sunrise is a good example.
Maybe tone equalizer, filmic (of course), highlight reconstruction, a bit of color calibration magic, color balance RGB and local contrast?
Of course, using parametric and local masks for appropriate selections, and some sharpening (but that won't be the main objective; for the sky there is no need for special sharpening).
This is known, and work is in progress to address it. See
Thanks! I always forget how powerful that module is (I learnt a lot about it from your videos).
Well yes, but that was an easy one, not much contrast in the original.
I was using color balance RGB to try to do it, until recently I discovered it is usually easier with color calibration.
When you have a backlit scene and your sky is a bit overexposed in order to avoid shadows that are too dark, and you need to compensate for both, it gets harder.
Maybe it is all due to the linear editing, which needs more aggressive curves or something to expand those highlights.
Great, there is no doubt it would improve a lot.
Those 4th-derivative Laplacians do it all: focus, defocus, noise reduction, highlight recovery...
We are only going to need one module.
I only hope you won't need to fight with all those weights and derivative coefficients.
Aurélien seems to be developing a more specialized and friendly module for the task.
Thanks for it.