Filmic, when to use?

I still need to check how you do your curve interpolation though, because if you remove the display gamma, remapping the log grey from 67–75% to 18% makes the cubic spline fail big time. The benefit of the gamma, strictly from a numerical-stability point of view, is that the spline then maps 67–75% to 45–50%, so you avoid oscillations. I have put down the maths for a custom filmic spline; I still need to check whether it behaves better.
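To make the stability point concrete, here is a minimal pure-Python sketch of a natural cubic spline through just three knots: black, grey, and white. The knot positions are illustrative guesses (log grey at 70% input), not anyone's actual spline. With grey mapped straight to 18%, the shallow first segment forces the spline below zero near black; with the display gamma still in place (grey near 45%), it stays well behaved:

```python
def natural_spline_3pt(xs, ys):
    """Natural cubic spline through exactly three knots (M0 = M2 = 0)."""
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    h0, h1 = x1 - x0, x2 - x1
    # The tridiagonal system collapses to one equation for M1,
    # the second derivative at the middle knot.
    m1 = 3.0 * ((y2 - y1) / h1 - (y1 - y0) / h0) / (h0 + h1)

    def s(x):
        if x <= x1:
            a, b, h, ya, yb, ma, mb = x0, x1, h0, y0, y1, 0.0, m1
        else:
            a, b, h, ya, yb, ma, mb = x1, x2, h1, y1, y2, m1, 0.0
        return (ma * (b - x) ** 3 + mb * (x - a) ** 3) / (6 * h) \
             + (ya / h - ma * h / 6) * (b - x) \
             + (yb / h - mb * h / 6) * (x - a)

    return s

# Grey pulled straight down to 18%: the spline overshoots below zero.
direct = natural_spline_3pt([0.0, 0.7, 1.0], [0.0, 0.18, 1.0])
# Grey kept near 45% (display gamma still applied): no oscillation.
gamma = natural_spline_3pt([0.0, 0.7, 1.0], [0.0, 0.45, 1.0])
```

With these knots, `direct(0.1)` comes out negative while `gamma(0.1)` stays positive, which is exactly the kind of oscillation the gamma sidesteps.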

I wouldn’t ascribe my explanations to “canon”; I’m just trying to show the fundamentals of the curve so one can understand how to intelligently misuse it to good effect… :smile:

Interior shots with such windows are just challenging. My current thinking is, if one wants to get it in one exposure, you ETTR for the window and yank the shadows out of the depths. Thing is, with most cameras you’ll get a noisy room doing that. So, you either 1) give in and shoot two exposures, one for each “scene”, and combine the two with HDR software, or 2) get a camera with a better dynamic range so that mitigating the low-end noise is a reasonable task. I’ve played with both alternatives now, #1 works really well but I’m really warming to #2 with the new camera.

Which brings me back to filmic, which I think is the tool in most implementations that gives the most control in pulling up the shadows in a highlight-weighted exposure. And, that’s my response to the thread title…

1 Like

Ah, home again, with all my little tools…

Here’s a screenshot of what I’ve been messing with in filmic:


I hope the .png renders well on your monitors…

First, this scene is not quite as challenging as @obe’s dining room image, but it does separate into two distinct “scenes” for exposure consideration. I pulled the parameters pane out of the dock and resized it to show all of the tone tool. Starting from the top, the commands stack has:

  • all of the regular raw processing: camera colorspace assignment, blackpoint subtraction, as-shot whitebalance, and AHD demosaic;
  • the blackwhitepoint tool normalizes the raw data to the top and bottom of the display range;
  • the tone tool, a small “zoo” of tonemapping curves, has the filmic curve selected (the Duiker equation shown in a previous post);
  • and the “resize-sharpen” group is for file saving, but I keep it for this messing-around because the display profile transform is faster than with the full-sized image.

Note that the A, B, and D coefficients aren’t the Duiker defaults. In particular, B is well below its default, which specifically controls the curve segment applied to the lowest parts of the image. I can scroll through B values and watch the shadows go bright and dark while the upper part of the image stays nice and balanced - the “toe” at the bottom of the curve does this nicely. A and D were messed with to deal with the upper part of the image under the curve shoulder; I don’t have a particular heuristic for them yet.
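For reference, here is a sketch of the A–F parameterization of the Duiker curve as popularized by John Hable. The constants below are Hable's oft-quoted Uncharted 2 defaults, not the values in the screenshot, so treat them purely as illustration:

```python
# Hable's parameterization of the Duiker "filmic" curve.
# A = shoulder strength, B = linear strength, C = linear angle,
# D = toe strength, E = toe numerator, F = toe denominator.
# These are the widely quoted defaults, NOT the screenshot's values.
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
W = 11.2  # scene-linear value that should land at display white

def hable(x):
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def filmic(x):
    # The raw curve never quite reaches 1.0, so normalize by the
    # value at scene white W.
    return hable(x) / hable(W)
```

Lowering B in this parameterization changes the slope through the low-mid region, which matches the “scroll B and watch the shadows move” behavior described above.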

Really though, I’m pretty sure the piecewise filmic curve @Carmelo_DrRaw is implementing will be much easier to control, so keep his endeavor on your radar…

1 Like

Hi’ @ggbutcher

And your third option is of course to use a flash…

But anyway, you are bound to shoot some photos with a high dynamic range that need special treatment. So it is interesting to study, develop and optimise tools to do that.

I don’t understand all the maths, but I see that there are different algorithms to handle this problem. In the past I have used RT’s dynamic range compression tool, which is superior to DT’s tone mapping in my opinion. But filmic can produce even better results, and from the discussion I understand that filmic will be improved further in 2.8. I’m looking forward to that.

In the meantime, thank you for all your input, explanations and clarifications…

1 Like

Oh, gee, yes, thanks for completing the consideration. I’m actually in the middle of a “source selection” for a flash; my new camera doesn’t have one, and most of my family snapshots occur in a tungsten-lit room with rather large windows…

I’m not really a math person either. What I’ve found, however, is that all of these tone mapping functions (well, except for LUTs, maybe) express their behavior in terms of a curve, and that curve is the basis for intuitively understanding what’s going on. If you understand the X → Y, goes-in → comes-out dynamic of a transfer function depicted as a curve, you can easily start to understand the outcome of applying it to all the pixels of an image.
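That “goes in → comes out” view is literal: a global tone operator is nothing but a function applied per pixel, so plotting the function over [0, 1] tells you the whole story. A trivial sketch, using a plain gamma curve as a stand-in transfer function:

```python
def apply_curve(curve, pixels):
    # A global tone operator is just y = f(x) applied to every pixel;
    # the plot of f over [0, 1] shows exactly what it does to an image.
    return [curve(x) for x in pixels]

# A simple display gamma as a stand-in curve (illustrative, not filmic).
srgb_ish = lambda x: x ** (1 / 2.2)
```

Feed it black, middle grey, and white and you can read the lift directly: grey at 0.18 comes out around 0.46, while the endpoints stay pinned.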

In my tone tool, seen in the screenshot above, I spent a couple of hours adding the ability to plot the curve of the selected operator, and that has been quite instructive to my consideration of filmic. The “money-maker” in the filmic curve, that oh-so-little “toe” at the left end, is hard to depict in the context of the full curve, but its little manipulations make a large difference in the transform of a linear scene.

Know the ways of the curve, and the effect of the maths becomes clear…

1 Like

Yes, that’s what I meant by not perfect :-). This was a quick and dirty edit. I did not spend a lot of time tweaking the masks and added quite a liberal blur and feathering. Probably fixable with a lot of tweaking of the masks. I also did not spend any time on noise. But the point here is that this is an HDR where you either fix the grey point of the interior or of the exterior, or both, using some kind of masking. Using filmic for the whole scene will result in either a completely blown-out exterior or a very dark interior.

Agree on the dark side, but the original filmic equation doesn’t reach display white unless you normalize it to the 0.0–1.0 range (or whatever black–white range the particular software works with). If the raw image doesn’t have saturated pixels and something is blown in processing, I think it’s more likely due to white-balance or exposure multiplication than to the filmic curve.

@jillesvangurp, I hope I’m not irritating you with my responses, but you’re really getting me to think about filmic in the context of all the prior stuff in exposing and processing. For high-DR scenes, a given camera only has so much tone space between an acceptable noise floor and saturation, and any tone curve can only go so far in redistributing tones to accommodate it. I think the essential question with any tone mapping curve is: given an image that isn’t highlight-saturated, how much “lift” can it give to the shadow regions before it compromises mid-tone contrast? Any curve lifting shadows has to flatten out somewhere, and that’s where contrast will be killed.

The next step, masks, IMHO gives only a limited ability to go farther than a tone curve, because the mask boundary forms a discontinuity region for tone gradation. Some scenes give you a clear line upon which to place the discontinuity, such as a horizon or the window in the dining room; others, not so much. Even with clear delineation, over-compensating tone in the two regions can start to look “processed”.

And so some scenes just require multiple-exposure HDR, depending on the camera. To my thinking, this approach is just masking with a bit more latitude, but it can still suffer from looking “processed”.

After all this discourse, to my mind, a filmic tone curve has two compelling considerations: 1) that little “toe” at the bottom keeps some tonality in the near-blacks, and 2) programmers are working hard on the equation to provide shaping controls that are more usable than with other tone curves.

1 Like

As far as “multiple-exposure HDR” goes: do you mean all-in-one fusion and tonemapping in one pass à la the original Mertens approach, or taking a set of multiple exposures and merging them (whether via averaging or the HDRMerge “best pixel selection” approach) to generate a scene-linear representation with reduced noise? The latter then still needs to be tonemapped, and that tonemapping could itself be the Mertens algorithm applied to synthetic exposures at the end of the pipeline.
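For concreteness, the merge-to-scene-linear option can be sketched like this. The `clip` threshold and naive per-pixel averaging are my own simplifications for illustration; HDRMerge instead selects the single best (least noisy, unclipped) sample per pixel:

```python
def merge_linear(exposures, times, clip=0.95):
    """Average several raw exposures back into scene-linear radiance.

    `exposures` are lists of normalized pixel values, `times` the
    relative exposure times. Saturated samples are excluded.
    """
    merged = []
    for pixels in zip(*exposures):   # same pixel across all exposures
        num = den = 0.0
        for p, t in zip(pixels, times):
            if p < clip:             # ignore clipped samples
                num += p / t         # radiance estimate for this sample
                den += 1.0
        if den:
            merged.append(num / den)
        else:
            # Everything clipped: fall back to the largest estimate.
            merged.append(max(p / t for p, t in zip(pixels, times)))
    return merged
```

Averaging the unclipped estimates is what buys the noise reduction; the result is still scene-linear and still needs a tonemap afterwards.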

Either way, most likely a single exposure is not sufficient in this case - noise reduction can help a bit, but if you lift the shadows there’s definitely noise.

I was unable to get results that looked pleasing to my eyes with filmic or any approach that did not involve a local tonemapping operator.

Both of these require additional patches at GitHub - Entropy512/darktable at fusion_work


DSC_8528.nef.xmp (4.8 KB)
I chose to sacrifice some of the texture in the highlights to get a bit more contrast in the foreground by taking the much-maligned “basecurve” module and moving it all the way to the end of the pipeline, to give a “Canon-like” look to the file. (As to why Canon - the Canon tonecurve is a little less aggressive at squashing the highlights.) At this point in the pipeline it’s no longer really a “base curve” but a “camera look emulation” curve. Turning basecurve off will provide significantly more texture in the outdoor area, but at the risk of the foreground looking a bit too neutral/flat to my eyes - I felt that the foreground was more important for this image.

Of note, @ggbutcher - most of the built-in tonecurves of various cameras look pretty close to the Duiker curves you show.


DSC_0856.nef.xmp (5.3 KB)

For this one, I considered the texture in the outdoor part of the scene to be more important, so I have nothing after the tonemapping-by-fusion step.

Getting back to whether this exceeds the ability to push a single shot without stacking+merging or an HDRMerge of a bracket - there’s definitely quite a lot of noise in both shots, and in 0856 there’s what looks like some nasty fixed-pattern noise (FPN) showing up. :frowning: I could potentially have achieved more by tweaking the denoise modules a bit.

(side note, does the “please limit images to 1080 or smaller” apply even to PlayRaw?)

Generally, yes. That’s why we also ask that people upload a sidecar file, so people can reproduce results locally.

However, if you’re trying to show something specific, like noise reduction, you may need a full resolution image. Perhaps a crop of a full resolution image would be better for that task.

We ask that you use your best judgment!

1 Like

Hi’ @Entropy512
Thank you for your reply.
I like both images, and especially your version of DSC_0856. I quite agree that the outdoor part of the scene is the most important. What is tonemapping-by-fusion? Is it basecurve fusion? My 2.6.2 can’t read your xmp file.
I shot many photos at a recent party. When you use a flash pointing upwards, reflected by the ceiling, you often get some unpleasant reflections (highlights) on people’s foreheads. Filmic comes in handy in this case. It’s easy to get rid of the reflections, but regrettably not so easy to avoid a greyish overall look and to maintain the skin colors.

Unfortunately, basecurve fusion is broken for a variety of reasons, but yes - that is tonemapping-by-fusion. The concept is sound, but there are a bunch of implementation flaws. I started doing a rework to fix the issues, but in the end, @Edgardo_Hoszowski split everything into a separate module (which it should have been in the first place in my opinion…). Unfortunately, for various reasons, that effort is dead. :frowning:

You won’t be able to use 2.6.2 unfortunately - which is why I included a link to the current state of my changes to Edgardo’s starting point in my post.

I call it tonemapping-by-fusion in order to differentiate it from another common use case of exposure fusion, which is feeding bracketed images from a camera into enfuse, and getting a fused image with compressed dynamic range as output, bypassing the intermediary linear HDR data representation. Many cameras now have enough dynamic range that you can get good results by tonemapping data from a single raw exposure, and there are other approaches to both raw merging (HDRMerge, Google’s align-and-merge stacking method, other stacking/averaging methods) and to tonemapping (PhotoFlow has some interesting techniques that its author has been working on) that you can use. I happen to like the results of Mertens algorithm for tonemapping, even if it is not the original design intent of feeding it bracketed JPEGs from a camera. Some people strongly dislike it - as with everything, many things are a matter of taste.
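The heart of the Mertens algorithm is per-pixel quality weighting. A toy grayscale sketch of just the well-exposedness term (the real algorithm also weights contrast and saturation, and blends with Laplacian pyramids rather than naively per pixel, which I omit here):

```python
import math

def well_exposedness(v, sigma=0.2):
    # Gaussian weight peaking at mid-grey: pixels near 0.5 count most,
    # near-black and near-white samples are down-weighted.
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    # Naive per-pixel weighted average of the input exposures.
    # Mertens et al. blend with Laplacian pyramids to avoid seams,
    # but the weighting is the core idea.
    fused = []
    for pixels in zip(*exposures):
        w = [well_exposedness(p) for p in pixels]
        total = sum(w) or 1.0
        fused.append(sum(wi * pi for wi, pi in zip(w, pixels)) / total)
    return fused
```

Feeding it a dark and a bright rendition of the same scene, each output pixel leans toward whichever input exposed that region best, which is why it works just as well on synthetic exposures made from a single raw.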

Obviously the ideal is to fix the scene lighting by adding light in the shadows, but as you’ve experienced, that can often be easier said than done. Bounce flash can work GREAT, if you have a room where the ceiling is a matte white. I’ve sometimes had decent results by putting a flash with a small mini-softbox (Lumiquest ultrasoft) on the end of a monopod and holding it up, but often you’ll still have that “this was taken with flash” look. Interestingly, one of my uses for fusion has been to lessen that look such that the falloff from near to far isn’t so obvious. (I would post an example, but I am uncomfortable posting family wedding pictures here.)

Edit: As a side note, for use cases like this, you might want to check out PhotoFlow - in addition to the tone curve work that was discussed earlier here, @Carmelo_DrRaw has been doing a lot of investigation into various approaches for local tonemapping. I admit, I haven’t tried it yet but based on discussions here I’m planning on compiling and playing with it soon as it looks like it may be far more suitable to my own typical workflow needs due to the tonemapping work he’s done.

…Am I a bit late to the party? :grin:

Here are the results I obtained by using my own preset, Highlights-Shadow_Gaito,
find it here: https://discuss.pixls.us/t/highlights-shadows-control-my-preset/13791/2

My preset aims to “imitate” the quick and simple two-slider Lightroom approach: one slider for boosting the shadows, one for attenuating highlights, that’s all.

The results you can see here are obtained using only that preset and a little bit of local contrast.

On the chalet picture, I had to duplicate the ShadowsBoost instance of my own preset in order to double the shadow-recovery capability: the shadows in that picture were very dark! :smile:

2 Likes

Hi’ @Gaito

Amazing results, I think… :blush:
And thank you for mailing your presets. I will have a closer look tomorrow.

I shot the chalet picture with the intent of getting a big contrast between the dark interior and the light exterior. Out of the camera the contrast is too big, but it is interesting to see how much you can lighten the interior using your presets.

Hi’ @Gaito

I have studied and tested your presets. I’m posting an image so that others can easily follow the discussion. Your presets are:

Your method is to separate the dark areas and light areas by parametric masks in the lightness channel, compress the selected areas by tone mapping and use the blend modes screen/multiply to lighten/darken the shadows/highlights respectively.
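If I have read the method right, the shadow half of that recipe boils down to something like the sketch below. The mask thresholds and the 65% opacity are illustrative guesses on my part, not Gaito's exact settings:

```python
def screen(a, b):
    # Screen blend: 1 - (1-a)(1-b); always lightens.
    return 1.0 - (1.0 - a) * (1.0 - b)

def multiply(a, b):
    # Multiply blend always darkens; the highlight preset
    # uses it symmetrically on the light areas.
    return a * b

def lightness_mask(v, lo, hi):
    # Crude parametric mask on lightness: full weight below lo,
    # fading linearly to zero at hi.
    if v <= lo:
        return 1.0
    if v >= hi:
        return 0.0
    return (hi - v) / (hi - lo)

def lift_shadows(pixels, opacity=0.65, lo=0.2, hi=0.5):
    # Blend each pixel screened with itself, weighted by mask * opacity.
    out = []
    for v in pixels:
        m = lightness_mask(v, lo, hi) * opacity
        out.append((1.0 - m) * v + m * screen(v, v))
    return out
```

A shadow pixel at 0.1 gets lifted while a highlight at 0.8 passes through untouched, which is exactly the masked screen/multiply behavior described above.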

I have tested your method on a few images, and it works really well!

Below you will find some comments and questions.

You adjust the dark areas and the light areas separately. That is a nice feature giving good control of the end result. Much better than the standard tone mapping tool.

The order of pixels, arranged from 100% black to 100% white, should in my view not be changed when compressing the dynamic range. Are you sure this is not happening?

Once you have separated the dark/light areas by means of masks you could use other tools than tone mapping to lighten/darken. What about the exposure tool? It’s simple and it seems to work well too.

The shadow-boost preset seems more powerful (opacity 65%) than the highlight-attenuation preset (opacity 100%). Is there a way to make the latter more powerful?

Mask refinement settings are different in your presets. Is this a result of experiments?

I’m not sure whether other contributors to this thread will get notified, but I will type some names here hoping that they are notified because I think that your post is really interesting: @Entropy512 @ggbutcher @anon41087856 @gadolf

4 Likes

This sounds like a form of what is called local tonemapping - where attempts are made to preserve or even enhance the local contrast of an image, because the human eye is significantly more sensitive to local contrast than global contrast. With local tonemapping, you will have pixels being reordered in luminance based on the properties of their neighbors.
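For a global curve, "no reordering" is exactly monotonicity of the transfer function, which is easy to check numerically; local operators give up that guarantee by design, since each output depends on a pixel's neighbors. A sketch:

```python
def preserves_order(curve, samples=256):
    # A global tone curve keeps pixels in their original luminance
    # order if and only if it is non-decreasing on [0, 1].
    xs = [i / (samples - 1) for i in range(samples)]
    ys = [curve(x) for x in xs]
    return all(b >= a for a, b in zip(ys, ys[1:]))
```

Any sensible gamma or filmic curve passes this check; a non-monotone curve (or a local operator, which has no single curve at all) does not.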

The advantage of local tonemapping is, as mentioned above, that the human eye is more sensitive to local contrast than to global contrast.

The disadvantage is that if overdone, it can start to look strange/unnatural and always has a risk of haloing. (However, a good algorithm is fairly resistant to doing so unless it is abused. Although some people actually LIKE the results when such algorithms are abused, even if others think they look cartoonish.) There’s one academic paper on a local tonemapping approach that claims “no halos”, but only for a restricted atypical definition of “no” or “halos”, because the very paper itself provides examples of haloing as a visual artifact that can occur. (Specifically, it only prevents halos for discontinuities with amplitude greater than the selected value of the constant sigma described in the paper) I’m not saying it’s bad (it’s a pretty solid approach), but it’s not the One True Approach some have made it out to be because the assertion that it can’t ever fail is false.

I didn’t know about the term, thanks.
Actually, that’s what I have done in my edit above, except I didn’t use the tone mapping tool but filmic. Nor did I use alternative blend modes as @Gaito did, which undoubtedly give a more pleasant look.

You are right about possible haloing. In my opinion this is also often the risk when applying DT masks.

I have spent some time testing several tools in darktable for compressing the dynamic range of a number of photos. In my opinion the Dynamic Range Compression tool in RawTherapee is superior to the tools available in darktable. It would be really nice to have this tool implemented in DT.

dt’s tonemapping module is one of many potential approaches to local tonemapping. IIRC it attempts to implement a similar algorithm to one @Carmelo_DrRaw was experimenting with involving bilateral filters, but seems to have some implementation flaws that result in artifacts not seen in other implementations of the same algorithm.

dt’s exposure fusion module is really just another approach at tonemapping (which also, again, has documented flaws/deficiencies compared to other implementations of the same algorithm)

RT’s “Tone Mapping” algorithm is based on a paper that, if I recall correctly, is frequently referred to as the Drago algorithm (for whatever reason, Google search is being very slow). I’m personally not a fan of the results I’ve gotten from it.

RT’s “Dynamic Range Compression” algorithm is also a local tonemapping operator, based on a 2002 paper by Fattal. It’s a little more fragile (meaning more prone to potentially looking strange) than some more modern algorithms, but so far it’s gotten the job done for my needs. I don’t like it quite as much as a fixed Mertens implementation, but it’s good enough for most of my use cases and scenarios.

PhotoFlow has quite a few choices of tonemapping operators, with @Carmelo_DrRaw working on even more. The various algorithms out there often have differing weaknesses, so choice is always good - the failures of an algorithm in one use case may not affect another algorithm for that case, but in another scenario the situation may be reversed. In some cases the relative strengths and weaknesses of one algorithm over another will be subjective; certain visual artifacts are more disturbing to some people than to others. As I’ve mentioned, some of the ultra-aggressive approaches implemented by some software (such as some of the approaches used by Photomatix) are ones I really find distasteful, but there are some people who absolutely love that sort of look.

1 Like

Hi obe,

I fear I have not understood what you’re asking me :grin:

Uhmm… I remember experimenting a lot, some months ago, using the same approach with other darktable modules, including the exposure one, but from memory the best balance between “effectiveness” and artifacts was obtained using the tone mapping module.

Well, maybe… but, as always, it’s a matter of trade-off between effectiveness and artifacts :slight_smile:
Still, are you sure that highlights reduction is less effective than shadow boost? Be sure you’re not trying to reduce raw clipped highlights: those are impossible to recover using my approach or any other (apart from “simulating” a recovery by “reconstructing” them from non-clipped neighbouring pixels).

Yes: lots of experiments on as more as possible raw files :slight_smile:

hi,

do you have examples by chance? thanks!