Filmic, when to use?

Agree on the dark side, but the original filmic equation doesn’t reach display white unless you normalize it to the 0.0–1.0 range (or whatever black-white range the particular software works with). If the raw image doesn’t have saturated pixels and something is blown in processing, I think it’s more likely due to a white balance or exposure multiplication than to the filmic curve.
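To make the normalization concrete: you just divide the curve by its own value at the chosen white point. A minimal numpy sketch, using the Hable/Uncharted-2 filmic formulation as a stand-in (the constants and the 11.2 white point are the commonly quoted defaults, an assumption here since the post doesn’t name a specific variant):

```python
import numpy as np

# Hable-style filmic curve; A..F are the commonly quoted defaults
# (an assumption -- the post doesn't pin down a specific formulation).
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

def filmic(x):
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

W = 11.2                            # linear-scene white point
x = np.linspace(0.0, W, 256)

raw = filmic(x)                     # tops out well below 1.0: never reaches display white
normalized = filmic(x) / filmic(W)  # divide by curve(white): white now maps to exactly 1.0
```

Without the division, `filmic(W)` lands around 0.73, which is exactly the “doesn’t reach display white” behavior described above.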

@jillesvangurp, I hope I’m not irritating you with my responses, but you’re really getting me to think about filmic in the context of all the prior steps in exposure and processing. For high-DR scenes, a given camera has only so much tone space between an acceptable noise floor and saturation, and any tone curve can only go so far to redistribute tones to accommodate it. I think the essential question then, with any tone mapping curve, is: given an image that isn’t highlight-saturated, how much “lift” can it give to the shadow regions before it compromises mid-tone contrast? Any curve lifting shadows has to flatten out somewhere, and that’s where contrast will be killed.

The next step, masks, IMHO give only a limited ability to go farther than a tone curve, because the mask boundary forms a discontinuity region for tone gradation. Some scenes give you a clear line upon which to place the discontinuity, such as a horizon or the window in the dining room; others not so much. Even with clear delineation, over-compensating tone in the two regions can start to look “processed”.

And so, some scenes just require multiple-exposure HDR, depending on the camera. To my thinking, this approach is just masking with a bit more latitude, but it can still suffer from looking “processed”.

After all this discourse, to my mind, a filmic tone curve has two compelling considerations: 1) that little “toe” at the bottom keeps some tonality in the near-blacks, and 2) programmers are working hard on the equation to provide shaping controls that are more usable than with other tone curves.


As far as “multiple exposure HDR” goes - do you mean all-in-one fusion and tonemapping in one pass à la the original Mertens approach, or taking a set of multiple exposures and merging them (whether via averaging or the HDRMerge “best pixel selection” approach) to generate a scene-linear representation with reduced noise? (That merged result then needs to be tonemapped - and the tonemapping approach COULD potentially be the Mertens algorithm applied to synthetic exposures at the end of the pipeline.)
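For readers unfamiliar with the Mertens approach: it blends the exposures directly with per-pixel quality weights, with no intermediate HDR image. A heavily simplified sketch of just the “well-exposedness” weight (the real algorithm also uses contrast and saturation weights and blends across a Laplacian pyramid, both omitted here for brevity):

```python
import numpy as np

# Toy Mertens-style fusion: weight each pixel by its closeness to mid-grey.
# Real implementations add contrast/saturation weights and blend across a
# Laplacian pyramid; this sketch skips both, so edges would look worse.
def fuse(exposures, sigma=0.2):
    weights = [np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)) for img in exposures]
    total = np.sum(weights, axis=0) + 1e-12   # avoid division by zero
    return sum(w * img for w, img in zip(weights, exposures)) / total

dark = np.full((4, 4), 0.1)    # synthetic under-exposure
mid = np.full((4, 4), 0.5)     # well-exposed frame dominates the blend
bright = np.full((4, 4), 0.9)  # synthetic over-exposure
fused = fuse([dark, mid, bright])
```

Note that the output is already display-ready compressed data; there is never a scene-linear HDR intermediate, which is the distinction being drawn above.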

Either way, most likely a single exposure is not sufficient in this case - noise reduction can help a bit, but if you lift the shadows there’s definitely noise.

I was unable to get results that looked pleasing to my eyes with filmic or any approach that did not involve a local tonemapping operator.

Both of these require additional patches, from GitHub: Entropy512/darktable, branch fusion_work


DSC_8528.nef.xmp (4.8 KB)
I chose to sacrifice some of the texture in the highlights to get a bit more contrast in the foreground, by taking the much-maligned “basecurve” module and moving it all the way to the end of the pipeline to give a “Canon-like” look to the file. (As to why Canon - the Canon tonecurve is a little less aggressive at squashing the highlights.) At this point in the pipeline it’s no longer really a “base curve” but a “camera look emulation” curve. Turning basecurve off will provide significantly more texture in the outdoor area, but at the risk of the foreground looking a bit too neutral/flat to my eyes - I felt the foreground was more important for this image.

Of note, @ggbutcher - most of the built-in tonecurves of various cameras look pretty close to the Duiker curves you show.


DSC_0856.nef.xmp (5.3 KB)

For this one, I considered the texture in the outdoor part of the scene to be more important, so I have nothing after the tonemapping-by-fusion step.

Getting back to whether this exceeds what you can push out of a single shot without stacking+merging or an HDRMerge of a bracket - there’s definitely quite a lot of noise in both shots, and in 0856 there’s what looks like some nasty fixed-pattern noise (FPN) showing up. :frowning: I could potentially have achieved more by tweaking the denoise modules a bit.

(side note, does the “please limit images to 1080 or smaller” apply even to PlayRaw?)

Generally, yes. That’s why we also ask that people upload a sidecar file, so people can reproduce results locally.

However, if you’re trying to show something specific, like noise reduction, you may need a full resolution image. Perhaps a crop of a full resolution image would be better for that task.

We ask that you use your best judgment!


Hi’ @Entropy512
Thank you for your reply.
I like both images, and especially your version of DSC_0856. I quite agree that the outdoor part of the scene is the most important. What is tonemapping-by-fusion? Is it basecurve fusion? My darktable 2.6.2 can’t read your XMP file.
I shot many photos at a recent party. When you use a flash pointing upwards, reflected by the ceiling, you often get some unpleasant reflections (highlights) on people’s foreheads. Filmic comes in handy in this case. It’s easy to get rid of the reflections, but regrettably not so easy to avoid a greyish overall look and to maintain the skin colors.

Unfortunately, basecurve fusion is broken for a variety of reasons, but yes - that is tonemapping-by-fusion. The concept is sound, but there are a bunch of implementation flaws. I started doing a rework to fix the issues, but in the end, @Edgardo_Hoszowski split everything into a separate module (which it should have been in the first place in my opinion…). Unfortunately, for various reasons, that effort is dead. :frowning:

You won’t be able to use 2.6.2 unfortunately - which is why I included a link to the current state of my changes to Edgardo’s starting point in my post.

I call it tonemapping-by-fusion in order to differentiate it from another common use case of exposure fusion, which is feeding bracketed images from a camera into enfuse and getting a fused image with compressed dynamic range as output, bypassing the intermediary linear HDR data representation. Many cameras now have enough dynamic range that you can get good results by tonemapping data from a single raw exposure, and there are other approaches to both raw merging (HDRMerge, Google’s align-and-merge stacking method, other stacking/averaging methods) and to tonemapping (PhotoFlow has some interesting techniques that its author has been working on) that you can use. I happen to like the results of the Mertens algorithm for tonemapping, even if this departs from its original design intent of being fed bracketed JPEGs from a camera. Some people strongly dislike it - as with everything, it’s largely a matter of taste.
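To make “synthetic exposures” concrete: from one scene-linear frame you just apply exposure multipliers and a display transfer before fusing. A sketch of that idea (the EV offsets and the plain 1/2.2 gamma are illustrative assumptions, not what any particular software uses):

```python
import numpy as np

# Build synthetic exposures from a single scene-linear frame:
# multiply by 2**EV, clip to display range, apply a display transfer.
# The EV spacing and the plain 1/2.2 gamma are illustrative assumptions.
def synthetic_exposures(linear, evs=(-2, 0, 2)):
    return [np.clip(linear * 2.0 ** ev, 0.0, 1.0) ** (1.0 / 2.2) for ev in evs]

scene = np.array([0.02, 0.18, 0.9])   # shadow / mid / highlight, scene-linear
under, base, over = synthetic_exposures(scene)
# 'over' lifts the shadows (at the cost of clipping the highlight) while
# 'under' protects the highlight; a Mertens-style fuser then blends them.
```

The fusion step at the end of the pipeline then picks the well-exposed parts of each synthetic frame, exactly as it would with a real bracket.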

Obviously the ideal is to fix the scene lighting by adding light in the shadows, but as you’ve experienced, that can often be easier said than done. Bounce flash can work GREAT, if you have a room where the ceiling is a matte white. I’ve sometimes had decent results by putting a flash with a small mini-softbox (Lumiquest ultrasoft) on the end of a monopod and holding it up, but often you’ll still have that “this was taken with flash” look. Interestingly, one of my uses for fusion has been to lessen that look such that the falloff from near to far isn’t so obvious. (I would post an example, but I am uncomfortable posting family wedding pictures here.)

Edit: As a side note, for use cases like this, you might want to check out PhotoFlow - in addition to the tone curve work that was discussed earlier here, @Carmelo_DrRaw has been doing a lot of investigation into various approaches for local tonemapping. I admit, I haven’t tried it yet but based on discussions here I’m planning on compiling and playing with it soon as it looks like it may be far more suitable to my own typical workflow needs due to the tonemapping work he’s done.

…Am I a bit late to the party? :grin:

Here are the results I obtained by using my own preset, Highlights-Shadow_Gaito,
find it here: https://discuss.pixls.us/t/highlights-shadows-control-my-preset/13791/2

My preset aims to “imitate” the quick and simple two-slider Lightroom approach: one slider for boosting the shadows, one for attenuating the highlights, that’s all.

The results you can see here are obtained using only that preset and a little bit of local contrast.

On the chalet picture, I had to duplicate the ShadowsBoost instance of my preset in order to double the shadow-recovery capability: the shadows in that picture were very dark! :smile:


Hi’ @Gaito

Amazing results, I think… :blush:
And thank you for mailing your presets. I will have a closer look tomorrow.

I shot the chalet picture with the intent of getting a big contrast between the dark interior and the light exterior. Out of the camera the contrast is too strong, but it is interesting to see how much you can lighten the interior using your presets.

Hi’ @Gaito

I have studied and tested your presets. I post an image so that others can easily follow the discussion. Your presets are:

Your method is to separate the dark areas and light areas by parametric masks in the lightness channel, compress the selected areas by tone mapping and use the blend modes screen/multiply to lighten/darken the shadows/highlights respectively.
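For anyone who wants to see the arithmetic behind that description, here is a toy sketch of the masking-plus-blend-mode idea (the thresholds, opacities, and the use of the pixel value itself as the blend partner are illustrative assumptions, not @Gaito’s actual settings):

```python
import numpy as np

def screen(a, b):        # lightening blend: 1 - (1-a)(1-b)
    return 1.0 - (1.0 - a) * (1.0 - b)

def multiply(a, b):      # darkening blend
    return a * b

img = np.array([0.05, 0.5, 0.95])                  # shadow / mid / highlight

# Parametric-style masks on lightness (thresholds are made up here):
shadow_mask = np.clip((0.4 - img) / 0.4, 0.0, 1.0)
highlight_mask = np.clip((img - 0.6) / 0.4, 0.0, 1.0)

# Blend at a given opacity, only where each mask selects pixels:
lifted = img + shadow_mask * 0.65 * (screen(img, img) - img)            # 65% opacity
darkened = lifted + highlight_mask * 1.0 * (multiply(lifted, lifted) - lifted)
```

The midtone pixel is untouched by both masks, which is why the separate shadow/highlight control works without flattening the whole image.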

I have tested your method on a few images, and it works really well!

Below you will find some comments and questions.

You adjust the dark areas and the light areas separately. That is a nice feature giving good control of the end result. Much better than the standard tone mapping tool.

The order of pixels, arranged from 100% black to 100% white, should in my view not be changed when compressing the dynamic range. Are you sure this is not happening?
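For what it’s worth, a purely global tone curve preserves pixel ordering as long as it is monotonically non-decreasing, and that is easy to check numerically; it is the locally varying (mask/blend) part that can reorder pixels. A quick sketch of such a check:

```python
import numpy as np

# A global curve preserves black-to-white pixel ordering iff it is
# monotonically non-decreasing over the working range.
def is_order_preserving(curve, n=10001):
    x = np.linspace(0.0, 1.0, n)
    return bool(np.all(np.diff(curve(x)) >= 0.0))

# A gamma lift never reorders pixels; a curve that bends back down does.
```

Example: `is_order_preserving(np.sqrt)` holds, while a curve like `sin(3x)` fails because it decreases over part of [0, 1].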

Once you have separated the dark/light areas by means of masks you could use other tools than tone mapping to lighten/darken. What about the exposure tool? It’s simple and it seems to work well too.

The shadow-boost preset seems more powerful (opacity 65%) than the highlight-attenuation preset (opacity 100%). Is there a way to make this preset more powerful?

Mask refinement settings are different in your presets. Is this a result of experiments?

I’m not sure whether other contributors to this thread will get notified, but I will type some names here hoping that they are notified because I think that your post is really interesting: @Entropy512 @ggbutcher @anon41087856 @gadolf


This sounds like a form of what is called local tonemapping - where attempts are made to preserve or even enhance the local contrast of an image, because the human eye is significantly more sensitive to local contrast than global contrast. With local tonemapping, you will have pixels being reordered in luminance based on the properties of their neighbors.

The advantage of local tonemapping is, as mentioned above, that the human eye is more sensitive to local contrast than to global contrast.

The disadvantage is that if overdone, it can start to look strange/unnatural, and it always carries a risk of haloing. (However, a good algorithm is fairly resistant to this unless it is abused. Although some people actually LIKE the results when such algorithms are abused, even if others think they look cartoonish.) There’s one academic paper on a local tonemapping approach that claims “no halos”, but only under a restricted, atypical definition of “no” or “halos”, because the paper itself provides examples of haloing as a visual artifact that can occur. (Specifically, it only prevents halos at discontinuities with amplitude greater than the selected value of the constant sigma described in the paper.) I’m not saying it’s bad (it’s a pretty solid approach), but it’s not the One True Approach some have made it out to be, because the assertion that it can never fail is false.
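A toy 1-D illustration of why halos appear: local tonemapping typically splits the signal into a smooth “base” layer and a “detail” layer, compresses the base, and keeps the detail. If the base filter is not edge-aware (a box blur stands in for one here), the blurred edge leaks into the detail layer and re-emerges as overshoot/undershoot around strong discontinuities:

```python
import numpy as np

# Base/detail local tonemapping in 1-D with an edge-blind (box) filter.
def local_tonemap(signal, radius=3, base_compress=0.3):
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    base = np.convolve(signal, kernel, mode="same")   # smooth base layer
    detail = signal - base                            # local contrast
    return base_compress * base + detail              # squash base, keep detail

step = np.concatenate([np.full(20, 0.1), np.full(20, 2.0)])  # hard edge
out = local_tonemap(step)
# out undershoots just before the edge and overshoots just after it --
# exactly the halo pattern seen around high-amplitude discontinuities.
```

Edge-aware base filters (bilateral, guided, etc.) suppress this leakage, which is why the choice of filter matters so much in the algorithms discussed here.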

I didn’t know about the term, thanks.
Actually, that’s what I did in my edit above, except I didn’t use the tone mapping tool but filmic. Neither did I use alternative blend modes as @Gaito did, which undoubtedly give a more pleasant look.

You are right about possible haloing. In my opinion this is also often the risk when applying DT masks.

I have spent some time testing several tools in darktable for compressing the dynamic range of a number of photos. In my opinion the Dynamic Range Compression tool in RawTherapee is superior to the tools available in darktable. It would be really nice to have this tool implemented in DT.

dt’s tonemapping module is one of many potential approaches to local tonemapping. IIRC it attempts to implement a similar algorithm to one @Carmelo_DrRaw was experimenting with involving bilateral filters, but seems to have some implementation flaws that result in artifacts not seen in other implementations of the same algorithm.

dt’s exposure fusion module is really just another approach to tonemapping (which also, again, has documented flaws/deficiencies compared to other implementations of the same algorithm).

RT’s “Tone Mapping” algorithm is based on a paper frequently referred to as the Drago algorithm (if I recall correctly - for whatever reason Google search is being very slow). I’m personally not a fan of the results I’ve gotten from it.
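For reference, Drago’s operator belongs to the logarithmic-mapping family; stripped of its bias-controlled adaptive log base, the core idea is just this (a simplification of the actual paper, not RT’s implementation):

```python
import numpy as np

# Bare-bones logarithmic tonemap -- the family Drago's adaptive operator
# belongs to; the real paper varies the log base per pixel via a "bias".
def log_tonemap(lum, lum_max):
    return np.log1p(lum) / np.log1p(lum_max)   # maps [0, lum_max] -> [0, 1]

hdr = np.array([0.01, 1.0, 100.0, 1000.0])     # five decades of scene luminance
ldr = log_tonemap(hdr, hdr.max())
# shadows get a large share of the output range; highlights are compressed
```

Because it is a global, monotone mapping, it never reorders pixels; the adaptive bias in the real operator controls how hard the highlights are squashed.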

RT’s “Dynamic Range Compression” algorithm is also a local tonemapping operator, based on a 2002 paper by Fattal. It’s a little more fragile (meaning more prone to looking strange) than some more modern algorithms, but so far it’s gotten the job done for my needs. I don’t like it quite as much as a fixed Mertens implementation, but it’s good enough for most of my use cases.

PhotoFlow has quite a few choices for tonemapping operators, with @Carmelo_DrRaw working on even more. The various algorithms out there often have differing weaknesses, so choice is always good - the failure modes of one algorithm in one use case may not affect another algorithm in that case, but in another scenario the situation may be reversed. In some cases the relative strengths and weaknesses of one algorithm over another are subjective; certain visual artifacts are more disturbing to some people than to others. As I’ve mentioned, some of the ultra-aggressive approaches implemented by some software (such as some of the approaches used by Photomatix) are ones I really find distasteful, but there are some people who absolutely love that sort of look.


Hi obe,

I fear I have not understood what you’re asking me :grin:

Uhmm… I remember I experimented a lot, a bunch of months ago, using the same approach on other darktable modules, including the exposure one, but from memory, the best balance between “effectiveness” and artifacts was obtained using the tone mapping module.

Well, maybe… but, as always, it’s a matter of tradeoff between effectiveness and artifacts :slight_smile:
Still, are you sure that highlight reduction is less effective than shadow boost? Be sure you’re not trying to reduce raw clipped highlights: those are impossible to recover using my approach or any other (apart from “simulating” a recovery by “reconstructing” them from non-clipped neighbouring pixels).

Yes: lots of experiments, on as many raw files as possible :slight_smile:

hi,

do you have examples by chance? thanks!

I’ll try to put together a good side-by-side… It’s hard to describe; it’s not a major failure such as the ones you submitted some code to fix on issue 5424 - it’s far more subtle. Just what looks like slightly unnatural subject/background separation. A challenge here is that it’s most noticeable on human subjects, and I am not comfortable posting anything here from family weddings…

Hi’ @Entropy512

From your post I understand that it isn’t possible to identify “the best” tone mapping algorithm. Every algorithm has advantages and disadvantages.

I must say that RT’s dynamic range compression has always got the job done for me too even on difficult photos but it’s awkward to have to switch from darktable to rawtherapee to use the tool.

I have been struggling a lot with filmic, and the result always seems to me a little bit greyish and flat. This is due to disabling the base curve, which is the recommended way of using filmic. I can leave the base curve turned on - is filmic working as intended in this situation?

The base curve produces a colorful and interesting image. Maybe the image gets more realistic when the base curve is turned off. It’s all a matter of taste.

For me, local tonemapping is extremely important. Between that, the fact that RT recently added a film-negative inversion feature that blows away dt’s, and the end result of the last time anyone tried to pull-request local tonemapping work to dt, I’ve switched.

As its developer intends, but I’ve never gotten good results from filmic in my use cases.

Funny thing is that for all of the ranting Pierre has done in the past about doing per-channel tone curves, that’s exactly what filmic does by default.

basecurve’s presets are effectively film-like curves that happen to have been derived from reverse-engineering the years of work various camera manufacturers spent tuning their cameras’ “look”, instead of endlessly fiddling with the settings yourself trying to get a decent result. If you don’t like what your camera OEM did, you can adjust things yourself.

Yup. My observations on base curve:

  1. One complaint about it is that per-channel tone curve lookups can cause chromaticity shifts. Well, I’ve found that for a good-looking image, I’ve always had to fiddle with chromaticity in the shadows if I did a luminance-only contrast curve. basecurve now defaults to “preserve colors” for all camera presets, but honestly, if you’re trying to emulate a camera’s behavior, you should set this to “none”. (“Preserve colors” is only in development builds at the moment; 2.6 behavior is equal to “none”.)
  2. Endless whining about it being “too early in the pipeline”. A valid complaint against 2.6, but modules can be reordered in development builds and basecurve can be put at the end of the pipeline. That was the case in the one example I gave you where I mentioned using basecurve - it was the last module in the pipe after fusion. If you use a dev build, I would DEFINITELY recommend experimenting with putting basecurve at the end of the pipeline. I have pretty much universally gotten better results with basecurve at the end than any attempt I made at using filmic. Honestly this is one of the few things I wish I could do in RT - “base curves” like this are intended to emulate the workflow of starting from a camera-JPEG-like starting point, but if you think about what the camera is likely doing internally, it’s likely implementing the tone curve as the last stage in its pipeline - so if you’re starting from RAW, why wouldn’t you do the same thing?
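The chromaticity-shift point in item 1 is easy to demonstrate. With a per-channel lookup the three channels move along the curve independently, so the R:G:B ratios drift; with a “preserve colors” style transform the curve is applied to a single norm and the channels are scaled together. A sketch (a plain gamma stands in for the tone curve, and `max` is just one possible norm choice):

```python
import numpy as np

curve = lambda x: x ** 0.45             # stand-in tone curve (gamma lift)

rgb = np.array([0.5, 0.25, 0.1])        # a warm, fairly saturated pixel

per_channel = curve(rgb)                # each channel through the curve
norm = rgb.max()                        # one possible norm choice
preserved = rgb * (curve(norm) / norm)  # curve the norm, scale RGB together

# per-channel pushes the ratios toward 1:1:1 (desaturating this pixel);
# the preserved variant keeps the original R:G:B ratios exactly.
```

Which of the two looks better is exactly the taste question raised above: the per-channel drift is the “film-like” desaturation of bright tones, while “preserve colors” keeps chromaticity at the cost of that look.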

Hi’ @Gaito

As mentioned in my previous post I really like your presets and have already used them many times!

I was just wondering: You select dark pixels by masking. The selected pixels are then lightened by tone mapping and the screen blend mode. Is there a risk that some of the pixels end up being lighter than some of the non-selected pixels?

The opacity slider is by default set to 100%, giving no room to make the effect stronger. The opacity slider in the shadow-boost preset is set to 65%, giving lots of room to make the effect stronger. That is why I ask.

Highlight reconstruction is turned on in my setup. Is that a problem when using your preset?

Hi’ @Entropy512

I understand that the ability to relocate the basecurve and other modules will be available in 2.8. I think I will wait for that release, because compiling development builds may be a little bit out of my present league.

I’m a fan of the base curve because it provides the average user, such as myself, with a good starting point for editing an image. You can turn it off when you become more experienced.

Some years ago, when you opened a photo in RawTherapee, you would wonder how you could ever turn the image into something usable. This is not the case using DT with the base curve module turned on.

Using RawTherapee you had to extract an input profile from Adobe’s DNG Converter or some Nikon software (in my case) to get a good starting point. Messing around with input profiles is not straightforward for average users, and we want many users, don’t we?

Is it correct that the DT base curve acts the same way as an RT color-management input profile?

FYI it has improved a lot in RT these days.
