Filmic, when to use?

Unfortunately, basecurve fusion is broken for a variety of reasons, but yes - that is tonemapping-by-fusion. The concept is sound, but there are a bunch of implementation flaws. I started doing a rework to fix the issues, but in the end, @Edgardo_Hoszowski split everything into a separate module (which it should have been in the first place in my opinion…). Unfortunately, for various reasons, that effort is dead. :frowning:

You won’t be able to use 2.6.2 unfortunately - which is why I included a link to the current state of my changes to Edgardo’s starting point in my post.

I call it tonemapping-by-fusion in order to differentiate it from another common use case of exposure fusion, which is feeding bracketed images from a camera into enfuse, and getting a fused image with compressed dynamic range as output, bypassing the intermediary linear HDR data representation. Many cameras now have enough dynamic range that you can get good results by tonemapping data from a single raw exposure, and there are other approaches to both raw merging (HDRMerge, Google’s align-and-merge stacking method, other stacking/averaging methods) and to tonemapping (PhotoFlow has some interesting techniques that its author has been working on) that you can use. I happen to like the results of Mertens algorithm for tonemapping, even if it is not the original design intent of feeding it bracketed JPEGs from a camera. Some people strongly dislike it - as with everything, many things are a matter of taste.
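For the curious, the core idea can be sketched in a few lines of numpy. This is a toy single-scale version using only the well-exposedness weight; the actual Mertens algorithm also uses contrast and saturation weights and blends Laplacian pyramids, and none of this is dt's implementation:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # weight favouring pixel values near mid-grey (0.5); one of the
    # three quality measures in the Mertens et al. (2007) paper
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def naive_fusion(exposures):
    # single-scale weighted average of the stack; the real algorithm
    # blends Laplacian pyramids instead, which avoids seams
    stack = np.stack(exposures)
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0) + 1e-12  # normalise per pixel
    return (weights * stack).sum(axis=0)

# three synthetic "exposures" of the same grey ramp
ramp = np.linspace(0.0, 1.0, 256)
exposures = [np.clip(ramp * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
fused = naive_fusion(exposures)
```

Each output pixel leans toward whichever "exposure" rendered it closest to mid-grey, which is why fusion compresses dynamic range without an explicit tone curve.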

Obviously the ideal is to fix the scene lighting by adding light in the shadows, but as you’ve experienced, that can often be easier said than done. Bounce flash can work GREAT, if you have a room where the ceiling is a matte white. I’ve sometimes had decent results by putting a flash with a small mini-softbox (Lumiquest ultrasoft) on the end of a monopod and holding it up, but often you’ll still have that “this was taken with flash” look. Interestingly, one of my uses for fusion has been to lessen that look such that the falloff from near to far isn’t so obvious. (I would post an example, but I am uncomfortable posting family wedding pictures here.)

Edit: As a side note, for use cases like this, you might want to check out PhotoFlow - in addition to the tone curve work that was discussed earlier here, @Carmelo_DrRaw has been doing a lot of investigation into various approaches for local tonemapping. I admit, I haven’t tried it yet but based on discussions here I’m planning on compiling and playing with it soon as it looks like it may be far more suitable to my own typical workflow needs due to the tonemapping work he’s done.

…Am I a bit late to the party? :grin:

Here are the results I obtained by using my own preset, Highlights-Shadow_Gaito,
find it here: https://discuss.pixls.us/t/highlights-shadows-control-my-preset/13791/2

My preset aims to “imitate” the quick and simple two-slider Lightroom approach: one slider for boosting the shadows, one for attenuating the highlights, and that’s all.

The results you can see here are obtained using only that preset and a little bit of local contrast.

On the chalet picture, I had to duplicate the ShadowsBoost instance of my preset in order to double the shadow recovery capability: the shadows in that picture were very dark! :smile:


Hi’ @Gaito

Amazing results, I think… :blush:
And thank you for mailing your presets. I will have a closer look tomorrow.

I shot the chalet picture with the intent of getting a big contrast between the dark interior and the light exterior. Out of the camera the contrast is too big, but it is interesting to see how much you can lighten the interior using your presets.

Hi’ @Gaito

I have studied and tested your presets. I am posting an image so that others can easily follow the discussion. Your presets are:

Your method is to separate the dark areas and light areas by parametric masks in the lightness channel, compress the selected areas by tone mapping and use the blend modes screen/multiply to lighten/darken the shadows/highlights respectively.
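For anyone wanting to see the mechanics, here is a rough numpy sketch of the blending part. The function names and the self-screen layer are illustrative stand-ins of mine; in the actual presets the blended layer comes from the tone mapping module and everything happens inside darktable’s pipeline:

```python
import numpy as np

def screen(base, layer):
    # screen blend lightens: 1 - (1 - a)(1 - b)
    return 1.0 - (1.0 - base) * (1.0 - layer)

def multiply(base, layer):
    # multiply blend darkens: a * b
    return base * layer

def masked_blend(base, blended, mask, opacity=1.0):
    # the mask (0..1) restricts the effect to the selected pixels,
    # like a parametric mask combined with the opacity slider
    m = mask * opacity
    return base * (1.0 - m) + blended * m

# lightness ramp; a soft mask selecting pixels darker than 0.4
img = np.linspace(0.0, 1.0, 11)
shadow_mask = np.clip((0.4 - img) / 0.4, 0.0, 1.0)

# lift shadows by screening the image with itself at 65% opacity
lifted = masked_blend(img, screen(img, img), shadow_mask, opacity=0.65)
```

The highlight side is symmetric: a mask selecting light pixels, a multiply blend, and an opacity slider to taste.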

I have tested your method on a few images, and it works really well!

Below you will find some comments and questions.

You adjust the dark areas and the light areas separately. That is a nice feature giving good control of the end result. Much better than the standard tone mapping tool.

The order of the pixels, ranked from 100% black to 100% white, should in my view not be changed when compressing the dynamic range. Are you sure this is not happening?

Once you have separated the dark/light areas by means of masks you could use other tools than tone mapping to lighten/darken. What about the exposure tool? It’s simple and it seems to work well too.

The shadow boost preset seems more powerful (opacity 65%) than the highlight attenuation preset (opacity 100%). Is there a way to make the highlight attenuation preset more powerful?

Mask refinement settings are different in your presets. Is this a result of experiments?

I’m not sure whether other contributors to this thread will get notified, but I will type some names here hoping that they are notified because I think that your post is really interesting: @Entropy512 @ggbutcher @anon41087856 @gadolf


This sounds like a form of what is called local tonemapping - where attempts are made to preserve or even enhance the local contrast of an image, because the human eye is significantly more sensitive to local contrast than global contrast. With local tonemapping, you will have pixels being reordered in luminance based on the properties of their neighbors.

The advantage of local tonemapping is, as mentioned above, that the human eye is more sensitive to local contrast than to global contrast.

The disadvantage is that if overdone, it can start to look strange/unnatural and always has a risk of haloing. (However, a good algorithm is fairly resistant to doing so unless it is abused. Although some people actually LIKE the results when such algorithms are abused, even if others think they look cartoonish.) There’s one academic paper on a local tonemapping approach that claims “no halos”, but only for a restricted atypical definition of “no” or “halos”, because the very paper itself provides examples of haloing as a visual artifact that can occur. (Specifically, it only prevents halos for discontinuities with amplitude greater than the selected value of the constant sigma described in the paper) I’m not saying it’s bad (it’s a pretty solid approach), but it’s not the One True Approach some have made it out to be because the assertion that it can’t ever fail is false.

I didn’t know about the term, thanks.
Actually, that’s what I have done in my edit above, except I didn’t use the tone mapping tool, but filmic. Neither did I use alternative blend modes as @Gaito did, which undoubtedly give a more pleasant look.

You are right about possible haloing. In my opinion this is also often the risk when applying DT masks.

I have spent some time testing several darktable tools for compressing the dynamic range on a number of photos. In my opinion the Dynamic Range Compression tool in RawTherapee is superior to the tools available in darktable. It would be really nice to have this tool implemented in DT.

dt’s tonemapping module is one of many potential approaches to local tonemapping. IIRC it attempts to implement a similar algorithm to one @Carmelo_DrRaw was experimenting with involving bilateral filters, but seems to have some implementation flaws that result in artifacts not seen in other implementations of the same algorithm.
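For reference, the general bilateral base/detail idea (in the spirit of Durand & Dorsey’s 2002 paper; this is a generic sketch of mine, not dt’s or PhotoFlow’s actual code) looks roughly like this:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.4, radius=4):
    # brute-force bilateral filter; fine for tiny demo images only
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: similar values count, values across an
            # edge (far away in log-luminance) are mostly ignored
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w_ij = spatial * rng_w
            out[i, j] = (w_ij * patch).sum() / w_ij.sum()
    return out

def local_tonemap(lum, compression=0.5):
    # compress the large-scale "base" layer in log space,
    # leave the fine "detail" layer untouched
    log_l = np.log2(np.maximum(lum, 1e-6))
    base = bilateral(log_l)
    detail = log_l - base
    return 2.0 ** (base * compression + detail)

# synthetic HDR patch: bright left half, dim right half, mild texture
rng = np.random.default_rng(0)
lum = np.where(np.arange(32) < 16, 8.0, 0.05)[None, :] * np.ones((16, 32))
lum *= 1.0 + 0.1 * rng.standard_normal(lum.shape)
result = local_tonemap(lum)
```

Because the bilateral filter keeps strong edges in the base layer, the compression squeezes the global range while local texture survives; artifacts appear when the edge-preserving step misbehaves.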

dt’s exposure fusion module is really just another approach at tonemapping (which also, again, has documented flaws/deficiencies compared to other implementations of the same algorithm)

RT’s “Tone Mapping” algorithm is based on (if I recall correctly; Google search is being very slow for whatever reason) what is frequently referred to as the Drago algorithm. I’m personally not a fan of the results I’ve gotten from it.

RT’s “Dynamic Range Compression” algorithm is also a local tonemapping operator, based on a 2002 paper by Fattal. It’s a little more fragile (meaning more prone to potentially looking strange) than some more modern algorithms, but so far it’s gotten the job done for my needs. I don’t like it quite as much as a fixed Mertens implementation, but it’s good enough for most of my use cases and scenarios.

PhotoFlow has quite a few choices of tonemapping operators, with @Carmelo_DrRaw working on even more. The various algorithms out there often have differing weaknesses, so choice is always good: the failures of one algorithm in a given use case may not affect another algorithm in that case, while in another scenario the situation may be reversed. In some cases the relative strengths and weaknesses of one algorithm over another are subjective; certain visual artifacts are more disturbing to some people than to others. As I’ve mentioned, some of the ultra-aggressive approaches implemented by some software (such as some of the approaches used by Photomatix) are ones I really find distasteful, but there are some people who absolutely love that sort of look.


Hi obe,

I fear I have not understood what you’re asking me :grin:

Uhmm… I remember I experimented a lot, a bunch of months ago, using the same approach with other darktable modules, including the exposure one, but from memory the best balance between “effectiveness” and artifacts was obtained using the tone mapping module.

Well, maybe… but, as always, it’s a matter of tradeoff between effectiveness and artifacts :slight_smile:
Still, are you sure that highlight reduction is less effective than shadow boost? Be sure you’re not trying to reduce raw clipped highlights: those are impossible to recover using my approach or any other (apart from “simulating” a recovery by “reconstructing” them from non-clipped neighbouring pixels).

Yes: lots of experiments on as many raw files as possible :slight_smile:

hi,

do you have examples by chance? thanks!

I’ll try to put together a good side-by-side… It’s hard to describe, it’s not a major failure such as the ones you submitted some code to fix on issue 5424, it’s far more subtle. Just what looks like slightly unnatural subject/background separation. A challenge here is that it’s most noticeable on human subjects and I am not comfortable posting anything here from family weddings…

Hi’ @Entropy512

From your post I understand that it isn’t possible to identify “the best” tone mapping algorithm. Every algorithm has advantages and disadvantages.

I must say that RT’s dynamic range compression has always got the job done for me too, even on difficult photos, but it’s awkward to have to switch from darktable to RawTherapee to use the tool.

I have been struggling a lot with filmic, and the result always seems a little bit greyish and flat to me. This is due to disabling the base curve, which is the recommended way of using filmic. I could leave the base curve turned on, but is filmic working as intended in that situation?

The base curve produces a colorful and interesting image. Maybe the image gets more realistic when the base curve is turned off. It’s all a matter of taste.

For me, local tonemapping is extremely important. Between that, the fact that RT recently added a film negative inversion feature that blows away dt’s, and the end result of the last time anyone tried to pull-request local tonemapping work to dt, I’ve switched.

As its developer intends, but I’ve never gotten good results from filmic in my use cases.

Funny thing is that for all of the ranting Pierre has done in the past about doing per-channel tone curves, that’s exactly what filmic does by default.

basecurve’s presets are effectively a film-like curve that happens to have been derived from reverse engineering the years of work various camera manufacturers spent tuning their cameras’ “look”, saving you from endlessly fiddling with the settings yourself trying to get a decent result. If you don’t like what your camera OEM did, you can adjust things yourself.

Yup. My observations on base curve:

  1. One complaint about it is that per-channel tone curve lookups can cause chromaticity shifts. Well, I’ve found that for a good-looking image I’ve always had to fiddle with chromaticity in the shadows if I used a luminance-only contrast curve. basecurve now defaults to “preserve colors” for all camera presets, but honestly, if you’re trying to emulate a camera’s behavior, you should set this to “none”. “Preserve colors” is only in development builds at the moment; 2.6 behavior is equivalent to “none”.
  2. Endless whining about it being “too early in the pipeline”. A valid complaint against 2.6, but modules can be reordered in development builds and basecurve can be put at the end of the pipeline. That was the case in the one example I gave you where I mentioned using basecurve - it was the last module in the pipe after fusion. If you use a dev build, I would DEFINITELY recommend experimenting with putting basecurve at the end of the pipeline. I have pretty much universally gotten better results with basecurve at the end than any attempt I made at using filmic. Honestly this is one of the few things I wish I could do in RT - “base curves” like this are intended to emulate the workflow of starting from a camera-JPEG-like starting point, but if you think about what the camera is likely doing internally, it’s likely implementing the tone curve as the last stage in its pipeline - so if you’re starting from RAW, why wouldn’t you do the same thing?
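A tiny numeric illustration of the per-channel vs. “preserve colors” difference. The smoothstep curve and the plain channel-mean “luminance” are simplifications I picked for the example; darktable offers several different norms for color preservation:

```python
import numpy as np

def curve(x):
    # a simple contrast S-curve standing in for a base curve
    return x ** 2 * (3 - 2 * x)  # smoothstep

def per_channel(rgb):
    # film-like: each channel goes through the curve independently,
    # so saturated colours shift in hue/saturation (as cameras do)
    return curve(rgb)

def preserve_colors(rgb):
    # apply the curve to a luminance estimate and scale all
    # channels by the same ratio, keeping R:G:B ratios intact
    lum = rgb.mean(axis=-1, keepdims=True)
    ratio = curve(lum) / np.maximum(lum, 1e-6)
    return rgb * ratio

px = np.array([0.8, 0.3, 0.1])  # a fairly saturated orange
a, b = per_channel(px), preserve_colors(px)
# a has different R:G:B ratios than px; b keeps the same ratios
```

The per-channel result is more saturated and slightly hue-shifted, which is exactly the “chromaticity shift” being complained about, and also exactly the film-like look some people want.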

Hi’ @Gaito

As mentioned in my previous post I really like your presets and have already used them many times!

I was just wondering: You select dark pixels by masking. The selected pixels are then lightened by tone mapping and the screen blend mode. Is there a risk that some of the pixels end up being lighter than some of the non-selected pixels?

The opacity slider is by default set to 100%, giving no room to make the effect stronger. The opacity slider in the shadow-boost preset is set to 65%, giving lots of room to make the effect stronger. That is why I ask.
Highlight reconstruction is turned on in my setup. Is that a problem when using your preset?
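To make the reordering question concrete: with a hard mask at full opacity it can indeed happen. A toy two-pixel example (self-screen used as a stand-in for the preset’s tone-mapped layer; a feathered parametric mask and lower opacity make a flip less likely, but don’t rule it out):

```python
def screen(a, b):
    # screen blend: 1 - (1 - a)(1 - b)
    return 1.0 - (1.0 - a) * (1.0 - b)

# hard mask selecting pixels below 0.4, blend at full opacity
dark, mid = 0.35, 0.45             # dark pixel selected, mid pixel not
lifted_dark = screen(dark, dark)   # 1 - 0.65**2 = 0.5775
print(lifted_dark > mid)           # True: the two pixels swapped order
```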

Hi’ @Entropy512

I understand that the ability to relocate the basecurve or other modules will be available in 2.8. I think I will wait for this release because compiling development builds may be a little bit out of my present league.

I’m a fan of the base curve because it provides the average user, such as myself, with a good starting point for editing an image. You can turn it off when you become more experienced.

Some years ago, when you opened a photo in RawTherapee, you would wonder how you could ever turn the image into something usable. This is not the case in DT with the base curve module turned on.

Using RawTherapee, you had to extract an input profile from Adobe’s DNG Converter or some Nikon software (in my case) to get a good starting point. Messing around with input profiles is not straightforward for average users, and we want many users, don’t we?

Is it correct that the DT base curve acts the same way as an RT color management input profile?

FYI it has improved a lot in RT these days.


Yes, basecurve is the same basic concept as the input profile tone curve.

At least with a Sony A6300 and Sony A7III, RT had built-in Sony tone curves. There’s also the AMTC feature, which in many cases can automatically match the camera’s built-in tone curve if the raw file has a decent-quality JPEG preview/thumbnail. This is potentially useful if you have found a particular combination of camera JPEG settings that you like but don’t quite understand what those settings are actually doing (the documentation being as clear as mud, which is definitely the case for Sony).

I can’t remember why I originally didn’t even try RT a few years ago. I know that exposure fusion was part of what attracted me to darktable, but I think there were other reasons. After giving RT another try recently, I’m VERY impressed and I’m rapidly making progress in finally clearing a massive backlog.

I do wish RT let me move the “input profile” tone curve later in the pipeline, like some of my most recent darktable workflows, although I get pretty close with the LAB Adjustments tool: an S-curve to taste on the L channel, and then a boost to saturation in the shadows using the CL or LC curve (I forget which one, as I don’t have RT open; it’s the lower rightmost curve in the LAB Adjustments tool).

Hi’ @paperdigits
Yes, you are absolutely right, and that’s why I wrote “some years ago…”. Today the auto-match facility in RT is just great at providing users with a good starting point for further editing.

I have spent many hours exploring RT, and what it does, it does really well, outperforming the equivalent features in DT (in my opinion, at least). This holds true for dynamic range compression, noise reduction, haze removal and maybe others.

But to a new user, RT seems like a subset of DT, missing masking, retouch, automatic perspective correction and several other facilities. It is a hard decision to choose between RT and DT.

Hallojsa!

I agree! BUT: you do not have to choose between RT and dt.
Use them both!

Med venlig hilsen,
Claes fra Lund i Sverige


@obe @Entropy512 One should also remember the ability of RT (and PhF) to apply DCP profiles. Those provided with the Adobe DNG converter do a pretty good job at reproducing the various picture styles of my Nikon D300, and I guess the same is true for many other cameras.