Some thoughts on workflows and tone mapper modules

I wondered how long it would take for someone to find that thread.

Don’t make an issue of the language; it’s just my exaggerated way, among friends, of complaining that I’m finding the pixls.us explanations either too technically opaque or too brush-off (“don’t worry about understanding it, just use it”), depending on the responder. This should not be a surprise: I’m talking to imaging engineers and developers, and I’m out of my depth. Over there, they’re used to me not understanding technical stuff and adjust accordingly.

That’s my issue, not yours, no disrespect intended. I understand I’m a bit ignorant and frankly thick about these highly technical matters, and I really struggle to grasp what people are saying. I usually get there in the end through bloody-minded persistence, but at the cost of annoying the heck out of people. But, in the end, I will be appropriately grateful and try to help others in turn…


I’m not too keen on that article. But if it helps you, go for it.

There is an article on this site that I think addresses your questions. It’s a bit technical. Also, it predates Sigmoid and AgX. But it addresses the issue of the legacy process flow (most of which occurs after tone compression) and the newer scene-referred flow (most of which occurs before tone mapping). The old flow permitted a color space called Lab but that is not the main point.

https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/

You obviously know how to capture a nice image with your camera, and you have a workflow that is comfortable for you. So, your workflow is not incorrect. But there are advantages to the scene-referred workflow that are helpful to many of us.

You will find a community here that is willing to help you learn a more modern workflow, if that is what you want. But please understand that there is not a problem here just because you do not understand part of it.

A couple of other points in case it is not clear from earlier discussions:

Module editing order (history) is not the same as the module processing order (pipeline).

Sigmoid is the simplest of the modern tone mappers in dt and is sufficient for the majority of photos. If you want to try a more modern workflow, you could try Sigmoid before diving into AgX. (And with AgX it is not usually necessary to touch many of the sliders. They are there to give more control when needed or desired.)

Thanks I’ll take a look at both those papers.

You said something that has been mentioned before (repetition helps with unfamiliar things, I find): that with display-referred, tone compression comes first and edits come after, while with scene-referred, edits come first and compression comes at the end.

Is this the key difference I need to grasp? If so, I think it is difficult for me because I came with the conceptual baggage that all raw convertors do their processing in the widest, most flexible space, and the compression comes only at the very end when the image is rendered out. I’m beginning to get an inkling that I may be harbouring an incorrect assumption here and that is what is blocking me…

p.s.

I’ve tried out Sigmoid a few times before and my impression was that it basically seemed to boost contrast and make things clip. I assumed that it wasn’t supposed to do this so I must not know what it is for or how to use it and discontinued using it.

Try the “Smooth” preset in Sigmoid. You may need to back off of some of your typical adjustments.

Since you seemed to formulate your thoughts at DPReview a bit more succinctly, maybe we can have a go at answering those questions.

  1. What does display-referred mean?

It means you’re editing pixels that have already had a tone mapper applied to them, so how far you can push a slider in a display-referred module is limited before artifacts are introduced. Edits are logarithmic (I think; for certain they are non-linear).

  2. What does scene-referred mean?

This means edits are done in a linear space. You can push module settings a lot further before the image starts to show processing artifacts.

  3. What is the practical outcome of using one or the other, and which is best for what?

The practical outcome is (hopefully) the same: you end up with an edit you’re happy with. Trust your eyes.

The technical difference is that you can push contrast and saturation a lot farther in the scene- part of the editing pipeline than you can in the display- part of the pipeline.

  4. What is a tone mapper like AgX or Filmic for? Why would I need to use one?

To map the linear data from the scene-referred part of the processing pipeline onto the smaller, squished-up range of the display. Using a tone mapper gives you easy control over the dynamic range of your output medium.
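As a rough illustration of that squishing, here is a toy Reinhard-style curve (my own sketch, not the curve darktable actually uses) that maps unbounded scene-referred values into display range:

```python
# Toy tone mapper: a Reinhard-style curve x / (x + 1) that compresses
# scene-referred values in [0, infinity) into display range [0, 1).
# This is NOT darktable's actual Sigmoid/AgX math, just an illustration
# of mapping an unbounded range onto a bounded one.

def tone_map(x: float) -> float:
    """Map a scene-referred linear value (>= 0) into [0, 1)."""
    return x / (x + 1.0)

# Small values pass through almost unchanged...
print(tone_map(0.1))
# ...while arbitrarily bright scene values are rolled off, never clipped.
print(tone_map(10.0))
print(tone_map(1000.0))
```

However bright the scene value gets, the output approaches but never reaches 1.0, which is the "roll-off" behaviour discussed elsewhere in this thread.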

For example, if I edit my image in order to print it on paper, I generally target the Adobe RGB color space and soft proof my edit on screen with Adobe RGB, since my printer supports that color space natively. Now I print my photo and it looks like it does on my monitor (give or take reflective vs emissive, etc.). That is awesome.

But! I just got a new TV that has a much larger color gamut and supports all this fancy HDR stuff (it’s a Samsung Frame TV, and it’s DCI-P3). It has a cool mode called “Art mode” where it can sit in a low-power state but still show a still image on the screen. A great way to showcase my photographs. But if I just take the Adobe RGB TIFF file I exported for printing and put it on the TV, it looks dull and lifeless and not nearly as good as the print. Because I used my tone mapper, all I have to do is set DCI-P3 as my soft proof profile and adjust the black/white point in the tone mapper, and now my image is optimized for display on the TV.

  5. What is the consequence of me turning off all the built-in tone mappers, including the old base curve, and starting with a completely flat, murky image, editing it primarily by dodging and burning with multiple instances of a simple curve tool and masks to restrict it to local edits?

It might take you a long time to edit this way. And if you have a display medium (like my TV) that is vastly different from what your original edit was for, you’ll have to do a lot more wrangling to prepare for that new display medium.

Again, if you’re happy doing what you do, then keep doing it.


In the older display-referred pipeline, the base curve came first and compressed everything down to 0…255. Then edits followed.

Yes. I think you’ve got the image pipeline backwards. Tone mapping (AgX, Filmic, Sigmoid) comes near the end of the pipeline (the UI shows the pipeline bottom to top, so the top is the end).

Scene referred is wider and has more latitude. It’s represented as 0…infinity. The tone mapper compresses that to 0…255 (display-referred), which is limited.

Scene-referred modules (everything below the tone mapper) use different math that can take advantage of that representation, i.e. greater latitude/flexibility. For instance, Color Balance RGB vs Color Balance (or RGB Curves vs Curves) is scene-referred vs display-referred. Display-referred modules either clip by clamping values that fall below 0 or above 255, or leave them for the color space clipping (output module). Scene-referred modules don’t clip in this way and just leave the values as they are. The tone mapper then handles them, rolling off the shadows and highlights more gracefully while taking other factors, like color, into account.
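The clamp-vs-roll-off difference can be sketched numerically. This is a toy model, not darktable code; the Reinhard-style `roll_off` is a stand-in for the tone mapper's shoulder:

```python
# Sketch (not darktable code): push exposure +2 EV on a bright pixel
# and compare a display-referred clamp with a scene-referred roll-off.

def exposure(x: float, ev: float) -> float:
    """Linear exposure gain: +1 EV doubles the value."""
    return x * 2.0 ** ev

def clamp(x: float) -> float:
    """Display-referred behaviour: values are hard-clipped to [0, 1]."""
    return min(max(x, 0.0), 1.0)

def roll_off(x: float) -> float:
    """Stand-in for a tone mapper's shoulder (Reinhard-style), applied last."""
    return x / (x + 1.0)

bright = 0.8                      # a bright but unclipped pixel
pushed = exposure(bright, 2.0)    # 3.2 in scene-referred space

# Display-referred: the overshoot is clamped and the detail is gone for good.
print(clamp(pushed))              # 1.0

# Scene-referred: the value survives unbounded until the tone mapper
# compresses it gracefully at the end of the pipe.
print(roll_off(pushed))           # still below white
```

The point is that the scene-referred value keeps its identity (3.2 is still distinguishable from 5.0) until the very end, whereas the clamp collapses everything above white into the same 1.0.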

Older display-referred modules should come after the tone mapper. There’s nothing stopping you from using the old modules, but more care is needed to avoid clipping shadows and highlights with them.

Darktable also operates in a wider color space (linear Rec2020 RGB, eventually converted to sRGB or Adobe RGB before exporting), which you may be confusing with scene-referred.


Thank you, paper, I think I am now slowly crawling towards at least a conceptual understanding. My dpreview questions were likely in part informed by the discussion here which at least educated me enough to be able to ask the questions :slight_smile: The Aurelien authored paper was a useful read too, even if it is a bit old.

So, to summarise what I currently have learned:

display-referred means edits done within a tonally compressed space

scene-referred means edits done within an uncompressed space

Working in an uncompressed space has some advantages when it comes to maintaining image fidelity

tone mappers manage the transition of work done in the uncompressed space to the compressed space used by our output media (monitors and printers) and prevent the nasty distortions that can occur in that transition

darktable was originally designed mainly to use Lab space, but this is not a great idea with current wide-dynamic-range cameras, which Lab was not designed to handle. To deal with this, certain modules that previously used Lab as their working space have been replaced or upgraded to use different, more linear spaces. This removes some of the errors that darktable might introduce during extreme processing actions.

AgX is the latest tone mapper, with some advantages over the older ones (at the cost of a more complex UI)

Sigmoid is the simplest to use

The Exposure module is a good module to use for controlling mid tones and it is a mistake on my part to stop using it. I could use it with masks to replace RGB curves for dodging and burning.

It is ok to use tone mappers in a very basic way, i.e. just switch one on, then do my tonal adjustments with other modules.

But there are potentially other gains to using the tone mapper more intelligently - this bit is currently outside my understanding.


Oh, and I need to remember that modules lower down the development panel are processed first and modules at the top are processed last - the other way around from how it looks.

Not replaced: all those modules are still there and people still use them. Rather, new modules that account for the increase in DR from modern cameras were developed, and the processing pipeline was amended to be able to handle the uncompressed working space as well.

The rest of it is relatively correct.

Yes. Also, the order in which you use modules in your workflow is not (necessarily) the same as the order in which the modules are applied in the image processing pipeline.

I think in practical terms, a tone mapper gives me a good base to manage the dynamic range of my image no matter the output medium (print, low dynamic range screen, high dynamic range screen, and/or something else that might not even exist yet).

In really basic terms, tone mappers convert the raw file’s data into more limited color spaces (sRGB, etc.). As far as I know, every raw editor does this at some point in the process. It would have to, in order to output an sRGB JPEG file, for example.

I don’t know if Lightroom does this before you start editing (meaning you’d be editing in “display-referred” space) or if it processes your changes first (meaning you’d be editing in “scene-referred” space). It kind of doesn’t matter, Lightroom doesn’t give you any tools to control how this mapping is done as far as I know.

AgX, Sigmoid, and Filmic all operate, I believe, on the principle that your image has a midpoint (e.g. 18% grey), and the tone mapper creates a curve that pivots around that midpoint. In other words, the tone mapper will not change the luminance of that midpoint. What the tone mappers do, in very simple terms, is adjust how the brightest and darkest parts of your image map to a color space like sRGB.
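That pivot behaviour can be illustrated with a toy contrast curve. This is an assumption for illustration only: a plain power curve around mid-grey, not the actual AgX or Sigmoid math:

```python
# Illustration only (not the real AgX/Sigmoid curve): a contrast curve
# that pivots around mid-grey (0.18), so the midpoint's luminance is
# unchanged while shadows and highlights move away from it.

MID_GREY = 0.18

def pivot_contrast(x: float, contrast: float) -> float:
    """Power curve around the mid-grey pivot; contrast > 1 adds contrast."""
    return MID_GREY * (x / MID_GREY) ** contrast

# The pivot itself is untouched:
print(pivot_contrast(MID_GREY, 1.7))        # 0.18 exactly

# Shadows get darker, highlights brighter (more contrast):
print(pivot_contrast(0.05, 1.7) < 0.05)     # True
print(pivot_contrast(0.60, 1.7) > 0.60)     # True
```

Note that a bright input like 0.60 can end up above 1.0 after this curve; in scene-referred space that is fine, since the roll-off toward display range happens afterwards.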

Here are some screenshots of the waveform from my workflow post, with an additional screenshot that I did not include in that post:

Here’s the waveform as it looks with no tone mapper, no color balance RGB, pretty much just the starting point when initially loaded into darktable. You can see the highlights are clipping.

This is with the agx module enabled with the default settings. The highlights were brought down so they aren’t clipping, and the shadows were brought down as well. But the midpoint (the dashed line in the middle of the waveform) was not affected. I circled a little peak in this area to illustrate this, but if you were to toggle the agx module on and off you’d see that area does not change.

Finally, the “auto tune levels” button adjusts things based on the properties of my image. It pushes the highlights up more and the shadows down more, and once again the midpoint is not affected. The values selected by the “auto tune levels” feature are not necessarily “correct”; you can certainly adjust them further. It’s just a starting point.

So without knowing the actual math behind it, you can see that agx (and other tone mappers) can adjust the contrast of the image without affecting the midpoint exposure. The rest of the controls are to tweak how it’s doing all that but in essence it’s just trying to maximize the histogram without clipping and without changing that midpoint exposure.

The manual is pretty good … though it’s often in the context of filmic

Filmic even has a graph to help visualize how you are mapping the data

The overall premise of these references can be applied to AGX for the most part…


Yes, the manual is useful - or maybe I’m now in a place where I can better appreciate it. One or the other :slight_smile:


In Photoshop, his file is opened by Camera Raw, so it gets a profile and a tone curve; it has some global processing right out of the gate. DT makes you decide how that step goes. Again, everyone should work as they see fit, but just because he uses Photoshop does not mean he is starting from absolute scratch…

Actually, it’s quite useful to know what tools are scene-referred. For example, because the tone equalizer is scene-referred and operates before the tonemapping, you can still use it to bring down highlights that look clipped after some editing as long as the detail is there. But the same can’t be done with the tone curve once the data is already clipped away.

Tools like Lightroom and DxO PhotoLab are therefore exceptionally frustrating after you’ve experienced the explicitness of darktable. Maybe that’s just a personal preference, but it’s very nice to know the processing pipeline so I can better predict the behaviour of the modules.


There are a number of PlayRaw examples in this forum where you could choose to practice the use of scene-referred workflows. Here is one:

There are a number of nice edits there, but my favorite is from Boris @s7habo (I think it’s a masterful rendition). There is much more to the edit than just tone mapping.

You could then try your usual workflow and compare.


Oh I agree in darktable. I just meant that with Lightroom (or DxO or whatever) it doesn’t matter much because there’s nothing meaningful you can do with it, it just does what it does for you and you have no choice about it.

@D_M : one can think of the tone mapper’s curve as the characteristic curve of an imaginary film stock (https://www.nfsa.gov.au/preservation/preservation-glossary/characteristic-curve). The x-axis is logarithmic (not recorded in lux * seconds, but rather in exposure relative to mid-grey, so using log2, not log10), the y-axis is like density. Using the parameters, you can change the curve.
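That log2 x-axis can be made concrete with a couple of lines (hypothetical helper names; 0.18 is the usual mid-grey convention mentioned earlier in the thread):

```python
# Worked example of the x-axis described above: exposure in EV (stops)
# relative to mid-grey, i.e. log2(value / 0.18). Mid-grey sits at 0 EV;
# each doubling of the linear value is +1 EV, each halving is -1 EV.
import math

MID_GREY = 0.18

def to_ev(x: float) -> float:
    """Linear scene value -> stops relative to mid-grey."""
    return math.log2(x / MID_GREY)

def from_ev(ev: float) -> float:
    """Stops relative to mid-grey -> linear scene value."""
    return MID_GREY * 2.0 ** ev

print(to_ev(0.18))   # 0.0 EV (mid-grey)
print(to_ev(0.36))   # +1.0 EV (twice as bright)
print(from_ev(-3))   # 0.0225 (three stops below mid-grey)
```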

It may be beneficial to restate that tone mapping is not optional. It is a necessary part of rendering a raw file. The raw file recorded physical brightness, which on a sunny day can be thousands of nits. Yet we render it on a display with 300-or-so nits. This is a huge reduction in brightness.

Due to various psychovisual effects, we perceive a dimmer image as less contrasty and less saturated. It is thus that we need a tone mapper that increases contrast and saturation. If we had an immensely bright display, tone mapping wouldn’t be necessary.

I like this explanation better than the usual 14-to-8 EV dynamic range compression one, since that actually has quite a few logical problems. In the end, it states the same thing, though.


Morning (at least for me)

I woke up with an insight (at least, I think it is an insight).

If the difference between scene-referred and output-referred is that the former performs mathematical operations before compression and the latter after, we have an operational problem: how to signal to the software user what effect their edits in a scene-referred space are having.

This means that the preview image on your screen, the one you use to judge your edits, has to be a screen-referred image. Even if you are editing exclusively with scene-referred tools, the image you look at to make quality judgements has to be a screen-referred, compressed image.

That means the raw convertor is always sort of lying to you, it can never be truly explicit.

Somehow, when you are editing in scene-referred mode, your raw convertor is performing the calculations in a high-DR space, then translating the results on the fly into the SDR of your screen so you can see them. You think you are editing in an HDR space, but the convertor is actually applying your instructions to the scene-referred data, making the changes, then translating the results back to screen-referred for display purposes.
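That on-the-fly translation can be sketched as a tiny pipeline (an assumed structure for illustration, not darktable's actual architecture; the Reinhard-style roll-off stands in for the display transform):

```python
# Sketch of the idea above: all edits operate on unbounded linear data;
# the display transform is applied only at the very end, to every
# preview (and export). This is an illustrative model, not darktable code.

def apply_edits(scene, edits):
    """Run scene-referred operations on linear data in [0, inf)."""
    for edit in edits:
        scene = [edit(v) for v in scene]
    return scene

def display_transform(scene):
    """Compress to [0, 1) for the screen (toy Reinhard roll-off)."""
    return [v / (v + 1.0) for v in scene]

raw = [0.02, 0.18, 4.0]                          # linear, can exceed 1.0
edited = apply_edits(raw, [lambda v: v * 2.0])   # +1 EV, still unbounded
preview = display_transform(edited)              # what you actually see

print(edited)    # the unbounded "truth" you never see directly
print(preview)   # everything squeezed into [0, 1) for the screen
```

The point matches the insight: `edited` is what the maths operates on, but only `preview` ever reaches your eyes.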

This logically has to be what happens with everything you do with scene-referred tools, because your screen simply can’t display the scene-referred truth.

I presume this means that with current output devices, you can never see what the image really looks like, only the display-referred translation. Effectively, the only clue you are working in a scene-referred way must be the absence of processing-induced artefacts.

So, although darktable, unlike LR, C1 etc., goes a step further and makes it explicit that you are working uncompressed, it can’t actually display the results of this on a screen.

Complicated!
