Trying to figure out RGB (filmic) workflow

Thing is, it’s simpler but arguably more powerful thanks to the freeing of the tone curve. (tie in with complexity thread) I mentioned Blender’s filmic because I don’t feel I need tone curve control in renders where I have complete control of the light. Much of my photography is outdoors with narrow time slots, so no control of light. In such situations more tone curve control is very useful, because I have to expose for dynamic range, not for the subject.

Somehow the splitting of tools and the tone curve freedom of ART also radically reduced my round trips between tone mapping and curve/look. I felt, admittedly being a noob, that the tonemap/look entanglement was more complex and unpredictable in dt, resulting in a lot of back and forth between the tabs. Please know that I spent a limited time and maybe 20-odd difficult photos trying out filmic, so I was a bit lazy.

Nonetheless, the curve is still there. It’s just that someone with skills baked that curve for you in the LUTs so you don’t need to bother about it.

Hmm… The scene tab (black/grey/white) is supposed to be absolute and solely dependent on the scene lighting. Then the look tab is supposed to be completely free from technical lighting considerations (until you clip the curve) and relative to the scene. So it makes no sense to have to go back and forth between scene and look.


@nosle

Have you tried the “Log tone mapping” in ART?

I have not. But I did download it. I’ll take a look at it.

It was mainly clipping I had issues with. The look caused clipping (according to the warnings), which meant I had to go back and tweak the tone mapping. I now know you can equalize some of that back. A flexible tone curve can avoid it immediately.

If the curve is clipping, then you have been adding too much contrast in look. The scene parameters shouldn’t be used to unclip the filmic curve. White means white, and black means black, no matter how you want to tweak the contrast. The scene parameters are the bounds of the curve. I’m not sure what a “flexible curve” means here, a spline is still a spline.


Hi,

I certainly like many of the things you have done in darktable.

and the log tone mapping is essentially dt’s filmic with a simplified UI.

yes and no. I think they use the same tone mapping formula (that comes from ACES), but the implementations are independent. I am also not using splines at all, but a simple remapping to linear. In ART there is also the “regularisation” slider that tries to preserve some local contrast.
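As a rough illustration of the idea (a sketch of my own, not ART’s actual implementation — the parameter names and the middle-gray value are assumptions), a log tone mapping in this spirit takes scene-linear values relative to middle gray, measures them in EV, and remaps the chosen dynamic range linearly to the 0..1 display range:

```python
import math

# Hypothetical log tone mapping sketch: scene-linear input is converted to
# EV relative to middle gray, then the [ev_black, ev_white] dynamic range
# is remapped linearly to [0, 1]. Names and the gray value are assumptions.
def log_tonemap(x, ev_black=-8.0, ev_white=4.0, gray=0.1845):
    ev = math.log2(max(x, 1e-9) / gray)          # exposure relative to gray
    t = (ev - ev_black) / (ev_white - ev_black)  # linear remap of the EV span
    return min(max(t, 0.0), 1.0)                 # clamp to display bounds
```

With these (assumed) defaults, middle gray lands at 8/12 ≈ 0.67 of the output range, and anything at or above +4 EV over gray maps to display white.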

In my view, that is part of the look, and therefore left to the tone curve:

Screenshot from 2020-03-05 08.15.11


Yes, but the image tended to become too flat otherwise. Tone eq can fix that, but initially I attempted to get all tone control from filmic.

With a flexible curve I just mean that it can be tweaked to be completely asymmetrical, for example with a huge soft curve for the highlights and a near-linear toe. It’s an easy way of having most contrast where you need it, regardless of where it falls in the tonal range. I have a preference for global adjustments. Imho you need to be very observant when doing local adjustments; things can get odd in ways you don’t notice while you are working on the file.
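To make the asymmetry concrete, here is a toy sketch of my own (not any particular module’s spline): identity through the toe and midtones, then a smooth Reinhard-style rolloff that compresses only the highlights above a chosen shoulder point.

```python
# Toy asymmetric tone curve (illustration only): near-linear below
# shoulder_start, soft compressive rolloff above it. The shoulder term
# t / (t + headroom) is a Reinhard-style rational rolloff whose slope is
# exactly 1 at the join, so the curve is smooth there.
def asymmetric_curve(x, shoulder_start=0.6):
    if x <= shoulder_start:
        return x                       # untouched toe and midtones
    t = x - shoulder_start
    headroom = 1.0 - shoulder_start
    return shoulder_start + headroom * t / (t + headroom)
```

Inputs above 1.0 (scene-referred highlights) are rolled off asymptotically toward 1.0, while everything below the shoulder keeps full, linear contrast.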

I think @agriggio is talking about the log tone mapping module here, but the tone curve, which does the look work, is a spline as far as I understand.

I forgot that I actually figured out the saturation curve.

correct


Quite new to this forum, quite new to Darktable and quite late to this topic, I’ll still try to contribute.
Hope this is OK.

I am also impressed and challenged at the same time by filmic RGB. Improving contrast in all dark and highlighted areas of an image is always a compromise: when expansion (a contrast improvement) is created somewhere with filmic RGB, compression is inevitably the result elsewhere. So filmic is not the primary tool to “improve” the dark or bright areas of a photo.

Please find attached my attempt to get some snow details. Not really good (vignetting!), and only a proof that something can be recovered in the snow areas.


DSCF8284_01.RAF.xmp (56.6 KB)


Basically I think that it is a good idea to test and learn really ALL tools in Darktable not only with real-world photos, but also with PNG test images. Especially filmic RGB can be understood very well with adequate test images.

Please find attached one of my test images. With such an image you can clearly see what happens in most (at least a lot of) Darktable modules.

The four rows are:

  1. Just 10 areas, from 8-bit RGB 0, 28, 56, 85, 113, 141, 170, 198, 226 to 255.
  2. 255 areas (each 5 pixels wide) from RGB 0 to 255
  3. The reverse of row 2
  4. Similar to row 3, but with small variations in areas 2, 4, 6 …

Row 4 can be used to see what happens when the contrast in specific brightness ranges is changed.
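The patch values for such a chart are easy to generate programmatically. A sketch of my own (not the original generator; I take row 2 as 0..255 inclusive, i.e. 256 steps):

```python
# Code values for the first rows of a linear-step test chart, as 8-bit codes.
row1 = [i * 255 // 9 for i in range(10)]  # 10 patches, equal steps 0..255
row2 = list(range(256))                   # a ramp; each step drawn ~5 px wide
row3 = row2[::-1]                         # row 3 is row 2 reversed

print(row1)  # [0, 28, 56, 85, 113, 141, 170, 198, 226, 255]
```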


The problem with PNGs is that many modules (e.g. exposure) work differently than with raw files, so it’s not really comparable.

For such experiments I photographed the screen with a png to get a RAW. This is not particularly accurate, but shows tendencies.

Yes Herbert, that is a nice idea. The alternative, to generate raw files directly or convert them from PNG, would go far beyond my capabilities, if that is even possible. For basic understanding (e.g. sharpening) the PNG files can be OK, but of course not for the sophisticated raw modules in Darktable.

While getting to grips with linear and nonlinear encoding and color processing, I have generated not only test images, but also detailed transfer curves. They might be useful for others:

I dedicate the curves to Eric Brasseur and Aurelien Pierre!

Photographed PNGs will still be bounded data, i.e. zero-to-one data.

A better approach is to generate OpenEXR images and open them if you are testing things after the demosaicing step. These are true scene-referred files in float. I have generated them both using Python and as renders in Blender.
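For example, a scene-referred ramp can be built in a few lines (a sketch of my own, not any specific tool’s code; the file-writing step is only hinted at, since it depends on which EXR-capable library you have installed):

```python
import numpy as np

# Scene-referred test image: a linear gray ramp from 0 up to 8.0, i.e. 800%
# of diffuse white (+3 EV), stored as 32-bit float. A bounded 0..1 PNG could
# never hold values like these.
width, height = 1024, 64
ramp = np.linspace(0.0, 8.0, width, dtype=np.float32)
rgb = np.repeat(ramp[np.newaxis, :], height, axis=0)  # repeat the row
rgb = np.stack([rgb, rgb, rgb], axis=-1)              # gray -> R, G, B

# Writing it as OpenEXR is left to an EXR-capable library of your choice,
# e.g. (hypothetically) imageio.imwrite("ramp.exr", rgb).
```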

I used that a lot for the sigmoid development!


What do you mean by that? If you mean that the input image probably has some ‘gamma’ applied (is not a linear representation), the input profile takes care of that. If used with the v3.0 JPEG module order, the input profile will come before exposure:
image

Then, increasing exposure by 1 EV will multiply linear data by 2 (check the color picker values in the screenshot):

The white square of the 0…255 bounded input chart, after scaling by 1 EV, ends up at a value of 510 (200%):
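That scaling can be written down directly; a minimal sketch (the function name is mine):

```python
# In linear light, a change of n EV is plain multiplication by 2**n.
def apply_exposure(value, ev):
    return value * 2 ** ev

print(apply_exposure(1.0, 1))    # 2.0 -> 200% of diffuse white
print(apply_exposure(255, 1))    # 510, the 8-bit white square after +1 EV
print(apply_exposure(1.0, 0.7))  # ~1.62 -> darktable's default +0.7 EV boost
```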

And turning on filmic or sigmoid shows that the data is there, and can be mapped back into display space:

I don’t understand this distinction: the sensor also provides bounded data, whether it’s the screen or the ‘real world’ that you photograph.


I don’t know of anything that exposure does differently depending on input file type.

In fact, the only module I know of that behaves differently depending on input data is the highlight reconstruction module (because it sometimes requires Bayer data, for example).

After that and the demosaicing step, it’s all just floating-point RGB data. Scale, squash, tone-map, do with it what you want.

Your raw sensor data is always between 0% and 100%, because the file can’t represent more. After highlight reconstruction you can get values above 100%. And if you apply an exposure boost (such as the default +0.7 EV) you also get values above 100%.

But if you start with a PNG (even 8bit), you get values between 0% and 100%. Boost that with exposure, and you get values above 100%, no problem.

But… I think a normal photograph often contains values that are between 0% and (give or take) 150% to 200%… and then you have some bright highlights that reach up to 800%. So most of the data sits at the < 200% end, but you can have peaks.

If you start with a nice, perfect gradient ramp and scale it up to 1000%, you still have data that doesn’t really ‘emulate’ the lightness values you encounter in most photographs.
It can help in showing what happens with the original values, since it’s easier to see.

But all the gamut mapping and hue matching will also respond differently to a gray gradient vs. real colour data. So if it’s a test to draw conclusions from, I’m not so sure.

The whole idea of the Darktable pipeline is that the pipeline can’t clip. So if you load a PNG with values between 0% and 100% and apply 1EV exposure (to get values between 0% and 200%), or if you load an EXR file that already has values between 0% and 200%… no difference.

Saw the picture, and had to give it a go. Just because I think it’s a very nice picture :slight_smile: .

DSCF8284.RAF.xmp (9.5 KB)

First off, enable lens correction, because the vignetting is strong in this one :slight_smile: .

I white-balanced on the snow. Both legacy-color and modern-color seem to give similar results here.

Raised exposure enough to have the face brightness where I want it, enabled filmic and used the white picker to set its auto mode.

Then, directly after that: Time to get the details out, so I enabled ‘local contrast’, switched it to bilateral mode, set the contrast to 3 and cranked the details up to 400%.

Now, I can tweak the filmic white slider again to get a nice mix between details and brightness in the snow.

After that, I went back to local contrast to enable a parametric-mask on it, to only make it work on the highlights (snow). Now I get a nice, bright picture and subject, while still having details in the snow.

From there, it’s pretty much ‘standard workflow’. I enabled color balance rgb and tried the presets. Vibrant works well here, I think. Enable denoise if you think it needs it, enable diffuse or sharpen to sharpen, and add a touch of ‘local contrast’ in the stock default settings for a small, small amount of ‘punch’.

No highlight reconstruction, because I saw nothing being clipped. Maybe I missed a small part somewhere.


Yes, I think you are right.

The PNG represents a linear color space because the distances between the individual gray blocks are equal: 0, 28, 56, 85, … always increasing by 28.

It is the same as a RAW, except that 255 is the maximum.

So the solution is to use a conversion from linear to non-linear, i.e. Filmic RGB, as with a RAW.

In this case, the Exposure module (and probably all the others that work in linear color space) work in the same way as with a RAW.

Yes, I know the modules don’t make a difference, but I’m referring to what I see on the screen and in the histogram. And when I load a JPG and set +1EV, it looks different than when I do it with a RAW.

Great edit, well done.

Thanks for the explanations.

I tried to reproduce it with darktable, and I have succeeded so far. The point is to set the color space for the histogram to linear Rec2020 RGB.
In this case I get a doubling of the color values at +1 EV.

But what absolutely confuses me is that the gradations between the gray blocks are no longer linear: 0 3 10 23 42 …

Auswahl_139

With sRGB I get 0, 28, 56 … → linear

Auswahl_136

Adding 1 EV results in

This is not 100% linear, and the values were not multiplied by two.
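A possible explanation (my reading, using the standard sRGB transfer function from IEC 61966-2-1): the PNG’s 8-bit codes are sRGB-encoded, and the 0, 3, 10, 23, 42 … sequence is exactly what those codes become after decoding to linear. Likewise, +1 EV doubles the linear values, but after re-encoding to sRGB the codes grow by far less than a factor of two:

```python
# sRGB transfer functions (IEC 61966-2-1). Decoding the PNG's 8-bit codes to
# linear reproduces the 0, 3, 10, 23, 42 ... sequence seen in the linear
# Rec2020 histogram.
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

codes = [0, 28, 56, 85, 113]
linear = [srgb_to_linear(v / 255) for v in codes]
print([round(v * 255) for v in linear])  # [0, 3, 10, 23, 42]

# +1 EV doubles the *linear* values; re-encoded to sRGB, the codes increase
# by much less than a factor of two, matching the non-doubled histogram.
print([round(linear_to_srgb(min(v * 2, 1.0)) * 255) for v in linear])
```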

Are your files available somewhere? If not, would you mind sharing them?