[Solved] Why do CAT and Filmic render clipped areas so weirdly

In this picture, if you don’t use chroma preservation, you turn the women into anemic ghosts and the brick highlights to yellow-white.

With max RGB:


_MG_5096.CR2.xmp (32.9 KB)

The way the walls shift to yellow is pretty clear. Reference colors at -1 EV (no filmic):

Also, color balance changes saturation or chroma at constant hue, which is invaluable in terms of control and much better behaved… That’s what you get for using an extra step.

@Leniwiec notice that the weird areas you see here have nothing to do with any of the modules you blame. It’s just that highlights clip at different rates on the 3 RGB channels:

  • red clips first (producing yellow halos, where you have too much green for the amount of red),
  • then green (producing magenta, where you have too much blue for the amount of green and red),
  • then finally blue (producing white).

Since filmic is set to preserve the chrominance (that is hue + chroma), it also preserves the chromatic artifacts while the non-preserving approach hides them as a side-effect (which also turns red bricks to yellow).
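
To make that mechanism concrete, here is a toy numpy sketch (not darktable’s actual code; the Reinhard curve and the sample pixel are stand-ins I made up) of why a norm-based, chrominance-preserving tone mapping keeps the clipped channel ratios, while a per-channel curve bleaches them toward white:

```python
import numpy as np

# Toy sketch (not darktable's code): why chrominance preservation keeps the
# clipping artifacts visible while a per-channel curve hides them.
def curve(x):
    # any saturating tone curve will do; a simple Reinhard stands in for filmic
    return x / (1.0 + x)

# red and green clipped on the sensor, then pushed up by the exposure module
clipped_pixel = np.array([2.0, 2.0, 1.2])

# per-channel (no chrominance preservation): each channel saturates toward 1,
# so the R:G:B ratios converge and the artifact is bleached toward white
per_channel = curve(clipped_pixel)

# norm-based (chrominance preserved): one scaling factor for all channels,
# so the original ratios -- artifact included -- survive the tone mapping
norm = clipped_pixel.max()                     # "max RGB" norm
preserved = clipped_pixel * curve(norm) / norm

print("per channel:", (per_channel / per_channel.max()).round(3))
print("preserved  :", (preserved / preserved.max()).round(3))
```

The per-channel path pulls the ratios together near white, which is exactly the desaturation that hides the clipping artifacts (and also shifts the red bricks toward yellow-white).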

So here, your best shot is to disable the highlight reconstruction module and use filmic reconstruction in blooming + color inpainting modes.

Also, it’s worth noting that trying to pull clipped highlights too low is a typical beginner mistake… If it’s clipped in, let it clip out. It’s weirder to have grey clipped highlights than to have them true white. Either way, they will show up as flat areas.

9 Likes

I did. In fact I’ve tried all combinations of clickable options across Highlight reconstruction and Filmic. Judging by this and some other photos, for the time being it seems to me that even CAT alone produces cyan or similar problems.

To narrow the problem down, I found it works worst when CAT has to correct the white balance (not to be confused with the White Balance module) to something far from the “camera reference D65” setting in the White Balance module, which is of course required if we want to use CAT.

It may be a bad comparison, but to a non-scientific man like me, working with CAT looks like setting the wrong white balance of 6500 K indoors when the bulbs are 2700 K, exporting a TIFF or JPEG, and then forcing the colours to be white balanced afterwards.

Perhaps that is why I like the output of the base curve :smiley: It just blows out those problematic areas and it’s fine.

I will need to process the rest of your answer in my head!
Thanks!

EDIT:
@anon41087856 I can’t load the sidecar from your JPG, do I need darktable 3.7 or something? I’m using 3.6 on Windows.

Yes, that’s the point of the scene-referred workflow: you are now able to use the full dynamic range of your camera without creating artifacts, provided you know how to shoot to match that dynamic range to your scene.

1 Like

So now I have a conundrum. Display-referred was easy to understand as it corresponded to what I saw on my camera at the time of shooting. I knew what to expect.

I think I’m trying to mimic my old workflow, something like “work the same, get the same results you like, but use those new, better tools”. @s7habo What do you think, is it possible to recreate the look of old edits in scene-referred?

Your camera lies to you anyway, what you see is a JPEG render on the back of the camera.

Give it some time, I have done hours of video to explain what’s what.

1 Like

Yes, I follow that.

Or at least I try - your deep understanding of the matter is mind-blowing, but I must admit I prefer those shorter videos like: https://youtu.be/5CmsxxxsMDs

Time and changing of habits surely will do their work, but any guidance is appreciated :slight_smile:

Can you please stop spreading wrong info like that?

Contrast can be performed in unbounded spaces, that’s why we use fulcrums. The scene-referred workflow does not deal with invisible colors, as these are compressed and clipped in the color calibration module after white balancing is performed. Colors are yet again gamut-mapped at constant hue in 3 places in the color balance RGB module.
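
For readers unfamiliar with the term, a “fulcrum” here is just a pivot point for the contrast curve. A generic sketch of the idea (not darktable’s exact implementation; the contrast amount and the 0.1845 middle-grey fulcrum are arbitrary example values):

```python
import numpy as np

# Generic sketch of fulcrum-based contrast in an unbounded scene-linear space
# (the idea referred to above, not darktable's exact implementation).
def contrast_at_fulcrum(rgb, contrast=1.2, fulcrum=0.1845):
    # a power function pivoted on middle grey: values above the fulcrum are
    # pushed up, values below are pushed down, and no upper bound is needed
    return fulcrum * (rgb / fulcrum) ** contrast

scene = np.array([0.02, 0.1845, 1.0, 4.0])   # shadow, grey, highlight, very bright
print(contrast_at_fulcrum(scene))            # grey stays put, the rest spreads out
```

Because the power function is pivoted on grey rather than anchored to a 0..1 range, it works on unbounded scene-referred data without needing an upper limit.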

That’s absolute bullshit, please refrain. Edge artifacts at sharpening time have to do with gradients, and little to do with color science.

No.

1 Like

:thinking: So, speaking [really] basically, Filmic RGB has its orders from the general: “Protect the hue + chroma at all costs!”, and since I’ve introduced artifacts by overexposing from the very beginning, Filmic works as it is supposed to work?

And going further, working scene-referred, the healthy practice would be protecting the highlights at the moment of shooting, then correcting the exposure in darktable - even initially blowing them out - because the Exposure module uses floating-point numbers, doesn’t clip any highlights, and they can be safely brought back in Filmic RGB?
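
That’s essentially it. A minimal sketch of the idea, assuming a 32-bit float pipeline (which is what darktable’s scene-referred modules use); the numbers are made up:

```python
import numpy as np

# Sketch of the point above: in a float scene-referred pipeline, pushing
# exposure only multiplies the data, nothing above 1.0 is thrown away.
raw_linear = np.array([0.01, 0.4, 0.9, 0.98])   # sensor data scaled to 0..1
exposed = raw_linear * 2 ** 1.5                  # +1.5 EV in the exposure module

print(exposed)   # values above 1.0 are simply kept as floats
# filmic (or any tone mapper) can later map that extended range back below
# display white, so these "blown" values remain recoverable -- unless they
# were already clipped on the sensor at capture time.
```

The caveat is the one given above: values that were already clipped on the sensor at capture time are gone, and no amount of gentle handling later brings them back.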

3 Likes

No, light could be added at the time of capture. Now, that’s not always an option, but multi-shot HDR is not the only way.

So, where’s your rendition? Show us how using what is effectively a working space helps this situation. Frankly, I’m not seeing it, as doing a gamut compression early on just drags those fine magentas into rendition space, making them impossible to get rid of with highlight recovery tools.

I wanted to do a regular playraw, but decided instead to demonstrate my assertion. First, a screenshot of a regular old filmic curve applied to the linear data. Of note, I don’t do the color transform until display/output, at the end:

No fancy tools here, just a use of the filmic shoulder to drive the blown magentas to (almost) white.

Now, a screenshot of the same toolchain, except this time I’ve inserted a colorspace transform to ClayRGB (this is Elle’s free name for AdobeRGB), with a so-called “linear” gamma of 1.0, which is effectively no gamma. Still the same filmic curve after it:

All those beautiful magentas of the sensor-saturated areas are nicely preserved. Of note is the green channel spike in the histogram; it’s effectively centered in the scene, making those magentas a “legitimate” color.

Really, this scene appears to need the scene-referred goodness of dt. Mapping the camera colors to a bounded gamut doesn’t seem to help this…

4 Likes

Sorry, that is bovine manure - you are exchanging a workflow which is easy to understand (because its settings directly correspond to visual changes) and which is consistent over a range of photos, with the drawback of one kind of artefact (lacking a certain color fidelity), for an unpredictable toolchain (that cannot be used to build a workflow because the tools partly contradict each other in terms of visual results) that creates a different type of artefact (how do you deal with out-of-gamut colors on the scene-referenced side, especially?).

Take my pet peeve, the sharpening process. Sharpening in scene-referenced mode does not account for the gamma curve applied when transforming to a display, and it doesn‘t account for contrast compression or stretching - all things done by that wretched filmic module. Part of the sharpening may create color values that are indistinguishable after transforming to a well-defined color space. Thus you get inconsistent effects from the sharpening, and other artifacts, because of the scene-referenced mode; and since the filmic settings impact contrast massively, they also impact sharpness in the same way.
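
For what it’s worth, the order-dependence being described can be shown with a toy example (generic numpy, not tied to any particular program; the 3-tap blur and the 2.2 gamma stand in for a real sharpener and display transform). It only illustrates that the same sharpening produces different overshoots depending on which side of the display transform it sits, not which placement is better:

```python
import numpy as np

# Toy illustration of the interaction described above: the same unsharp mask
# produces different overshoots depending on whether the display transfer
# curve is applied before or after it. Generic code, not any specific program.
def unsharp(x, amount=1.0):
    blur = np.convolve(x, np.ones(3) / 3, mode="same")
    return x + amount * (x - blur)

def to_display(x):
    return np.clip(x, 0.0, None) ** (1 / 2.2)    # stand-in for a display transform

edge = np.array([0.1, 0.1, 0.1, 0.8, 0.8, 0.8])  # a linear-light edge

scene_sharpened = to_display(unsharp(edge))      # sharpen in scene-linear, then transform
display_sharpened = unsharp(to_display(edge))    # transform first, then sharpen

print(scene_sharpened.round(3))
print(display_sharpened.round(3))                # overshoot sizes at the edge differ
```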

I have found the scene-referenced mode unworkable for my needs because of this interaction, and I know quite a few people who have the same problem: if you ever need to adjust anything in filmic, you can start your edit anew. That is worse than a display-referenced workflow, provided the developers had the foresight to use a large editing color space like ProPhoto RGB; a display-referenced editing mode is only a problem if the developers were so stupid as to use a small editing color space like sRGB or, worse still, a monitor profile color space. But a large working color space doesn‘t pose any problems that a scene-referenced mode would be able to solve.

Now I understand where I stand. I like the camera lying to me.

1 Like

Actually, the camera doesn’t lie, it just measures light in well-defined ways. Your brain is the liar, making up notions of color and tone for all the light mixtures… :smiley:

5 Likes

@charlyw64 @eyedear

Well, to some point I’ve been (and am) there, and I can understand your concern about inconsistency and unpredictable tools. For speed-editing large volumes of images I still keep my old habits: applying the same base curve en masse, then going through the images just adjusting white balance, cropping and rotating is a breeze, and I like darktable for that.

However, my philosophy is: Those new tools are designed by someone much wiser than me, and if I don’t understand what they actually do (for the time being), that’s my own problem.

Despite that, the new workflow intrigues me a lot and I think it is powerful; I just want to master it.

And since I can find many helpful people here, I’m glad and eager to learn until one day I can say: “Wow, now running through the images with Filmic is a breeze” :slight_smile:

4 Likes

OK. Since you’ve cleared that up for yourself, the solution to the problem you’re facing is very simple.

Just use those other tools that match your approach and logic, and convince the people who have the same difficulty to do the same. It would be terrible and very limiting if darktable were the only alternative and everyone were forced to use it.

The beauty of having other tools is that everyone can be happy. :partying_face:

9 Likes

Careful reading of the discussion and voilà!
Nasty colours are gone :grinning_face_with_smiling_eyes:

All that had to be done was to disable Highlight reconstruction and set the threshold in Filmic to 0, and now I can have both CAT and Chrominance preservation enabled.

10 Likes

This is what I get in RT for comparison. I would crop to 4:3 on that one though.


_MG_5096.CR2.pp3 (38.8 KB)

1 Like

Slightly off topic but I’ve never handled Canon 6D files before and they behave very, very differently to what I’m used to. I’ve not previously fully appreciated how differently you have to work depending on the camera you have.

2 Likes

Yes, that’s true. I also have a FujiFilm X-T1 and its sensor is much different. It underexposes quite heavily, but can recover a lot of shadows, which makes it… perfect for scene-referred haha.

With the Canon, on the other hand, I wouldn’t dare take a shot like that street @s7habo presented to us - it delivers nice images provided you fill the sensor with light decently, which is why I used to overexpose so much.

Also, images from the Fuji are crisper, but that’s perhaps thanks to not having an AA filter.

Any contrast or brightness change has direct implications for color science, because nothing on the scene side prevents the creation of false colors, or colors which the output color space can’t reproduce.

You run into the same problems as other programs that use a large working color space do, except that the transition module in the case of darktable is set up in a way that it itself messes with the contrast - there is no escaping the fact that contrast changes in scene-referenced mode inevitably interact with the filmic module. That module may be your pet project, but its unpredictable interactions with other edits make it hard to get the desired result. It would be much better if every effect of the filmic module were a separate module. If I have already adjusted the contrast by other means, I don’t want the filmic module to touch contrast, but I may need it to do the transformation to the destination colorspace.

Other programs that use large working spaces can use either a perceptual or a relative colorimetric conversion to the display or output color space, they can show which colors fall outside the destination color space, and you can decide how to deal with this.
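
A rough sketch of that kind of gamut check, assuming a linear Rec.2020 working space and an sRGB destination (the conversion matrices are the standard published ones; the sample pixels are made up):

```python
import numpy as np

# Rough sketch of such a gamut check: flag pixels whose wide-gamut working-space
# values fall outside the destination (sRGB) gamut, so the user can decide how
# to handle them instead of having them clipped silently.
REC2020_TO_XYZ = np.array([[0.636958, 0.144617, 0.168881],
                           [0.262700, 0.677998, 0.059302],
                           [0.000000, 0.028073, 1.060985]])
XYZ_TO_SRGB = np.array([[ 3.240970, -1.537383, -0.498611],
                        [-0.969244,  1.875968,  0.041555],
                        [ 0.055630, -0.203977,  1.056972]])
REC2020_TO_SRGB = XYZ_TO_SRGB @ REC2020_TO_XYZ

def out_of_srgb_gamut(rec2020_linear):
    """True for pixels sRGB cannot represent (negative or > 1 after conversion)."""
    srgb = rec2020_linear @ REC2020_TO_SRGB.T
    return np.any((srgb < 0.0) | (srgb > 1.0), axis=-1)

pixels = np.array([[0.2, 0.5, 0.3],    # an ordinary colour
                   [0.0, 0.8, 0.1]])   # a very saturated green, outside sRGB
print(out_of_srgb_gamut(pixels))       # [False  True]
```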

OTOH, why would you clip colors in the transition from the camera capture color space to scene-referenced data? There are quite a number of filters that could transform those false colors to fall within the realm of normal vision.