[Solved] Why do CAT and Filmic render clipped areas so weirdly

Do you understand the difference between a blog post and an academic article? Obviously the author’s intention was not to provide a robust scholarly treatment of the topic; it was just to introduce the reader to some of the limitations of common approaches to image processing in an informal, colloquial way. If you want academic-quality articles, you will find some in the bibliography at the end of the blog post.

1 Like

Haha :sweat_smile:

Indeed that is an approach. It’s nice you provided detailed instructions - I like the colours, is it thanks to the LUT alone, or maybe you used Auto Match Tone curve or something?

If the author claims that the demosaicing algorithm must prevent moiré or other aliasing artefacts, it shows that his grip on where these things happen, and thus can be prevented, is rather poor… Maybe he should refrain from making such bold statements until he has read up on the subject of aliasing…

No, I have not used Auto Match Tone curve.
I have used a parametric curve:
[screenshot: parametric curve settings]

1 Like

Quite sadly I see there is a bit of harsh disagreement, but I will try to pick out some lines which may turn into precious tips once unpacked / explained.

At the moment, what made me really curious is that @charlyw64 said:

for over a decade I’ve been converting RAWs in Lightroom, DXO, RawTherapee, darktable, Capture One […] and then, at the prepress stage, putting them into Photoshop, choosing “Convert to profile” with perceptual or relative colorimetric mode to ISO Coated v2 ECI or newer PSO Coated v3 and well… it seems the Colour Management System has been doing its magic nicely - I have never had to worry about changing dynamic range of the images. :flushed:

Yes, that‘s what I do, except that I use a large colorspace throughout, which requires a bit more work in an adjustment layer when soft proofing so that I don‘t get large color-clipped areas because of the color space conversion. One of my interests is photographing racing cars, and they are notorious for sporting large areas of bold colors, which require changes to allow them to be printed as anything other than large, detail-free color blotches.

So again, what‘s the point of having a technically convoluted pipeline to preserve color at all stages of the edit because they are under extreme threat in a display referenced pipeline if at the end you need to fire up a tool that works display referenced to make them look and print right?

Something I don’t quite understand with the linear scene-referred workflow is that combining linear operations is just a linear operation (e.g. y_1 = 2x + 1, y_2 = 3y_1 - 2 result in y_2 = 6x + 1). So in a truly linear workflow you can have only one module (essentially some kind of white balance). If you have more than one, it means they are either redundant or non-linear.
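The composition point above can be sketched in a few lines of Python (a toy illustration using the functions from the example, not darktable code):

```python
# Toy illustration (not darktable code): composing two linear (affine)
# operations always collapses into a single affine operation, so a chain
# of truly linear modules is equivalent to one module.

def compose(f, g):
    """Return the composition x -> g(f(x))."""
    return lambda x: g(f(x))

f1 = lambda x: 2 * x + 1    # y_1 = 2x + 1
f2 = lambda y: 3 * y - 2    # y_2 = 3*y_1 - 2

pipeline = compose(f1, f2)  # y_2 = 6x + 1, a single affine map

assert pipeline(0) == 1     # 6*0 + 1
assert pipeline(5) == 31    # 6*5 + 1
```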

Now it might well be the order of modules in the scene-referred workflow is better for some tasks, and that the new modules are better than the previous ones, but I don’t really see the conceptual leap everybody is talking about.

Then you simply never hit the limits of a display-referred workflow: no halos, no weird artifacts, no color problems… you simply got lucky :wink:

I must admit I didn’t quite catch that - maybe as non native English speaker I need more plain language. What do you mean by being under “extreme threat” here and what is that tool fired in the end? Photoshop to convert to CMYK, right?

Well… Why should I go back to a display-referred workflow where I have to fight a lot harder to control contrast and (highlight) detail in more difficult images? I use fewer modules and less time to get a satisfying edit with filmic and associated tools than with the older display-referred workflow. And that’s for images where nothing is clipped in the raw data, so basically all the necessary information is present.

Agree with what you say, but the heat of the debate on this thread is caused by the same issues as tech companies face when introducing new versions. I am only suggesting that the way to reduce the heat and anger is to focus on the positives of the new release.

Keep up the good (and FOSS)n work !!!

New display technology is emerging that will really take advantage of this…

1 Like

The silly thing about all this is you still have a choice in DT, so why such a fuss? Use the path that suits you and move on… use tone curves, use whatever… It would be different if all the traditional tools were gone… and if you want or like the newer workflow, then experiment or not… the choice is there… there is way too much time and vitriol wasted on this… If it is not apparent from the start that DT is not all wrapped up in a bow like LR, then it should be, so learn to be a ground-up editor or move to something that gives that solution… This is not a company that has just changed software and is forcing everyone to learn how to use it… take more photos and bicker less about shadows and highlights… :slight_smile:

Skipping lots of this thread, isn’t the answer that if you are having issues with artifacts in the highlights you should turn off highlight reconstruction? Because then most issues are fixed. The real issue is when you really, really want to recover the very last bit of highlight information possible (instead of clipping it out); then you need to try a lot to get rid of the artifacts. But at that point you are trying to recover more than most other software allows.

Oh, and put charlyw64 on your ignore list. Seems to improve the forum a lot.

3 Likes

The problem was a bit more complex, but you’re mostly right - turning off Highlight reconstruction was an important part of the solution :slight_smile:

Moreover, the edits provided by others, the ideas and explanations given, as well as Aurélien’s detailed teaching have given me precious understanding, thanks to which I was able to set up my new quick and high-quality workflow.

5 Likes

I know I am quoting answers not related that much, but I think you will get the point. I do not think the reason for @charlyw64’s continuing “ramblings” (maybe that’s too harsh but I do not find a better word – I am not a native speaker) is that he dislikes darktable, but he likes it very much and sees more and more of his old habits fade away as more and more modules of the display-referred section of the pipe are deprecated and/or get less attention/fixes/additions than in the past.

This or he’s just another troll.

However, assuming the first, there is a way to get results: prove that the drawbacks of the scene-referred paradigm are serious enough to require another paradigm (and ideally start coding a solution). That’s what @anon41087856 did for the switch away from the display-referred paradigm: he was able to convince darktable folks to continue with the new approach despite lots of questions and initial objections (and despite his way of talking/writing being perceived as rude by many people) by showing the technical limitations and implementing a better alternative.

And that’s what Elle did regarding Gimp and colour management. And that’s what @patdavid did regarding the proliferation of FLOSS image processing documentation, support, tutorials and forums (probably ongoing process).

IMHO my examples show (at least to me) that that’s the preferred method for fundamental changes. YMMV. However, at least some proof of where scene-referred fails and display-referred does not, or where the latter has serious benefits, would be much better than continuing “ramblings”.

In case the troll assumption is correct, sorry for feeding :wink:.

7 Likes

No, no , no and no.

The only difference between scene and display workflows is where the view transform (aka tone curve, aka tone mapping) sits in the pipeline: end vs. beginning. All the rest works essentially the same; it’s the state of the color data that changes a bit: 100% can’t be assumed to be white, 50% can’t be assumed to be grey, so it’s only the bounds that change. What’s hard is that algorithms as well as GUIs have taken shortcuts that don’t work anymore in scene-referred.
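That placement difference can be shown with a minimal numeric sketch (a toy clamp standing in for a real view transform, not darktable code):

```python
# Toy sketch (not darktable code): the same exposure edit behaves differently
# depending on whether the view transform (here just a clamp to the 0..1
# display range) runs after the edit (scene-referred) or before it
# (display-referred, where highlights are already clipped).

def view_transform(x):
    """Toy display transform: clamp scene values to the 0..1 display range."""
    return max(0.0, min(1.0, x))

scene_value = 1.8   # scene-referred data may exceed 1.0 (a bright highlight)
gain = 0.5          # an exposure correction of -1 EV

# Edit first, view transform last: the highlight detail is recovered
scene_referred = view_transform(scene_value * gain)    # 0.9

# View transform first, edit after: the highlight was already clipped to 1.0
display_referred = view_transform(scene_value) * gain  # 0.5
```

The edit itself is identical in both cases; only its position relative to the view transform changes the result.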

Also, the non-chroma-preserving way of manipulating RGB channels independently in the context of lightness changes has created aesthetic expectations that are nothing more than habit (desaturating highlights, saturating shadows). What people forget is that the chroma/hue changes have been deferred to specialized tools that allow a better handling of color in perceptual frameworks (color balance), so the tone mapping is asked to “please honour the user’s color grading intent” at all cost.
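A toy numeric sketch of that difference (hypothetical, not darktable’s actual math): applying a tone curve per channel changes the ratios between channels, i.e. shifts hue and saturation, while a ratio-preserving approach scales all channels by one factor derived from a luminance estimate.

```python
# Toy illustration of chroma-preserving vs per-channel tone adjustment.
# The curve and the luminance estimate are deliberately crude placeholders.

def curve(x):
    """A toy brightening tone curve (gamma-like)."""
    return x ** 0.5

def per_channel(rgb):
    """Apply the curve to each channel independently (ratios change)."""
    return tuple(curve(c) for c in rgb)

def chroma_preserving(rgb):
    """Scale all channels by one factor derived from a luminance estimate."""
    luma = sum(rgb) / 3.0          # crude luminance estimate
    gain = curve(luma) / luma      # the curve acts on luminance only
    return tuple(c * gain for c in rgb)

saturated_red = (0.64, 0.16, 0.04)

pc = per_channel(saturated_red)        # channel ratios shrink -> desaturated
cp = chroma_preserving(saturated_red)  # channel ratios preserved -> chroma kept
```

With the original red/green ratio of 4:1, the per-channel version ends up at 2:1 (visibly desaturated), while the chroma-preserving version keeps 4:1 exactly.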

But the scene-referred workflow is not enough of a revolution to be held responsible for all this. The same concepts apply, the same operations are performed; it’s just that you now need to care about stuff you never heard of before (even though it was already there).

7 Likes

That’s because camera sensors only started to get much higher dynamic range around 2013 or so. Before that, dynamic ranges were mostly equivalent along the whole pipeline. Now, inputs are much larger than what outputs can display. When you make an inkjet print, you get a DR between 5 and 6 EV. So when you come to that from a 12 EV raw picture shot at ISO 64, it’s more than halved.
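The arithmetic behind that claim, using the figures quoted above (illustrative numbers, not measurements of any particular camera or printer):

```python
import math

# Dynamic range in EV (stops) is log2 of the ratio between the brightest
# and darkest usable values; the figures below are the ones quoted above.

raw_dr = 12.0    # EV for a modern sensor at base ISO
print_dr = 5.5   # EV for an inkjet print, midpoint of "between 5 and 6"

# A 5.5 EV print corresponds to roughly a 45:1 contrast ratio
print_contrast = 2 ** print_dr   # about 45.25

compression = raw_dr - print_dr  # 6.5 stops have to be mapped away
assert compression > raw_dr / 2  # i.e. the range is "more than halved"
```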

2 Likes

Aurélien, I have no doubt regarding the benefits of a scene-referred paradigm; I am a happy user of it. The point I made was that ongoing complaints won’t change anything, and proving the claims is the only way to convince anybody. So your answer is heading in the wrong direction.

Therefore I cannot accept your multiple no’s: if the guy complaining can show/prove that there is a serious issue with a late view transform, fair enough, we have to deal with it. I personally doubt that there are serious issues that justify a step back to the old paradigm. But that’s my personal opinion, which seems to coincide with yours. Scientific experience, however, tells us that there is always a chance that a better/more accurate theory/model supersedes the current state of the art.

6 Likes

Everybody’s darling is everybody’s fool.
If someone wants to use the ‘old’ display-referred workflow with all of its pitfalls, darktable has had a whole bunch of tools for that for a long time. Furthermore, there is no lack of alternatives. So there is no need to argue about the new scene-referred stuff.
Most of those complaining about scene-referred tools never proved anything - they write long letters of complaint instead. Their main argument is the ‘intuitivity’ of a mass market, never realizing that the quantity of users is not the driver for darktable developers, but the quality they can achieve (at the price of mass-intuitivity).
If darktable doesn’t fit a user’s needs, there are two options:

  1. use a tool that fits your needs
  2. fork it and implement the missing stuff yourself, or pay someone to do it for you

If there are design flaws: just prove it - not with long essays, but with examples, issue reports, …

You can’t expect developers who spent much time finding the best way to cope with the pitfalls of common tools to change their priorities just because some people don’t understand this way and don’t want to change their habits (and also don’t want to convince the developers by pouring sackfuls of money over them).
darktable is not lightroom - neither the business model nor the functionality

6 Likes