CAM, UCS, perceptual, and other color spaces black magic

In image processing, the stopped clock may actually be a great feature. If it turns out that everyone, 90 percent of the time, looks at their clock at ten past ten, there is no need to wind it, change batteries, or adjust for daylight saving.

The above is in jest, but there’s truth to it. If a tool falls apart at settings few use, but is more intuitive (hehe) for the majority of cases, it’s a better tool for that majority. They may get 200 rather than 100 photos processed in the same time.

I’m just pointing this out to temper some of the engineering thinking. Your proposal may well be more usable and more robust. But if the axes of the tool don’t align with what people need, the accuracy is moot. As I’ve said before, understanding what really matters, edit: and for what use case, is the main problem.

3 Likes

I am not in the habit of responding to assertions, because most of the time this leads to exchanges that are of little interest to readers. Nevertheless, in this case I will make an exception, but I will not start any debate, whatever the answers and arguments may be. So this will be my only statement.

First of all, please excuse my bad English, and perhaps some misunderstandings related to the translator (DeepL).

Likewise, I am not in the habit of showing off my knowledge; it is of no interest to anyone and reflects a form of ego that I loathe.

I’ll just give a (very brief) summary of my background and a few career highlights, as well as my age and state of health.

  • I am going to be 75 years old and my health is not good to say the least…

  • Around 1967–1971, I did Math-Sup and Math-Spé, then attended an engineering school where I took first prize in mechanics (statics, vibration, fluids)… One can say that this is outdated, but it does not change the ability to reason. A long time ago I took two IQ (Intelligence Quotient) tests, with results of 145 and 150.

The highlights of my most recent jobs are:

  • the design and management of the TGV (high-speed train) maintenance site at Paris-Nord (Le Landy: about 1000 people at the time)
  • prospective work on maintenance and human factors, notably in collaboration with leading French sociologists
  • then, as an independent consultant, helping clients with change management, business organization, and human factors issues

Then, with retirement approaching (and now, I hope, not coming to an end…), around 2005 I set myself another challenge (though I have no training in these fields, apart from learning Fortran IV in 1970): to learn a little of the C language (I’m frankly bad at it) and to learn colorimetry.

Some general remarks. A theory is not obsolete just because it is old. For example, Einstein formulated the theory of general relativity in 1915, and no flaws have been found in it yet. Likewise, Heisenberg’s uncertainty principle (1927) in quantum physics has not been called into question either.

From my point of view, the essential quality of an individual is modesty and humility in the face of the extent of one’s ignorance and the need for constant learning. And before announcing something, it is important to check that you are not making false cognitive inferences.

Nevertheless, I admire what has been done, for example, in darktable in terms of creativity and innovation, even if the results don’t always convince me (I must admit I haven’t spent much time on it).

I’m not saying that what has been developed in RT is perfect; there are big gaps, largely due to my incompetence in dealing with certain computer problems (thanks to the RT team for correcting my mistakes and shortcomings). But in the majority of cases it works correctly, or at least I think so, as do those who have helped me. And of course there is still a lot of work to do, especially if we want to move towards HDR.

Jacques

10 Likes

@nosle Well, so far @anon41087856 has added very useful tools to darktable with clear explanations as to why he thinks they are better. You can still keep on using the older tools, they are all still there.

Of course, the use of new tools has to be learned. And it’s normal that the first results aren’t as good, or as fast, as the results you get with tools you are used to…

Personally, I now get my results faster with the new tools, especially with more difficult images.

3 Likes

Yup. Also, “never falling apart” is an interesting design goal, but a person should not rant about tools that might possibly fall apart if you tweak the wrong slider when their own tools fall apart at their default settings, after years of flailing against what turns out to be a fundamentally broken concept (that of using norms for color preservation): New Sigmoid Scene to Display mapping - #545 by jandren. Norms not only CAN fail, they routinely fail, and in fact the “recommended settings” fail by default. That link shows that the power norm can produce a “saturation inversion”: for a constant-luminance input, a highly saturated blue comes out completely white, while a less saturated blue does not. The max() norm makes output luminance vary with hue, which is also highly undesirable.
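For the record, both failure modes are easy to reproduce numerically. The sketch below uses common textbook definitions of the max and power norms (illustrative only, not necessarily the exact variants in darktable’s code) on Rec.709 linear RGB values:

```python
# Illustrative sketch only: these norm definitions are common variants,
# not a claim about filmic's actual internals. Values are Rec.709 linear RGB.

def luminance(rgb):
    """Rec.709 relative luminance (weights sum to 1.0)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def max_norm(rgb):
    """max(R, G, B) norm."""
    return max(rgb)

def power_norm(rgb, p=4.0):
    """A 'power norm' variant: sum(c^(p+1)) / sum(c^p)."""
    den = sum(c ** p for c in rgb)
    return sum(c ** (p + 1) for c in rgb) / den if den else 0.0

# Three inputs with (near-)identical Rec.709 luminance:
sat_blue   = (0.0, 0.0, 1.0)           # fully saturated blue, Y = 0.0722
desat_blue = (0.03, 0.03, 0.6145)      # much less saturated,  Y ≈ 0.0722
gray       = (0.0722, 0.0722, 0.0722)  # neutral gray,         Y = 0.0722

# max() norm: equal-luminance colors get wildly different "brightness",
# so output luminance after tone mapping varies with hue and saturation.
print(max_norm(sat_blue), max_norm(gray))       # 1.0 vs 0.0722

# power norm: the saturated blue's norm (1.0) already sits at display
# white, while the desaturated blue's (~0.61) does not -- a compressive
# tone curve then bleaches the more saturated color first.
print(power_norm(sat_blue), power_norm(desat_blue))
```

At equal luminance, the saturated blue’s norm is roughly 14× the gray’s under max(), and well above the desaturated blue’s under the power norm, which is exactly the inversion described above.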

I’ve refrained from commenting on filmic because I think that bashing someone else’s work is bad form, but this entire thread starts on the premise of bashing someone else’s work. People in glass houses should not throw stones.

3 Likes

My faint thought, completely unfounded, is that this goal might be unduly driving the interface and results.

Similarly, my question about use cases was designed to draw out whether the artistic colour modifications carry implicit assumptions about what people are trying to do and what matters for getting it done.

I have a NCS meter and a huge NCS chart sitting 20cm from my mouse. I use it in my day job. It’s a good colour system for picking and identifying colours and the standard in my job. Interestingly I don’t really feel I would benefit from its concepts in my post processing. This is because I don’t work my colours artistically in that way. But many may well do! Or not. I have no idea.

So clarifying the workflow and use case might help. People will understand whether it’s relevant to what they are doing. Is there an assumption that a photo is graded with looks as extreme as in movies? That it is viewed as a near-blank slate to be formed in post-processing? Other assumptions about which aspects of light and colour you want to change?

1 Like

This is one of the most common disconnects: completely misunderstanding the user’s use cases, and forcing one’s own assumptions about how the world should work upon them. This is why, when dealing with bug reports, I routinely ask questions such as “did you mean X?” and “if Y did not work for you, why?” rather than immediately declaring someone’s use case invalid.

See How to shoot and edit real estate photos - #19 by anon41087856, including his assertion that the OP is unlikely to achieve success as a real estate photographer, and How to shoot and edit real estate photos - #29 by anon41087856. There’s also How to shoot and edit real estate photos - #16 by anon41087856, which clearly shows that he judged something without testing it, or even reading the manual, based solely on a banner image, without realizing, for example, that HDRMerge does not tonemap. If you don’t like the tonemapping, blame whatever tool DID do the tonemapping, because that tool was NOT HDRMerge. Just like with this thread, I am 99% certain Aurélien saw some words, started frothing at the mouth, rushed to judgement, and began writing out his condescending rant without even reading the manual or trying the tool, or even bothering to understand its design and/or intended use cases.

This shows the flaws of inflexibility, and also a complete and total misunderstanding of the real estate photography industry. Real estate clients do not give a shit about whether your postprocessing obeys the laws of physics. In fact, unless you are bringing strobes to your real estate shoot to greatly increase light levels, real estate clients expect you to disrespect the balance between indoor and outdoor light. Dark interiors do not sell houses. Blown-out exteriors also look like crap and do not sell houses. Clients want to be able to see what the view out the window looks like, and they don’t want to look at two separate images to do it. One may hate that this is standard industry practice, but that does not change the fact that it IS standard industry practice, and good luck changing said practice. Especially good luck changing it when you consistently declare everyone else’s work to be “shit” and “pixel garbage” if it doesn’t fit your narrow-minded view of how the world should work. Real estate is not the only industry in which a user is going to have to operate, and whose expectations are going to violate the inflexible sensibilities of an uncompromising purist.

1 Like

@Entropy512 But real estate photography is not the subject of this thread.

No one forces you to use the tools any particular program or person provides. If you don’t agree, or need something else, use something else.

4 Likes

Yet this thread, a frothing-at-the-mouth rant about someone else’s tool, exists.

1 Like

There is an option to mute threads (and even members, iirc)…

1 Like

Careful folks. Two wrongs don’t make a right.

Oh, the recent posts here just kaburbulate all sorts of “perspective” notions in my little brain. Too much to write, let me share just one recent experience…

I’ve done a bit of finish carpentry in my time, but recently took up working on railway car restoration. First day of my first session, the team lead hands me a board with a line on it to cut on the radial arm saw. I asked him, “what side of the line do you want me to favor?”, a fundamental consideration in building cabinets. He said, “Just Eat The Line.” and walked off to one of the other tasks…

I think you’ve gotta know what’s important, but you also need to realize when it’s important…

5 Likes

:smile:

Perspective is critical; much work is lost and many arguments are had because the perspective, even one’s own, isn’t understood or articulated. Some are more blind to this than others, I might add…

Does this mean you’re going to stop repeating the same vague thing over and over again regardless of context or the direction of the discussion?

2 Likes

I’m sorry but you’re being vague about what you are referring to. Does this context where I repeat vague things include topics outside dt and filmic/norms/curves?

If it’s about dt, I think recent discussions and work such as sigmoid and the many, many play raws are slowly revealing the issues in ways dev-minded people might understand. These issues were always visible from a user’s standpoint, and things have been getting better, but problems still exist. There must be issues with perspective for so much work to have been done that is still problematic in very fundamental ways.

So you’re going to keep flogging this horse.

What gives you that idea? Be specific.

This is too vague to be actionable.

Here is the thing: I do not give a flying shit about workflows that consist of nervously pushing all the buttons of an ill-designed color-changing tool, in which you may develop hacky shortcuts that will feel intuitive if you keep practicing them enough, or if they resemble some other bullshit coming from an unrelated app.

I will not consider such workflows for they are nonsensical and mostly masochistic.

The “working” or “non-working” property of a method or an algorithm is to be assessed against a well-defined goal. “Looking good” is not a well-defined goal. “Looking good” can be achieved with a canned look, called a LUT. Pushing saturation at constant brightness is a well-defined goal. Preserving hue through a tonemapping operator is a well-defined goal. Those goals imply controlling one parameter at a time without affecting the others.
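As a concrete illustration of the second goal, here is a minimal sketch of a ratio-preserving (hence hue-preserving) tone mapping. The Reinhard curve and the max() norm are stand-ins for whatever curve and norm a real pipeline uses, not a claim about filmic’s actual internals:

```python
# Minimal sketch, assuming a generic tone curve and norm (illustrative only).
# The idea: run ONE scalar (the norm) through the tone curve, then scale all
# three channels by the same factor, so R:G:B ratios -- and thus hue -- are
# preserved exactly.

def reinhard(x):
    """A simple global tone curve, standing in for any curve."""
    return x / (1.0 + x)

def tonemap_ratio_preserving(rgb, norm=max):
    n = norm(rgb)
    if n == 0.0:
        return rgb
    scale = reinhard(n) / n        # one scalar, applied to every channel
    return tuple(c * scale for c in rgb)

rgb = (2.0, 0.5, 0.25)             # scene-referred, above display white
out = tonemap_ratio_preserving(rgb)

# Channel ratios are untouched: 2.0/0.5 before, out[0]/out[1] after.
print(rgb[0] / rgb[1], out[0] / out[1])    # 4.0 4.0
```

Because only a scalar goes through the nonlinearity, contrast and hue stay decoupled: that is what “controlling one parameter without affecting the others” looks like in code.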

We use photo editing software because digital photography exists as digital data, and software is how we interact with data. Interaction is about being able to enforce a desired look in the least painful way. Fact is, a skilled retoucher will get good results out of any software, no matter how shitty it is. The difference between quality software and shit software is the time spent and the number of steps required to get to the result.

Most users approach this problem the other way around, since they have no a priori target look: they look for rewarding software that will give them “good looking” out of the box, or with very few steps, where the “good looking” metric is vendor-defined and mostly rigid. The concept of vendor-defined “good looking” is outright ridiculous in a FLOSS ecosystem, where no budget can be allocated to researching what good looking means on statistically significant samples.

To these users, explaining the limits of their tools is impossible, since they don’t use them as tools but as toys, and simply don’t shake them hard enough to see how brittle they are. Yet they are. They don’t want control, they want instant gratification. Until they see some really unwanted artifact, be it out-of-gamut colors, fringes, or huge hue shifts… Suddenly they become intolerant of all the shortcuts, cut corners, and trade-offs that led to their very ugly outcome, for which there seems to be no solution.

Whenever you find yourself resorting to “perceptual” color spaces, what you really want is to provide users a way to control the content of the image in a framework that provides psychological connections between sliders and the way we perceive color.

What I have shown here is that “perceptual” color spaces will not magically give you that, because they are riddled with flaws, design trade-offs and limitations that FLOSS developers simply overlook because they see “perceptual” written on the can.

I have shown here that this is not only conceptually wrong, but also problems start showing even in short-gamut spaces like sRGB, with examples to support it.

Now, don’t start trying to convince me that it’s still ok to push nails with a screwdriver, because if you don’t see the holes you poked in the drywall when missing the nail twice, I do. If this demonstration is not enough for you, then you are a believer, so please keep reproducing with your sister on your own highland and don’t seek respect from me.

For all of you who only seek “good looking”, fucking use pre-built LUTs and cut your expenses on software meant for control.

3 Likes

There is an important thing that you forget in your dishonesty here.

I started as a user circa 2010. I joined development in 2018. Between 2010 and 2018, I shot mostly B&W. Why? Not because I like monochrome, but because colors looked like shit most of the time, with high contrast resulting in impossible saturation/hue issues.

So, the only reason I’m a dev is that I was tired of being a user facing problems that had no sensible solution. Don’t try to oppose dev and user here, it won’t work. Fatigue from hitting the same walls made the user become a dev.

1 Like

Another proof that you and @Entropy512 don’t understand shit.

Filmic cannot render “desaturated” because, precisely, saturation is kept constant. Your expectation, though, is that a tonemapping operator should resaturate shadows, because you have been steeped in that look and never questioned the workflow-wise validity of having a contrast setting change saturation (a setting that you may have carefully adjusted earlier) at the end of your pipeline.

I GET IT !!! RGB curves resaturate and you like that. BUT THEY ALSO FUCKING SHIFT HUES !!! IN WHAT UNIVERSE DO YOU WANT TO RANDOMLY SHIFT HUES WHEN TONEMAPPING TO DISPLAY ??? DO YOU EVEN UNDERSTAND that the hue shift will not be the same whether you tonemap to 5 EV or to 12 EV of display dynamic range ? In what alternate universe is that desirable ??? THIS IS NOT A WORKFLOW, it’s a random surprise every time.
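For what it’s worth, the hue-shift claim is easy to check numerically. The sketch below applies a simple per-channel power curve (a stand-in for any per-channel RGB tone curve) and shows the channel ratios drifting, by an amount that depends on how aggressive the curve is:

```python
# Illustrative sketch: gamma stands in for any per-channel tone curve.
# Applying a nonlinear curve to R, G, B independently changes their ratios,
# and the size of the change depends on the curve -- i.e. on how much
# dynamic range you are compressing toward the display.

def per_channel_curve(rgb, gamma):
    return tuple(c ** gamma for c in rgb)

orange = (0.8, 0.4, 0.1)          # linear RGB, R:G ratio = 2.0

mild   = per_channel_curve(orange, 1 / 2.2)   # mild compression
strong = per_channel_curve(orange, 1 / 3.3)   # stronger compression

print(orange[0] / orange[1])      # 2.0   before the curve
print(mild[0] / mild[1])          # ~1.37 after: the hue has drifted
print(strong[0] / strong[1])      # ~1.23 after: by a different amount
```

Same input color, two different display targets, two different hue errors: that is the “random surprise” being described.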

How hard is it to hammer that in your head ???

Fucking just use LUTs.

1 Like

Time to relax and chill, it’s Sunday.

8 Likes