Color calibration - colorfulness

With the latest changes, green colorfulness affects other colours much less:

[screenshot]

And the wheel:

[screenshot]

Thanks, @anon41087856!

4 Likes

Edit: haha, should have read to the end. It seems that the results look a lot closer to expectations after the latest fixes to DT – so there was a bug after all.

I don’t pretend I understand the intricacies, and I find parts of the tool not at all intuitive to use. That said, one point you are missing is something I do understand: depending on the working colour space, on what colour profile was assigned to the image at import, and on what colour profile darktable uses to output pictures to your monitor, each of the primary colours has a slightly different “meaning”. That’s because the colour filters in front of your camera sensor are different from somebody else’s, the ones on different monitors are, too, and the spectra which activate the corresponding cells in your eyes are different still.

This means that if you started with an sRGB image with just a 100% green channel and everything else set to zero, and converted it from one colour space to another (e.g. to display it on a calibrated monitor using that screen’s colour space), it would light up not only the green subpixels, because sRGB green is a different green from your screen’s green. Darktable converts from sRGB to Rec2020, maybe performs some white balance, and after that there are no “pure” greens in that colour circle any more, and the green slider no longer corresponds to just the green subpixels on your screen.
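
A minimal numeric sketch of that first conversion step, using the standard linear BT.709 (sRGB primaries) → BT.2020 matrix from ITU-R BT.2087 – darktable’s actual pipeline involves more steps, so take it as an illustration only:

```python
import numpy as np

# Linear Rec.709/sRGB -> Rec.2020 conversion matrix (ITU-R BT.2087)
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

pure_green_srgb = np.array([0.0, 1.0, 0.0])  # 100% green channel, linear sRGB
print(M_709_TO_2020 @ pure_green_srgb)
# -> [0.3293, 0.9195, 0.0880]: already no longer a "pure" green in Rec2020
```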

So far, that’s very clear to me. Actually, if I try to minimize such conversions and tell Darktable to interpret the colour circle as Rec2020, work in Rec2020, skip white balance and use linear Bradford for the CAT module, the channel swapping kinda works. The colours still shift quite a bit, though, and I’m not sure why.
The brightness tab is perhaps more straightforward: it lets you darken the image based on the values of each channel, and it works as I expect.

What I don’t understand is:
1: Why are the colour shifts still as big as they are? I would not have thought those colour spaces were that different.
2: If I go to the grey tab and use only the blue slider, I see the respective part of the wheel brighter, but even the opposite end (where the colour picker showed zero blue before) is rather bright. When I use only the red slider, the entire wheel has almost the same brightness – under what definition of “red” would that rainbow have a near-constant red channel? (A small numeric sketch of what I suspect is happening follows below.)
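
Here is that sketch, assuming the grey tab computes a per-pixel weighted sum of the working-space channels (that assumption is mine, not taken from the docs), and reusing the BT.2087 matrix from above:

```python
import numpy as np

M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def gray(rgb, w_r=0.0, w_g=0.0, w_b=1.0):
    # Grey tab as I understand it: one output value per pixel,
    # a weighted sum of the working-space channels
    return w_r * rgb[0] + w_g * rgb[1] + w_b * rgb[2]

# sRGB yellow: the colour picker shows zero blue *in sRGB*...
yellow_2020 = M_709_TO_2020 @ np.array([1.0, 1.0, 0.0])
print(yellow_2020, gray(yellow_2020))  # ~[0.957, 0.989, 0.104], gray ~0.104

# ...and "red-free" sRGB cyan still has plenty of red in Rec2020:
cyan_2020 = M_709_TO_2020 @ np.array([0.0, 1.0, 1.0])
print(cyan_2020[0])  # ~0.373
```

So even the “blue-free” end of the wheel carries some blue once the pixels live in Rec2020 – though a near-constant red channel across the whole rainbow still looks too extreme to be explained by that alone.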

Some of that may have to do with the bug(s) which @anon41087856 has already confirmed and fixed.

I’ll try a more intuitive phrasing:

You’ve probably seen a 3D colour cube at some point, like any of these: RGB cube at DuckDuckGo

Since the interpretations of what to call “pure” red, green or blue are variable, the way the CAT model does it is to keep black where it is, but then draw a new set of 3 axes (one for each of R, G and B) at some angle to the old ones. So you get a slightly rotated cube, and the new (say) green axis now sticks through the old cube at a point that would have been a somewhat blue-ish green, the new red is shifted towards orange, and so forth. If you keep white where it was, you’re rotating the cube around its diagonal axis (the line between black and white).
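
A toy illustration of that rotated-cube picture (my geometric analogy in code, not darktable’s actual math): rotate the cube a few degrees around the black→white diagonal and watch where the old green primary lands:

```python
import numpy as np

def rotation_about_axis(axis, angle_rad):
    # Rodrigues' formula: rotation matrix about an arbitrary axis
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

R = rotation_about_axis(np.array([1.0, 1.0, 1.0]), np.radians(10))

print(R @ np.array([0.0, 1.0, 0.0]))  # old "green" picks up R and B components
print(R @ np.array([1.0, 1.0, 1.0]))  # white stays put: it's on the rotation axis
```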

To make it more complicated:
Depending on how luminance is dealt with (I’ve no idea how the CAT tool does it, or even what the “standard” method would be), the RGB cube isn’t actually a cube but an oblong box. That’s because (at equal light intensity) green is perceived as brighter than red, and blue as darker than red. So the blue axis would be shorter than the red one, and the green one longer. If you rotate that thing, working out what the primary colours from the previous RGB space should look like in the new one gets even more complicated.
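
For a feel of how lopsided that box is, the Rec.709 luma weights are one common convention (I don’t know which convention the CAT tool uses):

```python
# Rec.709 luminance weights: equal light intensity, unequal perceived brightness
w_r, w_g, w_b = 0.2126, 0.7152, 0.0722
print(w_g / w_r)  # ~3.4: green reads over three times as bright as red
print(w_b / w_r)  # ~0.34: blue reads only a third as bright as red
```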

That’s also one motivation to just work in LAB space (which tries to approximate the human experience of colours, not quantities of light), and only convert to some RGB space for display – but I personally don’t find it intuitive to work in LAB, and it’s also way too easy to end up with colours which no monitor can display and no printer can print. Another way to deal with it is to work in LCh (lightness, chroma, hue) space, which is a lot more intuitive than LAB, but still not as easy for me as RGB.
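
To make the out-of-gamut point concrete: LCh is just LAB in polar coordinates, and a quick sketch shows how easily a plausible-looking LCh colour leaves the sRGB gamut (standard CIE Lab with D65 white; the specific numbers are only illustrative):

```python
import numpy as np

def lch_to_lab(L, C, h_deg):
    # LCh -> Lab: C is the radius (chroma), h the hue angle
    h = np.radians(h_deg)
    return L, C * np.cos(h), C * np.sin(h)

def lab_to_linear_srgb(L, a, b):
    # CIE Lab (D65 white point) -> XYZ -> linear sRGB
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    f_inv = lambda t: t**3 if t**3 > 0.008856 else (t - 16/116) / 7.787
    XYZ = np.array([0.95047 * f_inv(fx), f_inv(fy), 1.08883 * f_inv(fz)])
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    return M @ XYZ

# A vivid green: moderate lightness, cranked-up chroma
rgb = lab_to_linear_srgb(*lch_to_lab(L=60, C=110, h_deg=140))
print(rgb, "in sRGB gamut:", bool(np.all((rgb >= 0) & (rgb <= 1))))
# -> negative red component: no sRGB monitor can show this colour
```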

I think it will take some time for devs and users to come up with functions and ways of using them which allow users to get what they want, predictably, while getting the science right, too.

Are you using the v2 version? Maybe try the same things and see what you get. My original confusion was that I thought colorfulness was doing something equivalent to lowering, say, the red in each channel – taking the red from the reds, the greens and the blues. So an RGB of, say, 100 134 80 would lose red predominantly, but the cross-talk would also alter other colors. Obviously the impact of such a thing in a different colorspace would not be the same on the overall gamut, much in the way you describe. I tried with v1 to look at a color checker image, move the sliders and understand the effect. It wasn’t obvious. In v2 it seemed more relatable…

This is not a contradiction to that, but I think it was Feynman who said that if you can’t explain your topic to a layperson, then maybe you don’t understand it as well as you think. That, too, I consider an overstatement, because some stuff just is complicated. But my experience (teaching at uni) tells me that what appears intuitive to one person (A) can be completely opaque to someone else (B), and that there is likely another way of phrasing the same thing which makes it very obvious to B (while A might be wondering why you’d phrase it in such a funny way…).

Applied to the issue at hand (and actually to the filmic module, too, which has some people, including myself, guessing a lot, and guessing wrong): I think the same operations could be presented and explained in a way that would allow more people to develop an intuition for how they work, what will happen if they pull a certain slider, and which sliders they should pull to get a certain result. Often, the key to this lies in unstated assumptions which one person considers too obvious to mention, and another too obscure to consider.

When I write a paper and get a “stupid” remark from some reviewer who didn’t understand what the point of the paper was, I can’t deal with it by explaining to the reviewer that they’re wrong (even though they probably are!) – it’s a sign that the point of the paper was not clear enough, and I need to rephrase it to make sure people get it. I find this the hardest and often most tedious part of my work, but unless I explain what I do in a way people can understand and appreciate, I’m not really done.

4 Likes

Yeah, just realized I’m still on 3.4, so it still has all those bugs in it. I should have read the whole thread before responding… but it’s a really long one :man_shrugging:

It is. Thanks for your perspective.

This, and a further example: early on in my career I attended a presentation by a very thoughtful colonel who said something along the lines of “I want something that can be used in the dark by a young soldier with minimal education, who is cold, tired, and hungry, while he’s sheltering in a trench in the rain. He’s also frightened because a lot of people just over there are trying to kill him. This (waving a modern scientific calculator) is useless.” A bit over the top for this forum, perhaps, but it illustrates the point. My expertise (and degree) was in life sciences, but that shouldn’t disqualify me from being able to use something like the color calibration module… yet I just can’t get my head around the sliders (DT ver 3.4). I don’t think I’m alone: Bruce Williams seemed to have some difficulty as well, and posts here seem to agree. Not all outstanding photographers (you can exclude me from that category) and artists have an understanding of advanced mathematics. So I’m really sorry to say that the color calibration module (as in 3.4) is not up to the usual excellent, easily understood standard of the rest of darktable. I’m looking forward to an update with a modified color calibration module that’s a bit more intuitive to aged grunt amateur photogs like me.

Please don’t shoot me; this is a clumsy attempt to give constructive criticism. DT as a whole is an outstanding package, arguably more so because it’s open source, and huge thanks and congratulations to all those who put in hours of work to make it so.

4 Likes

I find the soldier analogy very easy to get, although I would not hold Darktable to quite the same standard :slight_smile:

I’ve developed engineering methods and implemented them in software (lots of computation, a little GUI) – and what I’ve understood from that is that you can’t teach the users everything you know (even if they’re fellow engineers working in the same field), but you also can’t dumb it down to the point where they don’t need to understand anything. The trick is to present the method in a way that makes sense to them, and to choose a way of controlling the system that makes sense from their perspective, not your own*. The number of times (and the variety of ways in which) people have naively tried to use my software in ways that obviously (to me) could not have worked, but still blindly trusted the results, is staggering. I’d like to believe that a developer can’t be expected to anticipate all of those ways. So the only solution (unless you have in-house focus-group testing…) is to make something, see how it lands, talk to users, and modify the presentation/controls until most users can work it out in a short time. They learn, you learn, and eventually it clicks.

There’s a lot of discussion about both Filmic and CAT around here, and I hope that @anon41087856 stays patient long enough for enough people to gain not just an intuitive grasp but a more general understanding – both of the method and of what it is that many users don’t get about it – so that the implementation/GUI can be updated to make it easier to use and understand. He’s demonstrated a few times how the fundamentals he’s put in allow for pretty amazing results, but I think there must be an easier way to “drive” it.

(*) Example: I non-dimensionalize everything first, then work in non-dimensional space. The users are way more familiar with dimensional values (in imperial units, can you believe it?!) – so that’s what is displayed. Had to implement unit conversion just for that purpose.

2 Likes

As a software developer in my earlier life, I recall many times sitting with the targeted users for hours to understand what they wanted and what it should do. Then developing it and taking it back to show them. I would see that they were puzzled, then they would finally say, no, that’s not it. I would go back over the notes with them, and I know this is cliché, they would say, “Yes, that’s exactly what I said, but it’s not what I meant.”

Software is All About Abstraction, that is, making something that is complex a bit easier for the next layer of user to deal with. If it weren’t, we’d have thrown away these machines years ago as too cumbersome to deal with.

The art of it all is finding an abstraction that both reliably wrangles the complex thing and provides a useful interface to the user. The most successful endeavors I’ve seen in doing so put the developers in the trench with the soldier, to use @SalisburyJon’s story. One really needs to understand what the user needs to do and the world in which they’re doing it, and then tamp down their ambitions and just build the thing they need…

1 Like

This is what has happened for a lot of the scene-referred modules: the initial reaction is negative, but after some use, some videos, and forum posts, the module is understood and then praised. So give it some time :slight_smile:

3 Likes

Please keep in mind – we don’t have paid software developers at darktable who would just as well be coding games if that’s what they were paid for.
The key developers are photographers, implementing stuff they use.
So there’s no gap between users and developers, just between users who understand in detail what’s implemented and users who don’t :wink:

3 Likes

darktable’s space invaders clone would like to interject here :wink:

3 Likes

Oh, I know the dynamic. At the extreme end of the spectrum, I’ve written a comprehensive raw processor for an audience of one, me. I’ll find it interesting to see how many, if any, other people take up my predilections regarding workflow…

In our business here, the wrangling of complex technology to creative ends, there’ll always be a tension between developers and users, I think. How much do the developers abstract, vs how much do the artists learn about their medium. I’m fine with the tension; I believe all who participate in it learn…

3 Likes

At the age of 4 or 5 I was visiting my grandparents. On the kitchen table there was a bottle of Strohrum. I read “murhorts”. Then they told me to read it from the other side :slight_smile:

1 Like

haha i love that story. almost as good as “redrum”. in any case you’re saying reading saved your life i gather?

1 Like

Try developing an expertise in image editing, and come back to me. That kind of statement is everything that is wrong with photography in general: college-educated men with some expertise in unrelated fields thinking, out of pure hubris, that this unrelated expertise somehow makes them experts at everything else, and that since image processing is so easy, it shouldn’t be a problem.

Well, think again. Image processing is not easy, it’s a job. For real experts. Who trained for it.

Besides, color calibration features have been standard in plenty of other software since the 1990s. So it’s nothing groundbreaking, nothing new, nothing unheard of. Your average Netflix/Hollywood movie is graded using similar tools. Not sure your average Hollywood colorist is a math geek – I wouldn’t bet on it – but they managed to make those tools work decently in production.

The keyword here is polar decomposition of a 3×3 matrix. I have to finish it, but that stuff takes time.
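
For anyone curious what that keyword means in practice, here’s a minimal numpy sketch of the standard SVD route (my illustration of the math, not a claim about how darktable will implement it): a 3×3 channel-mixing matrix M factors as M = Q·S, with Q a pure rotation (roughly, the hue-turning part) and S a symmetric stretch (roughly, the saturation part):

```python
import numpy as np

def polar_decomposition(M):
    # M = Q @ S: Q orthogonal (rotation-like), S symmetric positive semi-definite
    U, sigma, Vt = np.linalg.svd(M)
    Q = U @ Vt                        # the closest orthogonal matrix to M
    S = Vt.T @ np.diag(sigma) @ Vt    # the remaining symmetric "stretch"
    return Q, S

# Some arbitrary channel-mixing matrix for the demo
M = np.array([[0.9, 0.2, -0.1],
              [0.1, 1.1,  0.0],
              [0.0, 0.1,  0.8]])
Q, S = polar_decomposition(M)
print(np.allclose(Q @ S, M))            # True: the factorization is exact
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q is orthogonal
```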

THANKS ! \o/ Portfolio - Aurélien PIERRE, Photographe

All the knowledge I have in fundamental image processing comes from my own frustration with darktable after years of being but a simple user getting semi-shitty results with cameras that were supposed to be awesome. I wasn’t born like this. And I learned C/OpenCL only to hack darktable. I really didn’t picture myself coding all day, I don’t like that anyway.

All the tools I develop come from a look I want to achieve, as a photographer, and that I’m unable to get with current things. It’s always feature to tech, and not tech to feature. And it’s actually look to feature to tech. Look comes first.

So trying to separate the dev from the photographer is not going to serve your point. You just need to humbly set aside your unrelated expertise, try developing the required skills from scratch, and stop making excuses.

Also, photography is bloody damn difficult. People should really stop assuming it is or it should be easy. It’s not. Color alone is super difficult to grasp (forget your usual hue-saturation-lightness, it doesn’t exist outside of shitty GUIs), and that is the core of what we do.

I’m looking forward to an update with a modified color calibration module that’s a bit more intuitive to aged grunt amateur photogs like me.

Color calibration will be intuitive the day walking, writing, counting and speaking are intuitive for a baby or for an adult with brain damage. Try walking drunk holding a glass of water on a tray and make it a priority to not spill any water. That’s about as intuitive as life gets.

5 Likes

I wouldn’t say it saved my life. But shortly after I learned to read, I read one Karl May book per day, until my dad taught me how to read and write punch cards…

Teehee, yes! Because what users say they want is based on their interpretation of what your program does under the hood, which may be wrong. What developers think users want is based on what they think about users, and on what they themselves would want for the stuff they do with the software (have I ever fallen into that second trap…). It sometimes helps to discuss things in terms of what users want to achieve (or which part of the current setup does not work, and why) rather than how they’d like to do it: it’s hard for them to imagine how the software could work differently, but it’s equally hard for the developer to know in what environment and to what end the user is using the software, i.e. which implementation works for them and which does not.

I like to believe that after >5 years working on one piece of software, I got to the point where most of the stuff users did not get was simply down to there being no time available to implement it – but each new feature still had a design iteration planned in, because there would be some misunderstanding either about how it needed to work to fit into their workflow, or about what was technically possible.

1 Like