"Aurelien said: basecurve is bad"

Why are you sure about this? Which operations do you think C1 does in camera RGB?

I’ve seen some research about the biological component of human perception, so I know there are folk interested in the cone spectral response. I haven’t actually pored over any of it, though, so I can’t speak to how specific their knowledge is.

So it appears to me that the end result, the amalgam of cone response and neural/brain transforms, is rather well characterized. I’ll go with that… :laughing:

2 Likes

Oh, everyone should at least skim this; a wonderful simple-to-complex introduction to color, and then digital imaging. Saved to my Documents folder.

At the risk of making any resident mathematicians twitchy, here’s my attempt to explain the linearity (or otherwise) of Camera RGB to XYZ conversions.

Conversion from scene spectra to Camera RGB:

for each wavelength, multiply the light intensity by the Camera RGB filter responses and add up all the contributions

This process is LINEAR.

Conversion from scene spectra to CIE XYZ:

for each wavelength, multiply the light intensity by the standard observer (x_bar, y_bar, z_bar) and add up all the contributions.

This process is also LINEAR.
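Both conversions can be sketched as the same operation: a dot product of the spectrum with a set of response curves. A minimal NumPy sketch, using made-up wavelength samples and response values (real spectral data is sampled far more finely, e.g. 380–730 nm in 5 nm steps):

```python
import numpy as np

# Hypothetical 5-sample spectra -- just enough to show the principle.
wavelengths = np.array([450.0, 500.0, 550.0, 600.0, 650.0])  # nm

# Made-up camera filter responses (rows: R, G, B) and made-up
# standard-observer curves (rows: x_bar, y_bar, z_bar).
camera_response = np.array([
    [0.02, 0.05, 0.20, 0.60, 0.40],   # R channel sensitivity
    [0.10, 0.50, 0.80, 0.30, 0.05],   # G channel sensitivity
    [0.70, 0.40, 0.10, 0.02, 0.01],   # B channel sensitivity
])
observer = np.array([
    [0.33, 0.00, 0.43, 1.06, 0.28],   # x_bar
    [0.04, 0.32, 0.99, 0.63, 0.11],   # y_bar
    [1.77, 0.27, 0.01, 0.00, 0.00],   # z_bar
])

def integrate(response, spectrum):
    """'Multiply per wavelength and add up' is just a dot product."""
    return response @ spectrum

s1 = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # one scene spectrum
s2 = np.array([0.5, 0.5, 1.0, 4.0, 2.0])   # another one

# Linearity: converting the sum of two spectra equals the sum of the
# two conversions -- true for both Camera RGB and XYZ.
assert np.allclose(integrate(camera_response, s1 + s2),
                   integrate(camera_response, s1) + integrate(camera_response, s2))
assert np.allclose(integrate(observer, s1 + s2),
                   integrate(observer, s1) + integrate(observer, s2))
```

The same dot product also shows why information is lost: it maps an (in principle infinite-dimensional) spectrum down to just 3 numbers, which is where metamerism comes from.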

The problem is that both spectrum->Camera RGB and spectrum->XYZ lose information: infinitely many different spectra can result in the same RGB or XYZ values (this is known as metamerism).

The set of metamers of a stimulus that generates a certain RGB value is not necessarily the same as the set of spectra that produce the corresponding XYZ value.

In general, given only Camera RGB values, you don’t know which of the possible scene spectra caused them. And that means you don’t know what the XYZ values would be. The transform between them is neither linear nor non-linear - there is no exact mapping from one to the other.

The exception is if the camera filter response and the standard observer are linear transforms of each other. In that case (the ‘Luther-Ives’ condition), you can transform exactly between RGB and XYZ using a matrix.

Real camera sensors do not meet the Luther-Ives condition (the actual choice of dyes to use depends on other things such as cost, light-fastness and noise performance as well as accurate colour rendering). However, they are close enough (and typical scene spectra are predictable enough) that you can approximate the RGB->XYZ conversion with a matrix.
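A minimal sketch of how such an approximating matrix can be found, using synthetic stand-in data (the patch values, the "true" matrix and the small non-linear residual are all made up for illustration; in practice you would fit against measured colour-chart patches):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: Camera RGB values for 24 colour-chart
# patches, and the XYZ values they should map to.
rgb_patches = rng.uniform(0.0, 1.0, size=(24, 3))
true_matrix = np.array([[0.41, 0.36, 0.18],
                        [0.21, 0.72, 0.07],
                        [0.02, 0.12, 0.95]])
# Simulate XYZ that is *almost* a linear transform of RGB, plus a small
# non-linear residual standing in for the failure of Luther-Ives.
xyz_patches = rgb_patches @ true_matrix.T + 0.01 * rgb_patches**2

# Least-squares fit of a 3x3 matrix M such that XYZ ~= M @ RGB.
M, *_ = np.linalg.lstsq(rgb_patches, xyz_patches, rcond=None)
M = M.T

# The matrix is only an approximation: a small residual error remains,
# which is exactly the "close enough for typical spectra" situation.
residual = xyz_patches - rgb_patches @ M.T
print("max abs error:", np.abs(residual).max())
```

If the Luther-Ives condition held exactly (no non-linear residual term), this fit would be exact and the residual would be zero.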

This mostly works well, except for some extreme colours which result from unusual spectra. The classic example of this is blue LED lighting. It is possible to use a non-linear approach (LUTs work well for this even in an unbounded workflow because the data from the camera sensor itself is bounded - the LUT only needs to deal with camera sensor data) to map some of the stranger results back into a more normal range.
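The LUT idea relies precisely on the input domain being bounded. A minimal per-channel sketch (the grid size and the "compress the top of the channel" correction are invented for illustration, not taken from any real camera profile):

```python
import numpy as np

# Because raw sensor values are bounded (normalised here to [0, 1]),
# a lookup table over that range covers every possible input, even if
# the working space downstream is unbounded.
lut_size = 33
grid = np.linspace(0.0, 1.0, lut_size)

# Hypothetical non-linear correction: compress the top of one channel,
# standing in for "mapping strange results back into a normal range".
correction = np.where(grid > 0.8, 0.8 + 0.5 * (grid - 0.8), grid)

def apply_lut(values, grid, table):
    """Piecewise-linear interpolation into the LUT (np.interp clamps at the ends)."""
    return np.interp(values, grid, table)

pixels = np.array([0.1, 0.5, 0.95])
print(apply_lut(pixels, grid, correction))
```

Real implementations typically use a 3D LUT over (R, G, B) rather than independent per-channel curves, but the bounded-domain argument is the same.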

6 Likes

His Google Scholar list is quite interesting. He has a nice review on mobile phone processing and some other really interesting things. His field of study is computational processing, and it’s interesting to see some patents and approaches to tackling over- and under-exposed images, auto white balance from multiple illuminants, etc. I think he also links to a few databases for image testing. Some good reading there for the curious.

Thanks for taking the time to add to the conversation…very nicely laid out…

1 Like

You are doing yeoman’s work! Thanks and keep it up.

1 Like

My humble opinion on this:

When I read statements like this, I always prick up my ears. I have the impression - also due to many other threads in various forums - that darktable users are divided into two camps.

First of all: It is true that darktable is owned by the developers and they have all rights to do with it what they want.

But I think one should also understand the users. Many of them chose darktable a long time ago and edited all their photos with it. Now, suddenly, a revolution comes in the form of the scene-referred workflow, challenging them to do things differently than they have done before.

Some of them understand the changes, accept them and use them for their photos. Others do not, for a variety of reasons. And these reasons cannot simply be wiped away. And so these users become frustrated and annoyed. I understand that, because they got involved with darktable, have been using it successfully for years, and suddenly everything is about to change.

This anger then vents in such statements and achieves the opposite. The two camps thus move further and further apart.

In the end, one of the camps will win, and I think it will be the scene-referred workflow. The other users will then use different software for their photos.

But even if the users have no rights and the developers only let them participate, users are still important for a piece of software. It would be a shame if only a few people used software as great as darktable.

The advice from Aurelien, the documentation and the numerous videos from him and many others go in the right direction. This allows the user to familiarize himself with the new technologies.

But here some problems arise.

  1. The whole thing is difficult to understand.

Not because it is poorly explained (on the contrary). But because the new techniques are simply more complex than the old ones.

I don’t think I need to explain that with examples. If it were easy, there wouldn’t be so much discussion about it.

  2. The scene-referred workflow assumes that all images are processed.

If I bring 500 photos back from a city trip, I don’t want to edit them all extensively. Many of them are OK, some just need small tweaks and a few are worth tweaking more. Still, I want all these photos in darktable. In order to assign star ratings, I don’t want to have to use filmic and color balance RGB first just to see if they are good or not.

This is something the display-referred workflow can do better.

  3. The benefits of scene-based workflow are hard to see.

I have to admit that I don’t know much about mathematics or color science. But I realize that there is an advantage in applying a module when all the information from the RAW is still there. So for me, the scene-referred workflow is the better technology.

But when I look at the results, I can see differences, but I can’t always say that A is better than B. Sometimes it is, but not always.

My own point of view.

So I’m a little torn here.

The new modules are definitely harder for me to understand and even though I’ve got the hang of it, it’s still hard to use.
For shadows and highlights I click on “activate” and for many photos I’m done, whereas with the tone-equalizer I first have to deal with a mask.

On the other hand, there is the color-balance RGB module, which is also difficult to understand, but brings interesting results that I can’t achieve with other modules.

That’s why I take the position that neither one nor the other is bad.

Anything is possible with the current version 3.8.

You can use the scene-referred workflow with the new modern modules. And with the help of darktable-chart I also found a way for me that I only have to edit the photos that I want to edit (which solves problem 2. for me).

However, the current version can also use the display-referred workflow with the old, simple modules. The advantage is that it is easier to understand and use, making it suitable for beginners and for quick processing. But simple modules such as shadows and highlights or color zones can also be used in the scene-referred workflow.

That’s why I often work by activating shadows and highlights, for example. Just one click. Only when that doesn’t fit do I use the tone equalizer and work on a mask.

Hence my request to Aurelien and the other developers: please do not remove the old modules.

They have done a good job for years and have the advantage of being easy to use. I can understand not evolving them further; they don’t need to be.

And in the current version, darktable offers all the options to use either one or the other and to hide the modules you find annoying.

Thus, even beginners and people with less image-processing know-how have the opportunity to familiarize themselves with darktable.
And I can keep using simple modules for simple tasks.

To put it in the words of Zappa: basecurve is not bad, it just smells funny. :slight_smile:

6 Likes

I am curious why you think they would do that. All current display-referred modules remain available, and users can continue to use them forever. They are still relevant, eg for editing JPEGs.

3 Likes

That’s exactly the point. If the old modules and basecurve remain with the display-referred workflow, then that probably won’t happen.

But some modules have already left darktable. OK - nothing dramatic.

@herbert-50 :
You may want to read all of @anon41087856 's initial post in this thread. He explains why basecurve was bad in 2018 and why the main problem doesn’t exist anymore. Also, there are only three deprecated modules in 3.8.0, meaning they should not be used in new edits. “Shadows and highlights” isn’t one of them.

Your argument about “500 photos […] from a city trip” needing to be processed before you can apply a rating is a bit of a red herring: in the lighttable you can use the embedded jpegs for un-edited photos. Those should be fairly close to what you get from a basecurve edit, and require no processing.
No difference here between scene-referred or display-referred editing, as there is no editing done yet…

And yes, the scene-referred workflow requires a bit of practice when you come from a display-referred workflow. But when I compare my old display-referred edits with new scene-referred edits, from the same raw files, I’ve noticed that the scene-referred edits require fewer modules, as it’s much easier to handle a large dynamic range (less need for “shadows & highlights”, exposure and tone curve). Filmic handles a large part of those issues; certainly not all, but most. Especially now with the “local contrast” preset in “Diffuse and sharpen”(*): compressing the dynamic range means a loss of local contrast, which has to be corrected.
That means that for me, the scene-referred workflow is actually faster and easier than the display-referred workflow…
(*): not pleasant to use on a low-powered computer, though

Finally, I don’t see why scene-referred should be harder for a beginner than display-referred. “Beginner” in this context being someone with little or no photo editing experience at all; not someone with experience with another (display-referred) program.

4 Likes

I have already read the whole post. But still the pictures with the “old” base curve (preserve colors == none) look better to me than with the new one. And the old basecurve was simple and good enough for my photos until 2018.

And the example with the 500 images is a real one. Of course I can also look at the embedded JPEGs. But with the smallest change, for example a crop or a little more exposure, this preview is lost. And these small changes happen often.

The scene-referred workflow is more difficult for the beginner because, on the one hand, he doesn’t immediately get a finished image, but only a good starting point. On the other hand, the scene-referred modules are more difficult to use. It starts with filmic and its many controls.

I don’t think the comparison of module counts is fair. These few modules have a lot of controls and thus combine several modules in one.

For those who are familiar with it, a good starting point is more important than a finished picture, because they want to develop the picture according to their ideas.

But ultimately darktable serves both in the current version. The user only has to know how to enable the old workflow.

But if the base curve and the display-referred workflow are removed, then the beginner faces a steep learning curve with little success at the beginning. The same applies if the simple modules like shadows and highlights or color zones are removed.

Do not get me wrong. I’m not saying that the scene-referred workflow and the new modules are bad. I use it too.
But neither is the display-referred workflow, because it’s easy for the user to use. And so I use some of its modules.

Unless you have something much more detailed than this, we have been over this a bunch of times. The written documentation for the new modules is there, there are ample videos. You don’t need to twiddle every knob from the outset.

Sure, that’s a huge IF that isn’t going to happen. That’s been said a bunch of times too.

At this point filmic defaults + a bit of color boosting (to taste, eg the add basic colorfulness preset in color balance rgb) gets you a reasonable-looking image, close to OOC JPEGs (without the local contrast enhancements etc).

You can just make these an auto-applied preset, and be done with it.

That said, the point of photo editing software like darktable is not to get a “finished” image that requires no further intervention, but precisely the opposite: a flexible tool for sophisticated post-processing.

8 Likes

Exactly this. I created a style to do this plus add two D&S instances for local contrast and sharpening. It gets me 95% of a great image and I then tweak. Most of the time I don’t need to touch it.

6 Likes

The aim of my post above was not to suggest that the scene-referred workflow and its modules are bad or to get help with the problems I am having with it.

(Yes there are some, but I think this thread is not the right one for it and of course I should describe them in more detail)

My intention was to show that the display-referred workflow and its modules still have their place and their power lies in their simplicity.

I know I can work with presets and styles and in fact I have. It can also be read in my first post.

And the idea that the old modules will be removed is just an apprehension. So I am happy to read:

As of 3.8, I don’t think the scene referred workflow is any more difficult than the old workflow. I usually get a very reasonable starting point just by adjusting exposure (because I’m lazy at capture time). My default preset adds some saturation, but that’s about it for a starting point.

What I will say is that the scene referred workflow is unconventional. If you are used to the way any of the other raw developers work, learning the new workflow takes some doing. But it’s not inherently more difficult or complex. If anything, it’s less complicated, because there’s no longer a dichotomy between a few modules that can reach beyond black/white, and the bulk of the remaining modules that can’t.

1 Like

Now that I have a couple of auto presets in place (an extra 0.5EV for exposure and the “colorfulness” preset for color balance RGB) I struggle to see much difference between the start point in darktable and the embedded jpeg preview. So I would argue that, in terms of default look, there’s not much difference between base curve and filmic.

I used to have the lighttable default to showing the jpeg preview before editing but I’ve now turned that off as I find it less useful than seeing the raw with default processing.

I’d say this is only true because other tooling has thought about digital photography as if it were “digital film.” If you’re from a discipline that was rigorous about the advantages of being digital, such as animation or video, then you might be more at home with darktable than with other editors.

3 Likes

I have used Darktable for many years. In the early days it was display-referred. So I guess I count as a user that needed to convert from the old world to the new.

When the first scene-referred modules arrived it was immediately obvious to me that they were a better way to process raw images. I had NO TROUBLE adjusting to the new world, and having read the various articles about scene-referred processing I could see it was the way forward.

When further modules arrived I could EASILY SEE how they gave new options to improve images, and even go back and reprocess old images.

I fail to understand all these complaints about scene referred being complicated, unwanted and difficult to use.

Please keep up the good work on DT and congratulations on the progress so far!!

9 Likes