"Aurelien said: basecurve is bad"

I can see where they are coming from. That part, I think, is easy to understand when you look at the array of raw editors that people might have tried or used. Something like ART or RT turns on the auto-matched tone curve by default, so images are usually bright, “contrasty” and saturated from the start. I have not used Lightroom more than a handful of times, but I have seen lots of videos. I do own ON1, which I don’t actually use that much, and again: you open the image and the profiles add a curve and color adjustments… The opening result for many images will not come close to that “expected” look in DT with the defaults (easily corrected), and for those just starting out this will be their first impression. It is far easier for those of us who have spent many hours using DT to understand that this is by design: we are free to begin editing with less formula applied, and we have access to the tools to bring the image to that same point and beyond without much effort. Or just use the old basecurve.

I think it’s embracing this “workflow” paradigm, as much as embracing the different tools or modules, that has to be overcome. For some it’s off-putting, and they will not hang in there long enough to see the benefits and potential…

This will not go away any time soon… there are just different users with different expectations. The tools could be the best tools ever invented, but if the implementation doesn’t match their expectations, however unrealistic or uninformed, they will be critical.

I am often a bit puzzled by the extent of the discussion on this issue… To me it’s like watching television: if you don’t like what is on the channel you are watching, just change the channel… :slight_smile:


Scene-referred has been around since the beginning, though done by the more technically able. The question is a matter of integrating it into an editing application that displays an image on the screen where you can see real-time changes to workflow adjustments. It is easy to think in display-referred terms. We cannot fault anyone for that.

No one is angry with anyone who resorts to simpler tools.

The main motivation of darktable’s developers isn’t to provide yet another simple tool (there are plenty around) but to gain capabilities other software didn’t give them.
So “better” is the enemy of “good”, and the price of having more control over the results is less simplicity and a longer learning curve…


Some of the impression of complexity may also come from the early times when the scene-referred workflow was just being introduced.

Nowadays most of the gotchas have been smoothed over. But I remember the days when highlights greyed out unnaturally if you weren’t careful, and you had to manually watch the filmic curve for overshoots. That was genuinely more complicated than in other editors. But it’s no longer the case.


I think this is only true because people are used to it. Other than that, scene-referred means linear to light power and unbounded. If you think about what operations are doing to your input data, it is tremendously easier to understand. Clipped data is not easy in either domain. As long as people don’t want to understand the operations and the pixel pipeline, the domain is not so important; it’s just that those people then require a fixed pipe, a vastly reduced number of controls, and names that reflect each control’s artistic purpose. There’s nothing wrong with that, but the scene-referred approach is for sure easier to understand, if understanding means how and not only what.
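As a minimal sketch of that difference (made-up pixel values, and a one-stop exposure push as the operation):

```python
import numpy as np

# Minimal sketch (made-up values): a one-stop exposure push is just a
# multiplication by 2 on scene-referred data, which is linear to light
# and unbounded, so values above 1.0 are perfectly legal.
scene = np.array([0.05, 0.18, 0.90, 4.00])   # linear scene radiance
pushed = scene * 2.0                          # ratios between pixels preserved

# Display-referred data is bounded to [0, 1]; the same operation has to
# clip, and the two brightest pixels become indistinguishable.
display = np.clip(scene, 0.0, 1.0)
pushed_display = np.clip(display * 2.0, 0.0, 1.0)

print(pushed)          # highlight detail intact, values above 1.0 allowed
print(pushed_display)  # both bright pixels pinned to 1.0, detail gone
```

Multiplication is the same simple operation in both cases; only the bounded domain forces the destructive clip.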


What are display-referred terms? And what are scene-referred terms?

Can you give some examples? Is “saturation” a display-referred term?

Scene-referred workflow and hue-preserving workflow are two different things, but not even darktable’s developers understand the difference; of course new users will be a little lost when trying darktable.

Please read e.g. PIXLS.US - Darktable 3:RGB or Lab? Which Modules? Help!, or watch one of the subsequent tutorials, etc. These terms have been discussed extensively and there is no need to rehash them.

Hm, I have thought about it a bit more, and I think there are two aspects that are easier when working referred to the output medium (not always a display; it could be paper):

  1. The concept of white. In nature there is no such thing, so it only exists after filmic or another transform, and maybe not even then. However, the task is often to make adjustments relative to the output medium, which is more difficult when the transform is at the very end. Example: make a white background for your subject.
  2. Blend modes that rely on absolute black or absolute white are often used in image processing, and these become difficult for HDR input data. As long as the dynamic range of the scene is low, it may not be an issue. Blending itself may be more useful in scene-referred, though.
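A tiny sketch of point 2 (my own toy formula, not darktable code): the classic “screen” blend hard-codes 1.0 as absolute white, and misbehaves as soon as scene-referred data exceeds it.

```python
# Toy example (not darktable code): "screen" blending assumes white == 1.0.
def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

# On display-referred data in [0, 1] the result lightens, as intended:
print(screen(0.5, 0.5))   # 0.75

# On unbounded scene-referred data the (1 - x) terms go negative, and the
# blend of two bright pixels collapses to absolute black:
print(screen(2.0, 2.0))   # 0.0
```

The formula itself is fine; it just encodes an assumption about where “white” lives that only holds after the output transform.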

There may be other examples. But it’s still something to get used to.

Practically, you can just make something (highly) overexposed, and let filmic map it to the display’s “white”.
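To illustrate the idea (a deliberately simplified log shaper, not darktable’s actual filmic math):

```python
import math

# Simplified sketch, NOT darktable's filmic: a log-style tone mapping that
# squeezes unbounded scene-referred values into the display's [0, 1] range.
# Scene values at or above the chosen "white" point all land on display white.
def log_shaper(x, white=8.0, black=1.0 / 256.0):
    v = (math.log2(max(x, black)) - math.log2(black)) / (
        math.log2(white) - math.log2(black)
    )
    return min(max(v, 0.0), 1.0)

print(log_shaper(0.18))   # middle grey lands somewhere around the middle
print(log_shaper(8.0))    # the scene "white" point maps exactly to 1.0
print(log_shaper(20.0))   # heavily overexposed still clips to display white
```

The `white` and `black` parameters here are made-up defaults, loosely analogous to filmic’s relative exposure bounds.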

An example would be useful here. I would hazard a guess that if there isn’t a conceptual equivalent in scene-referred, you can still get the same (or similar) end result. “Anchoring” a blend mode is often done with the blend fulcrum parameter.

Sorry, that’s not the answer to my question.

I don’t want an explanation of what the scene-referred workflow is, I just want to know what is meant by the term “display-referred terms”.

There are two key cons to this workflow, which are enough to deter far too many filmmakers from using it. The first is that it requires us to learn the basics of our image’s journey from sensor to screen. The second is that it requires us to lay a proper foundation in our grade before we begin turning knobs. Both of these take time, and neither are very sexy.

With all due respect, I think this debate can be exhausting if the non-sexy requirement is not met…


It doesn’t hurt to have some understanding of light’s journey from source to sensor as well, and how cameras differ from how the brain handles the same information. A lot of what we do in image processing is either trying to reverse physical processes that altered the source light in undesirable ways, or to reproduce in a low dynamic range image the sort of internal processing our brain does when confronted by a high dynamic range scene.


Let me explain:

  1. I was using it in the idiomatic sense as in “in relation to”, not “terminology”. We see the display and that is how we think about the image when we observe and work on it.

  2. The display-referred way of thinking is intuitive to many not because we are used to the methodology but because we are accustomed to seeing things in a WYSIWYG manner.

I could write more on the second point elaborating on the history of science and technology and how humans have approached and thought of things over the centuries.

Our task is to make scene-referred more intuitive and easier to reconcile. A healthy dose of charity (kindness, understanding, generosity and patience) would go a long way to facilitate this learning.


Ok, I think I see my misunderstanding.

The emphasis is not on “terms” but on “think of”.

So there isn’t a set of terms for display-based workflow or scene-based workflow, but a way of thinking about one workflow or the other.


I don’t think it is even that, it is just a habit shaped by technological constraints that were binding until recently.

You don’t have to be retired to remember a time when photo editing was basically a thin GUI for destructively modifying a 3D array of 8-bit integers with limited undo. Breaking each of those constraints required decades, and habits change slowly.

Scene-referred is simply the next logical step. It is one of those a-ha moments when you recognize how something should have been designed from the very beginning, if only you’d had the hindsight, the computers, and the scientific and algorithmic developments of today :wink:


That word again: “intuitive”. Usually a shortcut for “I don’t have to think about this because I’ve done it so often already”. That is, a tool is intuitive because you have spent the time needed to learn how to use it (cf. riding a bike, or playing a musical instrument).

And that means that any system replacing the “intuitive” system you are used to needs to follow the same rules to be intuitive as well, or you need to spend time learning new habits.

The latter means spending time; the former means that some changes which don’t fit the old habits become impossible to implement… So while it may be a worthwhile goal to make new tools resemble the old ones as closely as possible, pushing that too far will limit how much you can improve the tool.


No, I did not mean a retrograde implementation, but something closer to Super Mario Bros. World 1-1. I wish people would refrain from nitpicking my diction. Our forum has taken on a critical tone where people cannot express their ideas without being criticized or taken apart. Frankly, it is exhausting having to explain or defend oneself. I am speaking in general terms for the community. That said, “intuitive” can be a counterproductive word. :person_shrugging:

One could now discuss what “intuitive” really means, but I think that leads nowhere.
Ultimately, it is also clear what is meant by the statement. At least to me.

But I can also understand that it is important to deal with the matter.
It’s about knowing what you’re doing or wanting to achieve.

Nevertheless, I find it a good approach to make the scene-referred workflow “more intuitive” or, to put it another way, “easier to understand”.
The discussions about it show that there are some people (including me) who have their problems with it.

My point here is (sorry for terminology again, but I think it’s important):
When someone says scene-referred workflow, what exactly do they mean?

Does that denote the part of the pipeline from the linear to the non-linear color space?
Or the fact that edits are being made in that part?
Or does it involve using the filmic RGB module and the new ordering of the pixelpipe, i.e. what happens when scene-referred is set in the presets?

In the above article (Color Grading Workspaces - Film Riot) this is described relatively simply:

“A scene-referred workflow is one in which we manipulate our images prior to their transformation from camera color space to display color space. A display-referred workflow is one in which we manipulate our images after they’ve been transformed from camera color space to display color space.”

I think that’s also easy to understand once you understand the difference between the color spaces.

But with this definition you always have a mixed form in darktable, regardless of whether the scene-referred or display-referred workflow is set. Even with the old module order, there are a number of modules which come into play before the base curve, in the linear color space.

And that was before the scene-referred workflow was introduced to darktable. The exposure module has always been scene-referred.
And I don’t think there is anything difficult about this module.

With the introduction of the scene-referred workflow, new modules have been added, and more are being added with each new version.
And these modules are more difficult for me to understand than the old ones.

So I would say it’s not the scene-referred workflow that’s difficult. The new modules are difficult.

But I don’t necessarily want to criticize that. Maybe it has to be like this? I don’t know. Over time I learn to deal with it.
And of course I get good results with it.

Nevertheless, simpler modules are desirable (for me), and maybe they are possible.



At the end of any workflow is a rendition, and that rendition has definitely had at least one tone curve applied. If you don’t think so, go pick apart your display profile or export profile; it’s in there. The only distinction between scene- and display-referred is where we put any color work (including white balance) in reference to the first tone curve: before or after. That’s it.

Filmic, basecurve, whatever, those are little intermediate tone curves before the final one for display or export. What’s important is to put all the other color work before them so they’re working on the image data in its original energy-linear relationship,