Is the display-referred workflow a destructive workflow? And the scene-referred one non-destructive?

Are all these workflows focused on trying to preserve luma and colors that are clipped by any of the modules used?

The terminology you use refers to software that does not modify the data of the original file. Edits in darktable are saved in a database as well as in XMP text-based sidecar files… your raw files are untouched… this is the non-destructive part…

A good summary of the pros and cons is found here: Color Grading Workspaces - Film Riot


@priort clarified your terminology. Now, with regard to the propensity of operations on image data to “destroy”, all operations do so to some extent. Tone operations particularly, as they move the image data away from the original “energy-linear” relationship it had to the light at the scene.

So, in a display-referred workflow, the image is presented to the user after the tone curve that lifts the data into the perceptual range has been applied. Applying further tone curves, or color modifications, to this data only slews it further, in ways not related to the original linear data, sometimes with egregious effects on the colors. In a scene-referred workflow, all that user work is done before that final tone curve to perceptual, which helps to reduce the color damage.
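
To make that “slewing” concrete, here is a toy sketch (my own illustration, not darktable code) contrasting a brightening done in scene-linear light with the same brightening applied after a display tone curve; a plain 2.2 gamma stands in for the display transform:

```python
# Toy demo: the same +1 EV "brighten" before vs. after the display transform.
# Assumption: gamma 2.2 as a stand-in for the real display tone curve.
import numpy as np

pixel = np.array([0.40, 0.20, 0.10])   # scene-linear RGB, ratios 4:2:1

# Scene-referred edit: exposure in linear light is a plain multiply.
scene_edit = pixel * 2.0
print(scene_edit / scene_edit.max())   # ratios unchanged: [1.0, 0.5, 0.25]

# Display-referred edit: the same multiply, but after the tone curve.
display = pixel ** (1 / 2.2)                  # display transform
display_edit = np.clip(display * 2.0, 0, 1)   # brighten the display values
back = display_edit ** 2.2                    # what that implies in linear light
print(back / back.max())               # ratios skewed toward white, red clipped
```

In linear light the edit leaves the channel ratios untouched; after the tone curve, the identical multiply desaturates the color and clips the brightest channel.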


What is the main purpose of the scene-referred workflow? Is it for easier control of values while using the sliders, or is it mainly about preserving the color/luma data during adjustments?

Is the reasoning the same as doing adjustments before the LUT in video editing, because the LUT is destructive? After the LUT, if the luma or colors have clipped, they are not recoverable in the next stage of the pipeline. Is that the same here with the scene-referred workflow? I am still trying to understand the main reason why there is a specific workflow like scene-referred.

Did you read the link I provided? It is pretty well spelled out there… sort of in line with your thoughts…

This picture is a good visual…


Have a look at topics by @anon41087856, like this article about the problems with Lab.

Of course you can still get clipping, even in a scene-referred workflow. And a display-referred workflow doesn’t have to introduce clipping. I find it’s just a lot easier to control the tonal range with scene-referred.
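
As a toy illustration of that tonal-range control (my own numbers, with a Reinhard-style x/(1+x) compression standing in for filmic):

```python
# Toy demo: clipping early vs. keeping unbounded scene-linear values.
import numpy as np

highlights = np.array([0.8, 1.6, 3.2])   # scene-linear, two values above 1.0

# Clip early (display-referred habit), then compress: highlight detail merges.
early_clip = np.minimum(highlights, 1.0)
print(early_clip / (1 + early_clip))     # [0.444, 0.5, 0.5]

# Keep unbounded floats, compress only at the very end: detail survives.
print(highlights / (1 + highlights))     # [0.444, 0.615, 0.762]
```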

With colours it’s a bit different: you can still push your colours outside your final display colour space, even without hard clipping of any data. But the conversion to the final display colour space happens at the very end of the pipeline, where the workflow has become display-referred anyway.

Keep in mind that no workflow is 100% scene- or display-referred: there’s always some work done before the transformation to display-referred, and some after. That transformation happens in the filmic or base curve module. The two workflows differ in where most of the editing is done in the pixel pipe: before or after the “transformation modules”. And given the history of darktable, there are two (or more) modules for any given task, some active before filmic/basecurve, some after. It’s not really useful to include several modules that perform a similar task in one workflow, hence the “specific” workflows.

(That’s the current situation since the pixel-pipe order was reworked; you may want to look at another topic by @anon41087856.)
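
For a rough picture of that split, here is a sketch using real darktable module names but an illustrative order (the actual pixel pipe is much longer and its exact order differs):

```python
# Illustrative only: where each workflow does most of its editing,
# relative to the scene -> display "transformation module".
pixelpipe = [
    "demosaic", "white balance", "exposure", "color calibration",
    "tone equalizer", "color balance rgb",     # scene-referred edits
    "filmic rgb",                              # scene -> display transform
    "color zones", "vignetting", "watermark",  # display-referred edits
    "output color profile",
]
split = pixelpipe.index("filmic rgb")
print("scene-referred side: ", pixelpipe[:split])
print("display-referred side:", pixelpipe[split + 1:])
```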


I am slowly getting there. Thanks for the input, guys.

It’s not about destructive or non-destructive: in general, raw processing is destructive in the end, since no technical tool can reproduce the scene exactly as it was in reality.
It’s all about keeping control until it comes to squeezing the whole thing into the limits of what can be displayed or printed.


Yes, this.

EVERYTHING you do to the original data recorded by the camera is destructive to that data in some way. The whole digital imaging pipeline is about taking the rich light spectrum of the scene and crunching it down to little RGB triples that coarsely approximate it, viewable on limited-gamut devices. The scene-referred workflow is all about ordering the operations to minimize the cumulative damage.


This is so great!

The part that people tend to miss about scene-referred vs. display-referred is colour. They focus too much on contrast or dynamic range.

Colour is a complicated topic, but it pretty much derives from something simple: it’s the proportions of R, G, and B your pixel has.

When a sensor records an image, it breaks a light spectrum into that RGB tristimulus. Full spectrum → 3 intensities is already the first destructive step of the pipeline, but we can manage.

The problem is, those camera proportions are not consistent with human vision. So, using an input profile and a white-balance correction, we reweight them. Now we have the scene light as it would have been seen by a human observer.
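
Sketched in code, those two steps are a dot product per channel (spectrum to tristimulus) followed by a diagonal white-balance scaling and a 3×3 input-profile matrix; all the numbers below are illustrative, not a real sensor or camera profile:

```python
import numpy as np

# Toy spectrum sampled at a few wavelengths, and toy sensor sensitivities.
# Full spectrum -> 3 intensities is one dot product per channel: the first
# destructive step of the pipeline.
spectrum = np.array([0.2, 0.5, 0.8, 0.6, 0.3])   # scene light
sensitivities = np.array([
    [0.0, 0.1, 0.2, 0.6, 0.9],                   # R response
    [0.1, 0.5, 0.9, 0.4, 0.1],                   # G response
    [0.9, 0.6, 0.1, 0.0, 0.0],                   # B response
])
camera_rgb = sensitivities @ spectrum            # raw tristimulus

# Reweighting: hypothetical WB gains and a hypothetical camera->XYZ matrix.
wb = np.diag([2.0, 1.0, 1.5])
input_profile = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])
xyz = input_profile @ (wb @ camera_rgb)          # scene light, human observer
print(camera_rgb, xyz)
```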

The thing is, most display transforms will mess up those scene ratios (filmic has chrominance preservation to prevent this), which means that:

  1. you don’t work on the original colours,
  2. colours that were correlated on the scene (belonging to the same object, for example) are not necessarily correlated on display,
  3. colours out of gamut cannot be gamut-mapped consistently with their correlated colours, because they were clipped too early in the pipeline and because their correlated colours may not be correlated anymore.

Point #2 will make chroma/hue masking very hard, and point #3 will create the infamous rat-piss sunset.

Gradients are an important part of image making. They represent the transitions of colour and intensity within objects. Colours that belong to the same object or surface are expected to be correlated with their neighbours in a smooth way. So ratios should be handled in a way that keeps colours correlated. And, well, your typical split-channel RGB tone curve used as a display transform does not do that.
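
Here is a toy sketch of that difference (my own illustration, not filmic’s actual math): a split-channel curve shifts the channel ratios, while deriving a single scaling factor from a norm, which is the idea behind chrominance preservation, keeps them intact:

```python
import numpy as np

def s_curve(x):
    return x ** 2 * (3 - 2 * x)          # smoothstep as a toy tone curve

pixel = np.array([0.9, 0.5, 0.2])        # a warm, sunset-ish colour

# Split-channel: each channel goes through its own curve -> ratios change.
per_channel = s_curve(pixel)
print(per_channel / per_channel.max())   # hue/saturation have shifted

# Norm-preserving: one factor derived from a norm, applied to all channels.
norm = pixel.max()
preserved = pixel * (s_curve(norm) / norm)
print(preserved / preserved.max())       # ratios identical to the input
```

Because every channel of a pixel is scaled by the same single factor, neighbouring pixels of the same surface keep their ratios correlated, which is exactly what smooth gradients need.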
