Survey on the linear workflow

Having read this thread with great interest, I feel the need to post my thoughts as well.
 

I’m not a programmer myself, but an avid user of raw converters. I came here as an unhappy user of Lightroom (quality-wise).
 

To begin with, I would like to restate two concepts already mentioned, but, to my knowledge, sometimes not used properly:

  • radiometrically correct: applied to a picture, this means that the pixel values in the demosaiced image are the same as, or at least proportional to, those captured by the photosites of the camera sensor. E.g., if a photosite has received 4 times the signal (photons) of its neighbours, the demosaiced image values must respect that proportion, e.g. 100 vs 400, or 0.211 vs 0.844.
  • linear data values: mostly referred to as linear gamma. If the sensor receives a given amount of signal that is stored as value 20, it has to receive double the signal to be stored as value 40. To me that is linearity, and that is how sensors work. We are not talking here (yet) about black point compensation, white balance, clipping or anything else: a value x has simply received half the light of a value 2x. Likewise, in a demosaiced linear image, a value that doubles another one must represent double the intensity (hue, lightness, …).

The problem is that our eyes don’t work that way, so we have to play tricks to show the sensor data in a way that our eyes understand, or find pleasing. In this sense, a gamma-encoded image doesn’t respect the sensor linearity (obviously, since it was designed precisely for that), and a value that doubles another one no longer represents double the intensity.
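As a small illustration (my own sketch, using a simple power-law gamma of 2.2 rather than the exact sRGB curve), doubling a linear value doubles the light, but after gamma encoding that proportionality is gone:

```python
# Assumption for illustration: a plain power-law gamma of 2.2,
# not the piecewise sRGB transfer function.
GAMMA = 2.2

def encode(linear):        # linear -> gamma-encoded, values in [0, 1]
    return linear ** (1.0 / GAMMA)

def decode(encoded):       # gamma-encoded -> linear
    return encoded ** GAMMA

linear_a = 0.2
linear_b = 0.4             # exactly twice the light of linear_a

enc_a = encode(linear_a)
enc_b = encode(linear_b)

print(enc_b / enc_a)       # ≈1.37, not 2: the encoded values are no
                           # longer proportional to the captured light
print(decode(2 * enc_a))   # ≈0.92: doubling the *encoded* value
                           # corresponds to ~4.6x the light, not 2x
```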
 

Now to the point of a linear workflow: the data within the pipeline should always stay linear, just as it was captured by the sensor.
 

It’s easy to find web pages explaining that the sensor captures light in a linear way:

Now we can go to Elle Stone’s website and read about linearity:

And just a note about whether we even need gamma correction at all, in an era of 32 bits/channel, floating-point precision raw engines:

But now let’s look at something that could lead to misconceptions: if you linearly modify the values of every pixel in the demosaiced image, you end up with a modified linear-gamma image. You’re not gamma encoding the image, just changing its pixel values. You started in linear gamma and ended in linear gamma (with modified values).
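A minimal sketch of that point, with hypothetical pixel values: a purely linear operation (say, a +1 EV exposure gain, i.e. multiplying by 2) keeps the data linear, because every pixel ratio is preserved:

```python
# Hypothetical linear pixel values, purely for illustration.
pixels = [0.05, 0.10, 0.40]

# A linear modification: double every value (+1 EV exposure gain).
gained = [2.0 * p for p in pixels]

# Ratios between pixels are preserved -> the data is still linear,
# still proportional to the captured light, just with new values.
assert gained[1] / gained[0] == pixels[1] / pixels[0]

print(gained)  # [0.1, 0.2, 0.8] — modified, but still linear-gamma data
```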
 

Again, this is not how our eyes work, so the developers have to think carefully about whether a tool works better in a linear fashion (doubling intensities when doubling values) or on a gamma-encoded version of the image (more perceptually uniform), or even whether the tool should run earlier or later than its current position in the pipeline.

I won’t dare to say that such a decision is easy, or that coding the tools is a piece of cake. It’s just that the developers are the only ones with the power to decide how an image gets tweaked (I’m not talking about setting the sliders, but about the algorithm itself), so they have to carefully weigh whether it should be done linearly or non-linearly, to prevent artifacts.
 

In fact, I think that working with tools that need gamma-encoded values is pretty simple: you take the linear data from the pipeline, gamma encode it, modify that data with the tool, and send the resulting values back after decoding them to linear gamma. All in all, it is just a matter of raising a value to the power x, and then raising the tool-modified value to the power 1/x (in the simplest scenario).
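That roundtrip can be sketched like this (a simple power-law gamma is assumed, and `tool` stands for any hypothetical operation that prefers gamma-encoded data):

```python
# Assumption: plain power-law gamma of 2.2, for illustration only.
GAMMA = 2.2

def run_gamma_space_tool(linear_value, tool):
    encoded = linear_value ** (1.0 / GAMMA)   # linear -> gamma
    modified = tool(encoded)                  # tool works on gamma data
    return modified ** GAMMA                  # gamma -> back to linear

# E.g. a simple brightness lift applied in the gamma domain.
result = run_gamma_space_tool(0.18, lambda v: min(1.0, v + 0.1))
print(result)  # ≈0.28 — a linear-gamma value, ready for the next stage
```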

Obviously the returned modified-but-linear value won’t ever be radiometrically correct, but to my knowledge that’s not the purpose of a raw processing app. I seek a beautiful image that is the essence of what I felt when I took it, not an image that exactly, clinically depicts what was in front of me the moment I clicked. I only need a radiometrically correct translation of the image at the beginning of the process (maybe just after demosaicing).
 

If we don’t send the modified values back to the pipeline in linear gamma (even though they are no longer radiometrically correct), the next tool in the pipeline may well work better with linear data, yet actually receive gamma-encoded data, thus producing artifacts.

If we go on mixing tools that need gamma-encoded data with tools that work better with linear data, with one tool sending its results to the next, what we end up with is more and more artifacts (undesirable hues, halos, strange luminances, …).
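A classic example of such an artifact (again my own sketch, simple power-law gamma assumed): averaging two pixels, which is the core of any blur or resize, gives a noticeably darker result when a tool meant for linear data is fed gamma-encoded values:

```python
# Why mixing spaces causes artifacts: averaging a hard black/white edge
# in linear space vs. in gamma space gives two different results.
GAMMA = 2.2
enc = lambda v: v ** (1.0 / GAMMA)   # linear -> gamma
dec = lambda v: v ** GAMMA           # gamma -> linear

black, white = 0.0, 1.0              # a hard edge between two linear pixels

linear_avg = (black + white) / 2             # 0.5: half the light, correct
gamma_avg = dec((enc(black) + enc(white)) / 2)  # averaged on gamma data

print(linear_avg)  # 0.5
print(gamma_avg)   # ≈0.22 — much darker than the true midpoint,
                   # the classic darkened-edge gamma-blending artifact
```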
 

Hope all of this makes sense, because if FOSS apps get to work like that, they will be waaaaay ahead of commercial products.
