The Myth of Digital Reds, or, How to Properly Expose to the Right

Thanks for this @CarVac. Re:

this is bad news, as I often use UniWB!

@heckflosse, how seriously do you view the mazing / aliasing increase, please? Is there theoretically a way for RT to do something better regarding the demosaic / WB relationship?

afaik rawtherapee doesn’t work that way (it uses an auto white balance specific for demosaicing iirc), so you are safe

Exactly :slight_smile:

Photoflow performs the demosaicing with the user-selected WB multipliers. If you change the WB settings, the demosaicing is re-done :wink:

Having no experience with raw images from a digital camera, I just wonder what white balance has to do with de-mosaicing?

De-mosaicing is meant to fill in the missing values for the three channels in a Bayer-matrix based detector by interpolation (how exactly this is done depends on the algorithm employed).
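To make the interpolation idea concrete, here is a minimal sketch (not any particular raw developer's algorithm) of bilinear interpolation of just the green channel of an RGGB Bayer mosaic, in Python/NumPy; the layout and values are assumptions for illustration:

```python
import numpy as np

def interpolate_green(mosaic):
    """Bilinear demosaic of the green channel of an RGGB Bayer mosaic:
    each missing green site gets the mean of its green neighbours."""
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gmask = (yy + xx) % 2 == 1              # green sites in an RGGB layout
    g = np.where(gmask, mosaic, 0.0)
    gp = np.pad(g, 1)                       # zero-pad so edges see fewer neighbours
    mp = np.pad(gmask.astype(float), 1)
    # sum and count of the up/down/left/right green neighbours
    neigh_sum = gp[:-2, 1:-1] + gp[2:, 1:-1] + gp[1:-1, :-2] + gp[1:-1, 2:]
    neigh_cnt = mp[:-2, 1:-1] + mp[2:, 1:-1] + mp[1:-1, :-2] + mp[1:-1, 2:]
    return np.where(gmask, mosaic, neigh_sum / np.maximum(neigh_cnt, 1))

# a flat green field survives interpolation exactly
mosaic = np.full((6, 6), 0.5)
print(np.allclose(interpolate_green(mosaic), 0.5))  # True
```

Real demosaicers (AMaZE, etc.) are far more sophisticated; this only shows the "fill in missing samples from neighbours" step.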

White balance, in one of its common estimation methods (grey world), makes the average R, G, and B signals equal to one another (Hunt 2004, page 561), which can be done without de-mosaicing.
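The grey-world idea above can be sketched in a few lines of Python/NumPy (a toy illustration on an already-demosaiced RGB image, with made-up values):

```python
import numpy as np

def gray_world_multipliers(rgb):
    """Grey-world white balance: one gain per channel so that the
    per-channel means all equal the overall mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    return means.mean() / means

# toy image with a blue-ish cast
rgb = np.random.default_rng(0).random((4, 4, 3)) * [0.8, 1.0, 1.2]
gains = gray_world_multipliers(rgb)
balanced = rgb * gains

# after balancing, the three channel means coincide
print(np.allclose(balanced.reshape(-1, 3).mean(axis=0), balanced.mean()))  # True
```

Note this only computes the multipliers; whether they are applied before or after demosaicing is exactly the point debated below.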

So here I am completely lost in the above discussion.


@agriggio, @heckflosse, thanks, and good news.

I don’t see how UniWB affects the raw data. I would use it when taking a picture with my camera.
Later, in my raw developer, I choose a different white balance (5000 K, for example), not UniWB.

Demosaicing almost always works on multiple channels to fill in the missing data, not just e.g. the blue channel to fill missing blue pixels. If you first rescale the individual RGB channels according to some white balance multipliers and then interpolate the missing data, you get very different results than if you interpolate first. The order matters.
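A 1-D toy example (made-up values, not any real demosaicer) shows why the order matters once the interpolation uses more than one channel. Here the missing red value at a green site is estimated from the neighbouring colour differences R − G, a common cross-channel trick:

```python
# one green site flanked by two red sites (toy raw values)
g_here, g_left, g_right = 0.50, 0.40, 0.50
r_left, r_right = 0.20, 0.30
wb_red = 2.0                                 # red white-balance multiplier

def estimate_red(g0, gl, gr, rl, rr):
    # missing red = local green + mean of neighbouring (R - G) differences
    return g0 + ((rl - gl) + (rr - gr)) / 2

# white balance applied BEFORE interpolation
before = estimate_red(g_here, g_left, g_right, wb_red * r_left, wb_red * r_right)
# white balance applied AFTER interpolation
after = wb_red * estimate_red(g_here, g_left, g_right, r_left, r_right)

print(before, after)   # the two orders disagree
```

If the interpolation used only the red channel and were linear, the multiplier would commute with it; it is the cross-channel term that makes the order matter.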


The order is not what I fail to understand; it’s how UniWB leads to problems with a raw file.

@jossie: your image is a bit misleading as a description of white balance and demosaicing.

First the raw developer amplifies the blue and red channels according to the values from the white balance. Since the sensor cannot detect colors, everything is grey at this point.

After that, the colors of the image are calculated.
How that works can be explained with the following example.

Let’s assume we analyse a pixel in the area of the biker’s pullover that is considered to be red later.

We look at the surrounding pixels with green, blue and red overlays:
The blue pixels are dark
The green pixels are dark
The red pixels are bright
-> This means that our pixel must be red

With this logic we go through all the pixels in our image and calculate their colors.
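The "look at the neighbours and infer the colour" logic above can be written as a toy snippet in Python (all the raw values are made up for illustration):

```python
# raw values at the Bayer sites surrounding our pixel (invented numbers)
neighbours = {
    "red":   [0.82, 0.79, 0.85],   # the red sites are bright
    "green": [0.11, 0.13],         # the green sites are dark
    "blue":  [0.09, 0.10],         # the blue sites are dark
}

# average each channel's neighbours and pick the dominant one
means = {ch: sum(v) / len(v) for ch, v in neighbours.items()}
dominant = max(means, key=means.get)
print(dominant)   # red
```

Of course a real demosaicer assigns a full RGB triple per pixel rather than a single label, which is where the objections below come in.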

Okay, thanks, this I understand. However, I do not understand why the other channels affect the interpolation for a given channel. Does one assume, e.g., a certain spectral energy distribution?


Could you let me in on the secret of how to translate kelvin/tint settings back into the camera color space?

(Or do you convert the color space before demosaicing?)

As far as I know this information is stored in the RAW file as metadata that can be extracted with tools like dcraw or RawDigger.

I’ve tried back-converting the XYZ calculations for the Planckian locus to the raw color space and never succeeded.


It cannot be that simple, because it is not a question of red, green, or blue, but of how much red, etc. Thus, as I mentioned above, interpolation must be used. However, interpolation will produce colour defects around edges, so a more elaborate procedure must be employed.

What I do not see currently is how demosaicing of a given channel can be influenced by the other channels. For instance, the gradient can be quite different from channel to channel. So this information is not of much use. Perhaps around edges there can be a benefit from looking at the other channels. But I have to read the relevant papers to learn more about the more sophisticated implementations like AMaZE. Unfortunately I cannot find any reference to the work by Emil J. Martinec. I would be grateful for any hint where to find this.



I agree, since each pixel should have its constant individual colour. The graph was made for a different purpose and I just used it here.

The spatial information is “intertwined” in the color channels of the Bayer CFA. See e.g. some background here

Most demosaic methods try to recover something similar to “luma” first (a weighted sum of R, G, and B), but that also relies on white-balanced data for consistency, so the spectral power distribution of the illumination does play a role.

As the article above explains, if you had a purely grayscale (achromatic) scene and performed your white balance correctly, you could recover the full-resolution grayscale image without any need for demosaicing!
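This achromatic special case is easy to verify numerically. The sketch below (Python/NumPy, with invented per-channel sensitivities) simulates a grayscale scene sampled through an RGGB CFA; applying the exactly matching white-balance gain at each site recovers the full-resolution scene with no interpolation at all:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((6, 6))               # achromatic scene: R = G = B everywhere

# per-channel sensor sensitivities (made-up numbers)
sens = {"R": 0.5, "G": 1.0, "B": 0.7}

# build the RGGB Bayer pattern and sample the scene through it
yy, xx = np.mgrid[0:6, 0:6]
cfa = np.where((yy + xx) % 2 == 1, "G", np.where(yy % 2 == 0, "R", "B"))
mosaic = scene * np.vectorize(sens.get)(cfa)

# the "correct" white balance is the inverse sensitivity, applied per site
gains = np.vectorize(lambda c: 1.0 / sens[c])(cfa)
recovered = mosaic * gains

print(np.allclose(recovered, scene))     # True: full resolution, no demosaic
```

With any actual colour in the scene this identity breaks, which is why demosaicing (and its interaction with white balance) matters in practice.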

Thanks for the reference!