The Myth of Digital Reds, or, How to Properly Expose to the Right

Thanks for the hint that raw developers do white balancing before demosaicing. I checked the darktable manual for information about that, and page 43 says:

But at any time, the processing took the raw image, adjusted white balance on it, then
demosaic…

UniWB could still be used to find the ideal settings for a scene; after that, one could switch to a ‘normal’ white balance for the actual photos.
But since this is time-consuming and more complex, it wouldn’t be practical for most of my photos.

So there is no perfect solution for the histogram when shooting raw (unless camera manufacturers implement raw histograms, but I don’t think they will).

I think that for my everyday photos I will keep my focus on the green channel when in doubt, even if red and/or blue look overexposed in the histogram. The exceptions are scenes with strong red or blue colors in the frame. I have had many overexposed reds when shooting flowers, especially red ones…


I personally find this approach to be reasonably easy and to work well (when you have time to follow it – in fact most of my shots are “capture the moment” types, for which, to be honest, I rely completely on the camera’s standard matrix metering…):


That is what I do. Know how much headroom your camera has and then offset the viewfinder meter. I often forget to meter the shadows and mid-tones though.

I think this statement is not correct. Having twice as many green pixels as red or blue ones does not change the colour distribution in an image. Just view the red, green and blue pixels as three separate images.

Of course, the image without demosaicing will look greenish, due to the larger number of green pixels. But this has nothing to do with white balance!
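
To make the “three separate images” view concrete, here is a minimal numpy sketch (a synthetic RGGB mosaic is assumed, not a real raw file):

```python
import numpy as np

# A synthetic RGGB Bayer mosaic: one value per photosite,
# R at (even, even), G at (even, odd) and (odd, even), B at (odd, odd).
mosaic = np.random.rand(6, 6)

red   = np.full_like(mosaic, np.nan)
green = np.full_like(mosaic, np.nan)
blue  = np.full_like(mosaic, np.nan)

red[0::2, 0::2]   = mosaic[0::2, 0::2]
green[0::2, 1::2] = mosaic[0::2, 1::2]
green[1::2, 0::2] = mosaic[1::2, 0::2]
blue[1::2, 1::2]  = mosaic[1::2, 1::2]

# Green simply has twice as many samples; the statistics of each
# "separate image" are not biased by that.
print(np.nanmean(red), np.nanmean(green), np.nanmean(blue))
```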

Hermann-Josef


Right. You probably already get it, but since I cannot be sure, I’ll review it quickly again. The most common “Bayer pattern” sensors in most cameras have alternating rows of red-then-green pixels, and below them green-then-blue pixels… something like this:
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B

What then happens is that the camera looks at them in sets (as above) of one red, one blue and the greens in between, and from each set generates the values for one resulting pixel: the pixel’s red value comes from the red sensor pixel, its blue value from the blue sensor pixel, and its green value is based on the two green sensor pixels.
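
In code, the naive per-set version of this might look like the following sketch (RGGB layout assumed; real demosaicing algorithms interpolate across neighbouring sets instead of collapsing each 2x2 block to one pixel):

```python
import numpy as np

def naive_demosaic(mosaic):
    """Collapse each RGGB 2x2 set into one RGB pixel (half resolution)."""
    r = mosaic[0::2, 0::2]                               # the red photosite
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average of the two greens
    b = mosaic[1::2, 1::2]                               # the blue photosite
    return np.dstack([r, g, b])

mosaic = np.random.rand(6, 6)
print(naive_demosaic(mosaic).shape)  # (3, 3, 3)
```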

The reason there are two green pixels on the sensor is that they are used for more than just the green colour value of the resulting pixel. Human vision is supposedly more sensitive to green light than to red or blue, and therefore the value of the two green pixels is also used to define the luminosity of the resulting pixel.

So, as weird as it sounds, every resulting pixel (more or less, because I may not be 100% accurate on this) in a JPG produced from a digital sensor is based on red, green and blue colour values as well as a luminosity dictated by the two green pixels. These get combined to give the actual R, G, B values of each pixel. The process by which a raw editor does this is the demosaicing algorithm, such as RawTherapee’s AMaZE (good for fine detail when ISO is low) or LMMSE (good for high-ISO images where we want to keep the noise down).

That was a lot of words to explain why a raw file will, most of the time, have more green information stored in it than red or blue. So basically: set your camera to daylight white balance and look at your green histogram when you take a picture to check whether you clip or not. If the green channel isn’t clipped, your likelihood of clipping red and blue is much lower, due to the stuff discussed above. :slight_smile:
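
If you want to check this on your own files, a sketch like the one below counts how many photosites of each raw channel sit at the clipping point (this assumes the third-party rawpy library and a hypothetical file name):

```python
import numpy as np
import rawpy  # third-party libraw wrapper: pip install rawpy

with rawpy.imread("IMG_0001.CR2") as raw:  # hypothetical file name
    mosaic = raw.raw_image_visible
    colors = raw.raw_colors_visible        # per-photosite channel index
    white = raw.white_level                # saturation value of the sensor

    # Index 0 = R, 1 and 3 = the two greens, 2 = B for typical Bayer layouts.
    for name, idx in (("R", (0,)), ("G", (1, 3)), ("B", (2,))):
        channel = mosaic[np.isin(colors, idx)]
        clipped = np.mean(channel >= white)
        print(f"{name}: {clipped:.3%} of photosites clipped")
```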


The luminosity is merely what we perceive. The red, green, and blue are all that come off the sensor (slightly transformed by the color space conversion matrix).

99% of the time, I shoot outdoors: sometimes in full daylight, sometimes overcast, and sometimes outdoor night scenes.

I use the Custom White Balance in my Olympus E-M5, set to 6000K, and the white balance module in darktable is also set to 6000K. I have never been dissatisfied with the colour balance.

I didn’t come up with this on my own but read it somewhere. However, I’ve also seen a suggestion to use 5400K instead, as it is closer to daylight, but it’s basically the same thing: both camera and darktable set to 5400K.

Have I been lucky or is this a reasonable approach?

Thanks for this @CarVac. Re.

this is bad news, as I often use UniWB.

@heckflosse, how seriously do you view the mazing / aliasing increase, please? Is there, theoretically, a way for RT to do something better regarding the demosaic / WB relationship?

afaik RawTherapee doesn’t work that way (it uses an auto white balance specifically for demosaicing, iirc), so you are safe

Exactly :slight_smile:

Photoflow performs the demosaicing with the user-selected WB multipliers. If you change the WB settings, the demosaicing is re-done :wink:

Having no experience with raw images from a digital camera, I just wonder what white balance has to do with de-mosaicing?

De-mosaicing is meant to fill in the missing values for the three channels in a Bayer-matrix based detector by interpolation (how exactly this is done depends on the algorithm employed).

White balance, in one of its common applications (grey world), makes the average red, green and blue signals equal to one another (Hunt 2004, page 561), which can be done without de-mosaicing.
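
Indeed, a grey-world balance needs no de-mosaicing at all; a minimal sketch, assuming an RGGB layout on synthetic data:

```python
import numpy as np

def grey_world_gains(mosaic):
    """Per-channel gains that equalise the channel means of an RGGB mosaic,
    computed directly on the photosites, without any de-mosaicing."""
    r_mean = mosaic[0::2, 0::2].mean()
    g_mean = (mosaic[0::2, 1::2].mean() + mosaic[1::2, 0::2].mean()) / 2.0
    b_mean = mosaic[1::2, 1::2].mean()
    # Normalise so that green keeps a gain of 1.0 (the usual convention).
    return g_mean / r_mean, 1.0, g_mean / b_mean

print(grey_world_gains(np.random.rand(6, 6)))
```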

So here I am completely lost in the above discussion.

Hermann-Josef

@agriggio, @heckflosse, thanks, and good news.

I don’t get the point where UniWB affects the raw data. I would use it when taking a picture with my camera.
Later, in my raw developer, I choose a different white balance (5000K for example), but not UniWB.

Demosaicing almost always works on multiple channels to fill in the missing data, not just e.g. the blue channel to fill in missing blue pixels. If you first rescale the individual RGB channels according to some white balance multipliers and then interpolate the missing data, you can get very different results. The order matters.
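
A tiny numeric illustration of why the order matters, using a Hamilton-Adams-style green estimate (one of the classic cross-channel interpolation schemes; all values and multipliers below are made up):

```python
def green_at_red(g_l, g_r, r_c, r_l, r_r):
    """Estimate green at a red photosite: average of the neighbouring
    greens plus a Laplacian correction taken from the red channel.
    The correction term is where the channels mix."""
    return (g_l + g_r) / 2.0 + (2.0 * r_c - r_l - r_r) / 4.0

wb_r, wb_g = 2.0, 1.0             # example white-balance multipliers
g_l, g_r = 100.0, 110.0           # neighbouring green photosites
r_c, r_l, r_r = 80.0, 60.0, 70.0  # red photosite and its red neighbours

# White balance first, then interpolate:
a = green_at_red(wb_g * g_l, wb_g * g_r, wb_r * r_c, wb_r * r_l, wb_r * r_r)
# Interpolate first, then white-balance the green result:
b = wb_g * green_at_red(g_l, g_r, r_c, r_l, r_r)
print(a, b)  # 120.0 vs 112.5: the order changes the result
```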


The order is not my problem; what I don’t understand is the part where UniWB leads to problems with a raw file.

@jossie: your image is a bit misleading as a description of white balance and demosaicing.

First the raw developer amplifies the blue and red channels according to the values from the white balance. Since the sensor itself cannot detect colors, everything is still grey at this point.

After that the colors of the image are calculated.
How that works can be explained with the following image:

Let’s assume we analyse a pixel in the area of the biker’s pullover that will turn out to be red later.

We look at the surrounding pixels with green, blue and red overlays:
The blue pixels are dark
The green pixels are dark
The red pixels are bright
→ This means that our pixel must be red

With this logic we go through all the pixels in our image and calculate their colors.
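
In toy numpy form, that neighbourhood logic looks something like this (RGGB layout and made-up values, with the centre on a red photosite in the “pullover” area):

```python
import numpy as np

# 3x3 neighbourhood around a red photosite: the orthogonal neighbours
# are green sites, the diagonal neighbours are blue sites.
patch = np.array([
    [10.0,  20.0, 10.0],   # B G B
    [20.0, 230.0, 20.0],   # G R G  (centre: the red photosite itself)
    [10.0,  20.0, 10.0],   # B G B
])

red = patch[1, 1]                                              # measured directly
green = (patch[0, 1] + patch[1, 0] + patch[1, 2] + patch[2, 1]) / 4.0
blue = (patch[0, 0] + patch[0, 2] + patch[2, 0] + patch[2, 2]) / 4.0

# Bright red, dark green and blue -> the pixel must be red.
print(red, green, blue)  # 230.0 20.0 10.0
```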

@Thanatomanic
Okay, thanks, this I understand. However, I do not understand why the other channels affect the interpolation for a given channel. Does one assume, e.g., a certain spectral energy distribution?

Hermann-Josef

Could you let me in on the secret of how to translate Kelvin/tint settings back into the camera color space?

(Or do you convert the color space before demosaicing?)

As far as I know this information is stored in the raw file as metadata that can be extracted with tools like dcraw or RawDigger.

I’ve tried back-converting the XYZ calculations for the Planckian locus to the raw color space and never succeeded.
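
For what it’s worth, the rough outline of that back-conversion looks like the sketch below. Heavy hedging applies: the CIE daylight-locus formula stands in for a proper Planckian computation, tint (the offset perpendicular to the locus) is ignored, and the XYZ-to-camera matrix is a made-up placeholder; in practice it would come from the camera profile (e.g. the DNG ColorMatrix tag).

```python
import numpy as np

def daylight_xy(cct):
    """CIE daylight-locus chromaticity for 4000K <= CCT <= 25000K."""
    t = float(cct)
    if 4000 <= t <= 7000:
        x = 0.244063 + 0.09911e3 / t + 2.9678e6 / t**2 - 4.6070e9 / t**3
    elif 7000 < t <= 25000:
        x = 0.237040 + 0.24748e3 / t + 1.9018e6 / t**2 - 2.0064e9 / t**3
    else:
        raise ValueError("CCT outside the daylight-locus range")
    y = 2.870 * x - 3.000 * x**2 - 0.275
    return x, y

def wb_multipliers(cct, xyz_to_cam):
    """Raw white-balance multipliers for a CCT, given an XYZ->camera matrix."""
    x, y = daylight_xy(cct)
    xyz = np.array([x / y, 1.0, (1.0 - x - y) / y])  # illuminant XYZ with Y = 1
    cam = xyz_to_cam @ xyz                           # illuminant as seen in raw RGB
    mul = 1.0 / cam
    return mul / mul[1]                              # normalise to green

# Made-up placeholder matrix; a real one comes from the camera profile.
xyz_to_cam = np.array([
    [ 0.9, -0.2, -0.1],
    [-0.4,  1.3,  0.1],
    [ 0.0, -0.3,  1.0],
])
print(wb_multipliers(6500, xyz_to_cam))  # roughly [1.88, 1.0, 1.31] here
```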