The Myth of Digital Reds, or, How to Properly Expose to the Right

Thank you all for your answers!

I clearly missed the «camera histogram» part. Now I understand it all. Thanks again! :smile:

Well, the sample image I used here was shot in natural light.

However, when I spoke of incandescent, I meant the following:

When natural light is super warm, I generally want to preserve it. When artificial light is super warm, I generally want to neutralize it.

For natural light, I leave white balance on Daylight, and expose for the green channel.

In incandescent light, I use a custom WB (to save work later), and expose for both the green and the red channel (because I can’t know ahead of time which has headroom and which doesn’t).

Open up RawTherapee and click the third button from the bottom next to the histogram to view the raw histogram; then you can find out whether it’s your capturing or your processing.

3 Likes

a) What whitebalance does:
You need to know your sensor’s color filter layout to understand white balance. Assuming your camera has a Bayer matrix, the sensor colors are arranged in groups of four pixels: one red, one blue, and two green pixels.

As a result, your camera sensor records more green than blue or red.
An image without white balance applied has a strong green cast.
You can check that by opening a raw file in your raw developer and disabling the white balance.

The purpose of white balance is to compensate for this color tint. To do so, it multiplies the values of certain color channels; in most cases the red and blue channels are amplified.
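To make that concrete, here is a minimal numpy sketch of white balance as a per-channel multiplication; the multiplier values are invented, not taken from any real camera:

```python
import numpy as np

# Sketch only: white balance as a per-channel multiplication.
# The multipliers are invented daylight-ish values.
wb_multipliers = {"R": 2.0, "G": 1.0, "B": 1.5}

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.5, size=(4, 4, 3))   # pretend demosaiced image, RGB in [0, 1]

balanced = img.copy()
balanced[..., 0] *= wb_multipliers["R"]   # amplify red
balanced[..., 2] *= wb_multipliers["B"]   # amplify blue
# Green stays at (or near) 1.0, so the green cast disappears relative to
# the now-brighter red and blue channels.

print("before:", img.mean(axis=(0, 1)), "after:", balanced.mean(axis=(0, 1)))
```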

b) The histograms in your camera lie when shooting RAW
The camera shows a histogram based on a JPG, not the RAW. That JPG has the white balance applied to it.

‘Applied white balance’ means that the histogram is based on amplified sensor data.

The problem resulting from this: you take a photo and see clipped data in the histogram of the red channel. This data is clipped in the JPG. But what about the red channel in the RAW data?
All you know is that the data is somewhere lower than in the JPG. But you don’t know if the RAW data itself is clipped or not.

But there is a way to get a histogram that comes very close to the RAW data: it’s called ‘UniWB’. This is a white balance setting that ‘nullifies’ the camera’s white balance and has to be calibrated for your camera model.
When shooting a picture with this white balance setting you see a strong green tint, but the histogram almost matches the RAW data.
You can change the white balance later and get a ‘normal’ looking picture.
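A small numeric sketch of why that matters for the histogram (all values invented): with daylight-style multipliers the red channel can look clipped even though the raw values are below saturation, while UniWB-style multipliers of roughly 1.0 leave the histogram close to the raw data.

```python
import numpy as np

# Sketch of why the in-camera histogram can report red clipping that is
# not present in the raw data. Values and multipliers are invented.
raw_red = np.array([0.30, 0.55, 0.70, 0.95])   # raw red values, 1.0 = sensor saturation
daylight_red_mult = 2.0                         # typical-ish red WB gain (assumption)
uniwb_red_mult = 1.0                            # UniWB: multiplier ~1, histogram ~ raw

jpg_red = np.clip(raw_red * daylight_red_mult, 0.0, 1.0)
uniwb_red = np.clip(raw_red * uniwb_red_mult, 0.0, 1.0)

print("raw clipped:  ", (raw_red >= 1.0).sum())     # 0 pixels clipped in the raw data
print("JPG clipped:  ", (jpg_red >= 1.0).sum())     # 3 pixels look clipped after WB
print("UniWB clipped:", (uniwb_red >= 1.0).sum())   # matches the raw data
```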

More explanation about UniWB:

https://blog.kasson.com/using-in-camera-histograms-for-ettr/preparing-for-monitor-based-uniwb/
http://www.guillermoluijk.com/tutorial/uniwb/index_en.htm

1 Like

A raw photo isn’t green because there are two green pixels. It’s green because each of the green pixels admits more light than the red or blue pixels.

I mentioned UniWB, but it’s a) annoying to fix WB in every single shot and b) unnecessary as long as you know that the green channel is going to be the first to clip in most circumstances.

UniWB also, in practical use, affects demosaicing quality: as far as I know, most raw editors do the pre-demosaic white balancing with the camera multipliers and apply any further correction afterwards. So you will get increased mazing and aliasing if you use UniWB.

So I don’t recommend it basically ever.

4 Likes

Thanks for the hint that raw developers do white balancing before demosaicing. I checked the darktable manual for info about that, and page 43 says:

But at any time, the processing took the raw image, adjusted white balance on it, then demosaic…

UniWB could still be used to find the ideal settings for a scene; for taking the photos after that, one could switch to a ‘normal’ white balance.
But since this is time-consuming and more complex, it wouldn’t be usable for most of my photos.

So there is no perfect solution for the histogram when shooting raw (unless camera manufacturers implement raw histograms but I don’t think they will do that).

I think that in my everyday photos I’ll keep the focus on the green channel when in doubt, even when red and/or blue look overexposed in the histogram. Exceptions are scenes with strong red or blue colors in the frame. I’ve had many overexposed reds when shooting flowers, especially red ones…

2 Likes

I personally find this approach to be reasonably easy and to work well (when you have time to follow it – in fact most of my shots are “capture the moment” types, for which I rely completely on the camera’s standard matrix metering, to be honest…):

1 Like

That is what I do. Know how much headroom your camera has and then offset the viewfinder meter. I often forget to meter the shadows and mid-tones though.

I think this statement is not correct. Having twice as many green pixels as red or blue ones does not change the colour distribution in an image. Just view the red, green and blue pixels as three separate images.

Of course, the image without demosaicing will look greenish, due to the larger number of green pixels. But this has nothing to do with white balance!

Hermann-Josef

2 Likes

Right. You probably already get it, but since I cannot be sure, I’ll review it quickly again. The most common “Bayer pattern” sensors in most cameras have alternating rows of red-then-green pixels, and below them rows of green-then-blue pixels… something like this:
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B

What then happens is that the camera looks at them in sets (as above) of one red pixel, one blue pixel, and the greens in between them, and from those it generates the values for one resulting pixel: the red value of the resulting pixel is the red pixel value from the sensor, the blue value corresponds to the blue pixel from the sensor, and the green value is based on the two green pixels on the sensor.

The reason there are two green pixels on the sensor is that they are used for more than just the green colour value of the resulting pixel. Human sight supposedly is more sensitive to green light, and therefore the value of the two green pixels is also used to define the luminosity of the resulting pixel.

So, as weird as it sounds, every resulting pixel (more or less, since I may not be 100% accurate on this) in a JPG produced from a digital sensor is based on a red, a green and a blue colour value, as well as a luminosity dictated by the two green pixels. These get combined to give you the actual R,G,B values of each pixel. The process by which a raw editor does this is the demosaicing algorithm, such as RawTherapee’s AMaZE (good for low-level detail when ISO is low) or LMMSE (good for high-ISO images where we want to keep the noise down).
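To illustrate the idea (not RawTherapee’s actual AMaZE or LMMSE code, just a crude bilinear sketch in numpy/scipy), here is how the missing colour values can be filled in from neighbours on the RGGB layout shown above:

```python
import numpy as np
from scipy.ndimage import convolve

# Toy demosaic sketch: each sensor pixel records only one colour, and the
# two missing colours are interpolated from neighbours. The 6x6 RGGB layout
# matches the diagram above.
def bayer_masks(h, w):
    r = np.zeros((h, w), bool); g = np.zeros((h, w), bool); b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True          # R on even rows, even columns
    g[0::2, 1::2] = True          # G next to R
    g[1::2, 0::2] = True          # G next to B
    b[1::2, 1::2] = True          # B on odd rows, odd columns
    return r, g, b

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(6, 6, 3))          # pretend full-colour scene
r_m, g_m, b_m = bayer_masks(6, 6)
mosaic = scene[..., 0] * r_m + scene[..., 1] * g_m + scene[..., 2] * b_m

kernel = np.ones((3, 3))
demosaiced = np.zeros_like(scene)
for ch, mask in enumerate((r_m, g_m, b_m)):
    # Normalised convolution: average the known samples of this channel
    # that fall inside each 3x3 neighbourhood.
    num = convolve(mosaic * mask, kernel, mode="mirror")
    den = convolve(mask.astype(float), kernel, mode="mirror")
    demosaiced[..., ch] = num / den

print(np.abs(demosaiced - scene).mean())   # crude, but every pixel now has R, G and B
```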

That was a lot of words to explain why a raw file will, most of the time, have more green information stored in it than red or blue. So basically set your camera to daylight white balance and look at the green histogram when you take a picture to check whether you clip or not. If the green channel isn’t clipped, then your likelihood of clipping red and blue is way lower, due to the stuff discussed above. :slight_smile:

1 Like

The luminosity is merely what we perceive. The red, green, and blue are all that come off the sensor (slightly transformed by the color space conversion matrix).

99% of the time, I shoot outdoors. At times, it is full daylight, sometimes overcast, and sometimes outdoor night scenes.

I use the Custom White Balance in my Olympus E-M5, set to 6000K, and the white balance module in darktable is also set to 6000K. I have never been dissatisfied with the colour balance.

I didn’t come up with this on my own but read it somewhere. However, I’ve also seen a suggestion to use 5400K instead, as it is closer to daylight, but it’s basically the same thing: both camera and darktable set to 5400K.

Have I been lucky or is this a reasonable approach?

Thanks for this @CarVac. Re.

this is bad news, as I often use UniWB.

@heckflosse, how seriously do you view the mazing / aliasing increase, please? Is there theoretically a way for RT to do something better re. the demosaic / WB relationship?

afaik rawtherapee doesn’t work that way (it uses an auto white balance specific for demosaicing iirc), so you are safe

Exactly :slight_smile:

Photoflow performs the demosaicing with the user-selected WB multipliers. If you change the WB settings, the demosaicing is re-done :wink:

Having no experience with raw images from a digital camera, I just wonder what white balance has to do with de-mosaicing?

De-mosaicing is meant to fill in the missing values for the three channels in a Bayer-matrix based detector by interpolation (how exactly this is done depends on the algorithm employed):


White balance, in one of its common applications (grey world), makes the average RGB signals equal to one another (Hunt 2004, page 561), which can be done without de-mosaicing.
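As a quick illustration of the grey-world scaling just described (the test image and the choice to normalise to the green mean are made up for this sketch):

```python
import numpy as np

# Grey-world white balance: scale the channels so that their averages become
# equal. It works on a demosaiced RGB image, so it can indeed be done
# entirely after (or without) demosaicing.
def grey_world(img):
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means[1] / means                  # normalise to the green mean
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(2)
greenish = rng.uniform(0, 0.8, size=(8, 8, 3)) * np.array([0.5, 1.0, 0.6])
balanced = grey_world(greenish)
print(greenish.mean(axis=(0, 1)), balanced.mean(axis=(0, 1)))
```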

So here I am completely lost in the above discussion.

Hermann-Josef

@agriggio, @heckflosse, thanks, and good news.

I don’t get the point where UniWB affects the raw data. I would use it when taking a picture with my camera.
Later, in my raw developer, I would choose a different white balance (5000K for example), but not UniWB.

Demosaicing almost always works on multiple channels to fill in the missing data, not just e.g. the blue channel to fill missing blue pixels. If you first rescale the individual RGB channels according to some white balance multipliers, and then interpolate the missing data, you get very different results. The order matters.
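A tiny numeric sketch of that point (the interpolation rule and all numbers are invented, chosen only because the rule mixes channels): applying the white balance gains before or after a cross-channel interpolation step gives different results.

```python
import numpy as np

# "The order matters": a made-up gradient-corrected red estimate that also
# looks at green gives different answers depending on whether white balance
# gains are applied before or after it.
def estimate_red(r_neighbours, g_centre, g_at_neighbours):
    # missing red = average of red neighbours, corrected by the local green gradient
    return r_neighbours.mean() + (g_centre - g_at_neighbours.mean())

r_n = np.array([0.20, 0.24])          # raw red samples next to the pixel
g_c = 0.60                            # raw green at the pixel itself
g_n = np.array([0.50, 0.52])          # raw green at the red sample sites
wb_r, wb_g = 2.0, 1.0                 # invented white balance gains

wb_first = estimate_red(r_n * wb_r, g_c * wb_g, g_n * wb_g)
wb_after = estimate_red(r_n, g_c, g_n) * wb_r

print(wb_first, wb_after)             # ~0.53 vs ~0.62 -> different interpolated red
```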

2 Likes

The order is not what I’m having trouble understanding; it’s the part where UniWB leads to problems with a raw file.

@jossie: your image is a bit misleading as a description of white balance and demosaicing.

First the raw developer amplifies the blue and red channels according to the values from the white balance. Since the sensor cannot detect colors, everything is grey at this point.

After that, the colors of the image are calculated.
How that works can be explained with the following image:

Let’s assume we analyse a pixel in the area of the biker’s pullover that will be considered red later.

We look at the surrounding pixels with green, blue and red overlays:
The blue pixels are dark
The green pixels are dark
The red pixels are bright
→ This means that our pixel must be red

With this logic we go through all the pixels in our image and calculate their colors.
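A rough numpy sketch of that neighbourhood logic (the values are invented to mimic the red pullover example; real demosaicing algorithms are far more sophisticated):

```python
import numpy as np

# Look at the raw values of the red, green and blue sites around one
# location and see which channel dominates.
neighbourhood = {
    "R": np.array([0.82, 0.79, 0.85, 0.80]),   # red sites are bright
    "G": np.array([0.12, 0.10, 0.14, 0.11]),   # green sites are dark
    "B": np.array([0.08, 0.09, 0.07, 0.10]),   # blue sites are dark
}

averages = {ch: v.mean() for ch, v in neighbourhood.items()}
dominant = max(averages, key=averages.get)
print(averages, "-> this pixel is probably", dominant)   # -> probably R (red)
```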