The Myth of Digital Reds, or, How to Properly Expose to the Right

The myth of digital reds and the concept of highlight headroom

I’ve seen it repeated many times on the internet that “reds are difficult to capture on digital”, and I’d like to dispel that notion.

On this old page, Bill Claff, a generally trustworthy source of objective data nowadays, notes that with this extremely intense red, he needed to underexpose this image of a flower by two stops (that is, use -2 stops of exposure compensation) in order to properly capture its color. He says he based this on the RGB histogram provided by the camera.

While I’m sure Bill is aware of proper technique now, it remains a fact that cameras continue to mislead their users. In this situation, the camera’s sensor is almost certainly not clipping the data for the red channel. By scaring users into underexposing, the camera gathers less image data relative to the noise, reducing the achievable image quality.

In this article I will go into why this is and how you can change how you shoot in order to optimize your image quality.

White Balance

Many of you may be familiar with white balance correction: adjusting a setting in your editing program to ensure that whites stay white. However, you may not be aware that instead of starting with an image that is already white under daylight, most cameras start with an image that is extremely green: when exposed to white light, the red and blue channels record one to two stops darker than green.

What does this mean? Let’s get into the concept of highlight “headroom”.

Highlight Headroom

It’s commonly known that when you shoot raw, you can recover highlights that were clipped in the JPEG provided by the camera, thus extending your dynamic range.

This headroom manifests in several ways:

  1. Highlight reconstruction: a clipped color’s value is extrapolated from the raw values of the color channels that weren’t clipped. This can work very well or poorly, depending on the subject and the degree of clipping.
  2. True headroom: the color channels were never actually clipped in the raw data, so no guessing is involved. This is a lossless process.

We’ll focus on the second type, because it always behaves gracefully.

Effect of White Balance on Highlight Headroom

Because the red and blue channels are darker than the green, they rarely clip. This means that things like the blue sky and red or orange flowers can be significantly clipped on your camera’s histogram and still not be clipped on the sensor. Here’s an example of a photo where the JPEG suggests that the red channel is badly overexposed.

In the camera’s histogram, the photo looks like this:

But when you view the raw histogram, which you can find in a program like RawTherapee, you can see that since the entire right half of the raw histogram is empty, you have in fact a whole stop of highlight headroom.


The white balance multiplier for red for my camera in this situation is roughly 2, which corresponds almost exactly with the one stop of headroom.
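As a quick sanity check of that correspondence, here is a sketch using the approximate multiplier of 2 quoted above (the exact value is camera-specific):

```python
import math

# Approximate daylight red multiplier from the text; exact values vary by camera.
red_multiplier = 2.0

# Each doubling of a channel's white balance multiplier corresponds to
# one stop of lossless highlight headroom on that channel.
headroom_stops = math.log2(red_multiplier)
print(f"{headroom_stops:.2f} stops of red headroom")  # 1.00 stops of red headroom
```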

The result I got after processing turned out like this, with no lost color information:

You must watch out, however, when your white balance is set to something other than daylight. Under incandescent light, the red channel will be gathering a lot more light in proportion to the other colors, and the blue channel less. Thus, if you’re using a custom WB, a preset tungsten WB, or auto WB in warm light, you may find that the red channel has a very low gain.

Likewise with cool light: the blue channel gathers a lot more light than green and especially red, so it might have a gain similar to or even lower than the green channel’s.

What does this mean?

On the lowest gain channel, whether that be red or blue, you don’t have the lossless highlight recovery that you do in daylight white balance.

How to Expose Properly

So how should you expose in order to maximize your image quality while avoiding clipping?

Low Light (incandescent)

In low light, which I would define as any situation where you must raise the ISO to achieve hand-holdable ETTR, it’s not necessary to expose to the right. Just pick a high-enough ISO and expose as brightly as you’re comfortable. Often there are point light sources in frame, so you will inevitably have to let those clip fully.

Bright light (especially natural light)

Whenever you have enough light to expose to the right at base ISO without risking blur or sacrificing too much DOF, then you should expose to the right. The question is: how do you do this consistently? Ideally, you have a raw histogram or raw zebras which show you where the sensor is clipped. But only Phase One cameras (and Magic Lantern on Canons) give you a raw histogram.

Another once-popular method of checking exposure more accurately is to generate a UniWB profile: essentially a custom WB, created with a feedback loop, that sets all the white balance multipliers equal to 1… but this makes all your JPEGs extremely green, and it is really not an option at all on mirrorless cameras, where it interferes with composing.

What I do is permanently leave my cameras on a fixed daylight white balance, except when shooting in dim incandescent light, where I use a custom WB.

This means that except in extremely colored LED lighting, which brings its own problems (with colors that are out of gamut for reasonable color spaces), the green channel is almost guaranteed to clip first.

So if you never let the green channel clip, then you are pretty much always safe.


To obtain maximum image quality with ETTR, use daylight white balance and base your ETTR on the green histogram alone; the other color channels should then almost never clip.

By doing this you get the most out of your camera’s dynamic range.
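The “never let green clip” rule could be sketched as a check over the raw mosaic. This is a toy illustration, not any camera’s actual firmware; it assumes an RGGB Bayer layout with raw values normalized so that 1.0 is sensor saturation:

```python
# Toy sketch of the "watch only the green channel" rule; not real firmware.
# Assumes an RGGB Bayer layout and raw values normalized so 1.0 = saturation.

def green_clipped(bayer_rows, white_level=1.0):
    """Return True if any green photosite in an RGGB mosaic is saturated."""
    for r, row in enumerate(bayer_rows):
        for c, value in enumerate(row):
            # In RGGB, green photosites sit where row + column is odd.
            if (r + c) % 2 == 1 and value >= white_level:
                return True
    return False

# 2x2 tile: the red photosite is blown (1.0), but both greens are safe.
tile = [[1.0, 0.8],
        [0.7, 0.5]]
print(green_clipped(tile))  # False -> by this rule, the exposure is still safe
```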


Wow, this is a great write up! Thank you for taking the time.

Although I find this explanation really useful, I’m not sure I understand it in full. Probably I’m a bit dense right now…

If I’m right, those sentences mean that the chosen white balance sets the appropriate channel gain to level all 3 channels and give the most realistic image possible. That is, not too red, not too blue.

And by lowering the captured signal of a channel, it removes the ability to properly recover highlights (I think about it as something like posterizing the channel).

But what I don’t get is the underlying idea that the white balance DOES CHANGE the values recorded in the raw image. I always thought that those raw values were, erm…, raw. Unmodified, straight values taken from the sensels themselves, and that the white balance was applied AFTER the raw image had been recorded into a file. That is, as I have always heard: the in-camera white balance can always be changed in post-processing, and neither will have any impact on the raw data.

Am I wrong? What am I missing?


White balance adjusts the three channels so that (contextually) white light, which does not register equally on all sensor channels, will output as white.

White balance isn’t changing the raw values, white balance is adjusting for actually different light.

For example, hypothetically this is the proportion of sensor excitation when each kind of “white light” hits the sensor:

Daylight (neutral)

Incandescent (warm light, lots of red, little blue)

Shade (cool light, lots of blue, little red)

For white to look white in the output, you want each channel to be equally bright.

In daylight, this sensor would need to use multipliers of 2, 1, and 2 for red, green, and blue, respectively.
In incandescent light, the sensor would need 1, 1.2, and 4.
In shade, the sensor would need 3, 1, and 1.

When the histograms are generated with those multipliers, that means you have highlight headroom beyond the right side of the histogram as denoted by the underscores:

Daylight:     RRRRRR______ x2.0 -> RRRRRRRRRRRR____________
              BBBBBB______ x2.0 -> BBBBBBBBBBBB____________

Incandescent: BBB_________ x4.0 -> BBBBBBBBBBBB____________________________________

Shade:        RRRR________ x3.0 -> RRRRRRRRRRRR________________________

The white balance you set isn’t changing what you capture, it’s changing how much headroom you have over the histogram.

The white balance of the light itself changes what you capture.
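Using the hypothetical multipliers above, each channel’s lossless headroom in stops is just log2 of its multiplier:

```python
import math

# Hypothetical multipliers from the example above, as (R, G, B).
multipliers = {
    "daylight":     (2.0, 1.0, 2.0),
    "incandescent": (1.0, 1.2, 4.0),
    "shade":        (3.0, 1.0, 1.0),
}

for light, channels in multipliers.items():
    stops = tuple(round(math.log2(m), 2) for m in channels)
    print(light, stops)
# daylight     (1.0, 0.0, 1.0)  -> one stop hidden on red and blue
# incandescent (0.0, 0.26, 2.0) -> red has none; it clips first
# shade        (1.58, 0.0, 0.0) -> only red has headroom
```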


White balance doesn’t typically change all three channels, it changes red and blue, and leaves green alone. Find the multipliers in one of your images’ metadata, and you’ll find the green number is 1.000. As a multiplier, 1.000 doesn’t change anything. So, green values survive the white balance operation in the path to the JPEG image upon which the camera histogram is based.

So, the green channel in the camera histogram can be considered approximately raw, if your selected camera profile is ‘neutral’. This lets one use the green channel histogram to set the ETTR exposure, no UniWB or spot metering pet tricks needed. @CarVac, that’s a mechanic’s take on what you wrote; does it fit?


Yes: the green channel histogram is pretty close to raw, and in most lighting the green channel will clip first, so the green channel histogram alone is enough to ETTR with.


That’s true. However, the histogram shown by the camera is (sadly) not of raw values, but raw values adjusted by the white balance. Hence it is not a good guide of whether channels have clipped.


On the other hand, one of the channels is more likely to clip than the others, and if you set your white balance appropriately, you can use the JPEG histogram to judge clipping in that channel only.

…the other spanner thrown into the works is the camera’s color matrix, which makes the green histogram depend to some extent on the red and blue channels. That’s the bigger issue.


When you talk about incandescent and warm light with lots of red, are you applying this to warm daylight (eg. with a sunset or sunrise), or only talking about artificial light?

I do tend to find it hard to capture natural reds and oranges at sunset, even though shooting raw. Not completely sure if it’s often clipping the reds, or my processing skills.

Thank you all for your answers!

I clearly missed the «camera histogram» part. Now I understand it all. Thanks again! :smile:

Well, the sample image I used in this post was shot in natural light.

However, when I spoke of incandescent, I meant the following:

When natural light is super warm, I generally want to preserve it. When artificial light is super warm, I generally want to neutralize it.

For natural light, I leave white balance on Daylight, and expose for the green channel.

In incandescent light, I use a custom WB (to save work later), and expose for both the green and the red channel (because I can’t know ahead of time which has headroom and which doesn’t).

Open up RawTherapee and click on the button third from bottom next to the histogram to view the raw histogram, and you can find out if it’s your capturing or your processing.


a) What white balance does:
You need to know your sensor’s color matrix to understand white balance. Assuming your camera has a Bayer matrix, the sensor colors are distributed over blocks of four pixels: one red, one blue, and *two* green pixels.

As a result of this your camera sensor records more green than blue or red.
An image without applied white balance has got a strong greencast.
You can check that by opening a raw file in your raw developer and disabling the white balance.

The purpose of white balance is to compensate for this color tint. It does so by multiplying the values of certain color channels; in most cases the red and blue channels are amplified.

b) The histograms in your camera lie when shooting RAW
The camera shows a histogram based on a JPG, not the RAW data. That JPG has the white balance applied to it.

‘Applied white balance’ means that the histogram is based on amplified sensor data.

The resulting problem: you take a photo and see clipped data in the histogram of the red channel. This data is clipped in the JPG. But what about the red channel in the RAW data?
All you know is that the data is somewhere lower than in the JPG. But you don’t know whether the RAW data itself is clipped or not.
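To make that ambiguity concrete with a toy example (the 2× red multiplier here is just an assumed value, not any particular camera’s):

```python
# Toy example: with an assumed red WB multiplier of 2, two very different
# raw values produce the same clipped value in the JPEG histogram.
red_multiplier = 2.0

def jpeg_value(raw):
    # Apply white balance, then clip at the JPEG maximum (normalized to 1.0).
    return min(raw * red_multiplier, 1.0)

print(jpeg_value(0.55))  # 1.0 -> raw was actually well below saturation
print(jpeg_value(1.00))  # 1.0 -> raw genuinely clipped
# Both look identical on the camera histogram; only the raw file can tell.
```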

But there is a way to get a histogram that comes very close to the RAW data: it’s called ‘UniWB’. This is a white balance setting that ‘nullifies’ the camera’s white balance and has to be calibrated for your camera model.
When shooting a picture with this white balance setting, you see the picture with a strong green tint, but the histogram is almost true to the RAW data.
You can change the white balance later and get a ‘normal’ looking picture.

More explanation about UniWB:


A raw photo isn’t green because there are two green pixels. It’s green because each of the green pixels admits more light than the red or blue pixels.

I mentioned UniWB, but it’s a) annoying to fix WB in every single shot and b) unnecessary as long as you know that the green channel is going to be the first to clip in most circumstances.

UniWB also, in practical use, affects demosaicing quality because as far as I know, most raw editors do the pre-demosaic white balancing with the camera multipliers and correct otherwise afterwards. So you will get increased mazing and aliasing if you use UniWB.

So I don’t recommend it basically ever.


Thanks for the hint that raw developers do white balancing before demosaicing. I checked the darktable manual for info about that, and page 43 says:

But at any time, the processing took the raw image, adjusted white balance on it, then

UniWB could still be used to work out the ideal settings for a scene; for taking photos afterwards, one could switch to a ‘normal’ white balance.
But since this is time-consuming and more complex, it wouldn’t be usable for most of my photos.

So there is no perfect solution for the histogram when shooting raw (unless camera manufacturers implement raw histograms but I don’t think they will do that).

I think that in my everyday photos, when in doubt, I’ll keep my focus on the green channel even when red and/or blue look overexposed in the histogram. The exceptions are when there are strong red or blue colors in the frame. I’ve had many overexposed reds when shooting flowers, especially red ones…


I personally find this approach to be reasonably easy and working well (when you have time to follow it – in fact most of my shots are “capture the moment” types, for which I rely completely on the standard matrix metering of the camera to be honest…):


That is what I do. Know how much headroom your camera has and then offset the viewfinder meter. I often forget to meter the shadows and mid-tones though.

I think this statement is not correct. Having twice as many green pixels as red or blue ones does not change the colour distribution in an image. Just view the red, green, and blue pixels as three separate images.

Of course, the image without demosaicing will look greenish, due to the larger number of green pixels. But this has nothing to do with white balance!
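As a toy illustration of that “three separate images” view, using a made-up 4×4 RGGB mosaic:

```python
# Splitting an RGGB Bayer mosaic into three per-channel sample lists
# (toy 4x4 example with arbitrary made-up values).
mosaic = [
    [10, 20, 11, 21],
    [22, 30, 23, 31],
    [12, 24, 13, 25],
    [26, 32, 27, 33],
]

red, green, blue = [], [], []
for r, row in enumerate(mosaic):
    for c, v in enumerate(row):
        if r % 2 == 0 and c % 2 == 0:
            red.append(v)    # R sites: even row, even column
        elif r % 2 == 1 and c % 2 == 1:
            blue.append(v)   # B sites: odd row, odd column
        else:
            green.append(v)  # the remaining half of the sites are green

print(len(red), len(green), len(blue))  # 4 8 4: twice as many green samples
```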



Right. You probably already get it, but since I can’t be sure, I’ll review it quickly. The most common “Bayer interpolation” sensors in most cameras have alternating rows of red-then-green pixels, with rows of green-then-blue pixels below them, something like this:
R.G.R.G.R.G.R.G.R.G.R.G.R.G.R.G
G.B.G.B.G.B.G.B.G.B.G.B.G.B.G.B

The camera then looks at these pixels in sets (as above) of one red, one blue, and the greens between them, and from each set generates the values for one resulting pixel: its red value is the red pixel’s value from the sensor, its blue value corresponds to the blue pixel from the sensor, and its green value is based on the two green pixels on the sensor.

The reason there are two green pixels on the sensor is that they are used for more than just the green colour value of the resulting pixel. Human sight is supposedly more sensitive to green light, so the value of the two green pixels is also used to define the luminosity of the resulting pixel.

So, as weird as it sounds, every resulting pixel (more or less, since I may not be 100% accurate on this) in a JPG produced from a digital sensor is based on the red, green, and blue colour values as well as a luminosity dictated by the two green pixels. These get combined to give the actual R, G, B values of each pixel. The process by which a raw editor does this is the demosaicing algorithm, such as RawTherapee’s AMaZE (good for low-level detail when ISO is low) or LMMSE (good for high-ISO images where we want to keep the noise down).

That was a lot of words to explain why a raw file will, most of the time, have more green information stored in it than red or blue. So basically: set your camera to daylight white balance and look at the green histogram when you take a picture to check whether you clip or not. If green isn’t clipped, your likelihood of clipping red and blue is much lower, due to the stuff discussed above. :slight_smile:


The luminosity is merely what we perceive. The red, green, and blue are all that come off the sensor (slightly transformed by the color space conversion matrix).

99% of the time, I shoot outdoors. At times, it is full daylight, sometimes overcast, and sometimes outdoor night scenes.

I use the Custom White Balance in my Olympus E-M5, set to 6000K, and the white balance module in darktable is also set to 6000K. I have never been dissatisfied with the colour balance.

I didn’t come up with this on my own but read it somewhere. However, I’ve also seen a suggestion to use 5400K instead, as it is closer to daylight. It’s basically the same thing: both camera and darktable set to 5400K.

Have I been lucky or is this a reasonable approach?