A few months ago, I started tracking down the source of noise I saw in some shots. I’ve since found that there is “headroom” in the histogram of many of these photos, so I’d like to try ETTR: adding exposure compensation when the in-camera metering leaves extra room to the right, so that I capture more light at exposure time and improve the SNR. When reviewing photos, I’d like to know whether I’m understanding correctly what I could improve in future, similar shots.
For example, this shot seems to have some room at the right of the histogram:
So I turned on the exposure module and increased the exposure until the midtones look like they match the gray background in darktable (which is roughly 18% gray):
If I had taken the shot at this exposure in-camera, with +2.38EV, would those highlights have been clipped as seen above before filmic (and therefore I wouldn’t want to raise the exposure that much)? If so, how would I expose to get more light for the trees? I believe my camera (Canon 50D) has about 11EV of dynamic range at this ISO (100), so would I benefit from a camera with a greater dynamic range (e.g. 13EV at ISO 100 on the 80D)?
Try opening the file in RawTherapee or Filmulator to view the actual amount of raw headroom in the raw histogram.
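If you prefer to check it programmatically, here is a minimal sketch of the same idea, assuming Python with rawpy and numpy installed (the file name is just a placeholder):

```python
# Estimate how many stops of headroom the raw data leaves below clipping.
import numpy as np
import rawpy

with rawpy.imread("IMG_1234.CR2") as raw:            # placeholder file name
    data = raw.raw_image_visible.astype(np.float64)
    black = float(np.mean(raw.black_level_per_channel))
    white = float(raw.white_level)
    # Use a high percentile rather than the absolute max so a few hot
    # pixels or tiny speculars don't hide the real headroom.
    brightest = np.percentile(data, 99.99) - black
    headroom_ev = np.log2((white - black) / max(brightest, 1.0))
    print(f"Approximate raw headroom: {headroom_ev:.2f} EV")
```

The percentile and the averaging of the per-channel black levels are my own simplifications; RawTherapee’s raw histogram gives you the same answer visually.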
A shot like that doesn’t demand immense dynamic range; it’s mainly an issue in backlit scenes, and with careful management of exposure your camera will easily suffice.
My take: if you had used +2.38EV in camera, the highlights would almost certainly have clipped permanently, meaning filmic would not have been able to bring them back. So your ETTR is good.
With exposure and filmic turned off, turn on your clipping indicators. Is there any clipping in the blacks [when black clipping indicator = 0]? If so, you would have benefited from a camera with wider dynamic range. If not, then you successfully captured all the dynamic range of the scene. Other than the process you have followed, another way to get more light on the trees is exposure bracketing.
The difficulty I have with ETTR is judging the headroom while taking the picture: all you get there is the histogram for the camera JPEG, which doesn’t have all that much to do with the raw data. Have a look at the different base curve presets, which aim to mimic the in-camera processing, and compare e.g. Sony-like with Leica-like (or the different Canon curves) to get an idea of what happens…
And you most certainly want to avoid clipping in your raw data. For me, avoiding that is worth a bit more noise in the shadows (not all that much of a problem anyway with a modern sensor). Clipping can be an issue with certain subjects, where I find I have to expose less than the camera indicates to avoid it (red or yellow flowers in particular, but also theatre scenes where the director went overboard with the blue light…)
So, yes, ETTR is good, but be sure you don’t over-expose. Bracketing might be indicated for critical shots, not so much to combine several brackets, but to make sure you don’t clip at the right.
On a Canon it’s said to be better to select the neutral or faithful picture styles to get a more accurate histogram, although it still won’t be perfect.
I can’t find the exact site I saw this on (from memory it was a Canon site), but this link says something similar.
That’s why some shoot with UniWB (an in-camera white balance setting that uses 1 as the multiplier for all components), accompanied by a flat in-camera profile (e.g. Neutral for my Nikon DSLR), with sharpening, contrast, and saturation at minimum values.
E.g. GUILLERMO LUIJK >> TUTORIALS >> UNIWB. MAKE CAMERA DISPLAY RELIABLE
And, in general, a Google search for “uniwb”.
ETTR is another way of saying “expose for the highlights”, so you can just choose the brightest part of the scene you are capturing and use spot metering to expose for that. Use exposure lock and re-frame the shot as necessary to capture the scene you want. This should ensure that you are always maximizing use of your sensor’s headroom at the brightest end.
You’ll want to ignore specular highlights or any highlights you don’t mind blowing. This will often result in a much darker picture than you want, but you can bring the shadows back up in post. For scenes with a high dynamic range, this will probably introduce noise in the shadows, but you won’t have blown your highlights and I find it’s usually the best trade-off.
In my experience, it’s harder to use EV compensation for successful ETTR. There’s more guesswork involved than taking a spot reading of the brightest part of the scene (often the sky).
Edit: as @rvietor correctly mentions below, you still need EV compensation for spot metering highlights, but you might be able to get away with setting it to a constant amount, which varies between cameras.
You’ll still have to use EV compensation, or your bright areas will end up at middle gray (that’s how the meter is calibrated; it doesn’t know what you point it at). But indeed, it should be a more or less constant correction for a given camera model.
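To put a rough number on that “constant correction”, here is a back-of-the-envelope sketch; the 0.12 figure below is purely an illustrative assumption, not a measured value for any particular camera:

```python
from math import log2

# One way to find the constant for your body: take a test shot where a
# spot-metered highlight lands at middle gray, open the raw file, and note
# what fraction of raw full scale (after black subtraction) that highlight
# actually reached. 0.12 here is just an example figure.
highlight_fraction_of_full_scale = 0.12
safety_margin_ev = 0.3   # held back for white balance / per-channel clipping

ev_comp = log2(1.0 / highlight_fraction_of_full_scale) - safety_margin_ev
print(f"Dial in roughly +{ev_comp:.1f} EV when spot-metering that highlight")
```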
The DR at the input of filmic is the DR of the image corrected for dark current and encoded with black = 0. So it’s a virtually infinite DR and has nothing to do with actual scene luminance.
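Spelled out (my notation, not taken from the pipeline code): the encoding’s nominal dynamic range is

$$\mathrm{DR}_{\mathrm{EV}} = \log_2\!\frac{\text{white level}}{\text{black level}} \;\longrightarrow\; \infty \quad \text{as the black level} \to 0,$$

so the figure at filmic’s input says nothing about the scene’s actual contrast.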
UniWB is a useless trick to keep the geeks happy, since you still look at a JPEG histogram baked with an artistic tone curve, and it makes your JPEG previews illegible, colour-wise. Besides, as long as you have one non-clipped channel, it’s easier to copy the details from that one to the clipped ones than to try to denoise the dark parts.
Then maybe I misunderstand the whole point about ETTR? My understanding is that one tries to maximise the signal-to-noise ratio, by trying to expose the picture as bright as possible while still avoiding saturating the sensor.
To produce a pixel in the output JPG, a bunch of transformations are done inside the camera, analogous to darktable’s pipeline: e.g. white balance, demosaicing, and the camera space to working space to sRGB transformations, with more (curves etc.) in between.
UniWB, then, tries to eliminate one step, white balance, which applies multipliers (from the ‘trying to judge sensor-level exposure from the histogram’ point of view, a major distortion) that may change (‘corrupt’) signal levels by 1 EV or more. It’s not perfect, but I’d think it reduces the difference. I may be mistaken here, of course. I did not claim it’s perfect (that’s why I suggested using an in-camera curve as flat as possible, even though Nikon’s ‘neutral’ curve is not really neutral, either; and even a perfectly flat curve would not avoid the other steps in the pipeline).
I don’t use and don’t advocate the use of UniWB (tried it, found it too much of a hassle). Still, it’s a technique that exists, and some find it useful, even with its limitations.
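To see how big that white-balance distortion actually is for a given shot, a small sketch (again assuming rawpy; the file name is a placeholder):

```python
# Express the camera's as-shot white-balance multipliers in EV.
# A multiplier of 2.0 lifts that channel by exactly 1 EV in the preview,
# which is the distortion UniWB tries to remove from the histogram.
import numpy as np
import rawpy

with rawpy.imread("IMG_1234.CR2") as raw:             # placeholder file name
    wb = np.array(raw.camera_whitebalance[:3], dtype=np.float64)
    wb /= wb[1]                                        # normalise to green
    for name, mult in zip("RGB", wb):
        print(f"{name}: x{mult:.2f} -> {np.log2(mult):+.2f} EV in the preview")
```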
I was referring to camera sensor photographic dynamic range, which OP mentioned at the end of the first comment, wondering if they need a newer camera (80D vs 50D).
Not the theoretical dynamic range representable in the pipeline. Not sure where you got that.
As for UniWB, it’s not entirely useless. You shouldn’t care about the shape of the histogram, only whether one, two, or all three channels are clipped.
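That per-channel check can also be done directly on the raw file; a sketch under the same rawpy assumption (the threshold and file name are mine):

```python
# Count clipped photosites per Bayer channel; for ETTR purposes this is
# the only question the histogram really needs to answer.
import numpy as np
import rawpy

with rawpy.imread("IMG_1234.CR2") as raw:              # placeholder file name
    data = raw.raw_image_visible
    colors = raw.raw_colors_visible                    # per-photosite channel index
    labels = raw.color_desc.decode()                   # e.g. "RGBG"
    threshold = raw.white_level - 16                   # small slack below saturation
    for idx in np.unique(colors):
        mask = colors == idx
        clipped = np.count_nonzero(data[mask] >= threshold)
        print(f"{labels[idx]}: {clipped} photosites at or above {threshold}")
```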
As you are using a Canon EOS 50D you could install Magic Lantern. https://magiclantern.fm/
Magic Lantern is a software enhancement that offers increased functionality to Canon DSLR cameras. Magic Lantern created an open framework, licensed under GPL, for developing extensions to the official firmware.
Magic Lantern is not a “hack”, or modified firmware; it is an independent program that runs alongside Canon’s own software. Each time you start your camera, Magic Lantern is loaded from your memory card.
This way you can use the (auto) ETTR function. → (Auto) ETTR (Exposure to the Right): -- History & Beginners Guide
If you want to increase the dynamic range of your camera, you can use “Dual ISO”. → Dual ISO - massive dynamic range improvement (dual_iso.mo)
Ah, I saw a filmic screenshot and answered based on that. Sorry.
Why not shoot previews in monochrome directly, then? Or derive a systematic exposure bias from the highest WB coefficient?
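The second idea boils down to one number; a quick sketch with an illustrative (not measured) coefficient:

```python
from math import log2

# Assume a daylight red multiplier of about 2.2 (ballpark, not measured).
# The preview applies it before building the histogram, so that channel
# looks roughly log2(2.2) EV closer to clipping there than it is in the raw.
highest_wb_coeff = 2.2
bias_ev = log2(highest_wb_coeff)
print(f"Systematic bias implied by the preview histogram: ~{bias_ev:.1f} EV")
```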
Messing with preview WB will mess with hues, and since perceived brightness depends on chromaticity, it will mess with contrast and brightness too, degrading the legibility of the preview. I’m not convinced that it brings any benefit overall.
Hmm. It depends on what you shoot and in what setting. Some clipping is manageable; I will definitely trade that for a more legible preview and code a specialized recovery filter later if needed.
I agree completely. Instead of fussing with wonky UniWB, we need camera manufacturers to provide raw histograms for raw images. It’s not like that’s difficult in the slightest.