ETTR and how it relates to the scene-referred workflow and filmic

Hmm. Depends what you shoot and in what setting. Some clipping is manageable, I will definitely trade that for a more legible preview and code a specialized recovery filter later if needed :smiley:

I agree completely. Instead of fussing with wonky UniWB we need the camera manufacturers to provide raw histograms for raw images. It’s not like that’s difficult in the slightest.


@anon41087856, are you teasing some dev idea here?!.. I don't understand the nuances…

Yes, I have R&D going on to duplicate texture from valid channels to clipped ones.


That kind of noise is to be expected with those histograms. In the first you are well underexposed, using barely half the dynamic range available to the sensor. In the second you have ETTR (albeit perhaps with some clipping?), using the full dynamic range, which gives much better SNR. Keep in mind you will always capture some noise. But its visibility is determined not by how much noise you capture so much as by the ratio between noise and light. That's why SNR (signal-to-noise ratio) is important. The more light you capture, the better the ratio and the less visible the noise. The first example has a poorer SNR than the second, which is why the noise is much more visible.
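The shot-noise part of this can be sketched numerically. Photon shot noise follows Poisson statistics, so SNR grows as the square root of the captured light; a minimal Python sketch with illustrative, made-up photon counts:

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Photon shot noise is Poisson: noise = sqrt(signal),
    so SNR = signal / sqrt(signal) = sqrt(signal)."""
    return math.sqrt(photons)

# Two stops less light (1/4 the photons) only halves the SNR,
# but half the SNR makes the noise clearly more visible.
print(shot_noise_snr(40000) / shot_noise_snr(10000))  # 2.0
```

This is why underexposing by a couple of stops and pushing in post looks so much noisier than exposing to the right in the first place.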

Roger Clark @ www.clarkvision.com has a lot of excellent articles on the topic if you wish to learn more.


Thanks for the clarification. Looking back at the original picture I posted at the beginning, the RAW histogram in RawTherapee is:
[screenshot: RawTherapee raw histogram]

What I want to do is look at some of my past shots and think about what I would do differently in future similar situations using ETTR. So for that picture, when I increased the exposure by 2.38 EV as seen in the screenshots, there was a lot of clipping. Even if I increase it by something smaller, like 0.75 EV, there is still a lot of clipping in the sky (with the upper clipping threshold set to 100%):

So does this mean if I were shooting this same shot again, I shouldn’t set exposure compensation more than ~0.5 EV when capturing the shot so as to avoid this sky clipping (which would be difficult to reconstruct)? For reference, I’ve been following this guide and 0.5 EV seems like a small amount of compensation, but maybe that makes sense given the dynamic range of this scene?
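As a sanity check, that headroom figure is just a base-2 log of the ratio between the camera's white level and the brightest raw value in the frame. A small Python sketch, using a hypothetical 14-bit white level and a made-up brightest-pixel value:

```python
import math

def headroom_ev(brightest_raw: float, white_level: float) -> float:
    """EVs of exposure compensation left before the brightest raw value clips."""
    return math.log2(white_level / brightest_raw)

# Hypothetical numbers: a 14-bit white level of 16383 and a brightest
# sky photosite at 11585 leave about half a stop of safe headroom.
print(round(headroom_ev(11585, 16383), 2))  # 0.5
```

So a ~0.5 EV limit is entirely plausible for a scene whose brightest areas already sit close to saturation.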

I think you are confusing two things here:

  • what you show is clipping in the final image, which is easily corrected in the filmic module with no loss of detail in the brightest areas, provided the raw signal isn't clipped;
  • clipping in the raw signal means that the initial signal from the camera is saturated in at least one channel; in that case you have an irreversible loss of detail in the areas concerned (as long as only one or two channels are clipped you can recover some grayscale detail, but the colour information is lost).

The first kind of clipping is shown in dt with solid colors (the 'slashed square' icon), while raw clipping is shown with a "bayer" matrix for the channels concerned (and activated with the 'bayer' icon).
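The raw kind of clipping is easy to check for yourself outside darktable too. A minimal sketch in Python/NumPy, assuming an RGGB Bayer layout and a made-up white level:

```python
import numpy as np

def raw_clipping_fraction(bayer: np.ndarray, white_level: int) -> dict:
    """Fraction of saturated photosites per Bayer channel (RGGB layout assumed)."""
    offsets = {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}
    return {name: float(np.mean(bayer[dy::2, dx::2] >= white_level))
            for name, (dy, dx) in offsets.items()}

# Synthetic 4x4 mosaic where only the red photosites are saturated:
mosaic = np.full((4, 4), 1000, dtype=np.uint16)
mosaic[0::2, 0::2] = 16383  # clip every red photosite
print(raw_clipping_fraction(mosaic, white_level=16383))
# {'R': 1.0, 'G1': 0.0, 'G2': 0.0, 'B': 0.0}
```

With only red clipped, the colour in those areas is gone for good, but some grayscale detail can still be reconstructed from the surviving green and blue photosites, which is exactly the one-or-two-channels case described above.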

The aim of ETTR is to avoid raw clipping before development while gathering as much signal as possible to limit noise; it has nothing to do with clipping after development (which is an artistic choice).

(As I understood it, UniWB pushes that principle even further, by filtering the incoming light so all sensor channels are used to the maximum, never mind that the in-camera JPEG becomes worthless.)


To use ETTR properly, I needed the "zebra pattern" and some trial and error.

What you need to know is how many EVs you can still push once the zebra pattern starts to appear. It will be different for each camera model. In my case, when the zebras appear I know that I can still apply +2/3 EV without blowing the highlights.

No, UniWB (and any white balance setting) does not do anything to the raw sensor data. White balance is just 3 numbers added to the image metadata (and used while processing, either in-camera or in a raw developer).
What UniWB tries to help with is getting as close as possible to the maximum exposure without clipping. Since all its multipliers are 1, the histogram shown by the camera is closer to the real raw histogram than it is with, say, daylight white balance (where red may be multiplied by something like 2); at least that's the theory. What it cannot take into consideration is the camera matrix and the tone curve applied by the camera, so it's not a perfect tool. Some like it, others don't trust it.
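That theory is easy to illustrate: if the in-camera histogram is built after the WB multipliers are applied, a channel with a large multiplier appears to clip long before the raw data does. A toy Python/NumPy sketch with synthetic data and an illustrative red multiplier of 2:

```python
import numpy as np

def preview_clip_fraction(raw_channel, wb_multiplier, white_level=1.0):
    """Fraction of a channel that *looks* clipped in a histogram built
    after the white-balance multiplier has been applied."""
    return float(np.mean(raw_channel * wb_multiplier >= white_level))

red = np.linspace(0.0, 0.9, 1000)  # synthetic raw red channel, never saturated
print(preview_clip_fraction(red, 1.0))  # UniWB-style multiplier 1: matches the raw
print(preview_clip_fraction(red, 2.0))  # daylight-style red x2: large fake clipping
```

With multiplier 1 the preview histogram reports no clipping, just like the raw; with multiplier 2 nearly half of a perfectly safe channel shows as blown, which is exactly the misleading headroom ETTR shooters fight against.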

This difference makes sense to me - what I am trying to do by adjusting these images I already took with the exposure module is not to process them normally with the scene-referred workflow, but to try and see what I should do differently next time when shooting. I’m trying to get a sense for how much exposure compensation to use in future similar shots. As you said above:

This is what I’m trying to get better at by reviewing some past shots and seeing how much I can push exposure (while being aware that making adjustments in the exposure module will NOT result in raw clipping like when taking the shot); I want to get a better intuitive sense for how much headroom I have so I can hopefully get better at ETTR without clipping.

Overexpose an HDR scene in steps, spot-metering on the highlights. Investigate the raw files to see where they clip. Leave a bit of safety margin and shoot a few test scenes with the resulting overexposure / exposure compensation. Adjust to taste.

I'm much less systematic myself and find it impractical to be too close to overexposure. My camera does well with shadow recovery, so I tend to shoot with a flat profile and use the blinkies to check my guesstimated settings.


+1. My camera has a highlight-weighted exposure mode, which I believe is JPEG-based given the headroom it usually leaves. Still, I usually end up using that with the intent to pull shadows out of the well in post. I tried dialing in +EV in the camera, but some high-DR scenes don’t respond well to that.

ETTR bolloxes the whole middle-gray exposure system that camera manufacturers present to the vast majority of users who don’t comprehend the details of how their images are recorded. And, I think providing an alternate system is problematic because, if you anchor the exposure to the high end, there’s no good automated way to characterize the low end in all situations for a responsive tone curve. At least bear-of-little-brain here can’t posit one… :laughing:

And you don’t always want the exposure anchored to the high end. I wouldn’t expect a lamp in the scene to be within the properly exposed range; just let it clip, since getting it into a usable range would a) look unnatural and b) cause too many headaches later on. But how’s the poor camera supposed to know that kind of thing?


Hi,

It would be nice if spot metering was anchored to the saturation point instead of mid gray though. That would make ettr very easy, and completely under the user’s control…


Hmmm, couldn’t you just dial in +2.3EV and use spot mode? Need to think that through…

Yes, something like that. But spot mode is different on different cameras, so there’s still some guesswork to do…
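For reference, the gap between a mid-grey anchor and the saturation point is simple to compute if you assume the classic ~18% grey metering target (real cameras differ, which is exactly why the guesswork remains):

```python
import math

# A spot meter places the metered patch at middle grey; assuming the
# common 18% target, anchoring that same patch to the saturation point
# instead means dialling in log2(1 / 0.18) additional EV.
offset_ev = math.log2(1.0 / 0.18)
print(round(offset_ev, 1))  # 2.5
```

That lands close to the +2.3 EV figure mentioned above; the exact value depends on where a given camera really places middle grey and how much highlight headroom its JPEG engine reserves.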


I have never anchored anything to middle grey :smiley: I’ve always just fitted the scene dynamic range into the camera’s dynamic range as well as I could. The base curve then ends up asymmetric, but that’s fine. I’ve never really met anyone taking middle grey too seriously until this board. People just expose to get the data they need; by the third day of shooting it’s obvious.

But then my “subject” tends to be everything in the frame, varying from the sun to the shadows, unless I pick the day and can get overcast weather, which is ideal.


The camera is calibrated to set an exposure based on a middle-gray scene; you don’t have to do that. That’s why it has trouble with excessively bright or dark scenes (even if they don’t have a high dynamic range).

Simplified, the reason you want to keep middle gray as middle gray is that it is the zone where your eyes see the most detail and contrast. It became an issue here with the early versions of filmic, where you had to set the middle-grey reference value yourself. That value has a big influence on what happens to your shadows and highlights.

Filmic is a big deal, as it can handle your camera’s full dynamic range much better than the base curve can (or, in many cases, than your camera’s conversion to JPEG).

I tested Guillermo’s quick method for setting UniWB on my Oly E-1MkII and it created a perfect UniWB; all coefficients 1.0000. The quick method is to use a totally overexposed white area as the white balance reference. The Olympus has four custom WB slots, so for the time being I can keep it. Let’s see how much I will use it.

yes, this is so neat and easy