ETTR and how it relates to the scene-referred workflow and filmic

The camera's meter is calibrated to set an exposure that renders the scene as middle gray; you don't have to do it that way. That's also why it has trouble with unusually bright or dark scenes (even if they don't have a high dynamic range).

Simplified, the reason you want to keep that middle gray as middle gray is that it is the zone where your eyes see the most detail and contrast. This became an issue with the early versions of filmic, where you had to set the middle grey reference value yourself; that value has a big influence on what happens to your shadows and highlights.

Filmic is a big deal, as it can deal with your camera's full dynamic range much better than the base curve can (or, in many cases, than your camera's in-camera JPEG conversion).

I tested Guillermo’s quick method for setting UniWB on my Oly E-1MkII and it created a perfect UniWB: all coefficients 1.0000. The quick method is to use a totally overexposed white area as the reference for the white balance. Olympus provides four custom WB settings, so for the time being I can keep it. Let’s see how much I will use it.

yes, this is so neat and easy

I’d like Canon or someone to make a stills-devoted camera: no video, but features for stills instead, particularly for highlights. So you’d see the blinking highlights as now, then zoom in and still see the blinking; keep zooming; ignore this one because it’s specular; see, e.g., the bride’s shoulder burnt out; tap that to tell the camera to put it just inside the maximum on the next shot…

A raw histogram: whoever comes out with it first, I’m selling my gear and buying that.

3 Likes

I’m aware of the filmic complication and the fact that middle gray is part of metering theory.

I just found the whole filmic thing strange because of various assumptions it made: a symmetric, fixed curve. As explained above, in practice few people accept the camera metering as-is; they compensate to fit the dynamic range. Important detail might then reside in dark or bright areas at exposure time, as well as in the finished image. Middle grey might be fine with a bit flatter contrast because you need detail in highlights, etc.

  1. Turning the exposure up and having filmic bring it back down doesn’t tell you anything about headroom in the raw data, right? (the original question). It’s a floating-point pipeline: you could boost by 15 EV and bring it down afterwards.

  2. As far as I know, if the raw histogram is ‘somewhere in the right half’, boosting by 1 EV will already clip. A boost of 1 EV is a multiply by 2, right?

  3. Older (Canon) sensors could benefit from ETTR, but most modern sensors don’t really benefit, AFAIK. I always think it’s easier to ruin a shot with it than to gain something from it.

  4. Remember, noise comes from high ISO but also shot noise from fast shutter speeds. Exposing to the right by raising exposure often lowers ISO and/or slows the shutter speed. You decide for yourself which you think is worse: a chance of clipping or a chance of noise.
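
Points 1 and 2 can be sketched numerically. In a float pipeline an exposure boost is just a multiplication, so +1 EV doubles every value, and raw values already past half of full scale will clip. A toy example (the sample values and 14-bit full scale are made up for illustration):

```python
import numpy as np

# Toy 14-bit raw values (0..16383); anything at or above full scale clips.
FULL_SCALE = 16383
raw = np.array([2000, 6000, 9000, 15000], dtype=np.float64)

boosted = raw * 2.0          # +1 EV is exactly a multiply by 2
clipped = boosted >= FULL_SCALE

print(clipped.tolist())      # the two values in the "right half" clip
```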

Maybe it’s from playing around with film, maybe it’s being used to old Sony sensors that introduced chroma noise at almost all ISO settings… but I got used to little things like that, and I don’t think they’re a problem for the image most of the time.

Yes, exactly, this is what I was wondering: can I learn how I would change the exposure if shooting the same shot again in the future by seeing how boosting the exposure in the exposure module changes things? It sounds like this is not indicative of how much headroom is in the raw file.

The filmic look curve is not symmetrical, nor is it fixed. It is a parameterized curve divided up into 3 parts: a linear midtones piece, a highlights saturation curve and a shadows toe, modeled around the way film typically behaves. By restricting the degrees of freedom, it means you can very quickly arrive at a quite repeatable result. In special cases where more complex tone mapping is needed, tone equalizer is available. Otherwise, we optimise for the common case.
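
To illustrate the three-part idea (this is a toy model, not darktable's actual filmic implementation, and the toe/shoulder positions are made up), such a curve can be built from a quadratic toe, a linear midtone segment and a quadratic shoulder, joined with matching slopes:

```python
import numpy as np

def toy_filmic(x, toe=0.2, shoulder=0.8):
    """Toy three-part tone curve on [0, 1]: quadratic toe, linear middle,
    quadratic shoulder, with continuous value and slope at the joins.
    NOT darktable's filmic; parameters are illustrative only."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Midtone slope chosen so the curve maps 0 -> 0 and 1 -> 1 overall.
    s = 1.0 / (toe / 2 + (shoulder - toe) + (1 - shoulder) / 2)
    y_toe = s * toe / 2                  # value where the toe ends
    y_sh = y_toe + s * (shoulder - toe)  # value where the shoulder starts
    return np.where(
        x < toe,
        s * x**2 / (2 * toe),
        np.where(
            x < shoulder,
            y_toe + s * (x - toe),
            y_sh + s * (x - shoulder) - s * (x - shoulder) ** 2 / (2 * (1 - shoulder)),
        ),
    )
```

With only two join points and a slope to reason about, the same settings always produce the same shape, which is the repeatability argument in a nutshell.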

The other option is base curve. Here, instead of anchoring on middle grey, you anchor on the display white point by adjusting the exposure to place your histogram to the right and avoid clipping the highlights. Then you have many degrees of freedom in placing control points on the base curve, and you can make it any shape you like. But this freedom means that it is fiddly, and if you were to edit the same image from scratch on two different days, you would likely end up with a different result. It is this sort of fussiness and inconsistency that filmic’s parameterized tone curve is trying to avoid.

3 Likes

As you have gathered, boosting the exposure using the exposure module, then bringing it back down using the middle-grey slider in filmic, is just doing multiplication/division, and it doesn’t gain you anything. That’s why in the latest filmic the mid-grey slider is hidden: we just say to use the exposure module to match scene middle grey to the display’s middle grey, and then use the black/white points and look parameters in filmic to shape the rest.
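
The multiplication/division point can be seen directly: in a floating-point pipeline (no clamping between modules), boosting and then dividing back is a lossless round trip, so it reveals nothing about the raw data. The sample values below are invented:

```python
import numpy as np

scene = np.array([0.001, 0.18, 0.9, 2.5])  # scene-referred; values may exceed 1.0

boosted = scene * 2.0**3     # +3 EV in the exposure module
restored = boosted / 2.0**3  # brought back down (the old mid-grey shift)

print(np.allclose(scene, restored))  # True: the round trip changes nothing
```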

So, when we talk about ETTR, we mean you do it in camera in order to maximise the dynamic range of your sensor. However, how you adjust the exposure in camera also matters:

  • if you add light to the scene, this gives you more signal, and so a stronger signal-to-noise ratio.
  • if you open the aperture, you let in more light, which gives you a better signal-to-noise ratio, as long as you don’t clip.
  • if you lengthen the time the shutter is open, again you let in more light, giving you a better signal-to-noise ratio (the remark you make about “shot noise” depending on shutter speed is really just about the amount of light, or signal, you let into the camera).
  • if you adjust the ISO, then this may or may not be helpful. If you use an ISO-invariant sensor, then it is basically getting back to the game of playing with digital multiplication like in the exposure module. However, sometimes varying the ISO can really affect the response of the sensor, and this can improve the SNR. For example, in the Sony A7R4, once you hit an ISO around 320, this causes a sort of “preamp” to kick in, which can be helpful.

So, in short, the way to get the most benefit from ETTR in camera is by adding light or adjusting aperture or shutter speed. Exposing to the right by playing with ISO may not be so useful; it really depends on the characteristics of the sensor.
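
The light-is-signal argument follows from photon (shot) noise statistics: photon arrivals are Poisson-distributed, so for N captured photons the noise is √N and SNR = N/√N = √N. Letting in twice the light (one stop, via aperture or shutter) thus improves SNR by √2. A quick check, ignoring read noise (photon counts are arbitrary):

```python
import math

def shot_noise_snr(photons):
    # Poisson statistics: signal = N, noise = sqrt(N), so SNR = sqrt(N)
    return photons / math.sqrt(photons)

snr_1x = shot_noise_snr(10_000)  # baseline exposure
snr_2x = shot_noise_snr(20_000)  # +1 EV of actual light

print(snr_2x / snr_1x)           # ratio is sqrt(2), about 1.414
```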

6 Likes

The OP wanted something (in darktable) to see how much of the histogram the data is using. The easy answer has already been given: use another tool that displays a raw histogram.

While shooting film negatives I wanted to know how much I could raise the exposure without clipping the film base. I used dcraw back then, set to no white balance adjustment and the ‘raw’ color profile, to export to a linear TIFF. Open that in Photoshop and look at the channels (or the histogram for each channel).
AFAIK this is the closest you can get to ‘unaltered data with only demosaicing’.

This is also how I figured out different exposure settings for each of the channels, to get as much DR for each channel from a film scan.

… After all that, I just use my dedicated scanner again :)

Noise comes from a) photon/shot noise [random], and b) read noise (camera electronics) [patterned]. It doesn’t come from ISO. ISO is just an amplifier - it amplifies whatever is there. If whatever is there is a lot of noise, that’s what gets amplified. This is what happens in low light situations. Because there is low light, people boost ISO. Then they see images with a lot of noise and think ISO is to blame. But the real problem was not capturing enough light.
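
The amplifier point can be illustrated with a toy model (the numbers are made up): multiplying both signal and noise by the same gain leaves the SNR unchanged, so raising ISO doesn't create shot noise, it just amplifies whatever was captured:

```python
signal = 400.0         # electrons captured (toy number)
noise = signal ** 0.5  # shot noise: sqrt of the signal, here 20.0

for gain in (1.0, 4.0, 16.0):  # ISO boost modelled as a pure amplifier
    snr = (gain * signal) / (gain * noise)
    print(gain, snr)           # SNR stays the same regardless of gain
```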

1 Like

Yeah, yeah, I know; I was simplifying things.

Raising iso (often) raises the gain on the amplifier, boosting the signal but also boosting the noise that was already there.

My point still stands that a slower shutter speed (while keeping the same exposure value) also reduces noise, the shot noise (or ‘measurement accuracy’). And if an otherwise well-lit scene at low ISO values gives weird red specks, I’m thinking that’s the cause… or something weird in the demosaicing phase… or a really noisy sensor.

Yes, but not necessarily proportionally for both signal and noise, as you might think. For some sensors/cameras there is an advantage in shadow SNR when shooting at higher ISO, as @Matt_Maguire pointed out. Again, Emil Martinec provides an excellent explanation of ETTR misconceptions and ISO use here.

For the interested, you can check your camera’s behaviour at Bill Claff’s indispensable site. I’m linking an example for the A7R4 mentioned above, which indeed shows a step at ISO 320, giving you an extra half stop in the shadows if your highlights are not a priority.
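
That ISO step can be understood with a toy dual-gain model: shadow SNR is roughly N/√(N + r²), where r is the read noise in electrons, and if a second gain stage roughly halves r, deep shadows gain SNR. The read-noise and photon numbers below are invented for illustration, not measured A7R4 values:

```python
import math

def shadow_snr(photons, read_noise):
    # Shot noise sqrt(N) and read noise r, combined in quadrature.
    return photons / math.sqrt(photons + read_noise**2)

r_low_iso, r_high_iso = 6.0, 3.0  # hypothetical read noise below / at ISO 320+
deep_shadow = 25.0                # few photons: read noise dominates here

print(shadow_snr(deep_shadow, r_low_iso))   # worse SNR at base ISO
print(shadow_snr(deep_shadow, r_high_iso))  # better once the second gain kicks in
```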

I’m interested in running some tests of my own based on this discussion. Can someone tell me how I can review the raw histogram in darktable? Is it just a matter of selecting the original image in the history stack, or is there more to it than that? If it’s not possible, then what’s the best way outside of darktable?

Many thanks.

darktable doesn’t have a raw histogram display, but you can open the image in RawTherapee, which has an option to show the raw histogram.

2 Likes

You cannot. The histogram always shows the output of the pipe; even if you go back to the original image (at the bottom of the stack), white balance, demosaicing, the input colour profile and the output colour profile will be applied.

1 Like

If you really want to get a feel for how much headroom you have with your jpeg histogram, start bracketing your exposures in camera. It isn’t simple math, such as “always add 1.75 stops,” but if you start bracketing you’ll start to get a feel for how much you need to add.

2 Likes

My rawproc software shows the histogram of the tool selected for display. But, you can select any tool in the chain for display, including the opened raw file before any processing.

Keep in mind, though, that rawproc’s internal data is floating point in the range 0.0–1.0, and the ingested raw data is scaled in proportion to its container format from the file, unsigned 16-bit integer. So the max value of a 14-bit raw image, 16383, comes out to about 0.25 internally. Later, the user has to insert a tool to scale the data to fill the 0.0–1.0 range; either exposure compensation or blackwhitepoint will do the trick.
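
That scaling is easy to sketch (constants as described in the post; the function names are made up for illustration, not rawproc's actual API):

```python
RAW_CONTAINER_MAX = 65535  # unsigned 16-bit container
SENSOR_MAX_14BIT = 16383   # max value of a 14-bit raw image

def to_internal(value, container_max=RAW_CONTAINER_MAX):
    """Normalize an integer raw value into a 0.0-1.0 internal range."""
    return value / container_max

x = to_internal(SENSOR_MAX_14BIT)
print(x)  # about 0.25: 14-bit data only fills a quarter of the 16-bit range

# A later scaling step (e.g. blackwhitepoint) stretches it to fill 0.0-1.0:
print(x * (RAW_CONTAINER_MAX / SENSOR_MAX_14BIT))  # back to 1.0, up to rounding
```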

Really, RawDigger is probably the best tool for the job. You pay for it, but that compensation supports the open-source libraw project, which is probably the most comprehensive and current open source raw conversion library out there.

4 Likes

Thanks, that’s exactly what I’ve done, but what I needed was a way to assess the point at which I’ve clipped in raw. It sounds like RawTherapee will give me the means to make that rough estimate.

Thank you @garibaldi, @kofa, and @ggbutcher for your input and suggestions!