There are no underexposed digital pictures

There is this misconception in digital photography, inherited from analog photography, that photographs can be either properly exposed or not. And people don’t understand why they should tamper with exposure compensation later, in post-processing, when their shot was already “physically well-exposed”.

There is no such thing as a well-exposed / under-exposed / over-exposed picture in digital. There is only sensor clipping or no sensor clipping, that is all.

The case of film, the origin of the mistake

(Color) Film is an all-in-one package that does everything at once:

  • mid-tone boosting,
  • contrast massaging,
  • roll-on and roll-off of extreme luminances (smooth blending of the clipped and valid ranges),
  • gamut-mapping,
  • artistic hue shifting.

From https://www.kodak.com/uploadedfiles/motion/US_plugins_acrobat_en_motion_education_sensitometry_workbook.pdf, the sensitometry curves of negative color film are like this:

Beware, this is negative densitometry, so whites are at the bottom and blacks at the top of the density scale. The exposure axis reads as usual, though.

Because of the obvious non-linearity of these curves, you can’t freely multiply the exposure in post-processing by a factor and expect the whole dynamic range to scale smoothly without complaint. That is, you can’t just play with the exposure time under the enlarger and expect colours to follow. Over-exposure of film means washed, desaturated colours forever. Under-exposure means over-saturated colours and more prominent chroma noise.

So, there is, indeed, a “right” exposure in film, which consists in carefully anchoring the mid-tones in the middle of the latitude, and letting everything pivot around that particular value.

However, digital…

is recorded linearly. Meaning the digital pipeline does, step by step, what film does all at once, and the digital signal enters the pipeline linear. The great thing is that it lets you create your very own virtual film emulsion. In this context, software exposure compensation is only a harmless proportional scaling (provided clipping is properly handled somewhere in the software).
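To make that “harmless proportional scaling” concrete, here is a minimal sketch in Python/numpy (illustrative values, not any particular software’s code): on linear, scene-referred data, a compensation of n EV is just a multiplication by 2^n, and the whole range scales by the same factor, which is exactly what film’s non-linear curves forbid.

```python
import numpy as np

def exposure_compensation(rgb_linear, ev):
    """On linear data, exposure compensation is a pure gain: multiply by 2**ev."""
    return rgb_linear * (2.0 ** ev)

pixel = np.array([0.10, 0.10, 0.10])          # a linear pixel at 10% grey
print(exposure_compensation(pixel, 1.0))      # +1 EV simply doubles it: [0.2 0.2 0.2]

# Clipping still has to be handled later in the pipe; a hard clip is only
# the crudest possible placeholder for that step:
print(np.clip(exposure_compensation(pixel, 4.0), 0.0, 1.0))
```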

Think about a microphone. When you set the hardware gain too high, you might clip the track when the recorded sound becomes loud. That sounds awful, and makes the track pop and crash. So you might be tempted to set the physical gain to the minimum, just to be safe, and amplify the recording digitally later. But then, what you will hear is a lot of background noise (shhhhhhhhhhhhhhhhh), and that won’t be any better.

So the deal is to anchor the gain so the peak signal doesn’t clip the track, while still maximizing the signal/noise ratio (SNR). That is, anchoring the gain so the peak signal sits 3 to 10 dB below clipping, as a safety margin. Then, in post, you might normalize the signal to dynamically compress extreme variations around some average volume.

The hardware gain, in photography, is the combination of ISO sensitivity + aperture + shutter speed. But the principle remains the same: you don’t anchor the exposure for mid-tones, you anchor it to manage clipping and noise at both ends of the dynamic range. Then, in post, you normalize the signal to dynamically compress extreme variations around some average exposure.
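To picture that, here is a sketch (the 12-stop window and the scene values are made up): hardware exposure only decides where the sensor’s fixed window sits along the scene’s luminance axis.

```python
import numpy as np

scene_ev = np.array([-8.0, -4.0, 0.0, 3.0, 6.0])   # scene luminances, in EV around mid-grey
sensor_range_ev = 12.0                              # usable stops between noise floor and clipping

def capture(scene_ev, clip_point_ev):
    """Report which scene values clip, drown in noise, or are recorded cleanly."""
    noise_floor_ev = clip_point_ev - sensor_range_ev
    return ["clipped" if ev > clip_point_ev
            else "noise" if ev < noise_floor_ev
            else "ok"
            for ev in scene_ev]

print(capture(scene_ev, clip_point_ev=6.0))  # protects the +6 EV highlight, loses the -8 EV shadow
print(capture(scene_ev, clip_point_ev=3.0))  # give more exposure: the shadow is saved, the highlight clips
```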

(For you geeks, that sounds an awful lot like a variance minimization problem, doesn’t it?)

Ansel Adams developed the zone system back in the 1930s to deal with exactly this issue. While the circumstances have changed a bit, his formalism is still up-to-date.

And that average volume or exposure might need some scaling to end up in the middle of your output medium, for better clarity of the reproduction. Then, you roll in/roll off both extremes of the dynamic range around that average as much as you can, to squeeze in all those carefully recorded details.

So there is no over/under-exposure in digital, there are only clipping-optimization strategies for the whole dynamic range, and you decide later what is to be considered your average level, and how to compress the extremes around that level. Hardware exposure only slides the dynamic range window up or down along the luminance axis.

Besides, whatever you see on the back of your DSLR is a massaged JPEG file with some tonal correction already applied, so a “well-exposed” JPEG only means a “well-handled” raw-to-JPEG conversion in your camera firmware, and says nothing about the actual raw (non-)data.

Bottom line: having to push the software exposure compensation, even for seemingly “well-exposed” pictures, is totally fine. It’s just a maths trick to anchor the mid-tones where you want them. But remember that you need some clipping handling later in the pipe to compress things around that anchor.
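For instance, the maths trick can be as simple as this sketch (the 18% target and the measured value are illustrative):

```python
import numpy as np

def ev_to_anchor(measured_grey, target_grey=0.18):
    """Exposure compensation, in EV, that moves a measured average grey to the target."""
    return np.log2(target_grey / measured_grey)

print(f"{ev_to_anchor(0.045):+.2f} EV")   # a frame averaging 4.5% grey needs about +2 EV
```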

People have also got this prejudice that heavily under-exposed pictures are difficult to bring back in digital and it’s better to sacrifice some highlights. Well, it depends, but in general, that has nothing to do with the picture, but everything to do with shitty color models used by shitty software.

21 Likes

A lot of today’s common knowledge was unfortunately learned back in the days when most people shot JPEGs, I suspect.

Thus people have misconceptions about the red channel overexposing easily or, like you point out, they believe that shadows “clip” if you underexpose.

1 Like

I (mostly) agree with what you write, but I find your title somewhat misleading. You could equally say that every digital picture whose raw histogram is more than ε away from saturation is underexposed (which is the idea behind the practice commonly called “ETTR”).
I certainly agree, though, that in practice it’s not always easy to “properly expose” (in the sense just defined), so when in doubt it’s better to err on the safe side.
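In code, that definition looks roughly like this (a sketch only; the white level and margin are illustrative, and a real check would work per channel on the undemosaiced raw):

```python
import numpy as np

def headroom_ev(raw, white_level=16383):
    """Stops left between the brightest useful raw value and sensor clipping."""
    peak = np.percentile(raw, 99.9)        # robust peak, ignores stray hot pixels
    return np.log2(white_level / peak)

def is_ettr(raw, epsilon_ev=0.1):
    """'Exposed to the right' if the histogram comes within epsilon of saturation."""
    return headroom_ev(raw) <= epsilon_ev

raw = np.random.randint(100, 15000, size=(400, 600))   # fake raw frame
print(headroom_ev(raw), is_ettr(raw))
```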

2 Likes

There is still the question of signal to noise ratio (as also mentioned above). When it would have been possible to obtain a much better s/n ratio by capturing more light, without the danger of clipping (important) parts of the picture, I would not call this shot “well exposed” but “underexposed”.

2 Likes

As of today, noise is still a lot easier to fix than clipped highlights, and blends a lot better. Especially since the RGB channels rarely clip all at the same time, so clipping produces non-colours in the highlights, and reconstructing those is a nightmare.
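To see why that produces non-colours, here is a tiny sketch (made-up values, normalised linear RGB assumed): only some channels saturate, so the recorded channel ratios no longer correspond to any real light.

```python
import numpy as np

def partial_clip_mask(rgb, white_level=1.0):
    """True where some, but not all, channels are clipped: the recorded hue is meaningless there."""
    clipped = rgb >= white_level
    n_clipped = clipped.sum(axis=-1)
    return (n_clipped > 0) & (n_clipped < 3)

# A warm highlight whose red channel saturates first: with red capped, the
# recorded pixel is less red than the real light and drifts toward cyan.
pixel = np.array([[1.0, 0.86, 0.61]])
print(partial_clip_mask(pixel))   # -> [ True ]
```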

Here, I disagree. There is clipped and non-clipped. “Well exposed” implies there is a target to reach, when it’s really all a matter of compromises with the technique and the medium.

Excellent post!

Until here. I personally think it has a lot more to do with the dynamic range of the particular scene, and the number of stops of exposure the camera can resolve between sensor saturation and the acceptable noise floor. If undesired noise is present in the original capture, any manipulation in post to mitigate it is just that, a mitigation of a bad original condition.

Maybe in another thread you can lay out the quantitative specifics behind your qualitative assertion of “shitty” color models. In a constructive fashion please; I’m growing quite weary of the vitriol. I’m very much interested in understanding this, not at all interested in sparring about it.

6 Likes

There is a target to reach. That target is maximum SNR.

5 Likes

“Maximum”, so it’s a set of constraints to solve. As opposed to film, where the target is, with no room for interpretation, the middle of the latitude, which you can find with a single measurement from the camera’s lightmeter. It’s a whole different exercise to solve for a full-range fit than to solve for a single target value… Your camera’s lightmeter doesn’t do that.

Shitty color models meet one of the following conditions:

  • display-referred,
  • gamma-encoded,
  • perceptually scaled,
  • contrast-massaged (aka view-transformed early in the pipe),
  • decoupling chroma vs. luma in non-Luther-Ives RGB spaces,
  • assuming fixed bounds for the signal,
  • mixing physical and perceptual concepts at once,
  • massaging pixels for GUI convenience instead of adjusting the GUI for pixel correctness (aka mistaking the model for the view and merging the control with the model).

Funny how people get weary of vitriol faster than of a whole graphics industry crashing into a wall in slow motion for 20 years. Please, shoot the messenger.

2 Likes

Yes, maximum SNR without clipping. Exposures (far) away from this optimum are, for me, underexposed. That has nothing to do with the practical consideration that one can deal reasonably well with noise (or even clipping).

Addendum:
I have no problem with @anon41087856’s basic explanations regarding the differences between analog and digital photography. By the way, I learned a lot from the posts of @anon41087856 and I am very grateful for his excellent contributions to darktable.

It is only the title that I consider misleading, because it ultimately implies that I can’t do anything wrong when exposing with very little light.

5 Likes

That’s a nice list, and I know you’ve made the case for most of these in prior dialogue. Believe me, I get the need to separate accommodating the needs of the rendition from the earlier work. But all of this is just not universally intuitive, and calling someone’s software “shitty” doesn’t exactly make the author want to engage and understand.

Aurélien, you’ve made significant progress in putting scene- vs display-referred transforms in their proper place, but you’ve done so only because of the overwhelming effort folks here have made to understand, in spite of your lack of patience with them.

Okay, back to exposure…

6 Likes

In the past 5 years, I’ve moved through a succession of cameras that progressed in dynamic range, and what @anon41087856 asserts about exposure rings true with my experience. With the D50, almost every exterior shot was a “two gallons of milk in a one-gallon jug” thing, where blowing the highest highlights was needed to get the majority of the scene out of the noise floor. That need lessened with the D7000, and with the Z6 I’m currently enjoying just using the JPEG-anchored highlight-weighted matrix metering knowing I can pull all but the darkest shadows into acceptable visibility.

In my “shitty” software I have a NLMeans denoise tool that does well with below-noise-floor mitigation, but I have just implemented the reference algorithm, no optimization, and it takes a horrendously long time to paint the image with its magic. I’ve said it before and I’ll say it again: I Am Not A Math Person, so figuring out optimizations has bumped the limits of my comprehension. Alternatively, I’m noodling with incorporating a luminance threshold to focus it on only the darkest pixels. My only question about such would be, does it need a gradual roll-off to the threshold to avoid noticeable artifacts?
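For illustration, a luminance threshold with a gradual roll-off might look something like this (a Python sketch of the idea only, not my actual tool; the threshold values and the smoothstep are arbitrary choices):

```python
import numpy as np

def shadow_mask(luminance, lo=0.05, hi=0.10):
    """1 below lo, 0 above hi, with a smoothstep roll-off in between."""
    t = np.clip((luminance - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

def denoise_shadows(image, luminance, denoiser):
    """Blend the denoised result in only where the mask says 'shadow'."""
    m = shadow_mask(luminance)[..., None]          # broadcast mask over channels
    return m * denoiser(image) + (1.0 - m) * image

# usage sketch: out = denoise_shadows(img, luma, my_nlmeans)
```

To actually save time, a real implementation would only run the denoiser where the mask is non-zero; the blend above just shows the roll-off.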

The Z6 also has a two-image HDR, which in one mode will produce the combined JPEG but will also leave the two NEF raws for one to apply “I think I know better…”. I really would prefer such to doing denoise in post, but one doesn’t always have the option to take two temporally separated images…

2 Likes

This post has finally made the purpose of filmic click for me. Thank you for that!

I have learned a whole lot about photographic image development from these discussions, videos, and articles. More, in fact, than from the introductory texts on image processing I read.

1 Like

I was mostly thinking about Lightroom and the like, but if people here feel targeted, as we say in French, sweep in front of your own door… I have lots of patience, the proof being that I repeat the same things every week/month, but little tolerance, indeed, for idiocy from people who hold proper post-graduate degrees in applied sciences and should know better (or should know at all). Silly me, I tend to have expectations of rigor from such people, which often proves disappointing, I must say.

Nobody in their right mind would ever design a subsonic aircraft using supersonic fluid dynamics; that would likely kill people, so there is this thing called due process, because an aircraft that stays in the sky only 80% of the time is not an option. In image processing, such concepts don’t seem to apply. It’s OK to manufacture stuff that works OK-ish 80% of the time, for example by using perceptual models in physical spaces and the other way around, cutting corners, and so on.

So, yeah, that pisses me off every single time, because it’s basic science. Actually, at this point, it’s not even science, it’s epistemology. As in “assert the validity of the models you use before jumping into your IDE to waste everybody’s time, including yours, while never forgetting to brand your software as professional”.

Back to exposure then…

Roll-off is a nasty trick. Needing nasty tricks to make stuff work usually means something is broken further upstream. If it is only about performance and optimization, you have an optimized variant in darktable, which still doesn’t accomplish miracles, result-wise. I find the wavelet thresholding method much better. But I’m afraid you can’t escape low-level maths in this field, because it’s signal processing with statistics on top, convolved with cache memory management, and you just can’t get it done with only a programmer’s mind. It’s one of the trickiest parts of image processing.
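For reference, the wavelet thresholding idea boils down to soft-thresholding the detail coefficients, something like this (a generic sketch using PyWavelets, assumed installed; not darktable’s implementation, and the wavelet, level count, and universal threshold are illustrative choices):

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def wavelet_denoise(channel, sigma, wavelet="db2", levels=3):
    """Soft-threshold the detail coefficients, keep the coarse approximation."""
    coeffs = pywt.wavedec2(channel, wavelet, level=levels)
    thr = sigma * np.sqrt(2.0 * np.log(channel.size))   # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# usage sketch: clean = wavelet_denoise(noisy_luma, sigma=0.02)
```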

3 Likes

I implemented nlmeans luminance thresholding this afternoon, without roll-off, hardest thing was organizing the UI. Seems to do what was intended, localize application of the transform to the darker pixels, thus speeding things up appreciably. No noticeable artifacts in the transitions, probably helps that it’s happening in the darker regions…

Thing is, I’d rather not have to apply this tool at all. I’d rather reliably put the highest regions for which I want definition at just under saturation, and not have to resort to pet tricks like UniWB to figure out where that is.

I would then say that, in terms of using the dynamic range of the camera, there is a notion of underexposure when one isn’t using the headroom below saturation to pull the image out of the noise floor…

I appreciate your rigour, Aurélien. It’s fantastic to have such people righting the wrongs of other software.

I agree with your statement above that noise is easier to fix than clipped highlights; however, having thrown a few underexposed raw files out due to too much noise, I certainly believe there is such a thing as a well-exposed raw. Perhaps it is just the definition that varies. In film, good exposure means anchoring the mid-tones in the middle of the latitude. In raw, good exposure means both avoiding clipping and minimising noise, i.e. ETTR.

3 Likes

Do you have a source for that? I read somewhere that Lightroom uses a linear, non-gamma color space internally, so that should be fine, right?

(That said, it is funny how virtually every single landscape photograph edited with Lightroom has haloes at the horizon line, because of Lightroom’s highlights slider. Yet I have never seen anyone complain about that. But woe is me if some sharpening tool produces even the tiniest amount of halos, then it’s suddenly the worst tool ever. It’s funny how people apparently got very used to Lightroom’s rendering.)

Never underestimate the power of the status quo!

8 Likes

I completely disagree with the heading. In practice it is far from the truth. Exposing the picture “correctly” (i.e. as you want the end result) has huge practical and time-saving benefits.

It also assumes that noise reduction is acceptable, when in reality it’s terrible and to be avoided. I’ve yet to see noise reduction that doesn’t fail: colours change, etc. Try a brick building…

I’m someone who underexposes a lot due to frequently having sky and shadow in the same frame. It sucks, though, and is best avoided. Natural-looking results are very difficult to achieve when compressing dynamic range too much.

I’m not arguing that exposure follows the same principles on film and digital. The fact that they are different has been known since digital was first available. No one thinks they are the same.

6 Likes

I have to disagree with the post title too, as I think it’s wrong (not to say false).

There is no such thing as a well-exposed / under-exposed / over-exposed picture in digital. There is only sensor clipping or no sensor clipping, that is all.

If you understand exposure as the combination of ISO, speed, and aperture used to record the scene you’re looking at in the appropriate medium (in this case a digital sensor), then there is a correct exposure, and when you deviate from it while taking the picture, there are parts of the scene that won’t be recorded, either because of highlight clipping or because of shadow clipping.

If you agree that a sensor has a limited capability to record the dynamic range of a scene, then when the scene’s dynamic range is too wide, the sensor can’t capture everything: either the highlights get clipped, and we talk about overexposure in those areas (the photosites have received more light than they can handle), or the shadows get clipped, and we talk about underexposure in those areas (the photosites haven’t received enough light to record anything), or both at the same time.

If you shoot straight at the sun, and your sensor survives the experience, it will certainly record underexposed areas. There will be pitch-black areas (if the sun is properly exposed), and those areas will be underexposed in the raw data.

Over-exposure of film means washed, desaturated colours forever. Under-exposure means over-saturated colours and more prominent chroma noise.

Well, that’s not entirely true, as there is a well-known mantra that, within reasonable limits, has served me well in the past: «Expose for the shadows, develop for the highlights». That’s true for negative film only, AFAIK.

But let’s talk just about digital images.

Bottom line: having to push the software exposure compensation, even for seemingly “well-exposed” pictures, is totally fine. It’s just a maths trick to anchor the mid-tones where you want them. But remember that you need some clipping handling later in the pipe to compress things around that anchor.

And a casual user will get lost with your explanation, because you’re constantly mixing the clipping that happens at capture time and the clipping that happens while processing the raw file. It’s confusing to read.

Areas fully clipped at capture time can’t be recovered, no matter what you do in post-processing (they can be recreated by guessing, but not recovered). Areas clipped while processing the raw file can be handled properly.

“Well exposed” implies there is a target to reach

Of course there is a target to reach: I want a picture of a scene, and that scene has certain light and contrast properties that force me to set a specific combination of ISO, speed, and aperture on my camera to capture what I want without losing detail in the areas I care about.

If you give a larger exposure (more light hits the sensor), there will be areas that become pure white (fully clipped). If you give a smaller exposure (less light hits the sensor), there will be areas that become pitch black, and no algorithm will be capable of recovering them.

Perhaps the difference between people is what they want to capture from the same scene, resulting in images that range from a high-key shot to a low-key shot. All those different images will have different targets, but in the end each one of them has a target to reach: what the photographer wishes to capture from the scene.

A good amount of your explanations are fine, and people must know the limits and capabilities of digital images, but I think that a less convoluted explanation would serve the objectives of the post much better.

2 Likes

One thing that seems to have been overlooked so far is quantization when storing the linearly captured data. If you had no quantization (i.e. infinite bit depth), you would be able to recover your shadows perfectly: just stretch out the dynamic range.
Of course this is not the case in a real camera. You can only distinguish between discrete amounts of captured light. So in reality, being able to recover your shadows is still a matter of having set a “proper” exposure.
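A quick sketch of that point (bit depths and the +4 EV push are illustrative): the deepest shadows get only a handful of code values, so pushing them in post reveals the steps.

```python
import numpy as np

def quantize(linear, bits):
    """Round a linear signal in [0, 1] to the nearest of 2**bits - 1 steps."""
    levels = 2 ** bits - 1
    return np.round(linear * levels) / levels

shadow = np.linspace(0.0005, 0.002, 16)            # a smooth deep-shadow gradient
for bits in (14, 12, 8):
    pushed = quantize(shadow, bits) * 2 ** 4       # +4 EV push in post-processing
    print(bits, "bits ->", len(np.unique(pushed)), "distinct values left")
```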

2 Likes