There are no underexposed digital pictures

So this is the very reason why people don’t get filmic, then: you are all looking for meaning in absolute numbers. There are none.

To put it in maths terms: you think there is an algebraic solution to your exposure problem. There is none. There is an optimum, found empirically by constrained majorization-minimization approaches.

It’s, conceptually, a completely different beast.

Several years of interactions with users contradict this statement.

They would need to release their source so we can know for sure, but it certainly does not behave as something working in linear.

Clipped highlights don’t have quantization issues: they are == 1, with a hard transition from valid to clipped. At least the noise floor acts as a dithering and blends things a bit.

I can’t agree with this: every sensor is an electronic device, and as such has a floor sensitivity, below which it doesn’t record anything. So the pixels that haven’t received enough light are pitch black, and can’t ever be recovered.
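To make the asymmetry between the two ends concrete, here is a small simulation (a hypothetical 12-bit sensor with a read-noise floor of a few DN; all numbers are picked for illustration, not taken from any real camera): clipping is an exact, hard ceiling, while the shadow limit is a statistical blur where signal drowns in noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: 12-bit ADC (full scale = 4095 DN),
# Gaussian read noise with a standard deviation of 4 DN.
full_scale = 4095.0
read_noise_dn = 4.0

# Photon signals in DN: two buried in the noise floor, one healthy,
# one near clipping, one far beyond it.
scene = np.array([0.5, 2.0, 50.0, 4000.0, 6000.0])

raw = np.clip(scene + rng.normal(0.0, read_noise_dn, scene.shape),
              0.0, full_scale)

# Highlights: a hard transition -- anything above full scale reads
# exactly 4095, no matter how bright it really was.
# Shadows: the 0.5 DN and 2 DN signals are indistinguishable from
# pure read noise, so no amount of software pushing recovers them.
print(raw)
```

Both limits exist, but only the top one is a clean binary valid/clipped boundary; the bottom one is a gradual loss of signal-to-noise ratio.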

Grab dt’s source code then, because what you actually want to roll off is the noise variance, to blend things, not the luminance mask.


I’m lost. Can you explain it with easy words, please?

True, there is the matter of noise that makes things more complicated. However, my point stands: even in the case of a perfectly noiseless system, you still quantize your signal. Squeezing all the shadows in a few ‘bins’ makes recovery of the compressed tones impossible.
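A quick numeric sketch of that quantization argument (assuming a hypothetical noiseless 12-bit sensor; the numbers are purely illustrative): a smooth gradient exposed 10 stops too low lands on only a handful of integer levels, and multiplying it back up in software restores brightness but not the lost intermediate tones.

```python
import numpy as np

# Hypothetical noiseless 12-bit sensor (integer levels 0..4095).
gradient = np.linspace(0.0, 1.0, 1000)  # a smooth tonal ramp

well_exposed = np.round(gradient * 4095.0)           # uses the full range
underexposed = np.round(gradient * 4095.0 / 1024.0)  # same ramp, 10 stops down

print(len(np.unique(well_exposed)))   # many distinct tones
print(len(np.unique(underexposed)))   # only a handful of 'bins'

# Pushing exposure in software just multiplies: the handful of levels
# spreads out into visible bands (posterization); the in-between tones
# are gone for good.
pushed = underexposed * 1024.0
print(len(np.unique(pushed)))  # still the same handful of distinct values
```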

I wasn’t talking about highlight clipping. Obviously that is a hard transition, no argument there :slight_smile:


In practical terms, I find that colours deteriorate quickly at higher ISO or with pushed shadows. You need to choose, at exposure time, where in the scene you want fidelity. You won’t get the same quality if you underexpose and recover.

Granted, this could be the software, but it seems universal so far.

A target is a single number. What your camera lightmeter does is measure the average luminance over some picture region(s) and anchor it to the 18% value (single value -> single value). Even in digital (except for newer models that have a highlight-priority spotmeter). That’s the definition of correct exposure, which comes from the assumptions film made, and which is still needed if you use your camera the JPEG way, because 18% is a special place in the (film-like) hard-coded tone curve the firmware applies.

If you use your camera the raw way, such definitions are meaningless. 18% is not a special place. What you need to do is slide a full range (the scene dynamic range) into another full range (the camera dynamic range) by taking into account both extrema and weighting, arbitrarily, which one is the most important (as in : where are your important details located along the luminance axis).

This is not a target. These are 2 opposite constraints to solve concurrently, with some educated guess to favour one over the other, while taking into account the properties of your specific sensor at the ISO sensitivity you are using, and accounting for possible future changes in lightness in the scene you are photographing. (Same as audio recording: gain/volume is OK until someone coughs – you may want to plan for that.)
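The “two opposite constraints” idea can be sketched as a toy function (entirely my own illustration, not how any camera firmware actually works): given the scene’s extrema in EV and the camera’s usable dynamic range, compute the EV shift, with an arbitrary weight deciding which end to sacrifice when the scene doesn’t fit.

```python
def exposure_shift(scene_min_ev, scene_max_ev, camera_dr_ev,
                   highlight_weight=0.5):
    """Toy model: slide the scene range into the camera range.

    Returns the EV offset to apply (via aperture/shutter/ISO) so that the
    scene range sits inside [-camera_dr_ev, 0], relative to clipping.
    `highlight_weight` in [0, 1] decides which end to sacrifice when the
    scene range does not fit (1.0 = protect highlights fully).
    """
    scene_dr = scene_max_ev - scene_min_ev
    # Anchor the scene's brightest value at 0 EV (the clipping point)...
    shift = -scene_max_ev
    if scene_dr > camera_dr_ev:
        # ...then give back part of the overflow to the shadows,
        # deliberately clipping some highlights instead.
        overflow = scene_dr - camera_dr_ev
        shift += (1.0 - highlight_weight) * overflow
    return shift

# Scene spans 14 EV, camera records 12 EV: 2 EV must clip somewhere.
# With a 0.75 highlight weight, we push up by 0.5 EV and clip 0.5 EV
# of highlights, keeping 1.5 EV of shadows out of the noise.
print(exposure_shift(-14.0, 0.0, 12.0, highlight_weight=0.75))
```

Note that when the scene fits inside the camera’s range, there is a whole interval of valid shifts, not one magic number; the weighting only matters once the two constraints conflict.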

There is nothing on your camera that allows you to do that, since even the histogram is showing the JPEG.

The starting point of this article was people found odd to have to push software exposure compensation in seemingly “well-exposed” pictures. That’s not intuitive if you think exposure is an absolute target to reach. The whole point of the article is to say that, in software, it’s only a matter of anchoring middle-grey where you need it on your screen, whereas, in hardware, it’s only a matter of dealing with dynamic range clipping.

So, indeed, there are no underexposed pictures, per se. There are only clipped pictures (at one end or at the other, or maybe at both). And so the whole concept of hardware exposure needs a rethink in terms of range, not in terms of anchoring individual values.


That’s an input profile issue. The 3×3 input matrices are, by design, tweaked to be accurate in priority for mid-tones. Again, a software issue.

Try to make a LUT ICC profile at current ISO for your camera, if you have a chart, and use it in place of the current matrices, you will see.

I’m guessing from my own perspective here, but I reckon confusion about using filmic etc. is less about 18% grey or the theoretical workings of film exposure, and more about many finding it counterintuitive to first “destroy” an image only to later bring back the “lost” data. It’s more the process and UI that are unusual, and getting used to the idea takes a while.

Perhaps I’m wrong and I guess have a geek bias but I just don’t think people are that set in the mathematical models of exposure. Even when they might phrase it in those terms to communicate.

Wouldn’t this mean that quality only suffers at high ISO because you’re using the wrong profile? That you might as well shoot at ISO 819 200 all the time? My feeling is that this isn’t quite true… edit: based on the flimsy idea that ISO and software exposure are close to the same thing on modern sensors.

Your previous longer post explains the rationale behind the post quite well, and I think most photographers are quite aware of the choices to be made when exposing a digital sensor. As mentioned, ETTR, UniWB etc. are all strategies to deal with these issues, and they show how camera manufacturers have failed to provide tools.

I think the first post and some of the following have understated the issues around underexposure. The following is spot on, though, and as I understand it, common knowledge.


I think it’s easier to understand the way you have written it this time. Thanks.

Now to the point: it seems to me that the problem here has been just a matter of vocabulary. I’m no English native, so some meaning has been lost in translation. I was referring to a «target» as in an objective, goal, aim. My goal is to capture the scene in a particular way, so I start with the lightmeter suggestion and make an educated guess looking at the jpeg histogram, to end up with a manual compensation that (hopefully) will give me the result I wish. Well, I’m not looking for a particular number, but I end up with a specific combination ISO-speed-aperture. Not higher, not lower.

I understand this as «What you need to do is slide the window created by the camera dynamic range along the wider scene dynamic range». Am I right?

Agreed. My goal, my target, is a certain look in the image, and that calls for a specific exposure, not the other way around.

And that’s again just a matter of vocabulary. To me, the clipped areas are called underexposed and overexposed. But I guess the main idea is the same.

Completely agreed

I think if one gets this statement, it puts a lot of the heuristics in context.

I think I’m just reinforcing this, but additionally, the considerations to ponder at the high end are different from those at the low end. At saturation there’s a hard stop, where data starts to pile up; at the “low end”, the point where things become unacceptable is not so hard and determinate. To understand that end, one needs to comprehend the concept of noise, then its component sources in our endeavors (‘read’ and ‘shot’), and the statistical assertions of signal-to-noise ratio and dynamic range.

Here’s another way to consider all this. When you stare at the world in any instant, you perceive an amalgam of light at various levels of energy. When you then point your camera at it, you have to adjust your aperture and shutter speed to let the light in at a level where the sensor can resolve most of it. The optimization lies in how much of that light you allow to overload your sensor at saturation, if any. And the consideration of the low end came earlier, when you chose your camera.

This really started to make sense for me when I put Light Value and Exposure Value numbers in my software display alongside aperture, shutter speed, and ISO. I found these numbers to be best-considered as a comparison value between images of different scenes with different lighting; I started to think of my shooting as “energy management”, and that put all of the above discussion in proper context…


The headline is clickbait, deliberately false in order to provoke. So is another statement. Try under or over exposing by 16 stops, then claim that “there is no over/under-exposure in digital”.

I think most of us know about clipping versus noise, so we ETTR, and we know that cameras don’t do this automatically, so we need (for now) human intelligence to ETTR. It isn’t massively difficult.

But the OP is discussing exposure in the camera, and lightness/darkness of an image. We can digitally lighten or darken a digital image. Some software calls this “exposure correction” which I think is mistaken, partly because it confuses two independent processes: the exposure made in the camera, and post-exposure processing. When we lump both as “exposure”, we confuse everyone.

And then we need massive long posts to explain what we really meant, to unravel the confusion.


Exposure means the same in hardware or software: it’s log2(luminance).

Exposure compensation means the same in hardware or in software: add 1 EV and you double the amount of light (recorded or emitted); subtract 1 EV and you halve it.

Exposure settings mean the same in hardware or in software: whatever dials you have to record or emit more light.

Well-exposed means you have mapped some particular input exposure value to some particular output exposure value. The whole image-making process is, from start to end, a mapping problem.
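In code form, the point that exposure and its compensation are the same operation everywhere boils down to two one-liners (using a reference luminance of 1.0 purely for illustration):

```python
import math

def ev(luminance):
    # Exposure is log2(luminance): a logarithmic measure of light,
    # relative to whatever luminance you call 1.0.
    return math.log2(luminance)

def compensate(luminance, ev_offset):
    # Exposure compensation, in hardware or in software, is the same
    # thing: multiply linear values by 2**EV. +1 EV doubles, -1 EV halves.
    return luminance * 2.0 ** ev_offset

print(ev(0.18))               # middle grey sits about 2.47 EV below 100%
print(compensate(0.18, 1.0))  # +1 EV doubles it: 0.36
print(compensate(0.18, -1.0)) # -1 EV halves it: 0.09
```

The dials differ (aperture ring vs. slider), but the arithmetic is identical; only the bounds of the space you operate in change.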

The only thing that changes is the meaning of bounds and magic numbers like 0%, 18% or 100%, because it’s context-dependent. But people want absolute things, especially when numbers are involved; they hate thinking about proportions and connections between variables. The same number has a different meaning in a different space, because it’s only a coded value, so it’s meaningless without the deciphering key.

I’m sorry, but we are dealing with different media, and using matching functions to rescale signals in a way that accounts for the peculiarities of each medium (input/output/standard connection spaces). So we need concepts that are transferable between spaces, even though the values are not. Exposure is a transferable concept if you think more about what it represents and less about how you technically massage it.

And then we need massively long posts to explain basic concepts, because people only ever saw their technical/GUI part without asking what they actually represent and mean. That’s the glory of software: it lets you tamper with concepts beyond your understanding until you think you understand them, while you don’t.


But you don’t add any data in software. In hardware more exposure means more data until it clips.

Practically speaking, the decisions you make at exposure time are creative. Photography is a kind of curation of the world. Exposure is part of that curation, that creative selection. The ideal of postponing that curation to post-processing might seem convenient or safe. However, as with the idea of recording film and picking out frames later, it misses the point that the value of photography is selection, and that it’s best made at the scene (with limited redundancy). Anyone who has ever sorted through all too many similar photos, having trouble picking the best, knows this.

What my comment above has to do with this discussion is unclear to me at this point. :slight_smile:


You don’t add any data by changing exposure settings in hardware either: you make the data brighter until it reaches the saturation threshold, or darker until it hits the noise threshold.

However, exposure still means log2(luminance), and the technicalities of your imaging device don’t change that.

We’re just talking about different things. You certainly add data when opening the aperture and making a longer exposure. Aperture and shutter speed are in my terminology part of the exposure settings.

The camera jpeg engine however is of course just another piece of software.


No, you add light. Data is not light, light is not data. That’s a gross mistake.

More light means higher intensities recorded on the sensor’s photosites. Data is what you get once you properly decode those intensities and remove outliers, like non-colors. That means not everything recorded by the sensor is data. Data is only the part that makes sense.

It’s a very, very fundamental thing to understand. We record light, then convert light into data, we don’t record data.

Data is digital discretized information frozen in a certain state, light is a physical signal freely moving around.


@aurelienpierre: So, do you agree that your statements “there are no underexposed digital pictures” and “there is no over/under-exposure in digital” are false? They are almost true for floating-point images where we have virtually infinite dynamic range, but they are false for current-day cameras. Hardware and software are different.

Well, that’s the nub of the confusion. In hardware we have narrow dynamic ranges and visual noise. In software we have virtually infinite dynamic range and almost no noise is introduced.

Aside from those inconvenient factors, doubling the light that hits a sensor is similar to doubling the linear digital values. Sure, that much is true.

I have been following silently, which is the best way to learn. Let me make one comment. On an intuitive level, not all “exposure” settings are equal when deciding how to photograph a frame. Sure, the noise floor and saturation points are important, but so is what the camera captures in between. Each camera and lens has its sweet spots, just as a tennis racket does. Some rackets are designed to be more forgiving, which is the case for the more recent cameras. Others are technical marvels that need some expert handling.

Magic Lantern certainly eases the difficulty level for my garbage camera with its tiny, hard-to-reach sweet spot. However, being highly technical with the photograph requires lots of preparation, causing fatigue and making me botch the shot through overthinking, or miss it altogether if it is time-sensitive. All in all, I find digital photography harder to get right than film. It is less forgiving; hence why data-dense raw files are usually better: they have more wiggle room, provided that it is good data.


@snibgo you are, once again, mixing what belongs to fundamental concepts (exposure as a metric of light) and what belongs to the technical implementation of those concepts in particular media (the valid range of exposures a medium can absorb, aka dynamic range).

There is no underexposure because:

  1. the exposure anchoring is medium-dependent, and digital doesn’t care about it until you export to the output medium,
  2. making an image is actually dealing with opposite min/max constraints pivoting around a sort-of average anchor, and that has no general a priori optimum. As @ggbutcher said, it’s heuristics on-the-fly. “Well-exposed” is a void concept in that regard, and a misleading one, since it suggests that there is one magic number to find, when there is none, or rather a range of trade-offs. You do what you can with the guesses you can make from a processed JPEG. ugh…
  3. you will clip part of the scene no matter what you do, so all in all, it’s only a matter of being clever about what you clip.

But sure, if you take only the part of the paragraph that keeps you angry, you can make it say whatever you want to stay angry.
