There are no underexposed digital pictures

Signals with SNRs below unity are NOT non-data. Signals buried in noise should still be recorded; they are valuable data (e.g. they are needed for reconstruction/fitting/denoising).

This.

For example, if you look not just at one pixel but also at neighboring pixels and at multiple samplings of the same object. Photography is by definition about more than one pixel… you still need to come up with a good noise model, but that is exactly what denoising attempts. Throwing away signals buried in noise (undersampling the noise) would be detrimental.
If you clip values, then they become non-data.
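To make that concrete, here is a minimal numpy sketch (all numbers invented): a constant signal whose single-frame SNR is well below unity becomes clearly measurable once enough samplings of the same object are averaged.

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 0.5     # true (constant) signal level
sigma = 2.0      # per-frame noise std; per-frame SNR = 0.25, well below unity
n_frames = 400   # multiple samplings of the same object

frames = signal + rng.normal(0.0, sigma, size=n_frames)

print(f"single-frame SNR : {signal / sigma:.2f}")   # 0.25
print(f"mean of frames   : {frames.mean():.3f}")    # close to 0.5
# Averaging n independent frames shrinks the noise std by sqrt(n):
print(f"stacked SNR      : {signal / (sigma / np.sqrt(n_frames)):.1f}")  # 5.0
```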

Also problematic, and depending on where you come from also called clipping: not having enough code values for whatever signal is close to or below your critical SNR.

So yes: underexposing creates problems. Not enough light on the (real) sensor buries signals in noise (not good), or you don’t have enough code values left for proper sampling (BAD), or both (SUPERBAD).

3 Likes

I think we need to separate the issue of ISO vs light, here.

Recording more light, by opening the aperture or lengthening the exposure time, increases the signal-to-noise ratio, and is desirable (below clipping).
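As a rough illustration, assuming photon arrival follows Poisson statistics (shot noise), the SNR grows with the square root of the captured photon count:

```python
import numpy as np

# Shot noise: photon counts are Poisson, so std = sqrt(mean) and SNR = sqrt(mean).
for photons in (10, 100, 1000, 10000):
    print(f"{photons:6d} photons -> SNR ~ {np.sqrt(photons):6.1f}")
```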

Raising ISO, on the other hand, is indeed mostly useless on modern sensors (which are mostly ISO invariant).

If we look at this from the sensor’s point of view, with the aperture and shutter speed already pre-set by the user, then Aurelian’s argument makes a lot of sense: raising the ISO does not improve the picture, it does not increase SNR, and it does not record more light.

In that sense, there is indeed no exposure at all; a sensor merely records the light that is there. If that’s little light, it’s small numbers, and if it’s a lot of light, it’s big numbers. But neither is worse or better. It’s just a recording of photons. Our only consideration then is to prevent clipping.

The interpretation of these numbers as “brightness” or “exposure” comes later, in Filmic, and is an artistic choice, not a technical one.

Did I understand that correctly?

hi,

exposure is the amount of light per unit area that reaches the sensor. it happens at capture time, and depends on shutter speed, aperture, and the luminance of the scene you are capturing. it doesn’t depend on ISO, and you can’t change it in post-processing.
at least, this is true (AFAIK) if you use the commonly accepted definition of exposure in photography. of course, you are free to use another definition, and then claim whatever…
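to make that concrete, here is a sketch of that definition using a standard simplified illuminance model (H = q·L·t/N², with q ≈ 0.65 for typical lens losses; the numbers are just for illustration). ISO appears nowhere:

```python
def exposure(scene_luminance, f_number, shutter_s, q=0.65):
    """Photometric exposure H = E * t (lux-seconds) on the sensor plane.
    E ~ q * L / N^2 is the common simplified lens-illuminance model."""
    illuminance = q * scene_luminance / f_number**2
    return illuminance * shutter_s

# Same scene, same f-number, same shutter -> same exposure, at ANY ISO:
print(exposure(scene_luminance=1000, f_number=2.8, shutter_s=1/60))
```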

2 Likes

Let’s grab the nearest encyclopedia: Exposure (photography) - Wikipedia
It explains what exposure is, and it even has a section on “Optimal exposure”. However, there is also this: Exposure value - Wikipedia, which you could also use when talking about “exposure”, though strictly speaking it is something quite different.

So, as @agriggio pointed out earlier with his quote of Alice in Wonderland, let’s be conscious of semantics and our intended meanings. There are definitely some rusty concepts in the world of photography that have persisted from the pre-digital era. There are surely new truths to be discovered, and physics can be a hard teacher.

Personally, I think that @anon41087856’s original post is clear on what he means. And please correct me if I (still) missed the point. The act of obtaining a digital photograph is only a mapping process: how do you select which portion of the dynamic range of your scene you record onto the (limited) dynamic range of your sensor? The only way to do this is by selecting how long you expose your sensor to the scene, i.e. “setting the exposure”. Importantly, there is no other target than this. No middle greys to aim for, etc. In that sense, photography is a purely analytical thing.
The creative parts are setting up your scene and your post-processing. But as long as you have collected the data you want, you can massage it afterwards to your liking.
Edit: and of course, you can clip “unwanted” data at either end of the scale. Highlights or blacks.

The things that have been glossed over in the OP, imo, are quantization and noise. They do influence a “correct” exposure for two reasons: 1) light information can get lost under the noise floor, and 2) differential light information can get lost due to the discrete nature of your sensor (i.e. compressing too many tones into too few sensor values = information loss).
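To make point 2 concrete, here is a tiny sketch (values invented): a smooth gradient of a thousand scene tones squeezed into sixteen code values keeps only sixteen distinguishable tones.

```python
import numpy as np

# A smooth gradient of 1000 distinct scene tones...
scene = np.linspace(0.0, 1.0, 1000)

# ...quantized into too few code values (say the bottom 16 codes of a sensor):
codes = np.round(scene * 15).astype(int)

print(f"distinct tones in : {len(np.unique(scene))}")   # 1000
print(f"distinct codes out: {len(np.unique(codes))}")   # 16 -> differential info lost
```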

But once you somehow account for that as well and have mapped/clipped your scene dynamic range “perfectly” to your sensor dynamic range, you have probably reached photography Valhalla :slight_smile:

1 Like

Nice, that Wikipedia page does a good job and anchors it in subjective decisions.

Can anyone explain why @anon41087856 sees targets and greys in posts where I see none?

I assume because many photographers still believe targets and greys are vital to good photography? He explains why this is not so.

measuring interval

set of values of quantities of the same kind that can be measured by a given measuring instrument or measuring system with specified instrumental measurement uncertainty, under defined conditions

https://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf, p.39

Every sensor has a lower bound of its measuring interval, below which variations of the measurand yield variations of the signal smaller than the measurement uncertainty. These are not data.

It’s as if you tried to measure a distance of 0.25 mm with a ruler graduated every mm. You can’t. You have a resolution of 1 mm with a reading error of 0.5 mm. So it’s either 0 or 1 mm ± 0.5 mm, not 0.5 mm ± your eyeballed guess, and certainly not 0.25 mm. Just writing 0.5 mm ± 0.5 mm gives an idea of the stupidity: it’s literally flipping a coin. Such a measurement is worth garbage.
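In code, that argument looks like this (a noiseless sketch with invented numbers): every reading of a 0.25 mm length with a 1 mm resolution instrument rounds to the same value, so no amount of post-processing of the readings recovers the true length.

```python
import numpy as np

true_length = 0.25   # mm, below the instrument's 1 mm resolution
readings = np.round(np.full(1000, true_length))  # every reading quantizes to 0 mm

print(readings.mean())   # 0.0 -> averaging cannot recover 0.25 mm
```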

Averaging noise is just that: averaging noise. Data is lost there. Whatever numerical process (again: a process is technology, not the definition of a concept) you unroll to infer the missing data doesn’t change the fact that no data was present there in the first place; you only infer it.

The reconstructibility of the missing data doesn’t equate to the presence of data.

I think you are all so lost in technological implementations and conceptual shortcuts/language abuses that you have forgotten the very meaning of the variables you massage.

1 Like

Then, if you think that people are fundamentally wrong, and you’re here to show us the truth, why don’t you try to make us understand what you’re talking about?

I’m not asking for high-level master classes that only a few select brains can understand. I’m asking for an easy explanation, including definitions of basic concepts as you understand them, leaving nothing to chance, nothing to guesses or misunderstandings.

Try to explain what you want to teach us to a second-grade kid, then write it down and post it here. I know that’s not easy to do, but given that we are so wrong, perhaps it’s the only way for all of us to understand.

1 Like

I’m having a hard time following this; a lot of it is way over my basic understanding of cameras and digital electronics. I never did get super deep into it. If I am understanding some of the discussion, it might help explain what is going on a little better to take ISO completely out of the equation.

In film, ISO played a big part in the exposure of the scene. In digital, contrary to every teaching of digital photography I ever saw, ISO has nothing to do with exposure at all. For digital, the exposure triangle is non-existent, and as soon as I realized that, it became much easier to take good pictures that were much easier to process.

Shutter speed and aperture are really the only things that matter; the noise comes just from the lack of light. If you take two shots of the same scene, one at ISO 100 and one at ISO 6400, with the same aperture and shutter speed, then (disregarding whether you blow anything out) you see the same amount of noise in both.

This is because ISO is gain applied after the exposure, basically just like pulling the exposure slider up in post. No different than a guitar and amplifier: the gain on an amplifier does not change the input signal from the guitar, it only changes the output. Same for a camera.
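A toy model of that claim (numbers invented), treating ISO as pure post-capture gain applied to shot-noise-limited data; real sensors also add read noise, which is why ISO invariance only roughly holds:

```python
import numpy as np

rng = np.random.default_rng(0)
photons = rng.poisson(lam=400, size=100_000)   # light actually captured (shot noise)

for iso_gain in (1, 64):                       # e.g. ISO 100 vs ISO 6400
    raw = photons * iso_gain                   # gain applied AFTER the exposure
    print(f"gain {iso_gain:3d}: SNR = {raw.mean() / raw.std():.1f}")  # identical SNR
```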

So, with that said in a totally roundabout way, the only thing that matters is that you don’t clip your highlights, which I think is what @anon41087856 is trying to get across. If your sensor does not have a huge DR for shadow recovery, you will have to bracket no matter what to prevent the noise in a high-DR scene. So again, the only part of “exposure” that really matters is that you don’t clip the highlights to white. Black is always going to be black; just because something seems black in the exposure does not mean it is.

So, in essence, we only need to take overexposure into account: there is either enough light or not enough light, and the sensor will record everything that is not black, so there is no real underexposure.

I do apologize if none of my logic makes sense to anyone but me and I could be totally off base for the topic but I tried anyway :blush:

4 Likes

A sensor turns some physical phenomenon (light, sound, etc.) into an electrical signal. Depending on the sensor type, one of the quantifiable properties of that signal is the image of the measured phenomenon, through a mapping law:

  1. either the current (in amperes); in French, we call it intensity, because it’s less confusing,
  2. or the voltage (in volts).

This electrical image of the original phenomenon is called the signal. The mapping law between the signal and the original phenomenon can be determined by calibration (called profiling in image processing, because guys felt like reinventing metrology).
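As a toy illustration of such a calibration (all numbers invented): given a few reference stimuli and the corresponding signal readings, the mapping law can be estimated with a least-squares fit, here assumed linear.

```python
import numpy as np

# Known reference stimuli (e.g. luminances of a test chart) and measured signal:
stimulus = np.array([10.0, 50.0, 100.0, 200.0])
signal   = np.array([0.21, 1.02, 2.05, 4.11])   # volts, hypothetical

# Fit the mapping law signal = a * stimulus + b (assumed linear here):
a, b = np.polyfit(stimulus, signal, deg=1)
print(f"signal ~ {a:.4f} * stimulus + {b:.4f}")
```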

Every measurement is done with some error, because no sensor is perfect. That error, once quantified and statistically analysed, is turned into an uncertainty. We therefore express the result of the measurement as (reading ± uncertainty) unit. It means that the true value can lie anywhere in an interval bounded by (reading - uncertainty) and (reading + uncertainty). We can adjust the coverage of the uncertainty so that we are 88, 95, 98 or 99% confident the true value is in this interval (through the normal distribution and the stats we made), but we have no way to say exactly where.
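For example, a sketch of how those coverage factors come out of the normal distribution (the reading and uncertainty are invented):

```python
from scipy.stats import norm

reading, sigma = 1.30, 0.05   # hypothetical measurement and standard uncertainty

for confidence in (0.88, 0.95, 0.98, 0.99):
    k = norm.ppf(0.5 + confidence / 2)   # two-sided coverage factor
    print(f"{confidence:.0%}: {reading} ± {k * sigma:.3f}")
```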

Every sensor can operate in a certain range where its measurement results are deemed valid. This range is bounded by the saturation level for high values, and by the noise level for low values.
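Expressed as a sketch (thresholds invented), the measuring interval is just a mask between those two bounds:

```python
import numpy as np

noise_floor, saturation = 0.02, 0.98   # hypothetical normalized raw bounds

readings = np.array([0.001, 0.05, 0.60, 0.99, 1.00])
valid = (readings > noise_floor) & (readings < saturation)
print(valid)   # [False  True  True False False]
```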

Data is the part of the recorded signal that makes sense in the current context. For example, imaging sensors can record valid signal with little uncertainty for light outside of the human visible spectrum. They yield valid RGB values, yet this is non-data (in the context of consumer photography), because it is non-color. That’s where humans take over from the technology and exercise their judgment over some signal to decide whether they trust it or not.

The noise can be seen as some random quantity, taking random values bounded by the uncertainty, and added on top of the theoretically clean signal. But that’s only a model to represent things. Other models represent it as a value multiplying the theoretically clean signal. Depending on the nature of the noise you are dealing with, one or the other applies best.

For each pixel, we don’t know exactly how much noise there is, but we can predict it will be inside some bounds. So |noise| \leq |uncertainty|, and let’s say any individual reading = true\,value + noise, so true\,value lies between (reading - uncertainty) and (reading + uncertainty) with 98% confidence.

If |reading| \leq |uncertainty|, then it means true\,value \leq noise \leq uncertainty, so the true value you are trying to measure is below the uncertainty of your sensor. How could you assign meaning to a measurement result that is more “precise” than the uncertainty of your sensor itself? For this value, your error bounds are equal to at least 100% of the true value. So… is that not enough to call that data clipping?
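In code, that criterion could look like this minimal sketch (the function name and numbers are mine, not from any library):

```python
import numpy as np

def flag_non_data(readings, uncertainty):
    """Mask readings whose magnitude does not exceed the measurement
    uncertainty: their relative error bound is >= 100%."""
    return np.abs(readings) <= uncertainty

readings = np.array([0.02, 0.04, 0.30, 1.50])
print(flag_non_data(readings, uncertainty=0.05))  # [ True  True False False]
```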

Having a signal doesn’t mean you have data. There is a science that aims at giving meaning to signals; it’s called metrology.

By the way, did you see what I did there? All along, we have been working with bounds. We don’t know the true value, and we can’t know it, but we know for sure it’s inside some bounds that we can express with a good level of confidence. All we know are bounds.

However, thinking in terms of probability intervals is not the same thing as trying to hit the bull’s eye of some convenient target. It’s like driving alone on a very large road with no central line. You see the left border, the right border, you know you need to stay in-between while avoiding cars coming the other way, but can you tell exactly where the middle is? And do you even care?

6 Likes

But can you point to any post in this discussion that suggests “target” number thinking? I can’t see it.

The very idea of “well-exposed” is grounded, for historical reasons, in the idea that you need to anchor the scene’s middle grey to the medium’s middle grey. That’s what you all refuse to acknowledge when you use insane noise levels to justify the existence of some absolute notion of underexposure grounded in PSNR.

So you all try to redefine “well-exposed” in terms of full dynamic range fitting. “The target can be an interval”. I don’t know, usually, the only interesting part of a target is the bull’s eye…

Just stop. Well-exposed is a thing for analog shooters. Different technology, different constraints. Middle-grey to middle-grey, period.

Digital cares about clipping, which exists at both ends of the dynamic range if you use some fundamental metrology principles to lay down the problem, instead of blindly treating any signal as data 'cause science.

So use that. The key concept is clipping. Not an updated version of well-exposed. You are just confusing people even more.

4 Likes

This is all good, but let me just give you a link:

Now, look at the date. You are 17 years late. So, I’d ask you to please stop playing with the straw man. Thanks!

3 Likes

@anon41087856

There is much confusion going on here. The responses were less triggered by your concepts than by the provocative title.

The term “well exposed” in the above sense is not directly related to the “no underexposed” claim from your title. You include prior knowledge that was not mentioned by others.

The fact that there may be no absolute notion of underexposure grounded in PSNR does not mean that underexposure does not exist at all.

I find the concepts and differences in handling exposure in analog and digital photography very interesting. Unfortunately, the fruitful discussion gets lost in a dogmatic dispute about terms.

2 Likes

Thanks for your explanation! :smiley:

As has already been said, and as I tried to explain myself, it’s just a misunderstanding over some terms: what I try to get from an image that has captured a scene is a certain feeling, something that has drawn my attention. So I choose a combination of speed-aperture (I always shoot at my camera’s base ISO, which is ISO 100) that gives me the image I wish. And that look, that image, is my objective (a.k.a. my target). Here is where one misunderstanding has gone on for days. I don’t care at all about the scene middle-grey. I don’t care about my shot middle-grey. The only moment I care about middle-grey is when my camera suggests a speed-aperture combination. Then I think about that suggestion and apply some compensation to get the look I wish. In a sense, I’m moving my image bounds up and down, but I know I can’t get the whole dynamic range of the scene, so I choose what fits me best. I choose which highlights will be clipped (if any), given the higher or lower importance of the shadows in my image.

I already have a middle-grey card, but I almost never use it.

Given my previous explanation, again there is a misunderstanding. I think we are talking about the same thing, but we are arguing because we don’t use the same words and then it seems that we talk about different things.

So, given that I compensate my camera’s suggestion up or down, and given that exposure can be understood as the amount of light per unit area reaching an image sensor, the combination I set for my image (the exposure), which gives me the look I wish, is to me the right exposure for that shot. And most probably it won’t be what my camera suggested, but it’s the exposure I need.

And my target (as in my objective) is not a number; it is not the bull’s eye; it’s the feeling I have when I look at the image.

I have shot film, and it always has been the same. My camera and its settings are just a tool to get an image, not to get a perfectly exposed middle-grey.

I hope all of this makes sense.

I write these posts usually after the 4th or 5th time I have repeated the exact same thing in an email because users are lost. They are lost because, when you expose them to the pipeline after years of dumbed-down software, they suddenly don’t understand the meaning of the sliders that just appeared. They are lost because, suddenly, the input can be whatever and the output can also be whatever, so if you don’t get how they are connected, then… you can’t use the software. And HDR output is not even there… Imagine in a couple of years; I can already hear the question: “why is 100% luminance not white anymore?”

Exhume what you will from the bottom of the internet; I will still be the one they mail when they don’t understand why a “well-exposed” picture still needs an exposure boost in software.

Important concepts are being named by those words. Words influence the way we think, hence the way we understand, hence the way we act. Terms matter.

1 Like

Simply call that a goal, then. Metaphors suck. We are not doing poetry.

Goal it is then :smiley:

2 Likes

This is clearly because most people don’t know what a linear raw looks like. I had this issue for a little while as well, tbh, until I started to do a bit of research. Most “professional software” hides the linear raw.

Lightroom, for instance, behind the scenes applies not only a tone curve but also a contrast curve, a contrast slider, blacks/whites sliders, as well as an exposure compensation. It is absurd. I can guarantee that Lightroom users, while culling, toss out perfectly fine images due to all of these hidden adjustments, which force them into compensating with absurdly extreme use of the highlights and shadows sliders that they never needed to touch to begin with, causing halos and other nasty artifacts.

After I learned how, I was able to reverse the hidden Lightroom stuff and get much, much better results. You know something is wrong with the software when, after you reverse all the hidden stuff, you end up with an auto-generated reverse contrast curve, a negative contrast slider of nearly -50, and a -1 exposure on your image.

At least Capture One gives you a linear raw if you want it, but it still has halo and artifact issues if you are not careful.

This is why I think people get confused: they are used to hidden adjustments the software is making that it should not be making.

4 Likes

Yes! That’s why one should not put meaning into terms that others don’t. Communication depends on both sides.

"Doch ein Begriff muss bei dem Worte sein". But which?

2 Likes