There are no underexposed digital pictures

Yes! That’s why one should not put meaning into terms that others don’t. Communication depends on both sides.

"Doch ein Begriff muss bei dem Worte sein". But which?

2 Likes

I think it was actually you who introduced the “target”, “18%” etc. discussions, through the filmic interface and various discussions and tutorials. Users who then tried to figure out filmic naturally phrased their questions with the established concepts and vocabulary. I mean, it’s literally the names you’ve assigned to the main sliders of filmic! If any great number of people email you using those terms, it’s because you established them in the software. They are only trying to adapt to the “correct” concepts and words established there. I doubt any of them, if their geek/photographer balance leans toward the latter, would have used those concepts otherwise.

As pointed out by various posts and links to old material and Wikipedia, most photographers today conceptualize exposure close to your “new” way of thinking. It’s pretty obvious, because after taking a few photos you’ll figure it out. If you don’t figure it out yourself, any tutorial on the internet will set you straight. Perhaps manufacturers and their gear don’t think like that, but that hasn’t stopped photographers.

Again, no one but you in this discussion conceptualizes exposure as a fixed-number target. You’ve carried on an 80-post discussion not reading posts but talking to a mental image you’ve formed from other discussions.

1 Like

“Middle gray” simply represents the mid-tone in the image, the thing you want to be depicted as the “middle tone”. All of the other tones are then splayed left and right from there. What @anon41087856 is saying is that, for a single image, where that is captured in the camera’s range from the “noise carpet” (I’ll use that term, carpets are softer than floors… :slight_smile: ) to the hard sensor saturation point is meaningless until the final tone conversion for display or output. Because, that’s where the intended “mid-tone” has to be at a perceived mid-point…

1 Like

Yes, but he’s arguing against a straw man. No one here seems to care about middle grey; I can’t see any believers in “correct exposure” (as related to middle grey) anywhere. The Wikipedia definition is the one commonly held.

See also the definitions of over and under exposure. All defined subjectively.

3 Likes

@anon41087856 I sympathize with your plight as a developer and providing support for your software. Generally, users are pretty uninformed and expect miracles…
However, this is a forum for photography enthusiasts who honestly wish to learn and become better photographers. So please, as @Thomas_Do said, don’t let “the fruitful discussion get lost in a dogmatic dispute about terms.” We’ve lost Troy Sobotka and Elle Stone (partly) due to conflicts in finding a reasonable ground for discussion and learning. I sincerely hope this won’t happen with you or anyone else in the community.
I think you are a great teacher in many respects, and I have learned a lot from you. I admire your work and dedication to the development of darktable. I even believe you are absolutely right for the vast majority of cases. But as a high school teacher myself, please, if people don’t get what you mean and a heated dispute arises with lots of confusion, consider two things: highlight where people are actually right (reinforce the good things!) and find other ways to correct people’s mistakes instead of simply repeating yourself. I don’t like to learn anymore when someone shouts that I am simply wrong or that I simply don’t understand.

That being said, onwards to more discussion! Bring it on :smiley:

8 Likes

Replace “gray” with “tone”.

The most prevalent definition of “correctness” regarding exposure is the placement of the tone that would be “middle gray” at the center of the human-oriented histogram. That’s what the spot meter in your camera is calibrated to.

We all want the subject to be “just bright enough” in the images we regard, else we complain the photograph is ‘too dark’ or ‘too bright’. Where the mid-tone ends up in the rendition is of supreme importance. We lament our crushed highlights and noisy shadows, and all that angst is really anchored on where we want the mid-tone to end up.

Musings of a demented elder gardener… :smiley:

1 Like

I think in recent days the thing that helped me understand the whole SNR/clipping/mid-tone concept the most was when I started to shoot macro. The main reasons were having extremely little light to work with, as well as a very low DR. Then I analyzed the linear raw and how to manipulate it in a raw processor. It is a great way to see how middle tones behave, because you have total control over everything during the shot and are not fighting with nature. This also helped me understand the differences between the preview and the raw (and thus my camera), because the difference is extreme in such a case. Plus, shooting with manual flash means the camera’s meter does not dictate the “assumed exposure”; it just helps with the flash setting.

I think the best way to communicate these sorts of concepts to users is to always show a raw histogram, so they can see what they’re actually getting off the sensor.

In RawTherapee you can click a (teensy-tiny little) button to show the raw histogram, but in Filmulator I’ve put one in the sequence of mini-histograms that shows what the pipeline is doing to the image data.

It’s the uppermost of the three smaller histograms, and it shows that this image is definitely underexposed by almost two stops relative to optimum.

2 Likes

Averaging noisy data can reduce measurement uncertainty but not improve accuracy. If the noise is white, i.e. the noise density is constant across the frequency spectrum, then averaging N samples will increase the SNR by sqrt(N). Averaging is low-pass filtering: the high-frequency noise is attenuated while the low-frequency signal components remain unaffected. The cost you pay is that the high-frequency signal components get attenuated too. Applied to image processing, this means that spatial details get lost while the noise in uniform areas is reduced. Applying a gaussian blur with radius 10 to the image from a few posts above yields:
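The sqrt(N) behaviour is easy to check numerically. A minimal sketch with numpy (a simulated constant signal plus white noise; all values are arbitrary, not taken from the image above):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0                      # constant "pixel" value
sigma = 0.5                       # white noise standard deviation; base SNR = 2

for n in (1, 4, 16, 64):
    # n independent noisy measurements of the same signal
    frames = signal + sigma * rng.normal(size=(n, 100_000))
    avg = frames.mean(axis=0)
    snr = signal / avg.std()      # SNR of the averaged result
    print(f"N={n:3d}  measured SNR ~ {snr:6.2f}  predicted sqrt(N)*SNR ~ {np.sqrt(n) * signal / sigma:6.2f}")
```

Each 4x increase in N halves the noise of the average, so the printed SNR roughly doubles at every step.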

Averaging is the key to oversampling delta-sigma ADCs. They use a 1-bit ADC and massive oversampling to get 20 bits or more of output resolution. See http://www.ti.com/lit/an/slyt423a/slyt423a.pdf for an explanation.
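As a toy illustration of that idea (deliberately ignoring the noise-shaping feedback loop that makes real delta-sigma converters efficient), averaging the output of a dithered 1-bit quantizer recovers the input with far more than one bit of resolution; the input value and dither range below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.123456            # "analog" value to digitize, |x| < 1
n = 1_000_000           # oversampling factor

# 1-bit ADC: each conversion only tells us "above" or "below" a dithered threshold
dither = rng.uniform(-1.0, 1.0, size=n)
bits = np.where(x + dither > 0.0, 1.0, -1.0)

# averaging the 1-bit stream recovers the value with many bits of resolution
estimate = bits.mean()
print(f"true {x:.6f}  estimate {estimate:.6f}")
```

The expected value of each bit is exactly x (for uniform dither on [-1, 1]), so the mean converges to the input as n grows.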

Information isn’t lost but masked by noise. The low frequency features can be recovered by low pass filtering. Intentionally clipping samples at the noise level removes recoverable information masked by the noise.
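That claim can also be sketched numerically: a small constant signal buried in much larger noise is recovered by averaging, but zeroing out every sample inside an aggressive noise gate (the 2-sigma threshold below is a made-up choice, as are all the numbers) biases the recovered value:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.2                        # small signal, well below the noise level
sigma = 1.0
samples = signal + sigma * rng.normal(size=200_000)

# low-pass filtering (here: a plain average) recovers the masked signal
recovered = samples.mean()

# zeroing out every sample inside a "noise gate" throws away the information
gated = np.where(np.abs(samples) < 2 * sigma, 0.0, samples)
biased = gated.mean()

print(f"true {signal:.3f}  averaged {recovered:.3f}  gated-then-averaged {biased:.3f}")
```

The averaged estimate lands very close to 0.2, while the gated one is pulled far off, because most of the samples carrying the signal were inside the gate.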

Concerning exposure: IMO exposure should be chosen to get the maximum possible SNR without clipping important features of the image. Avoiding motion blur may dictate to sacrifice some SNR. It’s all about finding a good compromise and “good” is highly subjective. I agree with you that in digital we have way more freedom choosing exposure than back in the analog days.

5 Likes

I oppose nothing in that document; it is good, and everyone should have a very good read of everything in there. What I am opposing is your interpretation of it, which boils down to

And

You can easily test this with simple code; other people in this thread have given examples of and arguments for it. Your statement is a factually wrong claim. Repeated measurements (given repeatability of the setup) of a small signal (the measurand) in large random noise yield the measurand (your signal) with a margin of error (which should be given, even as an estimate, for any measurement anyway).

If a reconstruction yields the measurand with a low enough statistical error, there is a strong argument for its ‘presence’ within the data. The quality of this reconstruction is the topic of whole journals in research.

I do not think so and you could not convince me of it.

This.
And I might add that low-pass filtering is a simple filtering concept. More complex approaches exist, as we know from the vast field that is denoising/noise filtering. Low-pass filtering serves as a nice example, though, that everyone can understand and follow.

I wish you all a nice Sunday.

3 Likes

It’s been very clear that this thread has had a lot of unnecessary hiccups and animosity due to a lack of consensus on terminology.

Aurelien has brought a lot of insight to this community over the last several months, together with a huge amount of work on dt. However, the title of this thread has clearly triggered a lot of people (silently including myself).

So if I go outside right now and take a photo at f/16 and 1/500 second with no flash, never mind the ISO (it’s about 10:00 pm right now where I sit), it would be legitimate to call me silly or stupid. But what would you call the photo? I would say the sensor wasn’t, well, you know, “exposed” to enough light to make a useful photo. @anon41087856, what would you call it? I’m not asking this in a confrontational sense; I just want to break down the terminology barrier.

Looking at what Aurélien said:

There is no such thing as a well-exposed / under-exposed / over-exposed picture in digital. There is only sensor clipping or no sensor clipping, that is all.

My interpretation is that seeing as there is no “correct” exposure, i.e. no one objectively optimal exposure setting for any given scene, there can therefore be no under or over exposure.

An exposure that clips the highlights isn’t “over exposed”; it’s been exposed to retain more detail in the shadows. An exposure that loses some of that detail in the shadows isn’t “under exposed”; it’s been exposed to protect the highlights.

The part that follows, differentiating between the linear digital capture and the mapping that is baked into film, is critical here: in order to optimise that baked-in mapping, film has an optimal/target exposure. In digital, however, you can apply positive or negative gain with little consequence.

Ultimately that appears to be the point here: on film, underexposure meant you were burying the scene in the low end of the curve, which had all sorts of nasty effects on things like saturation and contrast in your image. It seems as though Aurélien has encountered a lot of people who think that having to boost their digital RAW files by two stops means their photo must have been underexposed, and that they must therefore be suffering the same detrimental effects, and that is not the case.

Of course if you’ve failed to capture the detail you intended to in either the light or dark areas of the image it seems reasonable enough to call that over or under exposed respectively, but that’s just me.

3 Likes

Agreed, and this is unfortunately where we have somehow been getting into needless conflicts. Not one person in this thread has been saying there is one objectively optimal exposure.

That’s a non sequitur, and is probably at the root of the problem. I think we can generally agree that there can be a reasonably wide range of exposures that are acceptable for one person or another’s artistic objectives (edit for clarity: and a range of exposures, surrounding a given person’s ideal, from which they can rescue their photo to get what they originally wanted). People prove that daily with their different interpretations in PlayRaws. But the fact that reasonable exposure is a (somewhat subjective) range rather than a point does not imply that being well outside of that range is not under or over exposed.

Yep, me too.

2 Likes

most likely I think.

2 Likes

Outside the context of general digital photography this is not always the case. There are specialized cameras for industrial applications with a logarithmic response, for example the NIT cameras.

About the discussion: if you underexpose severely, you risk making the sensor’s fixed-pattern noise noticeable (especially if the sensor is a small one). Aside from that, you may encounter another problem: tinted shadows in some RAW editors like RawTherapee.

There is no such thing as Lightroom’s rendering, nor RawTherapee’s rendering, nor darktable’s rendering. A glance at Play Raw is enough to see that different people process RAW files differently with the same program. Something else: for some photographers the RAW editor is not enough, so they perform additional processing in other programs. The Highlights and Shadows sliders are not mandatory (I don’t use them), nor are they endemic to Lightroom/Camera Raw; similar technology is implemented in many RAW editors. In particular, RawTherapee offers more tools with a spatially varying effect than Camera Raw, if you ignore the local editing capability of the latter.

1 Like

There is, if you understand it as the default processing the software applies to the raw, with all sliders set to 0. I mean loading the neutral profile in RT, doing the same in dt (however it is done there), and setting all sliders in LR to 0. The starting results are different.

But I think this is a discussion for another thread.

1 Like

To me, if a photo needs more brightness in post, it is underexposed; but it’s not a mistake by the photographer (as it would be in analogue photography), it’s instead a smart strategy for the subsequent digital tone manipulation.

2 Likes

Your audio example reminded me of how dither adds noise to reveal signal.
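For the curious, that effect is easy to reproduce: a signal smaller than one quantization step vanishes under plain rounding, but survives (on average) when uniform dither is added before quantizing. The values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.3                 # signal smaller than one quantization step (1 LSB)
n = 100_000

# without dither: rounding always gives 0, the signal vanishes entirely
no_dither = np.round(np.full(n, x)).mean()

# with 1-LSB uniform dither: each rounded sample is 0 or 1,
# but their average converges to the true sub-LSB value
dither = rng.uniform(-0.5, 0.5, size=n)
with_dither = np.round(x + dither).mean()

print(f"no dither: {no_dither:.3f}   with dither: {with_dither:.3f}")
```

Without dither the quantization error is perfectly correlated with the signal, so no amount of averaging recovers it; the dither decorrelates the error and turns it into benign noise.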

LOL, I think you’re talking about me here! :stuck_out_tongue:

But the more I interact with people from other countries, the more I realize a lot of post-high-school education in America is a joke. My MS in physics has about as much rigor to it as a YouTube comment section. Honestly it was partially my fault and partially the system’s fault. But our schools are treated like a business, so they graduate people who probably should have been bounced out in year two. I went to a medium-sized public school (name withheld to protect the innocent, guilty or something), so it wasn’t a total degree mill.

So yes, degrees don’t mean much. Certain countries more than others.

2 Likes

I especially like Randall Munroe’s version of Lewis Carroll:
When I Use A Word

6 Likes