Of course. But typically at least some features of the subject will be within ±1.5 EV of middle gray on average, otherwise you lose contrast. Black cat photography is a good example: note the eyes, whiskers, and the reflected light on shiny parts of the fur that give the texture.
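For reference, ±1.5 EV around 18% middle gray works out to linear reflectances between roughly 6% and 51% — a quick sanity check (numbers only, nothing camera-specific assumed):

```python
# What +/- 1.5 EV around 18% middle gray means in linear terms
middle_grey = 0.18
print(middle_grey * 2 ** 1.5)   # ~0.51 -> +1.5 EV above middle gray
print(middle_grey * 2 ** -1.5)  # ~0.064 -> -1.5 EV below middle gray
```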
Yes, of course. My understanding was that this whole discussion is about global mappings.
Typically I like to have everything that is relevant in an image within 5–6 EV, while keeping local contrast in each part. This can be done using a combination of the tone equalizer, contrast equalizer, and diffuse and sharpen.
Don’t know about the details in this standard, but whether it “allows” it or not, the last few bits of each pixel in a RAW file are noise, and there is nothing we can do about this.
Even if one could eliminate noise from the camera circuits, above a certain bit depth you start counting photons.
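To sketch why that is: photon arrival is Poisson-distributed, so the shot-noise SNR of a photosite is roughly the square root of the number of photons it collects. A minimal simulation with made-up photon counts, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean photon counts: a deep shadow vs. a bright highlight
for mean_photons in (16, 40_000):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    # SNR is roughly sqrt(mean_photons): ~4 for 16 photons, ~200 for 40k
    print(f"{mean_photons:>7} photons -> SNR ~ {snr:.1f} (sqrt = {mean_photons ** 0.5:.1f})")
```

No amount of circuit design removes that part of the noise; it is in the light itself.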
Your practical dynamic range also depends on the noise reduction algorithm you are using and how much loss of detail you are willing to tolerate.
For the purposes of this discussion (tone mapping): sensors produced in, say, the last 5 years usually have at least 10 EV usable range with modern noise reduction algorithms if you are not very picky.
Can we be less vague, please? The ISO Standard clearly defines the lower limit of the dynamic range as the point where the noise equals the mean signal, i.e. where SNR = unity.
If “the purposes of this discussion” exclude the ISO definition of the dynamic range of a digital camera, then do feel free to pick any number of bits you like as the noise level …
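To make the ISO-style definition concrete, here is a back-of-the-envelope calculation. The full-well and read-noise figures below are invented, not from any particular camera; the engineering dynamic range in EV is log2(full-well capacity / noise floor), with the floor taken roughly where SNR = 1:

```python
import math

# Hypothetical sensor figures -- substitute your camera's measured values
full_well_e = 50_000   # electrons at saturation
read_noise_e = 3.0     # electrons RMS; SNR ~ 1 around this signal level

dynamic_range_ev = math.log2(full_well_e / read_noise_e)
print(f"engineering DR ~ {dynamic_range_ev:.1f} EV")  # ~14 EV for these made-up numbers
```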
My thinking was that a highly saturated photosite is the whites and a very low saturated photosite is the blacks (notwithstanding the noise floor). So the sensor has either recorded whites and/or blacks or it hasn’t. If all I have captured is midtones (which is not at all uncommon), then unless I want to map my brightest midtone to white and my darkest midtone to black, don’t I want to set my white and black point to the actual white and black point of the sensor?
EDIT: Or have I just stumbled upon the answer to my own question, in that, just like colour, the white and black points are set at the whim of the photographer for artistic purposes, to suit taste. White is where I say it is?
I know a sensor deals with converting the energy of photons into an electrical charge; what I mean by ‘saturated’ is a photosite that has generated the maximum electrical charge. If the A/D stage generates values from 0 to 1023 (say), then I mean 1023.
If I set my shutter to bulb mode for a really long time (let’s be really daft and say for 1 hour), will I not always get a fully white image? The maximum value that the photosite can record is always white, isn’t it?
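In a linear raw pipeline that intuition roughly corresponds to clipping at the white level: anything at or above the saturation charge is recorded as the maximum code and can only render as white. A very rough, hypothetical sketch (the black level, gain, and 10-bit range are made up):

```python
import numpy as np

# Hypothetical 10-bit A/D: codes 0..1023, with a black-level offset as many cameras use
black_level = 64
white_level = 1023

def photosite_to_code(electrons, gain=0.25):
    """Very rough model: collected charge -> raw code, clipped to the A/D range."""
    code = black_level + electrons * gain
    return np.clip(np.round(code), 0, white_level)

print(photosite_to_code(10_000))  # far beyond full well -> 1023, i.e. clipped "white"
print(photosite_to_code(0))       # no light -> the black level, the darkest the sensor reports
```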
Slap enough ND filters on there and the answer is “no” – I have done 15-minute exposures in midday sun before. It only took something like 15 stops of ND filter.
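The arithmetic checks out if you assume a base exposure somewhere around 1/30 s (small aperture, low ISO – an assumption, not a measured value):

```python
# Rough check of the exposure arithmetic (assumed base exposure, not a measured one)
base_exposure_s = 1 / 30      # hypothetical midday exposure at a small aperture and low ISO
nd_stops = 15

long_exposure_s = base_exposure_s * 2 ** nd_stops
print(f"{long_exposure_s / 60:.0f} minutes")  # ~18 minutes, in the right ballpark
```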
Sure. But my point was that when a photosite is maxed out, it will always produce white, will it not? Minimum registered light is black, maximum registered light is white. No?
And if I am trying to post-process the image to reflect reality (which maybe I want to do), then the white and black points would need to be set to match the dynamic range of the camera, wouldn’t they (assuming I set mid grey correctly)? Would that not at least be a very good and sensible starting point, since I can always lower the white point or raise the black point if I want? There would be no point in expanding the white/black points beyond that, because that would be beyond the dynamic range of the sensor?
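For what it’s worth, this is essentially what the raw decoding step already does: the sensor’s black level is mapped to 0 and the clipping (white) level to 1 before any tone mapping happens, so the sensor’s own black and white points are normalised away very early. A hypothetical sketch of that normalisation (the 64/1023 levels are invented example values):

```python
import numpy as np

def normalise_raw(raw_codes, black_level=64, white_level=1023):
    """Map raw A/D codes to [0, 1]: black level -> 0, clipping level -> 1."""
    scaled = (raw_codes.astype(float) - black_level) / (white_level - black_level)
    return np.clip(scaled, 0.0, 1.0)

codes = np.array([64, 300, 700, 1023])
print(normalise_raw(codes))  # [0.0, ~0.25, ~0.66, 1.0]
```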
I usually don’t mess with the black and white points. There are better ways to control contrast. I use filmic to control the overall contrast of my images and tweak other values using Tone Equalizer.
The dynamic range of the sensor matters most at capture time. Once you’re in a processing pipeline, it’s up to the processing pipeline, and in fact if you want to maximize editing potential, fiddling with the white/black points probably isn’t ideal.
I find the use of the terms “white” and “black” to describe the degree of charge of a sensor’s photocell in this thread to be quite odd. So much so that the gist of this darktable-speak totally escapes me, sorry to say …
Well, sometimes you have to simplify to have a constructive conversation, or else you start every conversation from the invention of electricity and go from there.
With the subject being technical, the use of oversimplified terms obfuscates the discussion to the point where someone accustomed to normal terminology, like myself, can barely understand the issue.
Or … just invent new phraseology for something for which everybody else uses normal well-understood terminology.
Like “when a photosite is maxed out, it will always produce white” … come on …
And this is not a scientific forum. While using consistent, well-defined, and scientifically correct terminology is useful, such terminology also needs to be learned and understood, which is not a given for newer members.
Also, misusing jargon can be worse than not using the strictly correct terms.
The ISO standard may be very useful for engineering purposes, but from the practical perspective of a photographer, “dynamic range” is context dependent, and is a property of the sensor, the whole processing pipeline, and what you are doing with the image.
Consider an image you have exposed to the right, and imagine you are trying to brighten the shadows. How much of this you can do depends on the denoising algorithm, the detail in the shadows, and how picky you are about noise. E.g. if you are looking at a more or less smooth, homogeneous surface and you are fine with aggressive denoising, you can recover that area even if it is within 2 EV of the noise floor, or even closer if you are shooting B&W. OTOH, if the area has fine detail in both luminosity and chroma and you don’t want to denoise aggressively, then you need at least 3–4 EV of extra headroom. More if you plan to use algorithms on that area that amplify noise (e.g. diffuse & sharpen).
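One way to make that concrete is a simple photon-noise model of the shadows: how many EV below clipping an area sits largely determines its SNR, and hence how much denoising it needs before you can lift it. A rough sketch with invented sensor figures (not any specific camera):

```python
import math

# Hypothetical sensor: invented numbers, purely to illustrate the trade-off
full_well_e = 50_000   # electrons at clipping
read_noise_e = 3.0     # electrons RMS

def shadow_snr(ev_below_clipping):
    signal = full_well_e / 2 ** ev_below_clipping
    noise = math.sqrt(signal + read_noise_e ** 2)  # shot noise + read noise
    return signal / noise

for ev in (6, 10, 12, 14):
    # Brightening shadows doesn't change their SNR, it just makes the noise visible,
    # so a low-SNR area needs heavy denoising -- and fine detail won't survive that.
    print(f"{ev:>2} EV below clipping: SNR ~ {shadow_snr(ev):.1f}")
```

With these made-up numbers an area 6 EV down still has an SNR around 28, while at 12–14 EV down it drops to the SNR ≈ 1–3 region, which is exactly where the “how picky are you” and “how much detail is there” questions decide your practical dynamic range.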