I completely embrace this statement.
Weird, here it is again. Hopefully it's good now!
_6I_9419.CR3.xmp (15.4 KB)
This time it worked, thank you. I wanted to take a look at your greens.
_6I_9419.CR3.xmp (15.7 KB)
Oh wow! This is such a good example of how much difference the crop can make. A perfect fit for a textbook on composition. Beautiful. (The asymmetric frame is a bit weird, though…)
There is actually a very easy way to estimate how much data loss occurred in an image: compare the file size against that of an image with a similar range of colors and no clipping. When you overexpose to the point of blooming, you get patches of a single solid color even though there was detail there to capture.
A good way to tell whether an image is fundamentally "blown out" versus just slightly overexposed is to examine things like trees or other fine details in the background, especially if they're comparatively out of focus. The sky is often the first thing to clip in daylight photography, and if there's pixel bloom, you won't see smooth, accurate edges and colors on fine details; instead, the transition between the bright sky and the dark trees will look jagged or "chewed up".
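That check can be roughed out in a few lines. A minimal sketch (my own illustration, not anyone's actual tool), assuming an 8-bit RGB image already loaded into a numpy array: it reports what fraction of pixels are fully clipped, which is a decent proxy for those solid blown patches.

```python
import numpy as np

def clipped_fraction(img: np.ndarray, white: int = 255) -> float:
    """Fraction of pixels where every channel sits at the clip point.

    img: H x W x 3 uint8 array (8-bit RGB). A large fraction of
    all-white pixels suggests blown highlights rather than detail.
    """
    blown = np.all(img >= white, axis=-1)  # True where R = G = B = 255
    return float(blown.mean())

# Synthetic example: a 100x100 frame whose top quarter is blown out.
frame = np.full((100, 100, 3), 128, dtype=np.uint8)
frame[:25, :] = 255
print(clipped_fraction(frame))  # → 0.25
```

On a real photo you would load `img` from a file first; the point is just that clipped regions are trivially countable once you know the white point.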
I will also mention (although I use RawTherapee instead of Darktable, so your mileage may vary) that "clipping" doesn't always relate to luminance. You could be seeing out-of-gamut warnings due to your selected colorspace. If you're able to easily select other color spaces, try ProPhoto, or another modern wide-gamut color space, and see if the number of "clipped" warnings decreases.
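For anyone curious what an out-of-gamut check actually does under the hood, here is a minimal sketch. The matrix is the standard XYZ (D65) to linear sRGB transform; the two sample colors are made up for illustration. A color is out of the sRGB gamut when any linear component falls outside [0, 1].

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def out_of_srgb_gamut(xyz) -> bool:
    """True if the XYZ color cannot be represented in sRGB."""
    rgb = XYZ_TO_SRGB @ np.asarray(xyz)
    return bool(np.any(rgb < 0) or np.any(rgb > 1))

# A neutral mid gray (scaled D65 white point) is comfortably in gamut...
print(out_of_srgb_gamut([0.1901, 0.2000, 0.2178]))  # → False
# ...while a very saturated green would need a negative red channel.
print(out_of_srgb_gamut([0.2, 0.6, 0.1]))  # → True
```

A wide-gamut working space like ProPhoto can hold colors like that second one just fine, which is exactly why switching spaces changes how many "clipped" warnings you see.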
Finally, the best advice for all things image quality is to fix the problem while you're right there taking photos. I almost always keep the RGB highlight warning active as the image review option (I still use a DSLR, so no WYSIWYG electronic viewfinder with overexposure warnings for me). With most of my lenses, I know what my EXP+/- needs to be set to in order to avoid meaningful overexposure: vintage fast primes like the 85mm f/1.4 AI-s or 135mm f/2 AI-s are around -3.0, and the cheaper a lens is, the less I need to underexpose to avoid clipping. The '80s-vintage 50mm AF f/1.8D is around -2.0 to -2.3, and the 35mm AF-S DX f/1.8 is around -1.3 or -1.6. Not that it really matters, as only psychopaths like me, or serious videographers, can recite lens transmission numbers, but the difference in EV is directly correlated to the t-stop of the lenses. Once AF lenses hit the market, and especially once plastic-fantastic consumer crop lenses came out, virtually all brands stopped even trying to get their lenses to perform to their f-stop. Personally, I think it's because they figured out camera/gear reviewers are too lazy or clueless to test the most important part of a lens: how much light it actually emits from the rear element, and how much of the light expected to land within an arbitrary pixel size/shape (let's say 3 µm and 6 µm) hits the correct pixel at different apertures. That specific test (measuring the Airy disk characteristics alongside general diffusion) also exposes one of mirrorless camera designs' fundamental flaws: the extremely acute angle at which light hits the edge of the sensor. Anyone wondering why Nikon had such a deep flange depth for the F-mount: it is a "free" vignette and edge-sharpness correction.
A year or so ago, someone (I think it was at Digital Camera World) wrote a bunch of articles lauding the Fujifilm X-T5, about how it was a massive step forward (due to the cranked-up megapixel count) and how every other crop-sensor camera was obsolete… until they went to take some landscape photos and all their fancy lenses lost noticeable sharpness once stopped down past f/5.6, because the pixels are so small that the Airy disk created by the lenses is larger than the pixel pitch, causing visible diffraction. That's why Micro 4/3 only kept 20MP+ sensors in their cameras for a generation or two: they were definitely trying to sneak into the same market bracket as APS-C cameras by advertising similar megapixel counts, but dropped back under 20MP for their (lmao) "flagship" cameras almost immediately, creating this really bizarre situation where all their entry-level, plastic-fantastic cameras had sensors with almost 50% more pixels, and all their $1000+ high-end models seemed like they had much worse sensors (through the eyes of a novice or amateur who doesn't know better).
Both RT and DT have ways to see true clipping of the raw data: RT has the raw histogram and might have a raw clipping indicator, and DT does. Clipping beyond that is introduced by the processing, and that can be visualized in the waveform or using the indicators.
Excellent diatribe, except that I could crop half the data from an image and the detail level would remain about the same.
Begs the question: "What is detail in this topic, and how is it measured?"
_6I_9419 (1).CR3.xmp (22.6 KB)
That's debatable. I'm talking specifically about how raw files are actually created, and the data/structure demosaicing algorithms need. If a completely overexposed (and blooming) 16-pixel square is just reading 255 across the board, all the processing done from sensor readout/reset, through the DSP, to having a JPEG embedded into the raw, etc., can completely ignore that entire 4x4 pixel area except for the most rudimentary data. "It's white, the whitest white you can get" takes up less data than colorful, high-contrast areas where there isn't any clipping. So it's less data, because there is less detail in the image. Replace overexposed pixels with "burned out, dead pixels" and you get the same exact situation: slightly less data recorded to the raw, and the data that is missing can be easily located using clip warnings, etc.

So yeah, I think you may be touching the third rail of image processing (and all analog-to-digital data): claiming that lossy compression doesn't actually lower quality, whether it's photographs, sound, or smellyvision. There's no way they can cut smellyvision data and retain all the subtle notes and flavors those stinky TVs promise us. Unless you're just being cute and you're just going to crop the image to 50% L x W. Which, technically, yes: half the data, same great taste.
Or, I suppose, if you're using certain formats like PNG, where additional layers can be created during conversion: you can definitely create a transparent layer (RGBA format), then remove the A, and not lose any core image data.
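That round trip is easy to demonstrate. A minimal sketch on a raw RGBA array (Pillow's `Image.convert("RGB")` does the equivalent on actual PNG files): dropping a fully opaque alpha plane removes a quarter of the bytes while leaving every color value untouched.

```python
import numpy as np

# A small synthetic RGBA image: random colors, fully opaque alpha.
rng = np.random.default_rng(0)
rgba = rng.integers(0, 256, size=(64, 64, 4), dtype=np.uint8)
rgba[..., 3] = 255  # the alpha plane carries no information here

rgb = rgba[..., :3].copy()  # drop the A channel

print(rgba.nbytes, rgb.nbytes)             # → 16384 12288
print(np.array_equal(rgba[..., :3], rgb))  # → True (no color data lost)
```

The alpha channel here is pure redundancy, so discarding it is genuinely lossless for the color data, which is the exception rather than the rule.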
All that said, if there were a downsampling algorithm or process that threw away "unnecessary" data with no (or no noticeable) loss of quality, Canon would be dancing in the streets, because 4:2:0 video might actually let their customers use the R5 at its maximum video settings for more than 15 minutes before it bricks itself for an hour.
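For reference, the data savings of 4:2:0 are easy to work out. A minimal sketch (synthetic YCbCr planes, chroma averaged over 2x2 blocks, which is one common way to subsample) showing that 4:2:0 carries exactly half the samples of full 4:4:4:

```python
import numpy as np

def subsample_420(y, cb, cr):
    """Average each chroma plane over 2x2 blocks (4:2:0 subsampling)."""
    def pool(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, pool(cb), pool(cr)

h, w = 1080, 1920
y, cb, cr = (np.zeros((h, w)) for _ in range(3))

y, cb, cr = subsample_420(y, cb, cr)
full = 3 * h * w                   # samples in 4:4:4
sub = y.size + cb.size + cr.size   # samples in 4:2:0
print(sub / full)  # → 0.5
```

Luma keeps full resolution while each chroma plane drops to a quarter size, so the stream halves; the quality cost lands exactly where you'd expect, on sharp color edges.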
RawTherapee also provides "soft proofing", which clips out-of-gamut colors. It's definitely not a popular discussion topic compared to luminance/exposure clipping, so I figured I'd mention it. The reason it seems plausible here is that the sky is very often the first area of an outdoor, daytime image to reach saturation, and the sky in the photo looks pretty good: the transition between the sky and the trees seems clean and well defined. It just doesn't have the look of an image that was full of clipping warnings. But I am also saying this after only looking at the posted photo; I didn't download the image to dig deeper.
Uh what? Are you OK?
I have said too much already… I happened to overhear some tech bros discussing the next vaporware for when venture capitalists get as sick of AI as they are of crypto and VR. We're gonna have TVs that let us smell the scene on the screen. Well, once we harness cold fusion to power it all, of course.
Looks like you uploaded the raw instead of the xmp…