In my four years of helping people get decent pictures and a grasp of image processing, I have often run into the same problem, one that triggers a lot of subsequent problems in sometimes sneaky ways…
Photographers keep clipping their highlights and expecting software magic to forgive their mistake.
What’s clipping?
Digital sensors (and not just in photography) can only record signals up to a certain intensity: the saturation threshold. Past that threshold, if you keep increasing the signal, the sensor will not register the increase but will keep outputting the same value, equal to its saturation threshold.
At the other end of the range, sensors also have a noise threshold, below which the sensor behaves randomly and the difference between actual data and noise is no longer noticeable.
So, to sum up, sensors are only reliable in the range between the noise and saturation thresholds, and we define these thresholds as 0% and 100% of the range (their actual values depend on the sensor and are recorded in libraries such as libraw or rawspeed).
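To make this concrete, here is a minimal numpy sketch of that normalisation, with made-up black and white levels for a hypothetical 14-bit sensor (the real values come from your camera’s metadata):

```python
import numpy as np

# Made-up levels for a hypothetical 14-bit sensor; the real values are
# sensor-specific and shipped with libraries such as libraw or rawspeed.
BLACK_LEVEL = 512    # noise threshold -> 0% of the usable range
WHITE_LEVEL = 15872  # saturation threshold -> 100% of the usable range

def normalize_raw(raw):
    """Map raw sensor values to [0, 1] between the two thresholds."""
    raw = raw.astype(np.float32)
    return np.clip((raw - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL), 0.0, 1.0)

# A scene whose true intensity keeps growing past saturation...
scene = np.linspace(0, 2 * WHITE_LEVEL, 8)
recorded = np.minimum(scene, WHITE_LEVEL)  # the sensor flat-lines at its threshold
print(normalize_raw(recorded))             # ...comes out as a plateau of 1.0
```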
Why is clipping bad?
In pictures, clipped areas are flat and contain no detail, which, in contrast with valid, textured neighbouring regions, looks odd. But that’s not the worst part.
First of all, digital sensors clip harshly, so the transition between valid and clipped areas is sharp, and you don’t get the smooth roll-off of film anymore. Expect issues with sharpening algorithms, because they will treat that transition as a detail and “enhance” it.
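A toy 1-D illustration of that point: a naive unsharp mask overshoots right at a hard clipped boundary and creates a halo, while a film-like roll-off sharpens gracefully (all values are made up):

```python
import numpy as np

def unsharp_mask(signal, amount=1.0):
    """Naive 1-D unsharp mask: add back the difference to a small blur."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    padded = np.pad(signal, 1, mode="edge")
    blurred = np.convolve(padded, kernel, mode="valid")
    return signal + amount * (signal - blurred)

# Hard, sensor-style clipped transition vs a smooth, film-style roll-off.
hard = np.array([0.6, 0.8, 1.0, 1.0, 1.0, 1.0])
soft = np.array([0.6, 0.75, 0.87, 0.95, 0.99, 1.0])

print(unsharp_mask(hard))  # overshoots to 1.05 at the clipped edge: a halo
print(unsharp_mask(soft))  # stays below 1.0: the gentle transition survives
```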
Secondly, the three RGB channels don’t clip at the same intensity. Say you photograph a nice sunset: the clouds are pinkish-red but, suddenly, close to the sun, they turn piss yellow. Well, you just clipped your red channel.
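Here is a small numpy sketch of that mechanism, with made-up white-balance coefficients: red gets the largest multiplier, so it flat-lines first while green keeps rising, and the hue drifts from red toward yellow:

```python
import numpy as np

# Illustrative white-balance multipliers: under warm light, red gets the
# largest coefficient, so it reaches saturation first after balancing.
WB = np.array([2.0, 1.0, 1.5])  # R, G, B (made-up values)

far  = np.array([0.40, 0.20, 0.18])  # cloud away from the sun, raw RGB
near = np.array([0.70, 0.55, 0.35])  # brighter cloud close to the sun

print(np.clip(far  * WB, 0.0, 1.0))  # [0.8, 0.2, 0.27]   -> red dominates: pinkish-red
print(np.clip(near * WB, 0.0, 1.0))  # [1.0, 0.55, 0.525] -> red flat-lines while green
                                     #    keeps rising: the hue drifts toward yellow
```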
Your typical sunset:
And what happens with the RGB clipping:
Your less typical backlit scene:
Oddly, it seems that, on Fuji X-Trans sensors, blue clips before the other channels (do they record some UV too?):
Why is highlights recovery bad?
The challenge of highlights recovery is that you ask the software to infer what the scene looked like when you have no data to rely on. The highlights-reconstruction algos aim at diffusing the colour from the neighbouring areas, so you still don’t get texture, but rather a “colourful void”. If you are lucky enough to have only one channel clipped, it’s relatively easy to transfer texture and structure from the valid channels, but dealing with the white balance adjustments can still make colours blow up.
These algos work OK when only one channel is clipped, or when the clipped areas are small and don’t contain details. Also, recreating artificial colours in very bright areas is almost a guarantee of out-of-gamut colours, because any colour space’s gamut shrinks at high luminance and you are allowed very little saturation.
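To give an idea of the single-channel case, here is a deliberately naive sketch (not any particular software’s algorithm): it learns the red/green ratio from the valid pixels and extrapolates it into the clipped hole, which is exactly the kind of blind guess these algos have to make:

```python
import numpy as np

def reconstruct_red(rgb, clip=1.0):
    """Guess clipped red values from the valid channels (toy version).

    Assumes only red is clipped and that the red/green ratio measured on
    the surrounding valid pixels still holds inside the clipped hole."""
    r, g = rgb[..., 0], rgb[..., 1]
    clipped = r >= clip
    if not clipped.any() or clipped.all():
        return rgb  # nothing to do, or no valid pixels to learn from
    ratio = np.median(r[~clipped] / np.maximum(g[~clipped], 1e-6))
    out = rgb.astype(np.float32)
    out[..., 0] = np.where(clipped, ratio * g, r)
    return out

# One clipped pixel among valid neighbours:
patch = np.array([[0.90, 0.60, 0.50],
                  [1.00, 0.80, 0.60],   # red clipped here
                  [0.75, 0.50, 0.40]])
print(reconstruct_red(patch))  # clipped red becomes 1.5 * 0.8 = 1.2: it carries
                               # green's structure, but it is a guess, not data
```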
My opinion is that they are all hacky and the results are often ugly. I much prefer having clipped areas transition smoothly toward pure white. There is little we can do in software to salvage badly clipped pictures in a visually plausible way.
How to overcome the issue?
Simple: don’t clip your pictures.
Yeah, but how?
Keep in mind that your camera is designed to give you good-looking JPEGs straight out. When you use the automatic exposure modes (A, S, P), it exposes for the midtones, adds a film-like S curve to smoothen transitions and add contrast, and you get a good-looking JPEG out of the box.
If you plan on post-processing your pictures yourself, you need to expose for the highlights. Fuji cameras do that by default (then people complain their raws look very dark: Fuji just saved your highlights, guys, stop whining and push that tone curve) and you get highlight-weighted spotmeters in recent Nikon and Sony high-end cameras (probably Canon ones too, by now – I haven’t read GAS-inducing websites for years and completely stopped caring about new cameras).
But most cameras still don’t let you do that. In this case, you might be tempted to switch to manual exposure or set an exposure compensation in A, S, P mode. Just don’t do it by looking at the clipping alert or at the histogram on the back of your camera, because these stats are computed on the JPEG produced by the firmware, after all its corrections.
There is nothing in your camera that lets you see what the actual raw data look like, so all you can do is try different exposure compensations, open the raws in your editor, see what the clipping looks like, and learn how your lightmeter and auto exposure behave in different situations.
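If you don’t mind a bit of scripting, the rawpy bindings for libraw let you run that check on the raw file itself; a sketch, with a hypothetical file name:

```python
import numpy as np
import rawpy  # Python bindings for libraw: pip install rawpy

# Count truly clipped photosites per CFA channel, straight from the raw data,
# instead of trusting the JPEG histogram on the back of the camera.
with rawpy.imread("DSC_1234.NEF") as raw:   # hypothetical file name
    data = raw.raw_image_visible
    colors = raw.raw_colors_visible         # CFA colour index of each photosite
    for i, name in enumerate(raw.color_desc.decode()):  # e.g. "RGBG"
        channel = data[colors == i]
        pct = 100.0 * np.mean(channel >= raw.white_level)
        print(f"{name}: {pct:.2f}% of photosites clipped")
```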
When in doubt, under-expose. Landscape situations? Remove 0.3 to 1 EV. Backlighting? 1 to 2 EV. Keep in mind that, for cameras produced after 2013–2014, we can squeeze a lot of detail from the shadows out of these babies, and we get better and better algos to denoise them. But reconstructing missing parts is still a challenge.
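The arithmetic behind those numbers is simple: each EV of negative compensation halves the light reaching the sensor, i.e. buys one extra stop of highlight headroom:

```python
# Exposure compensation is multiplicative: each EV is a factor of 2.
for ev in (-0.3, -1.0, -2.0):
    print(f"{ev:+.1f} EV -> x{2.0 ** ev:.2f} exposure, "
          f"{-ev:.1f} stop(s) of extra highlight headroom")
# -0.3 EV -> x0.81, -1.0 EV -> x0.50, -2.0 EV -> x0.25
```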
The only parts you are allowed to clip are the sun and light bulbs. But keep in mind that they will still clip harshly. Then, in software, to get smooth transitions around the bulbs, desaturate progressively toward white-ish in the highlights, and tweak your film-emulation curve (S curve, tone curve, base curve, filmic, whatever) to get a smooth roll-off in the highlights.
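As an illustration of both ideas, here is a minimal sketch with made-up thresholds: a desaturation that blends colours toward their own luminance as they approach white, and a shoulder that rolls off asymptotically to 1 instead of hitting a hard ceiling:

```python
import numpy as np

def desaturate_highlights(rgb, start=0.8):
    """Blend colours toward their own luminance above `start`, so bright
    areas fade to neutral instead of ending on a hard colour edge."""
    rgb = np.asarray(rgb, dtype=np.float64)
    lum = np.einsum("...c,c->...", rgb, np.array([0.2126, 0.7152, 0.0722]))
    t = np.clip((lum - start) / (1.0 - start), 0.0, 1.0)[..., None]
    return rgb * (1.0 - t) + lum[..., None] * t

def soft_shoulder(x, knee=0.7):
    """Tone curve: linear below the knee, asymptotic to 1.0 above it,
    so highlights roll off smoothly instead of hitting a hard ceiling."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= knee,
                    x,
                    knee + (1.0 - knee) * (1.0 - np.exp(-(x - knee) / (1.0 - knee))))

print(desaturate_highlights([1.0, 0.9, 0.8]))    # pulled toward neutral white
print(soft_shoulder(np.array([0.5, 0.8, 1.2])))  # 1.2 lands near 0.94: no hard clip
```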