Photographers: stop clipping your highlights

In my four years of experience helping people get decent pictures and get a grasp of image processing, I often ran into the same problem, which triggers a lot of subsequent problems in sometimes sneaky ways…

Photographers keep clipping their highlights and expecting magic in software to forgive their mistake.

What’s clipping?

Digital sensors (and not just in photography) can only record signals up to a certain intensity: the saturation threshold. Past that threshold, if you keep increasing the signal, the sensor will not register the increase but will keep outputting the same value, equal to its saturation threshold.

At the other end of the range, sensors also have a noise threshold, below which the sensor behaves randomly and the difference between actual data and noise is not noticeable.

So, to sum up, sensors are only good in a range between the noise and saturation thresholds, and we define these thresholds as 0% and 100% of the range (their actual values depend on the sensor, and are recorded in libraries such as libraw or rawspeed).
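To make the thresholds concrete, here is a minimal Python sketch of that normalisation. The black and white levels below are hypothetical; the real per-camera values are exactly what libraw or rawspeed supply.

```python
import numpy as np

# Hypothetical 14-bit sensor levels, for illustration only; real values
# depend on the camera model and are recorded in libraw / rawspeed.
BLACK_LEVEL = 512      # noise floor  -> 0% of the usable range
WHITE_LEVEL = 15_600   # saturation   -> 100% of the usable range

def normalize_raw(raw):
    """Map raw sensor counts onto the usable 0..1 range.

    Anything sitting at 1.0 is clipped: the scene was brighter there,
    but the sensor cannot tell us by how much.
    """
    norm = (np.asarray(raw, dtype=np.float64) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    return np.clip(norm, 0.0, 1.0)

# A scene ramp that exceeds the saturation threshold:
scene = np.array([400, 2_000, 8_000, 15_600, 40_000])
print(normalize_raw(scene))  # the last two values are both 1.0: indistinguishable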

Why is clipping bad?

In pictures, clipped areas are flat and contain no detail, which, in contrast with the valid, textured neighbouring regions, looks odd. But that’s not the worst part.

First of all, digital sensors clip harshly, so the transition between valid and clipped areas is sharp, and you don’t get the smooth roll-off of film anymore. Expect issues with sharpening algorithms, because they will treat that transition as a detail and “enhance” it.
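To see why a sharpener “enhances” that transition, here is a toy 1-D unsharp mask (my own sketch, not any particular editor’s implementation) applied to a ramp that slams into saturation: it overshoots past the clipping level right at the boundary, creating a halo.

```python
import numpy as np

def unsharp_mask_1d(signal, radius=2, amount=1.0):
    """Minimal unsharp mask: signal + amount * (signal - blurred)."""
    padded = np.pad(signal, radius, mode="edge")        # avoid border artefacts
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1) # simple box blur
    blurred = np.convolve(padded, kernel, mode="valid")
    return signal + amount * (signal - blurred)

# A hard clipped edge: a valid ramp that hits saturation at 1.0 and stays there.
edge = np.concatenate([np.linspace(0.2, 1.0, 20), np.full(10, 1.0)])
sharpened = unsharp_mask_1d(edge)
print(sharpened.max())  # > 1.0: the sharpener overshoots at the clip boundary
```

With film’s smooth roll-off, the difference between the signal and its blurred version is small near white, so the halo is far less pronounced.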

Secondly, the 3 RGB channels don’t clip at the same intensity. Say you photograph a nice sunset: the clouds are pinkish-red but, suddenly, close to the sun, they turn piss yellow. Well, you just clipped your red channel.

Your typical sunset:

And what happens with the RGB clipping:

Your less typical backlit scene:

Oddly, it seems that, on Fuji X-Trans sensors, blue clips before the other channels (do they record some UV too?):
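The colour shift is easy to reproduce numerically. A toy sketch, with hypothetical channel values chosen only for illustration: scale a pinkish-red pixel up until the dominant red channel hits saturation, and the ratio between the remaining channels turns yellow.

```python
import numpy as np

def shoot(scene_rgb, exposure):
    """Scale the scene light by an exposure factor, then clip each
    channel independently at the saturation level (1.0)."""
    return np.clip(np.asarray(scene_rgb, dtype=np.float64) * exposure, 0.0, 1.0)

pink_cloud = [1.0, 0.55, 0.45]   # reddish-pink: red channel dominant

print(shoot(pink_cloud, 0.8))    # ≈ [0.80, 0.44, 0.36] — still pink
print(shoot(pink_cloud, 1.8))    # ≈ [1.00, 0.99, 0.81] — red clipped, now yellow-ish
```

With red stuck at 1.0 while green keeps rising, red and green end up nearly equal with blue trailing, which is exactly the piss-yellow cloud.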

Why is highlights recovery bad?

The challenge of highlights recovery is that you are asking the software to infer what the scene looked like when you have no data to rely on. Highlights-reconstruction algos aim at diffusing the colour from the neighbouring areas, so you still don’t get texture, but rather a “colourful void”. If you are lucky enough to have only one channel clipped, it’s relatively easy to transfer the texture and structure from the valid channels, but dealing with the white-balance adjustments can still make colours blow up.

These algos work OK when you have only one clipped channel, or when the clipped areas are small and contain no details. Also, recreating artificial colours in very bright areas almost guarantees out-of-gamut colours, because any colour space’s gamut shrinks at high luminance, leaving room for very little saturation.
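The single-clipped-channel case can be sketched in a few lines. This is a toy illustration of the ratio-transfer idea, not any editor’s actual algorithm:

```python
import numpy as np

def reconstruct_red(rgb, clip=1.0):
    """Toy single-channel recovery: where red is clipped, rebuild it
    from the green channel scaled by the red/green ratio measured in
    the valid pixels. Real reconstruction algorithms are far more
    sophisticated; this only illustrates the principle.
    """
    r, g = rgb[..., 0], rgb[..., 1]
    valid = r < clip
    if not valid.any() or not (~valid).any():
        return rgb                    # nothing clipped, or nothing valid
    ratio = np.median(r[valid] / np.maximum(g[valid], 1e-6))
    out = rgb.copy()
    out[~valid, 0] = g[~valid] * ratio   # may exceed 1.0: estimated radiance
    return out

# A brightness ramp where red clips at the top but green stays valid:
g = np.linspace(0.3, 0.9, 7)
sky = np.stack([np.clip(g * 1.5, 0, 1), g, g * 0.8], axis=-1)
recovered = reconstruct_red(sky)
print(recovered[-1])   # red rebuilt above 1.0 from the green channel
```

Note that the recovered values are estimates of radiance, not texture: the structure all comes from the surviving channel.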

My opinion is that they are all hacky and the results are often ugly. I much prefer having clipped areas transition smoothly toward pure white. There is little we can do in software to salvage badly clipped pictures in a visually plausible way.

How to overcome the issue?

Simple: don’t clip your pictures.

Yeah, but how?

Keep in mind that your camera is designed to give you good-looking JPEGs straight out. When you use the automatic exposure modes (A, S, P), it exposes for the midtones, adds a film-like S curve to smooth transitions and add contrast, and you get a good-looking JPEG out of the box.

If you plan on post-processing your pictures yourself, you need to expose for the highlights. Fuji cameras do that by default (then people complain that their raws look very dark: Fuji just saved your highlights, guys, stop whining and push that tone curve), and you get highlight-weighted spot meters in recent high-end Nikon and Sony cameras (probably Canon ones too, by now; I haven’t read a GAS-inducing website for years and have completely stopped caring about new cameras).

But most cameras still don’t let you do that. In that case, you might be tempted to switch to manual exposure or to set an exposure compensation in A, S, or P mode. But don’t do it based on the clipping alert or the histogram on the back of your camera, because those stats are computed on the JPEG produced by the firmware, after all its corrections.

There is nothing in your camera that lets you see what the actual raw data look like, so all you can do is try different exposure compensations, open the raw in your editor, see what the clipping looks like, and learn how your light meter and auto-exposure behave in different situations.

When in doubt, under-expose. Landscape situations? Remove 0.3 to 1 EV. Backlighting? 1 to 2 EV. Keep in mind, for cameras produced after 2013-2014, that we can squeeze a lot of detail out of the shadows of these babies, and the algos to denoise them keep getting better. But reconstructing missing parts is still a challenge.

The only parts you are allowed to clip are the sun and light bulbs. But keep in mind they will still clip harshly. Then, in software, to ensure smooth transitions around bulbs, desaturate progressively toward white in the highlights, and tweak your film-emulation curve (S curve, tone curve, base curve, filmic, whatever) to get a smooth roll-off in the highlights.
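A minimal sketch of that progressive desaturation, assuming a simple linear blend toward luminance and a hypothetical roll-off threshold (real film-emulation curves are more elaborate):

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance weights

def desaturate_highlights(rgb, start=0.8):
    """Blend colours toward their own luminance as brightness approaches
    white, so clipped regions fade smoothly to white instead of ending
    in a hard, saturated edge. `start` (where the roll-off begins) and
    the linear blend are hypothetical choices for illustration.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    luma = rgb @ REC709
    t = np.asarray(np.clip((luma - start) / (1.0 - start), 0.0, 1.0))
    grey = np.stack([luma, luma, luma], axis=-1)
    return rgb * (1.0 - t)[..., None] + grey * t[..., None]

print(desaturate_highlights([0.3, 0.2, 0.1]))   # below `start`: unchanged
print(desaturate_highlights([1.0, 0.95, 0.6]))  # bright: pulled toward grey
```

The point is that saturation goes to zero *before* the signal reaches the clipping point, so there is no hard colour edge left for the eye (or the sharpener) to catch.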


Thanks for very helpful discussion, explanations, and tips.

Guilty as charged. My recent PlayRaw submission is evidence.

I’m trying to work this very consideration into my shooting. It’s tough when metering is predominantly designed to anchor a middle gray, and JPEG renditions mask the real data situation with respect to saturation. I’ve tried the various “pet tricks”, UniWB, etc., and come away sadder for the effort.

My new camera has a highlight-weighted matrix metering mode. Yay, but its determination of where to anchor the exposure is apparently based on the JPEG rendition, which appears to leave a stop or so of headroom unused. Sooo, I’ve been experimenting with various +EVs to compensate, and the Afternoon Snack captures were done with +1EV, which for this scene pushes parts of the dip and container into saturation. I’ve been impressed with the efforts of others to reconstruct clarity there, but I do side with @aurelienpierre about not pushing it in the first place.

Matrix metering is probably too complex a mechanism to characterize for this. Nikon’s own documentation talks about referencing a database of thousands of images to determine what to make of the measured regions in the scene. Yikes, sounds complex, but it’s probably just a table of min/max/mean/medians or somesuch, with an assigned exposure that someone previously thought was good for that image. Well, trying to add EV to such is probably folly, as it would take a similar characterization of the scene in my head, which is not even equipped to remember what I had for lunch yesterday… :smile:

IMHO, until the camera manufacturers give us a real raw-based method to anchor highlight-preserving exposure, this will be a daunting exercise, especially for those photographing in changing light or with shadow and highlight ratios that vary scene-to-scene…


Relevant is my post from a few months ago:

Learn your camera and you will be able to infer whether something is clipping from the jpeg histogram.

It helps to use an editor that includes a raw histogram, like RawTherapee or Filmulator. (from the screenshots in your post I presume darktable doesn’t have a raw histogram, or else you would have shown it)

I dearly wish that cameras other than Phase One had raw histograms though…


This is why I always use center-weighted average: it’s extremely consistent.


I saw a tip a while ago that suggested spot-metering on the brightest part of the scene and then adding 2-3 stops of exposure compensation (depending on your camera). I’ve had some success from it with just the occasional clipping (when the light is variable).

Agree with @CarVac that an in-camera raw histogram would be great. The technique above often shows a lot of ‘blinkies’ on the camera preview even though no part of the raw image is overexposed. It does give pretty consistent results though.


With the camera’s headroom in consideration, I use spot metering to examine the scene and adjust my settings in manual mode accordingly. I have an old camera, so I would have to consider what highlights I would like to forfeit; otherwise mask or move them out of view.

That gives me an idea for darktable… Indeed, we don’t have it.


rawproc’s histogram is of the result of the tool selected for display. If that’s the opened image at the top of the toolchain, it is truly Raw.

:grimacing: :roll_eyes: <— Me about to stop moving exposure compensation dial + 1 or 2 stops on my x-t20.


Well, yeah, if your spot meter exposes for middle grey (18.45%), then you get about 2.44 EV between grey and white, so this method makes sense. But as manufacturers become less stupid and begin to account for highlights, in future years that trick may not work anymore.
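The arithmetic behind that headroom figure, for the record:

```python
import math

# If the meter anchors middle grey at 18.45% of the clipping point,
# the headroom between grey and white is log2 of the ratio:
headroom_ev = math.log2(1.0 / 0.1845)
print(round(headroom_ev, 2))   # ≈ 2.44 EV
```

Which is why spot-metering the brightest part and adding 2-3 stops, as suggested above, lands you just under clipping on most cameras.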

I don’t think the JPEG clipping basis is that bad. I often use UniWB. If I put the JPEG settings into a known state on my Canon, I’ve found by experimenting that the histogram can go halfway along the last division of the display and still be OK (there are 5 divisions). I have the JPEGs set to minimum contrast and saturation.

What is still tricky though is when the highlights are small in area but still quite important. The small camera display makes it hard to see how far the histogram really goes. I’ve never tried Magic Lantern.

The semantics of “spot meter” should preclude that… Even if they change the 18%, it should be relatively easy with a couple of shots of a grayscale target to characterize it.

Sitting here waiting on delivery of an X-T20 and taking notes…

Then, probably you should. :wink: While I rarely use AETTR (after reading this thread, I definitely will), the RAW EV ETTR hint is always on and I compensate for it in difficult situations.

I fully agree but … easier said than done.
I own an m43 Lumix camera (GX9). Since I mostly shoot people, I use centre-weighted metering. I have also activated the zebra pattern to warn about burned highlights in the live preview. Aaaand I look at the histogram all the time. I know that it’s all JPEG-based, so I have around 1 EV of margin most of the time, but that’s a rough guess.
In darktable I frequently find that I was too conservative or went too far. It’s not so easy to get the metering right for RAW processing on my camera.
I don’t mind burned highlights when I agree to them, but what I can’t stand is the sharp, sudden transition.

As I recall, there was a good reason not to have a RAW (linear) histogram in camera: most of the time the data will reside in the first 15% of the histogram, while after applying the gamma the histogram looks the way we expect.
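A quick check of that claim: middle grey sits at roughly 18% of a linear range, but a display gamma moves it near the centre of the histogram.

```python
# Middle grey reflects ~18% of the light, so in a linear raw histogram
# it sits deep in the leftmost fifth of the range. A display gamma of
# ~1/2.2 (the common sRGB approximation) pushes it near the middle,
# which is the histogram shape we are used to reading.
linear_grey = 0.18
display_grey = linear_grey ** (1 / 2.2)
print(round(display_grey, 2))   # ≈ 0.46
```

So a raw histogram would be perfectly usable for judging clipping at the right edge, but hard to read for everything else, which may explain why manufacturers skip it.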

90% of the time I’d agree, and I shoot for the highlights. However, I’ve never really had results that made me happy out of the expose-for-the-highlights method when it comes to backlighting. Despite better denoising in later releases of darktable and RawTherapee, the pushed shadows almost always look awful compared to exposing for them in camera and just saying to heck with the highlights. Even the proprietary software packages don’t do a great job with it, or at least they didn’t 5-6 years ago when I last ran them. Indeed, sometimes it doesn’t work out, and I’d say for the vast majority of photos you want to watch the right side of the histogram. Examples of what I’m talking about from my own work:


Granted, most of the time I’m using a strobe to get the foreground a couple of stops up from the ambient exposure, so there’s not as huge a difference between the highlights and the subject. I’m sure the more technical photographers among you can probably tell me the 500 different ways those photos are bad, should be junked, and that I should donate my camera to someone who can put it to better use, but I think they “look good”, for lack of a better term. Sorry, I am my own harshest critic!

It was a different story when I did astronomy professionally; overexposure was a big no-no for data gathering.

But my image is being used in this thread as a “look what this dumb !@#$$# did” bad example, so maybe don’t listen to me. :stuck_out_tongue: Just felt the need to defend us no-good, dumb, dirty, stinky backlighters here!


@lhutton There’s nothing wrong with clipping if you know what you’re getting and you’re achieving the artistic results you want. As long as you don’t expect the software to magically recover data for you. By not clipping the data you’re just giving yourself more options in post-production and avoiding any troublesome colour casts that the clipping creates.

@aurelienpierre’s new filmic module should go some way to getting you better results if you do expose to the right. Certainly darktable has moved on leaps and bounds since 5-6 years ago.


Indeed! This is the point I am making although with less brevity.

TL;DR: Don’t overexpose unless you know what you want and don’t expect miracles from interpolation.


I figured some of this out on my own: that it is (far) better to underexpose than to overexpose. But that doesn’t mean I’m smart. It took me too many years to get there.

I have an eyeball snorkel now so I can inspect my digital display in bright sunlight. And I watch my histograms. Constantly.