Why exposure adjustments can't be made automatic?

There is a recurrent question that pops up every week about darktable's tone equalizer, but it applies to pretty much every exposure correction in image processing software: why can't you make exposure correction automatic?

Sure, how nice would it be to leave one more technical thing to the computer and worry about something else?

From a naïve point of view, it seems super easy: detect the maximum pixel value in the image, detect the minimum pixel value just the same, then normalize exposure using:

out_{x, y} = \dfrac{in_{x, y} - \min(in)}{\max(in) - \min(in)}

The output of that is guaranteed to be in the [0; 1] range (what we call normalization).
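In code, that naive approach is a one-liner (a minimal sketch with NumPy; the function name is just for illustration):

```python
import numpy as np

def normalize_exposure(img: np.ndarray) -> np.ndarray:
    """Naive normalization: rescale pixel values so they span [0, 1]."""
    lo, hi = img.min(), img.max()
    # Guard against a flat image where max == min.
    if hi == lo:
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)
```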

However, in image processing we work in a discretized space: the theoretically continuous latent image (formed by the lens) is recorded in sampled form, i.e. split into an array of pixels at a certain resolution.

When processing images, we have the choice of working at sensor resolution (12 Mpx to 100 Mpx) or at screen resolution (2 to 4 Mpx). For obvious performance and user-comfort reasons, we usually process at screen resolution, or close to it, while editing in real time, and only worry about full resolution at export time.

That implies interpolating the image down, to “zoom out” and rescale it. But that interpolation doesn't preserve the original image's min and max. Interpolation can be seen as a low-pass filter: it locally averages pixel values and sort of blurs the image to avoid aliasing and staircasing effects.

As a consequence, the maximum pixel value of the downscaled picture will likely be lower than the maximum pixel value of the full-resolution image, and the minimum pixel value of the downscaled image will likely be greater than that of the full-resolution one. Both are shifted toward the local image average, as is every local variation in the image.

However, because of those low-pass properties, the average exposure value of the image (which is usually more or less close to middle grey, except when shooting high-key or low-key) stays invariant, no matter the scale. And since it is our perceptual anchor for brightness, it is far more important to leave it as-is than to bother about the min/max.

So, unlike the average, the extrema of the image are not invariant when you change the image resolution: they contract around the average.
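A quick numerical experiment makes this contraction visible. The sketch below is only an illustration under assumptions: it uses a crude box-filter downscale on synthetic data, which is just one possible interpolation, but the qualitative behaviour is the same for fancier resampling:

```python
import numpy as np

def box_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Crude downscaling: average non-overlapping factor x factor blocks
    (a low-pass filter followed by decimation)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
full = rng.random((2000, 3000)) ** 3               # synthetic "image" with a long highlight tail
small = box_downscale(full, 8)                     # roughly sensor res -> screen res

print("full res  : min %.4f  max %.4f  mean %.4f" % (full.min(), full.max(), full.mean()))
print("downscaled: min %.4f  max %.4f  mean %.4f" % (small.min(), small.max(), small.mean()))
# The mean barely moves, while the min rises and the max drops toward it.
```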

For this reason, automating exposure corrections based on min/max measurements would be very unreliable and non-robust: the exposure bounds are scale-dependent while the overall brightness is not, and exposure is a flat correction affecting the whole image at once, with no regard for averages or extrema. The risk would then be getting a much brighter or more contrasted image at low resolutions, and a darker or less contrasted one at high resolutions.

The biggest problem is that there is no simple way to estimate how much the min/max shift for a given scaling factor, because it depends on each pixel's neighbourhood and on the interpolation method used.

A more elegant way is to compute the image histogram, find the median, and compute the exposure correction that shifts that median up or down to a target value. The problem is that this assumes the median is middle grey, which is not true when you are shooting high- or low-key, so you still have to adjust which percentile you consider the middle-grey/exposure anchor, and that's still not automatic. That's what the exposure module in darktable does in automatic mode.
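For reference, here is a minimal sketch of that median-based strategy. This is not darktable's actual code; the 0.18 middle-grey target and the default percentile are assumptions the user would still have to adjust:

```python
import numpy as np

def auto_exposure_ev(img: np.ndarray,
                     target_grey: float = 0.18,    # assumed scene-referred middle-grey anchor
                     percentile: float = 50.0) -> float:
    """Return the exposure correction (in EV) that brings the chosen
    histogram percentile (the median by default) onto the target grey.
    Assumes linear, strictly positive pixel values."""
    anchor = np.percentile(img, percentile)
    return float(np.log2(target_grey / anchor))

def apply_exposure(img: np.ndarray, ev: float) -> np.ndarray:
    """Exposure is a flat correction: multiply every pixel by 2**EV."""
    return img * (2.0 ** ev)
```

Shifting the percentile away from 50% for high- or low-key shots is exactly the manual adjustment mentioned above, which is why this still isn't fully automatic.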


There does exist a technique called histogram equalization, but that’s often not what you want.