Exposure compensation vs. adjusting brightness

Hello, I have noticed that I have been adjusting brightness instead of using the exposure compensation slider a lot lately. It just lets me make a picture brighter quickly without dealing with shadows/highlights afterwards. Is that a bad practice? What do you prefer? If I adjust brightness on a raw file in darktable, is that lossless raw tuning, or is it still lossy post-processing?

In darktable anything should be lossless, as far as I understand.

In Filmulator there’s the “white clipping point” which can be used to adjust brightness but it gives a different result (still lossless, just different) from the exposure compensation. In Filmulator, exposure compensation is completely linear but after Filmulation, the brightnesses become nonlinear and thus “white clipping point” doesn’t behave linearly.

Almost everything in darktable is nondestructive (not to be confused with lossless).

There are no wrong answers if you’re getting the desired results.

I like to try and adjust the exposure until there is no clipping, then proceed with my edits from there.

About the only thing I can think of is that exposure compensation is graded in stops, while most brightness tools use some arbitrary scale. If you’re doing things like “zero noise” image overlay, you’ll want to use EC, otherwise, mox nix.

I’ve eschewed brightness in favor of ‘contrast stretch’, where you use a curve to scale the image to the ends of the numerical range. I even wrote a tool to automatically find the upper and lower bounds of the histogram and scale with a so-called ‘linear curve’, a straight line between the black point and the white point. In the 0-255 scale, black is usually in the range 0-15 from zero, while white can be down toward 127 if there was not much contrast in the scene. For what it’s worth…

To me, exposure is more intuitive in the field while brightness makes more sense in post.

Brightness adjustment shifts the histogram to the right, i.e. it just adds a certain value. Exposure adjustment widens the histogram, which looks to me more like multiplication.

What I wrote is not completely correct for darktable.

This image has gray values of 0%, 25% and 50%:

Exposure correction of 1 will change this to 0%, 50% and 100%:

Brightness adjustment of 1 produces 0%, 85%, 93%:

It seems that with the brightness adjustment, complete black is not affected (in contrast to Gimp), while complete white cannot be reached (even with multiple instances). In any case, exposure and brightness adjustment produce very different results in darktable.


The exposure adjustment is a well-defined operation: it consists of multiplying the rgb values by a common factor in the linear camera color space.

On the other hand, I do not think there is a unique definition of brightness. As far as I know, Photoshop (and gimp?) adds a constant value to the rgb values. This is usually applied in the working rgb color space, and the result depends on the gamma encoding that is used. Adding a constant to linear rgb values gives a different result compared to the same constant added to gamma-encoded rgb values.

I think that darktable defines “brightness” as a gamma correction applied to the Lab L channel. As such, it preserves both the black and white points… @houz can probably give a more precise statement.


That is exactly what I am seeing. Exposure compensation boosts blacks and whites, so I have to deal with shadows and highlights after that. Brightness keeps everything more or less in the mid-tones (not too shady, not too shiny). The latter does produce more color noise, though (at least I feel it does).

Yes. First contrast is added to the L channel (not relevant here), then brightness. Saturation is multiplied into the a and b channels.

The LUT for the brightness part is computed as follows (slightly simplified):

// brightness >= 0 gives gamma < 1 (brightens), brightness < 0 gives gamma > 1 (darkens)
const float gamma = (brightness >= 0.0f) ? 1.0f / (1.0f + brightness)
                                         : (1.0f - brightness);

// 16-bit LUT mapping L (0..100, scaled to table indices 0..0xffff)
// to the gamma-corrected value
for(int k = 0; k < 0x10000; k++)
{
  ltable[k] = 100.0f * powf((float)k / 0x10000, gamma);
}

and it’s applied with

L_new = ltable[CLAMP((int)(L_old / 100.0f * 0x10000ul), 0, 0xffff)];

I removed all the parts that deal with extrapolation for L >= 100.0.

When looking at the two extreme values of L = 0 and L = 100 we get:

L_0_new = ltable[CLAMP((0.0 / 100.0 * 0x10000), 0, 0xffff)] = ltable[0]
L_100_new = ltable[CLAMP((100.0 / 100.0 * 0x10000), 0, 0xffff)] = ltable[0xffff]

with

ltable[0] = 100.0 * powf(0 / 0x10000, gamma)
          = 0
ltable[0xffff] = 100.0 * powf(0xffff / 0x10000, gamma)
               = 100.0 * powf(0.9999, gamma)
               = 100.0

The math isn’t 100% correct in the last step, but it’s close enough and the differences are not visible, even for big brightness boosts resulting in a gamma quite different from 1.0.

I hope that answered your question. The real code can be found here for the processing and here for the LUT creation.
