Solving dynamic range problems in a linear way

I have reverted the algo to the standard implementation. Not a big change, but indeed, the settings changed. It should be more straightforward this way.

Yes, that’s the first impression I’ve got. :slight_smile:

I’ve found some artefacts … on noisy images, but pushing the black exposure too far to the right on a normal image has a similar effect.

  1. pushing the black exposure to the right (to clip noise)
  2. activating contrast brightness saturation (with all sliders = 0)
    (screenshots 1 and 2)

I’ve pushed the sliders a bit for demonstration.
It happens frequently on noisy images. I can get blue, red or green points in the darkest zones.
Sometimes I don’t find the right setting to avoid this effect.

I know, that’s a drawback of the log profile: values close to 0 get pushed toward -inf, so the noise explodes in low light.

You can fix that by decreasing the black level in the exposure module.
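To make that concrete, here is a tiny standalone C sketch (my own illustration, not darktable’s code): log2 dives toward -inf as the input approaches 0, so two almost identical noisy pixels end up several EV apart, while the small offset left by a lowered black level keeps them close together.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Two noisy neighbouring pixels, almost black in linear terms */
    float noise_a = 0.00001f, noise_b = 0.0001f;
    /* A small positive offset, standing in for a lowered black level */
    float offset = 0.001f;

    /* Without the offset, log2 dives toward -inf and the two values
     * land more than 3 EV apart */
    printf("raw:    %.2f EV vs %.2f EV\n", log2f(noise_a), log2f(noise_b));
    /* With the offset, they stay within ~0.1 EV of each other */
    printf("offset: %.2f EV vs %.2f EV\n",
           log2f(noise_a + offset), log2f(noise_b + offset));
    return 0;
}
```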

Got it! That works. :slight_smile:


A value of -inf should stay dark/black. Why do these points explode?

Because I have to clamp the log input at a threshold > 0. If one pixel is below the threshold and its neighbours are not, that pixel remains dark while everything around it gets brightened. That increases the relative contrast in the noise.

By decreasing the black level, you are just pushing all the pixels away from the pit, which dampens this effect.
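A toy C sketch of that situation (the threshold, dynamic range and pixel values are made up, this is not the module’s actual code): the clamped pixel stays pinned at the output floor while its barely brighter neighbour lands about 0.3 EV above it, even though the two were almost identical in linear light.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative sketch of the clamp-before-log issue described above */
static float log_map(float x, float threshold, float dynamic_range)
{
    if (x < threshold) x = threshold;             /* clamp so log2 never hits -inf */
    const float black_ev = log2f(threshold);      /* bottom of the encoded range   */
    return (log2f(x) - black_ev) / dynamic_range; /* normalised to [0, 1]          */
}

int main(void)
{
    const float threshold = exp2f(-12.0f);        /* clamp at -12 EV (~0.000244)   */
    /* Two noisy neighbours: one just below the threshold, one just above */
    printf("clamped pixel   -> %.3f\n", log_map(0.0002f, threshold, 12.0f)); /* 0.000  */
    printf("unclamped pixel -> %.3f\n", log_map(0.0003f, threshold, 12.0f)); /* ~0.025 */
    return 0;
}
```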

Think of a log encode as an efficient encoding scheme for energy that is designed to be viewed. That means that the allocation attempts to distribute the information in such a way that all of the values that need to be seen will be allocated evenly.

The critical thing to understand is that log encoded imagery is not designed to be viewed on a generic 2.2 display. It amounts to a heavily lifted display of values.

Log curves appear “film emulsion like” when viewed under a traditional “S” curve. Such a curve will more greatly compress the regions high and low. If you were to apply a generic S curve to your log encoded imagery, you should find it looks quite acceptable.

There are edge cases where you end up with negative values. Sometimes they are non-data, sometimes they are raw offset based encodings for noise floors, etc. Each of these must be dealt with on a case by case basis.

The TL;DR is that for imagery where the 0.0 scene referred value corresponds to “no light”, log encoding schemas work remarkably well. Case-by-case handling of the noise encoding may require some manipulation to get the scene referred data into such a state, and it should be performed prior to the camera rendering transform’s log encoding.


Is it the clamp() function which spreads the values near the limit (-inf)?
BTW, what exactly does this function do? In the mechanical world a clamp tightens elements together…

Would it be useful to add a toe like in ACEScct?

It breaks log-based transforms.

ACEScct was added simply because legacy folks were used to toes in some log encodings. It was a pushback move.

Same here: the clamp function ensures the input is in ]2^(-dynamic range); +inf[ so the log doesn’t output -inf.
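For the curious, a minimal C sketch of what such a clamp could look like (my reading of the sentence above, not the module’s source):

```c
#include <math.h>

/* The lower clamp bound is 2^(-dynamic_range), so the smallest value
 * that ever reaches log2() maps exactly to the bottom of the range
 * instead of -inf. */
float clamp_for_log(float x, float dynamic_range)
{
    const float lower = exp2f(-dynamic_range);  /* open interval ]2^-DR ; +inf[ */
    return (x < lower) ? lower : x;
}
```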

That is okay, I have been burnt out lately. Your feedback has been good. What I would say is that the module still isn’t straightforward to use. An outsider wouldn’t know what to do with it.

What do you mean, @afre? What’s missing?

So the output data are controlled. This is what I can see on my image 1 just above. Without another plugin after unbreak input profile, the image is what I expect, even if I clip dark values.
The surprise is to see that the following plugin, contrast brightness saturation, explodes the noise while it should do nothing (all sliders at 0).

I’m a bit afraid of looking stupid, but let’s take the risk. This should be equal to:

log2(coeff) + log2(max_input) - log2(grey_level) - log2(coeff) - log2(min_input) + log2(grey_level)
= log2(max_input) - log2(min_input)

meaning that the dynamic range is independent of the grey level, which makes sense to me.

If I’m not mistaken the output ([0,1] or ]0,1[, I don’t know) would be:

(log2(input) - log2(min_input)) / (log2(max_input) - log2(min_input))
= (log2(input) - log2(min_input)) / Dynamic range

What I understand here is that middle_grey is not necessary.
If I’m not yet completely lost, two settings should be enough then:

  • black exposure
  • dynamic range or white exposure.
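If it helps, here is a small C sketch of that reasoning (my own toy code with purely illustrative numbers, not the module’s implementation): the grey level and the coeff cancel out of the dynamic range, and the normalised output only needs the black point and the dynamic range.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const float grey = 0.18f, coeff = 2.0f;
    const float min_input = 0.001f, max_input = 4.0f;

    /* (log2(coeff) + log2(max) - log2(grey)) - (log2(coeff) + log2(min) - log2(grey)) */
    const float dr = (log2f(coeff) + log2f(max_input) - log2f(grey))
                   - (log2f(coeff) + log2f(min_input) - log2f(grey));
    printf("dynamic range: %.3f EV (= log2(max/min) = %.3f EV)\n",
           dr, log2f(max_input / min_input));

    const float input  = 0.05f;
    const float output = (log2f(input) - log2f(min_input)) / dr;  /* in [0, 1] */
    printf("normalised output: %.3f\n", output);
    return 0;
}
```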

Wrong or right on the above, I like the results I get with this plugin.
I find it very intuitive to work with black exposure and dynamic range (or white exposure).
Usually the color pickers on black exposure and dynamic range do a good job.
When they don’t, working jointly with the over/under exposure indicators and the histogram, it’s very easy to tune these settings.
I also like the smooth grading of output tones, which is easier to achieve than with the base curve plugin.

The grey level acts as an exposure gain on the input. The lower the grey value, the more the signal (input) gets amplified before the log, and the less the noise gets pushed into the -inf pit.
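A one-line sketch of what “exposure gain” means here, assuming the grey level divides the input before the log (my reading of the statement above, not the module’s source):

```c
#include <math.h>

/* The input is divided by the grey level before the log, so a lower
 * grey acts as a larger gain and pushes the signal away from the
 * -inf pit. */
float log_exposure(float x, float grey)
{
    return log2f(x / grey);  /* grey = 0.09 brightens by 1 EV vs grey = 0.18 */
}
```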

Your maths are correct, but they only apply if you want your output to preserve 100 % of the dynamic range. In the real world, you want to sacrifice the noisy pixels to preserve the mid-tones. That’s why the dynamic range and black exposure do not depend on the grey level in the UI.

The main problem is that you don’t know beforehand what the highest value of the noise is (or the lowest value of true black), so no algorithm can compute the bottom part of the dynamic range (except if you are the sensor manufacturer, and even then…). It has to be adjusted visually by the user.

You might want to consider a parametric contrast curve to stack after the base log. Coupled with a fulcrum-based CDL, you’d have a full set of tools for grading an image without many of the silly bits in DT.

To do a parametric contrast curve, the math should be pretty easy to sort out. It’s simply a sigmoidal curve, ideally with a linear section in the middle.

The parameters would be the linear-slope dynamic range, the slope, and the fulcrum point, corresponding to the stops in your log2 encoding. In an ideal world, the slope is nonzero at the extremes, with a tension control to quantise the lower and upper code values to the integer bit depth encoding’s lowest / highest values (e.g. at 8 bits per channel, 0.00392156862 and 0.99607843137 respectively).
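As a rough illustration of what such a curve could look like (my own sketch under these assumptions, not the proposal itself): a logistic used as the sigmoid, pivoting on a fulcrum in the [0, 1] log-encoded range, then clamped to the 8-bit extremes quoted above. A true linear section in the middle would need a piecewise curve instead of a logistic.

```c
#include <math.h>

/* Sigmoidal contrast pivoting on a fulcrum, rescaled so the endpoints
 * still land at 0 and 1, then clamped to the 8-bit extremes. */
float sigmoid_contrast(float x, float fulcrum, float contrast)
{
    const float y  = 1.0f / (1.0f + expf(-contrast * (x - fulcrum)));
    const float y0 = 1.0f / (1.0f + expf(-contrast * (0.0f - fulcrum)));
    const float y1 = 1.0f / (1.0f + expf(-contrast * (1.0f - fulcrum)));
    float out = (y - y0) / (y1 - y0);                 /* re-anchor the endpoints */
    if (out < 1.0f / 255.0f)   out = 1.0f / 255.0f;   /* 0.00392…                */
    if (out > 254.0f / 255.0f) out = 254.0f / 255.0f; /* 0.99607…                */
    return out;
}
```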

If the maths are correct, middle grey has disappeared. If I want to sacrifice the noisy pixels I just have to push the black exposure slider to the right.
In fact, by choosing where I set the black point and the white point, I can include or eliminate the extremes. Exactly the same as with the levels plugin.
The difference is that the log2 imposes the slope, while the levels plugin lets you choose the middle point (and change the slope).
I don’t know if that would work, but we could achieve a similar effect by modifying the base of the log.
But as @anon11264400 says, that may be one of the silly bits…

EDIT: Not a good idea. @anon11264400 would be right.
Since log_b(x) = ln(x)/ln(b), changing the base only multiplies every log by the constant 1/ln(b), which cancels out in the normalised ratio, so the output would not change. Sorry.

For now, I have implemented the full CDL stack (saturation, slope/offset/power, fulcrum contrast), and it gives very good results.
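For reference, the standard formulas behind those names look roughly like this (my sketch with saturation left out; not the actual code being discussed):

```c
#include <math.h>

/* ASC CDL: out = (in * slope + offset) ^ power, power applied to positives only */
float cdl(float x, float slope, float offset, float power)
{
    const float y = x * slope + offset;
    return (y > 0.0f) ? powf(y, power) : y;
}

/* Fulcrum contrast: values at the fulcrum stay put, others pivot around it */
float fulcrum_contrast(float x, float fulcrum, float contrast)
{
    return (x > 0.0f) ? powf(x / fulcrum, contrast) * fulcrum : x;
}
```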

I think the parametric contrast curve is easily achieved in the regular tone-curve module. That’s what I do for now anyway.

Yeah, log(a) / log(b) ≠ log(a / b). The base of the log is not the slope. The grey level sets how you balance the part of the dynamic range you allocate to the shadows against the part you allocate to the highlights. If you set the grey to the average luminance of the picture, you recenter the histogram on 50 %, which gives you a safe distribution before ICC color correction. Remember that color calibration (semi-reflective) charts have roughly 2.5 EV of dynamic range, from L = 17-18 (black) to L = 96 (white). Between 0 and 18 and above 96, the correction is extrapolated, so the ICC profile produces garbage.

Having the 3 parameters (dynamic range, black EV, grey level) is useful when you are in a studio: you can measure the scene lighting dynamic range with a light meter and shoot a grey target. So there is no guessing; you input the 3 numbers and you get a spot-on exposure.