Solving dynamic range problems in a linear way

Your log encoding would be based on a set of absolute values. That is, the stops up, stops down, and middle grey are whatever you choose, and the calculation never changes with respect to these constants. For example, the value range could be -7.0 to +7.0 stops, with middle grey set to 0.18. The encoded log range is a constant here, and your camera result would be encoded on the assumption that you scale the data values separately via exposure.

The normalization deals with that. That is, your minimum exposure should be expressed in stops, so -7.0 as per our example, which keeps the divisor positive.
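
For what it’s worth, here is a minimal C sketch of that constant-parameter encoding. The function name and signature are illustrative (not darktable’s or ACES’s exact API); the middle grey and stop range are the ones from the example above.

#include <math.h>

/* Hypothetical lin-to-log2 shaper with fixed constants: middle grey pegged at
 * 0.18, exposure range of -7.0 .. +7.0 stops around it. Any exposure scaling
 * of the camera data is assumed to happen beforehand, in linear. */
static float lin_to_log2(float lin, float middle_grey, float min_exposure, float max_exposure)
{
    const float stops = log2f(lin / middle_grey);                  /* stops relative to middle grey */
    return (stops - min_exposure) / (max_exposure - min_exposure); /* normalize to [0, 1] */
}

/* e.g. lin_to_log2(0.18f, 0.18f, -7.0f, 7.0f) == 0.5f */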


So just to summarize the whole operation…
Input X is in range 0 < X < inf. MiddleGrey is also in that range (but obviously less than the max X).

  1. Turn this into a ratio, relative to a “middleGrey”: R = X / middleGrey. Your values now range 0 < R <= (maximum X value / middleGrey)
  2. Take the base-2 log of that, L = log2(R). Values now typically range, let’s say, -7 < L < 7.
  3. Normalise this to a positive range 0 < N < 1, i.e. N = (L - Lmin) / (Lmax - Lmin) (a worked example follows the list).
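
As a quick numeric check of those three steps, assuming the -7 / +7 stop range and middle grey = 0.18 used elsewhere in the thread:

X = 0.18 → R = 1 → L = 0 → N = (0 - (-7)) / 14 = 0.5
X = 1.44 → R = 8 → L = 3 → N = (3 - (-7)) / 14 ≈ 0.71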

So I would call that normalizing the log2 (a step after the log2 call), not a “normalized log2” (which would imply something different happens inside the log2 itself).

Maybe just a language thing!

Interestingly, I see two unnecessary things here because of the normalisation at the end: dividing by a middle grey does nothing (it’s obviously equivalent to log2(X) - log2(middleGrey)), and the base of the log could be anything (because changing the base is the same as dividing by a constant, which is cancelled by the normalise…).

It may as well just be normalise( ln (X) ).

Edit: that’s if you’re using a straightforward normalise, taking min value and max value of the input. If you have some predetermined min/max then it’s equivalent to setting the final min and max somewhere between 0 and 1, i.e. offset… multiply.
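
To spell out the cancellation (assuming the straightforward min/max normalise described in the edit above): with g the middle grey,

log2(X / g) = (ln X - ln g) / ln 2 = a·ln X + b, where a = 1 / ln 2 and b = -ln g / ln 2

and any such affine map drops out of the normalise, since

((a·ln X + b) - (a·ln Xmin + b)) / ((a·ln Xmax + b) - (a·ln Xmin + b)) = (ln X - ln Xmin) / (ln Xmax - ln Xmin)

so neither the grey division nor the choice of base affects the final normalised value.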

Is this due to a 12% reflectance middle gray calibration instead of 18%?
In that case we should multiply by 1.5 (0.18 / 0.12) to slide the middle gray from 0.12 to 0.18, which would push the max value up to 1.5.

Edit
Multiply by 1.5 = log2(1.5) ≈ 0.58 stop?

Here you go, with @anon11264400’s algorithm (slightly modified) :slight_smile: Thanks for your help!

To finish on the log2 correction, I ended up doing that:

/* Lift both the pixel and the grey reference by 2^black (data->black is negative, in EV, so this is a small positive offset) to keep the darkest pixels from being crushed by the log. */
float lg2 = data->camera_factor * Log2( ( ((float *)ivoid)[k] + powf(2, data->black)) / ( data->grey_point/100. + powf(2, data->black)));
/* Map [black, black + dynamic_range] (in EV) onto [0, 1]. */
lg2 = (lg2 - data->black) / (data->dynamic_range);

The reason is, when we have a huge dynamic range, too many pixels in the darks are clipped to 0, so I add 2^(- dynamic range/2) to lift the log and salvage the lowest EV.


Good discussion. At some point, someone should write a summary or concluding statement. :wink:


If you base things on a constant log, it can be useful to have a wider dynamic range for your log than your cameras support, and adjust exposure / aesthetic contrast transfer accordingly. Exposure would need to be applied in the scene-linear domain, before the log → aesthetic curve.
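
As a rough ordering sketch (illustrative names, reusing the hypothetical lin_to_log2() from earlier in the thread; the -9 / +9 range is just an example of a log range wider than the camera’s):

/* Exposure is a plain multiplier in scene-linear, applied before the log
 * shaper; the aesthetic / contrast transfer then operates on the log value. */
static float encode_pixel(float lin, float exposure_ev)
{
    const float exposed = lin * exp2f(exposure_ev);              /* exposure in EV, scene-linear */
    const float logv = lin_to_log2(exposed, 0.18f, -9.0f, 9.0f); /* base log encoding */
    /* ... aesthetic curve would be applied to logv here ... */
    return logv;
}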

Using the log2 formula as per the ACES researchers is a relatively canonized approach.

It has to do with camera encodings.

18% is merely a convention, and one that was carried over into digital. As such, if we shoot a grey card and peg the grey card exposure to the value 0.18 in the normalized linear camera raw file, we can see a problem: the encoding can only retain roughly two and a half stops above middle grey before running out of encoding values. That is 0.18, 0.36, 0.72, 1.44, where the normalized camera value would of course top out at 1.0. Worse, most camera ADCs encode to 14-bit values, so the ceiling is much lower. As a result, the actual middle grey ends up encoded lower in the normalized linear data, and needs to be scaled back up accordingly.
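
To put a number on that headroom: with the grey card pegged at 0.18 of the normalized range, the room left above it is log2(1.0 / 0.18) ≈ 2.47 stops before the encoding runs out of values, which is where the 0.18 → 0.36 → 0.72 → 1.44 sequence above comes from (1.44 already exceeds 1.0).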

As sensors develop, so too will the camera raw encodings need to develop further. The best balance of bandwidth to encoding granularity has been logs, which is why they have been adopted almost ubiquitously. It is likely that professional DSLRs will evolve to such a native encoding, with the usual linear toes to dig data out of noise floors, etc.

Nothing fancy required. The log2 asserts a roughly camera-like / perceptual distribution given sufficient bit depth encoding, with more than enough room for manipulations at 16 bit or 32 bit float. You won’t find any such kludges in any of the canonized log formats for this reason, including the Academy of Motion Picture Arts and Sciences implementations.

As you can see, the user picks the target luminance (in the L channel) to put at the center of the histogram, then inputs (or measures on the picture) the actual dynamic range (after the correction), then can rebalance the shadows range if needed (by default, 0 EV is put in the middle), and finally the linearity factor allows adding a bit more contrast and saturation.

It’s really quick to set, especially if one has shot a color/grey target. It recovers as much as 16 EV with no trouble.

I’d advise against the tweak on the base log as per the above quote. Just roll with what the experts do.

The aesthetic curve would be decoupled from the base log if the aim is emulating the kind of responses that emulsion had. It too would be almost trivial to implement via a piecewise quadratic / cubic.
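
Purely for illustration, a minimal piecewise-quadratic S-curve operating on the normalized log value, with the pivot fixed at 0.5; this is not any canonized transform, and a real implementation would expose the pivot and slope as parameters:

/* Classic piecewise-quadratic ease-in / ease-out: value and slope match at the
 * 0.5 pivot, endpoints stay pinned at 0 and 1. It is applied to the log
 * encoding, not to the scene-linear data. */
static float contrast_s_curve(float n) /* n = normalized log value in [0, 1] */
{
    if (n <= 0.0f) return 0.0f;
    if (n >= 1.0f) return 1.0f;
    return (n < 0.5f) ? 2.0f * n * n
                      : 1.0f - 2.0f * (1.0f - n) * (1.0f - n);
}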

Sweeteners are the next issue, frequently overlooked. Desaturation being perhaps the most critical, and the implementation / approach equally so.

Since the base algorithm is intended to be used before 10-bit lossy video encoding, it doesn’t have to deal with the noise levels we have in raw pictures. Used as is, it’s really not good enough. I have tested my method on pictures from 5 cameras and the results are very pleasing. The original makes the noise pop out in the blacks even at 100 ISO.

The linear factor already acts as a saturation and contrast corrector.

Yes, in darktable we have the levels and tonecurves at the end of the pixelpipe.

False.

The results of the efforts have been tested across a few more than five cameras. :wink:

For aesthetic reasons as well as implementation, it is wiser to keep them separated.

Well then, how do they handle the bottom clipping to make it smooth and blended? Because I get noisy magenta clipping… and that’s ugly. As much as I love maths, if the result is not visually pleasing, the tool doesn’t work.


Yes, I assume the reason is the pre-determined min/max values (i.e. the normalise assumes a final range irrespective of the input values). If those are used, you guarantee your reference grey will always map to the same value in the final step… a valuable result if you’re matching several frames. The simplification only applies if you normalise “full range”, 0 to 1.
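
For instance, with a fixed -7 / +7 stop range, the reference grey (R = 1, so L = 0) always lands at (0 - (-7)) / 14 = 0.5, no matter what the rest of the frame contains.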

I have read the paper introducing the Sony S-Log curve; they tweaked the log lift precisely to match the noise levels of Sony’s sensors. As it is hard-coded in the camera, it makes sense. But we don’t have such measurements here, so… worst case scenario, users will get an error of +2^(-8) = 0.004 for a dynamic range of 14 EV, and they could shift the input by the same amount in the exposure module. Not a big deal. It’s all parametric.


And expectations. A curve, even with someone who doesn’t entirely understand what is going on, behaves uniformly across the entire range, for example. Bending the values simply isn’t required as history has shown.

Different logs serve different needs. The log formula I demonstrated is taken directly from ACES’s library, and that is a platonic log, with no such toes or vendor-specific tweaks[1].

It isn’t required, and has additional issues as outlined above. It is better as an encoding log to simply remain in the base two domain.

That is broken software somewhere in the stack. There is no reason why a log-encoded set of values should produce magenta when uniformly curved.

Hopefully the log base approach is working out for your work.

[1] Toe variant notwithstanding, which was a byproduct of colourists who became used to grading directly atop camera vendor footage.

Ok, the new feature is finished and ready to test: Unbreak input profile : add log profile by aurelienpierre · Pull Request #1730 · darktable-org/darktable · GitHub

I have added an auto-optimizer that computes the best parameters, allowing a two-click setup.

I will give it a try when someone makes a Windows build with the module. :slight_smile:

You’re that kind of person? :stuck_out_tongue_winking_eye:

Hi Aurélien
I’ve made a (Windows) build from your color-grading branch. The interface I get is this one:
[screenshot of the module interface]
The last slider (contrast correction) seems to have no effect.
I don’t quite understand the behaviour of the color pickers. The first click should activate one (it does) but the second one should deactivate it (it doesn’t), correct?
The contrast correction color picker sends a message I cannot read properly:
[screenshot of the message]
Is the auto-optimizer still available?