[Play_Raw] Dynamic Range Management

Sorry for my stupid question: which version of darktable contains the tone equalizer module? Or how do you use it?

It’s not part of any DT release, nor of the master branch. You can compile from this PR if you are comfortable doing that.

Only when:

  1. Self-compiled
  2. You locally merge Pierre’s pull request (it is not yet merged to master)

I plan on publishing my fusion work as a pull request later this week. Probably not tonight, as I’ve mentioned elsewhere I’ve got other things going on outside of the house.

Thinking about comparisons, it’s in a way equivalent to taking an image and setting a new, lower maximum, yet trying to maintain the perceptual appearance. For example, setting a new maximum of 0.2 of the original (in linear):


How do you make the new, darker image look like the original within that constraint? That gives a direct way to compare compression methods without having to rely on memory. It also shows the result can never be an exact match!
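
A minimal sketch of that experiment, assuming input normalized to [0, 1]; the power-curve compressor and its gamma value are made up for illustration, not any module’s actual transfer function:

```c
#include <math.h>
#include <stdio.h>

/* Naive approach: straight multiplication. The whole image darkens
 * uniformly and the midtones sink with it. */
static float scale_linear(float x, float new_max)
{
    return x * new_max; /* assumes input normalized to [0, 1] */
}

/* Compressive approach: still maps 1.0 -> new_max, but lifts the
 * midtones with a power curve (gamma < 1) so the result reads closer
 * to the original. */
static float scale_compressed(float x, float new_max, float gamma)
{
    return new_max * powf(x, gamma);
}

int main(void)
{
    const float samples[] = { 0.05f, 0.18f, 0.50f, 1.00f };
    for (int i = 0; i < 4; i++)
    {
        const float x = samples[i];
        printf("x=%.2f  naive=%.4f  compressed=%.4f\n",
               x, scale_linear(x, 0.2f), scale_compressed(x, 0.2f, 0.5f));
    }
    return 0;
}
```

Both versions respect the 0.2 ceiling, but they distribute the loss very differently, which is exactly what a side-by-side comparison exposes.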

So after doing some digging and wondering why attempting to clip overexposed pixels was not behaving as expected, I added some instrumentation.

For some strange, unknown reason (I will track this down tomorrow…), the maximum pixel values seen even for the base exposure were hitting 1.340317 (2.0 in linear space). Effectively, the moment you turned fusion on with a +1 bias, even the base image got shifted by +1EV (when it should be +0EV), even with exposure_increment() returning a multiplier of 1.

+2EV and +0.5 bias, or setting the basecurve to halve the input exposure, kills the weird behavior in the sky:

TBD: determine what effect (if any) exposure clipping now has. It appears to be one of the few extreme corner cases in which I’ve ever been able to induce haloing (which is EXTREMELY rare).

Haloing is not rare; you have it on the window frames.

Anyway, darktable, 2 instances of tone equalizer (linear blending with guided filter…), filmic, color balance, local contrast:


I opened the raw and inspected the pre-demosaic image, with only black subtraction and white balance applied, and I think what looks like haloing on the exterior side of the window frames is actually produced by the structure itself. I’d be curious to see a pixel-peep of what you’re looking at…

Nice rendition, BTW…

I suppose he meant the result posted by @Entropy512. You can clearly see the darkening inside the window panels.


@ggbutcher @matze I haven’t pixel peeped either and my takes so far have been careless in terms of colour accuracy, artifacts and haloing. Though fairly well-received, I consider them pretty ugly.

I do think that @anon41087856 is referring to the image itself. E.g., if you look at where the glass meets the frame you would see a soft white halo.

No, I meant, on @Entropy512’s pic, you have halos in the windows:

vs.

No matter how you put it, an algorithm that produces halos means light-transport consistency has been violated somewhere.

Thanks for clarifying and sharing. Good result based on good principles. :+1:

My problem with his image (that I noticed right away) is that there seems to be something similar to a light leak or fade. There are patches where the tone changes in an odd manner.

If you reread my comments (from 15 days ago, by the way), I surmised that using exposure clipping was potentially causing the haloing (exposure clipping was active in that particular image).

One could argue that the local contrast enhancement operation Pierre used fundamentally violates light-transport consistency. I’ll need to run some tests, but I wouldn’t be surprised if you could cause haloing by abusing it. After all, if you look in the darktable source code, “local contrast” is CLAHE, and the original author of the algorithm (Karel Zuiderveld) has stated that CLAHE can cause haloing. Of note, using tone equalizer without that follow-on step produced extremely unnatural results for me. So it appears that achieving “natural-looking” results requires violating Pierre’s rules somewhere in the pipeline.
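
For context, the step that gives CLAHE its name is a per-tile histogram clip. A toy sketch (bin count and clip limit are made up; this is not darktable’s actual code):

```c
#include <stdio.h>

#define BINS 8

/* Clip each histogram bin at `limit` and spread the excess evenly
 * over all bins (real implementations iterate until the excess is
 * gone). The CDF built from the clipped histogram then drives a
 * gentler, but still local, equalization; strong residual boosts at
 * hard edges are what can overshoot and read as halos. */
static void clip_and_redistribute(float *hist, float limit)
{
    float excess = 0.0f;
    for (int i = 0; i < BINS; i++)
        if (hist[i] > limit)
        {
            excess += hist[i] - limit;
            hist[i] = limit;
        }
    const float share = excess / BINS;
    for (int i = 0; i < BINS; i++)
        hist[i] += share;
}

int main(void)
{
    float hist[BINS] = { 2, 1, 30, 4, 3, 2, 1, 1 }; /* spiky tile histogram */
    clip_and_redistribute(hist, 8.0f);
    for (int i = 0; i < BINS; i++)
        printf("bin %d: %.2f\n", i, hist[i]);
    return 0;
}
```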

I’ve only seen haloing in exposure fusion in three cases:

  1. Any attempt to apply the algorithm to pixels in linear space. (There is a possibility that this is due to a bug lurking somewhere in the darktable blending code for exposure fusion; I’m digging into this, but in the current state of the code any attempt at blending in linear space fails horribly. I wound up busy with non-darktable stuff last weekend, so I couldn’t spend more time characterizing the original enfuse implementation.)
  2. Attempting to use exposure clipping - kind of makes sense, since this causes a hard edge in the exposure weights (see the sketch after this list)
  3. Having excessive EV shift between exposures
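
A minimal sketch of case 2, assuming the standard enfuse-style Gaussian well-exposedness weight; the 0.9 clip threshold is invented for illustration:

```c
#include <math.h>
#include <stdio.h>

/* Smooth enfuse-style well-exposedness weight: favors mid-grey and
 * falls off gently toward the extremes. */
static float weight_smooth(float x)
{
    const float sigma = 0.2f;
    return expf(-(x - 0.5f) * (x - 0.5f) / (2.0f * sigma * sigma));
}

/* Same weight with a hard clip of "overexposed" pixels: the weight
 * jumps to zero at the threshold. That discontinuity is what the
 * blending pyramid can turn into a visible edge, i.e. a halo. */
static float weight_clipped(float x)
{
    return (x > 0.9f) ? 0.0f : weight_smooth(x);
}

int main(void)
{
    for (float x = 0.86f; x < 0.95f; x += 0.02f)
        printf("x=%.2f  smooth=%.4f  clipped=%.4f\n",
               x, weight_smooth(x), weight_clipped(x));
    return 0;
}
```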

CLAHE is deprecated; the new local contrast is called bilat.c and uses a local Laplacian on a multi-resolution pyramid by default because, indeed, the bilateral filter is anisotropic and does not behave (especially when used in Lab…). Also, in this particular case, the bilat module is put at the end of the pipe, aka when we don’t care about light transport anymore. Same image without colour balance and local contrast:

Essentially the same, just less crunch.

Every time I see something bad when the theory says “it should work”, there is indeed a programmer-induced bug.

Probably because your clipping results in slope/curvature discontinuities in your transfer function between layers (which should not be a problem in linear space).

That means a broken algo with lucky parameters. There is no excessive EV shift, just EV shifts that make the flaws of the method pop out.

This is something I don’t get. Bear-of-little-brain here thinks EV should be a straight multiplication. No “pet-tricks”. Am I under-thinking this?

In this case, yes, the EV shift is just a multiplier.
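
Concretely, for linear data the whole operation is one multiplication (trivial sketch):

```c
#include <math.h>
#include <stdio.h>

/* An EV shift on linear data is just a power-of-two multiplier. */
static float apply_ev(float linear_value, float ev)
{
    return linear_value * exp2f(ev); /* +1 EV doubles, -1 EV halves */
}

int main(void)
{
    printf("0.25 at +2 EV -> %.3f\n", apply_ev(0.25f, 2.0f));  /* 1.000 */
    printf("0.25 at -1 EV -> %.3f\n", apply_ev(0.25f, -1.0f)); /* 0.125 */
    return 0;
}
```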

However, fusion takes multiple images generated with EV shifts and blends them together based on a set of calculated weights. What I’ve seen is that if things DO break, they break when the generated images are “too far” apart. (In the “traditional” enfuse approach, this would be setting your bracketing steps too far apart.)

I haven’t abused the original enfuse implementation yet to see how it behaves in such situations - maybe this weekend I’ll get some time to poke at it again? Interestingly, Google has been cited as performing their algorithm slightly differently than the darktable fusion approach here: instead of generating 3 or more images and blending them together with weighting, Google supposedly generates one image with a +EV shift, fuses it with the base image, then iteratively does the same thing a few times (so at all times only two images are being fused with each other, but the “base” image in successive iterations has had its dynamic range compressed by previous fusion steps).
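
A rough structural sketch of the two strategies, collapsed to per-pixel blending for brevity; real implementations blend multi-scale pyramids, and none of these names come from darktable’s or Google’s actual code:

```c
#include <math.h>
#include <stdio.h>

#define NPIX 3 /* toy "image" size */

/* Enfuse-style well-exposedness weight: favors mid-grey. */
static float weight(float x)
{
    const float sigma = 0.2f;
    return expf(-(x - 0.5f) * (x - 0.5f) / (2.0f * sigma * sigma));
}

/* darktable-style: build the whole stack of EV-shifted copies, then
 * blend all of them at once with normalized weights. */
static void fuse_n_way(const float *base, float *out, const float *ev, int n)
{
    for (int p = 0; p < NPIX; p++)
    {
        float acc = 0.0f, wsum = 0.0f;
        for (int i = 0; i < n; i++)
        {
            const float shifted = base[p] * exp2f(ev[i]);
            const float w = weight(shifted);
            acc += w * shifted;
            wsum += w;
        }
        out[p] = (wsum > 0.0f) ? acc / wsum : base[p];
    }
}

/* Reported Google-style approach: fuse exactly two images per step;
 * each result becomes the next step's base, so its dynamic range is
 * already partially compressed when the next +EV copy is generated. */
static void fuse_iterative(const float *base, float *out, float ev_step, int steps)
{
    for (int p = 0; p < NPIX; p++)
        out[p] = base[p];
    for (int s = 0; s < steps; s++)
        for (int p = 0; p < NPIX; p++)
        {
            const float brighter = out[p] * exp2f(ev_step);
            const float w0 = weight(out[p]), w1 = weight(brighter);
            const float wsum = w0 + w1;
            if (wsum > 0.0f)
                out[p] = (w0 * out[p] + w1 * brighter) / wsum;
        }
}

int main(void)
{
    const float base[NPIX] = { 0.02f, 0.10f, 0.40f };
    const float ev[3] = { 0.0f, 1.0f, 2.0f };
    float a[NPIX], b[NPIX];

    fuse_n_way(base, a, ev, 3);
    fuse_iterative(base, b, 1.0f, 2);

    for (int p = 0; p < NPIX; p++)
        printf("base=%.2f  n-way=%.3f  iterative=%.3f\n", base[p], a[p], b[p]);
    return 0;
}
```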

I did find a potential difference between darktable’s blending and enfuse’s that may be a contributor to some of the oddities observed:

In enfuse, the mask weights are normalized (such that the total of the weights for any given pixel is 1.0) before the Gaussian pyramid of masks is generated.

In darktable, this appears to happen after the Gaussian pyramid is generated.
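
To see why the ordering matters: normalization is a per-pixel division, so it does not commute with smoothing wherever the raw weight sum varies across pixels. A tiny 1-D demo, with a 3-tap blur standing in for one pyramid level:

```c
#include <stdio.h>

#define N 5

/* 3-tap blur with clamped borders, standing in for one pyramid level. */
static void blur(const float *in, float *out)
{
    for (int i = 0; i < N; i++)
    {
        const float l = in[i > 0 ? i - 1 : 0];
        const float r = in[i < N - 1 ? i + 1 : N - 1];
        out[i] = 0.25f * l + 0.5f * in[i] + 0.25f * r;
    }
}

/* Per-pixel normalization: make the two masks sum to 1.0. */
static void normalize(float *a, float *b)
{
    for (int i = 0; i < N; i++)
    {
        const float s = a[i] + b[i];
        if (s > 0.0f) { a[i] /= s; b[i] /= s; }
    }
}

int main(void)
{
    /* two raw weight masks whose per-pixel sum is NOT constant */
    const float wa[N] = { 1.0f, 0.2f, 0.1f, 0.1f, 0.1f };
    const float wb[N] = { 0.1f, 0.1f, 1.0f, 0.2f, 0.1f };
    float a1[N], b1[N], a2[N], b2[N], ta[N], tb[N];

    /* enfuse order: normalize first, then smooth */
    for (int i = 0; i < N; i++) { a1[i] = wa[i]; b1[i] = wb[i]; }
    normalize(a1, b1);
    blur(a1, ta);
    blur(b1, tb);

    /* darktable order (as observed): smooth first, then normalize */
    blur(wa, a2);
    blur(wb, b2);
    normalize(a2, b2);

    for (int i = 0; i < N; i++)
        printf("i=%d  normalize->blur: %.3f  blur->normalize: %.3f\n",
               i, ta[i], a2[i]);
    return 0;
}
```

The two columns diverge at every pixel where the raw weight sum isn’t flat, so the two pipelines end up assigning different effective weights.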

Ah, apologies, I forgot we were talking about blending…

A try with ART, mainly using the Log encoding tool. DSZ_0619-art-1.jpg.out.arp (9.7 KB)


My 1st few steps with DT 3.

DSZ_0619.NEF.xmp (12.7 KB)

I’ve managed to compress everything into display range, but no luck with contrast. My jpg is looking very flat. Also, many areas are out of range and gamut. I’ll have to go through Aurelien’s video a few more times!

Darktable 3.9, Filmic v6

DSZ_0619.NEF.xmp (39.4 KB)


I had to try this just to see what the result would be with enfuse. (-1, +1, +2, +3 EV)
