[Play_Raw] Dynamic Range Management

@afre - I’m assuming one of the lua-script approaches or manually exporting then feeding to enfuse?

Not what I’m used to seeing from darktable (although I suspect I know where things are breaking now: the contrast weight in DT’s implementation appears to be WAY off. enfuse’s default for contrast weight is 0.0!)

I suspect that in the flow darktable uses, contrast weight will simply act to increase the weighting of positive exposures improperly. A “traditional” enfuse workflow (where the highlights of the “bright” images will be clipped severely) that has contrast/saturation weighting turned on will deweight these clipped highlights due to lack of saturation and lack of contrast. In this particular scenario, tracking brightnesses above 1.0 becomes a liability. (However I think that tracking brightnesses above 1.0 eliminates the need for contrast weight in darktable’s exposure fusion use case - they’ll simply get severely penalized in the exposure weight calculation step.)
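To make the "severely penalized" point concrete, here is a minimal sketch of the Gaussian exposure-weight curve enfuse uses (optimum 0.5 and width 0.2 are enfuse's documented defaults; the function name is mine):

```python
import math

def exposure_weight(v, optimum=0.5, width=0.2):
    """Gaussian exposure weight as in enfuse: the further a pixel's
    brightness is from the optimum, the exponentially less it counts."""
    return math.exp(-((v - optimum) ** 2) / (2.0 * width ** 2))

# A tracked brightness of 1.2 (above display white) is crushed:
print(exposure_weight(0.5))  # 1.0 at the optimum
print(exposure_weight(1.2))  # ~0.002, effectively zero weight
```

So as long as brightnesses above 1.0 are tracked rather than clipped, the exposure weight alone already rejects them, which is why contrast weight becomes redundant here.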

Need to try when I get home.

Elaboration of my approach.

1. Start with linear no-clip float. (a.tif)
2. Filter pixels, esp. negative values. (a1.tif)
3. Process a copy to be in a perceptual range. (a1_.tif) This step flattens both the shadows and highlights while adding contrast to the mid-tones.
4. Restore both extremes using enfuse. (a2.tif)

enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --entropy-weight=0 --gray-projector=l-star -l auto -o a2.tif a1.tif a1_.tif

Caution: post-processing is required, as enfuse introduces negative values and an alpha channel. Hint: adding greyscale masks to your input images helps you control which pixels contribute to the fusion and by how much.
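A minimal sketch of that post-processing step, assuming the fused TIFF has already been decoded into float RGBA tuples (the helper name and data layout are mine, purely for illustration):

```python
def clean_fused_pixels(rgba_pixels):
    """Post-process enfuse output: clamp negative values to zero and
    drop the alpha channel. `rgba_pixels` is a hypothetical list of
    (r, g, b, a) float tuples standing in for decoded TIFF data."""
    return [tuple(max(0.0, c) for c in px[:3]) for px in rgba_pixels]

pixels = [(0.5, -0.01, 0.2, 1.0), (1.1, 0.3, 0.0, 1.0)]
print(clean_fused_pixels(pixels))
# [(0.5, 0.0, 0.2), (1.1, 0.3, 0.0)]
```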


OK, BOOM - contrast weighting definitely WAS the culprit in many of the issues I’ve been having with highlights and exposure fusion.

I modified darktable's exposure fusion with the following changes:
Perform all operations in gamma=2.4 space instead of linear (despite claims that this would cause severe haloing, it actually eliminates it)
Disable saturation weight - since all of our images are just brightness-scaled copies and we don't clip, saturation will not change for any pixel
Change exposure optimum to 0.5 and width to 0.2 (the enfuse defaults) - these need to be exposed in the darktable UI as sliders; I have no idea why they are hardcoded
Change the exposure weighting function from R/G/B peak to R/G/B average (the enfuse default)
Disable contrast weighting - again, we're not clipping, and it turns out contrast weighting then becomes just a linear function of the exposure shift for a given image
Temporarily disable all basecurve operations when exposure fusion is active (this is getting undone; the proper fix was disabling contrast weighting)
Drop the weight of any pixel with exposure >= 1.0 to 0 - TBD, this may not be having much benefit, if any; I'm probably going to nuke it (otherwise it also needs to be a slider - enfuse has an exposure_cutoff option)
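A compact sketch of how those changes combine into a per-pixel weight (names and structure are mine, not darktable's actual code, which lives in basecurve.c and its OpenCL kernel):

```python
import math

def pixel_weight(r, g, b, optimum=0.5, width=0.2, cutoff=1.0):
    """Sketch of the modified fusion weight described above:
    encode linear RGB to gamma 2.4, drop any would-clip pixel,
    average the channels (enfuse default) instead of taking the
    peak, then apply the Gaussian exposure weight."""
    enc = [c ** (1.0 / 2.4) if c > 0.0 else 0.0 for c in (r, g, b)]
    if max(enc) >= cutoff:  # any clipped channel kills the weight
        return 0.0
    v = sum(enc) / 3.0
    return math.exp(-((v - optimum) ** 2) / (2.0 * width ** 2))
```

Usage: a mid-grey pixel like `pixel_weight(0.2, 0.2, 0.2)` lands near the optimum after gamma encoding and gets close to full weight, while `pixel_weight(1.5, 0.2, 0.2)` is rejected outright by the cutoff.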

After all those code changes, the only settings in the UI are:
Turn on exposure fusion with three exposures
Leave bias at +1.0
Set exposure shift per exposure to +2.0ev

No other changes were made; I also stuck with camera white balance. Once I’ve exposed a bunch of the important control variables in sliders I’ll push the code as a WIP.

Have others done better? Definitely. Once this code is cleaned up, is the work to obtain the image SIGNIFICANTLY less? Yup.

For comparison, here’s Pierre’s tone equalizer: +4.0 EV for the deepest shadows, gradually dropping to 0.0 EV for the -2 EV highlight band. The highlight region looks much nicer, but the indoor areas look highly unnatural, with that “aggressively tonemapped HDR” look that drove so many people towards enfuse (one of enfuse’s primary claims to fame is that it’s much more natural-looking than most preceding HDR tonemapping approaches).


As shown in my command, I gave all of the weighting to contrast-weight because it makes the image sharper. Does it have an adverse effect on take 2? I can’t tell because I didn’t pixel peep.

Where does this gamma come from? How about the working colour space? The latter likely has more of an influence on the colour balance.

How did you do that? If there is clipping in the input images, I usually mask them out. Otherwise, if there is detail, I keep it. I haven’t had any luck with exposure_cutoff.

Yes, looks terrible. (Coming from someone with an ancient low-end colour unmanaged SDR screen. :stuck_out_tongue:)

Hey @ggbutcher, could you have used a polarizer here to help cut some of the glare off the floor and table? (Or did you use a polarizer?)


Your rather different order of operations, which appears to have some nonlinear shifts, is likely to behave very differently with contrast weight. At least currently, if there’s just linear multiplication for exposure shift AND no clipping, contrast weight shouldn’t be of benefit - and in fact just becomes a linear function of the exposure shift multiplier.
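The "contrast weight becomes a linear function of the exposure multiplier" claim can be checked with a toy example, assuming an absolute-Laplacian local contrast measure (enfuse's actual contrast measure differs in detail, but any linear filter behaves the same way under multiplication):

```python
def local_contrast(signal):
    """Absolute Laplacian of a 1-D signal, a stand-in for the local
    contrast measure used in contrast weighting."""
    return [abs(signal[i - 1] - 2 * signal[i] + signal[i + 1])
            for i in range(1, len(signal) - 1)]

base = [0.10, 0.30, 0.20, 0.50, 0.40]
pushed = [2 * v for v in base]  # a +1 EV exposure shift, no clipping

# Contrast of the pushed image is exactly 2x the base contrast,
# so contrast weight just re-ranks images by their multiplier:
print(local_contrast(base))
print(local_contrast(pushed))
```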

If you turn on any of the other stuff in basecurve, things could become VERY different. I’m also wondering why a decision was made to disable color preservation when exposure fusion was used - I suspect that operating with this disabled is what makes Pierre hate basecurve so much.

The actual gamma part of sRGB is 2.4 outside of the linear region; the linear region near black makes it average out to the oft-quoted 2.2. Working color space is the default (apparently linear Rec. 2020 now?) - changing this could break VERY badly currently. Part of the whole “this is a WIP” thing - the appropriate approach may be to change from working to sRGB after the basecurve is applied, and convert back at the end.
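For reference, the standard sRGB piecewise transfer function being described, as a small self-contained sketch:

```python
def srgb_encode(v):
    """sRGB OETF: linear segment near black, gamma 2.4 elsewhere.
    The combination approximates an overall gamma of ~2.2."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1.0 / 2.4) - 0.055

def srgb_decode(v):
    """Inverse of srgb_encode: encoded value back to linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```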

Conditional inside the basecurve_compute_features() OCL kernel (and its equivalent cpu function in basecurve.c) - Since we’re generating the pushed exposures internally, we don’t have to worry about clipping with this flow.

My laptop screen isn’t that much better - but for many years we’re going to have to cater to the lowest common denominator of unmanaged SDR displays. There’s absolutely no decent widely-deployed standard for delivering stills to HDR displays. HEIF/HEIC might do it, but support for that is very limited, especially support with HDR display capability. Right now, if I want to output to an HDR display (such as my Vizio P65-F1), I have to do the following:
Export from darktable as Rec. 2020 linear TIFF
Use ffmpeg with the zscale filter to convert it into Rec. 2020 HLG, 10-bit HEVC codec

Doing this looks AMAZING. But it’s a massive PITA for anyone to view unless encoding a bunch of images to a video slideshow.

Maybe. Last time I used a polarizer I had brown hair, and was a whole lot stupider than I am now… :smile:

Really, I probably need to go back one morning and re-regard the scene. The blueish incident light and the glare look right to my recall, but I can’t remember what I had for lunch yesterday…

The comparison made with Aurélien Pierre’s tone equalizer is unfair IMHO.
Here is what I got with tone equalizer in about 30 seconds (without applying any local contrast boosting after the tone equalizer):

Which is really close to what you have with your exposure fusion for the indoor part :slight_smile: And on the outdoor part, the tone equalizer allows nice control so that the sky remains blue and the mountains highlights remain unclipped :slight_smile:


hi @rawfiner,

Nice result. Did you use the basecurve or filmic module as a starting point?
From the embedded history I only see basecurve, toneequal, rgblevels, …

Interesting, so a combination of a bunch of things.

As I mentioned in my post - can people do better with significant additional work? Yes. I’m not sure if the embedded history for mine showed, but - white balance, demosaic, then basecurve fusion only (the curve itself was effectively disabled by making a linear line) - nothing else.

One of the reasons I’ve liked enfuse so much is that it’s pretty tolerant/adaptive, it’s difficult to get something that looks REALLY bad.

My family currently jokes that my camera has “write only memory” - so getting time spent per image way down so I can clear the backlog that resulted from switching workflows a few years ago (and hence falling WAY behind) is important. (I used to have a pretty quick workflow using ufraw - but it strangely underexposes anything coming from a recent Sony camera by exactly 1 or exactly 2 stops, with the actual value sometimes changing, and with me unable to locate where in the flow ufraw is breaking that dcraw doesn’t. I gave up and started fiddling with darktable.)

I’ve made my life harder by tending to expose for highlight preservation lately, which means a lot more work in post - work I shouldn’t be making for myself!

Edit: I just realized that I think I made a mistake in the exposure cutoff function, which would explain why it had far less benefit in the highlights than it should have. While tonight is not supposed to be a coding night (it’s a drinking night! :wink: ), fixing that one issue should take less than the ten minutes I have to set aside.

I’m doing the same thing right now. So, my workflow for all imaging - serious and family snapshots - is to make small JPEG proof images with a batch recipe, and for that I just do a linear contrast-stretch that will blow a small bit of highlight in order to get a decent overall spread. I then use those images to select ones for reprocessing. For family work, the proofs are usually sufficient, so that makes things easier.

To preserve highlights, I’m experimenting with the Z6’s highlight-weighted matrix metering mode - it definitely pushes exposure enough to meet that goal, but I’m still working through all the implications of that with regard to my proof recipe.

The subject image of this thread was not exposed with this mode, and it accordingly has a wee bit of total saturation in the snow on the mountain, and blue channel saturation in the sky. I’m probably going to venture back over there in a couple of days, to take a weighted mode exposure and to do a proper white balance measurement, at about the same time of day as the original image.

When you say “saturation” I’m guessing in this case you mean “clipping”?

I see the snow as very unsaturated… But channel clipping is exactly what I meant by exposure clipping. I reworked the algorithm to use a channel average instead of channel peak for exposure weight (same as the enfuse defaults), but calculated exposure cutoff AFTER averaging - which could allow “would-clip” contributors to receive more weight than they should.
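The ordering bug can be shown with a toy example (function names are mine; the real code uses the Gaussian weight, but the cutoff ordering is the point here):

```python
def weight_cutoff_after_average(rgb, cutoff=1.0):
    """The buggy ordering: average the channels first, then test the
    cutoff. A pixel with one clipped channel can sneak through."""
    v = sum(rgb) / 3.0
    return 0.0 if v >= cutoff else 1.0

def weight_cutoff_per_channel(rgb, cutoff=1.0):
    """The intended ordering: any clipped channel kills the weight."""
    return 0.0 if max(rgb) >= cutoff else 1.0

clipped_blue = (0.4, 0.4, 1.2)  # one would-clip channel
print(weight_cutoff_after_average(clipped_blue))  # 1.0 - slips through
print(weight_cutoff_per_channel(clipped_blue))    # 0.0 - rejected
```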

Not having to deal with clipped pixels in the pipeline makes things simpler in some regards, but introduces new things you have to consider. (For example, saturation weight in enfuse really exists only to detect when a pixel is clipped in some of the input images, or to handle other camera behaviors that might have altered saturation in an undesirable manner. For exposure fusion in dt, if a pixel has a given saturation value, that value will be preserved when it is scaled during the exposure shift. There could be some artifacts with basecurve when “preserve colors” is turned off, which is why I’m planning, as part of my patch, to allow it to remain on - based on a conversation with the maintainer of the preserve-colors function, force-disabling it when fusion was added seems to have been an oversight.)

I reserve the term ‘saturation’ for the specific condition of light on the sensor with a higher intensity than the sensor can resolve. Anything in post that drives a value to display white is ‘clipping’.

Some of the snow pixels behave as ‘saturated’ when they are white-balance corrected, they take on the dreaded magenta cast. But, it’s snow, so it’s a bit dicey to differentiate between a measured white and a saturated white.

The blue channel clipping I observed was manifest in dt’s gamut clipping indicator. Not being familiar with dt, I don’t know where in the pipeline that is determined…

RT 5.6

DSZ_0619.jpg.out.pp3 (11,9 KB)


In my history, basecurve and rgblevels are disabled.
I started by disabling basecurve, then did everything with toneequalizer, then tried rgblevels to see if changing the gamma was giving an improvement, but I finally kept toneequalizer alone (the only other modules are highlight reconstruction and exposure to get back information in highlights).


Sorry for my stupid question: what version of darktable contains the toneequalizer module? Or how do you use it?

It’s not part of a DT release or the master branch. You can compile from this PR if you are comfortable doing it.

Only when:

  1. Self-compiled
  2. You locally merge Pierre’s pull request (it is not yet merged to master)

I plan on publishing my fusion work as a pull request later this week. Probably not tonight, as I’ve mentioned elsewhere I’ve got other things going on outside of the house.

Thinking about comparisons, it’s in a way equivalent to taking an image and setting a new lower maximum - yet trying to maintain perceptual appearance. For example, setting a new maximum of 0.2 of original (in linear):

How do you make the new darker image look like the original within that constraint? That gives a direct way to compare compression methods without having to rely on memory. It also shows the result can never be an exact match!
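A rough way to see the "never an exact match" point, using gamma-2.4 encoding as a crude perceptual proxy (my assumption for illustration, not a proper lightness model):

```python
def perceived(v, gamma=2.4):
    """Crude perceptual proxy: gamma-2.4 encoding of a linear value."""
    return v ** (1.0 / gamma)

# Scaling linear light by 0.2 only dims the perceptual proxy to ~0.51x,
# so a plain multiply looks far less dark than the 5x linear cut
# suggests. Any curve that restores mid-tone appearance within the new
# 0.2 maximum must compress the top of the range, so the match can
# never be exact everywhere at once.
ratio = perceived(0.2 * 1.0) / perceived(1.0)
print(round(ratio, 3))  # ~0.511
```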

So after doing some digging and wondering why attempting to clip overexposed pixels was not behaving as expected, I added some instrumentation.

For some strange, as-yet-unknown reason (I'll track this down tomorrow…), the maximum pixel values seen even for the base exposure were hitting 1.340317 (i.e., 2.0 in linear space). Effectively, the moment you turn fusion on with +1 bias, even the base image gets shifted by +1 EV (when it should be +0 EV), even with exposure_increment() returning a multiplier of 1.

+2EV and +0.5 bias, or setting the basecurve to halve the input exposure, kills the weird behavior in the sky:

TBD: determine what effect (if any) exposure clipping now has. It appears this might be one of the few extreme corner cases in which I’ve ever been able to induce haloing (which is EXTREMELY rare).