Following up with enfuse and dt

@Entropy512 is clearly interested in enfuse and its implementation in dt. This thread is dedicated to the two, although I will be focusing on the latter, as I don’t use dt often, let alone the dev version. For context, please start here:
A tone equalizer in darktable ? - #67
[Play_Raw] Dynamic Range Management - #17 by Entropy512

At present, I shall be replying to:

As an experiment, here are the outputs: exposure10, contrast10, exposure24, contrast24.

They are all different. My processing is as follows:
1. Bracket 3 exposures like so: 0 EV, 2 EV, 4 EV.
2. 10 series: apply gamma 2.4 post-enfuse (see the sketch below).
3. 24 series: apply gamma 2.4 pre-enfuse.
4. Normalize and round for web output.
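
To make the two variants concrete, here is a minimal C sketch of the gamma step (assuming a pure 2.4 power law rather than the piecewise sRGB curve; the function name is mine):

  #include <math.h>
  #include <stdio.h>

  /* Pure power-law gamma 2.4, assumed here in place of the piecewise sRGB curve. */
  static float gamma_encode(float v) { return powf(v, 1.0f / 2.4f); }

  int main(void)
  {
    /* 10 series: run enfuse on linear data, then gamma-encode the result.
     * 24 series: gamma-encode each bracket first, then run enfuse on the
     * encoded data. */
    printf("0.18 linear encodes to %.3f\n", gamma_encode(0.18f)); /* ~0.489 */
    return 0;
  }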

If I take the FeatureSIM of each image (the JPEGs) against 0 EV (a TIFF), I get the following values (higher is better).

[~,b]=FeatureSIM(ev,e24)
  b =  0.10508
[~,b]=FeatureSIM(ev,e1)
  b =  0.10848
[~,b]=FeatureSIM(ev,c24)
  b =  0.10489
[~,b]=FeatureSIM(ev,c1)
  b =  0.10544

It is clear that the 10 series perform better than their 24 counterparts. Therefore, linear input isn’t a detriment to enfuse processing, at least for this image and this quality assessor. The issue lies elsewhere.

@Entropy512, about your last post on the other thread: very nice result. I’m wondering about the change in colorspace with all your recent changes. Do you find that it helps only with the halos, or is the entire result better than with a linear space?

BTW, the preserve colors feature was added to the base curve very recently. I didn’t add it to the fusion because I don’t use it, so I don’t know whether it is needed there or whether it would be an improvement. If it makes things better, it can be added.

I’ve found that the entire result is better.

FYI, I dropped the exposure cutoff code with almost no visual difference. (Edit: I think I had an error in my implementation of the cutoff code that might explain why it didn’t help. Might try a trick tonight.)

If you look at the Gaussian weighting function used in Mertens’ paper: while it isn’t explicitly mentioned that they used sRGB, a default optimum of 0.5 STRONGLY hints at it being designed to work in sRGB. (0.5 in sRGB is roughly middle grey, vs. around 0.18 in linear.)
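
To make that concrete, here is a minimal C sketch of that weight function (the sigma = 0.2 default is from the paper; everything else is illustrative):

  #include <math.h>
  #include <stdio.h>

  /* Gaussian "well-exposedness" weight from Mertens et al.:
   * w(v) = exp(-(v - optimum)^2 / (2 * sigma^2)) */
  static float exposure_weight(float v, float optimum, float sigma)
  {
    const float d = v - optimum;
    return expf(-(d * d) / (2.0f * sigma * sigma));
  }

  int main(void)
  {
    /* With the paper defaults (optimum 0.5, sigma 0.2), middle grey in sRGB
     * (~0.5) gets full weight, while linear middle grey (~0.18) does not. */
    printf("w(0.50) = %.3f\n", exposure_weight(0.50f, 0.5f, 0.2f)); /* 1.000 */
    printf("w(0.18) = %.3f\n", exposure_weight(0.18f, 0.5f, 0.2f)); /* ~0.278 */
    return 0;
  }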

(Sorry, I didn’t label the axes: the x axis is the EV delta from the clip point. I know Pierre likes that representation, and that’s one case where I agree with him.)

You can see from the plot that:
An optimum of 0.50 in linear weights significantly towards the highlights. This is the behavior I always saw from dt’s exposure fusion: while some highlight detail was preserved, in general the highlights were severely crushed/compressed and “almost but not quite blown”.

Attempting to use 0.18 in linear (which matches 0.50 in gamma 2.4) has the side effect of constant, significant weights at the low end: with Mertens’ sigma of 0.2, the weight at v = 0 is exp(-0.18^2 / 0.08) ≈ 0.67, versus ≈ 0.04 for an optimum of 0.5.

Attempting to apply the weighting function in gamma space while continuing to blend in linear produces really horrible results. (Sorry, no example; I deleted the code for that approach because it was so consistently bad, and I really don’t want to do it again…)

As to enfuse not requiring gamma-encoded input: enfuse does a LOT of color management work internally before performing the fusion/blending. I haven’t yet found an obvious transfer curve transform (enfuse’s code is really hard to read), but based on @afre’s experience I suspect there is one somewhere internally.

Similarly, there may be significant differences between enfuse’s contrast weighting function and the one implemented in dt. The one in dt will definitely create a weight for exposure-shifted images that increases in proportion to the exposure, which is consistent with the fact that highlights got blown badly when it was present.
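
For reference, Mertens’ paper measures contrast as the absolute response of a Laplacian filter on the greyscale image. A minimal sketch of that measure (the 3x3 kernel and the border handling are common textbook choices, not necessarily what either enfuse or dt actually does):

  #include <math.h>

  /* Contrast measure from Mertens et al.: |Laplacian| of the greyscale image.
   * gray and weight are w*h buffers; border pixels are skipped for brevity. */
  static void contrast_weight(const float *gray, float *weight, int w, int h)
  {
    for(int y = 1; y < h - 1; y++)
      for(int x = 1; x < w - 1; x++)
      {
        const float lap = gray[(y - 1) * w + x] + gray[(y + 1) * w + x]
                        + gray[y * w + x - 1] + gray[y * w + x + 1]
                        - 4.0f * gray[y * w + x];
        weight[y * w + x] = fabsf(lap);
      }
  }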

I won’t be able to do much code analysis tonight, it’s a drinking night. :wink: I’m already behind schedule on getting out the door for work as it is. :slight_smile:

As to preserve colors when fusion is on: the fact that it’s forcibly turned off when fusion is enabled made me think some sort of conflict had been identified?

Before I head out, for reference, here are the relative contributions for three exposures with a +2EV shift:

Given the significant shift in the weight function, unless @afre changed the exposure optimum when working in linear, it’s impossible for the images to look THAT similar unless somewhere in the internals of enfuse a transfer curve conversion was applied and then undone on output.

I’m trying to dig through enfuse to understand:

  1. Their contrast weighting function (it makes my head hurt)
  2. How their weighting function (which appears different from that of Mertens’ paper) actually works. At first inspection, all weights other than zero (zero weights are special-cased by a conditional that bypasses the whole evaluation) would get normalized out, since they’d be a common multiplier across all images; see the sketch after this list. But as I said, enfuse is kind of hard to follow (heavily templated C++, with most of the heavy lifting happening in a .h file, which is unusual…)
  3. If all of the colorspace conversions they’re doing would also change the transfer function of the internal working space.
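
On point 2, here is a quick sketch of why a multiplier common to all images cancels in the per-pixel normalization step (this is the normalization from Mertens’ paper, not enfuse’s actual code):

  #include <stdio.h>

  /* Per-pixel weight normalization across n input images: weights are divided
   * by their sum, so any factor applied to every image alike cancels out. */
  static void normalize_weights(float *w, int n)
  {
    float sum = 0.0f;
    for(int k = 0; k < n; k++) sum += w[k];
    if(sum > 0.0f)
      for(int k = 0; k < n; k++) w[k] /= sum;
  }

  int main(void)
  {
    float a[3] = { 0.2f, 0.3f, 0.5f };
    float b[3] = { 2.0f, 3.0f, 5.0f }; /* the same weights scaled by 10 */
    normalize_weights(a, 3);
    normalize_weights(b, 3);
    printf("%.2f %.2f %.2f\n", a[0], a[1], a[2]); /* 0.20 0.30 0.50 */
    printf("%.2f %.2f %.2f\n", b[0], b[1], b[2]); /* identical */
    return 0;
  }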

@afre - do you happen to see an error like this displayed?

enblend-enfuse/enfuse.cc at master · jackmitch/enblend-enfuse · GitHub - “: warning: fusing in identity space; option “--fallback-profile” has no effect”

It really looks to me like enfuse attempts to convert to an internal color space for fusion. From reading the code alone, it’s not entirely clear whether a linear-to-gamma transform would happen if linear input were detected. If you’re getting visually similar results, it’s highly likely such a transform is occurring.

Edit:
OK, now I see why fusion and color preservation are currently mutually exclusive.

At some point, someone clone-and-owned process_lut() to create apply_ev_and_curve().

Rather than having process_lut() call apply_ev_and_curve() with a mul of 1 (which would avoid two nearly identical copies of code doing the same thing), they now live separately, and thus apply_ev_and_curve() didn’t get the love that you gave process_lut().

Looks like undoing this forking is something I need to do before continuing on with fusion work.
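
A sketch of the shape that un-forking would take (all names here are stand-ins for the real basecurve internals, which take module and LUT state; this only shows the idea of one shared code path with mul = 1):

  /* lookup_curve() is a hypothetical stand-in for the basecurve LUT lookup. */
  static float lookup_curve(float v) { return v; }

  static float apply_ev_and_curve(float in, float mul)
  {
    return lookup_curve(in * mul); /* exposure scale, then curve */
  }

  static float process_lut(float in)
  {
    /* delegate instead of keeping a near-identical clone of the code */
    return apply_ev_and_curve(in, 1.0f);
  }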

No, I don’t get that error because I didn’t use identity space. I wonder whether we are using the same enfuse and addressing the same things; I believe I have 4.2. Anyway, you are taking a deeper dive than I have time for, given the circumstances, so you probably know more than I do at this point. My knowledge comes from the manual and intuition based on prior knowledge, not the papers or implementations. :blush:

PS Please bring your discussions out of Play Raw and place them here or in a separate thread. While discussion is welcome, it is becoming increasingly off-topic there, and not focused on the actual Play Raw processing of one another.

Keep in mind I’m trying to make dt behave similarly to enfuse. The behaviors you’re describing make it sound like feeding enfuse files it detects as linear causes it to switch internally to working with a gamma curve. Otherwise your images would look significantly different unless you were changing the exposure weighting when switching from linear to gamma, since 0.5 in linear and 0.5 in sRGB are vastly different. (The latter is roughly middle grey; the former is pretty far up in the highlights.)
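
To put numbers on that, a quick check with the standard sRGB decode function:

  #include <math.h>
  #include <stdio.h>

  /* Standard sRGB EOTF (decode to linear). */
  static float srgb_to_linear(float v)
  {
    return (v <= 0.04045f) ? v / 12.92f
                           : powf((v + 0.055f) / 1.055f, 2.4f);
  }

  int main(void)
  {
    /* 0.5 in sRGB decodes to roughly middle grey; 0.5 taken as a linear
     * value sits about 1.2 EV above that, toward the highlights. */
    printf("srgb_to_linear(0.5) = %.3f\n", srgb_to_linear(0.5f)); /* ~0.214 */
    return 0;
  }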

At this point, I’m seeing values significantly greater than 1 getting passed to the basecurve module. Somehow the basecurve LUTs behave as expected despite this 2-2.5x scaling factor (which depends on the settings of the earlier whitebalance module), even though the way the function is written indicates it should be receiving data below 1.0, except for values above a potential clipping point (outside the user-defined curve and thus requiring extrapolation).

Switch on fusion and now, boom: anything >1.0 on the input causes severe clipping artifacts unless at least one image is scaled to have a peak value <1.0.
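
As an illustration of that failure mode (not the actual basecurve code), any lookup that hard-clamps its input folds everything above 1.0 onto a single table entry, so highlight gradients turn into flat clipped patches:

  #include <math.h>

  /* Illustrative only: clamping before indexing a LUT or weight table maps
   * every input above 1.0 to the same entry, flattening highlight detail. */
  static float weight_lookup(const float *table, int size, float in)
  {
    const float c = fminf(fmaxf(in, 0.0f), 1.0f);
    const int i = (int)(c * (float)(size - 1));
    return table[i];
  }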

This is going to take some significant digging to figure out what’s going on. Wherever the issue is, it isn’t immediately obvious from reading through the “normal” and “fusion” flows of basecurve.

As to thread management: I was leaning that way, but @ggbutcher implied he wanted to follow some of the analysis and development in his own thread. Others might not want that, so I’m going to keep the work elsewhere, but I’ll continue to use his image as a test case, since it elicits some strange corner cases I hadn’t seen in most of my own test images.

@Entropy512, I’ve been playing a little with enfuse, and it seems that it generates halos with linear data, so maybe it is a limitation of the algorithm.
I’d like to play a bit with your enhanced version. Do you think you can share it, even if it is not finished yet?

I’m planning on commenting my code and pushing it up tonight.

I had planned on pushing it up a while ago, but I wanted to fix the issue with preserve colors and fusion first.

Haloing is what I’ve seen, in general, for almost any approach with linear data. It is still possible to induce haloing with my changes, but instead of happening almost always, it usually only begins to show if you push the per-exposure offset to +2EV or more.

https://github.com/darktable-org/darktable/pull/2828