Experimenting with Local Laplacian Filters

Local Laplacian filters are an interesting algorithm for manipulating the local and global contrast of images while minimizing halos. I have wanted to experiment with this technique for quite a while, and I have now finally written some code to see what one can get.

Local Laplacian filters are available in Darktable, and most of the code I have used is derived from there. However, I have decided to follow the original paper(s) more closely than DT’s implementation:

  • the algorithm is applied to the log-luminance, not to the L channel of Lab as in DT
  • I am not using the speed-up proposed by Aubry et al.; instead I am using the original version from the Paris et al. paper(s)
  • so far I have used the simplest version of the remapping function, which simply clips the values above the user-defined threshold
  • the output image is “beautified” with a simple gain + power adjustment, applied to the linear luminance of the image to avoid color shifts. Again, this follows the suggestions of Paris et al. (a small sketch of the last two points follows this list)
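To make the last two points concrete, here is a minimal Python sketch of the clipped remapping and the gain + power step; the function and parameter names (remap_clip, beautify, sigma, gain, power) are only illustrative, not the actual code:

```python
import numpy as np

def remap_clip(gauss_log, g0, sigma):
    """Simplest remapping: deviations of the log-luminance from the
    reference Gaussian coefficient g0 larger than the user-defined
    threshold sigma are simply clipped (sketch only)."""
    return g0 + np.clip(gauss_log - g0, -sigma, sigma)

def beautify(lum_lin, gain=1.0, power=1.0):
    """Gain + power adjustment on the linear luminance; the resulting
    luminance ratio is then applied to all RGB channels so colors do
    not shift (illustrative defaults)."""
    return gain * np.power(np.maximum(lum_lin, 0.0), power)
```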

My implementation is still far from perfect, and I am still getting some artifacts that I need to understand. Also, the code is rather slow, too slow to think about integrating it into PhF for now. The results are nevertheless interesting, so let me show a few examples from “difficult” images that have been posted on this forum in the past:

Dark faces:

Filmic, when to use?:

RAW challenge: backlit

This is work in progress, so comments/suggestions are very much appreciated!

Pinging @Coding_Dave, @obe and @PkmX as they posted the original images…

Looks like your image links are broken.

Nice work! What sort of artifacts?

The right edge is darker than it should be, and the highlights are flattened excessively, but the latter is probably a consequence of the overly simplistic remapping function I am using.

Really? Maybe that’s an issue on your side? @Entropy512 do you have the same problem?

Looks like it must have been my problem. I can see them now. Sorry about that!

Only the right edge? Is it content-dependent or always the right edge? (if you flip your input image left/right, does the darkening flip with the input?)

That sounds like it may be a boundary condition issue. IIRC, the current local laplacian code handles boundaries via a sample-and-hold approach. Burt and Adelson’s 1983 spline blending paper suggested a boundary condition that extrapolated out-of-bounds pixels in such a way that the first derivative at the boundary was constant and the second derivative was zero.

g(-l) = 2*g(0) - g(l)

However, my first attempt at this in my exposure fusion experiments didn’t go so well - there’s a REALLY good chance I botched my implementation, as that attempt was quite late at night.
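For what it’s worth, here is a minimal 1-D NumPy sketch of that extrapolation (extend_left and pad are made-up names; the real code of course has to handle 2-D borders):

```python
import numpy as np

def extend_left(row, pad):
    """Extrapolate `pad` out-of-bounds samples to the left so that the
    first derivative at the boundary is constant and the second derivative
    is zero, i.e. g(-l) = 2*g(0) - g(l). Sketch only; the 2-D case applies
    the same rule on every border."""
    left = 2.0 * row[0] - row[1:pad + 1][::-1]
    return np.concatenate([left, row])
```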

As to the excessive highlight flattening: when I was working with edgardo on reworking dt’s exposure fusion code, I found that most attempts at altering highlight behavior fell into three categories:

  1. The implementation edgardo submitted clipped all inputs greater than 1.0 and blended in CIELAB space. This caused highlights to start desaturating before clipping. No haloing except with extreme shifts per exposure (from a bit of investigation, I think this only happens when you have what I started calling “weighting function inversion” - that doesn’t apply to your approach, obviously, but the general highlight behavior might.)
  2. In enfuse, if you give it float inputs, it blends in a logarithmic space instead of LAB. When I did something similar in dt, this would permit highlights to go well beyond 1.0 even if you aggressively deweighted them, but again the algorithm tended to let the highlights go before haloing.
  3. If you blended in LAB but allowed values greater than 1.0 into the algorithm, you got GREAT highlight results. However, you were all but guaranteed halos.

So there may be a tradeoff involved where any approach that is halo-resistant or halo-invulnerable may need to sacrifice highlights in some way?
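Just to make category 2 above concrete, here is a tiny sketch of the log-domain blending idea, ignoring the pyramid machinery entirely (blend_log is a made-up name; log1p is what enfuse reportedly uses for float input, everything else is purely illustrative):

```python
import numpy as np

def blend_log(exposures, weights):
    """Weighted blend in log space: values above 1.0 are compressed rather
    than clipped, then mapped back with expm1. Sketch only - the real
    algorithms blend per pyramid level, not per pixel like this."""
    acc = np.zeros_like(exposures[0])
    wsum = np.zeros_like(weights[0])
    for img, w in zip(exposures, weights):
        acc += w * np.log1p(img)
        wsum += w
    return np.expm1(acc / np.maximum(wsum, 1e-6))
```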

I can see your example images - Great results!

It looks like it’s always the right edge; it is most visible in the third image… I also suspect a boundary condition issue. Filling the Laplacian levels at the bottom of the pyramid requires a very large padding, and maybe I am missing part of it.

I think you cannot really compare the local Laplacian with exposure fusion. While they both use pyramids, the basic idea is IMHO completely different.

Concerning DT’s implementation of local Laplacian filters (in the “local contrast” module), as far as I understand, the code clips L values above 1, which is complete nonsense for a module that is supposed to do some “scene-to-display” mapping. For comparison, this is what I get on the second image with DT 2.6.0:

Have you been following the paper from scratch?

You certainly have much more experience than me. In my newbie adventures, one-sided darkness meant I had made a mistake somewhere, so I had to step through the code linearly and output intermediate images to see where it happened.

PS: At least in the two cases where it happened to me, it wasn’t a boundary issue. Rather, I had selected the wrong image data set, or the right one at the wrong scale.

@Carmelo_DrRaw, I’m working on something similar, but I’m very far from having an acceptable result. Are you following the original paper for some particular reason? Do you think it gives better results?

Getting boundaries right is a PITA. The darkening at the right of the second image is so subtle that I wonder if I’m just imagining it - seeing it because you’ve told me it’s there. (That one sort of looks like uncorrected lens vignetting; you just notice it more because that side of the image is brighter?) I’ll need to take a look at my own versions of that image. Sometimes you swear your algorithm is broken, but if you look more closely at your test case, you see patterns in the image that look like weaker versions of the artifact that was bothering you.

True - some of those observations may or may not carry over, probably not many. One of them definitely does seem to apply, though (see below).

All of the math I’ve seen implies that a colorspace conversion to LAB should not need to be restricted to clipped values - yet it seems like every time you work with unclipped values in LAB in dt, Bad Things happen, which might be why the LL module does the clipping. Either there is something in the math of LAB I’m not seeing in the documentation I’ve read, or there is something odd in dt’s implementation.

I am curious how you would go above 1. Would that require HDR-CIELAB or something of the sort? Of course, we could work highlight or gamut compression into the module, which I am guessing is what @Carmelo_DrRaw is planning to do.

It’s a good question. For HDR implementations there has been work on “LAB-like” colorspaces such as IPT and ICtCp (the latter of which is part of Rec. 2100); I’ve seen references to “hdr-CIELAB” too, but that seems far less common. I don’t see any fundamental discontinuity in the math for values above 1, but my experience over the past two months or so strongly suggests there are some perceptual effects up there (such as Stevens’ power law breaking down in the extremes, which could easily be the case), which would explain why enfuse chooses to work with log1p() if the input data might be >1.0.

Carmelo avoids many of the potential colorspace conversion challenges by working only with luminance - which makes sense 90% of the time. (The corner case being handling of output clipping - if you don’t take that into account, sometimes you have a luminance <1.0 that has individual R/G/B components well above 1.0 and the result isn’t quite what you hoped it would be.) Edit: Side note, @Carmelo_DrRaw - is there any chance that might be a possible contributor to the highlight issues you’re having?

Much of this is just musing; I’ve been taking it easy for the past two weeks, and my next coding project will be video-related (an OpenCL-accelerated HALD CLUT implementation), although who knows when I’ll actually get the time for that - the next few weeks are quite busy with other things (hiking, a local music festival, weddings and picnics, a hiking trip to the Adirondacks, etc.)

Edit: In @Carmelo_DrRaw’s case here, he’s simply avoiding working in Lab - if I read what he posted correctly, he is calculating the luminance of the linear data and processing only that. It appears nothing in his algorithm would have issues with channel values above 1, with the exception of the possible corner case I mentioned, where a pixel could have a luminance below 1 but an individual channel above 1; that might have a negative effect on some of the pixels in your highlights if your final output method treats 1 as your display’s maximum possible output.
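A sketch of that corner case, assuming the processed luminance is pushed back onto the RGB channels as a simple ratio (apply_luminance_ratio is a made-up name, not anyone’s actual code):

```python
import numpy as np

def apply_luminance_ratio(rgb_lin, lum_in, lum_out):
    """Scale linear RGB by the ratio of processed to original luminance.
    A pixel whose luminance stays below 1.0 can still end up with
    individual channels above 1.0, which get clipped if 1.0 is treated as
    the display maximum. Sketch only."""
    ratio = np.where(lum_in > 0.0, lum_out / lum_in, 1.0)
    return rgb_lin * ratio[..., None]
```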

Exactly!

After a long journey into the cold and unfriendly land of pixel indexing, I have finally managed to sort out the initial issues in my local Laplacian filter implementation, and now I get results that look more consistent. In particular, there is no longer any artificial luminance variation across the image (see for example the dog picture, where it previously looked as if a strong vignette had been applied to the bottom-right corner).

Here are the same images with the new processing:

I would say that the results are really quite convincing, but as usual I’d like to get your feedback…

@heckflosse @fcomida do you have something similar in LuminanceHDR? If not, would there be any interest in implementing this?

The results look really good to me!

Yeah - REALLY solid. I wouldn’t mind seeing this directly in RT as well; it seems to give better results than Fattal, which is what RT’s Dynamic Range Compression module uses (not to say that Fattal is bad - I’m actually finding it exceeds my original expectations).

The lady in the center of the middle image is starting to look a little unusual… But that’s always a risk with local tonemapping as aggressive as what is needed to make her appear as brightly lit as in this example, and I’m not sure any other algorithm would do better there.

Curious what the issue was.

The main problem was how to properly select the region of the high-resolution image that needs to be remapped in order to correctly estimate a given Laplacian coefficient at a given level. I also realized that for proper tone mapping one has to compute the Gaussian pyramid down to the smallest level, where the image is squeezed into a single pixel. I will post examples of what happens if one stops one or two levels above the minimum.
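For reference, a trivial sketch of how deep the pyramid goes when it is carried all the way down to a single pixel, assuming the resolution is halved at each level (full_pyramid_depth is a made-up name):

```python
import math

def full_pyramid_depth(width, height):
    """Number of pyramid levels needed so the coarsest level is a single
    pixel, assuming each level halves the resolution. Stopping one or two
    levels earlier leaves a residual that still carries large-scale
    luminance variations."""
    return int(math.ceil(math.log2(max(width, height)))) + 1
```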

Yeah. Trying to truncate levels can lead to Bad Things. It’s one of the numerous contributors to dt’s fusion implementation also not working very well.

Which leads to an interesting question: is there any way to properly implement an algorithm that depends on Gaussian and Laplacian pyramids if you want to tile images on memory-restricted systems, as dt does?

where do you get the idea that dt clips? i don’t think it does. the fast version of the algorithm requires quantising the input brightness levels, which essentially limits the precision/usefulness to [0,1] on the input; outside that range you’ll get halos. i don’t see the algorithm as a scene-to-display mapping; i think it’s a local contrast algorithm in display-referred space (hence your log mapping before it).
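roughly, the quantisation amounts to something like this (a sketch only, not the actual dt code; all names are made up):

```python
import numpy as np

def quantised_levels(n_gammas=6):
    """reference brightness levels the fast variant evaluates the
    remapping for; inputs are then interpolated between the two
    nearest levels."""
    return np.linspace(0.0, 1.0, n_gammas)

def interp_weights(g, gammas):
    """for each gaussian coefficient g, pick the neighbouring reference
    levels and the linear interpolation weight; values outside
    [gammas[0], gammas[-1]] are clamped, which is where precision is
    lost."""
    g = np.clip(g, gammas[0], gammas[-1])
    idx = np.minimum(np.searchsorted(gammas, g, side="right") - 1,
                     len(gammas) - 2)
    t = (g - gammas[idx]) / (gammas[idx + 1] - gammas[idx])
    return idx, t
```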

fwiw it’s my understanding that the original intent may have been to transform the coarse residual after doing only a few levels of the laplacian pyramid, then put back the laplacian coefficients starting from the transformed coarse.