Guiding laplacians to restore clipped highlights

BTW, this is what it looks like with 4 iterations of reconstruction instead of 1:

So I will have to make this module even slower by introducing an iteration parameter…

Gotta justify these expensive GPUs somehow… :slight_smile:

It is a nice result with the 4 iterations though.

I’m happy I sparked some curiosity in you!
But I’m also kind of disappointed that it looks about the same; it would have been more fun if it had either worked better or produced lots of artifacts :wink: I guess the Poisson method is too far from what you are doing.

Anyway, that second try of yours with 4 iterations! That looks much better and is more in line with what I had expected to be possible. Well done! I can feel the sweat from my computer tackling that :rofl:

Thanks, but you will not like the runtimes…

On that note, I do hope that some of the less computationally intensive options remain available for people who don’t have much computing power at their disposal.

All the modules are still there; I don’t think any have ever been removed, so you should be good.

I am having trouble merging Aurelien’s code with master. I am following his code suggestions, and after I pull his code I get this error:

hint: You have divergent branches and need to specify how to reconcile them.
hint: You can do so by running one of the following commands sometime before
hint: your next pull:
hint:
hint: git config pull.rebase false # merge
hint: git config pull.rebase true # rebase
hint: git config pull.ff only # fast-forward only
hint:
hint: You can replace "git config" with "git config --global" to set a default

Any suggestions?

Do one of the things the hint text suggests for git. You need to set some global git preferences to avoid seeing that message.

I was very careful and this is what I got:
first I ran: git config pull.rebase false
then:
git pull https://github.com/aurelienpierre/darktable.git highlights-reconstruction-guided-laplacian
remote: Enumerating objects: 216242, done.
remote: Counting objects: 100% (216242/216242), done.
remote: Compressing objects: 100% (43870/43870), done.
remote: Total 215112 (delta 172114), reused 213749 (delta 170756), pack-reused 0
Receiving objects: 100% (215112/215112), 1.06 GiB | 10.45 MiB/s, done.
Resolving deltas: 100% (172114/172114), completed with 857 local objects.
From https://github.com/aurelienpierre/darktable

 * branch            highlights-reconstruction-guided-laplacian -> FETCH_HEAD
fatal: refusing to merge unrelated histories

Here in the Northern Hemisphere, it’s cold, a bit of extra GPU heating is desirable. :slight_smile:

France isn’t in the Northern Hemisphere?

Also it’s 90F here today.

Also it’s 90F here today.

What is this “F” of which you speak?

In Pitlochry it is due to reach a balmy 4C tomorrow.

It is about 4 C here too. Speaking of F, I saw someone ask on Reddit recently what a Florida ounce was. They were referring to fl oz. :rofl:

@anon41087856 About tackling unwanted colours (e.g. dreaded magenta), I don’t know if you have some clever process, but I’m so far unable to retain white-balance invariance.

It seems to be two opposing goals: if the linear estimate is done without smoothing, it stays more or less independent of WB, but then the estimates can massively over/undershoot. And if you add a smoothness condition, e.g. something like (cov(x, y) + e) / (var(x) + e), the estimate degrades to adding a constant most of the time, making it depend on the WB again.
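
To make that degradation concrete, here is a small NumPy sketch (not darktable code; I also keep the regularizer only in the denominator, the usual guided-filter form, rather than the (cov + e)/(var + e) written above): as e grows, the slope a shrinks toward 0 and the offset b toward mean(y), so the “estimate” collapses to an additive constant that scales with whatever white-balance multiplier is applied to y.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 1.0, 1000)      # unclipped guide channel
y = 0.8 * x + 0.05                   # channel to estimate: exactly linear in x

def fit(x, y, e):
    """Regularized linear fit y ~ a*x + b, as in one guided-filter window."""
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    a = cov / (np.var(x) + e)
    return a, y.mean() - a * x.mean()

for e in (0.0, 0.1, 10.0):
    a, b = fit(x, y, e)
    print(f"e={e:5}: a={a:.3f}  b={b:.3f}")   # a shrinks toward 0, b toward mean(y)

# Now scale y by a white-balance coefficient w. Unregularized, the fit scales
# cleanly (a -> w*a, b -> w*b). Heavily regularized, it collapses to
# b ~ mean(w*y): a constant offset whose value depends directly on w.
w = 2.0
a0, b0 = fit(x, w * y, 0.0)     # a0 close to w*0.8, b0 close to w*0.05
a1, b1 = fit(x, w * y, 10.0)    # a1 close to 0, b1 close to mean(w*y)
print(a0, b0, a1, b1)
```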

To avoid assuming any colour and to treat it purely as signal recovery, I’ve resorted to using the average of the other two channels as the reference, which does work, although with some desaturation (which the eye doesn’t care much about). Yet it is obviously still dependent on WB…

First order methods suck. Don’t ever use them. They degrade your picture into a cartoon and don’t look good.

Unless it’s on some metric of chromaticity. In vision, high frequencies live in the realm of luminance, and chrominance contains only low frequencies (I showed that in the first posts here).

So, break your RGB into norm and ratios, and apply first order color smoothing on ratios only. Then restore RGB from norm and corrected ratios.
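
A minimal sketch of that split, assuming a Euclidean norm and a crude box blur as the stand-in smoother (the real module presumably does better on both counts):

```python
import numpy as np

def smooth(a, r=2):
    """Tiny box blur with edge padding -- a stand-in for the real smoother."""
    p = np.pad(a, r, mode="edge")
    h, w = a.shape
    acc = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (2 * r + 1) ** 2

rng = np.random.default_rng(1)
rgb = rng.uniform(0.05, 1.0, (32, 32, 3))        # synthetic image

norm = np.linalg.norm(rgb, axis=-1)              # per-pixel norm (luminance-like)
ratios0 = rgb / norm[..., None]                  # chromaticity-like channel ratios
ratios = np.dstack([smooth(ratios0[..., c]) for c in range(3)])
out = ratios * norm[..., None]                   # restore RGB: norm x smoothed ratios

print(out.shape)                                 # (32, 32, 3)
print(np.var(ratios) < np.var(ratios0))          # True: chroma variance reduced
```

The point is only the structure: the norm carries the high frequencies through untouched, and the smoothing acts on nothing but the low-frequency chroma ratios.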

That’s sensible indeed; it seems to me there’s simply no way to computationally extrapolate correct colours from the other channels without either the white balance or high-level input (e.g. a human). If extrapolation is based only on each channel itself, then perhaps…

You can, but only starting at the second order. Which is what I do here.

For details-only recovery, yes, but I have doubts about colour if any smoothing/iteration is required. It’s easy to sanity-check anyway: does the output change when the WB is switched before/after?

Edit: yes, of course that affects the clip points, but it’s not a major hardship to account for in code testing. I think it’s a good check for correctness.

Also: for multiplier-invariant detail transfer in a pyramid method, splitting a patch into {standard score, mean, stddev} lets you transfer the standard score much like a laplacian. Here’s one scale only (please excuse the silly picture, but it shows the high-frequency transfer on the left):
[image: stdscore]
I might test that in a pyramid model later.

Edit 2: the stdscore is not so useful for this; it encodes the direction of change without the magnitude. Still, it’s interesting to know the split stdscore * stddev ~= laplacian.
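
A tiny NumPy check of both claims, on whole arrays rather than pyramid patches: the standard score is exactly invariant to a global multiplier (hence WB/exposure), and stdscore * stddev is simply the mean-subtracted detail, i.e. the laplacian-like layer with the mean playing the role of the low-pass.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.1, 1.0, (16, 16))   # stand-in for one pyramid patch

def split(a):
    """Split an array into (standard score, mean, stddev)."""
    m, s = a.mean(), a.std()
    return (a - m) / s, m, s

z, m, s = split(x)
z2, _, _ = split(3.0 * x)             # same scene, different global multiplier
print(np.allclose(z, z2))             # True: stdscore is multiplier-invariant

# stdscore * stddev is the mean-subtracted signal: the laplacian-like detail
# layer, with the patch mean acting as the low-frequency part.
print(np.allclose(z * s, x - m))      # True
```

A detail transfer in this scheme would then be target_mean + source_stdscore * target_stddev, which is what makes it behave like a laplacian transfer.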

What is the current status of the guided laplacians module? Is it in the current master?

Yes, it should be there under ‘highlight reconstruction’.