highlight reconstruction

i’d like to start a thread about highlight reconstruction here because i missed the opportunity to discuss this on irc yesterday. i have a new implementation which is related to the “colour mode” in dt, but actually works in 2d. but i’m interested: how do people do it? how does it work in rawtherapee for instance?

but first some background for discussion: sensors clip, you can’t get past full well capacity. unfortunately white balancing happens after that, so an innocently clipped (1,1,1) value mostly becomes magenta (say (2,1,1.8)). the simple solution is to clip all channels at the lowest white-balanced threshold, so they become (1,1,1) after wb and extended clipping. in this example this throws away half of the range in the red channel.
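in code, that baseline looks roughly like this (a minimal sketch only; rgb[] and wb[] are hypothetical names, raw white level normalised to 1.0):

#include <math.h>

// after white balance, channel c clips at wb[c] * 1.0. clipping every
// channel to the smallest of these thresholds turns a clipped (2, 1, 1.8)
// pixel back into neutral (1, 1, 1) instead of magenta, at the cost of
// the red channel's extra range.
static inline void clip_to_lowest(float rgb[3], const float wb[3])
{
  const float m = fminf(wb[0], fminf(wb[1], wb[2]));
  for(int c = 0; c < 3; c++)
    rgb[c] = fminf(rgb[c], m);
}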

there’s all sorts of hacky business you can do for a single pixel in isolation. but ideally you’d like to reconstruct the brightness from the non-clipped channels (red here) and multiply it by a colour from the local surroundings.

i thought mean value coordinates were a good candidate for this local interpolation. i know @anon41087856 likes solving differential equations, Poisson’s equation in this case.

there’s an interesting connection between Poisson’s equation and Laplace pyramids, both facilitate gradient domain editing/blending. side info: i can do a Laplace pyramid up/down pass for a 16Mpix raw in 0.5ms.


I think this comes down to a more general topic:

do we define 2D images as:

  1. a continuous gradient field?
  2. the modulation of a chrominance signal (low frequency) with a luminance signal (high frequency)?

#1 is compatible with a spectral approach and makes no assumption on the relationship between chrominance and luminance. Assuming the continuity of the gradient field, all you have to do to restore missing parts is to “surf the gradients” from the neighbourhood, for each RGB plane independently. That’s the point of Total Variation inpainting, and other methods involving anisotropic diffusion PDE.
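A minimal sketch of #1, reduced to the isotropic special case (names are mine; real anisotropic diffusion would additionally weight the neighbours with an edge-stopping function of the local gradient):

#include <stddef.h>

// one RGB plane; mask[i] != 0 marks clipped pixels to reconstruct.
// unclipped pixels act as Dirichlet boundary conditions, so the
// neighbourhood gradients diffuse inward.
void diffuse_inpaint(float *u, const unsigned char *mask,
                     size_t width, size_t height, int iterations)
{
  for(int it = 0; it < iterations; it++)
    for(size_t i = 1; i + 1 < height; i++)
      for(size_t j = 1; j + 1 < width; j++)
      {
        const size_t idx = i * width + j;
        if(!mask[idx]) continue;
        // move u towards the average of its 4 neighbours (Laplacian step)
        u[idx] += 0.25f * (u[idx - 1] + u[idx + 1]
                         + u[idx - width] + u[idx + width]
                         - 4.0f * u[idx]);
      }
}

This converges slowly for large blown regions (many iterations), which is why multigrid or pyramid schemes are attractive.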

#2 is grounded in our perceptual system, needs a pretty accurate relationship between RGB and chroma/luma and assumes some profiling has been done before. But then, how do you profile clipped and missing parts?

The reason I like #1 is because it’s the path of fewest assumptions (Ockham’s razor).

One thing I would like to explore, though, is the possibility of solving the anisotropic diffusion equation under the constraint that \frac{\nabla R}{||\nabla R||} = \frac{\nabla G}{||\nabla G||} = \frac{\nabla B}{||\nabla B||}.

I am not that mathematically oriented, but just from systematic “observation” I wonder whether the middle term is correct or contains a typo/copy error?


Copy/pasting error indeed. Thanks!

fair enough. my simple version is based on #2. if you do anisotropic diffusion based on #1 (i have glsl code for that if you want to toy around, but it’s still slow because it needs so many iterations), you can only use the gradients on the outside. or you enforce your colour constraint and reconstruct gradients based on the non-clipped channel(s).

i like the feature of #2 that it keeps the ratios g/r and g/b constant. it says: same light as at the next pixel where it wasn’t clipped, only here it’s brighter. that seems plausible to me from a radiative transport point of view, too, not only from perceptual arguments. i think keeping the gradient direction fixed does not guarantee that and might give us colour casts.
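a per-pixel sketch of that ratio idea (hypothetical names; assumes we already have a smooth local colour estimate, e.g. from a blurred or coarse-scale image):

#include <math.h>

// rebuild clipped channels from an unclipped one, keeping the channel
// ratios of a smooth local colour estimate intact: same light as the
// surroundings, only brighter.
static void reconstruct_ratios(float rgb[3], const float smooth[3],
                               const int clipped[3])
{
  // pick the brightest unclipped channel as the luminance reference
  int ref = -1;
  for(int c = 0; c < 3; c++)
    if(!clipped[c] && (ref < 0 || rgb[c] > rgb[ref])) ref = c;
  if(ref < 0) return; // everything clipped, nothing to anchor to

  // scale the local colour so the reference channel matches the
  // measured value; the g/r and g/b ratios of the surroundings survive
  const float scale = rgb[ref] / fmaxf(smooth[ref], 1e-6f);
  for(int c = 0; c < 3; c++)
    if(clipped[c]) rgb[c] = smooth[c] * scale;
}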

so, using a laplacian pyramid kinda does exactly that: reconstruct a smooth gradient field. then it’s integrated into the spatial domain image again when the pyramid is collapsed.

okay, time for some pictures. i don’t have many on this computer, but let’s see some.

disclaimer section:
(none of these pictures are mine, let me know if you don’t want them here)
(exposure down, high end of tone curves flat to overemphasise problems)
(vkdt has some pixel level artifacts, all of this is super proof of concept)

house

[image: dt LCh]
[image: dt color]
[image: vkdt]

fire

[image: dt LCh]
[image: dt color]
[image: vkdt (you’ve seen that one)]

sunset

[image: dt clip]
[image: dt LCh]
[image: dt color]
[image: vkdt]
what it does

pretty much: build a pyramid, manipulate the coefficients to discard overexposed channels and keep the ratio to the non-clipped ones constant, collapse again. not perfectly tuned yet (the sun should have more of a local gradient and be more reddish at the bottom; right now it shows a border. also the sky right of the house shows some fancy colours in blue and green).
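in sketch form (not the actual vkdt code; downsample2x()/upsample2x() and the buffer layout are hypothetical):

#include <stddef.h>
#include <math.h>

// per channel, recursively:
//   coarse = downsample2x(fine)
//   detail = fine - upsample2x(coarse)   // laplacian coefficients
// manipulate detail where the channel clipped, then collapse:
//   fine'  = upsample2x(coarse') + detail'
// the manipulation itself, for one level:
void fix_level(float *detail_c,       // detail of the clipped channel
               const float *detail_r, // detail of a non-clipped channel
               const float *coarse_c, const float *coarse_r,
               const unsigned char *mask, size_t n)
{
  for(size_t i = 0; i < n; i++)
    if(mask[i])
    {
      // discard the overexposed coefficient; take the gradient of the
      // non-clipped channel, scaled by the coarse colour ratio, so the
      // channel ratio stays locally constant
      const float ratio = coarse_c[i] / fmaxf(coarse_r[i], 1e-6f);
      detail_c[i] = detail_r[i] * ratio;
    }
}

collapsing the pyramid then integrates these coefficients back into the spatial domain image, like a cheap poisson solve.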

Very interesting discussion, thanks for starting the thread @hanatos. Now, I don’t have the background to fully follow, but I like the idea of what @anon41087856 defines as approach #1. There’s some code in RT to do gradient domain image manipulations (that’s what the “Dynamic Range Compression” tool does, based on the Fattal et al. 2002 paper), including a solver for the Poisson equation. It is currently not used for highlight reconstruction, and I don’t have anything concrete to contribute to the discussion right now, but you gave me an idea that I’d like to experiment with. If it works, you will hear more from me. Otherwise I’ll shut up and continue lurking :slight_smile:


Along those lines, there might be relevant G’MIC scripts lying about… (Aware of any @David_Tschumperle?)

If you also mean users, I’d like to post some examples (photos) here, but probably not today because I am sick.

Is it possible that you exaggerated the difference between dt old and vkdt? I think it is possible to achieve good results with both dt old and RT. The results look very similar in both programs. Unfortunately I don’t understand the math/science behind it, but I would not be surprised if it was very similar in both programs. I can only describe “how I do it”.

dt: I choose “reconstruct in LCh” in the highlight reconstruction module. Then I activate the exposure module and pull down the exposure by at least 1.5 EV. Then I hide everything but the brightest parts of the image with a parametric mask and add smooth transitions to the mask. Btw, reconstruct color introduced something like heavy vertical banding artifacts in the photo I tried it with today.

RT: I choose the profile “neutral” (= disabling the base curve in dt). I check highlight reconstruction in the exposure module and choose the method luminance recovery. Then I pull down the exposure until the structure is back in the overexposed area. Of course this makes the whole photo quite dark, so I go to the highlights/shadows module and pull up the shadows.

I have to admit that both methods lead to slight halos here and there. But the halos were more tolerable than the overexposure.

This is meant to be a technical exercise. You could participate, but you would have to compile and run the code and make sure you can make direct comparisons among the apps and algorithms; i.e., isolate or remove functions that are irrelevant and would otherwise muddle the comparison.

thanks for the workflow description! the mask in dt you apply to the exposure module, right?

and yes, you can achieve better results with dt classic on those images. for the fire you need ulrich’s “color reconstruction” module though, else LCh mode will always desaturate highlights (optimised for clouds…). some of the colour banding effects can easily be hidden by a sufficiently flat tone/base curve. i intentionally didn’t put such effort into my edits here (neither did i with the vkdt images; that code doesn’t even support most of the advanced trickery).

hmm I’m a bit underwhelmed. dt is out of the game, but I’m not sure about the flat saturated blobs in vkdt. They look more like a reflection in a tin can rather than highlights.

you mean the typical poisson blotches? yes, it does that. i think it’s something we can work on though. you can always just use the coarsest scale and disregard the finer ones. that makes the blotches go away, but will fill everything with a very smooth, almost constant reconstruction. if you use all the details, you get blotches, especially at edges. i think there needs to be some kind of edge threshold that knows what is a smooth gradient that can be reconstructed, and what is a branch or a mountain ridge overlapping the blown area. in such cases just using the coarser scale avoids blotches.
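the kind of per-pixel decision i mean, as a sketch (edge_thresh is a hypothetical tuning parameter):

// when collapsing the pyramid inside the blown region: add the fine
// detail back only where the local gradient looks like a smooth ramp;
// near strong edges (branch, mountain ridge) stay with the coarse-only
// reconstruction to avoid blotches
static inline float collapse_pixel(float coarse_up, float detail,
                                   float grad_mag, float edge_thresh)
{
  if(grad_mag > edge_thresh) return coarse_up; // edge: coarse only
  return coarse_up + detail; // smooth gradient: use the finer scale too
}

a soft blend between the two branches would probably behave better than this hard threshold.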

but i only had a very short play with this, just enough to see that it can be influenced. absolutely not sure what would be the best thing to do here. can try to find some time to post examples to illustrate what i mean later.

later: now.

(ignore the black pixelated lines that go through the highlight. i can’t tell float white == float16 white in this code, i suppose there’s a simple fix)
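one candidate for such a fix (assumes _Float16 support, as in recent gcc/clang): round-trip the white level through half precision before comparing, so a float16 buffer still tests as white:

// quantise the white level the same way the buffer was quantised
static inline float f16_white(float white)
{
  const _Float16 h = (_Float16)white;
  return (float)h;
}
// then build the clipped mask with a tiny margin below that value:
// const int clipped = pix >= 0.995f * f16_white(white);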

only coarsest scale: [image]

use all you can get on finer scales, too: [image]

also, this notably interpolates chromatic aberration (C/A) at the edges into the image (these pixels aren’t overexposed, they are just ugly). this is something that would need to be dealt with when using diffusion, too. in fact, there’s probably not much to be used as diffusion input at the edges here. you’ll need to diffuse the rest of the image (it has a big sky to the left somewhere out of view, see the waveform histogram) and it’ll need to cross the black bar…

Yes. But I needed the base curve too (+ a mask on bright areas) because the photo was still too dark.

RT
The original raw (neutral): [image]

highlight reconstruction activated, method luminance recovery, exposure pulled down: [image]

shadows pulled up (needs further editing): [image]

comparison - highlight reconstruction methods in RT:
luminance recovery (I found this method the most useful for this photo): [image]
cielab blending: [image]
color propagation: [image]
blend: [image]

comparison of the end results in dt and RT (luminance recovery/reconstruct in LCh) - the overexposed area looks similar: [image]

Finally, the artifacts in dt when using reconstruct color: [image]

btw: I tried to compile vkdt but failed

What if you keep the pyramid step to get a flat colourful base, then add another texture step:

  1. Add zero-mean noise (uniform here, as a cheap stand-in for Gaussian) to the output of the pyramid (image) in the clipped areas (where mask == TRUE):
for(size_t i = 0; i < height; ++i)
  for(size_t j = 0; j < width; ++j)
  {
     const size_t index = i * width + j;
     // zero-mean uniform noise in [-0.25, 0.25], only inside the mask
     tmp[index] = (mask[index]) ? image[index] + (0.5f - ((float)rand() / (float)RAND_MAX)) * 0.5f
                                : image[index];
  }
  2. Solve the 2nd order PDE \frac{\partial u(x, y, t)}{\partial t} = \frac{\partial^2 u(x, y, t)}{\partial \xi^2} + \frac{\partial^2 u(x, y, t)}{\partial \eta^2}, where
    \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}\cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha)\end{bmatrix}\begin{bmatrix}\xi\\\eta\end{bmatrix} and \alpha = \arctan\left(\frac{\partial u}{\partial y} \Big/ \frac{\partial u}{\partial x}\right) + \frac{\pi}{2}, i.e. the (\xi, \eta) frame is rotated to lie across and along the local gradient; the cycled sampling radius in the code below plays the role of the texture period d:
for(int iter = 0; iter < iterations; iter++)
{
  const int radius = 1 + (iter % 2) + (iter % 4) + (iter % 8); // cycle the sampling distances
  const int h = 1; // finite-difference step for the gradient estimate

  for(size_t i = 2 + radius; i < height - 2 - radius; ++i)
    for(size_t j = 2 + radius; j < width - 2 - radius; ++j)
    {
       const size_t index = i * width + j;
       if(mask[index])
       {
         const float grad_y = (tmp[(i + h) * width + j] - tmp[(i - h) * width + j]) / 2.0f; // du(i, j) / dy
         const float grad_x = (tmp[i * width + (j + h)] - tmp[i * width + (j - h)]) / 2.0f; // du(i, j) / dx

         const float alpha = atan2f(grad_y, grad_x); // local gradient direction

         // sampling offsets rotated to lie across / along the gradient
         const int dx = ceilf((float)radius * (-sinf(alpha)));
         const int dy = ceilf((float)radius * cosf(alpha));

         // 2nd derivatives along xi and eta (central differences)
         const float du_dxi  = tmp[(i - dx) * width + (j + dy)]
                             + tmp[(i + dx) * width + (j - dy)]
                             - 2.0f * tmp[index];
         const float du_deta = tmp[(i + dy) * width + (j + dx)]
                             + tmp[(i - dy) * width + (j - dx)]
                             - 2.0f * tmp[index];
         tmp2[index] = du_dxi + du_deta;
       }
    }

  const float lambda = 0.5f; // update step in [0, 1]

  // explicit Euler update, clipped region only
  for(size_t i = 2 + radius; i < height - 2 - radius; ++i)
    for(size_t j = 2 + radius; j < width - 2 - radius; ++j)
    {
      const size_t index = i * width + j;
      if(mask[index]) tmp[index] += lambda * tmp2[index];
    }
}

It should make the clipped areas less flat.

Nevermind, that’s a stupid idea… The solution is oscillatory since there is no damping (first-order) term.