Magenta highlights vs raw clipping indicator vs filmic white level

I was wrong about the new version being faster, probably forgetting about cropping.

@kofa, it’s tricky, isn’t it? But there is an alternative method for highlight reconstruction: go into GIMP and paint what you want!

I don’t want to reconstruct those highlights – it’s the surface of the Sun, or specular reflections of the water. The luminance Y norm does what I want, and I can also fall back to filmic v5 and its desaturation.

  const float norm_min = exp_tonemapping_v2(0.f, data->grey_source, data->black_source, data->dynamic_range);
  const float norm_max = exp_tonemapping_v2(1.f, data->grey_source, data->black_source, data->dynamic_range);

  // Compute the norm using the selected variant
  float norm = get_pixel_norm(i, variant, profile_info, lut, use_work_profile);

  // Save the ratios
  float4 ratios = i / (float4)norm;

  // Norm must be clamped early to the valid input range, otherwise it will be clamped
  // later in log_tonemapping_v2 and the ratios will be then incorrect.
  // This would result in colorful patches darker than their surrounding in places
  // where the raw data is clipped.
  norm = clamp(norm, norm_min, norm_max);

  // Log tonemapping
  norm = log_tonemapping_v2(norm, grey_value, black_exposure, dynamic_range);

  // Filmic S curve on the max RGB
  // Apply the transfer function of the display
  norm = native_powr(clamp(filmic_spline(norm, M1, M2, M3, M4, M5, latitude_min, latitude_max, type),
                           display_black,
                           display_white), output_power);

  // Restore RGB
  float4 o = norm * ratios;

  // Save Ych in Kirk/Filmlight Yrg
  float4 Ych_original = pipe_RGB_to_Ych(i, matrix_in);

  // Get final Ych in Kirk/Filmlight Yrg
  float4 Ych_final = pipe_RGB_to_Ych(o, matrix_in);

  // Test-export to output RGB : check if in gamut
  // retain original hue of the pixel and clip chroma at the gamut boundary 
  o = gamut_mapping(Ych_final, Ych_original, o, matrix_in, matrix_out,
                    export_matrix_in, export_matrix_out,
                    display_black, display_white, saturation, use_output_profile);
  return o;

That’s the whole Filmic v6 code. Note that I tried your picture without the gamut_mapping() function, and it yields the same result. Do you see something “trying hard” to do anything? We log, we curve, and we clamp to the valid range. That’s pretty much all we do. More importantly, all norms are handled by this exact code, so if one works and not the others, that’s down to that particular norm and not to the general method.

The green in color calibration is not a sensor green, it’s already a mix of sensor R, G and B, so it averages out clipping issues too. Remember that green accounts for about 75% of the luminance value, depending on the color space, and the color calibration “green” is actually luminance (in XYZ space) or something close to it (in LMS space).
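For example, with the standard Rec.709 weights (just one concrete case, the exact share depends on the working space):

  // Rec.709 luminance: green alone carries ~72% of the weight for a neutral pixel
  static float luminance_rec709(const float R, const float G, const float B)
  {
    return 0.2126f * R + 0.7152f * G + 0.0722f * B;
  }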

As I understand it, any clipped highlights have to be dealt with before filmic (and as early as possible). So that means the raw white and black points have to be set correctly (automatic), and “highlight recovery” should have done its job. The only wrinkle is the input data for highlight recovery: ideally it would take into account the WB multipliers set just before it. Not sure if that’s possible (or if highlight recovery could be used before the WB module).
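A toy illustration of why (made-up numbers, not darktable’s):

  #include <stdio.h>

  int main(void)
  {
    // fully blown-out raw pixel, every channel stuck at the raw white point
    const float raw[3] = { 1.0f, 1.0f, 1.0f };
    // illustrative daylight WB multipliers
    const float wb[3]  = { 2.0f, 1.0f, 1.5f };

    float rgb[3];
    for(int k = 0; k < 3; k++) rgb[k] = raw[k] * wb[k];

    // R and B end up above G, i.e. magenta, unless highlight recovery
    // (or at least a clip to the white level) deals with it first
    printf("after WB: R=%.2f G=%.2f B=%.2f\n", rgb[0], rgb[1], rgb[2]);
    return 0;
  }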

And that’s because colour calibration comes after demosaic. So each pixel is made up of (interpolated) R, G and B. The image has also been pulled through one or two color space conversions. In other words, before we get to color calibration, a lot has already been done to the pixel values, even without any modules added by the user.

Also, this is what happens if I don’t multiply the ratios back at the output of filmic (filmic’s defaults with auto-set black/white exposures, so max RGB norm):

Smooth.

Now, look at the ratios:

Non-smooth.

Now, change the norm to luminance:

You have to recover the highlights, at least to get a color that’s consistent with the neighbourhood. That magenta is simply too opinionated in terms of its chroma, considering how off the hue is to begin with.

The reason v5 hides it better is that the ratios are massaged to degrade progressively to { 1, 1, 1 } when we reach display white. The problem is that this “desaturation” is not hue-linear… (pick your poison).
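Something in this spirit, if it helps to picture it (a rough sketch, not the actual v5 code):

  #include <math.h>

  // blend the ratios toward { 1, 1, 1 } as the tone-mapped norm approaches
  // display white (the real curve and thresholds are more elaborate)
  static void desaturate_ratios(float ratios[3], const float norm,
                                const float display_white)
  {
    const float t = fminf(fmaxf(norm / display_white, 0.f), 1.f);
    for(int k = 0; k < 3; k++)
      ratios[k] = (1.f - t) * ratios[k] + t;
  }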

This is how the ratios = RGB / max(RGB) look after only filmic’s HL reconstruction:

Note that the reconstruction is not smooth at the edges of the clipped region because it’s a stupid 1st order in-painting. The guided Laplacian essentially inherits the same logic but at the 2nd order, which better respects gradients and outputs smoother results.
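If it helps, here is a toy 1D analogy (my own sketch, nothing to do with the actual wavelet code): a fill that only matches the boundary values leaves slope kinks at the gap edges, while a fill that also matches the boundary gradients blends in smoothly.

  #include <stdio.h>

  int main(void)
  {
    // signal x^2, samples 3..6 treated as clipped/unknown
    float s[10];
    for(int x = 0; x < 10; x++) s[x] = (float)(x * x);

    const int lo = 2, hi = 7;                 // last/first valid samples
    const float len  = (float)(hi - lo);
    const float g_lo = s[lo] - s[lo - 1];     // gradient at the left edge
    const float g_hi = s[hi + 1] - s[hi];     // gradient at the right edge

    for(int x = lo + 1; x < hi; x++)
    {
      const float t = (float)(x - lo) / len;

      // 1st order: straight blend of the boundary values -> slope jumps at the edges
      const float first = (1.f - t) * s[lo] + t * s[hi];

      // 2nd order: cubic Hermite also matches the boundary gradients -> smooth join
      const float h00 = 2*t*t*t - 3*t*t + 1, h10 = t*t*t - 2*t*t + t;
      const float h01 = -2*t*t*t + 3*t*t,    h11 = t*t*t - t*t;
      const float second = h00 * s[lo] + h10 * len * g_lo
                         + h01 * s[hi] + h11 * len * g_hi;

      printf("x=%d  true=%5.1f  1st order=%5.1f  2nd order=%5.1f\n",
             x, s[x], first, second);
    }
    return 0;
  }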

Aurélien, I read the code already. It’s not the individual C statements that I don’t understand (though you did cause me a few fun moments by referring to filmic v6 as v4 in the code :slight_smile: ), it’s the colour manipulation behind it; but I’ll leave that to you. :smiley:

There are images where the v6 colour science shines (no pun intended), out of the box: bright flowers and the like. And there are situations where I’ll have to learn how to use it.
I think there’s currently a discrepancy: one may use the white relative exposure picker on the whole image, and then be surprised by the result. I know, the i-word… So I guess instead of sampling the (clipped) Sun disc, I’ll have to:

  • adjust the white level manually,
  • use another norm, like luminance Y,
  • use v5 and its desaturation, or
  • use something else.

No big deal.

As always, thanks!

No highlight reconstruction module, no reconstruct in filmic. v6 + maxRGB. Highlights desaturated in color balance rgb.
2022-05-01_20-04-21_P1070278_08.RW2.xmp (7.3 KB)

It took me the whole day and part of last evening, but I finally nuked the part of the problem affecting the color diffusion. Now on to the guided Laplacian one…

There’s a typo here: darktable/basic.cl at 730151cd871a5acfc9090c6309b2327c2747c0ae · darktable-org/darktable · GitHub

It should read k < 9 instead of k < 0.

Yeah, I fixed that one already, but there is more :smiley:

Is the CPU path still an issue? If so, one thing I noticed a day or so ago: the artifacts that show on the CPU path (i.e. with OpenCL disabled) also show for me with OpenCL enabled while I am zooming and the preview is updating. When the preview completes, the artifacts disappear as the screen display finishes its update. I am not sure if this provides any useful information. It sounds like you have made some changes, so I should check that out first.

That’s because the low-res preview may be computed on CPU depending on your prefs, even though OpenCL is enabled.

For sure, that was the case here at work when I tested on my office PC. I thought I had my home box on GPU only, but I had been messing around with settings a couple of weeks ago and I might have gone back to default scheduling. Thanks!

Isn’t there also a typo here in the WB normalization after interpolation: shouldn’t it be RGB[k] / wb[k]? At least it does something different from the corresponding OpenCL line.

Another thing I noticed: a 5×5 box blur is applied on the binary clipping mask. Doesn’t that cause some of the clipped pixels to get less than full opacity? Would it be better to first dilate the mask by a few pixels and then blur, to get 100% opacity on all clipped pixels and a smooth roll-off into the non-clipped surroundings?

(Also, is there a chance of the divisor being zero here?)
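Something like this is what I had in mind (just a sketch of the idea, not tested against the module):

  #include <math.h>

  // sketch only: dilate the binary clipping mask so every clipped pixel keeps
  // full opacity, then let the existing 5x5 box blur provide the smooth
  // roll-off into the non-clipped surroundings
  static void dilate_mask(const float *const mask_in, float *const mask_out,
                          const int width, const int height, const int radius)
  {
    for(int y = 0; y < height; y++)
      for(int x = 0; x < width; x++)
      {
        float m = 0.f;
        for(int dy = -radius; dy <= radius; dy++)
          for(int dx = -radius; dx <= radius; dx++)
          {
            const int yy = y + dy, xx = x + dx;
            if(yy >= 0 && yy < height && xx >= 0 && xx < width)
              m = fmaxf(m, mask_in[yy * width + xx]);
          }
        mask_out[y * width + x] = m;
      }
    // ... then blur mask_out as before
  }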

Oops, you are right.

It’s already extended: at the demosaicing step, any pixel interpolated from a clipped pixel is also flagged as clipped.

Without the 5×5 blur, I had weird border effects.

Yes, but the fmaxf(NAN, 0.f) should clip it to 0.
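For reference, a quick standalone check of that behaviour:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
    // fmaxf follows IEEE 754: if exactly one operand is NaN, the other one is
    // returned, so a 0/0 ratio collapses to 0 instead of propagating NaN
    volatile float zero = 0.f;
    const float ratio = zero / zero;       // NaN
    printf("%f\n", fmaxf(ratio, 0.f));     // prints 0.000000
    return 0;
  }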

Thanks for your help.

@anon41087856 a couple of things I tried and found probably beneficial (at least they removed the dark fringes I previously saw):

  1. Never darken a channel in the guided Laplacian reconstruction
  2. Don’t alter a channel if it’s not clipped in the guided Laplacian reconstruction

See highlights_diff.txt (1.5 KB)

I think it might also be beneficial to disregard the center sample when computing the regression. The data in one channel is clipped anyway, so that data point can be thought of as an outlier.
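Expressed naively in image space, the two rules above would amount to something like this (a toy sketch only, not the diff I attached):

  #include <math.h>

  // my reading of rules 1 and 2, in plain image space (toy sketch only)
  static float apply_rules(const float original, const float reconstructed,
                           const int channel_is_clipped)
  {
    if(!channel_is_clipped) return original;    // rule 2: leave the channel alone
    return fmaxf(reconstructed, original);      // rule 1: never darken it
  }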

This seems risky because we are working with Laplacians in a quasi-Fourier space here, not with the actual image, so there is no definition of dark or bright, only oscillations around the average value. Dark or bright only appear after we collapse (sum back) all the wavelet scales.

After some tests, it seems to move the problem elsewhere, but does not yield a smooth reconstruction. Basically, the current dark fringes may appear when the radius of reconstruction is too large, so the darker sky is inpainted into the sun disc.

Right, that makes sense. Perhaps one could instead try to avoid darkening the image when forming the reconstructed image at the last wavelet scale (I don’t remember whether the original image is available at that point).