Noise reduction washes out colour - some random experiments without conclusion ...

Yes, the guide image in my filter is user definable, so I have tried many things.

Blurring the guide image can be useful for noisy images, and I included a control for a very light blurring in the filter. Blurring the guide does not degrade the result as much as you might expect.

Another approach is applying a different method of noise reduction to the guide (you could use the same kind of NR if you want). This improves finding similar patches. Using frequency domain noise reduction on the guide can be quite good because it can recover patterns that are swamped by noise much more easily than patch-based methods.
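As a rough illustration of why a cleaner guide helps, here is a minimal NL-means sketch in plain numpy - not the actual filter discussed in this thread - where patch similarity is measured on a user-supplied guide image while the pixel values being averaged still come from the noisy input. All function and parameter names here are made up for the example.

```python
import numpy as np

def nlmeans_guided(noisy, guide, patch=3, search=7, h=0.1):
    """Toy guided NL-means: similarity is judged on `guide` (e.g. a
    pre-denoised copy), but pixels are averaged from `noisy`."""
    half_p, half_s = patch // 2, search // 2
    pad_n = np.pad(noisy, half_s + half_p, mode="reflect")
    pad_g = np.pad(guide, half_s + half_p, mode="reflect")
    out = np.zeros_like(noisy, dtype=float)
    H, W = noisy.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + half_s + half_p, x + half_s + half_p
            ref = pad_g[cy-half_p:cy+half_p+1, cx-half_p:cx+half_p+1]
            wsum, acc = 0.0, 0.0
            for dy in range(-half_s, half_s + 1):
                for dx in range(-half_s, half_s + 1):
                    qy, qx = cy + dy, cx + dx
                    cand = pad_g[qy-half_p:qy+half_p+1, qx-half_p:qx+half_p+1]
                    # patch distance measured on the guide: a cleaner
                    # guide means better matches
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * pad_n[qy, qx]  # values come from the noisy image
            out[y, x] = acc / wsum
    return out
```

The same structure works whether the guide is a blurred copy, a differently denoised copy, or a hand-retouched one.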

However, I have found that the best use of frequency domain NR in conjunction with patch-based NR is to over-smooth slightly with the patch-based method, and then use frequency domain NR on the difference to recover residual patterns.
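A minimal sketch of that two-stage idea, assuming we already have an over-smoothed patch-based result; for brevity a plain FFT hard-threshold stands in for the frequency-domain (DCT) step, and all names are illustrative:

```python
import numpy as np

def recover_detail(noisy, base, keep_frac=0.05):
    """`base` is an over-smoothed patch-based result; the residual
    (noisy - base) is cleaned in the frequency domain and the recovered
    pattern is added back."""
    residual = noisy - base
    spec = np.fft.rfft2(residual)
    mag = np.abs(spec)
    # keep only the strongest coefficients: coherent patterns survive,
    # incoherent noise (spread thinly over all frequencies) is discarded
    thresh = np.quantile(mag, 1.0 - keep_frac)
    spec[mag < thresh] = 0.0
    recovered = np.fft.irfft2(spec, s=residual.shape)
    return base + recovered
```

With `keep_frac=1.0` nothing is discarded and you get the noisy input back; smaller values keep only the periodic/structured part of the residual.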

Oh! Almost forgot. You can also draw on the guide image to emphasise edges etc. that you know are there but are overwhelmed in noise.

If that is your experience, then are there colorspaces which show less desaturation artifacting than this example case?

Oh gosh! So the nlmeans has it (I can read commented code, yay!) and the chlpca probably too, but my reading skills of C (I am assuming this is C) are not developed enough. :frowning:

That is a really sexy trick! And you do that normalized to local lightness if I remember correctly?!

A lot of food for thought, indeed!

I normalise the noise levels in the residual locally before applying the frequency domain NR. Then I undo the normalisation to get back to ‘normal’.
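The normalise / process / denormalise round trip might look something like this - a crude block-wise noise estimate, purely illustrative; the real filter surely estimates local noise differently:

```python
import numpy as np

def local_sigma(img, block=8):
    """Crude per-block noise estimate: the std of each block, tiled
    back to full size (illustrative only)."""
    H, W = img.shape
    sig = np.ones_like(img)
    for y in range(0, H, block):
        for x in range(0, W, block):
            tile = img[y:y+block, x:x+block]
            sig[y:y+block, x:x+block] = max(tile.std(), 1e-6)
    return sig

def normalised_nr(residual, freq_nr):
    sig = local_sigma(residual)
    flat = residual / sig      # noise level is now roughly 1 everywhere
    cleaned = freq_nr(flat)    # any frequency-domain NR applied here
    return cleaned * sig       # undo the normalisation
```

The point of flattening first is that a single frequency-domain threshold then behaves consistently in shadows and highlights alike.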

You might be interested in this paper which uses a combination of patch-based and frequency domain NR:

Data Adaptive Dual Domain Denoising: a Method to Boost State of the Art Denoising Algorithms


Not as far as I know. I suppose the amount of desaturation will depend on both the cause of the noise, and the denoising method. But I suspect that denoising of ordinary photos (which have low saturation) generally results in saturation decrease.

When a photo has high saturation, I get virtually no change in saturation. See examples at Camera noise.

As I mentioned, I generally denoise the R,G0,G1,B channels independently, before demosaicing. With DSC09955.ARW, going through the process but with no denoising, we get:

[image: c_bay_den.png]
I have cropped to some bricks near top-left. For this, the C channel of HCL has a mean of 0.148742. The Cz channel of JzCzhz has mean 0.00341112.
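For readers unfamiliar with the per-channel approach mentioned above, splitting an RGGB mosaic into its four planes is straightforward. This is an illustration, not the actual script used here; the layout is assumed RGGB, and other Bayer patterns just permute the four slices.

```python
import numpy as np

def split_bayer(raw):
    """Split an RGGB mosaic into its four colour planes so each can be
    denoised on its own before demosaicing."""
    return {
        "R":  raw[0::2, 0::2],
        "G0": raw[0::2, 1::2],
        "G1": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

def merge_bayer(planes, shape):
    """Reassemble the four planes back into a mosaic of the given shape."""
    raw = np.empty(shape, dtype=float)
    raw[0::2, 0::2] = planes["R"]
    raw[0::2, 1::2] = planes["G0"]
    raw[1::2, 0::2] = planes["G1"]
    raw[1::2, 1::2] = planes["B"]
    return raw
```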

With my dnsePsh2Mn denoising, we get:
[image: c_bdp_den.png]

The C channel of HCL has a mean of 0.111013. The Cz channel of JzCzhz has mean 0.00283577. So chroma has decreased by denoising. We can always boost chroma if we want:

%IMG7%magick ^
  c_bdp_den.png ^
  -colorspace HCL ^
  -channel 1 -evaluate Multiply %%[fx:0.148742/0.111013] +channel ^
  -colorspace sRGB ^
  c_bdp_den2.png

[image: c_bdp_den2.png]

EDIT: Bother, the images aren’t showing. They are at:

Non denoised:
http://snibgo.com/imforums/c_bay_deg

Denoised:
http://snibgo.com/imforums/c_bdp_den

Denoised, with adjusted chroma:
http://snibgo.com/imforums/c_bdp_den2
… but put “.png” at the end of each.

EDIT2: Uploaded images with drag-drop.

EDIT3: Now the original images show. I’ve been notified: “system 1 hour — downloaded local copies of images”. Hooray. Perhaps it was a sidekiq problem, as mentioned on [Friendly reminder] Limit JPEGs to maybe full HD resolution - #40 by Thomas_Do , and now fixed.

@snibgo possible you have some sort of hotlink protection?

I dunno. Nothing in my pixls.us profile looks like a problem. I did my usual practice: upload images to snibgo.com/imforums, and paste the urls here between [ img ] and [ /img ] tags. After a few seconds, the forum software usually copies the images from snibgo.com, but not this time.

When I paste the URLs with [ img ] tags, the forum software doesn’t show that text, but an “image unknown” symbol.

Creating a local HTML page with <img src="http to snibgo.com"> works fine. I can’t find a control panel setting in my web hoster that might create problems.

It’s probably a bug somewhere. Possibly in my brain.

I’m going to give a recap of noise processing in RT, and what came before.

Before 2010, Emil Martinec, Manuel Llorens and I worked on Dcraw (not the original), then on PerfectRaw.
Emil Martinec had written a very good treatise on noise, and we came up with several types of tools: wavelet, DCT (Fourier), median, bilateral filter, …
In the same era, Emil Martinec worked on AMaZE, and I brought LMMSE to RT.

When RawTherapee appeared, and independently of the tools, the first question was: where to put the processing, at the beginning or at the end?
The first version put it… at the end.

Indeed there are advantages and disadvantages to both positions, and everyone weighs those advantages and disadvantages differently.
Now, denoise is at the beginning.

One of the important things Emil brought is the notion of MAD (Median Absolute Deviation), which takes into account the wavelet signal per level and, in a way, the “histogram” of this signal (whether for luminance or chrominance).

This “MAD” acts as a filter (or as a guide): for the same adjustment (slider, curve, etc.) it will modulate the action of the shrink function.
This allowed me, around 2013, to develop an “automatic” action for chroma noise.
I also introduced a Lab mode for denoise: Lab is better with “normal” noise… the real difference comes from the internal gamma (3.0) tied to Lab, which you can bypass with the gamma slider.
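The MAD-driven shrink function Jacques describes can be sketched with a single Haar level in numpy. This is the textbook Donoho-style estimator, not RT's actual code; the function names and the `strength` parameter are illustrative.

```python
import numpy as np

def mad_sigma(coeffs):
    """Noise estimate from the Median Absolute Deviation of detail
    coefficients: sigma ~ MAD / 0.6745 (the classic estimator)."""
    return np.median(np.abs(coeffs - np.median(coeffs))) / 0.6745

def haar_level(img):
    # one Haar decomposition level (image sides assumed even)
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 2
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 2
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 2
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 2
    return a, h, v, d

def soft_shrink(c, t):
    # shrink function: move every coefficient toward zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_level(img, strength=3.0):
    a, h, v, d = haar_level(img)
    t = strength * mad_sigma(d)   # the MAD modulates the user's setting
    h, v, d = (soft_shrink(c, t) for c in (h, v, d))
    # inverse Haar
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[1::2, 0::2] = (a - h + v - d) / 2
    out[0::2, 1::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out
```

The key property is that the same slider value (`strength`) adapts to each image and each level, because the actual threshold is scaled by the MAD of that level's coefficients.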

But, because denoise is complex, there is always a “but”: we didn’t solve the before/after problem.
When we apply processing that brings out the shadows or acts on contrast - Shadows/Highlights, tone mapping (Fattal, Mantiuk, Desmis…), log encoding, etc. - the noise increases considerably… and it happens after “denoise”!
Moreover, it can be interesting to keep the noise and be able to differentiate the action.

The first action of this type came in 2014, with the introduction of luminance denoise in “wavelet levels”, which acts at the end of the processing. But, two remarks: it’s global, and it only acts on luminance.
Other actions to improve quality came from Ingo @heckflosse: the very good Capture Sharpening, and dual demosaicing (e.g. AMaZE + VNG4).

But the latest addition is the creation of “Local adjustments” with its “Denoise” module. This module allows:

  • of course, local denoise (within the 4 delimiters)
  • changing the action with “Scope” - for example, you can denoise only the “reds”
  • I have kept the same good algorithms from Emil, but I added:
    ** the possibility to differentiate the action according to the level of detail: for luminance, 4 levels instead of 1; for chrominance, 2 levels instead of 1
    ** DCT (Discrete Cosine Transform) for chroma (uses a lot of resources)
    ** other settings to improve the DCT and differentiate the action between shadows/highlights, reds or blues…
    ** the possibility to use masks
    ** etc.

Of course, nothing is perfect; noise processing is complex. Maybe one day artificial intelligence will bring more, but it will always be necessary to take into account the variations of luminance, the illuminants, the perception of the photographer and his wishes, etc.

I think a “good” treatment is one that mixes before and after:

  • before (with the main “Denoise”) to allow sharpening and reduce some of the defects: these settings must be kept to a minimum
  • after (with “Local adjustments”) to refine the treatment as the user wishes, according to the colours and the parts of the image

Jacques


I remember I read this at some point (reading is not understanding, but still).
I can only recommend going to the ipol.im homepage and playing with the demos - not only of this paper, but all the other available demos… quite fascinating what denoising with a modern algorithm can achieve. The DA3D paper mentioned also lets you play with a multiscale DCT version as a guide for DA3D - super fast, and not much different from an NL-Bayes guide image for DA3D!

Two things that I forgot to mention that are important:

  1. You can choose the type of wavelet most suitable for your image. RT chose to use Daubechies wavelets, and you can choose the number of “taps”, i.e. the shape and the number of vanishing moments of the wavelet.
    In theory, the higher the number, the better the “edge performance”. In practice this is not always true, and nothing beats trying it.
    You can choose (in Settings) between: D2 (Haar), D4, D6, D10, D14 (default = D4)

  2. You can get an overview of the action of the various parameters on the noise:

  • “luminance denoise” by wavelet level (0, 1, 2, 3 and beyond… 6)
  • “luminance detail recovery” (DCT Fourier)
  • “equalizer” white/black
  • “chroma fine” by level (0, 1, 2, 3, 4)
  • “chroma coarse” by level (5, 6)
  • “chroma detail recovery” (DCT Fourier)
  • “Detail threshold” - recovery for luminance and chroma
  • “Equalizer” blue/red

And of course “Scope” - the deltaE function to choose the range of colours to be treated - and “transition” to smooth the result.
This deltaE takes into account: a) the spot reference (the circle at the centre of the spot) with hue = href, chroma = cref, luma = lref; b) all pixels in the selection with their own hue, chroma, luma.
DeltaE is calculated by something like: SQR(href - hue) + SQR(cref - chroma) + SQR(lref - luma)
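Taking the stated formula literally (SQR = square), a direct transcription would be the following; the real code may add weights or a square root, this is just the form given above:

```python
def delta_e(href, cref, lref, hue, chroma, luma):
    """Squared distance between the spot reference (href, cref, lref)
    and a pixel's own hue / chroma / luma, as stated in the post."""
    return (href - hue) ** 2 + (cref - chroma) ** 2 + (lref - luma) ** 2
```

Pixels with a small deltaE relative to the spot reference fall inside the “Scope” and get the full treatment; the “transition” then feathers the effect as deltaE grows.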

To see this, you have to activate, in “1+* Mask and modifications - Smooth Blur & Denoise”:

  • one of the 3 choices usable for that: “Show modifications without mask”, “Show modifications with mask” or “Preview selection deltaE”, according to usage.

Of course, the result is not exactly the reality: I amplify the signal so that the results are more visible.

jacques