State-of-the-art FL/OSS noise reduction - any thoughts?

For the next photoflow version, I am planning to add some high-quality noise reduction filters.

Presently the noise reduction part is still rather poor: I have the LMMSE and IGV demosaicing methods, a derivation of the impulse NR from RawTherapee, and a few of the G’MIC smoothing filters.

However, I have never done a full survey of the results that can be obtained with the various options offered by FL/OSS software (and probably I will never manage to find the time for that).

  • We have the many built-in filters from G’MIC (patch-based, NL means, anisotropic, and the new patch-pca method among many others) plus some custom filters (Iain’s noise reduction, etc…).
  • We have the noise reduction tools from RawTherapee and Darktable.
  • There is probably other useful stuff in GIMP as well.

What is your experience? What would you recommend as a starting point to incorporate into photoflow? I’m not afraid of porting processing filters from one program to another, and I already have a working interface for the G’MIC filters…

Thanks in advance!

In GIMP, I combine different filters. For different kinds of noise I have different solutions. Do you know this filter, “Dcam Noise 2 0.64”? The code is here:

https://searchcode.com/codesearch/view/40118417/

I know it influences the borders of the image, but I don’t know why I can’t find much information about this filter. In combination with Iain’s “save noiseprint” in G’MIC I often have good results. Do other people know if something is wrong with “Dcam Noise 2”?

edit: I now see “Dcam” is free and not open source (line five of the code: “This program is free software”). But I am still curious why I can find almost no info about this noise reduction.

I had the most and quickest success with darktable’s denoise tools. For smooth-ish surfaces and chroma I usually apply either denoise (profiled) with wavelets or their flavor of bilateral filter. For textured/detailed surfaces or low-ISO shots I tend to use denoise (profiled) with NL means. Finally, for the color blotches in high-ISO images (usually ISO 6400+) I use the (wavelet-based?) equalizer to reduce the offending frequency range(s).

The drawbacks with what is in darktable:

  • No darkframe subtraction or LMMSE (that’s why I usually use RT to develop raws for my astrophotography, followed by stacking)
  • It can end up being quite a bit of manual work to get high-quality results. I’m not sure if the surface-detection part can be automated without machine learning and massive amounts of data, but it would be nice if it could.
  • The UI is clumsy. For bilateral noise reduction one has to set R, G, B independently, which I very rarely do. Except for the (non-profiled) NL means, adjusting chroma/luma independently or adjusting the mix needs to be done via the blending system.

I didn’t get good results with the noise reduction in RT (except for LMMSE/IGV, but that only goes so far) or G’MIC. I think part of it is that it’s hard to find the right settings and that they are slow compared to DT, which makes the tweaking process a pain.

One thing I’ve wanted to try for a while is running an NL-means-like filter with several images of the same scene at once. I imagine it could give results quite similar to stacking, but without requiring the images to line up perfectly.
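Roughly what I have in mind, as a naive and very slow NumPy sketch (nothing from an existing tool; the function name, patch/search sizes and the h parameter are all made up for illustration):

import numpy as np

def multi_frame_nlmeans(frames, patch=3, search=5, h=0.1):
    # Illustrative multi-frame NL-means: grayscale float frames in [0, 1],
    # all the same shape, already roughly framed on the same scene.
    # For every pixel of the first (reference) frame, candidate patches are
    # taken from a small search window in *every* frame and averaged with
    # weights based on patch similarity. Brute force and very slow.
    ref = frames[0]
    height, width = ref.shape
    p, s = patch // 2, search // 2
    pad = p + s
    padded = [np.pad(f, pad, mode='reflect') for f in frames]
    ref_pad = padded[0]
    out = np.zeros_like(ref)

    for y in range(height):
        for x in range(width):
            yy, xx = y + pad, x + pad          # coordinates in the padded frames
            ref_patch = ref_pad[yy - p:yy + p + 1, xx - p:xx + p + 1]
            acc, wsum = 0.0, 0.0
            for f in padded:                   # candidates come from every frame
                for dy in range(-s, s + 1):
                    for dx in range(-s, s + 1):
                        cy, cx = yy + dy, xx + dx
                        cand = f[cy - p:cy + p + 1, cx - p:cx + p + 1]
                        d2 = np.mean((ref_patch - cand) ** 2)
                        w = np.exp(-d2 / (h * h))   # patch-similarity weight
                        acc += w * f[cy, cx]
                        wsum += w
            out[y, x] = acc / wsum
    return out

The only difference from plain NL-means is that the candidate patches come from every frame instead of just one, so each output pixel is effectively averaged across the whole burst.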


The new patch-pca method in G’MIC is really state of the art, but it is very slow on the simple computers of everyday people. Maybe it will be more interactive in about five years. :thumbsup:


[quote=“iarga, post:4, topic:1132”]
The new patch-pca method in G’MIC is really state of the art, but it is very slow on the simple computers of everyday people. Maybe it will be more interactive in about five years. :thumbsup:
[/quote]
Now I had to try that.

[quote=“iarga, post:4, topic:1132”]
But it is very slow on the simple computers of everyday people.
[/quote]
Can I run G’MIC/GIMP on a massive cluster? :smiley: My employer probably wouldn’t be thrilled if I broke their infrastructure to denoise photographs, but it might be good fun. Then again, given that it takes 35 minutes to run on my i7, it would probably still be painfully slow. :smiley:

The preview is fast enough to be useful, but given that it is a non-local method I’m not sure how accurate it is. On the full image it takes a long time. The parameters are not exactly self-explanatory either. So I guess it’s practically not usable for now. :confused:


For a “quick” test I don’t go above 200x200 resolution :wink: But that is a typical resolution for a forum avatar, so it is still usable.

You should probably rope @David_Tschumperle into any conversation on state-of-the-art noise reduction techniques… :slight_smile:

[quote]
But that is a typical resolution for a forum avatar, so it is still usable.
[/quote]

Yes, I didn’t word that correctly: it’s not usable for what I had in mind. I also messed up by confusing the patch-based version with the block-based one. So I’m sorry². :frowning:

I tried it on a smaller image now and the results are nice.

This does work well if the offset between the images is small. Otherwise you have to use a large, time-consuming search radius. I’ve tested it on some photos of people, and due to the natural movement the search radius needed to be prohibitively large.

[quote=“Iain, post:9, topic:1132”]
This does work well if the offset between the images is small. Otherwise you have to use a large, time-consuming search radius. I’ve tested it on some photos of people, and due to the natural movement the search radius needed to be prohibitively large.
[/quote]
Makes sense. I remember seeing some hash-based algorithm(s) which might be less sensitive to the search radius.
Another way might be to build a rough lookup table between the image coordinates beforehand using some sort of feature matching.
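Something along those lines, as a rough OpenCV sketch (purely illustrative, not tied to G’MIC or darktable; the feature count, match count and RANSAC threshold are arbitrary):

import cv2
import numpy as np

def rough_align(ref_gray, moving_gray):
    # Pre-align `moving_gray` to `ref_gray` with ORB features + RANSAC,
    # so that a later patch search only has to absorb small residual offsets.
    # Illustrative sketch: real code would need error handling for too few
    # matches, masks for moving subjects, etc.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(moving_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = ref_gray.shape
    return cv2.warpPerspective(moving_gray, M, (w, h))

After the warp the remaining misalignment should be down to a few pixels, so the patch search itself can keep a small radius.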

But I think even with a small search radius this could be very useful for landscape shots where the camera is on a tripod but things like the wind moving branches introduce motion.

Combining images with small movements of a few pixels should help reduce demosaicing errors/artifacts.

[quote]
Combining images with small movements of a few pixels should help reduce demosaicing errors/artifacts.
[/quote]

I’ve done this using Hugin, aligning and stacking the results of a burst, but increasing the canvas size to twice the original size (4x the pixels) and then deconvolving. It works great on solid subjects, but landscapes are a no-go because trees ruin everything; they’re always moving at least a little. It also benefits from a camera with an overly sharp lens and no AA filter, like my GR.
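For what it’s worth, the upscale-stack-deconvolve part can be sketched outside Hugin as well; here is a rough NumPy/scikit-image version, assuming the burst frames are already aligned grayscale floats, with a made-up Gaussian PSF and iteration count:

import numpy as np
from skimage.transform import rescale
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    # Small normalized Gaussian kernel used as an assumed PSF.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def upscale_stack_deconvolve(aligned_frames, psf_sigma=1.5, iters=30):
    # Drizzle-ish sketch: 2x upscale each (already aligned) frame,
    # average them, then Richardson-Lucy deconvolve the result.
    # `aligned_frames` are float grayscale images in [0, 1].
    upscaled = [rescale(f, 2, order=3) for f in aligned_frames]
    stacked = np.mean(upscaled, axis=0)
    return richardson_lucy(stacked, gaussian_psf(sigma=psf_sigma), iters)

The averaging does the noise reduction, and the deconvolution tries to claw back some of the sharpness lost to the lens/AA filter and the upscaling.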

So, from this first round of comments I would say that Darktable’s non-local means tool is a good starting point and a sort-of must-have for good and quick noise reduction.

I wonder what could be the differences compared to the non-local means tool in G’MIC (which after some quick experimenting seems to be slower than the DT equivalent). Maybe @David_Tschumperle has a word to say on this point :wink:

@Iain are your multi-image denoising experiments available somewhere?

I have a G’MIC filter I’ve been playing with. I’m on mobile so I can’t post it at the moment.

You may be interested in this tool for motion-compensated stacking.

Edit:

Here is my G’MIC filter. It’s pretty basic. It needs two layers as input. It ‘moves patches’ in the bottom image to match the top one and outputs a single image. The number in the top left corner is how many minutes it took.

#@gimp z_bland: z_bland,z_bland(0)
#@gimp : Scale = float(3,.5,50)
#@gimp : Radius = int(3,3,30)

z_bland:

# Moves patches in the bottom layer [1] to match the top layer [0].
# $1 = Scale (blur of the local error maps), $2 = Radius (search radius in pixels).

[1]                          # [2] = working copy of the bottom image (current best merge)

-repeat {2*$2+1} i={$>-$2-1}
  -repeat {2*$2+1} j={$>-$2-1}

    --shift[1] $i,$j,0,0,2   # [3] = bottom image shifted by (i,j), periodic borders
    --sub[0,2]               # [4] = top - current best
    --sub[0,3]               # [5] = top - shifted bottom

    # turn both differences into smoothed local error maps
    -l[4] -abs -blur {($1*2)-1} -endl
    -l[5] -abs -blur {($1*2)-1} -endl

    --min[4,5]               # [6] = per-pixel minimum of the two error maps
    -eq[4] [6]               # [4] = mask where the current best is the closer match
    -eq[5] [6]               # [5] = mask where the shifted bottom is the closer match
    -rm[6]

    -mul[2,4]                # keep the current best only where it wins (old [5] becomes [4])
    -mul[3,4]                # keep the shifted bottom only where it wins
    -max[2,3]                # merge the two masked candidates back into [2]

  -done
-done

-k[2]                        # keep only the merged result

time={$|/60}                 # elapsed time in minutes, stamped in the top-left corner
-text $time,0,0,25,1,255

Thanks for sharing that @Iain. I still need to learn the G’MIC syntax, but it looks like a fairly brute-force-ish approach: going through all the possible shifts and then picking the one with the lowest delta. I still have other stuff that needs to be done, but this will be on my ‘to investigate’ list. Could end up being a cool party trick. :smiley:
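Just to check that I read it right, here is that brute-force idea as a small NumPy/SciPy sketch; it is my own rendering of the logic, not a literal translation of the filter above:

import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def move_patches(top, bottom, radius=3, scale=3.0):
    # Try every integer shift of `bottom` within `radius` and, per pixel,
    # keep whichever candidate is locally closest to `top`. The local error
    # is a blurred absolute difference, similar to the blur of abs(diff)
    # in the G'MIC script. Grayscale float images, same shape.
    best = bottom.copy()
    best_err = gaussian_filter(np.abs(top - best), scale)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = nd_shift(bottom, (dy, dx), mode='wrap')   # periodic borders
            cand_err = gaussian_filter(np.abs(top - cand), scale)
            better = cand_err < best_err
            best = np.where(better, cand, best)
            best_err = np.where(better, cand_err, best_err)
    return best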

Yes that’s it.

I’ve tried some things like breaking the image into tiles and offsetting the tiles against each other, then doing the above, but I have not got it to work as well as I would like.

Are you talking about general noise reduction (from a bitmap image)?
Or only about the image after demosaicing, before any tonal curve and gamma correction?

The two are totally different (though one could be transformed into the other).
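One standard way to transform one case into the other is a variance-stabilizing transform such as the Anscombe transform. A minimal sketch, assuming the linear (pre-tone-curve) raw data is dominated by Poisson-like photon noise expressed in scaled photon counts:

import numpy as np

def anscombe(x):
    # Makes Poisson-like (signal-dependent) noise approximately uniform,
    # so a plain 'uniform noise' denoiser can be applied to linear raw data.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; the unbiased inverse is a bit more involved.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_linear(raw, denoiser):
    # Sketch: stabilize the variance, denoise with any filter that assumes
    # roughly uniform noise (NL-means, wavelets, ...), then go back to linear.
    return inverse_anscombe(denoiser(anscombe(raw)))

In practice real raw data has a Poisson-Gaussian mixture (plus a black level), so the transform and its inverse get a bit more elaborate, but the principle is the same.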

More importantly, what would be your time budget?

I’m mostly interested in denoising of RAW images, so I would say both cases with a preference for pre-tonal-curve NR if it can be more effective…

Given that I’m 40, I would say not more than 20 years… :wink:
Seriously, whatever is needed to implement a good algorithm… I’m not making any money out of the software, so I have no specific time constraints.

When I wrote “time budget” I meant the algorithm’s run time :wink:.