About RCD and other demosaicing methods

Hmm, you could compile the librtprocess branch of RawTherapee. Should be straightforward. Then you can easily test changes to the demosaicers in librtprocess

2 Likes

Ha, the ‘launch’ I’ve been busiest with is my son taking a new job that requires him to move to the South Pole. And, I sent him off this past weekend to his first quarantine location in San Francisco with a fresh copy of rawproc 1.0, and now he has His List of Things about it
 :boom:

@LuisSanz, @heckflosse’s suggestion about the RT librtprocess branch is indeed probably the most expedient path. But, I’m really interested in whatever you come up with being in librtprocess for rawproc 1.1!

1 Like

Hi all,

May I ask for some help with the development of RCD v3?

I have been working lately on an update of the algorithm. Among other things, I was thinking about changing the way the low pass filter is calculated.

For general photography, things should be a bit sharper and false colors should be reduced. But I recently learnt that RCD is also used in specialized photography fields, for example astrophotography, ultraviolet photography or focus stacking.

I wouldn’t like to update the algorithm without proper testing in those fields, but I have neither the tools nor the knowledge to know if things will improve for the specific needs of each field.

It would be very handy if:

  • Anyone could share raw samples where RCD works noticeably worse or better than other demosaicing algorithms.
  • I could get some ultraviolet and astrophotography raw files, plus some insight into what the best output looks like for these cases.
  • Somebody could do a stacking comparison of RCD 2, RCD 3 and AMaZE on a challenging photo.

Thanks!

ping @lock042

Hello.

Last time, when @heckflosse helped us integrate librtprocess into Siril, I made some tests I can share with you. Especially this one:

DCB:


HPHD:

AHD:

RCD:

bilinear:

IGV:

VNG:

LMMSE:

I don’t know if it helps, but this comparison is what led me to choose the RCD algorithm as the default one in Siril.

Thank you very much, @lock042.

Could you describe why you prefer RCD rather than, for example, LMMSE? Also, is there something you like in the other crops that would be nice to have in RCD?

If I get it right, it should be easy to do some things, such as sharpening or further smoothing the output, avoiding connected noisy pixels, et cetera. I can even prepare a fine-tuned algorithm for your specific needs.

If you can share a raw file I can use as a reference, things will be easier.

Thanks!

Thank you for taking care of astrophotography :).
So, why was RCD the better choice? As you can see, LMMSE is very good, but the color loss is too visible. In astrophotography colours are faint and we don’t want to lose them. The algorithm is also quite slow.

RCD is very good for round objects, and in astrophotography you have many round objects: stars. For example, the main defect of VNG (which we were using before RCD) is that it introduces artifacts within stars (which are high-contrast areas). But VNG produces a smooth background we do like (less noisy). So an RCD algorithm able to produce a sky background with the same look as VNG would be a huge step forward.

Sure. Please take it: FileSender

Don’t forget that we can process a set of 1000 images (sometimes even more), so the speed of the algorithm is important: we do not use parallelization inside the algorithm because the whole batch process is already parallelized.

Thank you, @lock042, for the very valuable info. I have some ideas to try using the raw you provided.

1 Like

Hi @LuisSanz,
Still throwing random requests without actually knowing if they are feasible (so, sorry in advance if they are not). It would be great if RCD were better at avoiding moiré/false colours using something smarter than a median filter (which is not particularly effective and, in my experience, degrades the colours too much). I personally don’t care much about having more sharpness, the current version is already quite good there, so I would also accept a slight loss in detail for less color aliasing when needed.
Hope this makes sense


1 Like

These are reasonable requests. I have similar questions. I guess, with my limited processing experience, it comes down to dealing with conflicting interests: speed vs complexity vs robustness, etc. I am interested in the next method potentially having parameters that control where the balance lies.

1 Like

Hi, @lock042.

What a beautiful photo! I have already tried out some of the techniques I can fine tune in RCD or a separate algorithm. Instead of guessing which one is better, do you mind having a look at the crops yourself?

LIGHT_300s_800iso_+19c_20150822-00h10m56s989ms-crop.xcf (66.0 MB)

The crops were developed with dcraw with the least possible post-processing, so you will probably need to apply curves. Some differences are too subtle to be seen at that exposure level.

I have divided the image into three groups, with RCD 2.3 as the reference for each of them since it’s what Siril is already using. You might want to look at the composite RGB image but also at the individual channels. If you use GIMP, it helps to view the channels as gray images (Colors → Components → Decompose).

A bit of background about the interpolation might help with the decision:

For the green pixel estimation, I can use three different techniques:

  • Estimate the missing green pixels without any color correction. This way, only green pixels are used to find the missing values. Resolution is lower, but there is no pollution at all from red/blue light sources. This will miss stars that were captured by a single red or blue pixel.
  • Estimate the missing green pixels with color correction from the nearest-neighbor pixels. At red positions of the Bayer pattern, the interpolation uses green and red pixels, while at blue locations it uses green and blue values. This has the best resolution and it’s what AMaZE, LMMSE or HPHD do. But it can lead to artifacts when the red and blue wavelengths are far apart, and it also amplifies impulse noise.
  • Estimate the green pixels applying a small degree of spatial correction from a cross-channel low pass filter. This is what RCD does. It’s a trade-off between options 1 and 2 regarding resolution, and it avoids artifacts when any of the color channels is highly saturated.
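To make the three options concrete, here is a toy sketch. All the names, the 0.5 correction gain and the blend weight are my own illustrative assumptions, not the actual RCD/librtprocess code:

```python
# Illustrative sketch only -- not the real RCD code. g_n/g_s/g_e/g_w are
# the four green neighbors of a red pixel; r_c is the red value at the
# pixel itself and r_n2/r_s2/r_e2/r_w2 the red samples two steps away.

def green_no_correction(g_n, g_s, g_e, g_w):
    """Option 1: plain average of the green neighbors, no other channel used."""
    return (g_n + g_s + g_e + g_w) / 4.0

def green_color_corrected(g_n, g_s, g_e, g_w, r_c, r_n2, r_s2, r_e2, r_w2):
    """Option 2: add a red-channel Laplacian term (Hamilton-Adams style):
    sharper, but sensitive to channels with very different content."""
    avg = (g_n + g_s + g_e + g_w) / 4.0
    laplacian = r_c - (r_n2 + r_s2 + r_e2 + r_w2) / 4.0
    return avg + 0.5 * laplacian          # the 0.5 gain is an assumption

def green_blended(g_n, g_s, g_e, g_w, r_c, r_n2, r_s2, r_e2, r_w2, w=0.5):
    """Option 3: a trade-off between the two, standing in for RCD's
    low-pass-filtered cross-channel correction (w would come from the LPF)."""
    a = green_no_correction(g_n, g_s, g_e, g_w)
    b = green_color_corrected(g_n, g_s, g_e, g_w, r_c, r_n2, r_s2, r_e2, r_w2)
    return (1.0 - w) * a + w * b

# Flat green surroundings with a red-only peak at the center pixel:
print(green_no_correction(100, 100, 100, 100))                             # 100.0
print(green_color_corrected(100, 100, 100, 100, 180, 120, 120, 120, 120))  # 130.0
print(green_blended(100, 100, 100, 100, 180, 120, 120, 120, 120))          # 115.0
```

On this toy pixel, option 1 ignores the red peak entirely (a star seen only by a red pixel would vanish from green), option 2 lifts the green estimate to follow it, and option 3 lands in between.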

For the red and blue pixels estimation, there are also some alternatives:

  • Use the green channel as a guide, since it’s already interpolated and has more resolution. This is what RCD and all the other algorithms except VNG-4 and Bilinear do. I’m not sure if it fits astrophotography: resolution is certainly better, but noise and structure from the green channel pass into the other two.
  • Perform a linear interpolation, using only red pixels for the red channel and blue pixels for the blue channel. It has the same kind of benefits and problems as the first green option above.
  • Incorporate the cross-channel low pass filter as an in-between solution.
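As a toy illustration of the first two alternatives (a hedged sketch with made-up names, not librtprocess code), interpolating red at a green position with and without the green guide could look like this:

```python
# Illustrative sketch only. r_w/r_e are the red neighbors left and right
# of a green pixel; g_c is the green value at the target pixel and
# g_w/g_e the (interpolated) green values at the red positions.

def red_guided_by_green(r_w, r_e, g_c, g_w, g_e):
    """Green-guided: interpolate the color difference R - G and add back
    the known green at the target pixel, so green structure carries over."""
    return g_c + ((r_w - g_w) + (r_e - g_e)) / 2.0

def red_linear(r_w, r_e):
    """Channel-independent: plain average of the two red neighbors."""
    return (r_w + r_e) / 2.0

# A green edge at the target pixel (g_c = 150 vs. neighbors at 90/130):
print(red_guided_by_green(80, 120, 150, 90, 130))  # 140.0 -- follows the green edge
print(red_linear(80, 120))                         # 100.0 -- ignores it
```

The toy numbers show both effects mentioned above: the guided version resolves the edge, but any noise or structure in green leaks into red the same way.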

I have yet to do more testing regarding the directional filters. I have included some in the file, but I would prefer to work on them after knowing how the pixel estimation will work. This is the part that has the biggest impact on the algorithm’s speed. Anyway, I have good news here: the current implementation of this part in RCD is really poor when using a single core (my fault) and it will get a very noticeable speed-up in the next release. Ingo and Hanno are already working on it.

If you doubt between some of the options, maybe we can test them with a different raw file.

Hi, @agriggio ,

It makes perfect sense and, definitely, the new release will improve the results in that area.

I already have a pretty effective filter that detects regions with moiré. It has one parameter to control the sensitivity of the detection (actually two, but one will be fixed and won’t be adjustable via the interface). I’m trying out different solutions for guessing the interpolation direction in those areas.

I wouldn’t worry much about sharpness, since an artifact-free demosaicing takes sharpening better during post-processing. But, while a lower resolution avoids luminance artifacts, it can lead to false colors around some edges. So there are some cases where extra sharpness is needed, particularly in some diagonals with high frequencies.
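To illustrate the kind of knob involved (a purely hypothetical sketch, not the actual filter; `moire_mask`, `sensitivity` and `fixed_scale` are made-up names): a sensitivity parameter can simply scale the threshold applied to some per-pixel moiré score, with the second parameter kept fixed internally:

```python
# Hypothetical sketch of a sensitivity-controlled detection mask.

def moire_mask(score, sensitivity, fixed_scale=4.0):
    """Flag pixels whose moire score (e.g. some local chroma-variation
    measure) exceeds a threshold. `sensitivity` is the user-facing knob;
    `fixed_scale` stands in for the second, non-adjustable parameter."""
    t = fixed_scale / max(sensitivity, 1e-6)   # higher sensitivity -> lower threshold
    return [[1 if v > t else 0 for v in row] for row in score]

# One strong-moire pixel in a 2x2 score map:
print(moire_mask([[0.1, 9.0], [0.2, 0.3]], sensitivity=1.0))  # [[0, 1], [0, 0]]
```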

2 Likes

Hi, @afre,

There are indeed some things that can be adjusted, although I personally prefer simplicity in interfaces. Depending on the data I collect these days, I will decide which parts will allow some degree of user input. Thanks!

Hello @LuisSanz. Thank you for your work. I will take a look at this.
I’m really not a specialist in debayer algorithms, so I can only tell you what astrophotography needs.

I think that for astrophotography lower resolution is not a good idea. However, blue and red fringes in a star are really not wanted either. This is why your third point is the best option, IMO.

In this case, better resolution is the best option. Even if noise is present in the picture, we have to keep in mind that we will stack a lot of pictures to really remove the noise. So for me that’s not a big deal.

In astrophotography you can divide the image into two areas:

  • areas with objects: stars, nebulae, galaxies or anything else
  • background areas: where sharp structures and high resolution are really not needed.

I don’t know if it helps. Let me know if you need more information.

Many thanks btw.

Cheers,

I’m going to start by saying that I’m clearly out of my areas of knowledge, so take my comments with a grain of salt.

I understand sharpness as the same thing as acutance, and like @agriggio, I wouldn’t mind losing some acutance as long as there is no loss in perceived «resolution», understanding the latter as «independent visual units» (if that definition even exists).

I mean, if there is some loss of contrast but the details (the pixels) remain differentiated from their neighbors, then there’s no problem. We can correct that with many tools.

Whenever there’s a loss of detail by means of some kind of blurring or posterization, that wouldn’t be ideal to me. And perhaps different kinds of image processing ask for different features from the algorithm.

At this point, I agree with @afre that conflicting interests call for some user control: there will be people who need no color artifacts, while others need no posterization or blurring, and I’m afraid both can’t be offered at the same time.

From what I can see at first look, all the chrominance interpolations tend to add some false colors (a green hue) in stars.

I’ve done some demosaicing experiments in GMIC to reduce or remove moiré. See:

Something I have tried is estimating the correct colour (difficult with moiré) and then using that to figure out the missing colour values at each pixel.

In my experiments, the best way to get a colour estimate was to demosaic twice, once interpolating green horizontally and once interpolating green vertically, then interpolating the red and blue differences. This gives two different moiré patterns. I then take the FFT of both images in tiles and select the minimum values from each tile.
However, the colour estimate smears colours and can’t be used in non-moiré areas.
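The "demosaic twice, then keep the smoother result" idea can be sketched in a few lines. This is a hedged stand-in: the experiment above compares FFT tile magnitudes, while this toy version applies the same selection principle per pixel in the spatial domain, picking the candidate whose local colour differences vary least; all names are made up:

```python
# Toy sketch of choosing between the horizontal and vertical candidates.

def local_variance(vals):
    """Plain variance of a small window of values."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def choose_direction(cd_h, cd_v):
    """cd_h / cd_v: colour differences (e.g. G - R) in a small window
    around the pixel, from the H and V candidates. Smoother wins."""
    return 'H' if local_variance(cd_h) <= local_variance(cd_v) else 'V'

# Smooth horizontal differences vs. oscillating vertical ones:
print(choose_direction([10, 11, 10, 9], [10, 40, -5, 30]))  # 'H'
```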

If it happens in all the crops, it probably has more to do with white balancing or the camera profile in my old version of dcraw. I would disregard it and just focus on pixel quality.

While I like the simplicity of just selecting the demosaicing algorithm and letting it do its job, I understand this request. A few messages above I offered to do a method with more user control. It wouldn’t be RCD 3 but something else, with the ability to select the methods for finding the directional strengths and for interpolating the missing green pixels.

I’m afraid it might be intimidating for the user to have to guess what the different options do. To be more user friendly it could be profiled, with a dropdown for selecting the type of photography that preloads some optimal parameters: architectural photography, astrophotography, portraiture, et cetera, plus an extra “Custom” option for total control.

I think we need more voices in the discussion. Any opinion on this?

Very interesting, @Iain. Thank you.

From a demosaicing point of view, what I’m trying to do is to estimate the right interpolation direction at any R or B position of the CFA to get the green channel reconstructed. If found correctly, most luminance artifacts and false colors are removed. This suits the vast majority of cases, it’s fast and it keeps the color interpolation as close as possible to the real CFA values captured by the camera, so there is no loss of chrominance detail.

In the original AMaZE implementation, I remember Emil was computing a simple 5x5 variance for each of the horizontal and vertical color differences. The results were then smoothed by a median-like filter, to remove outliers, and a Gaussian convolution. It worked great in most cases. If I don’t come up with something better, I will use a similar method.
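The outlier-removal step can be sketched like this (a hedged toy version, not Emil's actual AMaZE code; the 3x3 window and the 0/1 direction encoding are my assumptions):

```python
# Toy sketch: clean a per-pixel H-vs-V decision map with a 3x3 median
# filter so that isolated wrong decisions are voted away by neighbors.

def median3x3(m):
    """3x3 median filter over a 2D list; borders are left untouched."""
    h, w = len(m), len(m[0])
    out = [row[:] for row in m]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(m[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # 5th of 9 sorted values
    return out

# 1 = interpolate vertically, 0 = horizontally; one outlier at (1, 1):
dirs = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(median3x3(dirs)[1][1])  # 0: the lone vertical decision is removed
```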

A different case is when the color aliasing is present in the raw data at a much bigger scale than the typical 5x5 or 7x7 area used in a local demosaicing. This is rare, but it happens: the images in the topic you linked are a great example. Do you still have the raw files? The link to Dropbox is down.

In these cases, even with the correct interpolation direction, a local demosaicing cannot remove the color aliasing. However, while I do believe the tools to fight this should be in every raw developer, I’m not sure they belong in the demosaicing process itself. I think it makes more sense to have a separate filter, like the one you designed, in the UI, so that it can be applied independently of the demosaicing algorithm chosen by the user.

1 Like

I quite like this idea.

But even if this new algorithm comes to life, I would be really happy if we also end up with an RCD v3.