Dual demosaic automatic contrast threshold detection

(Ingo Weyrich) #1

A while ago I introduced some dual demosaicers into RT (for Bayer sensors: amaze + vng4, dcb + vng4 and rcd + vng4; for X-Trans sensors: 3-pass + fast and 1-pass + fast).

In the current version the user has to set the contrast threshold manually, which decides which demosaicer is used for the contrasty regions and which for the flat regions.

Recently I started working on automatic calculation of the contrast threshold. In most cases that works very well, but there are also cases where it does not work well (for example if there is no flat region in the image, and some other cases).

I would like to get some feedback from RT users who already use dual demosaic, to find the best solution. Please take a look at the corresponding issue on GitHub, but also feel free to answer in this pixls.us topic if you don’t have a GitHub account.

Any feedback is very much appreciated.


(nosle) #2

Compared to my usual manual settings, the automatic detection suggests a lower number. I tend to, perhaps mistakenly, set it so that sky or out-of-focus areas are near flat and dark in the contrast mask preview. In my quick tests my shots get a suggested 9. My manual setting would be 14 or so.

The above crop shows sky and trees. Making the sky as smooth as possible would be good. This is a low iso shot.

Below, a shallow-DOF shot of a child’s face, with the out-of-focus area not demosaiced for maximum smoothness at the suggested setting of 8.

(Ingo Weyrich) #3

@nosle Thank you very much. That’s exactly the kind of feedback I need :+1:
The algorithm is tuned to detect the flattest area in the image which in your case is sky.
Though I tested it with a lot of shots from different cameras, most of the test images have been from my main camera, which is still an old Nikon D700, for the simple reason that I have a lot of images to test from this camera.

As others wrote in the GitHub issue, the current settings are quite conservative (meaning it gives preference to the first demosaicer (amaze, dcb, rcd for Bayer) and tries to optimize the threshold for the flattest region to get rid only of the artifacts caused by the ‘contrasty’ demosaicer).

I have to think about that… See my edit below…


Edit: @nosle: Did you confuse the dual demosaic contrast threshold with the sharpening contrast threshold by accident? Your screenshots show that you are using the values you mention in your post for the sharpening contrast threshold…


Simple question: when the contrast threshold is at min or max, does it mean that we have one method or the other? Or is there still a bit of both?

Also, is it a binary choice for regions? This algorithm for this region, as opposed to a transitional blend when gradients are less steep. Just a thought.

(Ingo Weyrich) #5

A contrast threshold of zero means amaze, rcd, dcb, xtrans 3-pass or xtrans 1-pass will be used for the whole image. At a threshold > 0 the transitions are blended, as described here for sharpening (it’s the same blend function for dual demosaic).
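To make the behaviour concrete, here is a minimal sketch of such a threshold-controlled blend. The sigmoid shape, its steepness, and the function names are my assumptions for illustration, not RT’s actual code:

```python
import math

def vng4_weight(local_contrast, threshold):
    # Weight of the 'flat' demosaicer (vng4) for one pixel.
    # threshold == 0: the contrasty demosaicer (amaze/rcd/dcb) is used everywhere.
    if threshold == 0:
        return 0.0
    # Steep sigmoid centred on the threshold (steepness is illustrative):
    # low local contrast  -> weight near 1 (flat demosaicer),
    # high local contrast -> weight near 0 (contrasty demosaicer).
    return 1.0 / (1.0 + math.exp(12.0 * (local_contrast - threshold) / threshold))

def blend_pixel(contrasty_val, flat_val, local_contrast, threshold):
    # Per-pixel blend of the two demosaicer outputs.
    w = vng4_weight(local_contrast, threshold)
    return w * flat_val + (1.0 - w) * contrasty_val
```

With threshold 0 the blend always returns the contrasty demosaicer’s value; with threshold 8, a pixel with local contrast 1 is almost pure vng4 and a pixel with local contrast 50 is almost pure amaze/rcd/dcb, with a smooth transition in between.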

(nosle) #6

I only use the sharpening threshold to visualize the mask. It does show the same mask at the same settings doesn’t it?

What threshold do you get with your D700 on a blue-sky, bright-day, ISO 100 sort of shot? I was surprised at results in the 8-9 range as I tend to use higher settings.

(Ingo Weyrich) #7

The visualization of the mask depends on the sharpening threshold and on the dual demosaic threshold.

To evaluate the optimal value for the dual demosaic threshold, you have to set a moderate sharpening contrast threshold (e.g. in the range of 7-8). Then activate the contrast mask and increase the dual demosaic threshold, starting at 2, until nothing changes. Because that’s a very long process, I automated it, which is way faster and leads to correct dual demosaic contrast threshold values (e.g. 7-10 for ISO 100 D700 files). As soon as there are no changes visible in the contrast mask (of a flat region), you can increase the sharpening contrast threshold until the contrast mask of the flat regions is black (which is in the 15-25 range for my D700 base ISO shots).
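The manual procedure above can be sketched as a simple search loop. The callback name and the stopping rule here are hypothetical; RT’s actual automation works on the raw data directly rather than on rendered masks:

```python
import numpy as np

def auto_contrast_threshold(contrast_mask, start=2, max_threshold=100):
    """Automate the manual procedure: raise the dual demosaic threshold
    step by step and stop as soon as the contrast mask of a flat region
    no longer changes.

    `contrast_mask(t)` is a hypothetical callback returning the mask
    (a numpy array) computed at threshold t."""
    prev = contrast_mask(start)
    for t in range(start + 1, max_threshold + 1):
        cur = contrast_mask(t)
        if np.array_equal(cur, prev):
            # Mask stopped changing: the previous threshold was enough.
            return t - 1
        prev = cur
    return max_threshold
```

For a toy mask that stops changing once the threshold reaches 9, the search returns 9.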

The automatic calculation is only for the contrast threshold of dual demosaic, not for the sharpening contrast threshold.

(Ingo Weyrich) #8

I just merged it into dev.

(Samuel Chia) #9

Hi Ingo,

I have been travelling and thus missed the opportunity to make a timely contribution. As you know, I am absolutely delighted with this new addition of dual-demosaic to RT.

I’m not entirely certain what you meant regarding how to manually choose the ideal threshold amount for demosaicing. I am wondering if the selection mask preview could be activated solely for demosaic, without requiring jumping to Detail > Sharpening and back. My preference is not to use RT for any sharpening, so I have to take the additional step of going there and turning it off again.

I seem to get the impression that the sharpening contrast threshold should be set to 7 or 8, and then one should zoom in a lot to say 300% or more, look hard and increase the demosaic threshold settings in increments of 1, and observe for when no more subtle changes are visible. It is not easily noticeable at first, until one knows what to look out for. I see tiny changes happening along edges of the contrast mask. Is that how one should judge the threshold amount in your opinion?

I made a big layered TIFF in Photoshop so I could compare the results of DCB+VNG4 with threshold settings in increments of 1 through to 10, and 12, 15, 20 etc. It’s difficult to say which is indeed the most optimal result. My own old and rather unrefined technique to manually blend DCB with VNG4 in Photoshop was to use the “Find Edges” filter to make a mask, then use curves to increase the contrast so there is maximum rejection or reveal of said demosaic layer. This method gives approximately the same result as a threshold of 20, whereas my own eyes picked 8 as the optimal result before I did this comparison. I can see that in some regions the threshold of 8 retains a touch more detail, although there were also subtly more artifacts. None of which should be visible in a print.

The image I was testing on is attached as a raw file in this message. It does not contain clear and smooth regions; rather, it has out-of-focus parts. It was shot with a macro lens at f/8 on an a7R II at ISO 100. It is one of several frames in a focus-stacked sequence, itself part of a panoramic stitching sequence. When I use the ‘Automatic’ checkbox, the algorithm returned a threshold recommendation of “0”, which clearly isn’t what I needed. Strange that the algorithm is not able to pick up the out-of-focus regions as flat regions which require blending with VNG4.

I have also been thinking that even in an image which has no or hardly any smooth regions and is in focus everywhere, a threshold of zero is unlikely to give an optimal result. This can be seen in the underexposed shadows of this autumn leaves raw file, which on the right edge, while sharp, still benefits from the less artifacty VNG4, albeit only slightly. I find it interesting that my preference for a threshold value of 8 in this file is right in the middle of the range you mentioned of 7-10 for D700 ISO 100 files. Perhaps the automatic function should default to a 7 or so if it returns a 0 result.

I see on GitHub that you considered removing the manual override totally. I would not vote for that, for the excellent reasons you provided there yourself; all three apply in my case.


_DSC1267.ARW (82.3 MB)


Good thoughts. I had similar ones. I wonder how it is pixel peeping with PS. Does it over-smooth or sharpen at certain zoom levels? I don’t remember. If anything, its preview interpolation might be different from RT’s. I am sure that you took the precautions…

(Samuel Chia) #11

In Photoshop, I use only whole steps when zooming in. 100%, 200, 300 etc. and that uses Nearest Neighbour to scale the image up, thus ensuring ideal assessment of pixel detail. Fractional steps like 115% involve resampling which gives a distorted impression of sharpness. I steer clear of those.

For some reason, I still cannot love the denoise and sharpening quality that RT provides. Fortunately, this dual-demosaic is not tied to requiring sharpening also be activated, just that one needs it turned on if you wish to preview the contrast mask. I also feel that Lightroom’s demosaic is subtly less artifacty than VNG4, especially on high frequency detail, but also in smooth regions. It certainly isn’t as nice as AMaZE, DCB or RCD on high frequency detail, but this might mean the transition from one of the sharp demosaicing interpolators to a further-refined VNG4 could be even smoother. But perhaps Lightroom is already doing a hybrid demosaic trick, I do not know.

Since the masking algorithm is based on contrast, it should be possible to have arbitrary ballpark values that would invariably be ideal or near ideal in most situations, plus recommendations for low noise (7-10), moderate noise (20-30?) and high noise images (30+?). Once I work on a variety of images I expect I’d get a good feel of it, and probably would never ever touch the automatic checkbox. I’m just now playing with a noisy starry night sky image, and even so, I still find a threshold of 8 to provide the sweet spot to blend AMaZE or DCB with VNG4.

I love that the attenuation curve was changed from the blue one to the red one (see Ingo’s illustrations on the Github link). That was an excellent call. This kind of blending needs a sharp cut off and not a long roll-off when blending one into the other.

(Ingo Weyrich) #12

The algorithm tries to find a flat region, which is a region where the variance of the L (from Lab) channel is minimal.
It does that in up to two passes. First pass uses 80x80 tiles and if the min. variance of all tiles is <= 1, the threshold will be calculated from the tile with the min. variance.
If there is no tile with variance <= 1 in the first pass, a second pass with 40x40 tiles is done. In the second pass a minimum variance of 2 is set as the limit. If there is no tile with variance <= 2, the contrast threshold is set to 0. Additionally, the average L of the tiles is checked: if the average L of a tile is too low or too high, the tile will not be used to calculate the threshold.
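A rough sketch of that two-pass search follows. The tile sizes and variance limits come from the description above; the mean-L gating range and the function name are my guesses, and RT’s real code will differ in detail:

```python
import numpy as np

def find_flattest_tile(L, tile_sizes=(80, 40), variance_limits=(1.0, 2.0),
                       l_min=5.0, l_max=95.0):
    """Two-pass search for the flattest region of the L (Lab) channel.

    L is a 2-D array. Pass 1 scans 80x80 tiles with variance limit 1;
    if that fails, pass 2 scans 40x40 tiles with limit 2. Tiles whose
    mean L is too low or too high are skipped (l_min/l_max are
    illustrative values). Returns (variance, y, x) of the winning tile,
    or None, meaning the contrast threshold would be set to 0."""
    h, w = L.shape
    for tile, limit in zip(tile_sizes, variance_limits):
        best = None  # (variance, y, x) of the min-variance tile so far
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                t = L[y:y + tile, x:x + tile]
                m = t.mean()
                if m < l_min or m > l_max:
                    continue  # too dark or too bright: don't use this tile
                v = t.var()
                if best is None or v < best[0]:
                    best = (v, y, x)
        if best is not None and best[0] <= limit:
            return best  # the threshold would be derived from this tile
    return None  # no sufficiently flat tile found
```

On a synthetic image with one noiseless half, pass 1 immediately finds a zero-variance tile there; on an image that is noisy everywhere, both passes fail and the function returns None, matching the “threshold set to 0” case described above.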

As you see, there are some screws which can be turned in the code to improve the calculation of the threshold.

In your example file, the top-left pixel of the 80x80 tile with the min. variance from pass 1 is at y:1840 x:5520 and has a variance of 13.8.
The 40x40 tile with the min. variance from pass 2 is at y:1640 x:3000 and has a variance of 3.55.
Because 3.55 > 2, the threshold is set to zero.
Being less restrictive (e.g. using 4.0 as the barrier instead of 2.0 for pass 2) would use the tile from pass 2 for the threshold calculation and would result in a threshold of 12.

If you compile RT yourself, I can point you to the code or provide a patch with some console output.

Exif says it was shot at f/16.

(Samuel Chia) #13

Thank you for the detailed explanation of how the algorithm works. I think in most situations it would hold up just fine. Though I’m not sure why it failed and set the threshold to 0 for my starry sky photos. I suppose the super high local contrast of the individual stars and the sheer quantity of them (though probably not so in a 40x40 tile?) results in variance that is still higher than 1 and 2.

At some point I had figured out how to compile RT myself (just once), but I have entirely forgotten how I got to it since. I don’t think it would be useful for me to attempt to mess about with the code. I have zero programming background I’m sorry to say.

What is interesting however is that I think a threshold of around 8 is actually ideal for most of my images, even high ISO astrophotography, freeing me from the tedium of evaluating every increment for threshold.

Oops, my bad, I uploaded the file adjacent to the one I intended! The point was made anyway :slight_smile: