Diagonal interpolation correction artifacts with AMaZE Demosaicing

I couldn’t see anything worth noting even with unitary multipliers, but maybe my eyes are not trained for this kind of stuff… I must confess that I typically don’t look at pictures at 500% zoom :wink:
Anyway, for dcraw this would not be a problem in practice, because (I suppose) nobody really wants to use UniWB for the final output, right?

Well, I don’t know how to read the code, but I suspect that RT does much more in the background than dcraw would (or at least does it differently), so just inserting RCD where AMaZE, etc., are in the pipeline might not work. That is the downside of otherwise feature-rich software that has had much development from many devs. Just by looking at the raw tab, I get a bit overwhelmed. I mean, look at it (with the Neutral processing profile; BTW, is it truly neutral?).

My question earlier still applies. If I were to compare dcraw RCD with RT AMaZE, which settings should I choose in order to make an apples-to-apples comparison, apart from the demosaicing method? Perhaps it was already answered and I missed the point. Pretend I know nothing, which is mostly true. Thanks.

Just neutral profile

For dcraw-rcd, first I opened the raw file with PhotoFlow, told it to use the default dcraw matrix and to output the image still in the camera profile. Then I opened the PhotoFlow output in GIMP-CCE and saved the embedded camera input profile to disk (which GIMP makes easy to do, though you could also use various command line utilities).

Then I processed the raw file with dcraw-rcd, using these parameters:

dcraw-rcd -v -w -q 4 -4 -T -o 0 -H 0 rawfile.ext

which uses the raw file white balance and saves the image as “raw color”. Then I opened the dcraw output TIFF with GIMP-CCE and assigned the camera input profile that I had saved to disk from the PhotoFlow version of the interpolated raw file.

Well, that’s roundabout for dcraw-rcd, but it avoids having dcraw do any color management.

In RawTherapee I used the neutral profile as already specified by @heckflosse , then set the working space profile to Rec.2020 (not sure if that matters), and output in RT’s Rec.2020 color space, then opened the RT version with GIMP-CCE, promoted both the RT and the dcraw-rcd versions to 32f, and pulled them both into the same layer stack.

That’s probably way too much detail :slight_smile:
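If anyone would rather quantify the difference than eyeball it in GIMP, here is a rough little C program (my own sketch, not part of the workflow above; the file name diff16.c is made up) that reports the maximum per-channel difference between two same-size 16-bit interleaved RGB TIFFs, assuming libtiff is installed:

/* diff16.c - minimal sketch: maximum per-channel difference between
 * two 16-bit interleaved RGB TIFFs of identical size.
 * Build with something like: cc diff16.c -ltiff -o diff16 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <tiffio.h>

int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: %s a.tif b.tif\n", argv[0]); return 1; }

    TIFF *a = TIFFOpen(argv[1], "r"), *b = TIFFOpen(argv[2], "r");
    if (!a || !b) { fprintf(stderr, "cannot open input files\n"); return 1; }

    uint32_t aw, ah, bw, bh;
    TIFFGetField(a, TIFFTAG_IMAGEWIDTH, &aw);  TIFFGetField(a, TIFFTAG_IMAGELENGTH, &ah);
    TIFFGetField(b, TIFFTAG_IMAGEWIDTH, &bw);  TIFFGetField(b, TIFFTAG_IMAGELENGTH, &bh);
    if (aw != bw || ah != bh) { fprintf(stderr, "image sizes differ\n"); return 1; }

    uint16_t *ra = malloc(TIFFScanlineSize(a));
    uint16_t *rb = malloc(TIFFScanlineSize(b));
    long maxdiff[3] = {0, 0, 0};

    for (uint32_t y = 0; y < ah; y++) {
        TIFFReadScanline(a, ra, y, 0);
        TIFFReadScanline(b, rb, y, 0);
        for (uint32_t x = 0; x < aw; x++)
            for (int c = 0; c < 3; c++) {
                long d = labs((long)ra[3*x + c] - (long)rb[3*x + c]);
                if (d > maxdiff[c]) maxdiff[c] = d;
            }
    }
    printf("max abs difference R/G/B: %ld %ld %ld (out of 65535)\n",
           maxdiff[0], maxdiff[1], maxdiff[2]);

    TIFFClose(a); TIFFClose(b);
    free(ra); free(rb);
    return 0;
}

It doesn’t check bit depth or planar configuration, and - as with the layer-stack comparison - both files need to be converted to the same color space first, or the numbers are meaningless. But a quick “max difference of a few counts out of 65535” is a handy sanity check before spending time pixel peeping.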

Well, I stand corrected. The image I chose as a test image was shot using what is supposed to be a very nice high end lens, attached to my Sony A7 by the store attendant before I purchased the camera - I think using an adapter - I really don’t remember what lens it was. I’ve never actually looked at the image before today, other than on the LCD preview in the store. Wow, the image is full of rather extreme chromatic aberration.

Interpolating using UniWB - with or without doing any chromatic aberration correction - and then white balancing in post turns the chromatic aberrations into a technicolor display, especially along wires strung between poles; it is worse if no CA correction is done, but still pretty bad even if CA correction is done.

Interpolating using daylight white balance (it was a blue-sky, bright-sunshine day) without CA correction results in far less noticeable CA, and applying CA correction during interpolation makes it mostly disappear.

I think I’ll go check some images that were shot using my own lenses.

BTW, one more thing: so far we are doing our dcraw vs. RT comparison of RCD demosaicing using one image only, unless I missed part of the thread (could be). It might be worth enlarging our data set to see whether the trend is clear…

@agriggio That was what I was planning to do in my spare time, but I have been trying to figure out how to do it properly. I will follow the advice of @heckflosse and @Elle and see how it goes.

I would prefer to use a clean Pentax pixel shift raw file to compare. Why? Because it will allow us to see the truth independently of the demosaicers we choose. I will have a look to see whether I have one I can provide.

Here’s a Pentax Pixel Shift file (caution, it’s a big file). Open it in RT and apply the attached .pp3 file.

demosaic.pp3 (10.8 KB)

That will show you how it should look without artifacts introduced by demosaicers.
Then change the demosaicer in RT to inspect for differences.
It’s not a perfect file because it does not cover all aspects, but still…

Ingo

@heckflosse I followed the instructions from your latest post for RT (demosaic.pp3) and @Elle’s for dcraw (RCD). Both are rather large images. I wonder what people would be interested in seeing here?

PS Wow, it is taking forever to load into GIMP-CCE, since my system is low-end.

PPS @Elle You mentioned using the embedded camera input profile for the dcraw-rcd output and Rec.2020 for the RT output. Since the profiles are different, should I convert the image with the embedded profile to Rec.2020 before making comparisons?

Another example illustrating the output divergence. Sorry for the GIF quality, but it’s good enough to see the differences:

RT RCD with the Neutral profile after a clean installation, nothing else changed apart from the demosaicing method. The dcraw version was developed with:

dcraw -v -T -6 -W -H 2 -w -o 1 -q 4 DSC_0934.NEF

And the raw file, downloaded from RT’s own repository some years ago:

DSC_0934.NEF (5.8 MB)

I have checked the Pentax raw file @heckflosse shared. There are minimal differences in most parts of the image, but the same issue arises near the highlights (upper right corner), and there are also differences in the interpolation direction in areas with texture.

Hi @afre - I pulled both images into the same layer stack, which automatically converts the pulled-in layer(s) to the color space of the destination layer stack. It’s easier to compare if the images are all in the same layer stack, though waiting for the view to refresh can take up a lot of time. I cropped that very pretty mountain image to just a portion of the skyline, to make screen refreshes faster. I also experimented with applying a simple USM to the various versions to see how much any artifacts were amplified; for this the images need to be in the same color space, or at least the color space TRCs need to match (both linear or both gamma=2.2 or whatever - for USM the TRC makes a big difference).

Hi @LuisSanz,
Today I was sick at home, so I had some time to play with this… here’s what I found.
I think the differences are not due to WB, but rather to the way dcraw and RT deal with clipped highlights. Here is a patch to your dcraw.c that enables a further highlight mode -H 10, which tries to emulate what RT does (when highlight reconstruction is turned off):

diff --git a/dcraw.c b/dcraw.c
index ca28559..f30793b 100644
--- a/dcraw.c
+++ b/dcraw.c
@@ -10206,6 +10206,29 @@ next:
       else
 	ahd_interpolate();
     }
+
+    if (!is_foveon && highlight == 10) {
+        float dmin, dmax, q;
+        unsigned i, c, size;
+        int val;
+        
+        for (dmin=DBL_MAX, dmax=c=0; c < 4; c++) {
+            if (dmin > pre_mul[c])
+                dmin = pre_mul[c];
+            if (dmax < pre_mul[c])
+                dmax = pre_mul[c];
+        }
+        q = dmax / dmin;
+
+        size = iheight*iwidth;
+        for (i=0; i < size*4; i++) {
+            if (!(val = ((ushort *)image)[i])) continue;
+            val = ((float)val) * q;
+            ((ushort *)image)[i] = CLIP(val);
+        }
+        highlight = 0;
+    }
+    
     if (mix_green)
       for (colors=3, i=0; i < height*width; i++)
 	image[i][1] = (image[i][1] + image[i][3]) >> 1;

Here are three crops of your test image above, developed respectively with:

  • dcraw -v -T -4 -H 0 -W -o 1 -a -q 4 -c DSC_0934.NEF

crop-dcraw-h0

  • dcraw -v -T -4 -H 10 -W -o 1 -a -q 4 -c DSC_0934.NEF

crop-dcraw-h10

  • rawtherapee-cli -Y -tz -s -c ~/Downloads/DSC_0934.NEF
    (neutral profile, RCD demosaicing)

crop-rt

It seems to me that the dcraw image with -H 10 is very close to the RT one. What do you think? The difference between -H 0 and -H 10 is in how the raw values are scaled to normalize them to the range [0, 65535]: -H 0 clips more aggressively before demosaicing, whereas -H 10 scales more conservatively before demosaicing and then rescales the values afterwards. This is closer to what RT does. As to why this has an impact on your algorithm, well, I have no idea :slight_smile: but I hope this is helpful…
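To make that concrete, here is a tiny self-contained toy example (mine, not dcraw code; the multipliers and the white level are invented) showing what value a bright red photosite presents to the demosaicer under the two modes, following the description and the patch above:

/* Toy illustration of the -H 0 vs. -H 10 scaling described above.
 * The white-balance multipliers and white level are made up. */
#include <stdio.h>

static unsigned short clip16(float v) { return v > 65535.0f ? 65535 : (unsigned short)v; }

int main(void) {
    float pre_mul[4] = {2.1f, 1.0f, 1.6f, 1.0f}; /* hypothetical R, G1, B, G2 multipliers */
    float maximum    = 16383.0f;                 /* hypothetical 14-bit white level */
    float raw        = 15000.0f;                 /* a bright red photosite, below the white level */

    float dmin = pre_mul[0], dmax = pre_mul[0];
    for (int c = 1; c < 4; c++) {
        if (pre_mul[c] < dmin) dmin = pre_mul[c];
        if (pre_mul[c] > dmax) dmax = pre_mul[c];
    }

    /* -H 0 style: normalize to the smallest multiplier, so the red channel
     * overshoots 65535 and is clipped before demosaicing. */
    unsigned short h0_before = clip16(raw * (pre_mul[0] / dmin) * 65535.0f / maximum);

    /* -H 10 style (patch above): normalize to the largest multiplier, keep
     * headroom through demosaicing, then rescale by dmax/dmin and clip. */
    float h10_before         = raw * (pre_mul[0] / dmax) * 65535.0f / maximum;
    unsigned short h10_after = clip16(h10_before * (dmax / dmin));

    printf("value seen by the demosaicer:  -H 0: %hu    -H 10: %.0f\n", h0_before, h10_before);
    printf("value after final clipping:    -H 0: %hu    -H 10: %hu\n",  h0_before, h10_after);
    return 0;
}

With -H 0 the demosaicer already sees a clipped 65535 in the red channel, while with -H 10 it sees an unclipped value and the clipping only happens after the final rescale - which, if I understand the behaviour correctly, is closer to what RT’s pipeline does.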


Bingo, @agriggio! I was also sure it was related to the highlight scaling rather than to white balance.

Is RawTherapee applying that normalization before or after demosaicing? In your patch to dcraw, it seems to negatively affect not only RCD but also PPG and AHD.

It reminds me of the same kind of effect that occurs when the white balance is applied after demosaicing.

Thank you very much for your findings and I hope you get well soon.


after

WB is also applied after demosaicing in RT. It wouldn’t be trivial to change this, I’m afraid…
Do you think there’s a way to compensate for this in RCD?

Note that something similar also happens in the original dcraw when you use a nonzero value for -H.


Hmm, I’m turning in my pixel peeping credentials. It wasn’t the lens, it was the bright sunlight. The artifacts from using UniWB for interpolating the raw file are much less noticeable for images shot under incandescent light. I wonder how obvious these artifacts might be for images shot under complete overcast skies.

At 400% and 800% zoom the artifacts become easy to see, even for images shot under incandescent light, once there is something to compare to. But - as I’m sure everyone else already knew :slight_smile: - there really are artifacts from using UniWB as the white balance for interpolating the raw file: noisier shadows (color noise) and the wrong color and brightness for highlight pixels and along edges. In the future I’ll use the correct white balance for the raw files even if it means outputting multiple versions for mixed lighting.


Folks, I think I read some time back, probably on Pixls, that it’s better to do the white balancing along with the demosaicing, rather than afterwards. If you want the best quality, then perhaps one should use dcraw and input the factors. But how do you know what they are? Can you use RT to set the white balance WYSIWYG, and somehow obtain the factors represented by the temperature and tint? I can’t see an obvious place where this is shown. Looking at the EXIF in a JPEG produced by RT, there is “ColourBalance” at “10, 798, 1024, 1024…” and “MeasuredColour” at “12, 442, 1024, 1024…”, but these don’t mean much to me, and when I expand the pane, the “…”s remain.

@Elle, I’m confused when you say

Why would you do this - wouldn’t you produce a green image? But I might be confused!
Surely one would use the appropriate factors for the interpolation/demosaicing, not (1,1,1,1)?

@RawConvert if you are interested, the RT issue mentioned above is worth reading. TL;DR: the current method in RT should work just fine in most cases, with the added benefit of not requiring any change :slight_smile:

I should also add that just enabling one step of false colour suppression gets rid of most of the artifacts you see above for RCD in RT.

I see. I also think it’s more efficient for software with a GUI to apply the white balance after the demosaicing. Otherwise, every change in temperature or tint would need a full reinterpolation of the raw data, or would require setting up a rather complex system that is probably not worth the effort.

Any non-balanced scaling of the RGB channels after the interpolation would result in that kind of color misalignment near edges, since the demosaicer cannot take that later scaling into account in order to smooth it. I’ll check whether I can mitigate the issue in RCD.
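To spell out what “non-balanced scaling after interpolation” means in practice, it is essentially a loop like the following over the already-demosaiced image (a rough sketch; the buffer layout and multiplier values are made up, and it is the white-balance analogue of the q rescale in the patch above):

/* Rough sketch: applying white-balance gains after demosaicing.
 * Buffer layout and multiplier values are invented for illustration. */
#include <stdio.h>
#include <stddef.h>

static unsigned short clip16(float v) { return v > 65535.0f ? 65535 : (unsigned short)v; }

/* image: interleaved 16-bit RGB, width*height pixels */
static void wb_after_demosaic(unsigned short *image, size_t width, size_t height,
                              const float wb_mul[3])
{
    for (size_t i = 0; i < width * height; i++)
        for (int c = 0; c < 3; c++)
            image[3 * i + c] = clip16((float)image[3 * i + c] * wb_mul[c]);
}

int main(void) {
    /* two demosaiced pixels: one mid-tone, one near the clipping point */
    unsigned short px[6] = { 30000, 32000, 31000,   60000, 61000, 60500 };
    const float wb_mul[3] = { 2.1f, 1.0f, 1.6f };  /* invented daylight-ish gains */

    wb_after_demosaic(px, 2, 1, wb_mul);
    printf("%hu %hu %hu   %hu %hu %hu\n", px[0], px[1], px[2], px[3], px[4], px[5]);
    return 0;
}

Because the red and blue gains are usually well above 1, whatever the interpolation left in those channels near edges and clipped highlights gets stretched (and clipped) more than green - in the toy example above the second pixel comes out as 65535 / 61000 / 65535 - and the demosaicer never had a chance to smooth that out.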


There are two very different issues here:

On the one hand, people will say “You can’t white balance an image after it’s been interpolated.” This is not true. Whether you interpolate using UniWB or use the actual white balance that you want the final image to have, as long as the image is still in the camera input profile you can change the white balance after it’s been interpolated, and the colors will look just fine. Occasionally I have found it convenient to white balance an image after it’s been interpolated instead of before, for example when the lighting was mixed and I wanted to apply two different white balances.

On the other hand, not white balancing before interpolating can produce artifacts, which I didn’t realize was the case. The artifacts aren’t always obvious. My studio lighting, such as it is, is tungsten lighting, and when using tungsten white balance the artifacts from not white balancing before interpolating are not very noticeable. But if the desired white balance is daylight white balance, the artifacts are considerably more noticeable.

So at 400% zoom for tungsten white balance, I didn’t notice anything amiss until I had the “white balanced before interpolation” image to compare to the “white balanced after interpolation” image. But at 400% zoom for daylight white balance, the artifacts are glaringly obvious.

I’m guessing it’s the high red multiplier required for daylight white balancing that is key? But this is just a guess. Maybe there is a pre-interpolation white balance that is ideal regardless of the desired output white balance? What about images shot under completely overcast skies vs. images shot under daylight/sunlight? As I don’t know what causes the artifacts, I’m not able to generalize.