Problems measuring camera SNR with darktable and Raw Therapee

I am trying to measure my camera's signal-to-noise ratio (SNR) using linear-gamma files converted from RAW with darktable and RawTherapee (RT).

I took a plain card and photographed it with a Canon 5D Mark IV. I made sure the card was very out of focus. I shot it at ISO 100, spot metering the card and using the camera's bracketing to give me a 7-shot sequence from -3 to +3 EV. That was to test linearity.

I used the spot meter on the card to shoot a sequence at ISO 100, 200, 400, 800, 1600, 3200 to test SNR.

I then wrote a program to calculate SNR as the mean divided by the standard deviation of an 800 × 800 pixel section at the center of the image.

To test linearity I took the mean of the same 800 × 800 center section from each image in the exposure bracket.
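For reference, the calculation is nothing fancy. A minimal sketch in Python with NumPy (the function names are mine, and it assumes the TIFF has already been loaded as an H × W × 3 float array):

```python
import numpy as np

def center_crop(img, size=800):
    """Take a size x size crop from the center of an H x W x C image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def snr(img):
    """Per-channel SNR = mean / standard deviation over the center crop."""
    patch = center_crop(img).reshape(-1, img.shape[-1]).astype(np.float64)
    return patch.mean(axis=0) / patch.std(axis=0)
```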

I made sure that I used settings for both darktable and RT that would be compatible with linear gamma output to a 32 bit per component TIFF file. I also did conversion with dcraw outputting to 16 bit per component TIFF.

Here is the linearity plot, which shows that yes, I am getting values that are linearly related to exposure time (all at ISO 100).

The strange thing being that while RT and darktable are both linear, they have very different values - RT is much brighter.

Note that Red, Green and Blue channel are plotted separately but the Red channel is hiding beneath one of the others.

Here are the SNR plots, which are even more problematic.

The problem being that RT has an SNR of about 100 at ISO 100, whereas darktable has an SNR of about 50 at ISO 100.

Obviously one should not get different SNR from two different RAW converters on the same file. Why is this happening?

I checked and no noise reduction was used in either one.

Both use the AMaZE demosaicing algorithm.

I assume that the reason has something to do with darktable being so much darker.

The shots were done with the spot meter and 0EV correction. So in principle that should yield a “middle gray”, not a super dark tone.

If I take the 800 × 800 center pieces and take the mean all the way down to a single pixel value, then darktable gives me {0.0886, 0.0884, 0.08875} for RGB, expressed as floats. That is very dark.

RT gives {0.305,0.305, 0.307}.

dcraw gives {0.0350, 0.0350, 0.0400}

So RT is giving me roughly 3.4X higher values than darktable, and 8.7 times higher than dcraw (blue channel off by more).

darktable is giving me roughly 2.5X higher values than dcraw (but the blue channel is off by more so there is a color shift).

A multiplicative factor does not affect SNR - it scales the mean and the standard deviation by the same factor.

An additive offset does change SNR: it shifts the mean but leaves the standard deviation alone. So basically RT seems to be adding a number to the values, while darktable does not.
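That scale-versus-offset reasoning is easy to check numerically (a quick synthetic sketch, not my actual measurement code):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.1, 0.01, size=100_000)  # synthetic signal with SNR = 10

snr = lambda v: v.mean() / v.std()

# A pure scale factor multiplies mean and stddev equally, so SNR is unchanged.
# A pure offset shifts the mean but leaves the stddev alone, so SNR rises.
print(snr(x), snr(3.4 * x), snr(x + 0.2))
```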

Can anybody help with suggestions of what settings to use? Or offer an explanation?

My guess is that there is some setting that I need to adjust for darktable (or for RT) to make this at least get in the ballpark of the same value.

Are you using the “Neutral” profile in RT?

Here is what I am using:

Color Management tab:

Input profile = Camera standard

Working profile = WideGamut
Tone response curve = None

Output profile = RTv4_Large

I don’t see “Neutral” as a choice for any of these. Perhaps Neutral is a different type of profile, for a different tab?

Neutral is in the Exposure tool.

I don’t have anything turned on in the Exposure tool. I don’t see a place for Neutral however.

Also, nothing in:

Shadows/Highlights
Tone Mapping
Dynamic Range Compression
Vignette Filter
Graduated Filter
Lab Adjustments

The place where I find a “Neutral” option is the Contrast tool.

However, I don’t have it turned on at all.

This?

[screenshot]

Also consider exporting a reference image.

[screenshot]

Sorry, I am new to using Raw Therapee

The processing profile is Custom - i.e. the one that I made that has most things turned off.

When I put it to Neutral, it changes the working color space to ProPhoto, and it changes the output color space to RTv4_sRGB which does not sound linear to me.

What does one do with a reference image?

Sorry to be so dense! I am a new user to RT.

My suggestion is to toggle the button on the left and choose (Neutral). That will set everything to a baseline. Then change whatever you need, make and save a custom profile so that you can go back to it later. Rawpedia is RT’s manual. To read about the reference image, see Color Management - RawPedia.

OK, thanks!

I think you have stumbled upon a piece of nonsense in RawTherapee.
RTv4_Large (which is in fact ProPhoto) and all the wide-gamut profiles that should be linear actually have a gamma of 2.4. That is very misleading.
This is also true of RTvx_ACES-AP0 and RTvx_ACES-AP1; that is, they are NOT ACES profiles.

All the output profiles should be compliant with the profile specs, and if you want a specific profile, you can generate one with the RT tool.

If you want a linear output profile, the easiest way is to copy one of the linear (g10 suffix) profiles from Elle Stone into ./iccprofile/output.
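For example (the profile filename and destination are illustrative; the exact directory depends on where RawTherapee is installed):

```shell
# Copy one of Elle Stone's linear ("g10" = gamma 1.0) profiles into
# RawTherapee's output-profile directory so it appears as an output choice.
cp LargeRGB-elle-V4-g10.icc ./iccprofile/output/
```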


I will try that but even now it does seem to pass the linearity test…

RTv4_Large is not linear unless you modified it:

[screenshot]

You are right, of course, and I got the Elle Stone profiles.

I re-did them using the LargeRGB linear profile for each one.

The linearity test looks good (although oddly, it did previously as well).

The SNR plot now makes more sense

With Raw Therapee and darktable now giving very similar results.

Which leads to a new mystery… I subdivided the 800 × 800 pixel area that I used to test SNR into 16 sub-images, each 200 × 200 pixels.

I used these to test the change in SNR with averaging, by averaging 2, 3, 4, … up to 16 of the squares. The results for darktable look good

Which shows the typical increase in SNR, going roughly like Sqrt[N] when averaging N images.
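That Sqrt[N] behaviour is just the statistics of averaging independent noise: the mean is unchanged while the noise stddev drops by the square root of N. A quick synthetic check (a sketch, not the real patch data):

```python
import numpy as np

rng = np.random.default_rng(2)
snr = lambda v: v.mean() / v.std()

# 16 independent noisy "patches" sharing the same true signal level.
patches = rng.normal(0.3, 0.03, size=(16, 200 * 200))

base = snr(patches[0])
for n in (1, 4, 16):
    avg = patches[:n].mean(axis=0)   # pixel-wise average of n patches
    print(n, snr(avg) / base)        # grows roughly like sqrt(n)
```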

But this is what I get for RT

ISO 200 and up work just fine, but there is something wrong with the ISO 100 case - it is very lumpy.

Visually there is no difference, but here is a plot of the SNR for the ISO 100 case for each program. Each line is a channel as per its color.

So for some unknown reason the ISO 100 RAW file is interpreted very differently by RT than by darktable: RT has a much different distribution of SNR across the 200 × 200 pixel patches.

That’s still 40,000 pixels, so this should not be statistical fluctuations.

The plots for RT at ISO 200 and above look like the darktable one - i.e. there is some variation from sample to sample, but not so lumpy that it ruins the averaging curve.

Does anybody have suggestions as to why RT would do this?

I have plotted other raw converters and none of them shows the odd lumpiness.


The encoding function described here, ProPhoto RGB color space - Wikipedia, seems to indicate that it is clearly non-linear. Is this information wrong?
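The piecewise encoding given in that article (a 1/1.8 power curve with a short linear toe) transcribed as code, to show the non-linearity:

```python
def prophoto_encode(e):
    """ROMM RGB / ProPhoto encoding as given on Wikipedia: a 1/1.8 power
    curve, with a linear segment of slope 16 below E_t = 1/512."""
    Et = 1.0 / 512.0
    return 16.0 * e if e < Et else e ** (1.0 / 1.8)

# Doubling the linear-light value does not double the encoded value,
# so the encoding is non-linear.
print(prophoto_encode(0.2), prophoto_encode(0.4))
```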

@nathanm which demosaicer are you using? That would be my first guess as to why there might be a difference at low ISO.

I am using AMaZE in both darktable and RT

What I find surprising is that the curves for the three channels have the same shape for RT. One wouldn’t expect such a correlation across samples. So either there’s a difference in the demosaicing implementation, or RT might be doing some hard clamping.

@nathanm

In case you use an RT dev build, can you please test using this setting?
[screenshot]


Both darktable and RawTherapee should use the same input color matrix: set it to camera standard in RawTherapee (not the auto-matched one) and standard in darktable (not the enhanced matrix).

Set the preprocess white balance to “camera” in RawTherapee.

Make sure that highlight reconstruction and capture sharpening are unchecked in RawTherapee.

Indeed, depending on the demosaic algorithm, the degree of introduced correlation between channels can vary. Even if you do, e.g., the simplest bilinear demosaic, which introduces no correlation between channels, you will not get a true SNR value: for example, in the G channel half of the samples will have the original standard deviation of the noise, while the other (interpolated) half will have 1/2 of that stddev and be correlated with the first ensemble of samples (in that channel only, though). SNR should ideally be measured on the raw CFA values directly, before white balance and before demosaicing (and also before black- and white-level clipping, to avoid additional bias).
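The G-channel effect of bilinear interpolation is easy to see in a toy simulation (pure noise, no signal, and a simplified geometry in which every inner site is interpolated from its 4 neighbours):

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, size=(400, 402))  # pure-noise "sensor" field

# A directly measured sample keeps the original stddev; a sample
# interpolated as the average of its 4 neighbours has variance
# 4 * (1/4)^2 = 1/4, i.e. half the stddev.
measured = noise[1:-1, 1:-1]
interpolated = 0.25 * (noise[:-2, 1:-1] + noise[2:, 1:-1]
                       + noise[1:-1, :-2] + noise[1:-1, 2:])

print(measured.std(), interpolated.std())  # second is about half the first
```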

I am not using a dev build of RT. The “camera” white balance option is what I have been using.

It’s true that the final pixels after demosaicing are correlated - that can’t be helped except by going to a different type of sensor like Foveon.

SNR measured the way I am measuring it ought to be a good measure of the overall process, even if it is not quite the same as the raw sensor SNR.