Problems measuring camera SNR with darktable and RawTherapee

As you are not using a dev build, you cannot have used the new raw preprocessing white balance in the Raw tab, even though you used Camera white balance in the white balance tool. That can make quite a difference depending on your subject.

There are some subjects where the raw preprocessing auto white balance fails. In the 5.8 dev build you can now set the raw preprocessing white balance to Camera.

For reference:

Hint: It is easy to obtain and use a dev build. Generally safe to use too. See Release Automated Builds · Beep6581/RawTherapee · GitHub.


I downloaded the dev build of RT 5.8-2178-g6cc9537ab

It definitely made a difference, but there still is a problem with the RT output compared to darktable or other raw converters.

Here is the plot of averaging the 16 samples of 200 x 200 for ISO 100

It is better behaved than it was before, but nothing like it should be.

Here is a plot of the SNR for the 16 sub-regions of 200x200 pixels

This does not look as bad as it did previously, but it is still problematic with respect to the averaging behavior. The simplest diagnosis is that there is some sort of correlated noise in the RT converted file.

Here is the sidecar if you want to check my settings:

_04A4126.tif.out.pp3 (13.5 KB)

Here is a 16-bit-per-component TIFF of the 800 x 800 pixel portion at the center of the image (which includes all 16 of the 200 x 200 sub-regions).

middleiso100rt.tif (3.7 MB)
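In case anyone wants to reproduce the per-patch numbers from that crop, here is a rough sketch of the statistic I am plotting (an illustration only, not my actual analysis code; it assumes numpy and tifffile are installed and that the TIFF loads as an 800 x 800 x 3 array):

```python
# Per-patch SNR = mean / standard deviation for each of the 16
# non-overlapping 200 x 200 patches in the 800 x 800 crop.
import numpy as np
import tifffile

img = tifffile.imread("middleiso100rt.tif").astype(np.float64)
green = img[..., 1]  # look at one channel, e.g. green

snr = []
for row in range(4):
    for col in range(4):
        patch = green[row * 200:(row + 1) * 200, col * 200:(col + 1) * 200]
        snr.append(patch.mean() / patch.std())

print(np.round(snr, 1))  # one SNR value per 200 x 200 sub-region
```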

The raw file is the same as posted previously.

So, my conclusion is that there is something wrong with how RT is processing that particular raw file.

Investigation of your pp3 file shows three things:

  1. You have automatic CA correction enabled. Is that really what you want if you’re interested in the camera SNR?
  2. You use a LCP file and correct for geometry and vignetting. The geometry in particular can distort pixel values (maybe unnoticeable by eye, but maybe visible when doing calculations).
  3. You have resize enabled? Why?

Taking these three things into account, I’m even surprised the curves are not deviating more than they are…

You are right, and it was a bad formulation on my part.
There were two ideas in the same statement: that a linear output profile should be used, and that output profiles should also be compliant with the spec.
ProPhoto (ROMM) is not linear and its TRC is not the one used in RawTherapee.

@Thanatomanic seems to have nailed it. If the objective is “to measure my camera signal to noise ratio”, then you want to look at raw values. No white balancing, no un-distorting, no demosaicing, no nothing.

Then the tool used (dcraw, darktable or RawTherapee) should make no difference. The results should be identical.

If you include, say, a demosaicing step and the tools have identical demosaicing, then the results should also be identical. But you won’t be measuring the camera SNR.

I agree that SNR should be the same for any raw converter. That is the problem!

That was resolved by having the right settings for RT. But then I found that the RT 5.8 output had problems, which are clearly due to bugs in RT.

That’s clear because the dev version - which apparently has a new approach versus the 5.8 release version - greatly improved the situation.

I did not realize that I had resize enabled for RT. I will now rerun without it, but it would be very strange if that was the cause because the image was not actually resized.

I had the camera / lens correction and CA enabled as a holdover from previous tests. But they are also enabled for most of the other raw converters I tried, including darktable and CaptureOne. Not for dcraw, of course, because it does not support that.

I left them because I wanted to test camera SNR in a realistic setting - i.e. one that I would actually use. But it seems extremely unlikely to be the cause of the bizarre behavior of the RT output, because CaptureOne and darktable don’t have the problem, but do have lens corrections enabled.

You will get numbers with your approach, but I doubt they’ll represent what you think they represent, after demosaicing, lens corrections etc.

For starters, the demosaicing combines several pixel values. The exact influence of neighbouring pixels depends on the local situation. That means that the measured noise in a given pixel will depend on its neighbours. Not what you want… The implied interpolations will tend to lower the noise, and so increase the apparent SNR.
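A toy illustration of that effect (a pure simulation, nothing to do with RT’s or darktable’s actual code; it assumes numpy and scipy):

```python
# Averaging a pixel with its neighbours - which demosaicing interpolation
# effectively does - lowers the per-pixel noise and therefore raises the
# apparent SNR, while also making the remaining noise spatially correlated.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
flat = 1000.0 + rng.normal(0.0, 10.0, size=(400, 400))  # flat field + noise

print("noise before:        ", flat.std())                           # ~10
print("after 3x3 averaging: ", uniform_filter(flat, size=3).std())   # ~10/3
```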

Then, you use lens corrections. Those deform the image, which implies another interpolation, messing up the noise measurements again.

If you really want to get the camera (sensor) SNR, you’d have to use something like darktable’s RGB “demosaicing” (which doesn’t demosaic at all), or dcraw’s equivalent. You’ll still have to find a way to avoid any other operations that mess up the data (including white balance, raw black and white point, etc.).
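For example, something along these lines measures SNR on the untouched CFA values. This is just a sketch using the rawpy Python bindings (not one of the converters discussed here), and it skips black-level subtraction for brevity:

```python
# Per-CFA-channel SNR straight from the Bayer mosaic: no white balance,
# no demosaic, no lens corrections. For a rigorous number you would also
# subtract the black level first.
import numpy as np
import rawpy

with rawpy.imread("_04A4126.CR2") as raw:
    data = raw.raw_image.astype(np.float64)   # raw CFA values
    colors = raw.raw_colors                   # per-pixel colour index

    h, w = data.shape
    cy, cx = h // 2, w // 2
    crop = (slice(cy - 400, cy + 400), slice(cx - 400, cx + 400))  # central crop

    for c in np.unique(colors[crop]):
        vals = data[crop][colors[crop] == c]
        print(f"CFA channel {c}: SNR = {vals.mean() / vals.std():.1f}")
```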

Your current method will measure an SNR, but it will be the SNR after your processing chain with a specific lens (due to the lens corrections you apply). Nothing wrong with that if that’s what you want, but it’s not the “camera SNR”.

I am interested in “practical camera SNR” - meaning a measure of the baseline noise that has a clear relationship to the noise that I can expect to get in a photo.

I am not interested in a theoretical sensor metric that has little to do with a real photo.

Here is a practical example that motivated this. We know that a properly exposed ISO 100 shot will have less noise than a properly exposed shot of the same subject at say ISO 400, or ISO 1600.

How much more noise? My method will find that.

We also know that if we average 2 shots of the same subject that we should get an improvement in noise. How much? Theory tells us it should go roughly like Sqrt[N] where N is the number of shots. But it would be nice to verify that. Again, my method will measure that.
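As a sanity check of the Sqrt[N] expectation, here is a pure simulation (made-up numbers, assumes numpy):

```python
# Averaging N frames of independent noise should improve SNR roughly like sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
signal, sigma = 1000.0, 50.0  # arbitrary illustrative values

for n in (1, 2, 4, 8, 16):
    avg = (signal + rng.normal(0.0, sigma, size=(n, 200, 200))).mean(axis=0)
    print(f"N={n:2d}  measured SNR={avg.mean() / avg.std():6.1f}  "
          f"expected ~{signal / sigma * np.sqrt(n):6.1f}")
```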

Putting these together, how many ISO 1600 shots do you have to average to have the same noise (i.e same SNR) as one ISO 100 shot?

That is an answer that I want to get. And I have it for most raw conversion software, but there are weird anomalies with RT.
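For illustration, the arithmetic with purely hypothetical SNR values (not my measurements): averaging N frames scales SNR by Sqrt[N], so you need N = (SNR_ISO100 / SNR_ISO1600)^2 frames.

```python
snr_iso100, snr_iso1600 = 80.0, 20.0            # hypothetical placeholder values
shots_needed = (snr_iso100 / snr_iso1600) ** 2  # sqrt(N) * SNR_1600 == SNR_100
print(shots_needed)                             # 16.0 -> about 16 ISO 1600 frames
```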

I am re-running without the lens corrections, but unless there is a bug in the lens database or associated code, that should not matter much for a completely out-of-focus image of an evenly lit plain white card. Plus, the lens is a Zeiss macro lens, not some wild fisheye.


It doesn’t matter what you are doing. To make a proper comparison, you have to be rigorous. If one processor is doing more or less or different operations, then the comparison is moot. Also, consider how you are using SNR. Statistics are only as good as your methodology and reasons and context for your analysis.

In other words, there is a lot to take into account. Make a table or something of all of the processing steps done by each raw processor you use.

Allow me to give you a tangential example. @Elle, who used to frequent the site, has a website with lots of interesting articles and tutorials. They are dated but look into her comparisons of raw processors. See: Articles and tutorials on Color management. As you can see, you need to be quite thorough.

The old approach (automatic) is still needed for UniWB shooters.

I agree for this one as your pp3 resize setting was scale = 1

Raw CA correction in different raw converters works differently. Even raw CA correction in RT and darktable (though they use the same base algorithm) works differently, as RT by default does 2 iterations of raw CA correction.

On top of that, if I read your pp3 correctly, you used raw CA correction in combination with LCP CA correction. Use one or the other, but not both.

OK, here is a new round of tests. I was wrong about lens correction mattering (but I still think that when everything is correct it shouldn’t matter).

I tried 4 different settings for Raw Therapee 5.8 - 2178

  • With, and without lens correction
  • With and without white balance pre-processing

Here are the side cars

s1_04A4145.tif.out.pp3 (13.4 KB) s2_04A4126.tif.out.pp3 (13.4 KB) s3_04A4126.tif.out.pp3 (13.5 KB) s4_04A4126.tif.out.pp3 (13.5 KB)

Note I added prefixes “s1”, “s2”, etc. so they won’t collide in the same directory.

Here is a summary for those who don’t want to dive into the sidecars - these are the only things that are different.

Settings 1
LcMode=none
LCPFile=

[RAW Preprocess WB]
Mode=0

Settings 2
LcMode=none
LCPFile=

[RAW Preprocess WB]
Mode=1

Settings 3
LcMode=lcp
LCPFile=C:\ProgramData\Adobe\CameraRaw\LensProfiles\1.0\Zeiss\Canon\Canon (Zeiss Milvus 2_50M ZE) - RAW.lcp

[RAW Preprocess WB]
Mode=0

Settings 4
LcMode=lcp
LCPFile=C:\ProgramData\Adobe\CameraRaw\LensProfiles\1.0\Zeiss\Canon\Canon (Zeiss Milvus 2_50M ZE) - RAW.lcp

[RAW Preprocess WB]
Mode=1

As per posts above, the odd phenomenon occurs for a set of 16 samples, 200 x 200 each, from the center of the frame.

On darktable and other raw converters these are fine.
On RT they are fine for ISO = 200, 400, 800, 1600, 3200

But on RT for ISO 100 they act strangely.

Here is a plot of the SNR for each of the 16 samples, for each of the different RT settings




As you can see, they are radically different.

If we average these 16 sub-samples and look at the SNR of the averages we get graphs like this for settings 1, 2, and 4

And we get this for settings 3

The ISO 100 line (cyan blue) should increase roughly like Sqrt[N], where N is the number of frames being averaged.

This is exactly what happens for all of the other ISO examples, for all four of the RT settings.

Instead, for settings 1, 2, and 4, RT produces something that has large amounts of correlated noise which cannot be decreased by averaging.

For setting 3, RT produces something that has weird lumpiness to the averaging.

Here is my interpretation:

  1. Something is wrong with the RT output at ISO 100 for all of the RT settings. It should behave like the files for the other ISOs! There should be nothing about ISO 100 that causes problems.

  2. The lens correction, which I had thought was harmless, interacts with the white-balance preprocessing Mode = 0 in some very strange way to produce the weird case of Settings 3. It does not act that way for Mode = 1.

  3. Note also that I have the same lens correction with darktable and there seems to be no problem.

  4. While the way that I arrived at these results (trying to measure SNR at different ISOs by photographing an out-of-focus board) may seem weird and invalid to some of you, that really doesn’t matter here. Think of my measurements as an artificial test - like a canary in a coal mine - that is sensitive to the low-level noise structure of the converted file.

What I seem to have found is that RT has some very odd dependence on ISO, white balance and lens correction.

It could be that there is some RT setting which would make this all better, but I can’t think of what it is.

For example, why should ISO 100 be singled out as the weird one?

Here is a wild guess: RT is introducing non-random correlated noise of a certain amplitude. For ISO 200 and above, the camera noise dominates but for ISO 100 the mistakes are large compared to the native noise.
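To make that guess concrete, here is a toy model (made-up numbers, assumes numpy): if a fixed pattern of amplitude sigma_corr is identical in every frame, averaging only beats down the random part, so the SNR flattens out toward signal / sigma_corr instead of climbing like Sqrt[N].

```python
import numpy as np

signal, sigma_random, sigma_corr = 1000.0, 20.0, 10.0  # illustrative values only

for n in (1, 2, 4, 8, 16):
    total_noise = np.sqrt(sigma_random**2 / n + sigma_corr**2)       # fixed pattern survives averaging
    ideal = signal / np.sqrt(sigma_random**2 + sigma_corr**2) * np.sqrt(n)  # pure sqrt(N) behaviour
    print(f"N={n:2d}  with fixed pattern: {signal / total_noise:6.1f}   "
          f"pure sqrt(N): {ideal:6.1f}")
```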

Others on this forum are much more expert with RT than I am, and hopefully can tell me if there is a setting that I should try.

Alternatively, darktable seems to get the right answer no matter what. So I could just move on and use darktable, but it is significantly slower than RT.

I believe WB and lens corrections are implemented differently in RT. @jdc would know more about WB.

I would advise against applying lens corrections because they have a greater impact than you might realize from visual inspection alone.

Aside: One thought just came to mind. I don’t use dt but run it when curious about something or when I want to help somebody. I noticed that dt’s controls are quite conservative compared to RT’s; RT’s modules tend to have more of an effect on the image. It is more of a feeling than anything.

Do we have the original data (the raw) in here somewhere? It would help to confirm your stats - some of the above assumes RT is the one doing something different, but it could also be the one not doing something. Some of the graphs have scales which make differences appear larger, perhaps enough that quantization in your tiff could play a part?

Taking a 3x3-patch standard score, your tiff shows this (first channel; obviously I added the grid):


It looks like something periodic is happening, and it is not exactly aligned with your 200x200 grid (you need to click and view the full image to see that properly).
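For reference, one way to compute such a 3x3-patch standard score (a local z-score), as a sketch assuming numpy, scipy and tifffile:

```python
# Local z-score over 3x3 neighbourhoods; periodic structure in the noise
# shows up as visible stripes or blocks in this map.
import numpy as np
import tifffile
from scipy.ndimage import uniform_filter

x = tifffile.imread("middleiso100rt.tif").astype(np.float64)[..., 0]  # first channel

local_mean = uniform_filter(x, size=3)
local_var = uniform_filter(x * x, size=3) - local_mean**2
z = (x - local_mean) / np.sqrt(np.maximum(local_var, 1e-12))
```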

AMaZE demosaic processes images in tiles of 160x160 pixels, but I would not expect this to cause the issue, as darktable’s AMaZE code uses the same tile size.


Here is the raw file _04A4126.CR2 (29.0 MB)


Thanks for the raw file. I spotted one thing which might cause your issue. Your camera has two different greens. Amaze does not take this into account (only VNG4 does that).

Have a look at the screenshot. You can clearly see the typical pattern of the two different greens on the left. On the right I set Green equilibration to 1 to remove that pattern.
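If you want to check the two greens directly on the raw file, something like this works as a rough test (a sketch using the rawpy Python bindings; whether the two greens end up as colour indices 1 and 3 is worth double-checking for your camera):

```python
# Compare the mean level of the two green CFA channels in a central crop.
import numpy as np
import rawpy

with rawpy.imread("_04A4126.CR2") as raw:
    data = raw.raw_image.astype(np.float64)
    colors = raw.raw_colors

    h, w = data.shape
    cy, cx = h // 2, w // 2
    crop = (slice(cy - 500, cy + 500), slice(cx - 500, cx + 500))

    g1 = data[crop][colors[crop] == 1].mean()
    g2 = data[crop][colors[crop] == 3].mean()
    print(f"G1 = {g1:.1f}, G2 = {g2:.1f}, ratio = {g1 / g2:.4f}")
```

If the two means differ systematically, a demosaicer that treats both greens as one channel could turn that offset into a fixed fine pattern, which averaging frames would not remove.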

So, does that mean I can’t use AMaZE with this camera?

I have AMaZE set for darktable and it seems to work fine.

It would seem like a good feature to tell you if the demosaicing algorithm is incompatible with the camera!

Preprocessing - RawPedia could be a little more instructive. Maybe add a tooltip to the GUI too. @XavAL

Well, I’m not a developer, but I can ask the real developers to add a tooltip :wink: