Before demosaicing - Color noise sensor calibration / profiling. Is this a good idea, or can we skip it?

Hello everyone,

First of all, I’d like to thank all the developers who have put so much time and know-how into Darktable. Unfortunately, I am not a programmer, nor do I have any expertise in color theory, etc. But since Darktable is an open-source project, I wanted to share an idea that has been on my mind for a while, and based on my research, it doesn’t seem to exist yet.

The main idea is to statistically average the individual R, G, and B pixels separately, before the demosaic step, across the respective ISO ranges, in order to achieve significantly better color consistency and less color noise. The module should also be designed so that users can apply it to their own camera in darktable.

My (naive) idea:
You photograph (ideally without a lens) a gray card or gradient card multiple times for a given ISO range (e.g., ISO 800).
Since the resolution and the distribution of the RGB pixels on the sensor and the “exposure” via the gray card are known (RAW), you can now statistically average each “color channel” separately on a pixel level. Due to the gray card, the structure of the image is also not altered.

This means, for example, in the red channel, you would only average the red pixels, while all non-red pixels (G+B) in the array are deleted. This should theoretically give you a statistically clean red channel on the pixel level. The same would be done for blue and green. Only after averaging would the demosaic process take place.
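
If I understand the proposal correctly, the per-channel split could be sketched roughly like this. This is a hypothetical numpy sketch assuming an RGGB Bayer layout; the function and variable names are my own invention, not darktable code:

```python
import numpy as np

def channel_means(raw_stack):
    """raw_stack: (n_shots, H, W) array of raw Bayer frames (assumed RGGB)."""
    mean_frame = raw_stack.mean(axis=0)   # per-pixel average over the shots
    r  = mean_frame[0::2, 0::2]           # red photosites only
    g1 = mean_frame[0::2, 1::2]           # first green photosite
    g2 = mean_frame[1::2, 0::2]           # second green photosite
    b  = mean_frame[1::2, 1::2]           # blue photosites only
    return r, g1, g2, b

# Usage: 10 synthetic "gray card" shots of a tiny 4x4 sensor
stack = np.full((10, 4, 4), 100.0)
r, g1, g2, b = channel_means(stack)
print(r.shape, float(r.mean()))   # each plane is 2x2, mean 100.0
```

Each color plane is averaged on its own, without ever mixing neighboring photosites, which is the point of doing this before demosaicing.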

In my view, this would effectively prevent color channel cross-talk and result in significantly improved color consistency (no color spots in the blue sky). Theoretically, this should even have the effect of improved luminance noise. I also think that many older cameras could benefit greatly from this. It could also be used to perfectly profile the vignetting of lenses…

As I mentioned… I’m not the right person for technical discussions, but maybe you can make something of this idea. :slight_smile:

I don’t speak English, so this was translated with ChatGPT.


It’s very very easy to remove noise.

The hard part is to remove noise without removing detail.


Have a look at the darktable user manual: raw denoise.

It may be possible to simply remove color noise. However, the problem is the resulting artifacts, such as purple blotches in a blue sky. (In my experience, this is very common with Sony 6000 series models; the D750 also had similar issues at times.)

While you do blur the color noise into a kind of “homogeneous mass,” you don’t actually achieve color neutrality across the surface. That’s why my idea is to profile all pixels per channel using a gray card, as a kind of correction mask for the individual RGB pixels (kind of like an Excel spreadsheet where the value in each cell is adjusted; before: 100, 85, 70, 120, 50… after: 90, 95, 88, 92, 91…).

My intention is not to correct luminance noise (details); that would just be a side effect that might occur.

I’m familiar with and have read the article on RAW denoising. If I understand correctly, it adjusts the respective contribution in the channels via “curves.” In my understanding, this only results in a global amplification or reduction within the respective channel. As described, this does not create uniformity in color reproduction. And as ISO increases, the problem is known to become more pronounced.
:slight_smile:

Hallo @Suki2019

Is your noise consistent on a per-pixel basis from shot to shot?

I don’t know Sony cameras, but perhaps there is a setting to record dark noise to help correct raw profiles. Such a procedure might include recording sensor data with the lens cap on.

Noise has been studied extensively for CMOS imaging sensors, well beyond the dt community. You might be interested in learning from astronomical photography. Noise removal techniques are part of Siril, for example. With static images, averaging over several shots helps, as does background subtraction.

For a general article see e.g. Sensor cal and color

With dt specifically, there is an excellent video from @rawfiner: Denoising with dt 2.6

It’s an old video, and the interface may have been updated, but the content of the video is “gold.”

I’m not a camera-noise expert, but I do know a little about the sensors. I can say with confidence that Sony’s sensor engineering and process technology are world-class, which may be why most of the world uses Sony imaging sensors.


That’s a good video that I have often shared. The bilateral denoise is now called surface blur in dt, and with careful tweaking, as shown in the video, it can be quite effective.


Hello Douglas,

Noise is never consistent; that much I’m aware of. However, with cameras that exhibit certain “color errors,” the areas where these occur tend to be the same. (Unfortunately, I don’t have any example photos at the moment; they are private.) The video you mentioned is very interesting and demonstrates quite well, using the ISO 51,000 example image, what I’m essentially trying to avoid, or at least significantly reduce, from the outset.

These extreme false colors that can appear at higher ISO levels could, in my view, largely be reduced with the approach I have in mind, by applying a kind of correction matrix that evens out the values. Not in the sense that all pixels would have the same value (like a gray card), but rather so that, for example, an array of 20 pixels would have relatively similar values across the sensor area.

As shown, more and more errors occur in sensors as ISO increases, either due to the A/D converter or the photodiodes themselves (amplification), or both. However, each sensor unit tends to behave similarly in this regard. In astrophotography, it also seems common practice to try to eliminate luminance noise and color noise (color shifts) by combining multiple exposures. However, this is not done at the pixel level of the sensors. And in darktable, there is no “accessible” module for that either.

This sounds like something that would be calibrated externally, like using a colour checker card in Colour Calibration module?

Ok, thank you.

I lack the experience to say whether the scheme you propose would work well for “fixed patterned noise” as discussed in the article. Others would need to comment.

To my understanding, with modern cameras high-ISO noise is dominated by sensor noise (dark current, photon noise, etc.*). So my personal approach is to try to collect more light so that a lower ISO is sufficient. With my MFT camera I’ve set an upper ISO limit of 3200. Larger sensors generally have better low-light performance.

  • (There are some fine details related to signal conversion and amplification, and dual-amp cameras exist to help bridge high and low ISO conditions. But I think that’s more of a detail than the core issue.)

Yes—just that this type of calibration (grey card) takes place before demosaicing, so that color shifts and false colors caused by readout errors or amplification (ISO) are minimized.

I would like to refine my idea a bit further and outline a possible workflow.
It is not intended to be a universal solution for all types of noise—that simply isn’t feasible.

The main difference (according to my research and understanding) compared to traditional flat-field calibration is that this method can be generated by the user, is specific to each camera, and can be applied explicitly before demosaicing. Flat-field calibration, on the other hand, is generic, not specific to individual sensors, and is sometimes applied only after demosaicing.

As already described, this module should be accessible to everyone, since it would need to be specific to each camera (different ISO levels, under- and overexposure, possibly even dark frames for astrophotography). However, I think that this could theoretically allow anyone to improve the maximum usable range of their cameras.

This example refers only to the green pixels, since they occur twice per Bayer block and could be treated as a reference channel. Logically, though, this approach would need to be applied to all color pixels.

You would take, at a fixed exposure, say 10 shots of a gray card.
This would allow you to calculate an average value for each pixel across those 10 images.

Because the pixel matrix is known, it becomes possible to determine which pixels are statistically underexposed, overexposed, or “normally exposed”.

Based on these average values (from the 10 shots), correction values could then be assigned to each pixel. This should enable a fundamental sensor calibration at the pixel level, resulting in a much cleaner and more color-accurate image—since this process would take place before demosaicing.
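
A minimal sketch of what such a per-pixel correction map might look like, assuming the error really is a fixed additive bias per photosite. All names here are hypothetical; this is not an existing darktable module:

```python
import numpy as np

def build_correction(gray_stack):
    """gray_stack: (n_shots, H, W) raw frames of a uniform gray card."""
    per_pixel_mean = gray_stack.mean(axis=0)   # average the shots per pixel
    target = per_pixel_mean.mean()             # frame-wide reference level
    return target - per_pixel_mean             # additive correction map

# Usage with a synthetic 4x4 "sensor" that has a fixed per-pixel bias
rng = np.random.default_rng(0)
fixed_bias = rng.normal(0.0, 5.0, size=(4, 4))             # systematic error
stack = np.repeat((100.0 + fixed_bias)[None], 10, axis=0)  # 10 gray-card shots
corr = build_correction(stack)
flattened = (100.0 + fixed_bias) + corr                    # correct a new frame
print(bool(np.allclose(flattened, flattened.mean())))      # prints True: bias gone
```

In this toy case the correction works perfectly, but only because the bias was constructed to be constant from shot to shot; that is exactly the assumption being questioned later in the thread.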

Yes, errors will still occur in subsequent real-world images, but they should be significantly reduced.

Higher ISO ranges in particular should benefit greatly, as fewer false colors should appear. It might also improve luminance noise.

This should not have any significant impact on actual image detail, since it is essentially just a uniform averaging process.

I would program it myself if I had the necessary know-how—if only to test whether it actually works.

From what I understand of your proposed algorithm, you want to apply a correction to each pixel individually.
The correction for each pixel is computed as the difference between an average value and the actual pixel value, based on multiple shots of a gray card.

In other words, the correction you would have to apply to a specific device pixel at a given ISO would be a constant.

So, I have a question for you: what makes you believe that this correction is a constant ?


The proposed method might work if, for a given pixel, the error is constant. Even then, you still have to separate that error from the random noise that is always present as well.
So at least you would need a number of images to average (and reduce, not eliminate, the random noise).
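
This averaging behaviour is easy to check numerically. A quick simulation, assuming purely Gaussian random noise, shows the standard deviation dropping by roughly the square root of the number of frames, and never reaching zero:

```python
import numpy as np

# 16 frames of a flat 100-DN scene with Gaussian noise of sigma = 10
rng = np.random.default_rng(42)
sigma = 10.0
frames = rng.normal(100.0, sigma, size=(16, 256, 256))

avg = frames.mean(axis=0)                 # average the 16 frames per pixel
print(round(float(frames[0].std()), 1))   # close to 10.0 (one frame)
print(round(float(avg.std()), 1))         # close to 10/sqrt(16) = 2.5
```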

However, with a recent camera, the colour spots you see in e.g. a blue sky are not due to systematic errors in the sensor pixels, but to random (“shot”) noise. So in different images the spots will be in different locations… The method you propose will not reduce the colour noise.
And high-ISO pictures will not particularly benefit, as it’s the random part of the noise that increases with ISO.

To be more technical: the level of random noise is the square root of the number of photons captured; at high ISO you capture fewer photons, so the relative value of the noise is larger: for 100 photons, the noise level is 10 (= 10%); for 10 000 (+6.5 EV or so), the noise level is 100 (= 1%).
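
The square-root law above can be illustrated with a small Poisson simulation (a sketch with synthetic photon counts, not camera data):

```python
import numpy as np

# Photon ("shot") noise is Poisson-distributed: std = sqrt(mean),
# so the relative noise is 1/sqrt(mean photon count).
rng = np.random.default_rng(1)
rel = {}
for photons in (100, 10_000):
    samples = rng.poisson(photons, size=100_000)
    rel[photons] = samples.std() / samples.mean()
    print(photons, "photons:", round(100 * rel[photons], 1), "% relative noise")
```

The printed values land very close to the 10% and 1% figures quoted above.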


There is one special case where a correction for systematic pixel errors is available: hot pixel correction. But that works usually by taking pictures with the lens cap in place, under the same conditions as the shots to be corrected. You then subtract the black image from the shots to be corrected.
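
Dark-frame subtraction as described can be sketched in a few lines; this is a toy example with a single synthetic hot pixel, not real raw data:

```python
import numpy as np

def subtract_dark(light_frame, dark_frame):
    """Subtract a lens-cap 'black' frame from a normally exposed frame."""
    corrected = light_frame - dark_frame
    return np.clip(corrected, 0, None)   # raw values cannot go negative

# Usage: a hot pixel at (1, 1) appears in both frames and cancels out
dark = np.zeros((3, 3)); dark[1, 1] = 500.0          # lens-cap "black" frame
light = np.full((3, 3), 80.0); light[1, 1] += 500.0  # same hot pixel + scene
result = subtract_dark(light, dark)
print(result)   # uniform 80.0 everywhere, hot pixel removed
```

This only cancels errors that are reproducible under matching conditions (same exposure time, temperature, ISO), which is why the dark frames must be taken alongside the real shots.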


Some individual cameras might have unstable photosites: hot or cold sites caused by defects in the sensor or the amplifier.

Personally, I have never seen such examples in real life.

Usually the vendors know the level (black point) below which errors increase drastically, so we keep that value, and while demosaicing we mostly ignore signals below that threshold to reduce noise.


I believe that it is a statistical constant, since I am assuming an averaged reference (gray card). I’m sure that a sensor, in its individual pixels—even if they reacted 100% identically on an electronic level and were exposed to uniform photons—could never be completely consistent due to manufacturing tolerances. That’s why I assume this could work as a correction before demosaicing in order to minimize errors.

But as I already mentioned… I don’t have any professional expertise in this field.

The issue with the color shifts (purple spots) was—at least on my Sony A6000—always in the same locations. Curiously, they became less noticeable as the ISO increased. Also, this phenomenon occurs with some cameras and not with others. Therefore, I think it may well be related to how the sensor or its electronics process and respond to the signal. That’s why I believe such a “matrix” could work.

Could you share such a raw file?

As I mentioned above, unfortunately, these are photos I can’t post (private back-to-school photos). I didn’t have the camera for very long. But I’ll try to find one or two samples where there’s a lot of sky visible. I can’t promise anything, though.