I see that many photos posted online are noise-free. When I started with darktable I tried to reduce the camera noise as much as possible, but I have since changed my mind: I realized that I quite like having a bit of noise in my pictures. It adds flavour to the picture! The only noise I try to reduce is the colour noise, i.e. noise is fine as long as it is all of the same colour. My settings in darktable 3.0.1:
Depends. At really high ISO, digital noise gets ugly, whereas grain noise from analogue photography is much nicer. I do quite often end up removing noise and then adding some grain back in. There’s a darktable feature request open at the moment to add better grain simulation that I’d really like to see.
I think @anon41087856 showed some sort of algorithm that introduces very nice, physically accurate film-grain noise and can also be used to do some very nice dithering. It produced superb results.
A couple of years ago, I worked on an update of the grain module (Let's improve grain).
I agree with you that the current implementation has many pitfalls, and it doesn’t look particularly good, especially at small grain amounts.
I hope to come back to the DT grain module; for now, I am a little lost in the math of the problem. Maybe I should post some updates in the forum to stay motivated and keep the project moving forward.
There are definitely others who are interested in this and might want to help, though the mathematics is a bit beyond me. I got a couple of pages into that paper before I got lost.
My idea was to move away from that algorithm; it works very well, but it is quite computationally expensive.
So I’m working on a different one, based on similar assumptions. It is still resolution-independent and grounded in the microscopic properties of film grain, but it does not require a Monte Carlo simulation and skips simulating every single grain particle.
Hopefully, my approach will be simpler and much faster, and it will account for other physically based ingredients. The downside is inaccurate rendering when you zoom in on single grain particles. That should not be a problem for digital photography, because that level of magnification only matters if you want to simulate the microscopy of analog film. Or maybe we want to print 10-meter-sized photographs
My usual method for removing chroma noise is with the contrast equalizer:
For severe cases, I also add some smoothing by moving the middle curve, and possibly compensate for the desaturation using the color balance module.
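Conceptually, this kind of chroma-only denoising boils down to smoothing the colour channels while leaving luminance detail (and its grain) untouched. Here is a minimal sketch of that idea in Python/NumPy on an Lab image; it uses a plain Gaussian blur purely for illustration, which is not what the contrast equalizer does internally (as far as I understand, it works on wavelet detail coefficients).

```python
# Minimal sketch: smooth only the chroma channels of an Lab image.
# Plain Gaussian blur for illustration only; darktable's contrast
# equalizer operates on wavelet scales instead.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_chroma(lab, sigma=2.0):
    """lab: float array of shape (H, W, 3) holding the L, a, b channels."""
    out = lab.copy()
    # Leave L untouched (keeps luminance detail and grain); blur a and b only.
    out[..., 1] = gaussian_filter(lab[..., 1], sigma)
    out[..., 2] = gaussian_filter(lab[..., 2], sigma)
    return out
```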
As for luma noise, I used to be extremely conservative in removing it, but the “denoise (profiled)” module has gotten so much better in darktable 3.0 that I am now much less hesitant to use it (when there is a profile), although I do tune the parameters a bit and generally use a relatively low strength, just to remove the “most obvious” part of the noise.
I worked on it a little bit, and the best way forward is probably to write up some notes in a new dedicated thread. I’m posting some simple results here just for fun.
The main idea is to evaluate the statistical properties of a pixel as a function of its value. I’m following most of the assumptions of the paper by Newson et al. (which I linked in the previous post). We model a pixel as being composed of a series of developed grain particles (or “grain clouds”). Each cloud is a portion of the pixel area that can be either black or white, i.e. binary. We also assume that clouds do not overlap.
In the simplest case, we can further assume uniformly sized grain particles. Then we can use the beta distribution (Beta distribution - Wikipedia) to approximate the probability distribution of the pixel value (I’ll discuss this in the new post). The key point is then to sample from this distribution without evaluating every single grain particle. Moreover, all the resolution-dependent properties of the model come for free, baked into the beta distribution (which is pretty cool).
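To make this concrete, here is a minimal sketch of the sampling step under my simplifying assumptions (uniform cloud size, no overlap). If a pixel of value u is covered by n clouds, the fraction of developed clouds behaves roughly like Binomial(n, u)/n, which I approximate with Beta(α = u·n, β = (1−u)·n): the mean stays u and the variance u(1−u)/(n+1) shrinks as the pixel covers more clouds. The function name and parametrisation below are just my own illustration, not module code.

```python
# Hypothetical sketch of grain sampling via a beta distribution.
# Assumptions (for illustration): uniform cloud size, no overlapping clouds,
# n = pixel_area / cloud_area binary grain clouds per output pixel.
import numpy as np

rng = np.random.default_rng()

def sample_grain(image, pixel_area_um2, cloud_area_um2):
    """image: float array in [0, 1]; returns a grainy version of it.

    Each pixel value u is replaced by a draw from Beta(u*n, (1-u)*n),
    whose mean is u and whose variance u*(1-u)/(n+1) mimics the count
    statistics of n binary grain clouds inside the pixel.
    """
    n = pixel_area_um2 / cloud_area_um2           # clouds per pixel
    u = np.clip(image, 1e-6, 1.0 - 1e-6)          # keep both shape parameters > 0
    return rng.beta(u * n, (1.0 - u) * n)
```

Since the output resolution only enters through the pixel-to-cloud area ratio n, rescaling the render just changes n, and that is where the resolution independence comes from.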
The model is indeed very simple and there probably isn’t much novelty in it. At the same time, the results are already pretty good and it is quite efficient.
I took the amazing portrait of Mairi from @patdavid ([PlayRaw] Mairi Troisieme) and did a simple B&W development including denoising. The image is 24 MP. Then I applied the grain: the pixel area is 34 µm² and the grain clouds are 1.5 µm².
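With those numbers, the hypothetical sketch from my previous post would be driven by roughly 34 / 1.5 ≈ 23 clouds per pixel, e.g.:

```python
# bw_image: the developed B&W image as a float array in [0, 1].
# About 34 / 1.5 ≈ 22.7 clouds per pixel, so a mid-grey pixel (u = 0.5)
# gets a standard deviation of roughly sqrt(0.25 / 23.7) ≈ 0.10.
grainy = sample_grain(bw_image, pixel_area_um2=34.0, cloud_area_um2=1.5)
```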
What I am doing next is morphing this simple model to include more sophisticated properties:
- The size of each grain particle has some variability (a lognormal distribution, arising from the synthesis process and characterized in old papers studying grain under the microscope).
- The probability that a grain particle is developed depends on the number of photons it absorbs, and thus on its projected area (theoretically, at least three photons are needed to develop a grain, because of the photochemistry involved).
I have a rough way to add them, but I would need more words to explain it, and I am probably already off-topic. The key point is to keep the resolution independence of the model that comes from the beta distribution.
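Just to give an idea, here is how I picture those two ingredients in isolation; the parameter values are placeholders, and this is not yet how they get folded into the beta model (that is the part I still need to write up).

```python
# Illustrative sketch of the two extra ingredients, kept separate from the
# beta model. All parameter values are placeholders, not measured data.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng()

# 1) Grain size variability: lognormal radii, as characterized in the old
#    microscopy literature on developed grains.
def sample_grain_radii(count, median_radius_um=0.35, sigma_log=0.4):
    return rng.lognormal(mean=np.log(median_radius_um), sigma=sigma_log, size=count)

# 2) Development probability: a grain develops if it absorbs at least three
#    photons; absorbed photons are modelled as Poisson(flux * projected area).
def develop_probability(radius_um, photon_flux_per_um2):
    lam = photon_flux_per_um2 * np.pi * radius_um ** 2
    return 1.0 - poisson.cdf(2, lam)  # P(at least 3 photons absorbed)
```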