Depends. On really high ISO digital noise gets ugly and grain-noise from analogue photography is much nicer. I do quite often end up removing noise and then adding some grain back in. There’s a darktable feature request open at the moment to add better grain simulation that I’d really like to see.
I don’t mind some grain, but it all depends on the shot, both technically and aesthetically. Colour noise I do not like, and I try to get rid of it.
B&W needs grain in my opinion. It is too bad that digital high ISO and B&W film high ISO don’t produce the same effect.
Colour can be nice when there is a (minimal) amount of noise; it gives it just that little bit extra.
Then again, maybe I’m just missing the analog days (oh man, no Ctrl-Z…).
Christopher Mark Perez has a nice article on why grain is a good thing:
And the article on how to add it digitally:
Personally, I stopped caring about luma noise; I only remove chroma noise.
I think @anon41087856 showed some sort of algorithm that introduces very nice physics-accurate film-grain noise that can be also used to do some very nice dithering. that produced superb results.
The grain module in darktable is not very good, IMHO. I hope some smart person writes a better option.
Thanks for sharing!
I’m also curious to have a look at this algorithm.
A couple of years ago, I worked on the update of the grain module (Let's improve grain).
I agree with you that the current implementation has many pitfalls, and it doesn’t look particularly good, especially at small grain amounts.
In my free time, I’m working on much more physically accurate simulation. My primary source of inspiration is a recently published resolution-independent algorithm (https://hal.archives-ouvertes.fr/hal-01520260/file/Film_grain_synthesis_computer_graphics_forum.pdf).
I hope to come back to the DT grain module; for now, I am a little lost in the math of the problem. Maybe I should post some updates in the forum to stay motivated, and go forward in the project.
There are definitely others who are interested in this and might want to help, though the mathematics is a bit beyond me. I got a couple of pages into that paper before I got lost.
I assume you’ve seen the links in Use physically-realistic stochastic film grain synthesis · Issue #4451 · darktable-org/darktable · GitHub
I haven’t seen this issue. We are pointing to the same physically based resolution-independent algorithm. Nice!
And there’s a link to a github project where the algos have already been implemented.
My idea was to move away from that algorithm; it works very well, but at the same time it is quite computationally expensive.
So I’m working on a different one, based on similar assumptions. It is still resolution-independent and based on the microscopic properties of film grain, but it does not require a Monte Carlo simulation and skips simulating every single grain particle.
Hopefully, my approach will be simpler and much faster, and it will account for other physically based ingredients. The downside is inaccurate rendering when you zoom in on single grain particles. That should not be a problem for use in digital photography, because that level of magnification only matters when simulating microscopy of analog film. Or maybe we want to print 10-meter-sized photographs…
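For context, here is a minimal sketch of why the per-grain approach is expensive. This is my own drastic simplification, not code from the paper or from the thread: grain centres are drawn from a Poisson process whose intensity is tuned so the expected covered fraction of a pixel equals the input value, and the pixel value is then estimated by point sampling against every grain disk. All parameter values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_model_pixel(u, pixel_size=1.0, grain_radius=0.05,
                        n_samples=400):
    """Illustrative per-grain Monte Carlo for ONE pixel.

    Grain centres form a Poisson process; the intensity `lam` is
    chosen so the Boolean-model coverage 1 - exp(-lam * grain_area)
    equals the input value `u` in (0, 1).  The pixel value is the
    fraction of random sample points covered by at least one grain.
    """
    grain_area = np.pi * grain_radius ** 2
    lam = -np.log(1.0 - u) / grain_area
    # Simulate grains in a window padded around the pixel so grains
    # whose centres fall just outside still contribute coverage.
    pad = 2 * grain_radius
    window = pixel_size + 2 * pad
    n_grains = rng.poisson(lam * window * window)
    centres = rng.uniform(-pad, pixel_size + pad, size=(n_grains, 2))
    # Monte Carlo estimate: test every sample point against every grain.
    pts = rng.uniform(0.0, pixel_size, size=(n_samples, 2))
    d2 = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    covered = (d2 <= grain_radius ** 2).any(axis=1)
    return covered.mean()
```

The cost is the point-versus-grain test, repeated for every pixel of a 24 MP image; that is the work the thread’s statistical approach tries to skip.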
I would also be interested in this. I haven’t read the papers yet, but I definitely will.
That sounds fantastic. Please ask if you need any help (that does not involve writing C).
My usual method for removing chroma noise is with the contrast equalizer:
For severe cases, I also add some smoothing by moving the middle curve, and possibly compensate for the desaturation using the color balance module.
As for luma noise, I used to be extremely conservative in removing it, but the “denoise (profiled)” module has gotten so much better in darktable 3.0 that I am now much less hesitant to use it (when there is a profile). I do tune the parameters a bit and generally use a relatively low strength, just to remove the “most obvious” part of the noise.
Any news on the new algorithm? I would like to contribute and try to work on putting the vanilla algorithm into darktable.
I worked on it a little bit. Probably the best way forward would be to write some notes in a new dedicated thread; for now, I’m posting some simple results here just for fun.
The main idea is to evaluate the statistical properties of a pixel as a function of its value. I’m following most of the assumptions of the Newson et al. paper (that I linked in the previous post). We model a pixel as composed of a series of developed grain particles (or “grain clouds”). Each cloud is a portion of the pixel area that can be black or white, i.e. binary. We assume the clouds do not overlap.
In the simplest case, we can also assume uniformly sized grain particles. Then we can use the beta distribution (Beta distribution - Wikipedia) to approximate the probability distribution of the pixel value (I’ll discuss this in the new post). The key point is to sample from this distribution without evaluating every single grain particle. Moreover, all the resolution-dependent properties of the model come for free, included in the beta distribution (which is pretty cool).
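A rough sketch of that idea (my own illustration under stated assumptions, not the author’s actual code): if a pixel contains n = pixel_area / cloud_area clouds, each developed independently with probability equal to the normalized pixel value p, the developed fraction is Binomial(n, p)/n, which is well approximated by Beta(n·p, n·(1−p)). Sampling one beta variate per pixel then replaces the per-grain simulation, and the grain strength automatically scales with the physical pixel area.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_grain(img, pixel_area=34.0, cloud_area=1.5):
    """Sample each output pixel from a beta distribution whose mean
    is the input value and whose variance shrinks as more grain
    clouds fit into one pixel (this is the resolution independence).
    `img` is a float array with values in (0, 1); areas are in um^2,
    matching the numbers quoted in the thread."""
    n = pixel_area / cloud_area            # clouds per pixel
    p = np.clip(img, 1e-4, 1 - 1e-4)       # avoid degenerate beta params
    return rng.beta(n * p, n * (1.0 - p))

# Usage: a flat mid-grey patch gains grain whose amplitude depends
# only on the physical areas, not on the number of pixels.
patch = np.full((64, 64), 0.5)
grainy = add_grain(patch)
```

With the thread’s numbers (34 um² pixels, 1.5 um² clouds) this gives roughly 23 clouds per pixel, hence visible grain; a smaller cloud area smooths it out.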
The model is indeed very simple, and there probably isn’t much novelty in it. At the same time, the results are already pretty good and it is quite efficient.
I took the amazing portrait of Mairi by @patdavid ([PlayRaw] Mairi Troisieme) and did a simple B&W development including denoising. The image is 24 MP. Then I applied the grain. The pixel area is 34 um² and the grain clouds are 1.5 um².
I cropped and upsampled twice, then applied the same grain model, just at a different “zoom” level, i.e. with a smaller pixel area.
Look at 100% to see all the grain beauty. Each of these 24 MP images takes about 0.7 seconds to process with my Python script running on a laptop.
Here are some cropped versions from the negative of the last image.
Crops are taken from these positions.
What I am doing next is morphing this simple model to include more sophisticated properties:
- The size of each grain particle has some variability (a lognormal distribution, coming from the synthesis process and characterized in old papers studying grain with microscopes).
- The probability that a grain particle is developed depends on the number of photons it absorbs, and thus on the projective area of the particle (theoretically, at least three photons are necessary to develop a grain, because of the photochemistry involved).
I have a rough way to add them, but I need more words to explain it, and I am probably already off-topic. The key point is to keep the resolution-independence of the model coming from the beta distribution.
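The two extensions can be sketched with a small Monte Carlo experiment. This is only my illustration of the stated assumptions, with placeholder parameters, not the author’s actual model: grain areas are drawn from a lognormal distribution, each grain absorbs a Poisson number of photons proportional to exposure times its projective area, and it develops only if at least three photons are absorbed.

```python
import numpy as np

rng = np.random.default_rng(0)

def development_probability(exposure, n_grains=100_000,
                            mean_log_area=0.0, sigma_log_area=0.3,
                            photon_threshold=3):
    """Fraction of grains developed at a given exposure, assuming:
    - lognormally distributed grain areas (synthesis variability);
    - Poisson photon absorption with mean exposure * area;
    - a grain develops only if it absorbs >= photon_threshold photons.
    All numeric parameters are illustrative, not measured values."""
    areas = rng.lognormal(mean_log_area, sigma_log_area, n_grains)
    photons = rng.poisson(exposure * areas)      # photons absorbed per grain
    return float(np.mean(photons >= photon_threshold))

# Usage: the photon-counting threshold alone already produces a
# film-like S-shaped response from exposure to developed fraction.
p_low = development_probability(0.5)
p_mid = development_probability(3.0)
p_high = development_probability(10.0)
```

The remaining step, which the thread defers to a dedicated post, is folding this per-grain development probability back into the beta-distribution sampling without losing resolution independence.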
This looks great. And the math sounds very good. Is there a paper describing your method?
I would love to see this make its way into darktable. At present I have to “finish” all my images off with G’MIC to add satisfactory grain, but what you’ve got here is even better. Well done.