So, I dug into the darktable code a bit and managed to get something working for adding the grain with the LUT…
Here are some outputs straight from darktable, using ISO 6400 and 100% strength.
On the left is the old darktable output and on the right the modified one. There are three versions, with 0, 0.5 and 1 contrast applied from the contrast-lightness-saturation module.
I like the result a lot. About the code: I would avoid using a global variable for grain_lut and instead put it into piece->data. Then you can run evaluate_grain_lut() once in commit_params().
Addendum: I don’t have the time to go through the math, but maybe you know: is there a set of parameters for your code that would produce the same noise as the current dt code, i.e. a constant weight of 1 everywhere?
Thanks. I definitely need some advice on the code.
The parameter delta in the equations controls the midtones bias of the grain. When delta is big enough, for example when it equals 2, the results are indistinguishable from the old implementation.
In the code, I implemented a midtones_bias parameter to be assigned to a slider. When it is 0, delta equals MAX_DELTA (=2), reproducing the output of the old implementation; when it is 1, delta equals MIN_DELTA (=0.005), giving the full midtones bias.
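For reference, the slider-to-delta mapping can be sketched like this. MAX_DELTA and MIN_DELTA are the values quoted above; the log-space interpolation between them (and the function name) is just an assumption about a sensible mapping, not necessarily what the patch does:

```python
# Hedged sketch: the endpoint values come from the description above,
# the interpolation in between is assumed.
MAX_DELTA = 2.0    # midtones_bias = 0 -> same output as the old implementation
MIN_DELTA = 0.005  # midtones_bias = 1 -> full midtones bias

def delta_from_midtones_bias(midtones_bias):
    """Map the [0, 1] slider value to delta.

    Log-space interpolation keeps the slider response even across the
    three orders of magnitude between the two endpoints (an assumption).
    """
    t = min(max(midtones_bias, 0.0), 1.0)  # clamp the slider value
    return MAX_DELTA * (MIN_DELTA / MAX_DELTA) ** t
```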
Right now I’m trying to add the slider.
Perfect. In that case it should be straightforward to add it as an update to the current grain module. When old parameters are loaded, they get a midtones bias slider setting of 0 to keep the old look.
Once you have something that halfway works, feel free to open a pull request on GitHub. That way it’s easy to comment on individual code lines and help you with details. Or join us on IRC if you have more general questions about the implementation.
The second part of the problem is still open: the appearance of the grain and the possibility of better controlling the size of the blotches. After finalizing the LUT part I might start experimenting with that.
This is awesome! I’m wondering if we shouldn’t consider doing a writeup on your work and progress for the main site? Would love to highlight what you’re doing here and the results!
Sorry for resurrecting the thread, tonight I needed some fun.
I wanted to compare the power spectrum of the darktable grain with some real scan samples.
We have already discussed the film grain distribution as a function of exposure; what remained to be assessed was the spatial distribution of the grain.
Do you guys have high resolution film grain scans to share?
For now, I only found a couple of Kodak scan samples on this page: http://www.redwingdigital.com/bully-pulpit/film_grain/. To be honest, they look too perfect to be real scans.
For the comparison I took a 24 MP 50% gray image and applied several ISO levels of grain with darktable.
Then I calculated the power spectrum of the grainy images, assuming a frame size of 24x36 millimetres.
To better compare the shapes of the power spectrum curves, I normalized the spatial frequency by its standard deviation. The power spectra are also normalized to unit area.
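In case anyone wants to reproduce this, here is a rough numpy sketch of the procedure (the function names and the ring-binning details are mine, and white noise stands in for an actual grainy frame):

```python
import numpy as np

rng = np.random.default_rng(0)

def radial_power_spectrum(img, pitch_mm):
    """Radially averaged power spectrum of a square patch.

    pitch_mm is the sample pitch in millimetres (e.g. 36 mm / 512 px),
    so the returned frequencies are in cycles/mm.
    """
    img = img - img.mean()  # drop the DC spike
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch_mm))
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy).ravel()
    # average the 2-D spectrum over rings of constant radial frequency
    bins = np.linspace(0.0, f.max(), n // 2)
    idx = np.digitize(fr, bins)
    power = np.bincount(idx, weights=spec.ravel(), minlength=len(bins) + 1)
    counts = np.bincount(idx, minlength=len(bins) + 1)
    radial = power[1:len(bins)] / np.maximum(counts[1:len(bins)], 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, radial

def normalize(freq, power):
    """Rescale to unit area, then to unit rms frequency (area stays 1)."""
    df = freq[1] - freq[0]
    power = power / (power.sum() * df)
    sigma = np.sqrt((freq ** 2 * power).sum() * df)
    return freq / sigma, power * sigma

# toy stand-in for a grainy frame: white noise on 50% gray, 512 px over 36 mm
patch = 0.5 + 0.05 * rng.standard_normal((512, 512))
freq, power = radial_power_spectrum(patch, pitch_mm=36.0 / 512)
f_norm, p_norm = normalize(freq, power)
```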
All the real samples essentially lie on top of each other, while the darktable grain is a bit off and more Lorentzian-shaped.
128x128 portions of the images, upscaled to 24 MP to match the darktable output. The first two are strongly affected by JPEG compression artifacts.
Now the comparison is slightly more satisfactory than the one in the previous post, because the grain samples come independently from three sources. I feel more confident about what to look for when hacking the noise generation algorithm.
I am also happy to see some confirmation of my feeling that darktable grain is a little less “organic” than the real thing, and that I’m not imagining it.
A few years ago I developed some code to simulate film grain, using an approach that is probably similar to (but independent of) the one pointed out by Francisco @cribari: given a source image, the code generates a “grainy” version by literally adding one grain at a time, such that the average grayscale value is preserved locally.
I have not worked on this project in quite a while, and unfortunately the code is not yet in a shape that allows me to make it public, but I might revive it if there is some interest (although it is REALLY slow on large images).
Nevertheless, I would really be curious to see how it compares with the other samples and methods discussed above. So here are some samples cropped from initial 6000x4000 px images uniformly filled with solid gray:
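To make the idea concrete, here is a deliberately naive sketch of my reading of that approach (names and details are mine, not the actual code): stamp opaque dark grains one at a time on a white patch until the mean reaches the target gray. The real code enforces the constraint locally rather than globally:

```python
import numpy as np

rng = np.random.default_rng(1)

def grainy_patch(size, target_gray, grain_radius, max_grains=200000):
    """Render a gray level by stamping opaque dark grains on white.

    Grains are added one at a time until the global mean drops to
    target_gray; a real implementation would preserve the average
    locally and use more realistic grain shapes.
    """
    img = np.ones((size, size))
    yy, xx = np.mgrid[:size, :size]
    for _ in range(max_grains):
        if img.mean() <= target_gray:
            break
        cy, cx = rng.integers(0, size, 2)  # random grain centre
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= grain_radius ** 2
        img[mask] = 0.0                    # stamp an opaque disk
    return img

patch = grainy_patch(128, target_gray=0.5, grain_radius=2)
```

Even this toy version shows why the grain-by-grain approach is slow: the number of stamps grows with the pixel count, which matches the remark above about large images.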
The grain from @patdavid is quite different from the others: it has much more high-frequency content and a bump in the middle, so there is a strong bias toward a certain grain size. I didn’t normalize it the same way as the others because the power spectrum doesn’t look decayed at the edges of the frequency axis.
The grain from @Carmelo_DrRaw resembles the Kodak Tri-X 1600 a lot; it is probably slightly more peaked.
Then I played with the octaves of the darktable simplex noise algorithm. I fitted the Kodak Tri-X 1600 power spectrum using three octaves: the parameters to be determined were three frequencies and three amplitudes. I think the power spectrum “darktable 1600 proposed” came out quite close to the desired one.
Other smooth, monotonically decreasing power spectrum shapes can easily be matched as well.
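Sketching how such a fit can be set up (the Gaussian per-octave shape and all the helper names here are assumptions for illustration; the actual per-octave spectrum of the simplex noise would have to be measured):

```python
import numpy as np

def octave_basis(freq, f0):
    """One octave's contribution to the power spectrum.

    Modelled here as a Gaussian lobe purely for illustration.
    """
    return np.exp(-((freq / f0) ** 2))

def fit_three_octaves(freq, target):
    """Grid-search three octave frequencies; solve amplitudes by least squares."""
    best = (np.inf, None, None)
    candidates = np.geomspace(freq[1], freq[-1], 12)
    for i, f1 in enumerate(candidates):
        for f2 in candidates[i + 1:]:
            for f3 in candidates[candidates > f2]:
                basis = np.column_stack(
                    [octave_basis(freq, f) for f in (f1, f2, f3)])
                amps, *_ = np.linalg.lstsq(basis, target, rcond=None)
                resid = np.sum((basis @ amps - target) ** 2)
                if resid < best[0]:
                    best = (resid, (f1, f2, f3), amps)
    return best

# stand-in target: a Lorentzian-ish spectrum instead of the measured Tri-X one
freq = np.linspace(0.01, 1.0, 200)
target = 1.0 / (1.0 + (freq / 0.2) ** 2)
resid, freqs, amps = fit_three_octaves(freq, target)
```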
(left) darktable 1600 - (right) darktable 1600 proposed
The difference is small; hopefully it can be seen that the grain on the right is smoother and plumper, while the grain on the left is somewhat sandier.
For @Carmelo_DrRaw, here are the power spectra of the other images you provided. It looks like there isn’t a big change in size between the small, medium and large samples, though there is a strong change in amplitude.
Here are the power spectra of the 50% gray images.
First of all, thanks for your very detailed study!
I think that this is not very surprising, as they result from the same grains laid out at a different spatial density.
One thing I would find interesting is to see how the other methods compare when applied to 10% and 90% gray images… I think that 50% gray is some sort of “special case” which is relatively simple to render with noise generators. Dark and light areas are trickier, because in reality they are generated by either very sparse or highly dense grain distributions.