Let's improve grain

So, I worked a bit on the darktable code and managed to come up with something for adding the grain via a LUT…
Here is some output directly from darktable, using ISO 6400 and 100% strength.

On the left is the old darktable output and on the right the modified one. There are three versions, with 0, 0.5 and 1 contrast applied from the contrast-lightness-saturation module.

Here is the code added to the grain.c file.

#define LUT_SIZE 128
#define MAX_DELTA 2
#define MIN_DELTA 0.005

...

float paper_resp(float exposure, float mb, float gp)
{
  // S-shaped "paper response": maps exposure to density;
  // mb in [0, 1] controls the midtone bias via delta, gp is the paper gamma
  const float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  const float density = (1 + 2 * delta) / (1 + exp((4 * gp * (0.5 - exposure)) / (1 + 2 * delta))) - delta;
  return density;
}

float paper_resp_inverse(float density, float mb, float gp)
{
  // closed-form inverse of paper_resp for the same (mb, gp)
  const float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  const float exposure = -log((1 + 2 * delta) / (density + delta) - 1) * (1 + 2 * delta) / (4 * gp) + 0.5;
  return exposure;
}

static float midtone_bias = 1.0;
static float gamma_paper = 1.0;
static float grain_lut[LUT_SIZE*LUT_SIZE];

static void evaluate_grain_lut(const float mb, const float gp)
{
  // tabulate the grain weight: the difference between the paper response
  // of (lightness + grain) and the lightness itself,
  // for grain gu in [-0.5, 0.5] and lightness l in [0, 1]
  for(int i = 0; i < LUT_SIZE; i++)
  {
    for(int j = 0; j < LUT_SIZE; j++)
    {
      const float gu = (float)i / (LUT_SIZE - 1) - 0.5f;
      const float l = (float)j / (LUT_SIZE - 1);
      grain_lut[j * LUT_SIZE + i] = paper_resp(gu + paper_resp_inverse(l, mb, gp), mb, gp) - l;
    }
  }
}

// bilinear lookup into the grain LUT:
// x is the grain value in [-0.5, 0.5], y the lightness in [0, 1]
float dt_lut_lookup_2d_1c(const float x, const float y)
{
  const float _x = CLAMPS((x + 0.5f) * (LUT_SIZE - 1), 0, LUT_SIZE - 1);
  const float _y = CLAMPS(y * (LUT_SIZE - 1), 0, LUT_SIZE - 1);

  const int _x0 = _x < LUT_SIZE - 2 ? _x : LUT_SIZE - 2;
  const int _y0 = _y < LUT_SIZE - 2 ? _y : LUT_SIZE - 2;

  const int _x1 = _x0 + 1;
  const int _y1 = _y0 + 1;

  const float x_diff = _x - _x0;
  const float y_diff = _y - _y0;

  const float l00 = grain_lut[_y0 * LUT_SIZE + _x0];
  const float l01 = grain_lut[_y0 * LUT_SIZE + _x1];
  const float l10 = grain_lut[_y1 * LUT_SIZE + _x0];
  const float l11 = grain_lut[_y1 * LUT_SIZE + _x1];

  const float xy0 = (1.0f - y_diff) * l00 + y_diff * l10;
  const float xy1 = (1.0f - y_diff) * l01 + y_diff * l11;
  return xy0 * (1.0f - x_diff) + xy1 * x_diff;
}

and in the process function:

evaluate_grain_lut(midtone_bias, gamma_paper);
...
out[0] = in[0] + 100 * dt_lut_lookup_2d_1c(noise * strength * GRAIN_LIGHTNESS_STRENGTH_SCALE, in[0] / 100);
...

I like the result a lot. About the code, I would avoid using a global variable for grain_lut and instead put it into piece->data. Then you can run evaluate_grain_lut() in commit_params() once.

Addendum: I don’t have the time to go through the math, but maybe you know: is there a set of parameters for your code that would result in the same noise as the current dt code, i.e. a constant weight of 1 everywhere?


Thanks. I could definitely use some advice on the code.

The parameter delta in the equations can be used to control the midtone bias of the grain. When delta is big enough, for example when it is equal to 2, the results are indistinguishable from the old implementation.

In the code, I implemented a midtone_bias parameter to be assigned to a slider. When it is 0, delta is equal to MAX_DELTA (= 2), giving the same output as the old implementation; when it is 1, delta is equal to MIN_DELTA (= 0.005), giving the full midtone bias.
Right now I’m trying to add the slider. :wink:

Perfect. In that case it should be straightforward to add it as an update to the current grain module. When old parameters are loaded they get a midtone bias slider setting of 0 to keep the old look.
Once you have something that halfway works, feel free to open a pull request on GitHub. That way it’s easy to comment on single code lines and help you with details. Or join us on IRC when you have more general questions about the implementation.

I did my first pull request! Oh, I feel good…

Thanks again everyone for the help.

The second part of the problem is still open, regarding the appearance of the grain and the possibility of better controlling the size of the blotches. After finalizing the LUT part I might start experimenting on that.


This is awesome! I’m wondering if we shouldn’t consider doing a writeup on your work and progress for the main site? Would love to highlight what you’re doing here and the results!


@patdavid I flagged this post to feature in the “From the community” post for this quarter. :wink:


This thread was so full of great ideas it inspired me to use grain in this week’s video. Thanks Pat David for bringing these great minds together.


@harry_durgin I’m happy that this discussion has inspired you, even in a small way. Keep up your nice work!


Watched the video @harry_durgin, always a pleasure to watch the thought process behind the image processing.


Sorry for resurrecting the thread, tonight I needed some fun. :wink:

I wanted to compare the power spectrum of the darktable grain with some real scan samples.

We have already discussed the film grain distribution as a function of exposure; what remained to be assessed was the spatial distribution of the grain.

Do you guys have high resolution film grain scans to share?
For now, I have only found a couple of Kodak scan samples on this page: http://www.redwingdigital.com/bully-pulpit/film_grain/. To be honest, they look too perfect to be real scans.

For the comparison I took a 24 MP 50% gray image and applied grain at several ISO levels with darktable.
Then I calculated the power spectra of the grainy images, assuming a frame size of 24×36 millimeters.

darktable, 100% strength, 24MP

Kodak scan samples, 16MP

Here is a quick comparison at ISO 1600.

To me, the power spectrum of the Kodak samples looks more Gaussian-shaped and a little more uniform.

Here are two portions of the images.

trix 1600

darktable 1600

If I understood correctly, I could try to tweak the octave amplitudes of the simplex noise in order to balance the shape of the power spectrum.


I'll keep you updated because I know you’re dying to see more frivolous grain stuff. :grin:

After a deeper search I found two other film scan samples (of lower quality) for the grain comparison.

Here are all the sources:
agfa apx 400 8 MP
kodak tmax 400 6 MP
kodak trix 1600 16 MP
kodak tmax 3200 16 MP

In order to better compare the shape of the power spectrum functions I normalized the spatial frequency by the standard deviation. The power spectra are also normalized by the area.

All the real samples essentially superimpose, while the darktable grain is a bit off and more Lorentzian-shaped.


128×128 portions of the images, upscaled to 24 MP to match the darktable output. The first two are strongly affected by JPEG compression artifacts.

Now the comparison is slightly more satisfactory than the one in the previous post, because the grain samples come independently from three sources. I feel more confident about what to look for when hacking the noise generation algorithm.

I am also happy to see some confirmation of my feeling that darktable grain is a little less “organic” than the real thing, and that I’m not imagining things :rofl:.


IIRC @patdavid has some grain scans he uses to add grain to his images. Maybe he can share his file, too?

Absolutely! This is a T-Max 400 frame (http://farm8.staticflickr.com/7228/7314861896_292120872b_o.png):


Lovely composition, and the bokeh is superb! I guess that’s why the pros still shoot film. :smiley:


An interesting contribution: IPOL Journal · Realistic Film Grain Rendering


Thank you @patdavid for the frame sample!

And thanks @cribari for the nice reference!

A few years ago I developed some code to simulate film grain, using an approach that is probably similar to (but independent of) the one pointed out by Francisco @cribari: given a source image, the code generates a “grainy” version by literally adding one grain at a time, such that the average grayscale value is preserved locally.

I have not worked on this project for quite a while, and unfortunately the code is not yet in a shape that allows me to make it public, but I might revive it if there is some interest (although it is REALLY slow on large images).

Nevertheless, I would really be curious to see how it compares with the other samples and methods that have been discussed above. So here are some samples cropped from initial 6000×4000 px images uniformly filled with solid gray:

50% gray, large grains:

50% gray, medium grains:

50% gray, small grains:

10% gray, large grains:

90% gray, large grains:

I can provide the hi-res uncompressed TIFF files if needed…


Nice @Carmelo_DrRaw, thank you for your contribution!
I agree that the idea of simulating the photographic process grain by grain is really charming. :grinning:

So, let’s see some new comparisons. I plotted the normalized power spectra of:

  • kodak trix 1600 (example of the scanned film grain I found so far);
  • darktable grain at 1600 iso;
  • @patdavid’s tmax 400 sample;
  • @Carmelo_DrRaw’s 50% gray medium sample.

The grain from @patdavid is quite different from the others: it has much more high-frequency content and a bump in the middle, i.e. a strong bias towards a certain grain size. I didn’t normalize it in the same way as the others because its power spectrum doesn’t look decayed at the boundaries of the frequency axis.
The grain from @Carmelo_DrRaw closely resembles the kodak trix 1600; it is probably slightly more peaked.

Then, I played with the octaves of the darktable simplex noise algorithm. I fitted the kodak trix 1600 power spectrum using three octaves: the parameters to be determined were three frequencies and three amplitudes. I think the “darktable 1600 proposed” power spectrum came out quite close to the desired one.
It should be possible to easily match other smooth, monotonically decreasing power spectrum shapes.

(left) darktable 1600 - (right) darktable 1600 proposed
The difference is small; hopefully it can be appreciated that the grain on the right is smoother and plumper, while the grain on the left is somewhat sandier.

For @Carmelo_DrRaw, here are the power spectra of the other images you provided. It looks like there isn’t a big change in grain size between the small, medium and large samples; there is a strong change in amplitude, though.

Here are the power spectra of the 50% gray images.

Here are the power spectra of the 50% gray images normalized by the maximum.

The images with different solid gray colors have the same grain power spectrum shape but different amplitudes.

Here are the power spectra of the 10%, 50%, 90% gray images.

Here are the power spectra of the 10%, 50%, 90% gray images normalized by the maximum.


First of all, thanks for your very detailed study!

I think that this is not very surprising, as they result from the same grains laid out at a different spatial density.

One thing that I would find interesting is to see how the other methods compare when applied to 10% and 90% gray images… I think that 50% gray is some sort of “special case” which is relatively simple to render with noise generators. Dark and light areas are trickier, because in reality they are generated by either very sparse or very dense grain distributions.