Let's improve grain

No, I’d use a 1D LUT that maps lightness of the pixel/grain region to weight. Then you can just look up the resulting factor for your noise when processing the image instead of computing the same thing over and over again. Of course, if moving the whole thing to the linear part of the pipeline would work then everything would become much cheaper and a LUT might not be needed at all. I’ll leave it to @hanatos to think about those things though. I am more familiar with other things in darktable. :slight_smile:
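Just to illustrate the idea (only a sketch, not actual darktable code; the names here are made up):

#define WEIGHT_LUT_SIZE 256

// build the lightness-to-weight curve once, e.g. when the module parameters change;
// weight_curve() stands for whatever mapping we end up choosing
static void build_weight_lut(float *lut, float (*weight_curve)(float))
{
  for(int i = 0; i < WEIGHT_LUT_SIZE; i++)
    lut[i] = weight_curve(i / (float)(WEIGHT_LUT_SIZE - 1));
}

// per pixel: L is the Lab lightness in 0..100, noise the generated grain value
static inline float weighted_grain(const float *lut, float L, float noise)
{
  int idx = (int)(L / 100.0f * (WEIGHT_LUT_SIZE - 1) + 0.5f);
  if(idx < 0) idx = 0;
  if(idx > WEIGHT_LUT_SIZE - 1) idx = WEIGHT_LUT_SIZE - 1;
  return noise * lut[idx];
}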

@hanatos thank you for the reference about perlin noise. Now the darktable code from grain.c is a little bit less esoteric…

@houz in my opinion not only the weight of the grain should be addressed but also the shape of its distribution. That is why I was thinking about a 2D LUT. I made some figures.

Thanks to @shreedhar’s comment, I slightly modified the two equations of the first post in order to use delta as a midtones bias parameter for the grain. Essentially I’m stretching the x axis in order to keep the slope of the sigmoid curve constant.

If we want to incorporate the fake photographic-development process of the grain into a 2D LUT, I would do something like this:

Ld = f(Gu + f⁻¹(L)),    Gd = Ld − L

where f is the paper response (written out just below), Gu and Gd are the “undeveloped” and “developed” grain, and L and Ld are the lightness channel before and after the grain addition.

In this way, the greater the value of delta, the smaller the midtones bias of the grain. The amount of grain in the midtones does not change, thanks to the added stretching of the x axis.
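Written out (this matches the code posted further down the thread; δ is the delta parameter and γ the paper gamma), the paper response f and its inverse are:

$$
f(x) = \frac{1+2\delta}{1+\exp\left(\frac{4\gamma\,(0.5 - x)}{1+2\delta}\right)} - \delta,
\qquad
f^{-1}(d) = 0.5 - \frac{1+2\delta}{4\gamma}\,\ln\left(\frac{1+2\delta}{d+\delta} - 1\right)
$$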

Assuming the possible values for the grain lie between -0.5 and 0.5, the 256x256 LUT for Gd looks like this:

with delta = 0.005, 0.1 and 2, and gamma = 1.
This verifies that delta could be used as the midtones bias slider.

Changing gamma, the slope of the response of the photographic paper, the LUT becomes more asymmetrical:

Here delta = 0.005 and gamma = 1, 1.5 and 2.

If I understood correctly (from https://www.mochima.com/articles/LUT/LUT.html), access to the LUT array must be done using integer indexes. So Gu and L must be used as the indexes. The LUT can be computed at the beginning of the grain routine and updated by the slider callbacks when needed.
As a first step, Gu could be the grain that darktable currently generates.
I’m not sure whether the indexing part is feasible for accessing the LUT. What do you think?

Update: the Gu axis in the figures is flipped, and there are typos in the equations (see the code in the following posts).


In the simplest linear-interpolation case a LUT works like this: you pass the lookup function the coordinates as floats. The lookup code then takes the nearest control points stored in the LUT below and above your float coordinate and interpolates between the stored values according to how far your float coordinate is from each. The two dimensions can be interpolated independently.

In C that can look like this, assuming a square LUT. x and y are in 0…1

float dt_lut_lookup_2d_1c(const float *const lut, const size_t size, const float x, const float y)
{
  const float _x = CLAMPS(x * (size - 1), 0, size - 1);
  const float _y = CLAMPS(y * (size - 1), 0, size - 1);

  const int _x0 = _x < size - 2 ? _x : size - 2;
  const int _y0 = _y < size - 2 ? _y : size - 2;

  const int _x1 = _x0 + 1;
  const int _y1 = _y0 + 1;

  const float x_diff = _x - _x0;
  const float y_diff = _y - _y0;

  const float l00 = lut[_y0 * size + _x0];
  const float l01 = lut[_y0 * size + _x1];
  const float l10 = lut[_y1 * size + _x0];
  const float l11 = lut[_y1 * size + _x1];

  // interpolate along x within row y0 and row y1, then along y between the two rows
  const float xy0 = (1.0f - x_diff) * l00 + l01 * x_diff;
  const float xy1 = (1.0f - x_diff) * l10 + l11 * x_diff;

  return xy0 * (1.0f - y_diff) + xy1 * y_diff;
}

So, I dug a bit into the darktable code and managed to put together something that adds the grain via the LUT…
Here are some outputs directly from darktable, using ISO 6400 and 100% strength.

On the left is the old darktable output and on the right the modified one. There are three versions, with 0, 0.5 and 1 contrast applied from the contrast-lightness-saturation module.

Here is the code added to the grain.c file.

#define LUT_SIZE 128
#define MAX_DELTA 2
#define MIN_DELTA 0.005

...

// paper response: a logistic curve mapping exposure to density.
// mb is the midtones bias in 0..1, gp the gamma (slope) of the paper.
float paper_resp(float exposure, float mb, float gp)
{
  float density;
  float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  density = (1 + 2 * delta) / (1 + exp((4 * gp * (0.5 - exposure)) / (1 + 2 * delta))) - delta;
  return density;
}

// inverse of paper_resp: maps density back to exposure
float paper_resp_inverse(float density, float mb, float gp)
{
  float exposure;
  float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  exposure = -log((1 + 2 * delta) / (density + delta) - 1) * (1 + 2 * delta) / (4 * gp) + 0.5;
  return exposure;
}

static float midtone_bias = 1.0;
static float gamma_paper = 1.0;
static float grain_lut[LUT_SIZE * LUT_SIZE];

// precompute the LUT: the i axis is the "undeveloped" grain Gu in -0.5..0.5,
// the j axis is the lightness L in 0..1, and each entry stores the lightness
// delta paper_resp(Gu + paper_resp_inverse(L)) - L to be added to the pixel
static void evaluate_grain_lut(const float mb, const float gp)
{
  for(int i = 0; i < LUT_SIZE; i++)
  {
    for(int j = 0; j < LUT_SIZE; j++)
    {
      float gu = (double)i / (LUT_SIZE - 1) - 0.5;
      float l = (double)j / (LUT_SIZE - 1);
      grain_lut[j * LUT_SIZE + i] = paper_resp(gu + paper_resp_inverse(l, mb, gp), mb, gp) - l;
    }
  }
}

// bilinear lookup into the grain LUT: x is the grain value in -0.5..0.5, y the lightness in 0..1
float dt_lut_lookup_2d_1c(const float x, const float y)
{
  const float _x = CLAMPS((x + 0.5) * (LUT_SIZE - 1), 0, LUT_SIZE - 1);
  const float _y = CLAMPS(y * (LUT_SIZE - 1), 0, LUT_SIZE - 1);

  const int _x0 = _x < LUT_SIZE - 2 ? _x : LUT_SIZE - 2;
  const int _y0 = _y < LUT_SIZE - 2 ? _y : LUT_SIZE - 2;

  const int _x1 = _x0 + 1;
  const int _y1 = _y0 + 1;

  const float x_diff = _x - _x0;
  const float y_diff = _y - _y0;

  const float l00 = grain_lut[_y0 * LUT_SIZE + _x0];
  const float l01 = grain_lut[_y0 * LUT_SIZE + _x1];
  const float l10 = grain_lut[_y1 * LUT_SIZE + _x0];
  const float l11 = grain_lut[_y1 * LUT_SIZE + _x1];

  const float xy0 = (1.0 - y_diff) * l00 + l10 * y_diff;
  const float xy1 = (1.0 - y_diff) * l01 + l11 * y_diff;
  return xy0 * (1.0f - x_diff) + xy1 * x_diff;
}

and into the process function:

evaluate_grain_lut(midtone_bias, gamma_paper);
...
out[0] = in[0] + 100 * dt_lut_lookup_2d_1c(noise * strength * GRAIN_LIGHTNESS_STRENGTH_SCALE, in[0] / 100);
...

I like the result a lot. About the code, I would avoid using a global variable for grain_lut and instead put it into piece->data. Then you can run evaluate_grain_lut() in commit_params() once.
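Roughly like this, just to sketch the idea (the struct contents and field names below are placeholders, not the real grain.c ones, and I’m assuming evaluate_grain_lut() is changed to take the target buffer as an argument and the new sliders end up as midtone_bias/gamma_paper fields in the params):

typedef struct dt_iop_grain_data_t
{
  // ... the existing fields ...
  float grain_lut[LUT_SIZE * LUT_SIZE]; // per-pipe LUT instead of a global
} dt_iop_grain_data_t;

void commit_params(struct dt_iop_module_t *self, dt_iop_params_t *p1, dt_dev_pixelpipe_t *pipe,
                   dt_dev_pixelpipe_iop_t *piece)
{
  dt_iop_grain_params_t *p = (dt_iop_grain_params_t *)p1;
  dt_iop_grain_data_t *d = (dt_iop_grain_data_t *)piece->data;
  // ... copy the other parameters as before ...
  // rebuild the LUT only when the parameters change, not on every process() call
  evaluate_grain_lut(d->grain_lut, p->midtone_bias, p->gamma_paper);
}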

Addendum: I don’t have the time to go through the math but maybe you know: is there a set of parameters for your code that would result in the same noise as the current dt code, i.e. a constant weight of 1 everywhere?


Thanks. I definitely need some advice on the code.

The parameter delta in the equations can be used to control the midtones bias of the grain. When delta is big enough, for example when it is equal to 2, the results are indistinguishable from the old implementation.

In the code, I implemented a midtone_bias parameter to be assigned to a slider. When it is 0, delta is equal to MAX_DELTA (=2), giving the same output as the old implementation, and when it is 1, delta is equal to MIN_DELTA (=0.005), giving the full midtones bias.
Right now I’m trying to add the slider. :wink:

Perfect. In that case it should be straightforward to add it as an update to the current grain module. When old parameters are loaded they get a midtone bias slider setting of 0 to keep the old look.
Once you have something that halfway works, feel free to open a pull request on GitHub. That way it’s easy to comment on individual code lines and help you with details. Or join us on IRC when you have more general questions about the implementation.
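For the parameter update, something along these lines should do (only a sketch; the structs below are simplified stand-ins, not the real grain params):

typedef struct dt_iop_grain_params_v1_t
{
  int channel;
  float scale;
  float strength;
} dt_iop_grain_params_v1_t;

typedef struct dt_iop_grain_params_v2_t
{
  int channel;
  float scale;
  float strength;
  float midtone_bias; // the new slider
} dt_iop_grain_params_v2_t;

int legacy_params(dt_iop_module_t *self, const void *const old_params, const int old_version,
                  void *new_params, const int new_version)
{
  if(old_version == 1 && new_version == 2)
  {
    const dt_iop_grain_params_v1_t *o = (const dt_iop_grain_params_v1_t *)old_params;
    dt_iop_grain_params_v2_t *n = (dt_iop_grain_params_v2_t *)new_params;
    n->channel = o->channel;
    n->scale = o->scale;
    n->strength = o->strength;
    n->midtone_bias = 0.0f; // keep the old look for existing edits
    return 0;
  }
  return 1;
}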

I did my first pull request! Oh, I feel good…

Thanks again everyone for the help.

The second part of the problem is still open: the appearance of the grain and the possibility of better controlling the size of the blotches. After finalizing the LUT part I might start experimenting with that.


This is awesome! I’m wondering if we shouldn’t consider doing a writeup on your work and progress for the main site? Would love to highlight what you’re doing here and the results!


@patdavid I flagged this post to feature in the “From the community” post for this quarter. :wink:


This thread was so full of great ideas it inspired me to use grain in this week’s video. Thanks Pat David for bringing these great minds together.


@harry_durgin I’m happy that this discussion has inspired you, even in a small way. Keep up your nice work!


Watched the video @harry_durgin, always a pleasure to watch the thought process behind the image processing.


Sorry for resurrecting the thread, tonight I needed some fun. :wink:

I wanted to compare the power spectrum of the darktable grain with some real scan samples.

We have already discussed the film grain distribution as a function of exposure; what remained to be assessed is the spatial distribution of the grain.

Do you guys have high-resolution film grain scans to share?
For now, I only found a couple of Kodak scan samples on this page: http://www.redwingdigital.com/bully-pulpit/film_grain/. To be honest, they look too perfect to be real scans.

For the comparison I took a 24 MP 50% gray image and applied several ISO levels of grain with darktable.
Then I calculated the power spectrum of the grainy images, assuming a frame size of 24x36 millimetres.
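For the curious, the spectrum was computed more or less like this (a rough sketch of my own analysis code using FFTW, nothing from darktable; frequencies come out in cycles/pixel and can be scaled to cycles/mm using the assumed 24x36 mm frame size):

#include <fftw3.h>
#include <math.h>
#include <stdlib.h>

// img: the grain-only image (lightness minus the 50% gray base), w x h pixels
// spectrum: output, `bins` radially averaged values of |F|^2 from 0 to Nyquist
void radial_power_spectrum(const float *img, int w, int h, float *spectrum, int bins)
{
  float *in = fftwf_alloc_real((size_t)w * h);
  fftwf_complex *out = fftwf_alloc_complex((size_t)h * (w / 2 + 1));
  for(size_t i = 0; i < (size_t)w * h; i++) in[i] = img[i];

  fftwf_plan plan = fftwf_plan_dft_r2c_2d(h, w, in, out, FFTW_ESTIMATE);
  fftwf_execute(plan);

  int *count = calloc(bins, sizeof(int));
  for(int b = 0; b < bins; b++) spectrum[b] = 0.0f;

  for(int y = 0; y < h; y++)
  {
    const float fy = (y <= h / 2 ? y : y - h) / (float)h;  // cycles per pixel
    for(int x = 0; x < w / 2 + 1; x++)
    {
      const float fx = x / (float)w;
      const float r = sqrtf(fx * fx + fy * fy);
      const int b = (int)(r / 0.5f * bins);                // 0.5 = Nyquist
      if(b >= bins) continue;
      const float re = out[(size_t)y * (w / 2 + 1) + x][0];
      const float im = out[(size_t)y * (w / 2 + 1) + x][1];
      spectrum[b] += re * re + im * im;
      count[b]++;
    }
  }
  for(int b = 0; b < bins; b++)
    if(count[b]) spectrum[b] /= count[b];

  free(count);
  fftwf_destroy_plan(plan);
  fftwf_free(in);
  fftwf_free(out);
}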

darktable, 100% strength, 24MP

Kodak scan samples, 16MP

Here is a quick comparison of the 1600 ISOs.

To me, the power spectrum of the Kodak samples looks more Gaussian-shaped and a little more uniform.

Here are two portions of the images.

trix 1600

darktable 1600

If I understood correctly, I could try to tweak the octave amplitudes of the simplex noise in order to balance the power spectrum shape.
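Conceptually something like this (simplex2d() is just a stand-in for the existing noise generator, and the per-octave amplitudes are what I would tune):

// fractal noise built from several octaves of simplex noise; the amplitude of
// each octave shapes the power spectrum of the result
float fractal_noise(float x, float y, int octaves, const float *amplitude)
{
  float sum = 0.0f, norm = 0.0f;
  for(int o = 0; o < octaves; o++)
  {
    const float freq = (float)(1 << o);  // each octave doubles the frequency
    sum += amplitude[o] * simplex2d(x * freq, y * freq);
    norm += amplitude[o];
  }
  return sum / norm;  // keep the overall grain strength roughly constant
}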


I’m keeping you updated because I know you’re dying to see more frivolous grain stuff. :grin:

After a deeper search I found two more film scan samples (of lower quality) for the grain comparison.

Here are all the sources:
agfa apx 400 8 MP
kodak tmax 400 6 MP
kodak trix 1600 16 MP
kodak tmax 3200 16 MP

In order to better compare the shapes of the power spectra, I normalized the spatial frequency by the standard deviation. The power spectra are also normalized by their area.
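Concretely, the normalization boils down to something like this (a small helper of mine, treating the area-normalized spectrum as a distribution over frequency and assuming uniformly spaced bins):

#include <math.h>

// scale the spectrum to unit area, then express the frequency axis in units of
// the spectrum's standard deviation so that only the shapes get compared
void normalize_spectrum(float *freq, float *spec, int n)
{
  float area = 0.0f, mean = 0.0f, var = 0.0f;
  for(int i = 0; i < n; i++) area += spec[i];
  for(int i = 0; i < n; i++) spec[i] /= area;
  for(int i = 0; i < n; i++) mean += freq[i] * spec[i];
  for(int i = 0; i < n; i++) var += (freq[i] - mean) * (freq[i] - mean) * spec[i];
  const float sd = sqrtf(var);
  for(int i = 0; i < n; i++) freq[i] /= sd;
}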

All the real samples are essentially superimposed, while the darktable grain is a bit off and more Lorentzian-shaped.


128x128 portions of the images, upscaled to 24 MP to match the darktable output. The first two are strongly affected by JPEG compression artifacts.

Now the comparison is slightly more satisfactory than the one in the previous post because the grain samples come independently from three sources. I feel more confident about what to look for when hacking on the noise generation algorithm.

I am also happy to see some confirmation of the feeling that darktable grain is a little less “organic” than the real thing, and that I’m not imagining it :rofl:.


IIRC @patdavid has some grain scans he uses to add grain to his images. Maybe he can share his file, too?

Absolutely! This is a T-Max 400 frame (http://farm8.staticflickr.com/7228/7314861896_292120872b_o.png):


Lovely composition, and the bokeh is superb! I guess that’s why the pros still shoot film. :smiley:


An interesting contribution: IPOL Journal · Realistic Film Grain Rendering


Thank you @patdavid for the frame sample!

And thanks @cribari for the nice reference!