Let's improve grain

I love grain. A little bit of grain is not only pleasant, it also helps increase the perceived sharpness.
I also truly love that darktable has a grain module; it makes my workflow easier. If no GIMP retouching is needed, I can produce the final image in a single program, grain included.
The point is that I’m not completely satisfied with the default grain produced by darktable, and I never completely understood why, so I decided to investigate the problem more deeply. I compared darktable grain with other software: G’MIC and Snapseed (an Android app from Google). Snapseed probably has the most advanced implementation of the three, though not an open-source one.
Let’s see what we can do.

Why darktable grain doesn’t feel right

I will use a test image to explain the problem.


A not-very-artistic photo of a puddle, without grain.

I applied grain to this photo using settings that roughly guarantee a similar amount of grain across the different programs. The settings are as follows:

  • darktable: roughness 6400 ISO; intensity 50%.
  • G’MIC: Kodak TRI-X 1600; grain merge; opacity 0.25; scale 100; other settings at their defaults.
  • Snapseed: grain 50; intensity 0; default preset.

The settings are a little bit extreme, to highlight the grain. The conclusions of the comparison don’t change much with lighter grain.


Crop of the puddle, with the grain.

In my opinion, the main problem with the darktable output is that the shadows are not very pleasing. It looks like the noise distribution of the grain is clipped and only the whiter part is present: something like salt particles scattered over a black background.
I think Snapseed gives the most pleasing grain, delivering a more natural and realistic result. G’MIC is also doing a good job, but I like it a little less.

Here is a crop of the darktable output, to better show the salty effect over the black area.

How nice grain should look

The grain in an analog photo comes from the irregular structure of the photosensitive material deposited on the film. When the exposure is transferred to the photographic paper, the noise is transferred differently at different lightness levels. Approaching black and white, the distribution of the noise collapses.

I think this scheme from ref. [1] is useful for understanding the problem:

Adapted from ref. [1]. The film grain histogram is amplified by the slope of the paper characteristic curve in the midtones, while it is severely compressed by the plateaus of the curve in the highlights and shadows.

In the cited paper the authors found that two characteristics of the grain are important for a good-looking result:

  • the distribution of the grain should be wider in the mid-tones and become narrower approaching white and black;
  • the distribution of the grain should be symmetrical in the mid-tones and skewed in the highlights and shadows.


Adapted from ref. [1]. The histograms of the noise in the dark regions are skewed so that the right arm of the distribution is broader than the left one. In the highlight regions it is the opposite.

For example, having a skewed histogram in a highlight region means that there are darker spots than a symmetrical distribution would predict. The grain appears less uniform, which helps with the organic and realistic feel. The next image should hopefully clarify the point. If not, some trust is needed.


Adapted from ref. [1].

Let’s calculate some histograms

I created a linear gradient from black to white, in order to deal with some numerical results and not only impressions. I applied grain with the same parameters reported above.

Here it is even more evident that Snapseed produces a smoother transition from the mid-tones to black and white. To draw a parallel, it resembles a stone column illuminated from one side by a soft light; it somehow delivers a three-dimensional feel.
G’MIC deals with the shadows better than darktable, but the highlights are still not very natural.

Using 4096×255 (height×width) images, I calculated the standard deviation of each column, producing the following plots. They correlate the amount of noise with the input value of the pixels.
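For reference, the per-column statistics boil down to something like this (a minimal C sketch; the image is assumed to be stored row-major as floats, and the loading code is omitted):

#include <math.h>
#include <stddef.h>

// standard deviation of each column of a grayscale height x width image;
// one value per input lightness level
void column_stddev(const float *img, size_t height, size_t width, float *stddev)
{
  for(size_t x = 0; x < width; x++)
  {
    double sum = 0.0, sum2 = 0.0;
    for(size_t y = 0; y < height; y++)
    {
      const double v = img[y * width + x];
      sum += v;
      sum2 += v * v;
    }
    const double mean = sum / height;
    stddev[x] = (float)sqrt(fmax(sum2 / height - mean * mean, 0.0));
  }
}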



From these plots it emerges that the Snapseed grain has a very round and symmetric trend. Confirming the preliminary impressions, the G’MIC output is similar to Snapseed, but the noise amount does not go to zero at the edges. darktable has the most peculiar trend, surprisingly showing correlated peaks.

The next plots show histograms of the noise for given input values, i.e. for given columns of the 4096×255 images.

darktable


The histograms are nearly unchanged across all the lightness input values. Near the edges at 0 and 255, the distributions are clipped.

G’MIC


The histograms are more peaked in the shadows and highlights, but they still look somewhat clipped.

Snapseed


The histograms are very similar to those in the paper, with smooth transitions and skewed, peaked distributions.

Improved grain in darktable

Now that we know the possible problems of grain, we can try to improve it. A possible solution is to use two instances of the grain module. One deals mainly with the mid-tones, producing symmetrically distributed grain. The second handles the skewing, modulating the transition towards the shadows and highlights.


An example implementation. Sorry for the non-English language of the interface.

In this example the instance “Grana” (Italian for “grain”) acts on the mid-tones through a parametric mask. The instance “Grana 1” acts on the darker and lighter parts of the image.


Gradient with the proposed double-instance grain.

And here are the standard deviations of the noise and the histograms:

darktable proposed

Finally, the grainy puddle.

Conclusion

I think darktable grain can be improved. Exploiting the main properties of natural, realistic grain, it is possible to get good results using two instances of the grain module.
A more elegant approach would require working under the hood to refine how grain is produced and added to the image.

Reference

[1] Kurihara, T. et al., Journal of Imaging Science and Technology 55(3): 030503-1–030503-9, 2011.


Nice. I did observe the ‘mid bias’ and implemented it in my little grain implementation. I did not know about the skewing. Interesting. :slight_smile:


There is some very nice research work from Alasdair Newson, who designed smart algorithms to simulate photographic grain. Worth a look: Alasdair Newson - Film Grain Rendering


@David_Tschumperle Interesting… this looks somewhat similar to the approach I described in my blog a couple of years ago: http://photoflowblog.blogspot.fr/2014/12/resurrecting-old-idea.html

The corresponding code is not yet published as open source, but I was planning to clean it up and put it on github for anyone who could be interested.

My statistical approach has the disadvantage of being very slow, as the image is re-built from scratch grain by grain. I’d be curious to see whether the algorithm proposed in this paper gives similar results with better performance…


That’s an impressive writeup and piece of research. We know that darktable’s grain module is far from perfect. Since you seem to understand the math already (which I don’t), would you be interested in helping us make the module better? It requires some knowledge of C programming; the darktable-specific parts shouldn’t be much of an issue, as we can help you with that. If you feel that you are up to the task and interested, we would love to see you join us on IRC on Freenode, channel #darktable. There is also a web chat client available: Kiwi IRC


@arctic - very nice article. Hopefully you will find time and a good mood to take the offer …


@Jonas_Wagner nice little tool the Film Emulator!

@David_Tschumperle I had a quick look at the paper and the video slides… it looks very interesting, especially the resolution-independence point. Thank you for sharing. Do you have any idea about the performance of the algorithm?

@Carmelo_DrRaw your code produces a really nice grain. What do you mean by very slow?

@houz I would be happy to help! I don’t know if I’m really up to the task… I can start by having a look at the code and refreshing my rusty C skills. For now, free time is mostly absorbed by my PhD thesis, but this could be a nice way to play around in the evenings. I will dig into the problem… In my opinion darktable needs an efficient algorithm with pleasing overall results, maybe sacrificing a bit of low-level realism in the grain simulation.

Good to hear. Maybe a simple “midtone bias” slider in the current module would be enough? For old history stacks it would default to 0, to keep the same look as before; new ones could get a more pleasing default.

About the comparison you did, it looks to me as if Snapseed (and G’MIC to some extent) added bigger noise blotches than darktable.

I think it is quite slow indeed, due to the Monte-Carlo sampling.

Some updates.
I’m a bit slow at understanding darktable’s computations from the code, so for now I have played a bit in MATLAB. But I definitely want to challenge myself with real code.

I don’t know if I’m missing something fundamental, so I’ll write down a few notes. Criticize freely.

Adding noise using the response curve of the photographic paper

I started by slightly modifying a logistic sigmoid function f(x), which represents the typical shape of the response curve of photographic paper,

f(x) = \frac{1 + 2\delta}{1 + e^{4\gamma\,(1/2 - x)}} - \delta,

here x is the (normalized) logarithm of the exposure of the photographic paper, gamma is the slope in the midtones, and delta is a small shift. The values of f(x) represent a hypothetical density impressed on the paper.

Here is an example of a response curve.

Let’s assume that the grain is present uniformly on the photographic film. The light filtering through the film will then be affected uniformly, and we need to add the noise to the logarithm of the exposure.

In order to add the noise, we must convert the lightness channel of the photo (which I assume proportional to the density impressed on the photographic paper) into the originating log-exposure values. We can use the inverse of f(x), defined as

g(L) = f^{-1}(L) = \frac{1}{2} - \frac{1}{4\gamma}\,\ln\!\left(\frac{1 + 2\delta}{L + \delta} - 1\right).

Taking the lightness channel (L) defined between 0 and 1, gamma equal to 1, and delta equal to 0.005, we have

g(L) = \frac{1}{2} - \frac{1}{4}\,\ln\!\left(\frac{1.01}{L + 0.005} - 1\right).

Now we can add the grain noise (n) to this function,

x_{\text{noisy}} = g(L) + n,

and then convert the noisy exposure into the noisy density on the photographic paper using the function f,

L_{\text{noisy}} = f\left(g(L) + n\right).

We have obtained grain with a midtone bias and skewed noise distributions in the highlights and shadows. The average values of lightness are unaffected.

The lightness channel (L) of the photograph should be edited according to the following equation,

L_d = L + k\,\left[f\left(g(L) + n\right) - L\right],

where k is a parameter to tune the amount of grain. The procedure leaves the average picture unaffected, but distorts and stretches the noise, adapting its characteristics to the specific lightness value.
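Putting the whole chain together, here is a minimal C sketch (the function names are mine; L is assumed in [0, 1] and n is a small Gaussian sample):

#include <math.h>

// paper response curve: x is the normalized log-exposure, gamma the midtone
// slope, delta the small shift that avoids infinities at pure black and white
static float paper_response(float x, float gamma, float delta)
{
  return (1.0f + 2.0f * delta) / (1.0f + expf(4.0f * gamma * (0.5f - x))) - delta;
}

// inverse curve: from density/lightness back to normalized log-exposure
static float paper_response_inv(float L, float gamma, float delta)
{
  return 0.5f - logf((1.0f + 2.0f * delta) / (L + delta) - 1.0f) / (4.0f * gamma);
}

// Ld = L + k * (f(g(L) + n) - L): move to the exposure domain, add the
// grain n there, re-develop, and blend with strength k
static float add_grain(float L, float n, float k, float gamma, float delta)
{
  const float noisy = paper_response(paper_response_inv(L, gamma, delta) + n, gamma, delta);
  return L + k * (noisy - L);
}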

A quick look at the grain

Using the final equation, we can apply some grain and see how it looks. At this stage I’m using Gaussian random noise:

To me it looks quite similar to the Snapseed result.

The skewed distributions in the highlights/shadows are more evident when looking at the grain alone, after subtracting the linear trend:

On the left of the image there are more white dots, and on the right more black dots.

The statistical properties of the noise are compelling.

Example with Fourier filtered grain

We can Fourier-filter the Gaussian noise to mimic the spatial properties of real grain, and then apply the photographic response curve. In ref. [1] a simple model of the power spectrum in spatial frequency is used: a sum of two Gaussian functions associated with two typical film grain sizes.
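To sketch the idea in code (sigma1, sigma2 and the weight a below are placeholders of mine, not the fitted values from ref. [1]): white Gaussian noise is shaped by multiplying its spectrum by the square root of the modeled power spectrum.

#include <math.h>

// two-Gaussian model of the grain noise-power spectrum, one term per
// typical grain size; f is the radial spatial frequency
static float grain_power_spectrum(float f, float sigma1, float sigma2, float a)
{
  return expf(-0.5f * f * f / (sigma1 * sigma1))
         + a * expf(-0.5f * f * f / (sigma2 * sigma2));
}

// amplitude filter to apply to the FFT of white Gaussian noise:
// the square root of the power spectrum
static float grain_filter(float fx, float fy, float sigma1, float sigma2, float a)
{
  const float f = sqrtf(fx * fx + fy * fy);
  return sqrtf(grain_power_spectrum(f, sigma1, sigma2, a));
}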

Here is an example using a real image and Fourier-filtered grain.

@houz: With Fourier filtering it is easy to achieve large blotches, but I have some concerns about performance. Am I right?

Reference

[1] Stephenson, I., Saunders, A., Simulating Film Grain using the Noise-Power Spectrum, EG UK Theory and Practice of Computer Graphics (2007).


Does your function g(x) imply that as we vary delta (instead of fixing it at 0.005) we will get more or less grain? If so, then we can control how much grain is added.

That looks quite nice already. I am not sure, however, if all those steps of computing f and g are required, as the final curve seems to be linear with noise added, weighted by the lightness of the pixel. So maybe it’s possible to cheat and at least get rid of all the expensive log and exp calls. Or alternatively pre-compute that into a LUT.

About the Fourier transform, it can be a bit slow indeed. @hanatos might have more clues about that part.


just so i understand this correctly. you’re applying the grain in `linear’ and then apply the paper response curve. since it’s already kindof applied, you fake-undo it, apply the noise, and then apply it back. so is all you’re saying we should apply grain before base/tone curves so the user may even have an influence on the type of compression it will get due to the curve transform?

as far as fourier goes: we’d need to link fftw i suppose.

also, since we’re going overboard simulating grain, did you see this:

https://hal.archives-ouvertes.fr/hal-01358392/document

does the grain they produce look good to you?

…maybe to add to this. using some kind of deterministic perlin noise in the gradient domain will already bandlimit your noise pattern.

if you need finer control over the spectral properties of your noise pattern, you can use gabor noise:

http://graphics.cs.kuleuven.be/publications/LLDD09PNSGC/

or for the special case of (anisotropic) gaussian distributions, you may look into this:

https://drive.google.com/file/d/0BzvWIdpUpRx_U1NOUjlINmljQzg/view

btw these guys say that using a wet mount scanner will make your noise appear much softer:

http://cool.conservation-us.org/coolaic/sg/emg/library/pdf/vitale/2007-04-vitale-filmgrain_resolution.pdf

@shreedhar I added delta in order to avoid infinite values in full blacks and whites. Changing delta primarily influences the midtones bias of the grain, but also, as you said, the amount of grain. The two behaviors can easily be disentangled by changing the function, so that delta controls only the midtones bias.

@houz The possibility of baking the response curve into a LUT is interesting; a 2D LUT, right?

@hanatos you understood correctly. I think grain should be added before applying a kind of response curve that mimics the real photographic process. I’m not sure that applying the grain before the base/tone curve in darktable would give the same result. Maybe a dedicated editable curve just for the grain is too much control?

The grain from this paper looks fantastic and the simulation process they use is even more fascinating.

I don’t quite understand this sentence, probably because I don’t know exactly what the gradient domain is… sorry.

Summarizing:

  • it could be nice to fine-tune the spectrum of the grain; this would allow more control over the size and properties of the grain… the procedural methods based on Gabor noise look smart! I saw that on the paper page of “Procedural Noise using Sparse Gabor Convolution” there is some example code in C++…
  • the grain must be added following a response curve, in order to fake the real photographic process. Is the LUT route the most efficient way to go?


sorry, should have given a reference or so. i assumed perlin noise to be one of the super basic universally well known things. you can find tons of references and tutorials based on this, for which ken perlin got a little oscar:

http://mrl.nyu.edu/~perlin/doc/oscar.html

the grain in darktable is based on a similar mechanism (i.e. bandlimited and deterministic already).
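for illustration, a minimal sketch of 2d gradient noise (the hash and constants are arbitrary picks, not darktable's actual code):

#include <math.h>
#include <stdint.h>

// integer lattice hash -> deterministic pseudo-random gradient angle
static uint32_t hash2(int32_t x, int32_t y)
{
  uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
  h = (h ^ (h >> 13)) * 1274126177u;
  return h ^ (h >> 16);
}

// dot product of the lattice gradient with the offset (dx, dy)
static float grad_dot(int32_t ix, int32_t iy, float dx, float dy)
{
  const float angle = (float)hash2(ix, iy) * (6.2831853f / 4294967296.0f);
  return cosf(angle) * dx + sinf(angle) * dy;
}

// perlin's quintic fade curve
static float fade(float t) { return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f); }

// 2d gradient noise: deterministic, zero mean, bandlimited around
// one cycle per lattice cell
static float gradient_noise(float x, float y)
{
  const int32_t x0 = (int32_t)floorf(x), y0 = (int32_t)floorf(y);
  const float dx = x - (float)x0, dy = y - (float)y0;
  const float u = fade(dx), v = fade(dy);
  const float n00 = grad_dot(x0, y0, dx, dy);
  const float n10 = grad_dot(x0 + 1, y0, dx - 1.0f, dy);
  const float n01 = grad_dot(x0, y0 + 1, dx, dy - 1.0f);
  const float n11 = grad_dot(x0 + 1, y0 + 1, dx - 1.0f, dy - 1.0f);
  const float nx0 = n00 + u * (n10 - n00);
  const float nx1 = n01 + u * (n11 - n01);
  return nx0 + v * (nx1 - nx0);
}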

No, I’d use a 1D LUT that maps lightness of the pixel/grain region to weight. Then you can just look up the resulting factor for your noise when processing the image instead of computing the same thing over and over again. Of course, if moving the whole thing to the linear part of the pipeline would work then everything would become much cheaper and a LUT might not be needed at all. I’ll leave it to @hanatos to think about those things though. I am more familiar with other things in darktable. :slight_smile:
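A minimal sketch of that idea (the weight curve is just a placeholder to show the mechanics):

#define WEIGHT_LUT_SIZE 256

static float weight_lut[WEIGHT_LUT_SIZE];

// precompute lightness -> grain weight once per parameter change
static void build_weight_lut(void)
{
  for(int i = 0; i < WEIGHT_LUT_SIZE; i++)
  {
    const float L = (float)i / (WEIGHT_LUT_SIZE - 1);
    // placeholder midtone-biased weight, peaking at L = 0.5
    weight_lut[i] = 4.0f * L * (1.0f - L);
  }
}

// linearly interpolated lookup, L in [0, 1]
static float weight_lookup(float L)
{
  const float pos = L * (WEIGHT_LUT_SIZE - 1);
  int i0 = (int)pos;
  if(i0 > WEIGHT_LUT_SIZE - 2) i0 = WEIGHT_LUT_SIZE - 2;
  if(i0 < 0) i0 = 0;
  const float t = pos - i0;
  return weight_lut[i0] * (1.0f - t) + weight_lut[i0 + 1] * t;
}

// usage: out = in + noise * strength * weight_lookup(in);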

@hanatos thank you for the reference about perlin noise. Now the darktable code from grain.c is a little bit less esoteric…

@houz in my opinion not only the weight of the grain should be addressed, but also the shape of the distribution. This is the reason I was thinking about a 2D LUT. I made some figures.

Thanks to the comment of @shreedhar, I slightly modified the two equations of the first post in order to use delta as a midtones bias parameter for the grain. Essentially I’m stretching the x axis in order to keep the slope of the sigmoid curve constant.

If we want to incorporate the fake photographic development of the grain into a 2D LUT, I would do something like this:

f(x) = \frac{1 + 2\delta}{1 + \exp\!\left(\frac{4\gamma\,(1/2 - x)}{1 + 2\delta}\right)} - \delta, \qquad f^{-1}(L) = \frac{1}{2} - \frac{1 + 2\delta}{4\gamma}\,\ln\!\left(\frac{1 + 2\delta}{L + \delta} - 1\right),

G_d(G_u, L) = f\!\left(G_u + f^{-1}(L)\right) - L, \qquad L_d = L + k\,G_d(G_u, L),

where Gu and Gd are the “undeveloped” and “developed” grain, and L and Ld are the lightness channel before and after the grain addition.

In this way, the greater the value of delta, the smaller the midtones bias of the grain. The amount of grain in the midtones does not change, thanks to the added stretching of the x axis.

Assuming possible grain values between −0.5 and 0.5, the 256×256 LUT for Gd looks like this:

with delta = 0.005, 0.1 and 2, and gamma = 1.
This verifies that delta could be used as the midtones bias slider.

Changing gamma, the slope of the response of the photographic paper, the LUT becomes more asymmetrical:

Here delta = 0.005 and gamma = 1, 1.5 and 2.

If I understood correctly (from https://www.mochima.com/articles/LUT/LUT.html), the LUT array must be accessed using integer indexes, so Gu and L must be used as the indexes. The LUT can be computed at the beginning of the grain routine, and updated from the slider callbacks.
As a first step, Gu could be replaced with the grain currently generated by darktable.
I’m not sure the indexing part is feasible for accessing the LUT. What do you think?

Update: the Gu axis in the figures is flipped; for the exact formulas, see the code in the following posts.


In the simplest linear interpolation case a LUT works like this: you tell the lookup function the coordinates as a float. The lookup code then takes the nearest control points stored in the LUT, smaller and bigger than your float coordinate, and interpolates between the stored values according to how far away your float coordinate is. The two dimensions can be interpolated independently.

In C that can look like this, assuming a square LUT. x and y are in 0…1

float dt_lut_lookup_2d_1c(const float *const lut, const size_t size, const float x, const float y)
{
  const float _x = CLAMPS(x * (size - 1), 0, size - 1);
  const float _y = CLAMPS(y * (size - 1), 0, size - 1);

  const int _x0 = _x < size - 2 ? _x : size - 2;
  const int _y0 = _y < size - 2 ? _y : size - 2;

  const int _x1 = _x0 + 1;
  const int _y1 = _y0 + 1;

  const float x_diff = _x - _x0;
  const float y_diff = _y - _y0;

  const float l00 = lut[_y0 * size + _x0];
  const float l01 = lut[_y0 * size + _x1];
  const float l10 = lut[_y1 * size + _x0];
  const float l11 = lut[_y1 * size + _x1];

  // interpolate along x on both rows, then along y
  const float xy0 = (1.0f - x_diff) * l00 + l01 * x_diff;
  const float xy1 = (1.0f - x_diff) * l10 + l11 * x_diff;

  return xy0 * (1.0f - y_diff) + xy1 * y_diff;
}

So, I worked a bit on the darktable code and managed to bring out something that adds the grain through the LUT…
Here is some output directly from darktable, using 6400 ISO and 100% strength.

On the left is the old darktable output and on the right the modified one. There are three versions, with contrast 0, 0.5 and 1 applied from the contrast-lightness-saturation module.

Here is the code added to the grain.c file.

#define LUT_SIZE 128
#define MAX_DELTA 2
#define MIN_DELTA 0.005

...

// response curve of the photographic paper: maps normalized log-exposure to
// density/lightness. mb in [0, 1] is the midtone bias, gp the paper gamma.
static float paper_resp(float exposure, float mb, float gp)
{
  // mb = 1 -> delta = MIN_DELTA (strong midtone bias),
  // mb = 0 -> delta = MAX_DELTA (almost linear response)
  const float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  return (1.0f + 2.0f * delta) / (1.0f + expf((4.0f * gp * (0.5f - exposure)) / (1.0f + 2.0f * delta))) - delta;
}

// inverse of paper_resp: maps density/lightness back to log-exposure
static float paper_resp_inverse(float density, float mb, float gp)
{
  const float delta = -(MAX_DELTA - MIN_DELTA) * mb + MAX_DELTA;
  return -logf((1.0f + 2.0f * delta) / (density + delta) - 1.0f) * (1.0f + 2.0f * delta) / (4.0f * gp) + 0.5f;
}

static float midtone_bias = 1.0;
static float gamma_paper = 1.0;
static float grain_lut[LUT_SIZE*LUT_SIZE];

// precompute Gd(Gu, L) = f(Gu + f^-1(L)) - L on a LUT_SIZE x LUT_SIZE grid;
// i indexes the undeveloped grain Gu in [-0.5, 0.5], j the lightness L in [0, 1]
static void evaluate_grain_lut(const float mb, const float gp)
{
  for(int i = 0; i < LUT_SIZE; i++)
  {
    for(int j = 0; j < LUT_SIZE; j++)
    {
      const float gu = (float)i / (LUT_SIZE - 1) - 0.5f;
      const float l = (float)j / (LUT_SIZE - 1);
      grain_lut[j * LUT_SIZE + i] = paper_resp(gu + paper_resp_inverse(l, mb, gp), mb, gp) - l;
    }
  }
}

// bilinear lookup into grain_lut; x is the grain value in [-0.5, 0.5],
// y the lightness in [0, 1]
float dt_lut_lookup_2d_1c(const float x, const float y)
{
  const float _x = CLAMPS((x + 0.5) * (LUT_SIZE - 1), 0, LUT_SIZE - 1);
  const float _y = CLAMPS(y * (LUT_SIZE - 1), 0, LUT_SIZE - 1);

  const int _x0 = _x < LUT_SIZE - 2 ? _x : LUT_SIZE - 2;
  const int _y0 = _y < LUT_SIZE - 2 ? _y : LUT_SIZE - 2;

  const int _x1 = _x0 + 1;
  const int _y1 = _y0 + 1;

  const float x_diff = _x - _x0;
  const float y_diff = _y - _y0;

  const float l00 = grain_lut[_y0 * LUT_SIZE + _x0];
  const float l01 = grain_lut[_y0 * LUT_SIZE + _x1];
  const float l10 = grain_lut[_y1 * LUT_SIZE + _x0];
  const float l11 = grain_lut[_y1 * LUT_SIZE + _x1];

  const float xy0 = (1.0 - y_diff) * l00 + l10 * y_diff;
  const float xy1 = (1.0 - y_diff) * l01 + l11 * y_diff;
  return xy0 * (1.0f - x_diff) + xy1 * x_diff;
}

and into the process function:

evaluate_grain_lut(midtone_bias, gamma_paper);
...
out[0] = in[0] + 100 * dt_lut_lookup_2d_1c(noise * strength * GRAIN_LIGHTNESS_STRENGTH_SCALE, in[0] / 100);
...