Spectral film simulations from scratch

TL;DR
I’m exploring simulations of the fully analog color photography process (negative + print) using only published datasheets and basic principles. The goal is to recreate signature looks from Kodak and Fujifilm using a physically based model (with spectral calculations, grain, couplers, halation, etc.) that offers tunability beyond standard LUTs. More details and code are available on GitHub (agx-emulsion); all results here are with v0.1.0.

The true color of film negatives

A while back, I came across an online discussion about the “real colors of film negatives”. Although I can’t recall the exact source, the key takeaway was that the final colors depend heavily on the second stage of the imaging process, whether it’s the scanner’s color processing or the analog RA4 color printing process. Analog printing seemed like the most authentic way to define the look, especially since companies (primarily Kodak) spent decades refining it.

This idea led me to explore simulating the full analog pipeline of color photography. I am clearly not an expert in darkroom techniques or color science, and initially, I grossly underestimated the challenge. Luckily, I found a few nice book chapters [1,2,3]. Film emulsions are quite sophisticated, relying on finely tuned chemistry with silver halides, several dye couplers, and a pinch of magic. As a trained chemist I have a deep admiration for all the science and engineering needed to make film. For anyone interested in film manufacturing, I highly recommend checking out the series of videos by SmarterEveryDay on Kodak ( How Does Kodak Make Film? series of 3, The Chemistry of Kodak Film, Kodak’s Film Quality Control Process).

Goal and motivation

My goal is to simulate the entire analog photographic process, from film capture to the final print, using only the datasheets and basic knowledge. I would like to capture the look of products from Kodak and Fujifilm starting from publicly available spectroscopic data. For example, Portra film and its matching paper are designed to deliver subtle hue shifts and perfect contrast for skin tones, while consumer films and paper are more saturated and versatile. How much of these characteristics can we recreate from scratch?

While film simulation LUTs share similar goals, they often lack the flexibility to be fine-tuned. In contrast, a fully physically based pipeline can better reproduce the real-world versatility of the negative plus RA4 printing process by offering adjustable parameters to tailor the final look. Naturally, this approach also brings along the inherent limitations of analog photography, so you need to appreciate (or be nostalgic for) the analog process to embrace these constraints.

Negative and print exposure

Here are some test strips to introduce the capabilities of the simulation. The overall imaging process is split into two steps: negative and print. Two different exposures can be controlled, and color filters in the enlarger can balance the colors of the print. Here are virtual scans of Kodak Gold 200 at different exposure compensations of the negative.

The following strips are virtual prints on Fujifilm Crystal Archive TypeII at different print exposures (and constant good negative exposure).


Raw file taken from this Play Raw Two Taiwanese uncles playing chess, thank you @streetfighter.

The challenge of using datasheets

Published data are measured with densitometers (RGB or diffuse), and they need to be unmixed to recover the density developed in each channel independently. I could go deeper into what I am doing if anyone is interested. It is not very complex, but I should write down some formalism and math. Most of the time the data is not self-consistent after “the unmixing” and I need to apply some reasonable corrections. I am assuming that the film should be able to reproduce a neutral-ish 18% gray when printed, even when under- or overexposed, at constant enlarger filter values. So far, Kodak data mostly behaves well by default, while Fujifilm data is trickier and often requires additional corrections.
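To make the unmixing idea concrete, here is a minimal sketch of the kind of linear unmixing I mean, assuming the measured RGB densities are a fixed linear mixture of the three dye densities. The crosstalk matrix values below are made up for illustration, not taken from any datasheet.

```python
import numpy as np

# Hypothetical crosstalk matrix: row i gives how much each dye (C, M, Y)
# contributes to the density read by densitometer channel i (R, G, B).
# Off-diagonal terms model the unwanted secondary absorptions of the dyes.
# All values are illustrative, not from a datasheet.
CROSSTALK = np.array([
    [1.00, 0.08, 0.03],   # red-channel reading
    [0.12, 1.00, 0.10],   # green-channel reading
    [0.04, 0.22, 1.00],   # blue-channel reading
])

def unmix_densities(measured_rgb):
    """Recover per-dye densities from RGB densitometer readings."""
    return np.linalg.solve(CROSSTALK, measured_rgb)

# Round trip: mixing the recovered dye densities reproduces the measurement.
dye_cmy = unmix_densities(np.array([0.85, 0.92, 1.10]))
assert np.allclose(CROSSTALK @ dye_cmy, [0.85, 0.92, 1.10])
```

In practice the matrix itself has to be estimated from the dye spectra and the densitometer responsivities, which is where the self-consistency problems show up.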

Here are examples of virtual prints of neutral gradients shot at different exposures and compensated in the virtual printing process to produce roughly the same brightness. First, let’s analyze Kodak Portra 400, without corrections and after the unmixing. It looks neutral, as it should, with just a touch of warmer tones when overexposed.

Below is Fujifilm Pro 400h after the unmixing. It has strong hue shifts and is not really usable in this state. Maybe additional calibrations are needed that are not specified in the datasheet?


After correction of the density characteristic curves, the gradient at base exposure is quite neutral. Still, small shifts are visible at extreme over-/underexposure.

Preliminary results

Since analog film is designed to work well on skin tones and nature-greenery, I picked some colorful portraits from signatureedits.com/free-raw-photos for showcase (file names of the “default” darktable images have full credits).

Kodak Portra 400 vs Fujifilm Pro 400h


From left to right:
(i) image exported with darktable using sigmoid with contrast set to 2, xmp (13.7 KB)
(ii) Kodak Portra 400 and Kodak Portra Endura print paper
(iii) Fujifilm Pro 400h and Kodak Portra Endura print paper
Some settings: -4Y and 7M enlarger filters, 0.9 print exposure. The input to the simulation is a 16-bit PNG from darktable with the same settings as the XMP file, but with sigmoid deactivated and exposure reduced to avoid clipping.
Overall Pro 400h seems to have cooler greens and a little more contrast than Portra 400.


From left to right:
(i) image exported with darktable using sigmoid with contrast set to 2, xmp (9.7 KB)
(ii) Kodak Portra 400 and Kodak Portra Endura print paper
(iii) Fujifilm Pro 400h and Kodak Portra Endura print paper
Some settings: -3Y and 15M enlarger filter for Portra 400, and 0Y -7M for Pro 400h, 1.4 print exposure.


From left to right:
(i) image exported with darktable using sigmoid with contrast set to 2, xmp (9.6 KB)
(ii) Kodak Portra 400 and Kodak Portra Endura print paper
(iii) Fujifilm Pro 400h and Kodak Portra Endura print paper
Some settings: -6Y and 10M enlarger filter, 1.0 print exposure.
Blue colors in Pro 400h have a cooler tone when compared with Portra 400.

Color checker comparisons (Kodak Portra 400 vs Fujifilm Pro 400h)



In the ColorCheckers, the outer squares show the sRGB input (scene-referred) while the inner squares are simulated prints. Print exposure is approximately balanced for the Neutral 5 patch.

Consumer print papers


From left to right:
(i) Kodak Portra 400 and Fujifilm Crystal Archive TypeII print paper (gamma factor 1.1)
(ii) Kodak Portra 400 and Kodak Ektacolor Edge print paper
(iii) Kodak Portra 400 and Kodak Endura Premier print paper
Keep in mind that the saturation level is arbitrarily guessed and could be globally reduced in all the prints by reducing the concentration of DIR couplers in the film.

Color checker comparisons (consumer print papers)




The outer squares show the sRGB input (scene-referred) while the inner squares are simulated prints. Print exposure is approximately balanced for the Neutral 5 patch.

Other film stocks



From left to right, top to bottom:
(i) image exported with darktable using sigmoid with contrast set to 2, xmp (8.2 KB)
(ii) Kodak Portra 400
(iii) Fujifilm Pro 400h
(iv) Kodak Vision3 50d
(v) Kodak Gold 200
(vi) Fujifilm C200
All printed on Fujifilm Crystal Archive TypeII, with -1Y 1M enlarger filters, 1.1 print gamma factor, 1.05 print exposure.

Color checker comparisons (many negatives on Fujifilm Crystal Archive TypeII)






The outer squares show the sRGB input (scene-referred) while the inner squares are simulated prints. Print exposure is approximately balanced for the Neutral 5 patch.

Kodak Portra 400 and Gold 200 have a similar identity, but Portra has more pastel colors. Vision3 50d is more neutral and flat. Pro 400h and C200 are also similar to each other, and more saturated compared to the Kodak stocks.

More results are in my recent Play Raw history (Profile - arctic - discuss.pixls.us), not all of them of decent quality. Most of the progress happened over the holiday break, so earlier stuff might look quite funky. Below are a couple of additional comparisons with darktable base edits (again with sigmoid and contrast set to 2).



The top one is the output of darktable with sigmoid, the bottom one is the simulation with Fujifilm C200 and Kodak Supra Endura paper, +1 stop exposure compensation, 0.9 print exposure, -4Y 2M filters.



Top darktable output, bottom simulation with Kodak Portra 400 and Fujifilm Crystal Archive TypeII, +1 stop exposure compensation, 0.65 print exposure, -3Y -4M filters.

Grain

The simulation builds three sublayers for each channel, imitating modern color negative films, where each color layer is composed of 2-3 sublayers with different sensitivities to increase latitude. The stochastic properties of each layer and sublayer are imitated, taking into account that faster layers are noisier, i.e. they have larger particles.


Above are a few strips of Kodak Portra 400 printed on Kodak Portra Endura, with a vertical size of 1 mm. The average area of the virtual silver halide particles (later converted into dye clouds) is varied. To a first approximation, the area of the particles should be roughly proportional to the ISO. In consumer films, particles are in the range of 0.2-2 micrometers in diameter, i.e. 0.03-3.2 square micrometers in area.


On the left is the only data of its kind I could find, from Kodak Vision3 50d (also available for the other Vision3 stocks). On the right is the same data virtually measured from the simulation of the same film stock, with grain parameters tuned accordingly. It is computed by processing a virtual photo of a neutral gradient; the standard deviation at each exposure is then evaluated and plotted. From the peaks you can roughly see the sublayer structure of every channel.
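As a sketch of that virtual measurement (function names and all numbers below are mine, not from the agx-emulsion code): bin the columns of a noisy gradient strip by exposure and take the standard deviation inside each bin.

```python
import numpy as np

rng = np.random.default_rng(0)

def granularity_curve(strip, n_bins=64):
    """Std of density inside each exposure bin of a horizontal-gradient strip."""
    _, w = strip.shape
    edges = np.linspace(0, w, n_bins + 1, dtype=int)
    return np.array([strip[:, a:b].std()
                     for a, b in zip(edges[:-1], edges[1:])])

# Synthetic strip whose noise grows with exposure, loosely mimicking the
# idea that noisier sublayers dominate at higher densities:
x = np.linspace(0.0, 1.0, 512)
strip = x + rng.normal(0.0, 0.01 + 0.05 * x, size=(256, 512))
sigma = granularity_curve(strip)
assert sigma[-1] > sigma[0]   # noise increases along the gradient
```

Running the same measurement on the simulated film instead of a synthetic strip gives the right-hand plot above.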

Here is an example with higher-magnification crops of Kodak Portra 400 printed on Kodak Portra Endura.

On the left is the print, and on the right the virtual scan of the negative. At high magnification, isolated dye clouds become visible. The highest-magnification crop has a size of 0.35x0.35 mm and would correspond to an image of 5.4 gigapixels. I guess we could print a very large poster with it.

Saturation with DIR couplers

The level of saturation of the negatives is controlled via development inhibitor releasing couplers (DIR couplers). When substantial density is formed in one layer, DIR couplers are released and can inhibit the formation of density in nearby regions, both in the same layer and in nearby layers. The diffusion of DIR couplers into nearby layers produces increased saturation (loss of density in the other channels, i.e. purer colors), also referred to as interlayer effects.
Here are a couple of examples from signatureedits.com raw files, using Fujifilm C200 and Fujifilm Crystal Archive TypeII.


Exposure compensation +1 stop, 0.65 print exposure, 0Y 15M filter shifts. Fujifilm negatives tend to give very saturated reds, especially at higher DIR coupler amounts. Values that I found reasonable are in the range 0.8-1.2.


Exposure compensation +2 stops, 0.6 print exposure, 0Y 0M filter shifts.



In this desert photo example, the image above has no DIR couplers, while the image below has a DIR coupler amount of 1.0. Using Kodak Gold 200, Kodak Endura Premier, and 0.9 print exposure.

Halation

Having access to a physically based model, we can insert halation as a blur at the right stage of the pipeline. Usually the red channel is the most affected, because its layer sits at the back of the film stack: some light goes through the emulsion layers and through the support material, gets reflected back, is blurred, and exposes the emulsion again. Adding, for example, 3% red, 0.3% green, and 0.1% blue blurred halation light with a sigma of 200 micrometers gives the following result.
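In sketch form, the halation step could look like the following (a numpy-only FFT blur; the function names, the periodic boundary handling, and the exact place in the pipeline are my simplifications, not the project's implementation):

```python
import numpy as np

def gaussian_blur(channel, sigma_px):
    """Gaussian blur via FFT (numpy only, periodic boundaries)."""
    fy = np.fft.fftfreq(channel.shape[0])[:, None]
    fx = np.fft.fftfreq(channel.shape[1])[None, :]
    transfer = np.exp(-2.0 * (np.pi * sigma_px) ** 2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(channel) * transfer).real

def add_halation(raw, pixel_size_um=10.0,
                 strengths=(0.03, 0.003, 0.001), sigma_um=200.0):
    """Add back-reflected, blurred light to the linear exposure of each
    channel before development. strengths = fraction of light reflected by
    the base for (R, G, B); red is largest since its layer sits at the back."""
    sigma_px = sigma_um / pixel_size_um
    out = raw.astype(float)
    for c, s in enumerate(strengths):
        out[..., c] += s * gaussian_blur(raw[..., c], sigma_px)
    return out

# Sanity check: a uniform field just gains the halation fraction per channel.
halated = add_halation(np.ones((64, 64, 3)))
assert np.allclose(halated[..., 0], 1.03)
```

The key point is that the blur acts on linear exposure, before the characteristic curves, which is what produces the glowing halos around blown highlights.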


In this test image, each dot has an exposure increased by 1 stop over the previous one, reaching 14 stops in total at the last dot on the right. The long edge of this picture is 35 mm.



In this example, the top has no halation while the bottom has 8% halation of the red channel. Simulation with Kodak Vision3 to imitate CineStill, printed on Fujifilm Crystal Archive TypeII, enlarger filters -4Y 5M, +3 stops exposure compensation, and 0.4 print exposure. Raw file from signatureedits.com.


Another example in which halation is more subtle, from the Play Raw Nice day for a nap under a tree, thanks @lphilpot. Notice the warm halos through the backlit branches. The right image uses 3% red-light halation. Kodak Gold 200 and Fujifilm Crystal Archive TypeII, +2 stops, 0.4 print exposure.

Wanna try it?

You can find more technical info in the GitHub repository agx-emulsion. If you feel adventurous you can install it. Just keep in mind that I am more of a scientist than a developer, so don’t expect too much. I think of this project as an exploration of the film simulation model; the code is still quite messy, not production code by any means. All the photos here were created with version v0.1.0.

Some Issues

  • The conversion from RGB to spectral at the very beginning of the pipeline uses [4], which is very simple but requires the input image to be converted to sRGB. I am pretty sure there are better ways to handle this. If anyone has any input it would be super appreciated. @hanatos you have some papers on the topic if I am not wrong :grin:

  • It is written in Python and quite slow (many seconds per 2K image). The temporary GUI is not color managed and is just a placeholder for interaction at this stage. Plus, it has a lot of parameters and might be very confusing.

  • Your opinion on the results is probably what matters the most. Given the data I used, I would guess that the simulation is something like 60-85% accurate or more, which doesn’t say much. :smiley: Any suggestion on how to compare results to real life is welcome. Any expert in film colors that can judge by eye? :nerd_face:

Ultimately, I aim to finalize the model and its profiles, and later, with some help, have it running as efficient GPU code, like in vkdt. I will use this thread as a log book to report updates, and hopefully I will manage to keep myself motivated. Possibly I would like to turn this into a scientific publication/presentation if it is novel enough.

References

[1] Giorgianni, Madden, Digital Color Management, 2nd edition, 2008 Wiley
[2] Hunt, The Reproduction of Colour, 6th edition, 2004 Wiley
[3] Jacobson, Ray, Attridge, Axford, The Manual of Photography, 9th edition, 2000 Focal Press
[4] Mallett, Yuksel, Spectral Primary Decomposition for Rendering with sRGB Reflectance, Eurographics Symposium on Rendering - DL-only and Industry Track, 2019, doi:10.2312/SR.20191216

44 Likes

This is fascinating! Thank you so much for this writeup and sharing your code!

I won’t pretend to understand the chemistry at all. But already your hints about colored grading curves and DIR couplers provide some delicious food for thought that I’ll definitely look into.

3 Likes

whoa really cool! thanks for releasing this amazing body of work and the writeup! again, there’s a quality to the images you present here that i haven’t seen in digital processes before, shows a whole new level of respect for the subject i think. will look into it in more detail and am certainly very motivated to port your code to the gpu :smiley:

will have many questions i think… already wondering how you’d get away with sRGB spectral upsampling…

4 Likes

Also masking couplers are quite an interesting part of how modern color film works. I was always puzzled by the intense color of the base of unexposed developed film. It turns out it is not simply a byproduct of the chemistry; it has a functional role. It is a color mask that loses density with increasing exposure, to balance the unwanted absorptions of the main CMY dyes formed in the layers. This effectively adds a sort of “negative absorption” to the dyes (when compared to the base).


This is from an excerpt that I found by googling “masking couplers”, which explains it better than other sources; Hunt’s book images are a bit more convoluted. Apparently it is from this forum too (pdf link), but I am not sure from which discussion.

4 Likes

Sure I would love to discuss this more! :slight_smile:

Film layer sensitivities are quite spectrally broad and spectrally separated. This might be part of the reason the input is not so critical. But I am still quite unaware of the nuances of this step.

I believe that using a fully absorptive spectral pipeline and saturation boost inspired by density inhibition, mimicking interlayer effects, might contribute to why images look very “dense” and with film colors. Going to the root of this and generalizing better might be interesting, though.

okay i had a quick look over the code, but i couldn’t speak python to save my grandmother :smiley:

so a few questions:

you have tensor diagram contractions, super cool! how many spectral bands are you using? do we have a memory problem?

if i understand correctly the spectral quantities here are densities of some dyes / developed grains and such. i mean, these are [0,\infty), as opposed to transmittances/colours that would be [0,1], right? but even with inhibitors never negative?

smooth spectra are great, they compress well. choosing the right representation might be important for an efficient implementation.

and yes, i think i can provide code for all possible and impossible spectral upsampling methods.

2 Likes

I am using a spectral range of 380-780 nm sampled every 5 nm. It is probably overkill, but I looked at the spectra and eyeballed the step so as not to ruin the peaks too much. Steps of 10 nm looked too rough for the unmixing/fitting when creating the profiles. The output of the actual simulation might be much less sensitive. It is configured in agx_emulsion/config.py in the SPECTRAL_SHAPE = colour.SpectralShape(380, 780, 5) constant. I haven’t tested changing it for now. The profiles need to be recomputed in that case.
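For a rough sense of what that grid costs in memory (the 2048x1080 frame size and the float64 assumption are mine, not project constants):

```python
import numpy as np

# The grid implied by colour.SpectralShape(380, 780, 5):
wavelengths = np.arange(380, 785, 5)
n_bands = len(wavelengths)                 # 81 bands per pixel
assert n_bands == 81

# Back-of-the-envelope memory for one full spectral image in float64,
# assuming a 2048x1080 "2K" frame:
total_gb = 2048 * 1080 * n_bands * 8 / 1e9
# roughly 1.4 GB for a single spectral buffer, before any intermediates
```

So even one spectral buffer per 2K frame is on the gigabyte scale, which lines up with the memory problems mentioned below.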

This is an example of the actual spectra and curves used in the calculation:


The left curves are the effective absorptions of the layers; the center curves convert log-exposure to density, which then scales the CMY spectra on the right to give the final density spectrum of each pixel. This happens for both negative and print.

I definitely have a memory problem for larger images. For now, I didn’t optimize at all for it. :grin: I focused mainly on the “quality” of the model.

There are densities (proportional to the amount of dyes and also related to transmittance) and exposures (amount of light absorbed/transmitted, etc., sometimes called raw in the code). Both are positive and unbounded. In the interpolation of the characteristic density curves and in the inhibitor calculations, exposure is used as log10(exposure), called log_raw, with range (-\infty, \infty).
This from emulsion.py is the heart of the film part:

log_raw  = np.log10(raw + 1e-10)
density_cmy      = self._interpolate_density_with_curves(log_raw)
density_cmy      = self._apply_density_correction_dir_couplers(density_cmy, log_raw, pixel_size_um)
density_cmy      = self._apply_grain(density_cmy, pixel_size_um, compute_reference_exposure)
density_spectral = self._compute_density_spectral(density_cmy)

The CMY densities (non-spectral; density_cmy has three channels) in each pixel are stochastically “chunked” to create the grain, using Poisson/binomial random numbers.
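A toy version of that chunking, with parameter names and numbers of my own choosing rather than the ones in emulsion.py:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_grain(density_cmy, d_max=2.2, n_particles=200):
    """Each pixel holds n_particles virtual grains, each developing with
    probability density / d_max; the realized density is the developed
    fraction scaled back by d_max. In the real model n_particles comes from
    pixel area / mean particle area, so bigger grains (faster sublayers)
    mean fewer particles per pixel and therefore more noise."""
    p = np.clip(density_cmy / d_max, 0.0, 1.0)
    developed = rng.binomial(n_particles, p)
    return developed / n_particles * d_max
```

The binomial keeps the mean density unchanged while adding exposure-dependent noise, which is what produces the granularity curves shown earlier.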

5 Likes

Love it!

Wanted to learn this stuff when I worked on the sigmoid module, especially methods for handling wide gamuts better, but couldn’t find good sources like you did. Will try to find time to read them and try out your stuff. Looking forward to following along with your progress.

11 Likes

I am also able to run your software (I had issues in PyCharm/matplotlib and had to copy the img folder into the gui folder) - which is great! I was only able to test your grain and must say: it is a very smooth type of grain. Nice.
If you need someone to test features or do simple stuff (I know a tiny bit of python) I would love to assist. This looks very promising.

2 Likes

Hey @jandren! Nice to hear that you are interested in this.
I can add a couple of plots that might start a discussion, or at least trigger some thinking.
Online you can sometimes find LUTs tested against “stress test images”. For sRGB inputs this one is often used (3dlutcreator link):


I don’t like it too much because it is not very smooth from the beginning. But let’s have a look.

Taking only the left square (only a few columns of the image actually) and plotting it in a chromaticity plot gives the following.


All the extreme values of sRGB are reached. The lower part of the stress test image runs over the edge of the gamut, while the top part desaturates and goes towards D65 white.
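For anyone wanting to reproduce this kind of plot, the sRGB-to-xy conversion only needs the standard matrix; this sketch assumes linear sRGB input (gamma already removed) rather than using the colour-science helpers.

```python
import numpy as np

# Linear sRGB (D65) to CIE XYZ, per IEC 61966-2-1
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_to_xy(rgb):
    """rgb: (..., 3) linear sRGB values -> CIE 1931 xy chromaticity."""
    xyz = np.asarray(rgb) @ SRGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    return xyz[..., :2] / np.where(s == 0, 1.0, s)

# The sRGB primaries land at their defining chromaticities, e.g. red:
assert np.allclose(srgb_to_xy([1.0, 0.0, 0.0]), [0.640, 0.330], atol=0.001)
# and equal-energy sRGB white maps to D65:
assert np.allclose(srgb_to_xy([1.0, 1.0, 1.0]), [0.3127, 0.3290], atol=0.001)
```

Feeding the columns of the stress test image through this gives the curves in the plots above.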

I was curious to see how the chromaticity plot would be mapped after the simulation. The stress test image is not scene referred, so the print will be quite dark (small latitude), but might still give some insight.

This is using high saturation Kodak Gold and Endura Premier paper.

This is using low saturation Portra film and paper.

I note that shadows now desaturate towards black, and the curves from white to black are mostly smooth, with some funky twists (the curves come from the columns of the stress test image). The gamut also stretches outside sRGB, especially on the blue-green side.

5 Likes

Great that you managed to run the program! I am glad of the interest :smiley:

I’ve been able to launch the program and play around with some of my own pictures. For now I pretty much kept the defaults and played only with film stock, paper and print exposure. I really like the feel of the results, especially for the robin. Thanks for the tool @arctic !

Kodak gold 200 and ektacolor (left) - Kodak gold 200 and fujifilm crystal archive (right)

Robin with fujifilm c200 and kodak supra endura
Original lacking a bit of exposure:


With more exposure (left) - Changing color filters y shift +2 m shift +3 (right)

With crystal archive paper:

5 Likes

Amazing! :sunglasses:
If you want to play with something, I recommend changing these:

  • print exposure: to brighten or darken the image
  • negative exposure: boost it if the shadows become underexposed; otherwise it should not affect the image much
  • most critical is the use of the color filters: the Y filter makes the image warmer or colder, the M filter makes it more magenta or green. Essentially a fine tuning of the white balance.

This is the core of the RA4 printing control system.
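As a toy model of what the Y/M filters do (the conversion factor from filter units to optical density is a guess, not a calibrated value, and a real dichroic head acts spectrally rather than per channel):

```python
import numpy as np

def apply_enlarger_filters(exposure_rgb, y_filter=0.0, m_filter=0.0,
                           density_per_unit=0.01):
    """Toy dichroic-head model: the Y filter attenuates blue light, the M
    filter attenuates green; neither touches red. density_per_unit converts
    'filter units' to optical density (an illustrative guess)."""
    filter_density = np.array([0.0,
                               density_per_unit * m_filter,
                               density_per_unit * y_filter])  # (R, G, B)
    return np.asarray(exposure_rgb, dtype=float) * 10.0 ** (-filter_density)
```

Negative filter values like the -4Y used above then correspond to boosting the blue channel instead of attenuating it, shifting the print cooler.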

5 Likes

What a monster software - in the most positive sense.


It completely depletes my not-too-shabby desktop PC with 32 GB of RAM just to compute the cropped image visible above. Many of the buttons feel like magic, but they work as the tooltips say (pre-flash was a big surprise for someone who has not the slightest idea of analog film development). Great! Please keep working.

1 Like

Definitely there is some optimization to be done in the future to reduce the memory usage. :grin:

Regarding preflashing I have a very good example from a Play Raw High contrasts in a man made wilderness, from @Popanz.

Print paper has limited latitude and a predefined contrast, while film negatives can capture a very large dynamic range (easily 10+ stops). Preflashing is a simple hack of the printing process to retain some of the highlight detail: the print paper is flashed with a small amount of light before the negative projection, making it more grayish and taming the highlights (have a look at this video for a real-life example: https://www.youtube.com/watch?v=lcx4ag7iygI). The price to pay is reduced contrast and saturation.



Using Fujifilm Pro 400h with +4 stops of exposure compensation, Fujifilm Crystal Archive TypeII with 0.15 print exposure.

On the top there is no preflashing, while on the bottom there is a 0.01 preflashing exposure through an unexposed film base (by default the simulation considers the preflashing exposure through unexposed film). You can also tint the preflash by changing the enlarger filters relative to the negative print exposure.
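In the simplest terms, the preflash just adds a constant exposure to what comes through the negative; the names and the purely additive model below are my sketch, not the simulation's actual code.

```python
import numpy as np

def paper_exposure(negative_transmittance, print_exposure, preflash=0.0):
    """Light reaching the paper: the enlarger beam through the negative,
    plus a uniform preflash (modeled here as given through unexposed film
    base, as in the simulation's default)."""
    return print_exposure * np.asarray(negative_transmittance) + preflash

# Scene highlights correspond to dense negative regions (low transmittance);
# the preflash lifts them proportionally far more than the shadows,
# compressing the contrast the paper has to handle:
without = paper_exposure(np.array([0.5, 0.005]), 1.0)
with_pf = paper_exposure(np.array([0.5, 0.005]), 1.0, preflash=0.01)
# shadows: 0.50 -> 0.51 (2% lift); highlights: 0.005 -> 0.015 (3x lift)
```

Because the paper responds to log exposure, that uniform offset mostly moves the toe of its characteristic curve, which is why the visible cost is reduced contrast and saturation.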

4 Likes

This is awesome… I don’t have the knowledge to understand all that is involved, but I can recognize the massive amount of work that’s gone into this, and the results look stunning.

I’m 100% windows at present, but when I have time will look into running up a VM perhaps…
Unless I’ve missed the obvious and there’s a better way to get it running. :wink:

1 Like

You can run it under Windows - no problem. Just install PyCharm or some other Python IDE and it should work.

1 Like

I can confirm that it runs with PyCharm on Windows; just make sure that the working directory is right when running it directly from PyCharm’s IDE.

If testing on a small screen like a laptop, I have found it useful to change this line to the following:

viewer.window.add_dock_widget(simulation, area="right", name='main', tabify=True)
# Change tabify to True

Otherwise the run button gets lost out of frame.

2 Likes

I usually run the GUI straight from the terminal from the main package folder, e.g. if using conda and following the instructions in the repo README:

> conda activate agx-emulsion
> cd \path\to\main\repo\folder\
> python agx_emulsion\gui\main.py

Keep in mind that everything GUI related is very rudimentary.

My Python skills are nonexistent, and running Debian I don’t have conda, it seems. My attempts with venv fail with a segfault when executing. Anyone got any tips?

This looks like a great project!