the new data works for me. interesting to see the subtly different looks with the fitted white balance/filter weights: https://jo.dreggn.org/filmtab/table.html . the portra/portra combination is certainly outstanding. i used some auto exposure, so the push variants of portra 800 look similar in overall brightness. of course it needs some manual fine tuning (of both exposures) to actually look great.
script to generate the table:
#!/bin/bash
films=(
  kodak_ektar_100
  kodak_portra_160
  kodak_portra_400
  kodak_portra_800
  kodak_portra_800_push1
  kodak_portra_800_push2
  kodak_gold_200
  kodak_ultramax_400
  kodak_vision3_50d
  fujifilm_pro_400h
  fujifilm_xtra_400
  fujifilm_c200
)
papers=(
  kodak_endura_premier
  kodak_ektacolor_edge
  kodak_supra_endura
  kodak_portra_endura
  fujifilm_crystal_archive_typeii
  kodak_2383
  kodak_2393
)
n_films=${#films[@]}
n_papers=${#papers[@]}
cat << EOF > table.html
<html>
<body>
<table style="width:100%">
EOF
echo '<tr><th>film/paper</th>' >> table.html
for paper in "${papers[@]}"
do
  echo "<th>${paper//_/ }</th>" >> table.html
done
echo '</tr>' >> table.html
f=0
for film in "${films[@]}"
do
  p=0
  echo "<tr><td>${film//_/ }</td>" >> table.html
  for paper in "${papers[@]}"
  do
    echo "<td><img style=\"width:12vw\" src=\"img_${film}_${paper}.jpg\"/></td>" >> table.html
    vkdt cli -d none -g img_0000.exr.cfg \
      --width 256 --height 256 \
      --quality 92 \
      --filename "img_${film}_${paper}" \
      --output main \
      --config "param:filmsim:01:ev film:1.0" \
               "param:filmsim:01:film:$f" \
               "param:filmsim:01:paper:$p" \
               "param:filmsim:01:filter c:-1.0"
    p=$((p+1))
  done
  echo '</tr>' >> table.html
  f=$((f+1))
done
cat << EOF >> table.html
</table>
</body>
</html>
EOF
I think it is something related to the Python package installation.
If you installed with pip install -e ., the “-e” flag is necessary to create a symlink to the source folder, so every change made to the package folder is immediately available in the installed package. You can try to uninstall and reinstall the Python package.
I love this comparison table!!! I will try to make something similar with agx-emulsion so it will be easier to check the differences, especially in the edgier combinations. The “cine print film - photographic negative” combinations are experimental and, I assume, were never meant to be used in real life.
Following your idea @hanatos, I made a comparison table of the current default output of agx-emulsion.
Ektacolor Edge paper is an outlier and possibly problematic, showing a green color cast. By changing the filters I can still get good-looking images from it, but then neutral input colors will not be neutral in the print; they will have a slight magenta cast.
Overall I think the fitted neutral filters in agx-emulsion do not always give consistent output, and manual tuning is necessary to mitigate slight casts and to find a balanced compromise. They are nonetheless a reasonable starting point.
from agx_emulsion.model.process import photo_params, photo_process
from agx_emulsion.model.stocks import FilmStocks, PrintPapers
from agx_emulsion.utils.io import load_image_oiio
import numpy as np
import matplotlib.pyplot as plt

image = load_image_oiio('portrait_256.tif')
N = len(FilmStocks)
M = len(PrintPapers)
photos = np.zeros((N, M, image.shape[0], image.shape[1], 3))
for i, film in enumerate(FilmStocks):
    print(i)
    for j, paper in enumerate(PrintPapers):
        params = photo_params(film.value, paper.value)
        params.negative.grain.active = False
        params.negative.halation.active = False
        params.print_paper.glare.active = False
        params.io.full_image = True
        params.scanner.unsharp_mask = (0, 0)
        photos[i, j] = photo_process(image, params)
collage = np.vstack([np.hstack([photos[i, j] for j in range(M)]) for i in range(N)])
fig, ax = plt.subplots(figsize=(10, 18))
ax.imshow(collage)
ax.set_yticks(image.shape[0] * np.arange(N) + image.shape[0] // 2)
ax.set_yticklabels([film.name for film in FilmStocks])
ax.set_xticks(image.shape[1] * np.arange(M) + image.shape[1] // 2)
ax.set_xticklabels([paper.name for paper in PrintPapers], rotation=90)
ax.xaxis.tick_top()
plt.savefig('collage.jpg', bbox_inches='tight', dpi=300)
I’ll have to dig through my stuff to find out exactly how those LUTs were made. I know one of the 2383 LUTs is labelled K2254-K2383, so it’s the combined effect of an intermediate film and 2383. Here’s the datasheet for 2254: Color Digital Intermediate Film 2254
I have been enjoying this thread a lot - really appreciate the efforts made here.
This body of work inspired me to also start experimenting. As I’m quite familiar with the ideas behind Blender’s picture formation, also called AgX coincidentally, I wanted to see how those ideas would fly for simulating a negative film + print process like you have done here.
Currently the experiment exists as a CTL script for ART. Instead of spectral data, it works on tristimulus data in all stages and uses matrices to account for the spectral sensitivities and the dye characteristics. It implements the whole process: exposing the negative, converting density to transmittance, exposing the paper and reading out the reflectance. I’m certainly not the first one with this idea - I believe barselino on Mastodon has been doing something fairly similar, and I circled back to those posts after seeing your simulations.
In the negative and paper exposing stages, the same curve is used for all three tristimulus components, and the curves have not been matched to any particular dataset. This probably ignores some of the creative aspect of these curves, but the flipside is that the neutral axis stays neutral as a given.
The mixing matrices at each stage are controllable, which makes for some nice creative control on the end results. One can’t super intuitively tie those to any familiar terminology, though, so maybe the best would be to provide presets to roughly match the look of some familiar film + paper.
Things are still pretty bare bones and there’s a lot more to experiment with, but I just wanted to say hi here. At least I managed to implement a version of the DIR couplers, ignoring the effects on the neighbourhood of the pixel, because CTL scripts can’t sample the neighbouring pixels at all…
Some results so far. Parameters were quite quickly tuned, and these certainly are not as neat as yours. However, I think there’s some nice mojo to these still, a bit of a departure from a certain “digital” look.
Hey @flannelhead, that’s very cool! And thanks for the appreciation comment.
I think this is a very interesting topic: how much can we simplify the problem while keeping most of the final style? It is very cool to see projects like yours, where you go very coarse with the bare minimum needed to simulate the steps, gaining control and the ability to drive the simulation with simpler parameters.
One drawback of using the full data as in agx-emulsion is that some of the control is lost, and messing with the data can have quite unpredictable results.
One question I would like to experiment with is inferring “what is the spectral pipeline adding to the mix?”, and, if possible, whether this effect can be simplified and modeled in some way in a tristimulus simulation. Intuitively the spectral simulation is adding something: when the density of a negative increases, the spectrum of the light transmitted through it is not simply scaled; the bands shift due to the saturation of the main absorption peaks. How relevant these effects are for the final look is interesting to explore.
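As a toy illustration of that band effect (not agx-emulsion code; the Gaussian dye shape and its width are made up): under Beer-Lambert, doubling the dye amount squares the per-wavelength transmittance rather than scaling the spectrum, so the absorption band effectively widens as density grows, which a single tristimulus multiply cannot capture:

```python
import numpy as np

wl = np.linspace(400, 700, 301)                   # wavelength grid, 1 nm spacing
dye = np.exp(-0.5 * ((wl - 550.0) / 40.0) ** 2)   # toy dye density peak (made up)

def transmittance(concentration):
    # Beer-Lambert: transmittance is exponential in dye amount, so doubling
    # the concentration squares the spectrum per wavelength
    return 10.0 ** (-concentration * dye)

t1 = transmittance(1.0)
t2 = transmittance(2.0)
# the band of strong absorption (transmittance below 50%) widens with density
width1 = np.sum(t1 < 0.5)
width2 = np.sum(t2 < 0.5)
```

With these numbers, `t2` equals `t1` squared element-wise and the sub-50% band grows from roughly 123 nm to roughly 155 nm, i.e. the spectral shape changes, not just its scale.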
Cool that you managed to simulate the inhibitors from DIR couplers. If you feel like sharing the scripts at any time, it would be interesting to follow your experimentations. I don’t know much about CTL scripts, so it is cool to get to see something different.
The results are promising, indeed!
At the beginning of my experimentations I struggled to get decent colors for quite a while.
Pretty neat results from only using tristimulus representation! Any magic tricks added for things like clipping out of gamut colors?
For the matrix representation, would it work to define the film and paper basis colors and then just see it as a transformation between the two? So a user would define them in global coordinates and we calculate the relative transform?
Maybe not so surprising coming from me: what curve did you use for the density? Your post inspired me to check how the film and paper density curves compare in the old tone curve tool I made. Here is the result for Kodak Portra 400 and Kodak Endura Premier:
I’m pleased to see that the analog film and paper properties can be modeled quite well independently. The formulation used in the sigmoid module is close to the film + paper situation but unfortunately not exact, from what I can see so far. It might be possible to bring it in as a non-breaking change, but I have to look closer at that problem. Add the spectral parts and we would have a pretty significant module upgrade.
No magic tricks played so far. The data is taken in as linear Rec. 709 encoded RGB (supplied by ART) and negative components are just clipped to zero individually. This part will need something better, as not all of the usual difficult images (e.g. Red Xmas and Nightclub in Troy’s testing image repo) are handled as well as the demos I posted above.
However, from that point on, things are pretty well controlled. The most important point to take care of is that none of the matrices has negative elements. Think about it: it can never happen that greater transmittance in one of the tristimulus components somehow results in less density in one of the paper layers.
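A tiny numeric illustration of that constraint (the matrix values here are made up, not from any fitted film data): a mixing matrix with only non-negative entries is monotone by construction, while a single negative entry breaks that guarantee:

```python
import numpy as np

# non-negative mixing matrix: increasing any input component can never
# decrease any output component
M_ok = np.array([[0.80, 0.15, 0.05],
                 [0.10, 0.80, 0.10],
                 [0.05, 0.15, 0.80]])
# the same matrix with one negative element
M_bad = M_ok.copy()
M_bad[1, 0] = -0.10

x = np.array([0.2, 0.2, 0.2])
x_up = np.array([0.4, 0.2, 0.2])       # raise the first component only

ok_delta = M_ok @ x_up - M_ok @ x      # every output rises (or stays put)
bad_delta = M_bad @ x_up - M_bad @ x   # the second output falls
```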
Hmm, this is an interesting question. So far the main controls are just rotations and insets of the individual RGB components at various stages of the pipeline, so things are shifted subtly (or not so subtly) in one direction or another; currently I’m adjusting them just by eye while looking at results from Andrea’s spectral simulation. I am not 100% sure a colorimetric coordinate approach makes sense here, as the intent is to stay closer to the spectral processing.
There are various stages where some kind of spectral projection happens. The pipeline is as follows:
1. Linear Rec. 709 RGB in
2. Clip negative lobes to zero
3. Film inset / rotation matrix - this is what would correspond to the film spectral sensitivities
4. Film density curves
5. (DIR couplers)
6. Film density to transmittance
7. Paper inset / rotation matrix - this is where the relationship of the film spectral dye densities and the paper spectral sensitivities comes into play
8. Paper density curves
9. Paper density to reflectance
10. Final rotation matrix - taking into account also the paper dye reflection spectra
At least phases 3, 7 and 10 are where the various spectra can be considered and where creative control can be taken. It would surely be interesting to explore the most user-friendly way to expose that.
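For concreteness, the stages above might be sketched roughly like this in Python (this is my reading of the pipeline, not the actual CTL code; the matrices, sigmoid shape and all constants are placeholders):

```python
import numpy as np

def density_curve(log_e, d_max=2.2, gamma=2.0, pivot=0.0):
    # generic characteristic curve: density as a smooth sigmoid of log exposure,
    # applied identically to all three components so neutrals stay neutral
    return d_max / (1.0 + np.exp(-gamma * (log_e - pivot)))

def film_print(rgb, film_mtx, paper_mtx, out_mtx):
    rgb = np.maximum(rgb, 0.0)                        # 2. clip negative lobes
    e_film = rgb @ film_mtx.T                         # 3. film mixing matrix
    d_neg = density_curve(np.log10(e_film + 1e-6))    # 4. film density curves
    t_neg = 10.0 ** (-d_neg)                          # 6. density -> transmittance
    e_paper = t_neg @ paper_mtx.T                     # 7. paper mixing matrix
    d_pos = density_curve(np.log10(e_paper + 1e-6),
                          d_max=2.8, gamma=3.0)       # 8. paper density curves
    refl = 10.0 ** (-d_pos)                           # 9. density -> reflectance
    return refl @ out_mtx.T                           # 10. final matrix

ident = np.eye(3)
dark = film_print(np.array([0.1, 0.1, 0.1]), ident, ident, ident)
bright = film_print(np.array([0.5, 0.5, 0.5]), ident, ident, ident)
```

With identity matrices and equal RGB input, all three outputs stay equal, which is the "neutral axis stays neutral as a given" property mentioned earlier; the negative's inversion is undone by the print, so brighter scene values map to brighter reflectances.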
Currently the one from Troy’s repo. The current curves are just eyeballed very quickly and should be improved upon.
Nice results, it seems the total result is nearly there indeed.
Maybe being exact doesn’t matter if one can derive the desired aesthetics from a simpler model.
i have a question about the chemical process. just looked through a few old film scans on my harddrive. what is this:
the couplers i have thus far inhibit, i.e. the negative doesn’t develop so much, i.e. the picture becomes brighter, right? what’s with the black fringes? is that some sort of coupler too? also the radius is really large.
Interesting. Could you give some context on the image? What are we looking at? What is the scale? How was the negative inverted?
I am pretty sure that there are chemical/diffusion effects that we are not considering. For example, there can be local effects on the concentration of developer, that is depleted by the high density areas, and in my mind would act as inhibition.
The inhibitors released by DIR couplers should create a low density edge on the lower density side and a high density edge on the higher density side (because in this second case it is not as much inhibited as in the middle of a high density area, where all the sides provide inhibitors that diffuse into it).
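A 1-D toy model of that adjacency effect (all numbers are made up; real inhibitor release and diffusion are per-layer and nonlinear): blur the local density to get an inhibitor concentration and subtract a fraction of it, and the low side of an edge dips below its surroundings while the high side overshoots:

```python
import numpy as np

def box_blur(x, radius):
    # crude stand-in for inhibitor diffusion
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, k, mode='same')

# step edge in density before inhibition
d = np.concatenate([np.full(100, 0.5), np.full(100, 2.0)])
inhibitor = box_blur(d, radius=10)   # released in proportion to density, then diffuses
developed = d - 0.3 * inhibitor      # more inhibitor -> less development

low_interior = developed[20]         # far from the edge on the low side
high_interior = developed[180]       # far from the edge on the high side
low_edge = developed[80:100].min()   # dip just below the edge
high_edge = developed[100:120].max() # overshoot just above the edge
```

The dip/overshoot pair is exactly an unsharp-mask-like local contrast boost at the edge, which in a positive print would show up as dark/bright fringing.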
cough yes. the only thing i know for sure is that image was scanned more than 20 years ago, i think 35mm film.
edit: we see: person diving into probably jerlov water type 1C fluorescent green/cyan/blue ocean.
these images were scanned in, uhm, some lab, and are 2088 pixels wide. the image here is cropped in height, but not in width (but i inpainted and downscaled it because there are people in it). i suppose it’s some aggressively colourful consumer film stock but i can’t tell you which. the grain is way sub-pixel, i don’t think i can tell much in terms of inter-pixel correlation at all.
this black fringing happens only for this extra cyan blue water and at edges. can’t necessarily say that it has to be a bright or dark edge, maybe just different layers/colour channels.
right that’s the local contrast increase i’m seeing. this particular case in the extreme would cause white fringes on the brighter side of the edge (in positive print).
but yeah, also i’m quite surprised by the large diffusion radius here. my couplers don’t diffuse that much, and if i got that right you were indicating that we wouldn’t necessarily expect it to have much spatial influence.
oh btw i also implemented a code path that takes a scanned analog film negative as input and only does the virtual print. it kinda works but needs manual fiddling with the white balance, and i find myself subtracting an elevated black point or applying a curve to the negative before processing for better results.
Now I see it! Thanks! It is a very large effect indeed.
At the beginning I thought it was some sort of micro detail of a photo.
It would be fun to look at the negative. I wonder if the lab was doing any kind of automatic local contrast adjustment.
That’s super cool! I had some ideas around that but didn’t have the time to try anything recently. How did you solve the issue of converting the RGB input of the scan into dye densities? Do you bypass that and spectrally upsample directly?
hmm good point! i’ll look around, not sure i have the negative.
yes exactly. i interpret the scan as transmittance and upsample that to get an approximate spectral power distribution again. on the good side, while the upsampling doesn’t work for stuff like collision coefficients/densities, it is kinda meaningful for transmittances.
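the black-point preprocessing for such a scan can be as simple as this sketch (the 0.02 value is purely illustrative and needs per-scan tuning):

```python
import numpy as np

def scan_to_transmittance(scan, black_point=0.02):
    # interpret the scanned negative as transmittance; subtract an elevated
    # black point (scanner flare + film base) and renormalize before
    # spectral upsampling / virtual printing
    return np.clip((scan - black_point) / (1.0 - black_point), 0.0, 1.0)

t = scan_to_transmittance(np.array([0.02, 0.51, 1.0]))
```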
Hey @arctic! I’m pretty sure you posted a black and white image done with agx-emulsion. I have looked through your activity and can’t find it. It was of a woman. Have I dreamed this up?
Still no black and white profiles, but I hoarded some more BW datasheets and will start hacking the pipeline for this soon. I didn’t design the program abstractly enough to make these changes easy. I have another dense week at work, but after that hopefully more spare time and brain space to try new things!
If you mean this Embrace the noise! - #20 by arctic, it was some early experimentation with adaptive grain, that I never really finished or shared. Conceptually it is not too far from one sub-layer of the grain engine here. It was a simple script, without density curves, I should still have it somewhere.