Spectral film simulations from scratch

If this project got some more exposure, I’m confident this wouldn’t be an unattainable goal with some crowdfunding. But I guess that also comes with new expectations from funders (who are not always familiar with open source software development).

This will be a lot less flexible and push values outside the “boundaries” that Spektrafilm sets.

The way I do it is I edit in darktable as usual, with sigmoid turned on. After/over sigmoid I place the LUT 3D module with a Portra 400 NC LUT. Then I do perspective correction, denoising, and dodging and burning in darktable using a combination of masked tone equalizer and rgb curve modules. When I’m ready, I disable sigmoid, LUT 3D, and color balance rgb, and export to 32-bit (float) OpenEXR in linear ProPhoto RGB, which I then open in Spektrafilm. Works very well, but is a bit cumbersome. At least it’s less cumbersome than the actual darkroom. :slight_smile: One day I still hope to see Spektrafilm in darktable…


hi all, potentially dumb question here, but i finally (seemingly) managed to get this installed on my mac using uv. after it completed, the terminal said “Installed 1 executable: spektrafilm” – however i have no clue where to find this executable or how to run it. sorry if it’s obvious, but what’s the next step? how do i find & run this?

Like so:

I do agree, but Salazar wanted to use Photoshop prior to Spektrafilm for retouching work! Does the method I provided push things outside of the boundaries? If so, @slazaar, it might be worth going darktable (export linear ProPhoto RGB) → Photoshop → Spektrafilm. The conversion from ProPhoto RGB (with a gamma of 1.8) to ACES 2065-1 is probably not ideal, but unfortunately Camera Raw seems to be lacking a few options when it comes to this.

Super helpful - thanks both (+mikae1)

I’m still pretty new to darktable, but it’s been really interesting seeing how much control it gives over the RAW pipeline.

I usually work out of Capture One, but my understanding is that it doesn’t offer a truly scene-linear workflow in the same way, apart from a linear-curve option (and I think the same applies to Camera Raw), so this opens up a different way of approaching things.

I’ll try a few of the methods suggested - great to have some options beyond the usual C1 / Camera Raw route for prepping files.



added a raw file + RA-4 print of a color chart to my Google Drive for others to try to match. I’m struggling to get the digital to match the print – maybe you all can try?

Portra 400 + Fuji DPII
Lumix S5II raw

https://drive.google.com/drive/u/0/folders/1ryifCcPHbDQoFiofn46u1Wiymi4RoxdE


This is beautiful! Did you just pull a HALD image into Spektrafilm? How did you manage to get the LUT exported and working in Resolve? I’d love to make some Spektrafilm LUTs that work with DWG/DWI.


Is this simulation any different compared to Genesis?

You can try this tool I posted last week and use a CST from DWG/intermediate to AP0/Linear.

Otherwise it was also pointed out that vkdt works very well for video. I tried it out this week and it looked great… especially with all of the spatial features in Spektral that you miss out on with a LUT.


Awesome examples so far on matching analog prints and S5II RAWs. :ok_hand::sparkles:

As for the equally awesome S5II in-camera Ektachrome LUT, any chance you could share it?

I don’t have an S5II/S5IIx yet, but I’d love to experiment with some RAW samples and maybe try to adapt it somehow for Magic Lantern RAW video (Canon 5D III) using Lattice.

Yes, I did just that. In sRGB only, though. I graded everything in DWG and put the LUT in the last node, after a color space transform to Rec.709/2.4. Works well enough!

I will try it this weekend hopefully when I’ll have the time :slightly_smiling_face:

Andrea, thanks. :slight_smile:
Spectral is great.
I use DxO for optical corrections and AI pre-sharpening/Denoise, export to linear DNG, and then use Spectral.


I found a nice and very simple program called PNG2Cube. It converts a HALD image to a .cube LUT.
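
The conversion itself is simple enough to sketch in a few lines of numpy. This assumes the common HALD layout (red varies fastest, then green, then blue, in scanline order, as e.g. ImageMagick’s `hald:` output uses); `hald_to_cube` is just an illustrative name, not PNG2Cube’s actual code:

```python
import numpy as np

def hald_to_cube(img, title="hald_lut"):
    """Turn a HALD CLUT image (H x W x 3 array) into .cube text.

    Assumes red varies fastest, then green, then blue, in scanline
    order, so the flattened pixels are already in .cube order.
    """
    arr = np.asarray(img)
    if np.issubdtype(arr.dtype, np.integer):
        # 8/16-bit input -> normalize to [0, 1]
        data = arr.astype(np.float64) / np.iinfo(arr.dtype).max
    else:
        data = arr.astype(np.float64)
    flat = data.reshape(-1, 3)
    # n^3 entries in total -> n samples per axis
    size = round(flat.shape[0] ** (1.0 / 3.0))
    lines = ['TITLE "%s"' % title, "LUT_3D_SIZE %d" % size]
    lines += ["%.6f %.6f %.6f" % tuple(rgb) for rgb in flat]
    return "\n".join(lines) + "\n"
```

To use it on a real file you’d load the PNG into an array first (e.g. with Pillow) and write the returned text to a `.cube` file.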

hah, you’re moving too fast for me to keep up :slight_smile:
here’s a plot of some green in the cc24 and saturated magenta:


due to the nature of the sigmoid spectra (they are based on a quadratic/parabola that either has a peak or a dip), anything in this rough triangle between blue, white, and red has a “dip” shape and will not fall off to zero at the rims.
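
for reference, the two shapes can be sketched directly with the parabola-through-sigmoid form of these spectra; the coefficients below are made up for illustration, not taken from the actual lut:

```python
import numpy as np

def sigmoid_spectrum(lam, c0, c1, c2):
    # squash a parabola in wavelength through a smooth sigmoid,
    # giving a reflectance-like curve in (0, 1)
    x = c0 * lam**2 + c1 * lam + c2
    return 0.5 + x / (2.0 * np.sqrt(1.0 + x**2))

lam = np.linspace(380.0, 730.0, 200)  # visible range, nm

# "dip": parabola opens upward with its minimum inside the visible
# range -> the spectrum rises again toward both rims instead of
# falling off to zero (vertex form a*(lam-550)^2 + b, expanded)
a, b = 1e-3, -2.0
dip = sigmoid_spectrum(lam, a, -2 * a * 550.0, a * 550.0**2 + b)

# "peak": flip the signs -> a bump that decays toward the edges
peak = sigmoid_spectrum(lam, -a, 2 * a * 550.0, -(a * 550.0**2 + b))
```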

the spectra are optimised to round-trip/reproduce exactly the rgb values when integrated against the 1931 cmf and a D65 illuminant. if i understand correctly, you are essentially trying to correct this to make the upsampling closer to the metameric space of the sensitivities of a specific film stock, not the cmf of a human observer.

if this is indeed making a lot of visual difference, it opens a whole new rabbit hole… it’s probably better to optimise the spectral upsampling for each film stock’s sensitivities in this case (you can get a near perfect match then), and it also opens the question whether this should in fact take place as a device input transform, i.e. work on raw camera rgb as input. this is already such an ill-posed problem (vkdt has an input device transform based on the spectral sensitivities of a camera, should you possess them). i’m a bit hesitant to add an after-the-fact correction here, though spectral windowing makes sense.

what’s the analog correspondence to the window, by the way? is there some ir/uv blocking layer in film stock, or would that traditionally happen in the glass/coating? i mean, does it in the actual analog world depend on the film stock, or is this really just a numerical post process of the data?

thank you so much, I’ll have some busy days ahead but I will play with them soon! amazing!
i especially wanna give the colorchecker photo a shot, because i made some improvements to the sensitivity-adaptation model of the spectral upsampling over the weekend.

these are outstanding!!!

I did end up buying a Lumix S9 and experimenting with it. I actually bought the camera only for the 3D LUT feature for stills, and for the ability to shoot stills with the log video pipeline (V-Log). the lumix video pipeline has such a nice texture in my opinion: not oversharpened, and with smoother noise than the stills one. I’ll post some pics soon. here are a couple of random SOOC shots that happened to be on my phone.

I lost track of the code experiment, but I have the V-Log LUT computation script somewhere. very WIP and very rudimentary, but at least the lut can use the full dynamic range of the camera (12 bit only for the Lumix S9, because panasonic is evil and just blocked the slow readout mode with higher snr, arrrr; no mechanical shutter in the camera, i know, but… just evil).

very juicy colors!

if you use `uv tool install ...` you should be able to run the command `spektrafilm` from the terminal from anywhere


Just to make sure - the Lumix cameras can’t do all the spectral stuff, right? Only lut for contrast/color?

Magic Lantern, the hacked Canon OS, came to mind. Imagine if you could do the whole Spectral simulation in-camera…


I am starting from real spectra, of which i can compute the projection on the 1931 cmfs and the projections on the film sensitivities (ground truth). then i compute XYZ with the real spectra and upsample using your algorithm. i get a spectrum with zero-error XYZ values when reprojected on the 1931 cmfs, but it will inevitably give big errors on the film sensitivities.

the reason is exactly the nature of the sigmoid spectra, which have uv/ir lobes if of the “dip” type, but can extend into the near uv and near ir at the edges of the visible range even if of the “peak” type.

hence the idea of an optimized bandpassed upsampling that can reduce the round-trip error against the real spectrum exposures.
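
a minimal sketch of what such a window could look like; the edge wavelengths and smoothstep width here are made-up values for illustration, not the fitted ones:

```python
import numpy as np

def smoothstep(t):
    # classic cubic smoothstep, clamped to [0, 1]
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def bandpass_window(lam, lo=400.0, hi=680.0, width=30.0):
    # smooth window: rises over [lo - width, lo], flat in the core,
    # falls over [hi, hi + width]; tames the uv/ir lobes of the
    # sigmoid spectra without introducing hard spectral edges
    rise = smoothstep((lam - (lo - width)) / width)
    fall = 1.0 - smoothstep((lam - hi) / width)
    return rise * fall

lam = np.linspace(360.0, 740.0, 96)
w = bandpass_window(lam)  # ~0 outside the band, 1 in the core
```

multiplying an upsampled spectrum by `w` before integrating against the film sensitivities is the windowing step.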

then i am very guilty! i tried to go past the optimal per-channel bandpass over the weekend. :stuck_out_tongue:

i would say it makes a visual difference, and i think it is very noticeable from no-bandpass to bandpass. and it can reach an almost visually imperceptible difference (average max errors <2/20 ev for more than half of the corpus, and <3/20 ev for 90+%). the correction adds a “simple” per-channel parametric exposure correction map in the xy plane (tc coord.).

here is an example with a colorchecker reflectance dataset using a D55 illuminant, projected on kodak_portra_400 sensitivities. the outer squares are the real reflected spectra exposures (visualized as straight sRGB, so not the real colors, but it helps with seeing the differences).

(left) uncorrected (hanatos2025 spectra), (center) bandpassed, (right) bandpassed with per-channel exposure correction.

the results are quite ok, even if it is a correction procedure and thus intrinsically not very elegant. i see it as a sensitivity-adaptation of the original algorithm. since sensitivities are not so different from cmfs, the adaptation can be encoded in 15-20 parameters per channel (parameters of the bandpass and of the smooth 2d surface function). the benefit is also that we seem to keep some of the good qualities of your underlying sigmoid algorithm, i.e. it has a smooth solution across the xy plane.

here is an example of a fitted 2d function on the xy plane. i am using saturating functions so it is arbitrarily bounded to the range of maximum correction we want to allow.
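
as a sketch of the idea: a low-order 2d polynomial squashed through tanh, so whatever the fitted parameters do, the ev correction stays bounded. the polynomial basis, parameter names, and the ±max_ev bound are illustrative, not my actual fitted model:

```python
import numpy as np

def bounded_ev_correction(x, y, p, max_ev=0.5):
    """Per-channel exposure correction over the xy plane.

    p holds the polynomial coefficients; tanh saturates the raw
    polynomial so the output can never exceed +/- max_ev, however
    wild the fit gets outside the data.
    """
    raw = (p[0] + p[1] * x + p[2] * y
           + p[3] * x * y + p[4] * x * x + p[5] * y * y)
    return max_ev * np.tanh(raw / max_ev)
```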

here are also some plots of log exposure errors for a few spectral datasets from colour-science. the first column is just the hanatos2025 round-trip error, the center is just bandpass, and the right is bandpass + surface correction.

we cannot expect a perfect planar pancake because the metameric spaces are supposed to be slightly different. but we can compress it in a minimal sense.

overall the bandpass is shared, cheap, and easy to compute, and the three exposure corrections are also not expensive to compute. it is a dirty solution, but it seems to work ok-ish and does not require shipping a new lut of the sigmoid spectra in triangular coordinates for every stock. but it is still a correction and might not make people feel clean :laughing:

working with RGB from raw files seems the logical portable standard, even if the pipeline above implies camera sensitivity exposure -> rgb -> spectra -> sensitivity adaptation -> film sensitivity exposure; but it stays agnostic of the camera sensitivity (we trust the manufacturers/calibrators that the rgb values of raw files are good estimates).
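
the tail of that chain, sketched end to end; the gaussian “film sensitivities” and the sigmoid coefficients below are made-up stand-ins for real stock data:

```python
import numpy as np

lam = np.arange(380.0, 781.0, 5.0)  # wavelength grid, nm
dlam = 5.0

def sigmoid_spectrum(c0, c1, c2):
    # sigmoid-family metamer, as in the upsampling algorithm
    x = c0 * lam**2 + c1 * lam + c2
    return 0.5 + x / (2.0 * np.sqrt(1.0 + x**2))

def gauss(mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# made-up stand-ins for the B/G/R film layer sensitivities
film_sens = np.stack([gauss(450.0, 30.0),
                      gauss(545.0, 35.0),
                      gauss(640.0, 35.0)])

# rgb -> coefficients is the part the upsampling lut provides;
# here we just pick some coefficients by hand
spec = sigmoid_spectrum(1e-4, -0.11, 29.0)

# spectrum -> film layer exposures ("film sensitivity exposure")
exposure = film_sens @ spec * dlam
```

the sensitivity adaptation (bandpass + surface correction) would slot in between `spec` and the integration.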

anyway, the errors above are against real spectra, so it shows that the procedure, although not elegant, is compact in its number of parameters and kinda works in the real world.

in analog cameras, lenses have uv absorption and will gently band-pass the near uv region; the near ir is more open. film might also have a color filter, but that is an effect already included in the sensitivities (which are still density measurements of the effective photo process, after all).

but in this case the window takes care of the overshooting of the lobes of the sigmoid spectra compared to real ones, that’s it. the “dip” sigmoid spectra are a particular kind of metamer with huuuuuge non-visible contributions (even x-rays :slight_smile: ). the bandpass just tames those to mimic the average behavior of the real spectra corpus. essentially we are injecting the trends of the corpus into the bandpass and 2D surface, hoping that the simple bandpass+surface model can generalize the sensitivity-adaptation transform of the upsampling algorithm in a handful of parameters (with low enough error).

if we wanted to optimize the sigmoid spectra in a camera-sensitivity-agnostic way (thus starting from RGB → XYZ), i guess the procedure would still end up relying on a spectral dataset to minimize the round-trip error of the upsampled spectra on film sensitivities while keeping the XYZ assigned to the original spectra, because we would not have any ground truth for the film exposures.

but this is not my field and i might be making huge assumptions that are wrong :grin:

You might wanna have a look at MotionCam Pro on Android, if by any chance you own an Android device :slightly_smiling_face:
The mcraw format is natively supported by vkdt, and if you want your smartphone to spit out amazing images/videos then it might be interesting for you (it’s not FOSS so I don’t like to mention it on pixls, but it started as FOSS by a single dev, so I feel a tiny bit less guilty).