Spectral film simulations from scratch

Thank you, works perfectly now!

Was this created using the same script available in the repo? I guess I might be missing some required Python libs, but I did not see any errors in the output when running, and I tried running it both in a venv with the agx dependencies and with system packages, so I'm not sure what's going on.

yeah we will never find out :slight_smile: i should package and ship this thing. always hesitant to check generated files into git, but the convenience gain in this case is considerable…

right. i would like to conserve energy. thanks for finding all these extra plots! will think about how to refine the code so it doesn’t slow down…

Which energy do you want to be conserved though?

The simulated process converts photon energy from the light field (or a flux of photons) of the scene into “breaking” predeposited silver halides to form silver, with different layers sensitized to different wavelengths.

From then on it’s mass ratios in chemical reactions. At least I think that DIR-couplers are not photosensitive themselves. (EDIT: by all means, I could have gotten this wrong)
I think that also means: one can totally “upconvert” IR-exposure to modulate visible light (Kodak Aerochrome) or “downconvert” x-ray exposure to modulate visible light (your typical analog x-ray).

The sensitizers in color-negative can be sensitive to different wavelengths of light than the formed dyes let pass through in the positive.
I think that breaks energy conservation in my understanding of the term.

mass is energy is all i was saying. in my field we usually keep matter/stuff you can touch as it is, so my number one concern is usually conservation of energy, which is the same thing.


In the end, Einstein showed the relationship of mass and energy, E=mc^2, so we can definitely reinterpret Lavoisier’s law of conservation of mass in terms of energy :grin:


i can confirm that newly created luts turn purple. after a git bisect in the agx-emulsion repo, i find that this: 0cdb191086811c73de0d06b42124591397a49ac8 is the first bad commit. will have to see what’s going on. it did replace all the profile json files, but it’s likely that just my python conversion doesn’t understand it right. i believe it switched from 10nm spacing to 5nm in the data, and my python is just a bit assumption-happy :slight_smile: i’m sure i can fix it.
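
for illustration, the fix is probably just to read the wavelength axis from the file and resample, instead of assuming the spacing. a sketch with hypothetical json field names (i haven’t checked the actual profile layout here):

import json
import numpy as np

def load_profile_curve(filename, key, grid):
    # load one spectral curve from a profile json and resample it onto
    # 'grid', so 5nm vs 10nm spacing in the file no longer matters
    with open(filename) as f:
        data = json.load(f)
    wl = np.asarray(data['wavelengths'], dtype=float)  # hypothetical field name
    val = np.asarray(data[key], dtype=float)
    return np.interp(grid, wl, val)  # linear resampling, no spacing assumption

# e.g. resample everything onto a fixed 5nm grid:
grid = np.arange(380.0, 781.0, 5.0)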


@arctic Could you give a bit more info about the compensation removal factor/density/transition? Is this meant to lower the blacks in the “print” to visually compensate for the relatively deeper blacks on a display vs. a print, like the EOTF in the DaVinci Resolve color transform node (which is the difference between the Rec.709 encoding TF and the gamma 2.4 display TF)?

I’ve been trying to play with it, but no matter what values I use I can’t see any difference in the output. I made sure that it’s active and the glare percent is set to zero. Tried computing the full image and it’s still the same. Am I missing something?

Also, just curious why the Kodak Endura Premier print paper is so much more contrasty than the other papers. I noticed that the Portra 400 data sheet specifies that the film is designed to be printed on Endura Premier, but the result is far more contrasty than my Portra 400 scans, and even more so once you adjust the black/white points as in a scan. Does it have something to do with Endura Premier being designed for digital rather than optical printing? Thanks!

I’ve got vkdt-rawler-pentablet-0.9.99-353-g8c9e66c4-x86_64.AppImage running @hanatos. Can I find an agx-emulsion module in it? Have tried searching for “emulsion”, “agx” and “film” using “filter module by name” but I’m not finding it.

Thanks. Was flipping between the two and was just about to write: TBH, there isn’t much of a difference to my eyes. :smiley:

As I said earlier, I guess it all boils down to what we want to simulate. Early-2010s VSCO presets and profiles for Lightroom and Adobe Camera Raw tried as best they could to simulate Noritsu and Frontier scanner interpretations for a wide range of films. They obviously hadn’t come up with the splendid idea of using the technical documents. :slight_smile:

For a medium (negative film) that was meant for printing I think it makes more sense to simulate the paper output. But still, for the paper copy to be usable in the digital realm, we would have had to scan it. If we want to emulate the entire chain I think it’d be the following:

  1. (C-41) film development
  2. (RA-4) paper development
  3. scanning

The scan step could be represented by black and white points and possibly a curve control together with a histogram. If agx-emulsion is implemented in vkdt or darktable we could just put levels and curves modules after the new and shiny emulsion tone mapper module. :slight_smile:

For the black and white point controls to be truly usable in agx-emulsion a histogram would have to be there though.

Print paper reflects some of the incoming light, creating glare that effectively brightens the shadows. Print paper is also designed to counteract viewing glare by making shadows deeper than expected, encoding this in the density curves.
agx-emulsion has a random glare simulation that should compensate for this, also adding some noise in the blackest parts of the print.

As discussed with @mikae1, for printing maybe we don’t want to add random glare that will already be present in the final real paper. So in this case the viewing glare compensation removal can brighten the shadows a bit by slightly changing the density curves of the paper.

Here is an example of how it works with transition=0.3 and density=1.2.
density defines the density at which the compensation kicks in.
transition defines the width (in density values) of the transition from the unaffected region to the compensated region.
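
In rough pseudo-Python it does something like this (just a sketch to illustrate the two parameters, not the exact agx-emulsion code; the smoothstep blend and the factor argument are made up here):

import numpy as np

def remove_viewing_glare_compensation(d, density=1.2, transition=0.3, factor=0.2):
    d = np.asarray(d, dtype=float)
    # blend from 0 (unaffected, below 'density') to 1 (fully compensated,
    # above 'density' + 'transition')
    t = np.clip((d - density) / transition, 0.0, 1.0)
    w = t * t * (3.0 - 2.0 * t)  # smoothstep over the transition width
    # lower the density above the threshold, brightening the deepest shadows
    return d - w * factor * (d - density)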

You noticed a bug! Thanks! The compensation was actually not happening. I just pushed a fix to the main branch that enables the viewing glare compensation removal. Test it again if you have time.

I think that it is very contrasty because it is a consumer paper, intended to give some wow effect for the average consumer (a bit like the bass and treble boost in headphones). But I am not an expert on the real paper, since I have never used actual RA-4 paper.

Negatives have huge latitude, and in a scan we can actually preserve a lot of it and easily generate lower-contrast images. RA-4 print paper is optimized to give pleasant and satisfying contrast. In my moderate experience it is more contrasty than you would expect compared to generic negative scans. But there might be people more expert than me who could comment and have a better view of this.

In the end the simulation does what the data encodes, so if we trust the data (and the monkey digitizing them) this is the contrast that the paper should have.

look for filmsim! :wink:

Right now the scanning step is more a simulation of human vision looking at the print. That is probably even better than trying to simulate a scanner, in my opinion.

Indeed! But we can probably have a switch that does an “analytic” black/white point correction, making white truly white and black truly black. For the white there is already a special parameter, print_density_min_factor, which when set to 0 removes the absorption of the paper base, making white [1,1,1] if present in the print. Since we know the maximum density of the paper (usually approx. 2.5/3) we can also guess the black point and have an automatic correction to make black [0,0,0]. I will give a thought to how to do this neatly!
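
In linear reflectance terms the automatic black correction could be as simple as this sketch (assuming white is already [1,1,1], i.e. print_density_min_factor set to 0):

import numpy as np

def analytic_black_point(rgb, d_max=2.5):
    # the paper's maximum density sets the deepest reflectance it can reach
    r_min = 10.0 ** (-d_max)
    # rescale so that r_min maps to [0,0,0] while [1,1,1] stays white
    return np.clip((np.asarray(rgb, dtype=float) - r_min) / (1.0 - r_min), 0.0, 1.0)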

Sorry for the confusion. I wasn’t trying to say that the point of step 3 was to simulate the characteristics of the scanner (à la VSCO’s Noritsu/Frontier attempts), but rather to assume a perfect digitization with some ability to tweak black and white points and a curve (so that the exported file can be sent for print).

Perhaps this becomes more philosophical than technical, but what we do when we export the picture (or “Save Selected Layers”) from agx-emulsion is the equivalent of scanning the print. My thought was that it could make sense to give the user basic control over this “scan”.

Thanks for reiterating. I realize now how fantastic this sounds and perhaps it would be enough for your app. I’ll download and give it a try. There’s always GIMP to do further “post scan” corrections. :slight_smile: When agx-emulsion gets implemented in other apps (like vkdt or darktable), levels and curves can be applied via modules placed post agx-emulsion if necessary.

Speaking of darktable: how difficult would it be to port the GLSL code to C for module use in darktable? Perhaps a question for @hanatos, @flannelhead or @Pascal_Obry?

Dang, no results!



has to be in this^ dialog, i.e. apply preset or press hotkey ctrl-p.

also, if you pull now, vkdt ships with a good 5nm-spaced filmsim.lut. this means everybody has to please delete their ~/.config/vkdt/data/filmsim.lut because the stuff in the home directory would take precedence (and likely be the old lut).


I’m certainly thinking way ahead of the development, but this could actually be made a fun design quirk. The “compute full image” checkbox could be removed and “Run” could be replaced with “Preview” and “Scan” buttons. The compute time would suddenly make sense. :wink:

Epson Scan calls it Preview and Scan.

VueScan calls it Preview and Scan too.

“Save” would either save the preview or the full image (depending on how it was last rendered).


That’s a very good suggestion, especially because this is a UI optimized for slow “processing”, like negative scans with a flatbed scanner. And it adapts well to the slow processing of agx-emulsion :grin:, which requires a preview to be usable. Also the crop controls are very similar.

Thank you for the suggestion. I had a quick look at magicgui’s capabilities. This library is super nice for making extremely quick and clean GUI code, but it might lack generality. I had found a solution for adding multiple buttons, but it ruins the alignment of the other widgets. I will spend some more effort on it. I am also working on a simple sidecar file for the settings, which will be useful for tracking tests when I am comparing things.
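
For reference, the multi-button pattern I tried looks roughly like this (a minimal magicgui sketch with placeholder callbacks):

from magicgui.widgets import Container, PushButton

def make_run_buttons(on_preview, on_scan):
    # pack two buttons in a horizontal sub-container; this nesting is
    # what can throw off the alignment of the surrounding widgets
    preview = PushButton(text="Preview")
    scan = PushButton(text="Scan")
    preview.changed.connect(on_preview)  # 'changed' fires when the button is clicked
    scan.changed.connect(on_scan)
    return Container(widgets=[preview, scan], layout="horizontal")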


looking at my code, i guess the python would look like

import numpy as np

def density_dir_model_hanatos(raw, e, dc, M):
    # shapes assumed: raw is (3, n) log exposure per channel,
    # e is the log exposure axis, dc the measured density curves,
    # M the (3, 3) DIR coupler inhibition matrix
    M = M * 0.1  # reduced inhibition matrix to roughly match the models
    e = np.vstack((e, e, e))  # log exposure axis, one row per layer

    # compute couplers:
    c = np.einsum('ck,cm->mk', raw, M)
    # apply couplers to raw exposure:
    raw = raw - c
    # now apply our fake D_0(.) which assumes monochromatic exposure,
    # so we make each channel mono and apply it once per layer:
    Minv = np.linalg.inv(np.eye(3) - M)
    e_corr = np.zeros_like(raw)
    for i in range(3):
        mono = np.vstack((raw[i, :], raw[i, :], raw[i, :]))
        e_corr[i, :] = np.einsum('ck,cm->mk', mono, Minv)[i, :]
    # now the only time we evaluate the density lut
    # (interp_with_curves is a helper defined elsewhere):
    d = interp_with_curves(e_corr, e, dc)
    return d

which is not really better than your plot.

the idea is that i only have to call the density lut once and can do the rest analytically. also it bugs me that we can’t reverse the measured density curves to “actual” curves that would physically happen in the film before couplers. i tried to run some fixed point iteration as an offline preprocess but the results looked horrible. i might have a bug because my python is abysmal, but also it might just not work like this. results match the uncontrolled colour shifts you described earlier when not respecting the density curves as data.
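
roughly, the iteration i tried (a paraphrased sketch; apply_couplers stands in for the forward coupler model, and there is no guarantee this converges, which may be exactly what goes wrong):

import numpy as np

def invert_density_curves(dc_measured, apply_couplers, n_iter=50, damp=0.5):
    # look for pre-coupler curves dc0 such that the forward model maps
    # them back onto the measurements: apply_couplers(dc0) == dc_measured
    dc0 = np.array(dc_measured, dtype=float)
    for _ in range(n_iter):
        residual = dc_measured - apply_couplers(dc0)
        dc0 += damp * residual  # damped update to limit oscillation
    return dc0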


I uploaded the same shot taken with film & digital here for testing: Tree above stream : digital & film

I was trying to compare the digital shot converted with agx-emulsion against the film, but I struggled to get close, although I might have messed something up.


Answering myself here. Did some quick searching, and some seem to suggest that the suffix-less Portras are somewhere in between the NC and VC Portras of old. Specifically, it’s suggested that the 160 is most like 160 NC, the 400 pulls toward VC, and the 800 is most like VC.

Now, the reason for my question in the first place is that I found the agx simulation to produce more vivid and “distorted” images than my film samples. With this info about the new Portras it makes sense, as the 400 should be more vivid and contrasty than the NC.

Recently I’ve only shot Portra 160, which is close to what I remember the 160 NC being like.

So now we need the 160 simulation for those more earthy, lower-contrast tones. My agx simulations are looking quite “spiky”.