Kodak Portra and Supra Endura share the same sensitivities (and dye diffuse densities). They are sister papers that differ only in contrast. In my experience they are also the most prone to show color issues during development of the model, and they kept giving inconsistent results the longest, until I started adding more physically meaningful filters, illuminants, etc…
Comparing the sensitivities, they have quite a lot of blue-green crosstalk. In particular, the green sensitivities are very blue-shifted compared to the others. So my guess is that the filters are pretty critical in the transition around 500 nm for Portra (and Supra).
Portra (and Supra) also have the best skin tones of the bunch, and in this they differ from the rest I tried. I started wondering if I should try different filter sets than the generic colorimetric ones I got from Thorlabs. Maybe there are purpose-designed dichroic filters for color enlargers with even better performance?
This is definitely in my interest and long plan. I love black and white grainy images!
I already explored black-and-white grain simulation a bit in the past (some very old examples with simple grain models: Embrace the noise! - #20 by arctic, plus a few posts in the following years). The current grain model is more complex and multilayered, with proper density curves. The addition of the printing step should also give a better roll-off of the grain.
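For a flavor of the idea, here is a toy single-layer sketch of Poisson grain (this is not the actual multilayer model from the repo; the function name and the `grains_per_pixel` parameter are made up for illustration):

```python
import numpy as np

def poisson_grain(density, grains_per_pixel=500, rng=None):
    """Toy single-layer grain: treat each pixel's density as the mean of a
    Poisson-distributed developed-grain count, then map the count back to
    density. Larger `grains_per_pixel` means finer, smoother grain."""
    rng = np.random.default_rng(rng)
    counts = rng.poisson(density * grains_per_pixel)
    return counts / grains_per_pixel

# A smooth density ramp picks up exposure-dependent noise
ramp = np.linspace(0.1, 2.0, 256)
grainy = poisson_grain(ramp, grains_per_pixel=200, rng=0)
```

Note the variance grows with density, which is roughly the behavior you want before the density curves and the print step shape it further.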
I am very curious to experiment with multi-grade paper and push/pull development, since there are a lot of published curves for this. It is super exciting stuff! I need some time for that, of course; I also have a full-time job, but I would love to spend much more time working on these models.
As @mikae1 pointed out, there are quite a lot of data sheets available. Making a profile is not simply digitizing the curves (accurately); there is a process of unmixing the channels and adjusting to make sure that the output is OK. Plus, most of the time dye diffuse densities are not available for the separate CMY channels, and they need to be reconstructed in a sound way that respects the data that is available. For now only Portra really worked great almost out of the box; all the other negatives required more or less touch-up (in a minimal fitting way).
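To illustrate what "unmixing" means here (the crosstalk matrix below is invented for the example, not taken from any datasheet): measured status densities mix contributions from all three dyes, and a linear model lets you solve back for the per-dye amounts.

```python
import numpy as np

# Hypothetical crosstalk matrix: measured (R, G, B) status densities as
# linear mixtures of the underlying (C, M, Y) dye amounts.
# Rows: measured channel, columns: dye; off-diagonals are crosstalk.
A = np.array([
    [1.00, 0.15, 0.05],   # red density: mostly cyan dye
    [0.10, 1.00, 0.10],   # green density: mostly magenta dye
    [0.03, 0.20, 1.00],   # blue density: mostly yellow dye
])

def unmix_densities(measured):
    """Recover per-dye densities from measured channel densities by
    solving the linear system A @ dyes = measured."""
    return np.linalg.solve(A, measured)

dyes = np.array([0.8, 1.1, 0.9])   # ground-truth dye amounts
measured = A @ dyes                # what a densitometer would report
recovered = unmix_densities(measured)
```

The real difficulty is of course that the true mixing matrix is not known and has to be estimated so the result stays physically sensible.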
Print paper is more straightforward to profile because it usually does not have colored couplers and is a bit easier to predict. Fujifilm does not publish characteristic density curves for its papers, though.
I got all the data manually from the PDFs using WebPlotDigitizer, and then worked on them by hand to make sure they behave properly (mainly I made sure they can reproduce a ramp of neutral gray without excessive tints, adding minimal changes to the density curves). There is some code to build the profiles, but the data should be evaluated case by case, because you never know when it is going to be inconsistent or contain errors.
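The kind of sanity check I mean can be sketched like this (toy curves and the `max_tint` helper are made up for the example; the real check runs through the whole pipeline):

```python
import numpy as np

def max_tint(density_curves):
    """Given per-channel density curves sampled on a shared log-exposure
    axis (shape [n, 3]), return the worst spread between channels along a
    neutral gray ramp. 0 would mean a perfectly neutral ramp."""
    spread = density_curves.max(axis=1) - density_curves.min(axis=1)
    return float(spread.max())

# Toy characteristic curves: three near-identical sigmoids in log exposure,
# with the green channel offset by 0.03 density to simulate a tint.
loge = np.linspace(-2.0, 1.0, 50)
base = 2.2 / (1.0 + np.exp(-2.5 * loge))
curves = np.stack([base, base + 0.03, base], axis=1)
```

If `max_tint` is large somewhere along the ramp, that region of the digitized curves needs a touch-up.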
The spectral upsampling algorithm by @hanatos is much better than [Mallett2019] in every respect, and it adds only a little computational overhead and some complexity. It works on the full visible locus and allows for more saturation in the input data. Judging the results subjectively, in my opinion it adds a distinct “depth and realism” (in a physically based sense).
I have completely stopped using the sRGB workflow, which should say something about my take.
Hi all! I just merged the large-color-gamut branch, which includes the new spectral upsampling method. The recommended workflow is now to export RAW files to a large linear RGB space, such as ProPhoto RGB or Rec2020, and to use hanatos2025 as the spectral upsampling method. All of these are the new defaults.
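If your RAW converter only gives you linear sRGB/Rec.709, you can move to linear Rec.2020 with the standard published matrices; a minimal sketch (not part of agx-emulsion, just the textbook conversion, both spaces assumed linear and D65):

```python
import numpy as np

# Standard linear sRGB -> CIE XYZ (D65) matrix
M_SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])
# Standard linear Rec.2020 -> CIE XYZ (D65) matrix
M_REC2020_TO_XYZ = np.array([
    [0.6369580, 0.1446169, 0.1688810],
    [0.2627002, 0.6779981, 0.0593017],
    [0.0000000, 0.0280727, 1.0609851],
])
# Chain through XYZ: sRGB -> XYZ -> Rec.2020
M_SRGB_TO_2020 = np.linalg.inv(M_REC2020_TO_XYZ) @ M_SRGB_TO_XYZ

def srgb_linear_to_rec2020_linear(rgb):
    """Convert linear (not gamma-encoded) sRGB values to linear Rec.2020."""
    return np.asarray(rgb) @ M_SRGB_TO_2020.T
```

Since the sRGB gamut sits inside Rec.2020, no in-gamut sRGB value should go negative after conversion.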
Among other changes:
- a few functions were rewritten with Numba for increased efficiency, to iterate faster in this testing/dev phase of the model. A 6 MP simulation (3000x2000 pixels) now takes 10 seconds on my laptop. An update of the GUI in preview mode takes 1-2 seconds, with grain and halation disabled until compute_full_image is clicked. Numba-accelerated functions include:
  - 3D and 2D LUT cubic interpolation
  - approximate random number generators for Poisson, binomial, and lognormal distributions
  - linear interpolation faster than np.interp for larger images
- added pyFFTW as a requirement, for faster parallel FFT Gaussian filtering used by halation, which usually has pretty large kernels
- added a spectral band-pass filter to the camera (UV and IR cuts); not really meant to be changed, but exposed in the GUI to play with
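As an aside, the "accelerate with Numba but degrade gracefully when it is missing" pattern can be sketched like this (a toy linear interpolation, not the repo's actual fast_interp code):

```python
import numpy as np

try:
    from numba import njit, prange
except ImportError:
    # Numba not installed: fall back to a no-op decorator and plain range,
    # so the same source runs (slower) everywhere.
    def njit(*args, **kwargs):
        def wrap(func):
            return func
        return wrap
    prange = range

@njit(parallel=True)
def fast_interp(x, xp, fp):
    """Linear interpolation like np.interp, parallelized over output
    samples. Assumes xp is sorted; clamps outside [xp[0], xp[-1]]."""
    out = np.empty(x.size)
    for i in prange(x.size):
        j = np.searchsorted(xp, x[i])
        if j <= 0:
            out[i] = fp[0]
        elif j >= xp.size:
            out[i] = fp[-1]
        else:
            t = (x[i] - xp[j - 1]) / (xp[j] - xp[j - 1])
            out[i] = fp[j - 1] * (1.0 - t) + fp[j] * t
    return out
```

With this structure Numba becomes an optional acceleration rather than a hard dependency.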
Since a few things changed, if you happen to try it and find issues, let me know. Thanks!
The Lego bricks in the foreground and the background Lego figure use ACEScg primaries. A bit like the dragon render from @liam_collod, the idea is to see how “robust” the image formation is.
About the gradient, I think this is indeed one of the critical aspects of a good image formation. I wanted to write a small post about it.
I am curious to test the app again, as it seems to have changed a lot over the last few weeks!
It might just be my setup, but unfortunately I can no longer run it on my macOS ARM (M2 Ultra) system. I think it might be something with Numba. When I try to run the command @liam_collod provided above, I get this as a result:
Numba workqueue threading layer is terminating: Concurrent access has been detected.
- The workqueue threading layer is not threadsafe and may not be accessed concurrently by multiple threads. Concurrent access typically occurs through a nested parallel region launch or by calling Numba parallel=True functions from multiple Python threads.
- Try using the TBB threading layer as an alternative, as it is, itself, threadsafe. Docs: https://numba.readthedocs.io/en/stable/user/threading-layer.html
I tried updating some sections of the code and got a little further, then ran into this error:
ValueError: No threading layer could be loaded. HINT: One of: Intel TBB is required, try: $ conda/pip install tbb OR Intel OpenMP is required, try: $ conda/pip install intel-openmp
I don’t think those can be installed on an ARM-based Mac.
Ok! Thanks for testing it.
Could you try putting this at the very top of main.py?
import os
os.environ["NUMBA_THREADING_LAYER"] = "TBB"
I read that it could fix this kind of issue. If Numba is problematic I might make it an optional acceleration, or learn how to use it in a safer way.
Unfortunately it doesn’t work for my setup. I get an error telling me to install tbb.
ValueError: No threading layer could be loaded.
HINT:
Intel TBB is required, try:
$ conda/pip install tbb
If I add tbb to the requirements or try to install it manually with pip, it doesn’t work, as there don’t seem to be any tbb wheels available for my system.
╰─▶ Because all versions of tbb have no wheels with a matching platform tag (e.g., `macosx_15_0_arm64`) and you require tbb, we can conclude that your
requirements are unsatisfiable.
hint: Wheels are available for `tbb` (v2022.0.0) on the following platforms: `manylinux_2_28_x86_64`, `win_amd64`
So I went through and started re-enabling parallel=True in certain files, trying to narrow down where the issue is happening, and I think it’s in fast_gaussian_filter.py. Once I enable parallel=True in there, I can no longer launch it. Having it enabled in fast_interp_lut.py, fast_interp.py, fast_stats.py, and fft_gaussian_filter.py doesn’t seem to cause any issues with launching the app.
I also tested it with removing
import os
os.environ["NUMBA_THREADING_LAYER"] = "workqueue"
from main.py and just changing parallel=True to parallel=False in fast_gaussian_filter.py and it launched.
Thank you for investigating this. Since with parallel=False the gain is less than 2x (compared to the roughly 3-4x it was before), I temporarily reverted to scipy’s gaussian_filter in the main branch. Hopefully it will work more robustly on every platform this way.
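For context, the FFT approach that pyFFTW was accelerating can be sketched with plain numpy (a toy version with circular boundaries, not the repo's implementation): for large kernels it wins because the cost of the FFT does not grow with the kernel size.

```python
import numpy as np

def fft_gaussian_blur(image, sigma):
    """Gaussian blur of a 2D image by multiplication in the frequency
    domain. The Fourier transform of a Gaussian is itself a Gaussian,
    so the transfer function H is computed analytically."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fy ** 2 + fx ** 2))
    return np.fft.ifft2(np.fft.fft2(image) * H).real
```

Swapping `np.fft` for pyFFTW (or scipy.fft with `workers=`) keeps the same math and just parallelizes the transforms.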
interesting, yes maybe there are better filters. i have a more or less (rather less) accurate analytic fit to the thorlabs filters. especially the 500nm region is problematic for me. i get positive/well-behaved filter weights after the optimisation when i overlap magenta and yellow such that they sum to one but cross over at 500nm. if i match your data better, the weights go haywire.
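a minimal sketch of that constraint (not the actual analytic fit; the sigmoid shape and `width` parameter are just illustrative): two complementary filter edges that sum to one and cross at 500nm.

```python
import numpy as np

def yellow_magenta_pair(wl, crossover=500.0, width=10.0):
    """toy complementary dichroic edges: the yellow-ish edge transmits
    above `crossover` nm, the magenta-ish edge below, and by construction
    they sum to one everywhere. `width` sets the transition softness."""
    y = 1.0 / (1.0 + np.exp(-(wl - crossover) / width))
    return y, 1.0 - y

wl = np.linspace(400.0, 700.0, 301)  # 1nm steps
y_edge, m_edge = yellow_magenta_pair(wl)
```

with this parametrization the sum-to-one property holds exactly, which is presumably what keeps the optimised weights well behaved.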
i’m now using a 2856K tungsten lamp, not 3200K, because eyeballing your graph the low values at 400nm and the high ones at 800nm looked more like that to me (no scientific data-driven reason). i think maybe my results look better now (?). i also have a bandpass filter, but so far can’t say it made much of a difference.
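the eyeball test has a physical basis: planck's law says a cooler lamp has relatively less power at 400nm and more at 800nm once you normalize mid-spectrum. a quick check (normalization at 560nm is an arbitrary choice for the example):

```python
import numpy as np

def planck_spd(wl_nm, T):
    """relative blackbody spectral power at temperature T (kelvin),
    planck's law, normalized to 1 at 560nm."""
    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    def radiance(wl_m):
        return 1.0 / (wl_m ** 5 * np.expm1(h * c / (wl_m * k * T)))
    return radiance(wl_nm * 1e-9) / radiance(560e-9)

spd_2856 = planck_spd(np.array([400.0, 800.0]), 2856.0)
spd_3200 = planck_spd(np.array([400.0, 800.0]), 3200.0)
```

so lower-than-expected values at 400nm plus higher ones at 800nm do point at a lower correlated color temperature.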
one thing i noticed when working with portra film/paper is that i can control the “white balance” by playing with film exposure vs. print exposure. maybe this auto-exposure part is why my portra looks so different from the agx-emulsion one, because otherwise i get really similar results now (without halation and couplers for now).
I wonder if any spectral response characteristics can be found for print film stocks like Kodak 2383. It could be aesthetically interesting to support that imaging pipeline in addition to print paper.
Great job, and interesting to hear. Maybe I should also go back to the filters and try something more similar to what works more reliably for you. If you get any more insights, I am all ears!
My reason for using a cooler temperature comes from studying the manual of the Durst M605 (Durst_M605.pdf, 7.1 MB), where they use a tungsten-halogen lamp that should be cooler than tungsten. I didn’t really try to optimize the output much further.
Regarding the effect of the band-pass filter: I think the image from signatureedits that I posted above, the vintage red car, is a very challenging one. Without filters it is really difficult to get satisfying reds with Kodak Portra 400, as with Kodak Gold 200. Especially the reflections on the top of the hood.
Even if Portra 400 has a lot of latitude, there are shifts with overexposure. In my experience the best look is achieved with the minimal negative exposure that retains dark shadows. Moreover, the profiles were optimized using my pipeline (optimizing only density_curves, aiming for neutral mid-gray prints over a range of negative exposures, using a fitting routine).
For Portra 400 not much changed compared to the original data. The original profile without further corrections is kodak_portra_400_au, while kodak_portra_400_auc has a small correction to the density curves. kodak_portra_400_au has only the “unmixing” of the density applied, and does not depend at all on the agx-emulsion pipeline. Note that not all of the _au profiles give good results; most of them are affected by heavy color tints in agx-emulsion.
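A stripped-down illustration of the neutrality target in that fitting (a toy closed-form version; the real routine fits the full density curves, not just offsets, and runs through the whole print simulation):

```python
import numpy as np

def neutralizing_offsets(densities):
    """Toy correction: given gray-ramp densities of shape
    [n_exposures, 3] (R, G, B channels), return per-channel density
    offsets that best neutralize the ramp in a least-squares sense."""
    per_sample_target = densities.mean(axis=1, keepdims=True)
    return (per_sample_target - densities).mean(axis=0)

# Synthetic gray ramp with constant per-channel biases (a uniform tint)
loge = np.linspace(-1.0, 1.0, 9)
base = 1.0 + 0.8 * loge
bias = np.array([0.00, 0.05, -0.02])
densities = base[:, None] + bias[None, :]
offsets = neutralizing_offsets(densities)
corrected = densities + offsets[None, :]
```

For a uniform tint a constant offset per channel is enough; exposure-dependent tints are what force touching the curves themselves.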