That makes sense.
I am investigating the cyan misbehavior a bit; it really only becomes clear when overexposing the film. So probably some crosstalk among the channels should be preserved/manually introduced to guarantee desaturation. It may very well be connected with some aspect of the film/paper profile creation.
awesome. i think for the most part i do exactly that. i suppose something with a bit of alignment of the curves/x positions… and practice with the m/y filters. but i do like the results already:
now i want to do at least some grain before cleaning up and pushing…
That was fast!
It is getting closer. Great job!
If you need any input on the grain I can help.
I use Poisson and binomial random numbers in simulating grain, and apply Gaussian blurs according to particle sizes (or better, the amount of density per particle). I guess there are crazy fast random number generators for GPUs.
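Something like this minimal NumPy sketch (toy parameters and made-up numbers, not the exact model I use):

```python
import numpy as np
import scipy.ndimage

rng = np.random.default_rng(42)

def simulate_grain(density, n_particles=500, blur_sigma=0.6):
    """Toy grain for one layer from a normalized density map in [0, 1].

    n_particles: mean number of grain particles per pixel (Poisson);
    each particle develops with probability equal to the local density
    (binomial); a Gaussian blur then models the particle size.
    """
    n = rng.poisson(n_particles, size=density.shape)      # particles per pixel
    developed = rng.binomial(n, np.clip(density, 0, 1))   # developed particles
    grainy = developed / n_particles                      # back to a density scale
    return scipy.ndimage.gaussian_filter(grainy, blur_sigma)

# e.g. one layer: simulate_grain(np.full((64, 64), 0.3))
```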
yes please. that data. the 3x3 per density. what is it? it’s like 1e-80 or something that is clearly not 32-bit floating point any more. also, it’s resampled following data locations given by density_curves if i read that right. can i just resample it to a uniform distribution of densities? so i can do it all the same for all the profiles, and store in the same texture.
edit: maybe i was not looking at the full array… for higher densities the numbers become way more normal…
In the json there is `density_curves_layers`. That array is again represented on the same `log_exposure` axis, with shape `[log_exposure, sub_layers, main_layer]`. The sub-layers are ordered from the most sensitive and largest particles to the least sensitive and smallest.
The above is an example in which the RGB total is `density_curves`, while the remaining nine curves are `density_curves_layers`. The sum of `density_curves_layers` along the `sub_layers` axis gives `density_curves`. We can also have a functional version of them, based on Gaussian CDFs.
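In code, the relationship is just this (shapes assumed from the axis description above):

```python
import numpy as np

n_exposure, n_sub, n_main = 256, 3, 3  # assumed sizes
density_curves_layers = np.random.rand(n_exposure, n_sub, n_main)

# summing over the sub_layers axis recovers the total characteristic curves
density_curves = density_curves_layers.sum(axis=1)  # [log_exposure, main_layer]
```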
The weird thing of interpolating with density instead of `log_exposure` came as a consequence of the DIR couplers model. When applying DIR couplers we need to do some trickery, and the relationship `log_raw` → `density_cmy` is no longer simply given by `density_curves`. To solve that, since `density_curves` is monotonic anyway, I used it to interpolate the density of a sub-layer (`density_cmy_layers`) given the final total density of the layer (`density_cmy`) after couplers. The other way is to output an effective `log_raw_after_dir_couplers` and use that for the interpolation.
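For clarity, a minimal sketch of that density-based interpolation with toy monotonic curves (assumed names and shapes, not the actual agx_emulsion code):

```python
import numpy as np

log_exposure = np.linspace(-3, 3, 256)
density_curves = 2.2 / (1 + np.exp(-1.5 * log_exposure))   # toy monotonic total curve
weights = np.array([0.5, 0.3, 0.2])                        # toy sub-layer split
density_curves_layers = density_curves[:, None] * weights  # [log_exposure, sub_layers]

def sublayer_densities(density_cmy):
    """Sub-layer densities from the total density of one channel.

    Since density_curves is monotonic, density itself can be used as the
    interpolation axis instead of log_exposure.
    """
    return np.stack([
        np.interp(density_cmy, density_curves, density_curves_layers[:, i])
        for i in range(density_curves_layers.shape[1])
    ], axis=-1)

print(sublayer_densities(1.1))  # e.g. total density 1.1 after DIR couplers
```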
Haha you two are crazy fast!
On the topic of testing the spectral upsampling, the engineer in me would like proof of robust and smooth results for any RGB input. The RGB gamut boundary is one way of generating that. Here is another possibility I generated with some Python hacking.
constant_sum.exr (3.7 MB)
And the script if you want to modify anything.
generate_constant_sum_slice.py (948 Bytes)
It provides a slice of the RGB volume of constant sum, i.e. a plane with normal [1, 1, 1], with plenty of “negative colors” around the valid triangle. Making sure that this test plane works fine for low, mid, and high exposure (i.e. “all”) should be pretty good proof of the spectral upsampling working well. This is what I was planning to test with, but I won’t be able to provide results in time for your tempo, so I hope a proposed test image will help enough.
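For reference, a hedged sketch of how such a constant-sum slice can be built (the attached script may differ in details):

```python
import numpy as np

N = 512
extent = 2.0  # go well outside the valid triangle for "negative colors"
u, v = np.meshgrid(np.linspace(-extent, extent, N),
                   np.linspace(-extent, extent, N))

# orthonormal basis of the plane with normal [1, 1, 1]
e1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
e2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)
center = np.array([1.0, 1.0, 1.0]) / 3.0  # the slice has constant sum 1

rgb = center + u[..., None] * e1 + v[..., None] * e2
# rgb.sum(axis=-1) == 1 everywhere; values outside [0, 1] are the negative colors
```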
My expectation is that all colors, even monochromatic lasers, at some point go to white.
Implementing grain is interesting from a pipeline perspective. Adding grain post interpolation is one of the best ways of hiding interpolation artifacts. My 21 MP 5D Mark II files looked amazing at almost 100 cm on the longest side if images were run through Alien Skin Exposure’s color film simulation post interpolation. I used this a lot when doing exhibition printing for work. This has always meant I can’t do everything in darktable or Lightroom.
Upsizing images with grain already applied, on the other hand, looks rather terrible.
It has been a long time since I tried vkdt (looks like that will change soon!). Is it possible to put modules/effects post interpolation/upsizing?
hm i have explicit resize nodes that instruct the graph where you want resolution to change if both ends don’t agree. with the film sim i’ll probably make it an explicit upsampling thing that would interpolate / catmull-rom the input image and then simulate grain at the output size.
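something in the spirit of this python sketch (scipy’s cubic spline standing in for catmull-rom, grain made up):

```python
import numpy as np
import scipy.ndimage

def upsample_then_grain(img, scale=2, strength=0.03, seed=0):
    """upscale a (H, W, 3) float image first, then add grain at output size."""
    up = scipy.ndimage.zoom(img, (scale, scale, 1), order=3)  # cubic upsampling
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=up.shape[:2])[..., None]
    return np.clip(up + noise, 0.0, None)  # grain on top hides interpolation artifacts
```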
can’t tell you how much i’m enjoying generating noise. normally i spend my days trying to reduce noise in estimators…
If I get that right, that means grain is applied after the upsampling? That would be just… Amazing!
Not all noise/grain is equal! Enjoy your upsampling masquerading.
I had some spare time recently (that’s not always the case, though), and I got a bit over-excited about the spectral upsampling method by @hanatos. In my opinion it definitely improves the results in practical tests. I like the images much more, at least.
Thanks for sharing!
I am not an expert but that sounds like a reasonable expectation.
For the film simulation that I am doing, my fear is that if the sensitivity of a channel is exactly zero at the wavelength of the monochromatic laser (this is the case for the data as they are now), there will be no density generated in that layer, making it more difficult to reach white in the final print. Also, if the dye generated in a layer with development (let’s say the one that has non-zero sensitivity to the laser) does not have residual absorption in all regions of the spectrum, reaching white might be even more difficult. This is also because there are maximum densities that can be created.
This is good input. I could try to make the sensitivities decay smoothly so that they are never exactly zero. This shouldn’t change the final images much in normal conditions, but it would improve the desaturation behavior with over-exposure.
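Something like this soft floor could work (a sketch with assumed floor/softness values, untested against the real profiles):

```python
import numpy as np

def sensitivity_floor(sensitivity, floor=1e-4, softness=1.0):
    """Softly clamp a spectral sensitivity from below so it never reaches zero.

    A softplus-style blend in log10 space: values well above `floor` are
    unchanged, values near or below it roll off smoothly to the floor.
    """
    log_s = np.log10(np.maximum(sensitivity, 1e-12))
    log_f = np.log10(floor)
    return 10 ** (log_f + softness * np.logaddexp(0.0, (log_s - log_f) / softness))
```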
How should I treat the negative colors here? Maybe it is a limitation of my implementation: if I import the image in linear Rec2020, for example, I cannot ask it to generate spectra outside of that. The algorithm by hanatos could actually work on the full visible locus, but for the sake of optimization I pre-baked a Rec2020 LUT.
Should I import in linear Rec709, for example? Is this what you envisioned?
For now I computed a couple of default simulations (Kodak Gold and Portra Endura), importing in linear sRGB (Rec 709). Probably we should isolate the spectral upsampling part to study this aspect better; the interaction with the film sim is also very interesting in my opinion. Plus, they looked colorful and fun enough to be shared.
They might show some glaring mistakes in my code. For sure, before the spectral upsampling I convert to linear Rec2020 and clip negative values, leaving the upper end unbounded, to be able to use my LUT of spectra.
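The conversion/clip step is essentially this (a sketch assuming the colour-science API):

```python
import numpy as np
import colour  # colour-science, assumed to be available

def prepare_for_spectra_lut(rgb_linear_srgb):
    """Linear sRGB -> linear Rec2020, clipping only the lower end."""
    rgb_2020 = colour.RGB_to_RGB(rgb_linear_srgb,
                                 colour.RGB_COLOURSPACES['sRGB'],
                                 colour.RGB_COLOURSPACES['ITU-R BT.2020'])
    return np.clip(rgb_2020, 0.0, None)  # negatives clipped, highlights unbounded
```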
(left) hanatos2025, (right) mallett2019
That’s some grain!!!
Can’t say that it is enjoyable to look at in this image, but it is a good start! I am amazed by your speed, and the speed of vkdt of course.
hehe yeah it’s completely nonsensical, pretty much just `binom(poisson(something made up of thin air that looks almost like the density))`. certainly not an indication of what it is going to look like/looks in your code.
not sure this test image is super relevant. i mean these coordinates are waaaaay outside:
any even partially meaningful input device transform would make sure these values are a bit more realistic. these aren’t even close to the spectral locus. here i marked all the values (interpreting input as rec709/linear) that are within the super large rec2020 gamut (it touches the boundaries of the spectral locus):
edit: this is the spectral locus:
so if anything it will test the out-of-gamut inpainting of the upsampling map.
Plotting the xy chromaticity is very telling of the extreme range of the image. Thanks for the analysis!
After these comments, I had some fun and attempted to make another scene-referred test image, more oriented toward verifying the smoothness of the full simulation, while still trying to stay in a large enough gamut to be meaningful and telling of the capabilities of the spectral upsampling. Also I wanted it to be HDR.
My attempt looks something like this:
gradient_hdr_rgb.exr (390.8 KB)
Code

```python
from agx_emulsion.utils.io import save_image_oiio
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt

N = 64
x = np.linspace(0, 1, 2 * N)
# log-spaced amplitudes: +12 to -6 stops around 0.184 midgray
y = np.logspace(12, -6, 4 * N, base=2) * 0.184
z = np.zeros_like(x)

# piecewise R->G, G->B, B->R gradients, tiled twice for wrap-around smoothing
grad_rg = np.stack((x, 1 - x, z), axis=-1)
grad_gb = np.stack((z, x, 1 - x), axis=-1)
grad_br = np.stack((1 - x, z, x), axis=-1)
grad = np.concatenate((grad_br, grad_gb, grad_rg, grad_br, grad_gb, grad_rg), axis=0)
grad = scipy.ndimage.gaussian_filter(grad, (2 * N / 4, 0), mode='wrap')
grad = grad[:8 * N, :]                  # keep one full period
grad /= np.sum(grad, axis=-1)[:, None]  # normalize so R+G+B == 1 per row
grad = grad[np.newaxis, :, :] * y[:, np.newaxis, np.newaxis]  # scale rows by amplitude
grad = np.fliplr(grad)

save_image_oiio('gradient_rgb.exr', grad, bit_depth=32)
plt.imshow(grad)
```
The image is made by scaling these RGB profiles with a log-spaced amplitude. The sum of RGB is 1 for the base profile, and the intensity spans from -6 to +12 stops of 0.184 midgray (i.e. [0.184, 0.184, 0.184] * scaling factor).
If interpreted as Rec2020, it covers this region of the xy chromaticity space:
It is not going to the edges, but it tries to be smooth.
With a default simulation (deactivating auto-exposure) we get:
Kodak Gold 200 and Kodak Portra Endura, (left) hanatos2025 (right) mallett2019
And with Kodak Portra 400 and Kodak Portra Endura, (left) hanatos2025 (right) mallett2019
I had a closer look at the cyan region for some insight on the “cyan discontinuity”. If we take a vertical section at around 2/3 of the x axis we get this:
(left) sRGB output, (right) Linear Rec2020 output
sRGB is clearly clipping, creating the hard edge in the cyan.
Outputting in Rec2020 (and then reinterpreting here in the browser as sRGB) shows a smooth cyan transition (with Kodak Gold 200 and Portra Endura).
Everything looks rather smooth.
Maybe we are boosting the saturation too much in the sim (although I find the saturation levels of the images pleasing), or the saturation achievable in physical prints cannot fit very well in the sRGB gamut and we easily get clipping. Possibly a combination of the two.
Thanks for the great work, I played with this and the ART integration. I found it to be very exciting.
The only issue I noticed that stood out: the agx_emulsion GUI, being un-colour-managed on macOS, gives significantly different gamma/contrast when saving a layer compared to the viewing window. Unfortunately this is a bit of a blocker for actually using it much, but it’s somewhat fixable by adjusting the black point and contrast afterwards.
I think these simulations could be a real killer feature for open source photography, and I am excited about the possibilities, such as exporting negatives I can use my regular film workflow on, in an attempt to unify my workflows. I also think the grain and halation are quite realistic, and they finally address the notion that pixels shouldn’t be the finest unit of detail in an image shot on film.
Maybe it was Daniele Siragusano who once showed a chromaticity plot of projected print film?! I actually can’t remember who it was, maybe it was Troy Sobotka.
BUT it was surprisingly large, almost reaching the blue and red corners of the visible locus.
The dyes in the print can be dense enough in two channels that the resulting blue and red colors basically only transmit light at the edges of the visible spectrum.
So yes, a far greater gamut than sRGB, at least in the blues and reds.
But I guess you can try it yourself by plugging the test image not into the negative exposure part of your pipe but at the end, to see the extent of your spectral print gamut in the xy chromaticity plane.
EDIT: (I know this is anecdotal, but I am trying to find the chromaticity plot…but I can’t.)
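If useful, a small sketch of such a chromaticity footprint check (assuming the colour-science package and linear sRGB pixels):

```python
import numpy as np
import colour  # colour-science

def xy_footprint(rgb_linear_srgb):
    """xy chromaticities of all pixels, e.g. of the simulated print output."""
    cs = colour.RGB_COLOURSPACES['sRGB']
    XYZ = rgb_linear_srgb.reshape(-1, 3) @ cs.matrix_RGB_to_XYZ.T
    return colour.XYZ_to_xy(XYZ)  # scatter these over the spectral locus
```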
some initial images with grain:
this is the digi clean, for reference:
and here with grain applied:
this is grain by layer, i.e. showing grain only for one of the three layers and the other two develop digi clean:
i just took the grain colour/layer area multipliers from the agx gui. my mathematics are not super clean, i hope to not get caught with it. on the bright side, it is only moderately slower with grain applied: went up from 25 to 28ms for full res.
i find it super cool how the grain contributes to the image and makes it appear sharper.
next: make option to upscale 2x or 4x work, and then implement DIR.
Exactly what I thought when I saw your images. Even though, when you look at both, the original is obviously sharper (e.g. the eyelashes).
I sometimes add noise to an image if it appears too soft for printing. A textured paper has a similar effect.
oh man, i just have to say it again… this simulation is soo incredibly cool. i just spent an hour easy just converting a bunch of pictures. the most random shots turn into magic with the filmsim applied… skin tones are deep and shadows exciting… rolloff is soft and just right…
the only thing i struggle with is white balancing, i’ll probably convert the json/list of white balance weights to some vkdt presets that change with film/paper combination. in case anyone wants to test my WIP, vkdt git master has it (docs here, you’ll need the `filmsim.lut` data file and then apply the `filmsim.pst` to any image you’d like: press `ctrl-p` in darkroom mode, type `filmsim` and then enter).