LGM 2019 presentation on Filmulator

I missed it when it was uploaded, but now I’ve found it, so I’ll share it here.



Thanks! Amazing actually! I am a bit baffled that Filmulator is not more in the limelight. Will test ASAP. :slight_smile:


Okay, wow. I took a rather difficult Fuji raw file and got to a result that I couldn’t reach with DT, in about two attempts with maybe 10 minutes spent on each, on a 2-core Windows 7 machine without enough RAM (1.2 GB available). Maybe one can get there with DT… but I couldn’t.

I like how artifact-free the developer-depletion idea comes out (if you don’t overcook it with ‘Drama’). Would you leave the algorithm untouched now, or are you thinking about modeling other physical-chemical processes during development? (e.g. two-photon absorption as a silver-grain activation process, which is very non-linear at low fluxes; films which tend to be sharper than others because of locally embedded chemistry; fine-grain developers which break large grains into smaller chunks; ‘hard’ developer chemistry…)

It might go against your idea of ‘simplicity’, but might we see different demosaicing options if they were available? I couldn’t find a way to switch demosaicing options… it wasn’t actually necessary in this case, but are there any?

The black-and-white button sounds promising. Couple it with a color filter after demosaicing but before film emulation.

That buttery-smooth canvas! Super useful, and it makes the app feel responsive even if it takes 15 s to recompute.

I am very much going to use this. Looking forward to any further development.


The algorithm is going to stay as-is.

It only simulates film in an abstract sense. It doesn’t simulate each grain of film, it just says “there are this many grains (floating point, not integer) under this pixel, and they’re all this diameter after this development step”. In the end, a perfectly uniform gray input should yield a perfectly uniform gray output as well, with zero grain.
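To make that abstraction concrete, here is a toy sketch of a per-pixel developer-depletion loop in the spirit of the description above. All names and constants are made up for illustration; this is not Filmulator’s actual code, and it omits any diffusion of developer between pixels.

```python
import numpy as np

# Toy sketch (NOT Filmulator's actual code) of the abstract grain model:
# each pixel tracks a floating-point grain state; each development step
# grows the grains in proportion to the local developer concentration,
# which is depleted wherever growth happens.

def develop(exposure, steps=12, rate=0.5, depletion=2.0):
    """exposure: 2-D array of linear light values in [0, 1]."""
    developer = np.ones_like(exposure)     # local developer concentration
    crystal_rad = np.zeros_like(exposure)  # grain radius under each pixel
    for _ in range(steps):
        # growth is proportional to exposure and to available developer
        growth = rate * exposure * developer / steps
        crystal_rad += growth
        # bright areas consume developer, so their growth self-limits
        developer = np.clip(developer - depletion * growth, 0.0, 1.0)
    return crystal_rad  # density ~ developed silver

# A perfectly uniform gray input yields a perfectly uniform output:
flat = develop(np.full((4, 4), 0.5))
```

Note how the depletion term compresses highlights relative to shadows even without any spatial effects, which is the “abstract sense” of film simulation being described.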

Demosaicing I might make available, but I might hide it away in the settings tab. I could make it still be set per-image, but it’s not something I would want to change often so it would hide out over there. Anything in librtprocess could be made available in Filmulator; if you want more algorithms then we just need to port more algorithms into librtprocess.

I was considering putting B&W before Filmulation in the pipeline, but that would make it much slower in responding to human input, so I put it after.

I’m glad you like it!

I understand. I don’t mean to suggest that you should simulate each grain. The processes I mentioned are part of what makes chemical development quite complex, and all of them affect global and local contrast in different ways. Your algorithm is, to my knowledge, the first that tries to model what is chemically going on. Yes, in an abstract sense, but that’s still different from ‘just tonemapping’ (although mathematically this might be what it is). And the results are lovely.

I was just asking because with Fuji raw files there is the ‘simple’ Markesteijn demosaic and the 3-pass version (and sometimes color-smoothing passes). Again, I haven’t seen problems with it, but I wonder how to deal with it IF it comes up.


Have you tried playing around with the “Film Area” parameter? It might do what you’re looking for: it changes the scale of the contrast effects.


I sure did, and I think it works like a charm!
(Suggestion: right now it displays SF, then MF, then LF as you change the parameter. It could step through various film formats more explicitly, e.g. something like: S8, S16, S35 3-perf/half frame, 24x36 mm/8-perf, 4x4, 6x4.5, 6x6, 4x5, 8x10.)
It only partly ‘solves’ my problem, though. I should read up on this more before trying to explain it.

But for example: a simulation of two-photon absorption could be realized by a not-so-simple tone-curve adjustment in the shadows (yep, I’ve seen the shadows slider). A tone curve is global, though; underexposure is very local, depending on exposure, and very non-linear in terms of developer consumption because of the very non-linear light-absorption process. The simulation/emulation accounts for developer consumption but not for the non-linearity at low intensities, right?
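For what it’s worth, the low-flux non-linearity can be illustrated with a toy Poisson model (purely illustrative, not anything in Filmulator): if a grain needs at least two photon hits to become developable, its activation probability goes roughly as the square of the flux at low intensities and only saturates at higher ones.

```python
import math

def activation(n):
    """Toy model: probability of >= 2 Poisson-distributed photon hits,
    given a mean of n hits per grain during the exposure."""
    return 1.0 - math.exp(-n) * (1.0 + n)

# At low flux the response is ~n^2/2, i.e. strongly non-linear;
# at higher flux it bends over and saturates toward 1.0.
for n in (0.001, 0.01, 0.1, 1.0, 4.0):
    print(f"{n:6}: {activation(n):.3e}")
```

So halving the exposure in the deep shadows roughly quarters the response under this model, which is exactly the kind of behavior a single global tone curve can’t capture per-pixel.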

As for the locally embedded chemistry that makes some films sharper or more acute than others: I have no idea how this is done chemically or how it affects the development process.

Fine-grain developers apparently break up larger silver grains so that a larger core is surrounded by smaller chunks. This softens high-frequency detail on top of the local chemistry.

And this is just B&W chemistry; the chromogenic chemistry of color films is probably even more complex.

I am not criticising Filmulator for being incomplete, by the way. It just made me think about how different a digital raw workflow is from what we had for almost 100 years before it. Those hundred years probably contain decades of man-hours spent on achieving visually pleasing tonemapping.

Okay, this post got too long, sorry! :slight_smile:


The film simulation gives linear exposure up to the highlight rolloff point.

Way back when I started the program, I considered adding a toe, but instead just added highlight rolloff, and because I liked that result, it stuck that way.

It definitely doesn’t give the stark shadows that a contrasty film does; in fact, Filmulator is pretty bad at that, because it recovers the shadows. That’s something to experiment with, maybe.
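The shape described above might be sketched roughly like this (an illustrative curve of my own, not Filmulator’s actual math): identity through the midtones, a smooth shoulder above a rolloff point, and an optional toe that darkens the shadows.

```python
import numpy as np

def tone(x, rolloff=0.8, toe=0.0):
    """Illustrative tone curve: linear below `rolloff`, a smooth
    shoulder approaching 1.0 above it, and an optional shadow toe."""
    x = np.asarray(x, dtype=float)
    y = np.where(
        x <= rolloff,
        x,
        # shoulder: continuous in value and slope at the rolloff point
        rolloff + (1.0 - rolloff) * (1.0 - np.exp(-(x - rolloff) / (1.0 - rolloff))),
    )
    if toe > 0.0:
        # darkens everything below 1.0, most strongly near black
        y = (1.0 + toe) * y * y / (y + toe)
    return y
```

With `toe=0` this matches the “linear up to the rolloff point” behavior; a small positive `toe` is one simple way to get the stark, contrasty shadows mentioned above.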

I never knew that about the different developers, fascinating. It’s probably not something I can do at all here.


Here’s an attempt at it.

Left is normal, right is with some toe added to darken the shadows. It’ll be adjustable.


Whoa! Flippin’ insane!

Don’t worry, you’re doing a great job already.