Filmulator algorithm in other editors

I’m wondering if the Filmulator algorithm could be decoupled from the graphical UI and integrated in other editors.

I’m asking because I am currently working on an experimental version of PhotoFlow that allows combining several bracketed exposures into a single HDR image with unbounded values (i.e. floating-point values are allowed to go above 1.0 and are still correctly handled by several processing tools).

Could the Filmulator algorithm be used in this case to tone-map the HDR image? Have you already tried something like that?


Indeed, Filmulator can be used to handle wider dynamic range scenes and compress them into a narrower range. I did this in the command-line version of Filmulator.

Preferably you’ll want some degree of highlight rolloff in the source, and if the scene has a crazy wide dynamic range you also need to raise the development simulation iteration count for the stability of the solution. If you have too few iterations, the first-order approximations of the chemical reaction rates break down and produce negative values.

(edit: You can save some computation, relative to simply running more simulation steps, by iterating the first-order approximation of the development reaction several times per diffusion step if necessary. That functionality isn’t currently in the code, though it should be easy to add.)
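
A minimal sketch of what that sub-stepping might look like, with hypothetical names; this isn’t in the current code:

```cpp
#include <vector>

// Hypothetical sketch of sub-stepping the development reaction between
// diffusion steps; not in the current code, and the names and rate
// constant are illustrative. Taking several smaller first-order steps
// keeps the approximation stable when developer is consumed quickly.
void developSubstepped(std::vector<float> &crystals,
                       std::vector<float> &developer,
                       const std::vector<float> &exposure,
                       float dt, int substeps)
{
    const float rateConst = 1.0f;      // illustrative reaction rate
    const float subDt = dt / substeps; // smaller step, same total time
    for (int k = 0; k < substeps; ++k)
        for (size_t i = 0; i < developer.size(); ++i) {
            // First-order update: consumption is proportional to the
            // local developer concentration, so small enough steps
            // never drive the concentration negative.
            float consumed = rateConst * exposure[i] * developer[i] * subDt;
            crystals[i]  += consumed;
            developer[i] -= consumed;
        }
}
```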

The way Filmulator handles things now, the iteration count is fixed at 12 and highlights are forced to roll off asymptotically to 1.0; in your case you’d want the extra control.
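
As a generic illustration (not necessarily Filmulator’s exact curve), any function that approaches 1.0 asymptotically gives that kind of rolloff:

```cpp
// Generic asymptotic highlight rolloff: the output approaches but never
// exceeds 1.0 as the input grows without bound. Illustrative only.
float rolloff(float x) { return x / (1.0f + x); }
```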

Would it be okay with you if I handed you a test HDR file, so that you can show me what can be achieved?

Unfortunately I still couldn’t compile Filmulator due to the Qt5 dependency, which is difficult to get in the distribution I’m currently using… maybe I can limit myself to the command-line version, but then I need a first educated guess at the parameters before I can start experimenting.

In terms of the Qt5 dependency, I use Qt right from the source rather than the repos. But that won’t help you try out HDR files, because right now I have no means for TIFF input. Maybe I’ll whip up something quickly this week.

The command-line version works, but is rather less satisfying, because it’s not interactive.

If you’re worried about the default parameters for the CLI version, they’re in configuration.txt. Try raising layer_mix_const in the range of 0 to 1 (Drama is layer_mix_const times 100) and playing around with the film area anywhere from 100ish to 100,000 (though the upper values are best reserved for panoramas with a huge FoV).

Usage would be filmulator -c [configuration file] -t [tiff filename]

If you want to send me a test file, it’s best as a 16-bit tiff in linear space. A floating-point tiff might work, but I haven’t tried anything but 16-bit tiffs with the current tiff loading function. Either way I’ll try to make it work.

If you just need a quick way to get HDR data in and out, you should add support for PFM. The code is tiny, and while the files are big, they’re easy to read and write. Have a look at darktable’s code:

Reading:
https://github.com/darktable-org/darktable/blob/master/src/common/imageio_pfm.c#L34-L104

Writing:
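
As a rough standalone sketch of both directions (illustrative only, not darktable’s actual code; error handling and big-endian byte-swapping omitted):

```cpp
#include <cstdio>
#include <vector>

// Minimal PFM I/O sketch. The header is plain text: "PF" (RGB) or "Pf"
// (grayscale), then width, height, and a scale whose sign encodes
// endianness (negative = little-endian). Raw 32-bit floats follow,
// with rows stored bottom-to-top.
std::vector<float> readPFM(const char *path, int &width, int &height)
{
    FILE *f = std::fopen(path, "rb");
    float scale = 0.0f;
    std::fscanf(f, "PF %d %d %f", &width, &height, &scale);
    std::fgetc(f); // a single whitespace byte separates header from data
    std::vector<float> pixels(3ull * width * height);
    for (int row = height - 1; row >= 0; --row) // bottom-to-top storage
        std::fread(&pixels[3ull * width * row], sizeof(float), 3ull * width, f);
    std::fclose(f);
    return pixels;
}

void writePFM(const char *path, const std::vector<float> &pixels,
              int width, int height)
{
    FILE *f = std::fopen(path, "wb");
    std::fprintf(f, "PF\n%d %d\n-1.0\n", width, height); // -1.0: little-endian
    for (int row = height - 1; row >= 0; --row)
        std::fwrite(&pixels[3ull * width * row], sizeof(float), 3ull * width, f);
    std::fclose(f);
}
```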

It’s more about the UI and database aspect than the actual image importing; I already have 16-bit tiff input in the code, and that should be sufficient for most HDR uses. I’m trying to figure out how to include it in the Import tab in a way that’s compatible with my future plans (I have too many plans…). But I’ll take a look.

I think that for single-file HDR images one needs 32-bit floating-point values, because the upper bound is basically undefined (there is no physical limit on luminance values).

To be honest, Filmulator probably is not the best algorithm for doing crazy tone mapping from files with more than 16 stops of dynamic range. It likes noiseless shadows and highlights that it is certain aren’t clipped, but it won’t do the big flattening operations that other tone mapping algorithms can do.

What I am looking for is not the strong local contrast enhancement of many of the HDR images one can find on Flickr, but more a realistic, film-like effect.

The starting point will not be a single RAW file, but a 32-bit floating-point image obtained by blending several exposures into a single HDR image (something similar to what HDRMerge does, but with image alignment and RAW processing in addition). In this case, shadows will be obtained from over-exposed shots that get under-exposed in the RAW processing, and will therefore have very low noise…

A few “filmic” tone mapping operators are already available, but I’d be really interested to see how your original approach compares to them.

Ah, okay.

I’ll include any and all formats as necessary; I just need some time to integrate non-raw formats into the interface (import needs to be in place, and the tools for auto CA correction, highlight recovery, etc. need to be hidden).

Is there already a way to send an image buffer to the filmulator algorithm, together with the processing parameters, and get back the result? Or is this still deeply mixed with the UI part?

It’s pretty much independent. Our pipeline as a whole needs some other interface classes, but the filmulation process itself doesn’t need anything.

Just as an aside, I popped into the darktable IRC today to ask a question and somehow got roped into writing a Filmulate module, which seems relatively simple to do, although I’m unsure how nice it’ll be to use without the prefilm histogram and without full-resolution pipeline caching. I haven’t implemented any of the actual image processing yet, though.

Hopefully my experiences getting the code to work outside of its native environment will let me help you better.


It’s not as easy as just copy/pasting the code, it seems, but the results are… fantastic. Not in a good way.

[screenshot on Imgur]

And now I have it mostly working.

The sliders don’t work, so I’ve overridden the film area to somewhat smaller than normal, and the drama to 40.

It needs the zoom settings sorted out though; it wants the full image area at whatever resolution is needed, but right now I don’t even know what it’s set to do.

And now the sliders are working. Kinda. They don’t save the settings though.

Very interesting! I would like to try incorporating that into PhotoFlow as well… do you have the darktable module already available somewhere?
Maybe we can try to do that together…

A couple of questions:

  • is the algorithm capable of processing a scaled-down input image and generating an output similar to what you would get by scaling down the full-resolution processed image?

  • is it possible to process just a portion of the input image? If yes, how many border pixels are needed? Is this a predictable function of the processing parameters?

Sorry for the long questions…

The code is in the filmulate branch of darktable on my GitHub. There’s a filmulate.cc with the interface code in iop, and a filmulate subdirectory containing the image processing files taken from Filmulator (suitably modified).

Filmulation is fully scale independent, but it requires the whole image. The darktable people were very insistent on trying to sample the developer from a low resolution image, but that’s even less ideal than a blur scheme we tried a while back, where we’d downsample, blur that, and then upsample. Any aliasing that occurs as a result of downsampling is reflected into the developer concentration, and the darktable low res pipeline has a lot of aliasing…

[quote=“CarVac, post:18, topic:1033”]
Filmulation is fully scale independent, but it requires the whole image. The darktable people were very insistent on trying to sample the developer from a low resolution image, but that’s even less ideal than a blur scheme we tried a while back, where we’d downsample, blur that, and then upsample. Any aliasing that occurs as a result of downsampling is reflected into the developer concentration, and the darktable low res pipeline has a lot of aliasing…
[/quote]

What is the goal of downsampling? Just saving memory/bandwidth? Is the developer low-pass filtered anyway?
From your descriptions, film area always sounded a bit like a low-pass filter. Do you have a description of the algorithm somewhere?

The algorithm goes thusly (see the sketch after the list):

  1. Run a first order differential equation to grow silver crystals and consume developer. The rates are controlled by the developer concentration.
  2. Blur (diffuse) the developer laterally within the active layer. This is a 100-200 pixel standard deviation blur. Film area controls the radius: stuff diffuses farther relative to the image size on smaller film sizes.
  3. Diffuse between an inactive reservoir and the active layer.
  4. Repeat (default 12 times); halfway through, mix all of the developer to simulate agitation.
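
A compact per-pixel sketch of that loop, with illustrative names and constants rather than Filmulator’s actual ones (the real code works on 2D buffers, tracks reservoir depletion, and uses a proper wide blur):

```cpp
#include <vector>

// Toy sketch of the filmulation loop described above; illustrative only.
struct FilmState {
    std::vector<float> crystalGrowth;   // developed silver per pixel
    std::vector<float> activeDeveloper; // developer in the active layer
    float reservoirDeveloper = 1.0f;    // well-mixed reservoir
};

void filmulateSketch(const std::vector<float> &exposure, FilmState &s,
                     int devSteps = 12)
{
    const float rateConst = 1.0f; // first-order reaction rate (illustrative)
    const float layerMix  = 0.2f; // reservoir<->layer exchange per step
    for (int step = 0; step < devSteps; ++step) {
        // 1. First-order development: consumption scales with the local
        //    developer concentration.
        for (size_t i = 0; i < exposure.size(); ++i) {
            float consumed = rateConst * exposure[i] * s.activeDeveloper[i];
            s.crystalGrowth[i]   += consumed;
            s.activeDeveloper[i] -= consumed;
        }
        // 2. Lateral diffusion within the active layer: a very wide blur
        //    whose radius is set by the film area (stand-in shown here).
        // gaussianBlur(s.activeDeveloper, sigmaFromFilmArea);
        // 3. Exchange between the inactive reservoir and the active layer
        //    (reservoir depletion omitted for brevity).
        for (float &d : s.activeDeveloper)
            d += layerMix * (s.reservoirDeveloper - d);
        // 4. Halfway through, agitate: mix the developer back to uniform.
        if (step == devSteps / 2) {
            float mean = 0.0f;
            for (float d : s.activeDeveloper) mean += d;
            mean /= s.activeDeveloper.size();
            for (float &d : s.activeDeveloper) d = mean;
        }
    }
}
```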

We tried downsampling before blurring in the hopes it would be faster than our original method, iterated box blurs at full resolution, and it was, but the bigger the speed advantage, the more the quality suffered. In the end we switched to two-pass infinite impulse response filters, which are even faster and more accurate.
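
For a taste of the IIR approach: the simplest version runs a single-pole filter forward and then backward over each row and column, so the cost is independent of the blur radius (a sketch, not Filmulator’s actual filter):

```cpp
#include <vector>

// Two-pass (forward + backward) single-pole IIR filter along one row.
// Applying it over rows and then columns approximates a very wide blur
// at O(n) cost regardless of radius. Sketch only; Filmulator's actual
// filter coefficients differ.
void iirBlurRow(std::vector<float> &row, float alpha /* 0 < alpha < 1 */)
{
    if (row.size() < 2) return;
    // Forward pass: causal exponential smoothing.
    for (size_t i = 1; i < row.size(); ++i)
        row[i] += alpha * (row[i - 1] - row[i]);
    // Backward pass: anti-causal, symmetrizes the impulse response.
    for (size_t i = row.size() - 1; i-- > 0; )
        row[i] += alpha * (row[i + 1] - row[i]);
}
```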