Filmulator algorithm in other editors

Guys, thanks so much for this work. I think it is exciting to see and I really like the out-of-the-box results of Filmulator. The field stone struck me as a great example of why this technique is so interesting. It seems you would have to mask the stone before applying the Velvia tool to get close to that with stock darktable. I understand that it emulates local developer exhaustion. Do the controls take you into exhausted developer? Black and white?

The control “Drama” lets you adjust the rate of replenishment from a reservoir (mimicking the part of the developer liquid that isn’t in contact with the film). If you reduce the replenishment, the developer that’s actually reacting gets depleted more. If you have overdrive off, it’ll completely mix the developer once midway through, replenishing it fully, but if you have overdrive on, it doesn’t mix the developer.
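As an illustration of the reservoir idea, here is a toy sketch in Python. This is not the actual Filmulator code; all names, constants, and the update rule are invented for illustration, but the qualitative behaviour (lower replenishment leaves the active developer more depleted; a midway mix restores it) matches the description above.

```python
def simulate(steps, consumed, replenish_rate, mix_midway=False):
    """Toy developer-exhaustion model (invented, not Filmulator's code).

    consumed:        fraction of the active developer used per step
    replenish_rate:  diffusion from the reservoir into the active layer;
                     lowering it mimics raising 'Drama'
    mix_midway:      completely re-mix the developer once halfway through
                     (what 'overdrive off' is described as doing)
    """
    active, reservoir = 1.0, 1.0
    for i in range(steps):
        active *= 1.0 - consumed          # development depletes the active layer
        flow = replenish_rate * (reservoir - active)
        active += flow                    # the reservoir replenishes the layer
        reservoir -= 0.05 * flow          # the larger reservoir depletes slowly
        if mix_midway and i == steps // 2:
            active = reservoir            # complete mix: full replenishment
    return active
```

With a low `replenish_rate` the active developer settles at a much lower concentration than the reservoir, which is the local-exhaustion behaviour the “Drama” control dials in.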

For B&W you should just convert after the fact.

I just added a working color space setting that lets you downconvert to Rec.709 (like sRGB) instead of Rec.2020. This helps control the brightness of highly saturated channels when outputting in sRGB.

Rec.2020:

Rec.709:

(with all other settings equal; basically I can raise the shadows more without ruining the red channel detail using the smaller color space)
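For reference, downconverting linear-light Rec.2020 to Rec.709 primaries is a 3×3 matrix multiply. The matrix below is the commonly published Rec.2020→Rec.709 conversion, rounded to four decimals; the point of the sketch is that highly saturated Rec.2020 values land outside [0,1] after conversion, which is exactly the situation the working-space setting helps with:

```python
# Linear-light Rec.2020 -> Rec.709 primary conversion (D65 white point).
# Coefficients are the commonly published values, rounded to 4 decimals.
M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def rec2020_to_rec709(rgb):
    return [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# A pure Rec.2020 red maps outside the Rec.709 gamut (negative green and
# blue), so saturated channels need extra handling after conversion.
print(rec2020_to_rec709([1.0, 0.0, 0.0]))  # ~[1.6605, -0.1246, -0.0182]
```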

I built the filmulator branch of @CarVac’s github repository of darktable today. Apart from the fact that I don’t know what results to expect, and that I don’t have reasonable test pictures (I built and tested it during my lunch break at work), it seems to work pretty well (Ubuntu 15.04). I’ve noticed just two (very) little issues. First, the brightness range seems to be clamped to values up to about 3/4 of the histogram (it changes with rolloff boundary but never reaches 1). I had to use the curves to correct the white point, since the clamping seems to be introduced by filmulator and the exposure controls no longer change the overall exposure. I guess that behaviour is OK for a tonemapping algorithm, but maybe the filmulator module should get an exposure/white-point correction slider? Second, the slider loupe (right-click on sliders) only works for film size, not for the other sliders.

The brightness range is limited to about 3/4 because as far as I know, after Filmulation, there’s no function for pulling down highlights that lie outside the histogram anymore; if you want it to default to something brighter and have a slider to darken, I’ll add that.

Rolloff boundary affects the rolloff of highlights before the film simulation; so does the ‘exposure’ tool.
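To give an idea of what a pre-filmulation highlight rolloff can look like, here is a generic tanh-based soft clip. This is purely illustrative and an assumption on my part; Filmulator’s actual rolloff curve may have a different shape:

```python
import math

def highlight_rolloff(x, boundary):
    """Illustrative soft clip: values below `boundary` pass through
    untouched; values above it are compressed smoothly toward 1.0.
    The tanh keeps the slope continuous at the boundary."""
    if x <= boundary:
        return x
    span = 1.0 - boundary
    return boundary + span * math.tanh((x - boundary) / span)
```

Raising `boundary` preserves more of the original highlight contrast but leaves less headroom before the compression kicks in.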

When using it, I would disable base curve, adjust exposure to manage the histogram nicely, and then enable Filmulation, adjust its parameters, and then move on to the tone curve. Adjust the white point, and then adjust the shadows. It’s a similar flow to what I’d do in Filmulator, moving in order of the pipeline, but the results (as shown before) end up very different.

RE: the slider loupes, I have no idea why that might be. I’ll investigate. To be fair, though, none of the parameters require any precision at all.

Thanks for the explanations. Makes perfect sense with the brightness limit. I just wonder what would be the best way to recover brightness after filmulation, since there are so many possibilities, not only the curves but levels etc.

With the slider loupes I had the same impression: they are not really needed for filmulator. I wonder if it is possible to use the slider widget without the loupe; what would be missing then is a way to directly type in a value. However, I had a closer look, and it is not that the slider loupes don’t work at all; rather, the mapping between the loupe’s range and the overall slider range seems somehow off.

Anyway, thanks for bringing filmulation to darktable! :smiley: It’s a very interesting approach.

@CarVac Meanwhile I’ve quite significantly progressed in the inclusion of Filmulator into PhotoFlow.

However, before I can release an experimental version I need to clarify a couple of details:

  • how should I modify the parameters for a scaled-down version of the input image?
  • what is the minimum input region size which is needed to correctly process an output region of WxH+X+Y (where W and H are the region dimensions and X,Y are the offset)?

By the way, the filmulator code is included in an experimental photoflow version that allows unbounded corrections of the exposure, so the brightness limitations mentioned above for darktable are not present…

The algorithm is scalable: you can resize the input image (keeping the aspect ratio, of course) and the output will be the same.

However, whatever resolution you give it, it needs the entire image. Otherwise the brightness will change dramatically depending on where you look: if the viewport covers only a dark area, it will come out lighter than expected, and if it covers only a light area, it will come out darker than expected.
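The whole-image dependence can be seen in a toy model where the developer depletes in proportion to the average brightness of everything it sees. This is purely illustrative, not the real algorithm, but it shows why a dark crop processed alone comes out lighter than the same pixels inside the full image:

```python
def toy_filmulate(pixels):
    """Toy global-depletion model (not the real Filmulator algorithm):
    the developer concentration drops with the mean brightness of the
    whole input, so every output pixel depends on every input pixel."""
    developer = 1.0 / (1.0 + 0.5 * sum(pixels) / len(pixels))
    return [p * developer for p in pixels]

dark = [0.1] * 4
full = dark + [0.9] * 4
# The dark pixels come out brighter when processed as a crop than when
# the bright region is also present to deplete the developer.
print(toy_filmulate(dark)[0], toy_filmulate(full)[0])
```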

Regarding brightness: tools coming after it in the pipeline must realize that it’s not linear anymore. You can still try to apply the sRGB gamma at the end, but it won’t match the Filmulator output, which has a different curve instead of a standard gamma.

One approach to better match Filmulator would be to follow the filmulation with black point and white point controls, then apply that curve (a quadratic bezier curve from 0,0 to 0.2,1 to 1,1) and then undo the sRGB gamma, and treat that as roughly linear.
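The curve-then-inverse-gamma recipe can be sketched as follows. The bezier evaluation solves x(t) for t with the quadratic formula (for these control points, x(t) = 0.4t + 0.6t²); the sRGB linearization constants are the standard ones; the composition of the two steps is my reading of the recipe above, not code from Filmulator itself:

```python
import math

def bezier_curve(x):
    """Quadratic bezier through (0,0), (0.2,1), (1,1), evaluated as y(x).
    x(t) = 0.4*t + 0.6*t**2, so t follows from the quadratic formula."""
    t = (-0.4 + math.sqrt(0.16 + 2.4 * x)) / 1.2
    return 2.0 * t - t * t          # y(t) = 2*t - t**2

def srgb_to_linear(v):
    """Standard inverse sRGB gamma (the 'undo the sRGB gamma' step)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def post_filmulation(v):
    """Apply the bezier, then linearize; treat the result as roughly linear."""
    return srgb_to_linear(bezier_curve(v))
```

Black and white point adjustment would happen before `bezier_curve`, per the order described above.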

Is the PhotoFlow pipeline RGB? Lab? If RGB, what primaries, or is that selectable?

Actually, it can be whatever you want, depending on the chosen output colorspace of the RAW processing step. By default, I’m using linear Rec.2020, but the user is free to select both the primaries (currently I have built-in primaries for sRGB, Rec.2020 and ACES) and the tone response curve independently (so that one can for example use a Rec.2020 colorspace with the L* encoding).

One can also directly output Lab values from the RAW processor, but that’s not useful here.

I need to think how I can efficiently incorporate a filter that requires the whole image data into memory, because this is not straightforward in the current processing scheme.

It should be fixed now.

I also updated the pipeline order to put Filmulator in its proper place.

Tested again and slider loupes are OK now. Thanks!

How do you like what it can do in darktable? Any good results, or just trying it out?

Whenever I’m using it in darktable I’m constantly comparing it against what I expect from Filmulator, and that strongly guides my processing tendencies; I’d like to see someone else’s interpretations of its capabilities.

Not yet; I only tested it during my coffee/lunch break at work, so only boring lab images. If I find time I will test it at home with more suitable pictures.

Now that darktable has also moved to an RGB pipeline, would it make sense to resurrect this idea?
I know we have filmic, but filmulator often produces well-retained contrast, saturated but not over-saturated colours, and natural-looking highlights with out-of-the-box settings; getting that look otherwise takes me a combination of several modules with filmic et al.


It very well could be worthwhile for users to have the option to use Filmulator in a full-featured editor as well as in its own simplified UI.

I wonder how much the old module would have to change… anyone in the darktable crew have any advice?

It would still have the same issue as before in that it has a whole-image ROI, thus slowing down the editor…


I’d still prefer having a slow tool that I can choose not to use over having it not available at all. The stand-alone Filmulator, on its own, has OK performance (especially if I take it into account that I only have to use one tool instead of multiple). But I do miss darktable’s tools there (e.g. denoising, arbitrary rotation etc.)


i think we mainly didn’t pursue this idea to integrate filmulator into dt because of performance issues. requiring the whole image is not something that helps matters in dt’s pipeline. that said, i have (dysfunctional) code that runs the complexity of the filmulator inner workings on gpu for the full image. iirc it took like 20ms to run on a full image. that was probably 16–24 megapixels, i forget. this means there is probably hope to make this very fast indeed if we spend a bit of work on it. i’m really motivated to look into this, i think there are also still very interesting effects to be simulated between silver halide grains and developer.


:drooling_face:
sounds like heaven!

Would that be for dt or vkdt?

It’s in vkdt. I haven’t had the time to sit down and grok the GLSL and such to bring it up to shape; so much to do in vanilla Filmulator.