I see where they're coming from, but it looks like the main cost is the bandlimited interpolation needed when sampling the lower-resolution image, which might actually be more expensive than just running your IIR filter on the full-size image. I'm not experienced enough to know that with any certainty, though. Still, thanks for describing the algorithm.
[quote=“CarVac, post:20, topic:1033”]
However, in the end we switched to infinite impulse response two pass filters which are even faster and more accurate.
[/quote]

Unless you happen to be running on a GPU.
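To make the "two-pass IIR" idea concrete: the trick is that a first-order recursive filter run forward and then backward over each row approximates a large-radius blur in O(n) per pass, independent of the radius. This is only a minimal 1D sketch of the technique under discussion; the coefficient `alpha` and the structure are illustrative, not Filmulator's actual filter.

```python
# Minimal 1D sketch of a two-pass IIR (recursive) blur.
# `alpha` is an illustrative smoothing coefficient, not Filmulator's
# actual parameterization.

def iir_blur_1d(signal, alpha):
    """Forward + backward first-order IIR pass: O(n) regardless of blur radius."""
    out = list(signal)
    # Forward pass: causal exponential smoothing.
    for i in range(1, len(out)):
        out[i] = alpha * out[i] + (1.0 - alpha) * out[i - 1]
    # Backward pass: anti-causal, which symmetrizes the effective kernel.
    for i in range(len(out) - 2, -1, -1):
        out[i] = alpha * out[i] + (1.0 - alpha) * out[i + 1]
    return out

# An impulse spreads into a (nearly) symmetric exponential kernel:
blurred = iir_blur_1d([0, 0, 0, 1, 0, 0, 0], alpha=0.5)
```

The cost per pixel is constant no matter how wide the blur, which is why this beats FIR convolution on the CPU; on a GPU, the serial dependency along each row is exactly what makes it awkward, as noted above.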
Currently things are almost completely working in darktable, but I don’t have processing domains (which part of the image to request from the previous stage in the pipeline) set up properly (my initial attempt to reverse-engineer it failed), and I want to have an in-depth discussion with the devs about the best place in the pipeline for it (which apparently has to be set in stone).
The Filmulation parameters are identical between the two, and I tried my best to match the colors and tones using highlight reconstruction, WB, and curves in darktable. On the other hand, Filmulator is running in the larger Rec.2020 working space.
This must have to do with the details of highlight reconstruction, and the fact that darktable uses lab for much of the pipeline whereas Filmulator always uses RGB.
and I just found a Filmulation-induced crash in darktable. Time to investigate.
edit: my investigation has found that it’s not in fact Filmulation-induced, it’s just a bug, which I then reported, along with a fix from qogniw on the IRC.
Both have drama all the way up, plus overdrive, but default film size (which accentuates the hard shadow on the street).
The out of the box curves are very different. I didn’t touch Filmulator’s curve (the shadows or highlights slider) but in darktable I had to make it concave (equivalent to lowering the shadows slider in filmulator).
Filmulator brings out more of the blue color in the shadow, but the ‘velvia’ tool in darktable can equalize it somewhat.
For Filmulator, that’s the completely default rendering. I only turned on CA Correction. I adjusted darktable to match the brightness (adjusting exposure before Filmulation and the tone curve afterwards).
With the ‘velvia’ tool in darktable, I was able to match the strength of Filmulator’s colors, but it affected neutral colors (the rock in the middle and the clouds). Vibrance couldn’t do it (in fact, that tool is very, very subtle).
I’m kinda glad they turned out different. I like that each has its own look. Perhaps the less bombastic darktable rendering is a better starting point for heavier manipulations, for example, while Filmulator is pretty much what I want by default.
However, I think integration into RawTherapee might match Filmulator more closely, since Filmulator uses the film-like curve type algorithm from RT in the post-Filmulator processing.
Guys, thanks so much for this work. I think it's exciting to see, and I really like the out-of-the-box results of Filmulator. The fieldstone struck me as a great example of why this technique is so interesting: it seems you would have to mask the stone before applying the velvia tool to get closer with darktable alone. I understand that it emulates local developer exhaustion. Do the controls let you push into fully exhausted developer? And what about black and white?
The control “Drama” lets you adjust the rate of replenishment from a reservoir (mimicking the part of the developer liquid that isn’t in contact with the film). If you reduce the replenishment, the developer that’s actually reacting gets depleted more. If you have overdrive off, it’ll completely mix the developer once midway through, replenishing it fully, but if you have overdrive on, it doesn’t mix the developer.
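The reservoir/replenishment behaviour described above can be sketched as a toy simulation. Everything here (the names, the constants, the depletion and remix rules) is illustrative, chosen only to show the qualitative relationships, not Filmulator's internals.

```python
# Toy sketch of the reservoir/replenishment idea described above.
# All names and constants are illustrative, not Filmulator's actual model.

def develop(exposure, replenish_rate, overdrive, steps=10):
    """Deplete active developer each step; refill from a reservoir.

    replenish_rate maps (inversely) to the 'Drama' control: lower
    replenishment -> more depletion -> stronger local exhaustion.
    """
    active = 1.0     # developer in contact with the film
    reservoir = 1.0  # developer not in contact with the film
    density = 0.0
    for step in range(steps):
        # Development consumes active developer in proportion to exposure.
        consumed = min(active, 0.1 * exposure * active)
        density += consumed
        active -= consumed
        # Replenish the active developer from the reservoir.
        refill = min(reservoir, replenish_rate * (1.0 - active))
        active += refill
        reservoir -= refill
        # Without overdrive, completely remix the developer once midway.
        if not overdrive and step == steps // 2:
            total = active + reservoir
            active = reservoir = total / 2.0
    return density

# Higher replenishment (lower 'Drama') develops more density overall;
# the midway remix (overdrive off) partially resets the depletion.
```

In this toy model, cutting replenishment starves the active developer and caps the density it can build, and the midway remix gives it a second wind, which matches the behaviour described for the Drama and overdrive controls.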
I just added a working color space setting, that lets you downconvert to Rec.709 (like sRGB) instead of Rec.2020. This helps control the brightness of highly saturated channels when outputting in sRGB.
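The downconversion itself is a linear change of primaries. As a sketch, the standard BT.2087 matrix for converting linear-light Rec.2020 to Rec.709 looks like this; note that saturated Rec.2020 colors fall outside the Rec.709 gamut and come out with negative components, so a real implementation still needs clipping or gamut mapping on top (omitted here):

```python
# Linear-light Rec.2020 -> Rec.709 primaries conversion (BT.2087 matrix).
# Gamut handling (clipping negative components, etc.) is left out.

REC2020_TO_REC709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def rec2020_to_rec709(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in REC2020_TO_REC709]

# Neutral colors are preserved (each row sums to ~1):
white = rec2020_to_rec709([1.0, 1.0, 1.0])
# A saturated Rec.2020 green falls outside Rec.709 -> negative components:
green = rec2020_to_rec709([0.0, 1.0, 0.0])
```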
I built the filmulator branch of @CarVac's github repository of darktable today. Aside from the fact that I don't know what result to expect, and that I don't have reasonable test pictures since I built and tested it in my lunch break at work, it seems to work pretty well (Ubuntu 15.04). I've noticed just two (very) little issues.

First, the brightness range seems to be clamped to values up to about 3/4 of the histogram (this changes with rolloff boundary but never reaches 1). I had to use the curves to correct the white point, since the clamping seems to be introduced by filmulator and the exposure controls no longer change the overall exposure. I guess that behaviour is OK for a tonemapping algorithm, but maybe the filmulator module should get an exposure/whitepoint correction slider?

The second issue is that the slider loupe (right click on sliders) works only for the film size, not for the other sliders.
The brightness range is limited to about 3/4 because, as far as I know, after Filmulation there's no function for pulling highlights that lie outside the histogram back down; if you want it to default to something brighter, with a slider to darken, I'll add that.
Rolloff boundary affects the rolloff of highlights before the film simulation; so does the ‘exposure’ tool.
When using it, I would disable base curve, adjust exposure to manage the histogram nicely, and then enable Filmulation, adjust its parameters, and then move on to the tone curve. Adjust the white point, and then adjust the shadows. It’s a similar flow to what I’d do in Filmulator, moving in order of the pipeline, but the results (as shown before) end up very different.
RE: the slider loupes, I have no idea why that might be. I’ll investigate. To be fair, though, none of the parameters require any precision at all.
Thanks for the explanations. Makes perfect sense with the brightness limit. I just wonder what would be the best way to recover brightness after filmulation, since there are so many possibilities, not only the curves but levels etc.
With the slider loupes I had the same impression: they aren't really needed for filmulator. I wonder if it's possible to use the slider widget without the loupe; what would then be missing is a way to type in a value directly. However, on closer inspection it's not that the slider loupes don't work at all, but rather that the mapping between the loupe's range and the overall slider range seems to be off.
Anyway, thanks for bringing filmulation to darktable! It’s a very interesting approach.
@CarVac Meanwhile I’ve quite significantly progressed in the inclusion of Filmulator into PhotoFlow.
However, before I can release an experimental version I need to clarify a couple of details:
- How should I modify the parameters for a scaled-down version of the input image?
- What is the minimum input region size needed to correctly process an output region of WxH+X+Y (where W and H are the region dimensions and X,Y is the offset)?
By the way, the Filmulator code is included in an experimental PhotoFlow version that allows unbounded corrections of the exposure, so the brightness limitations mentioned above for darktable are not present…
The algorithm is scalable: you can resize the input image (keeping the aspect ratio, of course) and the output will be the same.
However, whatever resolution you provide, it needs the entire image. Otherwise the brightness will change dramatically depending on where you look: if the viewport covers only a dark area, the result will be lighter than expected, and if it covers only a light area, it will be darker than expected.
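The viewport problem can be shown with a toy example: any operator normalized by a global statistic of the frame gives a different answer on a crop. The tone-map below is a made-up stand-in (normalizing by the image mean), not Filmulator's actual math, but it exhibits exactly the artifact described.

```python
# Why Filmulation can't run on a viewport crop: the result depends on a
# global statistic of the whole frame. Illustrated with a toy tone-map
# normalized by the image mean (NOT Filmulator's actual math).

def toy_tonemap(pixels):
    mean = sum(pixels) / len(pixels)
    # Every output pixel depends on the global mean.
    return [p / (p + mean) for p in pixels]

image = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]  # dark left half, bright right half
full = toy_tonemap(image)
crop = toy_tonemap(image[:3])  # viewport over the dark area only

# The same source pixel renders brighter when only the dark region is
# in view, which is exactly the brightness shift described above.
```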
Regarding brightness: tools coming after it in the pipeline must realize that it’s not linear anymore. You can still try to apply the sRGB gamma at the end, but it won’t match the Filmulator output, which has a different curve instead of a standard gamma.
One approach to better match Filmulator would be to follow the filmulation with black point and white point controls, then apply that curve (a quadratic Bézier from (0,0) through (0.2,1) to (1,1)), then undo the sRGB gamma and treat the result as roughly linear.
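The curve-then-ungamma step above can be sketched directly. For the quadratic Bézier through (0,0), (0.2,1), (1,1) we have x(t) = 0.4t + 0.6t² and y(t) = 2t − t², so we solve the quadratic for t and evaluate y; the sRGB linearization is the standard piecewise formula. The function names here are my own, only the control points and the sRGB math come from the post.

```python
import math

# Sketch of the "re-linearize after Filmulation" idea: apply the
# quadratic Bezier (0,0) -> (0.2,1) -> (1,1), then undo the sRGB
# transfer curve, and treat the result as roughly linear.

def bezier_curve(x):
    """y for the quadratic Bezier through (0,0), (0.2,1), (1,1).

    x(t) = 0.4t + 0.6t^2  ->  solve for t, then y(t) = 2t - t^2.
    """
    t = (-0.4 + math.sqrt(0.16 + 2.4 * x)) / 1.2
    return 2.0 * t - t * t

def srgb_to_linear(v):
    """Inverse of the standard sRGB transfer function."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def approximate_linear(v):
    """Bezier curve followed by sRGB linearization, as described above."""
    return srgb_to_linear(bezier_curve(v))
```

The Bézier endpoints pin 0 and 1 in place while pulling the midtones up strongly, which is why the subsequent inverse-gamma brings the values back to something a downstream linear-light tool can reasonably work with.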
Is the PhotoFlow pipeline RGB? Lab? If RGB, what primaries, or is that selectable?
Actually, it can be whatever you want, depending on the chosen output colorspace of the RAW processing step. By default, I’m using linear Rec.2020, but the user is free to select both the primaries (currently I have built-in primaries for sRGB, Rec.2020 and ACES) and the tone response curve independently (so that one can for example use a Rec.2020 colorspace with the L* encoding).
One can also directly output Lab values from the RAW processor, but that’s not useful here.
I need to think how I can efficiently incorporate a filter that requires the whole image data into memory, because this is not straightforward in the current processing scheme.