It’s been an embarrassingly long time since my last update, but I finally have a new Filmulator feature coming down the line: I’m improving the responsiveness of editing.
For the longest time, Filmulator has rendered only the full-resolution image. While this sometimes leads to extremely slow response to sliders (especially ones early in the pipeline, and on large (>24 MP) images), that never bothered me as the creator: the editing parameters are simple and I know exactly how all of the sliders behave, so I can pick appropriate settings even without visual feedback.
In exchange it gives you the ability to zoom and pan around the image freely with no waiting or lag.
However, the waiting on the sliders makes learning the program a challenge for newbies, so as of a few minutes ago, I have implemented dual pipelines: a low resolution quick preview that runs first, followed by the full resolution.
This has the effect of dramatically improving how quickly the image responds to early sliders like white balance, at the expense of getting a temporarily blurry image until the full resolution image completes.
You can get it from the quick_pipe git branch or as an AppImage here for the next week or two (v8). It should be enabled by default, but in any case you can turn it on and off via the new setting in the Settings tab, followed by restarting the program.
I’ll be polishing it some more in the coming weeks, such as letting you select your desired preview resolution (for example, you might want to be able to set it to approximately your native screen resolution) and maybe more. So try it out and give me feedback!
Wow, I had to close Firefox to really test that, because with caching enabled, Filmulator seemed to take up to 12.5 gigabytes of RAM (about half that with the caching in the full-res pipeline disabled). But even at 100 MP it's nice and responsive, aside from the initial raw loading and demosaic.
So during development, at one point I simply swapped in the quick pipeline in place of the full-size pipeline, and I discovered that it can deliver nearly 30 updates per second (or thereabouts), at least at this image size.
However, if you tried this branch, you would find that it’s not actually updating anywhere near that quickly.
Why is this? When the quick pipeline simply spit the image onto the screen and waited for the next input, it could start a new processing cycle immediately. As actually implemented, though, it has to wait for the old processing run to finish an entire step before it decides to quit and start over.
So today, I spent a few hours refactoring the validity checking so that it can check far more frequently: as often as once per outer loop. Hopefully, anyway; I'll back off a little if it impacts performance.
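The idea of checking validity once per outer loop can be sketched roughly as follows. This is a minimal illustration, not Filmulator's actual code; the flag name and the per-pixel work are hypothetical stand-ins:

```cpp
#include <atomic>
#include <cassert>
#include <vector>

// Hypothetical flag set by the UI thread whenever a slider changes.
std::atomic<bool> paramsChanged{false};

// Returns false if processing was aborted partway through.
bool processRows(std::vector<float> &image, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        // Check once per row: cheap relative to the row's work, but
        // frequent enough to respond to new slider input quickly.
        if (paramsChanged.load(std::memory_order_relaxed)) {
            return false; // quit and let the caller restart the pipeline
        }
        for (int x = 0; x < width; ++x) {
            image[y * width + x] *= 1.1f; // placeholder per-pixel work
        }
    }
    return true;
}
```

The trade-off is granularity versus overhead: checking per outer iteration (per row here) costs almost nothing, while checking per pixel would be wasteful.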
I am also updating the appimage link at the top of the thread. You can go here too if you want.
I just fixed a bug in cropping: with cropping enabled, you could drag the handles before the full-size image was ready, and if you were adjusting the crop at the moment the full-size image completed, your crop would get all screwed up.
So I disabled the drag handles until the full size image is ready.
(When the appimage build finishes, I’ll update the links.)
When you selected a new image, the small image would process first, showing in-progress histograms from the downsampled image, and then the large image would follow, updating the histograms with smoother full-image ones.
However, upon moving a slider, the histograms would go from the low-res to the high-res, and then mysteriously back to the low-res ones.
Something was calling the pipeline again, and somehow the image wasn't being displayed but the histograms were updating!
I redid the state machine I use for loading images and made it a bit cleaner, but that didn't fix it. Then I found that the sliders were emitting an extra update signal upon releasing the mouse, which told the program to load a new image without actually changing anything. That broke a lot of assumptions my state machine made and caused the strange behavior.
Luckily that was a one line fix (deleting a line) and now it’s working nicely, and the code is cleaner to boot. Woo!
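The general defense against this class of bug is to only emit an update when the value actually changes. Here is a minimal sketch; the class and callback are hypothetical stand-ins, not Filmulator's actual slider code:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Hypothetical parameter holder that suppresses no-op updates, so a
// mouse release that re-sends the same value doesn't restart the pipeline.
class SliderParam {
public:
    void setOnChanged(std::function<void(double)> cb) { onChanged = std::move(cb); }

    void setValue(double v)
    {
        if (v == value) {
            return; // no-op: don't notify, don't trigger a reprocess
        }
        value = v;
        if (onChanged) {
            onChanged(value);
        }
    }

    double get() const { return value; }

private:
    double value = 0.0;
    std::function<void(double)> onChanged;
};
```

With a guard like this, a redundant signal from the UI layer becomes harmless instead of confusing the loading state machine.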
It’s still relatively nuts… it seems that the downsampling significantly waters down the smallest highlights, which changes the shape of the histogram. That’ll be fixed by letting you use larger quick pipes if you want; the resolution will be selectable from the settings.
Also, the histogram is noisy for the quick pipe. Currently it samples every fifth pixel in both directions for speed, but I might sample more often (up to every pixel) for small image sizes.
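Strided histogram sampling with an image-size-dependent stride might look something like this. The function names and the exact cutoff are illustrative assumptions, not Filmulator's actual code:

```cpp
#include <array>
#include <cassert>
#include <vector>

// Sample a single-channel 8-bit image every `stride` pixels in both
// directions. A stride of 5 visits only 1 in 25 pixels, which is fast
// but noisy; a stride of 1 visits every pixel.
std::array<int, 256> computeHistogram(const std::vector<unsigned char> &gray,
                                      int width, int height, int stride)
{
    std::array<int, 256> hist{};
    for (int y = 0; y < height; y += stride) {
        for (int x = 0; x < width; x += stride) {
            ++hist[gray[y * width + x]];
        }
    }
    return hist;
}

// Possible adaptive rule (the 1 MP threshold is a guess): sample every
// pixel for small previews, every fifth pixel for large images.
int chooseStride(int width, int height)
{
    return (static_cast<long long>(width) * height < 1000000) ? 1 : 5;
}
```

The point of the adaptive stride is that for a small quick-pipe preview, visiting every pixel is still cheap and removes most of the sampling noise.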
I made smooth editor image default to off, but I’m currently fighting a gremlin in the blur algorithm so it may be a bit before I release a new build.
What would you like the orange progress bar to do? Fade out? Reduce in saturation so that it’s not so distracting?
Middle mouse button clicks: so basically when you middle click on another image, it will immediately change all the sliders? Hmm… My main concern there is discoverability; that’s easy to do accidentally and have no clue what happened. On the other hand, I do acknowledge that the current copy and paste UI isn’t great.
I’ll have to look into adding ‘protect skin tones’ to vibrance; I basically never ever use vibrance (in favor of Drama which boosts saturation in its own way) so I will need to see how it works.
Right now I’m not sure if it’s at 95%, or 98%, or done, so whatever would say “I’m done” could desaturate, turn green, or just remove itself from the GUI.
Yes, the workflow goes like: a. open a new image, b. realize that it was shot with almost exactly the same settings as a previous one, c. middle-click the previous one (or one of the already-seen ones), undoing if it turns out wrong, d. adjust some sliders, e. done.
But still keep the old copy/paste behavior.
So there’s been radio silence for a while, and that’s because I was investigating a new issue that the Quick Pipeline has exposed.
At very low film areas, the blur radii used for diffusing developer in Filmulator become extremely large, hundreds of pixels wide.
It turns out that for large images, at very low film areas, the algorithm we use to compute blurs (van Vliet) becomes extremely inaccurate, with strong ringing artifacts beginning when blurs exceed 200 pixels and becoming severe at around 500 and above.
The solution is to downsample the layer that needs blurring until the blur diameter is something reasonable (I’m thinking 20 pixels or so), then bilinearly upsample back to the original size.
First things first: implementing bilinear downscaling, blurring, then upscaling does work. But it’s slow.
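The downsample/blur/upsample workaround can be sketched as follows. The function names and the 20-pixel target are illustrative assumptions, not Filmulator's actual implementation, and the blur step itself is omitted since any gaussian works once the radius is small:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Choose a scale factor so the effective blur diameter lands near a
// target (~20 px), where the recursive gaussian stays accurate.
double chooseScale(double blurDiameter, double targetDiameter = 20.0)
{
    if (blurDiameter <= targetDiameter) {
        return 1.0; // small blurs need no downsampling
    }
    return targetDiameter / blurDiameter;
}

// Bilinear sample from a single-channel image at (x, y) in pixel coords.
float bilinearSample(const std::vector<float> &img, int w, int h,
                     double x, double y)
{
    const int x0 = std::clamp(static_cast<int>(std::floor(x)), 0, w - 1);
    const int y0 = std::clamp(static_cast<int>(std::floor(y)), 0, h - 1);
    const int x1 = std::min(x0 + 1, w - 1);
    const int y1 = std::min(y0 + 1, h - 1);
    const double fx = x - x0, fy = y - y0;
    const float top = img[y0 * w + x0] * (1 - fx) + img[y0 * w + x1] * fx;
    const float bot = img[y1 * w + x0] * (1 - fx) + img[y1 * w + x1] * fx;
    return static_cast<float>(top * (1 - fy) + bot * fy);
}

// Resize a single-channel image with bilinear interpolation; used for
// both the downscale before blurring and the upscale afterwards.
std::vector<float> bilinearResize(const std::vector<float> &img, int w, int h,
                                  int newW, int newH)
{
    std::vector<float> out(static_cast<std::size_t>(newW) * newH);
    for (int y = 0; y < newH; ++y) {
        for (int x = 0; x < newW; ++x) {
            // Map output pixel centers back into source coordinates.
            const double sx = (x + 0.5) * w / newW - 0.5;
            const double sy = (y + 0.5) * h / newH - 0.5;
            out[y * newW + x] = bilinearSample(img, w, h, sx, sy);
        }
    }
    return out;
}
```

Since the blurred developer layer is inherently low-frequency, the bilinear upsample loses essentially nothing; the cost is the two extra resampling passes, which is why this is slower than blurring in place.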
So then I decided we could adjust the Filmulation algorithm to perform the bilinear scaling in the develop stage and always store the developer at low resolution. But that isn’t feasible to do performantly with OpenMP, so to implement it efficiently we’d need to redo the main algorithm in Halide. I decided to put this off until later.
Next up: today I added two new aspects to this feature.

First, stealing the demosaiced image: it now demosaics only once for both pipelines and shares the result. This makes selecting a new image far less patience-testingly slow than before. A dramatic improvement, if I say so myself.

Second, you can now set the desired resolution of the preview. In previous builds the preview only rendered at thumbnail size, fitting into a 600x600 window; now you can set it anywhere from 100x100 up to 8000x8000. (If the preview size is bigger than the actual image, it’ll still process twice at the original resolution, though.)
These changes were made surprisingly easily, so I pushed the branch, and… the build failed.
It turns out LibRaw has discontinued the demosaic packs. That means no more Auto CA Correct and no more Amaze demosaicing.
So… now it’s time to implement auto CA correction, demosaicing, and highlight recovery inside Filmulator instead of foisting them off on LibRaw.
I basically need to reconstruct a large part of the pipeline, which means more than just CA correction and Amaze; I also need to handle:

- X-Trans
- Highlight reconstruction
- Going from the camera color space to the working color space, including white balance (Filmulator currently has LibRaw output the image at the camera-set white balance and then re-adjusts it, losing a bit of highlight dynamic range in the process)

I’m not sure I’ll be so concerned with handling things like pixel shift, but maybe…
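For the color space step, the usual shape is per-channel white balance multipliers in camera space followed by a 3x3 camera-to-working matrix. This is a generic sketch with hypothetical names, not Filmulator's planned code:

```cpp
#include <array>
#include <cassert>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Apply white balance as per-channel multipliers in camera space, then
// a 3x3 camera-to-working color matrix. Doing WB before the matrix (and
// before any clipping) is what preserves highlight dynamic range,
// compared to re-adjusting an already white-balanced, clipped output.
std::array<float, 3> cameraToWorking(const std::array<float, 3> &camRGB,
                                     const std::array<float, 3> &wbMul,
                                     const Mat3 &camToWork)
{
    // 1. White balance in camera space, no clipping yet.
    std::array<float, 3> balanced;
    for (int c = 0; c < 3; ++c) {
        balanced[c] = camRGB[c] * wbMul[c];
    }
    // 2. Transform into the working color space.
    std::array<float, 3> out{};
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) {
            out[r] += camToWork[r][c] * balanced[c];
        }
    }
    return out;
}
```

Per-pixel this is just nine multiply-adds plus three multipliers, so pulling it in-house shouldn't cost much; the hard part is deriving good matrices per camera.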
Where in the RawTherapee code can I find sequence of pipeline steps for reference?