Optical flow computation has made a lot of progress recently, and gives good estimates even in challenging situations.
I wouldn’t make an OpenFX plugin for that, because every year there’s a newer/better method for solving that problem.
I think it would be better to work on offline processing of the input sequences, producing EXRs that contain the forward (to the next image) and backward (to the previous image) optical flow.
The state of the art is currently RAFT, but it’s limited in resolution (or needs a huge GPU). More recently, RAFT-NCUP was proposed to overcome these limitations. Both have code available.
This solves the question “how to compute the optical flow”.
But then, what would you use OF for? There are multiple possible uses for it.
Once you have the forward and backward OF, it’s possible to do retiming, for example. I think Natron has almost everything needed, but it would probably be more convenient to make a plugin that does the backward warp to the previous and next frames in one pass. What would a retiming OFX plugin look like?
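To make the retiming idea concrete, here’s a rough numpy sketch of what such a plugin would do internally: backward-warp both neighbouring frames toward the target time and cross-fade. This ignores occlusions entirely and assumes the flows are given in pixels (`fwd_flow` from the previous frame to the next, `bwd_flow` the other way); all names are made up for illustration, not an actual plugin API.

```python
import numpy as np

def backward_warp(image, flow):
    """Sample `image` at x + flow(x) (flow is H x W x 2, in pixels),
    with bilinear interpolation and edge clamping. Grayscale images."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, w - 1)
    y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bot = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def retime(prev_img, next_img, fwd_flow, bwd_flow, t):
    """Synthesize a frame at fractional time t in (0, 1) between prev (t=0)
    and next (t=1): pull each neighbour toward t and cross-fade.
    Crude approximation -- the flows are not re-centred on the target frame."""
    from_prev = backward_warp(prev_img, -t * fwd_flow)
    from_next = backward_warp(next_img, -(1.0 - t) * bwd_flow)
    return (1.0 - t) * from_prev + t * from_next
```

For a pattern translating uniformly by 2 pixels per frame, `retime(..., t=0.5)` reproduces the half-way frame away from the image borders.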
What other uses do you see for OF?
Recently, Foundry added the “smart vector” toolkit to Nuke (11 or 12, I don’t remember), and it’s basically just image warping based on accumulated OF.
It’s quite useful… and their only truly “new” feature. (Maybe Natron shouldn’t totally overlook the silly “feature war” if it wants to gain popularity.)
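For what it’s worth, the accumulation itself is just flow concatenation: the long-range vector from frame 0 to frame 2 is the first flow plus the second flow resampled along the first. A toy numpy sketch of the principle (nearest-neighbour resampling, function names are mine):

```python
import numpy as np

def concat_flows(f01, f12):
    """Compose two per-frame flows into one long-range flow (frame 0 -> 2):
    f02(x) = f01(x) + f12(x + f01(x)).
    f12 is resampled with nearest-neighbour lookup, just to show the idea."""
    h, w = f01.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x = np.clip(np.rint(xs + f01[..., 0]).astype(int), 0, w - 1)
    y = np.clip(np.rint(ys + f01[..., 1]).astype(int), 0, h - 1)
    return f01 + f12[y, x]
```

Chaining this over a shot gives you frame-to-reference vectors, which is what the warping then consumes; errors accumulate too, which is why these tools degrade over long ranges.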
Also, the OF-based retiming you mentioned is really mandatory for serious VFX work, and a mainstream use case too.
OF is also used to add motion blur to live footage shot with a high shutter speed, or to 3D renders that lack motion vectors.
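A naive version of that is just averaging samples taken along each pixel’s motion vector over the shutter interval. A hedged numpy sketch (nearest-neighbour sampling, no occlusion handling, grayscale; all names are illustrative):

```python
import numpy as np

def flow_motion_blur(image, flow, samples=8, shutter=0.5):
    """Crude flow-based motion blur: average `samples` nearest-neighbour
    taps along each pixel's motion vector, over a centred shutter interval
    (shutter=0.5 approximates a 180-degree shutter)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros(image.shape, dtype=np.float64)
    for i in range(samples):
        t = shutter * (i / (samples - 1) - 0.5)  # spread taps across the shutter
        x = np.clip(np.rint(xs + t * flow[..., 0]).astype(int), 0, w - 1)
        y = np.clip(np.rint(ys + t * flow[..., 1]).astype(int), 0, h - 1)
        acc += image[y, x]
    return acc / samples
```

With zero flow the image is returned unchanged; real implementations gather rather than scatter carefully and handle occlusion edges, which is where the hard work is.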
I agree that rendering externally is an option (hence my previous question about launching Python scripts from within Natron… on Windows).
To make this usable for many users and help Natron gain popularity, it should be possible to hide the dirty in/out/in process in a PyPlug, using a default temporary render folder for example. Preferably it would render frame by frame, to allow tweaking settings while keeping the overall Natron workflow intact.
I know this is very much an edge or niche case, but I really like using, or rather misusing, optical flow for glitchy warping effects on images. When you blend between two images really slowly you get a lot of nice artefacts. Here’s a compilation I made using ffmpeg’s interpolation methods
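For reference, the ffmpeg side of this is the `minterpolate` filter; something along these lines (the exact parameters here are illustrative, see the ffmpeg filter docs for the full option list):

```shell
# Slow the clip down 10x, then ask minterpolate to invent the missing frames
# with motion-compensated interpolation (mi_mode=mci) -- the estimation
# mismatches are exactly where the glitchy warping comes from.
ffmpeg -i input.mp4 \
       -vf "setpts=10*PTS,minterpolate=fps=25:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1" \
       output.mp4
```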
And a blog post explaining how it works: Motion Interpolation for Glitch Aesthetics using FFmpeg part 0 | Antonio Roberts
In my workflow the benefit of having it in one piece of software would be more about being able to preview it as part of the composition. It’s not a massive inconvenience to process offline/in other software, though.
Retiming & motion blur are the big ones, but there have been some interesting uses of motion vectors as a component of neural-network upscalers; I know Nvidia’s DLSS uses them for upscaling, and I think there are a few others out there.
At my last VFX job, one of the compositors had a cool tool that would take motion vectors and select a pixel to turn into a tracking point. That was pretty neat, but it was also extremely heavy compared to regular tracking. Useful for some things in a pinch, though.
May I add that most of us compositors are used to “old basic OF”, thanks to Foundry’s O’Flow and Adobe Ae’s interpolation.
Aside from retiming and motion blur, here is a cool tutorial demonstrating how optical flow nodes can be used to create clean plates.
Old topic, but here’s another example that uses optical flow to create new frames from a set of known good frames and replace unusable ones in the sequence!
Rebuilding bad frames using optical flow.
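Since the neighbouring frames are good, this is essentially a t=0.5 interpolation between them, plus some care about where the flows disagree. A rough numpy sketch (grayscale, nearest-neighbour sampling, made-up names; a simple forward/backward consistency check falls back to one neighbour where occlusion is likely):

```python
import numpy as np

def rebuild_frame(prev_img, next_img, fwd, bwd):
    """Replace a bad frame n by pulling half-way from frames n-1 and n+1.
    `fwd` is the flow n-1 -> n+1, `bwd` the flow n+1 -> n-1 (pixels).
    Where fwd(x) + bwd(x + fwd(x)) is far from zero the flows are
    inconsistent (likely occlusion), so only the previous frame is used."""
    h, w = prev_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def pull(img, flow, t):
        # nearest-neighbour sample of img at x + t * flow(x)
        x = np.clip(np.rint(xs + t * flow[..., 0]).astype(int), 0, w - 1)
        y = np.clip(np.rint(ys + t * flow[..., 1]).astype(int), 0, h - 1)
        return img[y, x]

    from_prev = pull(prev_img, fwd, -0.5)
    from_next = pull(next_img, bwd, -0.5)

    # forward/backward consistency: resample bwd along fwd and compare
    bx = np.clip(np.rint(xs + fwd[..., 0]).astype(int), 0, w - 1)
    by = np.clip(np.rint(ys + fwd[..., 1]).astype(int), 0, h - 1)
    err = fwd + bwd[by, bx]
    consistent = np.hypot(err[..., 0], err[..., 1]) < 1.0

    return np.where(consistent, 0.5 * (from_prev + from_next), from_prev)
```

For a clean uniform translation both pulls agree and the result is the true missing frame away from the borders; on real footage the consistency mask is what keeps smeared occlusion areas from doubling up.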