Using Natron to solve Facial Motion Capture

Good evening… my first post here. My name is Kashif, and I am a 3D artist moving into motion capture. I have been looking for programs to help me solve facial capture data (multiple videos of my face with markers), and a friend of mine led me to Natron to try out. I love that it’s node based, which is a plus for me. I was also looking at NUKE (too expensive), After Effects (can’t solve more than one piece of footage), PFTrack (damn near perfect: I can load up to four videos and create a master document, but too expensive for an indie), and SynthEyes (good, but I hate the UI and it takes too much time to set up), and now I am looking at this.

My question is: would I be able to use Natron to solve facial capture data from four cameras, tracking each marker in the footage and exporting the result to my 3D app as merged 3D points? Think of something like the Vicon Cara or other high-end motion capture systems that handle facial animation.

https://www.youtube.com/watch?v=mAKveiQ--ys
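For reference, the “merged points” part of what I’m asking for boils down to multi-view triangulation: each camera contributes a 2D track per marker, and the solver intersects those rays into a single 3D point. Here is a minimal sketch of that math in Python with NumPy, assuming the 3×4 camera projection matrices already exist from a prior calibration (which is the hard part a dedicated mocap package does for you):

```python
import numpy as np

def triangulate(projections, points_2d):
    """Triangulate one marker from N >= 2 views via DLT.

    projections: list of 3x4 camera projection matrices (from calibration)
    points_2d:   list of (x, y) pixel positions of the same marker
    Returns the marker's 3D position as a length-3 array.
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view adds two linear constraints on the homogeneous point X:
        #   x * (P[2] @ X) = P[0] @ X
        #   y * (P[2] @ X) = P[1] @ X
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.vstack(rows)
    # Homogeneous least squares: the solution is the right singular
    # vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Run that per marker per frame and you get exactly the kind of merged point cloud I’d want to export to my 3D app.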


Thank you. And since it is open source, I am sure some amazing programmers could come up with something.

NOTE: The screenshots are from PFTrack 2017 PLE and After Effects.


To bump this topic: I was thinking of a node that could be added to Natron to make it possible to track all of the markers on my face. There is a video tutorial that showcases the mocap node in PFTrack and shows how to track markers on an object from three different cameras…

https://vimeo.com/225978068

This would be EXTREMELY useful for those who are doing facial performance capture and don’t want to shell out thousands for software…

If you want to do a 3D track, use Blender. It has a very good 3D tracker and solver, almost as good as PFTrack’s.
Natron does not have a 3D workspace at this point.
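To give a feel for how accessible Blender’s solved data is: once a clip is tracked and the camera is solved, every track that made it into the solve carries a 3D “bundle” you can read straight from the Python API. A minimal sketch (assumes one clip is already loaded and solved; the output path is just an example):

```python
import bpy

clip = bpy.data.movieclips[0]                # assumes a single loaded clip
assert clip.tracking.reconstruction.is_valid, "solve the camera first"

# Dump every solved marker position to a simple text file.
with open("/tmp/bundles.txt", "w") as f:
    for track in clip.tracking.tracks:
        if track.has_bundle:                 # only tracks used by the solve
            x, y, z = track.bundle           # 3D position in solve space
            f.write(f"{track.name} {x:.6f} {y:.6f} {z:.6f}\n")
```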

I’d like to add that Nuke and Natron are VFX compositing software and are not primarily meant for tracking…
The 3D tracking in Nuke is there to aid 3D compositing, set extensions, etc.
There are a few plugins made for Nuke (https://www.keentools.io/) that enable facial motion capture, but I’ve not used them.
For something like this, I’d really suggest software that was made for this sole purpose, like PFTrack, SynthEyes, etc. If you can’t afford any of these, Blender, as I already mentioned, is the best option you can use.

I’d seen a video of facial motion tracking in Blender before but had forgotten about it, and I just found it in one of my playlists:

Although this is great… it doesn’t support multiple angles like a multicam head rig (which most studios use for facial performance capture), and it doesn’t take the entire face into account.

In Blender, mocap is a split workflow: 3D tracking, solving, and then linking the tracking data to armatures or other objects. Blender isn’t built primarily for mocap; it’s just a great FOSS option if you cannot afford any commercial software or plugins.
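For the “linking” step, the simplest route is the Follow Track constraint: attach an empty to a solved track, then let a bone copy the empty. A rough sketch, again assuming a clip has already been tracked and solved:

```python
import bpy

clip = bpy.data.movieclips[0]              # assumes a solved clip

# Create an empty and pin it to the first track's solved position.
empty = bpy.data.objects.new("marker_empty", None)
bpy.context.collection.objects.link(empty)

con = empty.constraints.new('FOLLOW_TRACK')
con.clip = clip
con.track = clip.tracking.tracks[0].name   # the constraint refers to tracks by name
con.use_3d_position = True                 # snap to the 3D bundle, not the 2D marker
```

An armature bone can then follow the empty with a Copy Location constraint, which is roughly how the tracking data ends up driving a rig.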
Also, Natron doesn’t have a true 3D workspace yet, so even if someone managed to make such a node, whether as a PyPlug (Natron’s equivalent of Nuke’s Gizmos) or as a full plugin like the KeenTools ones I mentioned in a previous reply, it wouldn’t work.
And with development at a halt, there is no way we’re getting a true 3D workspace anytime soon.
So if you have a very specific requirement, e.g. multiple-angle support, it’s better to go for software that you know can fulfill it.
Blender can utilize multiple camera angles for tracking and facial mocap, but it’s a very twisted workflow. There might be some mocap plugins out there for Blender, but I’ve not heard of any prominent ones.
So the best you can do is look for other free software that does what you need, or get specialized commercial software.

This is the “twisted workflow” for multicam tracking in Blender I was talking about: