Automatic rotoscoping

Hello,

I’d like to do automatic rotoscoping on a video, in order to separate the subject from the background (with Blender, Natron, or any open-source tool that runs on Linux). Ideally, I’d like to just roughly paint the edges and let the software find the precise borders of the object. I’m not interested in manually specifying each point of a curve, and I can’t use a green background.

Is it possible?

Thanks!

No, not in Natron.

Depending on what your background actually is, there might be other ways to achieve your goal.
They are not automatic in the way you describe though.

(The more complex your background is, the more detail it has, and the more similar it is to the foreground, the more complicated things will be. That is why greenscreens are used: one colour, even brightness across the frame, and no features that can be ‘confused’ with the foreground.)

You can create a mask using OpenCV, for example:

Thank you for your answers.

@PhotoPhysicsGuy Could you give more details? I’m fine with semi-automatic methods; I don’t mind helping the program a bit, as long as I don’t spend 5 minutes per frame.

@bazza I tried it, but 1) it takes ages to compute a single frame, and 2) after 5 minutes it either just stops with a message “Killed”, without any reason, or it crashes my computer. Even if it had worked, I’m afraid it’s way too resource-costly: I would need at least half a year just to process a 25-minute video.

Test with this:

./rm-bg file.png

It is not that fast, but it takes fewer resources.

So, I don’t know your footage or whether this could work and actually save you some time. But…

You could always do a rough roto and feed it into a luma keyer. Luma keying means that you make a mask from a difference in brightness. That would require your background to be darker (or brighter) than your foreground. For a uniform black (or white) background this would work even without the rough roto.
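As a rough illustration (not what Natron’s keyer actually does internally), a luma key is little more than a soft threshold on brightness, gated by the rough roto; the weights and thresholds below are arbitrary example values:

```python
import numpy as np

# Synthetic frame: dark background (value 0.1) with a bright subject (0.9).
frame = np.full((64, 64, 3), 0.1, dtype=np.float32)
frame[16:48, 16:48] = 0.9

# Luma from Rec.709 weights (an assumption; match your footage's colourspace).
luma = frame @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

# Soft threshold: below `low` -> 0, above `high` -> 1, linear ramp in between.
low, high = 0.3, 0.7
matte = np.clip((luma - low) / (high - low), 0.0, 1.0)

# The rough roto acts as a garbage matte: the key only applies where you painted.
roto = np.zeros((64, 64), dtype=np.float32)
roto[10:54, 10:54] = 1.0
alpha = matte * roto
```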

If you have a cleanplate of your background, meaning an image or image sequence which does not contain the foreground element you want to isolate, you could use an image-difference keyer (the PIK node in Natron) for masking out the background. This is still far from automatic and usually done with rather uniform greenscreens. I am guessing that the background does not have to be green for the math to work… but that depends on the implementation.
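The core idea of a difference key can be sketched in a few lines of numpy (this is a toy version; Natron’s PIK node does considerably more, e.g. per-channel screen math and despill, and the tolerance values here are made up):

```python
import numpy as np

# Cleanplate: the background without the foreground element.
clean = np.full((64, 64, 3), 0.2, dtype=np.float32)

# Live frame: the same background plus a differently coloured subject.
frame = clean.copy()
frame[20:44, 20:44] = np.array([0.8, 0.5, 0.3], dtype=np.float32)

# Per-pixel distance between frame and cleanplate; big distance = foreground.
dist = np.linalg.norm(frame - clean, axis=-1)

# Soft threshold; `tol` and `soft` are made-up starting values.
tol, soft = 0.05, 0.10
alpha = np.clip((dist - tol) / soft, 0.0, 1.0)
```

This also shows why the background does not have to be green: only the frame-vs-cleanplate difference matters, although noise, shadows, and camera motion break the assumption quickly.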

As you might have guessed, a compositor like Natron has a lot of tools to achieve what you want, all there to be higher quality and less time-consuming than manually adjusted roto shapes. But they all still require some knowledge and a certain level of foreground-background separation. The better that separation, the better all of this will work (even the stuff that @bazza posted works better with more uniform backgrounds).

Cheers!
P.S.: Is that 25 min of footage already shot? Can you reshoot with a greenscreen? If you can reshoot and you are not proficient with Natron… reshooting properly is much simpler than trying to fix horrible source material in post.

Thanks a lot for your help! Since the scripts are quite slow and not super precise, I decided to buy a greenish textile (hard to find a nice green screen during lockdown, and for some reason on camera the green appears blue… guess I need to check my white balance again :-P). Last time I tried, the result was quite nice when I was wearing a black shirt, but today the result is very bad with a white shirt. The difference is very easy to see with your eyes, but for some reason the keying is very bad: it also takes all the white values! I tried other keying methods too, but I can’t find a working set of parameters, and I’m not sure how to use the PIK node.

Here is my background:


and here is a screenshot of the video:

When I try to apply a keyer and look at the alpha channel, it also picks up the arm (completely white).

I tried to follow this tutorial.

I also tried the PIK node, and… it changes my color to pink!

I have the feeling that this operation is simple: if the distance between the colors is larger than some value, keep the color, otherwise remove the pixel from the alpha channel. Why are these algorithms changing my colors, or keying colors that have nothing to do with the key color? Is it because white can be seen as a very light green? If so, can’t I say “please don’t consider only the chroma, but also make sure the color is not too light”?
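For what it’s worth, the rule described here (“chroma must match AND the pixel must not be too light”) is easy to express; a toy numpy version follows (the thresholds are invented, and real keyers like PIK are far more sophisticated):

```python
import numpy as np

def chroma_key(rgb, key_rgb, chroma_tol=0.15, max_luma=0.8):
    """Key out pixels whose chroma is near key_rgb AND that are not too bright.
    Thresholds are made-up starting points, not Natron's actual parameters."""
    w = np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    luma = rgb @ w
    key_luma = key_rgb @ w
    # Chroma = colour with the brightness removed (a simplified model).
    chroma = rgb - luma[..., None]
    key_chroma = key_rgb - key_luma
    dist = np.linalg.norm(chroma - key_chroma, axis=-1)
    # alpha 0 = keyed out; require BOTH a chroma match and luma below max_luma,
    # so near-white pixels survive even if their residual hue leans green.
    keyed = (dist < chroma_tol) & (luma < max_luma)
    return np.where(keyed, 0.0, 1.0).astype(np.float32)

green = np.array([0.2, 0.6, 0.2], dtype=np.float32)
img = np.zeros((2, 2, 3), dtype=np.float32)
img[0, 0] = green                    # greenscreen pixel
img[0, 1] = [0.95, 0.95, 0.95]       # white shirt
img[1, 0] = [0.8, 0.3, 0.3]          # skin-ish foreground
img[1, 1] = [0.25, 0.55, 0.25]       # slightly off-green screen pixel
alpha = chroma_key(img, green)
```

The `luma < max_luma` test is exactly the “make sure the colour is not too light” condition: it keeps the white shirt even when its residual chroma leans green.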

Sorry if my questions feel super stupid… :-\


Note that I tried the Color Key in Blender, and the result is much better (but much longer to render):


Don’t expect to just run everything through a keyer and be done. Typically you should handle your key and extract the alpha in a separate operation from anything you do with the colour. Once you have a good alpha, merge it into your colour pipe with a Shuffle node and premultiply it.

Your questions are not stupid.
I’ll try to answer some of them; anybody feel free to chip in.
Some remarks regarding the new background: that colour looks a bit odd, but a proper white balance might help. Sit away from the greenscreen if you can, so that you do not cast shadows on it. Illuminate it evenly so that the creases are not as visible. Apart from that, good job!

Slapping the keyer onto your footage and looking at the alpha shows that you need to find better parameters for it, maybe a better key colour (a different place to pick). Ideally the alpha should look pure black for the greenscreen and pure white for you, with almost nothing in between (this is a bit simplified; the greenscreen tutorial tries to deal with motion blur in front of a greenscreen… but that is not your problem at the moment).

The PIK node has some despill operation checked and is using a different screen colour (that probably affects the subtraction math). Despill is an operation that tries to get rid of residual green that illuminated the foreground. My best guess is that unchecking ‘use alpha bias for despill’ makes the despill parameter available so you can set it to zero. The chair artefact comes from, you guessed it, the chair being part of the background AND being white. If you have a stool, use that, or take the chair away for the cleanplate.

The keyer algorithms take a colour volume around the colour you picked for the subtraction math… how the shape of that volume is selected in the RGB cube of the working space, and how you adjust it, can differ. I don’t know what Blender does better, but the result is impressive! With a bit of fiddling you should be able to arrive at the same result.

@Shrinks99 assumes that you know the difference between associated/premultiplied alpha and unassociated/unpremultiplied alpha. If you do, good. If not: it’s the difference between using and not using an existing alpha channel in the calculation of pixel colour. You can generate whatever alpha you want, stick it to your footage (merge it in) and then tell Natron to use it (premultiply).
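The maths behind this is small; a sketch of how straight vs. premultiplied alpha relate in an “over” merge (the pixel values are arbitrary):

```python
import numpy as np

# One pixel: pure red foreground with alpha 0.5 over a grey background.
fg_rgb = np.array([1.0, 0.0, 0.0])
alpha = 0.5
bg_rgb = np.array([0.5, 0.5, 0.5])

# Unpremultiplied (straight) alpha: colour is stored at full strength,
# and alpha is applied during the merge ("over" operation):
straight_over = fg_rgb * alpha + bg_rgb * (1.0 - alpha)

# Premultiplied (associated) alpha: colour already carries the alpha,
# so the merge only attenuates the background:
fg_premult = fg_rgb * alpha
premult_over = fg_premult + bg_rgb * (1.0 - alpha)
```

Both routes give the same merged pixel; what differs is whether the colour channels already carry the alpha, which is why Natron needs to know which convention your footage uses.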

I hope this helps! :slightly_smiling_face:


If you are up for experimenting with AI, I recently used https://runwayml.com/ to remove some backgrounds. But I guess it is a bit too experimental for your purpose… As the others stated above, it needs several tweaks to get a proper key in Natron.
Maybe this might help: https://github.com/MrKepzie/Natron/blob/master/Documentation/source/guide/tutorial-AlternativeMatteExtraction.rst

btw.
curious about the resulting vid you are doing :wink:

Edit:
maybe do a Google search for “OFX Plugin Keyer”; you might find and test (and report here) things like:

http://casanico.com/

or


Hey! Sorry to answer only now; back then I was super busy and forgot to reply afterwards. I just wanted to thank you all for the help. I finally managed to get my head around color keying. It does require a bit of time to find all the appropriate parameters, but “Grade” is super useful for turning grey into black/white to avoid semi-transparent objects (just make sure to check “clamp white/black”, otherwise you may get strange colors when you bring the colors back). The biggest issue I have is that my borders are usually a bit dark (not sure why, maybe it’s linked to despill), so I want to erode them by a few millimeters, but then it can create “holes” inside hair where a single green pixel was showing. Most of the time it was enough (using a small blur also helps), and I used rotoscoping when I needed more specific changes. Here is the kind of node graph I have; not sure if everybody uses this kind of thing or not:

image

One more trick: I noticed it usually works best when I select the darkest parts of my green screen to color grade it.
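The grade → erode → blur cleanup described above can be sketched in plain numpy (the lift/gain values and the 3×3 neighbourhood are made-up starting points, not Natron defaults):

```python
import numpy as np

def shifts_3x3(a):
    """Stack of the nine 3x3-neighbour shifts of a 2D array (edge-padded)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def clean_matte(alpha, lift=0.1, gain=0.9):
    # "Grade" step: push greys toward black/white, clamped to [0, 1]
    # (the equivalent of ticking "clamp white/black" on the Grade node).
    graded = np.clip((alpha - lift) / (gain - lift), 0.0, 1.0)
    eroded = shifts_3x3(graded).min(axis=0)   # shrink the edge by one pixel
    return shifts_3x3(eroded).mean(axis=0)    # small blur to soften the edge

matte = np.zeros((16, 16), dtype=np.float32)
matte[4:12, 4:12] = 1.0      # subject
matte[2, 2] = 0.08           # faint noise pixel that the grade step removes
cleaned = clean_matte(matte)
```

The erosion is also where the “holes in hair” problem comes from: a single keyed-out pixel grows into a 3×3 hole, which the blur only partly hides.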

To achieve a chroma key based on hue (like I did in Blender) instead of RGB, you can also use the HSVTool node; it’s explained in the documentation here.
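As a rough illustration of keying on hue rather than RGB distance, here is a toy version using Python’s standard colorsys module (the hue tolerance and saturation floor are invented values, not HSVTool parameters; the saturation test is what protects whites and greys):

```python
import colorsys
import numpy as np

def hue_key(rgb, key_hue=1/3, hue_tol=0.08, min_sat=0.2):
    """Alpha = 0 where the pixel's hue is near key_hue AND saturation is high
    enough; near-white/grey pixels (low saturation) are never keyed."""
    h, w, _ = rgb.shape
    alpha = np.ones((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            hue, sat, _val = colorsys.rgb_to_hsv(*rgb[y, x])
            # Hue wraps around 1.0, so take the circular distance.
            d = min(abs(hue - key_hue), 1.0 - abs(hue - key_hue))
            if d < hue_tol and sat > min_sat:
                alpha[y, x] = 0.0
    return alpha

img = np.array([[[0.2, 0.6, 0.2],        # greenscreen pixel
                 [0.95, 0.95, 0.95]]],   # white shirt pixel
               dtype=np.float32)
alpha = hue_key(img)
```

Per-pixel Python loops are far too slow for real footage; this is only meant to show the kind of logic a hue-based keyer applies in vectorised form.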

Also, one last piece of advice for newcomers: even if machine learning is certainly great, it’s already quite hard to properly key out a green screen, so if you can re-shoot, it may be worth buying at least a 20€ green fabric :wink: And the better your lighting is, the easier it will be to remove the green screen :wink: