Export in Natron

Can Natron export transparency, i.e. WebM format with an alpha channel?

Yes. Use yourfilename.webm and check "alpha". [The container has to be Matroska.]
BTW, rendering directly to a video format from Natron should be avoided whenever possible. Use a PNG/TIFF/TGA image sequence instead.
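If you do render an image sequence and still need a WebM with alpha at the end, a transparent sequence can be encoded with ffmpeg's VP9 encoder. A minimal sketch, assuming hypothetical file names (`frame_%04d.png`, `out.webm`); the key parts are `libvpx-vp9` and the `yuva420p` pixel format, whose "a" plane carries the alpha:

```python
import subprocess

def webm_alpha_cmd(pattern, fps, out):
    """Build an ffmpeg command that encodes an image sequence
    (e.g. frame_0001.png, frame_0002.png, ...) into a VP9 WebM
    that keeps the alpha channel."""
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,             # e.g. "frame_%04d.png"
        "-c:v", "libvpx-vp9",      # VP9 supports alpha inside WebM
        "-pix_fmt", "yuva420p",    # "a" = alpha plane
        out,
    ]

cmd = webm_alpha_cmd("frame_%04d.png", 30, "out.webm")
# subprocess.run(cmd, check=True)  # uncomment to actually encode
```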

Screenshot from 2020-06-13 06-09-58

As @cgvirus says, you should render transparent images from Natron. You can then take these transparent images into Kdenlive and render them into a video with an alpha channel. Just select Video with Alpha in the Render settings.

Screenshot from 2020-06-14 22-35-20

@cgvirus and @hellocatfood: It would be great if some of you could contribute a simple documentation chapter that answers these questions:

  • how do I convert my video to frames (almost necessary for H.264 & H.265)? What format should I use for the frames (especially if the video is 10-bit or 12-bit)?
  • how do I view the audio from the original video files in Natron for synchronization (using AudioCurve)?
  • how do I render my composited sequence? (again, which frame format, issues with alpha…)
  • how do I convert my frames back into a video and mux the audio (either using a GUI tool like Kdenlive, or using ffmpeg from the command line)?
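For the conversion questions, the whole round trip can be done with three ffmpeg invocations. A hedged sketch driven from Python (in the spirit of the snippets later in this thread); all file names (`input.mov`, `frames/`, `audio.mka`, `final.mp4`) are hypothetical placeholders, and `rgb48be` is one reasonable choice for keeping 10/12-bit sources in 16-bit PNGs:

```python
import subprocess

# 1) video -> 16-bit-per-channel PNG frames (safer for 10/12-bit sources)
extract = [
    "ffmpeg", "-i", "input.mov",
    "-pix_fmt", "rgb48be",          # 16 bits per channel in the PNGs
    "frames/out_%05d.png",
]

# 2) pull the audio out untouched, for syncing and muxing back later
audio = ["ffmpeg", "-i", "input.mov", "-vn", "-c:a", "copy", "audio.mka"]

# 3) rendered frames + audio -> delivery video again
mux = [
    "ffmpeg",
    "-framerate", "24", "-i", "rendered/out_%05d.png",
    "-i", "audio.mka",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "final.mp4",
]

# for cmd in (extract, audio, mux):
#     subprocess.run(cmd, check=True)   # uncomment to run for real
```

The same three commands work verbatim from a shell; the Python wrapper just makes them easy to call from a Natron script.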

If you feel more users should contribute, make a call for documentation contributors in the forum or on FB.


I will try to gather other users' input on these questions on FB. Our workflow is mostly done with ffmpeg arguments; I will put those in as well. Let's see.

I have a question on this issue. I always wondered why Natron has trouble with H.264/5 codecs. How is it handling things differently than, let's say, Kdenlive? Almost all FOSS uses FFmpeg. Just curious what the difference is. (I know Natron is a professional application designed for professional codecs, not for working mainly with end-user codecs.)

That's because with many video codecs, frames have to be encoded or decoded sequentially (the only exceptions are codecs using only intra frames, such as ProRes or DNxHR).
Natron accesses frames in random order, which makes things a bit complicated, especially through the FFmpeg API, which is not made for that and is very low-level. If we could have a robust API over FFmpeg to just "fetch frame f of that video sequence", that would solve everything.
Of course, properly rewriting the FFmpeg reader would also solve this, but the FFmpeg API is a mess, and a moving target.
For writing videos, Natron should be able to do it properly, but there are still a few incompatibility issues (e.g. with DNxHR) that have to be fixed. However, I never recommend rendering a complex graph to a video.
I think if there’s a clear section in the manual about how to use other FOSS tools to perform these tasks, this will help a lot.


I’m asking for this doc from you, because we have no developers to do the code rewrite, but we have talented users who know solutions and can document these, even if they use external tools.
The doc should basically be text and images. Preferred text formats are Markdown or RST, but it's ok even if it's a Word doc, as long as you don't use a crazy layout. Whatever the format is, I'll convert it to RST first using pandoc, and then edit it.


Thank you for the information. It cleared up my lack of understanding of how Natron works with H.264/5; I get the rest of it. I'll try and help CGVirus get something together.

In layman's terms:

The problem comes from the way the H.264 compression algorithm is designed. To encode, H.264/5 uses B and P frames. Suppose a 1-second shot at 30 fps has 30 frames. Instead of storing every frame fully, H.264/5 stores only a few complete frames (I frames) and, for the rest, just the pixels that changed: a P frame references earlier frames (e.g. frame 5 reusing similar pixels from frames 3 and 4), and a B frame references both earlier and later frames (e.g. frame 2 reusing pixels from frames 1 and 3). The decoder then blends these references back together to reconstruct the final frames (fewer full frames in the packets).
That's how it stays smaller in size.

Also, to decode, it runs this algorithm again, for which the CPU needs more power and time just to decode and extract those frames.

Now this is the problem. First, the decoder needs to go back and forth to decode these frames. Second, we need to capture the frames almost in real time and send them to memory. Waiting for these B and P frames to be processed stalls scrubbing and creates latency for compositing, and that latency carries through into memory, so we get frame jumps, which in turn stall other nodes. This becomes a problem for trackers and roto: if they miss some frames' screen pixels, the roto and tracking will be inaccurate.
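You can see this frame structure for yourself with ffprobe, which can print the picture type (I, P, or B) of every frame. A small sketch, assuming a hypothetical `input.mp4`; the flags are standard ffprobe options:

```python
import subprocess

# Print one line per video frame: "I", "P", or "B"
probe = [
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",
    "-show_entries", "frame=pict_type",
    "-of", "csv=p=0",
    "input.mp4",
]
# out = subprocess.run(probe, capture_output=True, text=True).stdout
# print(out.splitlines()[:30])   # a typical H.264 file mixes I, P and B
```

An intra-only file (like ProRes, or H.264 encoded as described below) shows nothing but "I" lines, which is exactly why those files scrub well.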

In VFX and editing, if a facility wants to use H.264/5 as input (usually, and for good reason, they never do), they can use intra frames only. It's easy to do in ffmpeg using the arguments -bf 0 and -g 1, which ensure H.264 will not use B and P frames. But the H.264 decoder will still run the CPU processing anyway, and although we will have all frames in memory, the amount of processing power needed just to decode the video is nowhere near efficient.
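The intra-only transcode mentioned above can be sketched like this (file names are hypothetical placeholders; `-bf 0` and `-g 1` are the two arguments from the post):

```python
import subprocess

# Re-encode to H.264 with every frame an I frame: no B frames (-bf 0)
# and a GOP size of 1 (-g 1), so nothing references any other frame.
intra = [
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",
    "-bf", "0",    # disable B frames
    "-g", "1",     # keyframe interval 1 -> intra-only
    "intra.mp4",
]
# subprocess.run(intra, check=True)  # uncomment to run
```

The resulting file scrubs like an image sequence, at the cost of a much larger size and the same heavy H.264 decode per frame.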

Usually in editing, intermediate formats like ProRes, DNxHD or QTL are used, because they don't use any aggressive inter-frame prediction; rather, they pack a bunch of still images in a container, with little or no quality loss.

Usually for VFX, image sequences (PNG/TIFF/TGA/EXR) or the intermediate formats above are used, because they are simple to decode and send to memory immediately, and they are better for tracking and roto as well.

I think we can create a transcoder node with ffmpeg and PySide to transcode them directly from Natron for non-GUI folks.


Also, the most complete GUI tools for ffmpeg:

  1. Shotcut (Win/OSX/LNX)
  2. Kdenlive (Win/LNX; the Mac build is not good)
  3. ffMultiConvert (LNX)

So if you want examples of how to use the tools you mentioned let me know and I’ll screen capture shots on how to do it.

The basic code for Natron to request FFmpeg execution for image transcoding is simple enough, but should it be encapsulated in a node, or should it be a menu wizard script? Verified Natron script code below.

import subprocess
subprocess.call(['ffmpeg', '-i', 'test.mov', 'image%d.jpg'])
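A slightly more defensive variant of the same idea, as a sketch: a hypothetical helper that builds the command with a zero-padded PNG pattern (which Natron's Read node picks up as a sequence) and leaves the actual run to the caller:

```python
import os
import subprocess

def transcode_cmd(src, out_dir="frames"):
    """Hypothetical helper: build an ffmpeg command that dumps every
    frame of `src` into out_dir as zero-padded PNGs."""
    pattern = os.path.join(out_dir, "image_%05d.png")
    return ["ffmpeg", "-i", src, pattern]

cmd = transcode_cmd("test.mov")
# os.makedirs("frames", exist_ok=True)
# subprocess.run(cmd, check=True)   # raises if ffmpeg fails
```

Using `subprocess.run(..., check=True)` instead of `subprocess.call` means a failed transcode raises an exception instead of failing silently, which matters if this ends up inside a wizard.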


Should this be a separate node, or should the read node have an option to transcode to an image sequence (maybe even by default) in a subfolder adjacent to the original file?
The former might make the read node complain about non-image sequences; the latter would need logic inside the read node to switch over to the image sequence once one is there.

I think the best solution is a wizard in the main menu. It would avoid any confusion with existing nodes, and it's the way other programs like Kdenlive do it. Creating it as a separate node or a menu wizard also keeps it out of the intended functionality of the read node, IMO. I'm open to any ideas though.

I haven’t tried coding a script with pop up windows yet in Natron but from the docs it looks possible.

Programming question: How do I write the code for the on click event for the buttons?



Programming question: How do I write the code for the on click event for the buttons?

You can see audio_vlc.py, lines 13-16 and 166-171 (linking).
You will need a your_main_filenameExt.py for external execution;
see Audio_VLCExt.py for the implementation, lines 54-99.


It should be a separate node or a Python menu; the read node is hard to crack.


@cgvirus @tmorley I've started responding to some of the questions @devernay asked. For the moment I'm doing this here: https://docs.google.com/document/d/1YY9wSg5_nPCdIwGz7_r8cuhO1ZXOV6TODQQdcixEkjo/edit?usp=sharing I'll move it to Markdown at some point.

There are some questions I don't know the answer to, like which format to use for 10/12-bit input, and there's no doubt more that needs to be added to the ffmpeg commands.

Also, if anyone knows the Windows and Mac software and can write about it, please do!

Any input would be great
