Should the Natron viewer display unpremultiplied RGB?

Many of the errors reported on this forum (around a third) come from the fact that users believe in WYSIWYGITVF (What You See Is What You Get In The Video File), and they don’t think to check the alpha channel of what gets into the Writer.

The Natron viewer only displays either premultiplied colors (equivalent to a merge over solid black), or a merge over a checkerboard. Almost nobody ever checks the checkerboard render.

But by default, when writing to a file format that doesn’t handle alpha, Natron will unpremultiply by alpha to get the true colors. The problem is that many people (even non-beginners) create composites with a non-solid alpha, and unpremultiplication creates artifacts that are too often interpreted as bugs.
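To make the failure mode concrete, here is a minimal numpy sketch (an illustration only, not Natron’s actual code) of why unpremultiplying a non-solid alpha produces artifacts: dividing by a small alpha amplifies whatever is in the RGB channels, noise included.

```python
import numpy as np

def unpremultiply(rgba, eps=1e-6):
    """Divide RGB by alpha to recover the 'true' colors; near-zero alpha is left alone."""
    rgb, a = rgba[..., :3], rgba[..., 3:4]
    safe = np.where(a > eps, a, 1.0)  # avoid dividing by (near-)zero
    return np.concatenate([rgb / safe, a], axis=-1)

# A premultiplied pixel with a little noise and a tiny alpha of 0.01:
px = np.array([[0.012, 0.008, 0.010, 0.01]])
print(unpremultiply(px))  # RGB blows up to ~[1.2, 0.8, 1.0]: a visible artifact
```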

Should the Natron viewer display unpremultiplied RGB?
  • no, everything should stay as it is
  • no, but when writing to a format that doesn’t have alpha, Natron should by default merge over solid black (i.e. drop alpha) rather than unpremultiply (i.e. divide by alpha)
  • yes, but not by default (could be an “alpha” button next to the checkerboard button)
  • yes, by default, and it could be disabled by clicking the checkerboard button (aka merge over checkerboard) or an “alpha” button (aka merge over black, i.e. show premultiplied RGB)

0 voters

For the record, I didn’t report this as an error. I don’t think I suggested it was anything other than a lack of my own understanding.

I agree with Hank (@Shrinks99) on this:

1 Like

The viewer displaying premultiplied colors is, in my opinion, pretty standard behavior and it shouldn’t change. Users need to understand that it is not WYSIWYG and learn how to properly check alpha for export.
However, if there were to be any change, I’d personally like it to be something like the ‘alpha button next to the checkerboard button’ you mentioned or the ‘flood alpha’ checkbox in the read node like Hank mentioned. Those two seem to be the only reasonable “fixes”.
Changing the way the viewer works or the way Natron renders would make it non-standard and confusing for people who have been doing things the right way all along.

1 Like

(Perhaps unsurprisingly) I still agree with the points I made previously. I believe they offer the best solution to the outlined problem for users, as well as keeping standards in line with other software.

1 Like

Of course you didn’t, but I still consider it a design error on Natron’s side. Users should understand what is written to disk.

The solution cannot be in the reader IMHO: it is either in the viewer or in the writer, because bad things can happen anywhere in the compositing.

Personally, I would think that having the Writer not unpremultiply by default (just drop alpha) when writing RGB formats is the best option, because it’s WYSIWYG. Users can always insert an “Unpremult” node before the Writer if that’s what they want, and they can even preview the result!

BTW, I already made the change in the Merge node: RGB images are considered opaque by Merge by default, simply because I don’t want Porter and Duff to hate me for having invalid premultiplied values. (R>0,G>0,B>0,A=0) is simply invalid in a premultiplied workflow like Natron’s or Nuke’s.
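To illustrate both points above with the Porter-Duff math (a plain numpy sketch, not Natron’s implementation): merging premultiplied pixels over solid black leaves RGB untouched, which is why dropping alpha is WYSIWYG, while an (R>0,G>0,B>0,A=0) input degenerates “over” into “plus”.

```python
import numpy as np

def over(a, b):
    """Porter-Duff 'over' on premultiplied RGBA: out = A + (1 - alphaA) * B."""
    return a + (1.0 - a[..., 3:4]) * b

black = np.array([0.0, 0.0, 0.0, 1.0])
bg    = np.array([0.2, 0.4, 0.6, 1.0])

px = np.array([0.3, 0.5, 0.2, 0.4])   # premultiplied, non-solid alpha
print(over(px, black))                # [0.3 0.5 0.2 1. ] RGB untouched: dropping alpha is WYSIWYG

bad = np.array([1.0, 1.0, 1.0, 0.0])  # (R>0,G>0,B>0,A=0)
print(over(bad, bg))                  # [1.2 1.4 1.6 1. ] 'over' degenerates into 'plus'
```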

I don’t see the point in Nuke’s “auto alpha” option: “When enabled, if the Read produces RGB but no alpha channel, then assume the alpha should be 1 if it’s requested later on.” Why the heck would someone want another value than 1? You want 0.5? Set it! You want 0 and die in hell? Do it yourself!

Now, Natron has it always on.

I don’t see the point in Nuke’s “auto alpha” option: “When enabled, if the Read produces RGB but no alpha channel, then assume the alpha should be 1 if it’s requested later on.” Why the heck would someone want another value than 1?

You likely would not; that’s why the “auto alpha” checkbox is a boolean and not a slider? Maybe I’ve missed your point here. While I don’t personally make use of this workflow, I have seen compositors add things to their comp in the main B pipe, all with alpha, and then re-grain only the parts of the frame with alpha at the very end. I don’t like doing things this way, but some folks do; it’s a valid workflow. ¯\_(ツ)_/¯

Here’s an encoded PNG overtop a JPEG with no alpha. In Natron, turning on the B input magically gives it an alpha (where’d it come from??); in Nuke, it makes no difference. These images are all from 2.3.15.


Natron, B Alpha checked. Where’d it come from? B has no alpha channel assigned. Checking this on should make no difference, and it should always be checked on unless the user decides it shouldn’t be. These checkboxes shouldn’t denote new channels we’re adding to the comp, just channels that are being brought through to the merge operation. If you want to add a new checkbox that says “create alpha for input X” or something, and even have it checked on by default if there is no alpha present, I’m all for it. My issue here is consistency in how operations are handled in the program.


Natron, B Alpha unchecked. This is proper and legitimate; also, I can’t see any issues with the image? Isn’t this supposed to be where the problem is?


Nuke, Alpha B enabled. Works as expected; no magical alpha comes from anywhere.


Nuke, Alpha B disabled. Same behavior.

Another noteworthy difference: Nuke floods alpha in Transform nodes if black outside is checked and no alpha is found. As for why they do this, I have no idea.

Also, R=1,G=1,B=1,A=0 is plenty valid for encoding light.

Emissive passes do not have alpha channels because light has no mass and does not occlude things in the same way that physical objects do. As of the latest beta, when two EXR passes with no alpha channels are merged by the Merge node and output into RGBA, an alpha is created. Meanwhile, Nuke does nothing with it, because Nuke’s merge node just merges existing things, like a good merge node should.

The solution cannot be in the reader IMHO: it is either in the viewer or in the writer, because bad things can happen anywhere in the compositing.

If users decide to delete alpha channels that should otherwise be there (in this situation they’re on and flooded by default, so you’d have to actively replace them) before merging, then that’s on them. I am still solidly convinced that fixing this problem by promoting files incapable of encoding alpha channels to alpha on read, by default, is the best solution. Images commonly used to encode emissive passes (EXR) should also be exempt from this change unless the EXR only contains RGB data. As I mention above, “merge” nodes should only perform operations on passes that are available to them and checked off.

I really hate to appeal to authority with a “but Nuke does it this way” argument; Nuke does a lot of stupid things that I dislike, but I really think they got basic image transformations correct. Merging over black when only writing RGB data is also a good idea.

Finally (while I have your attention, haha), any chance the tab padding PR could be merged before the next beta? I’d like it in this release if possible, and I consider it ready for prime time despite the 1px line visual bug (present in 2.3.15 and earlier) that it doesn’t fix.

Hi, I implemented a WYSIWYG solution in the latest beta, which doesn’t require changing anything in the UI!

The only change is that when writing an image with a non-solid alpha to a format that doesn’t support alpha, Natron simply drops the alpha channel, which is equivalent to merging over solid black before conversion. This is exactly the image you see in the viewer by default.

Previously, it would unpremultiply, convert to the file colorspace (e.g. sRGB), and premultiply again. So you ended up with sRGB premultiplied by an alpha that you didn’t know…
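A rough numpy sketch of the two write paths (simplified sRGB curve, illustration only; this is not Natron’s actual code):

```python
import numpy as np

def to_srgb(x):
    """Simplified sRGB transfer curve (illustration only)."""
    x = np.clip(x, 0.0, None)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def write_rgb_old(rgba):
    """Previous behaviour: unpremult, convert to file colorspace, premult again."""
    rgb, a = rgba[..., :3], rgba[..., 3:4]
    safe = np.where(a > 0, a, 1.0)
    return to_srgb(rgb / safe) * a   # sRGB premultiplied by an alpha the user never sees

def write_rgb_new(rgba):
    """New behaviour: drop alpha (== merge over solid black), then convert."""
    return to_srgb(rgba[..., :3])    # exactly what the viewer shows by default

px = np.array([[0.2, 0.2, 0.2, 0.5]])  # premultiplied, half-solid alpha
print(write_rgb_old(px))  # ~0.33: darker than what the viewer showed
print(write_rgb_new(px))  # ~0.48: matches the viewer's default display
```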

@Shrinks99 Concerning PR#564, there’s one issue left to address, and I’ll merge.

1 Like

So the merge node doesn’t add or remove anything unless the user tells it to? If so this solution sounds great!

Changes made BTW.

Just tested in the new 2.3.16v3 beta, and no: it seems that if no alpha is available, the merge node still floods alpha channels when the user tells it to pass alpha information through the merge node.

This is still inconsistent. The tooltip tells users that when ‘A channel A’ is selected it will “Use alpha component from A input(s)”, yet it effectively acts in the complete opposite way when there is no alpha… Schrödinger’s checkbox :stuck_out_tongue: In all seriousness though, this is not a system that performs intuitively.

I still really think Foundry has their merge system 100% correct. If we want to promote people’s non-alpha files to have alpha, it shouldn’t be done within the node that performs compositing operations, and it absolutely shouldn’t be done by making the existing system behave in the exact opposite way. The new write system is also good and should stick around. :+1:
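A sketch of the channel-selection semantics being argued for here (hypothetical helper, not Natron’s actual API): the checkbox should gate an existing channel through to the merge, never synthesize one.

```python
from typing import Optional
import numpy as np

def select_alpha(alpha: Optional[np.ndarray], use_alpha: bool) -> Optional[np.ndarray]:
    """Gate an existing alpha channel through to the merge.

    If the input has no alpha, there is nothing to pass through; creating
    A=1 where none exists would belong in the Read node, not here.
    """
    if not use_alpha or alpha is None:
        return None
    return alpha
```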

This is consistent.
RGB images are Opaque. Check the input clip information (“Info” tab).

Opaque means A=1. 100% consistent.

Just uncheck the alpha checkbox for input B, and it is ignored. Still consistent.

Not in a premultiplied workflow.

Please explain what “adding a value to premultiplied RGB” means, I’m curious.

These are probably remnants and bad habits from unpremultiplied/unassociated compositing software (AE).

These are probably remnants and bad habits from unpremultiplied/unassociated compositing software (AE).

Actually, the opposite is true: most of Adobe’s straight-alpha nonsense results in (1,1,1,0)-type scenarios not being properly supported, much to my chagrin! RGB images can be opaque, but RGB data can also be written (and regularly is written from within render engines) with A=0 while being completely valid in premultiplied compositing, EVEN when writing to formats that don’t support alphas!

In this post I will try to explain why this is the case, how it relates to Natron’s proposed compositing changes, and what exactly I mean when I say that Foundry has gotten it right.

“The alpha channel of 0 indicates that this pixel will obscure nothing.”

Compositing Digital Images (1984) Thomas Porter, Tom Duff

The key word here is “obscure”. Porter-Duff compositing does not treat pixels with A=0 as simply “transparent”. Instead, the alpha channel denotes a percentage of geometric occlusion, which isn’t quite as simple as calling it “opacity” either; the RGB channels denote the emissive value of a pixel, as backed up by Jeremy Selan:

If you’re writing a renderer, you ask yourself “how much energy is being emitted from within the bounds of this pixel”? Call that rgb. “and, how much of the background does this pixel occlude?” That’s alpha. This is how all renderers work (prman, arnold, etc) , and its called ‘premultiplied’. Note that at no time does prman have an explicit step that multiplies rgb by alpha, it’s implicit in our definition of ‘total pixel energy’.

Thoughts on Alpha (2011) Jeremy Selan

Consider a pixel where rgb > alpha, such as 2.0, 2.0, 2.0, 1.0. Nothing special about this - it just represents a ‘specular’ pixel where it’s emitting 2.0 units of light, and is fully opaque. A pixel value of (2.0, 2.0, 2.0, 0.0)? Nothing special about this either, it represents a pixels that’s contributing 2.0 units of light energy, and happens to not occlude objects behind it. Both of these cases can cause trouble with unpremultiplied representations.

Thoughts on Alpha (2011) Jeremy Selan

Photons have no mass and no geometry; they physically cannot occlude things, and therefore purely emissive pixels have no alpha value… but they do have RGB values!
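Running Selan’s two example pixels through a plain “over” makes the distinction concrete (a numpy sketch of the premultiplied model, nothing renderer-specific):

```python
import numpy as np

def over(a, b):
    """Porter-Duff 'over' on premultiplied RGBA: out = A + (1 - alphaA) * B."""
    return a + (1.0 - a[..., 3:4]) * b

specular = np.array([2.0, 2.0, 2.0, 1.0])  # emits 2.0 units, fully occludes
emissive = np.array([2.0, 2.0, 2.0, 0.0])  # emits 2.0 units, occludes nothing
bg       = np.array([0.3, 0.3, 0.3, 1.0])

print(over(specular, bg))  # [2.  2.  2.  1. ] background hidden
print(over(emissive, bg))  # [2.3 2.3 2.3 1. ] light added, background fully visible
```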

The above contains my reasoning for the proposed implementation earlier in this thread:

  1. Wanting to flood alpha for images that are read into Natron without alpha channels is noble and will save some headaches for people who currently have to manually add alpha channels to plates in formats that don’t support them (or, perhaps more accurately, forget to do so). Chances are the objects in these images are solid, and we can reasonably assume that all image formats read into Natron which cannot contain alpha should be treated as solid and occlude anything placed behind them.

  2. Simply dropping alpha when writing to formats that don’t support it is valid and correct. RGB=1 A=0 is a fully encoded white pixel (colourspace aside) and should be written out as 1 as long as nothing with an alpha is occluding it. Remember, PNG is broken; alpha is not just “transparency” or “opacity”, it’s a percentage of occlusion, and if nothing is occluding a pixel then it is fully emissive! It actually took me a while to wrap my head around this concept when I first learned it; it can feel somewhat counter-intuitive given the way we use alphas in practice to mask things.

  3. Applying these flood operations automatically, by default, in merge nodes ignores the fact that RGB=1 A=0 is valid and makes a bunch of assumptions about the type of image being fed into the merge node.

In short, should the viewer display it? YES!

Personally I would think that having the Writer not doing unpremult by default (just drop alpha) when writing RGB formats is the best option, because it’s WYSIWIG.

This is the best solution.

1 Like

You only have one checkbox to uncheck in Merge to get what you want (or you can add a Shuffle node instead).

RGB images are Opaque by default, because that’s the way they should be considered most of the time (let’s face it, most objects are not translucent).

Setting alpha to 0 by default causes a lot of confusion: (r,g,b,0) images can only be composited in very specific ways, whereas (r,g,b,1) images are more straightforward. I’ve seen too many composites where an (r,g,b,0) image was used as the background.

The only case where one can use (r,g,b,0) is for images that represent the light coming from translucent objects and for subsurface scattering, and that image never exists alone (it is one out of many render passes). Why should this be the default? I don’t get it.

  • Default covers most of the use cases
  • User still has control to change that

→ no issue

These are good questions.

RGB images are Opaque by default, because that’s the way they should be considered most of the time (let’s face it, most objects are not translucent)

Sure, I generally agree with this assessment. This is a good argument for setting A=1 in read nodes… Not anywhere else though.

I’ve seen too many composites where an (r,g,b,0) image was used as the background.

Me too, and this is perfectly valid; arguably it has workflow advantages too! Those pixels are fully emissive, they don’t occlude anything behind them (as it is the background), and they are fully visible, since their alpha value of 0 has not been used to premultiply them yet. For PNG images we can reasonably assume that they should be premultiplied by default on read. Items with A=1 placed above an (r,g,b,0) background will occlude that background, but the background doesn’t need to be (r,g,b,1) for this to be composited correctly, as explained above.
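A quick sketch of why an (r,g,b,0) background composites correctly under a premultiplied “over” (plain numpy, illustration only):

```python
import numpy as np

def over(a, b):
    """Porter-Duff 'over' on premultiplied RGBA: out = A + (1 - alphaA) * B."""
    return a + (1.0 - a[..., 3:4]) * b

fg = np.array([0.8, 0.1, 0.1, 1.0])  # opaque foreground element
bg = np.array([0.2, 0.4, 0.6, 0.0])  # (r,g,b,0) background plate

print(over(fg, bg))           # [0.8 0.1 0.1 1. ] fg fully occludes bg
print(over(np.zeros(4), bg))  # [0.2 0.4 0.6 0. ] outside the fg, bg is fully visible
# the background never needed A=1 for this comp to come out right
```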

The only case where one can use (r,g,b,0) is for images that represent the light coming from translucent objects and for subsurface scattering, and that image never exists alone (it is one out of many render passes). Why should this be the default? I don’t get it.

This is the biggest misconception about alpha channels, and it is the one I am trying to dispel with the sources linked above. That is indeed one use case (light emitted from objects with no mass), but treating image data read in with no alpha as fully emissive by default is A-OK and a good default. Assigning it an alpha on read would be a bonus to ensure it can be composited overtop other images correctly by default, but it is not required for a compositing system to implement alpha channels correctly. This isn’t really about use cases and how people manipulate images in the graph, though; it’s about ensuring the program operates and composites in an accurate and correct manner.

Example time!

Let’s look at a concrete example of this implemented properly in Nuke and examine why Nuke works the way I’ve described above. When we read a JPEG into Nuke, it has RGB channels but no alpha channel, and it is put into the graph in an unpremultiplied state. As we can see, it’s fully visible and fully emissive (as expected); in my selection, RGB is all greater than 0 while A=0.

When we scale this emission value by its alpha value (premultiply), it disappears. This is also expected: it has no alpha, so when applying this operation we can reasonably expect to see nothing.
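In code terms (a sketch of the operation, not Nuke’s implementation), premultiplying scales the emission by the occlusion, and with A=0 everywhere the result is black:

```python
import numpy as np

def premultiply(rgba):
    """Scale emission (RGB) by occlusion (alpha)."""
    out = rgba.astype(float).copy()
    out[..., :3] *= out[..., 3:4]
    return out

jpeg_px = np.array([0.7, 0.5, 0.3, 0.0])  # JPEG read in unpremultiplied, no alpha
print(premultiply(jpeg_px))               # [0. 0. 0. 0.] -- the image 'disappears'
```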

So far everything works as expected, so let’s comp something! In this next image we have taken a PNG image and premultiplied it. We’ve gotta do that in Nuke with a separate node so we don’t get the artifacts that @Zalpon was experiencing; Natron helpfully has this on by default for PNGs in the read node. If we look at the image in RGBA view, we can see our torus overtop our beach background, yet only the torus has alpha? How can this be the case if alpha channels denote transparency?

The answer, of course, is that they don’t denote transparency. As I’ve said before, and as backed up by the two sources above (whose authors have won a combined total of 7 Academy Awards for technical achievement… that’s 7 more than me!), alpha channels denote geometric occlusion. Because there is nothing in front of the background with an alpha value greater than zero, the background is fully visible and emissive, as expected.

By changing the merge node to assign alphas where there are none while performing a compositing operation, we are creating a system that does not operate correctly or as expected. Flooding alphas on read for images with no alpha, and premultiplying automatically, will ensure that these images can be composited with an “over” without it becoming a “plus” due to a lack of alpha, while providing that alpha channel for manipulation by the user before it ever hits a compositing operation. This would be a helpful feature, though it does not mean that backgrounds with an unpremultiplied alpha of 0 are incorrect, or should be invisible.

3 Likes

Part Three: The Things That This Change Breaks

In my last two posts I broke down how unpremultiplied alpha works and why Nuke handles alpha compositing correctly. In this one I will showcase the new issues brought about by this approach and how they will affect Natron’s users, as well as (again) making the case for assigning alphas on read to file types that don’t support them instead. @devernay I really hope you don’t find me too bothersome by this point… like everyone here, my interest is in contributing positively to the software :slight_smile:

First let’s start off with a basic ‘over’ with the new approach:

At first glance this is definitely a better result than what happens in 2.3.15, where (due to the lack of an alpha channel) the ‘over’ operation essentially becomes a ‘plus’. Because an alpha is assigned, this looks correct by default. Neat! …But this method has some drawbacks.

Firstly, if we move the image around at all, we get a 1px black border due to ‘black outside’ being on by default in the Transform node.

When the image is assigned an alpha on read, this is no longer an issue, because the alpha correctly corresponds to the plate size (and of course the merge node now detects one and doesn’t flood it). This can also be achieved by toggling off ‘black outside’.

When an image with RGB channels is transformed and blurred (or defocused) before compositing with ‘over’, we get an ugly black border due to the image’s alpha being flooded. Imagine being given a poster asset from a client as a JPEG or a TIFF (which probably has no alpha if exported from Photoshop), bringing it into the program, tracking, comping, and defocusing. This is the result if that image is not given an alpha of 1 prior to transforming and blurring.

Of course, Natron’s existing behaviour isn’t really any better by default! As mentioned before, with the alpha at zero the ‘over’ operation just becomes a ‘plus’, which is never what the user wants. The only correct solution here is to give the image an alpha on read (or through a Shuffle node) before it hits the transform, so that it is composited correctly.
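The flood-on-read fix argued for here might look something like the following (hypothetical helper name, not Natron’s actual API): promote the RGB plate to RGBA with A=1 before any transform or filter touches it, so downstream ‘over’ operations see a real edge.

```python
import numpy as np

def flood_alpha_on_read(rgb):
    """Hypothetical 'flood on read': promote RGB to RGBA with a solid alpha."""
    a = np.ones(rgb.shape[:-1] + (1,), dtype=rgb.dtype)
    return np.concatenate([rgb, a], axis=-1)

poster = np.random.rand(480, 640, 3)  # stand-in for the client's JPEG poster
plate = flood_alpha_on_read(poster)   # transforms and blurs now filter the alpha
print(plate.shape)                    # (480, 640, 4) -- the edge travels with RGB
```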

Notably, Nuke doesn’t do this automatically, and (in my opinion) that sucks, though it will flood alpha in Transform nodes if none is present, so users are less likely to encounter these problems. I don’t think that’s a very elegant solution, and I remember being personally confused by this exact problem when I was first learning how to comp.

There’s what we want! ‘Output Components’ (another instance that should probably be renamed to “channels”?) is now set to RGBA, and the image is transformed, blurred, and composited correctly because the alpha correctly corresponds to the plate size. Nice! :smiley:

Here’s another instance where this becomes a problem. Imagine that poster is tracked in and transformed with motion blur; this is the output if alpha is flooded on merge instead of on read:

Again this can be fixed by assigning an alpha on read:

Final Thoughts

I hope I have been able to clearly articulate the problems with this approach through the sources I’ve cited and the applied examples shown in this post and my previous one. My theses remain the same:

  1. Flooding alpha on read for formats where it is unsupported will provide users with a better experience; flooding on merge will create inconsistencies and new problems for users, doubly so for less experienced compositors.
  2. Merge nodes should apply compositing operations to existing channels; they should not add new data to them.
  3. Alpha denotes geometric occlusion, which is more complex than transparency, and should be implemented accordingly.
  4. Alpha should be dropped when writing to formats that do not support it.

I would also like to thank Troy Sobotka for his resources on the topic, as well as his time. While I understand he’s no longer on this forum, he has spent multiple hours privately helping me comprehend premultiplied alpha compositing well enough to properly make these arguments, and I figure that effort should be credited somewhere.

Due to everything I’ve stated above, I would highly recommend that at least the changes to the merge node specifically be reverted before 2.3.16’s full release. Again, if anyone has questions I will do my best to answer them!

2 Likes