These are good questions.
RGB images are opaque by default, because that’s the way they should be considered most of the time (let’s face it, most objects are not translucent).
Sure, I generally agree with this assessment. This is a good argument for setting A=1 in read nodes… Not anywhere else though.
I’ve seen too many composites where an (r,g,b,0) image was used as the background.
Me too, and this is perfectly valid; arguably it has workflow advantages as well. Those pixels are fully emissive, they don’t occlude anything behind them (there is nothing behind the background to occlude), and they are fully visible because their alpha value of 0 has not yet been used to premultiply the RGB. For PNG images we can reasonably assume that they should be premultiplied by default on read. Items with A=1 placed above an (r,g,b,0) background will occlude that background, but the background does not need to be (r,g,b,1) for the composite to be correct, as explained above.
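To make this concrete, here is a minimal sketch of the Porter-Duff “over” operator in premultiplied space. The pixel values and names (`beach`, `torus`) are made up for illustration; the point is that the bottom layer’s own alpha never enters the equation, so an (r,g,b,0) background composites correctly:

```python
def over(fg, bg):
    """Composite premultiplied RGBA pixel fg over pixel bg.

    out_rgb = fg_rgb + (1 - fg_alpha) * bg_rgb
    out_a   = fg_a   + (1 - fg_alpha) * bg_a
    Note bg's alpha only affects the output alpha, never the output RGB.
    """
    r = tuple(f + (1.0 - fg[3]) * b for f, b in zip(fg[:3], bg[:3]))
    a = fg[3] + (1.0 - fg[3]) * bg[3]
    return (*r, a)

beach = (0.8, 0.7, 0.5, 0.0)   # fully emissive background, alpha 0
torus = (0.2, 0.1, 0.0, 1.0)   # opaque, premultiplied foreground pixel
hole  = (0.0, 0.0, 0.0, 0.0)   # empty foreground pixel

print(over(torus, beach))      # torus fully occludes the beach
print(over(hole, beach))       # beach shows through, untouched
```

An opaque foreground pixel holds out the background entirely, and where the foreground is empty the (r,g,b,0) background comes through at full strength, exactly as described above.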
The only case where one can use (r,g,b,0) is for images that represent the light coming from translucent objects and for subsurface scattering, and that image never exists alone (it is one out of many render passes). Why should this be the default? I don’t get it.
This is the biggest misconception about alpha channels that I am trying to dispel with the sources linked above. That is indeed one use case (light emitted from objects with no mass), but treating image data read in with no alpha as fully emissive by default is perfectly fine, and a good default. Assigning an alpha of 1 on read would be a bonus, ensuring such images can be composited over other images correctly by default, but it is not required for a compositing system to implement alpha channels correctly. This isn’t really about use cases or how people manipulate images in the graph, though; it’s about ensuring the program operates and composites in an accurate and correct manner.
Example time!
Let’s look at a concrete example of this implemented properly in Nuke, and examine why Nuke works the way I’ve described above. When we read a JPEG into Nuke it has RGB channels but no alpha channel, and the image is put into the graph in an un-premultiplied state. As we can see it is fully visible and fully emissive (as expected): in my selection RGB is everywhere greater than 0 while A=0.
When we scale the emission values by the alpha value (premultiply), the image disappears. This is also expected: the alpha is 0 everywhere, so after applying this operation we can reasonably expect to see nothing.
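The premultiply step is just a per-pixel multiply of RGB by alpha. A tiny sketch (the sample values are arbitrary) shows why a JPEG with no alpha data goes to black:

```python
def premultiply(pixel):
    """Scale RGB by alpha; alpha itself is unchanged."""
    r, g, b, a = pixel
    return (r * a, g * a, b * a, a)

jpeg_pixel = (0.9, 0.6, 0.3, 0.0)   # un-premultiplied, no alpha data
print(premultiply(jpeg_pixel))       # everything multiplied by 0: black
```

With A=0 across the whole image, every RGB value is scaled to zero, which is exactly the disappearing act seen in the Nuke screenshot.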
So far everything works as expected, so let’s comp something! In this next image we have taken a PNG image and premultiplied it. In Nuke we’ve gotta do that with a separate node so we don’t get the artifacts that @Zalpon was experiencing; Natron helpfully has this on by default for PNGs in the read node. If we look at the image in RGBA view we can see our torus over our beach background, yet only the torus has alpha. How can this be the case if alpha channels denote transparency?
The answer, of course, is that they don’t denote transparency. As I’ve said before, and as backed up by the two sources above (whose authors have won a combined total of 7 Academy Awards for technical achievement… that’s 7 more than me!), alpha channels denote geometric occlusion. Because there is nothing in front of the background with an alpha value greater than zero, the background is fully visible and fully emissive, as expected.
By changing the merge node to assign alphas where there are none while performing a compositing operation, we create a system that does not operate correctly or predictably. Flooding alphas on read for images with no alpha, and premultiplying automatically, would ensure that these images can be composited with an “over” without it degenerating into a “plus” due to a missing alpha, while still providing that alpha channel for manipulation by the user before it ever hits a compositing operation. This would be a helpful feature, but it does not mean that backgrounds with an unpremultiplied alpha of 0 are incorrect, or that they should be invisible.
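The “over becomes plus” failure mode is easy to demonstrate numerically. In this sketch (assumed values, and a hypothetical `merge_over` standing in for a merge node’s over operation), a foreground with A=0 is simply added to the background, while flooding its alpha to 1 on read restores true occlusion:

```python
def merge_over(fg, bg):
    """Premultiplied "over": out = fg + (1 - fg_alpha) * bg, per channel."""
    return tuple(f + (1.0 - fg[3]) * b for f, b in zip(fg, bg))

bg = (0.75, 0.5, 0.25, 0.0)          # emissive background, alpha 0

fg_no_alpha = (0.25, 0.25, 0.0, 0.0)  # foreground read in with no alpha
print(merge_over(fg_no_alpha, bg))    # nothing is held out: a "plus"

fg_flooded = (0.25, 0.25, 0.0, 1.0)   # same RGB, alpha flooded to 1 on read
print(merge_over(fg_flooded, bg))     # background fully occluded
```

With A=0 the (1 − alpha) term is 1, so the background leaks through at full strength and the merge behaves additively; with the flooded A=1 the term is 0 and the foreground occludes as intended.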