I’m coming from the CGI world, and noise is as much of a concern for us as it is in photography, if not more.
Many denoising techniques have popped up over the years, and one particularly powerful one (the research in that field is so hot right now that many techniques, including machine learning, are in use) is this: instead of rendering one frame at, say, 10,000 samples per pixel (quite similar to your photon hit rate per sensor photodiode, with the difference that in rendering the SPP count is constant for every pixel while in photography it is random), we render two frames at a much lower SPP with two different ‘noise seeds’ and feed them to a denoising software that compares both shots to better determine what’s noise and what’s texture/image feature. The results are usually stunning (check out www.innobright.com). In CGI we can provide other buffers to help preserve image features even better, but just the two raw renders with different noise seeds already do a really good job.
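To make the idea concrete, here’s a minimal toy sketch in Python (NumPy only, entirely hypothetical numbers; real cross-seed denoisers like Altus are far more sophisticated): two renders of the same scene with different noise seeds agree on real features and disagree on noise, so their difference gives a per-pixel noise estimate that can drive adaptive smoothing.

```python
import numpy as np

rng_a = np.random.default_rng(1)  # noise seed A
rng_b = np.random.default_rng(2)  # noise seed B

# Hypothetical "ground truth" frame: a smooth gradient with one sharp edge
# (a real image feature we'd like to preserve).
truth = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
truth[:, 32:] += 0.5

# Two low-SPP renders of the same scene, differing only in their noise seed.
render_a = truth + rng_a.normal(0.0, 0.1, truth.shape)
render_b = truth + rng_b.normal(0.0, 0.1, truth.shape)

# Where the renders agree, the signal is likely a feature; where they
# disagree, the difference is (mostly) noise.
noise_estimate = 0.5 * np.abs(render_a - render_b)

# Averaging the two seeds already halves the noise variance...
avg = 0.5 * (render_a + render_b)

def box_blur(img, r=2):
    """Crude (2r+1)x(2r+1) box blur with edge padding."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + img.shape[0],
                       r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

# ...and the noise estimate blends the average toward a smoothed version
# only where the seeds disagree, leaving confident features mostly alone.
smooth = box_blur(avg)
weight = np.clip(noise_estimate / noise_estimate.max(), 0.0, 1.0)
denoised = (1.0 - weight) * avg + weight * smooth
```

Even this naive blend ends up closer to the ground truth than either single-seed render; the commercial tools extract far more from the same two inputs.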
As far as I understand Canon’s Dual Pixel Raw, the raw file contains an A+B result of both sub-pixels and an A-only version. I wonder if comparing A+B to A couldn’t help drive a denoise that is stronger yet more respectful of the image’s features.
Actually, when I get the chance to get my hands on a dual pixel raw file, I’ll see for myself whether Canon’s software lets you save A+B and A separately, extract B from those, and check the pair in Altus denoising (or similar). Right now I can’t, but the idea remains open for discussion/investigation.
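The extraction step itself is just arithmetic. Here’s a toy sketch (simulated data, not a real Dual Pixel Raw parser; whether Canon’s software actually exposes A+B and A this way is exactly the open question): recover B as (A+B) − A, which yields two independently-noisy views of the same scene, i.e. the same kind of input a cross-seed denoiser wants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a dual-pixel sensor: each photodiode half sees the same
# scene but collects its own photons, so the shot noise is independent.
truth_half = np.full((32, 32), 0.4)  # per-half signal level (hypothetical)
sub_a = truth_half + rng.normal(0.0, 0.05, truth_half.shape)
sub_b = truth_half + rng.normal(0.0, 0.05, truth_half.shape)

# What (as I understand it) Dual Pixel Raw stores: the combined A+B image
# and the A-only image.
ab_combined = sub_a + sub_b
a_only = sub_a

# Recovering B is a plain subtraction...
b_recovered = ab_combined - a_only

# ...which gives two independent "renders" of the same scene: compare them
# to separate noise (where they disagree) from features (where they agree).
noise_map = np.abs(a_only - b_recovered)
```

In practice the two halves see the scene through different sides of the lens, so there is a small parallax/defocus difference between A and B on out-of-focus areas, which a real tool would have to account for.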