The phones are catching up

Larger sensors have slower readouts, which is a big limiter here. This is partly because advanced manufacturing processes scale nonlinearly in cost with sensor area - it's why stacked BSI sensors (Sony's Exmor RS) are exceedingly rare in APS-C and full frame but have been standard in smartphones since before the Sony A9 was released.

IIRC Google was doing 60 FPS at full sensor resolution and bit depth 8+ years ago. (Edit: very few camera manufacturers even exceed 30 FPS full-res raw bursts right now.)

There are solutions for burst stacking in post with any camera, such as Tim Brooks’ HDR+ pipeline and kunzmi/ImageStackAlignator on GitHub (an implementation of Google’s handheld multi-frame super-resolution algorithm from the Pixel 3 and 4 cameras) - but these often choke on the excessive frame-to-frame movement that comes with the low burst rates of most larger cameras. Rotation, especially, makes the HDR+ tiled align-and-merge fall apart pretty badly.
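To make that failure mode concrete, here’s a toy sketch of the translation-only tile matching these pipelines are built around (function names and parameters are mine, and this skips the real coarse-to-fine alignment and robust frequency-domain merge): each reference tile is matched against each alternate frame by pure 2D translation, so a rotated frame needs a different shift per tile, and tiles whose true motion exceeds the search radius just can’t be aligned.

```python
import numpy as np

def align_tile(ref, alt, y, x, tile=16, radius=4):
    # Brute-force L2 search for the best pure-translation match of one tile.
    ref_t = ref[y:y + tile, x:x + tile]
    best_err, best = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + tile <= alt.shape[0] and xx + tile <= alt.shape[1]:
                err = float(np.sum((ref_t - alt[yy:yy + tile, xx:xx + tile]) ** 2))
                if err < best_err:
                    best_err, best = err, (dy, dx)
    return best

def merge_burst(frames, tile=16, radius=4):
    # Plain mean of each reference tile with its best match from every
    # alternate frame; HDR+ proper merges robustly per frequency band,
    # but the translation-only structure is the same.
    frames = [f.astype(np.float64) for f in frames]
    ref = frames[0]
    out = ref.copy()
    for y in range(0, ref.shape[0] - tile + 1, tile):
        for x in range(0, ref.shape[1] - tile + 1, tile):
            acc = ref[y:y + tile, x:x + tile].copy()
            for alt in frames[1:]:
                dy, dx = align_tile(ref, alt, y, x, tile, radius)
                acc += alt[y + dy:y + dy + tile, x + dx:x + dx + tile]
            out[y:y + tile, x:x + tile] = acc / len(frames)
    return out
```

At a phone’s 60 FPS burst rate the inter-frame motion stays within a few pixels per tile; at 5-10 FPS from a big camera, handshake and rotation easily blow past the search radius, and the merge starts ghosting.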

@priort IIRC Google didn’t start shipping MFSR until the Pixel 3, and at first only for Night Sight and Super Res Zoom - newer devices use MFSR instead of the legacy HDR+ tiled align-and-merge for all modes. The Pixel 4 also saves DNGs from the legacy HDR+ pipeline even when the JPEG was produced via MFSR - REALLY bad if you used any digital zoom at all, which is exactly where MFSR shines.
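The reason MFSR wins at digital zoom is that aligned raw samples get accumulated directly onto a finer output grid instead of being demosaiced and then upscaled. A toy sketch of that accumulation step (my own simplification - it assumes per-frame subpixel offsets are already known, and omits the real algorithm’s anisotropic kernels and robustness masking):

```python
import numpy as np

def mfsr_accumulate(frames, offsets, scale=2, sigma=0.5):
    # frames: list of (H, W) float arrays; offsets: per-frame (dy, dx)
    # subpixel shifts relative to the reference, e.g. from handshake.
    # Each input sample is splatted onto the high-res grid with a
    # Gaussian weight; the weighted mean is the super-resolved output.
    H, W = frames[0].shape
    num = np.zeros((H * scale, W * scale))
    den = np.zeros_like(num)
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    for frame, (dy, dx) in zip(frames, offsets):
        # Position of every input sample on the high-res grid.
        hy = (ys + dy) * scale
        hx = (xs + dx) * scale
        # Splat into a radius-1 neighborhood of high-res pixels.
        for oy in (-1, 0, 1):
            for ox in (-1, 0, 1):
                ty = np.round(hy).astype(int) + oy
                tx = np.round(hx).astype(int) + ox
                w = np.exp(-((ty - hy) ** 2 + (tx - hx) ** 2) / (2 * sigma ** 2))
                ok = (ty >= 0) & (ty < H * scale) & (tx >= 0) & (tx < W * scale)
                np.add.at(num, (ty[ok], tx[ok]), (w * frame)[ok])
                np.add.at(den, (ty[ok], tx[ok]), w[ok])
    return num / np.maximum(den, 1e-8)  # uncovered pixels fall to ~0 in this toy
```

Because the output grid is already at 2x, the handshake across the burst supplies genuinely denser sampling, so “zoom” comes from reconstruction rather than interpolation - which is why falling back to a legacy-pipeline DNG throws that advantage away.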
