Apple ProRAW: What's the outlook for future support in FOSS?

For the most part, the only actual role ML has in Google's pipeline is to fine-tune AWB (auto white balance).

The rest of the pipeline, both the old HDR+ pipeline and the new NightSight pipeline (most of which is now also applied to the non-NightSight flow), has no machine learning role.

The legacy HDR+ pipeline even has an open-source implementation: HDR+ Pipeline

The new NightSight pipeline has three notable changes:

  1. Multi-frame super-resolution - this is the main reason any recent mobile pipeline delivers an already-demosaiced image. Companies are being extra secretive about these algorithms (Google, notably, explicitly introduced errors into their published paper and somehow got ACM to accept a paper with blatant errors… The errors are discussed in GitHub - kunzmi/ImageStackAlignator: Implementation of Google's Handheld Multi-Frame Super-Resolution algorithm (from Pixel 3 and Pixel 4 camera), which is an open-source implementation of the new pipeline)
  2. A particularly fancy AWB algorithm - ML does have a role in this one
  3. Enhancements to the tone-mapping algorithm that alter its behavior when illumination falls below a certain absolute value in lux. But overall it's still just a variation of Mertens exposure fusion operating solely on the luminance channel (a rough sketch of that idea follows below).
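For anyone wondering what "Mertens exposure fusion on the luminance channel" boils down to, here is a minimal single-scale sketch in Python/NumPy. It's a toy, not Google's actual implementation: the real algorithm also uses contrast and saturation weights and Laplacian-pyramid blending, and the function name and `sigma` parameter here are my own assumptions for illustration.

```python
import numpy as np

def fuse_luminance(lum_stack, sigma=0.2):
    """lum_stack: sequence of luminance frames (same shape), each in [0, 1]."""
    lum_stack = np.asarray(lum_stack, dtype=np.float64)

    # Well-exposedness weight: favour pixels near mid-grey in each exposure.
    weights = np.exp(-0.5 * ((lum_stack - 0.5) / sigma) ** 2)

    # Normalize the weights across the stack so they sum to 1 per pixel.
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)

    # Weighted average across the exposure stack.
    return (weights * lum_stack).sum(axis=0)

# Toy usage: three synthetic "exposures" of a simple gradient scene.
base = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
stack = [np.clip(base * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
fused = fuse_luminance(stack)
print(fused.shape, float(fused.min()), float(fused.max()))
```

The point is just that each pixel of the result is a per-pixel weighted blend of the differently exposed frames, with the weights favouring whichever exposure renders that pixel closest to mid-grey.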

Great update and explanation, Andy… thanks

What's the latest status for RawTherapee opening ProRAW?
I failed to launch a new build: Test of RawTherapee on MacOS 10.14-15, 11, 12β Mojave-Catalina-Big Sur-Monterey (intel code) - #6 by HIRAM

I believe the functionality was added in May.

On encode.su I've made the point before that the (by now old) JPEG XR would be perfect for this.

Its 16-bit support is nice, and it even supports faux floating point, which in this case basically means it can encode at 16 bit but set a black and a white point. So your camera could write an image containing 'overbrights', letting you reduce highlights afterwards.
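To make that "black/white point with headroom" idea concrete, here's a toy illustration (not JPEG XR itself; the code values chosen are arbitrary assumptions): display white is mapped to a code value well below the 16-bit maximum, so scene values brighter than 1.0 survive the round trip and can be pulled back later.

```python
import numpy as np

BLACK_POINT = 0        # assumed code value for scene black
WHITE_POINT = 32768    # assumed code value for display white; 16-bit max is 65535

def encode(linear):
    """Map linear scene values into 16-bit codes with highlight headroom."""
    codes = BLACK_POINT + linear * (WHITE_POINT - BLACK_POINT)
    return np.clip(np.round(codes), 0, 65535).astype(np.uint16)

def decode(codes):
    """Recover linear scene values; anything above 1.0 is an 'overbright'."""
    return (codes.astype(np.float64) - BLACK_POINT) / (WHITE_POINT - BLACK_POINT)

scene = np.array([0.0, 0.18, 1.0, 1.8])   # 1.8 is a highlight above display white
codes = encode(scene)
print(codes)           # 1.0 lands on 32768, 1.8 on ~58982 -- still in range
print(decode(codes))   # the overbright value round-trips, ready for recovery
```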

16-bit TIFF? I guess not. But imagine that the JPEGs it writes are actually 16-bit files that could contain the whole captured range, while still being 'ready for display' right from the file.

Yes, I would use it then. Although my Sony a7m2 is just too old to have the good in-camera noise reduction and sharpening, so I would still use raw where I want to extract everything.

But files as big as JPEGs, quick to decode even on slower hardware, ready to view, yet with enough data to change exposure and white balance afterwards. Sounds awesome to my ears.

And the format is even open and standardized, and doesn't use algorithms much different from basic JPEG (basically larger DCT blocks and increased memory requirements).

JPEG XL would also work quite nicely, but it's new and modern, with different algorithms.

Formats like WebP, AVIF, and HEIF can be at most 12-bit, I believe (correct me if I'm wrong).


Not sure, although an intelligently chosen logarithmic encoding at 12 bits will in most cases be more than sufficient to remain visually indistinguishable from linear 16-bit, even when abused to an extreme.

At least as long as you’re recording in a wide enough gamut that there isn’t gamut clipping…
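To put a rough number on that, here's a quick comparison sketch. The log curve, the ~10-stop range, and the floor value are my own assumptions, not any specific camera or file-format encoding; it just shows that a 12-bit log quantization can keep relative error below 16-bit linear in the shadows.

```python
import numpy as np

def quantize_linear(x, bits):
    """Round linear values to the nearest of 2**bits - 1 evenly spaced codes."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def quantize_log(x, bits, floor=1e-4):
    """Map [floor, 1] logarithmically onto the available codes, then invert."""
    levels = 2 ** bits - 1
    logx = np.log(np.clip(x, floor, 1.0) / floor) / np.log(1.0 / floor)
    codes = np.round(logx * levels) / levels
    return floor * (1.0 / floor) ** codes

x = np.geomspace(1e-3, 1.0, 10000)            # scene values spanning ~10 stops
err_lin16 = np.abs(quantize_linear(x, 16) - x) / x
err_log12 = np.abs(quantize_log(x, 12) - x) / x
print(f"max relative error, 16-bit linear: {err_lin16.max():.2e}")
print(f"max relative error, 12-bit log:    {err_log12.max():.2e}")
```

The linear encoding's relative error blows up in the deep shadows, while the log encoding's relative error stays roughly constant across the whole range, which is why 12 bits of well-placed log codes can hold up against 16 bits of linear ones.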

Looks like one part Apple doing something interesting, and three parts marketing. They named it after their "popular" video codec, which is not the easiest to work with on Linux, shall we say…

My bet is that working with it using open-source tools might not be worth the squeeze, but your mileage may vary.

I still need to find an environment that isn't "just so I can do ProRes RAW video" for doing video work. The options, other than Apple's, are not part of my usual workflow.

Again, I suspect the same goes for anything Apple does in photography. If you use their hardware and tools, it's easy to get decent results. Anything else… you end up asking yourself, "why am I tying myself into a knot for this?"
