Apple ProRAW: What are the prospects for future support in FOSS?


I totally understand the difference between what you get with a proper camera (DSLR, mirrorless, etc.) and with any phone on the market, but still, I do like to shoot with my phone when that’s the only thing I have to capture a scene, and then try to get the best out of it. That’s why I’m interested in RAW support on various phones.

With many phones, the output is a standard DNG file, which is usually well supported by various FOSS utilities. Now, Apple has announced the ProRAW format with the iPhone 12 Pro, and it doesn’t seem to be standard at all. More info here:
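One reason standard DNG is so widely supported is that it is just a TIFF container, so even a generic tool can sanity-check a file by parsing the fixed 8-byte TIFF header. Here is a minimal sketch of that check; the function name is illustrative and not from any particular library (whether ProRAW ends up TIFF-based is exactly the open question here):

```python
import struct

def tiff_header_info(data: bytes):
    """Parse the first 8 bytes of a TIFF-based file (DNG is TIFF-based).

    Returns (byte_order, magic, first_ifd_offset), or raises ValueError
    if the data does not look like a TIFF container.
    """
    if len(data) < 8:
        raise ValueError("too short to be a TIFF file")
    order = data[:2]
    if order == b"II":
        endian = "<"   # little-endian ("Intel")
    elif order == b"MM":
        endian = ">"   # big-endian ("Motorola")
    else:
        raise ValueError("not a TIFF container")
    # 2-byte magic number (always 42), then 4-byte offset of the first IFD
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return (order.decode("ascii"), magic, ifd_offset)
```

If a future ProRAW file passes this check, it is at least a TIFF-family format and existing DNG tooling has a fighting chance; if not, FOSS developers would be starting from scratch.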

I wonder if FOSS devs will be able to support this kind of new RAW format and how hard it could be to do so… Any insight on this?



Is it just me, or are they not really telling people what ProRAW actually is? I see marketing lingo thrown around but no explanation of what gets saved and how it’s to be interpreted. Is ProRAW just a fancy name for HDR10? :man_shrugging:

ProRaw “provides many of the benefits of our multi-frame image processing and computational photography, like Deep Fusion and Smart HDR, and combines them with the depth and flexibility of a raw format”.

My best guess is that it is an image cooked with Apple’s algorithms, but 16-bit, and maybe with HDR and not too much compression, with the result that it can be processed much more than a standard JPG. It’s an interesting concept.
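A toy calculation illustrates why that 16-bit guess would matter. If you push shadows up by a few stops, an 8-bit capture posterizes (only a handful of distinct levels survive in the shadows) while a 16-bit capture keeps many more. All the numbers below are illustrative, not anything Apple has documented:

```python
def distinct_levels_after_push(bits, stops, lo=0.0, hi=0.01, samples=1000):
    """Count distinct 8-bit output levels in the shadow range [lo, hi]
    after quantizing at `bits` and pushing exposure by `stops` stops."""
    scale = (1 << bits) - 1
    gain = 2 ** stops
    seen = set()
    for i in range(samples):
        x = lo + (hi - lo) * i / (samples - 1)
        code = round(x * scale)                 # quantize at capture bit depth
        pushed = min(code / scale * gain, 1.0)  # +stops exposure push
        seen.add(round(pushed * 255))           # final 8-bit display output
    return len(seen)
```

With a +4-stop push on the bottom 1% of the tonal range, the 16-bit version retains roughly ten times as many distinct shadow levels as the 8-bit one, which is the practical sense in which a 16-bit file "can be processed much more" than a JPG.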

I’m wondering when the first multi-sensor, multi-lens camera will appear (as a standalone camera, not a phone).

After reading the marketing prose, I have a similar view. Minus the computational pre-conditioning, it seems to want to solve the same ‘problem’ Adobe aimed at with DNG, providing all the look stuff in the metadata.

If they don’t produce a scene-linear RGB in ACES2065-1 colorspace in some high-bit depth, the film folks won’t think it goes “Pro” far enough…

Indeed, how “pro” it will be is the big question… Of course, after “will they document the format”.

Apple could document the format, so software developers could easily read the files, and do clever stuff with that wonderful data, which will make the devices more popular and Apple will sell bucketloads.

Or Apple could make the format proprietary and support access only through an API that requires proprietary closed-source Apple software to be incorporated into end products. By so doing, the hardware devices will appeal only to people who don’t need raw data, and a few developers who buy into the Apple universe.

I have no insight into Apple, but I know which I would bet on.


Apple is a bit of a mystery. Sometimes it pushes the envelope, develops or upholds useful standards and does good. Other times it is obstinate, rude and downright obtuse.

I have a feeling that this time around it is a rebranding or repackaging of existing standards. Apple is too big nowadays to be interesting in a good way.

From the company that decided to ditch the standard headphone jack for something proprietary, I know where I’m betting my money.


It was tried, and it failed.

Well, smartphones are proving that it is indeed possible. Maybe Light was ahead of its time, or too ambitious, or poorly managed. I am sure multiple capture will play a large role in future photography.

Anyway, thanks for the link, I was not aware of that story :slight_smile:


Light and similar products have had the same issue as the topic of this thread: proprietary, opaque software and file formats. Having multiple focus points or depth imagery is neat and all, but if we can’t open and preview the file with a typical raw processor or a generic math / image-processing package, then it is a nonstarter.