Apple ProRAW: What's the perspective for future support in FOSS?


I totally understand the difference between what you get with a proper camera (DSLR, mirrorless, etc.) and with any phone on the market, but I still like to shoot with my phone when that’s the only thing I have to capture a scene, and then try to get the best out of it. That’s why I’m interested in RAW support on various phones.

With many phones, the output is a standard DNG file, which is usually well supported by various FOSS utilities. Now, Apple has announced the ProRAW format with the iPhone 12 Pro, and it doesn’t seem to be standard at all. More info here:

I wonder if FOSS devs will be able to support this kind of new RAW format and how hard it could be to do so… Any insight on this?



Is it just me, or are they not really telling people what ProRaw actually is? I see marketing lingo thrown around but no explanation what gets saved and how it’s to be interpreted. Is ProRaw just a fancy name for HDR10? :man_shrugging:

ProRaw “provides many of the benefits of our multi-frame image processing and computational photography, like Deep Fusion and Smart HDR, and combines them with the depth and flexibility of a raw format”.

My best guess is that it is an image cooked with Apple’s algorithms, but 16-bit, and maybe with HDR and not too much compression, with the result that it can be processed much more than a standard JPG. It’s an interesting concept.
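The “16-bit with room to push” guess above can be illustrated with a toy calculation (my own simplification, not Apple’s actual pipeline): quantize the same dark linear gradient at 8 and 16 bits, push it three stops, and count how many distinct levels survive the edit.

```python
# Rough sketch of the "more headroom to edit" idea: quantize a dark linear
# gradient to 8 and 16 bits, push exposure +3 stops, count surviving levels.
# Purely illustrative; ignores gamma encoding and is not how ProRAW works.
def push_stops(values, bit_depth, stops):
    max_code = 2 ** bit_depth - 1
    pushed = set()
    for v in values:
        code = round(v * max_code)                     # quantize to bit depth
        pushed.add(min(code * 2 ** stops, max_code))   # +stops, clipped
    return len(pushed)

shadows = [i / 1000 * 0.05 for i in range(1000)]  # darkest 5% of linear range
# The 16-bit file keeps far more distinct shadow levels after the push,
# which is what lets you edit heavily without visible posterization.
assert push_stops(shadows, 16, 3) > push_stops(shadows, 8, 3)
```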

I’m wondering when the first multi-sensor, multi-lens camera will appear (as a standalone camera, not a phone).

After reading the marketing prose, I have a similar view. Minus the computational pre-conditioning, it seems to want to solve the same ‘problem’ Adobe aimed at with DNG, providing all the look stuff in the metadata.

If they don’t produce a scene-linear RGB in ACES2065-1 colorspace in some high-bit depth, the film folks won’t think it goes “Pro” far enough…

Indeed, how “pro” it will be is the big question… Of course, after “will they document the format”.

Apple could document the format, so software developers could easily read the files, and do clever stuff with that wonderful data, which will make the devices more popular and Apple will sell bucketloads.

Or Apple could make the format proprietary and support access only through an API that requires proprietary closed-source Apple software to be incorporated in end products. By doing so, the hardware devices would appeal only to people who don’t need raw data, and a few developers who buy into the Apple universe.

I have no insight into Apple, but I know which I would bet on.


Apple is a bit of a mystery. Sometimes it pushes the envelope, develops or upholds useful standards and does good. Other times it is obstinate, rude and downright obtuse.

I have a feeling that this time around it is a rebranding or repackaging of existing standards. Apple is too big nowadays to be interesting in a good way.

From the company that decided to ditch the standard headphone jack for something proprietary, I know where I’m betting my money.


It was tried and failed.


Well, smartphones are proving that it is indeed possible. Maybe Light was ahead of its time, or too ambitious, or poorly managed. I am sure multiple capture will play a large role in future photography.

Anyway, thanks for the link, I was not aware of that story :slight_smile:


Light and similar products have had the same issue as the topic of this thread: proprietary, opaque software and file formats. Having multiple focus points or depth imagery is neat and all, but if we can’t open and preview the file with a typical raw processor or a generic math/image-processing package, then it is a nonstarter.

And here we have more information:


Several smartphones can capture RAW images. Some of them (at least the Pixels, AFAIK) record not “just” the raw sensor information, but the stacked and shifted data. In the newer Pixels, the RAW files are not mosaiced: they use subpixel-aligned exposures to capture full color at every pixel. Very impressive. These files are actually a joy to work with, as they have greatly increased dynamic range and reduced noise compared to “straight” RAWs.
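The noise benefit of stacking is easy to see in a toy model (synthetic Gaussian noise, my own illustration, not Google’s actual align-and-merge pipeline): averaging N aligned frames cuts random noise by roughly the square root of N.

```python
# Toy model of the burst "align-and-merge" noise benefit: averaging N
# aligned frames reduces random noise by about sqrt(N). Synthetic data only.
import random

random.seed(0)

def noisy_frame(signal, sigma, n_px=10_000):
    """One flat 'frame' of n_px pixels: constant signal plus Gaussian noise."""
    return [signal + random.gauss(0, sigma) for _ in range(n_px)]

def merge(frames):
    """Per-pixel average of already-aligned frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def stddev(pixels):
    m = sum(pixels) / len(pixels)
    return (sum((p - m) ** 2 for p in pixels) / len(pixels)) ** 0.5

single = noisy_frame(0.5, sigma=0.05)
merged = merge([noisy_frame(0.5, sigma=0.05) for _ in range(9)])
# With 9 frames, the noise should drop by about 3x.
assert stddev(merged) < stddev(single) / 2.5
```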

As far as Apple’s marketing text goes, it sounds like they are doing that as well. But we’ll have to wait and see what Apple will actually put into the DNG files.

Sadly, that’s not the case: the DNGs are generated using the “legacy” HDR+ align-and-merge algorithm, not the new multi-frame super-resolution (MFSR) system.

The end result is, for example, that if you use digital zoom on a Pixel 4 XL (which relies on MFSR), the JPEG is incredibly detailed, but the DNG is just a small crop of the sensor and nothing more.

At least for unzoomed images, the legacy align-and-merge leads to quite good DNGs. I still need to find a tonemapping solution that works as well as Google Night Sight. Google actually published their algorithmic improvements in a separate paper, so I could in theory implement the changes, but that would require implementing Mertens exposure fusion in RawTherapee, which would be a lot of work, and I just haven’t had the motivation to do it.
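For anyone curious what Mertens-style exposure fusion involves, here is a very reduced grayscale sketch of the core idea: per-pixel “well-exposedness” weights, normalized across exposures, then a weighted blend. The real algorithm also uses contrast and saturation weights and Laplacian-pyramid blending; this is my own simplification, not RawTherapee or Google code.

```python
# Minimal grayscale sketch of Mertens-style exposure fusion: weight each
# pixel by how close it is to mid-gray, normalize the weights across the
# exposure stack, and blend. Real implementations add contrast/saturation
# weights and pyramid blending to avoid seams.
import math

def well_exposedness(v, sigma=0.2):
    # Favor pixels near mid-gray (0.5 in a 0..1 range).
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """exposures: list of images (flat lists of floats in 0..1), same length."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) + 1e-12 for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark = [0.02, 0.10, 0.45]    # underexposed frame
bright = [0.40, 0.80, 0.98]  # overexposed frame
out = fuse([dark, bright])
# Each fused pixel leans toward whichever exposure is closer to mid-gray.
assert abs(out[0] - 0.40) < abs(out[0] - 0.02)
```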


Thank you for the clarification!

I think this post is quite interesting:

I thought the blog post made ProRAW sound completely uninteresting, save for storing the depth map information. It’s a demosaiced linear TIFF… Snooze.
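“Demosaiced linear TIFF” is actually something a raw processor can detect from the file itself: per the DNG spec, the standard TIFF PhotometricInterpretation tag distinguishes mosaiced CFA data (value 32803) from demosaiced LinearRaw data (value 34892). A sketch of that check, with the IFD parsing stubbed out as a plain dict standing in for already-read tags:

```python
# Sketch: distinguishing a mosaiced DNG from a demosaiced ("linear DNG",
# ProRAW-style) one via the TIFF PhotometricInterpretation tag. Per the DNG
# spec: 32803 = CFA (mosaiced), 34892 = LinearRaw (demosaiced). Actual IFD
# parsing is stubbed; `ifd_tags` stands in for a parsed tag dictionary.
PHOTOMETRIC = 262  # standard TIFF tag ID for PhotometricInterpretation
CFA, LINEAR_RAW = 32803, 34892

def raw_kind(ifd_tags):
    value = ifd_tags.get(PHOTOMETRIC)
    if value == CFA:
        return "mosaiced"      # still needs demosaicing
    if value == LINEAR_RAW:
        return "demosaiced"    # ProRAW-style linear DNG
    return "not raw"

assert raw_kind({PHOTOMETRIC: LINEAR_RAW}) == "demosaiced"
```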

The new DNG spec has gain maps and semantic masks. See

Of course, this isn’t Raw data. This is results from the computational photography, not the inputs.

If Adobe have published the full v1.6.0.0 DNG spec, I can’t find it.

It looks like ProRAW actually IS just DNG 1.6, and Apple and Adobe worked on the spec extension together…
