Apple ProRAW: What's the perspective for future support in FOSS?

A post-demosaic RGB DNG, at that…

As food for thought, and maybe more relatable to some here than Apple stuff: if your camera offered 16-bit TIFF output, would you consider using it and ditching raw processing?

If the in-camera demosaic was good, and truly no other cooking was done to the image, probably yes.

On the other hand, it’d save only one step in processing, and file size is likely to be larger.


Yes, but it's not only the demosaic; it presumably includes all the multi-frame computational photography goodness, like on the Pixels…


Absolutely not!


If the demosaic is perfect, the file size is good (lossless and lossy compression would both be nice), and any multi-frame computational stuff shows imperceptible artefacting: yes, absolutely. It doesn’t need to be TIFF; I’d also take compressed 16-bit half-float EXR (which has room for additional layers, e.g. depth, multi-frame…).


That evaluates to a no :wink:


At this point, I’d say not, as I’m happy with the control I have over my images. But, if I were back at the decision point where I decided to eschew JPEGs, I’d give it some thought. At the time, the large issue for me was how badly JPEGs degraded as I worked shadows, and 16-bit depth would have taken care of that. Now, with the Z 6 and the 24-70 f4, in-camera lens correction would be interesting to consider…

Is there a “perfect” demosaic process? Is it even possible? I read somewhere that demosaicing astronomical images is best done with a non-standard algorithm. (I’m currently trying to snap the Saturn/Jupiter conjunction.)

And then we have denoising, which I prefer to do before demosaicing. But how much to denoise is an aesthetic decision, not a simple automatic technical matter.

How do you do that?

I know! :smiley:
But I do not think that everyone is as pedantic as I am. So in evaluating what might be a viable file format, that has to be factored in somehow. Requirements for an ‘ideal’ file format hinge not necessarily on my ludicrous demosaicing requirement. For some the absence of artefacts is enough. If I phrase it like that, anyone working on such a format can deduce when I might be happy (very late in the game that is). With multi-frame fusion…maybe someone can come up with something that satisfies my needs. Since I will almost never know the ground truth, engineers could design ‘visually indistinguishable’ and make me think it’s perfect.
On the other hand, many file formats that I use don’t meet my list above…and I still use them. So while I want ‘perfect’ I am seemingly not too hung up about it either. :man_shrugging:

p.s.: I think the Fuji medium format cameras (Bayer filter without AA) can output TIFF from their in-camera processing. Maybe I have to look at one of those to see whether I could be happy with that!

It’s sparse sampling… so I think it’s impossible to recover the ground truth; you can only get close to it.

I use dcraw to make four linear grayscale images, one per RGGB channel, denoise each, and build a DNG file with a mosaic image, which can then be demosaiced.
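The channel-splitting step of that workflow can be sketched in numpy. This is a minimal illustration, not @snibgo's exact pipeline: it assumes an RGGB pattern and that the mosaic has already been decoded to a plain 2D array (e.g. via `dcraw -D -4 -T`); rebuilding the DNG afterwards is omitted.

```python
import numpy as np

def split_rggb(mosaic):
    """Split a Bayer mosaic (2D array, RGGB layout assumed)
    into four half-resolution grayscale planes."""
    r  = mosaic[0::2, 0::2]  # red sites
    g1 = mosaic[0::2, 1::2]  # green sites on red rows
    g2 = mosaic[1::2, 0::2]  # green sites on blue rows
    b  = mosaic[1::2, 1::2]  # blue sites
    return r, g1, g2, b

def merge_rggb(r, g1, g2, b):
    """Rebuild the mosaic from the four (possibly denoised) planes."""
    h, w = r.shape
    mosaic = np.empty((2 * h, 2 * w), dtype=r.dtype)
    mosaic[0::2, 0::2] = r
    mosaic[0::2, 1::2] = g1
    mosaic[1::2, 0::2] = g2
    mosaic[1::2, 1::2] = b
    return mosaic
```

Each plane can then be denoised independently before merging back into a mosaic for demosaicing.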

I show how I do this, with some noise-reduction techniques, in Camera noise. My current favourite is “masked push towards mean”. New techniques are being developed all the time, I haven’t tried them all, and I’ll probably have a new favourite sometime next year.
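For anyone curious what a “masked push towards mean” might look like, here is one plausible reading of the name, not necessarily @snibgo's exact method: pixels whose deviation from the local mean looks like noise (small relative to the local standard deviation) are pulled partway toward that mean, while larger deviations, which likely belong to edges, are left alone. All parameter names and thresholds here are illustrative assumptions.

```python
import numpy as np

def box_mean(img, radius=1):
    """Local mean over a (2*radius+1)^2 box, edge-replicated."""
    p = np.pad(img.astype(np.float64), radius, mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += p[radius + dy : radius + dy + img.shape[0],
                     radius + dx : radius + dx + img.shape[1]]
            n += 1
    return acc / n

def masked_push_towards_mean(img, strength=0.5, k=2.0, radius=1):
    """Pull pixels toward their local mean, but only where the
    deviation is below k * local std (i.e. looks like noise)."""
    img = img.astype(np.float64)
    mean = box_mean(img, radius)
    var = box_mean(img * img, radius) - mean * mean
    std = np.sqrt(np.clip(var, 0, None))
    dev = img - mean
    mask = np.abs(dev) < k * std   # likely noise, not an edge
    out = img.copy()
    out[mask] -= strength * dev[mask]
    return out
```

Run on one of the grayscale channel planes before rebuilding the mosaic, this reduces flat-field noise while leaving strong edges untouched.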


Pentax cameras can as well. I think all of them have for a long time. Probably 8-bit though, but I don’t know.


As a pedantic technical point: a DNG file is a TIFF file.

Checked, and the Pentax TIFFs are indeed only 8-bit.

@snibgo except for the DNG TIFFs!

@snibgo I find that dcraw has trouble opening many of the Play Raws from the past couple of years. It could be something wrong with my installation, or incompatible filter patterns or formats. What are other ways to retrieve the unprocessed pixels?

@afre: I last updated the dcraw source code on my machine in 2017. It can’t read some of the Play Raw files. Sometimes the version distributed with ImageMagick does the trick. Sometimes that doesn’t work, but IM compiled against libraw does, though that always demosaics the image. When those all fail, I give up.

I just came across a nice article… it looks like a DNG file with some new tags, and it stores the data pre-demosaiced. The reasons are discussed, and it is interesting. At first I thought, well, that removes flexibility, but they could also use machine learning to select the demosaicing… in any case I found this an interesting read, especially the part about the new HDR gain table data… I think they will, for now, keep it an open format…

Edit: I see this was already posted in the thread… a nice breakdown by a third-party app developer nonetheless.


It was an interesting article… it looks like a lot of extra data that provides some new angles for raw processors to use. The gain maps sounded interesting…

If it is anything like Google’s implementation, machine learning has nothing to do with it.

The reality is: Storing ALL data isn’t practical when you are stacking a burst of 10+ images.

A lot of people complaining about loss of control over demosaicing don’t seem to understand that, for any mobile device with acceptable image quality, it is no longer a case of tweaking demosaicing algorithms. It is a case of adaptive stacking of a multi-frame burst, with 10+ images being standard nowadays. For Google, who have published their whitepaper, it’s additionally super-resolution, because human hand tremor slightly shifts the camera between frames.

Working with a single image gives you the traditionally unacceptable, poor DR, and saving every image in the burst is simply not practical.
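To see why stacking a burst beats a single frame, here is a toy sketch of the merge step. A real pipeline (e.g. Google's HDR+) does tile-based, subpixel, motion-robust merging; this deliberately simplified version only aligns by known integer shifts and averages, which is enough to show the noise reduction (averaging N frames cuts noise std by roughly sqrt(N)).

```python
import numpy as np

def stack_burst(frames, shifts):
    """Align frames by known integer pixel shifts and average them.

    frames: list of 2D arrays (one per burst exposure)
    shifts: list of (dy, dx) camera offsets applied to each frame,
            which we undo here before accumulating.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f, (dy, dx) in zip(frames, shifts):
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))  # undo the shift
    return acc / len(frames)
```

Real implementations must also reject tiles where the scene itself moved, which is the hard part and the reason adaptive merging replaces any simple average.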


I think they were suggesting that the phone could do a frequency analysis on the image, or distinguish between night and day, and then tweak the demosaicing, or the mix of demosaicing algorithms, used to process the image. They threw around the words “machine learning”… I guess everyone has their own threshold for what counts as machine learning. On-the-fly optimized demosaicing was one example.