Apple ProRAW: What's the outlook for future support in FOSS?

Is there a “perfect” demosaic process? Is it even possible? I read somewhere that demosaicing astronomical images is best done with a non-standard algorithm. (I’m currently trying to snap the Saturn/Jupiter conjunction.)

And then we have denoising, which I prefer to do before demosaicing. But how much to denoise is an aesthetic decision, not a simple automatic technical matter.

How do you do that?

I know! :smiley:
But I do not think that everyone is as pedantic as I am. So in evaluating what might be a viable file format, that has to be factored in somehow. Requirements for an ‘ideal’ file format don’t necessarily hinge on my ludicrous demosaicing requirement; for some, the absence of artefacts is enough. If I phrase it like that, anyone working on such a format can deduce when I might be happy (very late in the game, that is). With multi-frame fusion…maybe someone can come up with something that satisfies my needs. Since I will almost never know the ground truth, engineers could design for ‘visually indistinguishable’ and make me think it’s perfect.
On the other hand, many file formats that I use don’t meet my list above…and I still use them. So while I want ‘perfect’ I am seemingly not too hung up about it either. :man_shrugging:

p.s.: I think the Fuji medium format cameras (Bayer filter without AA) can output TIFF from their in-camera processing. Maybe I have to look at one of those to see whether I could be happy with that!

It’s sparse sampling…so I think it’s impossible to recover the ground truth; you can only get close to it.

I use dcraw to make four linear grayscale images, one per RGGB channel, denoise each, and build a DNG file with a mosaic image, which can then be demosaiced.
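For illustration, here is a rough Python equivalent of the splitting step, using rawpy rather than my actual dcraw/ImageMagick workflow; it is only a sketch, and it assumes an RGGB pattern and a placeholder file name.

```python
# Split a raw mosaic into four linear grayscale planes, one per CFA
# site. rawpy wraps LibRaw; "IMG_0001.dng" is a placeholder name.
import numpy as np
import rawpy

with rawpy.imread("IMG_0001.dng") as raw:
    mosaic = raw.raw_image_visible.astype(np.float32)

# Assuming RGGB: every other row/column selects one channel's sites.
planes = {
    "R":  mosaic[0::2, 0::2],
    "G1": mosaic[0::2, 1::2],
    "G2": mosaic[1::2, 0::2],
    "B":  mosaic[1::2, 1::2],
}

# Denoise each plane here, then re-interleave into a mosaic and
# rebuild a DNG so it can be demosaiced as usual.
for name, plane in planes.items():
    print(name, plane.shape)
```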

I show how I do this, with some noise reduction techniques, in Camera noise. My current favourite is “masked push towards mean”. New techniques are being developed all the time; I haven’t tried them all, and I’ll probably have a new favourite sometime next year.
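In outline, a masked push towards the local mean looks something like this; it is a simplified sketch rather than my actual script, and the parameters are invented.

```python
# Push each pixel toward its local mean, masked so that strong edges
# are pushed less than flat (noisy) areas. Assumes a float plane
# normalized to [0, 1].
import numpy as np
from scipy.ndimage import uniform_filter

def masked_push_towards_mean(plane, radius=2, strength=0.5, edge_sigma=0.02):
    mean = uniform_filter(plane, size=2 * radius + 1)
    deviation = np.abs(plane - mean)
    # Mask is ~1 where the pixel sits close to its neighbourhood mean
    # (likely noise) and falls toward 0 at strong edges.
    mask = np.exp(-((deviation / edge_sigma) ** 2))
    return plane + strength * mask * (mean - plane)
```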

3 Likes

Pentax cameras can as well; I think all of them have been able to for a long time. Probably 8-bit though, but I don’t know.

1 Like

As a pedantic technical point: a DNG file is a TIFF file.

I checked, and the Pentax TIFFs are indeed only 8-bit.

@snibgo except for the DNG TIFFs!

@snibgo I find that dcraw has trouble opening many of the Play Raws from the past couple of years. It could be something wrong with my installation, or incompatible filter patterns or formats. What are other ways to retrieve the unprocessed pixels?

@afre: I last updated dcraw’s source code on my machine in 2017. It can’t read some of the Play Raw files. Sometimes the version distributed with ImageMagick does the trick. Sometimes that doesn’t work but IM compiled with libraw does, though that route always demosaics the image. When all of those fail, I give up.

I just came across a nice article…it looks like a DNG file with some new tags, and it stores the data pre-demosaiced. The reasons are discussed, and it is interesting: at first I thought that removes flexibility, but they could also use machine learning to select the demosaicing. In any case I found this an interesting read, especially the part about the new HDR gain table data…I think they will for now keep it an open format: https://blog.halide.cam/understanding-proraw-4eed556d4c54

Edit: I see this was already posted in the thread…a nice breakdown by a third-party app developer nonetheless.

1 Like

It was an interesting article…it looks like lots of extra data to provide some new angles for raw processors to use. The gain maps sounded interesting…

If it is anything like Google’s implementation, machine learning has nothing to do with it.

The reality is: Storing ALL data isn’t practical when you are stacking a burst of 10+ images.

A lot of people complaining about loss of control over demosaicing don’t seem to understand that for any mobile device with acceptable image quality, it is no longer a case of tweaking demosaicing algorithms; it is a case of adaptive stacking of a multiframe burst, with 10+ images being standard nowadays. For Google, who have published their whitepaper, it’s additionally superresolution, because human hand tremors slightly shift the camera between frames.

Working with a single image gives you the traditional, unacceptably poor DR, and saving every image in the burst is simply not practical.
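To show what “stacking” means at its simplest, here is a toy illustration: align every frame of the burst to the first by a global translation and average. Real pipelines do robust per-tile alignment, merge heuristics, and superresolution; this is only the bare idea, built from off-the-shelf scipy/skimage calls.

```python
# Toy burst stack: estimate a global (dy, dx) shift per frame via
# phase correlation, undo it, and average all frames.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def stack_burst(frames):
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for frame in frames[1:]:
        offset, _, _ = phase_cross_correlation(ref, frame)
        acc += nd_shift(frame.astype(np.float64), offset)
    return acc / len(frames)
```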

2 Likes

I think they were suggesting that the phone could do a frequency analysis on the image, or distinguish between night and day, and then tweak the demosaicing (or mix of demosaicing algorithms) used to process the image…they threw around the words “machine learning”…I guess everyone has their own threshold for what counts as machine learning…on-the-fly optimized demosaicing was one example.

For the most part, the only actual role ML has in Google’s pipeline is to fine-tune AWB.

The rest of the pipeline, both the old HDR+ pipeline and the new NightSight pipeline (most of which is also applied to the non-NightSight flow now), has no machine learning role.

The legacy HDR+ pipeline even has an open-source implementation - HDR+ Pipeline

The new NightSight pipeline has three notable changes:

  1. Multiframe superresolution - this one is inherently why any recent mobile pipeline provides a demosaiced image. Companies are being extra secretive about these algorithms (Google, of note, explicitly introduced errors into their published paper and somehow got ACM to accept a paper with blatant errors… The errors are discussed in GitHub - kunzmi/ImageStackAlignator: Implementation of Google's Handheld Multi-Frame Super-Resolution algorithm (from Pixel 3 and Pixel 4 camera), which is an open-source implementation of the new pipeline)
  2. A particularly fancy AWB algorithm - ML does have a role in this one
  3. Enhancements to the tonemapping algorithm that alter its behavior when illumination is below a certain absolute value in lux. But overall it’s still just a variation of Mertens exposure fusion operating solely on the luminance channel (sketched below).
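To make that last point concrete, here is a minimal single-scale version of Mertens-style weighting on luminance alone. The real algorithm (and presumably Google’s variant) blends with Laplacian pyramids, which this omits; `frames` is a hypothetical list of float arrays in [0, 1].

```python
# Single-scale Mertens-style fusion on luminance only: weight each
# frame by "well-exposedness" (closeness to mid-grey), normalize the
# weights across frames, and take the weighted sum per pixel.
import numpy as np

def fuse_luminance(frames, sigma=0.2):
    stack = np.stack(frames)                  # (N, H, W), values in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-12    # normalize across frames
    return (weights * stack).sum(axis=0)
```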

Great update and explanation, Andy…thanks

What’s the latest status right now for RawTherapee opening ProRAW?
I failed to launch a new build: Test of RawTherapee on MacOS 10.14-15, 11, 12β Mojave-Catalina-Big Sur-Monterey (intel code) - #6 by HIRAM

I believe the functionality was added in May.

On encode.su I’ve made the point before that the (by now old) JPEG XR would be perfect for this.

Its 16-bit support is nice, and it even supports faux floating point, which in this case basically means it can encode at 16-bit but set a black point and a white point. So your camera could write an image that contains ‘overbrights’, letting you reduce highlights afterwards.
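To illustrate with made-up numbers (not the actual JPEG XR spec): store 16-bit integer codes plus a white point, leaving headroom above display white for recoverable highlights.

```python
# 16-bit integer codes with a stored white point: a naive viewer maps
# [0, white_code] to the display range, while an editor divides by
# white_code and keeps the values above 1.0.
import numpy as np

scene = np.array([0.0, 0.5, 1.0, 2.0])   # linear; 2.0 is an 'overbright'

white_code = 16384                        # code value meaning "display white"
code = np.clip(np.round(scene * white_code), 0, 65535).astype(np.uint16)

recovered = code.astype(np.float32) / white_code
print(recovered)                          # [0.  0.5 1.  2. ]
```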

16-bit TIFF? I guess not. But imagine that the JPEGs it writes were actually 16-bit files that could contain the whole captured range, while still being ‘ready for display’ right from the file.

Yes, I would use it then. Although my Sony a7m2 is just too old to have the good in-camera noise reduction and sharpening, so I would still use raw when I want to extract everything.

But files as big as JPEGs, quick to decode even on slower hardware, ready to view, yet with enough data to change exposure and white balance afterwards. Sounds awesome to my ears.

And the format is even open and standardized, and uses no algorithms fundamentally different from basic JPEG (basically larger DCT blocks and higher memory requirements).

JPEG XL would also work quite nicely, but it is new and modern, with different algorithms.

Formats like WebP, AVIF, and HEIF can be at most 12-bit, I believe (correct me if I’m wrong).

2 Likes

Not sure, although an intelligently chosen logarithmic encoding at 12 bits will in most cases be more than sufficient to remain visually indistinguishable from linear 16-bit, even when abused to an extreme.

At least as long as you’re recording in a wide enough gamut that there isn’t gamut clipping…
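A back-of-envelope check, assuming a hypothetical pure log2 encoding that spans 16 stops in 12 bits:

```python
# Compare quantization step sizes, in stops, of 12-bit log vs 16-bit
# linear encodings.
import numpy as np

step_log = 16.0 / 2**12                 # ~0.004 stops per code, everywhere

# For linear 16-bit the step depends on signal level v (fraction of
# full scale): one code step spans log2(1 + 1/(65535*v)) stops.
v = np.array([1.0, 0.1, 0.01, 0.001])
step_linear = np.log2(1 + 1 / (65535 * v))

print(step_log)      # 0.00390625
print(step_linear)   # ~2e-05 at full scale, ~0.02 stops at v = 0.001
```

Log is coarser than linear near full scale but far finer in the shadows, which is where banding would show first.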

Looks like 1 part Apple doing something interesting, and 3 parts marketing. They named it after their “popular” video codec, which is not the easiest to work with in Linux, shall we say…

My bet is that working with it using open-source tools might not be worth the squeeze, but your mileage may vary.

I still need to find an environment for video work that is not “just so I can do ProRes RAW video”. The options, other than Apple’s, don’t fit my usual workflow.

Again, I suspect the same applies to anything Apple does in photography. If you use their hardware and tools, it is easy to get decent results. Anything else…you end up asking yourself, “why am I tying myself into a knot for this?”

1 Like