Smartphone raws are never quite the same as the JPEGs

IMG_20200108_171429.jpg.out.pp3 (11.5 KB)


I didn’t try to emulate the jpg but instead tried to emphasize the dreaminess of the image. Basics in RT and then some “magic” in Gimp:


Hi,
my Pixel 3a is also really good. I always save both JPEG and RAW.
After obtaining the right color profile for RT the results are great.
I find that sometimes, in difficult conditions, you easily get better
pictures from it than from a DSLR or mirrorless.
Of course its sensor is just mediocre and the lens is just average,
but even for raws it takes several shots and merges them into
what Google calls a computational raw.

@aadm I’ve experimented only briefly with some profiles and need to revisit this in a more systematic way. I have a SpyderCheckr 24 and have tweaked the X-Rite software to calibrate it and create an ICC; I’ve also used dcamprof and Argyll to make ICC profiles, and I’ve converted Adobe’s DCP files for the Pixel 3a to ICC. I’ve also used Pascal’s colormatch script and darktable-chart to do a jpg color-match approach. I mainly use dt, but when playing around with this color-profile stuff, RT makes it much easier to see what is going on, especially the ability to use DCP files and toggle the various components of the profile on and off. The problem with playing around with so many workflows for creating the ICC files is that I have not been very systematic, and since I had a large catalogue of old shots from my Lumia phone, I had been working more on that one… I will post any findings or thoughts back to the thread if I find something worth sharing…


@aadm This is the default ACR tone curve from the DCP file:

{
  "ProfileToneCurve": [
    [ 0.000000, 0.000000 ],
    [ 0.019608, 0.019608 ],
    [ 0.058824, 0.098039 ],
    [ 0.196078, 0.419608 ],
    [ 0.392157, 0.709804 ],
    [ 0.784314, 0.952941 ],
    [ 1.000000, 1.000000 ]
  ]
}


Because I like to play with stuff like this, I converted the curve data to a rawproc curve command and applied it to my test image:

I’m not at the right computer to look at it, but I think this is just a bit darker than the embedded JPEG. Here’s the rawproc command:

curve:rgb,0,0,5,5,15,25,50,107,100,181,200,243,255,255

Control points are the Adobe data multiplied by 255…
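The conversion described above can be sketched in Python. The point list is the ACR ProfileToneCurve quoted earlier; scaling each coordinate by 255 and rounding reproduces the rawproc `curve` command from the post:

```python
# Scale the normalized ACR ProfileToneCurve control points from the
# Pixel 3a DCP into 0-255 coordinates, as used by rawproc's curve command.
acr_curve = [
    (0.000000, 0.000000),
    (0.019608, 0.019608),
    (0.058824, 0.098039),
    (0.196078, 0.419608),
    (0.392157, 0.709804),
    (0.784314, 0.952941),
    (1.000000, 1.000000),
]

# Flatten the (x, y) pairs and multiply by 255.
points = [round(v * 255) for pt in acr_curve for v in pt]
print("curve:rgb," + ",".join(str(p) for p in points))
# → curve:rgb,0,0,5,5,15,25,50,107,100,181,200,243,255,255
```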

Basic default processing

I took a quick snap leaving work and processed the raw file three ways: with only the base curve from the Pixel 3a XL DCP file, with only filmic defaults (no color preservation), and with a custom ICC produced by a modified version of the colormatch script. Provided are the original raw, the in-camera jpg, the filmic- and basecurve-processed raw files, and a version using only the colormatching ICC (no filmic or basecurve) – just to compare the starting points each method produces for raw vs. jpg…

BaseCurve P3aXL

In Camera JPG

Color Match ICC

Filmic Defaults

Original Raw

@priort ya need some vignette correction!

The vignetting is far more pronounced in the raw files. The Adobe lens corrections don’t correct for vignetting, and there are none in dt, but I did try the Huawei P10 lens-correction files just for fun and they actually do quite a nice job…

Filmic with Vcorr

Colormatched ICC with Vcorr

Ya, I was just posting about that… but my original post was intended to apply no modules or corrections other than those listed, i.e. to focus on the differences in starting points…

Great atmosphere in this scene! To think you can get a shot this good with a phone…

pixel-3a-IMG_20200108_171429.dng.xmp (63.4 KB)
darktable 3.2.1

@aadm: Nearly a week of examples, suggestions and discussion. Maybe you could comment on that?

@Thomas_Do and others: I’m sorry I haven’t commented yet! I still wanted to test a few more of these settings that I also liked. I wasn’t expecting that much interest in a phone shot!

The first settings I tried did not return the same “crispiness” as the jpeg, which was the original idea behind my post (something that I also couldn’t reproduce). But then there were so many other variations that I enjoyed but did not test – that’s why I haven’t written anything so far.

I will keep the conversation alive anyway; sorry if I seemed uninterested in the follow-ups – that was certainly not my intention! Thanks to everybody, I will come back with some more meaningful remarks soon.

Thanks for the “progress report” :wink: .

I also have a 3a and am astounded at the quality of the JPEGs that come out.

The toning is really amazing. I actually found an interview with the person who was managing imaging at Google and subsequently left to work on imaging elsewhere. He spoke about it much like a photographer developing a raw, saying they were trying to reach the right balance of contrast, brightness, color, etc. Interesting read; I can’t remember where I found it. He said he left partly because he felt the work was pretty much done.

I have to agree with him. Looking at the raw images, they are unremarkable in nearly every way. Yet the toned JPEGs are spectacular. It’s really something. I guess all the expertise and effort that goes into getting the best out of one particular hardware configuration may be part of the explanation. But I can’t help but feel there is some “magic sauce” that would be difficult to replicate with a raw developer, whether RT, Adobe or what have you.

The pipeline Google used in the past is described here:

An open source implementation of the entire pipeline is at:
https://www.timothybrooks.com/tech/hdr-plus/

Raws saved by Pixel phones are written out after the tiled align-and-merge but before tonemapping, AFAICT. It appears that when saving a raw, Google does not use their multi-frame super-resolution algorithm (which replaced the tile-based align-and-merge on recent Pixels, and I believe may have been retrofitted to older ones via an update); instead it saves a cropped image that is still Bayer-mosaiced. This limitation is especially apparent at higher digital-zoom settings, where the JPEG is vastly superior in sharpness thanks to the multi-frame super-resolution algorithm. An attempt was made to create an open-source implementation of the MFSR algorithm, but it has stalled: GitHub - JVision/Handheld-Multi-Frame-Super-Resolution: An implementation of the paper Handheld Multi-Frame Super-Resolution

Note that the tonemapping algorithm described in the whitepaper is a variation on the Enfuse exposure fusion algorithm, where two synthetic images are generated from the original raw image at different EV compensation, and then exposure fused.
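As a rough illustration (not Google’s actual code), a single-scale, Enfuse-style fusion of two synthetic exposures might look like the sketch below. The EV offsets and the `sigma` of the well-exposedness weight are arbitrary assumptions; real implementations (Mertens-style fusion, and HDR+ per the whitepaper) blend in a multi-scale pyramid to avoid halos:

```python
import numpy as np

def exposure_fuse(img, ev_low=-1.0, ev_high=+1.0, sigma=0.2):
    """Toy single-scale exposure fusion of two synthetic exposures.

    `img` is linear data in [0, 1]. Two exposures are synthesized by
    EV-shifting the input, then blended by per-pixel weights that favor
    values near mid-grey (0.5).
    """
    # Synthetic exposures: scale linear data by 2**EV and clip.
    exposures = [np.clip(img * 2.0 ** ev, 0.0, 1.0) for ev in (ev_low, ev_high)]
    # Well-exposedness weight: Gaussian centered on mid-grey.
    weights = [np.exp(-((e - 0.5) ** 2) / (2.0 * sigma ** 2)) for e in exposures]
    wsum = np.sum(weights, axis=0) + 1e-8  # avoid division by zero
    return sum(w * e for w, e in zip(weights, exposures)) / wsum

gradient = np.linspace(0.0, 1.0, 256)
fused = exposure_fuse(gradient)
```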

You will never be able to reproduce Pixel JPEG tonemapping in darktable because the official position of the darktable development team is that Google’s algorithm is “pixel garbage”.

At least with current opensource tools, the closest you’re likely to get is RT’s Dynamic Range Compression tool - Fattal '02 doesn’t always provide as good results as the Google approach, but it’s quite good in nearly all scenarios, enough that I have basically not bothered reimplementing the Google algorithm in RT as I’ve got higher priority projects. One case where Fattal '02 definitely fails compared to Google’s approach is handling highly saturated monochromatic (e.g. colored LED) lighting. I have yet to find a way to come close to matching Google’s approach here without major hassle.

Alternatively you might be able to get some interesting results using LuminanceHDR - as I said, RT’s implementation of Fattal '02 is good enough for the majority of my own use cases so I haven’t played with LuminanceHDR yet.

This interview?

That first image is very nice. I went for something quite heavily stylised.


IMG_20200108_171429.dng.xmp (13.6 KB)


But you would expect JPEGs to look better than raw images straight out of the camera. The beauty of raw is not what the unprocessed file looks like; it’s what the processed file looks like. Do you really think the OOC JPEG in this thread is superior to all the attempts people have made on the raw file? I don’t. Google may apply some special sauce, and that is likely good enough for people who want to share phone photos quickly – but that’s not raw shooters.


The HDR+ presentation is very illustrative and perhaps easier to digest for most people.

dt includes Local Laplacian Pyramids, which is supposedly what Google HDR+ also uses… OTOH, Google might be doing other “clever” stuff as well… So the “pixel garbage” you might be referring to is for the super-resolution part only?
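For intuition only (this is not darktable’s or Google’s actual implementation), here is a toy 1-D Laplacian pyramid showing the build/collapse structure these algorithms share. The `reduce_`/`expand` functions are deliberately naive placeholders; real code blurs before downsampling and interpolates when upsampling, and the tonemapping happens by remapping coefficients between build and collapse:

```python
import numpy as np

def reduce_(x):
    # Naive downsample by 2 (real code low-pass filters first).
    return x[::2].copy()

def expand(x, n):
    # Naive nearest-neighbour upsample back to length n.
    return np.repeat(x, 2)[:n]

def build_laplacian(x, levels):
    # Each level stores the detail lost by one reduce/expand round trip.
    pyr, cur = [], x
    for _ in range(levels):
        small = reduce_(cur)
        pyr.append(cur - expand(small, len(cur)))
        cur = small
    pyr.append(cur)  # low-pass residual
    return pyr

def collapse(pyr):
    # Reverse the process: upsample and add back each detail level.
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = expand(cur, len(lap)) + lap
    return cur

signal = np.linspace(0.0, 1.0, 17)
pyr = build_laplacian(signal, 3)
reconstructed = collapse(pyr)  # exact by construction
```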