DNG or JPEG with top-notch smartphones (Pixel 4 etc)

Hello everyone,

Just out of curiosity, since I don’t own a top-notch smartphone (e.g. a Google Pixel 4 or similar) for taking pictures…

What format do you suggest for taking pictures with smartphones: RAW (DNG) or JPEG?

The easy answer might be RAW (that is, DNG), since it contains more data to work with later on in RawTherapee, darktable, and other similar open-source software.
However, with JPEG you take advantage of all the software know-how of the Google algorithms shipped with the Pixel 4.

Take a look, for instance, at these samples (Google Pixel 4):
As usual, the JPEG images are already quite good out of the box, compared to the corresponding DNG (RAW) images, which naturally need much more in-depth work to make them outstanding.

In addition, I have read that the DNG format might not always be handled correctly by all open-source RAW converters.

Smartphone manufacturers invest much more time and money in camera post-processing than camera manufacturers do.

Take a look at this example in portrait mode:

The camera does quite a good job with the skin tones, the background, and the jackets (although the white parts look a bit unnatural).

That bokeh simulation in the background looks good, but take a look at the areas where loose hair overlaps the background (especially between the two faces).

I have worked on a few RAW files from smartphones, some from my own camera and some from Play Raw here at pixls.us.
On every occasion I had to put much more time and effort into fixing white balance, color casts, barrel distortion, noise, and other problems than with RAWs from cameras with bigger sensors and better lenses.

Personally, I use my smartphone for quick snapshots, but I don’t post-process them except for cropping or reducing the image resolution when I want to move a picture into my photo archive.

My advice: shoot in JPG and RAW, even if you don’t plan to work on the RAW. You can simply move the files to a computer with enough storage, in case you want to edit a RAW afterwards.


Hello @pphoto

On every occasion I had to put much more time and effort into fixing white balance, color casts, barrel distortion, noise, and other problems than with RAWs from cameras with bigger sensors and better lenses.

Thanks for sharing your experience on this topic :slight_smile:

This was what I suspected, too.

Due to smartphones’ smaller sensors, all in all, IMHO, it does not make much sense to work with DNG raw files in the end…
This might be especially true with the Google Pixel 4 (or 3) smartphones, where the JPEG images take advantage of the very powerful magic tricks performed by Google’s algorithms. In essence, these JPEG images are already extremely good, and with the corresponding DNG files you do not have much leeway to really improve the final outcome…

With the Huawei P30 Pro images this is probably different, because the sensor is bigger and the JPEG images, from what I have gathered watching some videos on YouTube, are not on par with the JPEG images produced by the Pixel smartphones…

BTW, at work, I generally shoot all my images as RAWs (NEF) :slight_smile:

One observation: The DNGs from a Pixel 4 are saved after the burst align-and-merge phase - so they have significantly higher dynamic range than a single frame capture.
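To illustrate why merging a burst raises the usable dynamic range, here is a hypothetical sketch (not Google’s actual pipeline, which also does alignment and robust rejection): averaging N aligned frames shrinks random noise by roughly √N, so shadow detail that is buried in a single frame becomes recoverable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a faint shadow signal that is weaker than
# the per-frame noise. Averaging 9 "aligned" frames cuts the noise by
# about 3x, which is the effect behind the merged DNG's extra DR.
scene = np.full((100, 100), 4.0)           # faint shadow signal
noise_sigma = 10.0                          # per-frame noise, larger than signal
frames = [scene + rng.normal(0.0, noise_sigma, scene.shape) for _ in range(9)]

single_snr = scene.mean() / frames[0].std()
merged = np.mean(frames, axis=0)            # alignment assumed already done
merged_snr = scene.mean() / merged.std()    # noise shrinks ~1/sqrt(N)

print(round(single_snr, 2), round(merged_snr, 2))
```

In the real HDR+ pipeline the frames are also deliberately underexposed to protect highlights, so the noise reduction from merging is what makes the lifted shadows presentable.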

You cannot get the data from the individual frames of the burst to implement your own alignment and stacking methods.

You cannot directly select which camera module is used - you can get close by setting the zoom to 1.8x or 1.9x, which will select the telephoto module. Go past this, and unfortunately the DNG will be cropped.

It has been claimed in a DPReview article that the DNGs are saved after going through the newer superresolution pipeline (some discussion at https://www.dpreview.com/forums/post/63211371 ) and that the DNGs are then “re-mosaiced” to improve compatibility with RAW converters that can’t handle Linear DNG data, but this is not my experience. The DNGs are clearly using the legacy 2016 HDR+ algorithm, as evidenced by the fact that at 8x zoom, the DNG is less than 1000 pixels wide.
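For what “re-mosaiced” would mean in practice, here is a toy sketch (my own illustration, not the claimed Google implementation): demosaiced Linear DNG data carries full RGB at every pixel, and re-mosaicing throws away two of the three channels per photosite following a Bayer layout (RGGB assumed here) so that converters expecting a classic mosaic can read the file.

```python
import numpy as np

# Hypothetical sketch of "re-mosaicing": take demosaiced (Linear DNG
# style) RGB data and keep only one channel per photosite, following
# an assumed RGGB Bayer layout.
def remosaic_rggb(rgb):
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd row, odd col
    return mosaic

# Flat test image: R=10, G=20, B=30 everywhere.
rgb = np.dstack([np.full((4, 4), v) for v in (10, 20, 30)])
print(remosaic_rggb(rgb))
```

Note that re-mosaicing discards information the merge step produced, which is one reason the question of whether the Pixel actually does this matters.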

So at 1x and 1.8-1.9x, DNG will give you potentially better results if you are comfortable with tonemapping techniques. I can get pretty close to JPEG results using RawTherapee’s Dynamic Range Compression module, although there needs to be some fine-tuning to handle highly monochromatic LED lighting. I’m probably going to be revisiting my work with exposure fusion, since at least the original HDR+ tonemapping approach was to feed the Mertens algorithm with synthetically exposure-shifted images. It isn’t clear what the current tonemapping approach is.
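The exposure-fusion idea mentioned above can be sketched compactly. This is a simplified, single-scale version in the spirit of Mertens et al. (my own toy code, not the HDR+ implementation): synthesize under/normal/over “exposures” from one linear frame, weight each pixel by how well-exposed it is, and blend. The real algorithm adds contrast and saturation weights and blends in a Laplacian pyramid.

```python
import numpy as np

# Gaussian weight peaking at mid-grey: pixels near 0.5 count most.
def well_exposedness(img, sigma=0.2):
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

# Fuse synthetically exposure-shifted copies of a single linear frame.
# stops = (-2, 0, +2) EV is an assumed choice, not the HDR+ values.
def fuse(base, stops=(-2.0, 0.0, 2.0)):
    exposures = [np.clip(base * 2.0 ** s, 0.0, 1.0) for s in stops]
    weights = [well_exposedness(e) for e in exposures]
    total = np.sum(weights, axis=0) + 1e-12       # avoid divide-by-zero
    return sum(w * e for w, e in zip(weights, exposures)) / total

base = np.linspace(0.01, 0.99, 64).reshape(8, 8)  # linear test gradient
fused = fuse(base)                                 # shadows get lifted
```

A single-scale blend like this produces halos at hard edges, which is exactly why the pyramid blending (and RawTherapee’s Dynamic Range Compression, which works differently) exist.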

At any other zoom range, especially past 1.9x, the JPEGs will be better because they have been processed using Google’s handheld superresolution stacking algorithm as opposed to the legacy HDR+ align-and-merge approach.


Hello @Entropy512

Thanks a lot for your technical information about the Google Pixel 4.

From the plethora of YouTube video reviews I have watched lately, it looks like it is, on the whole, probably the best smartphone for taking pictures at the moment.
For recording videos, the Apple iPhone 11 looks like the best one (most reviewers agree on this…).

Unfortunately, the Pixel 4 is also extremely pricey at the moment :slight_smile:

If price is an issue, the Pixel 3 series (even the 3a) has almost the same main sensor as the Pixel 4 (the lens was only improved slightly, from f/1.8 to f/1.7). If they haven’t already, the image-processing algorithm updates concerning the HDR+ pipeline mentioned above should trickle down to the 3 series as well; they might just run slower (especially on the 3a). So you might get almost equivalent DNGs, if that’s your goal. Of course the Pixel 4 JPEGs can also benefit from the additional sensor, and those will probably be better.

As an aside, the new DNG spec 1.5 now includes the possibility of storing a depth map and partially processed “enhanced image data”; it’ll be interesting to see which camera vendors and software start leveraging those…

The Pixel 4 camera has an option to save depth data for social media, but even with it turned on, I’ve never seen it.

It MAY only get saved in portrait mode? I need to try this.

With depth data, you could emulate the simulated wide-aperture-lens approach that the JPEGs use in portrait mode: apply a circular out-of-focus PSF whose size varies with depth.
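A naive version of that idea can be sketched as follows (my own toy code, with a box average standing in for a true circular PSF): blur each pixel with a radius proportional to its distance from the focal plane. Note this simple version blurs straight across occlusion edges, which is exactly the loose-hair artifact discussed earlier; real portrait pipelines handle those edges specially.

```python
import numpy as np

# Toy depth-dependent blur: per-pixel "PSF" radius grows with distance
# from the focal plane. A box average stands in for a circular disc PSF.
def fake_bokeh(img, depth, focus_depth, max_radius=4):
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(abs(depth[y, x] - focus_depth) * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

img = np.zeros((9, 9)); img[:, 4] = 1.0       # a thin bright line
depth = np.ones((9, 9)); depth[:, :5] = 0.0   # left half on the focal plane
blurred = fake_bokeh(img, depth, focus_depth=0.0)
```

The in-focus line stays sharp while the "far" right half smears it out; a production implementation would also gather rather than scatter at depth discontinuities to avoid halos.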