Newbie trying to achieve the Google Pixel "look", please critique

I just got my first REAL camera in March, an OM-D E-M5 II, after not having done much photography since the film days (I still have the Canon A1 from my teenage years). Cell phone photography has gotten me back into real cameras. Now, I expected that the OOC JPEGs would probably not be that good, but I didn’t expect them to be THAT poor. So I am definitely going to shoot raw, and I have settled on Darktable for processing.

I am trying to replicate the Google Pixel “look” from my 3a (I know, probably over-processed, but my family quite likes the look and always wants ME to take photos at events or on vacation; everyone else is on iPhone). Here is a picture taken in my backyard to compare the OM-D with the 3a; not a great scene or composition, just a comparison baseline:


OM-D jpeg OOC


Pixel 3a


P5150281.ORF.xmp (10.5 KB)
My first attempt at creating the “pixel look”

P5150281.ORF (14.2 MB)
Raw file

I have learned a lot from this forum already by browsing, but I would appreciate feedback from you experts whether I am approaching this the wrong way and what can be improved. Thanks in advance.

Please don’t forget to explicitly state the license you want to release your image with! A common license used here is something like the following:

This file is licensed Creative Commons, By-Attribution, Share-Alike.

But an amazing backyard, so kudos for that!

Trying to get close to the Pixel. But it’s nothing I would normally aspire to.

DT 3.8.1

P5150281.ORF_.xmp (10.0 KB)

This would be my “normal” edit.


P5150281.ORF.xmp (9.9 KB)

Ironic… I have the 3a XL, and its JPGs often show weird colors and artifacts from over-sharpening, so I end up processing Pixel photos as raw files to get better colors. Pixels seem to lean on the blue channel to create that dark, contrasted look, at least it seems that way to me…

Quick observations:

  • stronger local contrast adjustment in the background than in the foreground
  • the background received haze removal, so it is less blue and more uniform
  • your attempt in the OP has a colour cast that still needs to be corrected

Thanks, this is quite nice. Looks like what I tried to achieve with filmic was done with tone equalizer.

As I said, I am quite new to this. The Pixel 3a pictures have always been quite pleasing to my eyes, but I agree on the over-sharpening bit. At this point, I am more interested in getting the tone mapping right.

Thanks for the tips. I’m not sure I follow, though: all my adjustments were global, I didn’t use any local masks. As for the color cast, I have yet to dive into color balance; I have just set the WB to camera settings for now. My general approach is DR compression with filmic, then add local contrast and, possibly, some dehaze. Not sure if this makes sense or not.

To match the phone’s output, you will need to edit each region/object of interest separately. To compensate for the small sensor, phone manufacturers make extensive use of processing tricks. You may want to follow @s7habo’s videos or his PlayRaw entries. His edits are vibrant and next level.

Doing things to the image like this…

The recipe is often described like this…

" Pixel phones also have a very distinctive gamma and contrast curve with the aggressive HDR+ tone-mapping, where the images are generally underexposed for highlight detail with increased saturation and contrast and a cooler color temperature, which creates that very appealing Pixel look people are so fond of but can’t always explain why they like"

So I guess directing the edit like this would get you closer… I think it is this cooler, blue-leaning contrast that often leaves skin a bit light (washed out) and pale to my eyes in many of the JPGs I shoot of family…
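Purely to make that recipe concrete (this is not Google's code, and every number below is a made-up placeholder), a rough numpy sketch of "underexpose, add contrast and saturation, cool the white balance" on an already-developed RGB image might look like this:

```python
import numpy as np

def pixel_look(img, exposure=-0.3, contrast=1.25, saturation=1.2, cool=0.04):
    """Crude 'Pixel-style' grade on an RGB float image in [0, 1].

    All factors are illustrative guesses, not Google's values:
      exposure   - EV shift (negative = underexpose to protect highlights)
      contrast   - power-curve contrast pivoted on middle grey
      saturation - chroma boost around per-pixel luma
      cool       - white-balance bias toward blue
    """
    out = img * (2.0 ** exposure)                    # underexpose slightly

    # contrast: power curve pivoted on ~18% grey
    mid = 0.18
    out = np.clip(mid * (out / mid) ** contrast, 0.0, 1.0)

    # saturation: push channels away from luma
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    out = luma[..., None] + saturation * (out - luma[..., None])

    # cooler colour temperature: nudge blue up, red down
    out[..., 0] *= 1.0 - cool
    out[..., 2] *= 1.0 + cool
    return np.clip(out, 0.0, 1.0)
```

In darktable terms, the same moves map roughly to exposure, the contrast in filmic/tone curve, a chroma boost in color balance rgb, and a cooler setting in white balance or color calibration.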

Same here…
P5150281.ORF.xmp (12.8 KB)

I think filmic rgb is not an “artistic” tool and you should try to use the other modules earlier in the pipeline to achieve your look. I hardly ever change anything in filmic except the contrast setting in the look tab, to control the global contrast of the photo.


I’ll try to take a crack at this later, but for reference, the HDR+ pipeline of older Pixels is well documented. There are papers that describe newer pipelines (such as Night Sight), although these are known to have some errors.

HDR+ Pipeline is a reference implementation of the entire legacy HDR+ pipeline. Everything but the tiled align-and-merge operation (used for stacking bursts) can be done alternatively in other software.

For tonemapping a single image, the general idea is to do the same synthetic exposure fusion approach Google does:

  • Export two (or more, Google only uses two) images with different exposure values
  • Feed the results to enfuse
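For anyone who wants to try that idea without leaving a script, here is a minimal sketch in Python. It assumes rawpy and OpenCV are installed and uses OpenCV's MergeMertens as a stand-in for enfuse (both implement Mertens-style exposure fusion); the exp_shift values of 0.5 and 2.0 are arbitrary example exposures, not what Google uses:

```python
import cv2
import numpy as np
import rawpy

RAW = "P5150281.ORF"  # the raw file shared in this thread


def develop(path, shift):
    """Develop one synthetic exposure (shift is a linear multiplier, 1.0 = as shot)."""
    with rawpy.imread(path) as raw:
        return raw.postprocess(use_camera_wb=True, no_auto_bright=True,
                               exp_shift=shift, output_bps=8)


dark = develop(RAW, 0.5)     # roughly -1 EV, keeps highlight detail
bright = develop(RAW, 2.0)   # roughly +1 EV, lifts the shadows

# Mertens-style exposure fusion (same family of algorithm as enfuse).
fused = cv2.createMergeMertens().process([dark, bright])   # float32, ~0..1
fused8 = np.clip(fused * 255, 0, 255).astype(np.uint8)

cv2.imwrite("fused.jpg", cv2.cvtColor(fused8, cv2.COLOR_RGB2BGR))
```

You can of course do the same thing by exporting two differently-exposed TIFFs from darktable and running enfuse on them; the point is just the two steps, synthetic exposures followed by fusion.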

Thanks for posting
darktable 3.8.1


P5150281_02.ORF.xmp (15.2 KB)

Have you tried that “legacy” pipeline you referenced… It sounds like it uses an underexposed burst sequence, not a bracket? Is that correct?

Yes, I have, although with some tweaks to stop the pipeline at the end of the tiled align-and-merge and spit out a DNG for further processing by other tools. (IIRC, upstream has implemented a better version of my hacks; I haven’t used that particular pipeline in a while.) The basic concept of the remainder of the pipeline is “feed synthetic shifted exposures to enfuse”. An alternative in RawTherapee is the “dynamic range compression” tool, which works for 95%+ of my use cases. (I need to revisit an old Epcot shot as one of the corner cases where DRC does not deliver the results Google did, but that Pixel shot was using Night Sight, which has a few additional tricks, including some terms that Google defines poorly in the implementation. See Google AI Blog: Night Sight: Seeing in the Dark on Pixel Phones and http://graphics.stanford.edu/papers/night-sight-sigasia19/night-sight-sigasia19.pdf - you can see in section 5 that the tone mapping algorithm has had a few small tweaks. Also of note: except for live preview, the only real role AI and neural networks play in Google’s pipelines is the AWB algorithm in section 4 of the PDF.)

The tiled align-and-merge is good for certain scenarios - it does not give you as much “dynamic range per image shot” as bracketing, but it does give you better motion rejection/handling. A big challenge is that tiled align-and-merge works for very high-rate bursts (many mobile sensors burst at 30 FPS), but starts failing if there is a lot of motion between shots in a burst (< 10 FPS with long dead time between frames). Legacy HDR+ doesn’t handle camera rotation well either. But for the most part, anyone with an MFT or larger camera can probably start with a single raw shot and then tonemap.
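For the curious, here is a toy numpy sketch of the tile-based align-and-merge idea described above. It is nowhere near the real HDR+ implementation (no image pyramid, no sub-pixel refinement, no per-tile robustness weighting), just the core “match each tile, then average” concept:

```python
import numpy as np


def align_and_merge(frames, tile=64, search=8):
    """Toy tile-based align-and-merge of a burst of grayscale float frames.

    frames[0] is the reference. For every tile of the reference, find the
    integer (dy, dx) shift of the matching tile in each alternate frame that
    minimises the mean absolute difference, then average the matched tiles in.
    """
    ref = frames[0]
    h, w = ref.shape
    acc = ref.astype(np.float64)          # running sum of matched tiles
    count = np.ones_like(acc)             # how many frames contributed per pixel

    for alt in frames[1:]:
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                best_err, best = np.inf, None
                # brute-force search over a small shift window
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + tile > h or xx + tile > w:
                            continue
                        cand = alt[yy:yy + tile, xx:xx + tile]
                        err = np.abs(cand - ref_tile).mean()
                        if err < best_err:
                            best_err, best = err, cand
                if best is not None:
                    acc[y:y + tile, x:x + tile] += best
                    count[y:y + tile, x:x + tile] += 1

    return acc / count
```

With a fast burst the best shift per tile stays small, which is why high frame rates and minimal dead time matter so much; once the inter-frame motion exceeds the search window, the merge falls apart.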

Newer Google pipelines take a hybrid approach where they do bursts, but at two different exposures. The old HDR+ pipeline was just a constant-exposure (underexposed to preserve highlights) burst.

Thanks for being very generous with your time and experience…