Newbie trying to achieve the Google Pixel "look", please critique

This would be my “normal” edit.


P5150281.ORF.xmp (9.9 KB)


Ironic… I have the 3a XL, and its JPGs often show weird colors and artifacts from over-sharpening, so I end up processing Pixel photos as raw files to get better colors. Pixels use the blue channel to create that dark, contrasted look, or at least it seems so to me…

Quick observation:

  • stronger local contrast adjustment in the background than foreground
  • the background received haze removal, so it is less blue and more uniform
  • your attempt in the OP has a colour cast that still needs to be corrected

Thanks, this is quite nice. Looks like what I tried to achieve with filmic was done with tone equalizer.

As I said, I am quite new to this. The Pixel 3a pictures have always been quite pleasing to my eyes, but I agree on the over-sharpening bit. At this point, I am more interested in getting the tone mapping right.

Thanks for the tips. I don’t know if I understood correctly, but all adjustments were global; I didn’t use any local masks. As for the color cast, I have yet to dive into color balance, so I have just set the WB to camera settings for now. My general approach is DR compression with filmic, then adding local contrast and, possibly, some dehaze. Not sure if this makes sense or not.

To match the phone’s output, you will need to edit each region/object of interest separately. To compensate for the small sensor, phone manufacturers make extensive use of processing tricks. You may want to follow @s7habo’s videos or PlayRaw entries; his edits are vibrant and next level.

Doing things to the image like this…

The recipe is often described like this…

" Pixel phones also have a very distinctive gamma and contrast curve with the aggressive HDR+ tone-mapping, where the images are generally underexposed for highlight detail with increased saturation and contrast and a cooler color temperature, which creates that very appealing Pixel look people are so fond of but can’t always explain why they like"

So I guess directing the edit like this would get you closer… I think it is this cooler blue contrast that often leaves skin a bit light (washed out) and pale to my eyes on many of the JPGs I shoot of family…
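Purely as an illustration of that quoted description (and definitely not anything Google actually ships), those ingredients could look roughly like this as a few lines of Python on a linear RGB image; every constant here is a guess picked for the example:

```python
# Hypothetical sketch of the quoted "Pixel look" recipe: slight underexposure,
# a cooler white balance, stronger contrast and a saturation boost.
# All constants are illustrative guesses, not Google's actual parameters.
import numpy as np

def pixel_look(rgb):
    """rgb: float array of shape (..., 3), scene-linear, values in [0, 1]."""
    out = rgb * 0.8                                    # underexpose to protect highlight detail
    out = out * np.array([0.95, 1.00, 1.08])           # cooler colour temperature (bias toward blue)
    out = np.clip(out, 0.0, 1.0) ** (1.0 / 2.2)        # simple display gamma
    out = np.clip(0.5 + 1.2 * (out - 0.5), 0.0, 1.0)   # increased global contrast around mid-grey
    mean = out.mean(axis=-1, keepdims=True)
    out = mean + 1.25 * (out - mean)                   # increased saturation
    return np.clip(out, 0.0, 1.0)
```

The exact numbers matter much less than the combination the quote describes: underexpose first, cool the colour down, then push contrast and saturation back up.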

Same here…
P5150281.ORF.xmp (12.8 KB)

I think filmic rgb is not an “artistic” tool and you should try to use the other modules earlier in the pipeline to achieve your look. I hardly ever change anything in filmic except the contrast setting in the look tab to control the global contrast of the photo.


I’ll try to take a crack at this later, but for reference, the HDR+ pipeline of older Pixels is well documented. There are papers that describe newer pipelines (such as Night Sight), although these are known to have some errors.

HDR+ Pipeline is a reference implementation of the entire legacy HDR+ pipeline. Everything but the tiled align-and-merge operation (used for stacking bursts) can alternatively be done in other software.

For tonemapping of a single image, the general idea is to do the same synthetic exposure fusion approach Google does (a rough sketch follows this list):

  • export two (or more; Google only uses two) images with different exposure values
  • feed the results to enfuse
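A minimal sketch of that idea in Python, assuming a scene-linear 16-bit TIFF has already been exported from the raw file and that enfuse is installed; the file name and the ±1 EV offsets are placeholders for the example:

```python
# Rough sketch: fake two exposures from a single scene-linear 16-bit TIFF
# export and hand them to enfuse. File names and +/-1 EV are placeholders.
import subprocess
import numpy as np
import imageio.v3 as iio

src = "P5150281_linear.tif"              # assumed scene-linear 16-bit export
img = iio.imread(src).astype(np.float64) / 65535.0

synthetic = []
for ev in (-1.0, +1.0):                  # Google reportedly uses only two
    shifted = np.clip(img * 2.0 ** ev, 0.0, 1.0)
    name = f"synth_{ev:+.0f}ev.tif"
    iio.imwrite(name, (shifted * 65535.0).astype(np.uint16))
    synthetic.append(name)

# enfuse then blends the synthetic bracket (Mertens-style exposure fusion).
subprocess.run(["enfuse", "-o", "fused.tif", *synthetic], check=True)
```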

Thanks for posting
darktable 3.8.1


P5150281_02.ORF.xmp (15.2 KB)

Have you tried that “legacy” pipeline you referenced? It sounds like it uses an underexposed burst sequence, not a bracket. Is that correct?

Yes, I have, although with some tweaks to stop the pipeline at the end of the tiled align-and-merge and spit out a DNG for further processing by other tools. (IIRC, upstream has since implemented a better version of my hacks; I haven’t used that particular pipeline in a while.) The basic concept of the remainder of the pipeline is “feed synthetic shifted exposures to enfuse”. An alternative in RawTherapee is the “dynamic range compression” tool, which works for 95%+ of my use cases. (I need to revisit an old Epcot shot as one of the corner cases where DRC does not deliver the results Google did, but that shot used Night Sight, which has a few additional tricks whose terms are poorly defined in Google’s implementation. See Google AI Blog: Night Sight: Seeing in the Dark on Pixel Phones and http://graphics.stanford.edu/papers/night-sight-sigasia19/night-sight-sigasia19.pdf - section 5 shows that the tone mapping algorithm has had a few small tweaks. Also of note: except for live preview, the only real role AI and neural networks play in Google’s pipelines is the AWB algorithm in section 4 of the PDF.)

The tiled align-and-merge is good for certain scenarios - it does not give you as much “dynamic range per image shot” as bracketing, but does give you better motion rejection/handling. A big challenge is that the tiled align-and-merge works for very high-rate bursts (many mobile sensors are bursting at 30 FPS), but starts failing if you’ve got high amounts of motion between shots in a burst (< 10 FPS with long dead time between frames). Legacy HDR+ doesn’t handle camera rotation well either. But for the most part, anyone with an MFT or larger camera can probably start with a single raw shot and then tonemap.

Newer Google pipelines use a hybrid approach where they still do bursts, but at two different exposures. The old HDR+ pipeline was just a constant-exposure (underexposed to preserve highlights) burst.


Thanks for being very generous with your time and experience…

I would also like to thank everyone; I am learning a lot, although some of the info is over my head.

I maybe should have clarified this in my OP: I am NOT trying to replicate exactly what Google is doing in GCAM; I realize that there are bursts of shots and lots of computational photography involved. Nor do I want to do bracketing for HDR photography with my OM-D (except in very select cases). I just want to use the natural dynamic range of the OM-D to get a Pixel-like look from a single exposure. My limited tinkering in DT seems to show that the OM-D actually has plenty of dynamic range; even a single exposure bests the shadow detail of the Pixel’s multi-shot exposures.

What I am rather looking for is a simple workflow that expert post-processors, like you guys, think makes sense, so I can give the OM-D photos a little more of that Pixel “look” that seems to be pretty pleasing to many people. Hope that makes sense.

What I am learning is that a lot of other posters seem to use tone equalizer to achieve the DR compression effect, rather than filmic, before adding back local contrast.

As I mentioned, if you do want to do that:

  • export two (or more) exposures with different exposure shifts in your raw processor
  • feed the results to enfuse

That applies the elements of Google’s pipeline that are relevant to a single exposure from a camera.
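If you want those two steps on the command line, a minimal sketch with darktable-cli and enfuse could look like the following; the two sidecar files are assumed to be duplicate edits that differ only in the exposure module, and all file names and the EV split are placeholders:

```python
# Minimal sketch, assuming "minus1ev.xmp" / "plus1ev.xmp" were saved from
# darktable with only the exposure module changed, and that darktable-cli
# and enfuse are on the PATH. All file names are placeholders.
import subprocess

renders = []
for sidecar, out in [("minus1ev.xmp", "ev_minus.tif"), ("plus1ev.xmp", "ev_plus.tif")]:
    # Render the same raw file twice, once per exposure variant.
    subprocess.run(["darktable-cli", "P5150281.ORF", sidecar, out], check=True)
    renders.append(out)

# Blend the two renders; enfuse does Mertens-style exposure fusion by default.
subprocess.run(["enfuse", "-o", "fused.tif", *renders], check=True)
```

Enfuse’s exposure fusion is the same family of technique as the fusion step described above, which is why it covers the parts of Google’s pipeline that apply to a single camera exposure.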

(I’ll try to do an example later this week, the timing is bad at the moment.)


P5150281.ORF.xmp (10.1 KB)
