Newbie trying to achieve the Google Pixel "look", please critique

To match the phone’s output, you will need to edit each region/object of interest separately. To compensate for the small sensor, phone manufacturers make extensive use of processing tricks. You may want to follow @s7habo’s videos or PlayRaw entries. His edits are vibrant and next level.

Doing things to the image like this…

The recipe is often described like this…

" Pixel phones also have a very distinctive gamma and contrast curve with the aggressive HDR+ tone-mapping, where the images are generally underexposed for highlight detail with increased saturation and contrast and a cooler color temperature, which creates that very appealing Pixel look people are so fond of but can’t always explain why they like"

So I guess directing the edit like this would get you closer… I think it is this cooler blue contrast that often leaves skin a bit light (washed out) and pale to my eyes in many of the JPEGs I shoot of family…
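
If it helps to see that quoted recipe as concrete operations, here is a very rough Python sketch of the direction of each adjustment. The file name and all the numbers are made up purely for illustration; this is not what GCam actually computes, just "underexpose, cool the temperature, add contrast and saturation" in code form:

```python
import cv2
import numpy as np

# Hypothetical 16-bit linear export of the raw, scaled to [0, 1]. OpenCV loads BGR.
img = cv2.imread("P5150281_linear.tif", cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0

img = img * 0.7                      # underexpose to hold highlight detail
img[..., 0] *= 1.06                  # cooler colour temperature: a touch more blue...
img[..., 2] *= 0.94                  # ...and a touch less red

img = np.clip(img, 0, 1) ** (1 / 2.2)           # rough display encoding
img = np.clip(0.5 + 1.25 * (img - 0.5), 0, 1)   # global contrast around mid-grey

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 1)  # saturation boost
img = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

cv2.imwrite("pixel_look_sketch.jpg", (img * 255).astype(np.uint8))
```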

Same here…
P5150281.ORF.xmp (12.8 KB)

I think filmic rgb is not an “artistic” tool and you should try to use the other modules earlier in the pipeline to achieve your look. I hardly ever change anything in filmic except the contrast setting in the look tab to control the global contrast of the photo.


I’ll try to take a crack at this later, but for reference, the HDR+ pipeline of older Pixels is well documented. There are papers that describe newer pipelines (such as Night Sight), although these are known to have some errors.

HDR+ Pipeline is a reference implementation of the entire legacy HDR+ pipeline. Everything but the tiled align-and-merge operation (used for stacking bursts) can alternatively be done in other software.

For tonemapping of a single image, the general idea is to do the same synthetic exposure fusion approach Google does (a rough sketch follows below):

  • export two (or more, Google only uses two) images with different exposure values
  • feed the results to enfuse
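
For anyone who wants to see what that looks like in code rather than via enfuse itself, here is a minimal Python sketch of the same idea using OpenCV’s Mertens exposure fusion (the same family of algorithm enfuse implements). The file name and the +2 EV push are assumptions; in practice you would just point enfuse at the two exported TIFFs:

```python
import cv2
import numpy as np

# Hypothetical linear 16-bit export of the single raw frame, exposed for the highlights.
base = cv2.imread("P5150281_linear.tif", cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0

# Two synthetic exposures from the one frame: as shot, and pushed +2 EV for the shadows.
exposures = [np.clip(base, 0, 1), np.clip(base * 4.0, 0, 1)]

# Display-encode and convert to 8-bit, since the fusion weights expect "normal" images.
exposures_8bit = [(e ** (1 / 2.2) * 255).astype(np.uint8) for e in exposures]

# Mertens exposure fusion of the two synthetic exposures.
fused = cv2.createMergeMertens().process(exposures_8bit)
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```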

Thanks for posting
darktable 3.8.1


P5150281_02.ORF.xmp (15.2 KB)

Have you tried that “legacy” pipeline you referenced? It sounds like it uses an underexposed burst sequence rather than a bracket. Is that correct?

Yes, I have, although with some tweaks to stop the pipeline at the end of the tiled align-and-merge and spit out a DNG for further processing by other tools. (IIRC, upstream has since implemented a better version of my hack; I haven’t used that particular pipeline in a while.) The basic concept of the remainder of the pipeline is “feed synthetic shifted exposures to enfuse”. An alternative in RawTherapee is the “dynamic range compression” tool, which works for 95%+ of my use cases. (I need to revisit an old Epcot shot as one of the corner cases where DRC does not deliver the results Google did, but that Pixel shot was using Night Sight, which has a few additional tricks involving terms Google defines only poorly. See Google AI Blog: Night Sight: Seeing in the Dark on Pixel Phones and http://graphics.stanford.edu/papers/night-sight-sigasia19/night-sight-sigasia19.pdf - you can see in section 5 that the tone mapping algorithm has had a few small tweaks. Also of note: except for the live preview, the only real role AI and neural networks play in Google’s pipelines is the AWB algorithm described in section 4 of the PDF.)

The tiled align-and-merge is good for certain scenarios - it does not give you as much “dynamic range per image shot” as bracketing, but does give you better motion rejection/handling. A big challenge is that the tiled align-and-merge works for very high-rate bursts (many mobile sensors are bursting at 30 FPS), but starts failing if you’ve got high amounts of motion between shots in a burst (< 10 FPS with long dead time between frames). Legacy HDR+ doesn’t handle camera rotation well either. But for the most part, anyone with an MFT or larger camera can probably start with a single raw shot and then tonemap.

Newer Google pipelines use a hybrid approach where they still do bursts, but at two different exposures. The old HDR+ pipeline was just a constant-exposure (underexposed to preserve highlights) burst.


Thanks for being very generous with your time and experience…

I would also like to thank everyone. I am learning a lot, although some of the info is over my head.

I maybe should have clarified this in my OP: I am NOT trying to replicate exactly what Google is doing in GCAM. I realize that there are bursts of shots and lots of computational photography involved. Nor do I want to do bracketing for HDR photography with my OM-D (except in very select cases). I just want to use the natural dynamic range of the OM-D camera to get a Pixel-like look from a single exposure. My limited tinkering with this in DT seems to show that the OM-D actually has plenty of dynamic range; even a single exposure bests the shadow detail the Pixel gets from its multi-shot captures.

What I am rather looking for is a simple workflow that expert post-processors, like you guys, think makes sense, so I can give the OM-D photos a little more of that Pixel “look” that seems to be pretty pleasing to many people. Hope that makes sense.

What I am learning is that a lot of other posters seem to use tone equalizer to achieve the DR compression effect, rather than filmic, before adding back local contrast.

As I mentioned, if you do want to do that:

  • export two (or more) versions with different exposure shifts from your raw processor
  • feed the results to enfuse

That applies the elements of Google’s pipeline that are relevant to a single exposure from a camera.
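
Once the two exports exist, the fusion step is a single command; a tiny Python wrapper for it could look like the sketch below. The filenames are placeholders, and I am only relying on enfuse’s basic -o output option:

```python
import subprocess

# Two exports of the same raw made in darktable or RawTherapee first,
# e.g. one at 0 EV (holding the highlights) and one pushed +2 EV (for the shadows).
exports = ["P5150281_ev0.tif", "P5150281_ev+2.tif"]

# enfuse blends them with exposure-fusion weights; -o names the output file.
subprocess.run(["enfuse", "-o", "P5150281_fused.tif", *exports], check=True)
```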

(I’ll try to do an example later this week, the timing is bad at the moment.)


P5150281.ORF.xmp (10.1 KB)


Sorry, but not an attempt to replicate the Pixel look (which I find rather lacklustre).

Newbie trying to achieve the Google Pixel look, please critique.ORF.xmp (61.9 KB)

My try using RawTherapee
Tone Mapping, Haze Removal, HSV Equalizer, Wavelets → Clarity
Overprocessed to my eyes :smile:


P5150281_RT-1.jpg.out.pp3 (15.6 KB)

My Pixel version:


P5150281.ORF.xmp (8.2 KB)

How I would do it:


P5150281_01.ORF.xmp (8.2 KB)


Must say I quite like this image, here’s my attempt:


P5150281.ORF.xmp (8.6 KB)


It’s funny, I had a Lumia phone which in its time took great photos for a phone… There was a certain look to the contrast and the tone mapping, some of which reminds me of the Google look. Messing around back when the old tonemapping module was still available (now deprecated), I was able to find some settings with that module that would impart something similar. I thought I had an old preset saved so that I could access it, but I can’t seem to find it, so I would have to go back and install an old version or whatever, and it’s not really worth it, but it was interesting… for fun I may dig it up and see if my memory is correct or I am dreaming :slight_smile:

Sorry for the bump, but I would like to add my 2 cents to this topic.

I found this a really nice challenge - something to train your eyes, spotting the color, sharpness and contrast differences and, of course, using the tools to minimize them.

I tried really hard to get a very good match between my edit and the Pixel 3a reference, but I failed for three reasons:

  • the reference photo has some subtle color gradients in the greens and the blues (in the sky); these are not decisive for the look, I think, but they make exact matching very hard
  • the OM-D picture is a bit blurry in the foreground (the bushes, for example)
  • there seems to be a different lighting in the near foreground (the tree, its shadow and the plastic cover)

So after trying several hours with masks and gradients, I ditched this approach and tried a more imperfect but quicker edit.

First off, here is the end result, and I think it matches pretty well in overall contrast and colors, sharpness and local contrast.

As to what I’ve done… I found the most important bit is the dynamic range compression while maintaining or enhancing local contrast, which can be done (as @s7habo and others have shown) with two instances of tone eq - this gives really good control over exactly where to place your contrasts.
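
To make that idea concrete: this is not what tone equalizer actually computes (its mask comes from a guided filter, not a plain Gaussian blur), but the “compress the big tonal range, keep the local contrast” trick boils down to a base/detail split, which can be sketched in a few lines of Python (file name and numbers are invented):

```python
import cv2
import numpy as np

# Hypothetical 16-bit linear export, values scaled to [0, 1]; OpenCV loads BGR.
img = cv2.imread("P5150281_linear.tif", cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0

# Split log-luminance into a blurry "base" (large-scale tones) and "detail" (local contrast).
luma = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) + 1e-6
log_l = np.log2(luma)
base = cv2.GaussianBlur(log_l, (0, 0), sigmaX=25)
detail = log_l - base

# Compress only the base layer, keep the detail untouched, then turn the new
# log-luminance into a per-pixel exposure gain applied to all channels.
compressed = 0.6 * base + detail
gain = np.exp2(compressed) / luma
out = np.clip(img * gain[..., None], 0, 1)

cv2.imwrite("tone_eq_sketch.jpg", (out ** (1 / 2.2) * 255).astype(np.uint8))
```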

Second, I looked at the colors. I made the sky more bluish and kept the clouds mostly neutral with a little bluish cast. Not sure how the Pixel 3a does this, but I tried to match the colors with the channel mixer, as I like to use this module. I kept an eye on the greens and balanced them between cold and warm to match the reference.
To match the blues I used the input blue in all three color channels of the mixer, and for the greens I mostly used the input red/green in the blue color channel.
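
In case the module is unfamiliar: a channel mixer is just a 3x3 matrix applied per pixel. A toy version in Python, with coefficients invented purely for illustration (not the values from my edit):

```python
import numpy as np

# Rows are the output R, G, B channels; columns say how much of the
# input R, G, B each output channel takes. The identity matrix changes nothing.
mix = np.array([
    [1.00,  0.00, 0.00],   # output red: untouched
    [0.00,  1.00, 0.00],   # output green: untouched
    [0.06, -0.06, 1.00],   # output blue: nudged using input red and green
], dtype=np.float32)

def channel_mix(rgb, matrix):
    """Apply a channel-mixer matrix to an (H, W, 3) float RGB image in [0, 1]."""
    return np.clip(rgb @ matrix.T, 0.0, 1.0)
```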

Lastly, I tried to match the sharpness with multiple instances of diffuse or sharpen. This is pretty standard for my editing process; here I used the lens deblur, dehaze and sharpen demosaic presets, with my personal adjustments, balanced out to kinda match the reference.

So, to wrap this up… I thought I’d add a little information about my thought process, which may or may not be useful for the OP (or other interested people), because I think it’s a common theme for, err, newbies trying to match the style of the photos from their previous phone.

Thanks for the fun and the learning experience on my side, @Sciencenerd!
