The phones are catching up

Google Pixel 7 Pro (5x optical zoom + digital magic):

Lumix LX7 (yeah, an old pocketable camera) in P mode, with the program set to optimise for lens sharpness, at maximum zoom; denoising, sharpening and contrast applied in darktable (not the best processing, but you get the idea):

Cropped to show approximately the same area (the photos were taken a few minutes apart, from slightly different locations):

The phone’s photo is obviously better, but it cost over 600 CHF / EUR / USD, while the old camera costs about 100. I did not have a modern, pocketable compact to compare with; maybe next time I’ll take the TZ100, though a Canon G1X III would be a more valid comparison cost-wise (I do not own the Canon, though).

12 Likes

What a location. You should’ve brought the full kit :star_struck:

6 Likes

Flumserberg, at a height of about 2000 m / 6500 ft.


That peak in the background tells us huge forces were at work.

6 Likes

Google’s computational photography is great; imagine if camera companies implemented some of those algorithms in regular digital cameras…

5 Likes

Funny thing is, with the Pixel 3a that I am using, my initial impression of a photo is often “wow, what a nice job”. At full screen they look good, but if you zoom in there quickly are a lot of artifacts and weirdness that you couldn’t correct if you wanted to. The camera is sort of the opposite: it might not look as good out of the gate, but zooming in it’s more faithful… Perhaps the 7 Pro, being much newer, is better, but one of the lead articles today on CNET is about how the Pixel 8 cameras seem to have some big issues… maybe just clickbait… time will tell… but for sure, even with my 3a XL I can often get pretty great keepers from a hike or bike ride without carting around any kit…

7 Likes

Camera companies can afford to stay the same, though their recent move to mirrorless is encouraging. Research into computational photography and machine learning is very active, with big tech and government backing much of it, so I am not surprised its innovation is outpacing traditional photography. Still, companies do release specialty cameras and other gear that are marvelous, but just not suitable for the consumer or professional.

1 Like

Although my Pixel takes nice pictures, my favorite feature is that you can set it to shoot RAW+JPEG. Of course these are fake raws, computationally combined from many real frames. They don’t have quite as much highlight latitude as real camera raws, and they (annoyingly) have a local white balance baked in. But otherwise, they fit into my normal post-processing workflow just fine. That is truly impressive to me.

3 Likes

Somehow (not sure why), computational photography feels ok in areas like astrophotography, where we don’t literally know exactly what’s there. But it doesn’t feel ok to me in conventional photography, where we do. Maybe it’s related to what we were taught in my early art drawing classes: draw what you see, not what you know is there.

For me personally, “having” to use a device specifically designed to capture an image (i.e., a camera) is a core part of what photography is all about. If / when the process reaches a point where there’s no effort involved and a button press or voice command is all that’s needed for a perfect image, then there no longer IS a process and therefore there’s zero reason to take the image at all.

4 Likes

Larger sensors have slower readouts, which is a big limiter here. This is partly because advanced manufacturing technologies scale nonlinearly in cost with sensor size, which is why stacked BSI sensors (Exmor RS, in Sony’s terms) are exceedingly rare for APS-C and FF but have been standard for smartphones since before the Sony A9 was released.

IIRC Google was doing 60 FPS at full sensor resolution/bit depth 8+ years ago. (Edit: very few camera manufacturers even exceed 30 FPS full-res raw bursts right now.)

There are solutions for burst stacking in post for any camera, such as Tim Brooks’ HDR+ pipeline ( HDR+ Pipeline ) and kunzmi/ImageStackAlignator on GitHub (an implementation of Google’s Handheld Multi-Frame Super-Resolution algorithm from the Pixel 3 and Pixel 4 cameras), but these often choke on excessive movement from frame to frame, a consequence of the low frame rate of most larger cameras. Rotation, especially, causes the HDR+ tiled align-and-merge to derp up pretty badly.
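To make that failure mode concrete, here’s a toy NumPy sketch of a tiled align-and-merge (my own illustration, not code from HDR+ or ImageStackAlignator; the tile size, search radius and rejection threshold are made-up numbers). Each reference tile is matched against a small translational search window in every other frame, so once the inter-frame motion or rotation exceeds that window, tiles stop matching and the merge falls apart:

```python
import numpy as np

TILE = 16      # tile edge in pixels (illustrative value)
SEARCH = 8     # max displacement searched per tile, in pixels (illustrative value)

def align_and_merge(reference, frames):
    """Merge grayscale 'frames' onto 'reference' (all arrays of identical shape)."""
    h, w = reference.shape
    acc = reference.astype(np.float64)
    count = np.ones((h, w))
    for frame in frames:
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                ref_tile = reference[y:y + TILE, x:x + TILE].astype(np.float64)
                best, best_err = None, np.inf
                # Brute-force translational search around the tile position.
                # If the true motion (or rotation) exceeds SEARCH pixels, no
                # candidate matches well and the tile is rejected, so fewer
                # frames get merged there.
                for dy in range(-SEARCH, SEARCH + 1):
                    for dx in range(-SEARCH, SEARCH + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - TILE and 0 <= xx <= w - TILE:
                            cand = frame[yy:yy + TILE, xx:xx + TILE].astype(np.float64)
                            err = np.mean((cand - ref_tile) ** 2)
                            if err < best_err:
                                best, best_err = cand, err
                if best is not None and best_err < 25.0:  # arbitrary rejection threshold
                    acc[y:y + TILE, x:x + TILE] += best
                    count[y:y + TILE, x:x + TILE] += 1
    return acc / count  # per-pixel average of whatever survived alignment
```

The real pipeline aligns coarse-to-fine on an image pyramid and merges in the frequency domain, but the limited search range is the same basic reason big frame-to-frame motion from slow bursts hurts so much.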

@priort IIRC Google didn’t start implementing MFSR until the Pixel 4, and that was only for Night Sight mode; newer devices use MFSR instead of the legacy HDR+ tiled align-and-merge for all modes. The Pixel 4 also saves DNGs from the legacy HDR+ pipeline even if the JPEG was done via MFSR, which is REALLY bad if you used any digital zoom at all, since that’s where MFSR really shines.

3 Likes

On the 3 I found the raw useless when you zoom… It seems that, since it’s doing a computational raw, it saves a cropped, computed version of the raw, so the raw files would often be some 900K file if you zoomed in 3 or 4 times… My old Lumia would save the sensor raw and process a digital zoom for the JPG… But now with the computational stuff maybe this is not going to happen… In any case, on my 3a XL the only raw file worth having a go at is one with no zoom…

Yup, that’s exactly what I meant by the DNG from the legacy HDR+ pipeline and not the MFSR pipeline.

HDR+ does a basic tiled align-and-merge, so it isn’t really suited to any form of digital zoom. MFSR performs a technique kinda sorta like pixel shift in some cameras, except it relies on random hand motion for the shift ( Handheld Multi-Frame Super-Resolution ); MFSR is why zoomed-in JPEGs are decent quality. Apple decided to save their demosaiced super-res output when they implemented “ProRAW” (which is just demosaiced DNG 1.6, aka “linear DNG”), but Google, for “compatibility” reasons, saved out crippled DNGs that were, as you’ve pointed out, nearly useless if you used any form of digital zoom.
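In case “relying on random hand motion for the shift” sounds like magic, here’s a deliberately naive Python/NumPy toy of the idea (not the algorithm from the paper; the 2x output grid, the already-known offsets and the nearest-bin accumulation are all my simplifications). Frames whose sub-pixel offsets are known get scattered onto a finer grid, which is exactly the pixel-shift idea, just with the tripod replaced by hand shake plus per-tile motion estimation:

```python
import numpy as np

def naive_superres(frames, offsets):
    """frames: list of (h, w) arrays of one scene; offsets: per-frame (dy, dx) in pixels, assumed in [0, 1)."""
    h, w = frames[0].shape
    acc = np.zeros((h * 2, w * 2))
    hits = np.zeros((h * 2, w * 2))
    for frame, (dy, dx) in zip(frames, offsets):
        # Drop each native-resolution sample into its nearest bin on the 2x grid,
        # displaced by this frame's sub-pixel offset.
        ys = np.clip(np.round((np.arange(h) + dy) * 2).astype(int), 0, h * 2 - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * 2).astype(int), 0, w * 2 - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    filled = hits > 0
    acc[filled] /= hits[filled]  # average where we have samples; unobserved bins stay 0
    return acc

# e.g. four handheld frames whose estimated offsets happen to cover the quarter-pixel phases:
# out = naive_superres([f0, f1, f2, f3], [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)])
```

The real MFSR estimates those offsets per tile from the burst itself and merges with anisotropic kernels and robustness weights, which is why it survives actual hand shake instead of the tidy offsets above.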

I haven’t checked yet to see if Google is saving less-garbage zoomed DNGs in the 7 Pro.

2 Likes

My wife has a 7a and I think it might still be crappy, but I should see if she will let me touch it long enough to check :slight_smile: But the promo stuff for the new 8 and 8 Pro mentions saving 50MP raw files, so maybe they have finally made the leap?? The crazy thing about those new phones is the con job on the AI feature called Video Boost… you let Google process your video frame by frame, using HDR+ or whatever they now call it, on their servers, and then it syncs back to the phone… what an obvious way to mine data from people… getting every frame of video people shoot, if they opt in…

Funny I had just been watching this yesterday when I noticed this topic…

1 Like

I have a Nokia X30, which is kind of basic but decent. You can only shoot raw on the main lens (I think it’s 25mm-ish), but the DNGs are quite good: a good amount of latitude, not much noise reduction applied, and it seems like any multi-shot magic is turned off in raw (“Pro”) mode.

Any zooming is just plain interpolation and can’t match the Google magic, but again, it’s disabled in raw mode.
I did try a fancy Samsung Ultra that was on a heavy discount (but still more expensive) when shopping around; it has a claimed 10x optical zoom and 30x “Space Zoom”, IIRC. It is impressive, but seemed way too computational for my liking, even unzoomed. Odd artifacts when you pixel peep…

I would say it is okay for astrophotography because that is a highly technical discipline and there are only so many ways of capturing images in a data-logging manner. That is, it requires more interpretation of data and more mapping of shapes and colours. Whereas general photography is, well, general, with lots of different, unpredictable subjects, like a human, and more of what you see is what you get.

1 Like

Apparently for the new Pixel 8, only the Pro version is getting the ‘manual’ controls and 50MP DNG output from the GCam app, even though the non-Pro version has the exact same main camera and CPU.

Very well said.

I’m an amateur, but the more I get into photography, the more I realize it’s the process I enjoy, not necessarily the final result. Maybe AI will sooner or later be able to take a shitty phone snapshot and create something beautiful out of it. Maybe even something better than what I would have shot with my proper camera and edited for half an hour. But it is the process I enjoy, and it is what makes every photo a bit more personal.

Furthermore, there is learning of basic principles when you try to shoot and edit your photos properly. Understanding color balance and grading, exposure and so on: these are, I feel, valuable skills. Probably transferable, too. Most good film photographers who wanted to go digital had a big head start.

Speaking of film photographers, we’ve gone through this before, with the digital photography revolution, when film was considered completely obsolete and unnecessary by most. Yet there always were people who enjoyed the actual process despite (or thanks to) its more elaborate nature. Nothing beats the darkroom for many photographers, and I totally get it.

1 Like

But computational photography takes none of that away. If the camera merely produces a higher resolution raw with more dynamic range, I don’t see how different it is from having a camera with a better sensor.

We could argue about how “real” it is, but digital photographs are already separated from reality by a few degrees; if it ends up being imperceptible, what’s the difference?

2 Likes

I think the discussion here involves two paths:

  • enhanced raws
  • lots of other processing applied by the phone to create a JPG.

And so are analog (film) shots. :slight_smile:

2 Likes

I use Gmail, including with my doctor, and share tax documents with my accountant via Google Drive, so videos I make of the cat playing, or of a sudden downpour, are the least of my concerns (and I post those on Facebook, too). The photos and videos I take with my phone are synced to Google Photos, anyway.

2 Likes

And your point is, kofa??

(Did anybody say Cambridge Analytica…?)