The phones are catching up

Personally, I deeply distrust our megacorporation overlords, and try to keep things out of their reach.

But I honestly do not know if that’s just tinfoil-hatting, or whether it’s even effective, considering that I do run Android and macOS.

Maybe in a few decades we will learn how much of our private life we actually expose that way, and what the ramifications are thereof. It may well be that the adtech bubble will eventually burst, and all that “private” data will turn out entirely useless. Or we will someday wake up in a spystate dystopia. I truly don’t know.

For what it’s worth, my photos go from my phone to my NAS, and from there, together with my camera pictures, into a client-side encrypted off-site backup. Hopefully without touching any FAANG-controlled service on the way. The same goes for my documents and email.

What I meant is that Google and others offer us convenient services for which we pay either directly in money, or in money they generate using the private data we decide to give them:

you let Google process your video frame by frame […] what an obvious way to mine data from people… […] if they opt in

Also, I think emails, web searches and location history are probably a much more important source of marketable data than our photos and videos.

I’m sorry if I was rude, offensive or tactless. This is off-topic anyway; let’s return to the technological aspects of mobile photography and to discussing its creative/convenience aspects.

2 Likes

It’s just funny. They go through the whole promo on the Tensor chip and how it’s the only one that can run Google’s machine-learning routines on the phone, even several in parallel, but for this Video Boost feature they have to process your video in the cloud. That was more my point. I take normal steps for personal security, but for sure I have a Gmail account, use Windows, etc. I don’t post anything about my grandkids; we use internal family resources for that. I just thought it was quite funny to hype the ML in the phone and then tell you about the need to upload for the feature. Oh, and the phone is green and the packaging plastic-free, but all this uploading and downloading uses no power at all, does it? :slight_smile:

1 Like

Check out the latest from Adobe: a project called Stardust. You can select and edit elements with simple text prompts. Remove, change or move any element in the photo. Convert any object to layers if you want… It will soon be very easy to fix or modify any photo. Editing is going to be redefined, like it or not… https://youtu.be/w9WCfa6Rvxc?feature=shared

1 Like

I guess heavy video processing is resource intensive. Maybe it could be run on the phone in the background, but would take a long time, heat up the phone and use too much battery? Compared to the actual processing, the up/download probably consumes minimal energy.

3 Likes

Yeah. Some of the computational photography features are a bit more “cheaty” than others.

Like Google’s Magic Eraser - that’s 100% computational fakery.

Google’s HDR+ tonemapping solution (synthetic exposure fusion) is a pretty common technique, but you don’t have to apply it.
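For the curious, here is a toy sketch of the exposure-fusion idea: blend several differently exposed frames, weighting each pixel by how well exposed it is. This is a simplified, hypothetical version with grayscale frames in [0, 1]; real HDR+ also does alignment and local tonemapping, which are omitted here.

```python
# Minimal Mertens-style exposure fusion sketch (assumption: frames are
# already aligned, grayscale, values in [0, 1]).
import numpy as np

def exposure_fusion(frames, sigma=0.2):
    """Blend differently exposed frames, favouring well-exposed pixels."""
    frames = np.stack(frames)                       # shape (n, h, w)
    # Weight each pixel by how close it is to mid-grey (0.5).
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalise per pixel
    return (weights * frames).sum(axis=0)

# Two synthetic "exposures" of the same scene:
dark = np.array([[0.05, 0.10], [0.45, 0.50]])
bright = np.array([[0.40, 0.55], [0.95, 0.98]])
fused = exposure_fusion([dark, bright])
```

Each output pixel ends up between its darkest and brightest input, pulled towards whichever exposure captured it closest to mid-grey.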

“Fake bokeh” (using depth information to selectively blur the image) is pushing it. In theory it’s done using additional knowledge of the scene (depth information from either multi-sensor parallax or PDAF sites), but because that data has quality issues and its gaps need to be “guessed”, it’s really questionable.
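To illustrate the idea, here is a toy sketch of depth-based selective blur: blur the whole image, then blend sharp and blurred copies by distance from the focal plane. This is a hypothetical simplification; real phone pipelines use per-pixel variable blur and must inpaint holes in the depth map.

```python
# Toy "fake bokeh": blend a sharp and a blurred copy of the image using
# a depth map (assumption: img and depth are same-shaped float arrays).
import numpy as np

def box_blur(img, k=3):
    """Naive box blur via edge-padded neighbourhood averaging."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fake_bokeh(img, depth, focus_depth=0.0, falloff=0.5):
    """Blend sharp and blurred copies by distance from the focal plane."""
    blurred = box_blur(img)
    alpha = np.clip(np.abs(depth - focus_depth) / falloff, 0.0, 1.0)
    return (1 - alpha) * img + alpha * blurred

img = np.array([[0.0, 1.0, 0.0, 1.0]] * 4)
depth = np.zeros((4, 4))
depth[:, 2:] = 1.0   # right half of the scene is "far"
out = fake_bokeh(img, depth)
```

The “near” left half stays pixel-identical to the input, while the “far” right half gets smeared, which is exactly where depth-map errors would show up as wrongly blurred or wrongly sharp regions.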

Multiframe burst stacking isn’t fakery at all in my opinion.
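Burst stacking is also easy to demonstrate: averaging N frames of a static scene cuts random noise by roughly sqrt(N). A minimal sketch, assuming the frames are already aligned (real burst pipelines like HDR+ also align frames and reject outliers):

```python
# Multiframe burst stacking sketch: temporal averaging of noisy frames
# of a static scene reduces noise by about sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
scene = np.full((64, 64), 0.5)                  # true static scene
frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(16)]

stacked = np.mean(frames, axis=0)               # simple temporal average

noise_single = np.std(frames[0] - scene)
noise_stacked = np.std(stacked - scene)
# Expect roughly a 4x noise reduction for 16 frames (sqrt(16) = 4).
```

Nothing is invented here, which is why it doesn’t feel like fakery: every output pixel is a straight combination of real measurements of the same scene.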

Note that the role of AI in most stills is way overblown by people who have not read Google’s whitepapers - for example in the Night Sight pipeline, neural networks only serve to provide an AWB algorithm and rapid preview (because the actual tonemapping cannot be done in realtime).

I scratched my head about the “offline processed” videos from the Pixel 8. One, that’s a HUGE amount of upload/download bandwidth, and a true HDR video should not need any significant processing - I’m guessing that they’re talking about the tonemapped SDR version of the video derived from the HDR input.

5 Likes

Have you heard about the special lunar photography mode on one of the current phones?
Gives amazingly detailed shots of the Moon.
But…
You can take a picture of a white plate at a certain distance, and it will turn it into a full-featured Moon. :rofl:

3 Likes

I have been very pleasantly surprised by my Pixel 6 since buying it last year. I almost feel guilty for using it because I love my X-T5 so much, but there’s no getting round the fact that the phone is just so much easier to carry with you. It’s in my pocket almost all day, which means I can just grab it when a photo opportunity arises. That’s where my X-T5 falls short: I don’t always have it with me.

Looking through my photos from the last few years recently, I have been analysing which are my favourites and why. They seem to fall into two categories: 1) when I have purposefully gone out with the intention of doing photography with my proper camera and lenses; and 2) when I just happened to be in the right place at the right time to capture something beautiful with my phone.

None of this is surprising really, but it has helped me make peace with my phone, knowing that it is sometimes the best tool for the job simply because it is always in my pocket.

A few recent Pixel 6 shots:

5 Likes

We have been removing distractions since the beginning of photography: clone and heal tools, GIMP’s Resynthesizer, and now generative AI. The tools grow more and more sophisticated, but are they all that bad when it comes to removing distracting elements, replacing them with textures from the scene that would have been visible if the removed item had not been there?

It won’t be long before phone cams will fully match or even surpass many dedicated shooters (most of them have probably already surpassed mine).

Using my trusty little Fuji, though, is a very big part of the experience for me – it just wouldn’t be the same using a phone. I don’t really care at all for ‘megapixels-this’, ‘autofocus-that’ or ‘AI-the-other’ – what I have in my bag is more than capable of capturing an absolute masterpiece; the only reason it hasn’t is purely down to me.

1 Like

Pop it in the post and send it my way - problem solved. :wink:

I agree on both counts. No wonder the ‘pocketable compact camera’ is basically dead.
Yet, I find an eye-level viewfinder extremely important – to ensure visibility in bright conditions, for example. Also, I need glasses for reading and for using the phone, but not for hiking; the camera’s EVF has a dioptre adjustment, so it’s easy to use without glasses. Zoom used to be a factor, though as the opening post shows, it’s no longer that important. Finally, bokeh – I don’t like the artificial bokeh, and currently it’s far from perfect.

3 Likes

I fully agree. The viewfinder is very important, and basically the ergonomics in general. I love my Pixel for being in my pocket, but nothing else about the picture-taking process with it excites me in any way. In fact, I rarely use the rear screen on my Fuji; it’s viewfinder all the way for me.

I’ll trade depending on what you have. I’m looking for the Fujinon XF200 F2, so if you have that, it’s a deal!

1 Like

There is a section in this video showing Google using elements from other images it knows about to help with healing, inpainting, outpainting and other regenerative stuff in a given image… I think a big part of their motivation for this Video Boost is to give them the biggest and best image database for all this computational wizardry coming our way… and as they point out, things like text give it problems… https://www.youtube.com/watch?v=bD_HyxHMHPo

I’ll never give up my phone / camera, but only because it’s useful in a brutally utilitarian sense. For a dedicated image-making / image-taking device, the rectangular / flat form factor is hideously bad. Awful. A touch screen is fine for certain things (within limits) but the absence of physical buttons that I can touch in anticipation of shooting (e.g.) is – among many other reasons – a deal killer for me in the context of ‘serious’ photography.

5 Likes

This applies to so many things in my opinion; nothing truly replaces tactile things we can touch. A good example is hardware synthesizers: even if it’s a 100% digital synth, it always feels better to turn a knob with your hand than to move a slider on a screen. (I know there are MIDI controllers that sort of fix that, though.)

3 Likes

On Android you can also trigger the camera shutter with the volume buttons.

1 Like

It’s flimsy and requires two-handed operation versus single-handed for a good camera, though… (But there are phones with good shutter buttons on the right.)

Works single-handed for me (perhaps because the case makes the button a bit harder).

2 Likes