I very much like taking photos and I take lots of them, but I’m not good at the technical side of photography (I’m the kind of person who mostly uses automatic settings). I have a camera, but I take most photos with a smartphone because it’s the one I have on me; I don’t plan anything, I just photograph things I find interesting. My old smartphone is dying, so I want a new one with the best camera I can find within my parameters.
But I don’t really understand technicalities, so I can’t judge which one is better than others.
The type of photos I make: mostly cityscapes, some landscapes, some sunsets, I don’t really use macro (objects from very close up - only photos of my traditional art sometimes).
Samsung M51 (someone else has, nice camera from what I could tell)
Main camera:
64 MP, f/1.8, 26mm (wide), 1/1.73", 0.8µm, PDAF
12 MP, f/2.2, 123˚ (ultrawide)
5 MP, f/2.4, (macro)
5 MP, f/2.4, (depth)
Front camera:
32 MP, f/2.0, 26mm (wide), 1/2.8", 0.8µm
It stands to reason that the S10’s camera would be the best, but isn’t its quite low megapixel count a cause for concern? Since I had a very old model before, I guess any camera in a new model will be better, but I added information about a smartphone someone I know has, and its camera is pretty nice. I wonder how it compares to the other models on the list? (I can’t buy it because it’s too big for me.)
I tried using this tool: Photo compare Samsung Galaxy A41 vs. Samsung Galaxy S10 - GSMArena.com, and while it’s difficult for me to judge the results, it seems like the S10 is better at colors but smooths out details that the A41 can capture, and the M51 (which is the one I won’t buy) is just the best at everything (there was no XCover Pro to compare).
(Forgot to add: my constraints for the smartphone are mostly size (16cm height is already a lot…), preferably more than 6GB of RAM, at least 64GB of storage (128GB preferred), a somewhat good processor (doesn’t need to be top notch), and I would prefer a minijack.)
If you just use the default camera app, check out the Pixel 5 or the Pixel 4a. The computational photography on both of those cameras/phones is quite nice.
The megapixel count on mobile phones is 99% marketing bullshit. Manufacturers use a technique called pixel binning, and the effective resolution of those sensors is generally 12-16MP.
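The binning arithmetic is simple. A quick sketch (the bin factor and megapixel figures here are illustrative, not exact specs for any particular sensor):

```python
# Rough illustration of pixel binning: a sensor that bins NxN blocks of
# pixels into one output pixel delivers 1/N^2 of the advertised resolution.
def binned_megapixels(advertised_mp: float, bin_factor: int = 2) -> float:
    """Effective megapixels after NxN pixel binning."""
    return advertised_mp / (bin_factor ** 2)

print(binned_megapixels(64))  # a "64 MP" sensor with 2x2 binning -> 16.0 MP
print(binned_megapixels(48))  # a "48 MP" sensor with 2x2 binning -> 12.0 MP
```

So a headline "64 MP" spec with typical 2x2 binning lands right in that 12-16MP effective range.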
You can install GCam (a port of the Google Pixel camera software) on many Android devices. See here: https://www.xda-developers.com/google-camera-port-hub/ (but the list may not be exhaustive or up-to-date). You may also install Open Camera or its fork, HedgeCam 2, for an open-source experience.
As Mica said, don’t worry about the number of advertised pixels; for example, two of the highest-rated devices, the Pixel’s camera and that of the iPhone, have relatively low pixel counts (12 and 16 megapixels for the Pixel 4, 12 for the iPhone, I think). A full-HD (1920x1080) screen has less than 2 megapixels; a 4K (3840x2160) display has about 8.
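To put those numbers in perspective, the display comparison is just multiplying the resolutions:

```python
# Pixel counts of common display resolutions, in megapixels: even a 4K
# display can only show about 8 MP at once, so a 12 MP photo already
# exceeds what you can see on screen at 1:1.
full_hd = 1920 * 1080   # 2,073,600 pixels ≈ 2.1 MP
four_k  = 3840 * 2160   # 8,294,400 pixels ≈ 8.3 MP

print(full_hd / 1e6)    # ~2.07
print(four_k / 1e6)     # ~8.29
```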
Try to make 1-to-1 comparisons. In your screenshot, the ISO, shutter speed and illumination are different.
You also have to keep in mind the features the software may have. For example: the S, A and M Samsung series are different tiers. Isn’t S supposed to be at the top? Why is it smoother than the A? Is it because it is in a certain mode? I can’t answer these questions because I don’t have these devices.
Features can also change: Google later withdrew a mode (night mode?) from a phone in their lineup.
Something to consider, which I struggle with every day: none of the smartphones I’ve had so far allowed choosing the shutter time with auto ISO enabled. I am only able to set everything to manual, or manual ISO with auto shutter time. This makes photographing moderately moving subjects, such as little children in my case, impossible unless the light is 100% noon summer sun.
OpenCamera does not solve this issue.
I hope modern smartphones no longer have this restriction. A couple of years ago I had a look at the Camera2 API and came to the conclusion that the main reason is an API misconception; it is definitely not designed with photography in mind. But that may hopefully have changed.
I’m with Mica on this – the other day I was comparing the results from my Pixel 4a with a brand new iPhone 11 or 12 or something (I can’t keep up with the new models now; my last iPhone was a 4…), and despite the impression that the back of those iPhones makes (a vast agglomerate of lenses, LEDs, etc.), the actual results and operational speed were not that different from my Pixel with its lone tiny camera lens.
Obviously it wasn’t a scientific comparison and I was simply looking at the results on each device’s screen, but for what I do with a phone the Pixel is quite enough (factor in the difference in price too – I guess it’s almost half the price of a recent iPhone).
And yes, I agree with Mica also on the issue of computational photography – it is quite impressive and really makes a difference on smartphones with their tiny sensors and physically constrained cameras. I wouldn’t want it on my Canons or Nikons – I’d rather spend hours fiddling in Darktable – but on smartphones it’s a great plus.
Even if you have good computational photography software in the phone, a good starting point for light collection matters just as much: rather than megapixels, as pointed out already, focus on sensor size and as large an aperture as possible. Google’s Pixel main camera got the balance right, it seems (1/2.55" and f/1.7), and the iPhone is very similar; though the sensor could be larger for both (e.g. 1/1.7" is not unusual these days: Samsung S21, Sony Xperia 1/5 III). I’d also stay away from an overly wide focal length for the main camera: 26-27mm equivalent is OK, 24mm looks too distorted in the corners (looking at you, Sony). OIS is also very desirable.
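If you want a rough feel for what those 1/x" type designations mean: a common rule of thumb is that a 1" type sensor has a diagonal of about 16 mm, and a 1/x" type scales inversely. These designations are nominal, so treat all of this as ballpark only:

```python
# Very rough rule of thumb for "1/x inch" sensor type designations.
# Type sizes are nominal (historical vidicon-tube convention), so these
# are approximations, not exact sensor dimensions.
def approx_diagonal_mm(type_inverse: float) -> float:
    """Approximate diagonal in mm for a 1/type_inverse-inch type sensor,
    assuming a 1" type is about a 16 mm diagonal."""
    return 16.0 / type_inverse

for t in (2.55, 1.7):
    print(f'1/{t}" type: ~{approx_diagonal_mm(t):.1f} mm diagonal')

# Light-gathering area scales roughly with diagonal squared, so a 1/1.7"
# sensor collects about (2.55/1.7)^2 ≈ 2.25x the light of a 1/2.55" one
# at the same exposure settings.
print((2.55 / 1.7) ** 2)
```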
Computational photography is hyped for a very good reason IMO. My first gen Google Pixel with a GCam app in Night Mode captured files that came eerily close to the low light shots from my α7 III. However, these Night Mode captures were super cooked JPEGs which were pretty much unworkable from a post processing standpoint.
This brings me to Apple ProRAW — which is where things get really exciting for me. The critics say: it’s not a real raw file! Which is of course true, as it’s “nothing but” demosaiced data in a rebranded Linear DNG file (a format that has been around forever).
The big difference from the Linear DNGs we used to have is that these are the result of multiple exposures and some computational trickery. Cooked? Yes! But, much better than a “real” (mosaiced) DNG file from your phone. You get some of the pros of computational photography and you also get a very flat file with plenty of editing headroom.
So, if I was truly interested in phone photography (which I’m not), I’d get the iPhone 12, without hesitation. I say that as a LineageOS user that hates iOS (despite having used an iPhone at work for the last 10 years).
That’s interesting. I am strongly against iPhones because of… lots of things, but the most practical aspect, the least “philosophical”, is price. As I said above, I can’t understand spending close to or more than a thousand euros or dollars on a phone – when a Google Pixel or similar does the same things for half the price (and I still see a Pixel as a bit too expensive for me).
However, I did try to work with raw files from my Pixel and they are next to useless and much worse than the jpegs; and as you said, you can’t do much with those jpegs in post either.
So your comment on these pseudo-raws from the iPhone is interesting – it would be nice to see an example, btw.
I certainly hope that google will do the same for the Pixels (or android phones in general).
I have a 3a and I initially thought as @aadm stated about the raw files, but I found that the heavy vignetting made them look bad and hard to edit. RawTherapee will use the gain map from the metadata to correct this with a flat-field correction, which makes a big improvement. In darktable there is no such support, so I cheat and use the lens settings for the Huawei P10; not technically correct, but the raw files look much better IMO.

When I started to really look at the jpgs, the treatment in many photos was way too harsh, and colors were sometimes actually washed out compared to what I could get from the raw. So I will still use the jpg, but surprisingly, after making an ICC profile using the colormatch script with a jpg and raw pair, I get nice results with raw files and find I use them as often as I choose the jpg. I will try to edit this post with some examples if I get some time…
Thank you guys, thank you to everyone who answered here. It’s very useful and educational.
I think I will get S10, since it’s better than others I mentioned and I will get the Pixel’s camera app to get the most out of it.
There is only one thing: I checked the M51 specs again and it seems it has a much bigger sensor than the S10, because it’s 1/1.73" vs 1/2.55". But then I checked the Google Pixel 4a and it has 1/2.55" as well. So is the M51 much better than both the S10 and the Pixel 4a, sensor-wise?
And the 52mm camera that isn’t present in the S10e (which is significantly smaller than the S10, and so more desirable for me) is only 1/3.6". Would that make the S10 much better than the S10e (since it has this additional camera), or not (because the additional camera is quite poor anyway)?
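As a rough sanity check on those type designations (assuming, loosely, that the diagonal of a 1/x" type sensor scales as 1/x; the designations are nominal, so this is only ballpark):

```python
# Relative sensor area for two "1/x inch" type designations, assuming
# diagonal scales as 1/x. Area goes as diagonal squared.
def area_ratio(type_a: float, type_b: float) -> float:
    """Roughly how many times larger a 1/type_a" sensor is vs a 1/type_b" one."""
    return (type_b / type_a) ** 2

print(area_ratio(1.73, 2.55))  # M51 main vs S10 / Pixel 4a main: ~2.2x
print(area_ratio(1.73, 3.6))   # M51 main vs the 1/3.6" telephoto: ~4.3x
```

So yes, by this crude measure the M51’s main sensor has roughly twice the area of the S10’s or Pixel 4a’s; whether that translates into better photos also depends heavily on the processing software.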
Here are two DNGs (HDR+ Enhanced and Night Sight mode) from GCam on a OnePlus Nord. darktable complains about not finding the matrix. They don’t look sharp at the pixel level. They seem to be mosaiced; I don’t know about the bit depth.
Currently ART is the only OSS raw processor that supports the GainMap (lens shading map) included in DNGs from the Pixel and other phones. RawTherapee, darktable, etc. do not support it yet. When it’s included in the DNG file and is not a no-op, it’s very important to apply it, because it’s not just vignetting: it affects each color channel differently. So if it’s not applied, the colors can be correct in the exact center of the image, with a heavy color cast closer to the edges.
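A toy illustration of why a per-channel gain map matters (this is not the actual DNG GainMap opcode, just a made-up shading profile to show the color-cast effect):

```python
import numpy as np

# Simulate lens shading on a flat gray scene: the red channel falls off
# toward the edges more strongly than the blue channel. A plain vignetting
# (luminance-only) correction cannot fix this; a per-channel gain map can.
h, w = 4, 4
raw = np.ones((h, w, 3))                      # flat gray scene, RGB

yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * h + w * w)

shaded = raw.copy()
shaded[..., 0] *= 1 - 0.5 * r2                # red falloff (strong)
shaded[..., 2] *= 1 - 0.2 * r2                # blue falloff (weaker)
# -> edges now have a blue-green cast relative to the center

# The gain map is just the per-channel inverse of the falloff.
gain = np.stack([1 / (1 - 0.5 * r2),
                 np.ones((h, w)),
                 1 / (1 - 0.2 * r2)], axis=-1)
corrected = shaded * gain                     # flat gray again
```

Applying a single luminance gain instead would brighten the edges but leave the red/blue imbalance, which is exactly the cast described above.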
The example in this thread shows how incorrect the colors can be when the GainMap is not applied. In that case it was mistaken for an incorrect color profile.
I think Open Camera saves TIFFs in a DNG container. You can’t extract a jpg preview from an Open Camera DNG, and the tags showed references to TIFF somewhere, I think. They are on average 10MB larger than the DNGs from the native Pixel app, so that might make sense. I may be wrong, but they are definitely not the same kind of DNG.
Just took a quick shot out my open office door… one with the Pixel app and one with Open Camera, which shoots a very flat raw for me…
Yes, without having inspected the files, it’s very likely DNG lossless compression vs. no compression. Open Camera does not compress, while GCam derivatives do. A linear DNG has three channels and a mosaiced Bayer raw file has one, so one can guess the flavor by looking at the size.
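A back-of-the-envelope size estimate, assuming uncompressed 16-bit samples and ignoring previews, thumbnails and metadata:

```python
# Rough uncompressed raw size: samples per pixel differs between a
# mosaiced Bayer DNG (1 sample/pixel) and a linear/demosaiced DNG
# (3 samples/pixel), so the file sizes differ by about 3x.
def uncompressed_mb(megapixels: float, channels: int, bits: int = 16) -> float:
    """Approximate uncompressed raw data size in MB."""
    return megapixels * 1e6 * channels * bits / 8 / 1e6

print(uncompressed_mb(12, 1))  # 12 MP Bayer raw  -> ~24 MB
print(uncompressed_mb(12, 3))  # 12 MP linear DNG -> ~72 MB
```

Lossless compression then shrinks either flavor, which is why the compressed GCam files come out well under these figures.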