Seeking advice on a replacement camera

Neither do I. When I acquired my first proper camera - a Nikon F in 1962 - I remember it took a good evening to read and understand the manual to the point of being able to use the most important 20% of the camera controls right away. That camera was shortly thereafter transferred, without my permission, to the ownership of somebody unknown and unseen; I replaced it with what I could afford: an Asahi Pentax Spotmatic. That required just a few hours to achieve adequate mastery of the camera.

In contrast, most of the camera manufacturers have become too enamored with their own propaganda, losing sight, almost totally, of being able to communicate effectively with their customers through the Human-Machine interface that is such a key part of camera use.

1 Like

I would only add to this the quick question - why use all the extra features? If you mostly shoot landscapes, you should be able to use it pretty much out of the box.

Set it to A or M or P as you prefer, make sure it’s set to save RAW files and the autofocus is set to single shot.
Find the controls for exposure compensation and MF/AF

Off you go. Never need to touch a menu again!

2 Likes

Unfortunately, unless you get one of the absolute latest top-of-the-line FF cameras, you will run into some limitations, because the framerate of RAW bursts from most APS-C and FF cameras is way below what Google and Apple are getting from their sensors. That’s one of the advantages of small sensors - while stacked BSI (Exmor RS) is rare in APS-C and above (and the RS sensors in the RX100 family are kinda meh at this point), it has been standard in phones for many years. It’s going to be wonderful when we see modern stacked BSI hit APS-C and reasonably priced FF cameras.

That said, you CAN feed bursts to implementations of the legacy HDR+ or MFSR algorithms:
HDR+ Pipeline - but I’ve found that any rotation that isn’t handled by your camera’s IBIS will break the algorithm pretty easily.

GitHub - kunzmi/ImageStackAlignator: Implementation of Google's Handheld Multi-Frame Super-Resolution algorithm (from Pixel 3 and Pixel 4 camera) exists for MFSR but is unmaintained and not user-friendly
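To make “feeding a burst” concrete, here’s a minimal align-and-merge sketch in Python - emphatically not the HDR+ or MFSR code itself, just the general shape of it, with placeholder filenames and a translation-only motion model. The translation-only assumption is also exactly why rotation that IBIS doesn’t remove breaks things:

```python
# Minimal burst align-and-merge sketch (NOT the actual HDR+/MFSR code):
# align every frame to the first one with a translation-only ECC model,
# then average. Rotation that IBIS doesn't remove violates the
# translation-only assumption, and the merge falls apart.
import cv2
import numpy as np

paths = ["burst_00.png", "burst_01.png", "burst_02.png", "burst_03.png"]  # placeholder files
frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths]

reference = frames[0]
accumulator = reference.copy()
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)

for frame in frames[1:]:
    warp = np.eye(2, 3, dtype=np.float32)  # translation-only warp matrix
    _, warp = cv2.findTransformECC(reference, frame, warp,
                                   cv2.MOTION_TRANSLATION, criteria, None, 5)
    aligned = cv2.warpAffine(frame, warp,
                             (reference.shape[1], reference.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    accumulator += aligned

merged = accumulator / len(frames)
cv2.imwrite("merged.png", np.clip(merged, 0, 255).astype(np.uint8))
```

The real pipelines do tile-based alignment and robust merging on the raw Bayer data rather than a global warp and a plain average, but the overall structure - pick a reference, align the rest to it, merge - is the same.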

I did check but sadly, even on the Pixel 7 Pro, RAWs are still mosaiced. I haven’t tested what happens at intermediate zoom settings. That’s a big reason why DNG 1.7 was developed - the ability to efficiently save demosaiced images in the camera-native colorspace. Samsung is saving JXL-compressed DNG 1.7, Apple has been saving “LinearRAW” demosaiced images for a while (DNG 1.6 was developed in partnership with Apple), and Google is still re-mosaicing their images. :frowning:

I’ll disagree with @bastibe as far as shutter lag from stacking. Nowadays most phones are consistently running their capture bursts in the background during preview - so when you hit the shutter, most if not all of the images are already captured. Mobile calls this “zero shutter lag” and it’s existed since before bursting was a thing.
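For anyone wondering what that looks like structurally, it’s essentially a ring buffer that is constantly being filled during preview; a toy sketch of the idea (my own invention, not any vendor’s API):

```python
# Toy illustration of "zero shutter lag": frames are captured continuously
# into a ring buffer during preview, and pressing the shutter simply grabs
# the most recent N frames that were already integrated before the press.
from collections import deque

class ZslBuffer:
    def __init__(self, depth=15):
        self.frames = deque(maxlen=depth)  # oldest frames fall out automatically

    def on_preview_frame(self, frame):
        self.frames.append(frame)          # runs all the time during preview

    def on_shutter_press(self, burst_size=8):
        # The burst handed to the merge step ends at (or just before) the
        # press, so the visible lag is essentially zero.
        return list(self.frames)[-burst_size:]
```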

You’re right that there’s “zero” shutter lag in modern phones, as they start integrating before the shutter is tripped. Thank you for pointing that out. IIRC, that’s a relatively recent innovation. But they still integrate for a quarter of a second regardless of your subject (keyed to the shutter press, so you don’t get too many motion artifacts). I find it quite hard to capture a specific moment on my phone, in a way it isn’t on my camera.

Remember this story? The picture was taken over an extended period of time, and thus showed three points in time in one picture. Quite ingenious, really, but not what was intended. Granted, this was probably taken in pano mode, but in concept it is what every phone does all the time.

IIRC, the Pixel camera captures enough subpixel-aligned shots that it has all three color channels available for each pixel without demosaicing. And then it re-mosaics that image for saving it into the DNG to save space.
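Re-mosaicing is conceptually trivial - for each pixel you keep only the channel the CFA would have sampled at that location. A rough numpy sketch of the idea (assuming an RGGB layout; I’m not claiming this is Google’s actual code):

```python
import numpy as np

def remosaic_rggb(rgb):
    """Collapse an H x W x 3 image into a single-plane Bayer mosaic,
    keeping only the channel each RGGB cell would have sampled."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (red rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (blue rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```

It saves space in the DNG (one plane instead of three), at the cost of throwing away the per-pixel color the merge had already recovered.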

There were some fascinating talks on the subject by Marc Levoy, the former architect of the Google camera app, who pioneered many of these techniques. Last time I heard, the marketing team had taken over camera development, and Levoy left for Adobe.

1 Like

The point is that those features are there only for those who need them. Modern cameras can be used as simply as any film camera, if the user wishes to do so. The difference is that they also give people the option to go beyond that, if they so wish.

2 Likes

On my Pixel 6, shooting in low light still takes more than 2 seconds of capture after pressing the shutter button, often in scenes with some motion, such as a cat. In normal conditions it’s imperceptible and there’s no lag at all.

Re: the ‘any rotation that isn’t handled’ bit: in my uninformed way I have, in my mind, grouped rotation of the camera with movement of the subject under the general heading of ‘ghosting’. However large an error of understanding that is, the result is still the same: there isn’t any acceptable HDR application available to me to use on my ‘somewhat better than entry level’ photographic gear and ‘better than average’ computing resource. I view darktable as the most advanced image processor that is available to me (i.e. FOSS), so I am intrigued as to why there isn’t wider support for this whole topic of MFSR in darktable.

The reference to the HDR+ pipeline (or, as the author confusingly labels it, ‘HDR-pipeline’) was most interesting to read; I even think I understood about 20% of it. It just reinforces my question of why we haven’t got some of this technology, either in our cameras or in a combination of camera and post-processor. I would have thought that the main camera manufacturers, seeing that they have lost the ‘point and shoot’ market to the phone designers/makers, would be looking to add some significant value to the complete range of cameras which are not yet phones - particularly to the ‘high-end’/enthusiast compact cameras which are my personal interest.

Go with a MILC with a decent lens (or lenses). Most recent bodies (from the last 5–6 years) with a micro 4/3 or APS-C sensor are going to be sufficient for your purposes; pair them with a lens that fits the specs. E.g. if you insist on a single lens but still want near-pocketability, I would pick a used micro 4/3 body and something like a 12–60 PanaLeica, which will give you an equivalent focal length of 24–120mm. But there are cheaper “kit grade” lenses that will easily surpass a compact, and similarly lenses with a more limited zoom range or primes that give you good optical quality much cheaper in a smaller package (“pancake” lenses).

If you are OK with changing lenses, then for Panasonic/Olympus I would get

  1. a dedicated prime macro lens if you shoot macro,
  2. the Panasonic 12mm f/2.5 wide-angle prime, combined with a 25mm f/1.8 or similar (the 20mm is nice too); these are small, pocketable walkabout lenses (the 25mm is the largest),
  3. the Panasonic 12-32mm f/3.5-5.6 “pancake” is nice too for a walkabout lens,
  4. complement with the Panasonic Lumix 35-100mm f/4.0-5.6 for tele.

See this channel: https://www.youtube.com/@MicroFourNerds which discusses quite a few models. First, decide whether you need an EVF and IBIS and pick a body based on that; there are lots of choices. Based on your description I would go for small size at the expense of features. Don’t avoid 16 MP bodies or similar - most of the time you don’t need more pixels than that. Sharpness comes from the lens.

The sensor, yes. The lenses on most mobile phones still have to catch up to a decent premium compact.

The “curse” of the camera industry is that image quality comes mostly from lenses, and lenses last for ages and have a high resale value, so once you saturate the market for a particular mount, demand drops. And from a marketing perspective, you cannot make announcements about a shiny new feature that is 300% better than the competition, as improvements are incremental.

The flipside is that there are a gazillion great lenses and bodies out there at reasonable prices, especially used.

1 Like

it’s really similar in a way to rawtherapee’s pixelshift feature. also vkdt kinda has an implementation of this behind a button (select a couple of images and press “low light bracket” in light table mode) to get:


this is a bracket of 4 phone images; the view is a split screen, bracket on the left, plain single image on the right. it’s a $1/\sqrt{n}$ game, so your returns diminish with a growing number of images.

the alignment and masking probably need a bit of refining to yield excellent results robustly in everyday use. i’m not using it much, not sure it’s actually really so much better than denoising, and capturing bursts is a bit more cumbersome. streamlining this use case is probably really best done on the capturing device.
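the $1/\sqrt{n}$ behaviour is easy to verify with synthetic noise, if anyone wants to convince themselves (plain numpy, nothing to do with vkdt internals):

```python
# quick sanity check that averaging n noisy frames reduces the noise
# standard deviation roughly by 1/sqrt(n) (synthetic data, not vkdt code)
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5                     # constant "scene"
sigma = 0.1                      # per-frame noise level

for n in (1, 2, 4, 8, 16):
    frames = signal + rng.normal(0.0, sigma, size=(n, 200_000))
    stacked = frames.mean(axis=0)
    print(n, round(stacked.std(), 4), round(sigma / np.sqrt(n), 4))
```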

2 Likes

Yes, especially since the device usually has sensor information about the movement of the device (camera) body.

Dunno, but I would speculate that since this is a computationally challenging calculation which these days is coupled with ML-powered inpainting, either on device or in the cloud, most photographers have other priorities that are about actual photography, and camera manufacturers compete on that.

At the end of the day, the greatest advantage of your mobile phone for photography is that it is not larger than your palm and slides into a pocket. No camera competes with these features, because of how physics works. OTOH a decent crop-sensor camera can offer features a mobile phone cannot, for the same reasons.

I would suggest that you do not get distracted by algorithmic gizmos if you want to take great photos. The best that you will get out of them is making images like the 99% of images already out there, maybe technically impressive but dead boring.

2 Likes

Yup. That was definitely in pano mode, or faked to level a false accusation at a phone. No phone takes a burst long enough in time for THAT to happen without the individual shots still being blurred (Night Sight). Also, at least for Google’s pipeline, the technician’s assertion that AI was responsible is false. Neural networks are only used for preview and for AWB in Google’s Night Sight implementation.

I’ll need to re-test with my 7, but on my 4, it was worse - it was definitely the legacy algorithm, as the raw image would be severely cropped in a digital zoom scenario, while the JPEG was clearly generated from subpixel super-resolution. DNGs with any digital zoom were useless. I normally frown upon digital zoom, but MFSR is one of the exceptions. Astronomers call their related technique “drizzle”.

There’s a general “unix philosophy” of, rather than have one application that does everything in a mediocre fashion, have multiple specialty applications that excel at their “one thing” and are designed to interoperate well with other tools.

Burst stacking, bracket merging, etc. all fall into this category. Duplicating HDRMerge’s functionality in darktable and RawTherapee would be wasteful, compared to making HDRMerge as good as it can be at its specialty and improving interoperability between HDRMerge and RT/dt - for example, I think years ago dt lacked support for floating-point DNGs, and that support was added primarily with HDRMerge output as the test case. I’ve written my own straight average stacker intended for tripod-mounted usage (no shifting or alignment), and will vehemently oppose putting anything similar into RawTherapee itself, because functions like that should be in dedicated preprocessing tools.
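For illustration, a straight average stacker of that kind really is only a handful of lines - this is a generic sketch using rawpy with placeholder filenames, not the tool mentioned above:

```python
# Straight average of a tripod-mounted raw burst - no shifting, no alignment.
# Generic sketch only; a real tool would write a floating-point DNG so a
# raw processor (RT/dt) can pick up the result.
import numpy as np
import rawpy

paths = ["frame_00.dng", "frame_01.dng", "frame_02.dng", "frame_03.dng"]  # placeholders

accumulator = None
for path in paths:
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)  # Bayer data, still mosaiced
    accumulator = data if accumulator is None else accumulator + data

average = (accumulator / len(paths)).astype(np.float32)
np.save("stacked_bayer.npy", average)  # hand off to a dedicated raw processor from here
```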

As to ghosting - having “same-time” exposures greatly helps here, as does subpixel handling and partial image handling. If you don’t have subpixel handling, you are limited to shifts that cause the same colors of the Bayer array to align - basically 2-pixel shifts with no rotation. This applies to HDRMerge and Tim Brooks’ HDR+ implementation, but not to Google’s MFSR/ImageStackAlignator.
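That constraint is just the 2-pixel period of the CFA; a tiny check makes it concrete (my own illustration, not code from any of the tools named):

```python
# Without subpixel handling, a frame can only be merged if its integer
# offset keeps the Bayer CFA phase intact, i.e. both shift components are
# even (multiples of the 2-pixel CFA period) and there is no rotation.
def cfa_phase_preserved(dx: int, dy: int) -> bool:
    return dx % 2 == 0 and dy % 2 == 0

print(cfa_phase_preserved(2, -4))   # True: reds land on reds, blues on blues
print(cfa_phase_preserved(1, 0))    # False: red sites land on green sites
```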

These algorithms work best on rapid bursts with minimal time in between them.

Most cameras are doing 10-20 FPS, with 30+ at full resolution RAW only becoming a thing in a select few cameras recently. Google was doing 60 FPS at full resolution 8 years ago.

This can’t be changed without drastic price increases. Advanced manufacturing technologies like stacked BSI go up in cost as a nonlinear function of sensor area. This is why Exmor RS (stacked BSI) has been standard for mobile phones for 8+ years, but there are only a small handful of cameras on the market with stacked BSI sensors of APS-C and larger size - and those cameras are frightfully expensive (the A9/A9II launched at around $4500, more than twice the price of similar-resolution unstacked-sensor bodies, the A1 at $6000, etc.). The RX100 family’s stacked sensor is an older one, significantly slower than most phone sensors.

In addition to the manufacturing cost issue, larger sensors simply take more time to shuffle signals/data from one side to the other, so increased physical area tends to automatically lead to reduced throughput/speed.

Also, due to volumes, die-shrinking your ISP on a regular basis to make it faster is a money-losing proposition. Look at how long we had to live with the BIONZ X and its 500-600 megapixel/second throughput limitation before we FINALLY got the XR.

Meanwhile Qualcomm has such insane volumes that they can refresh multiple Snapdragon families on a yearly basis.

1 Like

I don’t understand this quote. When I take an HDR+ picture on my Pixel 6a, and compare it to a raw capture on my Fuji X-T5, it is invariably the Fuji that has more dynamic range.

Could you elaborate on what you are missing?

2 Likes

Having now watched a large part of the YouTube X-T30 functions video produced by Maarten Heilbron, I realise it’s not just menu hell that was/is deterring me from fully exploiting my X-T30. It’s also understanding what some of the menu items mean, in plain language - for example ‘Interlock Spot AE & Focus Area’. Then there is the difficulty of trying to deduce why certain menu items are greyed out. Going through every menu and toggling each line item that can be toggled, to see if that enables a greyed-out menu item, is the work of a lifetime. Just watching a part of the referenced video I noticed the ‘number of focus points’ attribute become enabled and then later disabled again - but I cannot correlate this change with any particular option setting.

A further HMI problem example: I was frustrated by my inability to find the setting which would allow the information on the LCD to be continuously displayed - that took 30 minutes of menu examination and yielded no answer; I finally got it from a now-locked forum question from 2021. As for getting the AE-L and AF-L buttons to deliver the functions stated in the manual (to the extent those functions can be interpreted): I have stopped trying to understand this after years of trying.

And that wretched Q button: if there is one factor above all others that disappoints me about the ergonomics of this camera, it is the Q button and my inability to avoid pressing it at the worst possible moment. I have now sacrificed the information available via that button by assigning it to deliver ‘back button focus’ instead.

Finally, the idea of being able to quickly and reliably select a collection of settings which optimises the camera’s operation for a particular scene quite depresses me. Yes, I’m aware there are custom settings. This just extends the level of confusion: which of the 6 or so custom settings do I use, and can I quickly tell what each of them does?

At last I see a real problem looking for an AI solution (rather than the inverse). I would greatly value an LLM which could quickly determine roughly what scene type I am in and then ask me a very limited set of structured ‘multiple choice’ questions from which it could deduce the most appropriate settings for that scene. Then, as an option, it could explain, in a text file written to my SD card, why it has made those choices, thus allowing me to learn something for future use. Yeah, dream on …

Probably the focus box size, or whether it was in autofocus or manual focus mode.

Edit: Just tested on the X100V and it is indeed the box size.

I watched a portion of this video too, and several times I have thought “cool, wish my X-T20 had that”. Thanks Todd (to be clear, I’m not being sarcastic)!

Here, I think you would benefit by having the camera manual handy. You can’t really expect to have the full detail of information about every setting (i.e., the manual) right in the menu system (well, you could expect that, but you won’t get it :smiley: ). I have my physical manual in my camera bag (not that I’ve used it in the field, I prefer doing that kind of learning at home) and I have a pinned browser tab for the manual on my computers.

That button placement really sucks on the X-T30. Don’t know [what|if] they were thinking there.

You might want to check out some of these resources:

John Peltier is very understandable.

2 Likes

Huhu, when you mentioned the G7X I read GX7 … which is a compact mirrorless interchangeable-lens camera, with a better sensor and a viewfinder, but a bit bigger (not by much, depending on the lens you attach to it) - see the dumb comparison below.
The GX7 has in its favour sensor stabilization, which can be coupled with optical stabilization depending on the lens; the “Lumix G X Vario PZ 14-42mm f/3.5-5.6” would be a sensible choice for such a setup.

1 Like

Thanks for these two links - I’ll follow them up later this evening.

I wonder what affordable means… :slight_smile:

3 Likes

While I see your point, and did this transition myself from MFT to an a7 without regret, an a6100 is still a bit bigger and more expensive than MFT alternatives :slight_smile:

Depends on the pocket size (all meanings included :smiley: ) and use cases, I guess.

Yeah, sure, it will be as affordable, when actually available, as the Porsche Boxster was: UK price speculation was ‘less than 25K pounds’; the price when it first became available was, realistically, at least 50%, and typically 80%, higher than that. So a new Leica will be affordable, provided of course you are employed at a senior level in an investment bank…

1 Like