Image processing, in this context, is taking raw garbage from a camera sensor and preparing it for display, accounting for all the technical and psychophysical parameters, in order to match the visual memory of the scene you shot. Because sensor recordings definitely don’t match that memory at all.
The parameters to be taken into account are sensor metamerism, noise level, sensor-to-cone-response tristimulus matching, CFA patterns, dynamic range mapping, gamut mapping, medium-to-medium illuminant (chromatic) adaptation, plus the reversal of all the optical flaws (noise, moiré, hazing, chromatic aberration, distortion, blur, etc.), and I’m forgetting many of them.
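To make just one of those concrete: the simplest flavour of chromatic adaptation is a von Kries-style diagonal scaling in a cone-like space. Here is a minimal sketch (the Bradford matrix and white points are standard published values; the function name and API are mine, for illustration only):

```python
import numpy as np

# Bradford matrix: maps XYZ to a cone-like LMS space (standard values).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt(xyz, white_src, white_dst):
    """Von Kries-style adaptation: a per-channel gain in LMS.
    Hypothesis of validity: ONE global illuminant for the whole scene."""
    lms = xyz @ BRADFORD.T
    gain = (BRADFORD @ white_dst) / (BRADFORD @ white_src)
    return (lms * gain) @ np.linalg.inv(BRADFORD).T

# Example: adapt a patch from illuminant A (tungsten) to D65 (daylight).
white_A = np.array([1.09850, 1.0, 0.35585])
white_D65 = np.array([0.95047, 1.0, 1.08883])
print(adapt(np.array([[0.5, 0.5, 0.3]]), white_A, white_D65))
```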
To account for these parameters, you have to understand them, and very often make educated guesses to set up your corrections and adjustments, because most of the colour science we have is made of approximated relationships that are valid only under certain hypotheses, and no software can automatically assess whether the conditions of validity of the relationships are met for a particular image (the single-illuminant hypothesis in the sketch above, for instance).
That’s what I call an expert thing: it requires training and experience. Simply because, whenever your image doesn’t meet the conditions of validity of the simplified relationships, you need an extra step of correction to “force-bend” the image into those conditions.
You didn’t understand. Just because the algorithm is simple (say, a couple of additions and multiplications) doesn’t mean that the quantities we are adding/multiplying represent something easy to understand (variance, chromaticity, integral of the spectral sensitivity along a spectrum, …). There is some level of abstraction to handle there. So, how do you expose these in a UI? Do you use non-scary words that will prevent users from googling their actual meaning?
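To illustrate: that scary “integral of the spectral sensitivity along a spectrum” is, numerically, a handful of multiplications and one sum. A sketch with placeholder data (the arrays are made up; real curves come from CIE tables and camera characterization):

```python
import numpy as np

# Placeholder sampled curves, 380-730 nm in 10 nm steps -- made-up data.
wavelengths = np.arange(380, 740, 10)                 # nm
spectrum = np.ones_like(wavelengths, dtype=float)     # light reaching the sensor
x_bar = np.ones_like(wavelengths, dtype=float)        # a spectral sensitivity curve

# The tristimulus integral X = ∫ x̄(λ)·S(λ) dλ, discretized:
X = np.sum(x_bar * spectrum) * 10.0                   # 10 nm sampling step

# The arithmetic is trivial. Knowing what X means, when the sampling is
# fine enough, and what breaks when the curves are estimated -- that isn't.
```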
Then, the problem is, in an image processing pipeline, while most of the algos are relatively simple (except for denoising, frequency splitting, deblurring and such), there are lots of them, each accounting for a particular parameter of your image. So, it’s a complex process, even if built from mostly simple bricks. Same question: how do you expose these controls in a UI?
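As a caricature of what “lots of simple bricks” means, here is a toy pipeline where every step is one or two arithmetic operations, with the genuinely hard parts (demosaicing, denoising) omitted and every parameter value made up:

```python
import numpy as np

def process(raw, wb_gains, cam_to_xyz):
    """Toy pipeline: each brick is trivial arithmetic, yet each one
    carries parameters that someone (or something) must set correctly."""
    img = np.maximum(raw - 64.0, 0.0)   # black level: a subtraction
    img = img * wb_gains                # white balance: 3 multiplications
    img = img @ cam_to_xyz.T            # camera -> XYZ: a 3x3 matrix product
    img = img * 2.0 ** 1.5              # exposure: one multiplication
    img = img / (1.0 + img)             # tone mapping: a toy Reinhard curve
    img = np.clip(img, 0.0, 1.0)        # "gamut mapping" at its crudest
    return img ** (1.0 / 2.2)           # display transfer function

# Made-up usage on an already-demosaiced 3-channel buffer:
raw = np.random.rand(4, 4, 3) * 1024.0
out = process(raw, wb_gains=np.array([2.0, 1.0, 1.5]), cam_to_xyz=np.eye(3))
```

Even this caricature already carries a dozen numbers to set, and it skips every genuinely hard step.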
Lightroom and Capture One have solved that: they simply don’t expose most of the hard stuff, and decide it for you. Problem: again, their colour science relies on approximations valid in a certain range. What do you do when you are out of this range? You are locked out…
Except it doesn’t depend. The things that need to be done to your raw picture to prepare it for display are the same in any case. The colour science to do so is hard to understand, not 100% reliable (and often black magic), and on top of that users may want to impose their own visual preferences and style.
So the choice is between exposing all the params in the GUI, or telling users: “this software is intended to give OK pictures in 80% of the use cases, but screw you if you fall into the remaining 20%”. The scam is that software editors pretend the choice is only about user control versus ease of use. It’s not. The choice is whether you get screwed or not when the default params of the software fail.
If you find a better way, you are better than me. But so far, I have only heard vague political statements about “what should be possible or done” from people who don’t have a clue about what is actually computed in the pipeline, and not a single actual “how to simplify without degrading image quality”.
I get it. GUI should be about users, users are not engineers, good UX is good…
Now, how do I expose the patch-wise covariance threshold between a mask and a guide in a user-friendly way? How do I make an edge-aware wavelet spectrum decomposition UI intuitive? How do I auto-tune a dynamic range mapping so it stays consistent for every possible camera ever made, every possible image setting, and any scaling size?
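For the record, that first one is in the spirit of a guided filter (He et al. 2010): the patch-wise covariance between guide and mask decides how much of the guide’s edges the mask inherits. A simplified 1-D sketch, with made-up names and the `eps` regularizer standing in for that threshold:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a sliding window of radius r (a simple 1-D box filter)."""
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(x, kernel, mode="same")

def guided_filter_1d(guide, mask, r=4, eps=1e-3):
    """Simplified 1-D guided filter. `eps` decides how much guide/mask
    covariance counts as real edge signal vs. noise."""
    mu_g, mu_m = box_mean(guide, r), box_mean(mask, r)
    # Patch-wise (co)variances: means of products minus products of means.
    var_g = box_mean(guide * guide, r) - mu_g * mu_g
    cov_gm = box_mean(guide * mask, r) - mu_g * mu_m
    a = cov_gm / (var_g + eps)     # how strongly the mask follows the guide
    b = mu_m - a * mu_g
    return box_mean(a, r) * guide + box_mean(b, r)

# Toy usage: refine a noisy mask against a sharp-edged guide signal.
guide = np.repeat([0.1, 0.9], 50).astype(float)   # a hard edge
mask = np.clip(guide + 0.2 * np.random.randn(100), 0.0, 1.0)
refined = guided_filter_1d(guide, mask)
```

Now find one slider name and one tooltip that convey what `eps` does to someone who has never heard of covariance.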
Once you look into the specifics, your GUI goodwill vanishes. It’s just a can of worms, and it’s easier to give classes about how to use the complicated software than to try to simplify it in a way that will probably only make it worse.