plus alpha blending (occlusion), which is the core of any masking/blending operation and also relies on blur/edge refinements.
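In case that sounds abstract, here is a minimal sketch of what occlusion means in a linear pipe (the function and names are mine, purely for illustration):

```python
import numpy as np

def alpha_blend(fg, bg, alpha):
    # Occlusion as physics: `alpha` is the fraction of light coming from
    # the foreground, the rest passes through from the background.
    # Inputs are assumed to be linear RGB arrays of the same shape.
    return alpha * fg + (1.0 - alpha) * bg
```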
All wrong except grain, and even there the reason lies in chemical diffusion kinetics.
Also, unsharp mask is a nasty end-of-life trick from a time when computers were less powerful than entry-level phones. Try deconvolution, you will love it.
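For the curious, here is a bare-bones Richardson-Lucy deconvolution in numpy/scipy (a textbook sketch, not any particular software's implementation; it assumes the blur kernel/PSF is known and the image is float):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurry, psf, iterations=30):
    # Iteratively estimate the sharp image from a known point spread
    # function (PSF), instead of faking acutance like unsharp mask does.
    estimate = np.full_like(blurry, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurry / np.maximum(reblurred, 1e-12)  # avoid division by 0
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```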
The rest is fake news backed by absolutely nothing, except some Dan Margulis guy who claimed otherwise 20 years ago.
Tone manipulation can be done in a fully linear pipe with log controls; you don’t need a log pipe (see tone equalizer). But since people don’t differentiate view/model/controller in software development, they don’t quite get the subtle difference between applying image operations in some space and controlling the parameters of those operations in some space. You can have log controls, even HSL/HSV ones, and convert to valid RGB before running the algorithm in linear RGB space. That way, you get better ergonomics while preserving the relationships between ratios and gradients in your image.
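A toy example of that separation (the names are mine): the control lives in EV stops, a log2 scale that feels natural to the user, while the model stays a plain multiplication in linear RGB:

```python
import numpy as np

def apply_exposure(rgb_linear, ev):
    # Controller: `ev` is set in log2 space (exposure stops), which is
    # ergonomic for the user. Model: the pixel operation itself is a
    # multiplication in linear RGB, which preserves ratios and gradients.
    return rgb_linear * 2.0 ** ev
```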
I don’t quite get why people still fight over that. Ground truth is physics. Painting is physically accurate. Natural blur is physically accurate. Natural layering/occlusion is physically accurate. There is only one space that is physically accurate: it’s linear. Everything that is not physics is bullshit. Sure, we have perceptual spaces derived from psychophysics: CIELab 1976 sucks because it’s not hue-linear, CIECAM02 sucks because it will push valid colors out of gamut, CIECAM16 sucks because it still doesn’t do HDR, and JzAzBz or IPT-HDR seem OK-ish so far (still not 100% hue-linear), but they are too recent to be sure and they only do mild HDR (200 nits instead of 100).
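You can check the blur claim yourself; a quick sketch (the sRGB transfer functions are the standard ones, the rest is mine and assumes a non-negative H×W×3 float image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def srgb_to_linear(s):
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1.0 / 2.4) - 0.055)

def blur(srgb_img, sigma=2.0):
    # Physically plausible blur: average light (linear values), not codes.
    # Blurring the gamma-encoded values instead darkens high-contrast
    # transitions, because the average of the codes is not the code of
    # the average light.
    linear = srgb_to_linear(srgb_img)
    return linear_to_srgb(gaussian_filter(linear, sigma=(sigma, sigma, 0)))
```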
Doing color work is no excuse for using non-linear spaces. First, because painters have been mixing pigments in scene-referred space for the past 25,000 years with no color issues. Second, because simply slapping a log or a gamma on your RGB doesn’t make your space “perceptual”. Worse, it will skew hues and colorfulness in unpredictable ways. You need a proper color adaptation model for that, which might leave your pixels encoded in 4-6D instead of 3D (check out CIECAM02…).
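A tiny demonstration of the hue skew (toy numbers, HLS hue as a crude proxy for perceived hue):

```python
import colorsys

rgb = (0.8, 0.4, 0.1)                      # a warm orange, linear RGB
enc = tuple(c ** (1 / 2.2) for c in rgb)   # naive per-channel "gamma"

print(colorsys.rgb_to_hls(*rgb)[0] * 360)  # ~25.7 degrees
print(colorsys.rgb_to_hls(*enc)[0] * 360)  # ~33.5 degrees: the hue moved
```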
The conversion light → color is perfect in your brain, so leave it to your brain. Don’t use broken color models in software to try to simulate it. Cameras record light. Displays emit light. So light transport it is, all along: the path of fewest assumptions.
The only operations that work better in non-linear spaces are the ones that were specifically designed to work in non-linear spaces. Reopen your notebooks, design new ones for linear, and you will see that they are both more computationally efficient and more realistic-looking than their old counterparts.