I just came across this new video by professional landscape photographer Mark Metternich, who is well known for his high-end processing techniques.
He only seems to make use of the capture sharpening and Lanczos-based upsizing features, but he seems blown away, to say the least.
Okay, this was… interesting. And pretty much garbage.
Let me preface this by saying that I was already sceptical about anyone who claims RT has "the best" upsizing algo. But boy, after 10 minutes or so of the video I just had to stop watching to take in what was going on. Also, he starts off by saying this is an extremely short video… Well, 30 minutes isn't particularly short. But hey, I'm commuting, so I have time.
Content-wise, there is too much unfounded opinion in this video. He claims the internet contains misleading and useless information about pretty much everything (algorithms, their quality, proper testing methods), but he does nothing to properly educate his audience. Instead, he just skips over these criticisms and continues by presenting his opinion as truth.
And then there are claims that a certain monitor pixel pitch is ideal for viewing detail? What the hell? No context whatsoever, no explanation either.
I mean, he doesn't even acknowledge the fundamental issue with upsizing: creating information out of "nothing" by doing some smart interpolation. He also fails to mention sinc interpolation, which is technically the optimal solution (if I am not mistaken).
Then he starts his RT explanation by highlighting a bug. Great start. His explanation of turning tools on is helpful, but calling it quirky is not. Then he mistakes post-process sharpening for capture sharpening, but mentions deconvolution and bashes Adobe's Detail slider, pretty much in the same sentence. Uhm, what should I take away from that?
Then it completely breaks down when he insists on changing the Working Profile setting. He does not understand three things: that RT internally works with linear RGB data, that the Working Profile setting in 5.8 does not do what it says, and that sharpening algorithms in general should never, ever (edit: okay, maybe sometimes) be applied to nonlinear data, to prevent haloing and artefacts (see the lengthy discussion in Quick question on RT Richardson–Lucy implementation, for example).
He also equates quality with processing time, which really isn't always the case, especially if you're using RT on macOS with an M1, for which the software isn't really optimized.
The final comparison part I just skipped. His claims about seeing more detail by pixel peeping at max zoom, while simultaneously intending to print the thing at 60 inches high, make zero logical sense. The viewing conditions are completely different, as is the perception of detail. Pixel differences do not tell you about that.
All in all, I'm happy this guy likes the (pretty generic) Lanczos upsizing algorithm in RT. And he shoots great photos. But his knowledge of image processing and so on is severely compromised and should certainly not be taken as authoritative, imo.
Hi,
since you are ranting about random info on the internet, I just wanted to comment that it's more nuanced than that, and there are different opinions floating around. Just saying…
E.g. an alternative opinion: Why grading in log instead of linear? - #7 by daniele - Discussions - Using ACES - Community - ACESCentral
(some info about the poster here)
Point taken, my source is a random thread on a community board, which is not ideal. And surely things are more nuanced. I'll make an edit to tone down my criticism.
Hi Roel,
No need to change anything imho. It's just that sometimes I have the impression that there's a bit of a "fetish" for linear data, where in reality, if other encodings exist, it's not just because everybody else is stupid.
Thanks for giving your opinion, I almost expected a response like this.
And I agree with most of the stuff that I myself understand reasonably well.
On the other hand Iām glad that good FOSS tools make their way into and get noticed by the professional industry.
I have a good friend who also works in this industry and obviously has been working with Adobe tools his entire career. When I showed him how I edit photos in darktable, using filmic and diffuse & sharpen, his jaw literally dropped to the table.
I'm not sure, though, if more users simply means more "users" and therefore more effort to deal with bug and feature requests, or if it also means more resources to progress the projects. Maybe by having more people willing to donate to "resident developers" who can afford to spend more time on the project?
You're correct: theoretically, and in the absence of aliasing, sinc interpolation is perfect. It's almost never used in practice, though, because of difficult technical issues that come up when implementing it.
Lanczos is sinc: any practical sinc implementation needs to be windowed in some way, since a true sinc function goes on for infinite time/space. Lanczos is just one option for windowing that has a lot of positive aspects - Lanczos resampling - Wikipedia
In general, "Lanczos operating on linear data" is USUALLY best. See Gamma error in picture scaling for more on the whole linear-data thing, although, according to some sources, I believe Adobe intentionally does scaling on nonlinear data because, while it fails in some corner cases, it was perceptually found to be better in human subject testing. (My source for that is "I remember Jim Kasson mentioning it in passing on dpreview ages ago", so that may not be very accurate…)
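Since Lanczos is just sinc windowed by a stretched copy of itself, the kernel fits in a few lines. Here is a minimal pure-Python sketch of the idea (an illustration only, not the implementation RT, GIMP, or darktable actually use); `a` is the window half-width:

```python
import math

def lanczos_kernel(x: float, a: int = 3) -> float:
    """Lanczos kernel: sinc(x) windowed by sinc(x/a), zero outside |x| < a.

    A larger 'a' means a wider window (more taps, more computation) and a
    closer approximation to the ideal sinc filter.
    """
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_resample(samples, t: float, a: int = 3) -> float:
    """Resample a 1-D signal at fractional position t (edge-clamped sketch)."""
    lo = math.floor(t) - a + 1
    hi = math.floor(t) + a
    total = weight_sum = 0.0
    for i in range(lo, hi + 1):
        if 0 <= i < len(samples):
            w = lanczos_kernel(t - i, a)
            total += samples[i] * w
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```

For a real image you would apply this separably along rows and then columns, ideally on linear data per the above.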
This happens a lot. I've seen people claim that Gerard Undone has claimed that Sony users with 8-bit cameras should shoot S-Log2 and not S-Log3 - but it's not too difficult to mathematically show that S-Log3 should NEVER be combined with 8-bit video… Fortunately for Gerard, people have claimed that he said it, but I can't find a case where he actually said it, so I'll give him the benefit of the doubt and assume those people misinterpreted what he said (probably when talking about a Sony that did 10-bit video…)
He does give some context? I skimmed the video yesterday and won't go looking for timestamps, but he did mention that those monitor specs allow you to simulate the viewing of large prints at what he considers an appropriate viewing distance. Wasn't there also a mention of proofing/evaluating sharpening at 50% zoom with the aforementioned monitor setup?
edit: wow, almost deleted the comment but managed to get it back…
Now that I'm here, I have to say his images are really not my cup of tea. Far into the realms of kitsch.
Yes, but do you understand what he means by that? Is there a good reason why this is the proper way to evaluate images? As far as I have seen, he does not provide one, and I cannot think of any reason myself. This is the point of my criticism: he argues that people do it wrong all the time (because they don't understand) and that there is an obvious good way to do it, but he fails to explain why himself.
His style is certainly annoying, but I took it to mean simply that he had come to the conclusion through experience and experimentation. I gather he's made a few prints.
What kind of reasons do you expect?
Is there something special about RT's Lanczos implementation? The algorithm is in GIMP and darktable as well, isn't it?
For this particular point? I would really like to know why 100 pixels per inch, or a 0.25 mm pixel size, is required to "see detail as it really is" (06:14). And what is the "correct viewing distance" that most people don't know (5:54)?
My guess from basic geometry is that if your pixel pitch is different, you just need to adjust your viewing distance to get the same resolution.
Not that I am aware of…
I took the point to be that you can view the screen as you view a print. Personally, I've never found it to quite work to change viewing distance to mathematically fake the experience. I'd be curious to hear from people who've tried such a pixel-pitch matching setup and what they think. Would be valuable if it works.
Of course his "correct viewing distance", "proprietary sharpening method" and the rest is just salesmanship for paid content. Something I have very little tolerance for. The way he speaks makes me think he might be neuroatypical, so I don't mind it as much in this instance.
Maybe it's the demosaic (e.g. dual demosaic) or the capture sharpening that happens earlier?
Perhaps. The only other differences I can think of would be:
- Differences in choice of the value "a" in the algorithm - e.g. choosing a=3 is more costly computationally but should provide better results. I'd be shocked if RT were using 3 and darktable were using 2, though. This guy may simply not have tried darktable?
- GIMP often does all operations on gamma-encoded data
- Not even sure if RT is doing rescaling in the linear-encoded part of the pipeline vs. the gamma-encoded part. Probably something to dig into; if it's the gamma-encoded part, then it's a candidate for moving (or at least having the option to move it) in the future
I assumed that the video guy just stumbled on RT, found it to be good, and went from there. Lanczos is in a lot of software, and seems to be a freely available algorithm, but I just thought I'd ask in case someone baked in some extra goodness.
I'm not a big fan of Mark's, but he is essentially correct, and I've done videos on this myself.
Firstly, when you view a print hanging on a wall, to take it all in, you will be viewing it at between 2.5x and 1.5x the diagonal of the printed image. Only idiots put their noses up to a print!
Modern printers usually print at a maximum of 5760 dots per inch, and 1440 dpi is considered a fairly low resolution print.
Just because the image might be stored "in archive" at 300 PPI (not DPI), the majority of fools on the internet seem to think that printers PRINT at 300 dpi - like I said - FOOLS. I have yet to come across a printer driver that will accept such a low setting as 1 pixel = 1 dot.
But even in archive form at 300 PPI, you are viewing the image on your monitor at a MUCH LOWER resolution. I use a dedicated photography monitor, a 27" Eizo ColorEdge, and this device has a resolution of 109 PPI.
So in essence, I'm viewing the image at about a third of its native resolution!
If I open the 300 PPI image in Photoshop and turn on my rulers set to inches, I can instantly see that those "photoshop inches" are indeed nearly 3 inches long if I hold a ruler up to the screen.
Photoshop has a function called "view at print size", and in the preferences you have the ability to enter your monitor resolution. Do this, and when I personally click "view at print size", my view magnification drops - not to 50%, but to 36.33%.
This magnification shows me what the image would look like printed at its native size AND viewed from my standard screen working distance, which for most folk is around 20 inches.
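That 36.33% figure is just the ratio of monitor resolution to image resolution; a one-liner makes the arithmetic explicit (the function name is mine, not a Photoshop API):

```python
def print_size_zoom(monitor_ppi: float, image_ppi: float) -> float:
    """Zoom percentage at which one image inch spans one physical inch on screen."""
    return 100.0 * monitor_ppi / image_ppi

# A 109 PPI monitor displaying a 300 PPI image:
zoom = print_size_zoom(109, 300)  # ~36.33 %
```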
Irrespective of Lanczos or Photoshop upsampling, print sharpening done CORRECTLY is done at the print head and at the print resolution - so you cannot preview it on your monitor. You can certainly emulate it using something along the lines of the Pixel Genius plugin, but even then, it's imperative that you view it AT PRINT SIZE.
MM will NEVER give away ALL of the detail in his workflow, because he wants you to pay him big dollars and attend one of his week-long courses at Nevada Fine Art Printers! Printing is looked upon today as some kind of magic juju skill that requires "the force be with you", and he likes to tempt you into thinking you too can have the force - if you pay him.
But to someone like me who cut their teeth on all this crap years ago in the long-dead pre-press industry, it's neither magic, juju, nor a force of any kind; it's just simple common sense that I'm happy to pass on to anyone who cares to listen!