Some praise for RawTherapee features by Mark Metternich

I just came across this new video by the well-known professional landscape photographer Mark Metternich, who is known for his high-end processing techniques.
He only seems to make use of the capture sharpening and Lanczos-based upsizing features, but he seems blown away, to say the least :smiley:

Okay, this was… Interesting. And pretty much garbage.

Let me preface this by saying that I was already sceptical about anyone who claims RT has “the best” upsizing algo. But boy, after 10 minutes or so of the video I just had to stop watching to take in what was going on. Also, he starts off by saying this is an extremely short video… Well, 30 minutes isn’t particularly short. But hey, I’m commuting so I have time.

Content wise, there is too much unfounded opinion in this video. He claims the internet contains misleading and useless information about pretty much everything (algorithms, their quality, proper testing methods), but he does nothing to properly educate his audience. Instead, he just skips over these criticisms and continues by presenting his opinion as truth.
And then there are claims that a certain monitor pixel pitch is ideal for viewing detail? What the hell? No context whatsoever, no explanation either.

I mean, he doesn’t even acknowledge the fundamental issue with upsizing: creating information out of “nothing” and doing some smart interpolation. He also fails to mention sinc interpolation, which is technically the optimal solution (if I am not mistaken).
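Since sinc interpolation comes up here, a minimal 1-D sketch in NumPy (purely illustrative, not anyone's actual implementation) of why ideal sinc interpolation is "perfect" and yet impractical:

```python
import numpy as np

def sinc_interpolate(samples, positions):
    """Ideal band-limited (sinc) interpolation of a 1-D signal.

    samples:   values taken at integer positions 0..N-1
    positions: fractional positions at which to reconstruct the signal
    """
    n = np.arange(len(samples))
    # Each output value sums over ALL samples, weighted by sinc. This is
    # exactly why true sinc interpolation is impractical: the kernel has
    # infinite support, so every sample contributes to every output pixel.
    return np.array([np.sum(samples * np.sinc(x - n)) for x in positions])

# Since sinc(0) = 1 and sinc(k) = 0 for any other integer k, reconstructing
# at the original sample positions returns the samples unchanged:
samples = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
out = sinc_interpolate(samples, np.arange(5.0))
print(bool(np.allclose(out, samples)))  # True
```

In practice the infinite support is what forces the windowing tricks (like Lanczos) discussed further down the thread.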

Then he starts his RT explanation by highlighting a bug. Great start. His explanation of turning tools on is helpful, but calling it quirky is not. Then he mistakes post-process sharpening for capture sharpening, but mentions deconvolution and bashes Adobe’s Detail slider, pretty much in the same sentence. Uhm, what should I take away from that?

Then it completely breaks down when he insists on changing the Working profile setting. He does not understand three things: that RT internally works with linear RGB data, that the working profile setting in 5.8 is not doing what it says, and that sharpening algorithms in general should never, ever (edit: okay, maybe sometimes) be applied to nonlinear data, to prevent haloing and artefacts (see the lengthy discussion in Quick question on RT Richardson–Lucy implementation, for example).

He also equates quality with processing time, which really isn’t always the case, especially if you’re using RT on macOS with an M1, for which the software isn’t really optimized.

The final comparison part I just skipped. His claims about seeing more detail by pixel peeping at max zoom, while simultaneously intending to print the thing at 60 inches high, make zero logical sense. The viewing conditions are completely different, as is the perception of detail. Pixel differences do not tell you about that.

All in all, I’m happy this guy likes the (pretty generic) Lanczos upsizing algorithm in RT. And he shoots great photos. But his knowledge of image processing and so on is severely compromised and should certainly not be taken as authoritative imo.



Since you are ranting about random info on the internet, I just wanted to comment that it’s more nuanced than that, and there are different opinions floating around. Just saying… :slight_smile:
E.g. an alternative opinion: Why grading in log instead of linear? - #7 by daniele - Discussions - Using ACES - Community - ACESCentral
(some info about the poster here)


Point taken, my source is a random thread on a community board, which is not ideal. And surely things are more nuanced. I’ll make an edit to tone down my criticism.


Hi Roel,
No need to change anything imho. It’s just that sometimes I have the impression that there’s a bit of a “fetish” for linear data, where in reality, if other encodings exist, it’s not just because everybody else is stupid :slight_smile:

Thanks for giving your opinion, I almost expected a response like this :smile:
And I agree with most of the stuff that I understand reasonably well myself.

On the other hand I’m glad that good FOSS tools make their way into and get noticed by the professional industry.
I have a good friend who also works in this industry and obviously has mainly been working with Adobe tools his entire career. When I showed him how I edit photos in darktable, using filmic and diffuse & sharpen, his jaw literally dropped onto the table.
I’m not sure, though, whether more users simply means more “users” and therefore more effort spent dealing with bug and feature requests, or whether it also means more resources to push the projects forward. Maybe by having more people willing to donate to “resident developers”, who could then afford to spend more time on the project?

Here I compare Lightroom, Photoshop, Topaz Gigapixel and RT Lanczos - guess which I think is best? :rofl: :rofl:


You’re correct: theoretically and in the absence of aliasing, sinc interpolation is perfect. It’s almost never used in practice, though, because of difficult technical issues that come up when implementing it.

Lanczos is sinc: any sinc implementation needs to be windowed in some way, since a true sinc function extends infinitely in time/space. Lanczos is just one windowing option, one that happens to have a lot of positive properties - Lanczos resampling - Wikipedia
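To make that concrete, here is the standard Lanczos kernel (sinc windowed by a wider sinc) sketched in NumPy - illustrative only, not RT's actual implementation:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Standard Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    # The sinc(x/a) factor is the "window" that tapers the infinite sinc
    # down to finite support of 2*a samples.
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)

# Like plain sinc, it interpolates exactly: 1 at x=0, (effectively) 0 at
# every other integer offset...
print(float(lanczos_kernel(0.0)))                         # 1.0
print(bool(np.allclose(lanczos_kernel([1.0, 2.0]), 0)))   # True
# ...but unlike plain sinc, only 2*a neighbouring samples contribute to
# each output value, which is what makes it usable in practice.
```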

In general, “Lanczos operating on linear data” is USUALLY best. That said, according to some sources Adobe intentionally does scaling on nonlinear data, because while it fails in some corner cases, it was perceptually found to be better in human subject testing. (My source for that is “I remember Jim Kasson mentioning it in passing on dpreview ages ago”, so that may not be very accurate…)

This happens a lot. I’ve seen people claim that Gerard Undone has claimed that Sony users with 8-bit cameras should shoot S-Log2 and not S-Log3 - and indeed, it’s not too difficult to mathematically show that S-Log3 should NEVER be combined with 8-bit video… Fortunately for Gerard, people have claimed that he said it, but I can’t find a case where he actually did, so I’ll give him the benefit of the doubt and assume those people misinterpreted what he said (probably when he was talking about a Sony that did 10-bit video…)

He does give some context? Skimmed the video yesterday and won’t go looking for timestamps but he did mention that those monitor specs allow you to simulate the viewing of large prints at what he considers appropriate viewing distance. Wasn’t there also a mention of proofing/evaluating sharpening at 50% zoom with the aforementioned monitor setup?

edit: wow almost deleted the comment but managed to get it back…

Now that I’m here I have to say his images are really not my cup of tea. Far into the realms of kitsch.

Yes, but do you understand what he means by that? Is there a good reason why this is the proper way to evaluate images? As far as I have seen, he does not provide it and I cannot think of any reason myself. This is the point of my criticism: he argues that people do it wrong all the time (because they don’t understand) and that there is an obvious good way to do it, but he fails to explain why himself.

His style is certainly annoying, but I took it to mean simply that he had come to his conclusions through experience and experimentation. I gather he’s made a few prints.

What kind of reasons do you expect?

Is there something special about RT’s Lanczos implementation? The algorithm is in gimp and darktable as well, isn’t it?

For this particular point? I would really like to know why 100 pixels per inch, or a 0.25 mm pixel size, is required to “see detail as it really is” (06:14). And what is the “correct viewing distance” that most people don’t know (5:54)?

My guess from basic geometry is that if your pixel pitch is different, you just need to adjust your viewing distance to get the same resolution.
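A rough sketch of that geometry, with made-up example numbers (the 0.25 mm pitch and 50 cm reference distance are illustrative assumptions, not anything from the video):

```python
# For equal perceived resolution, a pixel should subtend the same visual
# angle. For small angles, angle ~ pitch / distance, so the matching
# viewing distance scales linearly with pixel pitch.
ref_pitch_mm, ref_distance_cm = 0.25, 50.0   # ~100 PPI screen at 50 cm
other_pitch_mm = 0.18                         # a denser ~140 PPI screen

matched_distance_cm = ref_distance_cm * other_pitch_mm / ref_pitch_mm
print(round(matched_distance_cm, 1))  # 36.0 - just sit closer to the denser screen
```

Which is exactly the point: there is nothing magical about one particular pixel pitch, as long as you adjust the viewing distance accordingly.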

Not that I am aware of…


I took the point to be that you can view the screen as you view a print. Personally I’ve never found it to quite work to change viewing distance to mathematically fake the experience. I’d be curious to hear from people who’ve tried such a pixel-pitch matching setup and hear what they think. Would be valuable if it works.

Of course his “correct viewing distance”, “proprietary sharpening method” and the rest is just salesmanship for paid content. Something I have very little tolerance for. The way he speaks makes me think he might be neuroatypical so I don’t mind it as much in this instance.

Maybe it’s the demosaic (e.g. dual demosaic) or the capture sharpening that happens earlier?

Perhaps. The only other differences I can think of would be:

  1. Differences in the choice of the value “a” in the algorithm - e.g. choosing a=3 is more costly computationally but should provide better results. I’d be shocked if RT were using 3 and darktable were using 2, though. Maybe this guy simply hasn’t tried darktable?
  2. GIMP often does all operations on gamma-encoded data
  3. Not even sure if RT is doing rescaling in the part of the pipeline that is linear-encoded vs. the part that is gamma-encoded. Probably something to dig into, if it’s the part that is gamma-encoded then it’s a candidate for moving it (or at least having the option to move it) in the future
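On point 1, the cost difference between a=2 and a=3 is easy to quantify - a back-of-the-envelope sketch, not profiled against either app:

```python
# A 2-D Lanczos-a resample reads a (2a) x (2a) window of source pixels per
# output pixel. Separable implementations reduce the arithmetic, but the
# support still grows linearly with a per axis.
for a in (2, 3):
    taps = (2 * a) ** 2
    print(f"a={a}: up to {taps} source pixels per output pixel")
```

So a=3 roughly doubles the per-pixel work versus a=2 (36 vs. 16 taps), which is why some implementations default to the cheaper kernel.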

I assumed that the video guy just stumbled on RT, found it to be good, and went from there. Lanczos is in a lot of software, and seems to be a freely available algorithm, but I just thought I’d ask in case someone baked in some extra goodness.

I’m not a big fan of Mark’s, but he is essentially correct, and I’ve done videos on this myself.

Firstly, when you view a print hanging on a wall, to take it all in, you will be viewing it at between 1.5x and 2.5x the diagonal of the printed image. Only idiots put their noses up to a print!

Modern printers usually print at a maximum of 5760 dots per inch, and 1440 dpi is considered a fairly low-resolution print.
Just because the image might be stored ‘in archive’ at 300 PPI (not DPI), the majority of fools on the internet seem to think that printers PRINT at 300 dpi - like I said - FOOLS. I have yet to come across a printer driver that will accept such a low setting as 1 pixel = 1 dot.

But; even in archive form at 300 PPI, you are viewing the image on your monitor at a MUCH LOWER resolution. I use a dedicated photography monitor, a 27" Eizo ColorEdge, and this device has a resolution of 109 PPI.

So in essence, I’m viewing the image at roughly 1/3rd of its native resolution!

If I open the 300 PPI image in Photoshop and turn on my rulers set to inches, I can instantly see that those “photoshop inches” are indeed nearly 3 inches long if I hold a ruler up to the screen.

Photoshop has a function called “view at print size”, and in the preferences you have the ability to enter your monitor resolution. Do this, and when I personally click ‘view at print size’ my view magnification drops, not to 50%, but to 36.33%.

This magnification shows me what the image would look like printed at its native size AND viewed from my standard screen working distance, which for most folk is around 20 inches.
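For anyone wanting to check that number, the arithmetic is just monitor PPI divided by image PPI:

```python
# "View at print size" magnification = monitor PPI / image PPI.
monitor_ppi = 109   # the 27" Eizo ColorEdge mentioned above
image_ppi = 300     # image prepared for print at 300 PPI

magnification = monitor_ppi / image_ppi
print(f"{magnification:.2%}")  # 36.33%
```

At 100% zoom the same image would appear almost three times larger than the print, which is the "photoshop inches" effect described above.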

Irrespective of Lanczos or Photoshop upsampling, print sharpening done CORRECTLY is done at the print head, and at the print resolution - so you cannot preview it on your monitor. You can certainly emulate it using something along the lines of the PixelGenius plugin, but even then, it’s imperative that you view it AT PRINT SIZE.

MM will NEVER give away ALL of the detail in his workflow, because he wants you to pay him big dollars and attend one of his week-long courses at Nevada Fine Art Printers! Printing is looked upon today as some kind of magic juju skill that requires ‘the force be with you’, and he likes to tempt you into thinking you too can have the force - if you pay him.

But to someone like me who cut their teeth on all this crap years ago in the long-dead pre-press industry, it’s neither magic, juju, nor a force of any kind; it’s just simple common sense that I’m happy to pass on to anyone who cares to listen!