When to scale? How to scale?

I’ve got a couple of questions regarding scaling and I’d be grateful for some input please.

First, I’ve always read that you should sharpen after scaling your image to its final size. But I was wondering: are there any other bits of processing that should only be done after the image has been scaled? Or is it simply that scaling is the last thing you do, and if you’re going to sharpen, you do it after having scaled? Is that correct?

Second, I saw in a thread on sharpening that one user included these steps that related to scaling:

  • Go to G’MIC > Testing > Garagecoder > Unquantize JPEG (reduce edge preservation to 0)
  • To sharpen for a final image size of 900px wide, Image Scale down to 900*1.6667 using Sinc (Lanczos 3) interpolation
  • Finally, Scale Image down to 900px wide using the same interpolator.
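
To check my understanding, here’s roughly what I think those two scaling steps amount to, sketched in Python with Pillow rather than GIMP (the filenames are placeholders, and Pillow’s LANCZOS is its Lanczos-3 resampler):

```python
from PIL import Image

TARGET_W = 900

im = Image.open("input.jpg")  # placeholder filename
w, h = im.size
final_h = round(h * TARGET_W / w)

# Pass 1: scale down to an intermediate width of 900 * 1.6667 (about 1500px).
inter_w = round(TARGET_W * 1.6667)
im = im.resize((inter_w, round(h * inter_w / w)), Image.LANCZOS)

# Pass 2: scale down to the final 900px width with the same interpolator.
im = im.resize((TARGET_W, final_h), Image.LANCZOS)
im.save("output.jpg")
```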

So, my questions are: 1) I assume that you “Unquantize” only if the original image was a JPEG, not a TIFF for example?

And 2) what’s the rationale for scaling twice: once at the final dimension multiplied by 1.6667, and then again at the final dimension itself? To be sure, it worked, and I believe I could see a difference, but I’m not sure why and wanted to understand it better.

Thanks in advance.

Cheers,
Jules


The sharpening after scaling is usually about compensating for losses from the resizing operation (scaling down inherently discards detail), so a final sharpening pass helps to bring back some of that contrast at the final size.
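
In code form, the order of operations is simply resize first, sharpen second. A minimal Pillow sketch, where the unsharp-mask numbers are just illustrative starting points rather than a recommendation:

```python
from PIL import Image, ImageFilter

im = Image.open("input.jpg")  # placeholder filename
w, h = im.size
im = im.resize((900, round(h * 900 / w)), Image.LANCZOS)

# Sharpen *after* the resize, to restore some of the edge
# contrast the downscale averaged away.
im = im.filter(ImageFilter.UnsharpMask(radius=1.5, percent=50, threshold=2))
im.save("output.jpg")
```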

As to @garagecoder and his filter, let’s see if he’ll chime in. :wink:

Many thanks for that - understood - that makes sense.

But what’s the “magic” behind the 1.6667 multiplier and scaling twice?

I just tried it in my editor, the twice-scale vs one-scale+minimumUSMsharpen, and what I see is about the same final resolution but ever-so-slightly less edge contrast in the twice-scale.

Since interpolation essentially produces data that didn’t previously exist, one might surmise that doing it incrementally from source to destination produces a more faithful destination rendition of the source. The value of 1.6667 seems kinda like an “approach vector” in flying: it gets you close to the final glideslope, but it usually won’t lead you to the proper ground intercept point (on the runway, hopefully…)

Thing is, I think the post-resize sharpen is done not so much to actually sharpen the image as to put in some edge contrast, giving the illusion of higher resolution in a smaller image. Thinking about it, while the two techniques seem to produce equivalent results in the nominal case, I’d probably continue to use USM sharpening, because sometimes I want to apply just a bit more of it, and there’d be no similar “knob to turn” with twice-resize.

For reference, here are the two resulting images, appropriately named:

resize-sharpen:
DSG_3111-resizesharpen

twice-resize:
DSG_3111-twiceresize


@jules Interesting. It would be great if you provided a link to the thread.

@ggbutcher The second ref image looks slightly “better”, but that may be because I normally go without post-sharpening, or because matching two different methods isn’t an exact science.

My point exactly: they are two very different manipulations, targeted at a common objective. Thing is, in reduction resizing it’s not about “making sharp” again, it’s about increasing the edge contrast to produce the illusion of resolution, or acutance. The “twice-resize” is probably a better application of interpolation in representing the original data, but I’m more about the illusion… :smiley:


Yes, since our other conversation on post-sharpening, I have found that some -map tones treatment in gmic does a good job of improving perceived sharpness. I will give this 2x scaling a try in the future.

Exactly as you say, Unquantize JPEG is best used only on JPEGs, for two reasons: it specifically targets quantization artifacts (e.g. the “blocky” look, halos), and it uses scale-dependent smoothing (results will vary depending on image size).

Here it was most likely used to round off jagged edges, in which case “Smooth [mean-curvature]” would be a better choice, but be warned it’s also scale dependent. There are plenty of other smoothing filters to choose from in G’MIC, as well as “Upscale [dcci2x]”, which is probably relevant. Worth experimenting!
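
If you just want the general shape of that idea in code (not the actual G’MIC filters), a generic stand-in is a slight smoothing pass before the downscale; here with Pillow’s Gaussian blur and an arbitrary radius:

```python
from PIL import Image, ImageFilter

im = Image.open("input.jpg")  # placeholder filename

# A touch of smoothing rounds off jagged/quantized edges before the
# interpolator sees them. This is scale dependent too: the same radius
# smooths a small image far more strongly than a large one.
im = im.filter(ImageFilter.GaussianBlur(radius=0.8))

w, h = im.size
im = im.resize((900, round(h * 900 / w)), Image.LANCZOS)
```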


Many, many thanks for the replies - very helpful indeed - it’s great when it all starts to make sense.

@afre The post that got me thinking and really prompted my questions was this one: Sharpening Workflows - #2 by patdavid

@ggbutcher Thanks for the images - that’s the kind of difference I saw as well - subtle, but for high-resolution work I thought it might be worth doing the scaling twice. But I have a question for you: when you do your minimumUSMsharpen, what radius/amount/threshold do you use as a rule of thumb for the post-scaling sharpen?

@garagecoder Thanks for the explanation of the unquantize function.

Thanks again - this is very informative.

Cheers,
Jules

You’re welcome - I learned something new here. I’ll call it ‘glideslope resize’, and it’s one I’m going to use when resizing “soft” images, that is, images without a lot of hard lines.

Regarding my sharpening, it’s probably on the order of radius=1.5, amount=the minimum setting available. I use a tool I wrote based on the simplest of sharpening algorithms: a 3x3 convolution kernel. A good visual explanation can be found here:

http://setosa.io/ev/image-kernels/

I think this sharpening is what most folks refer to as Unsharp Mask, although that term originally comes from film: you made a slightly blurry copy of your negative and sandwich-exposed it with the original in the enlarger.

Anyway, the kernel illustrated on that page is a medium-to-high application; if you reduce the center number and the outer numbers by the same proportion, that becomes the “amount” variable. What I’ve found using it for post-resize sharpening is that the first increment provides the biggest marginal benefit, and the margin decreases as you apply higher amounts. So my tool works on a scale of 0-10, where 0 is no-sharpen, and I usually use amount=1. Beyond that, you start seeing edge-ringing.
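
In sketch form, the idea looks something like this (a simplified numpy version, not the actual tool; amount=10 reproduces the kernel from the page above, and lower amounts shrink the center and edge weights by the same proportion, so the weights always sum to 1 and overall brightness is preserved):

```python
import numpy as np
from scipy import ndimage

def sharpen(channel, amount):
    """Sharpen one channel (2D float array, 0..1) with a 3x3 kernel.

    amount runs 0..10; 0 is a no-op, 10 gives the center-5,
    edges-(-1) kernel shown on the setosa.io page.
    """
    a = amount / 10.0
    kernel = np.array([[0.0, -a,       0.0],
                       [-a,   1 + 4*a, -a ],
                       [0.0, -a,       0.0]])
    out = ndimage.convolve(channel, kernel, mode="reflect")
    return np.clip(out, 0.0, 1.0)

# Apply per channel for a color image, e.g.:
# sharpened = np.dstack([sharpen(img[..., c], 1) for c in range(3)])
```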


@ggbutcher Oh, what a rabbit hole image kernels are! :grinning: Hadn’t come across convolution matrix settings until now and that’s been something of a revelation!

Thanks for the USM input - much appreciated.

Just one last query, just out of curiosity: why would you limit the “glideslope resize” to “soft” images? Because it’s not worth doing on others? Or because it could create noise/artefacts?

Thanks again.

Well, it was more “thinking aloud” than any deliberate consideration. The thought was: typically when I resize for posting, I automatically apply sharpen=1, and what I’ve noticed is that for images with few or no sharp edges the benefit is minuscule. So I made a note-to-self to try this approach with my next soft images. However, most of my favorite subjects are mechanisms, steam locomotives in particular, and it occurs to me that resize-sharpen will probably be more beneficial there, where there are lots of nice sharp edges.

With respect to noise, reduction resizing is kind of a cheap denoise, where a noisy patch is reduced to one representative pixel. Still, a proper denoise prior to resize yields better results, in my limited experience.
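
In sketch form it’s the same shape as before, just with a denoise step first (Pillow’s median filter standing in here for a proper denoiser):

```python
from PIL import Image, ImageFilter

im = Image.open("noisy.jpg")  # placeholder filename

# Denoise first, while the noise is still resolved as individual
# pixels, then let the reduction average away what's left.
im = im.filter(ImageFilter.MedianFilter(size=3))

w, h = im.size
im = im.resize((900, round(h * 900 / w)), Image.LANCZOS)
```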

@jules What it boils down to is that your question has taught me a new thing, and I need to mess with it a bit to get a feel for how/when. Really, thanks!

Many thanks again for all the information and input.

Cheers,
Jules