Automate Stair Step Scaling

To best retain sharpness, my preferred method of downsizing is to do it 10% at a time, otherwise known as stair stepping. Currently, after exporting a full-size TIFF out of darktable, this can be done manually in Krita, but it is quite repetitive and time consuming, and I wonder whether the process can be scripted automatically, in Krita or in other programs such as G'MIC, ImageMagick, GraphicsMagick, etc.

The process is this:
a) ( Export Width - Desired Width ) * 0.1 = Increment Size
b) Export Width - Increment Size = New Image Width
c) New Image Width - Increment Size = New New Image Width
Repeat (c) until Desired Width is reached. Then save as JPEG.

( 3800 - 1600 ) * 0.1 = 220
3800 - 220 = 3580
3580 - 220 = 3360
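The arithmetic above is easy to script. A minimal sketch of just the width calculation (using the numbers from the example, no image processing involved):

```shell
#!/bin/sh
# Sketch of the width arithmetic only -- no image processing here.
# Numbers are taken from the example above.
export_width=3800
desired_width=1600

# a) increment is 10% of the total reduction
increment=$(( (export_width - desired_width) / 10 ))

# b) and c) subtract the increment until the next step would hit the target
width=$export_width
while [ $(( width - increment )) -gt "$desired_width" ]; do
    width=$(( width - increment ))
    echo "$width"
done
echo "$desired_width"
```

Running it prints 3580, 3360, and so on down to 1600, matching the sequence above.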

It would be ideal to retain the tags and metadata of the tiff in the saved jpeg, set parameters for jpeg quality, embed color profile, and set scaling algorithm (lanczos3 preferred).

Can it be done?

Should be easy enough with ImageMagick through a shell script (Linux). Probably possible with G'MIC as well, as you basically need a CLI interface where you set the scaling parameters.
After that, it's a loop, the calculations for which can be done within the script. If I had to attack it, I'd start by hardcoding all the parameters except the start and end size. Once that works, you can add more parameters to the script, and perhaps have it figure out the start size by itself. Metadata (including tags) should be kept by the program doing the resizing.

But how useful is stair step scaling? I’ve heard about it, but haven’t seen any examples showing its use, let alone any benefits compared to “one-step” methods. And do you use 10% of the difference, or at most 10% of the original image size (yes, I know you use 10% of the difference, but that has some strange consequences for small size reductions…).

I don’t know but wouldn’t each step introduce some noise which would accumulate?
Surely there must be a good algorithm that does it all in one go?

I have just tested the process with RT. It can be done with the command line and a partial profile, but:

  • I’ve downscaled from 4944px height to 1018px height, in 90% downscaling steps (not fixed steps as OP asked, but 90% applied to the output of the previous step)
  • I have also downscaled in one step, for comparison
  • algorithm used was Lanczos, no post-resize sharpening applied in any step
  • there’s just a hint of contrast increase in some areas by the stair stepping scaling, but just a hint and it’s not noticeable at 100% (although I’m not sure those aren’t artifacts)
  • there’s a clear increase in typical Lanczos haloing, to a point that renders the result unpleasant, but not quite visible at 100%

So all in all, in my test it doesn’t give any advantage.

You are certainly overestimating my scripting ability, but nice to know it is theoretically possible :smile:

Sometimes the difference is negligible and not worth bothering about. At other times it is quite stark. Depends on the subject matter. Here is a quick recent example:

Exported full size tiff out of dt, resized in krita using stair step method above, saved as jpeg 95% quality:

Test Stair Step

Exported to jpeg 95% quality in darktable (resizing done in export module):

Test Non Stair Step

Both using lanczos3

I haven’t seen it.

I too have come across one or two images where the result was unpleasant. It is rare, but has happened - the result looking over-sharpened, despite no sharpening being added. In those scenarios, downsizing by 20% increments instead of 10% worked better.

Is lanczos the same as lanczos3? I have only tested with lanczos3 and bicubic, preferring lanczos.

It seems to be Lanczos 3.

Well, with enough free time, I think it would be worth comparing the stair stepping with a proper straight downscale + post-resize sharpening.

But there would have to be a really big difference to justify 15 steps and around 250 MB of temporary files from a 17.5 MB raw file.

Part of the reason for doing it is to avoid the need for post-resize sharpening, especially if one uses unsharp mask or something halo-producing.

A 10% downscale as I use makes it 10 steps. The 20% which I have used is just 5.

The difference is not always really big, but on occasion I find the one-step downsize nigh on unusable. It is then that stair stepping is a godsend.

Maybe worth noting I do the downsizing in linear space, and only convert to srgb when complete.
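For what it's worth, if this were done with ImageMagick, one way to keep the resize in linear light is sketched below. Note that in ImageMagick, `-colorspace RGB` means linear RGB; the filenames are just placeholders.

```shell
#!/bin/sh
# Sketch only: convert to linear RGB, resize there, convert back to sRGB.
# input.tif / output.jpg are placeholder filenames.
cmd="magick input.tif -colorspace RGB -filter Lanczos -resize 1600 -colorspace sRGB output.jpg"
echo "$cmd"
```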

@Soupy: Thanks for showing examples. The stair-step method gives slightly greater local contrast than the “normal” method. I can’t see any other effect, positive or negative.

Command-line ImageMagick doesn’t have a facility to repeatedly loop until a condition is true. My unpublished “alfim” (Augmented Language For ImageMagick) does. Of course, so does G’MIC.

With ordinary IM, I would do the job with a shell script that builds a single IM command that resizes for the appropriate number of times.
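That build-one-command approach can be sketched like this (placeholder filenames, hardcoded 10 steps; the script only assembles and prints the command so it can be inspected before running):

```shell
#!/bin/sh
# Sketch: build a single magick invocation containing all intermediate
# -resize steps, then the final exact resize. Filenames are placeholders.
start=3800; end=1600; steps=10
inc=$(( (start - end) / steps ))

cmd="magick input.tif -filter Lanczos"
w=$start
while [ $(( w - inc )) -gt "$end" ]; do
    w=$(( w - inc ))
    cmd="$cmd -resize $w"
done
cmd="$cmd -resize $end output.jpg"

echo "$cmd"
```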

IM doesn’t contain a filter called “Lanczos3”. It does contain Lanczos, Lanczos2, Lanczos2Sharp, LanczosRadius and LanczosSharp, among other filters.

Nicolas Robidoux has published methods for sharp resizing while minimising other effects, especially halos. See Nicolas Robidoux Resampling -- IM v6 Examples. I use this when downsizing for the web; see Resampling with halo minimization.

Maybe worth noting I do the downsizing in linear space, and only convert to srgb when complete.

That makes a difference, of course. Robidoux’s methods use a kind-of “super-linear” colourspace.


OK I have finally completed a script for this using ImageMagick. Many thanks to @snibgo who helped me a great deal.

There are two scripts (complete with instructions for dummies like me):

  1. Magick-Stair-Step-Resize-Width.txt (6.1 KB)
  2. Magick-Stair-Step-Resize-Height.txt (6.1 KB)

EDIT: Scripts updated 2022-07-25 according to @snibgo code below.

Each script will output two images, based on different algos. The outputs of both are very similar, so for the comparisons below I will simply show the output from 2 Step Sinc.

My process was this:
Export each image from darktable with filmic v6 as the final step in the pipeline, and an sRGB output profile - thus filmic's gamut mapping goes to sRGB for all. Example images can be found in the pixls playraw threads.

a) darktable, 1600px, jpg, lanczos3, srgb, quality 92
b) darktable, full size, tif, uncompressed, 16 bit, linear srgb > 2 Step Sinc, 1600px, jpg, srgb, quality 92
c) xmp so they can be compared to appearance in editor. dt 4 required.
No post-resize sharpening.

View the images below full size.



20201122_07.36.44_DSC_3665_01.nef.xmp (12.1 KB)



a-look-at-shadows-DSC_2481.NEF.xmp (12.1 KB)



DSC_5534.nef.xmp (9.2 KB)



greens.and.browns.rw2.xmp (8.8 KB)

As we see, the stair stepping methods are consistently sharper. The only advantage I see for the darktable export is that its file sizes are a few hundred KB smaller - but increasing the JPEG quality to create larger files still does not match the visual quality of the other methods.

Please let me know if you experience any problems with the scripts.


Good stuff.

‘-intent Relative -black-point-compensation’ are settings for -profile, so they need to come before -profile, otherwise they will have no effect.

Your script runs magick multiple times, saving intermediate images to files. We can combine the magick commands for increased performance.

For the 10-step Lanczos method, here is a simple bash script. I have also parameterised the values for the input and output filenames, and the desired output width.



INFILE="$1"
OUTFILE="$2"
endpx="$3"

startpx=$(magick -quiet "${INFILE}" -format %w info:)

echo startpx=${startpx} endpx=${endpx}

magick -quiet "${INFILE}" \
  -filter Lanczos \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize "%[fx:w-(${startpx}-${endpx})*0.1]" \
  -resize ${endpx} \
  -intent Relative -black-point-compensation -profile sRGB.icc \
  -quality 92 \
  "${OUTFILE}"

exiftool -overwrite_original -TagsFromFile "${INFILE}" "-all>exif:all" "${OUTFILE}"

Wonderful! Thank you so much. Have updated the above scripts with this code. Cleaner and faster than mine. They also give slightly superior results to the images posted, I guess because of the adjusted order of operations after -resize

I am curious how these results compare to your own downsizing methods mentioned above?

I show Windows commands. AGA_1837_g.tiff is 7378x4924 pixels. I haven’t published it.

r1.png: simple IM

set SRC=%PICTLIB%20140523\AGA_1837_g.tiff
%IMG7%magick %SRC% -resize 300 r1.png

Takes 11 seconds.
r1 is:


bash ./ %SRC% r2.png 300

Takes 62s.
r2 is:

r3.png: resampHM, default

call %PICTBAT%resampHM %SRC% 300x200 d . . . r3.png

Takes 34s.
r3 is:

r4.png: resampHM, parameter 100

call %PICTBAT%resampHM %SRC% 300x200 d 100 . . r4.png

Takes 34s.
r4 is:

With my aging eyesight, I strain to see significant differences. Looking mostly at the white textured wall, I think r4 has the greatest local contrast, then r2, then r1, then r3. I'm not sure if I can see any difference between r1 and r2, even when both are scaled up to fill my screen. I can't see any halos.

RMSE percentage differences:

    r1  r2    r3   r4
r1  0   0.02  1.7  2.1
r2      0     1.7  2.1
r3            0    1.6
r4                 0
As a rule of thumb, when two photos have a RMSE percent of less than 1.0 that is evenly spread, I could never see any difference.
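For anyone wanting to reproduce numbers like these: ImageMagick's compare prints the RMSE as an absolute value followed by a normalised value in brackets, and the bracketed value times 100 gives the percentage. A sketch with placeholder filenames:

```shell
#!/bin/sh
# Sketch: the command that produces an RMSE figure like those in the
# table above. r1.png / r2.png are placeholders for the two renderings.
cmd="magick compare -metric RMSE r1.png r2.png null:"
echo "$cmd"
# compare writes the metric to stderr, e.g. "123.4 (0.0172)";
# multiply the bracketed value by 100 for the percentage.
```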

Based on this, if I was making images for the web, I would use r4, “resampHM, parameter 100”. But I don’t trust my own eyes, so I would get other opinions first.


I have good eyesight - at least I'm still fairly young and have never required glasses. At a glance I would not have noticed a difference between r1 and r2 in the thumbnails. Comparing those two thumbnails in their own windows at 100%, the difference is apparent, but very subtle. When scaled to 500%, r1, r3 and r4 appear to have really dark, almost crushed black shadows, whereas r2 is lighter, less crushed.

r4 certainly has the most pop, and would be my preferred thumbnail. Viewed at 500% coloured edges (roof and sign) look a bit unnatural - dark on one edge. That is not a problem at this size on this image, which is practically black and white, somewhat disguising halos, but I wonder how it would hold up on a colourful image with larger output.

I think you are correct. I found this: Resampling Filters -- ImageMagick Examples
“By default IM defines the ‘Lanczos’ filter as having 3 ‘lobes’.”
“However a 2-lobed ‘Lanczos2’ filter (Lanczos with a default lobes of 2, added for easy user selection)…”

So I guess 3 lobes = Lanczos3, which is what IM uses as default for Lanczos.
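If you want to pin the lobe count explicitly rather than rely on the default, ImageMagick's expert filter defines allow it. A sketch (placeholder filenames; with lobes=3 this should match the default Lanczos):

```shell
#!/bin/sh
# Sketch: forcing the lobe count via ImageMagick's expert filter define.
cmd="magick input.tif -filter Lanczos -define filter:lobes=3 -resize 1600 output.jpg"
echo "$cmd"
```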

Defaults for ImageMagick have been tuned for quite a while now to strike a good balance between preserving sharpness and preventing haloing, right?
Isn't that what the 'robidoux' filter preset was / is?

Also note there is quite a difference between using -resize and -distort resize.

I've also known / read about a resizing trick to scale both in linear gamma and 'in gamma 3', and then combine / overlay the results. @snibgo had a script for it, from an old dpreview thread.

Or just use a softer resizer (quadratic?) and apply a bit of sharpening / deblurring.

Libvips uses simple pixel averaging, up to the nearest integer factor (so it uses pixel averaging / binning to scale to 2x as small, or 3x as small, or 4x as small, etc., while staying above your target resolution). Then it uses Lanczos for the last precise resize to the target resolution.
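The integer-factor arithmetic behind that two-stage approach is simple. A sketch using the 7378 px wide image from earlier in the thread and a 300 px target:

```shell
#!/bin/sh
# Sketch of the libvips-style plan: bin by the largest integer factor
# that stays at or above the target, then Lanczos for the final resize.
src=7378
target=300

factor=$(( src / target ))   # integer division: largest whole-number shrink
binned=$(( src / factor ))   # width after binning, still >= target

echo "bin ${factor}x: ${src}px -> ${binned}px, then Lanczos ${binned}px -> ${target}px"
```

With these numbers it bins 24x (7378 px to 307 px), leaving only 307 px to 300 px for Lanczos.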

All tricks to try.

Resizing down in small steps is not something I've heard of. Doesn't mean it can't work, of course :wink: But it still doesn't seem likely.

Tweaking the two parameters (b, c) of the resize lobes can give you all the power you need to prevent haloing while trading off a bit of sharpness. I don't really see the need for a complicated down-stepping script.

The section on 'Mitchell-Netravali' in the link posted above (Resampling Filters -- ImageMagick Examples) has a chart that illustrates nicely what the b and c parameters can do. A lot of filters in ImageMagick are just presets for certain b, c parameters. You can tweak away yourself, of course.

But I'm guessing that 'robidoux' or 'robidoux sharp' are fine for everybody.
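As a sketch of the two options (the named preset, versus setting b and c yourself; filenames are placeholders, and b=c=1/3 is just the classic Mitchell-Netravali example from the linked page, not a recommendation):

```shell
#!/bin/sh
# Sketch: preset filter vs. explicit b,c cubic parameters in ImageMagick.
preset="magick input.tif -filter Robidoux -resize 1600 output.jpg"
custom="magick input.tif -filter Cubic -define filter:b=0.3333 -define filter:c=0.3333 -resize 1600 output.jpg"
echo "$preset"
echo "$custom"
```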

Also click through robidoux’s own old article and the section about downscaling: Nicolas Robidoux Resampling -- ImageMagick Examples


What happens if you export at 100%, and resize in a single step using Krita?

One question is, does Krita do the resizing in linear space? darktable uses the target space (fails, unless you use a linear output space), see below.

Try scaling these in the browser – Firefox fails badly:

Exporting at 25% and 50%, darktable fails just like Firefox does. darktable linear vs darktable sRGB output:

Gimp has no problems. I cannot test Krita now.

AFAIK it's all a matter of frequency response. If your scaling algorithm doesn't destroy high-frequency components, your image can be subject to aliasing. Step scaling, by repeated interpolation, destroys the high-frequency components pretty well, but also damages the rest.

One can instead apply a low-pass filter (a Gaussian blur) to the image before doing one single scaling.
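That low-pass-then-resize idea might be sketched as below; the sigma value is an arbitrary placeholder, since choosing it properly depends on the scale factor, and the filenames are placeholders too.

```shell
#!/bin/sh
# Sketch: blur first (low-pass), then one single resize.
# The 0x1.2 sigma is an untuned placeholder value.
cmd="magick input.tif -gaussian-blur 0x1.2 -filter Lanczos -resize 1600 output.jpg"
echo "$cmd"
```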

Keep in mind that one must take care when resizing. Each method has its downsides and upsides. As @Ofnuts noted, repeated application often murders detail and, as others have noted, can cause more issues (halos) than it solves.

As usual, think it through before you do anything in (post-)processing.

Thanks all for the discussion. I will have a more detailed reply tomorrow. For now:

Good pick-up on putting a linear profile in darktable's output module; it is definitely an improvement over sRGB (web-safe). It makes it a glaring omission not to have a linear sRGB profile in that list by default. For these two examples, I see the first one saying 'RULES' and the second saying 'SUCKS'.

What do you think of my 10-step script?