Automate Stair Step Scaling

Thanks for the tip – though I may have done something wrong, as the ‘robidoux’ version is actually quite mushy.

1920x1080 output from darktable (with high quality resampling enabled):

![2022-06-26_12-54-52_P1070339-ext-resize|690x389]

GraphicsMagick command:
gm convert 2022-06-26_12-54-52_P1070339-ext-resize.tif -colorspace RGB -filter Cubic -define filter:B=.37821575509399867 -geometry 1920x1080 -quality 90 2022-06-26_12-54-52_P1070339-ext-resize.jpg

Edit: the ImageMagick output is actually sharp, much like darktable’s version:
magick 2022-06-26_12-54-52_P1070339-ext-resize.tif -colorspace RGB -filter Cubic -define filter:B=.37821575509399867 -distort Resize 1920x1080 -colorspace sRGB -quality 95 2022-06-26_12-54-52_P1070339-ext-resize-im.jpg

One does not need to have a linear output profile, as long as the scaling (which is still a processing step) is done in linear encoding.
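
A minimal numpy sketch of that rule (function names are mine; a naive 2:1 box filter stands in for a real resampler): decode sRGB to linear, average, re-encode.

```python
import numpy as np

# sRGB transfer curve (IEC 61966-2-1); helper names are mine
def srgb_to_linear(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def downscale_2x_linear(srgb_row):
    """Decode to linear, average adjacent pixel pairs (naive 2:1 box filter),
    re-encode. The output can stay in a gamma-encoded profile; only the
    scaling step itself happens on linear values."""
    lin = srgb_to_linear(srgb_row)
    return linear_to_srgb((lin[0::2] + lin[1::2]) / 2.0)

row = np.array([0.0, 1.0, 0.0, 1.0])        # black/white checker row
print(downscale_2x_linear(row))             # ~0.735: the physically correct grey
print((row[0::2] + row[1::2]) / 2.0)        # 0.5: gamma-space average, too dark
```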

On a recent ImageMagick it’s “simply” magick -distort Resize 25%, right? With the cylindrical (EWA) filters, the Robidoux settings are the default, I believe.

Anyway, the golden rule still stands: do it in linear space if you can, and use ‘-distort Resize’ rather than ‘-resize’ when downsizing. If you do use -resize for speed, use Mitchell as the filter.
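
For reference, the Mitchell filter mentioned here is the B = C = 1/3 member of the Mitchell–Netravali BC-cubic family; a small Python sketch of the kernel (function names are mine):

```python
def cubic_bc(x, B, C):
    """Mitchell–Netravali BC-cubic filter family."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
    return 0.0

def mitchell(x):
    return cubic_bc(x, 1/3, 1/3)   # Mitchell's recommended B = C = 1/3

# integer-spaced taps sum to 1, so overall brightness is preserved
print(mitchell(0) + 2 * mitchell(1))   # 1.0 (up to float rounding)
print(mitchell(1.5))                   # small negative lobe -> mild, controlled ringing
```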

Of course, if you are going for the sharpest output and don’t care about artifacts, there are other recommendations. And the world around us uses Lanczos without a second thought and is very happy about it, so :man_shrugging:.

There was a time when I was micro-managing this… I kinda let it go :wink:. You upload your images to platforms which often do their own resizing, so it kinda goes away anyway. Maybe for hosting on my own photo website, where I have control over display resolution, I may think of this.

(As a note, I actually used this on a work-related thing - not photography related - where images of art had to be displayed on an iPad in proper quality. The methods used before me were giving moiré, and we actually got complaints. Using -filter RobidouxSharp -distort Resize for the resizing in a background queue fixed it, and the web designers’ eyes were opened to what ImageMagick can do, instead of seeing it as just ‘a slow thumbnail generator’…)

Maybe. I just used the params from the webpage.
The problem is with GraphicsMagick. It does not support distort, so I used geometry. And if I add -colorspace sRGB for the output, it completely breaks the image:
gm convert 2022-06-26_12-54-52_P1070339-ext-resize.tif -colorspace RGB -filter Cubic -define filter:B=.37821575509399867 -geometry 1920x1080 -colorspace sRGB -quality 95 2022-06-26_12-54-52_P1070339-ext-resize-srgb.jpg

You don’t need the 10 steps. ImageMagick, a single step:
rules-sucks-im

Gimp, also a single step:
rules-sucks-gimp

Robidoux is too blurry. RobidouxSharp is better for most circumstances (unless your image is over sharp).
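
As an aside, the B/C values behind these filter names can be reproduced from the published Robidoux formulas; a quick Python check (the Robidoux B is the filter:B=.37821575509399867 value used in the commands above):

```python
from math import sqrt

# Published constants for the Robidoux cubics
robidoux_B      = 12 / (19 + 9 * sqrt(2))      # ≈ 0.378216 (the filter:B value above)
robidoux_C      = 113 / (58 + 216 * sqrt(2))   # ≈ 0.310892
robidouxsharp_B = 6 / (13 + 7 * sqrt(2))       # ≈ 0.262014
robidouxsharp_C = 7 / (2 + 12 * sqrt(2))       # ≈ 0.368993

# Both lie on the Keys line B + 2C = 1; "sharper" means lower B / higher C
print(robidoux_B)
print(robidoux_B + 2 * robidoux_C)             # 1.0
print(robidouxsharp_B + 2 * robidouxsharp_C)   # 1.0
```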

Great point! Doing more experiments, -distort Resize seems to give more pleasing results than -resize. Further, using -filter LanczosRadius with -distort Resize in one step gives practically the same result as my 10-step method above, making the latter redundant.

It is a slight improvement over the darktable output, almost identical to the LanczosRadius result (see below).

Updated Script (a few different methods, and use of -distort resize):
Magick-Resize-Width.txt (7.3 KB)

Examples from this playraw:

20201122_07.36.44_DSC_3665_01.nef.xmp (13.1 KB)
dt 4

Method:
Export from dt as 16-bit TIFF in linear sRGB, full size.
Resize using the above script: 1600px, JPG at 92 quality, black point compensation.
Results listed in order of sharpness. View full size.

Catrom (with a twist):
*most likely to produce halos, but in my tests hard to notice on real world examples.

10 Step Lanczos:

LanczosRadius:

Krita export using Lanczos 3:

*The above 3 results are nearly identical.

RobidouxSharp:

Gaussian-Blur:

Output direct from darktable using Lanczos3:

Now I challenge you to a (resizing) duel. Can anyone produce better? Use same xmp, output to 1600px jpg at 92 quality, so it is apples v apples.

I wanted to post @snibgo’s old script: Resampling with halo minimization

Resize down with absolutely no artifacts, and then sharpen.
Might also be an interesting method if you’re sensitive to haloing.




In order of increasing sharpness.

Things to note:

  • Your catrom version sits around my version 3. I find it way too sharp to be honest, but that might just be the crunchy nature of the image.
  • Your ‘10 step’ version sits just below my version 2.
  • Your ‘10 step’ version shifts the pixels downwards. If you try to overlay them, they don’t match up anymore. It’s just 1 pixel or so, so not important at all. Just something to note.
  • I see a color-difference between my export (in the car window is where I notice it) and yours. Darktable differences, colorspace handling differences, :man_shrugging: ?
  • ‘JPEG quality 92’ doesn’t say anything, every JPEG encoder is different. From what I see, haloing differences are probably masked by JPEG artifacts.
  • I would do comparisons of this on 8-bit PNG files, to make sure you are judging the resize output, and not what the JPEG encoder makes of it. Exporting to WebP, HEIF, or JPEG XL can likewise alter your results.

I’m even going so far as to say that the defaults in IM produce an image just as sharp as your 10-step, with just a little less haloing (looking at the car window edges).

So -resize 1600x1600 in linear space, without any filter tweaks or options.

It does a pretty good job but is clearly not as sharp as the 10-step. It is more similar to RobidouxSharp.

Good point. I chose jpg 92 as a more realistic output setting for my usage, but it certainly throws an added element into the mix.

I particularly like your version 2 and 3. Are you willing to share the recipe?

I’m comparing them at 300%, toggling between them. There is no difference in sharpness or small details. Maybe the ringing artifacts in your lanczos-10step give the appearance of more sharpness, but when aligned and toggled there is no difference. To be honest, I expected more difference given how the files were generated :P.
(different version of DT, different imagemagick? different jpeg encoder, lots of scaling steps vs just one, etc…).

Now that I open them in Affinity Photo, align them manually there in layers, and toggle one on/off, I even see there is quite a big leap in sharpness in mine (the ‘def_resize’ version, just -resize in linear space). Around the grill and the headlights of the car there is a big increase in sharpness in mine. I’m thinking you got the files mixed up if you think the lanczos-10step version is sharper here. I do not see any more haloing on mine, but I do see more JPEG artifacts in certain sharp areas. But I’ve used a different JPEG encoder, I’m sure, and since I have more detail there, it’s harder for JPEG to compress.

The ‘def_resize’ version is made as follows:

  • Load the NEF into my custom Darktable build (latest R-Darktable, so 3.9 base)
  • Load your XMP
  • Render out a full-size 16-bit uncompressed TIF, in linear Rec. 2020.
  • magick -quiet darktable-output-file.tif -resize 1600x1600 +profile "*" -profile Rec2020-elle-V4-g10.icc -black-point-compensation -intent relative -profile sRGB-elle-V4-srgbtrc.icc pnm:- | cjpeg-static -quality 92 -outfile output-file.jpg
  • exiftool "-icc_profile<=sRGB-elle-V4-srgbtrc.icc" -overwrite_original output-file.jpg

So I export linear Rec. 2020 full size from darktable, load that into ImageMagick to do the resizing (so it’s still in linear Rec. 2020 at that point), then convert the resized result to sRGB and pipe it into mozjpeg to encode. Finally, I tag the file with the v4 sRGB ICC profile to be sure.

the -1, -2, -3 and -4 files from earlier were @snibgo 's resampleHM script that I’ve linked to.
resamplehm darktable-output-file.tif 1600x1600 d 50 0 0 outputfile.tif
The ‘50’ in that line is the sharpening amount. For 1, 2, 3 and 4 I’ve used 50, 100, 150 and 200 respectively.

I don’t think I’ve changed the script (much). I did alter the gamma assumptions to this:

set colspIn=-set colorspace RGB
set colspOut=-set colorspace RGB

Which basically means ‘assume input is linear space, and assume output is linear space’.
So the script doesn’t do anything with the profile inside darktable’s output file; it just assumes it’s linear RGB (it’s actually linear Rec. 2020, but close enough).

Then the output is written to a TIF file. I then load that tif file into imagemagick to convert the colorspace and pipe it into mozjpeg:
magick output-from-resamplehm.tif +profile "*" -profile Rec2020-elle-V4-g10.icc -black-point-compensation -intent relative -profile sRGB-elle-V4-srgbtrc.icc pnm:- | cjpeg-static -quality 92 -outfile output-file.jpg

I’ve modified the resamplehm.bat file to output the imagemagick commandline that it’s using. And for the sharpening set to 100 (-2 version) this is the oneliner:

magick -quiet 20201122_07.36.44_DSC_3665.tif -alpha off -set colorspace RGB -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 1600x1600 ( -clone 0 -gamma 3 -define convolve:scale=100%,100 -morphology Convolve DoG:3,0,0.4981063336734057 -gamma 0.3333333333333333 ) ( -clone 0 -define convolve:scale=100%,100 -morphology Convolve DoG:3,0,0.4981063336734057 ) -delete 0 ( -clone 1 -colorspace gray -auto-level ) -compose over -composite -set colorspace RGB +depth -compress None DSC_3665_out_2.tif

As I read it (snibgo might give a better explanation if he can still remember it :P)

  • Use the very soft / no-distortion-at-all Cubic EWA filter with a modified ‘c’ parameter to downsize, in linear space.
  • Create a copy of it, apply a gamma of 3 (so you are in a general ‘gamma corrected space’), apply a sharpening filter.
  • Create another copy of the downsized version, apply a sharpening filter, but keep it in linear space.
  • So now you have a linear-downscaled-but-sharpened-in-gamma-3 version, and one linear-downscaled-but-sharpened-in-gamma-1 version.
  • And, oversimplifying a bit: these are overlaid on top of each other / merged to give the output.
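
The sharpening idea in those steps can be sketched with numpy (my own crude stand-in for the DoG convolution, not the script’s exact kernel or geometry):

```python
import numpy as np

def gaussian_blur_1d(x, sigma, radius):
    """Reflect-padded Gaussian convolution (a crude stand-in for
    ImageMagick's -morphology Convolve)."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.convolve(np.pad(x, radius, mode="reflect"), k, mode="valid")

def dog_sharpen(x, sigma=0.5, radius=3):
    """Add a (delta - Gaussian) high-pass back onto the signal, roughly what
    DoG:3,0,0.5 with convolve:scale=100%,100 does in the one-liner above."""
    return x + (x - gaussian_blur_1d(x, sigma, radius))

# a soft edge, sharpened once on linear values and once through a gamma-3 encoding
edge = np.array([0.1, 0.1, 0.1, 0.3, 0.7, 0.9, 0.9, 0.9])
lin_sharp = dog_sharpen(edge)                   # sharpen in linear space
g3_sharp = dog_sharpen(edge ** (1 / 3)) ** 3    # sharpen in gamma 3, back to linear
print(lin_sharp)   # note over/undershoot beyond the original 0.1-0.9 range (halos)
print(g3_sharp)    # the halos land differently than in the linear version
```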

Interesting, very complex! Will try to replicate tomorrow.
Additional sharpening explains how you got them so sharp. All my versions were without it, mainly because the initial experiment was to see which method gave the sharpest results without it. But of course, post-sharpening is valid. A different JPEG encoder, and applying it in Rec. 2020 as opposed to linear sRGB, may also have created slight differences. I see also you put black point compensation before the intent.

I was comparing them at 100% in geeqie and firefox. Zooming to 400% in krita I see what you see. I now think it might have been that 1px difference that created the illusion of extra sharpness to me.

Let’s be absolutely clear, this is how the image should be viewed, and how it should be judged. And then I guess they are all just fine! We’re nitpicking and splitting hairs to the maximum here, sane people wouldn’t do that :wink: .

Exporting JPGs at 100 and viewing the result on multiple display types may also level the playing field. :stuck_out_tongue:

@jorismak’s explanation is accurate. The first copy is free of halos. The second copy, and the mask, have halos. At light/dark edges, halos make the dark side too dark and the light side too light.

The two copies are merged using a grayscale autoleveled version of the resized linear image as a mask.

Where the mask is black, we use pixels from the first copy. Where the mask is white, we use pixels from the second copy. Otherwise, pixels are blended according to the lightness of the mask. Hence the masking composite reduces the over-darkening of the dark side of edges. A more sophisticated method would also reduce the over-lightening.
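
The blend described here can be sketched in a few lines of numpy (toy 1-D values, hypothetical names):

```python
import numpy as np

def masked_composite(soft, sharp, mask):
    """out = (1 - mask) * soft + mask * sharp; mask in [0, 1]."""
    return (1.0 - mask) * soft + mask * sharp

soft  = np.array([0.10, 0.10, 0.50, 0.90, 0.90])   # halo-free resize (first copy)
sharp = np.array([0.10, 0.05, 0.50, 0.95, 0.90])   # sharpened copy with halos

# grayscale, auto-leveled lightness of the resized image as the mask
mask = (soft - soft.min()) / (soft.max() - soft.min())

out = masked_composite(soft, sharp, mask)
print(out)   # ~[0.10, 0.10, 0.50, 0.95, 0.90]: dark-side undershoot removed,
             # light-side overshoot kept, as described above
```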

To see intermediate images, we can insert debugging operations of the form “+write x.png”. Using a sharpening amount of 300, “over-sharpened” to exaggerate halos:

Copy0:
rsd__copy0

Copy1:
rsd__copy1

Mask:
rsd__mask

Result:
x

Here, I show lossless PNG images. JPEG compression tends to create halos, so shouldn’t be used when comparing one algorithm against another. Of course, the overall workflow may require JPEG outputs, so then we need to include that in comparisons of workflows.

I found something strange; the test image is from this playraw: Beneath Giants - How to make it look good.
(darktable’s xmp DSCF9876.RAF.xmp (15.5 KB))

Linear downscaled

gamma downscaled

crop 1:1

The image downscaled in linear gamma looks much too desaturated on my monitor.


How did you do the scaling?

It sounds like they weren’t all in a normal RGB colorspace. If one is, for example, ‘linear Rec. 2020’, but you lose that info somewhere and display it as normal sRGB, it looks desaturated.

It might also be that it just looks that way because a lot more white / highlights are preserved, or something like that. (The same way a denoised version can look darker: the noise that made the darker parts look greyish has been removed.) Just thinking out loud.

If you explain your workflow, I (and others) can maybe poke holes in it if you did something wrong somewhere :wink: .


Thanks, it wasn’t clear but I wasn’t asking for help.
I think this image is good for testing linear vs non-linear resize.

:see_no_evil:

In that case, may I have your input file? I know it’s from a playraw, but that means I first need to run it through darktable, and I may end up with something different.

It’s nice to have a tinker with an image that responds very differently to ‘working space’.

With thanks to @snibgo I again have an updated method.
This time, I take the catrom-spline output from my previous script, and combine it with Robidoux using a mask - catrom-spline for fine details and Robidoux for coarse details.
It seems to give sharpness somewhere between Alan’s v2 and v3, but with fewer halos/dark edges.

Magick-Resize-Composite-Height.txt (6.9 KB)
Magick-Resize-Composite-Width.txt (6.9 KB)

Output:

Exporting Age’s edit as a full-size 16-bit TIFF from dt and using this script.

  1. output profile and export profile in dt both set to linear sRGB:

  2. output profile and export profile in dt both set to sRGB:

There is indeed greater saturation in the non-linear version. I don’t know how to explain it.
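
One possible contributing factor can be sketched numerically: averaging two saturated sRGB pixels in linear light gives a lighter mix than averaging the encoded values, and lighter mixes can read as less saturated. A small numpy sketch (helper names are mine):

```python
import numpy as np

def srgb_to_linear(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

gamma_avg  = (red + green) / 2.0                  # [0.5, 0.5, 0.0]
linear_avg = linear_to_srgb((srgb_to_linear(red) + srgb_to_linear(green)) / 2.0)

print(gamma_avg)    # the darker, "punchier" mix from gamma-space averaging
print(linear_avg)   # ~[0.735, 0.735, 0.0]: lighter, which can read as less saturated
```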
