Anyone use Upscayl regularly?

Just curious if anyone uses Upscayl for photographic images (or anything else). I came across this post by @lylejk: G'MIC 3.0: A Third Dose to Process Your Images! (Summary of 2 Years of Development) - #13 by lylejk

I tried it on this image - a very heavy crop that someone wanted an 8×12 in print of. It worked fairly well - maybe not miraculously, but it made a real difference in terms of printability. At least if you don’t look too close. :wink:
Before


After

Edit: these files are licensed Creative Commons, By-Attribution, Share-Alike, in case anyone wants to try something out!

I don’t, but I want to see how it works for adding resolution to my 24 MP shots so I can print them at 16×20. I tried some regular resizing, but the results were soft and I wasn’t happy with them.

1 Like

I used to use Upscayl regularly for my work creating posters and print media for clients who had poor snapshots for illustrations. For a while now, though, I’ve been using chaiNNer, an open-source node-based image processor, as a better alternative, since you can use and combine different AI models and also apply other image corrections in a modular way.

Here is an example with combined models for upscaling and face restoration:

Original:

Tafel

Upscaled with face restoration:

There is also an open database of models for different purposes that can be used in chaiNNer:

6 Likes

Wow… that node graph makes my head spin! :smile:

I’ll have to give chaiNNer a try. At least if it works on Windows. I’ll find out. Thanks!

Edit: There is a Windows build… downloading now!

1 Like

Do not worry, this complexity is not necessary. :rofl:

I just quickly loaded one of my pre-made chains where I experimented with several models. One of the models gave nice sharp edges but at the same time made the rest too smooth, while the other preserved details very well but was less sharp at the edges. chaiNNer has a nice edge-detection node, which I used to take the sharpened edges from the first model and the well-preserved detail from the second in the remaining areas.
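In case anyone wonders what that edge-based combination amounts to outside of chaiNNer's node graph, here is a rough numpy sketch of the idea. The gradient-based mask and the function names are my own simplification for illustration, not chaiNNer's actual nodes:

```python
import numpy as np

def edge_mask(img, threshold=0.1):
    """Rough edge detection: gradient magnitude of the luminance,
    scaled into a soft [0, 1] mask."""
    gray = img.mean(axis=2)                # naive luminance
    gy, gx = np.gradient(gray)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.clip(mag / threshold, 0.0, 1.0)

def blend_by_edges(sharp, detailed, threshold=0.1):
    """Use the edge-sharp model's output near edges and the
    detail-preserving model's output everywhere else."""
    mask = edge_mask(detailed, threshold)[..., None]
    return mask * sharp + (1.0 - mask) * detailed

# Two fake "model outputs" of the same size, values in [0, 1]:
rng = np.random.default_rng(0)
sharp_out = rng.random((32, 32, 3))
detailed_out = rng.random((32, 32, 3))
result = blend_by_edges(sharp_out, detailed_out)
```

A real chain would of course run two upscaling models on the same photo first; the point here is just the per-pixel blend driven by an edge mask.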

3 Likes

Just for quick reference:

After installing chaiNNer itself, I recommend installing all the other dependencies:

And if you have a graphics card that supports CUDA, turn it on in the settings as well. This will increase the processing speed enormously.

After that you can assemble a chain. Here, for example, is a simple version for upscaling.

The photo and model are loaded, and the upscaled photo is then output to the viewer and simultaneously saved in the same directory as the original. With Text Append we give the upscaled version a new name, composed of the original name and the name of the AI model:

You can then save this chain as a template.
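For anyone scripting this outside of chaiNNer, the naming scheme described above boils down to something like this small stdlib sketch (the function name and example model name are mine, just for illustration):

```python
from pathlib import Path

def upscaled_name(original: Path, model_name: str) -> Path:
    """Save next to the original, with the model name appended to the
    file stem (mimicking the Text Append idea described above)."""
    return original.with_name(f"{original.stem}_{model_name}{original.suffix}")

out = upscaled_name(Path("photos/tafel.png"), "remacri")
print(out.name)  # tafel_remacri.png
```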

6 Likes

Thanks! That helps. Do you know how much disk space the dependencies need? I’m struggling a little at present - I have a 120GB SSD for my main system and it keeps filling up!

I use the Linux version and accordingly don’t know how much space the dependencies need on Windows. My chaiNNer config directory is about 5 GiB.

It should be noted that I use the integrated version of Python, which is recommended.

You also need space for the different AI models, but they vary in size, from about 60 to 400 MB.

2 Likes

I use automatic1111’s stable diffusion webui, and it comes with a handy tab for upscaling with various techniques, including ESRGAN (which is what Upscayl uses AFAIK, except it doesn’t let you add custom models). I’ve had some success upscaling crops to make A3 prints, but it needs to be tuned carefully per image.

Sometimes it’s a good idea to upscale to more than you need and then downsize + sharpen with Richardson–Lucy or another algorithm.
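That supersample-then-downsize-and-deconvolve trick can be sketched in plain numpy, grayscale only and with a small Gaussian PSF assumed; real tools use better resampling and colour handling, so treat this as a toy illustration of the pipeline, not anyone's actual workflow:

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    """Small normalized Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def conv2_same(img, k):
    """Sliding-window correlation with edge padding; identical to
    convolution for the symmetric PSFs used here."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(observed, psf, iterations=10, eps=1e-12):
    """Classic multiplicative Richardson-Lucy deconvolution."""
    estimate = np.full_like(observed, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = conv2_same(estimate, psf)
        ratio = observed / (blurred + eps)
        estimate = estimate * conv2_same(ratio, psf_mirror)
    return estimate

def box_downscale(img, factor):
    """Integer-factor downscale by block averaging."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# e.g. a 4x AI upscale when only 2x is needed: downscale by 2,
# then a few RL iterations to recover some crispness.
rng = np.random.default_rng(1)
big = rng.random((64, 64))        # stand-in for the oversized upscale
small = box_downscale(big, 2)
sharpened = richardson_lucy(small, gaussian_psf(5, 1.0), iterations=5)
```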

Another one I found good, but which doesn’t use any machine learning, is G’MIC’s DCCI2x. It can work fairly well on some images, especially for printing.

EDIT: @123sg can I use your image as sort of a “play raw” to see what kind of a result I can achieve?

2 Likes

Certainly!

I’ve added a licence to my OP.

1 Like

Like @hatsnp I use stable diffusion (a1111) and compared numerous different models before settling on three that work best most of the time. I will edit this post with those three a bit later. They are much better than the model used in Upscayl, which is itself an improvement on most non-AI methods. However, this might not be helpful to you if you don’t have much space, as stable diffusion can take up a fair few GB.

EDIT:
Best models are: 4x_foolhardy_remacri, 4x_nickelbackFS_72000_G, 4x_NMKD-Superscale-SP_178000_G
These can be found at: https://openmodeldb.info/
They are .pth, so I guess they should work in chaiNNer.
They are much better than r-esrgan and esrgan as used by @hatsnp below
Haven’t tried 4xFFHQLDAT as seen in @s7habo’s screen shot.

2 Likes

If you have installed and use automatic1111’s stable diffusion, you can also use it in chaiNNer with the advantage of being able to use additional functionality of chaiNNer as well:

2 Likes

Oh wow, this is very compelling, thanks for letting me know. I’ll have a go at installing it later. Sometimes I run some gmic scripts on new images, and this should be perfect for automating that.

1 Like

This is the best I got without putting too much time into it. Upscaled with R-ESRGAN 4x + ESRGAN 4x, then added film grain in gmic to get rid of the smooth textures it left here and there. Let me know if you want the PNG file before any processing - it’s 12 MB, so I won’t upload it here unless necessary :slight_smile:

For comparison, here is a gmic-only try with DCCI2x and Richardson–Lucy. Not very successful to my mind, since the original image didn’t have that many details to work with in the first place.

2 Likes

The first one looks very good - similar detail level to my effort with Upscayl, but more natural looking. Not that impressed with the second - probably not the best starting point for that method.

Thanks!

1 Like

:point_down:

I can also recommend other models from Helaman

2 Likes

Here is 4x_foolhardy_remacri

I actually think slightly blurrier models are better suited to this image. Sharp models like nmkd-superscale-sp and nickelbackFS seem to pick up the JPEG scratchiness a bit too much. Therefore, the improvement over the other ESRGAN models is only slight here.

1 Like

Oh yes, I tested a few of those. His ESRGAN-trained models didn’t yield as good results as those I listed (at least not for photography, though there are many models there for different purposes). Unfortunately, I don’t think his DAT-trained models are supported by a1111 stable diffusion yet. His models for JPEG compression are well suited to this kind of image.

2 Likes

JFI, the original file I posted here is a darktable export at 95% quality (IIRC), so I wouldn’t have expected many artifacts.

But… I did push diffuse or sharpen a bit - do you think a less heavily sharpened image would be a better starting point?

My philosophy is more to use the best model for the job at hand. Those models might work better on other subject matter, so I wouldn’t change your processing to suit the model - I’d change the model to suit the processing. From memory, all the tests I did were on PNG or TIFF, so it could just be that they don’t handle JPEG as well. For me, the best starting point would be an uncompressed image.

1 Like