Feature request: Image enhancement via Deep Learning

While searching for noise-reduction algorithms, I came across this project on GitHub and thought it would be an amazing feature for working with low-res imagery. My understanding is that it is licensed under the GPL or a similar license, so perhaps it could be implemented in G’MIC (or indeed other open source apps).

Using a database of reference imagery, the algorithm can accurately increase the resolution of pixelated or low res imagery. The results are outstanding. Have a look here:

Would love to hear your thoughts!

Other similar projects:

Thanks bazza for the links. The possibilities look very promising! :grinning:
I’d love to see open source apps champion deep learning features.

In other discussions people have already answered me that they do not want deep learning in it. But we insist :smiley:

Oh, do you have any links to those by any chance?
To an extent I agree that AI should be approached carefully, in that design choices and creativity should be left to the user to decide. But in practical areas like noise reduction and resolution upscaling, I think deep learning can be useful.

https://discuss.pixls.us/search?expanded=true&q=neural%20%23software:gmic

There are several discussions. Honestly, I don’t have the knowledge to program a perceptron, so I couldn’t say how a neural network could be implemented in G’MIC.

@cloudbusting Welcome to the forum! Machine learning has been brought up many times here, but the thing is that it is not the focus or expertise of the developers and participants who frequent this forum. There are projects and communities that do focus on it; e.g., http://opencv.org/.
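
For what it’s worth, classical (non deep learning) denoising is already available there out of the box. A minimal sketch, assuming OpenCV’s Python bindings; the filenames and filter strengths are just placeholders:

```python
# Minimal sketch: OpenCV's built-in non-local means denoiser (no deep learning
# involved). "noisy.jpg" / "denoised.jpg" are placeholder filenames, and the
# strength parameters (10, 10) are example values only.
import cv2

img = cv2.imread("noisy.jpg")
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
cv2.imwrite("denoised.jpg", denoised)
```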

It seems the challenge is resources then, if it requires server processing? Some posters mentioned faster algorithms though. I think the one I linked initially is fast according to their documentation. I’d love to see what @David_Tschumperle has to say :stuck_out_tongue:

Ah I see, thank you @afre. Apologies for being off-topic.

Neural networks tend to be very slow, and the trained models are very heavy. If the problem can be solved another way, it is preferable not to use them.

@cloudbusting Don’t worry about being off-topic. It is your topic after all :wink:. I think most solutions are achievable without the need for excessive measures such as machine learning. Also, if we cannot solve problems using simple measures, applying more complex techniques would definitely confound!

This program does something similar by means of imagick.

If you have a sample image with noise or resolution problems, do share. Tell us (the forum) what your current workflow is like so that we can give you proper advice.

I think that’s probably better suited for another thread someday! :smiley:

I’ve basically started out with film photography, and I was looking for ways to get more resolution and less grain/noise from negative scans when I came across the algorithm I posted.

I figured there must be a way of scanning multiple passes of a negative in order to identify and reduce grain. Given that grain is a physical artifact on the film, I thought a comparison of how the same negative is scanned (perhaps with different light/brightness) could yield an algorithm or something to identify the grain separately from the image. I’m not a developer, so that is as far as my thinking goes!

Once my camera returns from servicing, I’d be happy to provide some negative scans for the community to work with, if there is a solution to this.

I don’t often recommend closed software, but VueScan has a multipass scanning option that may be of some use.

You could also vary the scan brightness then use Hugin to align and blend the images together.
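
Once the passes are aligned (e.g. with Hugin’s align_image_stack), the blend itself can be as simple as a per-pixel mean or median stack, since grain and scanner noise vary between passes while the image does not. A rough sketch in Python; the filenames are placeholders and it assumes the scans are already aligned:

```python
# Rough sketch: stack several aligned scans of the same negative and take the
# per-pixel mean/median so that uncorrelated noise and dust average out.
# scan_0.png, scan_1.png, ... are placeholder filenames.
import numpy as np
from PIL import Image

paths = ["scan_0.png", "scan_1.png", "scan_2.png"]
stack = np.stack([np.asarray(Image.open(p), dtype=np.float32) for p in paths])

mean_img = stack.mean(axis=0)          # averages out uncorrelated scanner noise
median_img = np.median(stack, axis=0)  # more robust to outliers such as dust

Image.fromarray(np.clip(median_img, 0, 255).astype(np.uint8)).save("stacked.png")
```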

As for pixel size of the scanned image, maybe you need a better scanner? I’ve made some pretty large prints from 35mm scans, up to 30x40in that were fairly acceptable.

Hi @paperdigits. I haven’t scanned film yet, but I’ve been doing a lot of research on the topic. I have access to a Hasselblad at a photo gallery, so pixels aren’t a problem so much as the film grain itself. I shall try a few noise reduction methods in the near future and possibly compile some info for the community in the hope of finding a better solution.

Neural network on G’MIC? Well, this would solve GIMP and Krita selection weakness by a huge amount. I hope this happens one day.

Some words about the possibility of having neural network-based methods in G’MIC.
I’ve started studying these kinds of methods, and from what I’ve read so far, what I can say is:

  • G’MIC has everything needed to create artificial neural networks, including convolutional layers, pooling, and so on…
  • Methods relying on neural networks have two main aspects: 1. learning, and 2. evaluation.
    Concerning the evaluation aspect: I’m still not sure how fast G’MIC can be at evaluating a feature using a neural network, particularly if the network is deep. Basically, the evaluation consists of a lot of image convolutions and matrix operations, mostly multiplications (see the sketch after this list). These two are implemented in G’MIC, and are even parallelized, so neural network evaluation might turn out to be fast enough in G’MIC when run on a machine with several cores.
    Concerning the learning phase: it is definitely slow. People writing scientific papers about NNs say it sometimes requires several weeks of training, even with GPU-based convolutions and matrix multiplications. So, even when GPUs are used, it is slow as hell. I don’t expect to have fast learning methods in G’MIC. No way.
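
To give an idea of what that evaluation phase actually involves, here is a toy forward pass written with generic array operations (random placeholder weights, not a trained model); it uses only the convolution + pooling + matrix product primitives mentioned above:

```python
# Toy sketch of the evaluation (forward) pass only: one convolutional layer,
# one pooling layer, one fully-connected layer. Weights are random
# placeholders, not a trained model.
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(x, 0)

img = np.random.rand(64, 64)                          # stand-in input patch
kernels = [np.random.randn(3, 3) for _ in range(4)]   # 4 convolution filters

# Convolution layer: each feature map is a plain 2D image convolution.
feature_maps = [relu(convolve2d(img, k, mode="same")) for k in kernels]

# 2x2 average pooling on each 64x64 feature map -> 32x32.
pooled = [fm.reshape(32, 2, 32, 2).mean(axis=(1, 3)) for fm in feature_maps]

# Fully-connected layer: flatten everything and apply a matrix multiplication.
features = np.concatenate([p.ravel() for p in pooled])  # 4 * 32 * 32 values
W = np.random.randn(10, features.size)                  # placeholder weights
output = W @ features
print(output.shape)                                     # (10,)
```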

So, only if the neural network evaluation phase turns out to be fast enough in G’MIC (and I still cannot tell, because G’MIC does not rely on GPUs for this kind of task) will I maybe be able to implement some of the interesting image processing methods using NNs. At this stage, I can only hope that this is possible.

In any case, this will require a lot of work and testing, so I would say you shouldn’t expect to see such things coming to G’MIC before 2018 at the earliest. All the code for those NN-based algorithms proposed on GitHub relies on external machine learning libraries (often used from Python), which are definitely not easy to integrate into G’MIC. This means that probably the best way to go is to recode those machine learning abilities directly as G’MIC code. This seems to be possible; what I don’t know is whether it will be fast enough.

Anyway, that is something I’d like to explore next year. But it is not as easy as “take some code from GitHub and integrate it as a G’MIC command”.

Thanks for taking the time to give your feedback. The future at least looks exciting for G’MIC. I’ll be sure to stick around for that :slight_smile:

2018? That’s gonna take a while, but I will stick around. 2020 looks like the year every open source graphics program will get noticed.