Machine Learning Library in G'MIC

Some (good) news:

I had some time last week and this weekend to make progress on the G’MIC Machine-Learning library, and I’m very happy to announce that I’ve finally been able to set up a first filter that uses ML for the G’MIC-Qt plug-in!

This new filter is simply named Repair / Denoise. It uses a convolutional neural network to denoise images. It can be found in the latest development version of the G’MIC-Qt plug-in (version 3.0.0_pre, posted yesterday at: Index of /files/prerelease). It looks like this at the moment:

It’s quite a CPU-demanding filter, so do not use it if you don’t have at least 4 cores :wink: Even then, expect long processing times on high-resolution images.

There is also an associated command denoise_cnn that can be used from the command line (e.g. to batch-process several noisy images):

$ gmic sp colorful,256 noise 15 cut 0,255 +denoise_cnn 0

This example renders the following pair of before/after images:
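And for actual batch processing, a simple shell loop around gmic does the job. Here is a minimal sketch (the noisy/ and denoised/ directories are just hypothetical placeholders):

$ mkdir -p denoised
$ # Denoise every PNG in noisy/ and save the result under denoised/ with the same name:
$ for f in noisy/*.png; do gmic "$f" denoise_cnn 0 output "denoised/$(basename "$f")"; done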

The convolutional neural network used in this filter comes in two flavors:

  • One for processing “soft” noise, trained with images where synthetic Gaussian noise has been added to the RGB channels independently (so this is mainly colored noise).
  • One for processing “heavier” noise, trained with images where the synthetic noise has been added to the HSV channels independently (see the sketch after this list).
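For illustration, here is roughly how the two kinds of noisy test images can be produced with standard G’MIC commands. The amplitudes are arbitrary, and I assume here that the argument of denoise_cnn selects the variant (0 for the soft one, 1 for the heavy one); check gmic h denoise_cnn for the actual parameters:

$ # "Soft" colored noise: Gaussian noise (type 0) added to the RGB channels independently.
$ gmic sp colorful,256 noise 15,0 cut 0,255 +denoise_cnn 0

$ # "Heavy" noise: noise added in the HSV channels, then converted back to RGB.
$ gmic sp colorful,256 rgb2hsv noise 15 hsv2rgb cut 0,255 +denoise_cnn 1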

In both cases, thousands of natural images have been used for the training (I’m actually using the Lorem Picsum webservice to build the training set). The synthetic noise added for the training has a random amplitude, so that the network learns to adapt to different levels of noise.
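To give an idea, here is a sketch of how one clean/noisy training pair can be generated (picsum.photos is the Lorem Picsum endpoint; the u(5,30) amplitude range is just an example, not the actual range used for the training):

$ # Fetch a random 256x256 natural image (Lorem Picsum redirects, hence -L):
$ curl -sL "https://picsum.photos/256" -o sample.jpg
$ # Keep the clean image and add a noisy copy with a random amplitude in [5,30]:
$ gmic sample.jpg +noise '{u(5,30)}' cut 0,255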

The network training has been achieved using only the functionalities of the integrated G’MIC ML library, which was quite a challenge!
The neural network is basically a simple ResNet, with 11 convolutional layers (3×3 convolutions, with widths varying from 64 down to 8). Each of the two flavors of this network has been trained for at least 8 hours.

As these neural networks are quite shallow, they have fewer than 100k learned parameters each, which means they don’t require much storage (both networks are stored, compressed, in a 720K file).
This file is downloaded automatically from the G’MIC server the first time the command denoise_cnn is used.

Inference is done “patch by patch” (with 64×64 patches), so the image patches can be processed in parallel.
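Just to illustrate the idea with standard G’MIC commands (this is only a conceptual sketch, as the filter handles the tiling internally; reassembling the tiles is omitted here, and I assume apply_parallel, the stdlib helper that applies a command to each image of the list in parallel):

$ # Split a noisy 256x256 image into sixteen 64x64 patches, then denoise each patch in parallel:
$ gmic sp colorful,256 noise 15 cut 0,255 s xy,-64 apply_parallel "denoise_cnn 0"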

Well, that’s it! I’m very happy, because all this is the result of hundreds of hours spent thinking about the design of the ML library structures, learning how neural network training works, implementing everything from scratch, and finally testing and debugging for hours… But it paid off in the end!

A lot of things remain to be done, but for me, this is a first milestone toward having ML-based image processing algorithms in G’MIC.

Stay tuned :+1:
