Style transfer soon in G'MIC


(G'MIC staff) #1

Hey Guys,

I worked hard this weekend to implement an idea I had for doing style transfer.
I’ve ended up with a really capable prototype, so I wanted to share some results with you.
There is still a lot of work and testing to do to turn it into a usable filter for everyone, but at this point I’m pretty sure it will get there one day :wink:

Style transfer consists of transferring the colors and textures of a “style” image to a “target” image.
Like in the example below, where I used the filter to transfer the style of Van Gogh’s The Starry Night to a photograph of a cat (so, generating the famous Starry Cat painting :smile: ).
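The thread never spells out the filter’s internals (later posts only confirm it is not neural-network based), so here is a purely illustrative toy sketch of the general patch-based idea: replace each patch of the target with the most similar patch from the style image. All function and parameter names below are my own invention, not G’MIC’s.

```python
import numpy as np

def toy_style_transfer(target, style, patch=4):
    """Replace each target patch with the closest style patch (brute-force L2).

    This is a naive nearest-neighbour sketch, not G'MIC's algorithm:
    real patch-based methods blend overlapping patches, work across
    scales, and use fast approximate matching (e.g. PatchMatch).
    """
    # Crop both images to a whole number of patches.
    h = (target.shape[0] // patch) * patch
    w = (target.shape[1] // patch) * patch
    target = target[:h, :w].astype(np.float32)
    sh = (style.shape[0] // patch) * patch
    sw = (style.shape[1] // patch) * patch
    style = style[:sh, :sw].astype(np.float32)

    # Collect every non-overlapping style patch as a flat vector.
    patches = np.stack([
        style[y:y + patch, x:x + patch].ravel()
        for y in range(0, sh, patch)
        for x in range(0, sw, patch)
    ])

    # For every target patch, copy in the most similar style patch.
    out = np.empty_like(target)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            q = target[y:y + patch, x:x + patch].ravel()
            best = np.argmin(((patches - q) ** 2).sum(axis=1))
            out[y:y + patch, x:x + patch] = patches[best].reshape(patch, patch, -1)
    return out.astype(np.uint8)
```

The result is a mosaic built entirely from style-image pixels, which is why such methods keep the brushstroke texture of the style image; the visible patch repetition discussed later in this thread is a typical artefact of exactly this kind of matching.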


Here are some results I got with this filter prototype. I ran my experiments on a 4-core laptop (running Ubuntu Linux), and every result took less than one minute to generate, which is not real time but is still reasonable.
I hope I will be able to release it soon!

G’MIC Style Transfer examples:


Finally! would be the collective cheer. Thanks @David_Tschumperle.

(Mica) #3

Saw a bunch of these on twitter, pretty awesome, and looks like some good fun will be had!

(Tobias) #4

Could you tell us more? Did you use a neural network (AI) or is it a traditional algorithm?


Asking the real question. :stuck_out_tongue_winking_eye: I am sure machine learning is coming along too. :sweat_smile:

(dumb) #6

Some fine-tuning needs to be done to prevent the fading artefacts but it’s a really good result (especially for a mad experimental creep like me).

(edit: now that I take a second look, even that ‘fading’ is handled very well.)

( #7

Is this similar to the PatchMatch algorithm?

(Tobias) #8

I used the images from the “red hotrod” and “The Great Wave off Kanagawa” example and uploaded them to compare the results:


I think your script could be improved by making the repeating patterns less visible and by showing the contour of the car better.

And here is the van Gogh version:


:thinking: The flaws in the original post are much more noticeable the larger the images are. The degree of success of the style transfer seems to depend on the final viewing size of the image. E.g., the images in the OP looked fine on mobile but not on a larger laptop screen; and if the pattern is too small or dense, it would be inappropriate on a smaller image.

( #10

It works at large sizes precisely because it does not use neural networks for its processing.

(G'MIC staff) #11

I’ve made some progress today, and the results are often better than with my previous version.
There are still a lot of things to experiment with and room for improvement, but I’m on the right track.
Hopefully it will be ready before X-mas :partying_face:




Would we be able to do point-to-point feature matching? Like, if I put one point on the reference image and another point on the target image, the style transfer filter would apply the style from the marked area of the reference image to the marked area of the target image.

(dumb) #13

I feel like a simple filter with a few parameters and a more complicated one (an ‘advanced mode’) with loads more would do the trick, allowing both smoothly-made stuff and nutty experimental rubbish at the same time depending on what one’s after.

( #14

Some programs use a color mask to guide the application of the style.
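As a rough illustration of what mask-guided transfer means (this is my own hypothetical sketch, not how any particular program implements it): each label in the mask pairs a region of the target with a region of the style image, and the transfer is restricted to matching regions. The minimal version below only shifts region colors toward the style region’s mean.

```python
import numpy as np

def guided_color_transfer(target, style, target_mask, style_mask):
    """Per-region color transfer guided by integer label masks.

    Hypothetical sketch: for each label present in target_mask, shift the
    mean color of that target region toward the mean color of the
    same-labelled region in the style image. A real guided style-transfer
    filter would also match textures, not just mean colors.
    """
    out = target.astype(np.float32).copy()
    for label in np.unique(target_mask):
        t_sel = target_mask == label          # pixels of this region in target
        s_sel = style_mask == label           # matching region in style image
        if not s_sel.any():
            continue                          # no corresponding style region
        shift = style[s_sel].mean(axis=0) - target[t_sel].mean(axis=0)
        out[t_sel] += shift                   # move region toward style mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

With a two-label mask (say, “headlights” and “background”), this is exactly the kind of pairing suggested below: the style of one painted region is steered onto one chosen photo region instead of being matched globally.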


That’s a nice suggestion. The eyes of Van Gogh to the headlights of the car.

(Sebastien Guyader) #16

Awesome stuff @David_Tschumperle


Has G’MIC gone too far?

No, we must go further.

(Lyle Kroll) #18

Really cool stuff, David. I actually use Style Windows (from the Giveaway of the Day site a few months back) and occasionally visit Google’s Deep Dream site (low-res for both; Style Windows is very taxing on my system resources, so I can only run it on low-res images). I look forward to experimenting with your preset soon. :slight_smile:

( #19


My result with G’MIC :heart:


One thing I don’t like is that it lost the feather details.