I’m not particularly interested in making a pixel-art scaling algorithm at the moment, but it’s a challenge I might take on if I feel like doing something with coding. I’m not sure how to work with matrices in G’MIC, but I’d be interested in a sample that replicates Hilbert curves.
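As a language-neutral sketch of what a Hilbert-curve sample would have to compute (in Python rather than G’MIC script, which I won’t guess at here), this is the standard bit-manipulation mapping from a 1-D index along the curve to 2-D pixel coordinates:

```python
def d2xy(n, d):
    """Map index d (0 .. n*n-1) on a Hilbert curve filling an n-by-n
    grid (n a power of two) to (x, y) coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)        # which half of the current quadrant
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate/flip the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx              # move into the chosen quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walking d = 0..n*n-1 visits every cell exactly once, and each
# consecutive pair of points is one pixel apart.
points = [d2xy(8, d) for d in range(64)]
```

Porting this to G’MIC’s math parser should be mechanical, since it only needs integer division, XOR, and a loop.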
One of the things I’ve always wished existed is something that can recognise the real sprites and replace them with new or better ones, respecting the sprite layers…
Something that disassembles the “collage”, allowing the image to be recreated in HD, or with another character in its place…
Knowing how each layer is assembled from the tileset could help us:
Ok, now that my desktop PC is overheating, I may actually go back to coding G’MIC filters, and this is now on my radar. Side note: I wish @Joan_Rake1 were here.
Since the planes can be separated, one could create something like this…
I don’t know enough about emulators to know whether the filter works on a per-layer basis; probably not.
@bazza, I never asked: are you working on an animation or game? If so, do you mind showing some of your work (via a link or something)?
I meant doing it at the graphic level: visual-demosaic
I don’t think that’ll be happening. I wonder if the TensorFlow C++ API could be integrated into G’MIC to enable AI-based G’MIC coding.
There is something you may not realize about neural-network methods: the NNs used are huge, composed of billions of coefficients, and usually take several hundred megabytes or tens of gigabytes of memory to run.
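As a back-of-the-envelope check on those figures, weight storage is roughly the parameter count times the bytes per parameter. A small hypothetical helper (the function name and the one-billion-parameter example are mine, for illustration):

```python
def weight_memory_gib(n_params, bytes_per_param=4):
    """Approximate memory needed just to hold a network's weights,
    in GiB, assuming float32 (4 bytes) parameters by default."""
    return n_params * bytes_per_param / 1024**3

# A network with one billion float32 coefficients:
gib = weight_memory_gib(1_000_000_000)   # ≈ 3.7 GiB for the weights alone
```

And that ignores activations and intermediate buffers, which add more on top, so the “tens of gigabytes” figure for the largest published networks is plausible.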
This is not something that can be considered for software like G’MIC, where filters run on the user’s computer (and not online).
Even if we were to add TensorFlow features to G’MIC one day, this would not solve the problem: the neural networks currently described in scientific articles are memory monsters, not at all suitable for integration into mainstream software.
The study of lightweight neural networks with equivalent performance is an open research subject, and few results exist. I think we have to wait a while until network sizes become more reasonable before considering integrating these things into G’MIC.
Well, that’s a downer. On the bright side, computers get more efficient each year, so the memory argument should become less relevant over time (though that may be too hopeful). On the downside, these things are still at the prototype stage.