Bonus
lollololol
I got access to OpenAI’s DALL-E 2 today, and this thing clearly goes a step further.
“A cute magician tiger inside a magician hat holding a magician’s wand, photorealistic”:
Impressive; hopefully this technology can somehow be called from the GIMP/G’MIC plugin, David, to auto-generate such images just by typing in simple text.
Considering the model uses 3.5B parameters (which requires approx. 14 GB just for storage), just forget about it.
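For what it’s worth, the 14 GB figure follows from storing each of the 3.5B parameters as a 32-bit float; a quick back-of-envelope check (the parameter count is from the post above, the rest is just arithmetic assuming dense storage with no overhead):

```python
# Rough memory footprint of a 3.5B-parameter model at various precisions.
# Assumes one value per parameter, dense storage, no runtime overhead.
params = 3.5e9

for dtype, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    gigabytes = params * bytes_per_param / 1e9
    print(f"{dtype}: {gigabytes:.1f} GB")
# float32: 14.0 GB
# float16: 7.0 GB
# int8: 3.5 GB
```

So even halving the precision to float16 still leaves a model far too large to bundle with a plugin.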
I was just hoping you could call an online widget and then import the results back into G’MIC; I figured running the full program locally would be an impossibility. lololololololol
lololol
My sentiments on AI; we are all going down that rabbit hole. Heck, deepdreamgenerator has become one of my favorite toys. lololololololol
Here I see a case of AI being integrated into Photoshop. I really hope that, as the technology progresses and the required storage shrinks, we can achieve this too.
Reducing the memory required by these neural-net models is actually one of the biggest challenges nowadays. I don’t know yet how much we can expect to shrink them.
Oh dear. The New York Times is Furrowing Its Brow over DALL-E 2
We Need to Talk About How Good A.I. Is Getting by Kevin Roose. August 24, 2022.
A friend of mine said that the current model has been optimized down to about 2 GB. According to comments on Twitter, it will eventually be 100 MB.
Emad on Twitter: “For what it’s worth I believe #StableDiffusion will eventually get down to 100 megabytes, loads of optimisation to come. We have some fun announcements this week coming Already amazing to see what everyone is creating, we are going to accelerate that. 6 days in…” / Twitter
I read the article and it doesn’t say all that much. Perhaps I am already too tech-savvy and not representative of the NYT audience.