After fruitlessly trying to get Google's DeepDream to work, I stumbled upon a derivative called deepdreamer, which has a working Python 3 command-line interface.
Installation requires the standard Python scientific stack and caffe. Apparently it works with video as well as stills, and the program will generate animated GIFs too, although that gave me an error related to scalar integers. The dreaming starts with the command:

python3 deepdreamer.py '~/Desktop/try1/creek3.jpg' --network googlenet_places205 --octaves 5 --itern 4 --dreams 200 --gpuid -1 --zoom false
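For completeness, the setup I ended up with looked roughly like this; the repository URL and the exact dependency list are from memory, so treat them as assumptions and check the project's own instructions:

# Python-side dependencies (assumed list; verify against the project's requirements)
pip3 install numpy scipy pillow
# the deepdreamer code itself (I believe this is the right repository)
git clone https://github.com/kesara/deepdreamer.git
cd deepdreamer
# caffe and its Python 3 bindings have to be built separately and be importable,
# i.e. this should succeed before running deepdreamer.py:
python3 -c 'import caffe'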
Here are some results. You are all familiar with the ubiquitous default trained network, which makes your image look like puppies, cars, and spaceship things. I also tried the googlenet_places205 network, which makes everything look like pagodas/waterfalls/windmills.
Input: Freedom Train (1948)
12 iterations into the familiar default dream nightmare:
I haven't had any luck building caffe on the Mac yet, probably because it doesn't like MacPorts' Pythons. BerkeleyVision recommends installing caffe on the Mac with Homebrew, which I have been reluctant to switch to since I build other things with MacPorts.
However, I was able to get caffe built on Debian 9 and Ubuntu 16.04 relatively easily in comparison. Caffe 1.0.0 is currently working with python3/pip3.
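In case it helps anyone, here is a minimal sketch of the CMake build I mean, assuming the dependency packages from the official Debian/Ubuntu install guide are already in place; option names can differ between branches, so check your branch's CMakeLists.txt:

# grab the 1.0 release tag and build it against Python 3
git clone https://github.com/BVLC/caffe.git
cd caffe && git checkout 1.0
mkdir build && cd build
# python_version=3 makes the bindings link against python3 instead of python2
cmake -Dpython_version=3 ..
make -j4
make pycaffe   # the Python module that deepdreamer imports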
I have a pair of OpenCL 1.1 cards in the machine, but they give me an LLVM floating-point error on caffe's opencl branch when selected.
@paperdigits caffe-cpu is single-core; I have to build manually from GitHub and enable OpenMP with a CMake directive (see the sketch below). Caffe-gpu works with CUDA Nvidia cards from compute capability 3.0 on up. Sadly, the tiny old Nvidia GPU someone gave me at work is compute capability 2.0.
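For reference, the CPU-only configure line looks something like the following. CPU_ONLY and BLAS are standard caffe CMake options, but whether USE_OPENMP exists depends on the branch you are building, so treat that flag as an assumption and confirm it in the branch's CMakeLists.txt:

# CPU-only build with OpenBLAS; USE_OPENMP may not exist on stock BVLC master
cmake -DCPU_ONLY=ON -DBLAS=Open -DUSE_OPENMP=ON -Dpython_version=3 ..
make -j4 && make pycaffe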
If you are interested in visualizing the results during processing, you'll want less --itern and more --dreams. Increasing the resolution brought out more of the original natural structure, but also increased RAM usage: I filled up 10 GB with the 2048px-wide shot, and the 44 iterations took 44 minutes.
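As an illustration, here is the same command from above rebalanced that way; my reading is that each dream writes an output image, so more dreams with fewer iterations each gives more frequent snapshots for the same total amount of work:

# fewer iterations per dream, more dreams: intermediate images appear more often
python3 deepdreamer.py '~/Desktop/try1/creek3.jpg' --network googlenet_places205 --octaves 5 --itern 2 --dreams 400 --gpuid -1 --zoom false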
As they say in the movies: zoom in quadrant 4, enhance
Here is a very well made music video (probably not the best song), but they go into a lot of detail on how they achieved their results and how to optimize the render times.