Deep dreaming with deepdreamer

After fruitlessly trying to get Google's DeepDream to work, I stumbled upon a derivative called deepdreamer, which has a working Python 3 command-line interface.

Installation requires the standard Python scientific stack and Caffe. Apparently it works with video as well as stills, and the program will generate animated GIFs too, although that gave me an error related to scalar integers. The dreaming starts with the command:

```
python3 '~/Desktop/try1/creek3.jpg' --network googlenet_places205 --octaves 5 --itern 4 --dreams 200 --gpuid -1 --zoom false
```
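As a rough mental model (this is not deepdreamer's actual code, and the function names here are my own): `--octaves` controls how many scales the image is processed at, and `--itern` how many gradient-ascent steps run per scale. A toy sketch:

```python
import numpy as np

def dream(image, octaves=5, itern=4):
    # Build progressively smaller copies of the image (the "octaves").
    pyramid = [image]
    for _ in range(octaves - 1):
        pyramid.append(pyramid[-1][::2, ::2])  # crude 2x downscale stand-in

    steps = 0
    # Dream from the smallest scale up, running itern gradient-ascent
    # steps per octave (only counted here; the real code nudges the image
    # toward whatever maximizes a chosen network layer's activations).
    for _octave in reversed(pyramid):
        for _ in range(itern):
            steps += 1
    return steps

img = np.zeros((64, 64, 3))
print(dream(img))  # 5 octaves x 4 iterations = 20 steps
```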

Here are some results. You are all familiar with the ubiquitous default trained network, which makes your image look like puppies, cars, and spaceship things. I also tried the googlenet_places205 network, which makes everything look like pagodas/waterfalls/windmills.

Input: Freedom Train (1948)

12 iterations into the familiar default dream nightmare:

Switching to googlenet_places205, a quick video made from stills:

Input: a photo of Coyote Creek beneath Anderson Reservoir in California.

200-iteration video made with:

```
ffmpeg -start_number 0 -i /home/bb/Desktop/try1/creek3.jpg_%d.jpg -vcodec libx264 -s 1080x720 -r 5 creek1.avi
```

Iteration 8:

Iteration 74:


That’s awesome, like a Dali painting but with lots more LSD.

Interesting. Results look like paintings on textured metallic canvases.

@HIRAM how are you installing all this software? You’re on a mac, right?

If I have a free minute, I'm going to try to use nixpkgs to install all this stuff.

I haven't had any luck building Caffe on Mac yet, probably because it doesn't like MacPorts' Pythons. BerkeleyVision recommends installing Caffe on Mac with Homebrew, to which I have been reluctant to switch since I build other things with MacPorts.
However, I was able to build Caffe on Debian 9 and Ubuntu 16.04 relatively easily in comparison. Caffe 1.0.0 is currently working with python3/pip3.
I have a pair of OpenCL 1.1 cards in the machine, but they give me an LLVM floating-point error on Caffe's OpenCL branch when selected.

Everything works in a terminal

I installed Ubuntu 17.10 in a VM and everything is available from the package manager(s) (apt and pip).

I guess I need to install this on bare metal to take advantage of my GPU, eh? It seems that caffe-cpu isn’t even multi-threaded.

@paperdigits caffe-cpu is single-core; I have to build manually from GitHub and enable OpenMP with a CMake directive. caffe-gpu works with CUDA Nvidia cards from compute capability 3.0 on up. Sadly, the tiny old Nvidia GPU someone gave me at work is compute capability 2.0.

Unfortunately, in my VM the script runs out of RAM (12 GB) and 2 GB of swap, then dies or freezes my entire machine :stuck_out_tongue:

Monitoring the thermal status and logical CPU and RAM usage of the 4 GHz i5 system processing an OpenMP deep dream.

Input: the creek photo processed with the Fattal TMO in LuminanceHDR, at higher resolution than above and with different colors.

```
python3 'try2/creek3fattal.jpg' --network googlenet_places205 --octaves 5 --itern 4 --dreams 11 --zoom false --gpuid -1
```

After 44 iterations (the 11th dream):

If you are interested in visualizing the results during processing, you'll want a lower --itern and a higher --dreams. Increasing the resolution brought out more of the original natural structure, and also increased RAM usage: I filled up 10 GB with the 2048-pixel-wide shot. The 44 iterations took 44 minutes.
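The counts line up: the total number of gradient steps is --itern times --dreams, which is why the 11th dream above lands at iteration 44 (assuming, as the output filenames suggest, one saved image per dream). A trivial check, with a hypothetical helper name:

```python
def total_iterations(itern, dreams):
    # deepdreamer runs itern gradient steps per dream; one image is
    # written out per dream, so dream N corresponds to step itern * N.
    return itern * dreams

print(total_iterations(4, 11))  # 44, matching the run above
```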

As they say in the movies: zoom in quadrant 4, enhance

```
python3 '/home/bb/Desktop/try2/creek3-quadrant4.jpg' --network googlenet_places205 --octaves 3 --itern 4 --dreams 11 --zoom false --gpuid -1
```

Zoom and enhance quadrant 4

```
python3 '/home/bb/Desktop/try2/creek3-quadrant42.jpg' --network googlenet_places205 --octaves 1 --itern 4 --dreams 11 --zoom false --gpuid -1
```

Enhance third harmonic.

```
python3 '/home/bb/Desktop/try2/creek3-quadrant42en.jpg' --network googlenet_places205 --octaves 4 --itern 1 --dreams 11 --zoom false --gpuid -1
```

44 minutes? Eek… Bare metal, here I come!

Here is a very well-made music video, with probably not the best song, but they really go into detail on how they achieved their results and how to optimize the render times.