Slitscan Self Portrait

I made a self portrait using the slit-scan technique. My process was as follows:
First, I set up diffused lighting at roughly 45 degrees to the camera axis. Then I set up the camera, a Blackmagic Pocket Cinema Camera with a 135mm enlarger lens on bellows, shooting raw CinemaDNG video. I set up an office swivel chair and had my mother-in-law rotate me while I sat as still and as centered on the axis of rotation as I could. I shot in vertical orientation, committing the sin of vertical video to maximize resolution, getting 1952 pixels of height by processing in RawTherapee without cropping the raw borders (the cool thing about FOSS is there are no borders to what you can do, lol).

I took my sequence, processed one frame in the RawTherapee GUI (raw attached), and exported its sidecar file (attached), using flat, low-contrast settings to maximize the available shadow and highlight detail for later post-processing, as well as capture sharpening with auto radius to hopefully counteract the imperfect focus of my moving self.
Headshot.pp3 (12.4 KB) BMPCC_1_2020-04-06_0953_C0011_000518.dng (1.9 MB)

I then used RawTherapee's command-line interface to render all 1400 raw files to TIFF and imported the TIFF sequence into DaVinci Resolve, where I used video stabilization to improve the smoothness of the rotation and the evenness of the resulting slit scan, since the chair was moved by hand and I of course couldn't be perfectly still, being a living human. After stabilizing, I used Optical Flow interpolated speed reduction to get more resolution out of a slower scan. Then I exported back to TIFF and followed this tutorial's ImageMagick commands to crop the TIFFs to vertical slits and then concatenate them into one image:
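For anyone curious what that crop-and-concatenate step does, here is a minimal sketch of the same idea in Python with NumPy instead of ImageMagick (the 1-pixel slit width, centre-column position, and file paths are my assumptions, not necessarily what the tutorial uses):

```python
import numpy as np

def slitscan(frames, slit_width=1):
    """Take a vertical slit from the centre column of each frame and
    concatenate the slits left-to-right into one still image."""
    slits = []
    for frame in frames:
        x = frame.shape[1] // 2              # centre column of this frame
        slits.append(frame[:, x:x + slit_width])
    return np.concatenate(slits, axis=1)     # one column (or strip) per frame

# Hypothetical usage on a stabilized TIFF sequence (needs Pillow):
#   from PIL import Image
#   import glob
#   frames = (np.asarray(Image.open(p)) for p in sorted(glob.glob("stabilized/*.tif")))
#   Image.fromarray(slitscan(frames)).save("slitscan.png")
```

The output width equals the number of frames times the slit width, which is why the Optical Flow slow-down helps: more frames means more horizontal resolution in the final scan.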

The raw output was still a little off in its proportions, so I did some mesh warping until it looked right, then used cloning, healing, and infilling to fix the edge, as well as to clone out a blink artifact over my left eye. I then did more extensive retouching, along with dodging and burning to lift my hair out of the deep shadows and add further contrast. I used frequency separation and cloning to smooth out my face, then enhanced the saturation of my eyes, as the BMPCC tends to produce dull blues.


This is pretty amazing.



(Wondering what it would have been like if you had held a less neutral facial expression. :stuck_out_tongue:)

That’s perfect for Blender UV mapping.


I had to hold perfectly still, and wearing anything more than a dead man’s stare would risk extra movement. Now, if I had a super-slow-mo high-frame-rate video camera, or was doing the slit-scan effect optically instead of digitally, then I could have gone fast enough to smile, but I had neither.

It’s kind of like the daguerreotypes of old, where people had to stay stiff and serious because they couldn’t hold a smile for the 15-minute exposure, as referenced in this article:
(Nothing really special, just the first search result of ‘No Smile Daguerreotype’ on Google)


Lol, I got lots of references to 3D textures when I posted this pic on a weird-and-vintage-lenses Facebook page. You are welcome to use it for personal 3D experimentation that never leaves your hard drive, just don’t start selling video games with me as a demon!


Reminds me that a possible low-tech(!) substitute is setting your smartphone to panorama mode and rotating in front of it. I’ve never tried it on myself, but it worked well enough to get a scan of a bottle:

(all done hand-held in a restaurant, with a friend rotating the bottle by hand)


Unless you are Chef Burak. In his FB videos, whether he is demonstrating his cookery or digging a ditch, he always smiles and looks directly into your soul.

I wonder: did you wrap a print of your portrait around a cylinder just to see how it looks?


Great idea, I’ll have to try that.