As part of this I’m playing around with dithering algorithms. One of the things I wanted to try was to throw simulated annealing (or, depending on the definition, something close to it) at the problem. The rough procedure (a code sketch follows the list):
1. Use a reconstruction function to “undither” the dithered image.
2. Calculate the per-pixel difference between the original image and the reconstruction.
3. For every pixel, probabilistically swap its color, with a probability depending on the difference and the “temperature”.
4. Repeat many times.
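A minimal, untested sketch of that loop (grayscale, 1-bit output; the Gaussian blur stands in for whatever reconstruction function is used, and all names, constants and the temperature schedule are placeholders, not the actual camera code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anneal_dither(original, iterations=1000, sigma=1.5,
                  t_start=0.2, t_end=0.01, flip_fraction=0.05, rng=None):
    """original: float grayscale in [0, 1]; returns a 0/1 array."""
    rng = np.random.default_rng() if rng is None else rng
    dithered = (original > 0.5).astype(np.float64)  # crude starting point

    for i in range(iterations):
        # Exponentially decaying temperature.
        t = t_start * (t_end / t_start) ** (i / max(iterations - 1, 1))

        # 1. "Undither" the current result.
        recon = gaussian_filter(dithered, sigma)
        # 2. Difference between the reconstruction and the original.
        err = recon - original
        # Flipping a pixel helps locally if it is "on" where the
        # reconstruction is too bright, or "off" where it is too dark.
        gain = np.where(dithered == 1.0, err, -err)

        # 3. Metropolis-style acceptance: always flip when it helps,
        #    otherwise with a probability that shrinks as T drops.
        accept = np.where(gain > 0, 1.0, np.exp(gain / t))
        # Only touch a random subset each pass so simultaneous flips
        # don't fight each other.
        flips = (rng.random(original.shape) < accept) & \
                (rng.random(original.shape) < flip_fraction)
        dithered = np.where(flips, 1.0 - dithered, dithered)

    # 4. ...after many iterations:
    return dithered
```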
In practice the results are far from perfect and the algorithm is terribly slow. On the hardware I use for the camera (a Raspberry Pi Zero), it takes about 20 minutes to run 1000 iterations. While this E-Ink camera thing will definitely be a slow experience, that’s of course a no-go.
But hey, at least it’s somewhat interesting to watch the algorithm do its thing.
Yes, the increase in sharpness can to some extent be recreated by simply sharpening the input to regular Floyd-Steinberg dithering.
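Something along these lines is enough to try that comparison (the radius/percent values and the file name are made up for the example); Pillow’s 1-bit conversion uses Floyd-Steinberg by default:

```python
from PIL import Image, ImageFilter

img = Image.open("cat.png").convert("L")  # hypothetical input file
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=0))
sharpened.convert("1").save("cat_sharpened_fs.png")  # "1" mode dithers with Floyd-Steinberg
```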
The over-acutance can also be reduced in the simulated annealing version by sharpening the reconstruction.
It still has some properties I find interesting, in that the grainy output creates different artifacts than Floyd-Steinberg. In the cat example, the grainy texture in the background (instead of “banding”) and the lack of “waves” in the dark parts at the bottom are examples of this.
Next up I’ll probably try a more reasonable implementation for the camera, using a bit of local contrast enhancement before dithering and possibly a bit of anisotropy & blue noise to break up the repetitive patterns formed by plain error diffusion dithering.
@Iain since you are here already and quite knowledgeable in image processing:
I’m currently dithering in linear RGB. It looks like GIMP dithers “in gamma”.
The Michelangelo images in the wiki also seem to be dithered “in gamma”.
While I can see some benefits to that (retaining more detail in the tones humans are more sensitive to), it seems off to me. At least I don’t see why the combination of the dithered pixels wouldn’t be linear.
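To make the question concrete, here’s the toy calculation behind my intuition, assuming the display’s pixels actually mix linearly:

```python
import numpy as np

def srgb_to_linear(s):
    # Standard sRGB decoding, s in [0, 1].
    s = np.asarray(s, dtype=np.float64)
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

srgb_value = 0.5
target_reflectance = float(srgb_to_linear(srgb_value))  # ~0.21

# Dithering "in gamma" turns sRGB 0.5 into ~50% white-pixel coverage,
# which reflects ~50% of the light if the pixels average linearly.
coverage_in_gamma = srgb_value           # 0.50
# Dithering in linear RGB aims for ~21% coverage, matching the patch.
coverage_in_linear = target_reflectance  # ~0.21

print(target_reflectance, coverage_in_gamma, coverage_in_linear)
```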
I would think that linear RGB would be the right way to do it for accuracy. However, I think that you might want to try a custom curve for the e-ink display so that it looks nice. For example, I would clip near-blacks to full black so the dark areas don’t look like they are covered in dust spots. I also suspect that the e-ink display might not produce a linear dithered output (i.e. does a 50% dither input reflect 50% of the light?).
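Something as simple as this would do as a starting point for the near-black clip (the 2% threshold is a guess and would need tuning on the actual display):

```python
import numpy as np

def eink_tone_curve(linear, black_clip=0.02):
    # Clip near-blacks to full black and re-stretch the remaining range
    # so the midtones aren't shifted too much. Values are linear [0, 1].
    return np.clip((linear - black_clip) / (1.0 - black_clip), 0.0, 1.0)
```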
I think at the end of the day it is a matter of whether it gives a pleasing aesthetic. In that sense, I agree with @Iain’s points. That, and try it on all types of images.
I agree that the end result is what matters. Since the dithering algorithm is going to be part of a processing pipeline, I prefer to avoid side effects.
I’m now working on a slightly more sane approach to dithering which should work in the camera. Essentially a bit of pre-dither sharpening & local contrast enhancement, and then an adaptive dithering algorithm that goes from dithering with blue noise in smooth areas to error diffusion where there is more variation.
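Very roughly, the adaptive part looks something like this (the blue-noise mask is assumed to come from elsewhere, the variance threshold is a guess, and none of this has run on the camera yet):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_dither(img, blue_noise, var_threshold=0.002):
    """img: float grayscale in [0, 1]; blue_noise: tileable threshold mask in [0, 1]."""
    h, w = img.shape
    bh, bw = blue_noise.shape
    # Local variance as a crude smoothness measure.
    local_var = uniform_filter(img**2, size=7) - uniform_filter(img, size=7)**2
    use_noise = local_var < var_threshold

    out = np.zeros_like(img)
    work = img.copy()
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            if use_noise[y, x]:
                # Smooth region: ordered dithering against the blue-noise
                # mask, no error diffusion, so no "worm" patterns.
                new = 1.0 if old > blue_noise[y % bh, x % bw] else 0.0
            else:
                # Busy region: plain Floyd-Steinberg error diffusion.
                new = 1.0 if old > 0.5 else 0.0
                err = old - new
                if x + 1 < w:
                    work[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        work[y + 1, x - 1] += err * 3 / 16
                    work[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        work[y + 1, x + 1] += err * 1 / 16
            out[y, x] = new
    return out
```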