I couldn’t help myself. I just had to try. I had to try and build my own raw developer.
Well, “my own” is perhaps overstating it a little. It’s mostly Darktable, but pared down to the essentials, and running entirely on the GPU.
Let’s rewind a little. I love Darktable! But… I am a software developer by trade, a signal processing scientist by training, and an image processing engineer at work. I have opinions about stuff, especially about software stuff and image processing stuff. So I tried Capture One instead, for a good year. It looks nicer and it works well, but its tools are terribly restrictive compared to Darktable. So I tried Lightroom, for half a year. It looks worse, and its AI stuff is genuinely useful, but it didn’t tickle me like Darktable does. Time and time again, I came back to Darktable. I even contributed some minor changes and addons to better fit my vision. But all the while, there was this nagging feeling that it could be better, if only…
I watched with horror and fascination as Aurelien did his fork. I follow Hanatos’ vkdt with interest. I’m amazed by Glenn Butcher’s rawproc. And I knew, some day, I’d have to try and build my own.
So the other day I had some time on my hands and decided to give it a go. I’d start with technologies I know, and see how far I’d get. First, something simple to load images: the Python package rawpy did the trick. Then a bit of UI around it, with PySide and QML. But the meat of the program was always going to run in a GPU shader, so I built a little bridge to pass the Python image data into a QML shader. Amazingly, I got this prototype to display a raw file within an hour or two.
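The loading step is more or less this. A minimal sketch; the file name is a placeholder and the postprocess settings are illustrative rather than my exact choices:

```python
import rawpy

# Demosaic on the CPU, but keep the data linear and 16-bit so that all
# tone mapping happens later, in the GPU shader.
with rawpy.imread("photo.raf") as raw:  # placeholder file name
    rgb = raw.postprocess(
        gamma=(1, 1),          # no gamma curve: stay scene-linear
        no_auto_bright=True,   # no automatic exposure adjustment
        output_bps=16,         # 16 bits per channel
        use_camera_wb=True,    # camera white balance as a starting point
    )

print(rgb.shape, rgb.dtype)  # e.g. (4000, 6000, 3) uint16
```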
A slight hiccup was the file’s bit depth. Cool as Qt/QML is, it is mainly built for sRGB surfaces. It took me a short while to figure out that I could use a QtQuick3D.QQuick3DTextureData provider to pass a 16-bit texture to a shader. Another challenge was getting the GPU image back to the CPU for saving, as QML is built for displaying things, not processing them. All things considered, QML was perhaps not the best choice overall, but at least it makes GUI prototyping easy.
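From memory, the bridge looks roughly like this, assuming PySide6’s QtQuick3D bindings. The class and QML module names are mine, and the real code has more wiring around it:

```python
import numpy as np
from PySide6.QtCore import QSize
from PySide6.QtQml import QmlElement
from PySide6.QtQuick3D import QQuick3DTextureData

QML_IMPORT_NAME = "RawDev"        # hypothetical QML module name
QML_IMPORT_MAJOR_VERSION = 1

@QmlElement
class RawTexture(QQuick3DTextureData):
    """Hands a full-precision image to a QML shader as a texture."""

    def set_image(self, rgb: np.ndarray) -> None:
        """rgb: (height, width, 3) uint16 array from rawpy."""
        h, w, _ = rgb.shape
        # Pad RGB to RGBA, normalize to 0..1, and upload as half floats,
        # since the float texture formats are the natural fit for >8-bit data.
        rgba = np.dstack([rgb, np.full((h, w), 65535, dtype=rgb.dtype)])
        half = (rgba.astype(np.float32) / 65535.0).astype(np.float16)
        self.setSize(QSize(w, h))
        self.setFormat(QQuick3DTextureData.Format.RGBA16F)
        self.setTextureData(half.tobytes())
```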
With these basics out of the way, I set about porting a few of Darktable’s algorithms to my shader. Sigmoid was first, to get viewable colors from the raw data. The algorithm itself was quickly ported into my shader, with copious help from an LLM, somewhat to my shame. It was just very good at translating C/CL code to GLSL. Then I needed to hook up some sliders to the shader parameters, and presto, a prototype of a raw developer.
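In Python terms, the general shape is something like this. It’s the log-logistic family of curves the module is built on, not Darktable’s exact formula, and my actual version lives in GLSL:

```python
import numpy as np

def log_logistic(x, contrast=1.5, grey=0.1845):
    """S-shaped tone curve from the same family as Darktable's sigmoid.

    Maps scene-linear values to 0..1; `grey` is the input value that
    lands at 0.5, and `contrast` controls the steepness of the curve.
    """
    x = np.maximum(x, 1e-9)  # guard against division by zero
    return 1.0 / (1.0 + (grey / x) ** contrast)
```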
Second came Color Calibration. This required a color picker, which was another challenge, as there was not yet a way of querying a GPU surface for pixel colors. So I essentially needed to render out a thumbnail screenshot whenever pixel colors were required. The same process is used for saving a rendered image as well.
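In essence, the trick is this (a sketch built on Qt’s grabToImage; the real coordinate handling is a bit more careful):

```python
def pick_color(image_item, x_frac, y_frac, callback):
    """Read a pixel from the rendered QML item by grabbing it to a QImage.

    `image_item` is the QQuickItem showing the shaded image; x_frac and
    y_frac are picker coordinates in the 0..1 range.
    """
    result = image_item.grabToImage()  # asynchronous GPU -> CPU readback

    def on_ready():
        img = result.image()
        color = img.pixelColor(int(x_frac * (img.width() - 1)),
                               int(y_frac * (img.height() - 1)))
        callback(color.redF(), color.greenF(), color.blueF())

    result.ready.connect(on_ready)  # the closure keeps `result` alive
```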
I built my processing pipeline at the photo’s native resolution. GPUs can run entire video games, so a couple hundred megabytes of flat pixels hopefully won’t tax them too much. So far, this seems to work perfectly fine, with the full image processing pipeline running easily at 60 FPS. That’s merely a 25 MP image on a fairly beefy GPU, though, so I reckon I’ll have to do some performance profiling at some point. The funny thing is, this is currently all interpreted code; no compilation is necessary to run any of this. I think I needed that as a contrast to the endless compile cycles at work.
Initially, I was going to use the aforementioned screenshot thumbnail for calculating the waveform as well. But why involve slow Python code when you can run it on the GPU? And indeed that worked well, so now I have a smooth, real-time waveform display.
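For the curious: a waveform is just a per-column histogram of intensities. Here’s the CPU equivalent of what the shader does, in NumPy, assuming floats in 0..1:

```python
import numpy as np

def waveform(rgb, bins=256):
    """Per-column luma histogram: column x of the output is the
    histogram of luma values found in column x of the input image."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 weights
    levels = np.clip((luma * (bins - 1)).astype(int), 0, bins - 1)
    cols = np.broadcast_to(np.arange(levels.shape[1]), levels.shape)
    wf = np.zeros((bins, levels.shape[1]), dtype=np.uint32)
    np.add.at(wf, (levels, cols), 1)  # accumulate one histogram per column
    return wf[::-1]                   # flip so bright values end up on top
```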
Then came the tone equalizer. I was a bit worried about this one, as it requires blurring the image to build its mask, which could have been slow on the GPU. But it turned out to be easy to do, and the GPU didn’t even break a sweat. It was actually very interesting to get into the guts of this module and see how it works.
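The gist of the mask, as I understand it: estimate each pixel’s exposure in EV, smooth that map heavily, then drive the per-EV gains from the smoothed mask so that local contrast survives. Darktable uses an edge-aware guided filter for the smoothing; this sketch cheats with a plain Gaussian blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def exposure_mask(rgb, sigma=50.0):
    """Smoothed log2-exposure mask (simplified: a Gaussian stands in for
    the edge-aware guided filter Darktable actually uses)."""
    lum = np.maximum(rgb.max(axis=-1), 1e-6)  # crude luminance estimate
    ev = np.log2(lum)                         # exposure in stops (EV)
    return gaussian_filter(ev, sigma=sigma)   # heavy low-pass of the EV map
```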
Next came color balance. To my surprise, this one was a headache; actually, that’s understating it. It took a good few evenings, and eventually meant single-stepping through Darktable’s source code with a debugger, side by side with my own code, which is complicated when there is no debugger on the GPU. But eventually, it, too, was ported to my shader.
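At its core sits a slope/offset/power transform in the style of the ASC CDL. The headache is all the color space plumbing around it, which this little sketch conveniently ignores:

```python
import numpy as np

def slope_offset_power(rgb, slope=1.0, offset=0.0, power=1.0):
    """ASC CDL-style grading: out = (in * slope + offset) ** power.
    Darktable's color balance rgb wraps this kind of transform in careful
    color space conversions, which is where all the complexity lives."""
    graded = rgb * slope + offset
    # Mirror the power curve for negative values instead of producing NaNs.
    return np.sign(graded) * np.abs(graded) ** power
```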
As you can see from the screenshot, this actually makes for a competent little raw editor. It does not yet have a name, and I don’t yet know whether it has a future. It still lacks a myriad of critical features: file system browsing, saving and loading editing parameters, and export settings, just to name a few. To say nothing of denoising, sharpening, masking, or the color equalizer.
But it’s been fun, and highly educational. It was really cool to step into Darktable’s image processing code and actually see how stuff works! I’ve already learned a whole lot from that, and hopefully will continue to do so. While my code is clearly open source, I haven’t yet made the repository public, as it’s still way too messy to share.