Hello,
I find that RawTherapee and Darktable have some great algorithms, but personally I would design the user interface differently. Now I could start yet another open source raw developer. But I was wondering whether there has been any attempt to create an open source image signal processing library that contains all the typical functions like exposure, contrast, curves, etc., independent of the UI? Similar to OpenCV, but for raw development?
If that does not exist, what are your opinions on such an approach; would it make sense? I would think that this separation would allow for better optimization of the algorithms, since no legacy ties to a UI would have to be considered. I would also hope that it could be more accessible to developers, with clear inputs and outputs, where each function can easily be optimized independently. Maybe a more modern, accessible programming language would make sense too, with the possibility of wrappers for other languages.
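To make the idea a bit more concrete, here is a rough sketch (all names are hypothetical, not taken from any existing library) of what such a UI-independent function could look like: a pure function over a linear-RGB float buffer, with all parameters passed explicitly, so it can be tested, optimized, and wrapped from other languages on its own.

```rust
/// Minimal sketch of a UI-independent image buffer (hypothetical names).
#[derive(Clone)]
pub struct ImageBuf {
    pub width: usize,
    pub height: usize,
    /// Interleaved RGB, linear light, one f32 per channel.
    pub data: Vec<f32>,
}

/// Exposure compensation in stops: each sample is scaled by 2^ev.
pub fn exposure(src: &ImageBuf, ev: f32) -> ImageBuf {
    let gain = 2f32.powf(ev);
    ImageBuf {
        width: src.width,
        height: src.height,
        data: src.data.iter().map(|v| v * gain).collect(),
    }
}

fn main() {
    // A 1x1 mid-grey pixel, pushed up by one stop.
    let img = ImageBuf { width: 1, height: 1, data: vec![0.18, 0.18, 0.18] };
    let brighter = exposure(&img, 1.0);
    println!("{:?}", brighter.data); // [0.36, 0.36, 0.36]
}
```

The point is only the shape of the API: no hidden state, no UI strings, just buffers in and buffers out.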
A number of people have recently felt the need to develop their own RAW processor. It appears to be reasonably easy to get a prototype with a nice UI and some basic functionality working, but people vastly underestimate how much effort it takes to get from that state to an actually useful application.
So it makes sense to pool resources. Everyone wants a “next generation” RAW development software, but a zoo of single-developer pet projects will get us nowhere.
The community should either focus on one single project or at least get the hard color science parts developed once, so different developers can attach different UIs to the same underlying base.
VKDT currently looks like the most promising candidate in both cases. It is not very useful in its current state, but the technical foundation is solid and it is very fast. Also, the logic/algorithms are separate from the UI, so it might be suitable as a backend on which other developers put their own UI.
There are many reasons to start writing a piece of software. Perhaps to scratch an itch, to try something out, to explore how things work, to prove something to yourself… Even if done in the Open Source arena, not all purposes benefit from sharing.
Regardless, if your itch is the lack of a good image processing library, Godspeed and fair winds to you, my friend.
Heaven knows the world needs something better than OpenCV. In too many ways to count.
once upon a time there was a discussion about refactoring the shared code out of RT and DT… sadly, that never really came to anything.
but then again… if you are going to aim for modern apps, then using Vulkan Compute (à la vkdt) instead of CPU processing might be a better target anyway.
and that project is alive and well and updated?
last time i checked it looked kinda dead.
and actually DT and RT use that shared code and no longer have copies of that code?
I think using something like Rust's wgpu makes sense:
wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
If something new is developed, it should target modern platforms, including mobile, and also be accessible for development, with modern automated documentation, etc.
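As a rough illustration of what the GPU side could look like, assuming a wgpu-based backend, this is the kind of WGSL compute kernel such a pipeline might dispatch for a simple exposure adjustment. It is only a sketch, not taken from vkdt or any existing project, and the device/buffer/pipeline boilerplate is left out.

```rust
// Hypothetical WGSL kernel a wgpu-based backend might dispatch for exposure.
const EXPOSURE_WGSL: &str = r#"
struct Params { gain: f32, }

@group(0) @binding(0) var<storage, read>       src:    array<f32>;
@group(0) @binding(1) var<storage, read_write> dst:    array<f32>;
@group(0) @binding(2) var<uniform>             params: Params;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&src)) {
        // Linear-light samples are scaled by the exposure gain.
        dst[i] = src[i] * params.gain;
    }
}
"#;

fn main() {
    // In a real backend this string would be compiled by the GPU API
    // (e.g. wgpu's create_shader_module) and dispatched over the pixel buffer.
    println!("{} bytes of WGSL in the exposure kernel", EXPOSURE_WGSL.len());
}
```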
It’s actually quite interesting/amusing to see what “image (and signal) processing” means to people: each time, completely different things!
I myself am the author of a library that could be described as an “image and signal processing library” (CImg), which I began developing in 1999 while doing my PhD thesis in a public research laboratory.
When I started the implementation of this library, I was focused on being “generic” enough, meaning able to deal indifferently with 2D images of pixels and 3D images of voxels, with an arbitrary number of channels. So clearly, nothing related specifically to photography. Of course, I could see 2D color images as a subset of the supported “image” data, but most of the algorithms included in my library were designed for at least 3D multi-spectral images (where things like “exposure” and “contrast” may become a bit abstract, when they still make sense at all!).
OpenCV, on the other hand, is mostly focused on offering fast computer vision algorithms for 2D images (typically acquired from a webcam), like face detection, optical flow, … You want to process 3D images of voxels with OpenCV? No luck; better try ITK, another “image processing library”, initially designed for the manipulation/visualization of medical images (where 3D images are common). And we could list a lot of other libraries and see that they all have their own philosophy and definition of “image processing” (some are targeted at the processing of large images, like libvips, others at GPU-only processing, and so on…).
In the end, what this means is that “image (and signal) processing” is such a broad and varied field, with such different areas of application, that it is hardly surprising that no “unifying” library has emerged in all these years. And to be honest, I can’t see how that could happen in the future.
It is the expression “typical functions” that made me react.
Even “typical images” would not be well defined enough!
Thanks for your feedback. I don’t know whether it is possible to generalize everything, or whether some specialization on photography makes sense: knowing whether you are working on a linear raw file, whether it is gamma corrected, etc., so that all modules work well together (and don’t constantly convert back and forth between different color spaces, for example).
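A rough sketch of that idea, with purely hypothetical names: each buffer carries a tag for its transfer function, and a module that needs scene-linear data converts once, explicitly, instead of every module guessing and converting back and forth.

```rust
/// Which encoding the samples currently use (hypothetical, for illustration).
#[derive(Clone, Copy, PartialEq, Debug)]
enum Transfer {
    Linear,    // scene-referred, e.g. straight from the demosaiced raw
    SrgbGamma, // display-referred, sRGB-encoded
}

struct Image {
    transfer: Transfer,
    rgb: Vec<f32>, // interleaved RGB samples in [0, 1]
}

/// Decode sRGB gamma to linear light (standard sRGB EOTF), only if needed.
fn ensure_linear(img: &mut Image) {
    if img.transfer == Transfer::Linear {
        return;
    }
    for v in &mut img.rgb {
        *v = if *v <= 0.04045 {
            *v / 12.92
        } else {
            ((*v + 0.055) / 1.055).powf(2.4)
        };
    }
    img.transfer = Transfer::Linear;
}

fn main() {
    let mut img = Image { transfer: Transfer::SrgbGamma, rgb: vec![0.5; 3] };
    ensure_linear(&mut img);
    println!("{:?} {:?}", img.transfer, img.rgb); // Linear, ~0.214 per channel
}
```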
I should mention that ImageMagick exists. The core image processing functions are written in C. Some of them use the GPU. Many language interfaces are provided: C, C++, Python, etc.
IM is aimed at images rather than general-purpose signals. It has virtually no interactive functions. For processing digital camera files, it uses libraw for decoding the file, demosaicing, etc.
It is possible to build an image editor on top of IM (I have done so, unpublished), but to get anywhere near the capabilities of RawTherapee or darktable would need masses of work.