Now that DT 2.6 and RT 5.5 are out (or almost out), we could look into how to make better use of this library.
I already poked @hanatos about the idea and he seemed to like it.
DT and RT already share some code, but so far that code has been copied and then modified independently. Darktable adds support for region of interest and scale invariance. If we want to remove the code duplication, it would be nice to move that support into librtprocess as well.
Another question is which of the darktable features we could move into librtprocess. The first one that comes to mind is probably the automatic perspective correction, for which we would need to poke Ulrich, I guess.
Also, from the packager’s side, I hope we can come up with a model that ends up as a librtprocess.so.x.y.z, and not a model similar to the one we currently have for rawspeed.
c++ is fine, but not a language for interfaces. you can’t pass STL/c++ objects over API boundaries if you’re planning to do dynamic linking. the only sane way to design an API that i know of is to use plain c, at least for the interface (it’s not only about name mangling, but also about different heaps backing different new calls, and about compiler versions changing STL internals)
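To make the point concrete, here is a minimal sketch of the usual pattern: a plain-c surface with an opaque handle, implemented in c++ behind it. all names (`rtp_ctx`, `rtp_ctx_create`, …) are made up for illustration, not the actual librtprocess API.

```cpp
#include <string>

extern "C" {

typedef struct rtp_ctx rtp_ctx;          // opaque handle: layout stays private
rtp_ctx *rtp_ctx_create(void);
void     rtp_ctx_destroy(rtp_ctx *ctx);  // the library frees what it allocated
int      rtp_ctx_last_error(const rtp_ctx *ctx);

} // extern "C"

// internally the handle is c++ and may hold STL state; none of it ever
// crosses the boundary, so heap mismatches and STL ABI drift can't bite
struct rtp_ctx {
    std::string last_msg;  // never leaves the library
    int last_error = 0;
};

rtp_ctx *rtp_ctx_create(void) { return new rtp_ctx(); }
void rtp_ctx_destroy(rtp_ctx *ctx) { delete ctx; }
int rtp_ctx_last_error(const rtp_ctx *ctx) { return ctx ? ctx->last_error : -1; }
```

since allocation and deallocation both happen inside the library, the caller never mixes its own `malloc`/`new` heap with the library’s.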
what’s the scope of the project? to be really useful i think it should offer many code paths (in dt we have at least the 32-bit oldschool path, SSE4.2, and GPU/opencl)
we require all modules to be scale invariant as far as they can be, to support fast preview rendering. the modules also need to know about regions of interest (and i mean without resampling the input first)
we’ve had a lot of fun with memory constraints and tiling in the past. i think a generic API should support this.
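to illustrate the kind of window bookkeeping an ROI-aware module needs, here is a toy sketch (made-up names, not the actual librtprocess API): a single-channel “filter” that just copies the requested output window out of a possibly larger input window, with both windows expressed in full-image coordinates.

```cpp
#include <cstddef>
#include <cstring>

// a window into the full image, in full-image coordinates
typedef struct {
    int x, y;           // offset of the window in the full image
    int width, height;  // window size in pixels
} rtp_roi;

// toy ROI-aware "module": the caller supplies only the input pixels needed
// for the requested output window, never the whole image
int rtp_copy_roi(const float *in, const rtp_roi *roi_in,
                 float *out, const rtp_roi *roi_out)
{
    // the requested output window must lie inside the supplied input window
    if (roi_out->x < roi_in->x || roi_out->y < roi_in->y ||
        roi_out->x + roi_out->width  > roi_in->x + roi_in->width ||
        roi_out->y + roi_out->height > roi_in->y + roi_in->height)
        return -1;

    for (int j = 0; j < roi_out->height; ++j) {
        const float *src = in
            + (size_t)(roi_out->y - roi_in->y + j) * roi_in->width
            + (roi_out->x - roi_in->x);
        std::memcpy(out + (size_t)j * roi_out->width, src,
                    sizeof(float) * roi_out->width);
    }
    return 0;
}
```

a real filter would additionally pad `roi_in` by its support radius, which is exactly what tiling needs to stay seam-free.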
I’m interested in having a C interface but I’m not sure how to do it. This is my first library.
Can the current C++ interface exist alongside a C interface?
@hanatos The scope of the project is really to cover the raw handling: demosaicing and highlight recovery, with optimal performance and image quality. This is work that can easily be shared between raw processors.
Regarding tiling and region of interest, we already have the ability to process a window of the main image.
okay, i’m sure we can figure out the c API thing (re: std::function: you can have type-checked function pointers in plain c just fine, though i don’t know why you would want std::function in an API? and passing std::array through API boundaries does sound like asking for trouble, indeed).
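for reference, this is what a type-checked callback at a plain-c boundary looks like, roughly the replacement for std::function there. all names here are hypothetical, just to show the shape:

```cpp
// plain-c surface: the function pointer type is fully checked by the compiler
extern "C" {

typedef void (*rtp_progress_cb)(float fraction, void *user_data);

int rtp_run_with_progress(int steps, rtp_progress_cb cb, void *user_data);

} // extern "C"

int rtp_run_with_progress(int steps, rtp_progress_cb cb, void *user_data)
{
    if (steps <= 0)
        return -1;
    for (int i = 1; i <= steps; ++i)
        if (cb)  // passing a mismatched signature here is a compile error
            cb((float)i / (float)steps, user_data);
    return 0;
}

// example consumer callback: counts invocations via the user_data pointer
static void count_cb(float /*fraction*/, void *user_data)
{
    ++*static_cast<int *>(user_data);
}
```

the `void *user_data` parameter is the usual c idiom for carrying caller state, which is what a capturing std::function would otherwise do.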
about the scope. so it’s not raw decoding (that would be rawspeed/dcraw) but a bit of processing? no colour management, no tone/contrast curves, etc., only the most basic filters that run on raw data? wouldn’t that mean that projects would still need their own processing pipelines, making edit histories incompatible, and increasing the probability that they just take the code and dump it into their own pipelines instead of using this API?
if going through the trouble and writing a shared library for raw processing, can we go all the way?
along the lines of scope/generality: some of the code seems to be templatised on the data type (T *buffer) only for T to be silently assumed to be float later (float *temp = buffer). is there any reason not to assume float for everything?
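a contrived sketch of the pattern being questioned (not the actual code): a template that compiles for any T but whose body quietly assumes float semantics, next to the plain-float version that states the assumption honestly.

```cpp
#include <cstddef>

// templated on T, but the body quietly assumes float behaviour; with
// T = uint16_t this still compiles, it just no longer means what the
// author intended (range, precision, normalisation all differ)
template <typename T>
void avg3_row_t(const T *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        const float l = (float)in[i > 0 ? i - 1 : 0];       // implicit float assumption
        const float r = (float)in[i + 1 < n ? i + 1 : n - 1];
        out[i] = (l + (float)in[i] + r) / 3.0f;
    }
}

// if float is assumed anyway, the non-template version says so up front
void avg3_row(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        const float l = in[i > 0 ? i - 1 : 0];
        const float r = in[i + 1 < n ? i + 1 : n - 1];
        out[i] = (l + in[i] + r) / 3.0f;  // 3-tap average with edge clamping
    }
}
```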
re: GPU support. since nobody in the industry seems willing to support opencl going forward, what do you guys think about vulkan compute shaders?
The interface as it stands isn’t set in stone. I haven’t even written documentation for it yet, and not many projects use it, so I’d rather make any changes now.
Yes. Filmulator has its own unique pipeline, and it’ll never be compatible with other edit histories. Same thing with PhotoFlow, where the user defines the pipeline. I guess I can see how someone might want to have a complete pipeline available so they can just create a new UI around a completed backend, but that’s not my aim here.
The short-term goal for librtprocess is just to replace the functionality that LibRaw removed in its latest version. I’m not even going to replicate all of it, just the core algorithms that people couldn’t reasonably be expected to code for themselves, like white balance.
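For context, a hedged sketch of what the core of raw-domain white balance looks like (this is not the librtprocess code, and the names are invented): each photosite of a 2x2 Bayer mosaic is scaled by the coefficient of its colour channel.

```cpp
#include <cstddef>

// cfa[2][2] holds the channel index of each position in the 2x2 CFA pattern
// (0 = R, 1 = G, 2 = B), e.g. {{0,1},{1,2}} for an RGGB sensor
void wb_apply_bayer(float *raw, size_t width, size_t height,
                    const unsigned cfa[2][2], const float coeffs[3])
{
    for (size_t y = 0; y < height; ++y)
        for (size_t x = 0; x < width; ++x)
            raw[y * width + x] *= coeffs[cfa[y & 1][x & 1]];
}
```

The multiply itself is trivial; the hard, shareable part is everything around it (per-camera coefficients, clipping behaviour, CFA layout handling), which is where a common library pays off.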
If we want to share other, non-raw parts between the various FOSS photo editors, I lean towards a separate library for more generic image processing. Or should we use GEGL?
I might be convinced otherwise if there are good arguments for it.
so you would basically have logic modules that people could then call from their pipeline code with a thin wrapper (UI plus hooking it up in the pipeline)?
importing dt code is simple: you just remove all the functionality (sse, GPU, tiling, ROI rendering). thing is, we wouldn’t be able to use it any more then. that’s what i mean by generality of the API… ideally you’ll make more than 3 users happy with the offered feature set.
i’m not sure i understand what’s hard about implementing white balance in the example above? getting to the wb coefficients? would the purpose of the library be to maintain camera-specific lists of metadata? that by itself would actually be very useful.
My intention for librtprocess is to attract more people to contribute to some algorithms (mainly raw processing steps).
For example, the Amaze demosaic. Currently I am the only one who makes improvements to Amaze (and the last one was a long time ago), so the code is stuck at SSE2 level (with some SSE4 stuff). Of course, improvements would be possible using AVX, or even AVX-512, or whatever comes in the future.
But I can’t do them at the moment, because although my machine supports AVX, it has only 4 AVX units for 8 cores.
Having the Amaze code in a library would simplify contributing, imho, and could also increase the motivation to contribute if the library is used by more than one target.
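A minimal sketch of how a library can ship several optimization paths for one algorithm and pick one at runtime (the kernel here is just a*x + y, standing in for something like Amaze; `__builtin_cpu_supports` is GCC/Clang-specific, and the AVX path is compiled only when the build enables AVX):

```cpp
#include <cstddef>
#ifdef __AVX__
#include <immintrin.h>
#endif

// portable fallback path: plain scalar loop
static void saxpy_scalar(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

#ifdef __AVX__
// AVX path: 8 floats per iteration, scalar tail for the remainder
static void saxpy_avx(float a, const float *x, float *y, size_t n)
{
    const __m256 va = _mm256_set1_ps(a);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_add_ps(vy, _mm256_mul_ps(va, vx)));
    }
    for (; i < n; ++i)
        y[i] += a * x[i];
}
#endif

// public entry point: dispatches to the best path the CPU actually has
void saxpy(float a, const float *x, float *y, size_t n)
{
#ifdef __AVX__
    if (__builtin_cpu_supports("avx")) {
        saxpy_avx(a, x, y, n);
        return;
    }
#endif
    saxpy_scalar(a, x, y, n);
}
```

With this shape, someone can contribute an AVX-512 (or NEON) path without touching the scalar reference implementation, which is exactly the collaboration model described above.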
@hanatos TBH, when you look at how long it takes from “we received the data to support a camera” to actually shipping it to the user, the situation is not good. I don’t mean that as blame, but as something I want to push to improve. So I was already poking @heckflosse about extracting all the data from the code into camconst.json, so we could have something like camera-update-data along the lines of lensfun-update-data.
So if librtprocess also becomes the interface to rawspeed, and we can actually unify the data set that way… it could be another worthwhile improvement.
I will do another post with the same audience for the complete thoughts on camera-data-update.
And yes, having the algorithms shared, with maybe different people providing the different optimization paths, is another huge plus.