If it happens very early in the pipeline (like denoise (profiled)), I would be fine with just trying out a default and tweaking it only if necessary, even if that takes a minute. Also, fiddling with settings interactively could produce a preview of just the crop the user is interested in.
In case you are not aware of this: most FOSS contributors just work on whatever they are interested in, regardless of popular demand. So it is pretty useless (and occasionally, counterproductive) to try to pressure them like this.
Not saying you shouldn't use processes that take a long time. I just don't think that their place is in an interactive program. If there were a fast way to set up such a module, for application on export only, that would be fine (but rather different from how modules currently work).
Keep in mind that the position in the pipeline isn't really important for working with a module. It's the position in your workflow that is important. And as soon as you have a slow module activated, you pay the price every time you make any adjustment, even in a completely different module.
Up to a point, that can be circumvented by only activating the slow module(s) at the end of the workflow, but I like to do e.g. sharpening on something as close to the final result as possible, and denoising has an influence on the (perceived) sharpness…
Well, you posted in the "darktable section", which introduces a certain bias.
The question was whether any of you know of similar applications that also run under Linux, and whether there will be an AI-based denoising tool in darktable in the future.
Darktable integration would require a couple of operations (convolution, transposed convolution, and ReLU) written in C/OpenCL.
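For a sense of scale, here is a minimal C sketch of the core of those operations (a naive NCHW convolution fused with ReLU). The layout, padding, and names are my own assumptions for illustration, not an existing darktable interface:

```c
#include <math.h>

/* Naive NCHW 2D convolution + ReLU: the kind of ops an inference path
 * would need. Layout and names are illustrative assumptions. */
static void conv2d_relu(const float *in, int ic, int h, int w,
                        const float *weight, /* [oc][ic][k][k] */
                        const float *bias,   /* [oc], may be NULL */
                        int oc, int k,       /* odd kernel size, 'same' padding */
                        float *out)          /* [oc][h][w] */
{
    const int p = k / 2;
    for (int o = 0; o < oc; o++)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float s = bias ? bias[o] : 0.f;
                for (int c = 0; c < ic; c++)
                    for (int dy = 0; dy < k; dy++)
                        for (int dx = 0; dx < k; dx++) {
                            const int yy = y + dy - p, xx = x + dx - p;
                            if (yy < 0 || yy >= h || xx < 0 || xx >= w)
                                continue; /* zero padding */
                            s += weight[((o*ic + c)*k + dy)*k + dx]
                               * in[(c*h + yy)*w + xx];
                        }
                out[(o*h + y)*w + x] = fmaxf(s, 0.f); /* ReLU */
            }
}
```

An OpenCL port would parallelize the three outer loops across work items; the inner accumulation stays the same.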
Per https://discuss.pixls.us/t/exporting-importing-photos-in-full-32-bit-representation-to-train-a-neural-network-for-image-denoising, it would probably make the most sense to retrain the model so that it applies earlier in the pipeline (probably right after demosaic, before or after exposure) rather than right before sharpening. For that, the dataset would have to be reprocessed: the images need to be aligned, then have their exposure matched between shots. I would be happy to release all of the raw files if someone is interested in doing the work, but (as is usually the core issue) I will have no time to work on this in the foreseeable future.
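For the exposure-matching step, a least-squares gain fit is probably the simplest thing that could work. This C sketch assumes a single multiplicative factor per pair, which will not capture nonlinear highlight differences caused by aperture changes, so treat it as a starting point only:

```c
#include <stddef.h>

/* Fit the gain g minimizing ||ref - g*img||^2 over n pixels, i.e.
 * g = sum(ref*img) / sum(img*img).  One global factor per image pair;
 * a simplifying assumption, not a full exposure-matching pipeline. */
static float exposure_gain(const float *ref, const float *img, size_t n)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) {
        num += (double)ref[i] * img[i];
        den += (double)img[i] * img[i];
    }
    return den > 0.0 ? (float)(num / den) : 1.f;
}
```

Multiplying the noisy shot by this gain before training would align exposures while leaving the noise characteristics scaled consistently.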
Ideally we should also try different, simpler architectures to minimize the runtime. (Though I am with @hyhno in that "denoising [in darktable] also takes a minute if you increase 'search radius' in 'denoise (profiled)' to a certain value.")
… in fact i managed to download some of the raw files using the link you posted quite some time ago. it took me weeks to do that, and to tell the truth i never went back to test if it actually finished
from what i've seen there are some subtle issues, not so much with alignment but with exposure/highlights due to aperture changes maybe? but there are also images that may be very viable to use for pre-demosaic processing.
yes. and if you ask me you don't want to fix this in dt. i reimplemented something similar to the wavelet denoising in vk/glsl and it runs in a few milliseconds (single digit) on a 2080Ti, full raw. you cannot reach this kind of interactive performance in dt, not even when only operating on a preview/cropped/downscaled buffer. the whole pipeline setup is too clumsy and drops performance on the floor between kernel executions. finally the pixels are copied back to cpu and then carried through a couple of libraries between gtk and xorg…
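for reference, one level of the kind of à trous wavelet shrinkage being described might look like the C sketch below. this illustrates the general technique only, not the vk/glsl implementation mentioned above:

```c
#include <math.h>

static inline int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* One à trous wavelet level on a single channel: separable 5-tap
 * B3-spline blur with holes of size `dilation`, soft-threshold the
 * detail (in - blurred), recombine.  Sketch of the technique only. */
static void atrous_denoise_level(const float *in, float *out, float *tmp,
                                 int w, int h, int dilation, float thresh)
{
    static const float k[5] = { 1.f/16, 4.f/16, 6.f/16, 4.f/16, 1.f/16 };

    for (int y = 0; y < h; y++)          /* horizontal blur pass */
        for (int x = 0; x < w; x++) {
            float s = 0.f;
            for (int i = -2; i <= 2; i++)
                s += k[i+2] * in[y*w + clampi(x + i*dilation, 0, w-1)];
            tmp[y*w + x] = s;
        }

    for (int y = 0; y < h; y++)          /* vertical pass + shrinkage */
        for (int x = 0; x < w; x++) {
            float s = 0.f;               /* coarse (blurred) value */
            for (int i = -2; i <= 2; i++)
                s += k[i+2] * tmp[clampi(y + i*dilation, 0, h-1)*w + x];
            const float detail = in[y*w + x] - s;
            const float shrunk =
                copysignf(fmaxf(fabsf(detail) - thresh, 0.f), detail);
            out[y*w + x] = s + shrunk;   /* coarse + denoised detail */
        }
}
```

each level doubles the dilation; on a gpu the two passes map naturally onto two compute kernels per level.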
in the olden days in dt we tried to keep module execution times below 40ms, then below 100ms and now i'm not sure. having processed a few videos my pain threshold is actually lowered considerably nowadays.
sounds promising! keep optimising! a factor of 100 speedup is often realistic to reach on gpu architectures when starting from a naive implementation (though i don't know your base level here). maybe remove a few channels or levels?
My understanding is that darktable stores intermediate results. The later the module is in the pixelpipe, the more of these are invalidated and need to be regenerated after module parameters change.
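To make that invalidation behavior concrete, here is a toy sketch; the struct and function names are made up for illustration and are not darktable's internal API:

```c
#include <stdbool.h>

/* Toy model of a pixelpipe cache: each module caches its output, and a
 * parameter change invalidates every cache from that module onward, so
 * only downstream modules have to re-run. */
typedef struct module_t {
    const char *name;
    bool        cache_valid;
    float      *cached_output;
} module_t;

static void on_params_changed(module_t *pipe, int n_modules, int changed)
{
    /* everything upstream of `changed` keeps its cache ... */
    for (int i = changed; i < n_modules; i++)
        pipe[i].cache_valid = false; /* ... downstream must recompute */
}
```

So the earlier a slow module sits, the more often its downstream neighbors force a recompute of everything after it, but the slow module itself only re-runs when it (or something before it) changes.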
The workflow I envision for this kind of denoising is for it to happen after demosaicing, with any user intervention happening early, before or right after setting the exposure. Not a lot of other modules would be active at that point in a typical workflow. Conversely, whatever changes later should not invoke denoising again.
The model seems rather complicated; I imagine developing that kind of model in C would be quite the nightmare. There's a library (cONNXr) that can run ONNX models (so development/training could be done in a different environment), but it doesn't support GPU. There's onnxruntime in C++, but it doesn't seem like the lightest dependency…
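For what it's worth, onnxruntime also exposes a plain C API, which might fit darktable's codebase better than the C++ one. A rough sketch of driving a model with it; the model path and the tensor names "input"/"output" are hypothetical, and error handling is elided:

```c
#include <onnxruntime_c_api.h>

/* Hedged sketch: run an RGB float buffer (3*h*w values, NCHW) through
 * an ONNX model via the onnxruntime C API.  "model.onnx" and the
 * tensor names are assumptions about the exported model. */
static float *denoise_onnx(float *pixels, int64_t h, int64_t w)
{
    const OrtApi *ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

    OrtEnv *env;             ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "denoise", &env);
    OrtSessionOptions *opts; ort->CreateSessionOptions(&opts);
    OrtSession *session;     ort->CreateSession(env, "model.onnx", opts, &session);

    OrtMemoryInfo *mem;
    ort->CreateCpuMemoryInfo(OrtArenaAllocator, OrtMemTypeDefault, &mem);

    const int64_t shape[4] = { 1, 3, h, w };          /* NCHW float32 */
    OrtValue *in = NULL, *out = NULL;
    ort->CreateTensorWithDataAsOrtValue(mem, pixels,
        3 * h * w * sizeof(float), shape, 4,
        ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &in);

    const char *in_names[]  = { "input"  };           /* hypothetical names */
    const char *out_names[] = { "output" };
    ort->Run(session, NULL, in_names, (const OrtValue *const *)&in, 1,
             out_names, 1, &out);

    float *result;
    ort->GetTensorMutableData(out, (void **)&result);
    return result;  /* valid until `out` is released */
}
```

It's still not a light dependency, but it would keep training and inference decoupled: the model is trained elsewhere and darktable only ships the runtime.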
Yes, you can play one note and sing another note above it, and the interference between the two notes produces other notes, filling in a chord. I have done it on the French horn. Going off topic here!
Beyond being interested and having the time and will, there is skill too. The truth is, most devs/engineers have forgotten all their maths 10 years after graduation, so forget about reading papers and doing R&D. Most image-processing dev work in FLOSS is adapting code taken from another project. Turning a research paper into actual code is a rare skill.