Thanks for working through this. I’ll go over your patch in more detail when I have a bit of time. Feel free to open an issue over on GitHub if you have the inclination. If you have them, logs of the original errors you encountered would be nice to have, and that’s probably a better place to post them.
Same goes for this, although a sample raw file would help me track that down, too.
Do you happen to know what version of gmic you’re running, and whether the raws are lossless compressed?
That might be related to the same bug mentioned above; the darktable devs are still figuring out how to handle TIFF files correctly (since a TIFF could be either a raw or a non-raw file).
My darktable version is 5.2. It looks like exactly the same problem as the one mentioned. Thanks for this update. I hope to see the patch incorporated into darktable in the next version.
I wish this nind-denoise, or similar AI-based masking/denoising, were part of darktable itself instead of a separate batch program.
It will most likely be a combination of built-in support, for adding the result to the workflow, and scripting for interfacing with the external tools that actually do the AI.
This gives the most flexibility, since AI is still new and constantly changing.
Hey, I just wanted to say I tested this and it works incredibly well. I did it purely via Python on the CLI; I’m not sure if I should be testing a workflow within darktable/Lua, but I am really excited about this somehow becoming part of the pipeline.
Tested on a MacBook Pro with an M4 (base) chip; it takes roughly 60 seconds per image, using .NEF files from my Nikon Z6iii.
In the meantime I can already edit my images in darktable, then use this to denoise afterwards. Amazing. No need for DxO.
One thing I am wondering is whether there’s a way I’m missing to pass a whole directory in to denoise. Currently I’ve written an alias so I can just run denoise <file_name> instead of having to type out my darktable-cli and gmic paths etc., so my current solution is just
denoise <image_1> && denoise <image_2> ...
but the original post mentioned inputting a directory. I can’t seem to find a way to do that, or any mention of it, in the code.
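Until directory support turns up, a plain shell loop gets you most of the way. A minimal sketch, assuming your denoise command is available as a script or shell function (a bare alias won’t expand inside a non-interactive script, so wrapping it is safer) and that your raws are .NEF as in your case:

```shell
#!/usr/bin/env bash
# Batch-denoise every raw file in a directory (sketch; adjust the glob
# to your raw format). `denoise` is the wrapper from the post above.
denoise_dir() {
    local dir=${1:-.}
    shopt -s nullglob   # skip the loop entirely if no files match
    local f
    for f in "$dir"/*.NEF; do
        echo "denoising $f"
        denoise "$f"
    done
}
```

Then `denoise_dir ~/Pictures/2024-05-shoot` runs the whole folder, and the files process one at a time just like your `&&` chain did.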
Yeah, I’ve read through that, thinking (as I presume you were) that there would be a lot of overlapping code between denoising and ‘AI’ masking (image segmentation) modules, just as you might find in the corresponding academic space.
Then I dived into the code and realized the functional requirements are vastly different. The design of AI masking is being driven by the need for it to be an interactive and responsive UI element. That makes for a rather more sophisticated (or complex, depending on your perspective) approach to integration than would be warranted for denoising, which can simply be appended (or possibly prepended) to your workflow. Denoising is also much, much more computationally intensive in its current state, though it will likely close the gap in time.
All that to say - I don’t think they’re writing that code to serve as a generalist interface for ML. It’s a self-contained sort of thing. Porting this to opencv-dnn would likely be a bit premature - Python is much more suited to experimentation.
Hashing out the Lua side of things, so as to properly hook this into darktable, is on my short list of things to do. That at least will let you run the workflow without leaving darktable’s UI.
*(edited to add some more detail, in the interest of clarity)*
I had a chance to use an AI segmentation mask in KDenlive today, following this tutorial:
Basically I select a short zone/clip, let the AI find all the segments available for each frame (takes several minutes), then interactively select the segment/mask, and it then generates the masks for each frame following my selection (takes several minutes, again). It’s definitely not a responsive process, but the bar for a video editor is generally lower than for a photo editor.
Even Profiled Denoise or “diffuse or sharpen” (depending on the number of iterations) is slow for an interactive workflow; I used to tweak them into presets and apply them on export instead. The same would be true for AI denoise. That said, for high-ISO raws, not having false color or a color cast due to noise would change the editing workflow. So having the flexibility either to save AI denoise until export, or to do it up front (e.g. via a TIFF or DNG intermediate/cache file), is important.
It looks like joint demosaicing and denoising will give the best image quality. Other papers seem to agree, though I have only glanced over a few of them.
It would be great if we could even make a standalone tool from this. I took a look at the repo, but I’m not familiar with the tooling, so it would take some time (which I don’t have much of atm) to make anything out of it.
That was my conclusion as well, although the difference between “best” and what we have now isn’t likely to be earth-shattering. I haven’t seen an ablation study that perfectly matches this, but you can sort of infer from similar benchmarking efforts that there’s just not a huge amount of noise left to remove. At some point we’re going to have to stop chasing metrics and start discriminating based on our own aesthetic preferences - we might even be at that point already.
I think this thread is getting too long to follow/keep track of. Whenever you’re ready, just create a new thread with your instructions, and I’ll link to yours in the first post of this thread.
Those black spots are likely clipped values. If you see those black spots when exporting directly from darktable, then it’s a module in darktable; but if you see them only when exporting with the nind-denoise script, then it’s likely GMIC introducing them when applying the RL-deblur.
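One quick way to confirm the clipping theory is to count pixels pinned at the extremes of the range in each exported file. A minimal numpy sketch (the function and the synthetic demo array are just illustrations; load the real export as an array with any image library you have, e.g. Pillow or imageio):

```python
import numpy as np

def count_clipped(img, low=0, high=None):
    """Count pixels clipped to the bottom (and optionally top) of the range.

    img: array of pixel values, any shape/dtype.
    Returns (n_black, n_white); n_white is None when `high` isn't given.
    """
    arr = np.asarray(img)
    n_black = int(np.count_nonzero(arr <= low))
    n_white = int(np.count_nonzero(arr >= high)) if high is not None else None
    return n_black, n_white

# Synthetic 16-bit example: two clipped-black pixels, one clipped-white.
demo = np.array([[0, 1200, 65535], [0, 30000, 40000]], dtype=np.uint16)
print(count_clipped(demo, low=0, high=65535))  # -> (2, 1)
```

Running it on both the darktable-only export and the nind-denoise output should show where the zeros first appear, which narrows it down to darktable or to the GMIC deblur step.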
I don’t have a problem processing your NEF with my DT (5.3.0+27~gb67d715c74-dirty). However, loading your XMP sidecar gives me a black image, and I have to go down the history stack all the way to rawprepare to get the image to display; perhaps you have a different workflow from mine.