Darktable AI Edge Detection

Yes. Human intelligence has some understanding of monkeys and trees, and a general understanding of how objects can mutually occlude each other – parts of the monkey are behind the tree, and parts are in front. In principle, AI could also understand this, and recognise that the area immediately below the monkey is connected to it but has colour and texture that resembles the branch elsewhere, and that the furry texture of the branch-like object at bottom-centre identifies it as “monkey” rather than “branch”.

I know that @David_Tschumperle has been implementing Machine Learning in G’MIC. I look forward to further developments in this area. From what little I know about AI, it seems to be difficult to implement and verify, and even more difficult to debug.

The way ON1 does it is not bad. You can go through rounds of using a red and green brush to paint very roughly over things to keep and things to exclude. After each pass you can see roughly where the confusion lies and toggle your brush to provide another sample… it works quite well, but it is slow, at least on my older machine, and you still have to use a refine or chisel-type brush on the edges most times. Still a nice tool… so I could see a natural evolution for DT being to extend the range selection pipette in the parametric mask and let you make additional range selections to include and exclude to update the mask, and allow it to sample edges and hue etc. from each sample to modify the mask…

That sounds like GrowCut segmentation. The method combines human intelligence (“this part is certainly monkey, and this part is certainly tree”) with fairly simple image-processing to find boundaries between the parts. Interestingly, it also has a confidence factor, showing how confident the algorithm is about which part each pixel belongs to.
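
For anyone curious, the core of GrowCut is only a few lines. Here is a minimal, unoptimised sketch in Python/NumPy (pure Python loops, so far too slow for real use; the function and variable names are my own, not from any particular implementation):

```python
import numpy as np

def growcut(image, labels, strength, iterations=200):
    """Minimal GrowCut sketch (illustration only, very slow).

    image    : H x W x 3 float array, values in [0, 1]
    labels   : H x W int array, 0 = unlabelled, 1 = "monkey", 2 = "tree", ...
    strength : H x W float array, 1.0 where the user scribbled, 0.0 elsewhere
    """
    h, w = labels.shape
    max_diff = np.sqrt(3.0)  # largest possible colour distance in [0,1]^3
    labels, strength = labels.copy(), strength.copy()

    for _ in range(iterations):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (dy == 0 and dx == 0) or not (0 <= ny < h and 0 <= nx < w):
                            continue
                        # the neighbour "attacks" with its strength damped by colour difference
                        diff = np.linalg.norm(image[y, x] - image[ny, nx])
                        attack = strength[ny, nx] * (1.0 - diff / max_diff)
                        if attack > strength[y, x]:
                            labels[y, x] = labels[ny, nx]
                            strength[y, x] = attack
                            changed = True
        if not changed:
            break
    return labels, strength
```

The returned `strength` map is exactly that confidence factor: it is 1.0 at the user's scribbles and decays as each label propagates across colour edges.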

I’m a GIMP newbie, what are those 5 methods for edge detection?

This is how it works just for a visual…in case I gave you any misconceptions…

So, as you can see, I was wondering if some range selection might be incorporated into DT in a similar manner by enhancing the current selection pipette… because I don't really think this is AI, but rather some iterative edge detection that can use multiple samples of color and tone??? I really have no idea, but that would seem to be what it is doing…
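
Just to make that idea concrete (this is purely an illustration of the concept, not anything that exists in darktable), a mask driven by multiple "keep"/"exclude" colour samples could be as simple as a nearest-sample comparison in a perceptual colour space; the Lab conversion via scikit-image and all the names here are my own choices:

```python
import numpy as np
from skimage import color  # used here only for the RGB -> Lab conversion

def sample_mask(image_rgb, keep_samples, exclude_samples):
    """Soft mask from multiple 'keep' and 'exclude' colour samples.

    image_rgb       : H x W x 3 float array in [0, 1]
    keep_samples    : list of (y, x) positions painted as "keep"
    exclude_samples : list of (y, x) positions painted as "exclude"
    Returns a float mask in [0, 1], high where the nearest sample is a "keep" one.
    """
    lab = color.rgb2lab(image_rgb)

    def dist_to_nearest(points):
        # distance from every pixel to its nearest sample colour
        return np.min([np.linalg.norm(lab - lab[y, x], axis=-1) for y, x in points], axis=0)

    d_keep = dist_to_nearest(keep_samples)
    d_excl = dist_to_nearest(exclude_samples)
    return np.clip(d_excl / (d_keep + d_excl + 1e-6), 0.0, 1.0)
```

Each extra brush stroke would just add samples to one of the two lists, which is roughly the iterative "another pass, another sample" behaviour described above.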

These are two common ones…

https://docs.gimp.org/2.10/en/gimp-tool-fuzzy-select.html

and

Davies Media tutorials will cover almost every aspect of GIMP if you are new to it…
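
For the record, the fuzzy select ("magic wand") idea boils down to a flood fill bounded by a colour-similarity threshold; a rough sketch (the seed point and threshold are placeholders, and real implementations add antialiasing, feathering, composite criteria, etc.):

```python
import numpy as np
from collections import deque

def fuzzy_select(image, seed, threshold=0.1):
    """Rough sketch of a magic-wand style selection.

    image : H x W x 3 float array in [0, 1]
    seed  : (y, x) starting pixel
    Grows a contiguous region from `seed`, adding 4-connected neighbours
    whose colour is within `threshold` of the seed colour.
    """
    h, w = image.shape[:2]
    seed_colour = image[seed]
    selected = np.zeros((h, w), dtype=bool)
    selected[seed] = True
    queue = deque([seed])

    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not selected[ny, nx]:
                if np.linalg.norm(image[ny, nx] - seed_colour) <= threshold:
                    selected[ny, nx] = True
                    queue.append((ny, nx))
    return selected
```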

Interesting thread. Found it after doing a web search, wondering if any Darktable developers were working on AI masking. Though, reading it more than a year later, I don't feel the reasoning against AI holds up, especially after using the latest Adobe Lightroom release.

AI masking is a game changer!! Even when it's not perfect, it's still dramatically faster to let AI spend a couple of seconds doing 90–100% of the heavy lifting of creating a mask, and then sometimes refine it with easier UX tools.

I've been a Darktable user since at least 2018, but I recently tried Lightroom: because Linux lacks a good/rapid video editor (for social media), I finally subscribed to Adobe, and Lightroom comes with the suite, so I kicked the tires. Masking is now head and shoulders faster (/better?) than Darktable in 90% of my use cases (people + objects + skies).

But I get it, it’s open source. If you want it, build it. Just thought I’d share my two cents.

Did you ever use the mask refinement tools? Especially for separating objects, the combination of coarse drawn masks with luminance, chroma and hue parametric masks + refinements can do the job quite easily.
AI results correspond to the effort spent on training - that's much easier if you have access to a whole bunch of customer images in your cloud :wink:
You can see this dependency with ChatGPT - the English results are quite good, but foreign-language requests are often quite lousy due to insufficient training.
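
For anyone unfamiliar with that combination step: once you have a coarse drawn mask and a parametric (channel-threshold) mask, combining them is conceptually just a per-pixel min or max. A toy sketch, not darktable's actual blending code:

```python
import numpy as np

def combine_masks(drawn, parametric, mode="intersection"):
    """Toy combination of a coarse drawn mask with a parametric mask.

    Both inputs are H x W float arrays in [0, 1].
    """
    if mode == "intersection":  # keep only pixels selected by both masks
        return np.minimum(drawn, parametric)
    if mode == "union":         # keep pixels selected by either mask
        return np.maximum(drawn, parametric)
    raise ValueError(f"unknown mode: {mode}")
```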

Short of AI, the ability to save and load mask presets independently of the module would also provide a quicker starting point. For instance, skin is invariably going to be an orange hue and midtones. That'd be a nice mask preset to load in one click.
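
As a rough illustration of what such a "skin" preset would encode (the hue and lightness ranges below are ballpark guesses, and this is not darktable's actual parametric-mask maths):

```python
import numpy as np

def skin_mask(image_rgb):
    """Ballpark 'skin' parametric mask: orange-ish hue and mid lightness.

    image_rgb : H x W x 3 float array in [0, 1]
    Returns a binary float mask in [0, 1].
    """
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    mx = image_rgb.max(axis=-1)
    mn = image_rgb.min(axis=-1)
    delta = mx - mn + 1e-6

    # HSL hue in degrees; only the red-dominant branch matters for orange,
    # so pixels where green or blue dominates are pushed outside the range
    hue = np.where(mx == r, (60.0 * ((g - b) / delta)) % 360.0, 360.0)
    lightness = (mx + mn) / 2.0

    in_hue = (hue >= 10.0) & (hue <= 45.0)                # orange-ish hues
    in_light = (lightness >= 0.25) & (lightness <= 0.75)  # midtones
    return (in_hue & in_light).astype(float)
```

A one-click preset would simply store those two ranges (plus some softness on the thresholds) so they could be re-applied to any image.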

Out this morning, and took this picture. It isn’t one of my best, but my aim would be to increase the separation between foreground and background, probably by blurring the trees and the other side of the loch and brightening them.

How would an AI produce a mask for this picture?

No need for AI - darktable's masking capabilities aren't that unusable ;):
[screenshot: mask demo on the posted image]
Just a minute spent on the demo (mainly chroma + a bit of hue and luminance parametric masking + refinement by feathering, opacity and contrast); unfortunately just a JPG - based on the raw, the mask could be further refined with a detail threshold to exclude the water.
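
Conceptually, those refinements are simple operations on the mask itself. A crude stand-in (a plain Gaussian blur here rather than the guided filter darktable uses for feathering, and made-up parameter values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine_mask(mask, feather_sigma=3.0, opacity=0.9, contrast=2.0):
    """Crude stand-in for mask refinement.

    mask : H x W float array in [0, 1]
    - feathering : soften the mask edges (here: plain Gaussian blur)
    - contrast   : push values towards 0 or 1 around a 0.5 pivot
    - opacity    : scale the overall strength of the mask
    """
    feathered = gaussian_filter(mask, sigma=feather_sigma)
    contrasted = np.clip(0.5 + (feathered - 0.5) * contrast, 0.0, 1.0)
    return contrasted * opacity
```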

What if you set your mask as a preset in a no-op exposure module named "skin"… then use it as a parametric mask, either from the default position or wherever you drag it in the pipe. Any module after it can then use it as a raster mask… not perfect, but useful if you hit on a mask that gets the skin tones for you. Quick and dirty for skin, I sometimes use the skin tones preset of the CLUT module and just tweak one or two of the nearest brown patches from the picker selections… if it's a small tweak this works pretty well and it can be effective…

You sort of state the reasoning…

https://www.macrotrends.net/stocks/charts/ADBE/adobe/net-worth#:~:text=Interactive%20chart%20of%20historical%20net,12%2C%202023%20is%20%24159.43B.

Short of being a 150+ billion dollar company, DT has all the same resources at its disposal :slight_smile:

Actually, not long ago Adobe looked like it was worth 2x that, so it's not all rosy over there right now :slight_smile:

Lots of things were worth 2x what they are now, to be fair

Ha Ha …don’t remind me…

The numbers in my retirement nestegg app, for instance… :flushed:

No kidding

I know, I am a constant user of masks. I took a different approach from yours, masking out the hills, but this shows the versatility of masks within DT.

I was more interested in how an AI would tackle this rather than an actual demonstration. I am more than willing to post the Raw file if it would be useful.

Why hope for AI when the human brain can do it with a few slider movements…
darktable is for users who enjoy squeezing the most out of a picture - AI is for the lazy ones with a 'good enough' approach :wink:

Hoping?

During my working life, I struggled to convince people that efficiency was, at best, a secondary goal. My thesis was that we should be looking to make people more effective at the tasks that they undertake.

The same is true here: masking is an effective way to isolate elements from a picture for further work. My question was: is AI a more effective way of doing so (or, at least, can it provide an initial basis for that further work)?

From my reading, the AI sounds OK if you have slabs of colour (faces, for example) that you wish to work on. The real question is how it handles cases where the structure is less apparent; in my case, where you want to select or deselect the few pixels making up the twigs.

The difference between us and the AI? We can see the structure and the meaning of the picture; the AI can only detect changes in pixel values.