processing that sucks less?

oh, great to hear you got your hands in the code! sorry i made a mess of history… but rebasing should work? i have now published all the remaining bits, so i’m not planning to do a force push again.

mouse wheel sounds useful! i can imagine it would be fast, since it doesn’t have to reprocess…

the histogram lines are probably aliasing… i’m splatting the values quite brutally with int atomics, without any filtering. still, i was surprised by the relatively high cost.
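
to illustrate what i mean by splatting (just a C sketch of the idea, not the actual compute shader code): every value is dumped into its nearest bin with an atomic increment, with no filtering at all, which is exactly the kind of thing that produces those aliased spikes.

```c
// C sketch of an unfiltered histogram splat -- not vkdt's shader, just the idea.
// each value lands in exactly one bin via an atomic add; neighbouring intensities
// can pile onto the same bin, which shows up as aliased lines in the histogram.
#include <stdatomic.h>
#include <stddef.h>

#define HIST_BINS 256

static _Atomic unsigned int hist[HIST_BINS];

static void splat(const float *lum, size_t n)
{
  for(size_t i = 0; i < n; i++)
  {
    int b = (int)(lum[i] * (HIST_BINS - 1) + 0.5f); // nearest bin, no filtering
    if(b < 0) b = 0;
    if(b >= HIST_BINS) b = HIST_BINS - 1;
    atomic_fetch_add_explicit(hist + b, 1u, memory_order_relaxed);
  }
}
```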

let me know if you have a PR.

So far it was more about learning, but I would love to contribute. I’ll give it a try.

Hi @hanatos !

I know (almost) nothing about code, but this POC (now a bit more than just a POC) looks very impressive! Bravo!!

I have some beginner questions:

  • As the thread was tagged with the darktable keyword (which is why I saw it), could you please say what it has in common with dt, if anything?
  • Do you think this could end up as beta software at some point, or are you still just playing to see how far you can get with this tech?

I’ll keep an eye on the thread to see where you manage to drive this! :slight_smile:

tl;dr: Impressed by the performance of your demos

Thanks
GLLM

heya,

the initial developer, i guess. i still agree with some fundamental design decisions of dt, so some of the vkdt code base will look familiar to people who know dt.

yes. i’m a little distracted and overcommitted etc, but i’m actually using this software for my very sporadic photography these days. another possibility would be merging back the faster pipeline into dt proper, or merging essential features from dt into here. at this point i don’t think super fast / modern GPUs are so widely available that we could make a hard switch to a GPU-only pipeline. which kind of makes me feel better about progressing slowly :slight_smile: but i’m convinced this way of doing it is better.

Trying to run vkdt is still on my “want to eventually do it” list. One thing that made me a bit hesitant is drivers, because I like drivers best when I don’t need to interact with them at all, and there has been some talk about drivers in here :slight_smile: What I did not consider a problem is a low-spec GPU - I do have an old, integrated intel GPU (T450s: Intel HD Graphics 5500, same as yours it seems) - due to these posts:

I.e. my impression was that this full GPU pipeline would outperform a CPU one on (almost) any GPU (while obviously still benefitting from a beefy one). However, that seems to contradict your initially quoted statement about fast GPUs not being widely available. I know this is in early prototyping stages, i.e. no definitive answers can be given; I am only looking for ballpark estimates/hints:
Can vkdt run on a quite low-spec GPU and perform better than the traditional CPU-only/mixed-CPU-GPU pipelines? If there are GPUs that are “too low-spec”, any ballpark for the minimum required specs for it to be viable?

about drivers: as of recently, the driver support in debian’s apt sources is really very good. i went back to just using the nvidia driver and the vulkan sdk from there; apt has the latest and greatest (in sid at least, that is). very happy not having to mess with a manual .run install.

about the GPU spec: i talked to a few folks with pre-5500 intel GPUs where vulkan support was lacking. i mean, we’re talking 10-year-old equipment here. still, vkdt has a hard dependency on some vulkan features, so lacking this support means you just can’t run it at all.
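
roughly, this is the kind of check that fails on such hardware (sketch only: the features queried below are illustrative, not vkdt’s actual requirement list):

```c
// sketch of a hard vulkan feature dependency at startup. the two features
// queried here are only examples, not vkdt's real requirements.
#include <vulkan/vulkan.h>
#include <stdio.h>
#include <stdlib.h>

void check_device(VkPhysicalDevice dev)
{
  VkPhysicalDeviceFeatures f;
  vkGetPhysicalDeviceFeatures(dev, &f);
  if(!f.shaderStorageImageWriteWithoutFormat || !f.samplerAnisotropy)
  { // hypothetical requirements, for illustration
    fprintf(stderr, "GPU/driver lacks required vulkan features, cannot run\n");
    exit(1);
  }
}
```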

the other thing is that i haven’t spent much thought on really optimising the code for low-end machines. obvious things that dt does (only reprocess the pipeline from the modules that actually changed their parameters, and such) i just didn’t care about, because on my nvidia GPUs this would only shave off half a handful of milliseconds. on older intel, skipping this introduces noticeable lag (more in the 100s of milliseconds).
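
for reference, the kind of caching dt does and vkdt currently skips boils down to something like this (a generic sketch, not code from either project):

```c
// generic sketch of "only reprocess what changed" -- not dt or vkdt code.
// modules are assumed to sit in execution order; everything downstream of the
// first module whose parameters changed is re-run, the rest is reused.
#include <stdbool.h>

typedef struct module_t
{
  bool params_changed;                 // set by the gui when a slider moves
  void (*process)(struct module_t *);  // runs this module's kernel(s)
} module_t;

void run_pipeline(module_t *mod, int cnt)
{
  bool dirty = false;
  for(int i = 0; i < cnt; i++)
  {
    dirty |= mod[i].params_changed;    // a change taints all downstream modules
    if(dirty) mod[i].process(&mod[i]); // upstream cached output stays valid
    mod[i].params_changed = false;
  }
}
```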

so: yes, it will perform better even on old hardware. the caveat is that “better” here means it does what it does faster. only it always processes all modules, at full resolution and full crop, whereas today dt processes the image downsized to your viewport resolution and only the visible crop (often this is like 20x less work, but it comes with some substantial headaches for local operations that require some amount of context or even global histograms…). on my nvidia the vulkan implementation eats this 20x for breakfast; on older intel, not so much.

while i believe this illustrates the point that vkdt’s pipeline/processing graph is better, for a practical piece of software you’d also be interested in the final user experience. this either needs some careful implementation of caching (i’ll get to this as processing graphs become more complex i suppose), or your GPU needs to be past the turnaround point (i have no good data here, the bigger and newer the better…).

About merging back to old darktable: I think there are already some interesting features in vkdt - apart from the pure GPU pipeline - that should maybe be merged into old darktable asap. E.g. the local laplacian filter works in linear RGB in vkdt but in Lab in old darktable. Am I seeing this right? Unfortunately I am too stupid to do this myself.

… i believe the current default config applies the local contrast after the curve, so strictly speaking it’s not linear (though the working space is linear rec2020 throughout the pipeline). it’s easy enough to move it around though, both in vkdt and dt (well in dt i guess it works in Lab so there would need to be some wiring work in the code).

but the old pipeline, with the full-crop preview + fine-res full pipelines that need to talk to each other to compute approximate laplacian coefficients… is no fun to program when you can have a much simpler one which is also faster. so my motivation for backporting stuff to the old pipeline (not even just to old dt) is a bit limited.

Thanks @hanatos for your replies !

I tried to compile it, but failed … I guess I’ll just monitor this thread until one day, something packaged automagically appears for noobs like me :wink:

So you think it is easier to modify local contrast in old darktable so it works in linear RGB than to backport local contrast from vkdt?

Guess what: after I wrote this comment I got an encouraging private message from someone stating that he was convinced I could do it :laughing:
Surely it was meant as a compliment to my overestimated intelligence, given that I don’t even understand what a Laplacian pyramid is…

I did not mean that you should do it. Nevertheless I am sad that you are not developing old darktable any more.

Still playing with the code, I have some questions and/or difficulties.

  • black point. An rggb value of 0 remains 0. This seems aligned with the filmcurv code (black slider). But I would like to be able to offset those black points a bit. Am I missing something? Should a black slider be added to exposure?
  • contrast. When I connect it before filmcurv the developed image is ok, but on exiting darkroom the thumbnail remains black (I also got a black darkroom image, but I’m not able to reproduce it every time).
    If I connect it before hist, the displayed histogram looks strange; then connecting it to display, the histogram is ok again, and the thumbnail is ok too.
    The contrast radius has no effect.

black point: yep, no control for that. should probably be added to the exposure module. however, i also have quite a bit of garbage intertwined in this module because i wasn’t willing to spend another module for colour transforms and white balancing. we should probably rip out exposure as a real exposure thing, so it can be combined with the “draw” module for dodging/burning, and have a “colorin” matrix/lut thing for the colour part.
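
(for illustration, such a black slider would boil down to something like this; a sketch, not the current exposure module code:)

```c
// sketch of a black slider inside exposure -- not the actual vkdt module.
// the black offset lifts/lowers the floor before the exposure gain is applied.
#include <math.h>

static inline float expose(float v, float black, float ev)
{
  return (v - black) * exp2f(ev);
}
```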

would you have a .cfg or screenshot of your connectors for this second case? i’d like to reproduce and fix this. possible that the thumbnails graph conversion fails somehow, i wanted to touch this code again anyways (reuse the cache that already exists in dr mode, now it reprocesses the whole thing which costs a few ms when exiting dr mode).

…just tried to reproduce. swapping llap/contrast and film curve modules works here. you sure you didn’t accidentally disconnect the display module? if the 1D visualisation in the gui is confusing, there is also “vkdt-cli --dump-modules” and “dot -Tpdf <” to visualise the graph in 2D.

you sure you didn’t accidentally disconnect the display module?

I don’t think so, because playing with the position it works, but only before hist and display.
All the other positions work in darkroom (not always, but I haven’t captured the conditions) yet produce black thumbnails.

Those which work:
011.NEF.cfg.txt (1.6 KB) 013.NEF.cfg.txt (849 Bytes) 014.NEF.cfg.txt (1.6 KB)
Those which show a black thumbnail:
016.NEF.cfg.txt (1.6 KB) 017.NEF.cfg.txt (1.6 KB) 021.NEF.cfg.txt (1.6 KB) 022.NEF.cfg.txt (1.7 KB)

Here is an example where the darkroom is black.
Using this default config: default-darkroom.cfg.txt (1.2 KB), connecting contrast before filmcurv, and ending up with 020.NEF.cfg.txt (1.6 KB) and
main.pdf (14.9 KB)

This black darkroom does not happen when contrast is already connected in default-darkroom.cfg.

aaah i see, sorry, i thought you meant the local laplacian. the contrast module is the other one, using the guided filter in a single pass, so it would only ever boost detail at one scale. i maybe changed something in the guided filter code and the module broke. i should probably look into getting it back. until then, contrast is known broken…
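
for context, conceptually the module does something like this (a sketch, not vkdt’s code; a plain box blur stands in for the guided filter, which is why there is only ever one detail scale):

```c
// conceptual sketch of a single-pass local contrast boost -- not vkdt's module.
// the real thing builds the base layer with a guided (edge-aware) filter; a box
// blur stands in here. one smoothing radius => detail boosted at exactly one scale.
static float blur_at(const float *in, int w, int h, int x, int y, int r)
{
  float sum = 0.0f; int cnt = 0;
  for(int j = y - r; j <= y + r; j++)
    for(int i = x - r; i <= x + r; i++)
      if(i >= 0 && i < w && j >= 0 && j < h) { sum += in[j * w + i]; cnt++; }
  return sum / cnt;
}

void contrast_boost(const float *in, float *out, int w, int h, int radius, float boost)
{
  for(int y = 0; y < h; y++)
    for(int x = 0; x < w; x++)
    {
      const float base   = blur_at(in, w, h, x, y, radius); // stand-in base layer
      const float detail = in[y * w + x] - base;            // single-scale residual
      out[y * w + x] = base + boost * detail;               // amplify that one scale
    }
}
```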

(okay, this is stupid, broken code online. please pull and try again)

Your last commit is still from the 8th of June… :blush:

oops too many remotes, should be on github now too.

Works great now. No black image any more. Thanks!
What about contrast / radius? Any issue or plan for it?