Ok, thanks. This does give me a bit of a headache but obviously it is possible. Maybe I will try it a second time eventually.
as we were talking about gui concepts, here are some more screenshots of the “gui in layers” (like an onion. you need to peel it a lot to get to the core):
surface level, just your custom favourite gui elements (from config file):
all parameters of all modules which are currently connected to the output image in some way (this is very similar to darktable’s right panel in darkroom mode):
pipeline configuration, “node editor”:
these images are straight “scrot” (these aren’t mockups, it’s functional) so they are colour managed for my display. may look dull on your screen, sorry.
Ok this will be a long post.
First a few remarks concerning color management:
It seems to do something, but I am not sure it is the right thing. Here are 2 screenshots. The first one is without color management and shows more saturated colors, the second one is with color management and shows less saturated colors. Since my screen's color space is slightly larger than AdobeRGB, the direction is correct.
But here is another screenshot:
The same screenshot is opened here in Geeqie (top, with color management) and Lximage (bottom, no color management). The screenshot opened with Geeqie should show correct colors; however, when I open this photo with vkdt, it looks like the screenshot in Lximage. I think this is also visible in this post. Now I am not sure why this is, maybe it has nothing to do with vkdt, maybe something is wrong with my color management setup. Bottom line: there is a difference between with and without color management in vkdt, but the colors seem to be too saturated in both cases. (The colors are less saturated in other color-managed apps.)
Then, a question: What type of profile does vkdt want? I chose 3xGamma+Matrix. Does it accept XYZ LUT profiles, for example? (I did not test this.) Does it care about profile types?
Finally: apparently setting up color management for vkdt does not work as described in the documentation, and there might be a small mistake in the file names. First I ran the script read-icc.py, as described in the documentation, after I had installed the profile:
[anna@annapc bin]$ ./read-icc.py
please supply icc profile as command line argument
usage: ./col.py your-display.icc
But col.py does not exist! Is this a joke?
Finally I was able to set up color management in vkdt by running
Well, so far I have only tested color management in vkdt once, so I am not sure whether the result would be exactly the same if I did it again.
Second: I have information about successfully compiling vkdt on Fedora 31 and unsuccessful attempts on Manjaro and Debian Buster/MX Linux 19. Compiling on Manjaro was very easy, most packages were already on the system, and at first it looked like a success, but when I tried to run vkdt the screen inside the window frame was just black, and there was no information about errors in the terminal. On Fedora (which I am using now) it was more or less easy, but most packages have slightly different names. (I don't know how useful this info is.)
Finally (a bit off topic), dual graphics/Nvidia Prime: I was actually able to install the original Nvidia 440.44 driver on Debian testing and MX 19 (which should behave like Debian Buster). Battery life is less than 3 hours on Debian testing without actually using the Nvidia card, which is not great but better than with the old driver from the repo (less than 2 hours without Bumblebee). And during those 3 hours, the system crashed completely twice. On Fedora, Nvidia Prime works out of the box and battery life is a little under 5 hours (I think this is also true for Ubuntu), but the driver is in some special repo, so it is not the “original” driver. On Debian with the old driver from the repo + Bumblebee, battery life is longer than 5 hours. MX 19/Debian Buster did not crash with the original driver, but darktable was extremely slow with OpenCL. Btw: installing the original driver was a bit complicated but not really difficult. However, I did not do anything but install the driver. I tried that custom xorg.conf thing but it just broke my X server (I only changed the BusID).
I think I wanted to mention one or two more things but I forgot what it was LOL
I am such a snail. I think I needed about one hour to write this post.
thanks for testing all this!
let me try to address a few things:
./col.py is an oversight from a previous version. should be
read-icc.py as you figured, yes.
colour management: if i understand correctly you took a screenshot of managed vkdt and then loaded that into geeqie? that is, the buffer contains display-rgb but is likely tagged as srgb in the jpg/png of the screenshot? so geeqie is tricked into displaying it as srgb, i.e. it applies another srgb->display matrix on top, thus double-attenuating the saturation. so if anything, the lximage and the vkdt window should match (not much of a sanity check though).
display output profiles are theoretically a linear thing (you sum up the contributions of three fixed light sources, multiplied by three coefficients which are the input). there may be the oddball non-linearity that i will never understand which cannot be corrected by uploading TRC to the videolut via dispwin for instance. but i would consider that esoteric enough to not support LUT profiles as output. (camera input is a different thing, this is not representible by a matrix and i’d really love to have better/spectral data here).
also i wanted to avoid linking to some bloaty icc thing and thus just extract the 12 floats i’m interested in up front (and this is the only way of getting colour transforms into vkdt now). in the end, i’m merely interested in a rec2020 → display matrix (9 floats) and three gamma values (another 3 floats). i’ll attempt to get that out of the A and B tags and assume a D50 white point of the icc profile. any other kinds of data stored in there it’ll probably just fail horribly.
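in code, the 12-float transform boils down to roughly this (a python sketch for illustration only; the identity matrix and 2.2 gammas are placeholders, not values from a real profile):

```python
# sketch of a matrix + per-channel-gamma display transform: 9 floats
# for the rec2020 -> display matrix, 3 floats for the gamma values.

def apply_display_profile(rgb, matrix, gamma):
    """rgb: linear rec2020 triple; returns display-encoded rgb."""
    # linear part: sum the contributions of the three primaries
    lin = [sum(matrix[r][c] * rgb[c] for c in range(3)) for r in range(3)]
    # clamp out-of-gamut values, then apply the per-channel tone curves
    lin = [min(max(v, 0.0), 1.0) for v in lin]
    return [v ** (1.0 / g) for v, g in zip(lin, gamma)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # placeholder matrix
encoded = apply_display_profile([0.5, 0.25, 0.0], identity, [2.2, 2.2, 2.2])
```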
Ok. I think I understand your first thought. Lximage is correct because it displays correct colors if the opened image and the screen have the same or at least a very similar color space. But the screenshot probably has no color space info, and so Geeqie wrongly assumes it is sRGB.
So do you agree or believe me that the colors are still too saturated in vkdt?
Is it possible that this is because rec2020 and/or the sensor’s color space is larger than my screen’s color space and so I’d need a special screen profile with perceptual data? Years ago I calibrated a laptop screen with 60% sRGB and an external screen with 98% sRGB, and I displayed the same photos of colorful orchids on them, and the photos looked oversaturated on the laptop screen. That’s because the laptop screen cannot show lots of the colors in the photo. Florian Hoech taught me that I need to create a special profile with perceptual data and set the rendering intent in the image viewer to perceptual. Then I will see more similar colors on both screens, even though they are quite different. But I am really not sure about this because I am not good at this very technical stuff. Could I actually test this?
long time no updates, so here goes some intermediate thing which made a mess of history. i’ll probably need to force push at some point after this. anyways. changelog:
speed: here’s a video comparison to darktable. spent some time making it fast, bottleneck is now rawspeed and different types of raw images will have way different characteristics here. scrolling up and down until all thumbnails are there (this is on a laptop):
thumbnails are now respecting the aspect ratio and the crop module would even allow you to rotate things in super clumsy ways. also the colours of the thumbs are fixed and use the display profile now.
hotkeys: double click to enter dr mode. press ‘E’ to enter/leave dr mode. ctrl-x exits the program. f11: full screen and back. dwm users may have to make the window floating first.
revamped the pipeline view, you can now more conveniently connect things in simple cases:
like move up/down, insert, disconnect. sometimes it even works.
the favourites now only show up if the module is actually connected in the graph.
conveniently, there’s now an “export selected” button if you selected an image and it will write something, i think “output.jpg” in the current folder or so. can’t select multiple images yet.
darkroom mode now creates sidecar .cfg files for the images when you exit dr mode. this means it will actually pick up your changes next time.
- can now select output images from instance names of modules in graph, via
- substantial changes to the graph processing, enabling feedback loops for animations and such. will not be able to explain until there are modules making use of it.
- no more SDL, now using less bloaty glfw3
- database will look for .cfg files, not images. if images exist without cfg, it’ll use these and replace the cfg by
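the .cfg lookup in the last item could be sketched like this (python sketch; the image extensions and sidecar naming here are assumptions for illustration):

```python
import os

def find_database_entries(folder):
    """collect .cfg sidecars; fall back to images that lack one."""
    files = set(os.listdir(folder))
    entries = [f for f in files if f.endswith(".cfg")]
    image_exts = (".jpg", ".cr2", ".nef")  # assumed extensions
    for f in files:
        # image without a sidecar: use it directly (a cfg would
        # then be written for it)
        if f.lower().endswith(image_exts) and f + ".cfg" not in files:
            entries.append(f)
    return sorted(entries)
```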
I’ve seen some movement on your repository… but I’m getting a lot of merge conflicts when I try to rebase…
Some fun with the histogram on the previous version. I haven’t found the origin of the darker horizontal lines…
Zooming with mouse wheel is also quite fast.
oh, great to hear you got your hands in the code! sorry made a mess of history… but rebasing should work? now i have published all remainders, so i’m not planning to do a force push again.
mouse wheel sounds useful! i can imagine it would be fast, doesn’t have to reprocess…
histogram lines is probably aliasing… i’m splatting the values quite brutally with int atomics without filtering. still i was surprised by the relatively high cost.
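to illustrate the aliasing hypothesis with a toy python sketch (these are made-up numbers, not vkdt’s): splatting a smooth ramp of values into bins by plain truncation leaves some bins with systematically fewer hits than their neighbours, which render as darker lines:

```python
# nearest-bin splatting without filtering: each value lands in exactly
# one bin, so when the value count isn't a multiple of the bin count,
# adjacent bins end up with unequal hit counts (aliasing banding).

def splat_histogram(values, bins):
    hist = [0] * bins
    for v in values:  # brutal splat, no filtering
        hist[min(int(v * bins), bins - 1)] += 1
    return hist

# 100 evenly spaced values into 64 bins: every bin gets either 1 or 2
# hits, and the "1" bins render darker than the "2" bins.
ramp = [i / 100 for i in range(100)]
hist = splat_histogram(ramp, 64)
```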
let me know if you have a PR.
So far it was more about learning, but I would love to contribute. I’ll give it a try.
Hi @hanatos !
I know (almost) nothing about code, but this POC (now a bit more than just a POC) looks very impressive ! Bravo !!
I have some beginner questions:
- As the thread was flagged with the darktable keyword (which is why I saw it), could you please say what it has in common with dt? If anything.
- Do you think this could end up as beta software at some point, or are you still just playing to see where you can get with this tech?
I’ll keep an eye on the thread to see where you manage to drive this !
tl;dr: impressed by the performance of your demos
the initial developer i guess. i still agree with some fundamental design decisions of dt, so some of the vkdt code base will look familiar to people familiar with dt.
yes. i’m a little distracted and overcommitted etc, but i’m actually using this software for my very sporadic photography these days. another possibility would be merging back the faster pipeline into dt proper, or merging essential features from dt into here. at this point i don’t think super fast / modern GPUs are so widely available that we could make a hard switch to a GPU-only pipeline. which kindof makes me feel better about progressing slowly but i’m convinced this way of doing it is better.
Until now, trying to run vkdt is something I have on my “want to eventually do it” list. One thing that made me a bit hesitant is drivers, because I like drivers when I don’t need to interact with them at all, and there has been some talk about drivers in here. What I did not consider a problem is a low-spec gpu - I do have an old, integrated intel gpu (T450s: Intel HD Graphics 5500, same as you it seems), due to these posts:
I.e. my impression was that this full gpu pipeline would outperform a cpu one on (almost) any gpu (obviously still benefitting from a beefy one). However, that seems to contradict your initially quoted statement about fast gpus not being available. I know this is in early prototyping stages, i.e. no definitive answers can be given; I am looking for ballpark estimates/hints:
Can vkdt run on a quite low-spec gpu and perform better than the traditional cpu-only/mixed-cpu-gpu pipelines? If there are “too low-spec” ones, any ballparks for the minimum required specs for it to be viable?
about drivers: since recently, the driver support in debian’s apt sources is really very good. i went back to just using nvidia driver and vulkan sdk from there, apt has the latest and greatest (in sid at least, that is). very happy not having to mess with a manual .run install.
about the GPU spec. i talked to a few folks who have pre-5500 intel gpus, where vulkan support was lacking. i mean we’re talking 10 year old equipment here. still vkdt has a hard dependency on some vulkan features, so lack of this support means you just can’t run it at all.
the other thing is that i did not spend much thought really optimising the code for low-end machines. obvious things like dt does (only reprocess the pipeline from modules that actually changed their parameters and such) i just didn’t care about, because on my nvidia GPUs this would strip off half a handful of milliseconds. on older intel, this introduces noticeable lag (more in the 100s of milliseconds).
so: yes it will perform better even on old hardware. the caveat is that “better” here means it does what it does faster. only it always processes all modules and full res and full crop, where today dt processes downsized to your viewport resolution and only the visible crop (often this is like 20x less work, but comes with some substantial headaches for local operations that require some amount of context or even global histograms…). on my nvidia the vulkan implementation eats this 20x for breakfast. on older intel not so much.
while i believe this illustrates the point that vkdt’s pipeline/processing graph is better, for a practical piece of software you’d also be interested in the final user experience. this either needs some careful implementation of caching (i’ll get to this as processing graphs become more complex i suppose), or your GPU needs to be past the turnaround point (i have no good data here, the bigger and newer the better…).
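for reference, the dt-style optimisation mentioned above (“only reprocess from the module that actually changed”) is conceptually just this, on a linear pipeline (python sketch, module names made up):

```python
# re-run only the changed module and everything downstream of it;
# modules upstream of the change keep their cached results.

def reprocess(pipeline, cache, changed):
    """pipeline: ordered module names; cache: name -> result."""
    start = pipeline.index(changed)
    ran = []
    for name in pipeline[start:]:
        cache[name] = f"output of {name}"  # stand-in for real processing
        ran.append(name)
    return ran

pipeline = ["demosaic", "exposure", "llap", "filmcurv", "display"]
cache = {}
ran = reprocess(pipeline, cache, "llap")  # only llap onwards re-runs
```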
About merging back to old darktable: I think there are already some interesting features in vkdt - apart from the pure GPU pipeline - that maybe should be merged into old darktable asap. E.g. the local laplacian filter works in linear RGB in vkdt but in Lab in old darktable. Am I seeing this right? Unfortunately I am too stupid to do this myself.
… i believe the current default config applies the local contrast after the curve, so strictly speaking it’s not linear (though the working space is linear rec2020 throughout the pipeline). it’s easy enough to move it around though, both in vkdt and dt (well in dt i guess it works in Lab so there would need to be some wiring work in the code).
but the old pipeline with the full-crop preview + fine res full pipelines which need to talk to each other to compute approximate laplacian coefficients… is no fun to program, when you can have a much simpler one which is also faster. so my motivation backporting stuff to the old pipeline (not even just old dt) is a bit limited.
Thanks @hanatos for your replies !
I tried to compile it, but failed … I guess I’ll just monitor this thread until one day, something packaged automagically appears for noobs like me
So you think it is easier to modify local contrast in old darktable so it works in linear RGB than backport local contrast from vkdt?
Guess what: after I wrote this comment I got an encouraging private message from someone stating that he was convinced that I could do it
Surely it was meant as a compliment to my overestimated intelligence, when I don’t even understand what a Laplacian pyramid is…
I did not mean that you should do it. Nevertheless I am sad that you are not developing old darktable any more.
Still playing with code, I’ve some questions and/or difficulties.
- black point. A rggb value of 0 remains 0. This seems aligned with the filmcurv code (black slider). But I would like to be able to offset those black points a bit. Am I missing something? Should an exposure black slider be added?
- contrast. When I connect it before filmcurv the developed image is ok, but on exiting darkroom the thumbnail remains black (I also got a black darkroom image once, but I am not able to reproduce it every time).
If I connect it before hist, the displayed histogram looks strange. Then, connecting it to display, the histogram is ok again, and the thumbnail is ok too.
The contrast radius has no effect.
black point: yep, no control for that. should probably be added to the exposure module. however, i also have quite a bit of garbage intertwined in this module because i wasn’t willing to spend another module for colour transforms and white balancing. we should probably rip out exposure as a real exposure thing, so it can be combined with the “draw” module for dodging/burning, and have a “colorin” matrix/lut thing for the colour part.
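the black slider would conceptually just be an offset applied before the exposure scaling, e.g. (python sketch, parameter names hypothetical):

```python
# exposure with a black offset: a raw value equal to the black point
# maps to 0 exactly, then the result is scaled by 2^ev.

def exposure(value, black=0.0, ev=0.0):
    return max(value - black, 0.0) * 2.0 ** ev

out = exposure(0.25, black=0.05, ev=1.0)  # (0.25 - 0.05) * 2 = 0.4
```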
would you have a .cfg or screenshot of your connectors for this second case? i’d like to reproduce and fix this. possible that the thumbnails graph conversion fails somehow, i wanted to touch this code again anyways (reuse the cache that already exists in dr mode, now it reprocesses the whole thing which costs a few ms when exiting dr mode).
…just tried to reproduce. swapping llap/contrast and film curve modules works here. you sure you didn’t accidentally disconnect the display module? if the 1D visualisation in the gui is confusing, there is also “vkdt-cli --dump-modules” and “dot -Tpdf <” to visualise the graph in 2D.