processing that sucks less?

Out of curiosity I tried to build this program on Debian testing but I get this:

I think I installed the dependencies and Vulkan (I read that I only need the Nvidia driver, which includes Vulkan support, plus vulkan-utils). But I think something is not configured correctly, some path is missing or the like.

I will give it another try on Ubuntu, maybe that’s easier (but first I need to set up Ubuntu on a pendrive).

What about libvulkan-dev?

Yes, for runtime, but you need the header files for compilation. On Debian derivatives these packages need to be installed in addition; the package names usually end with *-dev. And there is no difference between Debian and Ubuntu here …
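
On Debian/Ubuntu that would be something like:

    sudo apt install libvulkan-dev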

Yes, thanks, that was it. After I installed libvulkan-dev, make complained about another missing file, but that was easy to google as well.

Well… I managed to open a raw file with it and it seems very, very simple… So far I cannot see that it is faster than the old darktable with OpenCL, but as mentioned I only opened a file and moved the sliders around a bit. Probably there are more command-line options.

Is there a manpage for this or something like that?

darktable normally works on a roi (region of interest) or a smaller image; vkdt loads the whole 24MP image into GPU memory. Moving a slider works on the huge image and the rendering takes just milliseconds for the whole stack. This is pretty impressive!

There is nothing else to see; this is a prototype, nothing else. To get to a feature set like the one darktable currently provides will probably take several years.

I know. But… I am not good at math (I think you know that by now). Wouldn’t that mean that I need 24 GB of GPU RAM for a 24 MP photo? I think my GPU only has 4 GB.

But wasn’t there something like exporting or saving a raw? Or thumbnails?

24MP: 24 million times sizeof(uint16_t) which is 48 million bytes, 48MB. the floating point buffer is f16 but 4x per pixel, so it’s four times that. tldr: you’re fine with 4GB.
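
in c, if you want to double check the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
      const uint64_t px  = 24000000ull;                // 24MP
      const uint64_t raw = px * sizeof(uint16_t);      // uint16 input: 48MB
      const uint64_t f16 = px * 4 * sizeof(uint16_t);  // 4x f16 per pixel: 192MB
      printf("raw %lluMB, f16 %lluMB\n",
          (unsigned long long)(raw/1000000),
          (unsigned long long)(f16/1000000));
      return 0;
    }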

and yes, no features. no export, no saving raws or jpgs. i mean it does all that from the command line interface, but i intentionally didn’t wire any useful features at this point. i want to stabilise the core/processing graph before putting it into production.

the floating point buffer is f16 but 4x per pixel

If I recall correctly, float16 has 11 bits of precision for the mantissa, so this is losing precision for any input format that uses 12 bits or more, right?
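
For example (a quick check, assuming a compiler with _Float16 support such as recent GCC or Clang):

    #include <stdio.h>

    int main(void)
    {
      // a 12-bit sensor delivers 0..4095, but f16 represents integers
      // exactly only up to 2048; above that the spacing is 2
      _Float16 v = (_Float16)4095;
      printf("%.1f\n", (double)v); // prints 4096.0, one code value off
      return 0;
    }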

the input buffer i store in uint16_t. only after processing (done in 32-bit float) does it end up in f16. but i discussed this with ingo too… it’s a matter of changing f16 to f32 in a config file if you’d rather; you don’t even need to recompile. bandwidth is expensive and i’m trying to gain speed where i can at this point.

Ah, if processing is done in 32 bits, then all is good. Thanks for the explanation.

The way I am dealing with this aspect in PhotoFlow is to consider masks as sub-images, which are edited the same way as the main buffer: masks are built as a stack of layers (gradients, paths, blurs, HSV masks, etc…), each layer having different blend options to combine it with the rest of the mask. I find this to provide great flexibility with relatively small UI overhead.

So are there more commands or not?

not sure what you’re looking for. you can build your own processing pipeline by creating graphs of modules. the modules are found in the bin/modules/* subdirectories and do largely undocumented things. you may find an occasional readme.md, some of which may be outdated. the list of modules is far from complete enough to facilitate real raw development.

to construct a graph yourself, you can manually create .cfg files, as in the bin/examples/ directory. there you can load modules, interconnect them, set parameters, and finally execute the graph using vkdt-cli -g graph.cfg.
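
roughly, such a file is just a list of module, connect, and param lines. a made-up sketch of the shape (module, instance, and connector names here are hypothetical; the files in bin/examples/ are the authoritative reference):

    module:i-raw:main
    module:demosaic:main
    module:display:main
    connect:i-raw:main:output:demosaic:main:input
    connect:demosaic:main:output:display:main:input
    param:i-raw:main:filename:test.cr2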

if you overwrite bin/default-darkroom.cfg with a custom config, or put a .cfg next to a raw file, these will be loaded in the gui in dr (darkroom) mode. bin/darkroom.ui finally has a list of parameters that will be exposed in the gui for tweaking.

i think that’s all, this may be the exhaustive documentation of everything it does :slight_smile:

I think I’d like to test that. What’s the command for it? I suspect it is somehow linked to the thumb.cfg file and the thumb module.

Btw: I checked out the contents of the bin folder and the config files - you can imagine that it looks quite complicated to me… but I have the impression that the program can already do quite a bit… I mean, it is actually useful, but operating it from the command line, building the graphs etc. is not really “comfortable”.

for light table mode (it does not rotate images, it doesn’t preserve the aspect ratio, the downsampling is outrageous, and there is no colour management, etc…), start the program with an argument pointing to a folder of raw files:

./vkdt /path/to/folder/of/raws/

Although at first this sounded very easy, I only see black-and-white butterflies :frowning:

Xlib: extension “NV-GLX” missing on display “:0”.

I guess this is related to the fact that I have dual graphics. I think I have installed the glx packages.

yeah dual gpus are a lot of fun. do you have optirun? does that even still exist? if you’re on ubuntu, apparently they have something called nvidia-prime to switch gpus? my last dual gpu laptop was from 2011 or so, i’m sure i can only be of very limited help here.

I think switching GPUs is not the issue. Google appears to think that it is a bug in the latest Nvidia driver from the Debian testing repo.
I will try again, although somehow this is getting painful LOL. I was also wondering whether I should/could run vkdt with the Intel GPU only (nvidia switched off).
Maybe someone else knows more about this?

just a quick update again.

in an effort to make the pipeline feature complete, i implemented rasterised drawn masks. the rest of the modules are based on compute shaders; rasterising lines requires a separate render pass with a traditional vertex/geometry/fragment shader pipeline. this is done as a proof of concept and is supported in the pipeline (in this example wired to a blend module that modulates exposure):
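
for the curious, a cpu-side sketch in c of roughly what this blend computes (buffer layout and names are made up; the real thing runs on the gpu as part of the pipeline):

    // modulate exposure by a rasterised mask (illustrative layout only)
    #include <math.h>
    #include <stddef.h>

    // img: rgba float pixels, mask: one coverage value in [0,1] per pixel,
    // ev: exposure adjustment in stops where the mask is fully drawn
    void blend_exposure(float *img, const float *mask, size_t npix, float ev)
    {
      for(size_t i = 0; i < npix; i++)
      {
        // fade the 2^ev gain in and out with the mask coverage
        const float gain = 1.0f + mask[i] * (exp2f(ev) - 1.0f);
        for(int c = 0; c < 3; c++) // leave alpha untouched
          img[4*i+c] *= gain;
      }
    }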


again, the gui is only wired just enough to be able to debug it, such that it’s impossible to confuse it with production software at this point:

i suppose the #1 blocking issue keeping me from using it for real work right now is the ui. the obvious lack of features/modules is probably only #2.

I am really impressed by the progress you are making on this, and I’d love to see your engine at the heart of some kind of “photoflow next generation”. I am practically offline until the end of January, but then I should be able to actively work on photography tools again.

Concerning the UI, I would really love to see some kind of collaborative work among people in this community, at least for defining the design and basic usability concepts. Then those few of us who have good experience in UI programming can do the actual work…
Obviously, I personally love the concept of adjustable editing layers, maybe coupled with an alternative node-based representation. PhotoFlow was my experiment in this direction, good or bad I do not know.

Concerning the possible UI toolkits, I see practically two possibilities:

  1. use one of the big ones (Qt/KDE, GTK, wxWidgets). This would have the advantage of good integration with different OSes, but comes at the expense of a bloat of dependencies, and some color-management oddities on macOS (where Cairo limits us to sRGB). Also, I do not know how well they integrate with Vulkan-based rendering.
  2. immediate-mode UI toolkits (dear imgui or others). I have no direct experience with them, but they seem to allow better control over the graphics rendering, possibly including custom color management. There are also ready-to-use packages for displaying a node-based representation, and they are extremely lightweight. However, I have no idea how well they integrate with the OSes, although they might even allow an easy port to Android devices if we are interested in that option.

My scattered thoughts end here… what are yours?

thanks for your comments, that is very motivating. it would be great to discuss a few ui concepts before diving into implementation. i think a few things can be done by combining “prefabs” of pieces of module graphs, like tying a module to a blend node + maybe a drawn mask node. for more advanced things (use a guided filter to blur the resulting mask…?) maybe a full node editor is sometimes a good thing. i’d like to avoid ui complexity if possible.
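
as a sketch, such a prefab could just be a canned fragment of graph config like the one above (names hypothetical again): a drawn mask feeding the mask input of a blend node that modulates a module’s output:

    module:exposure:01
    module:draw:01
    module:blend:01
    connect:exposure:01:output:blend:01:input
    connect:draw:01:output:blend:01:mask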

as for toolkits: i’m personally not interested in bloaty toolkits any more. i think they are more limiting than empowering. colour management? 10-bit support? performance issues? point releases that break the api? i’m not patient enough to go through all of this again.

by OS integration you mean look and feel and desktop integration? i think even with gtk the most integration you get is calling system("xdg-open"). but i may not be the best person to talk to about this. my “desktop” is dwm and i usually run applications full screen.