processing that sucks less?

This! There may be a steep learning curve to these pieces of software, but I’ve always found their node-based approach very intuitive.
Also I believe their modules are fairly (if not completely) decoupled from their GUI.

And additionally, on the topic of “what goes where”: your GUI could reflect the type of input a given node accepts. I mean: if a node requires linear RGB input, it should only be possible to connect outputs of that type to it.
You could also go free-for-all, but then the user should have a pretty good idea of what is going on internally.
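A GUI could encode this directly in its connection logic. A minimal sketch of such a type check, with made-up names (`port_t`, `can_connect` and the `CS_*` tags are illustrations, not vkdt’s actual API):

```c
#include <assert.h>

/* hypothetical colour-space tag that every node port would carry */
typedef enum { CS_RAW, CS_LINEAR_RGB, CS_SRGB } colourspace_t;

typedef struct port_t
{
  const char   *name; /* e.g. "filmcurv:out" */
  colourspace_t cs;
} port_t;

/* a connection is only permissible if the colour spaces agree;
 * a GUI could grey out incompatible ports based on this check */
static int can_connect(const port_t *output, const port_t *input)
{
  return output->cs == input->cs;
}
```

With this in place, dragging a linear-RGB output over an sRGB-only input would simply refuse to snap.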

A pretty large downside to nodes is that they require a lot of screen space. With, say, Blender it’s fine because a small preview works for most of its use cases. For photography I like my preview large.

I feel, perhaps wrongly, that nodes are good when you’re building something that will ‘last’. Take a material in Blender: you’ll use it all over the scene and perhaps store it for other projects. Much of photo processing is custom tweaking. With this line of thinking, nodes are better suited for building auto-apply profiles than for actually editing a specific photo.


i agree. so maybe have a user facing node editor to develop certain pipeline templates, and then these can be applied and only a few relevant sliders could be tweaked. i was going to call node editors in this context “bloatware”, but while still working on pipeline order it’s actually fairly annoying to always change the input text file to reorder modules… and since there are already imgui node graph implementations this might not be much work.


This would make it a sort of modal raw developer. Despite being a vi user, I’m not so confident it’s a great idea for photo editing. I’d really like to find out though, because it could be great!

random update: i have wired fake-lighttable mode with thumbnails, to get a sense of speed there. still need to battle-test it with a few thousand images (wget -r or so).

rawspeed loading becomes a bottleneck during thumbnail creation. most of this could be i/o but it’s also an issue on my system with nvme ssd. need to experiment with interleaving disk io/cpu processing/gpu upload/gpu processing in multiple threads. loading a raw takes anything between 10–5000ms.
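one common way to get that overlap is a bounded queue between an i/o thread and the processing thread, so the disk keeps decoding the next raw while the current one is being thumbnailed. a toy sketch with made-up names (queue_t, loader, run_pipeline are not vkdt code; the real stages would be rawspeed decode on the producer side and thumbnail creation / gpu upload on the consumer side):

```c
#include <pthread.h>

#define QUEUE_SIZE 4   /* small buffer to decouple the two stages */
#define NUM_IMAGES 32  /* stand-in for the number of raws to load */

typedef struct queue_t
{
  int             buf[QUEUE_SIZE];
  int             head, tail, count, done;
  pthread_mutex_t mtx;
  pthread_cond_t  not_empty, not_full;
} queue_t;

static void queue_init(queue_t *q)
{
  q->head = q->tail = q->count = q->done = 0;
  pthread_mutex_init(&q->mtx, 0);
  pthread_cond_init(&q->not_empty, 0);
  pthread_cond_init(&q->not_full, 0);
}

static void queue_push(queue_t *q, int v)
{
  pthread_mutex_lock(&q->mtx);
  while(q->count == QUEUE_SIZE) pthread_cond_wait(&q->not_full, &q->mtx);
  q->buf[q->tail] = v;
  q->tail = (q->tail + 1) % QUEUE_SIZE;
  q->count++;
  pthread_cond_signal(&q->not_empty);
  pthread_mutex_unlock(&q->mtx);
}

static int queue_pop(queue_t *q, int *v)
{
  pthread_mutex_lock(&q->mtx);
  while(q->count == 0 && !q->done) pthread_cond_wait(&q->not_empty, &q->mtx);
  if(q->count == 0) { pthread_mutex_unlock(&q->mtx); return 0; }
  *v = q->buf[q->head];
  q->head = (q->head + 1) % QUEUE_SIZE;
  q->count--;
  pthread_cond_signal(&q->not_full);
  pthread_mutex_unlock(&q->mtx);
  return 1;
}

static void *loader(void *arg)
{ /* i/o thread: stand-in for rawspeed decoding image i from disk */
  queue_t *q = arg;
  for(int i = 0; i < NUM_IMAGES; i++) queue_push(q, i);
  pthread_mutex_lock(&q->mtx);
  q->done = 1;
  pthread_cond_broadcast(&q->not_empty);
  pthread_mutex_unlock(&q->mtx);
  return 0;
}

static int run_pipeline(void)
{
  queue_t q;
  queue_init(&q);
  pthread_t t;
  pthread_create(&t, 0, loader, &q);
  int v, processed = 0;
  while(queue_pop(&q, &v)) processed++; /* stand-in for thumbnail creation */
  pthread_join(t, 0);
  return processed;
}
```

with a 10–5000ms spread per raw, the bounded queue also smooths out the slow outliers as long as the average decode keeps up with the consumer.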

short video of the image thumbnail list + entering darkroom mode (the gui is constructed completely from scratch from the config file of the specific processing graph, which is loaded per image when you enter dr mode; also the whole full-res image is processed all the time, even when just panning around):

thumbnails are pre-created for this video (being lazy with threading i made a command line utility for this test). also you’ll see that it really isn’t a raw editor but a performance testing prototype (no colour management whatsoever, doesn’t clip negative values in noisy images, no highlight reconstruction, doesn’t even flip image orientation, thumbnails destroy aspect ratio, …).


This looks awesome!

Whoa! What?!
That’s ludicrous speed.

I see it’s the same old Karfiol laptop.
Too bad that I am too stupid to compile this.

I’ve compiled vkdt today. I get funny stuff with the amdgpu driver. It loaded at least once and that was really impressive. The rest of the time I got funny colors and a shutdown of the system.

It is probably a bug in the amdgpu driver. Let’s see if someone from AMD will fix it:

I just compiled vkdt but when I run it, I get a segfault. I debugged it a bit and I think the reason is that the code enables some device features (sparseResidency*) which seemingly are not available for my GPU (GeForce GTX 960M) with driver version 430.26.
The debug messages: output.txt (6.4 KB)

oh, thanks for diving into this. i’m not even sure i require the sparse residency features, could try and switch them off (or at least output some more helpful messages, but what’s the fun in that). not sure a 960M is much fun to use either though. sounds like it might be just as slow as my builtin intel. for such devices i should really start to do some basic optimisations.

fwiw it still works here switching off the sparseResidency features in qvk.c. the only one i’m a bit afraid of is the “Aliasing” one though, i’m heavily aliasing the memory of multiple vkImages.

My absolute pleasure. I am very interested in your approach to the next generation raw processing software and can’t wait to try it!

I see :slightly_smiling_face:

Interesting! I get this error:

[ERR] graph does not contain suitable display node main!

More precisely, I disabled these:

.shaderResourceResidency = 0,
.shaderResourceMinLod = 0,
.sparseResidencyBuffer = 0,
.sparseResidencyImage2D = 0,
.sparseResidencyImage3D = 0,
.sparseResidency2Samples = 0,
.sparseResidency4Samples = 0,
.sparseResidency8Samples = 0,
.sparseResidency16Samples = 0,
.sparseResidencyAliased = 0,

Enabling any of them leads to a segfault.

progress! how are you running it? the gui does not require the -g parameter any more (it has instead a default-darkroom.cfg that is loaded for all images that don’t have their own history yet, which is currently all of them…). the command line interface (vkdt-cli) still takes the -g argument.

maybe you’re trying a .cfg that wasn’t updated with the code? i’m now looking for a display instance named “main” for the output in the cli, because a graph may have many outputs. in the gui, the “main” display is the center view and “hist” is the histogram window in the top right.
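the lookup itself could be as simple as scanning the node list for a display module with the right instance name, and erroring out (like above) when it’s missing. a sketch with made-up types (node_t and find_display are illustrations, not vkdt’s actual identifiers):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* hypothetical graph node: a module name plus an instance name */
typedef struct node_t
{
  const char *module;   /* e.g. "display" */
  const char *instance; /* e.g. "main" or "hist" */
} node_t;

/* return the index of the display node with the given instance,
 * or -1 (with an error message) if the graph has none */
static int find_display(const node_t *nodes, int cnt, const char *inst)
{
  for(int i = 0; i < cnt; i++)
    if(!strcmp(nodes[i].module, "display") && !strcmp(nodes[i].instance, inst))
      return i;
  fprintf(stderr, "[ERR] graph does not contain suitable display node %s!\n", inst);
  return -1;
}
```

an old .cfg without a display instance named “main” would then trigger exactly the error quoted above.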

My bad! I was running with the vkdt-cli as:
./vkdt-cli -g examples/histogram.cfg -d all

I ran vkdt (without the -g parameter) with a raw photo as input but again got a segfault. This time it happens at line 1221 of pipe/graph.c, when n=99 and c=1.

I don’t get a segfault for a jpg input file but it only opens an empty (black) window.

hm that sounds wrong. 100 nodes in your graph? maybe some memory corruption happened before this? hard to remote debug… i’d maybe try to use “make sanitize” and rerun to see if memory gets corrupted before the crash, or look at the “-d all” output whether there are any unusual things going on.

And merge my PR :stuck_out_tongue:

oh, indeed. still not used to these fancy web things…

The llap module creates 89 nodes (there is a nested loop in its create_nodes()). Is that expected?

Tried it. It doesn’t seem to be a memory corruption issue, and nothing looks unusual in the log messages.

I couldn’t figure out what llap actually does. Is it possible to exclude it from the pipe? I just tried to do so in the cfg file (connected filmcurv to f2srgb and disconnected hist) but got a bunch of vulkan errors.