absolutely loving it!
Just dug out the raws of a timelapse I did a few years ago, just to try that feature, and I almost couldn't believe how it played ~20GB of raws as live video while panning around and moving sliders. I don't think the GPU fans were even spinning. Very impressive!
There is no UI for key frames yet, right?
Also, it took me some time to figure out how to get back to the light table (I found the Esc key; is there another way?).
What is the easiest way to set white balance? I see there is no dedicated module called WB. I guess it’s the last three sliders in “colour”? How do I use the picker tool for that (if there is one already)?
Thanks for this very nice piece of software!
I can really see this gaining some traction in the near future
29 posts were split to a new topic: vkdt on Windows
Hi @hanatos, all
I hope I won't irritate some people with the below.
I’m one of those amateur photographers who shoot 99.9% JPEG… Yeah, Fujifilm’s SOOC JPEGs are quite nice.
However, I do like to adjust my pics in PP.
I’ve noticed that, when opening JPEG files in vkdt, they get the following adjustments:
and end up over-contrasted and over-saturated, with burnt highlights and a bit too much clarity for my taste…
Do you think it’d be possible, when loading JPEGs, to default these settings to values that render the picture as the JPEG itself, i.e. as follows:
expo: 1.0
light: 1.0
contrast: 1.0
clarity: 0.0
shadows: 1.0
highlight: 1.0
Addendum: I understand RAW aficionados want their software to fully develop the raw (i.e. perform significant adjustments). JPEG shooters usually want to adjust the pic slightly to their taste (usually a small amount of contrast/saturation/HL/shadows/clarity/… changes).
Raw files were indeed created to exploit the sensor’s full capabilities. But the existence of raws does not mean JPEGs should never be slightly adjusted; the possible range of change is just considerably smaller.
Thanks a lot,
GLLM
Anyone had success with running and compiling vkdt on an Intel Mac?
that is correct. i mean, the ui is: you press ‘k’ while hovering over the ui control and it will insert a keyframe for this parameter at the current animation frame (in the ‘esoteric’ tab).
i used to have ‘e’ as a shortcut key but removed it. probably a good idea to introduce a real, configurable keyboard shortcut, i’ll put that on the list. maybe double click is also an expected action here?
yeah, that’s built into the colour module. each module needs to read/write global memory, so a lower number of modules is faster. also, for the ui i think it makes more sense to have fewer modules.
anyhow, picking i would do as outlined here: https://jo.dreggn.org/vkdt/src/pipe/modules/colour/readme.html and apply one of the `colour-monk-XX` presets (`ctrl-p`). they will go towards skin tone, not white, but it’s easy enough to reset the target colour to neutral white from any of them.
oh, very good point indeed. for a quick fix, edit the `bin/default-*.i-jpg` files (one is for thumbnails and one for darkroom mode) and just remove the `llap` and probably also the `filmcurv` modules from the default pipeline. i shall make vkdt load such files with priority if i find them in your `.config/vkdt` directory in the future.
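to sketch the idea (the module and connection lines here are made up; the real files next to the vkdt binary will look different): delete every line that references the two modules, e.g.

```
# hypothetical excerpt of bin/default-darkroom.i-jpg
module:i-jpg:main
module:llap:01                             # <- delete this line
module:filmcurv:01                         # <- delete this line
connect:i-jpg:main:output:llap:01:input    # <- and any connect lines naming them
```

after removing them, reconnect the remaining modules so the graph stays unbroken.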
in the past this has worked. macintosh computers don’t do vulkan natively, so you need moltenvk between vulkan and the hardware. i think maybe newer drivers already include this, but i don’t know.
let’s politely say that i do not regret that in the least.
impressive effort… i thought this would be pretty much impossible by now. i’m travelling at this point, but once i’m back let me have a look at github. i can probably factor out the hard dependencies on linux kernel interfaces into a shared place so freebsd people can drop in their code easily too. i started to put some of it into `fs.h`.
i talk to the sys and proc interfaces and v4l2, and use posix libraries. these are not dependencies on libraries but on linux.
please do read the readme or see here
One more question: To me the source code looks like all processing is done on 16bit float per colour channel, is that correct? If yes, does this pose a limit on what can be done in vkdt?
that is the default, yes. the graph supports anything really, all you need to do is replace a couple of `*` or `f16` entries in the connector files by `f32`. i evaluated the difference for quite a while, and in terms of processing photographs i concluded the difference is insignificant (to the point where i was hard pressed to spot differences at all). i suppose there are applications where you’d want more bits. the beauty of gpu programming here is that the kernels are largely independent of the texture format, which is handled by the texture units and served as 32-bit float to the compute units.
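for instance, a module’s connectors file could change like this (illustrative; assuming the `name:type:channels:format` line layout, where `*` means the module accepts whatever format is connected):

```
# before: 16-bit half float
output:write:rgba:f16
# after: 32-bit float
output:write:rgba:f32
```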
on the other hand, with f16 you cut your bandwidth usage in half, which has quite a big speed impact for bandwidth-limited operations like the ones we have here.
the only place where some extra adjustment needs to be made is when uploading pfm from 32-bit float to 16-bit gpu memory, and similarly on the way out towards export.
https://github.com/hanatos/vkdt/blob/master/src/pipe/readme.md explains how to get strings to be used as parameters in modules and mentions that this would be helpful for loading lensfun parameters (last paragraph). But as far as I understand, the “lens” module uses a lens model different from the one used in the lensfun project. Is there a way to correct lenses supported by lensfun? How is the “lens” module supposed to be used? Trial and error with the sliders?
Thanks a lot for fixing most of the bugs I filed by the way
thanks for filing! it’s the only way they will eventually be fixed…
as to lensfun: i implemented this new lens distortion model because it’s very simple and scales to fisheye distortion too (no need for separate models). also the parameter set is very small, so indeed i got away with manual slider tweaking in the limited tests i’ve run so far.
you can always create a preset (`ctrl-o`, then filter for `lens` and only include those parameters) to reuse on other images. but yes, ideally there’d be a community database for such things. maybe it’s possible to convert lensfun profiles? the model works well enough that i’m not very motivated to implement more distortion models for compatibility (though that’s probably an option too). we can’t use the lensfun cpu code anyways.
Here’s an example file to reproduce issue 45:
DSC01098.ARW (41.1 MB)
Just load the file in vkdt with the default parameters and pull the exposure in the colour module up to +2 EV (I can’t upload the cfg file, neither here nor on github); this produces the following image on an AMD RX 6600:
Hi all,
just curious, and asking because I cannot test it myself: has anyone tried vkdt on a laptop which has no dGPU (only an iGPU)?
Is it even possible (is there Vulkan support on recent iGPUs)?
If so, does it work? And if yes, what about the performance?
I’m considering a laptop, but between portability and a discrete/dedicated GPU (dGPU), one has to make a choice… and I’d like to know my workable options before moving on.
Many thanks
GLLM
Hi, it should work. I think I tried it some years ago, and the dev even started this program on a laptop with an iGPU. Intel does support Vulkan; the drivers are available.
I was curious and installed vkdt on ubuntu 22.04 from obs repo.
But I got the error below when I tried to execute it.
(nvidia 510.73 driver, vulkan and vkcube works fine)
I remember a few years back some trick was needed for nvidia, but I’ve forgotten it and can’t find it.
I would appreciate a hint, but this is just out of curiosity… so don’t spend much time on a response.
$ vkdt -d all Downloads/IMG_3017.CR2
[gui] glfwGetVersionString() : 3.3.6 X11 GLX EGL OSMesa clock_gettime evdev shared
[gui] monitor [0] DP-4 at 0 0
[gui] vk extension required by GLFW:
[gui] VK_KHR_surface
[gui] VK_KHR_xcb_surface
[qvk] dev 0: vendorid 0x10de
[qvk] dev 0: NVIDIA GeForce RTX 2070
[qvk] max number of allocations -1
[qvk] max image allocation size 32768 x 32768
[qvk] max uniform buffer range 65536
[qvk] dev 1: vendorid 0x10de
[qvk] dev 1: Quadro P620
[qvk] max number of allocations -1
[qvk] max image allocation size 32768 x 32768
[qvk] max uniform buffer range 65536
[qvk] dev 2: vendorid 0x10005
[qvk] dev 2: llvmpipe (LLVM 13.0.1, 256 bits)
[qvk] max number of allocations -1
[qvk] max image allocation size 16384 x 16384
[qvk] max uniform buffer range 65536
[qvk] device 2 does not support requested feature shaderSampledImageArrayDynamicIndexing, trying anyways
[qvk] device 2 does not support requested feature shaderStorageImageArrayDynamicIndexing, trying anyways
[qvk] device 2 does not support requested feature inheritedQueries, trying anyways
[qvk] num queue families: 3
[qvk] picked device 1 with ray tracing and with float atomics support
[qvk] validation layer: loader_validate_device_extensions: Device extension VK_KHR_deferred_host_operations not supported by selected physical device or enabled layers.
[qvk] validation layer: vkCreateDevice: Failed to validate extensions in list
[qvk] error VK_ERROR_EXTENSION_NOT_PRESENT executing vkCreateDevice(qvk.physical_device, &dev_create_info, NULL, &qvk.device)!
[ERR] init vulkan failed
[ERR] failed to init gui/swapchain
well yes, working. intel is better than AMD in my limited experience when it comes to strange bugs and features (intel has float atomics…). but of course performance is quite different from a really strong dedicated gpu.
oh interesting, you have a 2070 and a quadro? which one has the screen cable attached to it?
anyways, the 2070 should work really well (assuming it has the screen cable, you’ll also be able to run the vkdt gui). if you want to help vkdt pick the right device, place this in your `~/.config/vkdt/config.rc`:

`strqvk/device_name:NVIDIA GeForce RTX 2070`

(replace the similar line that ends in `:null` if you have it)
I’ve tried vkdt on my i5 Ice Lake laptop. It generally runs well but is a bit sluggish sometimes; some sliders take a noticeable amount of time to update when adjusted, making fine adjustments difficult.
one more datapoint processing a random image on a laptop (both GPUs here not great):
Intel(R) UHD Graphics (CML GT2):
[perf] total time: 498.737 ms
NVIDIA GeForce GTX 1650 with Max-Q Design:
[perf] total time: 68.817 ms
Thanks @hanatos and all those who took a moment to reply to my questions!
Seeing @hanatos’s last post, it seems that even a moderately good dGPU would help greatly. I’m thinking of an Nvidia 3050 mobile (as opposed to the higher-end mobile versions, 3060/3060Ti/3070).
If anyone has another data point (another dGPU, or an Intel or AMD one), please don’t hesitate to post!
Anyway,
thanks all !
GLLM
Thank you for the reply and advice.
Yes, two outputs from the 2070 and one from the P620; in KDE Plasma only one output from the 2070 is enabled.
I tried your advice and put the configuration in config.rc, and the output now also shows this line:
[qvk] selecting device NVIDIA GeForce RTX 2070 by explicit request
But the rest is the same; it ends with the same error.
I also tried setting the P620 to power idle mode, and then it showed something for a split second before it segfaulted. However, vkcube then also segfaults. So it seems related to the P620’s presence.
When I have more time I will play with it a bit more. Maybe in a separate virtual machine with the 2070 passed through.
…did you update the obs package since yesterday? maybe the old version reports the right output but still picks the wrong device? there should be a line like
[qvk] picked device XXX with/without raytracing ...
that tells you the device id (which should match the name printed earlier for that slot), as an additional sanity check.