vkdt devel diary

well it can be automated in github. if you want i can point you again to the docs on how.

sure, why not. it’s not high on my list of priorities, but would not be hard to do.

right. i think maybe i should remember. or i’ll just do a post-push hook locally so i don’t have to click my way through micr0soft-owned websites.

the github integration is super easy when using their stuff.

https://openbuildservice.org/help/manuals/obs-user-guide/cha.obs.source_service.html#sec.obs.sserv.token_usage

but of course you can also just have a local hook with

osc service rr graphics:darktable:master/vkdt


@hanatos Do you really think EXIF data is bloat?

I am not sure what this is. I edited and exported a photo in vkdt. The greens in the exported photo look different than the greens in vkdt. It’s a different hue apparently.

100% view in vkdt:

Exported as jpg:

Exported as pfm, opened in GIMP, rec2020 profile assigned:

I think color management is set up properly everywhere.

Should I upload the raw file and the sidecar file?

I also want to add that I have a screen whose color space is slightly larger than AdobeRGB, especially in the green area.

I tried to test this with other photos, but autumn landscapes seem to behave differently: there the exported pfm looks more or less the same as the preview in vkdt.

Anyway, if I export to jpeg, the exported photo usually looks less saturated than the preview in vkdt, but that might be explained by part of the colors simply being cut away when converting to sRGB. Although that's also a bit suspicious.

yes please. i’d like to double check what else you did to the photo, the ui seems to have quite a few controls open :slight_smile:

not sure i can assess screenshots that are colour managed for your display and then viewed in my firefox, which probably assumes srgb etc… but the gimp and the vkdt images look similar enough to you?

P6082098.ORF.cfg.txt (3.1 KB)
P6082098.ORF (18.8 MB)

I don't know how well you can see it, but to me it is quite clearly visible that in vkdt the greens look warmer (more yellowish) and in the exported files cooler. It is not a very big difference, but it is clearly noticeable.

Well, maybe one more thing: in the screenshots that I posted, to me the exported versions (GIMP and jpg/XnView) look more similar to each other; however, it is clear that the jpg is a bit less saturated because part of the colors was simply cut away. But the hue is similar.

haha awesome… so much saturation :slight_smile: you know you can just ctrl-click and dial in a number in the imgui widgets? you can set saturation to 100 in one module instance, no need to add two.

thanks for the tip

hm clearly i haven't set up cm for gimp at all. but my jpg export and vkdt agree here.

Maybe it's more visible in fit-to-screen view.

Apparently, the lens module makes some photos blurry. Not every photo. I don't know what it depends on.

The first screenshot is without the lens module, the second is with the lens module, I did not change any settings in the module.

Edit: I think this module needs to be rather late in the pixelpipe/graph.

the lens corrections should be before crop if you’re trying to correct actual lens distortions, so i think your position in the pipeline is good.

the lens module does some rather careless resampling. i should use the derivative of the distortion function to compute the kernel size/smoothness and potentially use nearest neighbour resampling for minification. who cares about aliasing for still photography, right?
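
roughly what i have in mind, as a minimal sketch in plain C (made-up names and a toy distortion model, not the actual module code): estimate how far one output pixel step moves in the input image by finite differences of the warp, and let that scale pick the kernel.

  /* rough sketch, not the actual module code: estimate the local scale of
   * the warp by finite differences and use it to choose the kernel size,
   * or nearest neighbour for minification. distort() is a toy stand-in. */
  #include <math.h>

  typedef struct { float x, y; } vec2_t;

  /* toy radial distortion standing in for the real lens model:
   * maps centred output coordinates to input coordinates */
  static vec2_t distort(vec2_t p)
  {
    const float k = 1e-7f;                          /* arbitrary demo coefficient */
    const float s = 1.0f + k*(p.x*p.x + p.y*p.y);
    return (vec2_t){ p.x*s, p.y*s };
  }

  /* approximate footprint (in input pixels) of one output pixel at p */
  static float footprint(vec2_t p)
  {
    const float e = 0.5f;
    vec2_t xm = distort((vec2_t){p.x-e, p.y}), xp = distort((vec2_t){p.x+e, p.y});
    vec2_t ym = distort((vec2_t){p.x, p.y-e}), yp = distort((vec2_t){p.x, p.y+e});
    const float sx = hypotf(xp.x-xm.x, xp.y-xm.y);  /* input motion per x step */
    const float sy = hypotf(yp.x-ym.x, yp.y-ym.y);  /* input motion per y step */
    return fmaxf(sx, sy);                           /* > 1 means minification */
  }

a footprint around 1 keeps the current smooth kernel; anything clearly above 1 is minification, where nearest neighbour (or a wider box) would be the pragmatic choice, aliasing and all.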

the crop module has a special case when no rotation is used, it’ll just copy pixels over instead of applying a resampling kernel.
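
something of this shape, just to illustrate the special case (hypothetical signature, assuming 4 floats per pixel, not the real module code):

  #include <string.h>

  /* sketch of the fast path: with zero rotation the crop is an integer
   * offset (ox,oy) into the input, so each output row is copied verbatim.
   * the rotated path would apply a resampling kernel instead. */
  void crop_rows(const float *in, int iwd, float *out, int owd, int oht,
                 int ox, int oy, float angle)
  {
    if(angle == 0.0f)
      for(int j = 0; j < oht; j++)
        memcpy(out + 4*(size_t)j*owd,
               in  + 4*((size_t)(j+oy)*iwd + ox),
               sizeof(float)*4*owd);
    /* else: rotate + resample (omitted in this sketch) */
  }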

Shouldn’t vkdt be in the AUR?

hm i don’t know anything about arch, so i would be surprised if aur had vkdt.

i just merged a branch that implements support for evaluating gmic’s gaussian/poissonian resnet as gpu shaders. i’ve been working together with @David_Tschumperle on this. really he’s done all the work and i just ported it over to the gpu, which wouldn’t have been very easy without his support in debugging my broken implementation.

a few initial observations: my implementation is stupid:

  • it runs out of memory very quickly, does not attempt any tiling (a possible tiling scheme is sketched at the end of this post)
  • it fetches everything from global memory and is thus very slow
  • it’s split into way too many small kernels

and currently it evaluates a 1080p image in ~300ms on a low-end laptop GTX 1650.

this is really just a starting point now, i need to do a faster implementation and then likely tweak the network architecture for real-time performance and temporal stability (video).
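
for the memory point above, the obvious fix would look something like this minimal sketch (made-up names: eval_tile() stands in for the per-tile gpu dispatch, and pad is assumed to be the network's receptive field radius):

  /* minimal tiling sketch, hypothetical names: split the image into tiles
   * padded by the receptive field radius so the seams line up, run the net
   * per tile, keep only each tile's un-padded interior in the output. */
  void eval_tiled(int wd, int ht, int tile, int pad,
      void (*eval_tile)(int px, int py, int pw, int ph,   /* padded input rect */
                        int ix, int iy, int iw, int ih))  /* interior to keep  */
  {
    for(int y = 0; y < ht; y += tile)
      for(int x = 0; x < wd; x += tile)
      {
        const int iw = x + tile > wd ? wd - x : tile;     /* clamp last column */
        const int ih = y + tile > ht ? ht - y : tile;     /* clamp last row    */
        const int px = x - pad > 0 ? x - pad : 0;
        const int py = y - pad > 0 ? y - pad : 0;
        const int pw = (x + iw + pad < wd ? x + iw + pad : wd) - px;
        const int ph = (y + ih + pad < ht ? y + ih + pad : ht) - py;
        eval_tile(px, py, pw, ph, x, y, iw, ih);
      }
  }

memory then scales with the tile size instead of the full frame, at the cost of recomputing the overlapping borders.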

I was just going to ask which module is the most resource-heavy, in order to do a “performance test” by stacking lots of instances on top of each other and seeing when vkdt slows down. I just tried it with 5 instances of deconv, but this Nvidia 1660 Super still does not really slow down. It takes about half a second to update the preview when I move a slider, with 5 instances of deconv.

heh, yeah if you want a dead slow one that probably crashes because memory, try cnn, see instructions here: https://github.com/hanatos/vkdt/tree/master/src/pipe/modules/cnn

That's fast! (compared to my CPU implementation :slight_smile: ).
Also, I'm thinking about retraining the network because I'm not completely happy with some of the results it renders for a few particular cases. Changing the weights once it's retrained should not be an issue, of course.