Raw image stacking in vkdt

I’m currently trying to stack/merge a set of raw images with vkdt to see how it would perform for some of my use-cases, so far with limited success. I’m aware this topic is already on the list for a future how-to, but I’m hoping for a few pointers in the right direction :slight_smile: vkdt is admittedly still a bit of a black box to me.

With the pipeline I’ve configured right now, the alignment between the images seems to work (a bit hard to tell, tbh), but the colour channels seem to end up separated(?): initially I only see red, the blue channel becomes visible when I bump the exposure by 8 or so, and green is completely gone:

I assume I haven’t connected or configured the blend and align modules correctly, but I’m at a loss as to how to proceed.
The node view for reference:

If it helps I can also upload the set of images.
I’m using the Arch package vkdt-0.5.9999 on Manjaro if that is relevant.


nice! you even set it up as an animation with feedback connections! i assume you found the simpler “merge into current” button in lighttable mode? it would generate a single-frame graph.

possible that the input image parameters (matrix black white etc) aren’t passed on correctly through the align/blend step? i can have a look if you can share input images (at least two?) and the .cfg (which blend mode did you use for instance).

@hanatos Thanks for the quick response and help!
I did indeed completely miss the “merge into current” button :see_no_evil: Seeing what it does in the node graph makes it a lot clearer as well. I had thought about the basics of this approach (i.e. one i-raw module per image) but thought it could somehow be avoided by feeding all images of the stack through a single input module. With the button it’s quite straightforward though :slight_smile:
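
For reference, my rough understanding of the graph the button builds, written out as a minimal two-image .cfg sketch (the instance names and the align/blend connector names are my guesses rather than copies from the generated file, so don’t expect this to load as-is):

```
module:i-raw:01
module:i-raw:02
module:align:01
module:blend:01
param:i-raw:01:filename:DSCF8708.RAF
param:i-raw:02:filename:DSCF8709.RAF
connect:i-raw:01:output:align:01:alignsrc
connect:i-raw:02:output:align:01:input
connect:align:01:output:blend:01:input
connect:i-raw:01:output:blend:01:back
```

i.e. the second image is warped onto the first and blended over it, and for the full stack this seems to be repeated pairwise into a tree, with one i-raw module per image.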

After some quick fiddling I find the results truly astounding! I need a bit more time to check the impact of the various parameters, but vkdt allowed me to merge 11 hand-held shots with a lot of movement in between without it being noticeable. It more or less replaced an ND filter and tripod for this image, and is by a significant margin the quickest workflow I’ve found for scenarios like this.

One thing I noticed is that the Laplacian module can generate some weird borders at the top or on the right-hand side of the image:

I’ve uploaded two images (can’t seem to upload more as a new user) and the .cfg file from the previous try, if you are still interested in finding out what happens to the input image parameters with the animation pipeline.

If there is interest I can provide the rest of the image stack for others to play around with.

DSCF8708.RAF (29.3 MB)
DSCF8709.RAF (29.2 MB)
DSCF8708.RAF.cfg (2.5 KB)

The files are licensed under CC BY-SA 4.0


alright, thanks for the files! i only had a quick look, but i think there are several issues that i’d like to streamline (i never tried to align photo stacks in an animation, only videos and 3d renders):

  • there’s something about the image format that may not be passed on through align and blend, so denoise gets confused about white and black points, or about whether the data is still mosaiced.
  • align in the first iteration has no initialised buffer on the feedback input and will compute garbage. this probably needs a parameter/flag to pass the first image through cleanly in animations with feedback into align, or some downstream handling to discard the first frame.
  • the blend weights here are bound to be wrong, and actually the accum module implements exactly the right logic (but has no misalignment/disocclusion mask). might need an accumulate blend mode with mask, something like the sketch right after this list.
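
in plain C, the per-pixel logic i have in mind is roughly this (just a sketch of the idea, not actual vkdt code; the mask would come from align, 1 = trusted match, 0 = misalignment/disocclusion):

```c
#include <stddef.h>

// masked running average over a stack of aligned frames.
// accum holds the mean of all frames blended so far and is updated
// in place. for frame_no == 0 the caller would just copy the frame,
// which is the uninitialised-feedback problem from the second bullet above.
void accum_masked(
    float       *accum,    // running mean, updated in place
    const float *frame,    // current aligned frame
    const float *mask,     // alignment confidence in [0,1]
    size_t       npix,     // number of pixels
    int          frame_no) // 0-based index of the current frame
{
  for (size_t i = 0; i < npix; i++)
  {
    // plain running mean: m_k = m_{k-1} + (x_k - m_{k-1}) / (k+1)
    const float avg = accum[i] + (frame[i] - accum[i]) / (float)(frame_no + 1);
    // where alignment failed, keep the previous mean instead of
    // averaging in pixels from a misaligned/disoccluded region
    accum[i] = mask[i] * avg + (1.0f - mask[i]) * accum[i];
  }
}
```

(strictly speaking the mask would also need a per-pixel frame count so the 1/(k+1) weight stays correct, but you get the idea.)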

i really want to make this work though. for focus stacks, for instance, the method of creating the full tree probably does not scale to 100s of images, at least not with acceptable processing time (the animation approach kinda distributes the load between frames).
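
for reference, the feedback/animation variant keeps a constant-size graph regardless of stack size. very roughly like this (i’d have to double check the exact feedback syntax and the i-raw sequence parameters, so the names here are from memory and the graph is truncated after accum):

```
frames:11
module:i-raw:01
module:align:01
module:accum:01
param:i-raw:01:filename:DSCF%04d.RAF
param:i-raw:01:startid:8708
connect:i-raw:01:output:align:01:input
connect:align:01:output:accum:01:input
feedback:accum:01:output:align:01:alignsrc
```

every frame reads the next raw from the sequence, aligns it against the accumulated result of the previous frame, and folds it into the running average.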