Multiframe DNG support for darktable

Hi, I implemented support for multiframe DNGs in rawspeed, PR here: [WIP] support for multiframe DNGs by maruncz · Pull Request #350 · darktable-org/rawspeed · GitHub

So far it is tested only with Pentax pixel shift DNGs.
Do you know of other sources of multiframe DNGs?

This breaks the external API, so merging must be coordinated with changes in darktable.
My plan is to change the darktable pipeline to support multiple frames and merge them in the demosaic module.


Support for DNG was straightforward. I don't see any obvious way to implement it for PEF right now, but that is of course possible in the future.

That’s great! I shoot Pentax myself and usually use .DNG files. What do I have to do to install this?

Well, it is not finished yet. I made the base support in rawspeed; it now needs support in darktable,
and of course it must be accepted by the maintainers.

IIRC, Pentax cameras can produce .DNG as their native raw format. Then it’s no different from any other raw format (except that it is perhaps somewhat better documented)… Conversion from another format to DNG is a different kettle of fish, of course.

As the addition seems to be for a specific raw format, what makes it different from e.g. CR3?

Right. But that doesn’t prove anything at all. The “mess” with DNG files out there is more a problem of quality: scanner software is notorious for writing crap. Also, people feed nonsense to a converter and then scratch their heads over why something is wrong.

IMHO this is not a problem of the DNG spec. At least there is a spec (and it’s even quite readable in most parts), so we don’t have to rely on reverse engineering as with other raw formats.


Do you have a reference or jurisprudence for this? As we are talking about unsigned digital data, any raw file can be manipulated, so please explain where .DNG is worse in this respect than other formats.
Otherwise, such a remark is just spreading FUD, and I think we’ve seen more than enough of that (in a different context) over the last two years.

Can we stop bikeshedding DNGs and try to help @maruncz, please? Sounds like a great feature for darktable.


Welcome to the forum! I don’t know if there are any multi-frame DNGs here: https://raw.pixls.us/.

There are only Pentax DNGs there.

I recall Leica having multi-frame DNGs… though people have opinions about expensive brands. :stuck_out_tongue:

I’m now moving to darktable to implement multiframe support.
Is there anyone who could help me understand how pictures go through the pixelpipe?
There is dt_image_t, but it looks like it does not hold the pixel data itself.
Then there is dt_mipmap_buffer_t; does this hold the image?
There is also some cache.
What would be the correct way to implement multiframe support?
I was thinking that we should pass an array of images through the pipe.
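
Roughly what I have in mind (a hypothetical struct just to illustrate the idea; nothing like this exists in dt today):

```c
#include <stddef.h>

/* Hypothetical sketch of the "array of frames" idea: instead of one pixel
   buffer, the thing handed through the pipe would carry N buffers that share
   the same geometry (e.g. 4 for Pentax pixel shift). Not an existing dt struct. */
typedef struct multiframe_buffer_t
{
  int num_frames;     /* number of raw frames in the file */
  int width, height;  /* common geometry of all frames */
  float **frames;     /* frames[i] points to the i-th frame's pixel data */
} multiframe_buffer_t;
```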

Basically dt uses a single pixelpipe per purpose; each module gets a section of the image presented as input, depending on several requirements. That is due to the region-of-interest (ROI) concept. It would be very hard to get around this for something like what you mentioned above, i.e. processing each frame in demosaic and blending/mixing later.
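
For illustration, the ROI a module sees is basically a window plus a scale, something along these lines (field names modelled on dt's dt_iop_roi_t; check develop/imageop.h for the authoritative definition):

```c
#include <stddef.h>

/* A region of interest: a window into the full image, possibly downscaled.
   Field names follow darktable's dt_iop_roi_t, but see develop/imageop.h
   for the real definition. */
typedef struct roi_t
{
  int x, y;          /* offset of the window in full-image coordinates */
  int width, height; /* size of the window in pixels */
  float scale;       /* 1.0 = full resolution, < 1.0 = downscaled preview */
} roi_t;

/* Pixels inside a module's input buffer are addressed relative to the ROI,
   not relative to the full sensor frame. */
static inline size_t roi_index(const roi_t *roi, int x, int y)
{
  return (size_t)y * roi->width + x;
}
```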

If possible, it would be best to do the processing/blending on pure Bayer sensor data (none of the dt structs support multiple frames right now, and it would be an absolute mess to extend that), maybe in imageio_rawspeed. I am somewhat time-limited but can offer help via mail.

That is problematic; you need user input because the frames may not align properly.

OK, this seems complicated, but I might have formulated it wrong. I meant to extend the image structure to handle the possibility of multiple frames; I thought the image is passed from module to module in order.

Anyway, if you could elaborate more on the concept, I would appreciate it. I think my email is visible in my GitHub profile (I don’t want to write it here because of spam bots).

heya,

There’s no such thing as an image struct that is handed around. There is a cache (darktable/pixelpipe_cache.h at master · darktable-org/darktable · GitHub) that stores pretty much float* buffers for use in a ping-pong pattern throughout the pipeline. As pointed out before, these aren’t full size; they may be cropped regions of interest and are usually also scaled to smaller than full resolution.
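
The ping-pong idea is roughly this (just the pattern, not the actual pixelpipe_cache API):

```c
#include <stddef.h>

/* Sketch of the ping-pong buffer pattern: each module reads from "in" and
   writes to "out", then the two buffers swap roles for the next module.
   Illustrative only; the real cache hands out and recycles these buffers. */
typedef void (*module_process_fn)(const float *in, float *out, size_t npixels);

static void run_pipe(module_process_fn *modules, int nmodules,
                     float *buf_a, float *buf_b, size_t npixels)
{
  float *in = buf_a, *out = buf_b;
  for(int i = 0; i < nmodules; i++)
  {
    modules[i](in, out, npixels);
    float *tmp = in; /* swap: this module's output feeds the next module */
    in = out;
    out = tmp;
  }
}
```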

The image operations themselves can work through various code paths, using SSE or not, and potentially the GPU via OpenCL (which involves a fair bit of buffer copying and management in general). All of this is done in an ever-growing function here: darktable/pixelpipe_hb.c at master · darktable-org/darktable · GitHub, which recursively goes through the list of modules and processes the buffers.
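
The recursion itself boils down to something like this (grossly simplified; none of these are the real signatures):

```c
/* Grossly simplified sketch of the recursive scheme: to produce the output
   of module n, first produce the output of module n-1 (recursively), then
   run module n on that result. */
typedef struct pipe_node_t
{
  struct pipe_node_t *prev;                     /* previous module, or NULL */
  void (*process)(const float *in, float *out); /* this module's operation */
} pipe_node_t;

static void process_rec(pipe_node_t *node, const float *input,
                        float *scratch, float *out)
{
  if(node->prev)
  {
    /* recurse: the previous module's output lands in the scratch buffer */
    process_rec(node->prev, input, out, scratch);
    node->process(scratch, out);
  }
  else
  {
    node->process(input, out); /* the first module reads the original input */
  }
}
```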

It was long and winding to begin with and has grown substantially with all the features that have been added since. As hanno pointed out, processing arrays of buffers instead of single buffers sounds like the sort of change that would likely be on par with multi-instance/reordering in terms of code complexity and potentially introduced bugs.

Best of luck with your project; given a fair bit of patience it can probably be done. Maybe you can find a way to pass these buffers around through some other channel; at least that would likely avoid introducing bugs or interference with the other features of the pipeline.

That pipeline is one of the reasons why I worked on an experimental rewrite of this core functionality, based on a node graph instead of the linear pipe. One of my use cases was an alignment algorithm that takes several raw images as input and merges them into one aligned output. In fact, it might be interesting to try this machinery on one of the pixel shift DNGs.
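
Conceptually the difference from the linear pipe is that a node can have several inputs, so a merge node can consume all pixel shift frames at once; very roughly (illustrative only, not the actual API of that rewrite):

```c
/* Illustrative sketch of why a node graph helps: a node may have several
   input connectors, so one "merge" node can read all pixel shift frames
   at the same time. Not the actual API of the experimental rewrite. */
#define MAX_INPUTS 8

typedef struct graph_node_t
{
  struct graph_node_t *inputs[MAX_INPUTS]; /* upstream nodes feeding this one */
  int num_inputs;
  float *output;                           /* buffer produced by this node */
  void (*process)(struct graph_node_t *self);
} graph_node_t;

/* An alignment/merge node would set num_inputs = 4 and read one pixel shift
   frame from each upstream node's output inside its process() callback. */
```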

Hi,
here is my progress report.
I work here: GitHub - maruncz/darktable at multiframe
I modified imageio_rawspeed.cc and mipmap_cache.c to support multiframe images.
The extended rawprepare is now able to process all frames.
Now I am working in demosaic, to merge those CFA frames into one RGB image.

But I have issues with that. I boiled it down to the equivalent of monochrome passthrough, but it does not work.

If I view the whole image, it looks the same as monochrome passthrough (see mono-full and ps-full),


but when I zoom in, you can see that the image is completely broken (see ps-zoom), instead of looking like mono-zoom.


I compared my version with the monochrome one, but I cannot see a difference.

Side note: the demosaic GUI is not working properly right now; ignore that pixelshift checkbox.


So it looks like I have to ignore the offset in roi_in in demosaic.
I also found out that, for some reason, rawprepare works only when the pipe type is export: the output of rawprepare is only correct with darktable-cli, while in the darktable darkroom rawprepare only gets the first frame and the rest is black.
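
To make the roi_in point concrete, this is what "ignoring the offset" means for the indexing, assuming the buffer handed to demosaic already covers just the ROI (hypothetical helper, only to show the arithmetic):

```c
#include <stddef.h>

/* Assuming the input buffer handed to demosaic already covers just the ROI,
   pixels are addressed with ROI-relative coordinates and the ROI width as the
   stride; adding roi_in->x / roi_in->y on top would read garbage as soon as
   the view is cropped (zoomed in). Hypothetical helper for illustration. */
static inline size_t cfa_index(int roi_width, int x, int y)
{
  return (size_t)y * roi_width + x;
}
```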

So I think I’m back at the start. For some reason, the data between imageio and rawprepare gets reorganized and I’m reading nonsense. It doesn’t work if there is a cropped ROI, but in the full darkroom view or on export it’s OK.

Hi, next update.
It looks like I have fixed all issues except the GUI.
I had no prior experience with GTK, so I just added a simple combobox to enable pixelshift; the idea is that it will override the demosaic method.
This is a temporary solution, because if I implement some sort of motion correction, I will need fallback demosaicing.

But my checkbox sometimes doesn’t react to changes, and for export in particular it doesn’t work at all.
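
For reference, the widget amounts to roughly this in plain GTK terms (illustrative only; darktable module GUIs actually use the bauhaus widgets, and presumably the choice has to be written into the module params so the history and export can pick it up):

```c
#include <gtk/gtk.h>

/* Roughly the shape of the temporary GUI: a combobox whose "changed" signal
   flips a flag. Plain GTK shown for illustration; darktable module GUIs use
   the bauhaus widgets, and the flag needs to end up in the module's params
   (and thus the history stack) or export will never see it. */
static void pixelshift_changed(GtkComboBox *combo, gpointer user_data)
{
  int *enable_pixelshift = (int *)user_data;
  *enable_pixelshift = gtk_combo_box_get_active(combo); /* 0 = off, 1 = on */
}

static GtkWidget *make_pixelshift_combo(int *enable_pixelshift)
{
  GtkWidget *combo = gtk_combo_box_text_new();
  gtk_combo_box_text_append_text(GTK_COMBO_BOX_TEXT(combo), "off");
  gtk_combo_box_text_append_text(GTK_COMBO_BOX_TEXT(combo), "pixel shift merge");
  gtk_combo_box_set_active(GTK_COMBO_BOX(combo), 0);
  g_signal_connect(combo, "changed", G_CALLBACK(pixelshift_changed), enable_pixelshift);
  return combo;
}
```

If the flag only lives in the GUI and never reaches the params, that would also explain why export ignores it, since export re-runs the pipe from the stored history.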

So I think that the first stage is complete.
I created a simple algorithm to merge pixelshift frames; I hope the maintainers will like it.
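
In case it helps to picture the idea, the merge is roughly along these lines: a minimal sketch assuming an RGGB pattern, four frames already registered to the scene, a 2×2 shift order that in reality has to come from the camera metadata, and no motion handling at all:

```c
#include <stddef.h>

/* Bayer colour at (row, col) for an RGGB pattern: 0 = R, 1 = G, 2 = B. */
static inline int fc_rggb(int row, int col)
{
  if((row & 1) == 0) return (col & 1) == 0 ? 0 : 1; /* even row: R G R G ... */
  return (col & 1) == 0 ? 1 : 2;                    /* odd row:  G B G B ... */
}

/* Minimal pixel shift merge sketch: each scene position was sampled through
   all four cells of a 2x2 Bayer block across the four frames, so every output
   pixel gets a real R, a real B and two G samples without interpolation.
   Assumptions: RGGB, frames registered to the scene, shifts forming a 2x2
   square in the order below, no motion correction. */
static void merge_pixelshift(const float *const frames[4], float *out_rgb,
                             int width, int height)
{
  static const int dx[4] = { 0, 0, 1, 1 }; /* assumed per-frame sensor shifts */
  static const int dy[4] = { 0, 1, 1, 0 };

  for(int y = 0; y < height; y++)
    for(int x = 0; x < width; x++)
    {
      float r = 0.0f, g = 0.0f, b = 0.0f;
      for(int k = 0; k < 4; k++)
      {
        const float v = frames[k][(size_t)y * width + x];
        switch(fc_rggb(y + dy[k], x + dx[k])) /* filter that covered this spot */
        {
          case 0: r = v; break;          /* red sample */
          case 2: b = v; break;          /* blue sample */
          default: g += 0.5f * v; break; /* two green samples, averaged */
        }
      }
      float *const px = out_rgb + 3 * ((size_t)y * width + x);
      px[0] = r;
      px[1] = g;
      px[2] = b;
    }
}
```

The real code also has to handle the actual CFA pattern, the frame order from the file and clipping, but this is the core idea.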
