Sorry for not responding earlier, I’m on vacation so this will have to be short.
The sort of workflow you describe is exactly how I took a few of my waterfall shots on vacation last year. So far I have only gotten one this year for various reasons (bad weather, too crowded, family too impatient, etc.).
For a long-exposure waterfall shot, I did as follows:
An ND filter is still needed most of the time, but the requirements are significantly relaxed (more on that below)
Set up the camera to preserve highlights in the shot - lowest ISO, narrow aperture
Ideally, the exposure time is at least as long as the frame interval of the camera’s continuous drive (1/framerate), if not longer, because for this use case you want the shutter duty cycle to be as close to 100% as possible. While the shutter is closed between shots, moving water isn’t being recorded, and you risk artifacts
In addition, if you’re simulating a REALLY long exposure, you want the exposure time to be long enough that the camera can write shots to the SD card faster than it takes them (i.e. the buffer never fills up)
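To make those two constraints concrete, here’s a back-of-the-envelope sketch. All the numbers are hypothetical; plug in your own camera’s drive framerate and per-frame card write time:

```python
def duty_cycle(exposure_s: float, fps: float) -> float:
    """Fraction of wall-clock time the shutter is open.

    At `fps` frames/sec the frame interval is 1/fps; any gap between
    the exposure and the next frame is dead time where moving water
    isn't being recorded.
    """
    frame_interval = max(exposure_s, 1.0 / fps)
    return exposure_s / frame_interval


def buffer_sustainable(exposure_s: float, fps: float,
                       write_s_per_frame: float) -> bool:
    """True if the card keeps up with the shooting rate, so the
    buffer never fills."""
    frame_interval = max(exposure_s, 1.0 / fps)
    return write_s_per_frame <= frame_interval


# e.g. 1/2 s exposures on a 5 fps drive: shutter open ~100% of the time
print(duty_cycle(0.5, 5.0))     # 1.0
# but 1/30 s exposures at 5 fps leave big gaps between frames
print(duty_cycle(1 / 30, 5.0))  # ~0.17
```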
As a result, a 3-stop (ND8) may not be enough; you may need a 5-stop (ND32) for specular highlights off of water in full sunlight
BUT - a 5-stop that doesn’t have a magenta cast is a LOT less expensive than a 10-stop, you can still see through it to compose and focus, and you don’t have to worry nearly as much about light leaks
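The stop arithmetic here is just powers of two: an NDx filter cuts the light by a factor of x, i.e. log2(x) stops, so the shutter time scales by x for the same exposure. A quick sketch (the base exposure is a made-up example):

```python
import math


def nd_stops(nd_factor: float) -> float:
    """Stops of light reduction for an NDx filter (ND8 -> 3, ND32 -> 5)."""
    return math.log2(nd_factor)


def shutter_with_nd(base_shutter_s: float, nd_factor: float) -> float:
    """Shutter time that keeps the same exposure after adding the filter."""
    return base_shutter_s * nd_factor


print(nd_stops(8), nd_stops(32))   # 3.0 5.0
print(shutter_with_nd(1 / 250, 32))  # 0.128 -> about 1/8 s
```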
You also need a way to keep the camera’s shutter going, so you at least need a remote cable release, preferably one with a “bulb” latch, unless your tripod is MASSIVELY overspecced for your camera. You’re effectively doing a “bulb” shot, but by being in continuous shooting mode instead of bulb mode, the camera splits your “one” bulb exposure into many shots
Take the shots, and use Siril to stack them in average mode. In this use case you should be on a solid tripod, so you don’t need any of the alignment modules; in fact, you can do a strict average while retaining a Bayer-mosaiced (CFA) image on the output
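Siril does the heavy lifting, but the math of that step is just a per-pixel mean. A toy numpy sketch with synthetic data (not Siril’s actual code) shows why no demosaic is needed first: every frame shares the same CFA layout, so the average is still a valid mosaic.

```python
import numpy as np

# Synthetic stand-in for a burst of raw frames: each is a 2D Bayer-mosaiced
# sensor readout (no demosaic yet), modeled as a fixed scene plus noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 4000, size=(8, 8))              # "true" mosaic values
frames = [scene + rng.normal(0, 50, size=scene.shape)  # per-frame noise
          for _ in range(64)]

# Tripod-mounted, so no alignment: a straight per-pixel average.
# The result is still a Bayer mosaic and can be demosaiced downstream.
stacked = np.mean(frames, axis=0)

# Noise drops roughly as 1/sqrt(N) (64 frames -> ~8x less noise):
print(np.std(frames[0] - scene) / np.std(stacked - scene))
```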
Take the FITS output and convert it to TIFF
Take that TIFF, rename it to .dng, then apply the appropriate metadata for your camera (ColorMatrix1 and the Bayer CFA pattern tags at a minimum). I’ll dig up a link to an example script to do this when I get back from vacation
At least as of last year, I couldn’t find any way to have Siril output this workflow in a format other than signed int16, which is non-ideal when you’re averaging a lot of frames. I need to revisit that to see if I can figure out a way to get better precision, but for my use case, even the dynamic range recorded by a few frames was more than enough, and I was primarily shooting more frames for motion smoothing.
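To show why int16 output hurts: after averaging many frames, the real per-pixel precision sits well below one sensor count, and rounding back to integers throws that away. A synthetic illustration (not Siril’s internals; the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 1000.3                      # sub-LSB detail worth keeping
frames = true_value + rng.normal(0, 2.0, size=10_000)

mean_float = frames.mean()                             # keeps sub-integer precision
mean_int16 = np.round(frames.mean()).astype(np.int16)  # what an int16 output forces

print(mean_float)   # close to 1000.3
print(mean_int16)   # 1000 -- the fractional part is quantized away
```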
Load that resulting DNG into RawTherapee or other processing software, and tonemap/etc to your heart’s content. The end result will be an image similar to what I posted in another thread that you’ve participated in.
Again - I’ll try to post more when I get back from vacation
Alternative workflows, depending on use case, are:
HDRMerge (excels for merging bracketed shots)
Tim Brooks’ implementation of Google’s HDR+ (excels at merging shorter handheld bursts, ESPECIALLY with motion within the frame, thanks to Google’s tiled align-and-merge approach). You might be able to find this just by searching “Tim Brooks HDR+” on Google; otherwise, I’ll dig up a link when I get home
In general, I’ve always found RawTherapee’s DRC module to meet 95%+ of my tonemapping needs. At some point I may play with LuminanceHDR for the remaining 5%, or resume my work on variations of exposure fusion, but within RT instead of darktable. (I’m not sure exactly what trick darktable used, likely just having the right saturated-pixel metrics, but its variation on enfuse handles subjects lit with monochromatic LED light better than almost anything else I’ve tried, and I have yet to reproduce it with any other postprocessing flow.)
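For anyone curious, enfuse-style exposure fusion (the Mertens et al. approach) weights each pixel by contrast, saturation, and well-exposedness before blending; the saturation term is the “saturated pixel metric” I suspect matters for the LED case. A rough single-scale sketch of the weight map (my simplification, not darktable’s or enfuse’s actual code, which use a Laplacian for contrast and blend exposures multi-scale):

```python
import numpy as np


def fusion_weights(img: np.ndarray) -> np.ndarray:
    """Mertens-style per-pixel weights for one exposure.

    img: H x W x 3 float array in [0, 1]. This only illustrates the
    per-pixel weighting idea, not the full pyramid blend.
    """
    gray = img.mean(axis=2)
    dy, dx = np.gradient(gray)
    contrast = np.abs(dx) + np.abs(dy)           # crude contrast measure
    saturation = img.std(axis=2)                 # per-pixel color saturation
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)).prod(axis=2)
    return contrast * saturation * well_exposed + 1e-12  # avoid all-zero weights


rng = np.random.default_rng(0)
w = fusion_weights(rng.uniform(0, 1, size=(16, 16, 3)))
print(w.shape)  # (16, 16)
```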