I am trying to linearize my workflow of doing panoramas using Hugin and Darktable. My current workflow looks like this:
1 do all of the linear (scene-referred) operations in Darktable
2 export as a 32-bit floating point TIFF
3 stitch in Hugin
4 reimport the Hugin TIFF in Darktable and do the rest of the operations on the panorama (tone mapping via filmic, local contrast, etc.)
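For what it's worth, the same pipeline can be scripted end to end. This is a rough dry-run sketch, assuming darktable-cli and Hugin's command-line tools (pto_gen, cpfind, autooptimiser, pano_modify, hugin_executor) are installed; the filenames are hypothetical, and the `--conf` key for forcing 32-bit TIFF output may vary between darktable versions:

```python
frames = ["IMG_0001.RAF", "IMG_0002.RAF", "IMG_0003.RAF"]   # hypothetical raw files
tiffs = [f.rsplit(".", 1)[0] + ".tif" for f in frames]

cmds = []
# steps 1-2: export each frame as 32-bit float TIFF (the scene-referred edits are
# read from each image's .xmp sidecar; the conf key requests 32-bit TIFF output)
for raw, tif in zip(frames, tiffs):
    cmds.append(["darktable-cli", raw, tif,
                 "--core", "--conf", "plugins/imageio/format/tiff/bpp=32"])
# step 3: build, optimise and stitch the Hugin project
cmds.append(["pto_gen", "-o", "pano.pto", *tiffs])
cmds.append(["cpfind", "--multirow", "-o", "pano.pto", "pano.pto"])
cmds.append(["autooptimiser", "-a", "-m", "-l", "-s", "-o", "pano.pto", "pano.pto"])
cmds.append(["pano_modify", "--canvas=AUTO", "--crop=AUTO", "-o", "pano.pto", "pano.pto"])
cmds.append(["hugin_executor", "--stitching", "--prefix=pano", "pano.pto"])

for cmd in cmds:
    print(" ".join(cmd))   # dry run; replace with subprocess.run(cmd, check=True)
```

Step 4 is then just opening the stitched `pano.tif` back in darktable.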
My problem comes from picking the color space for step 2. If I pick linear Rec2020, Hugin forces me into its HDR workflow, meaning I have to fiddle with its exposure-merging options to avoid creating ghosting. If I export in sRGB, Hugin lets me just stitch the images and no ghosting appears, but I am not sure whether Hugin operates in a linear space when given sRGB input, and I’d like to avoid using a smaller color space for intermediate steps.
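The reason the color space matters for stitching: blending sRGB-encoded values directly gives a different result than blending in linear light. A minimal sketch using the standard sRGB transfer function (IEC 61966-2-1) — the pixel values here are just made up for illustration:

```python
def srgb_to_linear(v):
    """Decode an sRGB-encoded value in [0, 1] to linear light (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode a linear-light value in [0, 1] with the sRGB transfer curve."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Two pixels blended 50/50 across a seam:
a, b = 0.2, 0.8

naive = (a + b) / 2                                                # encoded average
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)  # linear average

print(naive, correct)   # the two results differ noticeably
```

So if Hugin's blender treats sRGB input as if it were linear, seams are blended slightly wrong; whether that is visible in practice is another question.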
I’ve been approaching this slightly differently, as I have not been able to use linear input data for Hugin.
I select one image from the panorama, do all the linear processing for it in darktable, and finally use filmic RGB to convert it to display-referred. Then I copy-paste the tool stack to the other images of the panorama and check the result. If some image needs tweaking, I tweak it and copy-paste the modified tool stack. Repeat until satisfied. To my understanding, all the source images fed to Hugin should have the same processing. Finally, I export them as 16-bit TIFFs.
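Since darktable stores the edit history in .xmp sidecars, the copy-paste step can also be done at the file level by duplicating the reference frame's sidecar. A sketch, with a hypothetical helper name — note that this copies the whole sidecar, including tags and ratings, not just the processing stack:

```python
import shutil
from pathlib import Path

def copy_history_sidecar(reference: Path, targets: list[Path]) -> list[Path]:
    """Duplicate the .xmp sidecar of the reference frame next to each other
    source frame, so all panorama inputs carry the same edit history.
    (A file-level stand-in for darktable's copy/paste history stack.)"""
    src_xmp = reference.parent / (reference.name + ".xmp")
    written = []
    for target in targets:
        dst_xmp = target.parent / (target.name + ".xmp")
        shutil.copy(src_xmp, dst_xmp)   # darktable re-reads sidecars on import
        written.append(dst_xmp)
    return written
```

Per-image tweaks afterwards still have to happen in darktable itself, of course.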
Then I stitch the images in Hugin
Last step is to view the result in darktable, do the final crop, fine-tuning and any artistic processing if needed.
Do you mean that your workflow is:
1 edit the images and export them as non-linear tiffs
2 create Hugin project and align the images
3 re-export the TIFFs so that they exclude Filmic RGB and the later tools
4 stitch the linear TIFFs in Hugin using the project created in step 2
5 apply Filmic RGB and the other later tools to the stitched output
I suppose that it may work, but then you have to fiddle with algorithms to avoid ghosting when you shouldn’t have to (and it also adds processing time by running HDR algorithms in a workflow that doesn’t need them).
I don’t understand. Why would you have to fiddle with algorithms and add processing? I just had a look at my Hugin logs for two projects and it seems that it first does HDR merging on each stack, and then stitches the HDR stacks with enblend. But if you don’t want to fuse multiple exposures, which I understand to mean that you don’t have stacks in your Hugin project, then the HDR merging will just say “Only one input image given. Copying input image to output image.” and then enblend will stitch as usual. At least that’s what happens when I check “Panorama Outputs” → “High dynamic range”.
Actually, I don’t know. But it’s my go-to workflow these days. I export to linear floating-point TIFF from darktable (after applying lens correction), import and align/stitch that with Hugin, and export the result as an HDR image, which I import back into darktable for tone mapping with filmic (after remembering to set the proper input profile).