Darktable + Hugin Linear Workflow for Panoramas

Hi everyone,

I am trying to linearize my panorama workflow with Hugin and Darktable. My current workflow looks like this:

  1. I do all of the linear (or scene-referred) operations in Darktable
  2. Export as a 32-bit floating-point TIFF
  3. Stitch in Hugin
  4. Reimport the Hugin TIFF in Darktable and do the rest of the operations on the panorama (tone-mapping via filmic, local contrast, etc)

My problem comes from picking the color space for step 2. If I pick Linear REC2020, then Hugin forces me into its HDR workflow, meaning that I have to fiddle with its exposure-merging options to avoid creating ghosting. If I export as sRGB, Hugin lets me just stitch the images and no ghosting appears, but I am not sure whether Hugin operates in a linear space when given sRGB input, and I’d like to avoid using a smaller color space for intermediate steps.

Any pointers would be appreciated.

Just a side note: there’s a Lua script that can create panoramas within darktable and uses Hugin. Optionally, you can open the Hugin GUI.

enfuse/enblend (whichever does the HDR merging) expects non-linear input (Mertens’ algorithm). HDR merging should be very simple in linear.

Say you have 3 frames, at -1 EV, 0 EV, and +1 EV:

  1. open image 1 (-1 EV), save it to 32-bit float. It should have RGB code values between 0 and 1.
  2. open image 2 (0 EV), apply a black offset of +1 (exposure module), then save to 32-bit float. It should have code values between 1 and 2.
  3. open image 3 (+1 EV), apply a black offset of +2 (exposure module), then save to 32-bit float. It should have code values between 2 and 3.

Open the 3 images in a photo-editing program that supports layers and parametric masking.

  1. mask the first image with 100% opacity where values are in [0; 1], 0% elsewhere,
  2. mask the second image with 100% opacity where values are in [1; 2], 0% elsewhere,
  3. mask the third image with 100% opacity where values are in [2; 3], 0% elsewhere.

Merge, and apply filmic tonemapping. Done. Anything else is just hocus-pocus to account for broken colour models.
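
For illustration only, here is a rough NumPy sketch of those steps; the file names and the use of imageio to read the 32-bit float exports are assumptions, not part of any fixed recipe:

```python
# Rough sketch of the offset-and-mask merge described above.
# Assumptions: the three exports are linear 32-bit float TIFFs, and
# imageio (with its TIFF plugin) is available to read and write them.
import numpy as np
import imageio.v3 as iio

# Steps 1-3: load each frame and apply the black offset so the exposures
# land in the ranges [0; 1], [1; 2] and [2; 3]
im1 = iio.imread("frame_minus1ev.tif").astype(np.float32)       # -1 EV, values in [0; 1]
im2 = iio.imread("frame_0ev.tif").astype(np.float32) + 1.0      #  0 EV, shifted to [1; 2]
im3 = iio.imread("frame_plus1ev.tif").astype(np.float32) + 2.0  # +1 EV, shifted to [2; 3]

def range_mask(img, lo, hi):
    """100% opacity where the pixel value falls in [lo; hi), 0% elsewhere."""
    value = img.mean(axis=-1, keepdims=True)  # per-pixel mean over RGB
    return ((value >= lo) & (value < hi)).astype(np.float32)

# The three parametric masks from the list above
m1 = range_mask(im1, 0.0, 1.0)
m2 = range_mask(im2, 1.0, 2.0)
m3 = range_mask(im3, 2.0, 3.0)

# Merge: the masks are mutually exclusive, so a weighted sum is all it takes
merged = im1 * m1 + im2 * m2 + im3 * m3

# Save the linear merge; tone-map with filmic afterwards
iio.imwrite("merged_linear.tif", merged)
```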

@anon41087856 No, no, I don’t want to fuse multiple exposures, only to do stitching, but for some reason if I feed Linear REC2020 TIFFs to Hugin, it thinks I must want to do HDR merging :frowning:

file a bug then :wink:

  1. Hugin can be confusing. It usually takes me a while to get the settings right.
  2. I had a similar problem. See Hugin colour management bug?

Hi @pitbuster,

I’ve been approaching this slightly differently, as I have not been able to use linear input data with Hugin.

I select one image from the panorama, do all the linear processing for it in darktable, and finally use filmic RGB to convert it to display-referred. Then I copy-paste the history stack to the other images of the panorama and check the result. If some image needs tweaking, I do it and copy-paste the modified stack again. Repeat until satisfied; to my understanding, all the source images for Hugin should have the same processing. Finally, I export as 16-bit TIFF.

Then I stitch the images in Hugin.

The last step is to view the result in darktable and do the final crop, fine-tuning, and any artistic processing if needed.

That is better than nothing, but ideally you should do the Hugin operations on linear data to keep the loss of precision to a minimum.

I think I discovered a workaround: first export non-linear TIFFs and create the Hugin project. Then re-export the images as 32-bit TIFFs, reopen the Hugin project, and do the actual stitching.

Do you mean that your workflow is:

  1. edit the images and export them as non-linear TIFFs
  2. create the Hugin project and align the images
  3. re-create the TIFFs so that they exclude filmic RGB and the later tools
  4. stitch the linear TIFFs in Hugin using the project created in step 2
  5. apply filmic RGB and the other later tools to the stitched output

@Juha_Lintula Exactly.
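
If you want to script the actual stitching (step 4) instead of clicking through the GUI, a rough sketch using Hugin’s hugin_executor command-line tool could look like the following; it assumes the linear TIFFs were re-exported over the same file names the project already references, and "pano.pto" / "pano_linear" are just placeholder names:

```python
# Rough sketch of step 4: stitch the re-exported linear TIFFs with the
# project that was aligned on the non-linear exports in step 2.
# Assumptions: the linear TIFFs overwrite the files the .pto references,
# and "pano.pto" / "pano_linear" are placeholder names.
import subprocess

subprocess.run(
    ["hugin_executor", "--stitching", "--prefix=pano_linear", "pano.pto"],
    check=True,  # raise an error if the stitch fails
)
# The blended output (pano_linear.tif, depending on the project's output
# settings) then goes back into darktable for filmic etc. (step 5).
```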

Can’t you use masks to exclude the ghosts from the problematic images? (Just remember to uncheck “Only consider pixels that are defined in all images (-c)” in the HDR merging options.)

I suppose that it may work, but then I have to fiddle with algorithms to avoid ghosting when I shouldn’t have to (and it also adds processing time by using HDR algorithms in a workflow that doesn’t need them).

I don’t understand. Why would you have to fiddle with algorithms and add processing? I just had a look at my Hugin logs for two projects and it seems that it first does HDR merging on each stack, and then stitches the HDR stacks with enblend. But if you don’t want to fuse multiple exposures, which I understand to mean that you don’t have stacks in your Hugin project, then the HDR merging will just say “Only one input image given. Copying input image to output image.” and then enblend will stitch as usual. At least that’s what happens when I check “Panorama Outputs” → “High dynamic range”.

Actually, if you do it with Hugin, you don’t even need to bother with all of that, as Hugin reads the EV from the EXIF data, and also appears to automatically ignore clipped values. :smile:

Hugin doesn’t do simple linear fusion though; it’s always the overdone Mertens algorithm.

Is it? Are we talking about the same thing? I am talking about hugin_hdrmerge, which can do a simple average or use Khan’s method and outputs an HDR image (TIFF or EXR), not about enfuse.

Oh, I never heard of anything other than enfuse/enblend in Hugin. Is hugin_hdrmerge new?

Actually, I don’t know. :smiley: But it’s my go-to workflow these days. I export to linear floating-point TIFF from darktable (after applying lens correction), import and align/stitch that with Hugin, and export the result as an HDR image, which I import back into darktable for tone mapping with filmic (after remembering to set the proper input profile).

@spid sounds interesting, could you write a small tutorial?

Because I don’t need to do an HDR merge: I have only one exposure per frame that I want to stitch into a panorama, but Hugin assumes that if I open 32-bit TIFFs I want to do an HDR merge :frowning: