Does Hugin prefer linear (gamma = 1) or non-linear color space in input files

I have a few questions:

  1. Does Hugin prefer linear (gamma = 1) or non-linear color space in input files?
  2. What is best to use, and when? If I feed Hugin linear versus non-linear input files for the same panorama, I sometimes get very different stitched results.

I am using the enblend blender in Hugin:

'stitcher' -> 'processing - blender' -> enblend

I have been using 16-bit TIFF files with linear color profiles as input to Hugin for stitching, since I assumed that enblend's merging presupposed this. However, checking the enblend manual, I find this:

The conclusion I draw from the text above is that enblend handles both linear (gamma = 1) and non-linear color profiles.

An example: when I stitched a night panorama with a clear sky gradient, from very dark sky down to the much brighter glow of the sun below the horizon, non-linear 16-bit TIFF (sRGB) worked better than linear 16-bit TIFF (Linear Rec. 709) as input to Hugin. The linear stitching result is not a faithful representation of the input TIFF files. All input TIFFs are in gamut and not overexposed.

16-bit linear-light TIFF (Linear Rec. 709) input stitched to a panorama: I get some banding, and this is not how the input files looked. (I have tried other linear color profiles; same issue.)

16-bit non-linear TIFF (sRGB) input stitched to a panorama: the stitched result is very similar to the input files, with a smooth gradient in the brighter parts of the sky.
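One possible contributor to the banding (my own sketch, not an explanation from the Hugin developers): a 16-bit *linear* encoding spends far fewer of its 65536 code values on dark tones than an sRGB encoding does, so shadow gradients like a night sky are quantized more coarsely before blending even starts:

```python
# Rough illustration: how many 16-bit code values land in the darkest 1%
# of linear light under each encoding. The sRGB curve below is the
# standard IEC 61966-2-1 encoding function.

def linear_to_srgb(v: float) -> float:
    """Encode a linear-light value in [0, 1] with the sRGB curve."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

shadow = 0.01  # darkest 1% of linear light, roughly a dark-sky range

codes_linear = round(shadow * 65535)                # codes a linear TIFF spends there
codes_srgb = round(linear_to_srgb(shadow) * 65535)  # codes an sRGB TIFF spends there

print(codes_linear, codes_srgb)  # sRGB devotes roughly 10x more codes to the shadows
```

At 16 bits this difference is usually survivable, but every intermediate rounding step in the pipeline eats into that margin, and the shadows are where a linear file has the least to spare.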

What is going on here?

Hugin can work with both. You may have to set the camera response type to the corresponding type (EMoR or linear).

Thank you @stoffball. I can now see that 'Camera Response' is 'custom (EMoR)' in the example provided, for both the linear and the non-linear case.

Is there a theoretical preference for either EMoR or linear?

I haven't done a systematic study.
The camera response curve should match the input images:
sRGB/gamma-encoded → custom (EMoR)
linear → linear

The remapper nona is working in the linear color space to do the photometric corrections.
The stitcher enblend works in the CIELUV color space.
The input → internal → output conversion is done for both automatically.

So for linear images fewer conversions are necessary, but I am not sure how big the effect on the output is in real life.

Thank you @stoffball, I was unaware of the Camera Response tab; I am now. That is a step in the right direction. It seems one has to set the response curve manually.

Slightly out of context, I get brightly colored clusters of pixels in dark areas with --blend-colorspace=CIELUV. I can often avoid that with --blend-colorspace=CIELAB.
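For reference, the blend colorspace can be selected on the enblend command line (the filenames here are hypothetical); from inside Hugin, the same flag can be added to the enblend options field:

```shell
# Switch enblend's internal blending space to CIELAB.
enblend --blend-colorspace=CIELAB -o panorama.tif remapped0000.tif remapped0001.tif
```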

Currently, yes.
This will be improved in the next version.