Any RAW Pano Stitchers out there?

Anyone aware of a panorama stitching program that will take RAW files as input and also output RAW?

Not possible.

@BzKevin
There is a dt (darktable) Lua script that sends selected images to Hugin to create the panorama and then reimports the result. AFAIK it uses TIFF as the export and import format, so it is completely 16-bit.

Saving images as raw is not useful for anything other than the camera.

You practically have a raw stitcher if you feed Hugin with 16-bit TIFFs that you developed with RawTherapee using the “Neutral” profile. I am not sure if you are actually aware of what a raw file really is.
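If you want to script that workflow, something like the sketch below could work (file names, the Neutral.pp3 path and some flags are placeholders and may differ between RawTherapee/Hugin versions):

```python
# Develop raws "flat" with RawTherapee's bundled Neutral profile, then
# stitch the resulting 16-bit TIFFs with Hugin's command-line tools.
# Assumes rawtherapee-cli and the Hugin tools are on PATH.
import glob
import subprocess

RAWS = sorted(glob.glob("pano_*.RAF"))  # your raw files
NEUTRAL = "/usr/share/rawtherapee/profiles/Neutral.pp3"  # path varies

# 1. Raw -> flat 16-bit TIFF (no tone curve, no local edits)
subprocess.run(["rawtherapee-cli", "-o", ".", "-p", NEUTRAL,
                "-b16", "-t", "-Y", "-c", *RAWS], check=True)

TIFFS = sorted(glob.glob("pano_*.tif"))

# 2. Hugin pipeline: project, control points, optimise, crop, stitch
subprocess.run(["pto_gen", "-o", "pano.pto", *TIFFS], check=True)
subprocess.run(["cpfind", "--multirow", "-o", "pano.pto", "pano.pto"], check=True)
subprocess.run(["autooptimiser", "-a", "-m", "-l", "-s",
                "-o", "pano.pto", "pano.pto"], check=True)
subprocess.run(["pano_modify", "--canvas=AUTO", "--crop=AUTO",
                "-o", "pano.pto", "pano.pto"], check=True)
subprocess.run(["hugin_executor", "--stitch", "--prefix=stitched", "pano.pto"],
               check=True)
```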


AFAIK, Affinity Photo can make panoramas directly from RAW files, and probably Photoshop, too. But they do convert (demosaic) the raws to something like a TIFF before they stitch them. However, Hugin is much better at the actual stitching than AP or Ps. And they cannot save the result as a raw file.

The TIFF → Hugin workflow is actually what I have been using. I was just curious whether there was a raw equivalent.

I don’t know the details of it, but I know that Adobe’s LR has been capable of stitching RAW files and generating the output as a .dng file for a while now. I have a version of LR that is a number of years old and does this. I can post some of the output files when I get home if people are interested in reviewing them.

Thanks to all for the input!

I should mention that the reason I ask is that I have felt constrained when using the RAW > TIFF > Hugin workflow. Perhaps I am doing something wrong; I’ll need to go back and review to ensure I maintain 16-bit depth at all points along the way.

Where I notice this most (I use dt) is when trying to make any exposure adjustments to the output file. I find that even very minimal pushes of the slider will begin to over- or underexpose parts of the image.

I often “enfuse” my panoramas. Is it possible that the enfuse step is messing with my bit depth or limiting my file in some other way?

@Morgan_Hardwood Is it truly impossible to do stitching on raw data (not demosaiced), or is it simply useless? Or perhaps put differently: is it actually possible to encode to a certain raw format, or is it strictly decoding?


@Thanatomanic to a programmer few things in the programmosphere are impossible, but for all practical intents and purposes, stitching raw, undemosaiced data is “not possible”. You would need to correctly handle all the constraints undemosaiced data carries with it on both sides of the fence - both before stitching and after stitching. “No program” will stitch raw data, and “no program” will demosaic some unknown gigantic stitched image which does not meet the precise specifications of a raw file coming directly from a camera (again, “no program” in practical terms, as of course you could use G’MIC or write one yourself).

Stitching would be limited to certain offset multiples so as not to break the Bayer or X-Trans pattern. If the raw data has special pixels, such as those from PDAF photosites, then stitching would break their pattern. You would need to figure out how to deal with geometric transformations in a way that the CFA pattern remains intact. And so on for every dirty little trick raw files have to offer (think HDRMerge).

Of course @agriggio could write such a program in an afternoon, @heckflosse could optimize it before midnight so it stitches gigapixel panos on a C64 CPU, @Hombre could design a GUI, @jdc could ensure it produces accurate colors and @floessie would constify it and ensure it follows C++ standards and best practices, and we could call it cRawchet, but for all practical intents and purposes, no.
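To make the “offset multiples” constraint concrete, here is a toy check (a sketch assuming a 2×2 Bayer pattern; the function name is made up for illustration):

```python
# A pure translation keeps a Bayer mosaic consistent only if both
# offsets are multiples of the 2x2 pattern period; X-Trans would need
# multiples of 6. Anything else puts R values where G sites belong.
def offset_preserves_cfa(dx: int, dy: int, period: int = 2) -> bool:
    return dx % period == 0 and dy % period == 0

assert offset_preserves_cfa(128, 64)      # even shift: same CFA phase
assert not offset_preserves_cfa(127, 64)  # odd shift: pattern broken
```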


And even more, the OP asked for the stitched output to still be in raw format…

@BzKevin those “raw” Adobe DNG files are about as raw as a frozen pre-baked pizza from the supermarket.

The problem you described arises from the image being clamped to a certain tonal range (e.g. [0;1], where you lose all data below 0 or above 1). One way around that is to save an unclamped image (a 16-bit float or 32-bit TIFF) and hope that Hugin does The Right Thing and saves a still-unclamped image in the same quality with correct colors, which you can then compress dynamically after stitching in RT/dt/GIMP etc. You will encounter obstacles doing so, as very few people test that such images are read, manipulated and saved correctly, so bugs related to these things go unfixed and even unnoticed for years.

An easier solution is to squash the dynamic range of the raw file in any way (an inverse “S” curve to lower the contrast, for instance - only global adjustments, nothing local) so that you don’t have to deal with extreme values in the first place, then just save as an ordinary 16-bit TIFF, stitch, and put the punch back into the pano after stitching while avoiding the relatively uncharted 32-bit territory.
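To illustrate the squash-then-restore idea, a minimal numpy sketch, with a plain power curve standing in for the inverse “S” curve (the particular curve is an assumption; all that matters is that it is global and invertible):

```python
import numpy as np

GAMMA = 2.2  # strength of the squash; purely illustrative

def squash(x):
    """Global tonal compression applied before exporting to 16-bit TIFF
    (x assumed to be display-referred, in [0, 1])."""
    return x ** (1.0 / GAMMA)

def restore(y):
    """Invert the compression on the stitched panorama."""
    return y ** GAMMA

x = np.linspace(0.0, 1.0, 5)
# Round-trips exactly in float; 16-bit quantisation adds only tiny errors.
assert np.allclose(restore(squash(x)), x)
```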

Thanks for the info, Morgan. Your suggested workflow makes a lot of sense. I think I’ll give that a go, especially since it doesn’t really add much overhead in terms of workflow steps. The compression and decompression could all be done with saved tone curve presets applied on export/import.

Yes, I know Lr can do that, but - as always - Lr is cheating! I guess the Lr DNG just contains a TIFF, since DNG can be a lot of things, really! The result is just a demosaiced image in a lossless format. Actually, I could check, but I am kind of lazy. I wonder what size (in MB or GB) the result is. And one would need to calculate a bit, which I am not good at. Anyway, make a panorama with Hugin and make one with Lr (from the same files), save one as TIFF and the other as DNG, and compare the file sizes.

I question the need for the two compression/decompression steps. IMHO, if you process each input raw file in RT with the Neutral profile, the image should be plenty flat enough. As long as the camera WB was set to a fixed value, there is nothing to do in RT?

Respectfully, I have a few objections to your points.

  1. I think it’s just “raw”. It’s not an acronym that stands for something else, unlike e.g. “DNG”.
  2. I believe doing more at the scene-referred, undemosaiced stage, and getting more flexibility at the display-referred conversion stage, is always good.
  3. You can definitely output full raw data after processing raw data, without any cheating. It is nothing out of the ordinary: you take a raw pixel array, you operate on it, you produce another pixel array. Done. There are tools, such as CornerFix, that do this.

However, most of those operations take and output a raw file with the same dimensions and pixel count. Panorama stitching would be much harder to do, and it definitely takes to the extreme what is considered OK to do purely in raw space. Also, the usefulness of the resulting raw (DNG) would be reduced, though it would still give control over demosaicing and white balance.

I think it technically could be done, though it would be more finicky and less forgiving than stitching in display referred formats. Here’s how it could go:

  1. Apply flat fields, dark frames, etc. to a set of raws. Get a devignetted set of raws.
  2. Determine the desired orientation of the raws relative to each other by matching features. This might be significantly more difficult in raw space, I think.
  3. Write the raws, positioned according to (2), onto a blank raw canvas. In theory, you get pixel-for-pixel correspondence in the final result, probably with some averaging of pixel values around the stitching points to even out the seams (see the sketch after this list).
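A rough sketch of step 3, under generous assumptions (translation-only alignment, offsets already snapped to the CFA period, all tiles from the same sensor; the function name is made up):

```python
import numpy as np

def stitch_mosaics(tiles, offsets, canvas_shape):
    """tiles: 2-D undemosaiced arrays; offsets: (dx, dy) pairs, each a
    multiple of the CFA period (2 for Bayer, 6 for X-Trans)."""
    acc = np.zeros(canvas_shape, dtype=np.float64)
    weight = np.zeros(canvas_shape, dtype=np.float64)
    for tile, (dx, dy) in zip(tiles, offsets):
        h, w = tile.shape
        acc[dy:dy + h, dx:dx + w] += tile
        weight[dy:dy + h, dx:dx + w] += 1.0
    # Average wherever tiles overlap; uncovered canvas stays zero.
    return acc / np.maximum(weight, 1.0)
```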

I gloss over the perspective realignment of images in 3D space. This is extremely weird to do against undemosaiced pixel data (because suddenly what, a single green pixel on the sensor occupies 8 pixels of image space if it’s closer to the edge?), but it is commonly required in panorama shooting, and without it you would be limited to only certain tricky ways of shooting panoramas.

Stitching a panorama means that each output pixel will generally be a combination of two inputs that are not discrete input pixels. So to calculate the output pixel, we need to interpolate each input between pixels. A mosaic image has only one channel per pixel, so to get three channels (R, G and B) we need to interpolate each channel according to which channels are in the close neighbourhood.

So, yes, it would be possible to combine the operations of demosaicing and stitching. But it would be very messy. The output would, of course, be demosaiced. That output could then be mosaiced by dropping two-thirds of the values.
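For an RGGB Bayer layout, that final re-mosaicing step might look like this (a sketch; it assumes the pattern origin is an R site):

```python
import numpy as np

def remosaic_rggb(rgb):
    """Turn an (H, W, 3) demosaiced image back into an (H, W) Bayer
    mosaic by keeping, at each site, only the channel the CFA would
    have sampled there - i.e. dropping two-thirds of the values."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites on R rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites on B rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic
```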

I can’t see any benefit in doing this, except perhaps a small performance gain. But programmatically it is much cleaner to separate the operations of demosaicing and stitching.

EDIT: There is another possibility. Instead of creating an output with three channels, the process might create a mosaiced output, so each output pixel has only one channel. The value in that channel for that pixel would be an interpolation of pixels in the input images. For example, an output red pixel might be interpolated either from all input channels (which implies demosaicing the inputs) or only from the red channels of the inputs (so there is no demosaicing at that stage).

The output from this is a mosaiced image. Hence we could stitch first and demosaic later. However, I think setting red output from only the red channels of the inputs would discard information (the general continuity of colour within small neighbourhoods), so the overall quality would be lower than demosaicing first and then stitching.
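A sketch of that second variant, assuming an RGGB layout and some warp that has already mapped an output red site to input-image coordinates (x, y):

```python
import numpy as np

def sample_red(mosaic, x, y):
    """Interpolate a red value at input coordinates (x, y) using only
    the red sites (even rows/columns) of an RGGB mosaic - i.e. no
    demosaicing at this stage. Edge handling is simplistic."""
    red = mosaic[0::2, 0::2]        # red plane, at half resolution
    rx, ry = x / 2.0, y / 2.0       # same point on the red subgrid
    x0 = min(int(rx), red.shape[1] - 1)
    y0 = min(int(ry), red.shape[0] - 1)
    x1 = min(x0 + 1, red.shape[1] - 1)
    y1 = min(y0 + 1, red.shape[0] - 1)
    fx, fy = rx - x0, ry - y0
    top = red[y0, x0] * (1 - fx) + red[y0, x1] * fx
    bot = red[y1, x0] * (1 - fx) + red[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```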

But I wouldn’t want to discourage developers. If anyone tries this, the results might be interesting.


> Stitching a panorama means that each output pixel will generally be a combination of two inputs that are not discrete input pixels.

Yes, I kind of tried to skirt around that issue by not allowing any perspective transforms, but that is impossible for typical panorama shooting. And because perspective transforms in raw space, as it exists now, are totally impractical, as you say, outputting stitched but undemosaiced raw data would also be impossible. So yeah. I don’t think temporary demosaicing for stitching purposes would be the right approach, and any other approach may be unrealistic 🙂

Ultimately I was wrong to revive this topic I suppose.

The closest you can get here is an output that is a demosaiced image in the camera’s native color space - i.e. demosaic and stitch, but don’t do ANY color transformations.

Mi Sphere Converter (an Android app for use with the Xiaomi Mi Sphere/MADV 360 camera) did this - demosaic and stitch with no colorspace transformations, outputting a DNG.

You could achieve this by demosaicing in RawTherapee but not performing WB or color transforms (this is another use case for WIP: Bundled profile for reference image by Entropy512 · Pull Request #6646 · Beep6581/RawTherapee · GitHub), stitching in Hugin, then performing all further processing operations on the result. Note that you’ll have to do some metadata mangling here: the PR above is intended for a color profiling run, so the gamut/primaries are completely wrong in the output images, and you’d need to take the Hugin output and retag it as a DNG with appropriate metadata.

It is probably easier to just do a basic white balance adjustment and output as either Rec. 2020 or ProPhoto from RT before stitching in Hugin, and then finish the resulting pano back in RT.
