I suspect what is happening is that when you’re aligning and stacking, if distortion is in play, while your alignment is “good” in one part of the image, it fails in another, and that is perceived as adding blur. I’ve had similar issues with global align-and-merge scenarios if lens distortion exists.
Not sure what the appropriate way to solve this is - I need to dig into some of Siril's more advanced stacking methods (drizzle, etc.), as in my case I was dealing with a niche use case where global alignment was not needed (tripod-mounted, terrestrial image sequence). Using lensfun to perform distortion correction is only applicable if you're demosaicing the images, which isn't always done in Siril workflows.
@thumdugger thank you for the detailed explanation. To summarize: because Siril does not have distortion correction, it produces images that are not very good at shorter focal lengths. But beyond the fact that stacking can make things worse, as @Entropy512 said, a single image is already distorted and there's not much we can do about it.
This is also the reason why astronomy instruments have field-correction lenses or optical formulas that are so expensive - something unachievable with a DSLR unless software is created to do something vaguely similar.
@Entropy512 drizzle in Siril is not real drizzle; it simply up-scales images by a factor of 2. Bayer drizzle would be very good, but it's hard to implement.
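To make the distinction concrete, here is a minimal sketch (my own illustration, not Siril's actual code) of what a plain factor-of-2 nearest-neighbor upscale looks like - which is essentially what Siril's "drizzle" amounts to - as opposed to true variable-pixel drizzle, which would drop shrunken input pixels onto a finer grid using sub-pixel alignment offsets:

```python
def upscale2x(img):
    """Nearest-neighbor 2x upscale of a 2D pixel grid.

    This is just pixel duplication: no sub-pixel alignment information
    is used, unlike true (variable-pixel) drizzle, which places each
    shrunken input pixel onto the fine grid at its measured offset.
    `img` is a list of rows of pixel values.
    """
    out = []
    for row in img:
        wide = [px for px in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                       # duplicate each row
    return out
```

Every output 2x2 block is a copy of one input pixel, so no resolution is actually recovered - which is why Bayer drizzle (real drizzle on the undemosaiced color planes) would be the interesting feature to have.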
In that case it should be possible to preprocess the inputs - e.g. using RawTherapee for distortion correction, exporting a linear high-bit-depth TIFF from RT for each image, and then feeding those to Siril.
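A batch version of that preprocessing step could be scripted around `rawtherapee-cli`. Below is a sketch that only builds the argument list (the flag meanings are as I recall them from recent RT versions - check `rawtherapee-cli --help` before relying on them; the profile filename is hypothetical):

```python
def rt_batch_cmd(raw_files, profile="distortion-linear.pp3", out_dir="tiffs"):
    """Build a rawtherapee-cli invocation applying one processing profile
    (e.g. distortion correction + linear output) to every raw file,
    writing 16-bit TIFFs that Siril can then ingest.

    Flags (as I understand them): -O output directory, -p processing
    profile, -t TIFF output, -b16 16-bit depth, -c marks the input file
    list and must come last.
    """
    return ["rawtherapee-cli", "-O", out_dir, "-p", profile,
            "-t", "-b16", "-c", *raw_files]
```

You would then run the result with `subprocess.run(rt_batch_cmd([...]), check=True)` and point Siril at the output directory as a sequence.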
darktable admittedly handles this much better - it lets you select the output color profile's gamma properties along with the output file format, and IIRC defaults to linear for high-bit-depth TIFF. RT's user interface for setting an output profile to linear is really clunky right now and needs work.
Interesting. Definitely a niche thing, but a possibility to improve interoperability.
Some of those data fields are things known to RT (assuming you're OK with "instrument name" meaning the camera model), such as focal length and camera model; some are not going to be known to RT (such as RA/Dec). Most likely, anyone using RT for distortion correction is shooting with a traditional camera lens, so "TELESCOP" could be set to the lens model identifier. (If RT is able to correct distortion at all, the lens and camera model must already be available to pass to lensfun.)
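A minimal sketch of that mapping, assuming hypothetical EXIF-ish field names on the RT side and the common FITS keyword conventions (INSTRUME, TELESCOP, FOCALLEN) on the astro side - fields RT cannot know, like RA/Dec, are simply omitted rather than guessed:

```python
def fits_cards_from_exif(exif):
    """Map metadata RT already knows onto FITS-style keywords for an
    Astro-TIFF header. Input key names here are hypothetical; the FITS
    keywords follow common conventions.
    """
    cards = {}
    if "camera_model" in exif:
        cards["INSTRUME"] = exif["camera_model"]
    if "lens_model" in exif:
        # A camera lens stands in for the telescope identifier.
        cards["TELESCOP"] = exif["lens_model"]
    if "focal_length" in exif:
        cards["FOCALLEN"] = float(exif["focal_length"])
    return cards  # no RA/DEC: RT has no way to know pointing coordinates
```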
Would NAXIS3=3 be appropriate for a demosaiced RGB image?
The Astro-TIFF support would simply be copying metadata so that if you do a siril export to TIFF, processing with RT, then reimport from RT to siril, you don’t lose what’s useful for astronomy image processing afterwards.
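Since Astro-TIFF stores the FITS header as text in the TIFF description field, a tool on either side of the round trip mainly needs to parse and re-emit those cards. A minimal sketch of the parsing half, assuming the usual `KEY = value / comment` card layout (no attempt at full FITS string-quoting rules):

```python
def parse_astrotiff_header(description):
    """Parse FITS-header-style text (as stored in a TIFF description
    field by Astro-TIFF) into a dict of keyword -> value strings.

    Handles 'KEY = value / comment' cards, stops at END, and skips
    COMMENT/HISTORY/blank lines (which have no '='). Quoted-string
    edge cases (e.g. '/' inside a quoted value) are not handled.
    """
    cards = {}
    for line in description.splitlines():
        key, sep, rest = line.partition("=")
        key = key.strip()
        if key == "END":
            break
        if not sep:
            continue  # no '=': comment, history, or blank line
        value = rest.split("/", 1)[0].strip().strip("'").strip()
        cards[key] = value
    return cards
```

Re-emitting the (possibly updated) cards into the description field on export would be the mirror image of this, which is the part that preserves the astronomy-relevant metadata across the RT round trip.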
Originally I was thinking that the gamma issues (which are for the most part a separate thing) would not affect things here, but there may be some scenarios where they do.
Yeah, in general most use cases I originally thought of would not need to round-trip from Siril to RT and back to Siril, but on further thought that may sometimes be the scenario.
Distortion correction - There’s no metadata so far, and the individual images would be processed on their own identically. (Assumption: Anyone doing distortion correction shot things with an SLR or mirrorless and a wider-angle lens, and thus there isn’t astronomy-related metadata in the images.)
However, there's the challenge of dark-frame subtraction - this is fundamentally an aspect of the sensor itself, so darks need to be stacked prior to distortion correction. RT can do dark-frame subtraction, but it can't handle the stacking - I remember at ONE point seeing a discussion where someone tried to stack darks in Siril but had significant difficulty getting the stacked darks into RT as a proper darkframe. Will need to poke at this more. Stacked darks are one of the few cases where I'd expect round-tripping back to Siril, as opposed to RT sitting only at the beginning or end of the pipeline.
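The ordering constraint is the key point: the master dark must be built and subtracted in the sensor's native pixel geometry, before distortion correction remaps anything. A toy sketch (plain lists, no library assumptions) of the two steps:

```python
def stack_darks(darks):
    """Per-pixel mean of dark frames -> master dark.

    Must happen in the sensor's native geometry, i.e. BEFORE any
    distortion correction remaps pixels, since hot pixels and dark
    current are tied to physical sensor locations.
    """
    n = len(darks)
    return [[sum(frame[y][x] for frame in darks) / n
             for x in range(len(darks[0][0]))]
            for y in range(len(darks[0]))]

def subtract_dark(light, master):
    """Subtract the master dark from a light frame, clamping at zero so
    noise doesn't produce negative counts."""
    return [[max(light[y][x] - master[y][x], 0.0)
             for x in range(len(light[0]))]
            for y in range(len(light))]
```

Only after `subtract_dark` would the frames go through lens-geometry steps like distortion correction.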
Lights/flats - this is a fun can of worms, since some aspects of the behavior here are properties of the sensor and some come from the lens itself. The latter is likely handled by lensfun for a profiled lens, but what about the sensor-specific behaviors? Decomposing these two and handling them independently is going to be… challenging…
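To illustrate why the decomposition is awkward: the classic flat-field correction divides by a single normalized flat, which bundles the lens effects (vignetting) and the sensor effects (per-pixel response) into one map. A toy sketch of that combined step:

```python
def apply_flat(light, flat):
    """Classic flat-field correction: divide the light frame by the flat
    normalized to its mean.

    This single map bundles lens vignetting and per-pixel sensor
    response together. The decomposition discussed above would have to
    split `flat` into a lensfun vignetting model (lens part) and a
    sensor-only residual - and apply them at different pipeline stages.
    """
    pixels = [p for row in flat for p in row]
    mean = sum(pixels) / len(pixels)
    return [[l / (f / mean) for l, f in zip(lrow, frow)]
            for lrow, frow in zip(light, flat)]
```

If RT/lensfun already removes vignetting, dividing by a full flat afterwards would over-correct - which is exactly the "handling them independently" problem.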
The spec says some of these are considered nonessential for TIFF, since parts of the TIFF standard already cover them; some are redundant with DNG too, and some concept of an "AstroDNG" would be beneficial. If the redundant components are not part of the extended metadata, software doesn't have to be careful about keeping them consistent. (For example, say RT ingests an "AstroDNG" and demosaics it, thus altering NAXIS3 - since NAXIS3 is redundant, it would be better if it simply weren't there. Similarly for BAYERPATTERN, since that's redundant with DNG.)
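The "better if it simply weren't there" point is easy to express in code: a tool that demosaics would just drop the keywords its operation invalidates, instead of trying to rewrite them consistently. A sketch (the keyword set here is only the two examples from above):

```python
# Cards a demosaicing step invalidates: NAXIS3 changes (a mosaic plane
# becomes 3 RGB planes), and the Bayer pattern no longer applies - and
# both are redundant with the container's own metadata anyway.
STALE_AFTER_DEMOSAIC = {"NAXIS3", "BAYERPATTERN"}

def scrub_after_demosaic(cards):
    """Drop header cards a demosaicing step invalidates, rather than
    forcing every tool in the chain to keep them consistent."""
    return {k: v for k, v in cards.items() if k not in STALE_AFTER_DEMOSAIC}
```

Whether the right keyword spelling is BAYERPATTERN or something else would of course follow whatever the spec settles on; the point is the drop-rather-than-rewrite policy.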
Preserving metadata is a heck of a lot easier than comprehending and modifying it when necessary. It might even be possible as-is with RT. If it isn't, it's something that will definitely have to wait until after the big exiv2 rework on RT's side - I don't think ANYONE wants to go anywhere near metadata until then, because the work would have to be redone.
In addition to the TIFF color profile improvement, we may need to look at other blocking points for RT <-> Siril interoperability.
Siril does not use much of the TIFF metadata - only the basics like exposure, focal length, and date, plus the Astro extension in the description field, because that is a copy of the FITS header Siril uses for its own images and that some acquisition software apparently captures in Astro-TIFF. It doesn't look like we manage the Bayer pattern, for example.
So we would expect the Astro extension to be kept as-is, but as you said, this can lead to wrong information if the image is demosaiced, for example. However, on the Siril side, this important part of the Astro extension would probably be overwritten by the real image data and parameters.