Siril needs distortion correction in stacking

I can explain this phenomenon.

The first problem is that Siril does not apply the appropriate display transform, which is typically the sRGB TRC, to its preview image. So, a raw (linear) image is rendered too dark in the preview window to begin with; that is, the raw linear image the user sees in Siril's preview window is not linear data.

Next, the user processes the linear image based on what they see in the preview window. This usually involves applying non-linear operations like asinh stretches until the user is satisfied with the look of the preview. The processed image is then saved with the sRGB ICC profile, but Siril does not apply the sRGB TRC. Basically, the saved image is stretched more than required in order to compensate for the missing display transform; that is, the image would look too bright (or washed out) if the equivalent operations were applied in color managed software.

Finally, the user opens the saved “sRGB” image in a color managed application (darktable, RawTherapee, Photoshop, etc.). Such software typically uses an intermediate linear working space, like linear Rec2020, for processing. The software is expecting sRGB data and so applies the sRGB TRC accordingly to transform the image data into the working space before rendering the image in the preview window. Being color managed, unlike Siril, the software also applies the appropriate display transform to the preview image. These two operations, i.e. converting from sRGB to the working space and then applying the display transform, will effectively cancel each other provided that the display profile is also sRGB. When that is the case, which is certainly true for the majority of users, the image displayed by the color managed software will more-or-less match Siril's preview image (discounting gamut mapping).
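
If it helps to see the cancellation concretely, here is a minimal numpy sketch (not Siril code, just the standard sRGB TRC formulas): decoding sRGB-tagged data into a linear working space and then applying the sRGB display transform round-trips back to the original values, which is why the color managed preview ends up matching Siril's.

```python
import numpy as np

def srgb_decode(v):
    # sRGB TRC, encoded -> linear: what a color managed app does when it
    # ingests data tagged as sRGB into a linear working space
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    # sRGB TRC, linear -> encoded: the display transform for an sRGB display
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

pixels = np.array([0.01, 0.2, 0.5, 0.9])      # values saved by Siril and tagged sRGB
roundtrip = srgb_encode(srgb_decode(pixels))  # into the working space, then to the display
print(np.allclose(pixels, roundtrip))         # True: the two steps cancel
```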

So, it's the ubiquity of the sRGB color space rather than luck which explains why things look OK to most users. Unfortunately, this gives a rather false sense of security because the reference for comparison, i.e. Siril's preview image, is incorrect to start with. I see two potential problems for users when the ICC profile is appended to the image file:

  1. When the display media is not sRGB, e.g. AdobeRGB or a printer color space, the displayed image colors will not match Siril's preview (remember, the user bases the look on Siril's preview image).

  2. A linear image, say a stacked light calibrated with biases and flats, is transformed into non-linear data when it's loaded into color managed software. Any operations intended for linear data are then actually performed on non-linear data, but the user is not really aware of this (see the small sketch after this list).
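
To make point 2 concrete, here is a tiny sketch (using a plain 2.2 power as a stand-in for the sRGB TRC, so only an approximation): a linear operation such as averaging two frames gives a different answer when it is unknowingly performed on gamma-encoded values.

```python
# Averaging two exposures of the same pixel should happen on linear data.
a_lin, b_lin = 0.10, 0.50
mean_lin = (a_lin + b_lin) / 2                    # 0.30, the physically meaningful mean

gamma = 2.2                                       # crude stand-in for the sRGB TRC
a_enc, b_enc = a_lin ** (1 / gamma), b_lin ** (1 / gamma)
mean_enc = ((a_enc + b_enc) / 2) ** gamma         # same average done on encoded values
print(round(mean_lin, 3), round(mean_enc, 3))     # 0.3 vs ~0.258: not the same result
```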


Hello, as Cyril said, we don't know much about this side of image management; this is also the case for me. I don't understand this sentence. How does a raw linear image, displayed with a linear transfer function, not display linear data?


Transfer functions used in siril are nonlinear.

Disclaimer: I’m going to completely ignore gamut and primaries for now. As I said, this is in general far less egregious than getting the TRC wrong. As such, any time I say “sRGB” in this post I am talking about the transfer function part of sRGB, and not the gamut/primaries.

Aha. Based on this, it sounds like:
  • siril (mis)feeds linear data to the display - which is in most cases expecting the sRGB TRC
  • As such it looks dark, so the user does heavy massaging to fix it
  • In the end, the data has been so heavily massaged that it looks good when (mis)interpreted as being sRGB TRC
  • This data is then exported and (mis)tagged as having an sRGB TRC - two wrongs wind up making a right in these use cases, but are problematic for users who wish to further postprocess linear data in another application

Honestly, naively assuming the display is sRGB in both gamut and TRC would be better than the current scenario; it's going to be a solid match for 95%+ of users (hey, all of the perfectionists would love for everyone to have a calibrated display, but it isn't going to happen. Hell, I don't even have one at the moment, but I at least understand that this could cause problems for me.)

A better (partial) interim solution would likely be:
  • In preview, transform data to an sRGB TRC (which matches 95%+ of displays out there, and in fact most OSes will assume that a non-color-managed app is putting out sRGB anyway, so the OS will likely handle most of the remaining 5% well)
  • For export, transform data to sRGB TRC and tag as such if it's low bit depth
  • For export, preserve data as linear and tag it as such if it's high bit depth

Probably have a toggle that lets the user override the transforms to preserve legacy compatibility.
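
A rough sketch of what that export policy could look like (all names and the exact bit-depth cut-off here are my own assumptions, not anything Siril currently does):

```python
import numpy as np

def srgb_encode(x):
    # standard sRGB TRC, linear -> encoded
    x = np.clip(np.asarray(x, dtype=np.float64), 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def export_pixels(linear_img, bit_depth, apply_transforms=True):
    # Hypothetical policy: encode + tag sRGB for low bit depths, keep linear
    # (and tag it as linear) for high bit depths, with a legacy override toggle.
    if not apply_transforms:
        return linear_img, "untagged"             # legacy behaviour: data out as-is
    if bit_depth <= 8:                            # where to draw the line is debatable
        return srgb_encode(linear_img), "sRGB"
    return linear_img, "linear (e.g. linear Rec709)"
```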

This would likely result in users having to perform far less manual massaging to have an image looking “good”; they would have a reasonable starting point.

For reference, the last time I used siril I ignored the preview, because my goal was linear raw in, average-stacked raw (not even demosaiced!) out.

I suspect what he means is a raw linear image, misdisplayed with a nonlinear transfer function (since that's what appears to be happening based on his description), because unless you're color managed and TELL the OS/compositor that you are feeding it linear data, the OS is going to assume you're feeding it sRGB-encoded data.


A display will have its own color profile, which includes the so-called ‘gamma’ correction. The majority of displays will have a native gamma similar to the sRGB TRC. Of course, it's more complicated than that, but for simplicity we can safely assume that a display does not have a linear gamma (i.e. 1.0). The display will apply its (non-linear) gamma correction to any data it receives. Color managed software will apply the color profile of the display accordingly, including the (inverse) gamma correction (i.e. the display transform). However, Siril ignores the display transform and feeds linear data to the display. The linear data will be gamma corrected, yielding non-linear data and effectively darkening the image rendered in the preview window.
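
As a back-of-the-envelope illustration (treating the display as a plain 2.2 power curve, which is only an approximation of the sRGB TRC):

```python
linear = 0.2                  # mid-dark linear pixel value sent straight to the display
displayed = linear ** 2.2     # what a ~2.2-gamma display makes of untransformed data
print(round(displayed, 3))    # ~0.029: rendered far darker than intended
```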

This is simple to test for yourself. Debayer a raw DSLR image using Siril (be sure to set the image gamma correction to linear in the DSLR debayer preferences) and save the file as a TIFF. This represents linear image data. I recommend using a daylight image for the test so it's easier to see the effect. Open the file in Siril and set the display mode to linear (not AutoStretch). Then, open the same file in darktable or RawTherapee with the input color profile set to Linear sRGB or Linear Rec709 and the display profile set accordingly (typically sRGB). Be sure to disable any tone mapping, white balance, etc. settings that the software applies by default. Now, compare the brightness of the two images – they should be the same, but the Siril preview will be much darker.


Here is an example for completeness. Below is the same image processed in the following three ways:

  1. Darktable linear reference. Debayered (VNG), white balance (1.92, 1.0, 1.49), linear input profile (linear Rec709), display profile sRGB. No other image processing operations are applied. This image represents the reference for linear data as rendered by color managed software.

  2. Siril linear default: Debayered (VNG), white balance (1.92, 1.0, 1.49), linear image gamma correction, linear display mode. This image shows the default rendering of the reference in Siril v1.06.

  3. Siril linear default plus sRGB TRC: Debayered (VNG), white balance (1.92, 1.0, 1.49), linear image gamma correction, linear display mode, sRGB TRC display transform (applied with pixel math). This image approximates the image preview if Siril would apply the display transform.

As already mentioned, Siril ignores the display transform. This is evident in the 2nd image, which is much darker than the reference. Once the display transform is applied (3rd image), Siril's preview matches the darktable reference.

Honestly, I'm not sure what the best approach for Siril is regarding color management. Your approach could well be a very good one; I need to think about it a bit more (I'm a bit slow, sorry :blush: ). Somehow the current process seems to work for most people as is, so maybe the immediate first step would be to simply document how color management currently works. This might help to mitigate a lot of initial confusion. Then, maybe start a dedicated new topic (since this discussion is not really related to the OP), or revive Siril Color Management, to gather some other ideas from the community on how best to handle color management, from none, to full support (probably beyond the scope of Siril), to something in between. In any case, it sounds like Siril's developers would very much welcome some assistance on this topic.

darktable linear reference

Siril linear default

Siril linear default plus sRGB TRC


Thanks for all your explanation. I need to plug in my brain and read it again and again.

I will open a new thread and pin it asap.
Kind of busy right now.


Yup, getting back to the original topic - lens distortion can definitely be problematic for any stacking approach that attempts to do global alignment. Not sure if there are possible workarounds for now such as preprocessing images in RawTherapee for ingestion into siril for stacking.

(As someone who is primarily a Linux user, I subscribe to the philosophy of making tools that interoperate well together, and focus on doing one thing really well. siril’s forte is without a doubt stacking, and I think for quite a few use cases, it’s better to find a way to make it easier to pre/post process images in/out of siril, with siril being used primarily for stacking. Trying to fill in the gaps in other aspects in siril would just be duplication of code and an additional maintenance burden when there are already great tools for those other jobs.)

Hello, I missed this very important message for the original topic of distortion.

As you may have guessed, Siril developers don’t know much about the things that seem to be obvious to most folks around here, having to do with digital camera picture management.

I'll start from the beginning then, and quoting you will help:

While we do not correct distortion in Siril, we do not add any either. We only use the raw data from the sensor; if the stars end up square or very elongated, it's because the optics create a geometrical aberration off-centre. I understand you expect something to happen in the software to correct this (I wouldn't even know how this works personally), but how did you “test the lens specifically for distortion”?

I'm sorry that your experience with Siril (and that of many others here) has for the most part been a failure. I don't know if we will be able to improve this; I hope we do at some point, at least by providing a gateway to software that does things as you expect…

For reference, the continuation of the colour management discussion happens here: Color management in Siril, need help and refactoring.

@vinvin

No worries, sir, and for the record I love Siril. I’ve been playing around with PixInsight, and it is very impressive, but I keep coming back to Siril and nothing much is probably going to change that any time soon.

I am a software developer so I tend to understand things from that perspective and tend to approach my issues via math and code.

The lens thing opened up a whole can of worms for me. The lack of documentation specificity was frustrating for me but I very much understand how hard good documentation is to write and how much effort it takes.

That said, the real problems surrounding the lens had nothing to do with the lens or Siril. I dove deep into a lot of years-old forum posts trying to figure out what was going on and eventually figured out that the biggest problem child was Sony.

The 12-sided polygons I was seeing on certain images when stretching are, from what I can tell, an anti-vignetting mask that Sony applies without me asking them to. They definitely do this on their jpg images (which I turn off and don't use), but I was shocked to find out they do this on the RAW images, too.

I figured out some ways to trick the camera into not doing this, including slightly rotating the lens so the mount pins don't connect to the lens pins … doing this makes the lens unrecognizable to the camera, so it treats it as an old school, manual lens and doesn't apply any anti-vignetting to it. This is the only reliable way I found to ensure that mask is not applied, as it can't be turned off in the settings.

And then it also turned out that the cameras love to drop from 14-bit image capture down to 12-bit image capture for a whole host of reasons that aren't remotely obvious to anybody … especially someone as new to cameras as I am. This one was incredibly frustrating because I was losing bit depth without knowing it, and there are at least five reasons it does this … I was doing two of those things … sometimes … sometimes not.

And lastly there is the matter of lossy compression. This is apparently another “feature” I, in no way, want and that I can’t turn off. It can result in losing up to one bit of color depth all to save me disk space I don’t need saved and (possibly) ensure more predictable write times to the disk. I hate this “feature” … very much.

So, without knowing it, I was thinking I was taking 14-bit raw images that were, instead, often invisibly turned into 12-bit raw images that were then lossy-compressed to, at worst, 11-bit images, that were forcibly stamped with an anti-vignetting bit mask into the RAW data itself.

I don’t know why Sony claims it supports RAW images given that the camera deeply messes with these images without being asked to and most of the time without any indication it is doing it nor any way to turn it off via settings (or at all).

But, for the record, after much frustrating learning over the past several weeks, I can say that:

  • Siril is awesome, I still use it way more than anything else
  • the lens in question is great and I understand much better now how to use it
  • I now understand that wide field (like 20mm effective focal length) imagery comes at a cost, always, and that has to be taken into account when using it
  • Siril is not optimized to deal with many of the quirks of 8-50mm imagery as it is much more focused on longer lens imagery … which is something I’m learning to work around, also
  • and, lastly, that Sony evidently just wants me to feel pain for something I must have done in a previous life, because there is no other explanation for their atrocious behavior with regard to their RAW files on the camera range I use.

In the end, thank goodness my primary camera and lenses are my old D7000 and the glass I picked up used a number of years ago. It does what it is supposed to, and its bag of obnoxious trickery and unwanted features is vastly, vastly smaller than the more modern and “user friendly” Sony I am also using.

Thank you for responding though. I know with the new update work you folks have been super busy and I greatly appreciate your efforts!

Siril is amazing and greatly appeals to the programmer in me.

I suspect what is happening is that when you’re aligning and stacking, if distortion is in play, while your alignment is “good” in one part of the image, it fails in another, and that is perceived as adding blur. I’ve had similar issues with global align-and-merge scenarios if lens distortion exists.

Not sure what the appropriate way to solve this is - I need to dig into the depths of some of siril’s more advanced stacking methods (drizzle, etc) as in my case, I was involved in a niche use case where global alignment was not needed (tripod mounted, terrestrial image sequence). Using lensfun to perform distortion correction is only applicable if you’re demosaicing the images, which isn’t always used for siril workflows.

Extremely annoying. Covered in some threads over on dpreview - Color Polygons 2021: Sony Alpha Full Frame E-mount Talk Forum: Digital Photography Review for example. One of my rants on the subject at Re: Which lens corrections affect RAW (A7R IV): Sony Alpha Full Frame E-mount Talk Forum: Digital Photography Review

@thumdugger thank you for the detailed explanation. To summarize, the fact that Siril does not have distortion correction makes it produce images that are not very good at shorter focal lengths; but beyond the fact that stacking can make things worse, as @Entropy512 said, a single image is already distorted and there's not much we can do about it.
This is also the reason why astronomy instruments have field-correction lenses or optical formulas that are so expensive. Something unachievable by a DSLR unless software is created and used to do something vaguely similar.

@Entropy512 drizzle in Siril is not the real drizzle; it is simply up-scaling images by a factor of 2. Bayer drizzle would be very good, but it's hard to implement.

In which case it should probably be possible to preprocess the inputs - something like using RawTherapee for distortion correction, exporting a linear high-depth TIFF from RT for each image, and then feeding that to Siril.

yes it should, save the gamma issue…

It would be great for RawTherapee to use the Astro-TIFF format:
https://siril.readthedocs.io/en/latest/file-formats/Astro-TIFF.html

Are you referring to Input/output color profile support has a clunky UI · Issue #6644 · Beep6581/RawTherapee · GitHub? That should be partially helped by WIP: Bundled profile for reference image by Entropy512 · Pull Request #6646 · Beep6581/RawTherapee · GitHub, even if iccstore: Allow loading profiles from user-writable configuration directory by Entropy512 · Pull Request #6645 · Beep6581/RawTherapee · GitHub doesn't make it in. Which reminds me, I really need to finish up work on those PRs.

darktable admittedly handles this much better - being able to select the output color profile’s gamma properties with the output file format, and IIRC defaulting to linear for high-bitdepth TIFF. RT’s user interface for setting an output profile to linear is reeaaaally clunky right now and needs work.

Interesting. Definitely a niche thing, but a possibility to improve interoperability.

Some of those data fields are things that are known to RT (assuming you are OK with “instrument name” being the camera model) such as focal length and camera model, some things are not going to be known to RT (such as RA/dec). Most likely if someone is using RT to distortion correct, they are going to be using a more traditional camera lens, so “TELESCOP” could be set to the lens model identifier. (If RT is able to distortion correct, then the lens and camera model have to be available to pass to lensfun)

Would NAXIS3=3 be appropriate for a demosaiced RGB image?

No, I'm referring to what was said above a few weeks ago and was forked to this thread: Color management in Siril, need help and refactoring.
So, a Siril problem.

The Astro-TIFF support would simply be copying metadata so that if you do a siril export to TIFF, processing with RT, then reimport from RT to siril, you don’t lose what’s useful for astronomy image processing afterwards.

As @vinvin said, I was thinking of not losing the astrometry solution, for example. This is very important in Siril.

Yes, once demosaiced it is NAXIS3=3. Keyword BAYERPAT describes which Bayer pattern is used.
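
For what it's worth, here is a minimal sketch of producing such a file from Python with tifffile; the header keywords and values are only illustrative, and the assumption is that the FITS-style header simply lives in the standard TIFF ImageDescription (description) field, which is what the Astro-TIFF convention uses.

```python
import numpy as np
import tifffile

# Illustrative FITS-style header for a small demosaiced RGB image (values made up).
header = "\n".join([
    "SIMPLE  =                    T / file does conform to FITS standard",
    "BITPIX  =                   16 / number of bits per data pixel",
    "NAXIS   =                    3 / number of data axes",
    "NAXIS1  =                  150 / length of data axis 1",
    "NAXIS2  =                  100 / length of data axis 2",
    "NAXIS3  =                    3 / RGB planes after demosaicing",
    "EXPTIME =                 30.0 / exposure time in seconds",
    "FOCALLEN=                 20.0 / focal length in mm",
    "END",
])

rgb = np.zeros((100, 150, 3), dtype=np.uint16)           # placeholder image data
tifffile.imwrite("stack.tif", rgb, description=header)   # header goes into ImageDescription
```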

Originally I was thinking that the gamma issues (which are for the most part a separate thing) would not affect things here, but there may be some scenarios where they might.

Yeah, in general most use cases I thought of originally would not have a need to roundtrip from siril to RT back to siril, but after further thinking that may be the scenario sometimes.

Distortion correction - There’s no metadata so far, and the individual images would be processed on their own identically. (Assumption: Anyone doing distortion correction shot things with an SLR or mirrorless and a wider-angle lens, and thus there isn’t astronomy-related metadata in the images.)

However, there's the challenge of dark-frame subtraction - this is fundamentally an aspect of the sensor itself. So darks need to be stacked prior to distortion correction. RT can do DFS, but it can't handle the stacking - I remember at ONE point seeing a discussion of someone who tried to stack darks in siril but had a significant challenge getting the stacked darks into RT as a proper darkframe. Will need to poke at this more. Stacked darks are one of the few cases where I'd expect roundtripping back to siril, as opposed to RT being only at the end or beginning of the pipeline.

Lights/flats - this is a fun can of worms, since some aspects of the behavior here are properties of the sensor, and some are part of the lens itself. The latter is likely handled by lensfun for a profiled lens, but what about the sensor-specific behaviors? Decomposing these two behaviors and handling them independently is going to be… challenging…

The spec says some of these are considered nonessential for TIFF, since parts of the TIFF standard already cover them; some of these are redundant with DNG too, and some concept of “AstroDNG” would be beneficial. If the “redundant” components are not part of the extended metadata, then there's no need for software to have to be careful about keeping them consistent. (As an example, let's say RT ingests an “AstroDNG” - it demosaics it, thus altering NAXIS3 - it would be better, since NAXIS3 is redundant, if it simply weren't there. Similarly for BAYERPATTERN, since that's redundant with DNG.)

Preserving metadata is a heck of a lot easier than comprehending it and modifying it if necessary. It might even be possible as-is with RT. If it isn’t, that’s something that will definitely have to wait until after the big exiv2 rework on RT’s side. I don’t think ANYONE wants to go anywhere near metadata until after that because it would have to be redone.

In addition to the TIFF color profile improvement, we may need to look at other blocking points for RT <-> Siril interoperability.
Siril does not use a lot of the TIFF metadata, only the basics like exposure, focal length and date, and the Astro extension in the description field, because this is a copy of the FITS header that Siril uses for its own images and that some acquisition software apparently captures in Astro-TIFF. It doesn't look like we manage the Bayer pattern, for example.

So we would expect the Astro extension to be kept as is, but as you said this can lead to wrong information if the image is demosaiced, for example. However, on the Siril side, this important part of the Astro extension would probably be overwritten by the real image data and parameters.