Siril needs distortion correction in stacking

We only did this so that Photoshop/Lightroom would open the images properly. Without it, the situation was catastrophic and generated a lot of complaints.

If you have a better solution don’t hesitate to contribute.
The role of Siril is not to play with color profiles. That is the role of the software used afterwards.

It’s exactly the other way around. Done this way, software like Photoshop and Lightroom does not open the images properly, since it assumes they are in sRGB when they aren’t.

Sure, I’ll look into that!

Yet “playing with color profiles” is exactly what Siril is doing, I would say, since it is haphazardly attaching color profiles that are not correct.

But this is not mandatory:
[screenshot]

Maybe, but by doing it that way, users now see the image in Photoshop as it appears in Siril, whereas before they didn’t…

Oh, and to understand the origin of that:

I think part of the confusion here is that sRGB happens to describe both a gamut (primaries) AND a transfer function.

It is possible (and very common) to combine sRGB primaries with a transfer function other than sRGB. (frequently linear - see WIP: Bundled profile for reference image by Entropy512 · Pull Request #6646 · Beep6581/RawTherapee · GitHub and Reference TIFFs are not tagged with an ICC profile that indicates linear data · Issue #5575 · Beep6581/RawTherapee · GitHub)

In the event that Siril is saving linear data, the profile currently being embedded is definitively wrong. In the event that Siril is not saving linear data into TIFFs (when almost the entire Siril pipeline operates linearly…), then it really should be, along with an ICC profile that properly describes it as being linear data. At least looking at src/io/image_formats_libraries.c · master · FA / Siril · GitLab I’m not seeing any application of an sRGB TRC, so the output is likely linear unless something is triggering nonlinearization elsewhere? Ideal defaults would be to encode 8-bit output nonlinearly, and 16/32-bit output linearly.

The profile is also vastly oversized (it doesn’t REALLY matter for the most part), but saving a 1024-point LUT TRC for something that can be described using parametric TRCs is really unnecessary.

WIP: Bundled profile for reference image by Entropy512 · Pull Request #6646 · Beep6581/RawTherapee · GitHub provides an example of an ICC profile with sRGB primaries and linear encoding.

While it is technically wrong to save raw sensor data with an ICC profile that indicates sRGB primaries, it isn’t nearly as egregious as misrepresenting linear data as nonlinear or vice versa. (In the case that the input files are raw sensor data from libraw, then you could probably generate appropriate primaries on the fly from the metadata reported by libraw, which would be ideal.)

This attitude is, by the way, one of the primary reasons I stopped using siril for stacking and wrote my own stacker, albeit for a fairly specific subset of siril functionality. ( pyimageconvert/pystack.py at master · Entropy512/pyimageconvert · GitHub ) If you intend to remain a non-color-managed application, then you should make it less painful for users to postprocess your images in a color-managed postprocessing workflow. The requirement to use FITS as an intermediary on input was understandable/acceptable, as while it was unnecessary in my use case it is necessary for memory management in many others. The difficulty of performing further postprocessing after stacking, on the other hand, appears to be an intentional design decision based on comments like this. I would have poked at potentially improving the export system, but again, comments like this imply to me that such a contribution would be rejected as being not compatible with design intent.

“not color managed” and “program output is not intended to be processed any further” are fundamentally incompatible attributes for anything that is processing raw sensor data from a camera. If you’re processing raw sensor data, you MUST be color managed OR support further postprocessing by an application that IS color managed.

I fully agree with this one here. Specifically in regards to whether or not to apply an sRGB TRC - I’m 100% OK with “don’t do colorspace/gamut conversions”, and mostly OK with “yeah the primaries/colormatrix are wrong”, but 16-bit TIFFs should at least have the option for linear export.

Hello @Entropy512 . Thx for your message.

In Siril, and in general in astrophotography, we try to stay as much as possible with a linear image. However, at the end of the process, we usually want to stretch the histogram, either to work in another more classical software, or directly show the image.

In Siril there are 3 algorithms doing such things:

Generally, TIFF images are images that have been stretched by one (or more) of these algorithms. These tools are dedicated to astronomy images, but that’s all I can say, because I don’t have enough skill in color management. Adding sRGB is probably a mistake for the purists; however, it fixed a problem that was reported very often.

I don’t know why you are saying this. All contributions are very welcome, especially in a domain that none of us knows much about. If someone wants to contribute to improve the export part of Siril, please do it.

I’ll take a look at this later then. So based on this, you are doing some transformations, but NOT transforming into a gamma-encoded space?

If that is the case, then linear is definitely the correct transfer function for the exported ICC, however it might be pure luck that some of those transforms make things look OK when misinterpreting as sRGB.

In general:
  • Linear encoded data misinterpreted as sRGB looks very dark overall
  • sRGB data misinterpreted as linear looks washed out
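
To put rough numbers on that, here’s a minimal sketch using the standard sRGB transfer function formulas (this is just an illustration, not Siril code): a linear mid-grey of 0.18 collapses to roughly 0.027 if a viewer wrongly “decodes” it as sRGB, whereas correctly encoding it gives about 0.46.

```c
/* Illustration only (assumed standard sRGB formulas, not Siril code). */
#include <math.h>
#include <stdio.h>

/* sRGB "decode": encoded value -> linear light */
static double srgb_to_linear(double e)
{
    return (e <= 0.04045) ? e / 12.92 : pow((e + 0.055) / 1.055, 2.4);
}

/* sRGB "encode": linear light -> encoded value */
static double linear_to_srgb(double l)
{
    return (l <= 0.0031308) ? 12.92 * l : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

int main(void)
{
    double mid_grey = 0.18; /* linear mid-grey */

    /* Linear data misread as sRGB: the viewer decodes it, crushing the shadows. */
    printf("0.18 linear shown as if it were sRGB: %.3f\n", srgb_to_linear(mid_grey)); /* ~0.027 */

    /* The correct encoding for display, for comparison. If data encoded like this
     * is misread as linear, it is shown much brighter than intended (washed out). */
    printf("0.18 linear correctly encoded to sRGB: %.3f\n", linear_to_srgb(mid_grey)); /* ~0.461 */
    return 0;
}
```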

Do you have any issues with adding a dependency on LCMS2? It would be far better to generate ICC profiles on the fly based on image metadata/properties than to store canned bytestreams for some predefined ICC profiles. elles_icc_profiles/make-elles-profiles.c at master · Entropy512/elles_icc_profiles · GitHub is one reference for generating ICC profiles programmatically, RT’s source probably has another one.
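
To make that concrete, here’s a minimal sketch of the kind of on-the-fly generation I mean, using standard LittleCMS 2 calls (the function name and the profile-version choice are mine, and this is untested against Siril’s code base):

```c
/* Minimal sketch: build a "linear TRC, sRGB primaries" profile on the fly
 * with LittleCMS 2 instead of shipping a canned byte stream.
 * Not Siril code; names and choices here are hypothetical. */
#include <lcms2.h>

cmsHPROFILE make_linear_srgb_profile(void)
{
    /* sRGB / Rec.709 primaries and D65 white point, as xyY */
    cmsCIExyY d65 = { 0.3127, 0.3290, 1.0 };
    cmsCIExyYTRIPLE primaries = {
        { 0.6400, 0.3300, 1.0 },   /* red   */
        { 0.3000, 0.6000, 1.0 },   /* green */
        { 0.1500, 0.0600, 1.0 }    /* blue  */
    };

    /* gamma = 1.0 -> a true linear tone curve, described parametrically
     * rather than as a 1024-point LUT */
    cmsToneCurve *linear = cmsBuildGamma(NULL, 1.0);
    cmsToneCurve *curves[3] = { linear, linear, linear };

    cmsHPROFILE profile = cmsCreateRGBProfile(&d65, &primaries, curves);
    cmsFreeToneCurve(linear);

    if (profile) {
        cmsSetProfileVersion(profile, 2.1); /* arbitrary choice for wide compatibility */
        /* cmsSaveProfileToFile(profile, "linear_srgb.icc"); or use
         * cmsSaveProfileToMem() to embed it when writing the TIFF */
    }
    return profile;
}
```

The point is that the same handful of calls could also take primaries derived from libraw metadata instead of the hard-coded sRGB values, and the resulting profile uses a small parametric curve rather than a large LUT.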

I do agree but … All images exported from Siril and opened as sRGB in GIMP or other software look great… This is why I’ve used it. For me, sRGB was the profile that best matched the image display, and I don’t have enough skill to understand where the issue is.

Not at all. It won’t be for 1.2.0 but adding LCMS2 will be a great improvement, for sure.

Thx a lot for your help.

I can explain this phenomenon.

The first problem is that Siril does not apply the appropriate display transform, which is typically the sRGB TRC in most cases, to its preview image. So, a raw (linear) image is rendered too dark in the preview window to begin with; that is, what the user sees in Siril’s preview window is not a faithful rendering of the linear data.

Next, the user processes the linear image based on what they see in the preview window. This usually involves applying non-linear operations like Asinh until the user is satisfied with the look of the preview. The processed image is then saved with the ICC sRGB profile but Siril does not apply the sRGB TRC. Basically, the saved image is stretched more than required in order to compensate for the missing display transform; that is, the image would look too bright (or washed out) if the equivalent operations were applied in color managed software.

Finally, the user opens the saved “sRGB” image in a color managed application (darktable, RawTherapee, Photoshop, etc.). These programs typically use an intermediate linear working space, like linear Rec2020, for processing. The software expects sRGB data and so applies the sRGB TRC accordingly to transform the image data into the working space before rendering the image in the preview window. Being color managed, unlike Siril, the software also applies the appropriate display transform to the preview image. These two operations, i.e. converting from sRGB to the working space and then applying the display transform, will effectively cancel each other out provided that the display profile is also sRGB. When that is the case, which is certainly true for the majority of users, the image displayed by the color managed software will more-or-less match Siril’s preview image (discounting gamut mapping).
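
If it helps, the cancellation can be shown in a few lines (assumed standard sRGB formulas, gamut mapping ignored, not Siril code):

```c
/* Sketch of the round trip described above: "decode as sRGB into a linear
 * working space" followed by "encode for an sRGB display" is an identity,
 * so the color managed preview ends up matching Siril's unmanaged one. */
#include <math.h>
#include <stdio.h>

static double srgb_decode(double e) { return (e <= 0.04045) ? e / 12.92 : pow((e + 0.055) / 1.055, 2.4); }
static double srgb_encode(double l) { return (l <= 0.0031308) ? 12.92 * l : 1.055 * pow(l, 1.0 / 2.4) - 0.055; }

int main(void)
{
    double saved     = 0.42;                 /* pixel value Siril wrote out, tagged as sRGB   */
    double working   = srgb_decode(saved);   /* converted into the linear working space       */
    double displayed = srgb_encode(working); /* display transform for an sRGB monitor         */
    printf("%.3f -> %.3f\n", saved, displayed); /* round-trips back to 0.420                  */
    return 0;
}
```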

So, it’s the ubiquity of the sRGB color space rather than luck which explains why things look OK to most users. Unfortunately, this is rather a false sense of security because the reference for comparison, i.e. Siril’s preview image, is incorrect to start with. I see two potential problems for users when the ICC profile is embedded in the image file:

  1. When the display medium is not sRGB, e.g. AdobeRGB or a printer color space, the displayed image colors will not match Siril’s preview (remember the user bases the look on Siril’s preview image).

  2. A linear image, say a stacked light calibrated with biases and flats, is transformed into non-linear data when it’s loaded into color managed software. Any operations intended for linear data are actually performed on non-linear data but the user is not really aware of this.


Hello, as Cyril said, we don’t know much about this side of image management; that is also the case for me. I don’t understand this sentence. How can a raw linear image, displayed with a linear transfer function, not display the linear data?


Transfer functions used in Siril are nonlinear.

Disclaimer: I’m going to completely ignore gamut and primaries for now. As I said, this is in general far less egregious than getting the TRC wrong. As such, any time I say “sRGB” in this post I am talking about the transfer function part of sRGB, and not the gamut/primaries.

Aha. Based on this, it sounds like:
  • siril (mis)feeds linear data to the display - which is in most cases expecting the sRGB TRC
  • As such it looks dark, so the user does heavy massaging to fix it
  • In the end, the data has been so heavily massaged that it looks good when (mis)interpreted as being sRGB TRC
  • This data is then exported and (mis)tagged as having an sRGB TRC - two wrongs wind up making a right in these use cases, but this is problematic for users who wish to further postprocess linear data in another application

Honestly, naively assuming the display is sRGB in both gamut and TRC would be better than the current scenario, it’s going to be a solid match for 95%+ of users (hey all of the perfectionists would love for everyone to have a calibrated display, but it isn’t going to happen. Hell, I don’t even have one at the moment but I at least understand that this could cause problems for me.)

A better (partial) interim solution would likely be:
  • In preview, transform data to an sRGB TRC (which matches 95%+ of displays out there, and in fact most OSes will assume that a non-color-managed app is putting out sRGB anyway, so the OS will likely handle most of the remaining 5% well)
  • For export, transform data to sRGB TRC and tag as such if it’s low bit depth
  • For export, preserve data as linear and tag it as such if it’s high bit depth

Probably have a toggle that lets the user override the transforms to preserve legacy compatibility.

This would likely result in users having to perform far less manual massaging to get an image looking “good”; they would have a reasonable starting point.
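
For what it’s worth, the export half of that is almost trivial to express in code; here’s a tiny sketch (the names are hypothetical, not existing Siril API):

```c
/* Hypothetical sketch of the export decision proposed above, not Siril API. */
typedef enum { TRC_LINEAR, TRC_SRGB } trc_t;

/* Pick the transfer function to both encode the pixel data with AND tag the
 * exported file with: 8-bit output gets the sRGB TRC, deeper output stays
 * linear. A user-facing toggle could force either branch for legacy
 * compatibility. */
static trc_t export_trc(int bit_depth)
{
    return (bit_depth <= 8) ? TRC_SRGB : TRC_LINEAR;
}
```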

For reference, the last time I used siril I ignored the preview, because my goal was linear raw in, average-stacked raw (not even demosaiced!) out.

I suspect what he means is a raw linear image misdisplayed with a nonlinear transfer function (since that’s what appears to be happening based on his description), because unless you’re color managed and TELL the OS/compositor that you are feeding it linear data, the OS is going to assume you’re feeding it sRGB-encoded data.


A display will have its own color profile, which includes the so-called ‘gamma’ correction. The majority of displays will have a native gamma similar to the sRGB TRC. Of course, it’s more complicated than that, but for simplicity we can safely assume that a display does not have a linear gamma (i.e. 1.0). The display will apply its (non-linear) gamma correction to any data it receives. Color managed software will apply the color profile of the display accordingly, including the (inverse) gamma correction (i.e. the display transform). However, Siril ignores the display transform and feeds linear data to the display. The linear data will be gamma corrected, yielding non-linear data and effectively darkening the image rendered in the preview window.

This is simple to test for yourself. Debayer a raw DSLR image using Siril (be sure to set the image gamma correction to linear in the DSLR debayer preferences) and save the file as TIFF. This represents linear image data. I recommend using a daylight image for the test so it’s easier to see the effect. Open the file in Siril and set the display mode to linear (not AutoStretch). Then, open the same file in darktable or RawTherapee with the input color profile set to Linear sRGB or Linear Rec709 and the display profile set accordingly (typically sRGB). Be sure to disable any tone mapping, white balance, etc. settings that the software applies by default. Now, compare the brightness of the two images – they should be the same, but the Siril preview will be much darker.


Here is an example for completeness. Below is the same image processed in the following three ways:

  1. Darktable linear reference. Debayered (VNG), white balance (1.92, 1.0, 1.49), linear input profile (linear Rec709), display profile sRGB. No other image processing operations are applied. This image represents the reference for linear data as rendered by color managed software.

  2. Siril linear default: Debayered (VNG), white balance (1.92, 1.0, 1.49), linear image gamma correction, linear display mode. This image shows the default rendering of the reference in Siril v1.0.6.

  3. Siril linear default plus sRGB TRC: Debayered (VNG), white balance (1.92, 1.0, 1.49), linear image gamma correction, linear display mode, sRGB TRC display transform (applied with pixel math). This image approximates the image preview if Siril would apply the display transform.

As already mentioned, Siril ignores the display transform. This is evident in the 2nd image, which is much darker than the reference. Once the display transform is applied (3rd image), Siril’s preview matches the darktable reference.

Honestly, I’m not sure what the best approach for Siril is regarding color management. Your approach could well be a very good one; I need to think about it a bit more (I’m a bit slow, sorry :blush: ). Somehow the current process seems to work for most people as is, so maybe the immediate first step would be to simply document how color management currently works. This might help to mitigate a lot of initial confusion. Then, maybe start a dedicated new topic (since this discussion is not really related to the OP), or revive Siril Color Management, to gather some other ideas from the community on how best to handle color management, from none, to full support (probably beyond the scope of Siril), to something in between. In any case, it sounds like Siril’s developers would very much welcome some assistance on this topic.

darktable linear reference

Siril linear default

Siril linear default plus sRGB TRC


Thx for all your explanations. I need to plug in my brain and read them again and again.

I will open a new thread and pin it asap.
Kind of busy right now.


Yup, getting back to the original topic - lens distortion can definitely be problematic for any stacking approach that attempts to do global alignment. Not sure if there are possible workarounds for now such as preprocessing images in RawTherapee for ingestion into siril for stacking.

(As someone who is primarily a Linux user, I subscribe to the philosophy of making tools that interoperate well together, and focus on doing one thing really well. siril’s forte is without a doubt stacking, and I think for quite a few use cases, it’s better to find a way to make it easier to pre/post process images in/out of siril, with siril being used primarily for stacking. Trying to fill in the gaps in other aspects in siril would just be duplication of code and an additional maintenance burden when there are already great tools for those other jobs.)

Hello, I missed this very important message for the original topic of distortion.

As you may have guessed, Siril developers don’t know much about the things that seem to be obvious to most folks around here, having to do with digital camera picture management.

I’ll start from the beginning then, and quoting you will help:

Just as we do not handle distortion in Siril, we do not add any either. We only use the raw data from the sensor; if the stars end up square or very elongated, it’s because the optics create a geometrical aberration off-centre. I understand you expect something to happen in the software to correct this (I wouldn’t even know how this works personally), but how did you “test the lens specifically for distortion”?

I’m sorry that your experience with Siril (and that of many others here) is for the most part a failure. I don’t know if we will be able to improve this; I hope we do at some point, at least by providing a gateway to software that does things as you expect…

For reference, the continuation of the colour management discussion happens here: Color management in Siril, need help and refactoring.

@vinvin

No worries, sir, and for the record I love Siril. I’ve been playing around with PixInsight, and it is very impressive, but I keep coming back to Siril and nothing much is probably going to change that any time soon.

I am a software developer so I tend to understand things from that perspective and tend to approach my issues via math and code.

The lens thing opened up a whole can of worms for me. The lack of documentation specificity was frustrating for me but I very much understand how hard good documentation is to write and how much effort it takes.

That said the real problems surrounding the lens had nothing to do with lens or Siril. I dove deep into a lot of years old forum posts trying to figure out what was going on and eventually figured out that the biggest problem child was Sony.

The 12-sided polygons I was seeing on certain images and stretching is, from what I can tell, an anti-vignetting mask that Sony applies without me asking them to. They definitely do this on their jpg images (which I turn off and don’t use) but I was shocked to find out they do this on the RAW images, too.

I figured out some ways to trick the camera into not doing this, including slightly rotating the lens so the mount pins don’t connect to the lens pins … doing this makes the lens unrecognizable to the camera, so it treats it as an old-school, manual lens and doesn’t apply any anti-vignetting to it. This is the only reliable way I found to ensure that the mask is not applied, as it can’t be turned off in the settings.

And then it also turned out that the cameras love to drop from 14bit image capture down to 12bit image capture for a whole host of reasons that aren’t remotely obvious to anybody … especially someone as new to cameras as I am. This one was incredibly frustrating because I was losing bit depth without knowing it and there are at least five reasons it does this … I was doing two of those things … sometimes … sometimes not.

And lastly there is the matter of lossy compression. This is apparently another “feature” I, in no way, want and that I can’t turn off. It can result in losing up to one bit of color depth all to save me disk space I don’t need saved and (possibly) ensure more predictable write times to the disk. I hate this “feature” … very much.

So, without knowing it, I thought I was taking 14bit raw images that were instead often invisibly turned into 12bit raw images, which were then lossy compressed to, at worst, 11bit images, and which had an anti-vignetting mask forcibly stamped into the RAW data itself.

I don’t know why Sony claims it supports RAW images given that the camera deeply messes with these images without being asked to and most of the time without any indication it is doing it nor any way to turn it off via settings (or at all).

But, for the record, after much frustrating learning over the past several weeks, I can say that:

  • Siril is awesome, I still use it way more than anything else
  • the lens in question is great and I understand much better now how to use it
  • I now understand that wide field, (like, 20mm effective focal length) imagery comes at a cost, always, and that has to be taken into account when using it
  • Siril is not optimized to deal with many of the quirks of 8-50mm imagery as it is much more focused on longer lens imagery … which is something I’m learning to work around, also
  • and, lastly, that Sony evidently just wants me to feel pain for something I must have done in a previous life, as there is no other explanation for their atrocious behavior with regard to their RAW files on the camera range I use.

In the end, thank goodness my primary camera is my old D7000, which I picked up used a number of years ago, along with its lenses. It does what it is supposed to do, and its bag of obnoxious trickery and unwanted features is vastly, vastly smaller than that of the more modern and “user friendly” Sony I am also using.

Thank you for responding though. I know with the new update work you folks have been super busy and I greatly appreciate your efforts!

Siril is amazing and greatly appeals to the programmer in me.

I suspect what is happening is that when you’re aligning and stacking, if distortion is in play, while your alignment is “good” in one part of the image, it fails in another, and that is perceived as adding blur. I’ve had similar issues with global align-and-merge scenarios if lens distortion exists.

Not sure what the appropriate way to solve this is - I need to dig into the depths of some of siril’s more advanced stacking methods (drizzle, etc) as in my case, I was involved in a niche use case where global alignment was not needed (tripod mounted, terrestrial image sequence). Using lensfun to perform distortion correction is only applicable if you’re demosaicing the images, which isn’t always used for siril workflows.

Extremely annoying. Covered in some threads over on dpreview - Color Polygons 2021: Sony Alpha Full Frame E-mount Talk Forum: Digital Photography Review for example. One of my rants on the subject at Re: Which lens corrections affect RAW (A7R IV): Sony Alpha Full Frame E-mount Talk Forum: Digital Photography Review