This thread is the starting point for such a contribution, so that we don't flood the other topic any further.
This is too late for the Siril 1.2.0 beta but could be implemented in the stable release if we find a suitable workflow.
No problem on not making 1.2.0 - good work should never be rushed (and I've been really busy with other stuff lately, so it will take some time to get back up to speed with Siril, as it's been a while).
Also, this would probably end up needing a multi-phased approach - the initial short-term goal should be handling display transforms and tagging of linear data better, and just kicking the gamut can down the road.
At some point, either some digging needs to be done into the raw color management/color transforms, or we lean into focusing on Siril as a stacker and push the color management heavy lifting to other tools, with Siril focusing on preserving the metadata required to do that. (E.g. for input that originated from a DSLR/mirrorless raw file, don't do color transforms; just stack, and output a DNG that has appropriate color metadata for further processing. Note that it is possible, and in fact becoming very common, to have DNGs that are already demosaiced now that mobile devices are doing their own variations on stacking - these are often referred to as "linear DNGs", which is a bit of a misnomer because mosaiced DNGs are usually linear too...)
Cecile tagged me in the discussion. I’m afraid colour management is an area I know nothing at all about, at least implementation-wise, though I’m happy to help adapt any of my code if changes are required, or with testing. Great to see we have a willing volunteer to contribute in this area!
We do have a bit more of an interest in post-stacking processing now, with improved stretch capabilities and other new functionality since the 1.0 series, so ideally I guess we would handle the colour transforms correctly through the processing stages rather than assuming that would be done later by another application.
I haven’t encountered the phrase “perceptually linear”. It is potentially confusing. The usual phrase is “perceptually uniform”. The JzAzBz colorspace is an attempt at perceptual uniformity, meaning that equivalent numeric differences have equivalent perceptual differences, regardless of their positions in the colorspace.
Linear colourspaces are not perceptually uniform. Perceptually uniform colorspaces are not linear.
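To make that distinction concrete, here is a minimal sketch (not Siril code) showing that equal steps in linear luminance Y are not equal perceptual steps. It uses CIE L* lightness as a stand-in for perceptual uniformity along the gray axis:

```python
# Sketch: equal steps in linear luminance Y do not produce equal
# perceptual steps. CIE L* approximates perceptual uniformity along
# the gray axis, so the L* deltas shrink as Y grows.

def cie_lstar(y):
    """Convert relative luminance Y (0..1, white point Yn = 1) to CIE L*."""
    delta = 6 / 29
    f = y ** (1 / 3) if y > delta ** 3 else y / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16

ys = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]          # equal linear steps
ls = [cie_lstar(y) for y in ys]
steps = [b - a for a, b in zip(ls, ls[1:])]   # perceptual step sizes

for y, l in zip(ys, ls):
    print(f"Y = {y:.1f} -> L* = {l:6.2f}")
# The first linear step covers far more L* than the last one,
# i.e. the linear space is not perceptually uniform.
```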
Yeah. The description/high level design looks good to me, including:
When saving a file:
a. FITS files are saved in their working color space. A preference toggles whether ICC profiles should be embedded in saved FITS files (this defaults to TRUE).
b. 32-bit files in formats supporting ICC profiles (only TIFFs) will be saved in a linear color space with a linear ICC profile embedded in the file.
c. 8- and 16-bit files supporting ICC profiles will be mapped to the sRGB color space if they aren't already in it, and saved with an sRGB profile embedded.
d. 8- and 16-bit files that don't support ICC profiles will be mapped to the sRGB color space and saved with no embedded profile. I believe this best matches the assumption made by other software that such file formats use sRGB.
That's pretty much exactly what I was thinking. We might also want a linear option for 16-bit output (int or float).
A TODO (probably for a MUCH later MR) would be to eventually consider gamuts other than sRGB - but as I mentioned in earlier comments, for the time being just handling linear vs. nonlinear correctly is a massive step forward. Handling gamut is a whole other can of worms, although it could potentially offer some powerful tools for blending from multiple sources (for example, H-alpha captures lie outside the sRGB gamut).
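A quick back-of-the-envelope check of that H-alpha claim, using approximate CIE 1931 2-degree color-matching values near 656 nm and the standard XYZ-to-linear-sRGB matrix (a sketch, not Siril code; any negative channel means the color is outside the sRGB gamut):

```python
# Sketch: a monochromatic H-alpha line (~656 nm) falls outside sRGB.
# X, Y, Z are approximate CIE 1931 color-matching values near 656 nm.
X, Y, Z = 0.2835, 0.1070, 0.0000

# Standard XYZ -> linear sRGB (D65) matrix.
r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

print(f"linear sRGB = ({r:.4f}, {g:.4f}, {b:.4f})")
# g and b come out negative: the color cannot be represented in sRGB.
```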
Camera input profiles are an even bigger can of worms, given that pretty much any profile out there was generated from a reflective target (or generated to best match a reflective target), while 95%+ of Siril use cases involve recording emissive sources.