Hi,
First of all, thanks to everyone involved in Siril, it's amazing software and the documentation is great!
Lately I've mainly been doing wide nightscapes of the Milky Way using a DSLR (Nikon Z6) and a 20mm f/1.8 lens, stacking RAW frames of 10s exposure each. I've managed to make some nice images, but I'm trying to see if I can make them better.
For this, I would like to make the constellations more visible. With a decent sky, in real life, the eye can easily distinguish them from the background, but my images look a bit "flat": so many stars that they just look like a sea of white dots. I'm guessing this has something to do with how the human eye responds non-linearly to low light at night, which is different from a CMOS sensor (and from how its signal is rendered on an LCD screen in a well-lit room).
I believe the information I need is in the image, but hidden in indistinguishable shades of white. In other words, I want to accentuate the difference between bright stars and less-bright ones.
I cannot use the GHS transform or any other stretching function for this because:
1/ it's hard not to alter the Milky Way background, since star and background luminosity ranges overlap
2/ the wings of a bright star's Gaussian profile fall into the same dark tones, and I don't want to alter those while decreasing the luminosity of the fainter stars
Therefore I cannot use a global function (like GHS or PixelMath) on the image; I need to modify it locally around each star. My main idea is to use the result of the PSF star finder, combine the computed magnitude and FWHM into a per-star coefficient, and apply a local stretch around each star with that coefficient, probably driving it with pySiril (rough sketch below).
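To make this concrete, here is a rough sketch of what I have in mind, with photutils' DAOStarFinder standing in for Siril's PSF star finder (in practice I would run findstar through pySiril instead). The filenames, the 0.15-per-magnitude slope, the window size and the thresholds are all placeholders I would have to tune:

```python
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

# Stacked linear image; assuming a 2-D luminance frame here
# (a colour stack would need per-channel handling)
data = fits.getdata("stacked.fit").astype(np.float64)

# Background stats for the detection threshold
mean, median, std = sigma_clipped_stats(data, sigma=3.0)

# Stand-in for Siril's PSF star finder; fwhm/threshold are guesses
finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)
stars = finder(data - median)  # Table: xcentroid, ycentroid, flux, mag, ...

mask = np.ones_like(data)                  # multiplicative attenuation mask
mag_ref = np.percentile(stars["mag"], 10)  # brightest ~10% stay untouched
half = 10                                  # local window half-size, in pixels
sigma = 4.0 / 2.355                        # assumed FWHM -> Gaussian sigma

for row in stars:
    # Attenuation grows with faintness; 0.15/mag is an arbitrary slope
    k = float(np.clip(1.0 - 0.15 * (row["mag"] - mag_ref), 0.3, 1.0))
    if k >= 1.0:
        continue  # bright star: leave it (and its dim Gaussian wings) alone
    x0, y0 = int(row["xcentroid"]), int(row["ycentroid"])
    ys = slice(max(y0 - half, 0), min(y0 + half + 1, data.shape[0]))
    xs = slice(max(x0 - half, 0), min(x0 + half + 1, data.shape[1]))
    yy, xx = np.mgrid[ys, xs]
    # Smooth Gaussian footprint so the dimming blends into the background
    r2 = (xx - row["xcentroid"]) ** 2 + (yy - row["ycentroid"]) ** 2
    w = np.exp(-r2 / (2.0 * sigma ** 2))
    mask[ys, xs] *= 1.0 - (1.0 - k) * w

fits.writeto("attenuated.fit", (data * mask).astype(np.float32), overwrite=True)
```

The point is that the mask stays at 1 everywhere except in a small Gaussian footprint around each faint star, so the Milky Way background and the wings of bright stars (my two objections above) are left alone.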
But before going down this rabbit hole, I'd like some advice. Has someone here tried this before? Is there a better/simpler way to achieve the result I want?