HDR processing in Siril?

Many astro targets are HDR - meaning they have a high dynamic range. Stars are the simplest case: an exposure long enough to capture dim stars will almost certainly saturate bright ones.

PixInsight and APP have HDR features - but I can’t seem to find one in Siril. Is it there? Or planned for a future release?

Well, you can still show bright stars and faint stars without saturating them using a histogram transformation. If you want to combine a short exposure and a long exposure, there is no specific feature in Siril for that, but I think people have managed to do it by stacking the differently processed images.
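
For context, the histogram transformation mentioned here is built around a midtones transfer function (Siril’s tool exposes the same midtones/shadows/highlights controls as PixInsight-style stretches). A minimal numpy sketch of the core curve, ignoring the shadow/highlight clipping that the real tool also applies:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1.
    x: linear pixel values scaled to [0, 1]; m: midtones balance (< 0.5)."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Stretch a linear image so faint stars become visible
# without clipping bright ones to white.
image = np.random.rand(16, 16)   # stand-in for linear stacked data
stretched = mtf(image, m=0.02)   # a small m lifts the shadows strongly
```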

I do not agree. IMO, only two targets need HDR: M42 and M31. That’s it (maybe some bright clusters like Omega Centauri and M13).
I think people often make really bad use of HDR tools.

For example, here: PixInsight — Multiscale Processing with HDRWaveletTransform
For me, the result is absolutely awful and the galaxy should never look like that.


I’ve actually used siril for non-astro work for this exact goal - stacking a bunch of terrestrial images exposed to preserve highlights in order to synthesize a very long shutter time.

Workflow for this was: ARW images into Siril, stacking, convert the resulting FITS into TIFF, rename to DNG, add appropriate DNG metadata, tonemap in RawTherapee.
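
For anyone wanting to reproduce the post-Siril steps, here is a rough sketch (filenames are made up, and the exact tag set is an assumption - a fully valid DNG needs more CFA metadata than shown, some of which exiftool may not write; copy real values from one of your input ARWs):

```python
import shutil
import subprocess

# Siril has already stacked the ARWs and exported the result as TIFF.
stacked_tif = "stack_result.tif"   # hypothetical Siril output
out_dng = "stack_result.dng"

# Step 1: DNG is a TIFF container, so copying under a .dng name is the
# "rename" step of the workflow above.
shutil.copy(stacked_tif, out_dng)

# Step 2: add minimal DNG metadata with exiftool so RawTherapee treats
# the file as camera raw. Values below are placeholders; a real file
# also needs ColorMatrix, CFA repeat pattern/arrangement, etc.
subprocess.run([
    "exiftool",
    "-DNGVersion=1.4.0.0",
    "-UniqueCameraModel=Sony ILCE-7M3",  # assumption: match your camera
    out_dng,
], check=True)
```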

It would be beneficial if, at some point, Siril could output a DNG file instead of FITS, with the appropriate metadata tags to match the input files (e.g. ColorMatrix for the camera in question, appropriate CFA repeat pattern and arrangement, etc.).

Once you have demosaiced and stacked your images you cannot retrieve a CFA image. Why, in this case, would DNG be better than a TIFF file?


Um, I would have thought you of all people would know that not all siril workflows demosaic, since it is your own software… The workflow for the image above most definitely had siril outputting a Bayer-CFA FITS that I converted to DNG and then demosaiced, color managed, and tonemapped in RawTherapee. Yes, there are other workflows that let you merge a sequence of images, but siril is one of the ones that doesn’t run out of memory when you have a 100+ image sequence (in this case, emulating a 150+ second exposure with a bunch of 1.5 second exposures).

Even in the case of exporting a demosaiced linear DNG vs a color-converted TIFF, last I checked siril was not using DCP profiles for camera color management, and implementing this in siril would be quite redundant compared to outputting in a format that can be used by later software with more complex camera profiling support. Edit: Also, DNG instead of TIFF has the benefit that it will gracefully fall back to a minimal color management solution (ColorMatrix1/ColorMatrix2) in the absence of a DCP profile for the camera that was used, in case the user does not wish to perform colorspace conversions inside siril and prefers to leave the conversion from the camera colorspace until near the end of the stacking pipeline.

If TIFF output without colorspace conversion is now in siril (it was not before, which required the slightly annoying intermediary step of converting FITS to TIFF), that’s at least a step forward, since it now only needs a rename-and-tag.

With respect to HDR and astro - you don’t need a deep-sky object to get an HDR target - star fields alone are HDR.

The range of stellar magnitudes that you can capture is MUCH larger than the dynamic range of current sensors - whether they are DSLR or specialized astro sensors.
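
To put rough numbers on that: by Pogson’s relation, a difference of Δm magnitudes corresponds to a flux ratio of 10^(0.4·Δm), so even a modest span of star brightnesses overwhelms a sensor. A quick check:

```python
# Flux ratio for a given magnitude difference (Pogson's relation).
def flux_ratio(delta_mag):
    return 10 ** (0.4 * delta_mag)

print(flux_ratio(10))   # 10 magnitudes ~ 10,000:1
print(2 ** 14)          # a 14-bit sensor codes at most 16,384 levels,
                        # and real dynamic range is lower due to noise
```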

Taking a shot as N copies of a shorter sub-exposure helps - a star that does not saturate in the subs will keep its correct color.

However, even with this approach it is all too easy to oversaturate stars.

Note that I am not talking here about garish HDR effects; my goal is much more modest - shoot rich, wide-angle star fields where the stars keep their natural color rather than being saturated to white.


Let me ask my question a different way. Siril currently lets you average N subs. Is it possible to specify weights to make this a weighted average?

I don’t think it’s possible with regular means. You may duplicate images to give them more weight…

Duplicating images to try to achieve weighting is cumbersome because the weights are not likely to be integer factors.

What about multiplying an image by a factor prior to stacking? That is essentially the same as a weighted average. I bet there is some way in Siril to do multiplication by a factor, or this might be expressed as a linear transformation of pixel values.

Good idea indeed, and it’s possible: the fmul command. The problem is that this command only works on the loaded image, so for each image you’ll have to run this sequence: load, fmul, save. Then you can stack your average without normalization and without rejection.
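
If you’d rather not do the load/fmul/save sequence by hand, the same per-image scaling can be scripted outside Siril. A minimal sketch with astropy (filenames and weights are made up):

```python
from astropy.io import fits

# Hypothetical subs and the weights we want in the average.
frames = {"sub_10s.fit": 2.0, "sub_20s.fit": 1.0}

for name, weight in frames.items():
    with fits.open(name) as hdul:
        # Multiplying by the weight has the same effect as fmul.
        data = hdul[0].data.astype("float32") * weight
        fits.writeto("scaled_" + name, data, hdul[0].header, overwrite=True)

# Then stack the scaled_* files in Siril as a plain average, without
# normalization (normalization would undo the weighting).
```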

Why are you saying without rejection? Why wouldn’t rejection work?

The reason to do a weighted average is to accomplish HDR - i.e. combine subs with very different exposures. At its heart, HDR is an average where you adjust for the exposure with a weight.

So, we expose one sub at 10 second exposure, and a second sub at 20 seconds. To combine them we must either multiply the 10 sec exposure by 2, or divide the 20 sec exposure by 2.

After this adjustment the subs are on the same basis and I think that rejection ought to work.
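
A quick numerical sanity check of that claim (synthetic numbers, ignoring noise and offsets):

```python
import numpy as np

flux = np.array([100.0, 500.0, 3000.0])  # photons per second, made up
sub_10s = flux * 10                      # 10 s exposure
sub_20s = flux * 20                      # 20 s exposure

# Scale the short sub up by 2 (or the long one down by 2):
print(np.allclose(sub_10s * 2, sub_20s))  # True: same basis, as long as
                                          # neither sub is saturated
```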

Same question for normalization, why couldn’t you do normalization also?

It just depends on what you want to achieve. With HDR, people sometimes want one bright image of the faint part of a nebula and another image with only the bright part of it. In that case, the images are very different and normalization will not produce the desired effect, and if the levels don’t match, rejection will also do nonsense (anyway, rejection on two images cannot work).

If you choose the weights so that your images end up at the same level, OK, maybe it can work - it’s not easy - but then I don’t see what good you can get out of this technique.

I don’t like this method.
Why?

Imagine you are shooting M42 at different exposures.
At 60 s, the heart of the nebula will be saturated. In this case the method will not work (unless we give a weight of 0 to the saturated pixels, etc., but it is not always easy to tell whether a pixel is saturated).
For me, the best thing to do is to stack each exposure time separately, then to compose the results with specialized software like Enfuse.

I have the idea, maybe someday, of using Enfuse in Siril.

Any method to capture a subject with wide dynamic range, like M42, will need a range of exposures such that every part of the subject appears unsaturated in at least some of the photos.

In daytime photography that means a series of shots with different exposure time. In astro we typically have a series of many subs at each exposure, which can actually help avoid saturation in some cases.

True HDR processing would give zero weight to saturated pixels. It is (almost certainly) possible to do this in Siril.
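
To be concrete about what “zero weight to saturated pixels” means, here is a minimal numpy sketch of such a merge (synthetic data; real code would also subtract bias/dark and handle pixels that saturate in every sub):

```python
import numpy as np

def hdr_merge(subs, exposures, sat_level=0.98):
    """Merge linear subs of different exposure times into one flux map.
    Saturated pixels get zero weight so they never pollute the average."""
    subs = np.asarray(subs, dtype=np.float64)
    t = np.asarray(exposures, dtype=np.float64)[:, None, None]
    weights = np.where(subs < sat_level, t, 0.0)  # zero weight if clipped
    flux = subs / t                               # per-second flux estimate
    wsum = weights.sum(axis=0)
    return (weights * flux).sum(axis=0) / np.maximum(wsum, 1e-12)

# Two synthetic subs, 10 s and 20 s, values normalized to [0, 1].
short = np.array([[0.10, 0.50], [0.90, 0.10]])
long_ = np.array([[0.20, 0.99], [0.99, 0.20]])   # bright pixels clipped
print(hdr_merge([short, long_], [10.0, 20.0]))
```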

Enfuse is also doing a weighted average - and it has options for what to do with saturated pixels. The main difference is that Enfuse and other exposure-blending approaches make a linear file that they envision you can just use directly, unlike HDR, which makes an HDR file that you then have to tonemap.

Tonemapping operations - done with Photomatix, Aurora HDR, and other programs - can simulate Enfuse quite easily, or do something wilder.

So Enfuse is actually a particular subset of HDR.

VERY specific - in fact it’s almost entirely unique in that it’s one of the only examples of:
Multiple LDR images in, tonemapped LDR out - without EVER having an intermediary HDR step

Also note that enfuse does NOT make a linear output - in fact it’s explicitly designed to work in colorspaces that are “perceptually linear”, which means NOT mathematically linear, since human perception is fundamentally nonlinear.

It can be used to tonemap a single HDR input by generating synthetic LDR exposures from the HDR input, then fusing them to a tonemapped LDR output - see the links below and the sketch after them:
compressing dynamic range with exposure fusion | darktable (sadly, this implementation is horribly broken and will never be fixed)
http://www.hdrplusdata.org/hdrplus.pdf - Google’s HDR+ algorithm, which uses a variation on the enfuse algorithm to tonemap the output of their tiled align-and-merge pipeline
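
To make the fusion idea concrete, here is the “well-exposedness” weight at the heart of the exposure-fusion technique enfuse implements, in a naive per-pixel sketch (the real algorithm adds contrast and saturation measures and blends with Laplacian pyramids to avoid halos; function names here are mine):

```python
import numpy as np

def well_exposedness(x, sigma=0.2):
    # Gaussian weight peaking at mid-gray: favors pixels that are
    # neither underexposed nor blown out.
    return np.exp(-((x - 0.5) ** 2) / (2 * sigma ** 2))

def naive_fusion(ldr_stack):
    """ldr_stack: shape (n_exposures, H, W), values in [0, 1]."""
    ldr_stack = np.asarray(ldr_stack, dtype=np.float64)
    w = well_exposedness(ldr_stack)
    w /= w.sum(axis=0, keepdims=True)   # normalize weights per pixel
    return (w * ldr_stack).sum(axis=0)  # weighted per-pixel blend

# Three made-up exposures of the same scene:
stack = np.stack([np.full((2, 2), v) for v in (0.1, 0.5, 0.9)])
print(naive_fusion(stack))
```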

Unfortunately, getting siril output ingested into other tonemapping tools is not user-friendly at all, as described in my previous post.