Signal, noise and lucky imaging

Tempted by the examples published here, in which some great photographers showed stacks composed of thousands of frames, I’ve tried my own humble experiment in this field. I’ve taken “only” 730 frames of M1 with a 10 s exposure time. Obviously 730 × 10 s is not enough to get a completely clean image, but it is comparable with other 2 h images I’ve taken of the same object with more traditional exposure times.

But I’m not here to talk about my pictures; I’m here to talk about how powerful Siril is. I don’t know if you are aware that Siril can be used as a command-line program, without the UI, to run scripts.
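
Something like this, for example (the binary name and flags can differ a bit between Siril versions and packages, so check `siril-cli --help` on yours):

```bash
# Run a Siril script file headless, without opening the GUI.
# The .ssf file contains ordinary Siril commands (convert, register, stack, ...).
siril-cli -s my_stack.ssf
```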

I’ve used this feature, combined with bash scripting, to perform an incremental stack of the 730 frames: first a stack of 2 images, then 3, then 4… up to a final stack of all 730 frames. So at the end I have 730 result frames, each one the result of a stack that goes from 1 image up to 730. I’ve combined them in sequence (again, using Siril) into a SER file to “see” how the signal emerges from a sea of noise.
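
The loop itself is nothing fancy. Here is a rough sketch of the idea rather than my exact script: frame names, paths and the stacking options are placeholders, and the exact Siril command syntax (`convert`, `stack`, `-out=`) should be double-checked against the scripting docs for your version.

```bash
#!/bin/bash
# Incremental stacking sketch: for each n, sum-stack the first n registered frames.
# Assumptions (adjust to your data): 730 registered frames named
# r_m1_00001.fit ... r_m1_00730.fit in ./registered, GNU coreutils,
# and a Siril version whose script commands match the current docs.
for n in $(seq 2 730); do
    work="stack_$(printf '%04d' "$n")"
    mkdir -p "$work"

    # Link only the first n frames into this run's working directory.
    for i in $(seq -f '%05g' 1 "$n"); do
        ln -sf "$PWD/registered/r_m1_${i}.fit" "$work/"
    done

    # A tiny generated Siril script: build a sequence from the linked frames
    # and sum-stack it into one result image for this n.
    cat > "$work/partial.ssf" <<EOF
cd $PWD/$work
convert part
stack part_ sum -out=result_$n
EOF

    # Run it headless.
    siril-cli -s "$work/partial.ssf"
done
```

Sum stacking keeps each step simple; with only two or three frames there is not much for a rejection algorithm to work with anyway, and frame n of the resulting animation is then just the n-frame stack.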

[animated GIF: lineal_F001-730]

Without Siril this video of stacks would not have been possible, so thank you very much for continuing to provide scripting-capable tools in these days when UI-only tools are prevalent :slight_smile:


Interested in writing a tutorial for the website? :slight_smile:

And thanks for this kind message :slight_smile:

That’s awesome! Thank you, and congrats for the image too!

A tutorial? Do you mean an article explaining how I’ve generated this animation and the (very simple) scripts used to automate Siril?

Yes, I’ll do it, of course. But this will be a one-shot tutorial… I don’t believe anybody will need it :wink:

That’s a really nice visualization - and what makes it even better is that it doesn’t look like what I would have expected it to. What would make a nice addition, in my opinion, is a frame counter, a progress bar or something like that. Anyway, really awesome work!

Just one tidbit: a 53 MB GIF is probably not the most efficient way of encoding a few seconds of black-and-white images. :slight_smile:

@Jonas_Wagner do you have a better approach that could be shown in a web browser? I’m very interested in how to show this animation in a more efficient/universal way.

I’d suggest encoding it as a webm video. Gave it a quick shot to see whether Discourse can deal with it:

The only real drawback on the web nowadays is that Apple still refuses to (widely) support it, so for the widest browser support it would generally be advisable to upload an H.264 video, which should work almost everywhere. On the other hand, it’s patent-encumbered and, given the context - a forum about FOSS graphics software - probably out of place. :slight_smile:

Update: looks like it works, and even at about 1/20th of the size of the original the quality is bearable in my opinion.
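
In case anyone wants to reproduce that, a typical ffmpeg invocation looks roughly like this. It’s a generic example rather than the exact command I used; the input name is the GIF posted above and `-crf` controls the size/quality trade-off.

```bash
# VP9 in a webm container: very good compression, limited support on older Apple browsers.
ffmpeg -i lineal_F001-730.gif -c:v libvpx-vp9 -b:v 0 -crf 33 -an lineal.webm

# H.264 in mp4 as a play-almost-everywhere fallback (patent-encumbered, as noted above).
# yuv420p and even dimensions are needed for broad player compatibility.
ffmpeg -i lineal_F001-730.gif -c:v libx264 -crf 23 -pix_fmt yuv420p \
       -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -an lineal.mp4
```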

Does not work on an iPhone or iPad unfortunately

That is amazing. Was it taken on a tracking equatorial mount? With a DSLR? I only have an alt-azimuth mount for a DSLR, so I can only take exposures a few seconds long, and I never thought there would be such a huge difference between a few dozen and a few hundred short exposures (I always assumed the main difference would come from having deeper exposures rather than from reducing random noise). Maybe it’s time for me to try a multi-night shooting session with some 100 images per night :thinking:

Siril can export to webm.

No, it’s not a DSLR, it’s a planetary camera, the QHY5III462C. The secret to being able to use short exposures is the sensor’s read noise. On my camera it is around 1 e-/px, so I can stack very short exposures, but on a DSLR you have between 6 and 10 e-/px; only some recent Sony cameras have a read noise around 1 e-. If your read noise is higher, you will need a minimum exposure length to keep your signal above this error. For a DSLR I would try 30-second exposures.
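
If you want a rough number for that minimum exposure, the usual rule of thumb is to make the sky background shot noise a few times larger than the read noise. A small sketch of that estimate; the read noise and sky rate below are just example values, not measurements from my setup:

```bash
# Rule of thumb: make the sky shot noise ~3x the read noise,
# i.e. t_min ≈ 9 * RN^2 / sky_rate. Values below are examples, not measurements.
read_noise=1.0   # e-/px, e.g. a low-read-noise CMOS camera at high gain
sky_rate=2.0     # e-/px/s of sky background; depends on your sky, optics and gain
awk -v rn="$read_noise" -v sky="$sky_rate" \
    'BEGIN { printf "minimum useful sub-exposure: about %.1f s\n", 9 * rn * rn / sky }'
```

The minimum sub-exposure grows with the square of the read noise, which is why very short exposures pay off so much on low-read-noise sensors.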