Processing a nightscape in Siril: a tutorial

As some here asked, I’m trying to write a sort of tutorial for processing a nightscape in Siril.

For this purpose, I’m sharing the raw files I used for the image I presented here, except that for this tutorial I limited the number of frames for the sake of bandwidth and processing speed. You can find and download the raw files here.

Setup

The raw files are placed in distinct subfolders according to their use:

  • bias/offset frames → ./Bias (20 files)
  • dark frames → ./Darks (15 files)
  • flat field frames → ./Flats (15 files)
  • main subject/light frames → ./Lights (10 files)

Bias, dark and flat field frames are also called “calibration” frames: their purpose is to improve image quality by improving the signal-to-noise ratio (the role of the bias and darks) and by correcting vignetting (the role of the flats). There are several places, such as this one, where you can find more information about the different types of frames used in astrophotography.

At the root of the folder, I placed two text files with the .ssf extension; these are scripts used by Siril for batch processing the files. Quite useful. If you want to run a script from Siril, place the .ssf files in ~/.siril/scripts. On restarting Siril, a new Scripts menu appears in the top menu bar, allowing you to launch the installed scripts.

I suggest you download the whole folder, and move the scripts as indicated above. This way, if you set the working directory in Siril to the root of the folder, launching the script named processing_from_raw.ssf will automagically process the raws and create the output image in both .fit and .tif (16-bit) formats.
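
If you’re wondering what these scripts actually are: a .ssf file is just a plain-text list of Siril console commands, one per line, with # marking comments. As a minimal sketch of the general shape (this is not the contents of processing_from_raw.ssf, and the command syntax is the Siril 0.9.x one):

    # refuse to run on an incompatible Siril version
    requires 0.9.10
    # then, for each subfolder: convert the raws to FITS, calibrate and stack them,
    # e.g. for the lights:
    cd Lights
    convert light

Console equivalents of the individual steps are sketched in the corresponding sections below.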

Step-by-step processing

I will present the steps I used to process an image of the Milky Way. I don’t know if it’s the best way, but it’s probably close to what the developers of Siril advise for the general case when starting from raw files (actually, I started from one of their scripts and just slightly adapted it).

We will start with processing the calibration files, and then processing the lights.
The first step is to set the working directory to the root of the folder by clicking on “Change dir…”:

1. Preparing the bias frames:

We will use the 20 bias frames to generate a master-bias frame. To load the bias frames, click on the “+” button as shown (make sure that you select “RAW DSLR Camera Files” in the combo box) and select the bias frames located in the Bias subfolder:

In the “Sequence name” field, enter “bias” (or whatever you see fit) to set the prefix of the sequence and of the subsequent files, and click “Convert” to convert the files to the FITS format, which is the main format used by Siril. Note that you don’t need to demosaic the files yet, so make sure the “Debayer” box is unchecked.
When done converting the bias frames, a window will pop up showing a preview of one of the bias frames. Note that since it’s not demosaiced, it will only show as a single-channel B&W image.
At this point, the bias frames are loaded and ready to be processed to make a master-bias frame.
Now go to the “Stacking” tab, and choose “Average stacking with rejection” as stacking method, and “No normalisation” under the normalisation combo box. You can leave the Sigma parameters at their default (unless you know or want to experiment for better values). It should look like this:

Click on the “Start stacking” button. The resulting master-bias frame will be saved as bias_stacked.fit in the Bias subfolder.
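
For reference, the console/script equivalent of this step is roughly the following (Siril 0.9.x syntax, assuming the working directory is set to the root of the folder; “rej 3 3” means average stacking with rejection and the default sigma values):

    cd Bias
    # convert the raw bias frames into a FITS sequence named "bias"
    convert bias
    # average stacking with rejection, no normalisation -> Bias/bias_stacked.fit
    stack bias rej 3 3 -nonorm
    cd ..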

2. Preparing the flat field frames:

Since the flats also contain the sensor readout noise (contained in the bias frames), we should remove it by subtracting the master-bias.
First, go back to the “File conversion” tab, remove the files already loaded (using the button located just below the “-” button), then click on the “+” button to select and load the flat frames located in the Flats subfolder. Enter “flat” as the sequence name; as for the bias frames, leave “Debayer” unchecked and click on “Convert”.
Go to the “Pre-processing” tab, check only the “Use offset” box, click on “Browse” to select the Bias/bias_stacked.fit file, and click on “Start pre-processing”:

Now to generate the master-flat, go to the “Stacking” tab, this time set Normalisation to “Multiplicative” (and still “Average with rejection” for the stacking method) and click on “Start stacking” to produce the pp_flat_stacked.fit master-flat frame in the Flats subfolder:
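
The console/script equivalent of the flats treatment would look roughly like this (same assumptions as above; the relative path to the master-bias reflects my folder layout, adjust it if yours differs):

    cd Flats
    # convert the raw flat frames into a FITS sequence named "flat"
    convert flat
    # subtract the master-bias -> pp_flat sequence
    preprocess flat -bias=../Bias/bias_stacked
    # average stacking with rejection, multiplicative normalisation -> pp_flat_stacked.fit
    stack pp_flat rej 3 3 -norm=mul
    cd ..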

3. Preparing the dark frames:

As for the bias and flats, you need to load the dark frames. Go back to the “File conversion” tab, remove the files already loaded, then select and load the dark frames located in the Darks subfolder, with “dark” as the sequence name. Again, leave “Debayer” unchecked and click on “Convert”.
The darks need to be stacked the same way as the bias frames. In the “Stacking” tab, choose “Average with rejection” and “No normalisation”, and click on “Start stacking”:


Now the master-dark frame is saved as Darks/dark_stacked.fit.
Note: if you often shoot in the same conditions (same air temperature, same exposure settings), you can keep the dark_stacked and pp_flat_stacked files and re-use them to process future light frames faster. I read on some forums that some astrophotographers keep their calibration files and use them for around 1 year before taking new calibration frames.
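
Again, the rough console/script equivalent (Siril 0.9.x syntax):

    cd Darks
    # convert the raw dark frames into a FITS sequence named "dark"
    convert dark
    # average stacking with rejection, no normalisation -> Darks/dark_stacked.fit
    stack dark rej 3 3 -nonorm
    cd ..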

4. Preparing the light frames:

Now it’s time to start processing the light frames, by first subtracting the master-dark (which also contains the bias signal) and dividing by the master-flat (from which the bias has already been subtracted).
As before, select and load the light frames in the “File conversion” tab, with “light” as the sequence name and still without debayering.
Then go to the “Pre-Processing” tab, check “Use dark” and select the Darks/dark_stacked.fit file, then check “Use flat” and select the Flats/pp_flat_stacked.fit file. Make sure that the other boxes are checked as in the following screenshot:

Note that “Cosmetic Correction” can also be done from the “Image Processing” tab.

Click on “Start pre-processing”; this will produce new FITS files with the prefix pp_light_ and the corresponding .seq file. These files are then loaded as the current sequence, and before the next steps it’s time to demosaic them. There’s something strange in the GUI here: after pre-processing, when you uncheck the “Use dark” and “Use flat” boxes, the “Debayer FITS images before saving” option and the “Start pre-processing” button become grayed out.
To demosaic the files, you have to go back to the “File conversion” tab, remove the selected files and load the 10 pp_light_000xx.fit files, now check the “Debayer” box, enter “db_pp_light” as the sequence name, and click “Convert”:

The pre-processed lights will be saved as FITS files, and the corresponding db_pp_light.seq file loaded. Two preview windows will open this time, one with the 3 RGB channels separated, and one with the RGB composite image.
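
If you go the console/script route instead of the GUI, the preprocess command can debayer the calibrated frames directly, so the re-conversion workaround described above isn’t needed. As a sketch (Siril 0.9.x options, paths matching my folder layout):

    cd Lights
    # convert the raw light frames into a FITS sequence named "light"
    convert light
    # subtract the master-dark, divide by the master-flat, apply cosmetic
    # correction on the CFA data, and debayer the result -> pp_light sequence
    preprocess light -dark=../Darks/dark_stacked -flat=../Flats/pp_flat_stacked -cfa -debayer
    cd ..

In that case the calibrated and debayered sequence is simply pp_light, and that is the sequence you would register and stack in the next step (instead of db_pp_light).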

Now go to the “Register” tab, and simply click “Go register”, keeping the default option. If you have more than 8 GB of RAM, you can try checking the “Simplified Drizzle x2” box (it will up-sample the images by a factor of 2, increasing the RAM usage by a factor of 4). Siril will detect the stars and register each of the 10 images. The preview windows will be updated. By the way, you can play with the zoom and select “AutoStretch” to get a better preview of the selected image:

Next, go to the “Stacking” tab and make sure that “Average with rejection” is still selected as the stacking method, and that “Additive with scaling” is set for Normalisation. Click on “Start stacking”. The resulting aligned and stacked image will be saved as Lights/r_db_pp_light_stacked.fit. At this point, you can also save the resulting image as JPEG, TIFF, PNG, etc. for further processing in your favorite image editor.
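
The console/script equivalent of the registration and stacking, run from the folder containing the db_pp_light sequence (“addscale” corresponds to “Additive with scaling”):

    # detect stars and align the frames -> r_db_pp_light sequence
    register db_pp_light
    # average stacking with rejection, additive normalisation with scaling
    # -> r_db_pp_light_stacked.fit
    stack r_db_pp_light rej 3 3 -norm=addscale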

5. Post-processing the image

Siril can apply some more or less specialized post-processing to your image. I found it worth using.

  • While the stacked image is still loaded in Siril, you can apply a log transform (at this point the image is still linear). I haven’t found how to do it in the GUI, but you can simply type “log” in the “Console” field at the bottom of the main window (see the console sketch after this list).
  • Still in the console field, you can use the “crop” command followed by the bounding box in pixels (x, y, width, height) to crop the image (some auto-detection tools in Siril require the image to be cropped, to remove the borders introduced by aligning the images, before they work properly). For example, my image can be cropped by typing “crop 30 30 5950 3970”.
  • You can apply green noise removal in the “Image Processing” tab > “Remove Green Noise…”.
  • Lucy-Richardson deconvolution can be applied in “Image Processing” tab > “Deconvolution…”. 10 iterations and a Sigma value of 0.6 are a good starting point.
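
Chained together in the console, the commands above look roughly like this (the crop values are just my example, and the output file name is only an illustration; as far as I know the deconvolution step has no console command in 0.9.x, and if I remember correctly the green noise removal is also available as the rmgreen command):

    # log transform of the (still linear) stacked image
    log
    # crop away the borders introduced by registration (x y width height)
    crop 30 30 5950 3970
    # remove the green noise
    rmgreen
    # save a 16-bit TIFF for further editing
    savetif final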

The resulting image can be saved as JPEG, TIFF, PNG, etc. for further processing in your favorite image editor or as a finished image if you’re satisfied.

6. Processing for the foreground

The “problem” with this whole process is that, because the images have been aligned using the stars as reference, the foreground will be blurred, since the Earth rotated between successive frames. What I do is reprocess the light frames from just after the calibration step (i.e. after dark subtraction and flat correction), simply skipping the star registration step. By doing so, the foreground undergoes the same pre- and post-processing, and the resulting image has a sharp foreground and a trailed sky.
I provided a script (processing_from_raw_foreground.ssf) which will do that for you, if you already used the first script or if you use the same file naming convention as in the script.
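
In console terms, the only difference from the “sky” version is that the register command is skipped and the calibrated, debayered sequence is stacked directly, along these lines (a sketch reusing the naming from above; the exact commands in the script may differ slightly):

    # no "register" step: stack the calibrated lights as they are,
    # keeping the foreground fixed and letting the stars trail
    stack db_pp_light rej 3 3 -norm=addscale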

Finally, in your favorite image editor, you can combine the “sky” and “foreground” images using a mask, to get both the sky and the foreground sharp.

Here’s what I obtained following these steps (but using the scripts), after just combining the 2 images in Gimp:

And after quick curve and saturation tweaking in Gimp:

Feel free to let me know if some instructions are not clear enough, or if I can improve this tutorial in any way (typos, omissions, or anything). Thanks for reading!


Oh, this is exactly what I was looking for! Shouldn’t we put this as an article on the front page, pixls.us? :slight_smile:


Yes, but we need to proofread and test it :slight_smile:


Yes, please proofread it and test it!

Awesome! Thanks for following through with it. :night_with_stars:

@sguyader Great tutorial, thanks!

Also thanks for the scripts! If you don’t want to follow all those steps, it’s just a matter of running the scripts (although I’d advise against that, for the sake of learning the fundamentals of astrophotography)

One note: in the link you provided about the types of frames, I read this about the darks:

It is important that dark frames are taken at the same thermal temperature as the lights since the thermal signal is dependent on temperature. They should also have the same exposure length and ISO.

In other words, this means taking the darks during the shooting session, right?

In that case, assuming you’re going to take dozens of light frames, you must cover the lens with the lens cap and shoot (or let some kind of automated script running inside the camera keep shooting). You repeat this until you have the desired number of darks. Example: light #1 → light #2 → … → light #10 → dark → light #11 → … → light #20 → dark → … and so on.

On the other hand, the other types of frames do not necessarily have to be taken during the session.

Do you agree?

This might be out of left field so to speak: ideally, I think it would always be good to take the dark frames, colour reference, etc., during the time of the shooting, but it could be tedious, impractical, overkill or not always possible to do.

Indre, as @afre said, dark frames should ideally be taken during the session. Often, astrophotographers take their dark frames at the end of the shooting session. An intervalometer can be used to take the number of dark frames you need in an automated way. Yes, it’s quite annoying, but it really helps fight the noise.


Thanks @sguyader
Another question: you didn’t subtract the bias from the darks, as the link you provided suggests doing.
Any particular reason?

Well, that’s also a question I’ve asked myself. In fact, I followed the steps found in the scripts uploaded on the Siril website, thinking that they follow the gold standard.
The bias master frame is subtracted from the flat field master frame, but not from the dark master frame. Why? I don’t know. It’s something we should ask the authors of Siril.


Here’s what the Siril guys say about Bias/Offset and Dark frames (excerpt from here):

Dark

Dark frames are made at the same exposure time and ISO as the subject light frames but in the dark: use your lens/telescope cap or close the shutter for example. They contain the thermal noise associated with the sensor, the noise being proportional to temperature and exposure time. Hence, they should be made at approximately the same temperature as the light frames, this is the reason why we make dark frames at the end, or in the middle of the imaging session. Like with the BIAS frames, the more dark exposures are used for the calculation of the master dark, the less noise will be introduced into the corrected images. The master dark frame should be created by stacking dark frames with the median algorithm (or Winsorized by checking the rejection levels at the end of the process, they should be lower than 0.5 percent), but be sure to use No Normalisation .

WARNING: Remember that dark frames are always composed from real dark signal and bias signal. If you don’t apply dark optimization, you can leave the bias signal and your masterDark will be in fact masterDark+masterBias. In consequence subtracting this master to the light frames will remove both signals. However, applying dark optimization makes things different by multiplying masterDark by a coefficient factor not equal to 1. In this case, you must subtract masterBias from each dark frame.

So, my interpretation is that if you follow the tutorial, subtracting the master-dark frame from the light frames also removes the bias/offset signal. This is why the bias/offset master frame by itself is only used in the treatment of the flat field frames.
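
In equation form (schematically, per pixel), that reading is:

    light ≈ sky signal + dark signal + bias
    master-dark ≈ dark signal + bias
    light − master-dark ≈ sky signal

The flat, on the other hand, is divided rather than subtracted, so its bias has to be removed beforehand (hence the pp_flat = flat − master-bias step).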


In summary, no need to subtract bias from darks, correct?

That’s my understanding, using the workflow I posted.
In fact, a dark frame also contains the bias/offset signal, so it makes sense: subtracting the master-dark removes both the “true” dark signal and the bias signal.


Yes, I think it averages out, but @gadolf you could try both just to be sure. :nerd_face::joy_cat:


A good tutorial, and I’m busy going through it. I’ve got Siril 0.9.10, running on a MacBook Pro under Mojave (10.14.4).

One point I would like to make: you say ‘Now go to the “Register” tab, and simply click “Go register” keeping the default option’ (which I take to be ‘One Star Registration’). I cannot simply click “Go register”; I have to ‘select an area in image first’. So the question is: what do I select? A small portion of the image, or the entire image?

Cheers,
Biff

Hi,

First of all, thanks for trying to follow the tutorial.
I don’t have Siril in front of me, but I think by default it was set to “Global star alignment”, which is what you need to use for alignment of untracked frames (as there is some rotation, and “One star registration” allows only for shifting, not rotating). Maybe there’s a difference in defaults since version 0.9.10, or in the Mac version.
So, try to set the method to “Global star registration” and see if it works.


I did. It did!! :+1:
Thanks for the prompt reply.

Cheers,
Biff

Glad it worked! I just checked the latest version from git (0.9.11), and here on Linux the default is still “Global star registration”. So maybe it was a Mac thing.

By the way, my original tutorial has been edited as a pixls.us article here.

Edit: @Biff, before trying my tutorial, did you by any chance select “One star registration” yourself in a previous attempt? I see that if you select “One star registration” and close the program, on the next launch Siril keeps the algorithm you selected the last time.

Hello,

reading the above contributions, I think I should clarify somewhat the meaning of the technical terms dark, bias and flat field.

dark: Each pixel in a CCD accumulates charge (i.e. signal) even without light falling on it. This is due to the thermal motion of the atoms in the detector. As such, it can vary from pixel to pixel, it is a function of temperature and it is (except in hot and cold pixels) proportional to the exposure time. The noise introduced by the dark in the image cannot be removed by dark subtraction. Dark subtraction can only remove patterns in the dark distribution. Therefore it makes sense to derive a dark frame as an average over as many darks as possible, each with an exposure time as long as possible. In astronomical practice, dark frames are taken with an exposure time of at least one hour.
bias: This is a constant signal introduced by the electronics of the CCD camera to avoid the occurrence of negative values in the data due to read-out noise in an unsigned integer representation. If the bias is not subtracted, the CCD data are not linear, with severe consequences for image calculations! In an astronomical data reduction workflow, the first step is to subtract the bias from all frames. Then you have to bother no more about it. However, this only works correctly with 32-bit data or higher, not with unsigned 16-bit integer data. If you do not subtract the bias from the dark frames, subtracting the correctly averaged darks from the images also, of course, subtracts the bias.
Flatfield: It not only corrects vignetting, as stated above, but its prime purpose is to remove the pixel-to-pixel sensitivity variations in the detector and any global sensitivity variation across the detector. Each pixel has its own individual sensitivity to light. So even if the detector is illuminated by a spatially constant light flux, its signal would not be constant without the flatfield correction. Dark frames are not flatfield corrected since the source of the signal is not the incoming light. Depending on the exposure time, flatfield frames may have to be dark subtracted. Bias has to be subtracted from them in any case.
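
Schematically, putting these corrections together, the usual per-pixel calibration discussed in this thread is:

    calibrated = (light − dark − bias) / flat_norm
    flat_norm = (flat − bias) / mean(flat − bias)

where the flat is normalised by its mean so that the division does not change the overall signal level. In the workflow above, the master-dark already contains the bias, so the numerator simplifies to light − master-dark.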

I do not know if the raw CCD data from a consumer camera are already flat-fielded to some extent by a camera-specific flat field frame stored in the camera. This may well be the case, and would reduce the purpose of flat field frames to removing vignetting in the optics, as stated above.

I hope this helps in understanding the procedures described above.

If you want to read the gory details, I can refer you to the EMVA standard 1288 or the book by Richard Berry & James Burnell (2005), “Handbook of astronomical image processing”, which is aimed at the amateur astronomer.

Hermann-Josef


@Jossie, I bookmarked this post. I’ve been following threads in the DPReview Astrophotography forum for about a year now, and from that developed a fragmented understanding of these terms. Your explanations are quite clear and complete - Thanks!

The practitioners on that forum who use consumer cameras do all of the above, mostly with CMOS sensors. Even for my regular photography, understanding these mechanics has helped me more appropriately consider the phenomenon of noise in the lower end of my cameras’ sensors.