first stack seq build (cfa-debayer)

I’ve been having an issue today after installing the latest beta to try out the new features.

My workflow is basically to load my OSC images, check the ‘CFA’ debayer option, run the conversion, and then carry on to the next steps.

I don’t use scripts, as the manual process makes more sense to me.

But the very first step, converting the .fit files and building the .seq, just never finishes.

I figured it might be the beta version, so I went back to 1.2.5, which I had before.

After reverting to 1.2.5, the same problem persisted. My computer gets hung up completely and I have to force a reboot, as memory usage goes completely out of whack.

In the console log window, I’ve been seeing:
Out of memory in debayer_buffer_new_ushort (…/src/algos/demosaicing_rtp.cpp:110) - aborting

But for some reason this doesn’t get written to the actual log file. I don’t know why.

I also don’t have a clue why it’s running into an out-of-memory error when I have plenty of memory.

I’ve never had any issue with Siril before at all, except what’s happening now.

So I’m not able to figure out what is going on or why it is happening.

If I don’t check the CFA debayer option, the .seq gets built fine, but then the color data is stripped out. Not checking CFA would make sense if I had mono data, but I don’t.

Between installs, I made sure all Siril-related files were removed from the computer so that the previous version wouldn’t be affected by leftovers. But it was anyway.

Let me know what other information is needed to make sense of this problem.

I last used Siril 1.2.5 back in March ’25 and had no issues at the time.

Hello and welcome,
Converting with debayering is heavy on memory; people rarely do it because debayering is supposed to happen after calibration. Still, this shouldn’t happen; I thought I had fixed it a while ago. There are a few things you can check:

  • do you have the default settings for memory management? They are visible in the Performance tab of the preferences
  • how many processors does Siril detect, and how much available memory? This is written at the top of the main window
  • lowering the number of processors should fix the problem if it’s not a settings issue, but you’d have to do it on every Siril launch

Good luck


I had played with those numbers, lowering them and raising them as well. Higher wouldn’t be good, but I wanted to see. Neither helped.

I debayer because my images are from an OSC camera; if they were mono it wouldn’t have mattered much. Plus, as noted, without debayering I could build the sequence and stack, but I couldn’t do anything more with the images.

The solution turned out to be increasing the RAM. I’m glad it fixed it.

It would be nice if Siril could utilize the graphics card; if it could, I wouldn’t have run into this issue.

But one can hope.

GPU won’t help for preprocessing, where the bottleneck is I/O.


Again, you’re not supposed to convert with debayering, as it’s supposed to be done after calibration.
So you’re saying Siril is not correctly detecting the amount of free memory and you had to install more. Can you tell us how much you had before and after, how many processors are detected, and what size your images are?
Thanks

“Again, you’re not supposed to convert with debayering as it’s supposed to be done after calibration.”
–So even for OSC shots, debayering shouldn’t be done beforehand during the .seq build? It would be nice to have that noted in the manual processing documentation.

My guess is that it’s the amount of hard drive space that has filled up now compared to before, which might have kept Siril from managing memory well enough. I had a similar issue about one or two years ago and ended up dedicating a specific hard drive just to Siril processing, which actually helped, and I’ve been using Siril fine ever since.

My images are 26 MP at bin 1. I was able to process about 30–40 hours of data with debayering just fine. Sure, the process was slow, but it was working.

Previously the RAM was 16 GB, and now I’ve gone to 64 GB. I use the computer for other heavy processing too, so I figured I’d just max it out to help those processes as well; I was planning to upgrade the RAM at some point anyway, just not this early.
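For a rough sense of the numbers involved (my own back-of-the-envelope estimate, not anything taken from Siril’s source): a 26 MP frame debayered to 16-bit RGB needs about 26,000,000 pixels × 3 channels × 2 bytes ≈ 156 MB in memory, so 20 parallel workers would want roughly 3 GB for output buffers alone, before counting input buffers, intermediate copies, and the rest of the system.

```python
# Back-of-the-envelope memory estimate for parallel debayering.
# All numbers here are assumptions for illustration, not Siril internals.

MEGAPIXELS = 26_000_000   # 26 MP sensor at bin 1
CHANNELS = 3              # RGB after debayering
BYTES_PER_SAMPLE = 2      # 16-bit (ushort) data

def per_image_bytes(pixels=MEGAPIXELS, channels=CHANNELS, bps=BYTES_PER_SAMPLE):
    """Size of one debayered frame held in memory."""
    return pixels * channels * bps

def parallel_bytes(threads=20):
    """Output buffers if every thread holds one debayered frame at once."""
    return threads * per_image_bytes()

print(per_image_bytes() / 1e6)   # ~156 MB per debayered frame
print(parallel_bytes() / 1e9)    # ~3.1 GB for 20 frames in flight
```

With 16 GB total RAM shared with the OS and other applications, a few gigabytes of headroom disappearing into debayer buffers can plausibly tip things over.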

The way this issue happened makes me wonder where Siril is having a hard time, basically. I shared the log message with Cyril on Bluesky and he helped explain what it meant, which is what led me to getting more RAM.

But I just can’t understand why I started having RAM issues now and not before. There could be memory problems in my computer, but pretty much everything else works fine; it was only Siril that really had a hard time moving forward.

Thank you for clarifying that.

I don’t understand the memory problem then. One question you haven’t answered is:

how many processors does siril detect for how much available memory? written at the top of the main window

About the documentation, it is already there:

This option should generally not be used if the images are bias, dark and flat images, or light images intended to be pre-processed.

and in the manual pre-processing tutorial too, with less explanation:

Check that the Debayer option is unchecked (we will do debayering later)


That part doesn’t make sense to me.

How is memory tied to the number of processors? (Speed-wise, yes, but beyond that I’m not aware of a connection.)

I let Siril use all 20 processor cores available on my machine. The memory setting was whatever the default value was, which I can’t remember because I’ve changed it now.

With the default memory setting (and 16 GB of RAM before the upgrade) and all 20 processors in use, Siril was working fine with the workflow I described all this time, despite your clarifying otherwise (and thank you for pointing out the notes, much appreciated).

So my guess is something changed such that Siril is also relying on something that isn’t obvious from the memory and processor settings. My first guess was the beta version acting up, but going back to older and older versions didn’t yield a solution either.

It has to be the amount of space on the HD that the memory system could be using to offset the process or something, because after upgrading the RAM everything is back to working the way it was.

The relation between the number of cores and memory is simple: each core processes one image, so memory has to be large enough for all of them. But there are protections most of the time, to avoid loading more than can fit in memory. Conversion is a bit different because it doesn’t know the image size before opening the files; it has been modified to open the first image and compute how many images fit in memory, so it should still not fail.
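The batching scheme described above could be sketched like this (a simplified illustration with made-up names and numbers, not Siril’s actual code):

```python
# Sketch of the batching logic described above: open the first image,
# derive its in-memory size, and cap the batch at what fits in RAM.
# Function name and numbers are illustrative, not Siril internals.

def images_per_batch(first_image_bytes, available_ram_bytes, n_threads):
    """How many images can be processed at once without exceeding RAM."""
    fit = available_ram_bytes // first_image_bytes
    # Never run more images than threads, and always process at least one.
    return max(1, min(fit, n_threads))

# A ~156 MB debayered frame, 16 GB usable RAM, 20 threads:
print(images_per_batch(156_000_000, 16_000_000_000, 20))  # threads are the limit
# The same frame with only 2 GB free: memory, not threads, is the limit.
print(images_per_batch(156_000_000, 2_000_000_000, 20))
```

If the size estimate or the free-memory reading is wrong at any point, a cap like this would be computed too high, which would match the out-of-memory abort seen here.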

I’ll try to see if something changed in 1.4 that would explain the regression.
Thanks for your help.


That is what I was thinking too, that there might be a regression.

But then I kept wondering, as I tried older and older versions: it should have worked the way it always had for me.

Right now, using the beta version, I haven’t had any issues since I upgraded the RAM. I didn’t want to make you paranoid about this, as I’m still wondering myself, but at least I’m over the hurdle where I thought I could never use Siril again. That would have been very upsetting.