In your opinion, would it be interesting to add a new alignment algorithm to Siril, to offer users more accurate stacking of planetary images?
I feel like I could try to integrate “my algorithm” into Siril, maybe by trying to understand register_shift_dft() and the code around it, and taking inspiration from it. But I’m afraid I don’t have enough spare time to browse and understand Siril’s code base.
The following code gives the general principles of my idea:
- I hope this may help me get some feedback about the new alignment algorithm. As I’m really a beginner in image analysis, I might be missing important aspects and proposing a dead end here, or something that wouldn’t often be usable or accurate.
- Also, if the code below seems relevant, it might help me get some kind of support from the Siril devs.
/* assuming
Mat img;     // image loaded using, for example,
             // imread(filename, IMREAD_COLOR);
Mat pattern; // pattern to search for in img
             // (e.g. Siril's selection)
*/
/* we need a temporary 'result' buffer holding the matching scores
   of pattern at every location within img */
int results_w = img.cols - pattern.cols + 1;
int results_h = img.rows - pattern.rows + 1;
Mat result;
result.create( results_h, results_w, CV_32FC1 );
/* this OpenCV function is the key: it detects the pattern's location
   in img. My Python script uses patterns converted to greyscale
   images, but results seem good with colour pattern images too. */
matchTemplate( img, pattern, result, TM_CCORR_NORMED );
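/* optional variant, just a sketch of the greyscale conversion
   mentioned above; my assumption here is that img and pattern were
   loaded with IMREAD_COLOR, hence the COLOR_BGR2GRAY code. Matching
   on single-channel images is what my Python script actually does. */
Mat img_grey, pattern_grey;
cvtColor( img, img_grey, COLOR_BGR2GRAY );
cvtColor( pattern, pattern_grey, COLOR_BGR2GRAY );
matchTemplate( img_grey, pattern_grey, result, TM_CCORR_NORMED );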
/* a few variables used to analyse the result just computed */
double ignored1;
double score;
Point ignored2;
Point matchLoc;
/* score will be the matching score of pattern within img [0 .. 1.0];
   matchLoc.x and matchLoc.y the coordinates of the best match in img. */
minMaxLoc( result, &ignored1, &score, &ignored2, &matchLoc, Mat() );
/* at this point, assuming we first compute matchLoc on the reference
   image, we can then compute it for every other image and derive the
   x/y shifts of the pattern relative to the reference image, keeping
   only matches with score >= 0.9 for example, as in ecc.cpp's
   findTransform(). See the sketch below. */
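To make that last step concrete, here is a minimal sketch of how the per-image shifts could be derived. Everything specific in it is my assumption, not Siril code: refLoc would come from running the same matchTemplate/minMaxLoc steps on the reference frame first, and SCORE_MIN is a placeholder name with 0.9 as a placeholder value.

/* hedged sketch, not Siril API: refLoc is assumed to hold the
   best-match location previously computed on the reference image */
const double SCORE_MIN = 0.9;
Point refLoc; /* filled in earlier, from the reference image */

if (score >= SCORE_MIN) {
    /* pixel shift of the current image relative to the reference */
    int shift_x = matchLoc.x - refLoc.x;
    int shift_y = matchLoc.y - refLoc.y;
    /* the shifts would then be stored in Siril's registration data,
       presumably the same way register_shift_dft() does it (I have
       not checked how that structure is filled) */
} else {
    /* score too low: reject this frame, similarly to what
       findTransform() does when ECC does not converge */
}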