CFALD — CFA-Luminance Drizzle

With Siril, I’ve managed to get my concept tested. My HDD head crashed last year, so I’ve lost all my raw subs. If anyone has OSC or DSLR data, preferably already in a Siril project, that they would be happy for me to test on, that would be great!
Here is the process:
CFALD — CFA-Luminance Drizzle

A RAW-domain luminance extraction technique for OSC astrophotography

Author: Shaun Slade (2025)
Version: 1.0

Abstract

CFA-Luminance Drizzle (CFALD) is a novel processing technique for one-shot colour (OSC) astrophotography that extracts a high-signal-to-noise-ratio luminance channel directly from the RAW Bayer CFA before demosaicing. By averaging each native 2×2 RGGB block into a single luminance value and mapping it back into the same 2×2 region, CFALD preserves full-resolution image dimensions while achieving true binning-level noise reduction. Drizzle integration then reconstructs subpixel detail lost in the averaging step, particularly when the subs are dithered. The resulting luminance frame behaves similarly to a dedicated mono L exposure and can be combined with an RGB stack produced from the same subs via the standard interpolated workflow, enabling a full LRGB workflow using only OSC data.

1. Method Overview

CFALD operates purely in the RAW domain, prior to interpolation, colour mixing, or debayering.
This provides cleaner noise characteristics than any synthetic luminance derived from RGB images.

1.1 CFA-to-Luminance Mapping

Each 2×2 Bayer block contains one R, one B, and two G samples. CFALD computes:

L = (R + G₁ + G₂ + B) / 4

This luminance value is then written back into the same 2×2 region, producing a full-resolution frame composed of uniform 2×2 luminance tiles.
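The mapping is simple enough to sketch in a few lines of NumPy. This is an illustrative version only (the function name and interface are mine, not the reference implementation, which isn’t released yet), assuming a calibrated CFA frame with even dimensions:

```python
import numpy as np

def cfa_luminance(raw):
    """CFALD mapping: average each 2x2 RGGB block, then write the
    average back into all four positions of the block, so the frame
    keeps its native dimensions.

    raw : 2D calibrated CFA frame (even height and width).
    """
    h, w = raw.shape
    blocks = raw.astype(np.float64).reshape(h // 2, 2, w // 2, 2)
    lum = blocks.mean(axis=(1, 3))       # (R + G1 + G2 + B) / 4 per block
    # Tile each block average back into its 2x2 region
    return np.repeat(np.repeat(lum, 2, axis=0), 2, axis=1)
```

The output has the same shape as the input, which is what keeps registration and drizzle subpixel offsets valid.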

This step preserves:

registration compatibility

star profile geometry

stack alignment consistency

drizzle subpixel offsets

while still giving the noise reduction of a true 2×2 bin.
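The binning-level noise reduction follows from standard error propagation. Assuming the four CFA samples carry roughly equal, uncorrelated per-pixel noise σ, the averaged luminance has

σ_L = √(σ_R² + σ_G₁² + σ_G₂² + σ_B²) / 4 = σ / 2

i.e. the same 2× SNR gain as a true 2×2 hardware bin.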

2. Drizzle Reconstruction

Because each subframe is naturally offset (or deliberately dithered), drizzle integration can recover resolution normally lost to averaging. Drizzle reassigns the luminance values onto a finer sampling grid, providing:

improved detail retention

smoother low-SB gradients

reduced fixed-pattern noise

faithful reconstruction of subpixel structure

The result is a full-resolution, high-SNR luminance master.
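For intuition, here is a heavily simplified point-kernel drizzle sketch (the pixfrac → 0 limit). Real drizzle, as implemented in Siril, uses an area-overlap drop footprint and proper weighting, so treat this function and its interface as illustrative only:

```python
import numpy as np

def drizzle_point(frames, offsets, scale=2):
    """Minimal point-kernel drizzle sketch.

    frames  : list of 2D luminance arrays (same shape)
    offsets : per-frame (dy, dx) subpixel shifts in input-pixel units
    scale   : output oversampling factor

    Each input pixel is deposited at its shifted centre on a finer
    grid; overlapping deposits are averaged via a weight map.
    """
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(out)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        oy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox), frame)   # accumulate signal
        np.add.at(wgt, (oy, ox), 1.0)     # accumulate coverage
    return np.divide(out, wgt, out=np.zeros_like(out), where=wgt > 0)
```

With well-dithered offsets, the deposits fill in the fine grid and the 2×2 luminance tiles are resolved back toward native sampling.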

3. Workflow Summary

Load RAW CFA frames.

For each 2×2 block, compute luminance:
L = (R + G₁ + G₂ + B) / 4
and write L back into the corresponding 2×2 region.

Register all luminance subs.

Drizzle-integrate the luminance stack.

Stack RGB separately (standard debayered workflow) using the same subs, as in a super-luminance approach.

Combine CFALD L with RGB using an LRGB blend, typically with star protection.
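The final step can be done several ways; one simple ratio-based LRGB blend (a hypothetical helper, not necessarily the exact blend in the Siril workflow) rescales each colour channel so the per-pixel luminance matches the CFALD L:

```python
import numpy as np

def lrgb_combine(lum, rgb, eps=1e-6):
    """Replace the luminance of an RGB stack with the CFALD L channel.

    lum : 2D array in [0, 1]; rgb : (H, W, 3) array in [0, 1].
    Ratio-based transfer: scale all three channels by L_new / L_old,
    which preserves hue while adopting the cleaner luminance.
    """
    l_rgb = rgb.mean(axis=2)                # existing RGB luminance
    gain = lum / np.maximum(l_rgb, eps)     # per-pixel scaling factor
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

In practice you would add star protection on top of this (e.g. blend the RGB star cores back in with a star mask), as noted in the step above.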

4. Practical Results

Testing on DSLR OSC datasets shows:

~1.5–1.8× practical SNR gain over debayered RGB luminance

visibly improved dust-lane structure

fainter galaxy halos recoverable without chroma noise

higher stretch tolerance

reduced fixed-pattern noise (even without dithering)

LRGB behaviour similar to mono + RGB workflows

5. Limitations and Considerations

Best results are achieved with dithering, which enables optimal drizzle reconstruction.

Flats and biases must be applied before CFALD extraction.

Very undersampled data may show block artifacts before drizzle.

Works best on galaxy and nebula structure; star cores should remain RGB-only.

Conclusion

CFALD enables OSC users to generate a true luminance channel directly from RAW sensor data, achieving mono-like luminance behaviour without a dedicated mono camera. This technique meaningfully enhances faint-structure detectability and noise performance using the same acquisition time, offering a new LRGB processing workflow for OSC imagers.

December 8th 2025



I’ve just used this method on my 2023 NA nebula data:

This is 5h of full-spectrum DSLR data with a kit lens, ~4.5hrs @ f/5.5(!)
1st is my original OSC stack.
2nd is using CFALD.

No AI, no noise painting — just using the raw sensor data more efficiently.


Here are crops to show noise:
1) OSC workflow:


2) CFALD workflow.

Sorry, this was with my Sigma 105mm @ f/4.

CFALD_Siril_Workflow_v1.0.pdf (355.5 KB)
Here is the exact workflow I use for Siril.

The current Python implementation is still experimental and tightly coupled to how different RAW libraries decode CFA data, so I’m not releasing it publicly just yet. I want to be absolutely sure that the reference implementation matches the Siril workflow 1:1 and produces consistent results across different cameras before putting it out there.

Once the method is fully validated and the tooling is stable, I’ll make a clean, documented version available. For now, if anyone wants to test CFALD properly, I’m happy to process your pp_light (no debayer) files and return the CFALD luminance and stacked results so you can compare directly on your own data.


A comparison of M42, small crop.
1300D, Sigma 105mm @ f/4.5
1) RGB Starless Monochrome


2) CFALD Starless.

I’ve stretched both L and RGB identically and made starless versions to compare structure and noise.

The CFALD luminance consistently shows lower noise and smoother faint detail.

v1.021
Modified section 4.3…

4.3 – Spatial Resolution

Contrary to long-standing assumptions in OSC processing, combining the four CFA samples inside each RGGB block does not destroy spatial resolution.
This belief persisted for decades because the OSC community implicitly treated the 2×2 Bayer pattern as if it were four independent spatial samples, exactly like a 2×2 set of mono pixels.

In reality, the four CFA pixels in an RGGB cell are not four independent spatial samples.
They are four filtered spectral measurements of the same underlying spatial location.
The spatial sampling is determined by the sensor pixel grid itself—not by the colour channels.

Key Insight

  • Combining mono pixels destroys spatial information, because each pixel represents a different point on the sky.

  • Combining CFA pixels does not destroy spatial information, because R, G1, G2, and B represent different spectral components of the same spatial point.

For this reason, CFALD can combine the four CFA values into a luminance estimate while keeping all four pixel coordinates intact.
The sensor’s full spatial sampling is preserved, and dithering offsets between frames remain valid.
This allows the CFALD luminance stack to be drizzled—reconstructing spatial detail close to the sensor’s native resolution.

This is fundamentally different from hardware 2×2 binning, which collapses four pixel positions into one and permanently removes the subpixel sampling information required for drizzle.
CFALD bins radiometric values while preserving geometric sampling.
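The distinction can be made concrete in a few lines of NumPy (a toy illustration; the input values are arbitrary):

```python
import numpy as np

raw = np.arange(16.0).reshape(4, 4)   # toy 4x4 CFA frame

# Hardware-style 2x2 bin: four pixel positions collapse into one,
# so the subpixel sampling needed by drizzle is gone for good
binned = raw.reshape(2, 2, 2, 2).mean(axis=(1, 3))
assert binned.shape == (2, 2)

# CFALD: the same radiometric average, but written back to all four
# positions, so the native pixel lattice (and dither offsets) survive
lum = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1)
assert lum.shape == raw.shape
```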

Historical Note

To our knowledge, no existing OSC workflow—commercial or amateur—has ever produced a luminance channel by combining CFA values before debayering while maintaining native pixel geometry and then stacking/drizzling it as full-resolution data.
The prevailing assumption that “2×2 CFA combination must lose resolution” prevented this approach from being explored.

CFALD demonstrates that this assumption was incorrect:
CFA combination does not reduce spatial sampling, and OSC sensors can produce a true high-SNR luminance channel at full resolution.

So, things are moving along quickly.

Sorry about the repeated messages.

Here is an updated whitepaper attached.

TLDR:
Announcement
The significance of CFALD lies in demonstrating that the Bayer mosaic can be exploited in a way previously overlooked: the four CFA measurements inside each RGGB cell can be combined into a high-SNR luminance estimate without reducing the sensor’s spatial sampling. Because the geometric lattice is fully preserved, drizzle reconstruction remains effective, recovering a large portion of the high-frequency structure normally associated with mono luminance. This correction in our understanding of Bayer sampling allows OSC cameras to deliver a mono-like luminance channel through software alone, substantially improving broadband workflow efficiency and data quality. While CFALD does not eliminate the physical advantages of true monochrome sensors, it narrows the gap significantly and constitutes a meaningful advancement in OSC image processing.
CFALD Whitepaper v1.3.pdf (124.5 KB)

Sounds cool, but can we use one thread for now? You have about five open for what, from what I can see, is basically the same thing.
