ZERO love for the superpixel???

What are superpixels?

From the PixInsight documentation:

The Superpixel method is very straightforward. It takes the four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The number of pixels in the resulting RGB image is one quarter of the original CFA image, having half its width and half its height. This method is very fast. It has virtually no artifacts and is very well suited for demosaicing oversampled images (images where the sensor resolution is considerably higher than the resolution of the optics).


I was thinking about how simple the algorithm would be to implement in code and came up with something like this (for a GRBG arrangement):
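Something along these lines works as a minimal sketch (hypothetical NumPy code, not anything from RawTherapee; GRBG means each 2x2 tile reads G R over B G):

```python
import numpy as np

def superpixel_grbg(cfa: np.ndarray) -> np.ndarray:
    """Collapse a GRBG CFA mosaic into a half-size RGB image.

    GRBG layout per 2x2 tile:
        G R
        B G
    Each tile becomes one RGB pixel; the two greens are averaged.
    """
    g1 = cfa[0::2, 0::2].astype(np.float64)  # green, even rows / even cols
    r  = cfa[0::2, 1::2].astype(np.float64)  # red,   even rows / odd cols
    b  = cfa[1::2, 0::2].astype(np.float64)  # blue,  odd rows  / even cols
    g2 = cfa[1::2, 1::2].astype(np.float64)  # green, odd rows  / odd cols
    return np.dstack([r, (g1 + g2) / 2.0, b])
```

The strided slicing keeps it fully vectorized, so even on a large raw array this costs almost nothing compared to a real demosaic.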

I really like the look for whatever reason. However, superpixel gets shot down upon every such feature request.

Is there any way possible to have the superpixel become one of those bottom-of-the-list de-bayering methods in RawTherapee?

Or do we need to use a separate tool to have this algorithm available? Will G’MIC do straightforward superpixel?

2 Likes

The original feature request was closed due to lack of information, specifically, evidence of benefits over the existing demosaicing algorithms. If an example is provided, the feature request will be re-opened.

1 Like

Anything which eliminates demosaicing, i.e. guesswork, seems to be beneficial, I reckon.

If you are asking about G’MIC, I believe I asked about or did this a long time ago for the sake of learning how to process images at a pixel level. You may be able to find it somewhere. I see the utility, but not sure if it is worth adding to something like RT. I recall dcraw having a half mode; maybe use that.

My early PlayRaws on the site used this method or others like it quite a lot, but I got tired of using 5 different techniques and tools to arrive at a result.

PS RT uses both LibRaw and dcraw. If you have the will, you could probably take advantage of the half mode.

I actually thought this was optimal as well and tried doing it in Python. Turns out it’s not a good idea! You will end up with strong color fringes when an edge partially covers a “super pixel”. An example with your image: imagine it’s split in half, dark to the left and bright to the right. The center column will then have a strong blue cast. Flip the bright and dark and it will have a red cast. A thin diagonal strand won’t be properly resolved either.
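The blue-cast case can be worked through numerically. A toy example, assuming an RGGB tile (R G over G B, which matches the blue/red casts described) straddling a vertical edge in a neutral gray scene:

```python
import numpy as np

# One RGGB tile straddling a vertical edge in a neutral gray scene:
# dark (0) on the left, bright (1000) on the right.
#   R G        0 1000
#   G B   ->   0 1000
tile = np.array([[0.0, 1000.0],
                 [0.0, 1000.0]])

r = tile[0, 0]                     # red sensel falls on the dark side
g = (tile[0, 1] + tile[1, 0]) / 2  # the two greens average both sides
b = tile[1, 1]                     # blue sensel falls on the bright side

print(r, g, b)  # 0.0 500.0 1000.0: a gray edge comes out strongly blue
```

Flip the edge and red and blue swap, giving the red cast instead.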


You will unfortunately get much better results using one of the softer demosaicing algorithms like LMMSE and then doing a 2x2 mean.
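The 2x2 mean step itself is just block-averaging; a minimal NumPy sketch, assuming `rgb` is an already-demosaiced HxWx3 array with even dimensions:

```python
import numpy as np

def mean_2x2(rgb: np.ndarray) -> np.ndarray:
    """Downscale a demosaiced HxWx3 image by averaging each 2x2 block."""
    h, w, c = rgb.shape
    # Split each spatial axis into (blocks, 2), then average within blocks.
    return rgb.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
```

Because the demosaicer has already reconstructed full RGB at every sensel, the averaging no longer mixes measurements from different color sites, which is what causes the superpixel fringes.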

6 Likes

The superpixel algorithm is, in a way, a demosaicing algorithm. It still guesses colors by assuming that the light at all four sensels in a superpixel is accurately represented by the red, average of the two greens, and the blue value. That’s why it can still produce demosaicing artifacts like the one Jakob mentioned.

2 Likes

If the sensor resolution matched the lens resolution then yes that could occur. I have 35MP and none of my lenses can resolve to that fineness. (The RAWs have a diagonal screen)

Now that it’s been mentioned, dcraw -h is the trick that rings a bell for me. BUT dcraw is ancient and doesn’t support CR3.

Ultimately a drag-n-drop superpixellator is what I’m after if inclusion in RawTherapee seems a no-go.


In RawTherapee with the None debayer method, the gap between sensor oversampling and the lens’s optical resolving power shows up as the moiré screen above.


The comparable area with my usual selection, the AMaZE+VNG debayer method, shows a fairly flat blue sky.

I get that there is no benefit to superpixel and that there are artifacts. But the same could be said about selecting None.

Here is a CR2 I’ve converted with dcraw -h -6 -T as an example
 Tiff converted to 100% JPG:


Here is the same image through RawTherapee AMaZE, Neutral profile, 100% JPG:


Looking at the RAW in RawTherapee with None method:


Same with Amaze:


A look at the dcraw -h -6 -T output (shows the edging artifact)


Link to RAW image: https://github.com/Benitoite/superpixel/raw/refs/heads/main/IMG_1557.CR2

5 Likes

Thanks for your examples.

I forgot to mention in my earlier post the reason that I used this method. It was to detect edges using the flaw that Jakob rightly pointed out. Nowadays, I am just lazy and use the green channel of a demosaiced image.

rawproc has the half demosaic algorithm, and (mostly) current libraw, so it should open CR3s and expose them to ‘half’ damage.


I was using it for proof processing, 800x600 renditions that didn’t seem to suffer the artifacts, but when I realized it only added maybe a second to processing 100+ images, I just switched my proof toolchain to RCD, and then I didn’t have to mess with it if I selected a rendition for better/further processing.

In that case, is it possible to produce an RGB image without demosaicing or guesswork of any kind? Perhaps by pixel-shifting?


Isn’t that Foveon?

I was referring to a Bayer CFA raw capture, not Foveon.

I think by definition, a Bayer (or XTrans) mosaic needs to be demosaiced to make RGB triples.

The superpixel @HIRAM refers to is really just the half algorithm, where the four adjacent RGGB measurements are just combined into one RGB triple with the GG measurements averaged. The resulting image is half the size of the original raw array. This has been in dcraw since dirt. It’s an exceedingly simple algorithm, not really a dimensions-preserving ‘proper’ demosaic, but it still deconstructs the mosaic.


2 Likes

The problem with the Bayer pattern (well, all colour filter arrays) is that the red, green, and blue pixels are spatially separate so creating a full RGB pixel requires guesswork (demosaicing).

If the image is perfectly monochromatic then perhaps you could skip demosaicing. The half-size demosaic would work if there are no image features with details finer than 2 pixels.

For me, 2 pixels is not worth crying over in the preview. Here’s my iOS app RAWUnravel using Superpixel (dcraw -h) for preview and AMaZE for the final output:

As you can see, the working preview has a tinge compared to the AMaZE output. The trade-off is that the preview takes only two seconds to generate, whereas the final output takes several seconds depending on your processing power. Nothing compared to the ultra-nice tiling methods, but it gets a result, albeit an iffy one. I originally came across superpixel in DeepSkyStacker, which did seem to handle relatively monochromatic star images pretty well.

Superpixel preview vs AMaZE demosaic

1 Like

Is AMaZE vs superpixel the only difference in the pipeline? I can see color differences (fingers, top), and the blue fringe looks like CA which is then removed.

1 Like

And the obvious difference is resolution as well, although I guess that’s implied in the algorithm

1 Like

I benchmarked some algorithms on my travel laptop in Darktable (-d perf). It is not a powerful machine by any means (and it has no GPU), but that’s what I have handy at the moment.

On a 20 MP image, RCD takes 0.01 s, while AMaZE takes 1.6 s. Given that RCD gives reasonable output very quickly, and that it is a significant improvement over superpixel, frankly, I just don’t see the point of the latter in practice.

This would be my explanation for @HIRAM’s original question: you can do so much better with existing methods at an imperceptible cost, so why bother with superpixel?

1 Like

Interesting. Never heard of RCD before just now. Something to test, as it appears from this thread that it’s available in librtprocess.