Fun with Summed Area Tables

Okay, it was my fault: I started it. Been pestering @garagecoder and @David_Tschumperle in PMs about summed area tables and related matters. It was also gcd’s fault for creating a traffic jam On the road to 3.0. This thread is a detour, a new road for all things SAT. Please post here, so others can follow.

The SAT is an old concept, coined in the year I was born, but used way before then. (I know many of you are way older than me: respect!) The concept doesn’t have to be limited to 1-2D either. One can generalize it to higher-dimensional images, e.g., SVTs (summed volume tables). You can read about SAT here: Summed-area table - Wikipedia.

They are also known as Integral images.
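For reference, the payoff is that if S(x,y) holds the sum of all pixels at or above-left of (x,y), any box sum comes out in four lookups, whatever the box size:

sum(x0..x1, y0..y1) = S(x1,y1) - S(x0-1,y1) - S(x1,y0-1) + S(x0-1,y0-1)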

It can be confusing though since some authors refer to them as two separate things. I suppose it may depend on how they are calculated.

@snibgo Any thoughts on precision? That is what I am currently wrestling with. I.e., by itself, the basic SAT-driven box filter looks okay, but when I sub it into my guided filter, it causes impulse artifacts; when I clip the image to range, I see random mini boxes and noisy edges. @garagecoder’s gcd_satboxfilter command doesn’t seem to cause such problems. I don’t know what he is doing differently that prevents them from occurring.

… since some authors refer to them as two separate things.

I haven’t encountered that. Can you give examples?

I use ImageMagick Q32 HDRI which internally uses (I think) 64-bit floats. When I need to save them, which isn’t often, I use 32-bit floats. I haven’t encountered precision problems. I expect 16-bit floats would have problems.

I use Integral Images extensively when I need windowed mean and/or standard deviation.
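In G’MIC terms, that boils down to two box averages; a minimal sketch (windowed_stats is a made-up name, and this uses the stock boxfilter rather than a SAT):

windowed_stats : skip ${1=3}
  repeat $! l[$>]
    +sqr boxfilter[-2,-1] $1  # -> [ E[x], E[x^2] ] over the window
    +sqr.. sub[-2,-1]         # -> [ E[x], E[x^2]-E[x]^2 ] (mean, variance)
    max. 0 sqrt.              # clamp tiny negatives, std = sqrt(variance)
  endl done

Each input image becomes a pair: its windowed mean and its windowed standard deviation.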

No problems using them for guided filters, but that’s a complex topic. I reckon there are multiple types of GF, which I call:

  1. “Simple”, when the guide image is single-channel (grayscale), or when the guide image is multi-channel (colour) and we want to apply each channel of the guide image independently to the input image (which might be RGB or grayscale); see the sketch after this list.

  2. “Complex” uses data from all guide channels for every output channel.

  2a. “GrayCol”, where the main image is grayscale and the guide is RGB; this is the same as “Complex” but does less work, so it is faster.

Not all implementations support the “Complex” type.
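To make type 1 concrete, here is a hedged sketch of the per-channel route (guided_simple is a made-up name; it reuses the afre_guided_single command defined later in this thread, self-guided per channel, with arbitrary parameters):

guided_simple :
  s c                                   # split into single-channel images
  repeat $! l[$>] afre_guided_single 1,400 endl done
  a c                                   # reassemble the channels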

Here is one experiment on precision: afre_gui0 outputs driven by the SAT box filter, clipped to [0,255]. Contrary to visual quality, image #2 actually uses the more precise SAT; i.e., its SAT box filter is closer to boxfilter (by PSNR). The assumption is that the artifacts will all go away if I get close enough to boxfilter.

The exact parameter values aren’t important but if you are interested I am using radius=1 and smoothing=1e-2, which are as low as I would go with my filter. In general, the higher the radius and smoothing, the more likely it is for the artifacts to go away. BUT the threshold values are different for every input image and the presence of discontinuities doesn’t scale with the parameters; i.e., the pattern isn’t predictable even though there is a clear trend.

I strongly suspect something in your guided filter is the root cause, rather than any accuracy problem with SAT. For example, here’s a reference self-guided filter using afre_box0:

afre_variance_patch : skip ${1=1}
  # Windowed variance as E[x^2]-E[x]^2, clamped at 0 to absorb rounding error.
  repeat $! l[$>] +sqr afre_box0 $1 sqr.. rv - max 0 endl done

afre_guided_single : skip ${1=1},${2=400}
  +afre_variance_patch $1 +add. $2 div[-2,-1]  # a = var/(var+epsilon)
  +afre_box0 $1 mul[0,-1] *.. -1 +.. 1         # I*box(a), (1-a), box(I)
  mul[-2,-1] afre_box0. $1 add                 # q = I*box(a) + box((1-a)*box(I))

I hope you can forgive me using your afre prefix! Anyway, the point is there are no extreme output values going on.
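(A quick way to check, with arbitrary parameters: gmic sp tiger afre_guided_single 1,400 print, then look at the min/max in the printout.)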

Good time to reexamine my custom guided filter. I haven’t touched it since the initial improvements on the vanilla version. The problem with going my own way is that I have to do my own pest control.

Why does it accept convolve with any kernel, boxfilter, blur and gcd_satboxfilter, but not this? It is also possible that I am one step away from making vanilla SAT box filtering compatible.

I doubt it will be badly wrong. Usually when things like that happen with my experiments it turns out to be a stray negative/infinity/nan, or just something outside expected range (too large or close to zero). Solutions are usually cut, max or add at the appropriate point. I usually narrow it down by commenting out from the end backwards to investigate the image stack. Painstaking, but often worth the effort.

I am mildly allergic to debugging, just as I am to kiwis. :kiwi_fruit: While it is entirely possible to avoid consuming kiwis, I guess it is only a matter of time before buggy code catches up to you.


As mentioned, I am trying little tricks to increase precision. What do you make of this? I am not sure what this means.

[image]

There is also something called double-double arithmetic. Basically, you pair up two values of a type so that the second carries the rounding error of the first, but I wonder how feasible that is in G’MIC, or if it is even practical.
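For what it’s worth, the building block of double-double is Knuth’s TwoSum, which recovers the exact rounding error of a single addition. A quick demo in the math parser (which already computes in double); the recovered error here should be -0.75:

gmic eval "a=1e16;b=1.25;sm=a+b;bb=sm-a;err=(a-(sm-bb))+(b-bb);print(sm);print(err)"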

I’ve already been thinking about that, because there’s increased error with increased x+y. The main sticking point is the added complexity of boundary handling. Not something I would like to tackle myself, but perhaps there’s a better way I haven’t thought of.

I made a stealth edit. What do you think of doubling up precision like that? It seems like a nightmare of value management and custom arithmetic.


Another possible optimization is to compute per-window (tiled) SATs: the running sums stay small, so the accumulated rounding error does too.

There’s also a “process twice with mirror xy” approach. I think the main thing to keep in mind is how badly you need the accuracy for a particular goal. Sometimes “good enough” is right in front of you! Higher-precision data types would be slower and use more RAM, so it’s always a trade-off.
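Something like this, perhaps (an untested sketch; mirror_box is a made-up name, afre_box0 stands in for the SAT box command, and switching halfway along the diagonal is a naive choice):

mirror_box : skip ${1=1}
  repeat $! l[$>]
    +mirror xy           # second copy, flipped in x and y
    afre_box0[0,1] $1    # SAT box filter on both copies
    mirror[1] xy         # flip the second result back
    # keep each pixel from the pass whose accumulation origin is nearer:
    f[0] "x+y<(w+h)/2?i:i(#1,x,y,z,c)"
    rm[1]
  endl done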

Dabbled. :slight_smile: Inefficient since you are processing the same pixels twice and then some.

With SAT in G’MIC, we have already made a trade-off. To me it is about the fun of trying stuff out, even if it is something as old as I am. :partying_face:

Yes, I’ll be surprised if there’s a way to improve accuracy significantly without also having a performance hit though, especially as we’re inside an interpreter. If I find it a problem for my upcoming ideas then maybe I’ll take a second look :wink:

A bit of fun with gcd_boxfilter_local:
[images: leaf, cameraman]

gmic sp leaf sp cameraman ri. ..,3 n 0,21 +gcd_boxfilter_local.. . gcd_boxfilter_local[1] [0] rm[0] add

For Integral Images (aka SATs) of large high-bit-depth images, we need high precision. For example, for DSLR images of 7500x5000 pixels at 16 bits per channel, the SAT needs to be recorded with 64-bit floats.

For smaller images, eg 600x600 pixels, 32-bit floats are sufficient.

This is based on a round-trip from an ordinary image to a SAT and back to an ordinary image. Doing that entirely within ImageMagick (Q32 HDRI):

%IM7DEV%magick %SRC% -process 'integim' -process 'deintegim' x0.tiff

%IM7DEV%magick compare -metric RMSE %SRC% x0.tiff NULL:

0 (0)

The image is re-created exactly.

But now save the intermediate result, the SAT, in a file:

%IM7DEV%magick ^
  %SRC% ^
  -process 'integim' ^
  -depth 64 ^
  -define quantum:format=floating-point ^
  sat.miff

%IM7DEV%magick ^
  sat.miff ^
  -process 'deintegim' ^
  -clamp ^
  -depth 16 ^
  -define quantum:format=integer ^
  x1a.tiff

%IM7DEV%magick compare -metric RMSE %SRC% x1a.tiff NULL:

For large images, we need to save the SAT with “-depth 64”, and we get a zero difference. If we have only “-depth 32”, the result is entirely wrong. Not simply “low quality”, but badly wrong. I don’t understand why.

For small images, “-depth 32” is fine.
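A plausible explanation (assuming the floating-point quantum format stores values normalized to [0,1]): the bottom-right SAT entry is then roughly W×H. For 7500x5000 that is 3.75e7, needing about 25 significand bits, just past the 24 bits of a 32-bit float, so neighbouring SAT entries collapse onto the same representable values and the de-integration differences come out wildly wrong rather than merely noisy. For 600x600 the maximum is 3.6e5, about 19 bits, which 32-bit floats represent exactly.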


I’ve added a G’MIC CLI command gcd_decumulate as the reverse of cumulate xy. This time it’s a constant window, so you only need to specify a size, like gmic sp cumulate xy gcd_decumulate 1. I assume it will take a little while to be available in the filters update.

To clarify, this is what I mean by afre_box0 being different from boxfilter. We have two issues. All images below are normalized for visual impact. It is easy to see how this may affect a guided filter, which performs box filtering on the input images, their variances, etc., multiple times. The more the command gets repeated, the more its flaws will show. It doesn’t take a very large image for artifacts to emerge.

1. Accumulation error from the SAT.

gmic rand500.png +afre_box0 1 boxfilter.. 3 - r {[w,h]-2},100%,100%,0,0,.5,.5

Difference (boxfilter size=3 vs SATbox radius=1 on 500x500 random source image)

[image: gmic_000001]

It gets worse with larger images; e.g., PlayRaw sized (1920x1280).

gmic rand1920.png +afre_box0 1 boxfilter.. 3 - r {[w,h]-2},100%,100%,0,0,.5,.5

2. From SAT to box filtering, ignoring boundary conditions (cropped edges).

gmic rand.png +afre_box0 1 boxfilter.. 3 - r {[w,h]-2},100%,100%,0,0,.5,.5

Difference (boxfilter size=3 vs SATbox radius=1 on 50x50 random source image)

[image: gmic_000004]


PS If I substitute afre_box0 with @garagecoder’s command, like so: +f 3 +gcd_boxfilter_local.. . rm.., I get the exact same result as #2.

Easy perhaps, but certainly not obvious. For a start, boxfilter is itself approximate (evident when comparing to convolve). You need to consider the way the variance is calculated (average of squares minus square of average) - in some cases that can lead to a “negative variance”, so that needs to be handled.

Then considering the (self-)guided filter: it’s basically a ratio between the original image and a blurred image. The amount of blurring comes from the ratio of the variance to itself plus an epsilon. So that will mostly affect the amount of blurring when a region has low variance.
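Spelled out (as I read the commands below), with I the input, m = box(I) and v the windowed variance: a = v/(v+epsilon), then q = box(a)*I + box((1-a)*m). Low-variance regions push a toward 0 (more blurring); high-variance regions push it toward 1 (edges preserved).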

An example self-guided filter using gcd_decumulate for box filtering:

gcd_box0 : cumulate xy gcd_decumulate $1

gcd_variance_patch0 : skip ${1=3}
  repeat $! l[$>] +sqr gcd_box0 $1 sqr.. rv - max 0 endl done

gcd_guided_single0 : skip ${1=3},${2=400}
  +gcd_variance_patch0 $1 +add. $2 div[-2,-1]
  +gcd_box0 $1 mul[0,-1] *.. -1 +.. 1
  mul[-2,-1] gcd_box0. $1 add

We can assume error is quite low at the top left, so a self comparison using a mirrored image will do:
gmic sp tiger +gcd_guided_single0 3,2000 mirror.. xy gcd_guided_single0.. 3,2000 mirror.. xy c 0,255

Not much difference there! If you’re looking for a reason to avoid a lengthy debugging session, that is not it :wink:

Sorry I’ve not been able to follow all the discussion, but there is maybe something to consider.
In the latest versions of G’MIC (2.9.5 and 2.9.6), a bug has been fixed in the cumulate command: basically, the accumulation value was stored as a float (32 bits) rather than a double (64 bits), and the accumulated errors were higher with the 32-bit version. I don’t know if this is related, but just in case…

Also, concerning double-valued images: it is currently possible to have G’MIC working with double-valued images, but:

  1. It replaces float by double everywhere, meaning it’s not possible to have both float- and double-valued images at the same time.
  2. It requires you to compile your own version of the interpreter, so it’s definitely not user-friendly and not easy to spread to others.
  3. float-valued images have been chosen as the default, because they are a good compromise between value precision (able to handle 16-bit integer images flawlessly) and memory usage.

However, there is a trick that can be used to manage double-valued images inside math expressions. Indeed, the math parser of G’MIC stores all its variables as double (not float), so you can actually manage an array (vector) of doubles in a math formula.
Something like this, here to compute the accumulation of values along the X-axis:

foo :
  eval "
    # Get image data as an array of 64 bits double.
    img = crop(#0);

    # Define macro to get/set pixel value in the array,
    # viewed as a 2D color image.
    pix(img,x,y,c) = (img[x + y*w#0 + c*wh#0]);

    # Compute image (here, accumulation along the X-axis).
    for (c = 0, c<s#0, ++c,
      for (y = 0, y<h#0, ++y,
        accu = 0;
        for (x = 0, x<w#0, ++x,
          accu += pix(img,x,y,c);
          (pix(img,x,y,c)) = accu;
        )
      )
    );
    copy(i[#0,0],img); # Transfer result back to 'float'-valued image
  "

Here, with G’MIC 2.9.6, it gives an equivalent result to cumulate x.
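An aside: the same loop could also carry compensated (Kahan) summation, which keeps the rounding error of the running sum in an explicit correction term. An untested variation of the command above (foo_kahan is a made-up name; loop variables are renamed so the per-pixel builtins x, y, c stay untouched):

foo_kahan :
  eval "
    img = crop(#0);
    pix(img,x,y,c) = (img[x + y*w#0 + c*wh#0]);
    for (cc = 0, cc<s#0, ++cc,
      for (yy = 0, yy<h#0, ++yy,
        accu = 0; comp = 0;            # running sum and its error term
        for (xx = 0, xx<w#0, ++xx,
          tv = pix(img,xx,yy,cc) - comp;
          sv = accu + tv;
          comp = (sv - accu) - tv;     # rounding error of this addition
          accu = sv;
          (pix(img,xx,yy,cc)) = accu;
        )
      )
    );
    copy(i[#0,0],img);
  "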

Aha thanks, very good to know the bigger data type also applies to vectors within math expressions!

@afre also thanks for demonstrating the drawbacks with summed area, it’s good to make everyone aware. There will definitely be occasions where it causes a problem. It seems to me many of the possible “fixes” would negate the speed advantage, but I think it’s still useful without that.

Edit: Hmm, I guess we could do a 64-bit cumulate inside the begin() portion of a fill? :thinking:
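Perhaps something like this (an untested sketch reusing the crop()/macro pattern from above; gcd_cumulate_x_fill is a made-up name): the whole cumulation is done once in begin(), on doubles, and the per-pixel part just reads the result back.

gcd_cumulate_x_fill :
  repeat $! l[$>]
    f "begin(
         img = crop(#0);
         pix(img,x,y,c) = (img[x + y*w#0 + c*wh#0]);
         for (cc = 0, cc<s#0, ++cc,
           for (yy = 0, yy<h#0, ++yy,
             accu = 0;
             for (xx = 0, xx<w#0, ++xx,
               accu += pix(img,xx,yy,cc);
               (pix(img,xx,yy,cc)) = accu;
             )
           )
         )
       );
       img[x + y*w#0 + c*wh#0]"
  endl done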