Kernel gymnastics

Papers on image processing examine novel ways of using and computing kernels. Let’s discuss!

1 Non-traditional ways of calculating the convolution, whether to produce the same result more efficiently or to produce a more elegant one.

2 An adaptive kernel that changes size or shape based on the input, other kernels, or masks.


Note: I am slow to understand math and code and I am not the only one. Remember to make the conversation accessible to our visitors. Thanks!


To start the conversation, here are two topics:

A I am reminded of the exchange between @Reptorian and @garagecoder (Reptorian G'MIC Filters - #367 by Reptorian) about a variable kernel size box filter based on a mask. I wonder (a) whether there are new ideas or developments in this area, and (b) what the applications have been.

B I started asking about calculating distances (along specific paths) in kernels in G'MIC exercises - #605 by afre with the goal of detecting features such as edges. In retrospect, movement cost (common in games like The Battle for Wesnoth, which I highly recommend BTW, and in GIS) might be a more apt description than distance.

> A I am reminded of the exchange between @Reptorian and @garagecoder (Reptorian G'MIC Filters) about a variable kernel size box filter based on a mask. I wonder (a) whether there are new ideas or developments in this area, and (b) what the applications have been.

Theoretically, you could simulate depth of field more accurately; that's one example I can think of. Another theoretical application is better edge detection. Garagecoder's code is quite fast; however, I would definitely appreciate a way to use custom convolution with a variable kernel size, and to be able to blur by map. It would open the door to new things.
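
For the "blur by map" part, here is a minimal sketch of a cheap approximation (not a true variable-size kernel): blend a sharp and a heavily blurred copy per pixel, driven by a mask in [0,1]. The mask below is just the normalized image norm, standing in for whatever map you would supply:

$ gmic sp cat +norm. n. 0,1 +b[0] 5 f[0] "lerp(i,i(#2,x,y,z,c),i(#1,x,y))" k[0]

A true mask-driven kernel would need per-pixel kernel selection, which is where this thread is headed.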

Even though gcd's code is faster than yours, it is still quite inefficient, since most of the computed information is discarded. However, I don't know what it would take to operate on each pixel neighbourhood only once. Of course, as I alluded to in the OP, there are papers that share findings on how to do things like that, but they are above my ability to implement. :sweat_smile:

A generic adaptive convolve would certainly be nice. :wink::laughing:

I think all of those are things only @David_Tschumperle, or someone who understands the G'MIC codebase well enough, can do. I don't know how to implement it at the code level, and I don't think it can be done via G'MIC scripting.


Just thought of another idea: erode/dilate could be improved here. Imagine being able to use a mask.

You can already specify a kernel for erode or dilate, albeit a constant one. I guess what you are asking for is a guided erode or dilate.
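
For example, with an arbitrary 3x3 cross-shaped structuring element passed as a kernel image (a sketch):

$ gmic sp cat '(0,1,0;1,1,1;0,1,0)' erode[0] [1] keep[0]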

It is actually a lot faster. I let your command run in the background without tic / toc; on my system, I would say that it takes more than 10 min. :sweat_smile: In any case, I don't think I would have a use for either result. Perhaps, with a few conditional weights, it would be more robust.

Not quite right! The "mask" of erode/dilate can be weighted, like a real kernel.

Could you explain what you mean?

Try a non-binary kernel, similar to correlation/convolution, with the option "real-mode==true" in erode/dilate. They can then be considered a sort of non-linear convolution!
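
If I understand correctly, something along these lines (a sketch; the 5x5 Gaussian kernel and its amplitude are arbitrary choices, and the last two arguments of erode are the boundary conditions and the real-mode flag):

$ gmic sp cat 5,5 gaussian. 1.5 n. 0,32 erode[0] [1],1,1 keep[0]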

That is what I meant. However, it is not what we are discussing, which is to blur or convolve each pixel differently based on a mask.

ImageMagick can do that, with “-compose Blur”. I show some examples on Selective blur.
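
A hedged example of that compose method (filenames are placeholders; check the ImageMagick documentation or the linked page for the exact argument convention). The grayscale sigma_map scales the blur per pixel, up to the sigma given in compose:args:

$ magick input.png sigma_map.png -compose Blur -define compose:args=5 -composite output.png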

Thanks, I had forgotten about that. :slightly_smiling_face: Will check it out later.

I'm not sure what you want, but perhaps it is what I call "Dark paths": given a grayscale image, what path between any two pixels is the darkest overall? And similar problems. I implemented these in C, for ImageMagick. They do not involve kernels, so perhaps they are not what you want.
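
As a rough G'MIC analogue of that idea (a sketch, not snibgo's C code): the Dijkstra variant of the distance command accumulates a per-pixel cost, so using brightness as the cost gives a "darkness distance" from a seed pixel. Exact arguments should be checked with gmic -h distance; the centre seed is arbitrary, and the cost map is kept strictly positive (range [1,256]) so every step has non-zero cost:

$ gmic sp cat norm n. 1,256 100%,100% =. 1,50%,50% distance. 1,[0] k.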

G'MIC already contains plenty of blur algorithms that basically do that.
The most generic one is implemented by the command smooth, where you can even specify a field of diffusion tensors to guide the diffusion at each point.
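
For example (the three parameters are amplitude, sharpness and anisotropy, chosen arbitrarily here; smooth also has a variant that takes a tensor-field image as its first argument, see gmic -h smooth):

$ gmic sp pencils smooth 60,0.7,0.3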

Today, my plan is to show you how you can implement your own custom adaptive kernel processing using the math parser. It can be done quite easily.

When it comes to designing custom processing that behaves differently on each pixel, just have the reflex to turn to the math parser. Its possibilities are infinite :slight_smile: (my opinion is that IM should really focus on improving/speeding up their own math evaluator; it's so convenient for designing custom pipelines).

Here is a first attempt:

#@cli custom_smooth : _kernel_size>0,_nb_kernel_orientations>0
#@cli : Show an implementation of a custom smoothing algorithm,
#@cli : with adaptive oriented smoothing kernels.
#@cli : Default values: 'kernel_size=5' and 'nb_kernel_orientations=32'.
#@cli : $ image.jpg +custom_smooth ,
custom_smooth : check "${1=5}>1 && ${2=32}>1"
  e[^-1] "Smooth image$? with kernel size $1 and $2 kernel orientations."

  # Pre-compute a set of oriented smoothing kernels.
  $1,$1,$2,1,"
    begin(s2 = int(w/2));
    !x && !y?(ang = 180*z/d; R = rot(ang));  # One rotation matrix per slice, i.e. per kernel orientation
    i = abs((R*[x-s2,y-s2])[1]);             # Distance of (x,y) to the rotated kernel axis
    max(((s2-i)/s2)^3,0)"                    # Cubic falloff perpendicular to the axis
  l. s z normalize_sum a z endl              # Normalize each kernel so its values sum to 1

  # Loop over images to process.
  repeat $!-1 l[$>,-1]

    # Estimate orientations of the image contours (i.e. index of the kernel to use on each pixel).
    +b[0] 2 norm. g. xy a[-2,-1] c f. "[atan2(i0,i1)%pi,0]" channels. 0
    n. 0,{$2-1} round.

    # Smooth image with adaptive kernels.
    f[0] "*
      begin(
        const S = $1;
        const hS = int(S/2);
        const boundary = 1;
      );
      P = crop(x-hS,y-hS,0,c,S,S,1,1);  # Neighborhood of the current pixel, for the current channel
      ind = i(#-1);                     # Index of the kernel to use
      K = crop(#1,0,0,ind,0,S,S,1,1);   # Retrieve 2D smoothing kernel
      sum(P*K)"
    rm.
  endl done
  rm.

Then:

$ gmic sp pencils noise 40 c 0,255 +custom_smooth 9

Of course, I'm not claiming this is a 'good' result in terms of image denoising quality. It is just here to illustrate that adaptive kernels can be implemented quite easily with the G'MIC math parser, and in a reasonable processing time (the example above takes 0.311 seconds on my 4-core laptop, for a 600x400 color image). Using adaptive smoothing kernels for denoising isn't really in fashion anymore :wink:

And if you adapt the example above, you can do different kinds of processing with adaptive kernels, not only convolution. Replace the sum() with max() or min() to get dilation or erosion, for instance.
As I said, the possibilities are infinite :slight_smile:
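
Concretely, only the final reduction of the fill block changes; for instance:

      sum(P*K)"   # weighted average -> adaptive smoothing (original)
      max(P*K)"   # maximum instead -> oriented dilation
      min(P*K)"   # minimum instead -> oriented erosion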

Example of a variation for creating painterly effects.
I've just replaced sum() with max() (leading to an oriented dilation), and normalize_sum with otsu , (to get binary kernels).
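
So, relative to the custom_smooth listing above, the two modified lines are (everything else stays the same):

  l. s z otsu , a z endl   # binary oriented kernels (was: normalize_sum)
  ...
      max(P*K)"            # oriented dilation (was: sum(P*K))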

And now:

$ gmic sp cat +custom_smooth 9

I used your method (rounded edges) as a guide to mine (splotchy) and thus got the best of both worlds.

Images: noise 40, yours, mine, mine [yours].