G'MIC exercises

Is this the same as […]?

What do negative eigenvalues mean? I assumed that direction would be part of the eigenvector instead. If I were to apply an operation such as sqrt, then I would get NaNs. Thoughts?

> sp tiger eigen
[0] = 'tiger':
  size = (750,500,1,2) [2929 Kio].
  data = (281.32,264.575,251.55,(...),-52.4573,-52.4573,-58.3622,(...)).
  min = -91.5441, max = 494.501

1. Linear transformation matrices scale, rotate and shear spaces.
2. For certain linear transformations there are eigenvectors: directions in which aligned vectors only change in length under the transformation, even though the transformation generally alters the orientation as well as the length of other vectors.
3. Each eigenvector has an associated eigenvalue, the scale factor applied to vectors aligned with it. An eigenvalue can be negative, in which case aligned vectors reverse their direction.
4. Note that a vector which reverses its direction under a negative scaling still retains its original orientation (see the worked example just below).

See:
- Grant Sanderson’s YouTube tutorial
- Wikipedia’s Eigenvalues and eigenvectors
- the G’MIC article Eigenvalues and Eigenvectors
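A concrete 2 × 2 case, to answer the negative-eigenvalue question above: the symmetric matrix M = [ 0, 2 ; 2, 0 ] has eigenvalues 2 and −2. M·(1, 1) = (2, 2) = 2·(1, 1), so vectors along (1, 1) are simply stretched, while M·(1, −1) = (−2, 2) = −2·(1, −1), so vectors along (1, −1) stay on the same line but flip direction. Negative eigenvalues are therefore perfectly legitimate output, and feeding them straight into sqrt is exactly what yields the NaNs.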

-eigen is not a command that generally gets applied to general-purpose images. It has a pretty specialized remit.

-eigen finds the eigenvalues and eigenvectors of images that are made up of tensors: small symmetric matrices that describe the strength and orientation of gradients at the pixels of an original image. That original image was probably processed by an analysis tool like -structuretensors or -diffusiontensors. Each pixel in the output of these commands encodes a tensor, which in turn characterizes the strength and orientation of the intensity gradient at the similarly situated pixel of the original. From such outputs, -eigen further “expands” the tensor field into two images, the first containing eigenvalues, the second an abridged set of eigenvectors; together they describe the same gradients, but in a form more congenial to certain classes of computations.

After absorbing (at least) Grant Sanderson’s YouTube video, take on the (yet to be refurbished) G’MIC tutorial on -eigen. The G’MIC -eigen command assumes its input image encodes a tensor field. For the two-dimensional case, each pixel represents a tensor, to wit, the upper triangle of a 2 × 2 symmetric matrix: the input image has three channels, R and B holding the main-diagonal values and G holding the off-diagonal value. These matrices (tensors) describe the direction and magnitude of a gradient (think: incline) at the corresponding pixel of an image that has probably been analyzed by -structuretensors or -diffusiontensors. There is also a version for the three-dimensional case, where tensors are 3 × 3 symmetric matrices and the encoding images have six channels: three for the main diagonal, three for the off-diagonals.

Typically, one analyzes an image using -structuretensors or -diffusiontensors, both of which produce tensor fields computed from the given image. Tensor fields may not be a convenient format for some lines of computation. For that purpose, -eigen “unpacks” tensor fields into pairs of images, one containing eigenvalues, the second containing an abridged set of eigenvectors. See -eigen2tensor, which “repacks” eigenvalue/eigenvector image pairs back into single-image tensor fields.

Here is a Rube Goldberg do-undo pipeline:

gmic sp tiger eigen eigen2tensor
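For contrast, a more typical pipeline (a sketch only; the optional arguments of -structuretensors are left at their defaults) builds the tensor field first and then unpacks it:

gmic sp tiger structuretensors eigen

This ends with two images on the list: a two-channel eigenvalue image and a two-channel, abridged eigenvector image.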

Anticipating your refreshed eigen and friends tutorials. I am still not sure enough about them to apply them.

Noise- and shift-invariant edge and corner mapping

What I am most interested in is how I could use them and other methods (such as phase congruency, which I still don’t know how to implement) to generate edge and corner maps. Although my current afre_edge is acceptable, it feels arbitrary. I have had another afre_structure in the works for a long time. It may be a little more robust but it isn’t much better. Ideally, as well, I want the yet to be determined method to be as noise and shift invariant as possible. Practically, it has to be a relatively fast algorithm; otherwise, there is no point (pun) to it. Thoughts? Strategies?

PS @anon41087856 recently presented something. If you could lay it out for my snail mind, it would be appreciated. Be gentle.

Can’t find the discussion. Haven’t looked very hard, but I don’t see anything recent that rings a bell. Wrapped up in NYC affairs at the moment. Maybe I can look more thoroughly this evening.

A silly face:
[image: encodeme]
A silly face, afre-edgified:
[image: sillyface_afre_edge]

gmic -i sillyface.png afre_edge 0,1,1,1 -o. sillyface_afre_edge.png

A structure tensor silly face:
[image: sillyface_st]

gmic -i sillyface.png structuretensors. n. 0,255 o. sillyface_st.png

As above, but -eigen unpacks the tensor field produced by -structuretensors into a pair of component images, one containing per-pixel eigenvalue pairs, the second containing the components of one eigenvector. Since tensors are symmetric matrices, their two eigenvectors are at right angles to one another, so giving the orientation of just one eigenvector suffices.
[image: sillyface_eval]
Two channels: per-pixel eigenvalues for each eigenvector
[image: sillyface_evec]
Two channels: per-pixel eigenvector, ch 0: x, ch 1: y, normalized. The other eigenvector is inferred: it is 90° rotated from the expressed one.
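For reference, a pipeline along these lines (file names assumed) would produce such a pair from the silly face; -eigen splits the tensor field in place into the eigenvalue and eigenvector images:

gmic -i sillyface.png structuretensors. eigen. n 0,255 o[-2] sillyface_eval.png o[-1] sillyface_evec.png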

For what it is worth, the “classic” G’MIC approach to selective (anisotropic) smoothing is based on finding the orientation of edges. Maybe these notes here could be useful to your thinking — maybe it is already a part of your thinking and I haven’t noticed. Couldn’t find afre_structure in all of the usual places to get a read on your thinking. Still under wraps?

We desire a smoothing kernel that blurs parallel to edges, not across them. To achieve this, we need to know the orientation of edges. G’MIC’s smoothing pipeline achieves this by computing per-pixel datasets that describe edge orientation in the immediate locale of each pixel: these are tensors, 2 × 2 (for 2D) symmetric matrices that express the direction of the steepest gradient (an eigenvector) and how steep it is (its associated eigenvalue). The direction perpendicular to the gradient, i.e. along the edge, is inferred, but its associated eigenvalue is given. The ratio of the two eigenvalues measures “edginess”: a big up-the-gradient eigenvalue paired with a small along-the-edge eigenvalue is a strong signal for an edge.
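As a rough sketch of that ratio in practice (not G’MIC’s actual smoothing code): build the tensor field, keep the per-pixel eigenvalue pairs and map them to a normalized edginess value, here (λmax − λmin)/(λmax + λmin), which sits near 1 on clean edges and near 0 in flat or isotropic areas:

gmic sp tiger structuretensors eigen rm. f. "(max(i0,i1)-min(i0,i1))/max(i0+i1,1e-8)" channels. 0 n. 0,255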

G’MIC’s “classic” edge-obsessed approach doesn’t particularly accommodate corners; modest to extreme smoothing oft-times generates smudge-clouds at corners — a flurry of eigenvectors that are not entirely correlated. No smoothing approach is the be-all and end-all of noise management. That’s why people are still inventing noise-elimination methods.

One of my more arcane and earlier “Beginner’s Cookbook” tutorials stemmed from hijacking G’MIC’s selective smoothing machinery for other purposes, like imitating hair. Perhaps Do Your Own Diffusion Tensor Fields would be of some use to you. Hope this helps.

Hi, it’s also splatting stuff into some image buffer, but it’s a different algorithm. The above-mentioned demosaicing first computes some Gaussians according to the detected edges in the image and then splats these into the output image. The bilateral filtering approaches often do the splatting in 3D (or higher; the lattice mentioned above, as implemented in dt, does 5D). It also accumulates data/colour in the vicinity of the sample centre, but differently.

It is on his site.


Ah! Thank you. I have some play time this afternoon…

EDIT: Never mind, this doesn’t work after all.

Something seen in “multi-scale” approaches is varying kernel sizes and/or input resolution scales. How about scaling/lowering the resolution of the convolution kernel instead? It could be called “scaled convolution”: imagine the pixels of the kernel getting bigger, covering a larger part of the input image, but at lower resolution. I assume it’s already been done elsewhere, perhaps under a different name?

Anyway, it’s quite easy to do fairly efficiently using a box filter. What I’m wondering is how it could be made even faster/simpler. Below is a method using the math parser, but there are definitely other ways (e.g. using shift / warp / grid resize). Anyone got ideas?

Edit: another question - is it equivalent to downsizing the input? I’m assuming it isn’t, because the windows change per pixel in the image.

#@cli gcd_convolve_scale : [mask],_scale>=1
#@cli : Convolve selected images with specified mask and kernel scale.
#@cli : $ image.jpg (1,0,1) +gcd_convolve_scale.. .,10
gcd_convolve_scale : check ${"is_image_arg $1"}" && isnum(${2=1}) && $2>=1"
  pass$1 0
  # In each local scope below, image #0 is an image to filter and image #1 is the kernel.
  repeat $!-1 l[$>,-1]
    # Pre-average the image first, so that each scaled kernel tap stands for an SxS block
    # (S being the scale) rather than a single pixel.
    boxfilter.. $2
    # Dilated convolution in the math parser: kernel taps are sampled S pixels apart.
    f.. "
      begin(
        const boundary=1;
        const interpolation=1;
        const kw = w#1;
        const kh = h#1;
        const hkw = (kw-1)/2;
        const hkh = (kh-1)/2;
        const S = $2;
        ref(crop(#1),K);
        px(x,y) = (K[x + y*kw]);
      );
      for(Q=0;X=0,X<kw,++X,
        for(Y=0,Y<kh,++Y,
          P = j((-hkw+X)*S,(-hkh+Y)*S);
          Q += px(X,Y) * P;
        );
      );
    "
  endl done rm.

Personally, I prefer to control scale by kernel size because there is no need for transformations to and from other spaces, and it is very easy to understand and implement. I find transformations alter the data too much, more so for some (like resizing) than for others. However, the issue with pixel kernels is that they are discrete, so they leave no room for in-between scales.

But this is computationally far more expensive if you want to reach the same kernel sizes.

Looking closer at the convolve command, I see there are parameters for stride, start, end, dilation. Could those be used to scale the kernel? Any clues about what they do?

You are right. These parameters were introduced quite recently, when I started to write the ML library for G’MIC.
See: https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d

The dilation parameter in particular allows applying a small kernel while dilating it to reach a larger neighborhood (an “à trous” convolution).
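For instance, with a 3 × 3 kernel and a dilation of 2 in x and y, the nine taps contributing to a given output pixel are taken at offsets −2, 0 and +2 from it in each direction, so the kernel covers a 5 × 5 neighbourhood while still having only nine coefficients. That is essentially what the boxfilter-plus-math-parser sketch above does, with the pre-blur standing in for the missing in-between samples.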

Aha, yes, dilation seems to be what I’ve done in the math parser. So that, combined with boxfilter, gives a larger but lower-resolution kernel. I’m struggling to get it to work in G’MIC, sadly:

gmic run "sp cat 2,2,1,1,-1,1,-1,1 convolve.. .,1,0,1,-1,-1,-1,0,0,0,-1,-1,-1,1,1,1,1,1,1,1"
[gmic]-0./ Start G'MIC interpreter.
[gmic] *** Error in ./run/ *** Command 'convolve': [instance(600,550,1,3,0x55ae01510250,non-shared)] gmic<float>::convolve(): Invalid xyz-start/end arguments (start = (0,0,0), end = (-1,-1,-1)).

Yes, that’s because the doc is not correct.
(-1,-1,-1) cannot be used for designating the maximum coordinates of the xyz range.
You have to give the correct values explicitly:
{-2,[w,h,d]-1}

That’s because it is actually possible to apply the convolution on an xyz range that lies outside the image domain, like (-10,-10,0) - (-1,1,0), and it still makes sense if your kernel is large enough (here, e.g. > 10 px).

I’ll fix the doc ASAP.


To make it match my version, I also had to set centre 0,0,0 (not -1,-1,-1 as suggested). I don’t know if that’s right or not. The good news is it works the same and is a lot faster now!

Edit: nope, still got problems with the centre (I was testing on a 1 px kernel).
It seems I need to set the actual kernel centre explicitly; it doesn’t work on automatic.
And it doesn’t appear to cope with fractional positioning either… damn.

Maybe it is because the automatic kernel centers for convolution and correlation are flipped for even-sized kernels?
If I remember correctly, for a 2x2 kernel the kernel center is at (0,0) for correlation, while it is at (1,1) for convolution. I don’t remember the reason, but there was a good one :slight_smile:
I have to check this.

Yes, that’s it. And the reason is that convolution with the mirrored kernel must be equivalent to correlation with the original kernel.
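A quick way to see it: correlation with centre c computes sum_k K(k)·I(x + k − c), while convolution with centre c' computes sum_k K(k)·I(x − k + c'). Substituting the mirrored kernel K'(k) = K(N−1−k) into the convolution and re-indexing gives sum_m K(m)·I(x + m + c' − (N−1)), which matches the correlation exactly when c' = N−1−c. For an odd kernel (N = 3, c = 1) this gives c' = 1, the same centre; for a 2x2 kernel (N = 2, c = 0) it gives c' = 1, i.e. (1,1), which is presumably why the automatic centres differ for even sizes.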