afre's G'MIC diary

I know this is a little much to ask, but I would like to see more details on filters, i.e., information about what they do. When I do filters, I add the info myself.

It’s a simple language!

I agree that it’s the best language for image processing, and it is the bridge to software independence; i.e., you don’t have to use one particular application to get a given filter.

Behind the command

Problem solving and determining the viability of a (null) hypothesis is a labour of love or necessity. We all do it in some capacity, whether formally or informally, by brilliance or by stumbling in the dark. Mastery is achieved when you have that moment of clarity – call it being in the zone, making a discovery or having a revelation.

I will explain afre_edge since @Iain asked. The concept is dead simple, and it embodies the distilled mastery I just waxed poetic about. I like dead simple because it means that even I can understand it comfortably, and the resultant command is guaranteed to be fast, even on the saddest laptop (mine).

Observe the following command.

gmic 51,51,1,1 f x==25?255 o line.png repeat 2 gradient_norm done o gradient_norm.png

A picture, or two, is worth a thousand words. The second is normalized for your viewing pleasure.

[images: line.png and the normalized gradient_norm.png]

Subtract the first from the second. afre_edge in a nutshell. :exploding_head: :coconut:

For context, what gradient_norm does to lines is double them up, leaving a void where they were, which is awkward for edge detection but exploitable for afre_edge. By itself, gradient_norm does this (normalized):

As you can observe, lines such as whiskers “double up” when a good detector should keep the line.
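The doubling is easy to reproduce in one dimension. Here is a quick pure-Python illustration of the same effect (a central-difference gradient of my own choosing, not the actual G’MIC code):

```python
# 1-D illustration of why a gradient norm "doubles up" around a thin line
line = [0] * 5 + [255] + [0] * 5   # a 1-pixel-wide "line"

# central-difference gradient magnitude, padded with zeros at the ends
gnorm = [0] + [abs(line[i + 1] - line[i - 1]) / 2
               for i in range(1, len(line) - 1)] + [0]

print(gnorm)  # peaks on BOTH sides of the spike, with a void at the spike
```

Subtracting the first image from the second, as described above, exploits exactly this shape.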

More commentary on attribution

Sometimes it makes complete sense, not just out of obligation but also out of respect and appreciation. It is a shame that some people don’t do it.

Other times it can be mundane because it is generally accepted (one cannot copyright vocabulary) or a parallel work (attribution is unnecessary, though a referral would help contextualize the problem). I feel that these two cases cover much of what I do. Still, attribution might be necessary in order to avoid being accused of plagiarism or copyright infringement, which can utterly destroy you in an academic or corporate setting (of which I am not a part, thankfully).

Inspiration is a greyer area. In most cases crediting it is illogical: the inspiration doesn’t have to have anything to do with the outcome. Regardless of the source, it can do one a great service if it helps one achieve his or her goals. That said, we want to ask: what were his or her influences? I find that such characterizations are decided by the critics and historians, fairly or unfairly depending on who you ask. A photographer can create an original work, but that doesn’t mean the work has no relation to other creators or works.


That’s really smart. :nerd_face:

You won’t believe how complicated I can make thinning edges.

I find that rolling up my sleeves and blocking out the noise helps me develop a better (as in simpler and faster) command than the algorithms from my reading list. In fact, I have always lived life that way; though it is the harder path, it is more rewarding from my perspective, if not from someone looking in.


A. Remember that all new changes require G’MIC 2.8.x and my commands don’t factor in alpha or CMYK (CMYKA). As a result, they may misbehave in GUI apps where layers contain an alpha channel.

B. afre_orien is finally retrievable via gmic update. It is the counterpart to afre_y50, as orientation is to norm, which is to say that it is the colour component.

C. afre_softlight CLI GUI is in a PR and should be retrievable soon. It is invertible and adaptable. I will explain below. Take this example:

gmic sp tiger +afre_softlight 0 +afre_softlight[0,1] 1,1 rm..

1 Since there is only one layer, the image will blend itself.

2 First parameter is inverse={ 0 | 1 }; second is reverse_order={ 0 | 1 }. If we do a normal soft light blend of tiger and then use the original tiger to inverse blend, we get the original tiger back, short of some rounding errors. PS An invertible soft light is much slower than the stdlib one because it uses two power operations.
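For the curious, here is a sketch of one invertible soft-light formula (the “illusions.hu” power variant – a stand-in for illustration, not necessarily afre_softlight’s exact formula) demonstrating the round-trip property:

```python
# Invertible soft light via powers (illusions.hu variant) -- a stand-in
# to illustrate invertibility; afre_softlight's formula may differ.
def softlight(base, blend):
    # two power operations per pixel, hence slower than the stdlib blend
    return base ** (2 ** (1 - 2 * blend))

def softlight_inverse(result, blend):
    return result ** (2 ** (2 * blend - 1))

a, b = 0.42, 0.7           # pixel values normalized to [0, 1]
r = softlight(a, b)
print(abs(softlight_inverse(r, b) - a))  # ~0, short of rounding errors
```

Note that blend = 0.5 leaves the base unchanged, as a soft light should.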

Take this next example:

gmic sp boats,chick,david +afre_softlight 0 +afre_softlight[0,1] 0,1 rm[3]

3 Since boats is greyscale while chick is RGB, the output will inherit the format with the most channels, which would be RGB. Support is limited to GA, RGB and RGBA only.

4 The x and y dimensions may also be unmatched. To prevent image edges from showing, the blending layer will be zoomed in or out so that the original layer fits inside of it. Since salient content is almost always near the centre, the blending layer will also be centred w.r.t. the original.

5 The command takes a minimalist approach to indexing. It allows the user to select any number of images but will only blend the last two on the list. Since G’MIC orders its selections, a convenience parameter reverse_order has been included.

6 The blending is normalized and respects the base layer’s range. This means that the effect is spread out to all of the tones.

7 GUI issues:
a The plugin preview doesn’t align the input layers properly after the resizing and repositioning of the blending layer. A workaround is to Reset Zoom once in a while.
b At least in GIMP, after committing the change, the canvas is cropped at the right and bottom edges. Ideally, I would want no crop to occur and to have the resultant image centred on the canvas.
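For clarity, point 4’s resize-and-centre step can be sketched like this (hypothetical helper name; not the actual implementation):

```python
# Sketch of point 4: scale the blending layer so the original layer fits
# entirely inside it ("cover" fit), then centre it over the original.
def fit_and_centre(bw, bh, ow, oh):
    s = max(ow / bw, oh / bh)                # zoom factor for the blend layer
    nw, nh = round(bw * s), round(bh * s)    # resized blending layer
    x0, y0 = (nw - ow) // 2, (nh - oh) // 2  # centred crop offsets
    return nw, nh, x0, y0

print(fit_and_centre(100, 50, 80, 80))  # -> (160, 80, 40, 0)
```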

Suggestions on 7a and 7b are welcome.

PS 7a is explained in On the road to 2.8 and 7b below.

On 7b, that’s why I suggested an official new command called oisc, which stands for original image size command. The parameters would go along the lines of oisc “”,[copied from resize], and it would resize all images to their original size. The part between the quotation marks is your command.

Let me explain 7b in more detail. If I blend david.png with chick.png, the output inherits the size of the base layer david.png. When I select new layer, it works as expected. See top thumbnail on the right panel: we can see a full alpha border.

If I do an in-place output, however, the alpha gets cropped on the right and the bottom.

In my opinion, the GIMP canvas shouldn’t change when the in-place output option is selected. If cropping is necessary, I would rather invoke the Fit Canvas to Layers command. In CLI, we wouldn’t have this problem because a canvas doesn’t exist. In-place and insertion result in the same output since each image in the list is separate.

PS ImageMagick does have a virtual canvas and its repage option can reset it. Perhaps we can do something similar in the plugin to manage the canvas in the GUI editor.

It’s an interesting idea to have G’MIC downsize the canvas. In Krita or GIMP, upsizing is possible; in Paint.NET, it is not.


afre_softlight GUI and afre_sharpenfft CLI are in a PR ready to be accepted.

afre_sharpenfft uses an FFT to compute a high pass, which is then used to sharpen the image. It has two parameters: strength controls how much sharpening is applied and size controls its granularity. GUI will come later.
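In spatial terms, the idea is unsharp-mask-like: subtract a low pass to get the high pass, then add the high pass back, scaled. A pure-Python sketch (afre_sharpenfft does the high pass in the frequency domain instead, and these parameters only loosely correspond to its own):

```python
# Spatial-domain sketch of high-pass sharpening; afre_sharpenfft computes
# the high pass with an FFT instead.
def sharpen(signal, strength=10, size=1):
    n = len(signal)
    def lowpass(i):                          # box blur of radius `size`
        lo, hi = max(0, i - size), min(n, i + size + 1)
        return sum(signal[lo:hi]) / (hi - lo)
    # highpass = signal - lowpass; add it back, scaled by `strength`
    return [signal[i] + strength * (signal[i] - lowpass(i)) for i in range(n)]

print(sharpen([0, 0, 10, 0, 0], strength=1))  # the spike is accentuated
```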


Strength 10, Size 1

PS I included a list of commands and filters adjacent to the EOF to show what is available and where.

# List of Commands and Filters
# 'gci' = GUI/CLI, 'cli' = CLI only, 'gui' = GUI only, '*' = GUI prefix is 'afrx'
# gci : afre_vigrect afre_vigcirc afre_softlight* afre_edge afre_cleantext
# cli : afre_gnorm afre_hnorm afre_y50 afre_orien afre_box afre_sharpenfft
# gui : fx_darken_sky fx_gamify fx_hnorm

afre_reorder has been accepted. Context: Feature request 'Reorder List'.
afre_log2 transforms images into or back from log2 space.

Custom guided filters

I am tired of holding them back from public use. afre_gui0 and afre_gui1 will be available once the PR has been accepted and code propagated into the update files. I call them “custom” because I made some improvements. For the traditional guided filter, use G’MIC’s core filter guided.


They are base filters, meaning that they won’t have all of the features and safeguards that my private experimental filters have. They are also not optimized; they will be incredibly slow. Guided filtering works best with small radii anyway (because haloing is undesirable), so slowness at larger radii isn’t a problem for me personally.

afre_gui0 is the self-guided filter and afre_gui1 requires the user to specify a guide. I will combine them into one command later, extend them and use them in more sophisticated filters with additional features and considerations.

In terms of input for afre_gui1, I have followed the same conditions as G’MIC’s default guided; i.e., the width, height and depth (the 3rd dimension) of the base and guide images must be the same. As with afre_softlight, it will adapt to any input range. As for output, if the range expands, I will leave it as is, including negative values. (The simplest way to deal with outliers is to cut (clip) them to range.)
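For reference, this is the textbook guided filter (He et al.) that both commands build on, as a 1-D pure-Python sketch; my custom improvements are not reflected here:

```python
def box_mean(x, r):
    # box average of radius r with edge-clamped windows
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)])
            / (min(n, i + r + 1) - max(0, i - r)) for i in range(n)]

def guided_filter(p, I, r=2, eps=1e-2):
    # 1-D guided filter: q = mean(a) * I + mean(b), where
    # a = cov(I, p) / (var(I) + eps) and b = mean(p) - a * mean(I).
    # The self-guided case (afre_gui0's) is guided_filter(p, p, ...).
    mI, mp = box_mean(I, r), box_mean(p, r)
    mII = box_mean([i * i for i in I], r)
    mIp = box_mean([i * v for i, v in zip(I, p)], r)
    var = [x - m * m for x, m in zip(mII, mI)]
    cov = [x - m * w for x, m, w in zip(mIp, mI, mp)]
    a = [c / (v + eps) for c, v in zip(cov, var)]
    b = [m - ai * mi for m, ai, mi in zip(mp, a, mI)]
    ma, mb = box_mean(a, r), box_mean(b, r)
    return [x * i + y for x, i, y in zip(ma, I, mb)]
```

A constant image passes through unchanged (a = 0, b = mean), which is a quick sanity check on any implementation.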

Small update
Have been busy dealing with real life challenges. As usual, many of them are serious and draining, so progress in G’MIC development has been slow.

1 Scripts don’t age well as G’MIC evolves. E.g., fx_darken_sky, fx_gamify, fx_hnorm and afre_cleantext are old and, to begin with, were rushed to fulfill a need in the forum. I will have to update and make them robust later.

2 I am always in search of a minimalist way to generate an acceptable image from a near-raw image. I have been playing with adaptive gamma algorithms that could be used to brighten an image while maintaining natural contrast. Might lead to a command.
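As a concrete example of the idea (a common heuristic, not necessarily the algorithm I will settle on): pick the gamma so that the image’s mean brightness maps to a chosen target.

```python
import math

def adaptive_gamma(pixels, target=0.5):
    # choose gamma so that mean ** gamma == target, i.e.
    # gamma = log(target) / log(mean); pixels normalized to (0, 1]
    mean = sum(pixels) / len(pixels)
    g = math.log(target) / math.log(mean)
    return [p ** g for p in pixels]

dark = [0.1, 0.2, 0.3]        # mean 0.2 -> gamma < 1, so this brightens
print(adaptive_gamma(dark))
```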

Need to compare it with my current brighten-contrast methods. In terms of speed, it is fast. Also, there is no need to compete with “filmic”: plenty of folks are doing that already and I have never been that interested in it. Here is an example of my current implementation in action:

The lightest part doesn’t have as much contrast as other people’s entries – I have to work on that – but overall I am quite satisfied with this preliminary test.



afre_reorder bug fix: previously, it reversed the order of the latter images if the input list was shorter than the total number of images.

afre_gnorm improvement: no more gradient reversal on thin lines. This will improve the performance of afre_edge. As a side effect, it may cause lines to be too thin; increasing the recovery parameter will help thicken them. Edit Not as thin anymore; I have tweaked the range of afre_gnorm to be the same as gradient_norm’s.


Specifically, where is “gmic update”? There is an update button in my G’MIC panel, but it does not seem to load these items.


– Added afre_gleam GUI CLI. Read about it here: […] selective illumination of the photo […]. It is a work-in-progress and experimental but I pushed the commit to let @s7habo test it.

– Added afre_contrastfft GUI CLI. Part of the FFT series. Seems to perform better locally than a contrast curve in regular spatial space. Also updated afre_sharpenfft CLI to behave similarly. Edit Behaviour reverted: I would rather clip the extremities than introduce halos. afre_sharpenfft now has GUI.

– I am uncertain how much more afre_cleantext can do for Working with afre_cleantext filter and G'MIC plugin. There is still room for improvement; however, the requests being made appear more relevant to scanning and post-processing technique than to the filter itself.

My commands are meant to be minimalist and definitely won’t address sophisticated subjects such as detection and machine learning. As noted, I have a hard life and so won’t have the wherewithal to deal with the thread beyond the long posts I have already written there.

@Reptorian has some ideas, and I hope to collaborate with him once we both have more time. Edit Looks like I may not need the collaboration after all: afre_contrastfft might just serve in its place.

I’m tagging @sambul81 just to let him know what might happen.

In theory that is possible. We’d just have to figure out how to extract some of the letter information by using saturation or chroma and lightness to determine where the supposed letters are, then normalize and cut from that information. But we’ll see what happens in the end, since our initial plan is to collaborate on the filter, and we’ll get back to you on that.

Please continue the discussion on the appropriate thread. No need to cross-post.

afre_sharpenfft and afre_contrastfft are complete with CLI and GUI accessible to the public. :partying_face: Another set that I can remove from my private *.gmic files.

Warning and discussion about afre_sharpenfft: I jumped the gun and changed its behaviour without checking how it performed. The new behaviour was more in line with typical sharpening, where haloing begins to emerge as the kernel size and amount increase, which to me is unacceptable! I have reverted it to the previous version, which clips the darks and lights and flattens the contrast to a certain extent – nothing that followup processing can’t ameliorate. Please verify that you have the correct parameters; otherwise, update your filters! You should have the following:


  Sharpen selected images with Fourier transform.
  Default values: 'strength=10' and 'size=1'.

unknown command or filename afre_contrastfftpreview