Misc G'MIC external filter updates

I’m starting this to note down various updates and workings of my own filters, rather than spamming the official release threads. Anyone else is welcome to add here too, of course.

First one:

Something I kept thinking about but never got round to making: a smooth version of cut. The idea is to keep the range limiting you would get with thresholds, but without “losing” any values; anything outside the range is mapped into the output range with smooth curves. Shown here with a slightly smoother-than-default setting so the effect is visible:

gcd_softcut : skip ${1=0},${2=1},${3=0.05}
  f "
    const B = max($1,im);
    const T = min($2,iM);
    const P = $3 * (T - B);
    const bt = B + P;
    const tt = T - P;
    const br = bt - im;
    const tr = iM - tt;
    const G = br / P;
    const H = P / tr;
    f(X,A) = (X / (X * (1 - A) + A));
    V = (i - im) / br;
    W = (i - tt) / tr;
    i < bt ? P * f(V,G) + B :
    i > tt ? P * f(W,H) + tt : i"
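For anyone who wants to poke at the curve outside G'MIC, here's a pure-Python transcription of the per-pixel mapping above (my own sketch, illustrative only; it assumes a non-constant input):

```python
def _f(x, a):
    # Rational soft-knee curve used per tail; strictly increasing for a > 0.
    return x / (x * (1 - a) + a)

def softcut(vals, lo=0.0, hi=1.0, p=0.05):
    # Mirror of gcd_softcut: values in [lo, hi] pass through, values
    # outside are compressed smoothly into the p-sized margins.
    im, iM = min(vals), max(vals)
    B, T = max(lo, im), min(hi, iM)
    P = p * (T - B)
    bt, tt = B + P, T - P          # soft thresholds
    br, tr = bt - im, iM - tt      # sizes of the out-of-range tails
    G, H = br / P, P / tr
    out = []
    for i in vals:
        if i < bt:                 # compress the low tail into [B, bt)
            out.append(P * _f((i - im) / br, G) + B)
        elif i > tt:               # compress the high tail into (tt, T]
            out.append(P * _f((i - tt) / tr, H) + tt)
        else:
            out.append(i)          # mid-range values are untouched
    return out

vals = [k / 50 - 0.5 for k in range(101)]   # spans [-0.5, 1.5]
res = softcut(vals)
print(min(res), max(res))                   # stays within [0, 1]
```

The ordering of values is preserved (the mapping is strictly monotonic), which is exactly what plain cut throws away.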

This should also be invertible but I haven’t worked on that yet.


This reminds me of a CLI command I created to modify lighting of images. It’s named rep_exp_sig_adj, and it’s used to adjust grayscale fractal images.

I don’t think it needs to be invertible. But I think it might be, although some edge cases might need extra work to invert.

I might have a go at making the inverse, but that’s more a curiosity than something useful.

The main use case: some operation creates out-of-bounds values, but you want to do further processing afterwards. Using this instead of cut can prevent unwanted image defects appearing (caused by clipping).


I made something like this a long time ago. If you can make it reversible (out of curiosity, of course), I will use your method.

Well, as I say, it’s not very interesting, because you need to know the original min and max (e.g. it’s not enough to assume 0,1 or 0,255). Here it is anyway; you’ll notice the code is very similar:

gcd_unsoftcut : skip ${1=0},${2=1},${3=0.05}
  f "
    const B = min($1,im);
    const T = max($2,iM);
    const P = $3 * (iM - im);
    const bt = im + P;
    const tt = iM - P;
    const br = bt - B;
    const tr = T - tt;
    const G = P / br;
    const H = tr / P;
    f(X,A) = (X / (X * (1 - A) + A));
    V = (i - im) / P;
    W = (i - tt) / P;
    i < bt ? br * f(V,G) + B :
    i > tt ? tr * f(W,H) + tt : i"
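Same deal for the inverse: a pure-Python transcription of both commands lets you check the round trip numerically, and shows why the original min/max must be known. A sketch only, not the canonical implementation:

```python
def _f(x, a):
    # Rational soft-knee; _f(y, 1/a) undoes _f(x, a).
    return x / (x * (1 - a) + a)

def softcut(v, im, iM, lo=0.0, hi=1.0, p=0.05):
    # Forward map (per value); im/iM are the input's min/max.
    B, T = max(lo, im), min(hi, iM)
    P = p * (T - B)
    bt, tt = B + P, T - P
    br, tr = bt - im, iM - tt
    if v < bt:
        return P * _f((v - im) / br, br / P) + B
    if v > tt:
        return P * _f((v - tt) / tr, P / tr) + tt
    return v

def unsoftcut(v, im, iM, orig_im, orig_iM, p=0.05):
    # Inverse map; im/iM are the softcut OUTPUT's min/max, and the
    # original input's min/max must be supplied (the point made above).
    B, T = min(orig_im, im), max(orig_iM, iM)
    P = p * (iM - im)
    bt, tt = im + P, iM - P
    br, tr = bt - B, T - tt
    if v < bt:
        return br * _f((v - im) / P, P / br) + B
    if v > tt:
        return tr * _f((v - tt) / P, tr / P) + tt
    return v

xs = [k / 50 - 0.5 for k in range(101)]          # spans [-0.5, 1.5]
ys = [softcut(x, -0.5, 1.5) for x in xs]         # squeezed into [0, 1]
zs = [unsoftcut(y, min(ys), max(ys), -0.5, 1.5) for y in ys]
err = max(abs(z - x) for z, x in zip(zs, xs))
print(err)                                       # tiny round-trip error
```

The inversion works because each tail's parameter flips to its reciprocal, and _f(·, 1/a) is the exact inverse of _f(·, a).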

Edit: simplified it a bit


That’s gcd_softcut and gcd_unsoftcut tidied up and pushed. Hoping to do the same for some other commands soon.


Added gcd_self_guided command. This has two advantages over the usual guided filter: it’s optimised for self-guided use only, and the regularisation is invariant to image range for any given value.

Epsilon is instead calculated as: regularisation * sqr(image_max - image_min). You could use that with the stdlib guided of course, but it saves some typing :slight_smile: .
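For the curious, the range invariance falls out of how the guided-filter smoothing coefficient a = var / (var + epsilon) scales. A small Python check, using the epsilon formula quoted above (variable names are mine):

```python
# Why epsilon = regularisation * sqr(image_max - image_min) is range
# invariant: the window variance scales with the square of any gain
# applied to the image, so epsilon must scale the same way.

def epsilon(reg, im_min, im_max):
    return reg * (im_max - im_min) ** 2

reg, var, k = 0.01, 0.02, 255.0     # var: a window variance of a [0,1] image
a_unit = var / (var + epsilon(reg, 0.0, 1.0))
a_8bit = (var * k**2) / (var * k**2 + epsilon(reg, 0.0, k))
print(a_unit, a_8bit)               # identical smoothing coefficient
```

So the same regularisation value gives the same result whether the image lives in [0,1] or [0,255].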


Added gcd_weighted_box and associated gui filter “Weighted Boxfilter”. This does a variance weighted box average, which is useful as an edge-aware smooth dilation/erosion (or anything in between):
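The exact variance weighting lives in the command itself; as a stand-in, here's a 1-D Python sketch using an exponential (softmax-style) weight, which gives the same kind of smooth erosion-to-dilation behaviour a weighted box average can produce:

```python
import math

def weighted_box(vals, radius, s):
    # Weighted box average over a (2*radius+1) window. The weight
    # exp(s*v) favours bright values (s > 0, soft dilation), dark
    # values (s < 0, soft erosion), or neither (s = 0, plain box blur).
    # NOTE: this exponential weight is an illustrative stand-in, not
    # the variance weighting gcd_weighted_box actually uses.
    out = []
    n = len(vals)
    for x in range(n):
        win = vals[max(0, x - radius):min(n, x + radius + 1)]
        w = [math.exp(s * v) for v in win]
        out.append(sum(wi * vi for wi, vi in zip(w, win)) / sum(w))
    return out

step = [0.0] * 8 + [1.0] * 8
print(weighted_box(step, 2, 0.0)[7])    # 0.4: plain box mean at the step
print(weighted_box(step, 2, 50.0)[7])   # ~1:  smooth dilation
print(weighted_box(step, 2, -50.0)[7])  # ~0:  smooth erosion
```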


Hmm, there’s a resizing algorithm I’ve seen that involves something like that. I can’t find it right now, but I saw the link in a Lospec Discord channel. I’ll let you know if I find it.

EDIT: Found it! - https://johanneskopf.de/publications/downscaling/paper/downscaling.pdf


Interesting, will need to check if it’s a similar method.

Other than that, I just added a gui filter “Simple Tone Curve”, which offers a quick way to do basic tone adjustment while keeping local contrast. It uses a sigmoid-like curve and the above weighted box filter for contrast restoration. Nothing amazing, but it’s easy to use.

That paper seems to be about per-pixel bilateral kernels (also doing SVD on each window), which looks a lot more complicated. I don’t know if it’s related yet, and it would probably take me a while to understand. One for the list to come back to later, perhaps…

Updated “Simple Tone Curve”

  • Fixed a major bug: it probably didn’t work for anyone else yet, as I was referencing a command in my local file which nobody else has!
  • The saturation control should work better now (changed working colour space).
  • Better limit handling for linear inputs.

Auto Balance:

Have noticed an oddity where very small increments in the Area value will create a disproportionate increase in brightness. Move past that increment and brightness reverts to a lower setting.

Linux, gmic-qt 3.2.5

In standalone mode, I have a 4000x3000 jpg opened; in Auto Balance, “channel” is Lab, without “balance RGB” or “Reduce RAM”. Set “Area” to, say, 20 or 30, then increment with the right arrow key (it goes up in 0.2 intervals). Every few steps I see a marked increase in brightness, followed by a return more or less to where it was before.

Same thing seen for other channels and “Balance RGB”.

Overall, the filter works (VERY well!), but I do find this behaviour odd. Does it come from the filter itself or from the boxfilter blur?

Apologies if this is the wrong place…

Hi @roydenyates welcome to the forum!

Glad you find that filter useful, it’s an old one so I’ll need to be cautious about any changes. I should have time to look at it properly towards the end of this month. I’ll see if I can at least reproduce the problem now.

Update: so far, I’ve not been able to make this happen. I tried downgrading to version 3.2.5, no problems either. Would be grateful if somebody else checks it too!

OK. I can break it.


This pipeline operating on testimage.png generates an image with NAN pixels:

$ gmic -run '-input testimage.png -name. testimage +ac[testimage] \"gcd_tonemap\ 34.375\",lab_l,2 +fill. isnan(i(x,y))?255:0'

34.375 seems to be a killer area. I found it with a script that steps the area argument: looking for sudden changes of brightness, I took pairwise differences of the average intensities of successive evaluations of the test image. I didn’t quite get sudden changes in brightness. I got NANs instead.

Here’s the testbed script that exercises gcd_tonemap:

test_gcd_tonemap : -check isint(${1=1024})" && "\
                          ${1}>0"           && "\
                          isnum(${2=0})"    && "\
                          ${2}>=0"          && "\
                          isnum(${3=200})"  && "\
                          ${3}>=0
   size=$1 lo=$2 hi=$3
   foreach {
              =>. testimage
              {$size},1,1,2 =>. diffave     # ch. 0: delta ia; ch. 1: NAN marker
              note="Delta area\ per\ step:\ "{($hi-$lo)/$size}
              e $note
              repeat {$size+1} {
                                 area={$lo+$>*($hi-$lo)/$size}
                                 +ac[testimage] "gcd_tonemap "$area,lab_l,2
                                 if $>
                                    =[diffave] {ia#-1-ia#-2},0,{$>-1},0,0
                                    if isnan(i(#$diffave,0,$>-1))
                                        =[diffave] 0.5,0,{$>-1},0,1   # flag in ch. 1
                                        =[diffave] 0,0,{$>-1},0,0     # zero the unusable diff
                                    fi
                                    e "Area,\ Diff\ "{$>-1}"–"{$>}"\ :"$area", "{i(#$diffave,0,$>-1)}"."
                                 fi
              }
              replace_infnan[diffave] 0
              display_graph[diffave] 1000,800,1,0,$lo,$hi,-1,1,"Cumulative Area, "$note,"Delta\ ia"
   }
gcd_tonemap, I’m sure you will recognize, is one of the kernels of Auto Balance (the memory pig one). I didn’t torment gcd_tonemap_inplace because @roydenyates left Reduce RAM unchecked.

Using the script this way:

gmic -m test_gcd_tonemap.gmic testimage.png test_gcd_tonemap. 1024 o. /dev/shm/diff_1024.png

produces a graph of pairwise Δia between successive output images as Area ranges over [0,…,200].
Where there are green spikes, one of a pair of images has NAN pixels. The NANs appear in pairs in the data set when just one image is an outlier, since that outlier takes part in two successive differences. Isolated areas in the range [15.5,…,38] seem problematic.


That’s it for now. Can’t play with toys anymore, so I haven’t stepped through gcd_tonemap to see why it spits out NAN pixels in that area range. Ah me. I have to dress up, make my hair look like it’s been combed, go to dinner, then Carnegie Hall. Or maybe the other way around. Probably I’ll wind up in the wrong place. But at least you’ll have mysteries to mull over, come Sunday morning after breakfast.


Thanks, that’s incredibly helpful - sadly it will be a week or two before I can work on a fix, but this narrows it down well.

Impressive forensics! Thanks for the responses.

As @garagecoder didn’t replicate the in-gui behaviour, @grosgood’s image is good to have.

Without toggling sRGB or the RAM settings, YCbCr produces a marked dark departure at area 17.4.
Lab produces a brightening at 19.2, a darkening at 19.8 and again a lightening at 21.0. Amongst others of course.

What puzzles me is that the NAN issue seems not to result in changes to brightness. Indeed, running gcd_auto_balance dd.ddd,0,2,0,0 either side of and including 34.375 on testimage.png results in images that are not perceptibly different. Also, the in-gui brightness fluctuations seem not to be a reflection of delta ia.

Be aware that when “Not-a-number” diagnostic bit patterns find their way into image pixels, all of the image-wide metrics (im, iM, ia, iv, id, is, ip, ic, in) become non-computable; an “average” with a NAN in the data set is not meaningful. In light of that, I would not entirely trust what display presents in such cases; it has to punt in some fashion. That is a spelunking adventure for another day.
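A minimal illustration of that point, away from G'MIC:

```python
# A single NAN pel renders every image-wide average meaningless:
vals = [0.25, 0.5, float("nan"), 0.75]
mean = sum(vals) / len(vals)
print(mean)          # nan
print(mean == mean)  # False: NAN never compares equal to itself
```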

You might notice the creation of a diagnostic image in the upstream post: +fill. isnan(i(x,y))?255:0. Such a diagnostic image flags NAN pels and would produce red, green, blue, yellow, magenta, cyan or white diagnostic pixels, depending on how the NAN pels distribute across the channels of the affected pixel (healthy pixels come out black in the diagnostic). Insofar as I have spelunked the affected NAN images, the affected pixels are few: one, usually; two, rarely. I have not found an affected image with more than three “damaged” pixels. What’s more, the diagnostic colour is always white: whatever the malady is, it affects all the channels of the pixel — and it happens very, very infrequently.

That suggests a quick-and-dirty fix: upon an image reporting an ia of nan, invoke the extra step of fill[iamanan] isnan(i(x,y))?0:i to send NAN pels ⇒ 0. Since these failures seem to occur at frequencies of no more than three pixels in 1024² = 1,048,576, the fix would be practically invisible. @garagecoder, being a consummate professional, probably wouldn’t rest until the underlying numerical issue has been brought to light. I, with the moral fibre of a Brooklyn denizen, would take the filter-and-forget route in a New York minute.

I might take this up this morning. ’Tis Sunday, after breakfast, and my curiosity is piqued.

Stay tuned.

Got it.

gcd_tonemap : skip ${1=100},${2=255}
  repeat $! l[$>]
    / $2 +boxfilter $1%      # Note 1
    +-. 0.5 sign. *. -1      # Note 2
    +*.. -2 +. 1 /[1] .      # Note 3
    +eq. 0 +[-2,-1] /[0,-1]  
    +sqr.. +[0,-1] max[0] 0  
    sqrt[0] *[0,-1] - * $2   
  done done
  • Note 1: For Area percentages of ≈40% or less (argument ${1=100}), it is possible for the box-filtered output image, [1], to have pixels that are exactly, on-the-money-honey, 0.5. Killer pixels. Extremely unlikely, of course, but in a 1024 × 1024 image, a 1-in-1,048,576 chance is not that chancy. For area percentages of ≈40% or greater, iM of the box-filtered image [1] drops below 0.5 and the trap never gets sprung. So what is the trap, exactly?
  • Note 2: Subtracting 0.5 from image [1] sets the Killer pixels to zero in the newly minted image [2]. Taking signs of the pixels in [2] leaves the Killer pixels at zero. Multiplying [2] through by -1: the Killer pixels remain zero.
  • Note 3: Remember image [1]? The Killer pixels in [1] are multiplied by -2, leaving them in the newly minted image [3] at 0.5 × -2 = -1. To image [3], we add 1: the Killer pixels in [3] are now zero. Dividing image [1] by [3] puts the zeroed-out Killer pixels in the denominator of the division, annoying the math library. It expresses its displeasure by setting the Killer pixels in [1] to inf. That’s the ball game: the Killer-pixel infs further annoy the math library in subsequent operations and become NANs.
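The corner case is easy to replay on a single value; here's a small Python sketch of the arithmetic described in the notes (not G'MIC itself):

```python
# Scalar replay of Notes 1-3: a box-filtered value m of exactly 0.5
# zeroes the denominator built in Note 3, minting the inf that later
# turns into NAN. (Python raises on float division by zero; IEEE
# arithmetic in G'MIC quietly yields inf instead.)
for m in (0.4999, 0.5, 0.5001):
    denom = m * -2 + 1               # Note 3's image: 1 - 2*blur
    try:
        q = m / denom
    except ZeroDivisionError:
        q = float("inf")             # what IEEE floats would hand back
    print(m, denom, q)
```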

So goes the “Exactly 0.5” corner case. Perhaps, going by the Principle of Least Disturbance, close out gcd_tonemap with (something like):

if isnan(ia) fill. isnan(i(x,y))?sum([j(-1,-1),j(0,-1),j(1,-1),j(-1,0),j(1,0),j(-1,1),j(0,1),j(1,1)])/8:i(x,y) fi

to wit, replace NAN pels with the average of the immediate neighborhood and call it a day. Seems to work here, so long as an entire locale of pixels doesn’t become a NAN cluster…

That’s it for now.


Thanks once again! You’re right that I’m not content to smooth over the cracks caused by NaN. I didn’t document this one even for myself, so I’ll need to reverse engineer it. I might manage that this weekend, if not before.

I had a quick look and I think I have a solution: it involves mapping any potential divide-by-zero to zero. That’s possible without increasing memory usage, but it will mean three more ops. There will be some slowdown, but I guess a slower, reliable filter is what’s needed.
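I haven't written up the exact ops, but the idea reduces to something like this (illustrative Python, names mine):

```python
# Sketch of the described fix: map a would-be division by zero to zero
# instead of letting inf/NAN propagate through later operations.
def safe_div(num, den):
    return 0.0 if den == 0 else num / den

print(safe_div(0.5, 0.0))    # 0.0 rather than inf
print(safe_div(0.5, 0.25))   # 2.0
```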

Update 2:
That’s it pushed. I guess you’ll need a recent version to get the update, once sync happens.