G'MIC exercises

discard: I couldn’t remember what it was called! What does “-1=none (memory content)” mean? How does it differ from “0=none”?

For command resize, argument interpolation means:

  • 0 (none) : does not interpolate the pixel data; it just enlarges or cuts the image dimensions while keeping the meaning of the axes.
  • -1 (none, memory content) : does not interpolate the pixel data and does not keep the meaning of the axes; it just treats the pixel data as a linear buffer and resizes it.
    Typical example:
$ gmic sp lena,512 +r[0] 256,256,1,3,0 +r[0] 256,256,1,3,-1

gives:

In the first case (interpolation=0), the image is simply cut to half its size; in the second case (interpolation=-1), it is the image buffer itself that is cut at the end.

There are two subjects that I have been blindly exploring.
1 Dealing with chromatic aberration.
2 How to make a filter more noise and / or edge-aware.

1 What I have been doing so far is finding the highlights and then greying the edges. It is such a crude method because (a) the fringing can have inconsistent thicknesses, and (b) there is a loss of colour data and no colour recovery. I just realized that stdlib already has the fx_chromatic_aberrations filter, though I don’t understand it completely.

2 I don’t quite understand how the weights are generated in fx_equalize_local_histograms; that is likely because the logic is abstracted away in math expressions. It seems to have something to do with the standard deviation and how pixels differ from adjacent and global values.

I found a paper with a nice figure on various conditions a filter might want to consider.

PS I wouldn’t mind if devs of other apps chime in (e.g., @agriggio who likes to make experimental modules in RT and @snibgo who likes to explore different topics with IM).

dcraw corrects for chromatic aberration, with its “-C” option, by moving the values of two channels towards or away from the centre. Effectively, the image is split by channel into three images (ImageMagick “-separate”), then two are resized, and they are then merged ("-combine").
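A rough G’MIC sketch of the same idea (the command name ca_shift_test and the scale factors are made up for illustration; real values would come from your own lens and photo):

ca_shift_test :
  s c                                  # split into R, G and B images
  r[0] 100.04%,100.04%,1,1,5           # enlarge the red channel slightly (bicubic)
  r[2] 99.96%,99.96%,1,1,5             # shrink the blue channel slightly
  r[0,2] {w#1},{h#1},1,1,0,0,0.5,0.5   # crop/pad back to the green channel's size, centred
  a c                                  # recombine the channels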

In experiments with Nikon 20mm and 15mm lenses, I concluded this simple scheme is helpful, but may not be enough. I suspect (but haven’t verified) that a better correction needs barrel/pincushion distortion. But I didn’t finish the experiments, and haven’t written them up.

On noise – well, I like it. Call me weird. It’s the digital version of grain. I detest the modern fashion for smoothing skin so it looks like sprayed plastic. Instagram has a lot to answer for.

But we can do smooth, if we want. For example by separating an image into cartoon and texture, manipulating in various ways, and re-combining.
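For example, a very crude decomposition along those lines (only a sketch: a plain Gaussian blur stands in for the cartoon layer, which G’MIC’s dedicated tools do far better):

$ gmic sp lena +b[0] 5 +sub[0] [1] mul[2] 0.5 add[1] [2] k[1]

Here [1] is the blurred “cartoon” and [2] the residual “texture”; the texture is halved before recombining, which gives a mildly smoothed result.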

hi @afre, interesting that you mentioned me and ‘equalise local histograms’ in the same message: I’ve just recently started to take a look at the “local histogram equalisation” in gmic, because I like its results. Unfortunately, the gmic language is very hard to read for me (I blame myself as I didn’t properly RTFM :slight_smile:), so my progress is slow. I plan to read some more about the underlying algorithm, but I wouldn’t mind some tips from the knowledgeable people…

Could you elaborate a bit more on that? I just started reading about lens corrections, so it is a topic that I don’t know much about.

@agriggio I knew you would be interested :slight_smile:. What are your thoughts on chromatic aberration? Maybe I should just read the RT docs :blush:.

A single-element lens bends light (“refraction”), so rays from a distant object that hit different parts of the lens are focused on a single point on the sensor. The amount of bending depends on the colour of the light, and on the glass used (its “index of refraction”, IOR, which varies according to wavelength). Blue light bends more than red light. So if white light passes through a single-element lens the red, green and blue components will bend by different amounts, and hence will focus at different places. This is chromatic aberration (CA).

A different colour may focus at a different distance from the lens (axial CA, aka longitudinal CA) or a different distance from the centre of the sensor (transverse CA, aka lateral CA), or both. The distances depend on the lens aperture but also on the distance of the light source from the plane that is in focus.

In film photography, CA is difficult to correct in post. In digital photography, transverse CA can be reduced by a geometrical distortion of the red, green and blue components of the image.

Camera lenses reduce CA by using multiple elements with different IORs. But the problem can’t be entirely removed.

I took a photo that included a dense tree, with glimpses of the sky visible through the green leaves as small white dots. Here is a crop from the bottom-left, magnified.

set SRCNEF=%PICTLIB%20120918\DSC_0314.NEF

set sPROC=-strip -crop 9x9+54+4881 +repage -scale 5000%%

%DCRAW% -6 -T -w -O ca_1.tiff %SRCNEF%

%IM%convert ca_1.tiff %sPROC% ca_1.png

Observe blue fringing top-right and red fringing bottom-left. Imagine this white blob is really made of blue, green and red blobs. To get this result, the coloured blobs must be offset, so the blue blob is towards the top-right (towards the centre of the original image), and the red blob is towards the bottom-left (away from the centre of the original image). This is “lateral chromatic aberration”.

We can correct the image by enlarging the blue component of the image, which will move the blue blob outwards. We do the opposite with red.

%DCRAW% -6 -T -w -C 0.99980 1.00005 -O ca_2.tiff %SRCNEF%

%IM%convert ca_2.tiff %sPROC% ca_2.png

This has reduced the red and blue fringing. There is still some blue fringing, but if we remove that, we cause purple fringing on the opposite side.

I found these numbers by trial and error. They can be found automatically from a photo of a grayscale object (eg a newspaper): separate the channels into three grayscale images, then find the scale factors that make the images most closely match.

The above assumes that lateral CA causes a simple resizing in the red and blue channels, so the opposite resizing fixes it. This is a good first approximation. The “most closely match” test can be repeated at different parts of the image, to get the parameters for a more precise barrel/pincushion distortion.
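A hedged G’MIC sketch of that matching test (ca_score is a hypothetical helper; the idea is to run it with different candidate factors on a photo of a grayscale target and keep the factor with the lowest score):

ca_score : skip ${1=1.0002}
  s c k[0,1]                           # keep the red and green channels
  r[0] {100*$1}%,{100*$1}%,1,1,5       # rescale red by the candidate factor $1 (bicubic)
  r[0] {w#1},{h#1},1,1,0,0,0.5,0.5     # crop/pad back to green's size, centred
  sub sqr                              # per-pixel squared difference
  e "score = "{ia}                     # mean squared difference: smaller = better match

For instance, $ gmic newspaper.jpg ca_score 1.0002 (newspaper.jpg being your grayscale test shot).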


The purpose of this filter is to simulate chromatic aberrations, not remove them :stuck_out_tongue:

I have been maintaining a user.gmic and adding all sorts of wackiness, though most of it is just for convenience and edge cases.

@garagecoder has told me to be more confident and @Brian_Innes expressed interest in my sample image here, so I thought it might be a good idea to share it in this thread to get feedback before I make it official. It is a riff on gradient_norm. However, I don’t know if it is proper to call it a Hessian norm.

#@gui Hessian norm : fx_hnorm, fx_hnorm_preview(0)
#@gui : Strength = float(1,.5,1.5)
#@gui : Contrast = int(50,1,99)
#@gui : Invert = bool(0)
#@gui : sep = separator(), note = note("Filter by <i><a href="https://discuss.pixls.us/u/afre">afre</a></i>. Latest update: <i>2018-05-09</i>.")
fx_hnorm :
  # Hessian norm raised to the Strength power.
  af_hnorm ^ $1
  # Clip the upper values at $2% of the value range (Contrast).
  c 0,$2%
  # Optional inversion, then normalize to [0,255].
  if $3 negate fi
  n 0,255

af_hnorm:
  repeat $! l[$>]
    # Second derivatives along every axis pair, then
    # sqrt(Ixx^2+Ixy^2+Ixz^2+Iyy^2+Iyz^2+Izz^2), summed over channels.
    +hessian[0] xx +hessian[0] xy +hessian[0] xz +hessian[0] yy +hessian[0] yz hessian[0] zz
    sqr + s c + sqrt
  endl done

fx_hnorm_preview :
  fx_hnorm $*
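
To try it from the command line (just an example invocation; the arguments are Strength, Contrast and Invert as declared above):

$ gmic sp lena fx_hnorm 1,50,0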

Added in your Testing folder.

I cobbled together a filter (for command line ATM, not plugin) for the sunbeam thread. It is kind of buggy and gross but maybe we could salvage something from it. :blush: The convolve becomes more expensive as dimensions increase, so I made the decision to resize, which contributes to the ugliness.

Edit: For those who aren’t familiar with G’MIC scripting or cannot bear to read the crude code, the filter has 5 parameters:

$1 → length of beam (not to scale; larger value means longer beam).
$2 → diagonal direction (choose: 0,1,2,3).
$3,$4,$5 → colour of beam.

beams_test: skip ${1==1}&&{2==0}&&{3==104}&&{4==220}&&{5==255}
  # Remember the original width, then downscale to 200px to keep the convolution cheap.
  r={w} r2dx 200,1
  +l
    # Accumulate shifted copies along the chosen diagonal to build the beam streak.
    +l repeat {$1*20}
      +l.
        if {$2==1} shift -1,1
        elif {$2==2} shift -1,-1
        elif {$2==3} shift 1,-1
        else shift 1,1
        fi
      endl
      + c 0,1
    done endl
    # Draw a directional Gaussian kernel matching the diagonal, convolve the streak with it.
    100%,100%,100%,100%
    if {$2==1||$2==3} gaussian. 10,1,-45
    else gaussian. 10,1,45
    fi
    convolve.. . rm. + c 0,255
  endl
  # Colourise the beam with $3,$4,$5 and boost it.
  l.
    s c *... $3 *.. $4 *. $5 / 255 a c c 10%,100% n 0,255 * 3
  endl
  # Composite: keep a pixel where it is non-zero, otherwise take the pixel of image [1] (the beam).
  +f.. min(I)>0?I:I#1 k.
  # Restore the original width.
  r2dx $r,1
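
For reference, an example invocation (input.png is a placeholder for your own image; the argument values are just the defaults implied by the skip line: length, direction, then the beam colour):

$ gmic input.png beams_test 1,0,104,220,255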

The image again for your convenience:


Speaking of blurs:

1 How does one go about blurring in one direction? By making a kernel that blurs in that one direction only? A command that does that, with length, angle and blur amount as parameters, would be nice.

2 Also, how would we deblur motion blur, given length and angle? I might be wrong but it looks like the built-in commands only deal with defocus (out-of-focus) deblurring.

Command blur_linear ?
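(If I recall its arguments correctly (two amplitudes and an angle), something like

$ gmic sp lena blur_linear 10,0,45

should smear the image along a 45° direction; gmic h blur_linear gives the exact signature.)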

Yes, this is missing, but deblurring without artefacts is usually an ill-posed problem, so it is not so easy to solve.

a. before blur_linear
[image: blur-0]


b. after blur_linear
[image: blur-0]


c. the goal
GIMP → (c) = masked (b) on top of (a). Easy to see where I masked (b). :blush:
[image: blur-2]

I wish to do something like this but with another shape; what format do I have to use?

gmic shape_cupid 480 +skeleton3d ,

Besides the blur question, I have another that is simpler. If I use split xy,-200, what should I do to reassemble the tiles back in the proper order? Edit: I should add that the tiles are equal in size, but the number of tiles, rows and columns is variable.

Sample
[image: tetris]

Is it possible to import other shapes?
Is it possible to export in 3D?
is posible export in 3d?

You should probably use split yx,-200 instead of split xy,-200. The former first splits the image along y, then each split part along x, which is often the natural order to consider: at the end, your list of images varies first along x, then along y.
Then you can use append_tiles to re-create the full image.

split yx and append_tiles are inverse transformations.
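
For example, a round trip that should give back the original image:

$ gmic sp lena split yx,-200 append_tiles

(If the tiles come back in the wrong layout, check gmic h append_tiles for its arguments.)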

I figured it out but thanks for replying. tetris is a fun little command BTW. :slight_smile:

Command skeleton3d takes a binary image as an input, so you can theoretically use any shape you want.
For the export, I’m afraid that for now, there are not many possibilities.