G'MIC exercises

What I meant was that the command up to null: doesn’t output anything so there is nothing to redirect. Must be something wrong with my IM version or setup…

PS I just posted about this in the IM forums under Bugs: http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=33645.

PPS They got back to me. It isn’t a bug but a change of syntax: in IM7, the showkernel define is prefixed with morphology:.

convert xc: -define morphology:showkernel=1 -morphology Convolve:0 Gaussian:0x2 null: 2>&1 | findstr Kernel

I have 2 problems that I am trying to figure out.

A. Dilate and erode aren’t very smooth. We end up with distinct shapes (square being the default).


It is better if I do a partial open



B. Related to A is that I would like to blur inward and outward.

My attempts here aren’t great. When blurring inward, some outward pixels appear to have been affected, and vice versa. Also, the differences between the edges and the first blur values are too large to look natural. Maybe combine blurring with morphology?

This problem might be similar to the one that raw processors address where highlight and detail reconstruction are concerned.

The default morphological operators dilate and erode do indeed work in binary mode (Dilation (morphology) - Wikipedia); this is the most usual way dilation/erosion with a structuring element are defined.
But there is also a real mode, which makes them work with float-valued kernels (Dilation (morphology) - Wikipedia). This mode is activated by setting the argument is_real to 1 when calling those operators.
E.g.:

gmic sp tiger 32,32 gaussian. 20% n. 100,255 dilate.. .,1,1
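As a sketch of what real mode computes, here is the standard grayscale-dilation formula (at each pixel, the maximum of the shifted signal plus the kernel values) in plain Python. The helper name is mine, and G'MIC's actual implementation may differ in details such as border handling:

```python
def gray_dilate(signal, kernel):
    """Grayscale ('real-mode') dilation of a 1-D signal:
    out[x] = max over s of (signal[x + s - center] + kernel[s]),
    with the kernel centered and out-of-range samples ignored."""
    k = len(kernel) // 2  # kernel center (odd sizes avoid shifts)
    out = []
    for x in range(len(signal)):
        best = None
        for s, b in enumerate(kernel):
            i = x + s - k
            if 0 <= i < len(signal):
                v = signal[i] + b
                best = v if best is None or v > best else best
        out.append(best)
    return out

signal = [0, 0, 10, 0, 0]
kernel = [0, 5, 0]                  # non-flat (float-valued) structuring function
print(gray_dilate(signal, kernel))  # [5, 10, 15, 10, 5]
```

With a flat (all-zero) kernel this reduces to a plain local maximum, i.e. the usual binary dilation on 0/1 data; erosion is the dual (local minimum of signal minus kernel).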

Same for erode:

About blurring a mask inward/outward, one suggestion:

It’s not really a blur, but using the distance function is a great way to get a shaded mask around the contours:

$ gmic sp tiger norm ge 30% distance. 0 c 0,10 n 0,1

That’s a linear decay, but you can still make it more gaussian if needed afterwards.
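A toy version of that distance-based shading in plain Python (function names are mine; I use a city-block BFS metric for brevity, whereas G'MIC's distance defaults to Euclidean):

```python
from collections import deque

def distance_to_zero(mask):
    """City-block distance from each cell to the nearest 0-valued cell,
    a rough analogue of `distance 0` on a binary mask."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def shade(mask, cut=10):
    """Clamp the distance map to [0, cut] and normalize to [0, 1],
    like `distance. 0 c 0,10 n 0,1`: a linear ramp from the contour inwards."""
    d = distance_to_zero(mask)
    return [[min(v, cut) / cut for v in row] for row in d]

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(shade(mask, cut=2))  # 0 at the border, 0.5 on the ring, 1.0 at the centre
```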


Nice example for functional morphology.
Still I would recommend odd kernel sizes to avoid shifts!

@samj’s thread Place some points on outlines with G'MIC got me thinking about making a starry sky out of an image. Here is what I have come up with so far:

I am naive about what stars are supposed to look like because I haven’t really examined a night sky. In my mind, stars should be round points of various sizes, colors and intensities, with a small Gaussian blur and perhaps a twinkle caused by our atmosphere and other forms of scattering (e.g., as a result of the camera hardware). Right now the image has lines, though I have finely chopped them up.

I will address color first. I would have to know how to color each line-object with a random but plausible color. I blotted out the greens and purples from the rainbow_lut but don’t know how to remove the black bars. I would also have to randomize the intensities within a reasonable range.

star_lut

Bonjour,

@afre

-to_rgba[-1]
-replace_color[-1] 100%,0,0,0,0,255,255,255,255,255

:o)

@samj The intent is to generate a rainbow clut without greens and purples. A lazy way is to use the rainbow_lut and remove the offending I elements. I don’t know how to do that. ATM I can only make them black. Then the plan is to apply this to the white specks of the tiger.

@afre

Can you illustrate your request because the French translation is not understandable.

:o)

This would be the French. Google Translate was helpful but I had to revise it.

Je voudrais représenter les couleurs de l’arc-en-ciel sans les verts et les pourpres dans un clut. Une manière paresseuse est d’utiliser rainbow_lut et d’enlever les couleurs que je ne veux pas. Au lieu de remplacer la couleur avec du blanc ou du noir, je voudrais enlever l’élément RGB (I) de clut lui-même.

@samj, sometimes DeepL gives more understandable translations:

L’intention est de générer un clut arc-en-ciel sans greens et violets. Une façon paresseuse est d’utiliser le rainbow_lut et d’enlever les éléments I offensants. Je ne sais pas comment faire ça. ATM Je ne peux que les rendre noirs. Ensuite, il est prévu de l’appliquer aux taches blanches du tigre.

@afre, to remove a particular vector value in an image:

foo :
  # Generate rainbow lut with random holes (filled with -1).
  rainbow_lut f ">set()=(f=x+u(w/10);d=u(w/20));init(set());x<f?I:x<f+d?vector3(-1):(set();I)"

  # Discard colors having values -1.
  +discard. -1 r. {h/3},1,1,3,-1
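For intuition, here is a scalar toy analogue of discard in plain Python (the function name is mine): it strips matching values out of a flat buffer, which is why a follow-up resize with interpolation -1 is needed to give the shortened buffer a sensible shape again:

```python
def discard(values, bad):
    """Remove every occurrence of `bad` from a flat list of samples --
    a scalar analogue of G'MIC's `discard`, which leaves a shorter
    linear buffer that still has to be reshaped afterwards."""
    return [v for v in values if v != bad]

lut = [255, 0, 0, -1, -1, -1, 0, 0, 255]
print(discard(lut, -1))  # [255, 0, 0, 0, 0, 255]
```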

discard: couldn’t remember what it was called! What does “-1=none (memory content)” mean? How does it differ from “0=none”?

For command resize, argument interpolation means:

  • 0 (none) : does not interpolate the pixel data, just enlarge or cut image dimensions, but keep the meaning of the axes.
  • -1 (none, memory content) : does not interpolate the pixel data and does not keep the meaning of the axes. Just consider the pixel data as a linear buffer and resize it.
    Typical example:
$ gmic sp lena,512 +r[0] 256,256,1,3,0 +r[0] 256,256,1,3,-1

gives:

In the first case (interpolation=0), the image is just cut to half its size; in the second case (interpolation=-1), it is the image buffer that is cut at the end.
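The same two behaviours can be mimicked on a toy 4×4 "image" in plain Python (helper names are mine):

```python
def resize_none(img, new_h, new_w):
    """interpolation=0: keep the meaning of the axes -- shrinking
    just crops the top-left new_h x new_w region, no interpolation."""
    return [row[:new_w] for row in img[:new_h]]

def resize_memory(img, new_h, new_w):
    """interpolation=-1: treat the pixels as one linear buffer,
    truncate it at the end, then reshape to the new dimensions."""
    flat = [v for row in img for v in row]
    flat = flat[:new_h * new_w]
    return [flat[i * new_w:(i + 1) * new_w] for i in range(new_h)]

img = [[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]]
print(resize_none(img, 2, 2))    # [[1, 2], [5, 6]]  -- spatial crop
print(resize_memory(img, 2, 2))  # [[1, 2], [3, 4]]  -- buffer truncation
```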

There are two subjects that I have been blindly exploring:
1. Dealing with chromatic aberration.
2. How to make a filter more noise- and/or edge-aware.

1. What I have been doing so far is finding the highlights and then greying the edges. It is a crude method because (a) the fringing can have inconsistent thicknesses, and (b) there is a loss of colour data and no colour recovery. I just realized that the stdlib already has the fx_chromatic_aberrations filter, though I don’t understand it completely.

2. I don’t quite understand how the weights are generated in fx_equalize_local_histograms. That is likely because it is abstracted behind math expressions. It seems to involve the standard deviation and how pixels differ from adjacent and global values.

I found a paper with a nice figure on various conditions a filter might want to consider.

PS I wouldn’t mind if devs of other apps chime in (e.g., @agriggio who likes to make experimental modules in RT and @snibgo who likes to explore different topics with IM).

dcraw corrects for chromatic aberration, with its “-C” option, by moving the values of two channels towards or away from the centre. Effectively, the image is split by channel into three images (ImageMagick “-separate”), then two are resized, and they are then merged ("-combine").

In experiments with Nikon 20mm and 15mm lenses, I concluded this simple scheme is helpful, but may not be enough. I suspect (but haven’t verified) that a better correction needs barrel/pincushion distortion. But I didn’t finish the experiments, and haven’t written them up.

On noise – well, I like it. Call me weird. It’s the digital version of grain. I detest the modern fashion for smoothing skin so it looks like sprayed plastic. Instagram has a lot to answer for.

But we can do smooth, if we want. For example by separating an image into cartoon and texture, manipulating in various ways, and re-combining.
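A minimal sketch of that cartoon/texture decomposition in plain Python, with a box blur standing in for whatever edge-aware smoother one would actually use (function names are mine):

```python
def box_blur(signal, radius=1):
    """Simple 1-D box blur with clamped borders (stand-in for any smoother)."""
    n = len(signal)
    out = []
    for x in range(n):
        lo, hi = max(0, x - radius), min(n, x + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_cartoon_texture(signal, radius=1):
    """Cartoon = smoothed signal; texture = residual (signal - cartoon).
    Adding the two back together reconstructs the input exactly."""
    cartoon = box_blur(signal, radius)
    texture = [s - c for s, c in zip(signal, cartoon)]
    return cartoon, texture

signal = [10.0, 12.0, 30.0, 11.0, 10.0]
cartoon, texture = split_cartoon_texture(signal)
# Re-combine with the texture scaled down: 'grain' reduced, cartoon kept.
smooth = [c + 0.5 * t for c, t in zip(cartoon, texture)]
print(smooth)
```

The point of the split is that each layer can be manipulated independently before recombining, e.g. attenuating (or boosting) only the texture.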

hi @afre, interesting that you mentioned me and ‘equalise local histograms’ in the same message: I’ve just recently started to take a look at the “local histogram equalisation” in gmic, because I like its results. unfortunately, the gmic language is very hard to read for me (I blame myself as I didn’t properly RTFM :slight_smile:), so my progress is slow. I plan to read some more about the underlying algorithm, but I wouldn’t mind some tips from the knowledgeable people…

Could you elaborate a bit more on that? I just started reading about lens corrections, so it is a topic that I don’t know much about.

@agriggio I knew you would be interested :slight_smile:. What are your thoughts on chromatic aberration? Maybe I should just read the RT docs :blush:.

A single-element lens bends light (“refraction”), so rays from a distant object that hit different parts of the lens are focused on a single point on the sensor. The amount of bending depends on the colour of the light, and on the glass used (its “index of refraction”, IOR, which varies according to wavelength). Blue light bends more than red light. So if white light passes through a single-element lens the red, green and blue components will bend by different amounts, and hence will focus at different places. This is chromatic aberration (CA).

A different colour may focus at a different distance from the lens (axial CA, aka longitudinal CA) or a different distance from the centre of the sensor (transverse CA, aka lateral CA), or both. The distances depend on the lens aperture but also on the distance of the light source from the plane that is in focus.

In film photography, CA is difficult to correct in post. In digital photography, transverse CA can be reduced by a geometrical distortion of the red, green and blue components of the image.

Camera lenses reduce CA by using multiple elements with different IORs. But the problem can’t be entirely removed.

I took a photo that included a dense tree, with glimpses of the sky visible through the green leaves as small white dots. Here is a crop from the bottom-left, magnified.

set SRCNEF=%PICTLIB%20120918\DSC_0314.NEF

set sPROC=-strip -crop 9x9+54+4881 +repage -scale 5000%%

%DCRAW% -6 -T -w -O ca_1.tiff %SRCNEF%

%IM%convert ca_1.tiff %sPROC% ca_1.png

Observe blue fringing top-right and red fringing bottom-left. Imagine this white blob is really made of blue, green and red blobs. To get this result, the coloured blobs must be offset, so the blue blob is towards the top-right (towards the centre of the original image), and the red blob is towards the bottom-left (away from the centre of the original image). This is “lateral chromatic aberration”.

We can correct the image by enlarging the blue component of the image, which will move the blue blob outwards. We do the opposite with red.

%DCRAW% -6 -T -w -C 0.99980 1.00005 -O ca_2.tiff %SRCNEF%

%IM%convert ca_2.tiff %sPROC% ca_2.png

This has reduced the red and blue fringing. There is still some blue fringing, but if we remove that, we cause purple fringing on the opposite side.

I found these numbers by trial and error. They can be found automatically from a photo of a grayscale object (e.g. a newspaper): separate the channels into three grayscale images, then find the scale factors that make the images most closely match.

The above assumes that lateral CA causes a simple resizing in the red and blue channels, so the opposite resizing fixes it. This is a good first approximation. The “most closely match” test can be repeated at different parts of the image, to get the parameters for a more precise barrel/pincushion distortion.
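A 1-D sketch, in plain Python, of the per-channel resize that underlies this correction (nearest-neighbour sampling, and the function name is mine; dcraw's actual resampling is more careful):

```python
def scale_channel(channel, factor):
    """Resample a 1-D channel about the image centre by `factor`
    (nearest-neighbour, clamped at the borders) -- the per-channel
    resize behind a dcraw -C style lateral-CA correction."""
    n = len(channel)
    c = (n - 1) / 2.0
    out = []
    for x in range(n):
        src = c + (x - c) / factor      # factor > 1 enlarges the channel
        i = min(n - 1, max(0, int(round(src))))
        out.append(channel[i])
    return out

# A white blob whose blue channel sits too close to the image centre:
blue = [0, 0, 1, 1, 0, 0, 0]            # blob left of centre -> blue fringe
# Enlarging the blue channel moves its blob outwards (away from the centre),
# back under the red and green blobs:
print(scale_channel(blue, 1.5))          # [0, 1, 1, 1, 0, 0, 0]
```

The red channel gets the opposite treatment (a factor slightly below 1), shrinking it towards the centre.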


The purpose of this filter is to simulate chromatic aberrations, not remove them :stuck_out_tongue: