G'MIC exercises

A single-element lens bends light (“refraction”), so rays from a distant object that hit different parts of the lens are focused on a single point on the sensor. The amount of bending depends on the colour of the light, and on the glass used (its “index of refraction”, IOR, which varies according to wavelength). Blue light bends more than red light. So if white light passes through a single-element lens the red, green and blue components will bend by different amounts, and hence will focus at different places. This is chromatic aberration (CA).

A different colour may focus at a different distance from the lens (axial CA, aka longitudinal CA) or a different distance from the centre of the sensor (transverse CA, aka lateral CA), or both. The distances depend on the lens aperture but also on the distance of the light source from the plane that is in focus.

In film photography, CA is difficult to correct in post. In digital photography, transverse CA can be reduced by a geometrical distortion of the red, green and blue components of the image.

Camera lenses reduce CA by using multiple elements with different IORs. But the problem can’t be entirely removed.

I took a photo that included a dense tree, with glimpses of the sky visible through the green leaves as small white dots. Here is a crop from the bottom-left, magnified.

set SRCNEF=%PICTLIB%20120918\DSC_0314.NEF

set sPROC=-strip -crop 9x9+54+4881 +repage -scale 5000%%

%DCRAW% -6 -T -w -O ca_1.tiff %SRCNEF%

%IM%convert ca_1.tiff %sPROC% ca_1.png

Observe blue fringing top-right and red fringing bottom-left. Imagine this white blob is really made of blue, green and red blobs. To get this result, the coloured blobs must be offset, so the blue blob is towards the top-right (towards the centre of the original image), and the red blob is towards the bottom-left (away from the centre of the original image). This is “lateral chromatic aberration”.

We can correct the image by enlarging the blue component of the image, which will move the blue blob outwards. We do the opposite with red.

%DCRAW% -6 -T -w -C 0.99980 1.00005 -O ca_2.tiff %SRCNEF%

%IM%convert ca_2.tiff %sPROC% ca_2.png

This has reduced the red and blue fringing. There is still some blue fringing, but if we remove that, we cause purple fringing on the opposite side.

I found these numbers by trial and error. They can be found automatically from a photo of a grayscale object (eg a newspaper): separate the channels into three grayscale images, then find the scale factors that make the images most closely match.

The above assumes that lateral CA causes a simple resizing in the red and blue channels, so the opposite resizing fixes it. This is a good first approximation. The “most closely match” test can be repeated at different parts of the image, to get the parameters for a more precise barrel/pincushion distortion.
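To make the “most closely match” search concrete, here is a toy 1-D sketch in Python. The brute-force search and all names are illustrative, not dcraw’s actual algorithm; a real implementation works on 2-D channels with proper interpolation.

```python
# Toy 1-D model of the scale-factor search described above: find the
# resize that best re-aligns a colour channel with the green reference.
# Everything here is illustrative; it is not dcraw or G'MIC code.

def resample(signal, scale):
    """Nearest-neighbour resample about the centre (scale > 1 enlarges)."""
    n = len(signal)
    c = (n - 1) / 2.0
    out = []
    for i in range(n):
        j = int(round(c + (i - c) / scale))   # inverse mapping
        out.append(signal[min(max(j, 0), n - 1)])
    return out

def mismatch(a, b):
    """Sum of squared differences between two channels."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_scale(channel, reference, lo=0.98, hi=1.05, steps=701):
    """Brute-force the scale factor that minimises the mismatch."""
    candidates = (lo + (hi - lo) * k / (steps - 1) for k in range(steps))
    return min(candidates, key=lambda s: mismatch(resample(channel, s), reference))

# Green: small "white dots" at various distances from the image centre.
green = [0.0] * 200
for p in (20, 60, 140, 180):
    green[p] = 1.0

# Blue: the same dots, shrunk 2% towards the centre, as lateral CA would do.
blue = resample(green, 0.98)

s_found = best_scale(blue, green)
```

Running `best_scale` on the shrunken blue channel recovers an enlargement close to 1/0.98, the inverse of the simulated aberration.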


The purpose of this filter is to simulate chromatic aberrations, not remove them :stuck_out_tongue:

I have been maintaining a user.gmic and adding all sorts of wackiness, though most of it is just for convenience and edge cases.

@garagecoder has told me to be more confident and @Brian_Innes expressed interest in my sample image here, so I thought it might be a good idea to share it in this thread to get feedback before I make it official. It is a riff on gradient_norm; however, I don’t know whether it is proper to call it a Hessian norm.

#@gui Hessian norm : fx_hnorm, fx_hnorm_preview(0)
#@gui : Strength = float(1,.5,1.5)
#@gui : Contrast = int(50,1,99)
#@gui : Invert = bool(0)
#@gui : sep = separator(), note = note("Filter by <i><a href="https://discuss.pixls.us/u/afre">afre</a></i>. Latest update: <i>2018-05-09</i>.")
fx_hnorm :
  af_hnorm ^ $1
  c 0,$2%
  if $3 negate fi
  n 0,255

af_hnorm:
  repeat $! l[$>]
    +hessian[0] xx +hessian[0] xy +hessian[0] xz +hessian[0] yy +hessian[0] yz hessian[0] zz
    sqr + s c + sqrt
  endl done

fx_hnorm_preview :
  fx_hnorm $*
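For readers who don’t speak G’MIC: af_hnorm takes the second derivatives of the image (the Hessian), squares them, sums over derivatives and channels, and takes the square root. A plain single-channel 2-D sketch in Python (the command above also includes the xz, yz and zz terms for volumetric images; each mixed derivative is counted once, as in the command):

```python
# 2-D sketch of what af_hnorm computes, via central finite differences.
# Single channel, borders clamped; illustrative only.

def hessian_norm(img):
    """sqrt(Ixx^2 + Ixy^2 + Iyy^2) at every pixel."""
    h, w = len(img), len(img[0])
    def at(y, x):                                 # clamp at the borders
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ixx = at(y, x + 1) - 2 * at(y, x) + at(y, x - 1)
            iyy = at(y + 1, x) - 2 * at(y, x) + at(y - 1, x)
            ixy = (at(y + 1, x + 1) - at(y + 1, x - 1)
                   - at(y - 1, x + 1) + at(y - 1, x - 1)) / 4.0
            out[y][x] = (ixx * ixx + ixy * ixy + iyy * iyy) ** 0.5
    return out
```

Flat regions give zero; the response peaks where the intensity bends sharply, which is why it behaves like gradient_norm but reacts to curvature instead of slope.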

Added in your Testing folder.

I cobbled together a filter (for command line ATM, not plugin) for the sunbeam thread. It is kind of buggy and gross but maybe we could salvage something from it. :blush: The convolve becomes more expensive as dimensions increase, so I made the decision to resize, which contributes to the ugliness.

Edit: For those who aren’t familiar with G’MIC scripting or cannot bear to read the crude code, the filter has 5 parameters:

$1 → length of beam (not to scale; larger value means longer beam).
$2 → diagonal direction (choose: 0,1,2,3).
$3,$4,$5 → colour of beam.

beams_test: skip ${1=1},${2=0},${3=104},${4=220},${5=255}
  r={w} r2dx 200,1
  +l
    +l repeat {$1*20}
      +l.
        if {$2==1} shift -1,1
        elif {$2==2} shift -1,-1
        elif {$2==3} shift 1,-1
        else shift 1,1
        fi
      endl
      + c 0,1
    done endl
    100%,100%,100%,100%
    if {$2==1||$2==3} gaussian. 10,1,-45
    else gaussian. 10,1,45
    fi
    convolve.. . rm. + c 0,255
  endl
  l.
    s c *... $3 *.. $4 *. $5 / 255 a c c 10%,100% n 0,255 * 3
  endl
  +f.. min(I)>0?I:I#1 k.
  r2dx $r,1
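Stripped of the G’MIC plumbing, the core of the beam effect is “shift and accumulate”: push the bright pixels one step along a diagonal, merge, and repeat. A hedged Python sketch of just that step (the script above adds and clamps with `+ c 0,1` rather than taking a maximum, which has a similar effect; the oriented blur and convolution are omitted):

```python
# Shift-and-accumulate core of the beam filter, single channel.
# dx, dy pick the diagonal (e.g. 1,1); length is the beam length in steps.

def smear(img, length, dx, dy):
    h, w = len(img), len(img[0])
    acc = [row[:] for row in img]       # running result
    cur = [row[:] for row in img]       # copy being shifted
    for _ in range(length):
        shifted = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    shifted[ny][nx] = cur[y][x]
        cur = shifted
        for y in range(h):
            for x in range(w):
                acc[y][x] = max(acc[y][x], cur[y][x])
    return acc
```

Each highlight becomes a diagonal streak `length` pixels long; everything after that in the script is softening and colouring the streaks.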

The image again for your convenience:


Speaking of blurs:

1 How does one go about blurring in one direction? Making a kernel that blurs in that one direction only? A command that does that, with length, angle and blur amount as parameters, would be nice.

2 Also, how would we deblur motion blur, given length and angle? I might be wrong but it looks like the built-in commands only deal with defocus deblurring.

Command blur_linear ?

Yes, this is missing, but deblurring without artefacts is usually an ill-posed problem, so not so easy to solve.
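For question 1, the kernel view is straightforward: a linear motion blur is convolution with a short line segment of the given length and angle, normalised to sum to 1. An illustrative Python sketch (not what blur_linear literally does internally):

```python
# Build a linear motion-blur kernel: a rasterised line segment of the
# given length (pixels) and angle (degrees), normalised to sum to 1.
import math

def motion_kernel(length, angle_deg, size):
    a = math.radians(angle_deg)
    dx, dy = math.cos(a), math.sin(a)
    k = [[0.0] * size for _ in range(size)]
    c = size // 2                              # kernel centre
    steps = max(1, int(length * 2))            # oversample the segment
    for i in range(steps + 1):
        t = (i / steps - 0.5) * length         # -length/2 .. +length/2
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y][x] = 1.0
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]
```

Deblurring would mean inverting a convolution with this same kernel (e.g. Wiener deconvolution), which is where the ill-posedness mentioned above comes in.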

a. before blur_linear
blur-0


b. after blur_linear
blur-0


c. the goal
GIMP → (c) = masked (b) on top of (a). Easy to see where I masked (b). :blush:
blur-2

I wish to do something like this, but with another shape. What format do I have to use?

gmic shape_cupid 480 +skeleton3d ,

Besides the blur question, I have another that is simpler. If I use split xy,-200, what should I do to reassemble the tiles back in the proper order? Edit: I should add that the tiles are equal in size but the number of tiles, rows and columns are variable.

Sample
tetris

Is it possible to import other shapes?
Is it possible to export in 3D?

You should probably use split yx,-200 instead of split xy,-200. The former first splits the image along y, then each split part along x, which is often the natural order to consider: at the end, your list of images varies first along x, then along y.
Then you can use append_tiles to re-create the full image.

split yx and append_tiles are inverse transformations.
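In plain Python, the yx order and its inverse look like this (equal-sized tiles assumed, as in the question; these helpers are illustrative, not G’MIC’s implementation):

```python
# Split an image into equal tiles, first along y then along x, and
# reassemble them in the same order -- the yx ordering described above.

def split_yx(img, th, tw):
    tiles = []
    for y in range(0, len(img), th):          # first along y...
        for x in range(0, len(img[0]), tw):   # ...then along x
            tiles.append([row[x:x + tw] for row in img[y:y + th]])
    return tiles

def append_tiles(tiles, cols):
    rows = []
    for i in range(0, len(tiles), cols):      # one band of tiles per row
        band = tiles[i:i + cols]
        for r in range(len(band[0])):
            rows.append([v for t in band for v in t[r]])
    return rows

img = [[10 * y + x for x in range(4)] for y in range(4)]
assert append_tiles(split_yx(img, 2, 2), cols=2) == img   # inverse transforms
```

The number of columns is the only bookkeeping needed; with yx order, walking the tile list left-to-right, top-to-bottom reproduces the image.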

I figured it out but thanks for replying.   tetris is a fun little command BTW. :slight_smile:

Command skeleton3d takes a binary image as input, so you can theoretically use any shape you want.
For the export, I’m afraid there are not many possibilities for now.

It doesn’t work. I will go back and try again.

:frowning:

I think I will be able to solve this with iflamate in Inkscape.

Sorry about the edits. I didn’t know what I wanted to ask.

Question 1

Basically, I want to manipulate the luminance data, but it seems that the three methods give different results, especially luminance.

luminance_test:
  sp
  +l[0] ac "b 3",ycbcr_y rgb2ycbcr channels 0 endl
  +l[0] +b 3 blend luminance rgb2ycbcr channels 0 endl
  l[0] b 3 luminance endl

The difference is that the luminance method uses coefficients whereas the YCbCr methods use a matrix transform. Maybe I am wrong that they should be equivalent…
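One concrete reason the results cannot match: luminance uses full-range Rec.709 coefficients, while rgb2ycbcr uses the studio-range BT.601 integer matrix (Y limited to 16..235). A quick Python check, with the numbers copied from the two command definitions:

```python
# Coefficients from G'MIC's `luminance` (full-range Rec.709)...
rec709 = (0.22249, 0.7169, 0.06061)
# ...and the first row of `rgb2ycbcr`'s mix_rgb (integer BT.601).
y_row = (66, 129, 25)

# Normalised, the integer row gives Rec.601 weights, not Rec.709.
weights601 = tuple(v / sum(y_row) for v in y_row)  # ~ (0.300, 0.586, 0.114)

# For pure white (255,255,255): full range vs studio range.
lum = round(sum(c * 255 for c in rec709))                    # 255
y601 = ((66 * 255 + 129 * 255 + 25 * 255 + 128) >> 8) + 16   # 235
```

So even with identical blurs, the Y channel of rgb2ycbcr is a differently weighted, differently scaled quantity from luminance.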

Question 2

I have included the scripts for luminance and rgb2ycbcr below.

#@cli luminance
#@cli : Compute luminance of selected sRGB images.
#@cli : $ image.jpg +luminance
luminance :
  e[^-1] "Compute luminance of image$?."
  v - remove_opacity srgb2rgb
  repeat $! l[$>]
  if {s==3} sh 0 sh[0] 1 sh[0] 2 *[1] 0.22248840 *[2] 0.71690369 *[3] 0.06060791 +[1-3] rm[1]
  elif {s!=1} norm n 0,255
  fi endl done
  channels 0 rgb2srgb v +

#@cli rgb2ycbcr
#@cli : Convert color representation of selected images from RGB to YCbCr.
#@cli : $ image.jpg rgb2ycbcr split c
rgb2ycbcr :
  e[^-1] "Convert color representation of image$? from RGB to YCbCr."
  v - mix_rgb 66,129,25,-38,-74,112,112,-94,-18 + 128 / 256
  repeat $!
    sh[$>] 0 +. 16 rm.
    sh[$>] 1,2 +. 128 rm.
  done v +

I would like to adapt both to reflect the luminance of Rec.2020 D50. I might be doing it wrong, so I need people to confirm. This is my version of luminance:

Y50_: skip ${1=0}
  sh 0 sh[0] 1 sh[0] 2
  if $1 *[1] 0.27904 *[2] 0.67535 *[3] 0.04561 # Rec.2020
  else *[1] 0.22249 *[2] 0.7169 *[3] 0.06061   # Rec.709
  fi
  +[1-3] rm[1] channels 0

My math isn’t good, so I settled for the values found in Elle’s Rec2020-elle-V4-rec709.icc and sRGB-elle-V4-srgbtrc.icc. Since I am working in linear gamma, I don’t need srgb2rgb and rgb2srgb. How do I do this to rgb2ycbcr and ycbcr2rgb? @jdc

Hello @afre

I don’t know and I don’t use GMIC :slight_smile:

jacques


Sorry, my last post was all over the place. I have pared it down to sanity levels. (After 6 edits. :blush:)

@jdc I pinged you regarding the second question because I think you may be able to help me figure out how to do the transformations between RGB and YCbCr.

@afre
Sorry, but I had not understood.

For this transformation, there are many formulas, all of them approximate, because we are in RGB and this does not take the working space (sRGB, ProPhoto, …) into account.
Here is the one I used in my adaptation of auto white balance,
“Robust automatic WB algorithm using grey colour points in Images”, in the branch “autowblocal”:
float Y0 = 0.299f * rl + 0.587f * gl + 0.114f * bl;
float U0 = -0.14713f * rl - 0.28886f * gl + 0.436f * bl;
float V0 = 0.615f * rl - 0.51498f * gl - 0.10001f * bl;

But if you want a transformation that is good in all cases, you must use xyY instead of YCbCr.
You can make a transformation like this one:

void Color::rgbxyY(float r, float g, float b, float &x, float &y, float &Y, const double xyz_rgb[3][3])
{
    float xx = xyz_rgb[0][0] * r + xyz_rgb[0][1] * g + xyz_rgb[0][2] * b;
    float yy = xyz_rgb[1][0] * r + xyz_rgb[1][1] * g + xyz_rgb[1][2] * b;
    float zz = xyz_rgb[2][0] * r + xyz_rgb[2][1] * g + xyz_rgb[2][2] * b;
    float som = xx + yy + zz;
    x = xx / som;      // x chromaticity
    y = yy / som;      // y chromaticity
    Y = yy;            // luminance is Y itself
}
where xyz_rgb[3][3] is the transformation matrix associated with the working space.
At the end:
x ==> red channel
y ==> blue channel
Y ==> Luminance
All values are between 0 and 1, which maps onto the CIE 1931 diagram.
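The same transform reads naturally in Python; here the widely published sRGB→XYZ (D65) matrix stands in for the working-space matrix, purely for illustration:

```python
# RGB -> xyY using a working-space RGB->XYZ matrix.
# The standard sRGB (D65) matrix below is only an example; substitute
# the matrix of your actual working space.

XYZ_RGB = [[0.4124564, 0.3575761, 0.1804375],
           [0.2126729, 0.7151522, 0.0721750],
           [0.0193339, 0.1191920, 0.9503041]]

def rgb_to_xyY(r, g, b, m=XYZ_RGB):
    xx, yy, zz = (row[0] * r + row[1] * g + row[2] * b for row in m)
    som = xx + yy + zz
    return xx / som, yy / som, yy   # x, y chromaticities and luminance Y
```

For white (1, 1, 1) this lands on the D65 white point, x ≈ 0.3127, y ≈ 0.3290, with Y ≈ 1.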

I used this transformation for example in “ItcWB” (Iterative temperature correlation white balance, also in “autowblocal”); for this procedure (Itcwb), I put a copyright.

Jacques

A small complement.
All RGB luminance formulas correspond to the second row of the RGB / XYZ matrix,
e.g. for Aces_p1: {0.284448, 0.671758, 0.043794}
and for sRGB: {0.2225045, 0.7168786, 0.0606169}

The formula with 0.299f, 0.587f, 0.114f is a “median” formula often used to cover all working spaces… but obviously, it is never quite right :slight_smile:
