G'MIC exercises

Besides the blur question, I have another that is simpler. If I use split xy,-200, what should I do to reassemble the tiles back in the proper order? Edit: I should add that the tiles are equal in size, but the number of tiles, rows and columns varies.

[sample image: tetris]

Is it possible to import other shapes?
Is it possible to export in 3D?

You should probably use split yx,-200 instead of split xy,-200. The former first splits the image along y, then each part along x, which is often the natural order to consider: at the end, your list of images varies first along x, then along y.
Then you can use append_tiles to re-create the full image.

split yx and append_tiles are inverse transformations.

I figured it out but thanks for replying.   tetris is a fun little command BTW. :slight_smile:

Command skeleton3d takes a binary image as an input, so you can theoretically use any shape you want.
For the export, I’m afraid that for now, there are not many possibilities.

It doesn’t work. I’m going to go back and try again.

:frowning:

I think that I will be able to solve this with iflamate in Inkscape.

Sorry about the edits. I didn’t know what I wanted to ask.

Question 1

Basically, I want to manipulate the luminance data, but it seems that the three methods give different results, especially luminance.

luminance_test:
  sp
  +l[0] ac "b 3",ycbcr_y rgb2ycbcr channels 0 endl
  +l[0] +b 3 blend luminance rgb2ycbcr channels 0 endl
  l[0] b 3 luminance endl

The difference is that the luminance method uses coefficients whereas the YCbCr methods use a matrix transform. Maybe I am wrong that they should be equivalent…
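
For reference, here is a minimal sketch (plain C++, not G'MIC) that evaluates both weightings on one pixel, using only the coefficients that appear in the two scripts quoted under Question 2. The pixel value is arbitrary, and the srgb2rgb linearization that luminance performs first is ignored here.

#include <cstdio>

int main()
{
    // Example 8-bit RGB pixel, chosen arbitrarily for illustration.
    const double r = 180.0, g = 120.0, b = 60.0;

    // Weights used by 'luminance' (after srgb2rgb), taken from its script below.
    const double y_lum = 0.22248840 * r + 0.71690369 * g + 0.06060791 * b;

    // Y row of 'rgb2ycbcr': mix_rgb 66,129,25 then +128, /256, +16 offset.
    const double y_cbcr = (66.0 * r + 129.0 * g + 25.0 * b + 128.0) / 256.0 + 16.0;

    std::printf("luminance weights : Y = %.3f\n", y_lum);
    std::printf("rgb2ycbcr Y row   : Y = %.3f\n", y_cbcr);
    // Effective weights: 0.2225/0.7169/0.0606 vs 0.2578/0.5039/0.0977 (plus the +16 offset),
    // so the two results are not expected to match exactly.
    return 0;
}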

Question 2

I have included the scripts for luminance and rgb2ycbcr below.

#@cli luminance
#@cli : Compute luminance of selected sRGB images.
#@cli : $ image.jpg +luminance
luminance :
  e[^-1] "Compute luminance of image$?."
  v - remove_opacity srgb2rgb
  repeat $! l[$>]
  if {s==3} sh 0 sh[0] 1 sh[0] 2 *[1] 0.22248840 *[2] 0.71690369 *[3] 0.06060791 +[1-3] rm[1]
  elif {s!=1} norm n 0,255
  fi endl done
  channels 0 rgb2srgb v +

#@cli rgb2ycbcr
#@cli : Convert color representation of selected images from RGB to YCbCr.
#@cli : $ image.jpg rgb2ycbcr split c
rgb2ycbcr :
  e[^-1] "Convert color representation of image$? from RGB to YCbCr."
  v - mix_rgb 66,129,25,-38,-74,112,112,-94,-18 + 128 / 256
  repeat $!
    sh[$>] 0 +. 16 rm.
    sh[$>] 1,2 +. 128 rm.
  done v +

I would like to adapt both to reflect the luminance of Rec.2020 D50. I might be doing it wrong, so I need people to confirm. This is my version of luminance:

Y50_: skip ${1=0}
  sh 0 sh[0] 1 sh[0] 2
  if $1 *[1] 0.27904 *[2] 0.67535 *[3] 0.04561 # Rec.2020
  else *[1] 0.22249 *[2] 0.7169 *[3] 0.06061   # Rec.709
  fi
  +[1-3] rm[1] channels 0

My math isn’t good, so I settled for the values found in Elle’s Rec2020-elle-V4-rec709.icc and sRGB-elle-V4-srgbtrc.icc. Since I am working in linear gamma, I don’t need srgb2rgb and rgb2srgb. How do I do this to rgb2ycbcr and ycbcr2rgb? @jdc
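
One quick sanity check, which does not settle the colorimetric question but catches copying mistakes: each set of weights should sum to 1, since the second row of an RGB-to-XYZ matrix sums to the white point’s Y, which is normalized to 1. A minimal sketch in plain C++, using the values from Y50_ above:

#include <cstdio>

int main()
{
    // Weights from the Y50_ command above; each triple should sum to ~1.
    const double rec2020_d50[3] = {0.27904, 0.67535, 0.04561};
    const double rec709_d50[3]  = {0.22249, 0.7169,  0.06061};

    std::printf("Rec.2020 D50 sum = %.5f\n", rec2020_d50[0] + rec2020_d50[1] + rec2020_d50[2]);
    std::printf("Rec.709 sum      = %.5f\n", rec709_d50[0] + rec709_d50[1] + rec709_d50[2]);
    return 0;
}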

Hello @afre

I don’t know and I don’t use GMIC :slight_smile:

jacques



Sorry, my last post was all over the place. I have pared it down to sanity levels. (After 6 edits. :blush:)

@jdc I pinged you regarding the second question because I think you may be able to help me figure out how to do the transformations between RGB and YCbCr.

@afre
Sorry, but I had not understood.

For this transformation, there are many formulas, which are all approximate, because we are in RGB, and therefore this does not take into account the working space (sRGB, ProPhoto, …).
Here is the one I used in my adaptation of auto white balance, “Robust automatic WB algorithm using grey colour points in Images”, in the branch “autowblocal”:
float Y0 = 0.299f * rl + 0.587f * gl + 0.114f * bl;
float U0 = -0.14713f * rl - 0.28886f * gl + 0.436f * bl;
float V0 = 0.615f * rl - 0.51498f * gl - 0.10001f * bl;

But if you want a transformation that is good in all cases, you must use xyY instead of YCbCr.
You can make a transformation like this one:

void Color::rgbxyY(float r, float g, float b, float &x, float &y, float &Y, const double xyz_rgb[3][3])
{
    float xx = xyz_rgb[0][0] * r + xyz_rgb[0][1] * g + xyz_rgb[0][2] * b;
    float yy = xyz_rgb[1][0] * r + xyz_rgb[1][1] * g + xyz_rgb[1][2] * b;
    float zz = xyz_rgb[2][0] * r + xyz_rgb[2][1] * g + xyz_rgb[2][2] * b;
    float som = xx + yy + zz;
    x = xx / som;
    y = yy / som;
    Y = zz / som;
}
where xyz_rgb[3][3] is the transformation matrix associated with the working space.
At the end:
x ==> red channel
y ==> blue channel
Y ==> luminance
Everything is between 0 and 1, which allows use of the CIE 1931 diagram.

I used this transformation, for example, in “ItcWB” (iterative temperature correlation white balance), also in “autowblocal”; for this procedure (Itcwb), I put a copyright on it.

Jacques

A small complement.
All RGB luminance formulas correspond to the second row of the RGB-to-XYZ matrix,
e.g. for Aces_p1: {0.284448, 0.671758, 0.043794}
for sRGB: {0.2225045, 0.7168786, 0.0606169}

The formula with 0.299f, 0.587f, 0.114f is a “median” formula often used to take all working spaces into account… but obviously, it’s never quite right :slight_smile:
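
As a tiny illustration of that point, the sketch below (plain C++) computes the luminance of the same linear RGB triple with each of the two rows quoted above; only the working space changes, and so does the result.

#include <cstdio>

// Luminance = dot product of the linear RGB triple with the second row
// of the working space's RGB-to-XYZ matrix.
static double luminance(const double row[3], double r, double g, double b)
{
    return row[0] * r + row[1] * g + row[2] * b;
}

int main()
{
    const double aces_p1[3] = {0.284448, 0.671758, 0.043794};     // row quoted above
    const double srgb[3]    = {0.2225045, 0.7168786, 0.0606169};  // row quoted above
    const double r = 0.8, g = 0.4, b = 0.2;                       // arbitrary linear RGB

    std::printf("Y (ACES P1) = %.6f\n", luminance(aces_p1, r, g, b));
    std::printf("Y (sRGB)    = %.6f\n", luminance(srgb, r, g, b));
    return 0;
}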


This is what I wanted to know. In other words, Y0 depends on the type of linear RGB being used. U0 and V0 stay the same. Is that correct?

PS According to Wikipedia, the coefficients for U and V differ as well. I guess I need to determine the matrices for Rec.709 and Rec.2020…
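
In case it helps with determining those matrices: in the standard YCbCr construction, the Cb and Cr rows follow directly from the luma coefficients, so once KR and KB are fixed the rest is mechanical. This is the generic textbook formula, not something taken from this thread; the example call uses the standard Rec.709 luma coefficients as placeholders.

#include <cstdio>

// Build full-range YCbCr from luma coefficients kr, kb (kg = 1 - kr - kb):
//   Y  = kr*R + kg*G + kb*B
//   Cb = 0.5 * (B - Y) / (1 - kb)
//   Cr = 0.5 * (R - Y) / (1 - kr)
static void rgb_to_ycbcr(double kr, double kb, double r, double g, double b,
                         double &y, double &cb, double &cr)
{
    const double kg = 1.0 - kr - kb;
    y  = kr * r + kg * g + kb * b;
    cb = 0.5 * (b - y) / (1.0 - kb);
    cr = 0.5 * (r - y) / (1.0 - kr);
}

int main()
{
    double y, cb, cr;
    // Rec.709 luma coefficients (kr = 0.2126, kb = 0.0722) as an example;
    // other coefficient sets can be substituted the same way.
    rgb_to_ycbcr(0.2126, 0.0722, 0.5, 0.3, 0.1, y, cb, cr);
    std::printf("Y=%.4f Cb=%.4f Cr=%.4f\n", y, cb, cr);
    return 0;
}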

PPS @David_Tschumperle I noticed that YUV conversions have a division and multiplication by 255 while YIQ ones don’t. Is this a bug?

It’s the same thing; the coefficients are average values.
The only way to have correct coefficients is to use the xyY transformation (or its derivatives: Lab, …).

The XYZ values in the matrix are calculated from the working-space primaries and the white point. So they will be different, for example, for Rec2020 in D65 and Rec2020 in D50, and of course for Rec2020 and sRGB, etc.
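
As a concrete illustration of that white-point dependence (the D65 row below is the standard BT.2020 one, added here as an outside reference and not taken from this thread; the D50 row is the one read from Elle's profile earlier):

#include <cstdio>

int main()
{
    // Luminance (second) row of the RGB-to-XYZ matrix for Rec.2020 with two white points.
    const double rec2020_d65[3] = {0.2627,  0.6780,  0.0593};   // standard BT.2020 values (D65), outside reference
    const double rec2020_d50[3] = {0.27904, 0.67535, 0.04561};  // D50-adapted row quoted earlier in the thread
    std::printf("D65: %.4f %.4f %.4f\n", rec2020_d65[0], rec2020_d65[1], rec2020_d65[2]);
    std::printf("D50: %.5f %.5f %.5f\n", rec2020_d50[0], rec2020_d50[1], rec2020_d50[2]);
    return 0;
}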

You can find many matrices in the branch “testoutputprofile” (RawTherapee), in the file iccmatrices.h.

I will be away for about a week

jacques

I don’t remember why, but that’s the range that was decided for these colorspaces.
A YUV with a [0,255] range is YCbCr instead.

I am not at home.
But there is a mistake in Y = zz / som.
It should be Y = yy / 65535.
Jacques
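
Putting that correction together with the function quoted above gives something like the following sketch (assuming the intended variable is yy and RawTherapee’s 0..65535 channel range; the D50-adapted sRGB matrix in main() is Bruce Lindbloom’s, added here only as an illustration, not taken from this thread):

#include <cstdio>

// Corrected version of the rgbxyY sketch above, applying Jacques' fix:
// Y is the second (yy) tristimulus value, normalized by the 0..65535
// channel range, instead of zz / som.
void rgbxyY(float r, float g, float b, float &x, float &y, float &Y,
            const double xyz_rgb[3][3])
{
    const float xx = xyz_rgb[0][0] * r + xyz_rgb[0][1] * g + xyz_rgb[0][2] * b;
    const float yy = xyz_rgb[1][0] * r + xyz_rgb[1][1] * g + xyz_rgb[1][2] * b;
    const float zz = xyz_rgb[2][0] * r + xyz_rgb[2][1] * g + xyz_rgb[2][2] * b;
    const float som = xx + yy + zz;
    x = xx / som;        // chromaticity x
    y = yy / som;        // chromaticity y
    Y = yy / 65535.f;    // luminance, scaled to 0..1 for 16-bit data
}

int main()
{
    // D50-adapted sRGB -> XYZ matrix (Bruce Lindbloom values); its second
    // row is the one quoted earlier in the thread.
    const double srgb_d50[3][3] = {
        {0.4360747, 0.3850649, 0.1430804},
        {0.2225045, 0.7168786, 0.0606169},
        {0.0139322, 0.0971045, 0.7141733}
    };
    float x, y, Y;
    rgbxyY(65535.f, 65535.f, 65535.f, x, y, Y, srgb_d50);  // pure white
    std::printf("x=%.4f y=%.4f Y=%.4f\n", x, y, Y);        // expect D50 white chromaticity, Y = 1
    return 0;
}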

How can I see the code of “pack_sprites” in G'MIC?

In file ‘https://raw.githubusercontent.com/dtschump/gmic/master/src/gmic_stdlib.gmic’, searching for the string pack_sprites : will show you the code of the corresponding command.

I forget if I already asked: what is the difference between .gmz, .cimg and .cimgz? I always assumed that z means compressed and that .gmz and .cimgz were the same…

Yes, the z means the format is compressed (lossless compression).
Basically, the .gmz format is equivalent to .cimgz, but it also stores the names of the images in the list
(technically, a .gmz is actually a .cimgz with a last image encoding the image names).

Going back to my original question on blurs, I could adapt blur_linear to be asymmetrical by masking out the part of the kernel that I don’t need. (I did that on the image but not the kernel in post #78.)

[image: blurSE]

Now, I just need to figure out how to chop the kernel at any angle.


I still need guidance on where to begin exploring de-blurring such a single-direction blur. I have a very unsteady hand when taking photos. However complex the camera shake is in 3d and temporal space, I often find that the blur is prominent in one direction.

Edit: I just noticed deconvolve_fft. That should work on a blur_linear blur but not on a single-direction one.