G'MIC exercises

So now that I’ve got working forward and backward transforms, how will I use all this data? It’s not as simple as a colour-space transform, because CIECAM02 is really an appearance model which specifies LMS colour spaces and meaningful correlates under given viewing conditions. In fact, what I’ve got is a slight generalisation of it, because I haven’t assumed that the luminances of the illuminant and reference white are the same! So here’s what I could do:

  • Make a command which acts on a few correlates.
  • Make conversion commands which somehow store the input model parameters and output LMS images.
  • Build a command which converts RGB to three of the seven correlates (one combination from Jch, JMh, JcH, JMH, Jab, Qab, Qch, QMh, QcH, QMH) and also stores the other input model parameters in some way; a rough sketch of that storage idea follows this list.
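
For the storage idea, something like this rough sketch could work; the file name and the two parameter values (think of an adapting luminance and a background factor) are just placeholders:

input sample.png      # hypothetical RGB input
100%,100%,100%,2      # same-size image with 2 extra channels
fill. "[318.31,20]"   # placeholder per-pixel model parameters
append c              # one 5-channel image: RGB plus parameters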

Is it true that G’MIC can have up to 9 channels per image? I might need more for the last two options. That would mean I can use the same image without changing its name to store all variables and also use different illuminants, surround ratios and more for each pixel…

I think memory is the limiting factor for the number of channels or slices.
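
A quick check suggests nothing special happens beyond 9 channels (the numbers here are arbitrary):

400,300,1,64          # a 400x300 image with 64 channels
echo channels={0,s}   # reports 64
rm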

I suggest making smaller commands, as David has done in the stdlib. There’s no need to calculate all the correlates every time, just the ones the user wants and those necessary for the calculations.

Also refer to @jdc’s work on RT. I can’t take a screenshot, as the module panel is extensive with collapsible sections. Also read the RawPedia entry on it (translated).

Looks like I’m back to this thread.

Now, I would like to make a map that estimates noise level based on local frequency. After generating a Chirikov map and then anti-aliasing it, I noticed that areas which are quite noisy are affected in a way that I don’t want to see. The anti-aliasing should target areas with less noise at a local level, rather than the whole image.

You can see a sample here:

[image: anti-aliased Chirikov map]

See those swirls around the main rotor? I don’t want to see that.

Use variance_patch or similar to make a mask. E.g. variance_patch 15 yields

[image: mask produced by variance_patch 15]
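
A rough sketch of the masked version could look like this; the file name, patch size and threshold are placeholders, and blur 1 merely stands in for the actual anti-aliasing step:

input sample.png             # hypothetical input
+variance_patch. 15 le. 20%  # [1]: 1 where local variance is low
+blur[0] 1                   # [2]: smoothed copy (stand-in for anti-aliasing)
image[0] [2],0,0,0,0,1,[1]   # paste [2] over [0] only where the mask is set
keep[0]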


My question comes after the details here; first I just want to explain what I plan to do and why.

Ok, now I think I want to do my own extended version of this. I don’t get it entirely though. @garagecoder may have more info, and I read that David knows too.

Before introducing the EDR, some symmetries must be shown. In the image below, it can be seen that the pixel E has a four-edge symmetry, so that any rules applied to an edge must be applied to the others.

This part confuses me. How am I supposed to apply the rules to the other edges? Max difference? The tutorial doesn’t explain it, hence why I pinged @garagecoder even though he’s probably not here.

I know xBR in G’MIC exists, but I wanted to write easier-to-understand code for xBR as part of the pixel art rescaling project.

It means the algorithm is defined for a single edge. I think if there is an intersection, or two or more edges are present, then you would have to somehow repeat it for each edge respectively. (Mad respect for Hyllian and his game interpolation contributions.)

The overlapping of edges seems to be handled by if-then statements at various levels, depending on the type of overlap and the strength and complexity of the individual edges.

You could of course ask him yourself on that forum. Last time I checked (years ago), he had his own web page too, with contact info.

It seems that I have to use the max edge as part of the rule. find(vector,max(vector),0,vectorsize) seems like the way I can get the index of the largest value within the vector. I will have to ask Hyllian to confirm.
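
As a quick sanity check with made-up values (if I remember right, the optional arguments of find() are just a starting index and a search step, so the two-argument form should be enough):

eval "V=[3,9,5,1];print(find(V,max(V)))"   # prints 1, the index of the largest value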

An argmax of differences or distances could also be useful in this case. I am an unpaid front-line volunteer now, so I won’t have time or grey matter firing properly for a while (until the COVID lockdown ends). All the best for your unending quest of porting filters to G’MIC.

I’m confused about a couple of lines in #@cli linethick, particularly these:

n  = [-dP[1],dP[0]]/max(1e-8,norm(dP))*th/2;
round([ P0 - n, P0 + n, P1 + n, P1 - n ]);

If I’m reading that right, there are 8 values (four 2-D points) inside the round block, right?

Yes. I wonder why David didn’t use line. I guess polygon predates it.
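
For what it’s worth, running the same computation on made-up values makes the geometry easier to see: n is the half-thickness normal to the segment P0→P1, and the rounded concatenation is the 8 coordinates of the 4 corners of the thick-line quadrilateral that polygon then fills.

eval "P0=[10,10];P1=[50,30];th=4;dP=P1-P0;n=[-dP[1],dP[0]]/max(1e-8,norm(dP))*th/2;print(round([P0-n,P0+n,P1+n,P1-n]))"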

Thanks. Also, I think I found a way to get a 3D Bresenham algorithm: Bresenham's Algorithm for 3-D Line Drawing - GeeksforGeeks

So, with this, I can actually finish a filter, as I wanted 3D capability. Not that I will necessarily use it, but it opens up possibilities.
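
To be clear, the sketch below is not Bresenham’s algorithm, just a quick DDA-style stand-in with made-up endpoints, to check that voxels along a 3-D segment can be set from the math parser:

64,64,64,1
eval "P0=[5,10,3];P1=[60,40,55];n=max(abs(P1-P0))+1;for(k=0,k<n,++k,P=round(P0+(P1-P0)*k/max(1,n-1));i(#0,P[0],P[1],P[2],0)=255)"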

After discovering that convolve does indeed work in 3 dimensions, I sort of want an easier way of doing this.

(-1,3,-1) l. s x a z endl convolve.. . rm.

I’m sure (-1,3,-1) can be rewritten so that I don’t have to use the l. ... endl section of this code. It seems I would otherwise need to take the longer route, but I would really like to know if there’s a character that can be used to define depth, like (^) for channels or (;) for rows.

Also, if you’re wondering: I realised this can be used to improve tiled forms, adding visibility by convolving along z within the tile colour reference image, which is a small 3D image. I tested it using the code above, and it really does make the form more visible while still preserving the colour.

EDIT: I got it now.

(-1/3/-1) convolve.. . rm.

Yes, that’s it !
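
For reference, the four value separators in an input string, each giving a 3-tap kernel along a different axis:

(-1,3,-1)   # 3x1x1x1: ',' separates columns (x)
(-1;3;-1)   # 1x3x1x1: ';' separates rows (y)
(-1/3/-1)   # 1x1x3x1: '/' separates slices (z)
(-1^3^-1)   # 1x1x1x3: '^' separates channels (c)
rm          # discard the demo kernels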


Sorry for the very late reply @Reptorian

Did you get anywhere with xBR? I have very little knowledge of the algorithm; the G’MIC filter I created could be considered a dumb conversion of C# code, and it required no real understanding. I’m certain there is both a better description of the algorithm and a better implementation! I wouldn’t mind having another look if you haven’t already.

Honestly, I haven’t gotten anywhere, but I did release some other pixel-art scaling algorithm for G’MIC here somewhere. It’s the harder one that I had not managed to replicate.

That sounds like I should look at it then :smiley:

One thing I notice immediately is the strange “distance function”. While I don’t really want to start poking holes in it, the conversion of an RGB difference to YCbCr looks suspect.


What have I been thinking about lately? I typically process an image and then blend back the pixels that I didn’t want to change, or didn’t want to change as much. This is inefficient. If I could process only the neighbourhoods that matter, I could chop off a huge chunk of processing time. I guess I would need to use the math processor for that, viz. eval. I am still not good at coding or picturing what I need to do code-wise to make it work. I am a person of ideas, but implementation is not my strong suit.

@afre You could try using if(condition_met,code_block,I) in the context of fill.

I means the original image pixel (the vector of channel values at the current position).
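
A minimal sketch of that pattern, assuming a hypothetical sample.png; the condition and the “processing” (halving the pixel) are placeholders for whatever the real operation would be:

input sample.png
fill "if(norm(I)>128,I*0.5,I)"   # process only where the condition holds, keep I elsewhere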

@garagecoder is better than me at doing those. He did a blur based on image value, so that could help.

I am pretty sure it filters the whole image n times and is slowed down by storing and overlaying. This is untenable for slow algorithms. By slow, I mean any algorithm I usually experiment with.

If you have an example, we’re ready for it :slight_smile: