# help: Can I get the binarized edges?

I get quite a similar result with:

``````
foo :
repeat inf {
+label[0] 0,1 area. 0,1 le. 1 # [1] = mask of transition points
+f[0] "
const boundary = 1;
i(#1)?(
!j(#1,-1)?(col = J(-1)):
!j(#1,1)?(col = J(1)):
!j(#1,0,-1)?(col = J(0,-1)):
!j(#1,0,1)?(col = J(0,1)):
(col = I);
col;
):I"
rm..
-[0] . norm[0] iM={0,iM} rm[0]
if !$iM break fi
}
``````
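If I'm reading the loop right, it repeatedly replaces isolated "transition" pixels with the color of the first non-masked 4-neighbor until nothing changes. A rough NumPy sketch of one propagation pass (single-channel for brevity; the helper name is mine, not part of the actual command):

```python
import numpy as np

def erode_transitions(img, mask):
    """One pass: replace each masked pixel with the value of the first
    unmasked 4-neighbor (left, right, up, down), if any exists."""
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    out[y, x] = img[ny, nx]
                    break
    return out
```

Iterating this until the image stops changing, and recomputing the mask each pass, mirrors the `repeat inf { ... if !$iM break fi }` structure above.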

@David_Tschumperle Tested your algorithm with my test image code. It turns out there are a lot of artifacts in the case of a blurry image, even when large color blobs exist.

I’m starting to think both algorithms work for two different but related “kinds” of images. I just wish there were a universal solution, but there might not be.

EDIT: Maybe AI exploration for this?

The documentation claims:

> Each pixel is replaced by the most frequent color in a circular neighborhood whose width is specified with radius.

However, looking at the code, I don’t think that is accurate. The neighborhood appears to be a sliding square window of the given radius, so with radius `1`, the window is `3x3`. And the colour frequencies seem to be determined by their intensity (e.g. luminance) only, so if all pixels have the same intensity, the result will be wrong.
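A minimal NumPy sketch of that reading (square `(2r+1)×(2r+1)` window, frequency decided by a single scalar intensity; the function name is mine). Two distinct colours with equal intensity would fall into the same bin here, which is exactly the failure mode described:

```python
import numpy as np

def mode_filter_by_intensity(intensity, radius=1):
    """Replace each pixel with the most frequent intensity in a sliding
    (2*radius+1) square window. If two distinct *colors* share the same
    intensity, this cannot tell them apart."""
    h, w = intensity.shape
    out = np.empty_like(intensity)
    pad = np.pad(intensity, radius, mode="edge")  # clamp at borders
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1].ravel()
            vals, counts = np.unique(win, return_counts=True)
            out[y, x] = vals[np.argmax(counts)]
    return out
```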


`fx_vector_painting` shows curious promise:

``````
gmic davidblob.png fx_vector_painting. 9.75
``````

And, for reference:

``````
fx_vector_painting :
foreach {
split_opacity l[0] {
+luminance b. {10-$1}%,1,1
f. "dmax = -1; nmax = 0;
for (n = 0, ++n<=8,
p = arg(n,-1,0,1,-1,1,-1,0,1);
q = arg(n,-1,-1,-1,0,0,1,1,1);
d = (j(p,q,0,0,0,1) - i)^2;
d>dmax?(dmax = d; nmax = n):nmax;
)"
blend shapeaverage
}
a c
}
``````
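As far as I can tell, the math expression scans the eight neighbours of the blurred luminance and keeps the index of the one that differs most from the centre (largest squared difference), i.e. a per-pixel dominant direction. A NumPy sketch of that inner search, using the same neighbour tables as the `arg(...)` calls (the function name is mine):

```python
import numpy as np

# Same neighbor offset tables as the arg(n, ...) calls in the filter:
P = [-1, 0, 1, -1, 1, -1, 0, 1]
Q = [-1, -1, -1, 0, 0, 1, 1, 1]

def dominant_direction(lum, y, x):
    """Return the 1-based index (1..8) of the 8-neighbor whose blurred
    luminance differs most (in squared difference) from the center."""
    h, w = lum.shape
    dmax, nmax = -1.0, 0
    for n in range(8):
        ny = min(max(y + Q[n], 0), h - 1)  # clamp at borders,
        nx = min(max(x + P[n], 0), w - 1)  # like j(...)'s boundary mode
        d = (lum[ny, nx] - lum[y, x]) ** 2
        if d > dmax:
            dmax, nmax = d, n + 1
    return nmax
```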

The directional mask leaves faint artifacts. Track that down and this could be a winner.


My own filter, which I had pushed into gmic-community, gives me this:

If I can figure out how to extract those pesky out-of-place pixels as a single-channel mask, I believe this can be solved.

Also, my filter works for images with larger blur, too.

EDIT: Getting there.

Now I have to do the indexing in-place via the math parser. That’s not the fun part.

Current code:

``````
#@cli rep_color_region: _threshold[%]>0
#@cli : For use in images with large color blobs, this simplifies an image.
rep_color_region:
skip ${1=25%},${2=2%}
threshold={cut($1,0,1)}
foreach {
+round
+colormap. 0,,0
+index[-2] [-1],0,0
+area. 0,0
ge. {ia*$threshold}
*.. .
negate.
-.. .
rm.
colormap. 0,,1
crop. 1,100%
map. ..
rm..
index.. .,0,1
rm. rv

100%,100%,100%,1

1,1,1,2

eval[0] >"
begin(
off_place=[-1,1];
);

pixel_in_place=0;

repeat(4,position,
xp=off_place[position&1];
yp=off_place[position>>1];
pixel_in_place=(I==J(xp,0,0,0,1))&&(I==J(0,yp,0,0,1));
if(pixel_in_place,break(););
);

if(!pixel_in_place,
pixel_position=[x,y];
i(#-2,pixel_position)=1;
da_push(#-1,pixel_position);
);

I;
"
area_fg.. 0,0
inrange.. 0,{iM*$2},0,1

eval "
size_da=da_size(#-1);
point=size_da-1;
repeat(size_da,
if(!i(#-2,I[#-1,point]),da_remove(#-1,point););
--point;
);
da_freeze(#-1);
"
}
``````
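For anyone following along, here is my reading of the first `eval` pass as a NumPy sketch: a pixel counts as “in place” when it matches both members of at least one (horizontal, vertical) neighbour pair among the four corner combinations; everything else gets flagged as stray (the function name is mine, borders clamped like `J(...,1)`):

```python
import numpy as np

def stray_mask(img):
    """1 where a pixel matches NO (horizontal, vertical) neighbor pair
    among the four corner combinations, 0 elsewhere."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    off = (-1, 1)
    for y in range(h):
        for x in range(w):
            in_place = False
            for pos in range(4):
                xp = off[pos & 1]       # same bit tricks as the eval
                yp = off[pos >> 1]
                nx = min(max(x + xp, 0), w - 1)  # clamp, like J(...,1)
                ny = min(max(y + yp, 0), h - 1)
                if img[y, x] == img[y, nx] and img[y, x] == img[ny, x]:
                    in_place = True
                    break
            if not in_place:
                mask[y, x] = 1
    return mask
```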

The last thing to add is an `eval` that does the in-place indexing, using surrounding pixels that are not part of the mask as color references via the 1D strip image.

Also, I’d like to allow it to utilize 3 dimensions, but I’ll do that much later.

That being said, even when all of this is done, some human intervention will still be needed. Not a big deal, though.


I’m pretty sure that by using something more clever than `shapeaverage` (finding the predominant color, as I did in one of my previous tries), this should work even better.
I’ll give it a try today, hopefully.

Yes, I think so.

Perhaps for @makkkraid and others, the solution can be deployed as an updated G’MIC-qt Vector Paint filter with an extra ‘preserve colors’ tick-box or some such, one to switch between the off-the-shelf `shapeaverage` blend mode and the color managing custom blend mode.

On the other hand, perhaps the custom blend mode can accommodate all current use cases as well, making the switch unnecessary. Vector Paint has a lovely, minimalist UI.

You won’t see me much in this play-pen for the next few days. Life calls. I’ve enjoyed reading the code developments and evolution, particularly @Reptorian’s. There’s raw tutorial stuff here.

I’m afraid I really can’t understand the technical replies.

The only thing I can do is just report back:
@Reptorian’s new Color Region filter worked on my actual texture example.

Thank you all, wise men.


Looks like I managed to pull it off:

Left: Input.png
Middle: Without Stray Threshold
Right: With Stray Threshold

Some artifacts remain, though it’s up to the user to fix them manually. Two iterations in two different layers with different masks will do that job. The second parameter I added determines which stray pixels get indexed, so areas with a thickness of 1 will be used to paint another color in the surrounding region.

Time for me to push the change. Pushed. The final thing to do is to give users the option to reduce the reference image size for the analysis of the dominant color.

Edit: There’s probably still a way to remove some detection of stray pixels to further reduce artifacts with the stray threshold.

EDIT:

Finally, I have solved it!

Erm, stray pixel removal seems to have issues on the example image provided by @makkkraid. I don’t know how to deal with that, but the idea of using a smaller image to create the palette colors as an option might help solve this issue. It works on my gradient blob and davidblob cases.

EDIT:

By reducing the number of adjacent found colors, I was able to make the example image provided by the mentioned user work really well.


I have not read the whole thread, so apologies if someone already mentioned this, but if you just downsize your original and then upsize it with no smoothing (interpolation disabled), you can get quite steppy results. Simple and effective.
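For reference, the downsize-then-upsize trick can be sketched in NumPy like this (nearest-neighbor both ways, integer factor assumed; the function name is mine):

```python
import numpy as np

def posterize_by_resampling(img, factor=4):
    """Downsample by taking every `factor`-th pixel, then upsample with
    nearest-neighbor (pixel repetition) -- deliberately blocky."""
    small = img[::factor, ::factor]
    big = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return big[:img.shape[0], :img.shape[1]]  # crop back to input size
```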

You lose a lot of precision that way. The point is to simplify images that are already somewhat simple down to the most basic level, with minimal loss of detail. I do think there may be an iterative technique to build from that.

Yeah; just tried that, Reptorian. I got some alright results using ministick without relief, and also with quantized color reduction, but both of those methods introduced artifacting as well. I’ll have to think harder on this one, but it looks like you got some good results, Reptorian.

As an aside, this is the best I could do with quantized color reduction on a top layer set to ‘value’.

Also, GIMP used to have a max-colors preset, but it looks like this was removed for whatever reason. I believe with max colors, you could get rid of the transitional colors. We may never know.

edit:

Surprisingly, Bilateral Smoothing gave a pretty good result.

Another step done: I’ve added a new blending mode `shapeprevalent` to the command `blend`, which replaces each region of the blending layer by the most frequent color of the corresponding region of the input image.
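To illustrate the described behavior (per region of the blending layer, take the most frequent color of the corresponding region of the input), here is a small NumPy sketch on single-channel images; the function name is mine, and this is not the actual implementation:

```python
import numpy as np

def shape_prevalent(base, labels):
    """For each region id in `labels`, fill that region with the most
    frequent value that `base` takes over the region."""
    out = np.empty_like(base)
    for lab in np.unique(labels):
        region = labels == lab
        vals, counts = np.unique(base[region], return_counts=True)
        out[region] = vals[np.argmax(counts)]
    return out
```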

I will try to use this in conjunction with `fx_vector_painting` later, to see what happens.


David, if it doesn’t work out, maybe look into my own algorithm at gmic-community and try to understand how it works? It does have the big issues of being slow and an occasional image-edge bug.

@David_Tschumperle
Thank you for the creation of the ‘shapeprevalent’ mode :o)

Here is a test that uses this mode with the ‘foo’ function.


@David_Tschumperle This new blend looks interesting indeed.
BTW, is there a page in the documentation which describes all the blend modes?

It should be here, `https://gmic.eu/reference/list_of_commands.html#blending_and_fading`, but @David_Tschumperle, `blend` has two entries and currently lacks a description.

PS. I think a model of documentation for blend modes is GIMP’s: `https://docs.gimp.org/en/gimp-concepts-layer-modes.html`. If someone could write with that clarity, that would be great.

There’s also Krita’s blending-mode documentation; it covers some modes that G’MIC has but that are missing from GIMP. - Blending Modes — Krita Manual 5.0.0 documentation