Reptorian G'MIC Filters

More newlines than I expected. Looks good.

Thorn Fractal now supports overload functions for custom formulas, and this is used in the alternating chaos series.

A small picture to demonstrate the Thorn Fractal with an overload function:

[image: Thorn Fractal rendered with an overload function]


Now, I have added the legacy alternating chaos modes back. There are now 8 different alternating chaos modes. Furthermore, in the GUI filter, you will note that the overload options do not always show up; they only appear where applicable, i.e. with custom alternating formulas.


Public note to self - http://paulbourke.net/fractals/dla

Seems useful


@Reptorian, it seems that your ‘reptorian.gmic’ file is currently breaking the update script.
Not sure what you’ve done there, but could you please review your recent changes?

Seems fixed now.

@David_Tschumperle: I do not remember putting gui in place of cli, but as you noted, I have been working heavily on improving the fractal filters and finishing up Thorn Fractal. Getting Thorn Fractal to full flexibility was exhausting.


That being said, I may attempt this - Cracked Floor Effect | Paint.Net Fans

Once welshblue at the getpaint.net forum posts a new concrete texture tutorial, I may finally update Construction Material Texture.


Ok, started work on the new construction material. Starting with recreating the crack.

rep_turbulence : check "${1=32}>0 && ${2=6}>0" skip ${3=3},${4=0}
e[^-1] "Render rep_turbulence fractal noise on image$?, with radius $1, octaves $2, damping per octave $3, mode $4."
repeat $! l[$>]
 # Base octave: blurred noise, re-centered, rectified (abs) and raised to the mode exponent $4.
 f. 0
 +noise. 10,0
 b. $1,0
 -. {ia} abs.
 ^. $4
 # Further octaves: same construction with the radius halved per iteration;
 # the running sum is damped by $3 before each new octave is added.
 repeat $2-1
  +noise.. 10,0 b. {$1/2^$>},0
  replace_nan 0
  -. {ia} abs.
  ^. $4
  *.. $3 +[-2--1]
 done
 rm..
endl done

Does this version look better than the builtin turbulence? I eliminated the integer-based choice here, as well as the difference mode.

$ gmic 256,256,1,1 rep_turbulence 12,3.5,3,1.2

It is hard to tell because turbulence is, well, turbulence, and random. After 10 comparison iterations, I would say that the builtin is more tightly clustered than yours.

EDIT: Deleted earlier message as it’s no longer relevant.

New Filter! - Blur [Splinter]

Preview of new filter


Never mind about the question. I do wish ( ) could work across new lines rather than requiring one conjoined line.

:thinking: Still very different. Maybe call it something else. The dog example reminds me of the refraction and dispersion of gems.

What do you mean?

I wanted to create a 5x5 vector. So, I figured I could do

(1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;1,1,1,1,1)

However, I think it would be so much better if it were possible to do

(1,1,1,1,1;
1,1,1,1,1;
1,1,1,1,1;
1,1,1,1,1;
1,1,1,1,1)

Also, I am making a new filter testing a convolution kernel with 20 in the middle and -1 everywhere else.

That will be part of the new construction material texture.
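For reference, a kernel like that can be built and test-driven with a one-liner along these lines (the 5x5 size, the sample image and the final normalization are my assumptions, only for quick visual checking):

# Build a 5x5 kernel of -1 with 20 at the centre, convolve a copy of the input, keep the result.
gmic sample apples 5,5,1,1,-1 =. 20,2,2 +convolve[0] [1] keep[2] normalize. 0,255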

(1,1,1,1,1;\
 1,1,1,1,1;\
 1,1,1,1,1;\
 1,1,1,1,1;\
 1,1,1,1,1)

That worked. Not in the GUI code filter, though.

Why is that? Do you have an example?

Yes.

Pinging @David_Tschumperle to let him know about this issue.

EDIT: I had to add two backslashes to make it work?
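That is, something along these lines inside the GUI code filter, with each continuation backslash doubled so it survives the extra round of parsing (my reading of the "two backslashes" fix above):

(1,1,1,1,1;\\
 1,1,1,1,1;\\
 1,1,1,1,1;\\
 1,1,1,1,1;\\
 1,1,1,1,1)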


Side note:

Any idea how to optimize the Splinter Blur? I tried adding conditionals to use convolve or convolve_fft depending on the kernel size. It is still too slow.
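The conditional looks roughly like this; the 15-pixel threshold and the assumption that [0] is the image and [1] the kernel are mine:

# Use FFT-based convolution for large kernels, spatial convolution (Neumann boundary) otherwise.
if {w#1>15||h#1>15} convolve_fft[0] [1] else convolve[0] [1],1 fi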

Sometimes you have to escape the escape. :stuck_out_tongue:

From David’s timing, convolve (Neumann) still beats everything. I don’t know his exact command and inputs. I am curious what the timings are on your system for your application.

I trimmed the time needed to generate the result by half.

https://github.com/dtschump/gmic-community/commit/26ff58b57ec59450139876440853057cab96849d

At this point it comes down to how convolve works internally. It is still quite a bit slower than the PDN version.

With the dog, I get 19 seconds with the new changes. Ideally, it would be 5 s, as in PDN.

In this case, I would try to time the command without the convolve/convolve_fft steps, i.e. generate an intermediary file so that step can be skipped. I do this all the time to separate the timing of the various parts. Without looking at your code, I bet there are ways to optimize your scripting a little more.
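A minimal sketch of that split-timing idea, using tic/toc around each stage (file names and parameter values are placeholders; rep_splinter_blur_convolve_map stands in for whatever generates the kernel):

# 1. Time only the kernel generation and save the intermediary result.
gmic input.png tic rep_splinter_blur_convolve_map 10%,3,5%,0,0,1 toc output kernel.cimg
# 2. Time only the convolution, reusing the saved kernel.
gmic input.png kernel.cimg tic convolve_fft[0] [1] toc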

I am finally on my way to an optimized version. I still don’t get how to get the min value from here, though.

The right side is the optimized version, and the left side is the non-optimized one. I decided to loop within a fill block to get a star, then use convolve_fft to get the image. The optimized version should take very little time. The value difference is because the left one is normalized.

I believe that I can get the min or max using erode/dilate. The problem is that it has to be weighted, and I don’t know how to do that with a gradient star image. Any idea?
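One possibility, as a sketch only: if I understand the real-mode flag of erode/dilate correctly, the kernel values are added/subtracted before taking the max/min, so the gradient star itself can act as the weighting. Here [0] is assumed to be the image and [1] the gradient star kernel.

# Weighted (grayscale) morphology: boundary_conditions=1, is_real=1.
+dilate[0] [1],1,1
+erode[0] [1],1,1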

Here’s the code to work with

New Optimized Splinter Blur Convolution Map
#@cli rep_splinter_blur_convolve_map: _length,_duplicates,_thickness,_angle,_contrast,_bisided={ 0=one-line | 1=two-line }
#@cli : Create a convolve map for directional blur. This enables one to create a convolve map for one-direction motion blur.
#@cli : Default values: '_length=10%','_duplicates=3','_thickness=5%','_angle=0','_contrast=0','_bisided=1'
rep_splinter_blur_convolve_map : skip ${1=10%},${2=3},${3=5%},${4=0},${5=0},${6=1}
repeat $! l[$>]
 hypo={(sqrt(w^2+h^2)/2)}
 if ${is_percent\ $1} length={round($1*$hypo)} else length={round($1)} fi
 if ${is_percent\ $3} thickness={max(round($3*$hypo),1)} else thickness={max(round($3),1)} fi
 
 rm
 
 # Kernel canvas: a square image of size length x length, filled below with the star pattern.
{$length},{$length},1,1
 f "begin(
  const sides=$6;
  const thickness="$thickness";
  const hw=(w-1)/2;
  const hh=(h-1)/2;
  rad2ang(v)=(v/180)*pi*-1;
  const start_ang=rad2ang($4);
  const dup_ang=rad2ang(360/$2);
  rot_x(a,b,c)=a*cos(start_ang+dup_ang*c)-b*sin(start_ang+dup_ang*c);
  rot_y(a,b,c)=a*sin(start_ang+dup_ang*c)+b*cos(start_ang+dup_ang*c);
  cutval(v)=v<0?0:v;
  maxcutval(v)=v>1?1:v;
 );
 endval=0;
 xx=x/w-.5;
 yy=y/h-.5;
 lx=x-hw;
 ly=y-hh;
 radial_grad=1-sqrt(xx^2+yy^2)*2;
 radial_grad=cutval(radial_grad);
 for(n=0,n<$2,n++,
  line=1-maxcutval(abs(rot_x(lx,ly,n))/thickness);
  endval=max(sides?(line?radial_grad*line:0):(rot_y(lx,ly,n)<=0?(line?radial_grad*line:0):0),endval);
 );
 endval;
 "
 
 # Normalize so the kernel values sum to 1.
 / {is}
 if $5
  avgstat={ia}
  +f (i*2-$avgstat)
  f.. lerp(i,i#1,min(1,min(abs($5),1)))
  k[0]
 fi
endl done
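A quick usage sketch (the file name and parameter values are placeholders), following the approach above of generating the star map first and then applying it with convolve_fft:

# Generate the star kernel on a copy of the input, convolve with it, keep only the result.
gmic input.png +rep_splinter_blur_convolve_map[0] 10%,3,5%,45,0,1 convolve_fft[0] [1] keep[0]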

Does the normalization get in the way of optimization? It isn’t optimized if you aren’t getting the same or better result…