'Glitch' Art Filters - again

The other thread is getting too cluttered. I found something related to the PDN glitch spiral: it is an Ulam spiral transformation, and code for it already exists, so it can definitely be done in G’MIC. There is even an attempt at a reverse Ulam spiral. In short, everything the PDN spiral-glitch author wanted in PDN can exist in G’MIC.

Example of Ulam Spiral

Could be a base for new filters.


The variants look fun too.

I looked to see if G’MIC supports an isprime() function, but apparently not. That would make the Ulam spiral difficult to do, but not impossible.
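
A primality test can be emulated in the math parser, though. Here is a rough, untested sketch (isp is just a made-up macro name, and this only marks primes along a plain row-major index; the spiral walk would still need its own coordinate mapping):

# paint primes white along a row-major index (not the spiral itself)
300,300,1,1,"begin(
  isp(n0) = (
    n = n0; p = (n>=2);
    for (k = 2, k*k<=n && p, (p = ((n%k)!=0); ++k));
    p
  );
);
isp(x + y*w)*255"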

I think I found a bug.
When I click OK, it doesn’t apply the filter shown in the preview; it applies the next recompute iteration instead.

You’ll have to fix it yourself, she ain’t here.

I decided to delete the earlier post as it’s no longer relevant. I have managed to create a spiral matrix transform filter. I posted the filter code and CLI commands to the G’MIC community.

I just found two older photos with real hardware glitches from a broken sensor. Perhaps you can use them as a reference, or even write a filter to fix them. (I don’t think that is possible.)


This one is very nice: :smiley:

This glitch looks oddly familiar. :thinking:

Pixel sorting. That’s what these look like. The third one, however, is a bit more complicated than that.
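
(As an aside, a crude pixel-sort look is easy to fake in G’MIC by sorting the values of each row of each channel independently; real pixel sorting keeps the RGB triples together and sorts runs by brightness, which takes more work. A minimal sketch, using a sample image:)

sp tiger
s c
repeat $! l[$>] s y sort a y endl done
a c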

While working on the filter I was talking about in my G’MIC thread, I came up with this filter, as it’s going to be incorporated into that one. Which glitch does this resemble the most?

NOTE: A Random Shade Stripe layer was added to the Mona Lisa, then the Destination In blending mode was used, and the result was flattened before applying the filter I made.

What this filter does is blend each pixel with other pixels along either the x or y axis, in either direction. You get a pixel-stretch effect, but it’s not actually anything like a pixel streak; different rules determine how the pixels get blended. So, what does it look like when combined with the random shade stripe?

I just did what I wanted. A new glitch art filter may be coming, though it would be nice if other scripters here could extend it.

rep_auda_test:
repeat $! l[$>]
 if d==1
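  # remember the original geometry so the image can be rebuilt at the end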
  ow={w}
  oh={h}
  os={s}
  unroll x
  s x,$os a c
  s x,-1 unroll x a x
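  # the image is now one long row, with each pixel's channels interleaved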
  #Insert your command here#
   f. j(min(j(-10,0,0,0,0,2),i,j(10,0,0,0,0,2))*.05)
  #End#
  s x,-$os
  unroll c
  a x
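  # back to one x-position per pixel, with the channels along c again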
  #Insert your command here#
  #End#
  unroll x
  #Insert your command here#
   f. avg(max(j(int(x/$ow)*.5,0,0,0,0,2),i),i)
   f. j((x/w)^2*30,0,0,0,1,1)*.05
  #End#
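  # rebuild an image with the original geometry by reading values back from the 1-D buffer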
  $ow,$oh,1,$os,"begin(
  const sc=w#0/s;
  );
  poschan=c*wh;
  posxy=x+w*y;
  pos=poschan+posxy;
  i(#0,pos,0,0,0);
  "
  k.
 fi
endl done

Another pic with s x,5 rv[n1,n2] a x
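
(If anyone wants to try it: save the command above to a file and run something along these lines; the file name, sample image and output name are just examples.)

gmic command rep_auda_test.gmic sp tiger rep_auda_test o glitched.png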


Why didn’t I think of unrolling it before? Maybe I could shove an analog-style low-pass filter in there if I’ve got time (a popular term for that is ‘zero-delay feedback’, but it’s a misnomer). Regardless, you’ve made a template for a whole class of scripts that can simulate sound-to-image databending. I wonder what a PSOLA time-stretcher would do…

I’ve been really busy with other things and I’m still haunted by that 3D cubism-ish script that I wanted to make (I still can’t figure out how to do the antialiasing in 3D)!


Well, it wasn’t the whole template before. I had just inserted another “Insert your command here” section, which operates on all pixels while the channels are appended together. So, now you do have the whole template. I think we could collaborate on it together, as it seems to be a long-term project.

I decided to experiment with permute. The problem is rolling it back. It could be shortened to: permute $val unroll x “your command here” rollback. I’ll ping @David_Tschumperle for ideas.


@Joan_Rake1 I think I’ve now managed to create a better script, thanks to @garagecoder.

rep_auda:
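# $1 : the permutation string passed to 'permute' (e.g. yxzc)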
('$1')
l. s x remove_duplicates a x if w#-1!=4 error exit fi endl
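# build the inverse permutation (axis codes 120,121,122,99 = 'x','y','z','c')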
4,1,1,1,"begin(
numid_xyzc=[120,121,122,99];
);
numid_xyzc[find(#-1,numid_xyzc[x],0,1)];"
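# convert those codes back to a string, kept for un-permuting at the end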
img2text. , unpermute_string=${} rm[-2,-1]
permute $1
whds={[w,h,d,s]}
unroll x
#Insert Code Here#
f x%2?j(10,0,0,0):j(-10,0,0,0)
#End#
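# restore the permuted geometry, then swap the axes back to their original order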
r $whds,-1
permute $unpermute_string

So, I think this part is solved. The hard part is making it work with a variety of different functions and making it useful for the CLI and the GUI at once. I think that’s a big project.

This was achieved with a modified version of the above using this line:

f j(i*(5*x/(w-1)-int(5*x/(w-1))))

Another pic:


I’m now trying to make a generalised JPEG-style encoder with a user-specified tile size. I want to use David’s trick of generating an image of individual IDCT tiles. I’ve tried doing it using at "idct", but with larger tile sizes it gets very slow. I want to generate the tiles directly from cosines, but I’m not sure how. This one does both at the same time, and I want the first method to produce what the second one does now.

rm
tw=8
th=8
{$tw^2},{$th^2},1,1
#slow
+f "!((x%("$tw"+1))||(y%("$th"+1)))?255" at. "idct",$tw,$th
#should eventually be the faster way
f[0] "begin(tw="$tw";th="$th");
tx=(x-(x%tw))/tw;
ty=(y-(y%th))/th;
cx=cos((x*tx/tw+(1*tx))*pi);
cy=cos((y*ty/th+(1*ty))*pi);
cx*cy
"
rv
nm[-2,-1] real,mine
n 0,255

I’ve got something which looks similar but it’s a bit off. I don’t know if there’s a square root involved…

rm
tw=8
th=8
{$tw^2},{$th^2},1,1
+f "!((x%("$tw"+1))||(y%("$th"+1)))?255" at. "idct",$tw,$th

f[0] "begin(tw="$tw";th="$th");
tx=(x-(x%tw))/tw;
ty=(y-(y%th))/th;
cx=cos((2*x+1)*tx*pi*0.5/tw)/sqrt(tw);
cy=cos((2*y+1)*ty*pi*0.5/th)/sqrt(th);
tx!=0?(cx*=sqrt(2));
ty!=0?(cy*=sqrt(2));
val=cx*cy;
((tx+ty)%2)?(val*=-1);
val
"

The massive slowdown comes from apply_tiles. I don’t get where the idct comes from, though, or what it’s supposed to do. Maybe with that info I could come up with a theory on how to make a faster version, but you already have a version that’s 15 times faster.

Also, here’s the difference/XOR analysis, in case that helps:

[image: XOR analysis]

[image: difference analysis]

Thanks. While you were replying I updated it, because I found something after doing some research. What I’ve got now is almost what I want, but there’s a constant factor missing and I’m not sure how that factor varies.

I think you actually got it with the new update. A difference blend reveals this when comparing the normalised images:

[2] = '[unnamed]_c1':
  size = (64,64,1,1) [16 Kio of floats].
  data = (0,0,0,0,0,0,0,0,0,0,1.52588e-05,1.52588e-05,(...),0,7.62939e-06,0,7.62939e-06,7.62939e-06,1.52588e-05,7.62939e-06,1.52588e-05,7.62939e-06,0,7.62939e-06,0).
  min = 0, max = 2.28882e-05, mean = 7.56842e-06, std = 6.54397e-06, coords_min = (0,0,0,0), coords_max = (13,8,0,0).

Now it’s just a matter of normalizing your result.

The constant factor I was looking for is 64, but I can always take that out. I now have something from which I can easily make an IDCT, but in his command, David left out the DCT entirely. I have to figure out how he did that.

rm
tw=8
th=8
{$tw^2},{$th^2},1,1
f "begin(tw="$tw";th="$th");
tx=(x-(x%tw))/tw;
ty=(y-(y%th))/th;
cx=cos((2*x+1)*tx*pi*0.5/tw)/sqrt(tw);
cy=cos((2*y+1)*ty*pi*0.5/th)/sqrt(th);
tx!=0?(cx*=sqrt(2));
ty!=0?(cy*=sqrt(2));
val=cx*cy;
((tx+ty)%2)?(val*=-1);
val*64;
"

dct = discrete cosine transform
idct = inverse of that
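
For the record (assuming I’m reading the final expression right), the tile at block index (p,q) is the standard orthonormal 2-D DCT-II basis function for frequency (p,q), up to the ×64 factor:

$$B_{p,q}(u,v)=\alpha_p\,\alpha_q\,\cos\!\left(\frac{(2u+1)\,p\,\pi}{2\,tw}\right)\cos\!\left(\frac{(2v+1)\,q\,\pi}{2\,th}\right),\qquad \alpha_p=\begin{cases}\sqrt{1/tw} & p=0\\ \sqrt{2/tw} & p>0\end{cases}$$

where (u,v) are the coordinates inside the tile and \alpha_q is defined the same way with th. The DCT of a tile gives one coefficient per basis tile, and the IDCT just sums coefficient × basis tile to rebuild the pixels, which is why the "idct" tiles and the cosine formula end up matching.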
