I guess it helped once I figured out that this is basically a pipeline… let me know if I'm getting this straight.
So first you create a random-valued array of user-defined size, with a range matching the working image's datatype:
$2,$3,1,4 noise[-1] 255
then you resize the array to suit its use as a LUT for uint8 inputs:
resize[-1] 256,256
then you renormalize and blur with a user-defined kernel size:
normalize[-1] 0,255 blur[-1] {$4^2}%
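If I'm reading those three steps right, they amount to something like this pure-Python sketch. The function name is mine, a box blur stands in for G'MIC's Gaussian `blur`, and the nearest-neighbour stretch stands in for `resize` — so this is a paraphrase of the idea, not the actual implementation:

```python
import random

def make_smoothed_lut(seed_size=8, radius=2, rng=None):
    """Sketch of: small random array -> resize to 256 -> normalize 0..255 -> blur."""
    rng = rng or random.Random(0)
    seed = [rng.uniform(0, 255) for _ in range(seed_size)]
    # Nearest-neighbour stretch of the small random array to 256 LUT entries.
    lut = [seed[i * seed_size // 256] for i in range(256)]
    # Renormalize to the full 0..255 range.
    lo, hi = min(lut), max(lut)
    lut = [(v - lo) * 255.0 / (hi - lo) for v in lut]
    # Box blur as a stand-in for the Gaussian smoothing step.
    out = []
    for i in range(256):
        window = lut[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out
```

The seed size effectively controls how many "bumps" the random tf has before smoothing.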
which I guess is more flexible than just having nearest/linear interpolation options…
I guess this is the point where our methods differ, as you mention:
cut[-1] {($5-1/255)/2},{100-($5-1/255)/2} normalize[-1] 0,255
clip the upper and lower 25% (the default) of the tf and renormalize.
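In other words (my own paraphrase, with a made-up `clip_fraction` parameter playing the role of $5):

```python
def clip_and_renormalize(lut, clip_fraction=0.25):
    """Clip the lowest/highest clip_fraction of the value range, then
    re-stretch to 0..255 -- saturating both ends of the tf."""
    lo = 255.0 * clip_fraction
    hi = 255.0 * (1.0 - clip_fraction)
    clipped = [min(max(v, lo), hi) for v in lut]
    return [(v - lo) * 255.0 / (hi - lo) for v in clipped]
```

Since the clipped regions map to flat 0 or 255 after renormalizing, a bigger clip fraction means more of the tf is saturated.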
I’m not sure what this reverses. Is this just transposition (swapping input images)?
if $6 reverse[0,1] endif
This syntax baffles me, but I take it this is essentially a LUT-reading task:
fill[-3] "i(#2,i(#0),i(#1))"
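As I understand the expression, each output pixel samples image #2 at coordinates given by the intensities of images #0 and #1, i.e. a 2-D LUT lookup. Roughly (my own sketch, with images as plain nested lists):

```python
def apply_2d_lut(img_a, img_b, lut2d):
    """For each pixel, read lut2d at (x = img_a value, y = img_b value):
    the two inputs' intensities index into the 2-D transfer function."""
    h, w = len(img_a), len(img_a[0])
    return [[lut2d[img_b[y][x]][img_a[y][x]] for x in range(w)]
            for y in range(h)]
```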
So, disregarding the differences in the smoothing/interpolation approach, the default contrast parameter adds more regions where the tf is saturated. Like you said, things tend to get pushed into the corners more than I'd expect. It's nice to have the extra degree of freedom.
Would there be a way to hold an unmodified version of the random tf and re-use it between invocations? Having these extra parameters has me imagining something where, if a particularly desirable random tf is found, the user could tick a box to hold it, so that the smoothing and thresholding parameters could be tuned without losing the base tf. That would be pretty dang convenient.
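Just to make the idea concrete, a hypothetical sketch of what I mean (everything here is invented for illustration, nothing from the actual filter):

```python
import random

class HeldTF:
    """Hypothetical: freeze the raw random tf once, then let the
    post-processing parameters be re-tuned against that same base."""
    def __init__(self, size=256, rng=None):
        rng = rng or random.Random()
        self.base = [rng.uniform(0, 255) for _ in range(size)]  # held tf

    def render(self, clip_fraction=0.25):
        # Always start from the *held* base, never from a previous render,
        # so tweaking the parameter can't degrade the base curve.
        lo = 255.0 * clip_fraction
        hi = 255.0 * (1.0 - clip_fraction)
        return [min(max(v, lo), hi) for v in self.base]
```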
I love such broken textures myself. I can go a bit further:
Now all you need to do is [dives off on a tangent about favorite ways to shred a painting into total abstraction and then coax it back into resembling some sort of fantasy landscape]…