I'm generating new blending modes for Krita

If you like that, wait until you see the upgrade:

#@gui Bomb blend : fx_blend_bomb, fx_blend_bomb_preview()
#@gui : Recompute = button(0)
#@gui : Mesh X = int(16,1,256)
#@gui : Mesh Y = int(16,1,256)
#@gui : Mesh smoothness = float(0.5,0,10)
#@gui : Mesh contrast = float(50,0,100)
#@gui : Reverse = bool(0)
#@gui : Alpha = bool(0)
fx_blend_bomb :
l[0,1] to_rgba 
$2,$3,1,4 noise. 255
if $7 ac. "noise 255",rgba_a fi
r. 256,256 n. 0,255 blur. {$4^2}%
c. {($5-1/255)/2}%,{100-($5-1/255)/2}% n. 0,255
if {!$7} to_rgb. fi
if $6 rv[0,1] fi 
f... "i(#2,i(#0),i(#1))" if {!$7} to_rgb fi
rm[1,2]
endl
fx_blend_bomb_preview :
fx_blend_bomb $*

It can now mess things up properly in the alpha channel as well.

Here are some examples of what this thing can now do. Simple gradients:
image

Alpha channel:

image

Really abstract material (which will most likely end up on those ‘aesthetic’ blogs and similar places):

image

image

…and of course, a lot of fun with horribly-compressed JIFFs.

image

I’m really not able to follow the code here, but it looks like the same sort of idea. I’m assuming that’s a 16x16 tf?

The output seems to be more strongly dominated by primaries and secondaries than I’d expect, so I’m not sure if there’s a difference in our approaches. Then again, it’s hard to tell when things are random…

I hate to ask, but for sake of making sure we’re on the same page regarding the 2D interpolator, could you do me a favor and if possible, run a test pair with a fixed tf for normal and hard variants of your setup? I’m really lost in g’mic.

% 5x5x3 tf.  I assume you'd need to convert to int
tf_r=[0.46793 5.1692e-06 0.87901 0.61863 0.40424;0.26789 0.19705 0.13195 0.83972 0.35911;0.07158 0.14952 0.64716 0.76896 0.21964;0.94989 0.76023 1 0.22937 0.98994;0.8272 0.13955 0.13179 0.08721 0.35122]
tf_g=[0.79758 0.38628 0.080147 0.87035 0.053319;0.74359 0.22975 0.026048 0.56244 0.79968;1 0.57999 0.82849 0.031294 0.44134;0.49606 0.82131 0.35239 0.0477 0.65466;0.46792 0 0.4475 0.87378 0.52436]
tf_b=[0.48131 0.1731 0 0.47259 0.53728;0.54483 0.44578 0.95829 0.80938 0.16761;1 0.23373 0.96989 0.78254 0.70184;0.47504 0.2867 0.85657 0.36321 0.0074168;0.27969 0.76006 0.20659 0.019315 0.60487]
tf=cat(3,tf_r,tf_g,tf_b);

% normal and hard modes
ashes1=imblend(fg,bg,1,'mesh',tf);
ashes2=imblend(fg,bg,1,'hardmesh',tf);

…and of course, a lot of fun with horribly-compressed JIFFs.

Exploiting processing error sources to produce remotely-referential derived images? Now we’re talking!

@Joan_Rake1 Could you write the full command names or at least annotate them for the benefit of the non-G’MIC script writers?

Also, what does i(#2,i(#0),i(#1)) mean? Could you comment on its mathematical form, or at least provide an explanation?

I’m really not able to follow the code here, but it looks like the same sort of idea. I’m assuming that’s a 16x16 tf?

That’s the default but users can specify a size up to 256x256 and down to 1x1.

The output seems to be more strongly dominated by primaries and secondaries than I’d expect, so I’m not sure if there’s a difference in our approaches. Then again, it’s hard to tell when things are random…

That would probably be due to the contrast setting being turned up high (>50). It’s a cut-and-normalise clipping combo, which chops off higher and lower values before scaling the remainder back to the 0-255 range.
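For anyone following along outside G’MIC, here’s a rough NumPy sketch of that cut-and-normalise combo (function and parameter names are mine, and I’m ignoring the small 1/255 fudge in the actual script):

```python
import numpy as np

def cut_and_normalise(mesh, contrast):
    """Clip the top and bottom of the value range, then rescale to 0..255.

    contrast is a percentage (0..100): half of it is chopped off each
    end of the range, so higher settings push more of the mesh to the
    extremes after renormalisation.
    """
    mn, mx = float(mesh.min()), float(mesh.max())
    lo = mn + (mx - mn) * (contrast / 2.0) / 100.0
    hi = mn + (mx - mn) * (100.0 - contrast / 2.0) / 100.0
    clipped = np.clip(mesh.astype(float), lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0
```

With the default contrast of 50, the lowest and highest quarters of the range get flattened to 0 and 255 respectively, which is where the saturated corners come from.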

I hate to ask, but for sake of making sure we’re on the same page regarding the 2D interpolator, could you do me a favor and if possible, run a test pair with a fixed tf for normal and hard variants of your setup? I’m really lost in g’mic.

If you’re using the command-line version, you’re better off using the GIMP plugin for now so you can see what’s going on, but I can give you the command that you need:

fx_blend_bomb 0,16,16,0,73.1,0,0

Keep the two 16s constant; the fourth option is the smoothness and it should be 0 for hardbomb and 2 for bomb. Insert ‘display’ into the command’s code to see what’s happening, if you try: …1] fi display f... "i(… then you’ll see that the TF matrix is an RGB image if the last option is 0 and an RGBA image if it’s 1.

Exploiting processing error sources to produce remotely-referential derived images? Now we’re talking!

I love such broken textures myself. I can go a bit further:

image

Just caught this.

Could you write the full command names or at least annotate them for the benefit of the non-G’MIC script writers?

#@gui Bomb blend : fx_blend_bomb, fx_blend_bomb_preview()
#@gui : note = note("Creates a random transfer function 'mesh' and then blends images accordingly. Based on method shown <a href=\"https://discuss.pixls.us/t/im-generating-new-blending-modes-for-krita/8104/16\">on discuss.pixls.us</a>.")
#@gui : Recompute = button(0)
#@gui : Mesh X = int(16,1,256)
#@gui : Mesh Y = int(16,1,256)
#@gui : Mesh smoothness = float(0.5,0,10)
#@gui : Mesh contrast = float(50,0,100)
#@gui : Reverse = bool(0)
#@gui : Alpha = bool(0)
#@gui : Normalise = bool(0)
fx_blend_bomb :
# select the first two images in the list and convert them to rgba
local[0,1] to_rgba 
# create transfer function mesh image and apply noise
# if we're using the alpha channel, apply noise in alpha channel too
$2,$3,1,4 noise[-1] 255
if $7 apply_channels[-1] "noise 255",rgba_a endif
# resize mesh using nearest-neighbour interpolation and blend to smooth it out
resize[-1] 256,256 normalize[-1] 0,255 blur[-1] {$4^2}%
# clip values to add contrast to mesh
cut[-1] {($5-1/255)/2}%,{100-($5-1/255)/2}% normalize[-1] 0,255
# delete the alpha channels of all images if we're not using them
if {!$7} to_rgb endif
# reverse blending images if we choose to
if $6 reverse[0,1] endif
# select colour to fill each pixel with from transfer function mesh image to fill result image with
# use x-coordinate from value of bottom image, y-coordinate from value of top image, channels are independent
fill[-3] "i(#2,i(#0),i(#1))"
# remove matrix and second image
remove[1,2]
# a final normalisation
if $8 apply_channels "normalize 0,255",rgba endif
# deselect
endlocal
fx_blend_bomb_preview :
fx_blend_bomb $*

It’s still got some way to go (you’ll notice that I’ve already added another normalize). I have to sort out multiple layer inputs like G’MIC’s standard blend filter already has.

I wish that G’MIC filter worked on Krita. Your CubeHelix filter works just fine, though.

It’s probably a segfault, so it’s not something that I can fix. Unfortunately, David won’t be back to fix things for weeks if my memory serves me well, so Krita users will be in limbo for now. I don’t know who else can help, but it probably won’t be anything to do with G’MIC.

Confirmed. I can add the missing compatible blending modes from MATLAB to Krita -

This is P-Norm blending Mode. Next up is some modes from IFS Illusions, and Superlight.

Here’s part of the code for P-Norm -

template<class T>
inline T cfPNorm(T src, T dst) {
    using namespace Arithmetic;

    return clamp<T>(pow(pow(dst, 2) + pow(src, 2), 0.5));
}

Wow, superlight is tough to crack.

EDIT: @DGM - I think I cracked it, but just for float images for now. I can fix that though.

Well, it does look better than Pinlight.

I guess if it’s going to be fixed-parameter, in this case it’s technically the 2-norm, or I guess you could call it a Euclidean norm, or ‘hypotenuse’.

… also, yeah. I’ve got it easy doing everything in floating point.

I guess if it’s going to be fixed-parameter, in this case it’s technically the 2-norm, or I guess you could call it a Euclidean norm, or ‘hypotenuse’.

Duly noted.

… also, yeah. I’ve got it easy doing everything in floating point.

Yeah, I can see why. Krita has floating-point support and integer support, meaning I have to support both modes. Binary modes are the exception to that rule, and it’s not worth implementing a workaround or another 16 blending modes to support binary float modes. In the case of Superlight, well, it’s not going to be very easy to implement for both, but it is worth supporting for both, and it is much better than Pinlight.

EDIT: Fixed Superlight for Integer Color Space Mode

Now it works on both float and integer.

I guess it helps once I figured out that this looks like a pipeline… Let me know if I’m getting this straight.

So you create a random-valued array of range according to the working image datatype and of user-defined size:

$2,$3,1,4 noise[-1] 255

you resize the array to suit its use as a LUT for uint8 inputs:

resize[-1] 256,256

you renormalize and do a blur of user-defined kernel size

normalize[-1] 0,255 blur[-1] {$4^2}%

which I guess is more flexible than just having nearest/linear interpolation options…
I guess this is the point at which our methods differ, as you mention:

cut[-1] {($5-1/255)/2}%,{100-($5-1/255)/2}% normalize[-1] 0,255

clip upper and lower 25% (default) of the tf and renormalize.

I’m not sure what this reverses. Is this just transposition (swapping input images)?

if $6 reverse[0,1] endif

This syntax baffles me, but I take it this is essentially a LUT reading task.

fill[-3] "i(#2,i(#0),i(#1))"
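If I’m reading it right, that fill is just a per-channel 2-D LUT read. A NumPy sketch of my understanding (all names are mine; the mesh here is the already-smoothed, contrast-adjusted 256x256 image):

```python
import numpy as np

def mesh_blend(top, bottom, mesh):
    """Per-channel 2-D LUT read, i.e. i(#2, i(#0), i(#1)).

    top, bottom: uint8 images of shape (H, W, C)
    mesh:        uint8 transfer function of shape (256, 256, C)
    The bottom image supplies the x coordinate, the top image the
    y coordinate, and each channel is looked up independently.
    """
    out = np.empty_like(bottom)
    for c in range(bottom.shape[2]):
        out[..., c] = mesh[top[..., c], bottom[..., c], c]
    return out
```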

so disregarding the differences in the smoothing/interpolation approach, the default contrast parameter is adding more regions where the tf is saturated. Like you said, things would tend to get pushed into the corners more than I’d expect. It’s nice to have the extra degree of freedom.

Would there be a way to hold an unmodified version of the random tf and re-use it between invocations? Having these extra parameters has me imagining something where, if a particularly desirable random tf is found, the user could tick a box to hold it, so that the smoothing and thresholding parameters could be tuned without losing the base tf. That would be pretty dang convenient.

I love such broken textures myself. I can go a bit further:

Now all you need to do is [dives off on a tangent about favorite ways to shred a painting into total abstraction and then coax it back into resembling some sort of fantasy landscape]…

Yeah, I kind of figured there’d be a duality of paths for different image types. I was straight-up lazy defaulting to double, and you can imagine I pay for it in memory whether it’s needed or not.

The results are looking good. What parameter value did you settle on?

Hmm, I’m looking at the PDF document you sent me, and I picked the one that is not too extreme or too soft, but just right: 2.3333 is just right for P-Norm. However, for Superlight, I think I might create 3 variants with 3 different parameters. I’ll continue exploring IMBLEND blending modes with fixed parameters.
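For anyone skimming: this is the parameterised form being tuned, sketched in Python on [0, 1] values (names are mine; the C++ snippet earlier in the thread hard-codes p = 2):

```python
def pnorm_blend(src, dst, p=2.3333):
    """Generalised p-norm blend, clamped to the working range."""
    return min((src ** p + dst ** p) ** (1.0 / p), 1.0)
```

At p = 2 this is the Euclidean ‘hypotenuse’ case; larger p pulls the result back towards max(src, dst).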

Having a few variants would easily work. Subtle control over the parameter really isn’t that necessary. Regarding my first response in the thread, is the addition of a bunch of extra modes going to cause objection or problems? Is this something that is going to be either optional or user-configurable? I haven’t had a chance to install 4.2, and I really don’t remember how Krita was configured.

That, and I hate to say this, but I hope you don’t think MIMT/IMBLEND are part of Matlab. I’m pretty sure if you referred to them as Matlab modes, you’d probably confuse the Matlab users most of all – since they’d be wondering when Matlab could do that. The FEX is all user-contributed files. I’m not associated with Mathworks or anything.

On the topic of sources, I have a question. I’ve found two different formulae for the ‘softlight’ variant attributed to EffectBank in the following sources, but I’ve not found anything more definitive. The website no longer exists.

This looks ideal
This is almost identical, but faster.

I was wondering if you’d seen any other sources regarding this mode.


so disregarding the differences in the smoothing/interpolation approach, the default contrast parameter is adding more regions where the tf is saturated. Like you said, things would tend to get pushed into the corners more than I’d expect. It’s nice to have the extra degree of freedom.

Clipping will do that, yes. I can add a softer contrast enhancement formula and then provide an option to switch between that and simply clipping the image.
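Purely as an illustration of what ‘softer’ could mean (this is my sketch, not necessarily the formula the filter will use): a smoothstep-style remap steepens the midtones without flat-clipping the ends:

```python
def soft_contrast(v, contrast):
    """Smoothstep-style contrast on a value v in [0, 1]: steepens the
    middle of the range without flat-clipping the ends (illustrative only).
    """
    s = v * v * (3.0 - 2.0 * v)   # classic smoothstep curve
    t = contrast / 100.0          # blend between identity and smoothstep
    return (1.0 - t) * v + t * s
```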

Would there be a way to hold an unmodified version of the random tf and re-use it between invocations? Having these extra parameters has me imagining something where, if a particularly desirable random tf is found, the user could tick a box to hold it, so that the smoothing and thresholding parameters could be tuned without losing the base tf. That would be pretty dang convenient.

That would be easy; by default, the removal of the mesh (remove[1,2], where [2] is the mesh) comes towards the end of the filter, but I can add an option for G’MIC to output it anyway. It can also be reused and copied in the same invocation.

Ok, I’ll edit my post to avoid confusion.

Light and Shadow from IFS Illusions are done. I get the naming convention now, but Bright and Dark throw me off in terms of naming. So, I renamed Light to Illumination and Shadow to Shade; Brighten will be renamed to Lighten, and Dark will be renamed to Darken. I’m not sure about the naming convention here.

By the way, I’m down to six compatible blending modes from IMBLEND. I could emulate mesh to a degree with the aid of conditions and the modulus operation, but I don’t feel it’s worth the pain of doing that.

I will be adding a mesh G’MIC filter soon but I’m not sure what I should do for the opacity setting. What should vary with that parameter?

In IMBLEND, opacity is just a scalar which controls the final composition after blending. For the simple case with no alpha content on the inputs:

out = opacity*resultfromblend + background*(1-opacity);

… or whatever is the conventional composition for the environment when alpha is present (e.g. SRC-OVER)
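As a pure-Python sketch of that no-alpha case (per-pixel, values in [0, 1]):

```python
def compose(blended, background, opacity):
    """Final composition: mix the blend result over the untouched
    background by a scalar opacity in [0, 1]."""
    return opacity * blended + (1.0 - opacity) * background
```

Opacity never enters the blend math itself; it only controls how much of the blend result survives the final mix.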

I really never could figure out what the intent was with those modes. That’s another reason I was interested in figuring out what used to be on the site. The developer clearly had his mind set on solving problems with the ‘softlight’ mode. I can only assume there was purpose behind these other modes. Some just seem so radical I can’t really even guess.

For what it’s worth, I don’t know that the renaming really clarifies things to that end. It’s kind of the curse of blend mode naming. The goals are often similar, but the English language has only so many ways to succinctly rephrase “make brighter/darker”. I have often entertained just using other languages simply for the latitude.

It’s all your choice in the end, but I’m reluctant to change names unless it better helps describe the math, the relationships between complements/transposes/inverses, or the characteristics which differentiate a mode from others of similar utility. I changed ‘parallel’ to ‘harmonic’ because that’s mathematically the more correct description, but I left the EffectBank modes as they were, simply because I didn’t want to change conventions without confident reason on my part.

If you can figure out the intent, then we’re both better off, but ‘lighten’ and ‘darken’ at least need disambiguation. At least in the sources I found, EffectBank already had a unique non-relational ‘lighten’ and ‘darken’ mode anyway.

I have just added a Modulo blending mode to Krita, and in addition to that, I have added Divisive Modulo. Divisive Modulo is kind of like Modulus Addition in a way. It’s hard to explain. Here’s what Divisive Modulo is:

template<class T>
inline T cfDivisiveModulo(T src, T dst) {
    using namespace Arithmetic;

    qreal fsrc = scale<qreal>(src);
    qreal fdst = scale<qreal>(dst);

    if (fsrc == zeroValue<T>()) {
        return scale<T>(mod((1.00 / epsilon<T>()) * fdst, 1.00));
    }

    return scale<T>(mod((1.00 / fsrc) * fdst, 1.00));
}

What this does is divide the maximum channel value by the source layer, multiply the result with the base layer, and then apply the modulo operation. So, if the source layer is at half value, you get the equivalent of Modulus Addition when the two layers are the same. I actually think these two make great blending modes for abstract art and glitch art. Divisive Modulo can also be used to alter gradients.
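A quick sanity check of that half-value claim, translating the C++ above into normalised [0, 1] floats (helper names are mine; the epsilon guard stands in for Krita’s epsilon<T>()):

```python
import math

def divisive_modulo(src, dst, eps=1e-9):
    """Divide dst by src (i.e. multiply by max/src in channel terms),
    then wrap with modulo; a tiny epsilon guards the zero source."""
    return math.fmod(dst / max(src, eps), 1.0)

def modulo_addition(a, b):
    return math.fmod(a + b, 1.0)
```

With src pinned at 0.5 and both layers equal, dst / 0.5 is just 2 * dst, which is exactly modulo addition of the layer with itself.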

Next up is smooth divisive modulo.
