Multiple Inputs for GLSL (Shadertoy) Node (extra parameters?)

The Shadertoy node can take up to 4 inputs, but I need 5 (or maybe up to ~15). There is an ‘Extra Parameters’ option, but I fail to see how to connect it to any input nodes. Hard-coded values do not help.

Please help :slight_smile:

If you’re wondering why I would want that - head on over here.

Sorry, only 4 inputs in the ShaderToy we distribute (although it can be compiled to have more).
What I would do is implement the median of 3 inputs and make a tree of such nodes.
However, I’m not sure the noise comes from the reading process. It is most probably present in the recorded signal, in which case you should resort to better denoising techniques (I personally recommend the DenoiseSharpen node included with Natron; get Natron 2.3.16b1 to be able to run it without a crash).

Thank you for your prompt answer!
I’ve seen median-of-medians techniques, but I’m unsure whether that works well for 5 inputs, and I don’t know what to do with a dynamic number of inputs. This is why I originally didn’t want a tree-like structure; dynamically picking from X nodes has the same problems all over again.
In case that’s unclear, consider 5 inputs of frame n and another 5 of frame n-1 for time-based noise filtering: if the difference between frames is too big, I would not want the n-1 ones to be considered.
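Coming back to the tree idea, here is a tiny Python sketch of my worry (values are made up): a tree of median-of-3 stages can land far away from the true median of 5 samples.

```python
from statistics import median

def median3(a, b, c):
    # classic 3-input median without sorting
    return max(min(a, b), min(max(a, b), c))

samples = [1, 2, 3, 100, 101]        # made-up pixel values from 5 captures

true_med = median(samples)           # 3
tree_med = median3(median3(samples[0], samples[1], samples[2]),
                   samples[3], samples[4])
print(true_med, tree_med)            # 3 vs 100: the tree can be far off
```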

I understand that SHADERTOY_NBINPUTS is limited to 4 (and I freely admit I don’t have the skill to recompile that). Is it true, however, that the ‘SHADERTOY_NBUNIFORMS’ parameters cannot be connected to input nodes?

As for the nature of noise: you are very correct to question that. I did look at it manually (frame by frame comparison) and can say that it’s really my device’s ‘offering’ to my labors…
I’ve added pictures in the videohelp forum thread if you’re wondering what that looks like.

For the color blobs in this kind of noise, the Ducks filter worked wonders. But I’d like to see what I can do with a larger sample set of capture material, in case I can do better.

The uniforms are the parameters. Inputs are textures. OpenGL will have a hard time processing all these textures, and a median filter from a variable number of values will be hard to code in GLSL (maybe it’s feasible, but I doubt it).

The best would be to make an OpenFX plugin. I can make one that takes up to, say, 64 inputs (like Merge). Would you compute the median for each channel separately?

How would your time-based filtering work? Take the median from the 5 images at n and the one from the images at n-1, and then do what?
How do you decide if the difference is “too big”?
If it’s not too big, what do you do with the 10 values?

Have you already tried more modern solutions, like this one: https://youtu.be/gqo5Q4W0rNk


Yes, I would look at the channels separately, likely in HSV/HSL colorspace (using RGBToHSL nodes).
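Roughly what I mean, as a small numpy sketch (the array sizes are made up; HSL lightness is computed directly as (max+min)/2 here, whereas in Natron this would of course be RGBToHSL nodes):

```python
import numpy as np

# made-up stack of N aligned captures of the same frame,
# shape (N, height, width, 3), RGB in [0, 1]
frames = np.random.rand(5, 480, 720, 3).astype(np.float32)

# per-channel median across the N captures (each RGB channel separately)
median_rgb = np.median(frames, axis=0)

# HSL lightness of each capture: L = (max(R,G,B) + min(R,G,B)) / 2,
# then the median of L across the captures
lightness = (frames.max(axis=-1) + frames.min(axis=-1)) / 2.0
median_l = np.median(lightness, axis=0)
```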

While machine learning is all the rage, I’m not that thrilled. The linked sample does look significantly better, but I’d love to see a comparison against all the non-AI adjustments rather than the untouched base. Color correction alone would have a large subjective impact. I don’t have enough experience to guess whether the fluid movement and better interlacing couldn’t have been achieved just as easily by other means.

Admittedly, this might just be my bias, since I’m used to sneering at (or turning off) most of the ‘AI something’ features manufacturers promote their stuff with nowadays.

I don’t know exactly what my filter is supposed to look like in the time-based stage. Since I need to put my thoughts in order anyway, here goes:

The basic idea is to have a matrix m[source_index, frame_index, x, y, channel] = value
and then compute something like median_lum(n, x, y) = median(m[:, [n-1, n, n+1], x, y, hsl.L]).

Choosing the set of applicable frames (i.e. [n-1, n], [n-1, n, n+1], [n, n+1] or even just [n]) requires playing around to see what works for my material.
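In numpy-ish shorthand, the idea above would be something like this (a rough sketch with made-up dimensions, assuming all captures are already aligned and loaded into one array):

```python
import numpy as np

# made-up dimensions: 5 captures, 100 frames, 480x720 pixels, 3 channels (H, S, L)
m = np.random.rand(5, 100, 480, 720, 3).astype(np.float32)
L = 2                                    # index of the lightness channel

def median_lum(n, window=(-1, 0, 1)):
    # frames of every capture at times n-1, n, n+1 (or whatever window is chosen;
    # n has to stay away from the clip boundaries in this sketch)
    frame_indices = [n + d for d in window]
    block = m[:, frame_indices, :, :, L]          # shape (5, len(window), 480, 720)
    # per pixel, median over both the capture axis and the time axis
    return np.median(block, axis=(0, 1))
```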

If I had two Natron node parameters, TIME_THRESHOLD (which decides by looking at the difference between differences) and a helper variable COMPARISON_BLUR_STRENGTH, I would start by doing it like this (a rough code sketch follows the case list below):

- defining 3 sets of frames grouped by frame time: n-1, n, n+1
- blurring each frame in a set (using COMPARISON_BLUR_STRENGTH)
- summing up the values of each frame per channel and normalizing with respect to pixel_maximum * frame_width * frame_height
- taking the set-median over all normalized sums in the set
- computing differences between set-medians:
  diff_prev = absdiff(set-median(n-1), set-median(n))
  diff_next = absdiff(set-median(n), set-median(n+1))
  diff_diff = diff(diff_prev, diff_next)

- now there are 3 cases:
  diff_diff < TIME_THRESHOLD: take all 3 sets as input for the next step
  otherwise, diff_diff > 0: take the current and next set
  otherwise, diff_diff <= 0: take the current and previous set
  returning, for instance, median_filtering_set = [previous, current]
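Here is the rough sketch mentioned above, in Python. A few things are my own simplifying assumptions: scipy’s gaussian_filter merely stands in for a Natron blur node, the per-channel sums are folded into one scalar to keep it short, and the TIME_THRESHOLD test is read as a comparison against |diff_diff|.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

TIME_THRESHOLD = 0.02            # made-up value, would need tuning
COMPARISON_BLUR_STRENGTH = 2.0   # blur sigma for the comparison, also made up

def set_score(frame_set):
    """Blur every capture in one frame set, sum and normalize each one, then
    return the set-median of those normalized sums (one scalar per set;
    the channels are folded into the sum here to keep the sketch short)."""
    sums = []
    for f in frame_set:                       # f: (height, width, channels), in [0, 1]
        blurred = gaussian_filter(f, sigma=(COMPARISON_BLUR_STRENGTH,
                                            COMPARISON_BLUR_STRENGTH, 0))
        # pixel_maximum is 1.0 for float data, so normalize by width * height (* channels)
        sums.append(blurred.sum() / (f.shape[0] * f.shape[1] * f.shape[2]))
    return np.median(sums)

def choose_sets(prev_set, cur_set, next_set):
    diff_prev = abs(set_score(prev_set) - set_score(cur_set))
    diff_next = abs(set_score(cur_set) - set_score(next_set))
    diff_diff = diff_prev - diff_next
    if abs(diff_diff) < TIME_THRESHOLD:       # my reading: compare |diff_diff|
        return [prev_set, cur_set, next_set]
    elif diff_diff > 0:                       # previous frames differ more, drop them
        return [cur_set, next_set]
    else:                                     # next frames differ more, drop them
        return [cur_set, prev_set]
```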

All of this is just to determine which frames should be added to the median filtering set.
From now on I’d look at the original (non-blurred) frames:

// the ':' denotes the complete set of applicable frames, instead of just a single one
output_frame(n, x, y, channel_index) = median(median_filtering_set(:, x, y, channel_index))
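The same thing spelled out with numpy (the stack shape is a made-up example):

```python
import numpy as np

# median_filtering_set: every applicable frame from every capture, stacked
# into shape (captures * applicable_frames, height, width, channels)
median_filtering_set = np.random.rand(10, 480, 720, 3).astype(np.float32)

# per pixel and per channel, the median over all applicable frames and captures
output_frame = np.median(median_filtering_set, axis=0)        # (480, 720, 3)
```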

Doing it with separate node-groups for each channel might be prudent in case there are memory concerns with that much data. That’s probably not an issue for my use case, though.

There are several potential pitfalls I can see so far. As johnmeyer (forum.videohelp.com/members/13415-johnmeyer) pointed out, if frames are not well aligned between different captures, I’d have to deal with this on a frame-by-frame basis rather than considering whole sets.
Another issue could occur if my blur-sum-compare approach is too rough; maybe I’d need to do this on smaller chunks of each frame.
If the scene is too dynamic I might have to drop both the previous and next sets and/or start tracking the scene and work with partial overlaps. No sense in starting with that, however.
As a lesser concern, feeding a median function an even number of inputs might be trouble (dropping one frame from the next-set is the quick-and-dirty solution I can think of).

Hopefully the whole thing is robust, but since it looks at any given image up to three times, there might be too much weight spread out, resulting in a time-averaged smear of some kind.

Very likely I’ll need to fiddle. Probably quite a lot ^^

I’ve looked into vapoursynth, and there are filters available (like rgvs.Clense) that do most of what I wanted. Writing extended ones is possible using ConditionalFilter and FrameEval, but I’m still fighting with the basics of input/output formats. Natron spoiled me here, since things simply worked without me even realizing there might be an issue.
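For reference, the direction I’m fumbling toward looks roughly like this (a sketch only; it assumes the ffms2 source plugin and the RGVS plugin are installed, and that the three captures are already trimmed and aligned):

```python
import vapoursynth as vs
core = vs.core

# three captures of the same tape, already aligned (assumption);
# ffms2 is an external source plugin that has to be installed
a = core.ffms2.Source('capture_a.avi')
b = core.ffms2.Source('capture_b.avi')
c = core.ffms2.Source('capture_c.avi')

# per-pixel median of the three captures (RPN median-of-3 expression)
spatial_med = core.std.Expr([a, b, c], 'x y min x y max z min max')

# rgvs.Clense: temporal median of the previous/current/next frame of the result
out = core.rgvs.Clense(spatial_med)

out.set_output()
```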

In any case, thanks for the help; I will definitely keep using Natron once I’m done with this processing step. The vapoursynth editor is OK-ish, but far from as intuitive and polished as Natron.

On a totally unrelated note - are there any plans to get vapoursynth support in Natron by any chance?