1D processing thread

I see that there are differences, which raises the question: why are the outputs different? I took out the > assuming the outputs would be similar. With rep_linear_echo, I didn’t see a difference.

An IIR filter (the kind we are discussing) accumulates the values of all previous samples/pixels, so you need to process them sequentially. The advantage of this is that you only need to keep 2 samples in memory and perform only a few operations per sample.

In other words, if you wanted to do this non-sequentially, you would need to hold the whole image in memory, and the cost of processing would grow with each pixel.

You would have to do what I described above:

n1 = n1 * 0.9
n2 = n2 * 0.9 + n1 * 0.09
n3 = n3 * 0.9 + n2 * 0.09 + n1 * 0.009
n4 = n4 * 0.9 + n3 * 0.09 + n2 * 0.009 + n1 * 0.0009
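The two approaches can be sketched side by side. This is a minimal illustration, not anyone's actual code: it assumes a one-pole recurrence y[k] = 0.9·x[k] + 0.1·y[k−1], which is what the expansion above unrolls, and the input values are made up.

```python
# Sketch comparing the sequential IIR recurrence with the unrolled
# per-sample sums above. The 0.9 coefficient is from the example;
# the input samples are hypothetical.
a = 0.9
x = [1.0, 2.0, 3.0, 4.0]  # hypothetical input samples n1..n4

# Sequential form: only the previous output needs to stay in memory,
# and each sample costs a fixed handful of operations.
y = []
prev = 0.0
for s in x:
    prev = a * s + (1 - a) * prev
    y.append(prev)

# Unrolled (non-sequential) form: each output is a weighted sum over
# ALL earlier inputs, so the cost grows with each sample processed.
y_unrolled = [
    sum(a * (1 - a) ** (k - i) * x[i] for i in range(k + 1))
    for k in range(len(x))
]
```

Both lists come out the same; the unrolled version just makes the growing per-sample cost explicit.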

For echo, you don’t accumulate results. You are only adding two values together, so you don’t have to do it sequentially.

Edit: If you add feedback to the echo that is a different story.
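A feedback-free echo can be sketched like this; the delay length and mix level are made-up values, and the point is only that each output depends on the input alone, so the samples can be processed in any order.

```python
# Sketch of an echo WITHOUT feedback: each output mixes the current
# input sample with one delayed input sample. Because nothing feeds
# back, y[k] never depends on earlier outputs, so order doesn't matter.
delay = 2   # echo delay in samples (hypothetical)
mix = 0.5   # echo level (hypothetical)
x = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # an impulse

y = [
    x[k] + mix * (x[k - delay] if k >= delay else 0.0)
    for k in range(len(x))
]
```

With feedback, the delayed term would read from `y` instead of `x`, which reintroduces the sequential dependency described above.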

I just realised a couple of things I should mention.

  1. The variable in the filters needs to be between 0 and 1

  2. I think I messed up the high pass filter. Low pass is correct though

I think that is part of why I have been scratching my head. I haven’t had a chance to observe the filters in action yet. I just need an idea of what it looks like, visually, sonically and figuratively.

In audio it cuts out lows and makes things sound tinnier. In images it gets rid of coarse details and leaves finer ones.

In my mind, subtracting the previous value truncates values. How would it allow highs to be passed through? I am imagining the 1D audio fragment as a string. Lately, my brain has been cloudy, so maybe I am missing the obvious.

High frequencies are rapid fluctuations from sample to sample. This creates significant differences between adjacent samples.

Low frequencies are slow fluctuations between samples. A low frequency signal means each sample is very similar to the previous one.

If you want to remove high frequencies and let low frequencies pass (a low pass filter), then averaging adjacent samples reduces the rapid fluctuations from sample to sample.

If you want to remove low frequencies and let high frequencies pass (a high pass filter), then you want to remove the similarities between samples and keep the differences. A simple way to do that is to find the difference between (subtract) adjacent samples.
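Both ideas can be sketched in a few lines. This is an illustration with a made-up input, using the simplest possible two-sample versions of each filter described above:

```python
# Simplest low-pass (average adjacent samples) and high-pass
# (subtract adjacent samples), applied to a rapidly alternating
# signal, i.e. the highest frequency representable.
x = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]

# Averaging smooths out the sample-to-sample fluctuation entirely.
lp = [(x[k] + x[k - 1]) / 2 for k in range(1, len(x))]

# Differencing keeps only the sample-to-sample changes.
hp = [x[k] - x[k - 1] for k in range(1, len(x))]
```

On this alternating input the low-pass output is flat (the high frequency is gone), while the high-pass output keeps the full swing; a constant input would do the opposite, passing through the averager untouched and coming out of the differencer as all zeros.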

I couldn’t find any tutorials on YouTube that give you audio examples straightaway; they come with all sorts of explanations first instead of digging into what you actually get. I found a loop here https://freepd.com/music/Bit%20Bit%20Loop.mp3 and I used a section of it for some examples: loop1 is unedited, loop1-lp500 has a low pass at 500 Hz and loop1-hp2000 has a high pass at 2000 Hz. Both filters are second-order and neither is resonant.

audiofilterexamples.7z (3.0 MB)