The plugin wasn’t updating or reading the *.gmic files properly. After a few reboots and relaunches, the problem disappeared.
Changes
1. Since afre_darksky now works with the rewritten afre_softlight, I have replaced the builtin with mine.
2. I moved the rewritten afre_softlight out of Testing and into the Colors category. It is an implementation of @DGM’s code, which in turn is based on a defunct company’s. I found myself using this soft light blend the most, so it made sense to give it a command of its own. See I'm generating new blending modes for Krita - #51 by DGM. My old implementation, while well-intentioned, was too complicated and not a good stepping stone for more complex commands.
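For the curious, here is a minimal Python sketch of one common soft-light formula (the Pegtop variant). This is for illustration only; afre_softlight’s exact formula may differ.

```python
# Pegtop soft-light blend, one common soft-light formula.
# Assumption for illustration only; afre_softlight's exact
# formula may differ.
def soft_light(base, blend):
    """Both inputs in [0, 1]; returns the blended value."""
    return (1 - 2 * blend) * base ** 2 + 2 * blend * base

# Blending with 0.5 grey is the neutral point: the base is unchanged.
print(soft_light(0.3, 0.5))  # 0.3
```

Note how the formula interpolates between multiply (blend = 0 gives base squared) and a screen-like response (blend = 1 gives 2·base − base²), with mid-grey as the identity.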
As usual, update your G’MIC and its filters a couple of hours after this diary entry.
That was a different method. This one is a refinement of Method 2 in post #155. However, it takes time due to all the looping. Basically, it uses the result from Method 2 as a guide for the noisy image. Since it is so slow, it would be untenable for larger images; even with these small images, it takes 2-3 minutes!
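The guide idea can be sketched with a generic guided filter (in the He et al. sense). This pure-Python 1-D version is illustrative only and is not my actual command; the helper names box_mean and guided_filter are made up.

```python
# Minimal 1-D guided filter sketch: use a pre-denoised result as the
# guide for the noisy input. Generic textbook formulation, not the
# actual afre implementation.
def box_mean(xs, r):
    """Mean over a window of radius r, clamped at the borders."""
    n = len(xs)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def guided_filter(guide, src, r, eps):
    """Filter src using guide's structure; eps regularizes flat areas."""
    mean_g = box_mean(guide, r)
    mean_s = box_mean(src, r)
    corr_gg = box_mean([g * g for g in guide], r)
    corr_gs = box_mean([g * s for g, s in zip(guide, src)], r)
    var_g = [cgg - mg * mg for cgg, mg in zip(corr_gg, mean_g)]
    cov_gs = [cgs - mg * ms
              for cgs, mg, ms in zip(corr_gs, mean_g, mean_s)]
    # Local linear model: output = a * guide + b in each window.
    a = [c / (v + eps) for c, v in zip(cov_gs, var_g)]
    b = [ms - ai * mg for ms, ai, mg in zip(mean_s, a, mean_g)]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [ma * g + mb for ma, mb, g in zip(mean_a, mean_b, guide)]
```

Feeding each pass’s output back in as the next guide is the kind of looping that makes this approach so slow.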
Weakness – Areas with more blur in the original noise-free image are destroyed here, as can be seen in the lit areas of the pencils and the fading text on the left. The fade is more extreme than in the original image. Some sort of normalization of texture and/or noise is probably needed. My brain still turns to jelly when talking to @Iain or @rawfiner, so any denoise attempt of mine will usually end in defeat. Maybe it is a matter of defeating my timid self. That said, at my own pace, I am getting a nano step better.
Advancement – The main improvement is that these images don’t exhibit the splotching, and the edges are preserved, albeit riddled with holes where, I am guessing, the high-frequency noise used to be.
Made some final touches to the denoise command. I removed unnecessary code that contributed to the problem identified in my previous post. It is now a more presentable form of the custom self-guided filter that I have been working on for a long time.
I am calling it afre_denoisesmooth (CLI / GUI) because it can denoise noisy images and smooth low-noise images (while preserving structure and texture). Since I use my own custom guided filter, it is much slower than the native one: it is scripted rather than built in, it is more nuanced, and there is a lot of looping ( loop_n = radius + amount ). At the moment, I prefer not to add fast versions, which would compromise output quality and stability.
I tried Smooth Denoise; I wish it were faster. I’m not sure whether stability is a problem, but a performance boost would be fine if the sacrifice in detail is not too severe.
I haven’t had time to deal with the previous to-dos. I have kept a list, but it might not match those mentioned on the forum. I have added a handful more here: Spaniel, king of tree stumps! - #11 by afre. I am placing it here, in this thread, so I can remember.
Minor Update
I have been working on the alt versions of afre_gui0, afre_gui1, and afre_denoisesmooth. I renamed their suffixes from _fast to _alt because they are barely any faster and are more experimental in nature.
My alt custom guided filters now use the builtin blur (Gaussian) as a replacement for afre_box, instead of boxfilter, which yielded bad results. It is a natural choice because iterated box filters approximate a Gaussian blur anyway. The speed gain wasn’t all that much, however.
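A quick pure-Python check of why the substitution is natural: iterating a box filter converges to a Gaussian kernel (central limit theorem), and the variances add up as predicted by n·(w²−1)/12.

```python
# Iterated box filters approximate a Gaussian, which makes a builtin
# Gaussian blur a natural stand-in for repeated box filtering.
def box_convolve(xs, w):
    """Convolve with a length-w box kernel (w odd), zero-padded."""
    r, n = w // 2, len(xs)
    return [sum(xs[j] for j in range(max(0, i - r), min(n, i + r + 1))) / w
            for i in range(n)]

# Apply a width-3 box three times to an impulse.
kernel = [0.0] * 15
kernel[7] = 1.0
for _ in range(3):
    kernel = box_convolve(kernel, 3)

# The kernel's variance matches n * (w**2 - 1) / 12 = 3 * 8 / 12 = 2.
mean = sum(i * k for i, k in enumerate(kernel))
var = sum((i - mean) ** 2 * k for i, k in enumerate(kernel))
print(round(var, 6))  # 2.0
```

Three box passes already produce a smooth, bell-shaped kernel, which is why the builtin Gaussian gives visually similar results.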
I turned down the structure weight in afre_gui1_alt for afre_denoisesmooth_alt. Structure seems to fade the edges rather than preserve them when I iterate the command. I wonder whether this applies whenever iteration occurs; if it does, I may consider changing the default to structure=0. Since it is unique to my code, it should probably be dialed back anyway.
As usual, update your G’MIC and its filters a couple of hours after this diary entry.
What I mean is that I am moving faster than I am realizing ideas. I am playing instead of achieving.
E.g., I have long had the components to make a guided filter that matches or exceeds the one in the very recent dt release, but I am not interested in putting one together. Frankly, the ones I have made are already fun to play with. No wonder the Davids tend to put my commands into the fun category. They are more like gag toys that work if you know them intimately but are utterly unreasonable if you don’t. The same goes for the other commands. If I were about being practical, I would have mended them already, which I have for some of them; e.g., soft light used to be a little too unreasonable, so I made it useful and, as a result, boring.
I am more interested in exploring uncharted territory and doing idle nonsense because it is fun. Let’s just say I am writing G’MIC poetry in the way @chroma_ghost honed his creativity. I guess that this is what I have always done. I am simply declaring it now. Don’t you worry, I am still playing with “denoising”. Whatever that means with my buckets of sand.
I think this would be a good candidate for my frequency-domain detail recovery. You can test it with my noise reduction filter using two layers: the noisy one on top, the clean one on the bottom, with ‘guide recovery’ selected in the filter. The filter then extracts the details from the difference between the layers.
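The two-layer arithmetic can be sketched in a few lines of Python. This shows only the layer bookkeeping; the hypothetical recover_detail below is not the filter itself, which works in the frequency domain.

```python
# Sketch of the "guide recovery" layer arithmetic: the detail band is
# the difference between the noisy layer and its denoised version,
# added back onto the clean base layer. Illustrative only; the actual
# filter extracts details in the frequency domain.
def recover_detail(noisy, denoised, clean_base, strength=1.0):
    """All inputs are equal-length lists of pixel values."""
    detail = [n - d for n, d in zip(noisy, denoised)]
    return [c + strength * dt for c, dt in zip(clean_base, detail)]
```

The strength parameter (my own addition for the sketch) scales how much of the extracted detail band is reinjected.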
As far as implementation is concerned, I have these problems to solve.
1. Runs at a reasonable speed on my low-end machine.
2. Can handle full- and half-sized images similarly.
3. Preserves enough detail to make the filtering worthwhile.
4. Code is easy to examine, understand and improve.
5. Respects physical properties of light and subject matter.
The question is where to compromise. I may have found a better balance this time. With the correct parameters, everything I have tried so far would work as input to @Iain’s denoiser. The challenge is to yield acceptable results without relying on complex code (Iain’s, and David’s on which it depends). I am still going my own way (I like the adventure). I need not compete with other developments, nor am I of a competitive nature.
Small Update
I am realizing how buggy (or limited) my commands truly are. Apologies for not having the wherewithal to address them. Don’t discriminate based on how a command or filter is currently categorized: those in Testing may be just as viable (or buggy) as those in a proper category. I encourage you to try them all, so you can motivate me to fix them.
After a long silence, today I improved afre_brightness. Before, I would decompose the image into luminance and colour components, process them separately, and then recombine them. Now, for colour, I process the image before the separation. I shall adjust the other commands accordingly soon.
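The old decompose/process/recombine path can be sketched like this, assuming Rec. 709 luma weights; the real afre_brightness pipeline differs in its details, and adjust_brightness is a made-up name.

```python
# Sketch of decompose/process/recombine brightness: split RGB into a
# luminance value and colour residuals, adjust the luminance, then
# recombine. Rec. 709 luma weights assumed for illustration; the
# actual afre_brightness pipeline differs.
LUMA = (0.2126, 0.7152, 0.0722)

def adjust_brightness(rgb, gain):
    """rgb: (r, g, b) in [0, 1]; gain scales luminance only."""
    y = sum(w * c for w, c in zip(LUMA, rgb))   # luminance
    residual = tuple(c - y for c in rgb)        # colour offsets
    y2 = min(1.0, max(0.0, y * gain))           # processed luminance
    return tuple(min(1.0, max(0.0, y2 + r)) for r in residual)
```

With gain = 1 the round trip is (up to rounding) the identity, which is the sanity check I would expect any decompose/recombine scheme to pass.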
I also added a smooth parameter to prevent highlight blemishes in the source image from ruining the output when extreme brightness amounts are applied. This might let me increase the range of the parameter, but the filter may not be ready for that yet; smooth may need more work.
Bonus
I almost forgot to share a GIF I found on Wikipedia. The forum makes it a PNG. Follow the source link to see the animation and its credits.
You don’t have to apologize; you’re just doing your best with the limitation of few bug reports. I don’t get many bug reports for my own filters either.
Hotfix – afre_brightness became radioactive at lower amounts. Committed a fix; update your commands / filters in an hour or so.
Update – afre_contrast got the same treatment as afre_brightness, and I expanded the amount range to [-200,200].
This is more of a commentary; therefore, more suitable for this thread than G’MIC exercises.
1. Unless I have a good reason, I found that iterations are not only wasteful but cause problems.
2. I don’t like boxfilter (it doesn’t suit my needs), which is why I made my own (not exactly a box filter), but mine is so much slower.
I started reading about various filters, such as the median and friends, because I was exploring how they perform in terms of
a. colour preservation
b. edge preservation
Many artifacts come from a combination of the lack of the two and accumulate as processing progresses, which is the reason for point 1.