@patdavid
Some explanation about “Wavelets”. Excuse my very bad English.
The “Wavelets” module in the “main” menu has also been greatly improved. It was on this occasion that I introduced the notion of “Attenuation response”, which I will explain as simply as possible a little later.
Several modules have undergone profound changes, notably through the addition of this function and its corollaries (“Offset”…), for example the “Contrast” and “Chroma” modules, among others.
The " Toning " module has been enriched with a graphic possibility of visualizing the changes.
The " Denoise " module (which I have separated in " Local Adjstments ") is clearly improved, but it limits certain possibilities of wavelet, by its limitation in number of levels of decomposition (memory occupation, processing time). It includes (in “Advanced” mode) complex optimization functions to improve the results, which I will not detail here. Moreover, it is now equipped with a “Chroma” module.
Edge sharpness" (an extremely complex module), which increases the contrast on the edges, is also improved.
A “Blur levels” module is also new. Its usefulness is obvious…
“Residual image” has also been improved, in particular with the “Blur” function and a new “Clarity” module.
“Final Touchup” has also been significantly improved, in particular by the “Directional contrast” function, which differentiates the response of the three wavelet decompositions (horizontal, vertical, diagonal), and by “Final smoothing”, which allows you to soften images that are too “hard”.
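To illustrate the idea behind “Directional contrast” (a purely conceptual sketch, not the actual RawTherapee code; the structure name, function name and simple per-direction gains are my own assumptions): each wavelet level holds three detail subbands, and each direction can receive its own contrast factor.

```cpp
#include <vector>

// Hypothetical sketch: one wavelet level with its three detail subbands.
struct WaveletLevel {
    std::vector<float> horizontal;
    std::vector<float> vertical;
    std::vector<float> diagonal;
};

// Scale each direction independently, so contrast can be strengthened
// along one orientation and weakened along another.
void directionalContrast(WaveletLevel &level,
                         float gainH, float gainV, float gainD)
{
    for (float &c : level.horizontal) c *= gainH;
    for (float &c : level.vertical)   c *= gainV;
    for (float &c : level.diagonal)   c *= gainD;
}
```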
A few words about “Attenuation response”. This addition has been the subject of very long debates; you can find the explanation in the excellent diagrams made by @XavAL in the “Wavelet” documentation.
To simplify: without this addition, the wavelet decomposition often risks introducing artifacts. Why is this? Simply because when you act on a contrast slider, for example, the whole signal is modified (and this signal is not the luminance itself, but its variation, positive and negative, for each level and each direction: horizontal, vertical, diagonal). The basic idea is simple, but difficult to grasp. We look for the maximum of the signal (of course for each level and direction), and we calculate a standard deviation (on a distribution which is anything but Gaussian, but which serves as a reference). From these two values, we concentrate the action on the most pronounced “central” part of the signal; “Attenuation response” allows us to reduce or increase this “central” part, and “Offset” to move it.
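To make the idea more concrete, here is a minimal sketch (not the actual RawTherapee code; the function name, the bell-shaped weight and the exact way the maximum, standard deviation, attenuation and offset are combined are assumptions of mine) of how a contrast change could be concentrated on the “central” part of the signal for one level and one direction:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical sketch: boost the wavelet coefficients of one level/direction,
// concentrating the boost on the "central" part of the signal.
// 'attenuation' widens or narrows that central zone, 'offset' shifts it.
void applyContrastWithAttenuation(std::vector<float> &coeffs,
                                  float contrast,     // user contrast setting
                                  float attenuation,  // "Attenuation response"
                                  float offset)       // "Offset"
{
    // 1. Maximum absolute value of the signal for this level/direction.
    float maxAbs = 0.f;
    for (float c : coeffs) {
        maxAbs = std::max(maxAbs, std::fabs(c));
    }
    if (maxAbs == 0.f) {
        return;
    }

    // 2. Standard deviation (the distribution is anything but Gaussian,
    //    but it still serves as a reference scale).
    double sum = 0.0, sum2 = 0.0;
    for (float c : coeffs) {
        sum  += c;
        sum2 += double(c) * c;
    }
    const double n     = coeffs.size();
    const double mean  = sum / n;
    const double sigma = std::sqrt(std::max(0.0, sum2 / n - mean * mean));

    // 3. The "central" zone: centered relative to the maximum (shifted by
    //    the offset), with a width driven by the standard deviation.
    const double center = offset * maxAbs;
    const double width  = std::max(1e-6, attenuation * sigma);

    // 4. Apply the contrast change weighted by a bell-shaped response, so
    //    only coefficients near the central zone are strongly modified.
    for (float &c : coeffs) {
        const double d      = (std::fabs(c) - center) / width;
        const double weight = std::exp(-0.5 * d * d);   // attenuation response
        c *= 1.f + contrast * float(weight);
    }
}
```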
This addition makes wavelets usable in most cases, without adding artifacts.
All this work required the invaluable collaboration of @Wayne_Sutton , @Andy_Astbury1 and @XavAL , and of course the code has been improved by @heckflosse .
Compared to “Local Adjustments”, many modules seem similar, but because of the possible number of levels and the deltaE, the performance of “LA” is, in my opinion, superior. Moreover, functions have been added in “LA”, such as “Graduated filter”, which acts on contrast (in the wavelet sense); “Tone mapping”, another approach than the “usual” algorithms, which with negative values allows you, for example, to simulate a “Dodge and burn”; and of course the masks and mask-related functions such as “Recovery based on luminance mask”.
Jacques