@patdavid another change in the same line as mentioned above, I mean within Dual-Demosaic with Bilinear:
“options in RawTherapee that blends the results from”
I think it should be blend instead of blends
@patdavid in the Wavelet Improvements section,
“Main developers: jacques Desmis, Ingo Weyrich”
A capital J for Jacques.
Thanks again for your time and effort!
Ok, you all rock!
Thank you for taking the time to proofread and get me fixes/suggestions. I’ve incorporated all of the changes you’ve mentioned.
I do still have a question - mention was made of improvements to wavelets but I cannot find a link or explanation about what was done. I apologize if I’m being dense but is there a good reference for the improvements to wavelets? Is it just for local adjustments or as part of a larger module?
Could I please get a link to some material to review?
Also - I’ve added the section on Abstract profiles - if someone could take a quick look and make sure it makes sense that would be awesome.
If there are no further changes to be made I can see about publishing this weekend possibly (I still need to test pushing to the website host with the credentials Gabor passed on to me).
The post is looking good, Pat! There are a few things I wanted to say, but I haven’t had the time in the past few days. I’ll get them to you tomorrow. One quick thing, though: AFO is the Spanish abbreviation for “waveform analyzer”. In English, it’s simply referred to as “waveform”.
I will try, from memory, to note the main changes in Wavelets. Note the excellent quality of Rawpedia - Wavelets (graphics, images and general presentation by @XavAL ; content by Jacques Desmis, @Wayne_Sutton , @Andy_Astbury1 and @XavAL ).
As for “Abstract profile”, it has been the subject of many heated debates (that’s an understatement), mostly related to a lack of understanding of what I had done (what is this thing, it’s not the usual approach, etc.). Others can give their opinion; it was validated by myself (obviously), by @Wayne_Sutton , and by @Andy_Astbury1 (with a video).
Jacques
@patdavid
Some explanation of “Wavelets”. Excuse my very bad English.
The “Wavelets” module in the “main” menu has also been greatly improved. It is on this occasion that I introduced the notion of “Attenuation response”, which I will explain as simply as possible, a little later.
Several modules have undergone profound changes, notably through the addition of this function and its corollaries (offset…): for example, the “Contrast”, “Chroma” and other modules.
The “Toning” module now includes a graphical way of visualizing the changes.
The “Denoise” module (which I have separated out into “Local Adjustments”) is clearly improved, but its restricted number of decomposition levels (memory occupation, processing time) limits certain wavelet possibilities. It includes (in “Advanced” mode) complex optimization functions to improve the results, which I will not detail here. Moreover, it is now equipped with a “Chroma” module.
“Edge sharpness” (an extremely complex module), which increases the contrast on edges, has also been improved.
A “Blur levels” module is also new. Its usefulness is obvious…
“Residual image” has also been improved, in particular with the “Blur” function and a new “Clarity” module.
“Final Touchup” has also been significantly improved, in particular by the “Directional contrast” function, which differentiates the response of the three wavelet decomposition directions (horizontal, vertical, diagonal), and by “Final smoothing”, which lets you attenuate images that are too “hard”.
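To make “Directional contrast” more concrete, here is a toy sketch of the general idea. This is my own illustration, not RawTherapee’s code: a one-level 2D Haar decomposition (written by hand) splits an image into an approximation band plus horizontal, vertical and diagonal detail bands, and a directional boost simply scales one band more than the others. The 1.5× factor is a hypothetical value.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar decomposition (toy sketch, not RT code)."""
    a = img[0::2, :] + img[1::2, :]        # low-pass along rows
    d = img[0::2, :] - img[1::2, :]        # high-pass along rows
    cA = (a[:, 0::2] + a[:, 1::2]) / 4.0   # approximation
    cH = (d[:, 0::2] + d[:, 1::2]) / 4.0   # horizontal-edge detail
    cV = (a[:, 0::2] - a[:, 1::2]) / 4.0   # vertical-edge detail
    cD = (d[:, 0::2] - d[:, 1::2]) / 4.0   # diagonal detail
    return cA, cH, cV, cD

img = np.zeros((4, 4))
img[:, 1:] = 1.0                           # a vertical edge
cA, cH, cV, cD = haar2d(img)

# Only the vertical-detail band responds to this edge; a hypothetical
# 1.5x "directional contrast" boost in that direction scales cV alone.
cV_boosted = 1.5 * cV
```

The point is simply that the three detail bands are independent, so contrast can be pushed along one direction without touching the others.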
A few words about “Attenuation Response”. This addition has been the subject of very long debates, you can find the explanation in the excellent diagrams made by @XavAL , in the “Wavelet” documentation.
To simplify: without this addition, the wavelet decomposition often risks introducing artifacts. Why? Because when you act on a contrast slider, for example, you act on the whole signal (which is not the luminance itself, but its variation, positive and negative, for each level and each direction: horizontal, vertical, diagonal). The basic idea is simple, but difficult to grasp. We look for the maximum of the signal (for each level and direction, of course) and we calculate a standard deviation (on a distribution that is anything but Gaussian, but which serves as a reference). From these two values, we concentrate the action on the most pronounced “central” part of the signal. “Attenuation response” lets us reduce or increase this “central” part, and “Offset” lets us move it.
This addition makes wavelets usable in most cases, without adding artifacts.
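Jacques’s description can be sketched in a few lines. This is only my reading of the general mechanism, not RawTherapee’s implementation; the bell-shaped weight, the `attenuation` and `offset` parameters and all constants are assumptions for illustration.

```python
import numpy as np

def attenuated_boost(coeffs, strength, attenuation=1.0, offset=0.0):
    """Hedged sketch of 'attenuation response' (not RawTherapee's code):
    instead of boosting every wavelet coefficient equally, concentrate
    the boost on the 'central' part of the coefficient distribution.
    `attenuation` narrows or widens that zone; `offset` shifts it."""
    mag = np.abs(coeffs)
    peak = mag.max()               # maximum of the signal for this level
    sigma = mag.std() + 1e-12      # reference "standard deviation"
    centre = 0.5 * peak + offset   # assumed placement of the central zone
    weight = np.exp(-((mag - centre) ** 2) / (2 * (sigma / attenuation) ** 2))
    return coeffs * (1.0 + strength * weight)

rng = np.random.default_rng(0)
level = rng.normal(0.0, 0.1, size=1000)   # fake detail coefficients
boosted = attenuated_boost(level, strength=0.5, attenuation=1.0)
```

Because the weight falls off away from the central zone, the largest (edge-like) and smallest (noise-like) coefficients are boosted less, which matches the stated goal of avoiding artifacts.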
All this work required the precious collaboration of @Wayne_Sutton , @Andy_Astbury1 , and @XavAL , and of course the code has been improved by @heckflosse
Compared to “Local adjustments”, many modules seem similar, but because of the possible number of levels and the deltaE, the performance of “LA” is in my opinion superior. Moreover, functions have been added in “LA”, such as “Graduated filter”, which acts on contrast (in the wavelet sense), and “Tone mapping”, a different approach from the other “usual” algorithms, which with negative values lets you simulate, for example, “dodge and burn”; and of course the masks and mask-related functions such as “Recovery based on luminance mask”.
Jacques
For your convenience, Wavelet Levels - RawPedia. It may still be a little abstract for some readers… In general, when an image is decomposed into wavelets, simply increasing or decreasing each wavelet level by a factor (degree) would increase or decrease all the frequencies in that level, including undesirable artifacts such as FFT-related patterns and image noise at the frequencies in question. Attenuation, in our case, narrows the sweet spot per wavelet level to reduce these artifacts.
Note the method used isn’t perfect. Hence, Jacques saying
Even though I haven’t tried the implementation, I can see it is user friendly: the user manipulates two sliders and sees the changes happen in a direct manner.
Thanks for the link to Rawpedia
The debate was long, because we had to make it clear that:
Once this was understood, given the language difficulties, we had to find a term; it was @Wayne_Sutton who proposed “Attenuation response”, which seems to me to capture what it does.
Yes it is approximate, but it works
Thank you again
Jacques
Completing the list of contributors for each feature (from memory and scanning the PRs and issues):
Edit: Added Andy for his video on waveform and vectorscopes
These are the minor comments I have.
Note: Many of these features are available from the Automated Nightly builds
If “these features” refers to the ones highlighted in the post, then all of them are available in the nightly builds.
@Desmis has explained a summary…
Should the @ be a link?
accounting for the light response of properties of film
There’s an extra “of” (the first one).
The spot removal screenshot could use a caption explaining that the bird is being removed.
From (Jacques) @Desmis and Ingo (@heckflosse)…
The parentheses are flip-flopped.
The color-correlation AWB section is very technical. That’s fine because there’s not much to say from a user’s perspective other than “choose the ITCWB option and that’s it”. The potentially confusing part is regarding the 3 phases and the 2 step process. It should be more clear that the numbers are not a mistake because they refer to two different ways of describing the algorithm.
Would it be better if we say what ITCWB is good for and what problem it is trying to overcome?
My final comments are about the camera-based perspective tool.
The focal length and crop factor (and any additional cropping such as a digital zoom) are combined automatically inferred from the image metadata
This makes it sound like any crop is retrieved from the metadata, but in reality, only the focal length and crop factor are used. The user is responsible for accounting for any cropping not already factored into the focal length and crop factor metadata, e.g. by increasing the crop factor and using the horizontal/vertical shift to re-align the image center with the optical center.
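As a small illustration of why only these two values matter, here is a hedged sketch (assumed textbook formulas, not RawTherapee code) of how an effective horizontal field of view follows from the focal length and crop factor, and how an unrecorded 2× digital zoom can be folded into the crop factor by hand:

```python
import math

# Width of a full-frame sensor; the crop factor scales this down.
FULL_FRAME_WIDTH_MM = 36.0

def horizontal_fov_deg(focal_mm, crop_factor):
    """Horizontal field of view from focal length and crop factor
    (standard pinhole formula; an assumption, not RT's internals)."""
    sensor_width = FULL_FRAME_WIDTH_MM / crop_factor
    return math.degrees(2 * math.atan(sensor_width / (2 * focal_mm)))

fov = horizontal_fov_deg(24.0, 1.0)          # 24 mm lens on full frame
# A 2x digital zoom halves the captured width but leaves the metadata
# unchanged, so the user folds it into the crop factor manually:
fov_zoomed = horizontal_fov_deg(24.0, 1.0 * 2.0)
```

Any cropping the metadata does not describe simply makes the effective crop factor larger, which is exactly the manual correction the comment above describes.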
We should elaborate on the other aspects of the tool.
The two images I supplied should have captions. The first demonstrates control lines being drawn to correct the perspective in both directions. The second shows the result of applying the correction with some recovery.
That’s all I have to say for now. Thank you @patdavid for writing this wonderful piece!
I too am in favor of simplifying the presentation of Color-Correlation Automatic White Balance.
I can propose the following text (although I don’t mind the detailed explanation, it is only out of step with the other presentations)
Excuse my bad english:
==========
The “Temperature correlation” algorithm is based on a comparison between the raw data of the image (the dominant colors) and a base of 200 colors, in spectral data, representative of visible (not virtual) colors.
The system proceeds in several iterative steps, using a statistical correlation of the data (Student’s) to:
The system gives good or very good results when the illuminant of the scene at the time of shooting is either “Daylight” (between 4100K and 12000K) or “Blackbody” (between 2000K and 4100K).
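To make the correlation idea concrete, here is a deliberately simplified sketch. Everything in it (the toy two-channel gains, the random “reference colors”, the Pearson scoring, the `gains_for` helper) is a hypothetical stand-in, not RawTherapee’s ITCWB implementation, which works on spectral data with a Student correlation: for each candidate temperature, white-balance the image colors and keep the temperature whose result correlates best with the reference set.

```python
import numpy as np

def best_temperature(image_chroma, reference_chroma, candidates, gains_for):
    """Pick the candidate temperature whose white-balanced image colours
    correlate best with the reference colours (toy sketch only)."""
    best_t, best_score = None, -np.inf
    for t in candidates:
        balanced = image_chroma * gains_for(t)   # apply candidate WB gains
        score = np.corrcoef(balanced.ravel(),
                            reference_chroma.ravel())[0, 1]
        if score > best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(1)
reference = rng.uniform(0.2, 1.0, size=(200, 2))   # toy reference colours
gains_for = lambda t: np.array([5000.0 / t, 1.0])  # toy 2-channel gains
image = reference / gains_for(5000.0)              # scene lit at ~5000 K
t = best_temperature(image, reference, [3000.0, 5000.0, 8000.0], gains_for)
```

At the true temperature the balanced colors line up with the references and the correlation peaks; at other candidates the channel imbalance lowers it.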
Jacques
First, excuse my bad English…
Some news about RawPedia and the HDR-SDR and Cam functions in RawTherapee:
In association with @Wayne_Sutton , we have agreed, to make things clearer and more educational, to:
The whole is clearer (at least from our point of view) and allows a better understanding and easier reading.
Some explanation of these HDR-SDR type functions and their implementation in RawTherapee. Of course, you may or may not take this into account when incorporating it into the changes compared to 5.8:
« Color Appearance & Lighting (Ciecam02/16) » (main)
In summary, this module, which seems complex, is located at the end of the processing pipeline and allows, in particular in “Symmetric” mode, a high-quality “chromatic adaptation” when the shooting conditions and the real illuminant are different.
"HDR to SDR : A first Approach (Log Encoding – Cam16 – JzCzHz - Sigmoid)" (Local Adjustments)
Introduction :
When we look at Darktable, we can see that the software has been entirely rewritten based on three principles and tools: remove the Lab* mode, introduce a ‘Filmic’ module (which is a logarithmic encoding of the data), rework all the colorimetry in RGB mode, and, in a current branch, develop a ‘Sigmoid’ module.
I don’t make any judgement, but I can see that these changes have caused a lot of debate and discussion.
What about RawTherapee?:
Use of the Lab mode: (especially in Local adjustments) :
https://rawpedia.rawtherapee.com/Toolchain_Pipeline#Colorimetry
The “Log Encoding” module :
The « Cam16 » module :
The « JzCzHz » module :
These 2 modules (Cam16 and JzCzHz) were developed at the end of 2021 thanks to the precious help of @Wayne_Sutton and @Jade_NL . Thanks to them.
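For readers unfamiliar with the terminology, here is a generic sketch of the two ideas that the “Log Encoding” and “Sigmoid” names suggest. The formulas and constants are textbook-style assumptions, not RawTherapee’s actual math: log encoding expresses linear scene values as stops (EV) around middle grey and maps a chosen dynamic range to [0, 1]; a sigmoid then rolls off shadows and highlights for SDR display.

```python
import numpy as np

def log_encode(linear, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Map linear values to [0, 1] via stops around middle grey.
    The grey point and EV range are illustrative defaults."""
    ev = np.log2(np.maximum(linear, 1e-9) / grey)   # stops around grey
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)

def sigmoid(x, contrast=8.0):
    """S-shaped display transform; `contrast` is an assumed parameter."""
    return 1.0 / (1.0 + np.exp(-contrast * (x - 0.5)))

scene = np.array([0.0, 0.01, 0.18, 1.0, 8.0])   # linear scene luminances
display = sigmoid(log_encode(scene))            # SDR display values
```

The combination compresses a wide scene dynamic range into the display range while keeping tonal order, which is the essence of any HDR-to-SDR mapping.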
Of course, you can help yourself with the documentation in Rawpedia.
Your opinion is important : what do you think ?
Jacques
So is the release coming or not?
Judging from the stream of posts above your post it seems it’s being prepared.
That was already about 2 weeks ago. I guess @Morgan_Hardwood 's free days are over? And nothing happened?
But I think he is not responsible for releases.