Developing afre_dehaze

@afre

Post processing is an interesting idea. Since you have the depth mask inside the plugin, you can apply brightness and color shifts modulated by the mask.
Regarding color shift, the main problem is the blue cast caused by Rayleigh scattering (though there are other casts, due to pollution for instance). These casts increase with the distance of the viewed object.
If such processing isn't included in the plugin, a way to export the mask would be valuable.

You might be interested in some of the images at https://keybase.pub/gaaned92/haze/
Regarding the blue cast, DSC_3445.NEF could interest you.

You will find the plugin in the same location.


Hello @gaaned92

Since we are talking about dehaze options…

By any chance, do you know of any good tutorials on RawTherapee and its Retinex tool [1]?

From what I have gathered, it should be another option for dehazing your pictures…
I have looked on YouTube but I am unable to find anything regarding the Retinex tool. The only thing I have read from other users is that this tool is supposed to be difficult to use… :slight_smile:
Here is a discussion on this Retinex workflow:

[1] Retinex - RawPedia

In any operation, I find that pre and post processing are important. Finding the right balance isn’t easy. I don’t want to decide for the user; but at the same time, I don’t want to alienate them.

Yes, that would be a good thing to remember. I am wondering what you had in mind specifically; I would like to consider it.

It covers colour cast and depth. It becomes a problem when it encounters situations where the assumptions no longer hold; e.g., multiple casts or transmission and depth discontinuities. My re-investigation of transmission has to do with the latter.

Thanks. I am already using a good data set for testing. More won’t hurt.

To my understanding, RT has a few implementations. The one described in RawPedia is @jdc’s, which is way too advanced for me. I am too scared to try it. In fact, I don’t really use any dehaze tools in my workflow. He is a vet though. :stuck_out_tongue:

Another one is by @agriggio, which is based on the same original He et al. paper I am basing my filter on.

Just saw your edit. Retinex and dehaze are indeed related. Recently, there has been research combining the two, which is interesting but beyond my understanding. Anyway, these topics are still being studied because, although we have made much progress, we are still not doing that well. We, as in, scientists, not me. :stuck_out_tongue:

Another really hazy file to test:

http://rawtherapee.com/shared/test_images/dehaze2.nef

Edit: left without RT dehaze, right with RT dehaze at max. settings. Please ignore the dust spots :wink:

Edit: what looks like dust spots wasn't dust. They were water drops on the lens, caused by a really heavy Irish mist.


I am sorry to say no :anguished: Furthermore, I have never succeeded in correctly dehazing a photo without strong artifacts. The depth mask is not correct, and even the foreground gets unnecessary correction.
In RT I use only the dehaze tool.

The Retinex tool is not limited to “Dehaze”; it is a complex tool which, above all, allows increasing local contrast… (the algorithms are close).

Taken alone to treat mist, it is generally inferior to “Dehaze”.

But it has the advantage of being able to separate the foreground from the background, notably by acting on “radius” and “contrast”.

Retinex in the main RawTherapee menu is already old and does not have the improvements of the “Local adjustments” version.

In “Local adjustments”, the algorithm combines “dehaze” and “retinex”. Other improvements have been made that are not all used to treat haze (masks, guided filter, …).

Here are three steps of processing a very foggy image. I am not saying it works in all cases…

Without processing

Dehaze only

Dehaze RT + Local adjustments

But nothing is perfect, everything is a matter of compromise and taste.

@gaaned92 I ran afre_dehaze (the same version that I first demonstrated) on TORINO and, to my surprise, it yielded almost exactly the same result as your sample (PSNR 53.4567).

I will apply it on the rest later.

PS The output above for torino was an anomaly. I cannot reproduce the result.


Here is an attempt on DSC_3445.NEF. I used the preview image instead of the raw data. The algorithm isn’t as aggressive as @gaaned92’s plugin, which I have yet to try. There is still some haze present and as he has noted the blue is prominent. Rayleigh scattering (and underwater and low-light imagery) is another topic that I will eventually study, provided that the math isn’t too complex. :dizzy_face:

This is done using an updated afre_dehaze:
1 Mitigated the darkening, though it is a WIP.
2 Fixed the atmospheric light code, which will improve the haze and colour correction.

There is also the problem of haloing that I have yet to improve on. This innocent exercise turned project is becoming unwieldy. I have a lot more to study and work on than what I have discussed here. Coming soon, my foot!


And here are the retakes of the previous two samples.


Update (More of a discussion, really.)

Transmission

Been working on morphology as it relates to transmission. The challenge of any dehaze tool is to generate a transmission (or depth) map that accurately removes the right amount of haze in the right places. This isn’t easy to achieve even now.
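For context, all of this is in service of inverting the standard haze imaging model used by He et al. and most work since:

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
```

where \(I\) is the observed image, \(J\) the haze-free scene radiance, \(A\) the atmospheric light and \(t \in [0,1]\) the transmission. Once \(t\) and \(A\) are estimated, the scene is recovered as \(J(x) = (I(x) - A)/\max(t(x), t_0) + A\), with a small floor \(t_0\) to avoid amplifying noise where the haze is dense.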

I discovered that reading other papers didn't help me all that much and that it was better to stick with my own R&D. The main issues that I have been trying to overcome are haloing and the loss of fine detail. The original He paper paired simplicity with efficiency, but it lacks accuracy; I find that most papers since then share that shortcoming to a certain extent.

Since the writing of the original copy of afre_dehaze, I have been investigating ways to come up with a more accurate transmission map. The original had a halo and edge issue that stems from the guided filter required to refine the edges of the crude morphology. Earlier editions of the paper demonstrated the use of a Laplacian matrix to optimize the map, but it is too complex in terms of math and code for me to unravel. The Laplacian soft matting method is also too operation-intensive to be of any practical use for the filter.

Haloing and edging are not the only issues. There is also the question of detail and differentiation. A simple example where haloing, edging, detail and differentiation are all at play is the bicycle. A bicycle has spokes and sometimes a mesh basket. The original method is unable to dehaze the areas between the metal strands as it does for other background areas. Another example would be tree canopies, where even more types of light-related problems exist, many of which are outside of the scope of this filter.
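For anyone following along, here is a minimal sketch of the dark channel prior that the He et al. transmission estimate is built on. This is a generic NumPy illustration, not afre_dehaze's actual code; the patch radius, `omega` and the brute-force patch minimum are just for demonstration (real implementations use an efficient erosion plus guided-filter refinement, which is exactly where the halo issue above comes from).

```python
import numpy as np

def dark_channel(img, radius=2):
    """Per-pixel minimum over RGB, then a (2r+1)x(2r+1) spatial minimum
    (a grayscale erosion with a flat square structuring element)."""
    mins = img.min(axis=2)                      # min over colour channels
    h, w = mins.shape
    padded = np.pad(mins, radius, mode="edge")
    out = np.full((h, w), np.inf)
    for dy in range(2 * radius + 1):            # min over the square patch
        for dx in range(2 * radius + 1):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def estimate_transmission(img, A, omega=0.95, radius=2):
    """He et al. style estimate: t = 1 - omega * dark_channel(I / A).
    Haze-free patches have a near-zero dark channel, so t stays near 1;
    hazy (whitish) patches push t down toward 1 - omega."""
    return 1.0 - omega * dark_channel(img / A, radius)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (32, 32, 3))        # stand-in for a photo in [0, 1]
A = np.array([0.9, 0.9, 0.95])                  # hypothetical atmospheric light
t = estimate_transmission(img, A)
```

The crude map `t` is exactly what then needs edge refinement (guided filter or Laplacian matting), which is the step under discussion here.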

More control over atmospheric light

The #2 problem that I am dealing with is the decoupling of atmospheric light estimation from transmission estimation. Both are interrelated and work in tandem to remove the haze. However, this presents a trade-off that is undesirable and unfriendly to the end user of the filter. You can read about the trade-offs of patch radius in post #12.

I don’t think I will be able to separate the two completely because I found that any inefficiency in the dehaze method usually results in the failure to dehaze effectively. You can see that in the torino.jpg example in the previous post. Even a couple of changes in the code and method can drastically and irreversibly (because I only keep a few copies of my older code; that is probably why I should do versioning :blush:) change how effective it is.
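To make the coupling concrete: the common He et al. recipe estimates atmospheric light from the dark channel itself, which is why the two estimates are hard to separate. A minimal NumPy sketch (illustrative only; the `fraction` parameter and the simplified per-pixel dark channel are my assumptions, not afre_dehaze's code):

```python
import numpy as np

def estimate_atmospheric_light(img, dark, fraction=0.001):
    """Among the brightest `fraction` of dark-channel pixels (the haziest
    regions), return the colour of the brightest pixel in the input image."""
    h, w, _ = img.shape
    n = max(1, int(h * w * fraction))
    candidates = np.argsort(dark.reshape(-1))[-n:]   # haziest pixel indices
    flat_img = img.reshape(-1, 3)
    brightness = flat_img[candidates].sum(axis=1)
    return flat_img[candidates[np.argmax(brightness)]]

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 0.5, (32, 32, 3))        # dim scene content
img[:4, :4] = [0.9, 0.92, 0.95]                 # a bright hazy sky patch
dark = img.min(axis=2)                          # simplified dark channel (no patch min)
A = estimate_atmospheric_light(img, dark)       # picks the sky colour
```

Since `A` feeds back into the transmission estimate (`t = 1 - omega * dark_channel(I / A)`), any change to one estimate shifts the other, which is the trade-off described above.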

Other remarks

Anyway, in another life, I would have been a paid scientist, engineer and / or prof of some sort, but life sucks big time. However, at least I can have my fun on this forum. Keep sending more samples and thoughts my way. I love wholesome feedback and banter. :wink:


I vaguely remember the halo was worse in DCP when using a “smooth” erode compared to a binary one; plus, structured morphology can be a lot slower… although I read some things about splitting the structuring element into smaller kernels, it seems difficult.

When I “go dark” that’s often why (apart from work)… need to learn a branch of maths to understand something! A down side of tackling increasingly complex filters I suppose :wink:

Quickly tested separable grayscale morphology, and while it doesn’t give the same exact results, it creates really close effects (up to a normalization factor).

foo :
  33,33 gaussian. 25% *. 255
  +rows. 50% +columns.. 50%

  sp tiger

  tic +dilate. [0],1,1 toc # 2D grayscale dilation
  tic +dilate.. [1],1,1 dilate. [2],1,1 toc # 2x 1D grayscale dilations
  k[-2,-1] n 0,255

gives this (left: 2D dilation, right: separable 2x 1D dilations):

Is this close enough?
By the way, the timings are interesting:

  • 2D dilation : 0.312 s
  • 2x 1D dilations: 0.067 s
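For the simpler case of a flat square structuring element, the decomposition is exact rather than merely close: a 2D grayscale dilation equals a 1D dilation along the rows followed by one along the columns, turning O(k²) work per pixel into O(k). A NumPy sketch of the same trick (assumed names, not G'MIC internals):

```python
import numpy as np

def dilate_1d(img, size, axis):
    """Grayscale dilation along one axis with a flat window of `size` pixels."""
    r = size // 2
    pad = [(0, 0), (0, 0)]
    pad[axis] = (r, r)
    padded = np.pad(img, pad, mode="edge")
    out = np.full_like(img, -np.inf)
    for k in range(size):                       # running max over the window
        sl = [slice(None), slice(None)]
        sl[axis] = slice(k, k + img.shape[axis])
        out = np.maximum(out, padded[tuple(sl)])
    return out

def dilate_2d(img, size):
    """Dilation with a flat size x size square: two separable 1D passes,
    because max over a square = max over rows, then max over columns."""
    return dilate_1d(dilate_1d(img, size, axis=0), size, axis=1)
```

The non-flat (e.g. Gaussian) structuring element in the G'MIC snippet above is why the separable result there is only close, not identical.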

Nice example, that makes it much easier to understand! Time to do some experiments :slight_smile:

Edit: seems good to me; the time difference is pretty amazing too (the separable version takes about 16% of the time on my machine).

Thanks! Should be close enough for the filter. If only shopping discounts were so deep! I imagined such a solution but as usual didn’t have the coding knowledge to realize it.

Is it coming?

Coming Soon™ :stuck_out_tongue:

I don’t know when it will come. Frankly, I won’t release it even for Testing if I am not satisfied with it. Also, it isn’t the only tool in town; so no one is going to need it.

Been reconsidering my code, the algorithm, what I am doing to extend or outgrow it, and the math and programming involved. The meaning of life and so forth. :unicorn:


I skimmed some papers on Rayleigh scattering. It looks like removal:

1 Would be a pre-processing step.
2 Techniques tend to use NIR, which isn’t available in conventional photography.

@afre
I was not thinking about complex scientific processing, just a simple way to partially correct some effects of scattering; see Aerial perspective - Wikipedia.
The correction does not have to be complete, only partial, to keep a realistic look, and furthermore it must be modulated by the depth mask.
For the colour shift, we could for instance change the white balance and apply it with the depth mask (I don't know if it is the best way).
For contrast, an increase in local contrast would be welcome.
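One simple way to realize "apply with the depth mask" is to blend a globally white-balanced image back into the original, weighted per pixel by the mask. A minimal sketch, assuming a depth mask normalized so 0 = near and 1 = far, and illustrative gain values (not a claim about the best correction):

```python
import numpy as np

def depth_modulated_wb(img, depth, gains):
    """Blend a globally white-balanced copy into the original,
    weighted per pixel by the depth mask (0 = near, 1 = far)."""
    corrected = img * np.asarray(gains)          # global WB, e.g. damping blue
    m = depth[..., None]                         # broadcast mask over channels
    return (1.0 - m) * img + m * corrected

img = np.full((4, 4, 3), 0.5)                    # flat grey test image
depth = np.zeros((4, 4))
depth[2:] = 1.0                                  # bottom half is "far"
out = depth_modulated_wb(img, depth, [1.0, 1.0, 0.8])
```

Near pixels come back untouched, far pixels get the full blue reduction, and intermediate depths get a proportional shift, which keeps the partial, realistic look asked for above.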

Note: I don’t want to correct the scattering effect of sunset (orange shift) :roll_eyes:

I get what you mean. The thing is that haze removal is supposed to correct this. However, as you said, it needs depth-aware pre-processing help. We are still in the early days of simple but effective dehazing, without getting into machine learning (e.g., GANs).

I wasn’t clear above: I hope to add an additional colour correction step with depth awareness, but I am far from that step. ATM, development has stalled because I still have much to learn. I don’t have the math and coding skills to pull off this project yet. :blush: :stuck_out_tongue:
