Is my computer really this slow?

Maybe you could try the octagonal dilate/erode. It's between square and circular!

Hello hello,

David told me that my little DCP dehaze filter triggered a passionate discussion on CImg optimization. What I can say is that the library has been optimized for years, and David has always been very keen and very responsive about integrating patches. Compilers are also quite good nowadays at auto-vectorization and approximate math, so hand optimization is often a rather disappointing job to do.

The DCP dehaze filter was a quick draft to see how this works, and it is based on the following paper:
K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, Dec. 2011.

It makes a “funny” assumption that most natural images have a “dark channel”, and uses it to recover a haze-free image from an estimated transmission map. So no, it is not a Retinex-like approach.
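To make the assumption concrete: in a haze-free outdoor image, the minimum over the colour channels and over a small neighbourhood around each pixel (the “dark channel”) tends to be close to zero. Here is a minimal sketch of that computation, assuming a planar RGB buffer with values in [0,1] (illustrative only, not the filter's actual code):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Dark channel: per-pixel minimum over the 3 channels and a (2*r+1)x(2*r+1) patch.
std::vector<float> dark_channel(const float *img, int width, int height, int r) {
  std::vector<float> dark(width * height);
  for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x) {
      float m = 1.f; // values assumed in [0,1]
      for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
          const int xx = std::min(std::max(x + dx, 0), width - 1);  // clamp to image borders
          const int yy = std::min(std::max(y + dy, 0), height - 1);
          for (int c = 0; c < 3; ++c)
            m = std::min(m, img[(c * height + yy) * width + xx]);
        }
      dark[y * width + x] = m;
    }
  return dark;
}

int main() {
  // Tiny 2x2 synthetic image, stored as R, G, B planes.
  const float img[3 * 2 * 2] = { 0.9f, 0.8f, 0.7f, 0.6f,   // R
                                 0.5f, 0.4f, 0.3f, 0.2f,   // G
                                 0.1f, 0.2f, 0.3f, 0.4f }; // B
  std::vector<float> d = dark_channel(img, 2, 2, 1);
  std::printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]); // all 0.1: radius 1 covers the whole image
}
```

Roughly, the paper then estimates the transmission from this dark channel (transmission ≈ 1 − ω × dark channel of the image normalized by the atmospheric light), which is what the filter uses to recover the haze-free image.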

It is very possible that the code of the filter itself is responsible for the lack of performance. If you feel it is necessary, we can spend a bit of time on it.

Jerome

PS: David, you got me!

@KaRo
Sadly the octagonal versions use the same method as circ: a mask supplied for each patch. I did think about a custom circle erode based on Bresenham to calculate endpoints for each row of the patch (you just keep adding a simple differential), but it’s debatable whether that would end up faster than a mask - CPUs tend to like a nice buffer to loop over. Certainly there’s no way it would be faster in the G’MIC math processor, because -erode with a mask is a native command.
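For the curious, here is a minimal sketch of the per-row endpoint idea for a disc structuring element, computed incrementally without a sqrt (illustrative only, not G’MIC/CImg code); an erode would then take the min over [x-hw, x+hw] on each row:

```cpp
#include <cstdio>
#include <vector>

// Half-width of a disc of given radius for each row offset dy in [-radius, radius].
// x only ever decreases as |dy| grows, so the inner while loop is O(radius) overall.
std::vector<int> disc_row_halfwidths(int radius) {
  std::vector<int> hw(2 * radius + 1);
  int x = radius;                                             // current half-width
  for (int dy = 0; dy <= radius; ++dy) {
    while (x > 0 && x * x + dy * dy > radius * radius) --x;   // step inward until on/inside the circle
    hw[radius + dy] = hw[radius - dy] = x;                    // symmetric about the centre row
  }
  return hw;
}

int main() {
  for (int w : disc_row_halfwidths(4)) std::printf("%d ", w); // prints: 0 2 3 3 4 3 3 2 0
  std::printf("\n");
}
```

Whether walking these endpoints actually beats a plain mask is exactly the cache/branching trade-off mentioned above.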

@Jerome_Boulanger
Nice to see you on here :slight_smile:
Indeed nearly every time I’ve thought something about G’MIC core or CImg can be made faster, I eventually realise it can’t :smiley:

Here is a Chinese blog with information about the “dark channel prior”; it also points to the same paper Jérôme Boulanger mentioned above.

(Google translate or something else)

Some people here might find this interesting?

Hi again,

Just wanted to add that the implementation is only loosely based on the article, since I didn’t use the soft matting step; I replaced it with a simple median filter.
S. Lee, S. Yun, J.-H. Nam, C. S. Won, and S.-W. Jung, “A review on dark channel prior based image dehazing algorithms,” EURASIP Journal on Image and Video Processing, vol. 2016, no. 1, Dec. 2016.

Other, more recent approaches are also available and rely on assumptions other than the DCP.

Jerome

Ok, here’s a first quick and dirty patch. I just copied over some code from RT for the median of 9 values and used it in blur_median.
I benchmarked it using gmic image.jpg -tic -median 3 -toc -q where image.jpg is a 36 MP file.
Processing time on a 4-core machine (median of 7 runs):

before patch: 894 ms
after patch: 237 ms
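For readers who have not seen the trick: a median of 9 values can be computed with a fixed min/max sorting network instead of a general sort. Here is a minimal sketch of that idea using std::min/std::max (illustrative only, not the actual patch code):

```cpp
#include <algorithm>
#include <cstdio>

// Order a <= b using min/max (branch-free on most targets).
template <typename T>
inline void sort2(T &a, T &b) {
  const T lo = std::min(a, b), hi = std::max(a, b);
  a = lo; b = hi;
}

// Classic 19-exchange network: after these swaps, p4 holds the median of the 9 inputs.
template <typename T>
T median9(T p0, T p1, T p2, T p3, T p4, T p5, T p6, T p7, T p8) {
  sort2(p1, p2); sort2(p4, p5); sort2(p7, p8);
  sort2(p0, p1); sort2(p3, p4); sort2(p6, p7);
  sort2(p1, p2); sort2(p4, p5); sort2(p7, p8);
  sort2(p0, p3); sort2(p5, p8); sort2(p4, p7);
  sort2(p3, p6); sort2(p1, p4); sort2(p2, p5);
  sort2(p4, p7); sort2(p4, p2); sort2(p6, p4);
  sort2(p4, p2);
  return p4;
}

int main() {
  std::printf("%d\n", median9(3, 7, 1, 9, 5, 2, 8, 4, 6)); // prints 5
}
```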

Edit: Here is a patch which also includes median of 25 values.

I benchmarked it using gmic image.jpg -tic -median 5 -toc -q where image.jpg is a 36 MP file.
Processing time on a 4-core machine (median of 7 runs):

before patch: 7479 ms
after patch: 1330 ms


Just a note: Though the code is C++11, it can be easily rewritten in C++98 with a slightly different interface. There is no C++11 magic about it.


Ok, I’m currently looking at your patch, which makes things faster indeed.
The surprise is: the cause of the speed gain is not the algorithm itself, but mainly the use of std::min() and std::max() instead of my ‘own’ min() and max() functions. It looks like the compiler uses hard-coded functions for computing the min() and max() of two float values. If I use my own min/max functions in your fastmedian() function, I get very similar results to my previous code. So I’m currently patching my min()/max() functions to make them use std::min/max() when possible. I'm not sure how I can enable this for C++98 users, by the way.
I’ll let you know when this is ready.
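To illustrate the idea (hypothetical names, not the actual CImg code): a generic template min() next to a float specialization that forwards to std::min, so the compiler can emit the hardware min instruction instead of a branch:

```cpp
#include <algorithm>
#include <cstdio>

namespace cimg_sketch {
  // Generic fallback, like a basic template implementation of min().
  template <typename T>
  inline T min(const T &a, const T &b) { return a <= b ? a : b; }

  // Full specialization for float: forward to std::min so the compiler can
  // lower it to a single instruction (e.g. minss on x86) instead of a branch.
  template <>
  inline float min(const float &a, const float &b) { return std::min(a, b); }
}

int main() {
  std::printf("%g\n", cimg_sketch::min(3.5f, 2.0f)); // prints 2
}
```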

I guess the compiler can’t vectorize your ‘own’ min and max functions, but it can vectorize std::min() and std::max(), at least for float values.

Instead of passing an std::array<> pass the parameters by value or have templated functions like fastmedian9(T*) where the argument is a pointer to nine T’s (reminds me of the 90s - the programming style as well :grin:).

That is what I’ve done.
Anyway, it seems the optimization flags are not optimal. I compile G’MIC with -O3 -mtune=generic, and in this case, my min/max() functions are slower. If I use -Ofast, they become equivalent to std::min/max() (I’ve looked at the assembly code generated, to compare the two versions).

I don’t see any problem with that coding style. Most of the best coders started coding in the ’90s :slight_smile:
There are so many people advocating for fancy and “modern” syntax who do not realize that the assembly code generated by the compiler is the same in the end (or sometimes even worse).
No need to be pedantic with good old programmers.

I read it as a nostalgic comment rather than a criticism.
Anyway, I’m excited about any speedup, regardless of how it’s done :wink:

FYI: I compiled G’MIC using make cli. I didn’t check which settings are the default.

Yeah, sure. I also started programming at the beginning of the 90s. Those modern tools and syntax changes make the code more stringent and add new power (especially for an old language like C++), and I expect them to yield the same optimal output.

That wasn’t meant as a jab. I admire your work, and backwards compatibility (language-wise) sure is an obstacle.

I’m not seeing a problem with the coding style either. I was more focused on the programming style (T* as argument), where you have to tell the user how many Ts you expect rather than telling the compiler.

OK, so to sum up:
The G’MIC Makefile has been using the optimization flags -O3 -mtune=generic, which makes std::min/max() run faster than cimg::min/max() (which are basic template implementations of the min/max functions). Now that I use -Ofast for the optimization flags, and with a simple template specialization of the cimg::min/max() functions, the processing time is comparable to that obtained with std::min/max().
I’ll also add the code for the 5x5 median filter.
Thanks for your patch @heckflosse.


Actually, I was just having fun with the pun. :slight_smile: I deliberately didn’t mention the 25-parameter variant.

In case you are interested: median.h includes code for 7x7 and 9x9 too.

Sorry for over-reacting, but I’ve already faced a lot of situations where people (usually students from engineering schools :slight_smile: ) give a lot of advice and recommendations about how things must be done correctly. Most of the time, it turns out they know little about actual programming. Over time, I’ve learned to be wary of allusions to the proper way to program.

Isn’t that a bit risky? -Ofast enables -ffast-math, which enables -ffinite-math-only, which can be problematic when handling NaNs or Infs…
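To make the concern concrete, here is a small sketch of how -ffinite-math-only can bite (the exact behaviour depends on the compiler and version; the check may be folded away because the compiler assumes NaNs cannot occur):

```cpp
// Compile once with -O2 and once with -Ofast, then compare the output.
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
  volatile float x = std::numeric_limits<float>::quiet_NaN();
  if (std::isnan(x)) std::printf("NaN detected\n");
  else               std::printf("looks finite\n"); // possible (wrong) result under -ffinite-math-only
}
```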