Rawproc: yet another raw processor, aptly named

Perhaps because speeding up things here could break other things somewhere else?
Apart from that, I never had the impression that dcraw was slow, or that RT using adapted dcraw code was slow. Perhaps a developer can speed that up by 100% or 200%, but that will not change the way I use dcraw or RT (okay, I don’t need to batch process 10,000 raws per hour).

dcraw is single-threaded.

For that matter, some algorithms in LibRaw are single-threaded, like LMMSE (annoying, because that’s what I use the most) and highlight recovery (#9 takes FOREVER).

Some time ago Jdesmis made a multi-threaded (OMP) version of LMMSE for RT, and I made a partly vectorized one: Speedup for LMMSE demosaic · Issue #2648 · Beep6581/RawTherapee · GitHub.

Ingo

A few changes:

Master branch:

  • Completed the resize parameter panel; it now lets you do an aspect-anchored resize on either axis and select one of the six interpolation algorithms available in FreeImage
  • A new touchslider that does floating point.
  • An RGB channel mixer for the grey tool. The sliders don’t yet fit in the panel at most window sizes, so there’s work left there. Also, the sliders don’t adjust proportions to 100%, so I’ve got to figure out a scheme for that.
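
For the proportion problem, here’s one possible scheme, just a sketch and not anything rawproc actually does: when one slider moves, rescale the other two so the three weights always sum to 100.

```cpp
// Hypothetical sketch: keep three channel-mix weights summing to 100
// while the user drags one of them. Not rawproc code.
void RebalanceMix(float weights[3], int changed, float newValue)
{
    const float othersOld = 100.0f - weights[changed];  // what the other two used to share
    const float othersNew = 100.0f - newValue;          // what they have to share now
    for (int i = 0; i < 3; ++i) {
        if (i == changed)
            weights[i] = newValue;
        else if (othersOld > 0.0f)
            weights[i] *= othersNew / othersOld;        // scale the others proportionally
        else
            weights[i] = othersNew / 2.0f;              // both others were zero: split evenly
    }
}
```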

convolution_matrix branch:
A lot of my resizing is to produce 640x480 images for web posting. Plain old resizing leaves something to be desired, so I started researching sharpening algorithms. I found some really good references on convolution matrices, which is the mechanism used to implement most sharpening and denoising tools. Typical math, looks daunting in the formulas, but really is simple in concept. So, I added a FreeImage_3x3ConvolutionMatrix function for both 24bit and 48bit images, and put a Sharpen tool in rawproc that applies the following basic sharpen matrix:

0, -1, 0
-1, 5, -1
0, -1, 0

Simply put: loop through the pixels, use the matrix centered on each pixel to multiply that pixel and its neighbors by the associated matrix values, sum up all the products, and that sum becomes the new pixel value. A lot of you already know this, but I’m writing about it because, for my resizing need, this simple manipulation does the trick without messing with radii or thresholds or any other sliders. The github branch has a sharpen tool that simply does this, hardcoded, no sliders. If you want to mess with these matrices and don’t want to mess with rawproc, use G’MIC’s -convolve tool.
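
For anyone who’d rather see the idea in code than prose, here’s a minimal sketch of that loop for a single 8-bit channel. It’s not the actual FreeImage_3x3ConvolutionMatrix implementation, just the concept; clamping to 0..255 and skipping the one-pixel border are my simplifications.

```cpp
#include <algorithm>
#include <vector>

// Minimal 3x3 convolution sketch for one 8-bit channel (not the rawproc code).
// 'src' is width*height pixels, row-major; the border row/column is left untouched.
std::vector<unsigned char> Convolve3x3(const std::vector<unsigned char>& src,
                                       int width, int height,
                                       const float kernel[3][3])
{
    std::vector<unsigned char> dst(src);          // copy, so the border keeps its original values
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)      // walk the 3x3 neighborhood
                for (int kx = -1; kx <= 1; ++kx)
                    sum += kernel[ky + 1][kx + 1] * src[(y + ky) * width + (x + kx)];
            dst[y * width + x] =
                static_cast<unsigned char>(std::clamp(sum, 0.0f, 255.0f));
        }
    }
    return dst;
}

// The basic sharpen kernel from above:
const float sharpen[3][3] = { {  0, -1,  0 },
                              { -1,  5, -1 },
                              {  0, -1,  0 } };
```

Run it separately on each of the R, G and B channels (or per 16-bit channel for 48-bit images) and that’s all a hard-coded sharpen needs to do.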

I am going to try one more thing in this branch, local contrast. That would be: 1) make an edge mask from the original image (there’s a 3x3 matrix for that), 2) blur the mask (yep, another 3x3 matrix for that, too), 3) make a sharpened copy of the image using the matrix above, then 4) use the mask to selectively apply, pixel-by-pixel, either the original pixel or the sharpened pixel to the final image. I’m going to modify FreeImage_3x3ConvolutionMatrix to use a mask image, if supplied; there’s a sketch of that blend step after the matrices below. For reference, here are the other matrices:

blur:
1, 1, 1
1, 1, 1
1, 1, 1

edge detect:
1, 1, 1
1, -4, 1
1, 1, 1

and you’ll find a really good explanation in the GIMP documentation:

https://docs.gimp.org/en/plug-in-convmatrix.html
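
As a rough sketch of step 4 above, assuming the edge mask and the sharpened copy have already been produced with those matrices (and noting that the all-ones blur kernel is normally divided by 9 so it doesn’t brighten the image), the per-pixel apply might look like this. It’s the concept only, not the planned change to FreeImage_3x3ConvolutionMatrix, and the threshold is my own placeholder.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the masked apply step, one 8-bit channel per buffer.
// 'mask' is the blurred edge mask; where it exceeds the threshold, the sharpened
// pixel is used, otherwise the original is kept. (A smoother alternative is to
// use the mask value itself as a 0..1 blend weight.)
void ApplyWithMask(const std::vector<uint8_t>& original,
                   const std::vector<uint8_t>& sharpened,
                   const std::vector<uint8_t>& mask,
                   std::vector<uint8_t>& out,
                   uint8_t threshold = 32)
{
    out.resize(original.size());
    for (size_t i = 0; i < original.size(); ++i)
        out[i] = (mask[i] > threshold) ? sharpened[i] : original[i];
}
```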

I’m writing about this not necessarily to sell rawproc, but to chronicle my recent revelations about how these things work. I went through a similar experience with curve transforms; I really didn’t know how to use the tool effectively until I understood the underlying look-up table. Generally, learning digital photography has been a ‘peel-the-onion’ experience for me; a lot of the literature tells you how to use particular tools, but you really have to dig to find cogent explanations of what goes on ‘under the hood’…
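
To make the look-up table point concrete: a curve over 8-bit data is typically baked into a 256-entry table once, and applying the curve is then just one array lookup per pixel. A minimal sketch, using a stand-in gamma-style curve rather than anything rawproc actually does:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Bake an arbitrary 0..1 -> 0..1 curve into a 256-entry LUT, then apply it.
int main()
{
    uint8_t lut[256];
    for (int i = 0; i < 256; ++i) {
        const double x = i / 255.0;
        const double y = std::pow(x, 1.0 / 2.2);       // placeholder for the drawn curve
        lut[i] = static_cast<uint8_t>(y * 255.0 + 0.5);
    }

    std::vector<uint8_t> pixels = { 0, 64, 128, 255 }; // dummy image data
    for (auto& p : pixels)
        p = lut[p];                                    // the whole "curve transform"
    return 0;
}
```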

Hi! Interesting developments…

I share your surprise at discovering how things work in reality :wink:

What you describe as “local contrast” sounds to me more like “edge-masked” sharpening, since for local contrast one usually chooses blurs more than a few pixels wide.
For the edge detection you have several possibilities, with varying trade-offs between edge sensitivity and the influence of noise on the edge mask. Personally, for edge detection I find the “gradient norm” algorithm in G’MIC really useful.

Now, coming back to local contrast, another way of understanding it is the following:

  1. you blur your original image
  2. you subtract the blurred image from the original one, thus keeping only the “high spatial frequencies”
  3. you add back the high frequency component to the original image

One of the crucial points of local contrast enhancement is the choice of the blur method. @patdavid has a nice and comprehensive series of examples of different blur methods in his blog. Personally I like the results obtained when using the “bilateral filter” in the blurring step…
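
In code, the three steps look roughly like this. It’s a sketch that assumes the blur itself (Gaussian, box, bilateral, whatever) has already been done elsewhere; the “amount” parameter is my own addition to control how strongly the high frequencies are added back.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of blur / subtract / add-back local contrast, one 8-bit channel.
// 'blurred' is the already-blurred copy of 'original' (any blur method).
void LocalContrast(const std::vector<uint8_t>& original,
                   const std::vector<uint8_t>& blurred,
                   std::vector<uint8_t>& out,
                   float amount = 1.0f)
{
    out.resize(original.size());
    for (size_t i = 0; i < original.size(); ++i) {
        const float high = float(original[i]) - float(blurred[i]); // high spatial frequencies
        const float v    = original[i] + amount * high;            // add them back, scaled
        out[i] = static_cast<uint8_t>(std::clamp(v, 0.0f, 255.0f));
    }
}
```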

My 2cts.

Carmelo, thanks for setting me straight; I’m putting sharpening aside for now. I’m getting results in the mask that I don’t understand, and I need to look at the code somewhere other than the tablet I’ve been working on for the past two weeks. (Old eyes and small fonts don’t mix…)

What I have done is work on the UI to get closer to the ‘develop without a mouse’ objective. In particular, I’ve modified the curve tool to the point where you can readily work with it using your finger. You tap on the line to either select a point or make a new one, then you can touch and drag anywhere on the grid to move it. If you don’t like a selected control point, double-tap anywhere in the grid and the point is deleted. The idea is to be able to see the curve move as you drag your finger, instead of having to directly touch the point.
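
To make that interaction concrete, here’s a simplified sketch of the select/drag/delete logic in plain C++, with no wxWidgets plumbing; it’s not the actual rawproc curve code, just the behavior described above.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified sketch of finger-driven curve editing (not the rawproc implementation).
struct Point { float x, y; };

struct CurveEditor {
    std::vector<Point> points;   // control points in grid coordinates
    int selected = -1;           // index of the selected point, -1 = none

    // Tap: select the nearest existing point if it's close enough, else add one here.
    void OnTap(float x, float y, float hitRadius = 10.0f) {
        selected = -1;
        float best = hitRadius;
        for (size_t i = 0; i < points.size(); ++i) {
            const float d = std::hypot(points[i].x - x, points[i].y - y);
            if (d < best) { best = d; selected = int(i); }
        }
        if (selected == -1) {
            points.push_back({x, y});
            selected = int(points.size()) - 1;
        }
    }

    // Drag anywhere on the grid: the selected point follows the finger,
    // so the finger never has to sit on top of the point itself.
    void OnDrag(float x, float y) {
        if (selected >= 0) points[size_t(selected)] = {x, y};
    }

    // Double-tap anywhere: delete the selected point.
    void OnDoubleTap() {
        if (selected >= 0) {
            points.erase(points.begin() + selected);
            selected = -1;
        }
    }
};
```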

My slider also has a change in that regard; it swaps the label and value when you’re changing it, so the changing value isn’t covered by your finger. The sliders also work like the curve, in that you can touch and drag anywhere in the slider window to move them, but that isn’t effective right now because I’ve made the sliders too narrow. My last major UI work will be to make a toolbar to contain the manipulation tools and selected file operations, and at that point I think I’ll have a raw developer that I can use on a tablet computer without extra peripherals. Oh, and the gray tool sliders don’t behave well right now; that needs some contemplation of a proportion scheme.

wxWidgets doesn’t support the touch/gesture events of any platform, so it’s rather tough to get finger-based interaction to work well. But it’s been an interesting thought exercise, adapting photo manipulation tools to what I’ll call ‘finger painting’…

Edit: Oh, all of the stuff described above is in the master branch on GitHub.