librtprocess - quo vadis

Glenn, when your work on porting AHD to librtprocess is done, just make a pull request.
I will gladly review it then :+1:

1 Like

Okay, I just created #31:

Pull Request: It’s apparently one of those “counting cows” things: “Easy: just count the number of legs and divide by four…” :smile:

I do, pretty much, after last year’s pain in making rawproc easier for folk to compile.

However, I’d still like the choice as the developer. When I do Windows builds, I want to package them cleanly, and it still makes instinctive sense to statically link everything, rather than collect .dlls for the install directory. I use mxe.cc to do my cross-compiling, and I use the static targets for such. Maybe I’m just old and set in my ways… :smile:

Imaging applications collect dependencies; knowing how to do all of it is just too much for one person. Integrating it all requires a certain amount of control to limit the undesired possibilities. libpng is a good place to illustrate that: IMHO, until recent OS versions it was hard to write fully functional libpng code without worrying about the versions available across all the distros. There’s still churn regarding their new eXIf chunk. Lensfun is another one in a state of change; it’s hard to aim function calls at a changing API, and add to that the evolution of the database format.

I’d rather statically link such libraries and avoid having to deal with the large possibility space. For a while, librtprocess will be another such moving target, until the API stabilizes…

I just nuked demosaic_source_folder and created a librtprocess branch in RT.

Dear all, I have just prepared a pull request that contains a small fix in the Amaze demosaicing code to make it compatible with the way PhotoFlow processes the image buffers.

By the way, let me take advantage of this to briefly explain the PhotoFlow requirements regarding the “region of interest” (ROI) processing in librtprocess.

PhotoFlow processes the image in chunks, i.e. small regions (tiles) of more or less arbitrary geometry (they could be square tiles, strips, or scan-lines). When a function processes a tile, it gets pointers to the input and output ROIs. In the general case, the input region is larger than the output one, because the algorithm needs some border pixels.

Let’s use the Amaze demosaicing as an example. Amaze requires a 16-pixel border around each output ROI. This means that the “rawData” buffer corresponds to a region defined as

{left=(winx-16), top=(winy-16), width=(winw+32), height=(winh+32)}

except when the region is close to the edges of the image.
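
As a sketch of that edge handling (illustrative names, not the librtprocess API; the image size is assumed to be imgw × imgh):

#include <algorithm>

struct Roi { int left, top, width, height; };

// Compute the padded input region for an output ROI, clamped to the
// image bounds so the border never reaches outside the image.
Roi padded_input_roi(int winx, int winy, int winw, int winh,
                     int imgw, int imgh, int border = 16)
{
    const int l = std::max(winx - border, 0);
    const int t = std::max(winy - border, 0);
    const int r = std::min(winx + winw + border, imgw);
    const int b = std::min(winy + winh + border, imgh);
    return { l, t, r - l, b - t };
}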

Another crucial point is how the algorithm addresses the pixels. The amaze function loops over the pixels starting from (winx-16, winy-16). This means that the pixels are addressed relative to the origin of the image, not the origin of the ROI. This requires a bit of pointer arithmetic when the input and output buffers do not hold the entire image. For example, the first valid pixel in the input RAW data is accessed with


rawData[winy-16][winx-16]

and this must correspond to the first element of the input buffer. On the other hand, the pixel accessed with


rawData[winy-16][winx-16-1]

corresponds to one element before the beginning of the input buffer, and therefore is not valid.

The client code should take care of properly initialising the input and output float** pointers, but I will gladly provide a helper function if it would be useful.
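
For illustration, here is a minimal sketch of what such a helper could look like, assuming the ROI is far enough from the image edges that winx >= 16 and winy >= 16 (make_roi_rows is a hypothetical name, not part of librtprocess):

#include <vector>

// Build row pointers for a padded ROI buffer so that rows[y][x]
// addresses pixels in *image* coordinates, as amaze expects.
// 'buf' holds (winh+32) rows of (winw+32) floats whose top-left
// pixel sits at (winx-16, winy-16) in the image.
std::vector<float *> make_roi_rows(float *buf, int winx, int winy,
                                   int winw, int winh, int border = 16)
{
    const int roiw = winw + 2 * border;
    const int roih = winh + 2 * border;
    // Entries below index (winy - border) are never dereferenced;
    // they stay null.
    std::vector<float *> rows(winy - border + roih);
    for (int r = 0; r < roih; ++r)
        // Shift each row pointer left by (winx - border); the shifted
        // pointer is only valid to dereference for x >= winx - border.
        rows[winy - border + r] = buf + r * roiw - (winx - border);
    return rows;
}

With this, rows.data() can be passed as the float** rawData argument, and rawData[winy-16][winx-16] then lands exactly on the first element of the buffer, as described above.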

I would bet that every Windows dev would jump on shared dependencies if Windows had some usable package management.

Windows already prefers libraries from the local directory, so does it really make a difference in the end whether you link them statically or copy them into the output directory?

Not really. Just a predilection I need to get over… :smile:

I created a bayerfast branch which adds the fast Bayer demosaicer from RT to librtprocess. This demosaicer is not intended for final output, only for fast previews.

1 Like

I tend to forget the details behind dynamic vs. static libraries, so I do a web search every time to remind myself; e.g., c++ - Difference between static and shared libraries? - Stack Overflow, c++ - Static linking vs dynamic linking - Stack Overflow and their comments. I like this answer; analogy alert! :stuck_out_tongue:

A static library is like a bookstore, and a shared library is like… a library. With the former, you get your own copy of the book/function to take home; with the latter you and everyone else go to the library to use the same book/function. So anyone who wants to use the (shared) library needs to know where it is, because you have to “go get” the book/function. With a static library, the book/function is yours to own, and you keep it within your home/program, and once you have it you don’t care where or when you got it.

Personally, as an end user, I think I prefer the static library. If there are dlls involved, I opt for a portable installation of the app so each app has its own set of dlls.

I wrote a couple to demonstrate in rawproc:

#include <stdlib.h>

/* Allocate an h x w image plane as one contiguous block of floats,
   plus an array of row pointers so it can be indexed as rawdata[y][x]. */
float ** RT_malloc(unsigned w, unsigned h)
{
	float **rawdata = (float **)malloc(h * sizeof(float *));
	rawdata[0] = (float *)malloc(w * h * sizeof(float));
	for (unsigned i = 1; i < h; i++)
		/* each row starts w floats after the previous one */
		rawdata[i] = rawdata[i - 1] + w;
	return rawdata;
}

/* Free the data block first, then the row-pointer array. */
void RT_free(float ** rawdata)
{
	free(rawdata[0]);
	free(rawdata);
}

(well, actually I took @heckflosse’s code and wrapped it in functions) and you can see them in use here:

BTW, librtprocess, with most of the demosaics in use, is now in the rawproc master branch. ’Course, it just occurred to me that still requires manually copying jaggedarray.h to some include/ directory, so I’ll probably push a commit that replaces the jagged arrays with the RT_malloc/RT_free approach…
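
For reference, a minimal usage sketch of the two helpers above (assuming their definitions are in scope):

int main()
{
	unsigned width = 6000, height = 4000;   /* example dimensions */
	float **red = RT_malloc(width, height); /* one image plane */
	red[0][0] = 0.f;                        /* top-left pixel */
	red[height - 1][width - 1] = 1.f;       /* bottom-right pixel */
	RT_free(red);
	return 0;
}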

Is there any interest in getting PPG demosaic into librtprocess? AFAIK it’s the main Bayer demosaicer in DT, and there is also code (though currently unused) in the RT code base which could be used to make a faster version (well, at least faster than the current code in RT; I don’t know about the speed of DT’s PPG demosaic code).

I added a pull request for the README.md file with some usage instructions.

Added a pull request for a white balance operator.

1 Like

To expose my thought process, I’m thinking of a few additions to librtprocess to enable a command-line raw processor. Since I’m still an imaging neophyte, I look to previous work to tell me what’s needed; in that regard, I’ve spent some time poring over @Elle’s annotated dcraw.c page:

https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html

This tells me the following are probably essential operations pre-demosaic:

  1. Bad pixel removal.
  2. Dark frame subtraction.
  3. CA correction.
  4. WB application.

Demosaic then forms a “rubicon” to be crossed, in that it converts the image from the mosaic format to RGB. So, prior to that, one would want the operations that benefit from the “really raw” data and from the float** data structure that helps their performance. Also, keeping the operations discrete lets one consider what each is individually about in how it affects the image.
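
As a concrete illustration of the white balance step, here is a minimal sketch of pre-demosaic WB applied in place on the float** mosaic buffer, assuming an RGGB CFA (hypothetical code, not the librtprocess operator):

#include <cstddef>

// Apply white-balance multipliers in place on a Bayer mosaic held in
// a float** buffer, assuming an RGGB CFA layout. 'mul' holds the
// R, G, B multipliers; each photosite is scaled by the multiplier of
// the channel its CFA position belongs to.
void apply_wb_rggb(float **rawData, std::size_t w, std::size_t h,
                   const float mul[3])
{
    for (std::size_t y = 0; y < h; ++y) {
        for (std::size_t x = 0; x < w; ++x) {
            // RGGB: even rows run R G R G…, odd rows run G B G B…
            const int c = (y % 2 == 0) ? ((x % 2 == 0) ? 0 : 1)
                                       : ((x % 2 == 0) ? 1 : 2);
            rawData[y][x] *= mul[c];
        }
    }
}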

I would build such a command-line tool as a true “toolbox”, where each operation’s application is explicitly defined on the command line, with no defaults. Even dcraw’s defaults drive me nuts; indeed, it takes logical gymnastics to turn some things off in it. In my experience, dcraw has been the place where I learned what raw processing is really about, and I’d like to continue that thinking in a tool that leverages more libraries and provides explicit semantics.

Food for thought/discussion…

It is still early days, but I would like to see full documentation on the various demosaicing algorithms and the other components of librtprocess.

I can write some signature documentation (about the parameters to pass to the functions), but not about how they work. And for the signatures it’s too early, as they will most likely change.

I have a starting implementation of the librtprocess demosaic routines in rawproc and img. Still missing are the X-Trans demosaics and CA_correct; no challenge, I just haven’t gotten to them yet.

Right now, these changes are in the GitHub master branch, not yet in a Wininstaller or AppImage. If you just want to compile the img command-line tool and not mess with wxWidgets, the instructions to do so are in the README at the GitHub repository.

I messed a bit with doing a librtprocess pipeline in rawdata (in another GitHub repository of mine), but that’ll require more thought with respect to command architecture and the associated data transforms for TIFF output…

1 Like

…and now I have xtransfast_demosaic() and markesteijn_demosaic() in both rawproc and img. @Claes, they seem to work well…

Still, there are hard-coded params for the unique things. It’ll take a bit to figure out the UI…

For exercising the librtprocess demosaic algorithms, I have posted a rawproc AppImage, rawproc.conf, and readme.txt at this location:

https://glenn.pulpitrock.net/rawproc/

The AppImage is a full-featured rawproc, compiled as a development snapshot, so keep that in mind when using it. The rawproc.conf is specifically configured for just doing demosaics, but can be amended to taste.

The readme.txt covers what I think one would need to set things up, but one can never fully anticipate what might be needed, so let me know if you have questions.

Please note this particular deployment is targeted at developers; others are more than welcome to try it out, but rawproc is like a manual-transmission Ford Pinto with a missing back bumper: it won’t protect you from bad processing decisions, nor help you much with the specifics of using the tools.

Hope this helps…

Edit: Almost forgot: rawproc has the librtprocess demosaics, but the img command line program doesn’t yet. If someone wants that sooner, let me know.

Edit2: Gee, I just opened img.cpp to add the librtprocess logic, and it’s already there; I forgot I’d already done it. It doesn’t yet include the parameters for the algorithms that need them, so I’ll work on that this weekend…

2 Likes

I’ve added highlight recovery (the inpaint method from RawTherapee) to librtprocess, and it’s doing great things in Filmulator.

Could someone else test it with their applications?

It’s rather simple: you need to pre-apply the white balance multipliers in raw color space, and tell it the maximum actual value of each channel (chmax) and the white clipping value of each channel (clmax).
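
If it helps with testing, here is a sketch of how chmax could be scanned from the WB-applied mosaic, again assuming an RGGB CFA (hypothetical helper, not part of the librtprocess API; clmax is the per-channel white clipping value described above):

#include <algorithm>
#include <cstddef>

// Scan a WB-applied RGGB mosaic for the maximum actual value per
// channel (chmax[0..2] for R, G, B).
void compute_chmax_rggb(float **rawData, std::size_t w, std::size_t h,
                        float chmax[3])
{
    chmax[0] = chmax[1] = chmax[2] = 0.f;
    for (std::size_t y = 0; y < h; ++y) {
        for (std::size_t x = 0; x < w; ++x) {
            const int c = (y % 2 == 0) ? ((x % 2 == 0) ? 0 : 1)
                                       : ((x % 2 == 0) ? 1 : 2);
            chmax[c] = std::max(chmax[c], rawData[y][x]);
        }
    }
}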

It makes the difference between this (camera jpeg):

[image: camera JPEG]

and this (filmulated, with additional white balance adjustments):

[image: filmulated version]

5 Likes