Why RAWs instead of, say, TIFFs?

The github wiki has a somewhat dated page on compiling rawproc:

I made this back in the day when Ubuntu’s packages weren’t at sufficient versions to support rawproc; with 19.04 you can apt-get all the dependencies.

Well, I haven’t tested wxWidgets that way. I like to statically link it anyway, as most folk would have to install wxWidgets just for rawproc. So, what I generally do for both linux and msys2:

  1. Get and compile wxWidgets. Here’s my configure (in a separate build directory):
    $ ../configure --enable-unicode --disable-shared --disable-debug
  2. Get, compile, and install librtprocess (not yet a package in any distro)
  3. Install the other dependencies (libjpeg, libtiff, libpng, liblcms2, libraw, lensfun)
  4. From there, do the rawproc compile as described in the README
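Put together, the steps above might look something like this on a Debian/Ubuntu box. Treat it as a sketch: the dev package names are my best guess for recent releases, and the librtprocess repository URL is the CarVac GitHub one; adjust for your distro.

```shell
# 1. Build static wxWidgets out-of-tree (separate build directory):
cd wxWidgets && mkdir -p build-static && cd build-static
../configure --enable-unicode --disable-shared --disable-debug
make -j"$(nproc)"

# 2. Get, compile, and install librtprocess:
git clone https://github.com/CarVac/librtprocess
cd librtprocess && mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && make -j"$(nproc)" && sudo make install

# 3. Install the other dependencies (names assumed; check with apt-cache search):
sudo apt-get install libjpeg-dev libtiff-dev libpng-dev \
                     liblcms2-dev libraw-dev liblensfun-dev

# 4. Then compile rawproc as described in the README.
```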

There are a few different angles here:

Why do camera manufacturers deliver RAW files instead of demosaiced TIFFs?

Because your computer can run more powerful algorithms on the raw (especially denoising and demosaic) than the camera can. I guess this gap has become a bit smaller, but it is very likely still relevant.

Also, if I’m not completely mistaken, white balance is usually performed before steps like demosaic and denoise. This makes intuitive sense as well: changing the white balance might actually reveal edges which were not visible before.

If you just want slight tweaks and the dynamic range was already compressed in camera, you might actually get away with editing JPEGs, especially if they were captured with a high-resolution camera and you’re targeting ‘web’ resolutions.

But I usually struggle to recreate colors and sharpness as pleasing as the camera’s JPEGs.

From my experience this also depends a lot on the camera. My D810 seems to be nice and linear; just using the standard color matrix gives me usable colors. My A6000 is the opposite: the colors out of the camera are usually off, and it requires a color profile and/or fidgeting to become usable (and is still far from perfect). The out-of-camera JPEGs of the Sony look fine, if not entirely realistic.

In the end I’m 100% with you: getting decent high(ish)-bit-depth out-of-camera images would be wonderful, especially for quick snapshots where nothing more than a quick edit is desired.


darktable does the same if you enable the option to export processing history to JPG (it ends up in the XMP.darktable.history tag)
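If you want to check whether a given export carries it, exiftool can dump the XMP block; the grep is just a quick way to spot darktable’s history entry (the exact tag group naming may vary by exiftool version):

```shell
# Dump all XMP tags with group names and look for darktable's history entry
exiftool -XMP:All -a -G1 exported.jpg | grep -i darktable
```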


Oh, 32-bit… it’s easily compiled/built, but the real problem with 32-bit is the 4GB memory space limit. rawproc makes a full-sized copy of the image for each added tool (unless you group tools…), and you can run out of memory really quick if you only have 4GB addressable…
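A rough back-of-the-envelope shows why, assuming a hypothetical 24 MP image held as 3 channels of 32-bit float (rawproc’s actual internal format may differ):

```shell
mp=24000000                           # 24 megapixels
copy_bytes=$((mp * 3 * 4))            # 3 channels x 4 bytes (float) per pixel
echo "per-tool copy: $((copy_bytes / 1024 / 1024)) MiB"          # 274 MiB
echo "tools before 4 GiB is gone: $((4 * 1024**3 / copy_bytes))" # 14
```

So under these assumptions, a bit over a dozen full-sized copies exhausts the 32-bit address space, before counting the program itself.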

Why would you still run 32bit windows in 2020?

Pretty close to rawproc, except it’s the history stack, not the applied toolchain. It probably could be made equivalent if the history stack is compressed, but the module order looks to still be defined elsewhere…

Yes. I guess that it will be addressed once the module order presets are implemented.

That is a brilliant idea! I will do that tonight!

No idea; I thought that’s what I read in the README notes, which is why I asked the question.


Thanks Glenn, my question was not worded the best… I did mean, in general, were there instructions to build for Windows… I was not actually looking to do a 32-bit build… thanks for responding…


The first post will do, then. For msys2, just pacman -S each package…
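For example, that might look like the following under an MSYS2 MinGW 64-bit shell. The package names are my guess at the current mingw-w64 naming; verify with `pacman -Ss <name>` before relying on them:

```shell
# Toolchain plus rawproc's library dependencies (names assumed; verify with pacman -Ss)
pacman -S mingw-w64-x86_64-toolchain \
          mingw-w64-x86_64-libjpeg-turbo \
          mingw-w64-x86_64-libtiff \
          mingw-w64-x86_64-libpng \
          mingw-w64-x86_64-lcms2 \
          mingw-w64-x86_64-libraw \
          mingw-w64-x86_64-lensfun
```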

Great thanks will try to find some time to give it a whirl…

Btw, the most recent DNG spec, 1.5, also allows for the storage of an “enhanced” image in high precision, which is at least demosaiced by definition, and possibly denoised and sharpened. However, it should stay in the camera color space, so quite a bit of camera-specific processing is still needed.

Unfortunately, it seems there is no requirement to disclose and record what exactly was done to the raw pixels in order to produce that enhanced image.

This is the closest thing I can find in any metadata spec to store such:

Exiv2 - Image metadata library and tools, scroll down to xmpMM:History. I think I read somewhere recently that darktable can optionally store the history stack here.

All the information gathered by the sensor (Bayer/X-Trans; I’m not speaking of 3-channel Foveon) is available as one color value per sensel (usually 10 to 16 bits, depending on the camera, often even losslessly compressed). You can go from that data to demosaiced data, but you cannot easily go back. That means by storing demosaiced 16-bit TIFF data instead of the raw data, you lose valuable information, increase the size of the file, and also increase in-camera processing time (reducing burst speed).
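To put rough numbers on the size difference, here’s a sketch for a hypothetical 24 MP sensor with 14-bit raw values, ignoring compression and metadata:

```shell
mp=24000000
raw_bytes=$((mp * 14 / 8))            # one 14-bit value per sensel
tiff_bytes=$((mp * 3 * 16 / 8))       # three 16-bit values per pixel after demosaic
echo "raw:  $((raw_bytes / 1024 / 1024)) MiB"    # 40 MiB
echo "tiff: $((tiff_bytes / 1024 / 1024)) MiB"   # 137 MiB
```

Even before compression, the demosaiced TIFF is well over three times the size, while carrying no information that wasn’t derivable from the raw data.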

Yes, I don’t see any need for such in a raw file, if one supposes that the raw data comes unmodified. In most raws I’ve inspected, the Makernotes do include the Picture Control settings (sorry, a Nikon nomenclature) that produced the embedded JPEGs.

But a rendition, e.g., a TIFF for use in GIMP, or just a JPEG for some audience, does have a use case for containing the tool chain that produced it. That’s how I open most files in rawproc; I select a rendition, and rawproc finds the source file, opens it, and re-applies the toolchain to start me off with the rendition as-processed. From there, I can change any of the existing tools in the chain, or add/delete tools. Once done, I can choose to either save the new rendition over the old one, or save to a new filename.

I can’t imagine working as efficiently any other way…

Indeed - the depth map addition in DNG 1.5 makes sense, but I’m not sure what the idea was behind having this “enhanced” linear image alongside the raw data… Maybe a faster/lighter tone and color grading workflow, or storing intermediate HDR+ merge results and such? The file size increase is not negligible (in effect 3+1 full-res channels now at 16 bits or more vs a single 16-bit one).

But we don’t have to replace RAW files. “Editable JPGs” would still be valuable, if not necessarily for the same purposes as RAW files.

In a sense there’s already such a thing: JPEG 2000. Notably, it allows higher bit depths than the ubiquitous JPEG standard’s 8 bits. IMHO 8-bit should never be considered for tone or color editing.

I only consider using plain-ole JPEGs for cropping, and even then, it’s just as easy for me to re-open the raw file and re-apply the processing as a starting point.


Every time I hear someone mention JPEG 2000:

Sorry for the off-topic…
