Why RAWs instead of, say, TIFFs?

Why do camera manufacturers deliver RAW files instead of demosaiced TIFFs?

Essentially, most of my editing consists of changing exposure, white balance or color balance, plus creative adjustments. But I usually struggle to recreate colors and sharpness as pleasing as the camera’s JPEGs.

It would seem to me that my use case would be much better served by a “high dynamic range JPEG”, instead of a RAW file.

I see that this is now slowly happening, with these HEIF or TIFF files that the very latest Canons and Fujis are producing. But I wonder why it took so long to implement that feature?

With a TIFF file you get an already-processed image. You cannot go back and redo that processing — for example, if you want a different demosaicing method that gives you a better image.
Therefore it’s best to go back as far as you can and start from the data there. And that is the sensor data stored in a RAW file.

Moreover, the RAW needs only one channel per pixel, instead of the three in the demosaiced image. This saves a lot of space and reduces time for image processing.
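For scale, here’s a rough back-of-the-envelope comparison (an illustrative sketch; the sensor dimensions and bit depths are just example numbers, and real files add compression and metadata):

```python
# Rough storage comparison for a 24-megapixel sensor (illustrative numbers,
# ignoring compression and metadata).
width, height = 6000, 4000
pixels = width * height

# RAW: one CFA sample per pixel, 14 bits, packed.
raw_bytes = pixels * 14 // 8

# Demosaiced 16-bit TIFF: three channels, 16 bits each.
tiff_bytes = pixels * 3 * 2

print(f"raw:  {raw_bytes / 2**20:.0f} MiB")   # ~40 MiB
print(f"tiff: {tiff_bytes / 2**20:.0f} MiB")  # ~137 MiB
print(f"ratio: {tiff_bytes / raw_bytes:.1f}x")  # ~3.4x
```

So the demosaiced TIFF is over three times the size before you’ve edited anything.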

JPG can’t be ‘high dynamic range’, since it’s only 8-bit.

TIFF can go up to 32 bits, but current cameras deliver 12–14 bits. I’ve heard of new sensors with 16 bits, but I don’t know if they’re on the market yet.
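To put those bit depths in numbers (a quick illustrative sketch; the “one extra bit ≈ one extra stop” rule of thumb assumes linear encoding):

```python
# Discrete tonal levels per channel at common bit depths.
levels = {bits: 2 ** bits for bits in (8, 12, 14, 16)}
for bits, n in levels.items():
    print(f"{bits}-bit: {n:>6} levels")

# An 8-bit JPEG has 256 levels per channel; a 14-bit raw has 16384,
# i.e. 64x finer quantization to play with before posterization shows.
print(levels[14] // levels[8])  # 64
```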

Going away from JPG to lossless formats is a good move, but this only affects the JPG format.

1 Like

This is exactly the right reason. There are many ways people have tried to obtain as much useful information from the raw sensor data as possible (try a search on Google Scholar). If your camera exports a TIFF, you have to trust the manufacturer’s demosaicing algorithm. That may be all right, but not necessarily in all cases. For example, IGV and LMMSE exist because they work better on noisy data (high ISO).

Not necessarily. dcraw -D -T -4 <rawfile> yields a single-channel, 16-bit integer, unscaled, un-white-balanced, un-demosaiced rendition of your raw data, as a .tiff

I just did one. The only thing missing to tell software that it’s mosaiced data is the TIFF PhotometricInterpretation tag value, which reads “BlackIsZero” instead of “Color Filter Array”.

Edit: I missed two other required tags in my NEF files, “CFAPattern2” and “CFARepeatPatternDim”. Those would inform the software how to walk the Bayer pattern.
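For the curious, the PhotometricInterpretation distinction is easy to poke at programmatically. A minimal sketch that hand-builds a tiny little-endian TIFF header and reads tag 262 back out (the byte layout follows the TIFF 6.0 spec; 32803 is the TIFF/EP value for CFA data, 1 is BlackIsZero):

```python
import struct

# Build a minimal little-endian TIFF: header plus a one-entry IFD carrying
# the PhotometricInterpretation tag (262). Value 1 = BlackIsZero (grayscale),
# 32803 = CFA (raw mosaic data).
def make_tiff(photometric):
    header = struct.pack("<2sHI", b"II", 42, 8)  # byte order, magic, IFD offset
    ifd = struct.pack("<H", 1)                   # one directory entry
    # entry: tag, type (3 = SHORT), count, value (SHORT sits in low 2 bytes)
    ifd += struct.pack("<HHIHH", 262, 3, 1, photometric, 0)
    ifd += struct.pack("<I", 0)                  # no next IFD
    return header + ifd

def read_photometric(data):
    # Parse just enough TIFF structure to find tag 262.
    ifd_offset, = struct.unpack_from("<I", data, 4)
    n, = struct.unpack_from("<H", data, ifd_offset)
    for i in range(n):
        tag, typ, cnt, val = struct.unpack_from("<HHIH", data, ifd_offset + 2 + 12 * i)
        if tag == 262:
            return val
    return None

print(read_photometric(make_tiff(1)))      # 1     -> BlackIsZero
print(read_photometric(make_tiff(32803)))  # 32803 -> CFA
```

A raw-aware program reading the dcraw output above would see 1 (BlackIsZero) and treat the file as plain grayscale, which is exactly the missing-tag problem described.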

That’s right; one has to distinguish between the TIFF file/container format itself and the image content it actually carries, which can come in many guises: CFA, RGB, CMYK, grayscale, bitmap; 8, 16, or 32 bits per component, or float; compressed (lossy or lossless) or not, etc.

For example, both NEF and DNG are raw files, that are essentially TIFFs.

3 Likes

GFX 100 has 16bit output

Interesting.

My idea was that demosaicing, color rendering, and possibly lens correction are really things the camera manufacturer can do better than a RAW developer.

But as you pointed out, such a file would also be three times the size, and anyway a TIFF could just as well contain mosaiced data.

Still, I personally prefer to edit JPEGs, which is of course only possible for small edits. For more complex edits, I need to start from RAW, and then be frustrated by my relative inability to create colors as pleasing as the JPEG’s.

Technology evolves and new, better methods are invented, so you may get better results with modern raw converters than what was possible when an older camera was released. E.g., there has been a lot of work done on Fuji CFA-pattern raws (X-Trans). Additionally, noise reduction may rely on raw data but is computationally expensive, so some modern methods are not suited to in-camera processing, as a lot of battery power would be drained.

For me there’s a lot of analogy to film negatives. Today I am not satisfied with the 1 megapixel scans the lab delivered on CD in the 90’s, and I am happy that I can re-scan today (in my case with 18 MP or 72 MP and much better dynamic range).

When I started shooting RAW+JPG instead of JPG only, I made my first attempts at RAW developing. They weren’t as good as the JPGs from the camera.
With time I got better, and my developed RAWs became better than the JPGs. Finally I set my camera to shoot RAW only.
Won’t go back again :wink:

4 Likes

RAWs are cheap and easy for cameras to write, being essentially simple dumps.

JPEGs are fairly cheap to write because the quality has been stripped out.

If my camera could write good-quality demosaiced, distortion-corrected, etc. files, I would probably use them. But what if it took five seconds to write each one? Hmm, that would sometimes be too slow.

What I’d love is metadata in OOC JPEGs that contained the settings used, and FOSS software that I could run on the RAW and the JPEG metadata to make a high-quality equivalent. Ah well, I can dream.

1 Like

That’s what I’ve been trying to get at, yes. Actually, my Fuji files come very close in most circumstances. But being JPEGs, they are not very amenable to editing. Which is why I would love me some “editable JPEGs”.

That’s how it went for me, too, on my Nikon D7000, Ricoh GR, and Pentax Q7. The Fuji X-E3, however, is a different matter. That’s actually the source of my question: I find the Fuji’s images very hard to recreate. My current workaround involves mostly editing the JPEGs and fiddling with LUTs where that’s not an option.

But that solution is not satisfying. Hence my wish for editable JPEGs (which I erroneously called “TIFFs” in the original post).

But perhaps that’s really more of an issue with that particular camera and less with file formats.

Most cameras use a ‘Bayer’ color filter array on their sensor; Fuji has a different approach with ‘X-Trans’.
Maybe that’s the reason why you get different results with the Fuji.

Indeed, that (stupid) X-Trans filter array is a headache.

But my particular problem is not one of demosaicing and sharpness. It’s the rendering of colors and tones that I find hard to recreate.

You could start a PlayRaw where you share a difficult RAW and corresponding JPEG. Maybe you get some inspiration from the edits of others on how to approach your processing?

3 Likes

Yep, that’s the thing that kept me shooting JPEG, although not so much duplicating its results as getting a result that’s as good. But that’s the edge of the rabbit hole: it takes a bit of teasing apart the process to understand what’s involved, and then you can deftly put the parts together into pleasing renditions.

The realization that crystallized it for me was that tone and color are the two fundamental things we change to go from the flat RGB after demosaic to an acceptable rendition. They’re related in that changing one usually affects the other, but treating them as distinct operations helps one work intuitively to good effect.
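As a toy illustration of treating the two as distinct operations, here is a sketch (the curve shape and saturation amount are invented, not anyone’s actual pipeline) that applies a tone curve first and a separate color/saturation adjustment second, on one linear RGB pixel:

```python
# Tone: a simple per-channel power-law curve, lifting the flat linear values.
def tone(v, gamma=2.2):
    return v ** (1 / gamma)

# Color: scale each channel's distance from the channel mean (crude saturation).
def saturate(rgb, amount=1.2):
    mean = sum(rgb) / 3
    return tuple(max(0.0, min(1.0, mean + (c - mean) * amount)) for c in rgb)

pixel = (0.2, 0.1, 0.05)                # flat, linear-looking values
toned = tuple(tone(c) for c in pixel)   # tone first...
final = saturate(toned)                 # ...then color, as a separate step
print(tuple(round(c, 3) for c in final))  # -> (0.505, 0.349, 0.235)
```

Note how the saturation step shifts the channels relative to each other even though the tone step treated them identically — the two operations interact, but keeping them as separate steps makes each one easy to reason about.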

Also, I stopped thinking of JPEGs as anything but final renditions for a specific medium and purpose. My current workflow for anything, from family snapshots to carefully considered landscapes, is to shoot raw, then batch-process to 800x600 JPEGs for proofs. For my family, that’s usually all they need. I will go through the proofs and re-do the ones that need, say, an exposure adjustment or shadow lift, but I do that by re-opening the raw, applying the proof processing, and using that as a starting point. I never edit a JPEG anymore.

rawproc!!!

That’s exactly what my hack software does. When I use either rawproc interactively or the command-line img to create an output JPEG, TIFF, or PNG, the software stores the toolchain in EXIF:ImageDescription. Then rawproc has a special “File → Open Source…” menu item that, when you select, say, a proof JPEG with that information in ImageDescription, opens the source file and re-applies the processing, putting you at the starting point for subsequent work. You can either modify a tool that’s already in the chain or add new tools anywhere in the chain.
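The idea can be sketched in a few lines (the tool names and the serialization format here are invented for illustration, not rawproc’s actual syntax):

```python
# Hypothetical "toolchain in metadata" round trip: serialize the edit chain
# as a string, stash it in the output file's metadata, then parse it back
# to replay the same processing on the source raw.
TOOLCHAIN = "demosaic:ahd;blackwhitepoint:auto;curve:0.25,0.4"

def parse_toolchain(s):
    # "tool:params;tool:params" -> ordered list of (tool, params) pairs
    return [tuple(step.split(":", 1)) for step in s.split(";")]

chain = parse_toolchain(TOOLCHAIN)
for tool, params in chain:
    print(f"apply {tool} with {params}")
```

Because the chain is ordered text, “Open Source…” only has to read the tag, parse it, and re-run each tool against the original raw in sequence.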

Sorry for the blatant marketing, but doggonit, it really works well now, at least for me…

2 Likes

Well, strictly speaking, if you use a Canon camera with CR2… IIRC that uses a TIFF container format :stuck_out_tongue:

1 Like

@ggbutcher Glenn, I scanned the git repo for rawproc very quickly. The instructions are to build for Linux, but I think you mentioned you have compiled on Windows? 32-bit? Are there any specific instructions for doing so?

The github wiki has a somewhat dated page on compiling rawproc:

I made this back in the day when Ubuntu’s packages weren’t at sufficient versions to support rawproc; with 19.01 you can apt-get all the dependencies.

Well, I haven’t tested wxWidgets that way. I like to statically link it anyway, as most folk would have to install wxWidgets just for rawproc. So, what I generally do for both linux and msys2:

  1. Get and compile wxWidgets. Here’s my configure (in a separate build directory):
    $ ../configure --enable-unicode --disable-shared --disable-debug
  2. Get and compile and install librtprocess (not a package in any distro, yet)
  3. Install the other dependencies (libjpeg, libtiff, libpng, liblcms2, libraw, lensfun)
  4. From there, do the rawproc compile as described in the README

There are a few different angles here:

Why do camera manufacturers deliver RAW files instead of demosaiced TIFFs?

Because your computer can run more powerful algorithms on the RAW (especially denoising and demosaicing) than the camera can. I guess this gap has become a bit smaller, but it is very likely still relevant.

Also, if I’m not completely mistaken, white balance is usually performed before steps like demosaicing and denoising. This makes intuitive sense, as changing the white balance might actually reveal edges that were not visible before.
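A toy sketch of that ordering, with invented multipliers and made-up sample values: each CFA site is scaled by its channel’s white-balance factor before any demosaicing happens:

```python
# Toy white balance on a Bayer (RGGB) mosaic, applied *before* demosaicing.
wb = {"R": 2.0, "G": 1.0, "B": 1.5}   # hypothetical camera multipliers

# 4x4 RGGB mosaic, one sample per pixel (values are invented).
cfa = [
    [100, 120, 100, 120],
    [110,  90, 110,  90],
    [100, 120, 100, 120],
    [110,  90, 110,  90],
]

def site_color(row, col):
    # RGGB repeat pattern: R at (even, even), B at (odd, odd), G elsewhere.
    if row % 2 == 0 and col % 2 == 0:
        return "R"
    if row % 2 == 1 and col % 2 == 1:
        return "B"
    return "G"

balanced = [
    [cfa[r][c] * wb[site_color(r, c)] for c in range(4)]
    for r in range(4)
]
print(balanced[0][0], balanced[0][1], balanced[1][1])  # 200.0 120.0 135.0
```

Since the multipliers change the relative channel values, the edges the demosaicer sees really do depend on the white balance chosen beforehand.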

If you just want slight tweaks, and the dynamic range was already compressed in camera, you might actually get away with editing JPEGs, especially if they were captured with a high-resolution camera and you’re targeting ‘web’ resolutions.

But I usually struggle with recreating as pleasing colors and sharpness as the camera’s JPEGs.

From my experience this also depends a lot on the camera. My D810 seems to be nice and linear; just using the standard color matrix gives me usable colors. My A6000 is the opposite: the colors out of the camera are usually off, and it requires a color profile and/or fiddling to become usable (and is still far from perfect). The out-of-camera JPEGs from the Sony look fine, if not entirely realistic.

In the end I’m 100% with you: getting decent high(ish)-bit-depth out-of-camera images would be wonderful, especially for quick snapshots where nothing more than a quick edit is desired.

1 Like

darktable does the same if you enable the option to export processing history to JPG (it ends up in the XMP.darktable.history tag)

1 Like