What do we lose processing TIFF?

What do we actually lose if, instead of processing from a RAW file, we have a >= 16-bit losslessly compressed TIFF (or JXL, for that matter)?

Looking at the darktable pixelpipe, before the demosaic we have (and I'm assuming dt does almost everything, if not everything, that can be done before demosaic):

  • raw black/white point
  • white balance
  • highlight reconstruction
  • raw chromatic aberrations
  • hot pixels
  • raw denoise

Of those, the only one that (at least for me) is used as part of the processing of an image is white balance. The rest are either sensor-specific things I don't need to touch, or, if they do need touching, they could perfectly well be applied (or must be applied) during the generation of that TIFF.

The other thing that we lose is, obviously, the choice of demosaic algorithm.

But if it's lossless and high bit depth, then, am I wrong in thinking we don't lose any editing flexibility after demosaic? We'd also need to make sure that on the export to TIFF we are not clipping any data that wasn't already clipped, which I guess could happen if we apply a basic white balance.
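To make that clipping concern concrete, here is a minimal sketch (plain Python, with made-up multiplier and pixel values, not taken from any real camera) of how white-balance gains above 1.0 can push previously unclipped values past the encodable maximum on export:

```python
# Hypothetical daylight-ish white-balance multipliers: red and blue
# are boosted relative to the green reference channel.
wb = {"R": 2.0, "G": 1.0, "B": 1.5}

def apply_wb(pixel, wb):
    """Scale each channel of a normalized [0, 1] pixel by its multiplier."""
    return {ch: v * wb[ch] for ch, v in pixel.items()}

# A bright but unclipped pixel straight off the sensor.
pixel = {"R": 0.6, "G": 0.9, "B": 0.8}
balanced = apply_wb(pixel, wb)

# Channels that now exceed 1.0 would clip in an integer TIFF unless the
# exporter rescales the data (or stores float) before encoding.
clipped = [ch for ch, v in balanced.items() if v > 1.0]
print(clipped)  # ['R', 'B']
```

So an export that bakes in white balance either has to headroom-scale the whole image down or clip those channels, which is exactly the loss being asked about.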

What about color profiles? Does applying a color profile change the data in the file, or does it only point to a way of interpreting the data that's there? I do know that on export, out-of-gamut colors are moved into gamut, but as I understand it this is an extra step separate from the color profile and could potentially be skipped (although I don't see a skip option for it in dt).
Regardless, it would probably be best to export to a color profile larger than the camera one, linear Rec2020 maybe.

A couple of use cases:
Let's say we have a monochromatic sensor and trichrome a scene with filters. Since there's no white balance to take care of in those RAWs and no demosaic, by applying the rest of the strictly needed processing, exporting those, then combining them into different channels of a single image and doing the actual editing on that result, we lose absolutely nothing (?),
rather than doing as much processing as we can on the separate RAWs.
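The trichrome combination step above is just channel interleaving, which a short sketch can show (pure Python over nested lists; real code would read the 16-bit TIFFs with a library such as tifffile, and `combine_trichrome` is a name invented here):

```python
def combine_trichrome(red, green, blue):
    """Interleave three same-sized mono planes into one RGB image.

    Each plane is a row-major list of rows of scalar intensities;
    the result holds (r, g, b) tuples per pixel. No pixel value is
    altered, so nothing is lost in the combination itself.
    """
    assert len(red) == len(green) == len(blue), "planes must match"
    return [[(r, g, b) for r, g, b in zip(rr, gg, bb)]
            for rr, gg, bb in zip(red, green, blue)]

# Tiny 2x2 example planes (made-up values).
r = [[0.1, 0.2], [0.3, 0.4]]
g = [[0.5, 0.6], [0.7, 0.8]]
b = [[0.9, 1.0], [0.0, 0.5]]
rgb = combine_trichrome(r, g, b)
print(rgb[0][0])  # (0.1, 0.5, 0.9)
```

Because this is a pure rearrangement, doing the editing after the merge rather than on the separate exposures costs nothing, provided the per-frame exports were themselves lossless.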

And stitched photos. In this case, if they are color images, we do lose, like I said, the choice of demosaic algorithm, and maybe something regarding white balance; but other than that, did we lose anything?

The more info that can be supplied the better: links for further reading, etc. And please don't be afraid to get technical; in fact I prefer it, but I may still ask for clarifications if something goes over my head.

Thanks in advance for the answers :smiley:


Nice question! And, nicely dissected.

Regarding color profile, the thing you need to do is make sure whatever you export to TIFF is accompanied by the color profile that represents the image data. If you do nothing to convert the camera data to some other colorspace, the camera profile should accompany it. Using any other profile than that one implies you converted the camera data to that profile. And, that would be ‘loss’.

IMHO, the minimum essential operations on raw data to make a TIFF for further processing are:

  • black/white point: Stretches the data for two reasons: 1) on the black side, it gets rid of the sensor bias that keeps the sensor from recording black as zero, and 2) on the white side, it takes white from that 12- or 14-bit limit of sensor recording up to the 16-bit top of integer data.
  • white balance: Shifts the camera-recorded color for two reasons: 1) the color temperature(s) of the scene lights, and 2) bias built into the sensor’s Bayer array.
  • demosaic: there’s no convention for non-RGB pixels that most downstream software would understand.
  • color profile assignment: If you’re shipping camera data in its original spectral response, you need to include the corresponding camera color profile.
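The first two bullets can be sketched in a few lines. This is only an illustration with hypothetical numbers (a 14-bit sensor with a black level of 512, invented multipliers); real values come from the raw file's metadata:

```python
# Hypothetical sensor constants for a 14-bit sensor.
BLACK = 512              # sensor bias: "black" is recorded as 512, not 0
WHITE = 2 ** 14 - 1      # 16383: the sensor's saturation value

def scale_raw(value, black=BLACK, white=WHITE):
    """Map [black, white] to [0.0, 1.0], clamping values below black."""
    return max(0, value - black) / (white - black)

def white_balance(rgb, multipliers=(2.0, 1.0, 1.5)):
    """Per-channel gains; green is conventionally the 1.0 reference."""
    return tuple(v * m for v, m in zip(rgb, multipliers))

print(scale_raw(BLACK))  # 0.0  -> sensor bias becomes true black
print(scale_raw(WHITE))  # 1.0  -> sensor saturation becomes full scale
print(white_balance((0.5, 0.5, 0.5)))  # neutral gray after balancing
```

Demosaic and profile assignment are omitted here since they are algorithm- and metadata-heavy, but the point stands: these first two steps are simple, invertible-in-principle rescalings of scene-linear data.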

Everything else, to my bear-of-little-brain thinking, is lagniappe. Now, things like hot pixels and raw-based denoise are just the domain of raw processing programs, so might as well do them there. And, no tone curve shenanigans, just scene-linear data like the camera captured it.

FWIW…


Oh, just saw this sentence… If you plot your camera’s profile in terms of chromaticity, you’ll find the triangle is a lot larger than any of the defined colorspaces. Thing is, you can’t think of that camera triangle as a ‘colorspace’ like the others, it’s actually an arbitrary bound that puts the camera’s bayer red, green, and blue bandpasses in the proper positions to down-convert the data to a destination gamut. The licensed color scientists call the camera matrices “compromise matrices” for that reason.


So what I'm hearing is that choosing a color space does indeed change the recorded data in the output file.

I'll have to look into the availability of camera profiles, but the most obvious answer, without investigating, is that they are readily available if dt ships them.

But the question does arise: how does this affect a monochromatic sensor? Would it need no color space at all?

“choosing” a color space has two interpretations: 1) assigning a color space to an image, or 2) actually transforming an image from one colorspace to another.

You really only want to do #1 with a camera profile to a raw image. From there, you want to do #2 to convert the image from one space to another. There are some folk who show how to assign a non-corresponding color space to an image to get a different look, but to me that’s just a stupid pet-trick that confuses things more than anything.

Most raw processors store camera color information internally; most of those use the dcraw adobe_coeff table from the source code as a basis. That makes it harder to manually assign the colorspace definition to an image. Thinking on it, it may just be easier to export from the raw processor in Rec2020 or ProPhoto, converted at export. I have my own software that would do the assignment, but I don’t recommend it :laughing:
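To make the assign-vs-convert distinction from above concrete: assigning only changes the label attached to the data, while converting pushes every pixel through a 3x3 matrix. A toy sketch (the matrix below is invented for shape only, not a real camera-to-Rec2020 matrix):

```python
# Invented 3x3 matrix: real ones come from the camera profile
# (e.g. derived from dcraw's adobe_coeff table).
M = [[0.9, 0.1, 0.0],
     [0.05, 0.9, 0.05],
     [0.0, 0.2, 0.8]]

def convert(pixel, matrix):
    """Interpretation #2: actually transforms the pixel values."""
    return tuple(sum(m * p for m, p in zip(row, pixel)) for row in matrix)

def assign(pixel, profile_name):
    """Interpretation #1: data untouched, only the interpretation tag changes."""
    return pixel, profile_name

px = (1.0, 0.0, 0.0)
print(assign(px, "camera")[0] == px)  # True: assignment never alters data
print(convert(px, M))                 # (0.9, 0.05, 0.0): conversion does
```

This is why assigning the wrong profile merely misinterprets the numbers, whereas converting (and then rounding back to integers, or clipping to the destination gamut) is where actual loss can occur.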

For monochromatic sensors all you really need to know is black and white point.

Besides possibly clipping there, a linear 16-bit TIFF is basically lossless.


Would you share it? I'm interested in looking at the inner workings.

Just to reiterate then: since applying a color space seems sort of obligatory, it doesn't matter which we choose; it could be sRGB and we wouldn't lose anything (?)

This would make sense, since all the mono information is in the single pixel value for its intensity.
But on the other hand, I understand ICC profiles also say what white is and such,
so I'm still unsure about the answer.