Should I change compression in a TIFF after dt has edited it?

Background to topic:
Scanning portfolio of slides (mostly Kodachrome) with Nikon Coolscan V ED using Nikon Scan 4.0.3 and saving to TIFF. Scanning at 4000 DPI, 14 bit resolution produces 110 MB TIFFs. Having scanned a test selection of slides, I realise that I am going to have a storage problem (yes, I know they’re ‘cheap’ - where you live!).

The question:
What effect will compressing a TIFF have on dt if it has already edited the TIFF?

I noticed that Nikon Scan does not have any options for producing losslessly compressed TIFFs, but I can compress them - outside dt - using a suitable app (I’m using FastStone Photo Resizer) with LZW compression. On the two or three samples that I tried, dt opened the TIFF again OK, but the image looked as if I had increased exposure by two full stops or so - and this effect could not be ‘undone’ using the exposure, basic adjustment, shadows and highlights, or contrast, brightness, saturation modules.

So I can see the ‘external’, visual effect of changing the TIFF compression on a previously edited file but I have no idea of what other ‘internal’ effects there might be. Any comments?

Am I correct in assuming that dt can open TIFFs which have been compressed with any of the commonly used methods and will produce identical results?

From what you describe, I have the feeling that the software you used to compress your TIFF files is removing the metadata of the file.
That’s probably the source of the difference you observe.
You should probably try to find software that only compresses the data without touching the metadata stored in the TIFF.
Aside from that, I don’t have experience with loading various flavours of TIFF, but changing the compression, if it’s lossless, should not change darktable’s behaviour.

dt relies on LibTIFF and should have no problems reading lossless LZW or deflate (ZIP) compressed files.

Apart from the metadata removal (check e.g. for an embedded ICC profile before and after your compression using ExifTool or exiv2): at some point there was some dt strangeness on import where the input profile was not taking effect (should be fixed in 3.4.1?), so you could also verify that, and try flipping it to something else and back to embedded or sRGB…
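If you want to see what a compressor actually wrote without trusting its UI, you can inspect the TIFF tags directly. Here is a minimal, stdlib-only Python sketch (my own, not anything from dt or LibTIFF) that reads the first IFD and decodes single-value SHORT entries such as Compression; note that for RGB files BitsPerSample has three values and this sketch will only return the offset where they are stored, so it is a starting point, not a full parser:

```python
import struct

# Tag IDs from the TIFF 6.0 specification
TAG_BITS_PER_SAMPLE = 258
TAG_COMPRESSION = 259
COMPRESSION_NAMES = {1: "none", 5: "LZW", 7: "JPEG", 8: "deflate (ZIP)"}

def read_first_ifd(data: bytes) -> dict:
    """Return {tag_id: value} for the first IFD of a TIFF byte string.

    Only single-value SHORT/LONG entries are decoded inline; multi-value
    entries (e.g. BitsPerSample of an RGB file) come back as a file offset.
    """
    endian = "<" if data[:2] == b"II" else ">"      # II = little-endian, MM = big
    magic, ifd_offset = struct.unpack_from(endian + "HI", data, 2)
    assert magic == 42, "not a TIFF file"
    (n,) = struct.unpack_from(endian + "H", data, ifd_offset)
    tags = {}
    for i in range(n):
        entry = ifd_offset + 2 + 12 * i             # each IFD entry is 12 bytes
        tag, typ, count, _ = struct.unpack_from(endian + "HHII", data, entry)
        if count == 1 and typ == 3:                 # SHORT, value stored inline
            value = struct.unpack_from(endian + "H", data, entry + 8)[0]
        else:                                       # LONG value, or offset to values
            value = struct.unpack_from(endian + "I", data, entry + 8)[0]
        tags[tag] = value
    return tags
```

Running `read_first_ifd(open("scan.tif", "rb").read())` before and after recompression and comparing `COMPRESSION_NAMES.get(tags.get(TAG_COMPRESSION))` would show straight away whether the tool quietly changed more than the compression tag.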

That has been very useful: I am quite surprised to find that applying LZW compression changes the bit depth from 16 (well, actually 14) bits per pixel to 8 - thus throwing away much of the data that I have been waiting for the scanner to send me through a USB 2 interface.

But I guess everybody but me realised that already!

And yes, the compression (via FastStone or via Photoshop) threw a lot of other metadata away as well. Back to the drawing board.

The problem is probably not the compression in itself, but the software you used to do the compression.

That might be a “feature” of the program you use; at least https://havecamerawilltravel.com/photographer/tiff-image-compression/ talks about using LZW for 16-bit images. But 16-bit LZW sometimes created files that were bigger than the uncompressed file…

Err, sounds like you’re not using a good SW for your compression solution. Both LZW and deflate are supported in TIFF at 16 bits without problems.

Probably not the most practical solution, but you could compress the whole file (even using more powerful methods than LZW) and decompress them in batches when you need them.

Yeah, so I have learned by doing some extra reading. Now I’m confused.

I fear you are correct about the software (except that Photoshop seems to do it too); I was looking on-line for confirmation that compression using LZW should be lossless - but going from 16 to 8 bits per pixel is hardly lossless, is it?

You could also try e.g. the tiffcp tool from the LibTIFF collection… Haven’t verified what it does re metadata preservation though…

In any case, you also want to make sure the “horizontal” differencing option is enabled when compressing for both LZW and deflate/ZIP.
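To see why the predictor matters so much, here is a small stdlib-only Python sketch (the gradient is hypothetical stand-in data, not a real scan line): differencing turns a smoothly varying 16-bit signal into a long run of identical small values, which deflate then crushes:

```python
import zlib

# A smooth 16-bit gradient, the kind of slowly varying data film scans
# are full of (hypothetical stand-in data, not a real scan line)
row = list(range(0, 4096, 4))                      # 1024 samples
raw = b"".join(v.to_bytes(2, "little") for v in row)

# Horizontal differencing: store each sample minus its left neighbour.
# The gradient becomes one leading value followed by a run of constant "4"s.
diffs = [row[0]] + [(row[i] - row[i - 1]) & 0xFFFF for i in range(1, len(row))]
predicted = b"".join(v.to_bytes(2, "little") for v in diffs)

print(len(zlib.compress(raw)), len(zlib.compress(predicted)))
```

The differenced stream compresses to a fraction of the plain one, and the step is fully reversible (the decoder just takes a cumulative sum), so nothing is lost.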

The destination bit-depth should have nothing to do with using LZW compression, which doesn’t touch the individual values of the image. Your software is doing separate things here, reducing the bit-depth from 16 to 8 bits, and then compressing it with LZW. There should be a way to set those things separately, or you have not-so-good software…

What has fascinated me is the effectiveness (or not!) of the compression: going from an uncompressed TIFF image scanned at 4000 samples per inch to an LZW-compressed image scanned at 3000 samples per inch reduced the file size from 110 MB to 20 MB. That’s too good to be true, isn’t it?
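Back-of-envelope arithmetic (using only the sizes quoted in this thread) suggests most of that saving comes from the resolution drop, not from LZW:

```python
# All figures are the ones quoted in this thread, not fresh measurements
uncompressed_4000dpi = 110                   # MB, the original scan size
scale = (3000 / 4000) ** 2                   # pixel count scales with dpi squared
uncompressed_3000dpi = uncompressed_4000dpi * scale
print(round(uncompressed_3000dpi))           # ~62 MB before any compression at all
print(round(uncompressed_3000dpi / 20, 1))   # so LZW itself managed only ~3:1
```

That predicted ~62 MB uncompressed size at 3000 dpi matches the figure reported later in the thread, so the 110 MB → 20 MB drop is roughly half resolution, half compression - not too good to be true after all.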

Yes, I quite agree - and I thought I had chosen those options precisely to not change bit depth but to invoke LZW. Shows you how much I can understand the instructions! Back to the coal face, try again. “If at last you do succeed, don’t try it again”.

I’m one to talk about such software regressions, my hack software is probably brimming with them, things that don’t work as planned because I haven’t tested a certain combination. S’why I like PlayRaws; I get to test against image formats I’d never see otherwise… :laughing:

[quote=“LateJunction, post:1, topic:23653”]
What effect will compressing a TIFF have on dt if it has already edited the TIFF?
[/quote]

I also didn’t get this part… If you’re editing and saving from dt just make sure you have deflate w/ predictor selected with quality >= 6 and you’re done. You won’t get a massively better result (on average) recompressing it again w/ LZW instead…

This has really made me stop and think: to clarify things a bit, I have been compressing my scanned TIFFs, (using ZIP rather than LZW and now using Photoshop to maintain the bit depth), prior to importing them into dt. In general this has resulted in a reduction in size (for a 3000 dpi scan) from about 62 MB to 59 MB.
I had no idea that there was the capability within dt to compress a TIFF (on export) and indeed I have not previously exported in any image format other than JPEG.

But now, when I follow your suggestion and export a ‘first-pass compressed’ TIFF from dt, as a ‘second pass compressed’ TIFF, I see a further reduction in size to 49 MB. This raises some questions:

  • There’s no free lunch: where did dt find this extra compression ‘room’?
  • Is it still lossless?
  • Can I now re-import this ‘second pass compressed’ TIFF back into dt, replacing the larger ‘first-pass compressed’ TIFF and so save some space?

If you use noise reduction, or remove a lot of dust, scratches, etc., there is less very fine detail. That makes compression easier/more efficient.

Provided you use a lossless algorithm to compress your TIFF, it doesn’t matter how often you compress, lossless is lossless…
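You can check that claim mechanically with a tiny stdlib-only sketch using deflate, the same algorithm behind TIFF’s ZIP compression (the byte pattern is just stand-in data):

```python
import zlib

data = bytes(range(256)) * 64                    # stand-in for 16-bit image samples
once = zlib.decompress(zlib.compress(data))      # one compress/decompress pass
twice = zlib.decompress(zlib.compress(once))     # compress the result again
assert once == data and twice == data            # bit-identical after every pass
print("lossless round trip OK")
```

However many times you recompress, a lossless codec hands back exactly the bytes it was given; the only thing that can change between passes is the file size.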

Re-importing the saved version means that you “fix” the previous edits, and won’t be able to undo them: all editing instructions (modules) are executed and the result written to your output file.
But dt should be able to import a file it has written itself…

Not only that: more importantly, the default deflate compression option in dt also includes horizontal differencing, i.e. the predictor step (LZW can benefit from it too).