So far I have mostly worked with JPGs and PNGs at a color depth of 8-bit. I’m wondering, though, when it makes sense to work with 16- or even 32-bit images. And: is it possible to see the difference in a photo print or on a monitor? My monitor has a resolution of Full HD (1920x1080 pixels).
It makes sense to work with 16-bit files when you are working with photos, especially if the source was a raw file. Raw files usually have a bit depth of 14 bits; if you export that to 8-bit, you are losing some quality.
On the other hand, when you print photos, you might not see any difference between 8-bit and 16-bit files.
The general idea is: use 16-bit as long as you are editing a photo; when you are finished, you can save it as a JPG or PNG to print.
Personally I don’t see any advantage of using 32-bit over 16-bit. 32-bit files tend to be huuugeee!
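The quality loss mentioned above is easy to put numbers on. Here is a quick back-of-envelope sketch of how many tonal levels per channel each bit depth can hold (plain arithmetic, no image libraries involved):

```python
# How many tonal levels per channel each bit depth can represent.
raw_levels = 2 ** 14   # 14-bit raw: 16384 levels
levels_8   = 2 ** 8    # 8-bit export: 256 levels
levels_16  = 2 ** 16   # 16-bit export: 65536 levels

# Exporting 14-bit raw data to 8-bit collapses this many distinct
# raw levels into every single output level:
merged_per_step = raw_levels // levels_8
print(merged_per_step)  # 64 raw levels merged into each 8-bit step
```

So every 8-bit value stands in for 64 distinct raw levels, while a 16-bit export has room to spare for everything the sensor recorded.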
The GIMP menu items select the arithmetic to be used when editing. I personally use the best: 32-bit floating point, Linear. That does not mean that my saved files are “huuugeee” because, if I save as JPEG, the GIMP automatically converts the internal working file to 8-bit and if I save as TIFF it converts the internal working file to 16-bit or whatever else you select in the TIFF dialog box.
As to working in 8-bit, or even 16-bit non-linear: many adjustments, especially round-trip ones, can lead to artifacts, a decided disadvantage.
What precision do I need when editing images?
Answer: If I am only cropping, I need only as much precision as the input image. But when changing colour and tones, I want as much precision as possible. The only reason for restricting precision is when high precision uses too much memory or time.
What precision do I need when saving images that may need further processing?
Answer: Same as previous answer, but substitute “disk” for “memory”.
What precision do I need when saving images as final outputs that need NO further processing?
Answer: For display-referred colorspaces such as sRGB, 8 bits is sufficient.
If I try opening a RAW file in GIMP, it uses a third-party program like Darktable or RawTherapee to process the RAW file into an image that can be used by GIMP. Do major color corrections and exposure corrections in these programs. Then, when you shut down these programs, the image is sent to GIMP as a 32-bit floating point image. I would keep that precision until export time.
At export, if I choose TIFF or PNG then I would export as 16-bit so that further editing can be done later if desired. If I export as JPG then it is exported as 8-bit by default in any program.
8-bit images are limited to a maximum of 256 shades each of red, green, and blue, which exceeds what the human eye can distinguish and is more than sufficient for computer monitors and printers, but is a bit limiting for editing again later on.
16-bit images have the potential for 65536 shades each of red, green, and blue, though in reality they probably never contain that many. Their advantage is flexibility in later editing: you are less likely to get banding artefacts in smooth areas like a blue sky, which are very common in heavily edited JPGs.
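The banding point can be illustrated with a toy example. This is only a sketch: it quantizes a synthetic smooth ramp (standing in for a sky gradient) to 8 and 16 bits and counts how many distinct tones survive:

```python
# A synthetic smooth gradient (think: a blue sky), quantized two ways.
width = 100000
gradient = [i / (width - 1) for i in range(width)]  # values in [0, 1]

def quantize(values, bits):
    """Snap each value to the nearest level representable at this bit depth."""
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in values]

distinct_8 = len(set(quantize(gradient, 8)))    # only 256 tones -> visible steps
distinct_16 = len(set(quantize(gradient, 16)))  # 65536 tones -> still smooth
```

With only 256 tones available, each step in the 8-bit version covers a wide band of the gradient, which is exactly the banding you see after heavy edits.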
8-bit JPGs never remember individual pixels, and there tend to be artefacts when pixel peaking or doing extreme enlargements. A TIFF file remembers individual pixels, and this prevents artefacts being created by the file format.
I am willing to be corrected on this if I have made any errors in my summary here. I personally save my images from Darktable and GIMP as 16-bit images, which can then be reopened and re-exported as 8-bit JPGs when needed. The 8-bit JPGs are often discarded after their use, as the 16-bit file is my archival storage file.
I also cringe when I have to edit an out-of-camera JPG compared to a RAW file, as the RAW allows so much more editing without artefacts.
I meant to say pixel peeking or pixel-peeping but needed more coffee. No JPG records every pixel, even at 100% quality. I thought TIFF compression was lossless, but I may be incorrect about that; in any case I never compress my TIFFs, because there are instances where not all programs can decompress them correctly.
And this can be worse in some cases: due to gamma encoding, there are only about 70 values for the lighter parts of the picture (those where the luminosity is at least 50% of white, typically the skies).
If you do the same thing in 16-bit integer, you get gaps and spikes as well, but since they are much more numerous (256 times more numerous…), they are individually much smaller.
Using floating point makes this problem vanish, because two pixels with exactly the same value are rare.
In practice, if you use high precision in GIMP, you should prefer 32-bit FP: since it is the format used for internal computations, you skip conversion steps.
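The "only about 70 values" figure above is easy to check, assuming a plain gamma-2.2 curve as a simplified stand-in for the real sRGB transfer function:

```python
# Count the 8-bit codes whose decoded (linear) value lands in the top
# half of the brightness range, assuming a simple gamma-2.2 encoding.
GAMMA = 2.2

def to_linear(code):
    """Decode an 8-bit gamma-encoded value back to linear light (0..1)."""
    return (code / 255) ** GAMMA

bright_codes = [c for c in range(256) if to_linear(c) >= 0.5]
print(len(bright_codes), bright_codes[0])  # 69 codes, starting at code 187
```

So the entire upper half of the linear brightness range is squeezed into roughly the top 70 of the 256 codes; the exact count shifts slightly with the true sRGB curve.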
Terry, I am sorry to say that the statement is not quite correct.
In the GIMP, there is a layer merging option “Difference” … if two images are exactly the same, you get a 100% black image.
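What "Difference" mode computes is just a per-channel absolute difference. A minimal sketch on made-up 2x2 RGB data (all pixel values here are invented for illustration):

```python
# Per-channel absolute difference, as GIMP's "Difference" mode computes it.
image_a = [[(10, 20, 30), (40, 50, 60)],
           [(70, 80, 90), (100, 110, 120)]]
image_b = [[(10, 20, 30), (40, 50, 60)],
           [(70, 80, 90), (100, 110, 125)]]  # one channel differs by 5

def difference(a, b):
    """Absolute difference of two same-sized RGB images."""
    return [[tuple(abs(ca - cb) for ca, cb in zip(pa, pb))
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# An image compared with itself comes out 100% black (all zeros):
self_diff = difference(image_a, image_a)
all_black = all(c == 0 for row in self_diff for px in row for c in px)
```

Any non-zero pixel in the result marks a spot where the two layers disagree, which is why the mode is handy for spotting encoding damage.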
Here’s a TIFF versus its conversion to YCbCr photometric at 90% quality with 4:2:0 sub-sampling:
As expected, slight differences are just visible.
Now here’s the TIFF versus its conversion to RGB photometric at 100% quality with no sub-sampling:
As I expected and implied earlier, it is virtually** all black = virtually no difference = almost every pixel “remembered”.
** No difference was greater than 1/255 when viewing at 8-bit “precision”.
Please spare me the retort that “nobody does that” or that “my editor can’t do that” … because, if I wanted absolute max quality and the printer only accepted JPEG and not TIFF, I would send them this:
To be taken with a grain of salt: if you don’t change pixels, the JPEG encoding “settles”. In other words, if you encode-decode-re-encode the same image with the very same parameters, you get the same values for both encodings. This is used in forensics to (sometimes) determine recently changed areas of an image (Error Level Analysis).
A more thorough way to check encoding damage is to shave one pixel off the image at the top and/or left, so the JPEG “blocks” aren’t the same as in the initial encoding.
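The “settling” comes down to the lossy step at the heart of JPEG: quantizing DCT coefficients. A toy sketch (the coefficients and quantizer step are invented): quantizing a second time with the same step changes nothing, so re-encoding an unchanged image introduces no further loss:

```python
# The lossy step in JPEG: snap each DCT coefficient to a multiple of q.
def quantize_coeff(coeff, q):
    return round(coeff / q) * q

coeffs = [13.7, -42.2, 5.0, 0.4, 101.9]  # invented DCT coefficients
q = 8                                     # invented quantizer step

once = [quantize_coeff(c, q) for c in coeffs]   # first encoding's loss
twice = [quantize_coeff(c, q) for c in once]    # re-encoding the result
print(once == twice)  # True: the second pass is a no-op
```

Quantization is idempotent, so only pixels that were actually edited (or blocks that moved, as suggested above) produce fresh round-off in a re-encode.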
I would try that but I’m not sure that different-sized layers can be properly compared in the GIMP by means of the “difference” merging option which relies on a one-to-one pixel correspondence in the “stack”.
I’m not convinced that the method should be “taken with a pinch of salt”. I did not “encode-decode-reencode the same image with the very same parameters”; I encoded the same image with two different JPEG settings and then compared each of the two results in turn with the original to assess the degree of what you call “encoding damage”.
You don’t scale, you just crop, so you still have a one-to-one correspondence between the pixels after shifting one layer by one pixel in the appropriate direction(s).
If I just crop, the shifted layer will be off by one pixel in the x and y directions. Therefore, there will be a difference at each pixel where there was none before. I am becoming convinced that my method is not understood, sorry.
[edit] here is an image cropped by one pixel on two sides and moved by one pixel in the x and y directions then compared by ‘Difference’:
Completely as I predicted. We must be talking at cross-purposes!
As we continue to discuss how to compare different JPEG methods with the original TIFF, the value of my contribution drops rapidly toward zero.
[/edit]
Rather than giving a knee-jerk reaction to this statement, I found a suitable image that was a RAW file. I opened it in DT, accepted the default processing, and exported it as a TIFF and again as a 100% JPG. I then opened the images as layers in GIMP and, at 800% magnification on a 43-inch monitor, examined the difference in image quality. It was insignificant, to say the least.
My next test consisted of doing a rotation, crop, and levels adjustment on both the TIFF and JPG images separately in GIMP. I exported the JPG image at 100% and 95% quality. I then opened the images as layers in GIMP. The drop-off in quality between the 100% JPG and the TIFF was again very small indeed. However, at 95% quality artefacts were appearing. It should also be noted that the 100% image file was more than three times the size of the 95% image.
So I concede I may have been too harsh in my criticism of JPGs. However, returning to what I see as the intent of the original post: it is to understand the advantages of color bit depth, not specifically the file format. I mentioned the difference between JPG and TIFF because JPG is limited to 8-bit, and that compromises the ability to make large adjustments to color and exposure without creating banding artefacts. 16 bits is better for an image being subjected to editing.
In my view, 8 bits is fine for end use, displaying on a monitor or printing, but I would avoid 8-bit for editing, although it can be sufficient if no major color or exposure adjustment is applied.
My advice to Claus would be to use 16-bit whenever possible for editing. I personally prefer to edit raw files from the camera in darktable, and if I need to send the image to GIMP or another program for further editing I export the images as a 16-bit TIFF or PNG. I also save all my darkroom edits as a 16-bit TIFF for archival storage. The only time I use 8-bit is when I create a JPG to post on the internet or for some other use that requires a JPG.
Thanks @xpatUSA for making me relook at the issue of JPG quality. Have a good weekend.
“I personally prefer to edit raw files from the camera in darktable and if I need to send the image to GIMP or another program for further editing I export the images as a 16 bit tiff or PNG.”

The truth is as in the original statement:

“If I try opening a RAW file in GIMP it uses a third party program like Darktable or Rawtherapee to process a RAW file into an image that can be used by GIMP. Do major color corrections and exposure corrections in these programs. Then when you shut down these programs the image is sent to GIMP as a 32 bit floating point image.”
Attention: the script exports the file from dt as an EXR file!
Because if you take a random image that has already been encoded as a JPEG, the pixel values already take into account the round-off that occurred when computing the DCT for the 8x8 block they are in. So if you recompress the image as JPEG, you are essentially recomputing the same DCT, and since the round-off already occurred in the previous encoding, your new values are essentially identical to the previous ones and you don’t see any difference.
If you crop a few pixels (anything but a multiple of 8) from the top or left, all the 8x8 blocks are different from the ones used in the previous encoding, so the DCTs are different, and you really see the difference that comes from the JPEG encoding.
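The block-alignment argument can be sketched with plain coordinate arithmetic (the specific pixel coordinates here are just examples):

```python
# Which 8x8 JPEG block a pixel belongs to depends only on its coordinates.
def block_of(x, y, block=8):
    return (x // block, y // block)

# Pixel x=13 sits at position 5 inside its block.
within_original = 13 % 8        # 5

# Crop 1 column off the left: the same pixel is now at x=12,
# at a different position inside a different partitioning -> new DCTs.
within_after_1px = (13 - 1) % 8  # 4

# Crop 8 columns: blocks line back up, positions inside blocks unchanged.
within_after_8px = (13 - 8) % 8  # 5
```

This is why a crop of any amount that is not a multiple of 8 forces every block to be re-quantized from scratch, exposing the true encoding loss.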