Always work in 16-bit?



Since GIMP 2.10 is able to work in 16-bit, would you recommend always working in 16-bit, because it provides more pixel information and therefore better image quality?

Is it that simple?

Thanks for your help.


Basically yes. The only drawback is that you lose speed.
So normally I use 8-bit if I only intend to make minor changes, and 16-bit if major changes are necessary, to avoid banding etc…


Ok, thanks for the super fast and easy reply.

(Glenn Butcher) #4


(Morgan Hardwood) #5

That’s generally incorrect. GIMP internally uses 32-bit precision in its calculations, so you in fact avoid conversion delays and make things go faster if you convert your image to 32-bit precision. I just verified that merging down layers in GIMP 2.10.8 works about three times as fast in 32-bit float as in 8-bit.

(Stefan Schmitz) #6

Is there any manual change needed if you want to work in higher color resolution?

I personally thought (!) that a TIFF coming in at 16-bit would be treated in 16-bit and a JPG would always be treated in 8-bit. When you export the TIFF (actually the XCF) to JPG, it will be scaled down to 8-bit.

How wrong am I?


My question was because:

My wife (still) uses Lightroom (but I will soon get her to darktable :smirk:), though she prefers GIMP over Photoshop (although she has a subscription for it).
Last week I discovered that the default Lightroom setting for “open with” GIMP 2.10 was TIFF and 8-bit. And as GIMP 2.10 can now edit in 16-bit, I wondered if we should set this Lightroom default option to 16-bit.
I think so…

(Morgan Hardwood) #8

Disclaimer: I’m not a GIMP dev.

See Preferences > Image Import & Export > Import Policies

The math is done in 32-bit precision, but the result may then be reduced to whatever precision your image in GIMP is in. By default GIMP uses whatever precision the image was in, but you can manually change that via Image > Precision, or by enabling the “promote imported images to geeky things I don’t understand” policy.

It will always do that when exporting to JPEG because the popular JPEG implementations only support 8-bit precision (though the JPEG standard allows for higher precision). What happens when you export to TIFF or PNG is up to you.

Absolutely. The file going from dt/lightroom to GIMP is an intermediate file, i.e. it is not the final file. Intermediate files should not lose data and should be fast to read/write, so use 16-bit TIFF, uncompressed if possible. You can then export the file from GIMP in 8-bit precision if you like.

(Pat David) #9

Actually, I think it’s 32-bit floating point (linear) internally. Checking that first line under Import Policies in @Morgan_Hardwood’s screenshot might help if you’re so inclined.

(Stampede) #10

This thread covers something that has been on my mind lately.

The setup: I bought a couple of courses from RGG EDU. If you’re not familiar, RGG EDU sells video courses marketed towards professional photographers and retouchers. Of course, all the software used in the courses comes from Adobe. So one of my goals has been to take the info in the tutorials and translate it to do the same thing in FOSS software, because I have seen nothing in the FOSS world that comes close to the level of sophistication of what the RGG EDU people are putting out.

Anyway, as part of the courses, I am now in their private Facebook group, where customers can ask questions of each other and of the people who recorded the courses.

The 8 bit vs. 16 bit question was raised in that group. One guy asked the instructor, “Why did you export 8 bits out of Lightroom and work on an 8 bit image in Photoshop? Isn’t 16 bits better?”

The instructor says that 16 bits is overkill in most cases and you should only use it where you get color banding (artifacts) from adjusting hues and tone curves. Otherwise, you are slowing your machine down and bloating your file sizes for no reason. Several other people agreed. These are people that have been doing commercial photography professionally for years (magazine ads and stuff like that). One person said, “I’ve processed thousands of images in my career, and I can count on one hand the number of times that I needed to use 16 bit files.”

But, according to this thread, it seems that when using the GIMP, 32 bit floating point tiffs are going to be the fastest ones to work on. Is that true?

Short version: what intermediate file format should I use to get the fastest performance out of the GIMP?

Do I really want to use 32 bit floating point for fastest performance?


I think the slowdown mainly concerns 5K-Retina-iMac users (without the best hardware).
Better stick to 16-bit.

(Pat David) #12

Fastest performance is still going to come in 8-bit mode (from what I can tell).
If you do use a higher bit depth for editing, I think 32-bit float linear should be faster?

This may be highly memory dependent, I haven’t had a chance to hack at it to test more definitively.

I just know that all of the GEGL operations will be done in 32-bit float linear.

(Glenn Butcher) #13

What you want to use higher bit-depth for is to delay making the “damage” you do to your image visible, especially if you’re doing a lot of changes. Every Single Change you make to your image after the shutter closes does a bit of damage, and doing it in a higher data precision helps to keep the accumulation well within a sub-visible range. In 8-bit precision, a few edits can easily push these artifacts into what you can see.

Most camera raw files deliver the raw image as 16-bit unsigned integer values, so continuing at that precision avoids a transform. Keep in mind there are 256 16-bit values between every two adjacent 8-bit values, so there’s a good bit of room within which to work in 16-bit. Native floating point (32-bit float and 64-bit double) provides even more precision within which to work, and the performance difference versus 16-bit integer is not significant. Some software will provide a 16-bit float, but that’s implemented in software and can introduce a noticeable performance difference.

But a significant reason to consider floating point is that the image storage convention, black = 0.0, white = 1.0, allows an edit that pushes values (particularly whites) past 1.0 to retain a meaningful value, rather than be clipped as it would be in 8- or 16-bit integer format.
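A small self-contained Python sketch of that headroom difference (not GIMP code; the 16-bit clamp and the 1.5× gain are just illustrative):

```python
# Boost a bright pixel past white, then pull it back down, comparing
# 16-bit integer storage (which clips at 65535) with float storage.

def clip_u16(x):
    """Quantize to 16-bit unsigned integer storage, clamping at white."""
    return min(max(int(round(x)), 0), 65535)

pixel = 60000                                # a bright 16-bit value

# Integer pipeline: the boosted value is clipped, so the detail is gone.
boosted_int = clip_u16(pixel * 1.5)          # 65535, not 90000
recovered_int = clip_u16(boosted_int / 1.5)

# Float pipeline: 90000.0 is past "white" but still a meaningful number,
# so the original value survives the round trip.
recovered_float = (pixel * 1.5) / 1.5

print(recovered_int)    # 43690 -- highlight detail lost to clipping
print(recovered_float)  # 60000.0 -- fully recovered
```

The same round trip done in a 0.0–1.0 float convention behaves identically; only the scale changes.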

Probably a bit more than you want to know. The essential thing is: if you’re going to change stuff in your image (sharpen, saturate, LUT-it-up, etc.), it’s a good idea to do that in at least 16-bit, and defer the crush to 8-bit for saving to the output file. However, if you’re always satisfied with the JPEG straight out of the camera, with maybe just a crop for good effect, you can probably live in 8-bit land.

Oh, if you shoot for Reuters, you’ll have to live with 8-bit. They don’t accept anything other than OOC JPEGs…

(Stefan Schmitz) #14

Smart decision for a press agency. And way less trouble (= less labor cost), too.


Most stock photography agencies accept only 8-bit JPEGs in sRGB, too. Probably has to do with file size.

(Alan Gibson) #16

Ten years ago, “work in 8-bit integer unless you know that creates problems” might be sensible advice. And if you know the image will be edited once in non-linear colorspace such as sRGB, used once and then discarded, the advice is still okay.

But when an image is edited in linear colorspace, or might be re-purposed, or re-edited in the future, or we simply want the maximum headroom without having to worry, 16-bit integer is the minimum that I consider sensible. Some of my work needs 32-bit floating point, and that’s almost become my usual working practice, though it is often overkill.

I will add a note of caution about floating point: digital arithmetic sometimes gives results like 65535.999999 or 0.000001 or even -0.0000001. With integer arithmetic, these are rounded as we would expect, but floating-point files will retain these weird numbers, and operations like “turn all non-black pixels white” don’t work as we would expect.
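A quick Python illustration of that residue (the classic repeated-0.1 example, not image data):

```python
# Ten steps of 0.1 "should" sum to exactly 1.0, but binary floating
# point cannot represent 0.1 exactly, so a tiny error accumulates.
x = 0.0
for _ in range(10):
    x += 0.1

print(x)          # 0.9999999999999999
print(x == 1.0)   # False

# An operation like "turn all non-black pixels white" would treat a
# pixel holding 0.0000001 as non-black, even though it "should" be 0.
```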

(Ingo Weyrich) #17

Integer values in the range [−16777216, 16777216] are exactly representable in 32-bit float (including the value 65536, which you mentioned as 65535.99999).
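That range can be checked from Python with the standard struct module, which lets us round-trip a value through 32-bit float storage:

```python
import struct

def roundtrip_f32(x):
    """Store a Python float (64-bit) as a 32-bit float and read it back."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(roundtrip_f32(65536.0))     # 65536.0    -- exact
print(roundtrip_f32(16777216.0))  # 16777216.0 -- 2**24, still exact
print(roundtrip_f32(16777217.0))  # 16777216.0 -- first integer that isn't
```

2^24 is the limit because a 32-bit float has a 24-bit significand; above it, not every integer has its own representation.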

(Alan Gibson) #18

Yes, the result of a calculation may be exactly 0.0 or 65536.0 and so on. My point is that sometimes the result of a calculation may not be exactly so. We can demonstrate the effect:

$ echo 'scale=15; 1/49*49' | bc


It’s a long time since I played with bc, but if you change to scale=14, isn’t there too big a jump between the results?

(Aurélien Pierre) #20

16-bit is not more info, it’s smoother transitions, which limits posterization and quantization artifacts… The key concept here is that 8/16-bit formats are integers, so every pixel manipulation gets rounded to the closest integer, which ends up being quite a gap in the low-lights. That’s why clever software inputs and outputs 8/16-bit integers, but works internally in 32-bit float (where no rounding happens until the very end of the pipe).
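A minimal Python sketch of that rounding gap in the shadows (the 1.3× gain and the starting value are arbitrary):

```python
def to_u8(x):
    """Round a float pixel value back to 8-bit integer storage."""
    return round(x)

v = 2  # a deep-shadow 8-bit value

# Rounded back to integer storage after each operation:
stepwise = to_u8(to_u8(v * 1.3) * 1.3)   # round(2.6) = 3, then round(3.9) = 4

# Both gains applied in float, rounded only once at the end of the pipe:
float_pipe = to_u8(v * 1.3 * 1.3)        # round(3.38) = 3

print(stepwise, float_pipe)  # 4 3 -- a full level apart, just from rounding
```

The absolute error is the same at any brightness, but near black each 8-bit step is a large *relative* change, which is why the gap shows up in the low-lights first.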

That instructor should dig a hole and shoot himself in it; that’s beyond stupid. You don’t know whether it’s going to be overkill until you run into a banding effect and discover you have to trash your whole layer stack and start over at a higher bit depth, or unless you work on gamma-encoded data instead of linearly-encoded data, which is – again – stupid. And I’m pretty sure Photoshop doesn’t use 32-bit float internally, but keeps whatever you feed it as is (open a 32-bit float TIFF in it and try to use the healing brush: PS will issue an error saying the healing brush only works in 8 or 16 bits, so my guess is it works on integer RGB).

Doing crap for 20 years only makes you an expert at crap. Use clever, reproducible, systematic workflows. No assumptions, no guesses, no nasty workarounds (except when you can’t avoid them).

Exactly… 32 bits vs. 8 bits comes down to RAM use (and CPU cache use, which is less and less of an issue since AVX processors). No definitive answer here; it depends on what your CPU is and how the software uses its optimization possibilities. Basically, 8 vs. 32 bits means bigger memory chunks to move between RAM and the CPU cache, so a bit more I/O latency. But there are several ways to overcome this, especially on modern CPUs (and GPUs), so the difference is negligible.

I think this has more to do with ensuring the authenticity of the information and avoiding manipulated pictures and montages. See the World Press Photo and McCurry controversies in the past few years.