Always work in 16-bit?

Disclaimer: I’m not a GIMP dev.

See Preferences > Image Import & Export > Import Policies

The math is done in 32-bit precision, but the result may then be reduced to whatever precision your image in GIMP is in. By default GIMP keeps the precision the image came in with, but you can change that manually via Image > Precision, or by enabling the “Promote imported images to floating point precision” import policy.

It will always do that when exporting to JPEG because the popular JPEG implementations only support 8-bit precision (though the JPEG standard allows for higher precision). What happens when you export to TIFF or PNG is up to you.

Absolutely. The file going from dt/lightroom to GIMP is an intermediate file, i.e. it is not the final file. Intermediate files should not lose data and should be fast to read/write, so use 16-bit TIFF, uncompressed if possible. You can then export the file from GIMP in 8-bit precision if you like.

2 Likes

Actually, I think it’s 32-bit floating point (linear) internally. Checking that first line under Import Policies in @Morgan_Hardwood’s screenshot might help if you’re so inclined.

This thread covers something that has been on my mind lately.

The setup: I bought a couple of courses from RGG EDU. If you’re not familiar, RGG EDU sells video courses marketed towards professional photographers and retouchers. Of course, all the software used in the courses comes from Adobe. So one of my goals has been to take the info in the tutorials and translate it to do the same thing in FOSS software, because I have seen nothing in the FOSS world that comes close to the level of sophistication of what the RGG EDU people are putting out.

Anyway, as part of the courses, I am now in their private Facebook group, where customers can ask questions of each other and of the people who recorded the courses.

The 8-bit vs. 16-bit question was raised in that group. One guy asked the instructor, “Why did you export 8 bits out of Lightroom and work on an 8-bit image in Photoshop? Isn’t 16 bits better?”

The instructor says that 16 bits is overkill in most cases and that you should only use it where you get color banding (artifacts) from adjusting hues and tone curves; otherwise, you are slowing your machine down and bloating your file sizes for no reason. Several other people agreed. These are people who have been doing commercial photography professionally for years (magazine ads and that sort of thing). One person said, “I’ve processed thousands of images in my career, and I can count on one hand the number of times that I needed to use 16-bit files.”

But, according to this thread, it seems that when using the GIMP, 32-bit floating-point TIFFs are going to be the fastest ones to work on. Is that true?

Short version: what intermediate file format should I use to get the fastest performance out of the GIMP?

Do I really want to use 32-bit floating point for fastest performance?

I think the slowdown mainly concerns 5K Retina iMac users (without the best hardware).
Better to stick to 16-bit.

Fastest performance is still going to come in 8-bit mode (from what I can tell).
If you do use a higher bit depth for editing, I think 32-bit float linear should be faster?

This may be highly memory dependent, I haven’t had a chance to hack at it to test more definitively.

I just know that all of the GEGL operations will be done in 32-bit float linear.

What you want to use higher bit-depth for is to delay making the “damage” you do to your image visible, especially if you’re doing a lot of changes. Every Single Change you make to your image after the shutter closes does a bit of damage, and doing it in a higher data precision helps to keep the accumulation well within a sub-visible range. In 8-bit precision, a few edits can easily push these artifacts into what you can see.
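
Here’s a minimal sketch of that accumulation, using NumPy rather than GIMP itself, with an arbitrary power curve standing in for “an edit” (the 0.8 exponent and the 20 repetitions are purely illustrative):

import numpy as np

ramp = np.linspace(0.0, 1.0, 256)           # a smooth grey ramp
u8 = np.round(ramp * 255).astype(np.uint8)  # 8-bit working copy
f32 = ramp.astype(np.float32)               # 32-bit float working copy

for _ in range(20):
    # curve up, then back down -- mathematically a do-nothing pair of edits
    u8 = np.round(255 * (u8 / 255.0) ** 0.8).astype(np.uint8)
    u8 = np.round(255 * (u8 / 255.0) ** (1 / 0.8)).astype(np.uint8)
    f32 = f32 ** np.float32(0.8)
    f32 = f32 ** np.float32(1 / 0.8)

print(np.abs(u8 / 255.0 - ramp).max())  # error on the order of whole 8-bit steps
print(np.abs(f32 - ramp).max())         # error down at float rounding level (~1e-7)
print(len(np.unique(u8)))               # fewer than 256 levels survive: banding

The exact numbers will vary, but the pattern won’t: the integer pipeline loses distinct levels with every rounded edit, while the float pipeline barely moves.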

Most camera raw files deliver the raw image as 16-bit unsigned integer values, so continuing at that precision avoids a transform. Keep in mind there are 256 16-bit values between every two adjacent 8-bit values, so there’s a good bit of room within which to work in 16-bit. Native floating point (32-bit float and 64-bit double) provides more precision within which to work, and the performance difference versus 16-bit integer is not significant. Some software provides a 16-bit half float, but that’s implemented in software and can introduce a noticeable performance difference.
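
The back-of-envelope version of that spacing claim, in plain Python (the 0..1 scale is just the usual normalized encoding, nothing GIMP-specific):

step_8 = 1 / 255     # spacing between adjacent 8-bit levels on a 0..1 scale
step_16 = 1 / 65535  # spacing between adjacent 16-bit levels
print(step_8 / step_16)  # 257.0 -- one 8-bit step spans 257 16-bit steps, i.e. 256 codes in between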

But a significant reason to consider floating point is that the image storage convention, black=0.0, white=1.0, allows an edit that pushes values (whites in particular) past 1.0 to retain a meaningful value, rather than be clipped as it would be in 8- or 16-bit integer formats.
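
A small sketch of that clipping point, again with NumPy and made-up values (0.9 as the “bright highlight”, a 2-stop push):

import numpy as np

pixel = 0.9  # a bright, not-quite-white value

# 16-bit unsigned integer: white is 65535, anything above is clipped
i16 = int(round(pixel * 65535))
i16_up = min(i16 * 4, 65535)  # +2 stops, clipped at white
print(i16_up / 4 / 65535)     # ~0.25 -- the original 0.9 is unrecoverable

# 32-bit float: white is 1.0 by convention, but 3.6 is still a legal value
f = np.float32(pixel)
print((f * 4) / 4)            # 0.9 -- pull the exposure back and it's all there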

Probably a bit more than you want to know. The essential thing is, if you’re going to change stuff in your image (sharpen, saturate, LUT-it-up, etc.), it’s a good idea to do that in at least 16-bit, and defer the crush to 8-bit for saving to the output file. However, if you’re always satisfied with the JPEG straight out of the camera, with maybe just a crop for good effect, you can probably live in 8-bit land.

Oh, if you shoot for Reuters, you’ll have to live with 8-bit. They don’t accept anything other than OOC JPEGs…

3 Likes

Smart decision for a press agency. And way less trouble (= less labor cost), too.

1 Like

Most stock photography agencies accept only 8-bit JPEGs in sRGB, too. Probably has to do with file size.

Ten years ago, “work in 8-bit integer unless you know that creates problems” might be sensible advice. And if you know the image will be edited once in non-linear colorspace such as sRGB, used once and then discarded, the advice is still okay.

But when an image is edited in linear colorspace, or might be re-purposed, or re-edited in the future, or we simply want the maximum headroom without having to worry, 16-bit integer is the minimum that I consider sensible. Some of my work needs 32-bit floating point, and that’s almost become my usual working practice, though it is often overkill.

I will add a note of caution about floating point: digital arithmetic sometimes gives results like 65535.999999 or 0.000001 or even -0.0000001. With integer arithmetic, these are rounded as we would expect, but floating-point files will retain these weird numbers, and operations like “turn all non-black pixels white” don’t work as we would expect.
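
A sketch of that trap with NumPy, using a hand-picked tolerance (the 1e-6 epsilon is arbitrary; the right value depends on your pipeline):

import numpy as np

pixels = np.array([0.0, 1e-7, -1e-7, 0.5, 1.0], dtype=np.float32)

naive = np.where(pixels != 0.0, 1.0, 0.0)           # exact test: arithmetic dust counts as "not black"
robust = np.where(np.abs(pixels) > 1e-6, 1.0, 0.0)  # tolerance test: dust stays black

print(naive)   # [0. 1. 1. 1. 1.] -- near-black noise became white
print(robust)  # [0. 0. 0. 1. 1.] -- what we actually meant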

Integer values in the range [-16777216, 16777216] are represented exactly in 32-bit float (including the value 65536, which you wrote as 65535.999999).
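
Easy to check (float32 carries a 24-bit significand, so ±2^24 = ±16777216 is exactly the limit):

import numpy as np

print(float(np.float32(16777216)))      # 16777216.0 -- exact
print(float(np.float32(16777217)))      # 16777216.0 -- the first integer that gets rounded
print(float(np.float32(65535.999999)))  # 65536.0 -- snaps to the nearest float32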

Yes, the result of a calculation may be exactly 0.0 or 65536.0 and so on. My point is that sometimes the result of a calculation may not be exactly so. We can demonstrate the effect:

$ echo 'scale=15; 1/49*49' | bc
.999999999999994

It’s a long time since I played with bc, but if you change to scale=14, isn’t there too big a jump between the results?

16 bits is not more info, it’s more progressive transitions, which limits posterization and quantization artifacts… The key concept here is 8/16-bit integers: every pixel manipulation is rounded to the closest integer, which ends up being quite a gap in the low lights. That’s why clever software inputs and outputs 8/16-bit integers, but works internally in 32-bit float (where no rounding happens until the very end of the pipe).
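
Here’s that shadow gap in miniature, with NumPy and a deliberately crude integer darken/brighten (real software is smarter, but the rounding issue is the same):

import numpy as np

shadows = np.arange(0, 8, dtype=np.uint8)  # the eight darkest 8-bit levels
back = (shadows // 2) * 2                  # -1 stop then +1 stop, all in integers

print(shadows)  # [0 1 2 3 4 5 6 7]
print(back)     # [0 0 2 2 4 4 6 6] -- adjacent levels merged: posterization

f = shadows.astype(np.float32)
print((f / 2) * 2)                         # [0. 1. 2. ... 7.] -- nothing lost in float

One code value is a tiny relative error at 255, but down at 2 or 3 it’s a sizeable fraction of the pixel’s whole value, which is why the damage shows up in the shadows first.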

That instructor should dig a hole and shoot himself in it; that’s beyond stupid. You don’t know if it’s going to be overkill until you run into a banding effect and discover you can trash your whole layer stack and start over in higher bit depth, or unless you work on gamma-encoded data instead of linearly-encoded data, which is – again – stupid. And I’m pretty sure Photoshop doesn’t use 32-bit float internally, but keeps whatever you feed it as is (open a 32-bit float TIFF in it and try to use the healing brush: PS will issue an error saying the healing brush only works in 8 or 16 bits, so my guess is it works on integer RGB).

Doing crap for 20 years only makes you an expert at crap. Use clever, reproducible, systematic workflows. No assumptions, no guesses, no nasty workarounds (except when you can’t avoid them).

Exactly… the 32-bit vs. 8-bit question comes down to RAM use (and CPU cache use, which is less and less of an issue since AVX-capable processors). No definitive answer here; it depends on what your CPU is and how well the software uses its optimization possibilities. Basically, 8 vs. 32 bits means bigger memory chunks to move between RAM and the CPU cache, so a bit more I/O latency. But there are several ways to overcome this, especially on modern CPUs (and GPUs), so the difference is negligible.
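
The RAM side is easy to put numbers on; here’s the footprint of a single hypothetical 24-megapixel RGB image at each working precision (sizes only; speed is the part that depends on CPU, cache and vectorization):

import numpy as np

h, w, channels = 4000, 6000, 3  # a made-up 24-megapixel image
for dtype in (np.uint8, np.uint16, np.float32):
    nbytes = np.dtype(dtype).itemsize * h * w * channels
    print(f"{np.dtype(dtype).name:>7}: {nbytes / 2**20:6.0f} MiB")

# uint8: 69 MiB, uint16: 137 MiB, float32: 275 MiB -- per full-size buffer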

I think this has more to do with ensuring the authenticity of the information and avoiding manipulated pictures and montages. See the World Press Photo and McCurry controversies in the past few years.

4 Likes

Do we really need statements like this? Not only are they mean-spirited, but this kind of hyperbole only serves to undercut the rest of your statement(s).

5 Likes

Yes we do. The issue of “experts” using their authority to propagate fake knowledge should be addressed with the utmost firmness.

You may still address it with firmness while not saying those kinds of things. In fact, omitting those types of communications makes people more likely to believe what you’re saying, improve their own practices, and engage with you further.

On the other hand, being condescending and hyperbolic only makes people avoid you.

5 Likes

Yes, I’m well aware that the 2010s are all about form and little about content. Being nice is more important than being accurate or competent. The thing is, when people ask for disdain, disdain I give them. It’s one thing to be wrong; that happens to everyone. It’s another to be wrong your whole career and not discover it. But, again, that can happen. What is unforgivable is to teach your wrongness for money when you could have done some research beforehand and gotten some clue that your beliefs were not backed up by evidence. That is asking for my wrath. Come on, Internet… You want to be a teacher? Show some due diligence. You are only one click away from Google Scholar.

You speak about form and content as if they’re mutually exclusive. They are not.

Nobody asked for disdain.

Still no, nobody is asking for wrath. Or disdain.

I thought the same thing when I read the first sentence of your last post.

Can you please stop those kinds of statements? They’re unnecessary.

4 Likes

I think I will stop posting fair and square instead. I don’t live in a state of mind where everyone is nice and trying to improve, and everyone deserves respect just for being alive. This world is becoming “Idiocracy” for real, the Internet is merely a reflection of that, and having to be polite with idiots makes it OK to stay an idiot. I have nothing to gain here except grey hair and ulcers, and repeating the same things over and over.

You can be accurate/competent and nice or at least polite. In fact, I’d say it’s a failing if one is unable to muster some tact and politeness when communicating.

3 Likes