Survey on the linear workflow

Also: what is ultra-low gamma?

1 Like

Speaking of unsharp mask: it produces nicer results when it is performed in a gamma-encoded color space and on the luminance channel, as darktable already does, no matter the bit depth.

Linear gamma isn’t the best choice for every operation.

For example:

Downscaling: sometimes linear gamma is better, sometimes gamma-encoded is better (see the sketch after this list)

Upscaling: gamma-encoded is always better

And so on
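
To make the downscaling case concrete, here is a minimal numpy sketch (an illustration of the general point, not from any of the tools discussed): averaging a one-pixel black/white checkerboard, which is what a 2× downscale does, in gamma-encoded sRGB versus in linear light. The transfer functions are the standard piecewise sRGB ones.

import numpy as np

# Standard sRGB transfer functions (piecewise definition)
def srgb_to_linear(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

# One-pixel black/white checkerboard: a 2x downscale is just an average.
encoded = np.array([0.0, 1.0, 0.0, 1.0])   # sRGB-encoded pixel values

naive = encoded.mean()                                     # average the encoded values
correct = linear_to_srgb(srgb_to_linear(encoded).mean())   # average in linear light

print(naive)    # 0.5    -> displays at only ~21% of white's luminance
print(correct)  # ~0.735 -> displays at 50% luminance, matching the scene

The encoded average displays too dark because the sRGB curve is not linear in light; whether that matters more than the ringing behaviour of fancier resampling filters is exactly the sometimes/sometimes above.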

Ahahah, probably linear gamma

BS. Like the rest of the thread.

Where did you get these ideas?

This is Lab unsharp masking from darktable, with radius = 4 px, amount = 200 % and threshold = 0.5 (aka insane values)

This is RGB iterative unsharp masking from image doctor, with 4 scales of radius = [2 ; 5], amount = 200 %, and threshold = -1 dB (aka same kind of insane values)

Where do you see the halos happening? Which one desaturates the red font on the book cover? (And yes, the second one pushes the noise more, but there are 4 unsharp masks stacked on top of each other, with noise reduction disabled for good measure.)

The “unsharp mask” method for sharpening creates halos, aka overshoot, aka acutance. This is a natural part of the process. In moderation, halos increase the impression of sharpness.
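
To see the overshoot in numbers, here is a minimal sketch of the classic USM formula, sharpened = original + amount × (original − blurred), with a 3-tap box blur standing in for the usual Gaussian, applied to a step edge:

import numpy as np

def unsharp_mask(img, amount):
    # 3-tap box blur as a cheap stand-in for the Gaussian
    padded = np.pad(img, 1, mode="edge")
    blurred = np.convolve(padded, np.ones(3) / 3, mode="valid")
    return img + amount * (img - blurred)

edge = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])   # a simple step edge

print(unsharp_mask(edge, amount=2.0))
# [ 0.2  0.2 -0.2  1.2  0.8  0.8]
# Undershoot below 0 on the dark side, overshoot above 1 on the light
# side: these are the halos, which clip when written to an integer format.

The overshoot falls out of the formula itself, regardless of whether the pixel values are linear or gamma-encoded.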

The physical darkroom USM process creates halos.

Digital USM processes create halos, whether the pixels are linear RGB or non-linear sRGB. For example, with ImageMagick, using extreme values on an sRGB input:

magick toes.png -unsharp 0x4+2+0 usm_sRGB.jpg

magick toes.png -colorspace RGB -unsharp 0x4+2+0 -colorspace sRGB usm_lin.jpg



Of course, there are techniques for reducing or eliminating halos, if we want. But simply using linear RGB isn’t a magic cure for USM halos.
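
One such technique, purely as an illustration (a sketch, not what any particular tool does): clamp the sharpened result to the local min/max of the original neighbourhood, which removes over- and undershoot at the cost of a harder-looking edge.

import numpy as np

def unsharp_mask_clamped(img, amount):
    padded = np.pad(img, 1, mode="edge")
    blurred = np.convolve(padded, np.ones(3) / 3, mode="valid")
    sharpened = img + amount * (img - blurred)
    # limit each pixel to the range spanned by its 3-pixel neighbourhood
    local_min = np.minimum(padded[:-2], np.minimum(padded[1:-1], padded[2:]))
    local_max = np.maximum(padded[:-2], np.maximum(padded[1:-1], padded[2:]))
    return np.clip(sharpened, local_min, local_max)

edge = np.array([0.2, 0.2, 0.35, 0.65, 0.8, 0.8])   # a soft ramp edge

print(unsharp_mask_clamped(edge, amount=2.0))
# [0.2  0.2  0.25 0.75 0.8  0.8]
# The ramp steepens, but nothing escapes the original local range: no halo.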

3 Likes

The toes tell the truth.

Perhaps another way of putting this is that our visual perception expects mixing operations to behave like physical mixing of pigments or lights. Because that’s what we know from the real world. So whenever possible we should strive to model our image processing on physical (aka linear) processes, as that’s what we perceive as “natural”.

You nailed it!

1 Like

Perhaps I should caution: “Partial Nudity”. Ha!

My comparison could be criticised: toes.png has been through some processing, so converting it to linear doesn’t make it scene-referred. Instead, we should use a fully-linear workflow. Fair enough.

So we use dcraw to make a linear version (with sRGB primaries but no transfer curve). Then crop to the toes, and divide by the maximum to stretch the values to the full range (retaining linearity), assign Elle’s linear sRGB profile, and save it as 32-bit floating-point.

%DCRAW% -v -W -w -4 -T -O AGA_1372_lin.tiff AGA_1372.NEF

%IMG7%magick ^
  -verbose ^
  AGA_1372_lin.tiff ^
  -crop 267x233+3033+4189 +repage ^
  -evaluate Divide %%[fx:maxima] ^
  -strip ^
  -set profile sRGB-elle-V4-g10.icc ^
  -set gamma 1.0 ^
  -define quantum:format=floating-point -depth 32 ^
  toes_lin.tiff

We do a USM on that linear image, and convert to sRGB for web convenience.

%IMG7%magick ^
  toes_lin.tiff ^
  -unsharp 0x4+2+0 ^
  -profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc ^
  toes_lin_usm.jpg

Repeat in the opposite order, so the USM is done in non-linear space:

%IMG7%magick ^
  toes_lin.tiff ^
  -profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc ^
  -unsharp 0x4+2+0 ^
  toes_nonlin_usm.jpg


The conclusions are unchanged: (1) Both versions show halos. (2) The halos are heavier (but more accurate) when the USM is done on the linear version. For more sensible amounts of USM, the difference is less noticeable.

PS: I’m not arguing against a linear workflow, just to beware of exaggerated claims.

1 Like

That result contradicts everything I have seen so far. Are you sure your file is properly decoded from the integer EOTF to linear before you apply the USM? It rather looks like you applied the gamma twice.

The dcraw command I used creates linear integer values. As far as I can tell, within experimental error, the values are proportional to the intensity of light hitting the sensor. See Linear camera raw.

Gamma is applied only once in each version, at …

-profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc

With such heavy USM, both versions have created values less than zero and greater than 100% of QuantumRange. Writing the output to JPG simply clips these values.

For avoidance of doubt: I don’t claim that conclusion (2) always applies. I don’t claim that halos are always worse when we do USM in linear coding. This only happens in light parts of images. The opposite happens in dark parts of images.
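
That level-dependence is easy to explore numerically. Here is a sketch (the same toy 3-tap USM as earlier, standard sRGB encode) that sharpens one scene-linear step edge both ways and prints the display-space values; slide the edge up and down the tone scale and the halo balance shifts between the two versions:

import numpy as np

def linear_to_srgb(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def usm(img, amount):
    padded = np.pad(img, 1, mode="edge")
    blurred = np.convolve(padded, np.ones(3) / 3, mode="valid")
    return img + amount * (img - blurred)

edge = np.array([0.2, 0.2, 0.2, 0.7, 0.7, 0.7])   # scene-linear step edge

print(linear_to_srgb(usm(edge, 0.5)))   # sharpen in linear, then encode
print(usm(linear_to_srgb(edge), 0.5))   # encode, then sharpen

For this particular edge, the linear version produces a deeper dark-side halo and a shallower light-side one in display terms, while the encoded version is symmetric; where the edge sits on the tone scale decides which looks worse.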

1 Like

This is consistent with what I have observed.

Xavier, I"m going back to read things between software compiles, and I think I read the above on my phone and neglected to respond…

I’ve recently come to the conclusion that the gamma transform is about display, and should be left to the actual act of prep for display/export. It should not be in a tool we use to manipulate the data. This, I think, is what got @anon41087856 going in the first place, and there’s a lot of discussion about doing/undoing it in the middle of the initial filmic threads.

My filmic tool has a power setting, but I put it there to compare results with the images in John Hable’s filmic posts. I keep it set at 1.0 all the time now, and may remove the parameter in the next rawproc version.

Doing a gamma transform to accommodate a tool, then swinging it back to linear (the reciprocal gamma) is two too many transforms for my taste. Every time tone is transformed, I think a bit of difference is dialed into the color, which cannot be recovered. I’d rather find a way to accommodate the tool’s means to comfortably deal with the linear data, or find a better tool for that job. Sorry, @age, I know what you’ve asked for, but I can’t get there… :slight_smile:

2 Likes

The characteristics of certain operations such as USM are well documented. Observe what happens when an unsharp operation is applied to a black line whose central pixel is 1:

gmic (1) r 100,1,1,1,0,0,.5,.5 +unsharp 2 dg 690 nm before,after out_ png

[Graphs: the 1-D signal before and after the unsharp operation]

This is one side of the coin. The other side is encoding. Poynton (2012, page 316) writes: “If gamma correction were not already necessary for physical reasons at the CRT, we would have to invent it for perceptual reasons.” See also here.
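
Poynton’s point can be made concrete with a quick count (a sketch using the standard sRGB decode, not taken from the book): of the 256 levels in an 8-bit channel, how many land in the darkest 20% of linear light, where the eye is most sensitive to steps?

import numpy as np

def srgb_to_linear(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

codes = np.arange(256) / 255.0            # the 256 levels of an 8-bit channel

print(np.sum(codes <= 0.2))                   # linear 8-bit:  52 codes
print(np.sum(srgb_to_linear(codes) <= 0.2))   # sRGB 8-bit:   124 codes

Storing 8-bit data linearly starves the shadows of code values; that is the perceptual reason for gamma encoding at the file/display boundary, and it is a separate question from which encoding a filter should compute in.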

Hermann-Josef

1 Like

Good article. I’ve seen it referenced elsewhere, but I hadn’t actually read it until now.

I don’t take issue with it, I just want to carefully consider where it needs to be applied. Right now, I work my images in the original energy relationship all the way to display or export, and for both of those right now I convert to something close to sRGB gamut and ~2.2 gamma.

I can turn off the display transform in rawproc, and you can see the difference in the two screenshots I posted here: 3.0 How to get good results automatically? - #66 by ggbutcher

Well, we say the same thing, for the most part.

In my words: everything that happens inside the computer and is not shown on the display must be done with linear-gamma data. Every time we need to see a result, the linear data should be transformed for display with the appropriate gamma, but the data itself remains untouched.

Only after all the processing is done, and we are happy with our results, is the image finally transformed with the appropriate gamma upon export (usually an sRGB gamma).

Up to now I haven’t said anything different from you. What I tried to say is that I have no idea whether an algorithm absolutely needs the data encoded with a certain gamma; that’s something you developers know better. But if the data has to be encoded, then after the algorithm has finished its task it must be returned to linear encoding.
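
In pipeline terms, that reads something like this sketch (gamma_hungry_tool is a hypothetical stand-in for a filter that insists on encoded input, and the clipping is a simplification):

import numpy as np

def linear_to_srgb(v):
    v = np.clip(v, 0.0, 1.0)   # simplification for the sketch
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def srgb_to_linear(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def gamma_hungry_tool(encoded):
    # hypothetical filter that only behaves well on gamma-encoded data
    return encoded

def process(raw_linear):
    buf = raw_linear.astype(np.float32)   # working data: linear float

    # ...linear-safe operations here: white balance, blending, scaling...

    # If one tool insists on encoded data, wrap it and come straight back:
    buf = srgb_to_linear(gamma_hungry_tool(linear_to_srgb(buf)))

    return buf                            # still linear when processing ends

def export_srgb_8bit(buf):
    # gamma is applied once, at the very end, for display or export
    return np.round(linear_to_srgb(buf) * 255.0).astype(np.uint8)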

I know about quantization errors with ICC profile operations, but perhaps it has to be tested whether an algorithm gives better results despite those encoding/decoding errors than it does with linear data.

If the algorithm doesn’t work well with linear data, I would vote for a different approach (a completely different algorithm), but obviously I’m not the developer; I’m not the one who does the hard work or who makes the decision.

Well, we all know USM is bad, old, and ugly. My point was that the ugliness should be lighter in linear.

That’s still output (as opposed to processing).

Quantization is a non-issue, since any modern image-processing program has an internal pipeline using 32-bit floats. You only care about that stuff when saving a file in an integer format, not while processing filters.
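
A quick numerical check of that claim (a sketch, nothing more): round-trip the sRGB curve in 32-bit float versus through an 8-bit intermediate.

import numpy as np

def linear_to_srgb(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def srgb_to_linear(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

v = np.linspace(0.0, 1.0, 100001, dtype=np.float32)

print(np.abs(srgb_to_linear(linear_to_srgb(v)) - v).max())
# ~1e-7: float32 encode/decode round-trips are effectively lossless

v8 = np.round(linear_to_srgb(v) * 255.0) / 255.0   # quantize to 8 bits
print(np.abs(srgb_to_linear(v8) - v).max())
# ~4e-3: real quantization error, but it only appears at save time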

That makes for a lot of people with image-processing experience converging on the same solution…

2 Likes

Thanks for participating. This is a friendly reminder that the purpose of this thread is to exchange information in a charitable manner. I am not interested in offhanded remarks and who agrees with whom. If you disagree on something then counter the claim with evidence to the contrary.

I didn’t do a good job of saying this in the post where I presented the (rather idealistic) USM graph. I didn’t write it to show that USM is ugly, but to invite the people arguing about gamma encoding to graph the cases where gamma helps or hurts. Such a discussion would be constructive and instructive.

1 Like