Survey on the linear workflow

Just dropping stuff that was sent to me yesterday by an anon user now banned from this forum:

Right. It’s been a while since I played with those. Bad memories…

1 Like

Please, again: a computer display, as we use it to view pictures, does not modulate the LED light emission itself (see Liquid-crystal display - Wikipedia). The constant emission of the LED backlight is modulated by a layer stack of liquid crystals, electrodes and polarization filters.

Edit: Sorry, discourse got the quotation wrong, but don’t know how to fix from my phone.

1 Like

Unburying that topic with some examples:


Left is blurred in scene-linear RGB then converted to sRGB for display. Right is blurred in sRGB prepared for display.

Lens blur applied in scene-linear RGB then converted to sRGB:

Lens blur applied in sRGB prepared for display:

original image by Hanny Naibaho:

https://images.unsplash.com/photo-1496307307731-a75e2e6b0959?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2048&q=80

Another self-explanatory example of accurate lens blur can be found on Chris Brejon’s website.

So I hope this will finally close that non-issue. Scene-linear is not just the best way to get a proper blur, it is the only way. And blurring is something you need all the time in image processing, to blend and soften masks, and also to sharpen pictures. That applies to compositing and background-switching too.
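
If you want to reproduce the comparison numerically, here is a minimal Python sketch of the two pipelines (numpy + scipy, standard IEC 61966-2-1 sRGB formulas). It is only an illustration of blurring in scene-linear vs. display-encoded sRGB, not the exact tool used for the screenshots above:

# Minimal sketch: Gaussian blur in scene-linear RGB vs. in display-encoded sRGB.
# img_srgb is assumed to be a float H x W x 3 image with values in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def srgb_to_linear(v):
    # IEC 61966-2-1 sRGB EOTF: decode to light-proportional values
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    # inverse transform: encode for display
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def blur_scene_linear(img_srgb, sigma=5.0):
    # decode, blur the photon-proportional values, re-encode for display (left image)
    return linear_to_srgb(gaussian_filter(srgb_to_linear(img_srgb), sigma=(sigma, sigma, 0)))

def blur_display_encoded(img_srgb, sigma=5.0):
    # blur the gamma-encoded values directly (right image)
    return gaussian_filter(img_srgb, sigma=(sigma, sigma, 0))

On a picture with bright highlights, the first version keeps the blurred light sources bright; the second dims and greys them, which is the difference the examples above illustrate.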

If you want to push it further: painting with actual pigments also happens in a scene-referred space, which doesn’t prevent you from mixing colours, even though those pigments don’t give a flying sh*** about your perception.

All in all, the challenge is only to make software GUIs speak scene-referred, which is not entirely possible yet because of bad designs that mix view/model/controller in twisted ways. But clean up your pipeline, remove hard-coded assumptions about the grey or white value, and you are done!

5 Likes

True

Partially untrue, for convolution sharpening

All of this is well known and extensively tested

While blurring might be better at an ultra-low gamma, sharpening and addition of noise are worse, so having filter operations take place at a low gamma generally is a bad idea.
Dan Margulis, 2002

https://www.ledet.com/margulis/ACT_postings/ColorCorrection/ACT-linear-gamma.htm

Maybe linear gamma is only better for deconvolution sharpening (deblur, capture sharpening)

Deconvolution is only a fancy iterative unsharp-masking to remove lens blur. Lenses work on photons, therefore in scene-referred space. There is absolutely no reason to perform deconvolution in non-linear spaces, unless you want to sharpen noise.

The post you are quoting comes from 2002, when I believe image processing was done in 8-bit integers, and in that context maybe the benefit of avoiding posterization while deconvolving integers outweighed the artifacts it creates.

But in 2020, any serious software works with 32-bit floats, and deconvolution needs scene-linear RGB because it needs to blur first in order to deblur afterwards.
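
For reference, the core of that “blur first to deblur then” loop is the Richardson–Lucy iteration. A minimal, generic sketch on one scene-linear channel (textbook version, not darktable’s or anybody else’s actual implementation):

# Minimal Richardson-Lucy deconvolution sketch on one scene-linear channel.
# Textbook iteration, not any particular software's implementation.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    # observed: non-negative, light-proportional channel; psf: 2D blur kernel
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(observed.shape, float(observed.mean()))
    for _ in range(iterations):
        # blur the current estimate with the PSF ("blur first")...
        reblurred = fftconvolve(estimate, psf, mode="same")
        # ...then compare with the observation and back-project the ratio ("to deblur then")
        ratio = observed / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

The multiplicative update only makes sense on values that are proportional to the light captured by the sensor, which is another way of saying it belongs in scene-linear RGB.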

That Timo guy they made fun of in 2002 on that mailing list was actually right, and the rest of those jerks are assholes. Most of that thread is pure BS that is proven wrong every day by the 3D compositing world.

PSFs are physically-defined objects. There is no “partial maybe” about them, any more than there are gamma-corrected lenses.

Also: what is ultra-low gamma?

1 Like

Speaking of unsharp mask, it produces nicer results when it is performed in a gamma-encoded color space and on the luminance channel, like darktable already does, no matter the bit depth.
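
For the record, this is roughly what “USM on the luminance channel of a gamma-encoded image” looks like. The sketch below uses a Rec. 709 luma of the encoded values as a crude stand-in for darktable’s Lab L channel, so treat it as an illustration of the idea, not darktable’s code:

# Rough sketch: unsharp mask applied only to a luminance-like channel of
# gamma-encoded data. Rec. 709 luma stands in for darktable's Lab L channel.
import numpy as np
from scipy.ndimage import gaussian_filter

def usm_on_luma(img_srgb, sigma=4.0, amount=2.0):
    # img_srgb: float H x W x 3, gamma-encoded, values in [0, 1]
    luma = img_srgb @ np.array([0.2126, 0.7152, 0.0722])
    sharpened = luma + amount * (luma - gaussian_filter(luma, sigma))
    # scale all three channels by the luma gain so chroma is left mostly untouched
    gain = sharpened / np.maximum(luma, 1e-6)
    return np.clip(img_srgb * gain[..., None], 0.0, 1.0)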

Linear gamma isn’t the best choice for every operation.

Examples:

Downscaling: sometimes linear gamma is better, sometimes gamma-encoded is better.

Upscaling: gamma-encoded is always better.

And so on

Ahahah, probably linear gamma

BS. Like the rest of the thread.

Where did you get these ideas?

This is Lab unsharp masking from darktable, with radius = 4 px, amount = 200 % and threshold = 0.5 (aka insane values)

This is RGB iterative unsharp masking from image doctor, with 4 scales of radius = [2 ; 5], amount = 200 %, and threshold = -1 dB (aka same kind of insane values)

Where do you see the halos happening? Which one desaturates the red font on the book cover? (And yes, the second one pushes the noise more, but there are 4 unsharp masks stacked on top of each other with noise reduction disabled, for good measure.)

The “unsharp mask” method for sharpening creates halos, aka overshoot, aka acutance. This is a natural part of the process. In moderation, halos increase the impression of sharpness.

The physical darkroom USM process creates halos.

Digital USM processes create halos, whether the pixels are linear RGB or non-linear sRGB. For example, with ImageMagick, using extreme values on an sRGB input:

magick toes.png -unsharp 0x4+2+0 usm_sRGB.jpg

magick toes.png -colorspace RGB -unsharp 0x4+2+0 -colorspace sRGB usm_lin.jpg



Of course, there are techniques for reducing or eliminating halos, if we want. But simply using linear RGB isn’t a magic cure for USM halos.
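
The overshoot is also easy to check numerically. A tiny sketch using the standard USM formula out = in + amount × (in − blur(in)), nothing ImageMagick-specific, applied to a step edge in linear and in (roughly) display-encoded values:

# Tiny check: USM overshoots on a step edge whether the values are linear or encoded.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def usm(signal, sigma=4.0, amount=2.0):
    return signal + amount * (signal - gaussian_filter1d(signal, sigma))

edge_linear = np.where(np.arange(100) < 50, 0.2, 0.8)   # step edge in linear values
edge_encoded = edge_linear ** (1 / 2.2)                  # crude display encoding

print(usm(edge_linear).min(), usm(edge_linear).max())    # dips below 0.2, peaks above 0.8
print(usm(edge_encoded).min(), usm(edge_encoded).max())  # under/overshoot around the encoded levels

Both prints show values pushed below the dark side and above the bright side of the edge: halos in both workflows.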

3 Likes

The toes tell the truth.

Perhaps another way of putting this is that our visual perception expects mixing operations to behave like physical mixing of pigments or lights. Because that’s what we know from the real world. So whenever possible we should strive to model our image processing on physical (aka linear) processes, as that’s what we perceive as “natural”.

You nailed it!

1 Like

Perhaps I should caution: “Partial Nudity”. Ha!

My comparison could be criticised: toes.png has been through some processing, so converting it to linear doesn’t make it scene-referred. Instead, we should use a fully-linear workflow. Fair enough.

So we use dcraw to make a linear version (with sRGB primaries but no transfer curve). Then crop to the toes, and divide by the maximum to stretch the values to the full range (retaining linearity), assign Elle’s linear sRGB profile, and save it as 32-bit floating-point.

%DCRAW% -v -W -w -4 -T -O AGA_1372_lin.tiff AGA_1372.NEF

%IMG7%magick ^
  -verbose ^
  AGA_1372_lin.tiff ^
  -crop 267x233+3033+4189 +repage ^
  -evaluate Divide %%[fx:maxima] ^
  -strip ^
  -set profile sRGB-elle-V4-g10.icc ^
  -set gamma 1.0 ^
  -define quantum:format=floating-point -depth 32 ^
  toes_lin.tiff

We do a USM on that linear image, and convert to sRGB for web convenience.

%IMG7%magick ^
  toes_lin.tiff ^
  -unsharp 0x4+2+0 ^
  -profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc ^
  toes_lin_usm.jpg

Repeat in the opposite order, so the USM is done in non-linear space:

%IMG7%magick ^
  toes_lin.tiff ^
  -profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc ^
  -unsharp 0x4+2+0 ^
  toes_nonlin_usm.jpg


The conclusions are unchanged: (1) Both versions show halos. (2) The halos are heavier (but more accurate) when the USM is done on the linear version. For more sensible amounts of USM, the difference is less noticeable.

PS: I’m not arguing against a linear workflow, just to beware of exaggerated claims.

1 Like

That result contradicts everything I have seen so far. Are you sure your file is properly decoded from the integer EOTF to linear before you apply the USM? It rather looks like you applied the gamma twice.

The dcraw command I used creates linear integer values. As far as I can tell, within experimental error, the values are proportional to the intensity of light hitting the sensor. See Linear camera raw.

Gamma is applied only once in each version, at …

-profile %ICCPROF%\sRGB-elle-V4-srgbtrc.icc

With such heavy USM, both versions have created values less than zero and greater than 100% of QuantumRange. Writing the output to JPG simply clips these values.

For the avoidance of doubt: I don’t claim that conclusion (2) always applies. I don’t claim that halos are always worse when we do USM in linear coding. This only happens in light parts of images; the opposite happens in dark parts.

1 Like

This is consistent with what I have observed.

Xavier, I’m going back to read things between software compiles, and I think I read the above on my phone and neglected to respond…

I’ve recently come to the conclusion that the gamma transform is about displaying, and should be left to the actual act of preparing for display/export. It should not be in a tool we use to manipulate the data. This, I think, is what got @anon41087856 going in the first place, and there’s a lot of discussion about doing/undoing it in the middle of the initial filmic threads.

My filmic tool has a power setting, but I put it there to compare results with the images in John Hable’s filmic posts. I keep it set at 1.0 all the time now, and may remove the parameter from the next rawproc version.

Doing a gamma transform to accommodate a tool, then swinging it back to linear (the reciprocal gamma) is two too many transforms for my taste. Every time tone is transformed, I think a bit of difference is dialed into the color, which cannot be recovered. I’d rather find a way to accommodate the tool’s means to comfortably deal with the linear data, or find a better tool for that job. Sorry, @age, I know what you’ve asked for, but I can’t get there… :slight_smile:

2 Likes

The characteristics of certain operations such as USM are well documented. Observe what happens after an unsharp operation of a black line whose central pixel is 1:

gmic (1) r 100,1,1,1,0,0,.5,.5 +unsharp 2 dg 690 nm before,after out_ png

before

after
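
For anyone who cannot see the graphs, the same experiment is trivial to redo in Python (generic USM formula, not G’MIC’s exact unsharp implementation):

# Redo the impulse experiment: unsharp-mask a black line whose central pixel is 1
# and look at the negative lobes that appear around it. Generic USM, not G'MIC's code.
import numpy as np
from scipy.ndimage import gaussian_filter1d

impulse = np.zeros(100)
impulse[50] = 1.0

sharpened = impulse + 2.0 * (impulse - gaussian_filter1d(impulse, 2.0))

print(sharpened.max())   # the central pixel is pushed well above 1
print(sharpened.min())   # negative dips appear on each side of the spike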