RAW developer and other modules

I am still debugging the new auto-CA correction code, and there are quite a lot of output messages… However, only the loading phase should be slower, while the processing should in principle be faster.

Thanks, I will have a look

Not really… Lab has a finite gamut, like other colorspaces. I am using Lab because it is easier to interpret the values: lightness is decoupled from color, and the chroma is derived directly from the (a,b) values.
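To make that concrete, chroma and hue follow directly from the (a, b) pair via the standard LCh formulas; a minimal sketch:

```python
import math

def lab_to_lch(L, a, b):
    """Convert CIE Lab to LCh: lightness is untouched, while chroma and
    hue are derived purely from the (a, b) pair."""
    C = math.hypot(a, b)                         # chroma = sqrt(a^2 + b^2)
    h = math.degrees(math.atan2(b, a)) % 360.0   # hue angle in degrees
    return L, C, h

# A mid-lightness, moderately saturated reddish color:
print(lab_to_lch(50.0, 40.0, 30.0))   # -> (50.0, 50.0, ~36.9)
```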

Please have a look. There was a bug

  1. Should loading PFIs be as slow as RAWs? Would it be a good idea to store the CA info in the PFI?
  2. Should auto-CA happen on load? By default, the settings in Corr → lens corrections are disabled.
  1. Is it large enough to cover all of the RGB spaces that come with PF?
  2. How are out-of-range and out-of-gamut values represented in Lab?

I just posted in the GitHub issue… I do not see the artefacts in PhF :open_mouth:

Loading a PFI that processes a RAW file results in loading the RAW itself, so the same slowness applies.
I opted not to store the CA info because this way it will be possible to automatically take advantage of future improvements in auto-CA detection. Also, there is still room for improving the RAW loading speed and the auto-CA analysis phase.

The auto-CA works in two steps: an “analysis” phase, which computes the correction factors and is always executed, and a “correction” phase, which modifies the image data and is controlled by the checkbox in the Corr tab.
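In very rough pseudo-code the flow looks like this (just a sketch of the logic with toy helpers, not the actual PhotoFlow source):

```python
import numpy as np

def analyze_ca(rgb):
    """Toy "analysis" phase: estimate how far the R and B planes are
    displaced relative to G (the real analysis is far more involved;
    this placeholder just reports "no CA found")."""
    return {"r_shift": (0, 0), "b_shift": (0, 0)}

def correct_ca(rgb, f):
    """Toy "correction" phase: shift the R and B planes back by the
    estimated offsets."""
    out = rgb.copy()
    out[..., 0] = np.roll(rgb[..., 0], f["r_shift"], axis=(0, 1))
    out[..., 2] = np.roll(rgb[..., 2], f["b_shift"], axis=(0, 1))
    return out

def develop(rgb, apply_correction):
    factors = analyze_ca(rgb)         # analysis: always executed
    if apply_correction:              # correction: gated by the Corr-tab checkbox
        rgb = correct_ca(rgb, factors)
    return rgb

img = np.zeros((8, 8, 3), dtype=np.float32)
_ = develop(img, apply_correction=False)   # analysis runs, pixels stay untouched
```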

I think that at least the ACES colorspace is wider than Lab, and it might even be that some of the very saturated colors that can be recorded by modern digital cameras are also outside of the Lab gamut. However, AFAIK such colors cannot be generated by any existing output device, therefore I would consider them irrelevant in practical terms.

The normal range of Lab values is L=[0…100] and a,b=[-127…128]. When doing conversions in floating-point precision and unbounded mode, out-of-gamut colors will be represented by Lab values outside of such ranges. Also very bright colors in HDR images can be represented by L values above 100.
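A small numeric example, using the plain XYZ → Lab formulas with a D65 white point (just to illustrate the unbounded behaviour, not PhotoFlow code):

```python
import math

XN, YN, ZN = 0.95047, 1.0, 1.08883   # D65 reference white

def f(t):
    d = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0

def xyz_to_lab(X, Y, Z):
    fx, fy, fz = f(X / XN), f(Y / YN), f(Z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(xyz_to_lab(0.3, 0.4, 0.2))    # ordinary color: L≈69, a≈-28, b≈34
print(xyz_to_lab(2.0, 2.0, 2.0))    # HDR white (Y > 1): L≈130, above 100
print(xyz_to_lab(0.05, 0.6, 0.05))  # extreme green: a≈-234, far out of range
```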

Good to know the thought process behind auto-CA. Curiosity satisfied.

Interesting tidbit about ACES. Actually, I have been using ACES in recent PlayRaw workflows.

Maybe it is just a typo, but shouldn’t it be a,b=[-128…127]? Proof:

From the first link:

In theory there are no maximum values of a* and b*, but in practice they are usually numbered from -128 to +127 (256 levels).

So [-128…127] is just a convention to match [0…255], but it should not matter in PF where everything is done in float.
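As a minimal sketch of that convention (the usual +128 offset used by 8-bit integer Lab encodings, as far as I understand it):

```python
def encode_ab_8bit(x):
    """Map an a* or b* value in [-128, 127] to the 8-bit range [0, 255]
    with a simple +128 offset."""
    return int(round(x)) + 128

print(encode_ab_8bit(-128), encode_ab_8bit(0), encode_ab_8bit(127))   # 0 128 255
```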

The CIE Lab colour model encompasses the entire spectrum, including colours outside of human vision.

So it covers everything shown in the chromaticity diagram.

Found this along the way:

I’ve never really liked the results of tonemapping algorithms (over the years I’ve tried quite a few), and so have always tonemapped by hand using masks and layers. But this filmic tonemapping is very different and really excellent, actually very film-like.

It’s probably been years since I added any new editing tools to my small arsenal of “go to” editing algorithms, well, apart from GIMP-2.9’s LCH-based tools. But the filmic algorithm is something that I anticipate using quite a lot. It really does allow you to add exposure and compress the highlights in a way that looks natural.

@Carmelo_DrRaw - thanks for the link to the page that explains the filmic tonemapping! And many, many thanks for the filmic tonemapping algorithm.

If you read this page from Bruce, you will see that the integer representation of Lab excludes some real colors (particularly in the green hue range, see the bottom of the page). This limitation does not hold if one uses floating-point values and allows for (a,b) values outside of the [-128…127] range. By the way, thanks for fixing the typo in the (a,b) range definition!

However, I am pretty sure that no monitor or printer will be able to actually produce such colors, so for me this limitation is “practically irrelevant”. Experts can correct me if I am wrong.

I agree, I am starting to apply it by default more and more to my images… However, there is really nothing very fancy in it, it is just a non-linear curve that is roughly S-shaped on a perceptual scale. Nevertheless, it really gives a “natural” boost to mid-tone contrast.

You should also try to modify the “preserve colors” slider, to see its effect. Sometimes moving it all the way to “1” produces even more natural results…
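For the curious, the general shape is that of the classic Hable “filmic” operator from the Filmic Worlds article; a minimal sketch with the constants from that article (the PhotoFlow parametrization exposes the same kind of constants, but the defaults and exact formula may differ):

```python
def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    # A = shoulder strength, B = linear strength, C = linear angle,
    # D = toe strength, E = toe numerator, F = toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def filmic(x, linear_white=11.2):
    """Normalize so that the chosen linear white point maps to 1.0;
    values above 1 are compressed smoothly instead of clipping."""
    return hable(x) / hable(linear_white)

for v in (0.18, 1.0, 2.0, 11.2):
    print(v, round(filmic(v), 3))   # 0.067, 0.304, 0.493, 1.0
```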

Lab

Good to know that a,b=[-128…127] does not cover all possible colors and that the range is not a scaled representation of the entire spectrum.

Filmic

This happens when I use ACES:

tone mapping > preserve colors = 1
RAW > Color > working profile = ACES linear 

No clipping
name: [tm-00]
min : -5633875116032
max : 1299298123776
mean: -3475430.2141832756
std : 2461316913.8421001
rang: 6933173239808

Clip overflow
name: [tm-01]
min : -5633875116032
max : 1299298123776
mean: -3475430.2141898782
std : 2461316913.8421001
rang: 6933173239808

Clip negative
name: [tm-10]
min : -2528173096960
max : 0.83970385789871216
mean: -1127271.6365228517
std : 777916169.15958655
rang: 2528173096960.8398

Clip both
name: [tm-11]
min : -2528173096960
max : 0.80282509326934814
mean: -1127271.636529814
std : 777916169.15958655
rang: 2528173096960.8027

Load time

Observations from loading my orchid photo from [PlayRaw] Flowers Flowers Flowers!

  1. RAW and PFI load time, from opening to the completion of updating, is 29-38 s.
  2. Closing the image tab while loading makes PF crash.

GIMP’s LCH-based tools (blending modes and hue shift) are literally the only reason I’m keeping GIMP 2.9.5 and higher. I have not found a free solution for LCH-based tools besides programming them myself, but that path sounds tedious. I don’t know when they’ll come to PhotoFlow, but when they do, I’ll probably be using PhotoFlow over GIMP for my needs.

LCH tools have been on my TODO list for a while, and I can give that priority if there is a request… I’ll keep you updated.


Could you explain a bit about “roughly S-shaped on a perceptual scale”? My impression was that these curves approximate “piecewise logarithmic” curves, to use the terminology from this follow-up article:

but maybe the revised version of the filmic tonemapping works completely differently from the original version?

The PhotoFlow filmic tonemapping does require a lot of fiddling to get nice results, with each parameter interacting with the other parameters, sometimes even producing a solid white or solid black result (some of the parameters shouldn’t be pushed all the way to the left).

Also, there doesn’t seem to be a set of parameters that produces “no change”, something the “Piecewise Power Curves” article talks about.

Would you be willing to add the modified filmic curve from the second article to PhotoFlow, not as a replacement but as a new module, maybe “filmic1” and “filmic2”?

Anyway, filmic might just be “curves”, but there is no way using GIMP curves that I’d be able to produce the same results as the filmic tonemapping algorithm.

I’m still trying to work out which filmic parameters correspond to the various portions of a characteristic curve. Is “linear angle” related to “gamma”, which I think in this context refers roughly to the slope of the more or less linear portion of the characteristic curve?

Anyone have an explanation for “toe number” and “linear white point”?

The second Filmic Worlds article has a link to a nice explanation of characteristic curves (though the linked page currently shows “Page not published”).

And for anyone who really likes diagrams and equations:

I have similar questions about filmic.

Yes, I was referring to that above. E.g., setting preserve colors to 1 in ACES linear causes problems. Something might be amiss.

Hi afre,

I wasn’t sure - and I’m still not sure :slight_smile: - what the post you referred to was about. But the ACES color space wasn’t designed for editing; it was designed for storage (see the discussion on Google Groups).

ACES isn’t just one color space but rather an entire processing and storage protocol that includes several RGB color spaces. If there is a reason why you want to work using one of the ACES color spaces, I’d recommend ACEScg.

For many RGB editing operations (those based on add and subtract), results are the same in all RGB working spaces.

For many other RGB editing operations (those based on multiply and divide by any color other than gray, and also those that modify individual channel information such as mono-mixer/channel mixer and using Levels/Curves to modify individual channels), results very much depend on the RGB primaries. I explained this - with illustrations and in excruciating detail :slight_smile: - in several articles on my website on why unbounded sRGB is unsuitable as a universal RGB working space. But the principles apply to all RGB working spaces - unbounded or otherwise - simply because many editing operations produce different results in different RGB working spaces, depending on the primaries.
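A tiny numeric demonstration of that point, using a placeholder 3×3 “change of primaries” matrix (made-up values, not a real color space - any invertible matrix makes the point): converting, multiplying channel-wise by a non-gray color, and converting back does not match multiplying in the original space, whereas addition does.

```python
import numpy as np

# Placeholder change-of-primaries matrix (illustrative values only)
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])
M_inv = np.linalg.inv(M)

a = np.array([0.5, 0.2, 0.1])   # a pixel
b = np.array([0.8, 0.4, 0.2])   # a non-gray "multiply" color

# Addition commutes with the (linear) space conversion...
print(np.allclose(a + b, M_inv @ ((M @ a) + (M @ b))))   # True

# ...but channel-wise multiplication does not:
print(a * b)                          # multiply in space A
print(M_inv @ ((M @ a) * (M @ b)))    # multiply in space B, convert back: different
```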

Changing topics, I’m fairly certain that “preserve colors” set to 1 in the filmic tonemapping is the same as blending the results using Luminance blend mode - results match doing this in GIMP starting with filmic tonemapping done in PhotoFlow on a “luminance-based” black and white rendition.

Luminance blend mode can produce very vivid colors in any RGB working space, especially in the brighter colors, if these brighter colors were already fairly colorful in the original image and the tonality that’s being blended is considerably brighter than the original tonality.
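One common way to implement such a luminance-only blend - and, I suspect, more or less what “preserve colors” = 1 does, though I haven’t checked the code - is to scale the original RGB by the ratio of the new luminance to the old one; a minimal sketch for a single pixel:

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luminance weights (linear RGB)

def luminance_blend(original_rgb, new_tonality_rgb, eps=1e-9):
    """Keep the original hue/chroma, take the tonality from the other
    rendition: scale RGB by the luminance ratio."""
    y_orig = original_rgb @ W
    y_new = new_tonality_rgb @ W
    return original_rgb * (y_new / max(y_orig, eps))

orig = np.array([0.9, 0.3, 0.1])          # a bright, fairly colorful pixel
bw = np.full(3, orig @ W) ** (1 / 2.2)    # a brighter B&W rendition of it
print(luminance_blend(orig, bw))          # ~[1.46, 0.49, 0.16]: red pushed above 1.0
```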

Getting back to ACES, many people have thought “Oh, I’ll use ACES because it holds all the visible colors”. But when dealing with camera raw files, for many cameras - and depending on how the input profile was made - bright yellow and dark violet-blue colors are going to be interpreted as outside the visible colors, that is, as imaginary colors. The solution to keeping all the colors isn’t using a larger RGB working space but rather is using whatever procedure you can find or devise that brings out of gamut colors back into gamut.

A serious problem with using any large (as in considerably larger than your monitor profile’s color gamut) RGB working space is that you can easily produce colors your monitor can’t display. PhotoFlow soft proofing can be used to soft proof to your monitor profile, using the gamut check, which is a nice way to see just how many colors are outside your monitor profile’s color gamut.

I was probably not very accurate with my statement… the resulting curve is “kind of” S-shaped, but the key point is that it handles values > 1 by bringing them back analytically into the [0…1] range.

Thanks for pointing to the follow-up article, I’m coding the new formulas right now.

Wow, awesome! thanks!

This is what led to what I said.

That does not happen when I do not clip the negative values in ACES, ACEScg, or Rec.2020. The resultant max values are extremely large and, when I scale or clip everything to [0…1], I get a very flat grey image.
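A toy illustration of why such extreme outliers flatten everything when scaling to [0…1] (made-up numbers, not my actual pixel data):

```python
import numpy as np

# Ordinary pixel values plus two absurd outliers, similar in spirit
# to the min/max stats above (numbers are made up).
pix = np.array([0.02, 0.18, 0.45, 0.80, 1.2, -5.6e12, 1.3e12])

scaled = (pix - pix.min()) / (pix.max() - pix.min())
print(scaled)   # every ordinary pixel collapses onto ~0.81 -> a flat grey image
```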

In my case, not solid black or white but very flat. (This ties into the discussion of what filmic and its parameters actually do, which led me to think about the things that @Elle has just asked about.) I guess the takeaway is that filmic and possibly other modules in PF are simply not equipped to handle images with negative values.

As for the negative values that result from filmic, they only appear when I use ACES, no matter which values I clip or do not clip. Based on what @Carmelo_DrRaw and @Elle have said, these colors are out-of-gamut, but I didn’t expect them to be this extreme.

Anyway, I look forward to filmic2.

Oh, I think I know what you mean. Not too long ago I filed an official GIMP bug report complaining about a layer mask turning solid gray after doing an autostretch operation, and someone kindly pointed out that the mask turned gray because of the negative channel values. Oops! Anyway, @Carmelo_DrRaw's advice to clip the negative channel values during raw processing seems like excellent advice.

The images I’ve been using for exploring the filmic options didn’t have any negative channel values, so the filmic “solid black or white” results from pushing a couple of the parameters all the way to the left is from something other than negative channel values.

This has to be considered a bug, and needs to be corrected. The easiest way of dealing with negative values is to introduce functional relations of the type

f(-x) = -f(x)
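In code this would amount to something like the following (just a sketch using a simple Reinhard-style curve as a stand-in, not the actual PhotoFlow implementation):

```python
def extend_odd(curve):
    """Extend a tone curve defined for x >= 0 to negative inputs via the
    odd symmetry f(-x) = -f(x)."""
    return lambda x: curve(x) if x >= 0 else -curve(-x)

reinhard = lambda x: x / (1.0 + x)      # stand-in curve, defined for x >= 0
filmic_like = extend_odd(reinhard)

print(filmic_like(0.5))    #  0.333...
print(filmic_like(-0.5))   # -0.333..., mirrored around the origin
```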

Could you share an example of badly behaving filmic adjustment?

Thanks!