PhotoFlow News and Updates

Hi, I played around with version photoflow-w64-20171117_1335-git-linear_gamma-f8a7a56b9ce32082b4afc7776a14cfe2a4267391 and noticed the following things:

  • blown-out highlights are shown in pink, even though highlight reconstruction is on
  • basic adjustments: the L curve is missing in the HSL curves tool.
  • freehand drawing: when the layer mode is overlay, soft light or similar, and the background color is set to transparent, the image is darkened.

Cheers
Stephan

Time again for some optimizations. In this case I did some work on the B&W conversions, and prepared the ground for optimizing the ICC conversions (LCMS2 is currently used everywhere, but it is known to be quite slow).

Photoflow has a desaturate tool that provides three methods for converting an image to grayscale:

  1. luminance (LCh) is the first, default, and recommended method. It boils down to a conversion to CIELCh, after which the C channel is set to 0 to discard the color information, followed by a conversion back to the initial colorspace. This method provides consistent results that are independent of the original RGB colorspace.
  2. lightness (HSL) computes the average between the minimum and maximum RGB values: L = (M + m)/2
  3. average computes the average of the RGB channels: L = (R + G + B)/3

The second and third methods are less accurate, but can be useful in certain cases.
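As an aside, the two simpler methods boil down to one-line formulas; here is a minimal sketch (my own illustration, not PhotoFlow's actual code):

```python
def lightness_hsl(r, g, b):
    """Lightness (HSL): midpoint of the minimum and maximum RGB values."""
    return (max(r, g, b) + min(r, g, b)) / 2.0

def average(r, g, b):
    """Plain average of the three RGB channels."""
    return (r + g + b) / 3.0
```

Note how a pure red pixel gets L = 0.5 from the HSL method but only 1/3 from the average, which is one reason the two results differ visibly.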

Today I have optimized the luminance method, obtaining a 3x speedup for sRGB images and almost a 10x speedup for linear-gamma images. The speedup comes from the fact that I now avoid a full LCMS conversion to CIELab; instead I manually compute the XYZ luminance Y using the RGB primaries of the initial colorspace, and then set R=G=B=Y. If the initial data is gamma-encoded, the RGB values are linearized before computing Y, and then gamma-encoded again at the end.
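Conceptually, the optimized path looks something like the following sketch (my own illustration, assuming sRGB input; the Y weights shown are the sRGB/Rec.709 ones, whereas PhotoFlow derives them from the image's actual ICC primaries):

```python
# Sketch of the luminance desaturation: decode the TRC, compute the
# XYZ luminance Y from the colorspace's Y weights, re-encode.
Y_WEIGHTS = (0.2126, 0.7152, 0.0722)  # sRGB/Rec.709 luminance weights

def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055

def desaturate_luminance(r, g, b):
    lin = [srgb_to_linear(c) for c in (r, g, b)]
    y = sum(w * c for w, c in zip(Y_WEIGHTS, lin))
    v = linear_to_srgb(y)
    return (v, v, v)  # R = G = B = Y, gamma-encoded again
```

For linear-gamma data the two TRC functions drop out entirely, which is where the extra speedup for linear images comes from.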

The effect of the different B&W conversion methods can be compared using this “rainbow” image:

Luminance conversion:

Lightness conversion:

Average method:

One can clearly see how the luminance method better matches the perceived image “brightness” (not sure if this is the correct technical word…).

Updated packages will be available soon.


By the way, it looks like the latest OSX package works fine on your system…

So it seems (more thorough testing needed) => OS 10.11.6
Glad the Travesty’s taking some load off you :grinning:

As soon as I find a couple of hours for this, I will add a visualization of the filmic curve to guide the user when adjusting the parameters

:point_up_2: you’re the man :facepunch:

Thanks @Carmelo_DrRaw for remembering my requests that are scattered among multiple threads! Part of the reason that I like PF :slight_smile:.

Edit: just checking. Since you use ICC profiles, PF’s CIELAB and CIELCH use D50, right?

Yes, in particular the luminance is relative to D50…

Also, the way you used the terms luminance and lightness is different from my understanding. Remember this post? However, the L in HSL is also called lightness; so confusing… Concisely, there is L* (LAB) lightness and L (HSL) lightness.

BTW, I am really digging the switchable curves linear - log !
:+1::pray:

I agree… let’s see if I manage to explain things correctly; maybe @Elle can step in if I say something wrong:

  • luminance is the second component of an XYZ triplet. Relative luminance is Y divided by the luminance of the reference white point Yn (Yr = Y/Yn). However, the RGB → XYZ matrix given by the RGB primaries of an ICC profile is such that Yn = 1, therefore the Y value computed from the ICC primaries is actually Yr.
  • lightness (L*) is Yr encoded with the L* TRC (CIELAB color space - Wikipedia)
  • lightness (HSL) is a different thing, which unfortunately goes by the same name as L*…

The reason why I have used the term luminance in the B&W conversion tool is that the resulting image is encoded with the same TRC as the initial RGB data, and not necessarily with the L* TRC. However, the resulting image is equivalent to L* lightness.
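For reference, the standard CIE relation between relative luminance Yr and L* lightness can be sketched like this (my own illustration, using the exact CIE constants):

```python
def lab_lightness(yr):
    """CIELAB L* in [0, 100] from relative luminance Yr in [0, 1]."""
    eps = 216.0 / 24389.0   # threshold below which the linear part applies
    kappa = 24389.0 / 27.0  # slope of the linear part near black
    if yr > eps:
        return 116.0 * yr ** (1.0 / 3.0) - 16.0
    return kappa * yr
```

So setting R=G=B=Y while keeping the original TRC, versus encoding Y with the L* TRC, are just two encodings of the same underlying grayscale information.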

Finally, the luminance blend mode in PhF does exactly what Elle describes in the article you are referring to. Moreover, since a few days ago the LCH Lightness blend mode is also available…

Hope this is shedding a bit of light on this lexical mess!

This switch allows you to edit a linear RGB image using a curve represented with perceptually uniform axes.

Here is an example of a curve applied to a linear image and represented in perceptual coordinates. Notice the non-trivial behaviour of the curve in the dark tones, which is not a mistake but the result of the linear part of the L* TRC:

The same curve in linear coordinates… notice how much less lever arm there is for editing the dark tones:

The same curve control points, but this time applied to an image encoded with the L* TRC. The curve is now smooth, because the spline interpolation between the points occurs in the same L* encoding; however, the effect on the dark areas is much less “natural”:

My suggestion: keep the image in linear RGB, and switch between the perceptual and linear representations of the curve depending on whether you want to fine-tune the dark or light areas of the image.
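The two representations are just the same control points read through different axis encodings; here is a rough sketch of the mapping (my own illustration, using the L* TRC for the perceptual axes):

```python
EPS, KAPPA = 216.0 / 24389.0, 24389.0 / 27.0  # CIE L* constants

def linear_to_perceptual(v):
    """Map a linear [0, 1] value to its L*-encoded [0, 1] axis position."""
    l = 116.0 * v ** (1.0 / 3.0) - 16.0 if v > EPS else KAPPA * v
    return l / 100.0

def perceptual_to_linear(p):
    """Inverse mapping, used when reading a point back off the widget."""
    l = p * 100.0
    return ((l + 16.0) / 116.0) ** 3 if l > KAPPA * EPS else l / KAPPA
```

A linear value of 0.18 lands near the middle of the perceptual axis (around 0.49), which is exactly the extra lever arm in the dark tones mentioned above.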

EDIT: I hope that @Morgan_Hardwood has nothing against the use of this image as an illustration…


Absolutely, in fact I’m happy if you do as that lets me see rendering of this scene in other ways than the ones I attempted. FWIW the scene looked overall much lighter to the human eye.

What I show above is merely the result of a non-linear tone curve applied to the linear RGB data, without any local contrast enhancement… I guess that our brain performs some sort of local contrast enhancement to compress the dynamic range. Or maybe a better term would be global contrast compression?

Anyway, I still need to find a local contrast enhancement/global contrast compression method that does not look “fake”… One day or another I will play with RT’s retinex tool, as I have read a lot of good things about it!

@Carmelo_DrRaw Uncertain whether it is fake-looking or not but I have always enjoyed using CLAHE. It is implemented in a bunch of ways, some of which don’t look appealing at all.

Two things about this:

  • did you look at this: Perception Based Contrast Enhancement by Majumder? I have no experience with it but it seems quite feasible and logical.
  • What is the basis of your shadow/highlights tool? Is it a masked, blurred and inverted copy of the image?

Cheers
Stephan

Hi @Carmelo_DrRaw - I’ve been using PhotoFlow (updated just a couple of days ago, linear gamma branch) and had some more or less minor user interface observations:

  • When I asked PhotoFlow to mark hot pixels that were fixed, I didn’t see any marking lines in the PhotoFlow window before transferring the image to GIMP, but once transferred to GIMP the hot pixels all had vertical lines through them.

  • When setting custom RGB values for the white balance, the WB mode continues to read “Camera”, but it might be less misleading/confusing if it said “Custom”.

  • The little boxes for typing in custom RGB values are too narrow, at least on my machine, so it’s hard to tell if all the numbers have been selected and even harder to enter new numbers. Making the panel wider doesn’t make the little boxes any wider.

  • How does one save a preset for the raw processing layer? I’ve been right-clicking on the layer and saving “something” to the PF config folder that shows up as the suggested place to save the preset, but when I go to load a saved preset, it’s just not ever there. Maybe PF is saving to the home folder and looking in the preset folder? Is it necessary to add a suffix/file extension or does PF do this automatically? If I need to add a file extension, which one?

@Elle What do you mean by the first point? I am interested. I have been complaining about the second point for a while now, but I know it is on @Carmelo_DrRaw’s to do list so I can wait :sunny:.

Hi @afre, I’m not sure what you are asking about, so I’ll cover some possible interpretations.

“Hot” and “dead” pixels are pixels that respectively have one or more channels that read 100% full well instead of responding to the actual amount of recorded light, or else don’t respond at all, but I suspect this is something you already know :slight_smile: .

PhotoFlow (and also darktable) can show the user which pixels have been fixed, by drawing a short line through the hot/dead pixels on the screen during raw processing. The lines aren’t supposed to be in the output image, only in the image shown on the screen during raw processing.

In Photoflow, the relevant dialog is on the Raw developer layer in the tab labelled “Corr”.

I have some good news!

First of all, I have just finished introducing the RCD demosaicing into photoflow. The implementation is based on the RT version, adapted to process small image tiles in parallel.

Moreover, and following what @agriggio has reported, I have modified the RAW data clipping behaviour in photoflow, which results in much lower purple fringing around dark objects on clipped backgrounds. The improvement is visible for all demosaicing methods, but only when the highlights reconstruction is set to clip (which is the default mode).

The new logic is the following:

  • when the highlight reconstruction is set to clip, the WB multipliers are scaled such that the lowest one has a value of 1. The RAW values are then multiplied by the WB coefficients and then clipped to the [0…1] range.
  • when the highlights reconstruction is set to blend or none, the WB multipliers are scaled such that the highest one has a value of 1. The RAW values are then multiplied by the WB coefficients, without being clipped. After the demosaicing, and before the conversion to the working colorspace, the RGB values are then scaled by the ratio between the highest and lowest WB coefficient, so that the image brightness is the same as in clip mode.

Note that in all cases the demosaicing is performed after converting the RAW data to the user-chosen WB.
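The two scaling rules above can be sketched as follows (illustrative only, not PhotoFlow's actual code):

```python
def scale_wb(coeffs, hl_mode):
    """Normalize WB multipliers according to the HL-reconstruction mode."""
    if hl_mode == "clip":
        s = min(coeffs)  # lowest multiplier becomes 1.0
    else:                # "blend" or "none": highest multiplier becomes 1.0
        s = max(coeffs)
    return [c / s for c in coeffs]

def apply_wb(raw_rgb, mults, clip):
    """Apply the multipliers, optionally clipping to [0, 1]."""
    out = [v * m for v, m in zip(raw_rgb, mults)]
    return [min(v, 1.0) for v in out] if clip else out
```

With daylight multipliers like (2.0, 1.0, 1.6) in clip mode, a fully blown pixel (1.0, 1.0, 1.0) clips back to neutral (1.0, 1.0, 1.0) instead of picking up a purple cast.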

Here is a comparison of the Amaze output with HL reconstruction set either to clip or blend (the latter also corresponds to the old clip mode as far as purple fringing is concerned):

Amaze, blend mode:
DSC_0934-amaze-blend

Amaze, clip mode:
DSC_0934-amaze-clip

Finally, here is the output of the RCD demosaicing in clip mode:
DSC_0934-rcd-clip

There is a slight difference between the two demosaicing methods, but not really a striking one…

Second, I have added a curve representation for the tone mapping functions, which should ease the tweaking of parameters. The curves are represented in perceptual encoding, to better visualise the effect in dark areas. Here is a screenshot showing the new filmic curve with default parameters:

Updated packages are available as usual from here.


Hi @Elle!

I will check, must be some bug…

You are perfectly right! However, while the solution is easy to describe, it requires some refactoring of the WB UI code to implement. I think I will postpone this until I also implement some “spot WB over a user-selectable area”, which has been on my TODO list for a long time and which I now finally want to get done.

Are you talking about the boxes in this screenshot?
Just to be sure that I’ll modify the right ones…

Right-clicking on the layer and saving is the correct procedure. The preset can be saved in any folder of your choice, but needs to have a .pfp extension. I should modify the code so that the extension is automatically added if missing… please ping me on this github issue in case you have no news during the next couple of weeks…


Was asking because I haven’t been seeing those lines. If you could provide an image of what that looks like, that would be great.

@Carmelo_DrRaw Thanks for the implementation of RCD and giving attention to WB. About clipping, I didn’t understand the relationship between the reconstruction mode and the negative and overflow clipping back when you briefly explained it to me in the other thread. Could you explain it to me again but in the context of the new changes? Thanks.

Let’s suppose we are processing a RAW image with daylight WB multipliers, which are typically of this order of magnitude:

WB_R = 2.0
WB_G = 1.0
WB_B = 1.6

A clipped highlight will have a normalised RAW value of 1.0 in all channels. After WB correction, the highlights will acquire a purple tint, due to the fact that the R and B channels are multiplied by a larger factor than the G channel. This is obviously unphysical and must be cured…

What is done by RT and the previous PhF code is the following:

  1. the WB multipliers are first scaled such that the largest coefficient is equal to one. In our specific case, this would mean
WB_R = 2.0 / 2.0 = 1.0
WB_G = 1.0 / 2.0 = 0.5
WB_B = 1.6 / 2.0 = 0.8
  2. the RAW values are scaled by those coefficients and then fed to the demosaicing routine
  3. the resulting RGB values are scaled by the ratio between the maximum and minimum WB multipliers (2.0 in this case)
  4. finally, the RGB values are clipped to fit into the [0…1] range

What happens in this case is that blown-out RAW pixels around dark areas have an overall purple tint instead of being neutral, and this tint “leaks” into the dark areas through the demosaicing process.

A more conservative approach, which cures the purple issue at the expense of some loss in dynamic range, is the following:

  1. the RAW values are scaled with the larger WB coefficients (those for which WB_G = 1.0), then clipped to the [0…1] range and fed to the demosaicing routine
  2. the resulting RGB values are again clipped to the [0…1] range to remove possible over- or under-shoots introduced by the demosaicing.

This is what happens in the latest PhF version when you set the HL reconstruction to clip. The other methods still use the first procedure.

The caveat with the clip method is that R or B pixels that have large (but not clipped) values will be discarded by the clipping applied at step 1. For example, any raw R value above 0.5 will be clipped, reducing the effective dynamic range of the R channel to one half.
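A small numeric check of this caveat (my own illustration, assuming WB_R = 2.0 as in the example above):

```python
WB_R = 2.0  # red WB multiplier from the example above

def clipped_red(raw_r):
    """Red value after white balance and the step-1 clip."""
    return min(raw_r * WB_R, 1.0)

# Any raw R value above 1 / WB_R = 0.5 saturates: 0.6 and 0.9 become
# indistinguishable after the clip, while 0.4 survives untouched.
```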

I am currently testing a workaround that keeps all the available information without introducing the purple fringing problem…
