PhotoFlow News and Updates


#65

Thanks! It fixes the blend problems that I have been having. However:

A. Defringe still crashes PF, but only about half of the time.

B. Having an additional image layer is still slow and messy when toggling the top layer and the samplers. The odd redrawing, sampler displacement, and disappearance might be due to how PF deals with images of different dimensions; i.e., it is more likely to happen when the top layer is smaller than the bottom layer in width and height. Just a thought… it might be due to some other problem or a combination of problems. Maybe I will do a video capture sometime when I find the motivation. In any case, your updates have made everything run a bit smoother. Thanks again!


(Carmelo Dr Raw) #66

You are welcome! As soon as I find a couple of hours for this, I will add a visualization of the filmic curve to guide the user when adjusting the parameters.

By the way, it looks like the latest OSX package works fine on your system… I’m glad to see this, because this is the first package that is automatically compiled and assembled using Travis CI. For the user, this means that the package is created in a 100% secure environment (unless the Travis CI machines get compromised in some way)… for me, it means I do not have to spend time creating and uploading new packages each time I commit some interesting changes!


#67

Hi, I played around with version photoflow-w64-20171117_1335-git-linear_gamma-f8a7a56b9ce32082b4afc7776a14cfe2a4267391 and noticed the following:

  • blown-out highlights are shown in pink, even though highlight reconstruction is on
  • basic adjustments: the L curve is missing in the HSL curves tool.
  • freehand drawing: when the layer mode is overlay, soft light, or similar, and the background color is set to transparent, the image is darkened.

Cheers
Stephan


(Carmelo Dr Raw) #68

Time again for some optimizations. In this case I did some work on the B&W conversions, and prepared the ground for optimizing the ICC conversions (LCMS2 is currently used everywhere, but it is known to be quite slow).

Photoflow has a desaturate tool that provides three methods for converting an image to grayscale:

  1. luminance (LCh) is the first, default, and recommended method. It boils down to a conversion to CIELCh, after which the C (chroma) channel is set to 0 to discard the color information, followed by a conversion back to the initial colorspace. This method provides consistent results that are independent of the original RGB colorspace.
  2. lightness (HSL) computes the average between the minimum and maximum RGB values: L = (M + m)/2
  3. average computes the average of the RGB channels: L = (R + G + B)/3

The second and third methods are less accurate, but can be useful in certain cases.

Today I have optimized the luminance method, obtaining a 3x speedup for sRGB images and almost a 10x speedup for linear-gamma images. The speedup comes simply from avoiding a full LCMS conversion to CIELab: instead, I manually compute the XYZ luminance Y using the RGB primaries of the initial colorspace, and then set R=G=B=Y. If the initial data is gamma-encoded, the RGB values are linearized before computing Y, and gamma-encoded again at the end.
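The three methods, and the optimized luminance path, can be sketched roughly as follows. This is an illustrative sketch only (hypothetical helper functions, not PhotoFlow's actual C++ code), assuming sRGB-style encoding; a real implementation would take the Y weights and the TRC from the image's ICC profile.

```python
# Sketch of the three desaturate methods described above. Values are
# floats in [0, 1]; function names are hypothetical.

def desaturate_average(r, g, b):
    """Average method: L = (R + G + B) / 3."""
    return (r + g + b) / 3.0

def desaturate_lightness_hsl(r, g, b):
    """HSL lightness: L = (max + min) / 2."""
    return (max(r, g, b) + min(r, g, b)) / 2.0

# sRGB gamma helpers, used only when the data is gamma-encoded.
def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Y row of the sRGB (D65) RGB->XYZ matrix; in practice these weights
# come from the ICC profile primaries of the initial colorspace.
Y_WEIGHTS = (0.2126, 0.7152, 0.0722)

def desaturate_luminance(r, g, b, gamma_encoded=True):
    """Linearize, compute Y from the primaries, re-encode, set R=G=B=Y."""
    if gamma_encoded:
        r, g, b = (srgb_to_linear(v) for v in (r, g, b))
    y = Y_WEIGHTS[0] * r + Y_WEIGHTS[1] * g + Y_WEIGHTS[2] * b
    if gamma_encoded:
        y = linear_to_srgb(y)
    return y
```

Since the Y weights sum to 1, a neutral gray pixel passes through the luminance path unchanged, which is a handy sanity check.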

The effect of the different B&W conversion methods can be compared using this “rainbow” image:

Luminance conversion:

Lightness conversion:

Average method:

One can clearly see how the luminance method better matches the perceived image “brightness” (not sure if this is the correct technical word…).

Updated packages will be available soon.


#69

By the way, it looks like the latest OSX package works fine on your system…

So it seems (more thorough testing needed) => OS 10.11.6
Glad the Travesty’s taking some load off you :grinning:

As soon as I find a couple of hours for this, I will add a visualization of the filmic curve to guide the user when adjusting the parameters

:point_up_2: you’re the man :facepunch:


#70

Thanks @Carmelo_DrRaw for remembering my requests that are scattered among multiple threads! Part of the reason that I like PF :slight_smile:.

Edit: just checking. Since you use ICC profiles, PF’s CIELAB and CIELCH use D50, right?


(Carmelo Dr Raw) #71

Yes, in particular the luminance is relative to D50…


#72

Also, the way you used the terms luminance and lightness is different from my understanding. Remember this post? However, the L in HSL is also called lightness; so confusing… In short, there is L* (Lab) lightness and L (HSL) lightness.


#73

BTW, I am really digging the switchable curves linear - log !
:+1::pray:


(Carmelo Dr Raw) #74

I agree… let’s see if I manage to explain things correctly; maybe @Elle can step in if I say something wrong:

  • luminance is the second component (Y) of an XYZ triplet. Relative luminance is Y divided by the luminance of the reference white point Yn (Yr = Y/Yn). However, the RGB -> XYZ matrix given by the RGB primaries of an ICC profile is normalized such that Yn = 1, therefore the Y value computed from the ICC primaries is actually Yr.
  • lightness (L*) is Yr encoded with the L* TRC (https://en.wikipedia.org/wiki/Lab_color_space#Forward_transformation)
  • lightness (HSL) is a different thing, which unfortunately is called in the same way as L*…

The reason I used the term luminance in the B&W conversion tool is that the resulting image is encoded with the same TRC as the initial RGB data, not necessarily with the L* TRC. However, the resulting image is equivalent to L* lightness.
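To make the distinction between Yr and L* concrete, here is the standard CIE forward transformation from relative luminance to CIELAB lightness (just the textbook formula, shown for terminology's sake):

```python
# Relative luminance Yr vs. CIELAB lightness L* (standard CIE formulas).

def lab_lightness(yr):
    """L* in [0, 100] from relative luminance Yr in [0, 1]."""
    eps = (6 / 29) ** 3                          # ~0.008856, threshold
    if yr > eps:
        f = yr ** (1 / 3)                        # cube-root branch
    else:
        f = yr / (3 * (6 / 29) ** 2) + 4 / 29    # linear branch near black
    return 116 * f - 16

# Middle gray: Yr ~= 0.18 maps to L* ~= 49.5, i.e. roughly "half" lightness.
```

The linear branch near black is the same "linear part of the L* TRC" that shows up later in this thread when curves are drawn in perceptual coordinates.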

Finally, the luminance blend mode in PhF does exactly what Elle describes in the article you are referring to. Moreover, as of a few days ago the LCH Lightness blend mode is also available…

Hope this is shedding a bit of light on this lexical mess!


(Carmelo Dr Raw) #75

This switch allows you to edit a linear RGB image using a curve represented with perceptually uniform axes.

Here is an example of a curve applied to a linear image and represented in perceptual coordinates. Notice the non-trivial behaviour of the curve in the dark tones, which is not a mistake but the result of the linear part of the L* TRC:

The same curve in linear coordinates… notice how much less leverage there is for editing the dark tones:

The same curve control points, but this time applied to an image encoded with the L* TRC. The curve is now smooth, because the spline interpolation between the points occurs in the same L* encoding; however, the effect on the dark areas is much less “natural”:

My suggestion: keep the image in linear RGB, and switch between perceptual and linear representation of the curve depending on whether you want to fine-tune the dark or light areas of the image.
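The idea of drawing the curve on perceptual axes while the data stays linear can be sketched like this: encode the input value with the L* TRC, look up the curve, decode the output. This is only an illustration (PhotoFlow uses spline interpolation; here simple piecewise-linear interpolation stands in for it, and the helper names are made up):

```python
# Evaluate a tone curve whose control points live on perceptually
# uniform (L*-encoded) axes, applied to linear RGB data.

def lstar_encode(y):       # linear [0, 1] -> perceptual [0, 1]
    eps = (6 / 29) ** 3
    f = y ** (1 / 3) if y > eps else y / (3 * (6 / 29) ** 2) + 4 / 29
    return (116 * f - 16) / 100

def lstar_decode(l):       # perceptual [0, 1] -> linear [0, 1]
    f = (100 * l + 16) / 116
    return f ** 3 if f > 6 / 29 else 3 * (6 / 29) ** 2 * (f - 4 / 29)

def eval_curve(points, x):
    """Piecewise-linear interpolation through sorted (x, y) points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

def apply_perceptual_curve(points, linear_value):
    """Encode to L*, evaluate the curve, decode back to linear."""
    return lstar_decode(eval_curve(points, lstar_encode(linear_value)))

# An identity curve leaves linear values unchanged:
identity = [(0.0, 0.0), (1.0, 1.0)]
```

Because the encode/decode pair brackets the curve lookup, a straight segment drawn on perceptual axes is a non-trivial (and near black, piecewise) function of the linear values, which is exactly the behaviour described above.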

EDIT: I hope that @Morgan_Hardwood has nothing against the use of this image as an illustration…


(Morgan Hardwood) #76

Absolutely! In fact, I’m happy if you do, as it lets me see renderings of this scene in ways other than the ones I attempted. FWIW the scene looked much lighter overall to the human eye.


(Carmelo Dr Raw) #77

What I show above is the mere result of a non-linear tone curve applied to the linear RGB data, without any local contrast enhancement… I guess that our brain performs some sort of local contrast enhancement to compress the dynamic range. Or maybe a better definition would be global contrast compression?

Anyway, I still need to find a local contrast enhancement/global contrast compression method that does not look “fake”… One day or another I will play with RT’s retinex tool, as I have read a lot of good things about it!


#78

@Carmelo_DrRaw Uncertain whether it is fake-looking or not but I have always enjoyed using CLAHE. It is implemented in a bunch of ways, some of which don’t look appealing at all.


#79

Two things about this:

  • Did you look at this: Perception Based Contrast Enhancement by Majumder? I have no experience with it but it seems quite feasible and logical.
  • What is the basis of your shadow-highlights tool? Is it a masked, blurred, and inverted copy of the image?

Cheers
Stephan


(Elle Stone) #80

Hi @Carmelo_DrRaw - I’ve been using PhotoFlow (updated just a couple of days ago, linear gamma branch) and had some more or less minor user interface observations:

  • When I asked PhotoFlow to mark hot pixels that were fixed, I didn’t see any marking lines in the PhotoFlow window before transferring the image to GIMP, but once transferred to GIMP the hot pixels all had vertical lines through them.

  • When setting custom RGB values for the white balance, the WB mode continues to read “Camera”, but it might be less misleading/confusing if it said “Custom”.

  • The little boxes for typing in custom RGB values are too narrow, at least on my machine, so it’s hard to tell if all the numbers have been selected and even harder to enter new numbers. Making the panel wider doesn’t make the little boxes any wider.

  • How does one save a preset for the raw processing layer? I’ve been right-clicking on the layer and saving “something” to the PF config folder that shows up as the suggested place to save the prefix, but when I go to load a saved preset, it’s just not ever there. Maybe PF is saving to the home folder and looking in the prefix folder? Is it necessary to add a suffix/file extension or does PF do this automatically? If I need to add a file extension, which one?


#81

@Elle What do you mean by the first point? I am interested. I have been complaining about the second point for a while now, but I know it is on @Carmelo_DrRaw’s to do list so I can wait :sunny:.


(Elle Stone) #82

Hi @afre, I’m not sure what you are asking about, so I’ll cover some possible interpretations.

“Hot” and “dead” pixels are pixels where, respectively, one or more channels read 100% full well instead of responding to the actual amount of recorded light, or don’t respond at all. But I suspect this is something you already know :slight_smile: .

PhotoFlow (like darktable) can show the user which pixels have been fixed, by drawing a short line through each hot/dead pixel on screen during raw processing. The lines aren’t supposed to be in the output image, only in the image shown on the screen during raw processing.
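For readers who want the gist, a minimal hot/dead-pixel fix usually boils down to comparing each pixel against its neighbours and substituting a local statistic when it sticks out. This sketch is illustrative only (it is not PhotoFlow's or darktable's actual algorithm, and the threshold is arbitrary):

```python
# Flag a pixel as hot/dead when it deviates strongly from the median of
# its 8 neighbours, and repair it by substituting that median.

def fix_hot_pixels(img, threshold=0.5):
    """img: 2D list of floats in [0, 1]. Returns (fixed_img, fixed_coords)."""
    h, w = len(img), len(img[0])
    fixed = [row[:] for row in img]
    coords = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            median = (neigh[3] + neigh[4]) / 2.0   # median of 8 values
            if abs(img[y][x] - median) > threshold:
                fixed[y][x] = median
                coords.append((x, y))
    return fixed, coords
```

The `coords` list is what a UI could use to draw the short marker lines over the repaired pixels, as described above.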

In Photoflow, the relevant dialog is on the Raw developer layer in the tab labelled “Corr”.


(Carmelo Dr Raw) #83

I have some good news!

First of all, I have just finished introducing the RCD demosaicing into photoflow. The implementation is based on the RT version, adapted to process small image tiles in parallel.

Moreover, and following what @agriggio has reported, I have modified the RAW data clipping behaviour in photoflow, which results in much lower purple fringing around dark objects on clipped backgrounds. The improvement is visible for all demosaicing methods, but only when the highlights reconstruction is set to clip (which is the default mode).

The new logic is the following:

  • when the highlight reconstruction is set to clip, the WB multipliers are scaled such that the lowest one has a value of 1. The RAW values are then multiplied by the WB coefficients and then clipped to the [0…1] range.
  • when the highlights reconstruction is set to blend or none, the WB multipliers are scaled such that the highest one has a value of 1. The RAW values are then multiplied by the WB coefficients, without being clipped. After the demosaicing, and before the conversion to the working colorspace, the RGB values are then scaled by the ratio between the highest and lowest WB coefficient, so that the image brightness is the same as in clip mode.

Note that in all cases the demosaicing is performed after converting the RAW data to the user-chosen WB.
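The two scaling strategies above can be sketched as follows. This is a single-pixel illustration of the described logic with hypothetical helper names (the real code works on image tiles, and demosaicing happens between the two steps):

```python
# White-balance multiplier scaling for the two highlight-reconstruction
# behaviours described above.

def apply_wb(raw_rgb, wb_mults, hl_mode="clip"):
    """Scale the WB multipliers and apply them to normalized RAW values."""
    if hl_mode == "clip":
        # Scale so the lowest multiplier is 1, then clip to [0, 1].
        scale = min(wb_mults)
        mults = [m / scale for m in wb_mults]
        return [min(v * m, 1.0) for v, m in zip(raw_rgb, mults)]
    else:  # "blend" or "none"
        # Scale so the highest multiplier is 1; no clipping at this stage.
        scale = max(wb_mults)
        mults = [m / scale for m in wb_mults]
        return [v * m for v, m in zip(raw_rgb, mults)]

def post_demosaic_rescale(rgb, wb_mults):
    """After demosaicing, before the working-colorspace conversion:
    rescale by max/min so brightness matches clip mode."""
    k = max(wb_mults) / min(wb_mults)
    return [v * k for v in rgb]
```

For unclipped pixels the two paths give identical results, which is why the overall image brightness is the same in both modes; they only diverge where channels actually clip.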

Here is a comparison between the Amaze output with HL reconstruction set either to clip or blend (the latter also corresponds to the old clip mode as far as purple fringing is concerned):

Amaze, blend mode:
DSC_0934-amaze-blend

Amaze, clip mode:
DSC_0934-amaze-clip

Finally, here is the output of the RCD demosaicing in clip mode:
DSC_0934-rcd-clip

There is a slight difference between the two demosaicing methods, but not really a striking one…

Additionally, I have added a curve representation for the tone-mapping functions, which should ease the tweaking of parameters. The curves are represented in perceptual encoding, to better visualise the effect in dark areas. Here is a screenshot showing the new filmic curve with default parameters:

Updated packages are available as usual from here.


(Carmelo Dr Raw) #84

Hi @Elle!

I will check, must be some bug…

You are perfectly right! However, while the solution is easy to describe, implementing it requires some refactoring of the WB UI code. I think I will postpone this until I also implement some “spot WB over a user-selectable area”, which has been on my TODO list for a long time and which I now want to finally get done.

Are you talking about the boxes in this screenshot?
Just to be sure I modify the right ones…

Right-clicking on the layer and saving is the correct procedure. The preset can be saved in any folder of your choice, but needs to have a .pfp extension. I should modify the code such that the extension is automatically added if missing… please ping me on this github issue if you have no news within the next couple of weeks…