PhotoFlow News and Updates

I was asking because I haven’t been seeing those lines. If you could provide an image of what that looks like, that would be great.

@Carmelo_DrRaw Thanks for the implementation of RCD and giving attention to WB. About clipping, I didn’t understand the relationship between the reconstruction mode and the negative and overflow clipping back when you briefly explained it to me in the other thread. Could you explain it to me again but in the context of the new changes? Thanks.

Let’s suppose we are processing a RAW image with daylight WB multipliers, which are typically of this order of magnitude:

WB_R = 2.0
WB_G = 1.0
WB_B = 1.6

A clipped highlight will have a normalised RAW value of 1.0 in all channels. After WB correction, the highlights will acquire a purple tint due to the fact that the R and B channels are multiplied by a larger factor than the G channel. This is obviously unphysical and must be cured…
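
A quick numeric sketch of the effect, using the multipliers from the example above:

```python
# Daylight WB multipliers from the example above
WB = {"R": 2.0, "G": 1.0, "B": 1.6}

# A fully clipped highlight: normalised RAW value of 1.0 in every channel
raw = {"R": 1.0, "G": 1.0, "B": 1.0}

# After white balance, the neutral highlight is no longer neutral:
balanced = {ch: raw[ch] * WB[ch] for ch in raw}
print(balanced)  # R and B now dominate G, giving the purple/magenta cast
```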

What is done by RT and the previous PhF code is the following:

  1. the WB multipliers are first scaled such that the largest coefficient is equal to one. In our specific case, this would mean
WB_R = 2.0 / 2.0 = 1.0
WB_G = 1.0 / 2.0 = 0.5
WB_B = 1.6 / 2.0 = 0.8
  2. the RAW values are scaled by those coefficients and then fed to the demosaicing routine
  3. the resulting RGB values are scaled by the ratio between the maximum and minimum WB multipliers (2.0 in this case)
  4. finally, the RGB values are clipped to fit into the [0…1] range
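
Ignoring the demosaicing that runs between steps 2 and 3, the procedure can be sketched like this (a hypothetical helper, not the actual RT/PhF code):

```python
def wb_then_rescale(raw_rgb, wb):
    """Sketch of the four steps above (demosaicing omitted)."""
    m, n = max(wb.values()), min(wb.values())
    # 1. scale the multipliers so the largest coefficient is 1.0
    scaled_wb = {ch: wb[ch] / m for ch in wb}
    # 2. apply them to the RAW values (demosaicing would run here)
    rgb = {ch: raw_rgb[ch] * scaled_wb[ch] for ch in raw_rgb}
    # 3. scale the result by the max/min ratio of the multipliers (2.0 here)
    rgb = {ch: v * (m / n) for ch, v in rgb.items()}
    # 4. clip to [0…1]
    return {ch: min(max(v, 0.0), 1.0) for ch, v in rgb.items()}

wb = {"R": 2.0, "G": 1.0, "B": 1.6}
# A pixel where only R is saturated: R and B end up at or near the clip
# point while G stays well below 1.0, i.e. a purple cast survives
print(wb_then_rescale({"R": 1.0, "G": 0.8, "B": 0.8}, wb))
```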

What happens in this case is that blown-out RAW pixels around dark areas have an overall purple tint instead of being neutral, and this tint “leaks” into the dark areas through the demosaicing process.

A more conservative approach, which cures the purple issue at the expense of some loss in dynamic range, is the following:

  1. the RAW values are scaled with the original, unscaled WB coefficients (those for which WB_G = 1.0), then clipped to the [0…1] range and fed to the demosaicing routine
  2. the resulting RGB values are again clipped to the [0…1] range to remove possible over- or under-shooting introduced by the demosaicing.
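
The conservative path can be sketched the same way (again a hypothetical helper, with demosaicing omitted):

```python
def conservative_clip(raw_rgb, wb):
    """Sketch of the 'clip' highlight reconstruction path."""
    # 1. apply the full (unscaled) multipliers and clip BEFORE demosaicing
    rgb = {ch: min(raw_rgb[ch] * wb[ch], 1.0) for ch in raw_rgb}
    # ... demosaicing would run here, on already-clipped values ...
    # 2. clip again to remove demosaicing over/under-shoot
    return {ch: min(max(v, 0.0), 1.0) for ch, v in rgb.items()}

wb = {"R": 2.0, "G": 1.0, "B": 1.6}
# A clipped highlight now enters demosaicing as neutral white:
print(conservative_clip({"R": 1.0, "G": 1.0, "B": 1.0}, wb))  # all 1.0
# The cost: a bright but unclipped R value (0.6 * 2.0 > 1.0) is truncated
print(conservative_clip({"R": 0.6, "G": 0.6, "B": 0.6}, wb))
```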

This is what happens in the latest PhF version when you set the HL reconstruction to clip. The other methods still use the first procedure.

The caveat with the clip method is that R or B pixels that have large (but not clipped) values will be discarded by the clipping applied at step 1. For example, any raw R value above 0.5 will be clipped, reducing the effective dynamic range of the R channel to one half.

I am currently testing some workarounds that allow keeping all the available information without introducing the purple fringing problem…


Hi @Carmelo_DrRaw,

interesting, this is also what I’m playing with right now… :slight_smile: (in RT, though). I’ll report back when I have something people can test.


@Carmelo_DrRaw Thanks for the detailed response. I actually understand that part already. What I intended to ask about was the relationship between the exp and color tabs, i.e. highlight reconstruction and the clip negative / overflow values options. Which is done first, and do they affect one another?

Ah, now I understand better the question!

The clipping in the color tab happens after the ICC conversion to the working colorspace. For example, you can have an RGB value in the camera colorspace where all three components are below 1.0. However, such a color might be outside of the gamut of the output colorspace (sRGB or whatever) and therefore result in at least one RGB channel above 1.0 or below 0.0. Activating both clipping options in the color tab will cut the RGB values in the working colorspace to the [0…1] range.
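
In other words, the two options simply clamp the post-conversion values. A minimal sketch (the function name and sample values are hypothetical, chosen to mimic an out-of-gamut result):

```python
def clip_negative_overflow(rgb, clip_negative=True, clip_overflow=True):
    """Sketch of the two color-tab options: clamp working-space RGB to [0…1]."""
    lo = 0.0 if clip_negative else float("-inf")
    hi = 1.0 if clip_overflow else float("inf")
    return [min(max(v, lo), hi) for v in rgb]

# Hypothetical values after an ICC conversion: the source color was inside
# [0…1] in camera space, but falls outside the destination gamut.
converted = [1.07, 0.55, -0.02]
print(clip_negative_overflow(converted))                       # [1.0, 0.55, 0.0]
print(clip_negative_overflow(converted, clip_negative=False))  # [1.0, 0.55, -0.02]
```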

I hope this clarifies your doubts, otherwise just ask further…

Which happens first: exp or color tab? I am guessing the color tab, so the clipping options would still matter even if I set reco mode to clip. See: RAW developer and other modules - #2 by Carmelo_DrRaw.

No, the exposure compensation is applied after demosaicing and before the color space conversion, otherwise you would get problems with DCP profiles that have a nonlinear tone curve…
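
So the ordering, as described, would look roughly like this (stub functions for illustration only, not the real pipeline code):

```python
def clamp01(rgb):
    return [min(max(v, 0.0), 1.0) for v in rgb]

def develop(demosaiced_rgb, exposure_ev, to_working):
    """Order of operations as described above: exposure before the ICC step."""
    rgb = [v * 2.0 ** exposure_ev for v in demosaiced_rgb]  # exp-tab compensation
    rgb = to_working(rgb)  # ICC conversion to the working colorspace
    return clamp01(rgb)    # color-tab negative/overflow clipping comes last

# With an identity "conversion", +1 EV pushes two channels past 1.0:
print(develop([0.4, 0.5, 0.6], 1.0, lambda rgb: rgb))  # [0.8, 1.0, 1.0]
```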


@Carmelo_DrRaw, I have lately been using version 20171117 and now 20171124. In both versions I get a crash every time I use more than one layer as a mask.

Furthermore, in version 20171124, when I use the L channel of the clone layer, I get strange results.

I confirm both issues; they must be a recent regression. I will look into them as soon as possible.

Thanks!


Hi @McCap! Both issues should be fixed in the current stable branch. Packages can be downloaded from here. Note that the linear_gamma branch has been merged into stable and will not be further updated…


@Carmelo_DrRaw Thanks for all your work on the app! It is amazing what one person can put together!

BTW, I don’t know what @Elle is referring to; I don’t know how to mark hot pixels.

Hi @afre - The option to mark the hot pixels that are going to be fixed is in the same PhotoFlow dialog that allows you to fix the hot pixels. It’s supposed to show you which pixels will be fixed, not to draw lines through these pixels in the actual output image that’s saved to disk or transferred to GIMP. Maybe the option to mark the hot pixels isn’t in the version of PhotoFlow that you are using?

Awesome, thank you so much for your work!!

@Elle The simplest explanation is that I may have overlooked this feature. Or my eyes auto-removed it. Or I forgot about using it. Or I used it on an image without hot pixels. Ha ha ha. I will take a closer look :blush: and report back.

I’ll also take a look at that this evening… looks like some trivial mistake.

Upon further investigation, I realised that this marking cannot work properly in the current implementation of the processing pipeline… basically, there is no simple way to show the markings in the preview but not in the exported images.

I will disable this option until I find a proper fix. Sorry about that…

Thanks for checking, and no need to apologize! Seeing the hot pixels is nice but not a high-priority item, imho - if someone really needs to know the location of the hot pixels, they can always export the interpolated image without fixing them and do some thresholding to locate them.

Congrats on merging the stable and linear branches!

One more feature added that had been pending for a while: “area WB”, i.e. the possibility to compute a custom WB based on the average over several image areas instead of a single “spot”.

Areas are added with a left click and removed with a right click. For the moment the geometry of the areas cannot yet be modified, but this will come very soon.
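
The idea behind area WB can be sketched like this (a hypothetical helper, not PhotoFlow’s actual code): average the raw RGB over all selected areas and derive multipliers that make that average neutral, with G normalised to 1.0.

```python
def area_wb_multipliers(areas):
    """Average raw (R, G, B) over the picked areas, then return the WB
    multipliers that map the average to neutral grey (G normalised to 1.0)."""
    pixels = [px for area in areas for px in area]
    avg = [sum(px[c] for px in pixels) / len(pixels) for c in range(3)]
    return [avg[1] / avg[0], 1.0, avg[1] / avg[2]]

# Two one-pixel "areas" with a greenish average:
areas = [[(0.25, 0.5, 0.3125)], [(0.25, 0.5, 0.3125)]]
print(area_wb_multipliers(areas))  # [2.0, 1.0, 1.6]
```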

Here is a preview:


I’m using PhotoFlow stable updated and built a few days ago. Just now I sent a layer from GIMP CCE to PhotoFlow. PhotoFlow defaults to having the input profile be “embedded”, which is what I want.

Except that the relevant PhotoFlow UI actually says “embedded (sRGB)”. But the actual embedded profile is not sRGB. There is another option, which is “embedded” without “sRGB” in parentheses.

What is the difference between these two “embedded” options?

Does PhotoFlow actually detect the embedded profile when an image is transferred from GIMP? Or is it better to always select a profile from disk in order to make sure that the actual input profile really is the profile that the GIMP layer stack was using?

“embedded (sRGB)” is the same as “embedded”, except that it falls back automatically to sRGB if no embedded profile is found.
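
The difference amounts to a fallback rule, which could be sketched like this (hypothetical helper name, not the actual PhotoFlow code):

```python
def resolve_input_profile(embedded, mode):
    """Sketch of the two "embedded" options described above."""
    if mode == "embedded (sRGB)" and embedded is None:
        return "sRGB"    # fall back when the file carries no profile
    return embedded      # plain "embedded": may be None -> nothing to convert

print(resolve_input_profile(None, "embedded (sRGB)"))  # sRGB
print(resolve_input_profile(None, "embedded"))         # None
print(resolve_input_profile("GIMP working profile", "embedded (sRGB)"))
```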

When sending images from GIMP to PhotoFlow, the intermediate TIFF file has the GIMP working profile embedded, so “embedded” and “embedded (sRGB)” should be absolutely equivalent.

I introduced this new default option because I received a number of complaints that color conversions were not actually taking place with some specific images without embedded profiles… in such cases, sRGB remains the best educated guess, and the user is still free to choose a specific profile if sRGB is not the right choice.

Anyway, I’m open to any suggestion on how to improve the way embedded profiles are managed…