Suitability of custom camera profiles as working profiles

Yes. Your example colors are given as integer values, so just to be on the safe side I’ll mention that handling arbitrary colors in arbitrary well-behaved RGB working spaces does require unbounded floating point conversions to avoid clipping.

There are some restrictions on what the destination color space’s TRC (“tone response curve”) can be. For RGB working spaces with “pure gamma” TRCs other than gamma=1.0, LCMS conversions to such working spaces clip any RGB channel values that would otherwise fall below zero.

So for V2 profiles, the destination color space needs to be a linear gamma color space (gamma=1.0) to avoid clipping in the shadows if any of the source colors are out of gamut with respect to the destination color space. For V4 profiles, if the TRC is the right sort of parametric curve (one with a linear portion in the shadows, such as the sRGB or LAB companding curve), then out-of-gamut colors are not clipped in the shadows.
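To make that a bit more concrete, here’s a toy illustration (my own sketch of the general idea, not what LCMS actually does internally) of why a pure gamma curve forces clipping of negative channel values while a curve with a linear shadow portion does not:

```python
# Toy illustration: a "pure gamma" encoding has no sensible result for negative
# channel values, so they get clipped, while a curve with a linear portion in the
# shadows can be extended below zero.

def encode_pure_gamma(x, g=2.2):
    if x < 0.0:
        return 0.0                      # nothing sensible to do: clip to zero
    return x ** (1.0 / g)

def encode_srgb_like(x):
    # sRGB-style companding, extended symmetrically through the linear toe
    if x < 0.0:
        return -encode_srgb_like(-x)
    if x <= 0.0031308:
        return 12.92 * x                # linear portion in the shadows
    return 1.055 * x ** (1.0 / 2.4) - 0.055

print(encode_pure_gamma(-0.01))         # 0.0      -> shadow detail clipped
print(encode_srgb_like(-0.01))          # ~-0.129  -> negative value preserved
```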

Well, quantization from ICC profile color space conversions is not the only thing to worry about. At 32-bit floating point precision, this type of quantization is probably not really a practical concern. I’d be curious which is the bigger source of quantization: working in a color space that’s hugely larger than the actual image color gamut (linear gamma matrix camera input profiles tend to have a lot of wasted space, that is, space occupied by entirely imaginary colors), or converting to a less wasteful color space and then performing a bunch of subsequent edits.

Which is not at all to say that “smaller color spaces are better than bigger color spaces”, because the best color space to be working in depends on your editing goals and the specific editing steps you make along the way.

Putting quantization issues to one side, there are two basic mathematical operations that are used to modify pixel values: Addition and Multiplication. Subtraction is just the inverse of Addition, Division is just the inverse of Multiplication, and “raising to a power” is basically also a form of Multiplication.

As long as every single RGB operation you perform on your image pixels only uses Addition/Subtract (a set of operations that also effectively includes Multiplying and Dividing by gray - think about it and you’ll see this is true), it doesn’t matter what linear gamma well-behaved RGB color space the image is in: the final colors will be exactly the same. But if the image is in a non-linear RGB working space, then results will be different even for Addition/Subtract operations, depending on the color space primaries and the TRC.

As soon as any of your RGB operations involve Multiplying or Dividing the RGB channel values by a non-gray color, then the RGB working space primaries matter a whole lot even when the working space has a linear gamma TRC, because Multiply/Divide give different editing results depending on the primaries.
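Here’s a quick numerical sketch of both points, using two linear gamma working spaces. The RGB-to-XYZ matrices below are the commonly published linear sRGB and Rec.2020 (D65) ones; treat the exact digits as illustrative:

```python
# Addition commutes with a linear space conversion; channel-wise multiplication
# by a non-gray color does not.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
SRGB_TO_2020 = np.linalg.inv(REC2020_TO_XYZ) @ SRGB_TO_XYZ

a = np.array([0.20, 0.40, 0.10])   # two arbitrary colors in linear sRGB
b = np.array([0.30, 0.05, 0.25])

# Add first then convert, or convert first then add: same final color.
print(SRGB_TO_2020 @ (a + b))
print(SRGB_TO_2020 @ a + SRGB_TO_2020 @ b)        # identical to the line above

# Multiply by a non-gray color: the order matters, so the primaries matter.
print(SRGB_TO_2020 @ (a * b))
print((SRGB_TO_2020 @ a) * (SRGB_TO_2020 @ b))    # a different color
```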

Hmm, two separate issues here: The first issue I already mentioned: Multiply/Divide by non-gray colors produce different results in different RGB working spaces, even if all the RGB color spaces are using a linear gamma TRC.

The second issue is that working in a more or less perceptually uniform RGB working space is certainly sometimes appropriate (depending on your editing goal). But it also introduces “gamma” artifacts. Avoiding gamma artifacts requires working in a linear gamma RGB working space.

OK, you are using RawTherapee, and RawTherapee does a lot of operations in the LAB color space. Assuming an unbounded ICC profile conversion from whatever source color space to LAB, then once in LAB, the source color space doesn’t matter.

Regarding RawTherapee, there’s a lot I don’t know. For example:

  • The exact sequence of color space conversions in RawTherapee, and whether any or all of these conversions are “unbounded”.

  • Which operations are done in LAB and which in the user-chosen RGB working space.

  • Which operations are performed using linear gamma vs perceptually uniform RGB.

Hopefully one of the RawTherapee experts can look at your list of RT operations. It sounds to me like your procedure is sound, though I’m not sure about the contrast and sharpening operations, or even the defringing - are these appropriately done before focus stacking?

As RT only exports 16-bit integer files, the next question is: what does Zerene Stacker do?

One possibility is that Zerene Stacker converts everything to some internal linear gamma RGB color space. In that case it might be best to export from RT in a perceptually uniform RGB color space to avoid possible 16-bit quantization in the shadows. That probably isn’t a practical issue, meaning you’d never notice the quantization, but I don’t know for sure.

I’ve never done focus stacking - is this something that’s best done on perceptually uniform RGB? Or should it be done on linear RGB?

I’ll save an outline of what camera input profiles are good for until a later post.


Ok. So I’m on the right path here, even if I use V4 profiles, as I intend to use a linear gamma profile throughout the whole process.

Mmmm… Given that RawTherapee works internally in 32-bit floating point, it makes sense to do the conversion to a smaller space. I’m thinking about using a profile like ACEScg (gamma 1.0) as the working profile in RawTherapee and embedding it in the TIFFs (using it as the output profile too), so Zerene would use it.

Well, I didn’t know that, nor did I expect it. So how do I know which working space/profile would be appropriate, aside from trying them all one after another?

Out of curiosity, may I ask for some links to learn more about this behaviour?

Well, it depends on each focus stacking aficionado. It is said that slightly improving the sharpening of already diffraction-softened images, and their contrast too, will improve the accuracy of the algorithms used in programs like Zerene.

I think it depends on each stack, but I have seen visible improvement in my stacks by doing so: they are cleaner and need less retouching. The key here is the amount of sharpening and contrast tweaking, as it must be almost imperceptible, so it won’t cause increased noise and false detail in the stacked image.

As for defringing and other aberrations, the algorithms tend to treat fringed color areas as detail, so they end up as big blobs of false detail/colors that have to be removed manually.

That I don’t know. I’m almost sure it works in 32-bit floating point, but I know nothing about the gamma or color space used. So I had better ask the developer, and I will share his answer here, in case it is useful information for this thread.

To be honest, I don’t know. I just know what I expect: no matter how many operations are performed on the images, the original colors must remain more or less the same. Now, after your explanations, it probably depends on how the stacking software performs its calculations. Maybe that’s worth another question to the developer of Zerene.

Well, the thing is, RawTherapee doesn’t actually give you much choice of what RGB working space to use. The provided choices are sRGB, Adobe RGB, ProPhoto, WideGamut, Bruce, Beta, Best, and Rec2020. ACEScg isn’t on the list. I recommend that you choose Rec2020. ACEScg and Rec2020 are both excellent working spaces and cover very similar gamuts.

Fortunately “trying all the working spaces” isn’t necessary. White-balancing in an RGB working space (which is different from using CIECAM02 or LAB or chromatic adaptation etc. to change a white balance) is multiplying by the inverse of the color cast that you want to remove. So for white-balancing or re-white-balancing a scene-referred interpolated raw file, the proper RGB working space to use is your custom camera input profile (or go back to RT and use CIECAM02 and such). Other than that, see this article:

http://colour-science.org/posts/about-rgb-colourspace-models-performance/
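And just to make the “multiplying by the inverse of the color cast” point concrete, here’s a minimal sketch (not RawTherapee’s actual code; the neutral-patch numbers are made up) of white balancing done on linear camera-space RGB:

```python
# White balancing as a per-channel multiply by the inverse of the color cast.
neutral_patch = (0.42, 0.50, 0.31)   # linear R, G, B of something that should be gray
gains = tuple(max(neutral_patch) / c for c in neutral_patch)

def white_balance(rgb):
    # per-channel multiply by the inverse of the cast
    return tuple(c * g for c, g in zip(rgb, gains))

print(white_balance(neutral_patch))  # now equal in all three channels
```

Because those per-channel gains are a Multiply by a non-gray color, the space in which you apply them changes the result, which is why the camera’s own space is the right place to do it.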

For more information with lots of examples about ways in which your chosen RGB working space can more or less dramatically dictate results, see the articles on my website under “Choosing an RGB working space”, and specifically under “3. Working in bounded and unbounded color spaces”.

Start with the following article, and if you really want to understand the ways your choice of working space can influence editing results, read all the articles linked to anywhere in the article:

Also see the following article for examples of how radically different an image’s RGB channel information can look after converting from the camera input profile to a standard RGB working space:

@jdc is the expert on CIECAM. It seems to me that you’d want to do the focus stacking first, and then move the single image back to RawTherapee for further processing. But this is just a guess. That’s an interesting question that I haven’t ever thought about - how “scene-referred” is an image that’s been through various CIECAM02 processing algorithms.

No. The problem is in the way camera sensors capture light using color filters, combined with the limited number of patches we use to make camera input profiles. For whatever reason, digital camera linear gamma matrix input profiles tend to have fairly high errors along the yellow/violet-blue axis - these camera profiles “interpret” bright saturated yellows (bright yellow flowers for example) as somewhat outside the realm of colors people can actually see, and the same is true of dark saturated violet-blues (blue neon signs for example).

Here’s an illustration showing some example bright saturated yellow flower colors, as interpreted by a camera input profile. This illustration is in the article I linked to above (the one with examples of how radically an image’s channel values can change when converted to a standard RGB working space):

Maybe as newer cameras come closer to meeting the “Luther-Ives” condition this tendency toward inaccurate colors will decrease, and I’m guessing not all cameras are affected in exactly this way, but many are.

Thanks! I’m planning to make my very first focus stack in the next few days, so I’m reading your procedure with great interest. Unfortunately my planned image also needs to be ev-bracketed, but fortunately I think the “focus” part will only involve maybe 3 or 4 different focus points, so maybe only 12-16 raw files total.


@XavAL Welcome to the forum! Take my thoughts with a dash of paprika because I am a super casual when it comes to processing. For me, it is a pastime.

Personally, I would keep the input files to Zerene Stacker as “scene-referred” as possible. As with any app, it has to make a set of assumptions about its input. It cannot account for every type of input. So I don’t see the point of doing something that might complicate things. I would rather do the post-processing afterward instead of both before and after. I tend to like to keep things as simple as possible. I am sure that a non-human set of algorithms would appreciate that as well :slight_smile:.

If you read the FAQ, you will realize that the right answer isn’t obvious even to the devs. For example, in the section Should I sharpen before or after running Zerene Stacker?, there is no clear answer. If I were to paraphrase, it would be something like “be cautious; don’t overdo it” and “do it if you are confident that you are boosting the signal”. In my opinion, scene-referred is as faithful to the signal as it gets, and scene-referred editing keeps it that way if you know what you are doing. However, the list of operations you gave above seems a bit much. Personal taste, I guess.

I would say do only what you cannot do after the fact with RawTherapee, i.e. things that cannot be done once you save the files as non-raw.

As for the input type, in particular range, bit depth and color profile, I would do what you would usually do for any image, or follow @Elle’s articles because she is knowledgeable in this area. Just know that the further you get from your original files and output goals, the more you will have to do to reach the final output (handling out-of-gamut colors, tone mapping and other post-processing problems). Also know that your stacker will handle certain input types better than others in terms of performance and speed, and may be incompatible with some.

I noticed another thing in the FAQ: it appears that Zerene may do a bit of tone mapping post-fusion, but you could disable that setting.

In conclusion, there aren’t any simple answers, and as the code is not open (that I know of), it might be impossible to know the best answer besides what the dev is willing to divulge. If you would like to be super thorough, I suggest you develop a method of comparing and contrasting your methodology with some sort of quality assessment. Hope this helps.

REC2020 it is, then :wink:

Well, that’s a relief! And many thanks for those links. It will take me a while to read (and understand) all that information, but I bet I won’t miss a single line

Let’s hope he has some time to tell us how good or bad it is to keep an image as “scene-referred” as possible. The few articles I’ve read about CIECAM have led me to believe that it is the color model that best reflects the way we see colors, but I’m in no way an expert, so maybe I’m wrong.

That’s not a lot of images. I guess you will be working at around 0.1x-1x magnification, and in that case you will be far from facing the true problems of high magnification focus stacking.

I don’t wish to deviate from the original questions and go off topic, but a few tips will smooth your path while stacking:

  • I think you want the broadest dynamic range possible (notice I haven’t said “high dynamic range”), so you will be better served if you perform an “exposure blend” on each of your 4 bracketed shots. You may use this technique with GIMP if you wish. Then you can send the blended images to your focus stacking software

  • using real tone mapping, as in true high dynamic range, on each bracketed group is not a good idea. Again: in focus stacking it is better to exposure-blend than to tone-map (at least in the intermediate steps before the final stacked image)

  • focus stacking each exposure setting, and then blending or tone mapping the resulting 4 stacked images, is not a good idea either. But with so few images, you could go crazy and try all the different possibilities, and then compare the results :grin:

  • use a tripod, and focus your lens at different points, from the front to the rear of your subject. You can focus stack even a landscape!

  • don’t believe anything of what I’ve said, and try out any technique you want. Have fun! Photography is about having fun, isn’t it?

Thank you! :smiley:

I try as hard as possible, too. But problems start to increase exponentially when you work at higher magnifications (above 5x, but especially above 10x, that is, magnifying 10 times real size), and what used to work doesn’t work anymore. Most of the time, merely demosaicing a raw file is nowhere near the minimum processing needed. I tend to use the workflow explained above, but it doesn’t always work well; it depends on the subject, so I have to tweak it a bit, always performing the least processing possible.

That’s what I tried to hint at when saying it depends on each focus stacking aficionado. Each one develops his or her own personal formula, and I don’t think any single one is better than another, because each has been built around the equipment and experience of the photographer

Well, in this case (Zerene) I’m only allowed to work with JPEG and TIFF files. I only take TIFF into consideration

I think you’re referring to the Retain UDR Image option. UDR stands for Unrestricted Dynamic Range, and I don’t think that option does any real HDR tone mapping. Most likely it is some sort of curve applied to the image. All in all, the idea is to either compress the image’s dynamic range to fit a standard display dynamic range (UDR not applied), or leave the whole dynamic range of the image as is (UDR applied)

Certainly not! But thanks for your help

By type, I am including “range, bit depth and color profile”. I’m unsure whether it could accept stuff like floating point TIFFs, for instance. In terms of color profiles, according to the FAQ, it sounds like it just passes them to the output file. That might mean that it disregards the input color space and just deals with the numbers… However, it might have its own internal fusing space. I use enfuse. Although it is different from Zerene, the docs for enfuse might give you some insight into the inner workings of a stacker: http://enblend.sourceforge.net/enfuse.doc/enfuse_4.2.xhtml/enfuse.html.

If you don’t mind sharing, I would like to know where to find your photography :slight_smile:.

Well, it only accepts 16-bit integer images. As for the inner workings of its color engine, and whether or not it uses the embedded profile, I will ask the developer himself.

I used enfuse a few years ago. I tried to use a combination of enblend and enfuse to get a higher dynamic range focus stacked image, without much success :stuck_out_tongue_winking_eye:

I didn’t even know then all the small and not so small problems of high magnification focus stacking :sweat_smile:

Well, I don’t like showing images I’m not happy with, and today I don’t feel fully satisfied with my stacks (I’m quite picky)

Even so, a few years ago I published a couple of images that I was proud of. Hope you like them

Right now I’m focused on microminerals, and they are proving to be much more difficult to photograph. I haven’t reached a decent quality yet.

My questions about color profiles and working spaces are just one step in my quest to take better pictures.


@XavAL - If you are comfortable building RawTherapee from source, the development version does now support 32-bit floating point output - yeah!! - though the processing pipeline and output are still clamped:

Though the discussion is interesting in and of itself, I am curious about the practical side of things - what exactly is the problem that is motivating you to go to all this effort @XavAL, can you demonstrate it?


:clap::clap::clap:

I have just installed it from a repository in my Linux Mint. Great!

Sadly Zerene doesn’t accept those 32-bit floating point images… So I will have to live with the old 16-bit images

Well, at least I can try, he he

But please, please, don’t judge the artistic quality of the next images, as they weren’t meant to be nice, nor to be published.

It was a test shot to assess a few problems I have in my stacks. I forced Zerene to cope with zones that are almost completely dark next to others with near-clipping highlights, and to cope with repeating patterns in the eye that have both dark and highlight areas side by side (in the ommatidia). I was checking a few more things, but they are off topic here.

Obviously, the processing of both images from start to finish has been exactly the same, except for changing the output profile in RawTherapee (the latest development version, indeed).

The blinking GIF has been reduced to exactly 25% of its original size, so there are no interpolation artifacts from downsizing. The other image is a crop of the original images (at 100% original size), mixed together to better show the problems (as I was unable to create a proper GIF with all the details). Neither image has been post-processed (not even sharpened), so you can see what I get straight from Zerene.

First the GIF. It’s the head of a male carpenter bee:

Here you can see what I call haze in the gamma 2.2 version: an overall whitening of the image (it bothers me especially at the top left part of the big eye). I prefer the more contrasty result of the gamma 1.0 version.

Now look at the yellow part of the antenna: the g2.2 version has a noticeable color cast around it, as if it were a lit bulb. Ignore the difference in positioning caused by stacking (I don’t know where it comes from).

Around what we could call its nose (lower center part of the image), we can clearly see blooming highlights that must be retouched.

And in the lower left corner, in what should be an almost black area, there is an unpleasant gray with noticeable noise (maybe it’s not so clear here, but in the bigger, original image it’s easily seen).

I remind you that there are a few other problems and artifacts, but I think they don’t necessarily come from the image gamma.

Now the comparison side by side of the yellow part of the antenna:

Here we have:

  • a more lifelike color of the dark tip in the g1.0 version (some might say it’s just a version with more contrast)
  • look at the joints: in the g2.2 version the details are lost almost completely
  • the yellow texture is much better in the g1.0 image: look all along the yellow segments and you will see there are more details in the g1.0 version (not only contrast, but detail)
  • it looks to me that g2.2 has shifted from yellow to some orange hue (looking at the real-life antenna, it looks yellow to me)

Hope these examples are good enough to show what I’m worrying about


Thanks for sharing the two images. I don’t have the gear nor skill to do macro, so all I can do is enjoy other people’s images :slight_smile:.

I tend to use linear gamma because many algorithms aren’t designed to deal with gamma compression. However, there are times when it is more aesthetically pleasing to use perceptual encoding. That is why I said that “type” matters and you need to figure out what pleases Zerene and your sensibilities the most.

Hello! I am Rik Littlefield, the fellow behind Zerene Stacker. I wrote the code and documentation and I answer all the support requests, so I’m the best available source for how the beast works. However, that’s not to say that everything I write is both correct and comprehensible, so if I say something that sounds odd, please feel free to ask about it.

I’m posting here because Morgan Hardwood sent an email earlier today to support@zerenesystems.com, letting me know about this discussion and asking several specific questions. I see that the discussion has developed further since that time, so let me first talk in general about some of the issues that Xavier is raising. Then I’ll address specific questions asked by Morgan.

First, I think the short answer to Xavier’s concerns is that he should process in a linear profile, gamma=1.0, preferably also turning off Brightness correction in the software, definitely turning off “retain extended dynamic range” when saving, and using only the DMap stacking method whenever possible.

Using this approach will avoid gamma errors during interpolation for image alignment. Using DMap, in many parts of the image the output pixel values will be exactly equal to source pixel values (after alignment). In the remaining parts, DMap output values will be just a weighted sum of pixel values from two adjacent frames in the stack (again, after alignment).

There’s a troublesome tradeoff in the recommendation to use DMap. It turns out that subjects with complex geometry, like the hairy parts of Xavier’s bee, usually render better with PMax, but PMax unavoidably makes changes to color, contrast, and brightness. A good compromise in many cases is to render with both DMap and PMax, then retouch the DMap output from the PMax output only in places where PMax did better. That way you get to retain the faithful colors from DMap in places where you’re more likely to notice them, while merging in the better details from PMax in areas where you’re probably less sensitive to color.

Now, backing out a bit…

On the current Zerene Stacker FAQs page is this question/answer:

My colors changed a little. Why is that?

There are three reasons that output images can have different colors from the input: 1) brightness adjustment, 2) PMax, and 3) “Retain extended dynamic range” when saving. “Brightness adjustment” refers to Zerene Stacker’s attempt to correct for uneven exposure between various input images. That feature is turned on by default, but you can turn it off by un-checking Brightness at Options > Preferences > Alignment. “PMax” refers to the PMax stacking method, which often makes slight changes in brightness, contrast, and saturation as a side effect of doing its focus stacking. This behavior is an unavoidable side effect of PMax and should be considered as one of the tradeoffs of PMax versus DMap. “Retain extended dynamic range” when saving causes the range of internal pixel values to be compressed if necessary to fit within the 0-255 range of image files. Internally the range can exceed 0-255 as a result of PMax, brightness adjustment, or even just pixel interpolation during alignment.

Color/brightness/contrast changes can be completely avoided by using the DMap stacking method, with Brightness adjustment turned off at Options > Preferences > Alignment, and “Retain extended dynamic range” turned off at Options > Preferences > Image Saving or in the file save dialog.

Re-reading that entry right now, I see that it does not mention the possibility of changes due to gamma error during interpolation for alignment. That’s clearly an issue that I should address later, but let me forge ahead here.

From the artifacts that are shown in Xavier’s sample images with g1.0 and g2.2, I’m pretty confident that he’s using the PMax stacking method.

Very briefly, PMax operates by decomposing images into a Laplacian pyramid, selecting elements of the pyramid that have the largest absolute value across all images, then collapsing the Laplacian pyramid result back into an ordinary flat image. In the end, every pixel value is the result of recombining cell values from the log(N) levels of the final pyramid. (N is the number of pixels on the long axis of the image.) In principle, each of the contributing cell values could have come from a different source image, and certainly the single cell value at the narrow end of the pyramid is the average of that cell across all source frames.
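For the curious, here is a toy sketch of that general idea (emphatically not Zerene’s actual code; the level count, resampling kernel, per-channel selection, and border handling are all simplified):

```python
# Toy Laplacian-pyramid "largest absolute value" fusion sketch.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Band-pass detail images, plus a coarse residual as the last element."""
    pyr, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyr.append(current - up)
        current = down
    pyr.append(current)
    return pyr

def collapse(pyr):
    """Collapse a Laplacian pyramid back into an ordinary flat image."""
    current = pyr[-1]
    for detail in reversed(pyr[:-1]):
        current = cv2.pyrUp(current, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return current

def pmax_like_fuse(images, levels=5):
    pyrs = [laplacian_pyramid(img, levels) for img in images]
    fused = []
    for level in range(levels + 1):
        stack = np.stack([p[level] for p in pyrs])       # (n_frames, h, w, channels)
        if level == levels:
            fused.append(stack.mean(axis=0))             # coarse residual: average of all frames
        else:
            winner = np.abs(stack).argmax(axis=0)        # largest magnitude wins, per cell
            fused.append(np.take_along_axis(stack, winner[None], axis=0)[0])
    return collapse(fused)
```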

So, for most stacks, I would be surprised if all the pixel values in any small area of the final PMax image exactly match those of any particular source frame. Even in areas where only a single source frame has all the sharpest details, it’s typical that the pixel values end up getting lightened or darkened somewhat due to contributions from other frames that contributed higher cells in the pyramid.

As described, this probably sounds like undesirable behavior. But on the other hand, it’s also the behavior that permits PMax to make clean transitions across sharp foreground/background discontinuities where depth methods inevitably produce visible halos. So it’s a tradeoff – best details from PMax, most faithful colors from DMap.

Still, it’s a fair question, if you have to run PMax, what profile should you run it on? One simple answer is that you should run it on the same profile that you ran DMap, because only then can you retouch between the two. As discussed above, that implies g1.0, because that’s the only case that does not suffer from gamma error when interpolating for alignment. But even if you were running just PMax, I expect that you’d still get better results from g1.0, partly due to gamma error in interpolation, and partly due to gamma error in each resizing step of the conversion to/from Laplacian pyramid.

Bottom line, linear seems best. At least until somebody shows me a case where it isn’t, which I fully expect to happen within the next few hours.

Now, back to Morgan’s questions:

Does ZS do its calculations in floating point?

Yes, unbounded 32-bit floating point.

What working space does Zerene Stacker use?

At present, the internal working space is obtained by transforming the original pixel values according to the coefficient matrix at Y′UV - Wikipedia . The resulting YUV values are what get used for everything up to saving the output image or transforming it to RGB for monitor display.
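For the record, a tiny sketch of that kind of transform, using the BT.601 coefficients from that Wikipedia page (treat them as illustrative rather than as the exact numbers in Zerene Stacker):

```python
import numpy as np

RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])
YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)

rgb = np.array([0.8, 0.4, 0.1])
yuv = RGB_TO_YUV @ rgb    # internal working values
back = YUV_TO_RGB @ yuv   # inverse transform, used when saving the output
```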

For saving the output image, the pixel values are run through the inverse transform to get them back to their original profile. For monitor display, pixel values are converted to sRGB. Monitor profile is not used at this time by any of the code that I’ve written. I suppose it might be used under the covers by the Java Runtime Environment.

Is there a risk that Zerene Stacker could make some colors go out of gamut after stacking if they are within the gamut (whatever it may be - sRGB, ProPhoto, etc.) before stacking?

Definitely, and especially with PMax. With PMax, it is common for internal values to go out of bounds due to contrast enhancement. This is most commonly seen in the clipping of highlights that became “brighter than white” internally, but I expect it can also happen for saturated colors.

With DMap, I think that colors will stay in gamut except when unusually bright/dark/saturated colors occur along a sharp edge, where spatial interpolation during alignment can cause an overshoot that would produce a value out of range. I have definitely seen this happen when an ordinary landscape scene that contained some pure blacks was processed by DMap, then saved using “retain extended range”, and it became obvious that all the darks got a little less dark. The problem there was that interpolation near black had produced an internal value that was slightly “darker than black”, which “retain extended dynamic range” had preserved by pushing the whole histogram up a little.
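A toy illustration of that kind of overshoot, using a Catmull-Rom cubic as an example kernel (not necessarily the kernel Zerene Stacker actually uses):

```python
def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four samples along a row: pure black meeting a bright edge.
print(catmull_rom(0, 0, 0, 100, 0.5))      # -6.25  -> "darker than black"
print(catmull_rom(0, 100, 100, 100, 0.5))  # 106.25 -> "brighter than white"
```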

Could you please elaborate on whether the Zerene Stacker algorithms’ behavior is influenced by the input profile in any way?

In general, Zerene Stacker operates on the input pixel values without regard for what profile they have. In particular there is no gamma correction, so most of the computations are vulnerable to gamma errors. Xavier’s g=1.0 and g=2.2 stacks of the same bee produce different alignment because ultimately the alignment process involves sliding and stretching images so as to minimize the sum of luminance errors squared, and different gammas cause the minimum to occur at slightly different alignments. For any particular pair of images the difference is so small that it probably would go unnoticed, but when accumulated across the length of the full stack, it can result in an obvious difference as seen here. (Aside: when the same stack is run through each of N different stacking programs, the result is typically N different alignments. Just as with Xavier’s different gammas, the results from say Photoshop, Zerene Stacker, Helicon Focus, and CombineZP make an interesting animation.)

Does ZS support ICC v4 profiles or only v2?

Something in between. Observationally, when fed the four images at http://www.color.org/version4html.xalter, Zerene Stacker properly handles images with both v2 profiles and with v4 e-sRGB, but does not properly handle the image with v4 YCC-RGB. At this time I do not know what goes wrong with v4 YCC-RGB, or what other v4 profiles are not handled properly.

Internally, color profiles are handled as instances of java.awt.color.ICC_Profile and are typically read and written by the javax.imageio.ImageIO methods.

I hope this helps. If you have questions, please ask.

–Rik

[Edit: fix typo]


@Rik Welcome to the forum! Always a pleasure to get such an informative and quick response. And thanks @Morgan_Hardwood for sending the email.

Hi Rik,

first of all, let me say that it’s awesome to see people actually caring about their users and making the effort to go where they are and answer their questions. :+1:

One thing I am not clear about is the use of color profiles. You say that Zerene Stacker has problems with some profiles, so obviously the image profile is used for something. But some parts of your answer sounded as if pixel values were just used as-is and plugged into the YUV transform. If that is the case (I might very well be misunderstanding you) then what are the image profiles used for and why don’t you transform the pixel values to some linear internal color space first? That way the actual color space of input images would no longer matter at all.

A second unrelated question: Would it make sense to allow saving as float TIFF? Then the user could deal with blown highlights later and not have to deal with clipping.


Yes. I’m starting to think that in the end one has just to test and learn what works and what doesn’t, instead of guessing what could work or not.

Hi Rik!
I don’t think I could have asked better questions than the ones Morgan sent you. And I have to admit I was hesitant to involve you in this forum, because yours is not an open source program, and I didn’t think it would be welcomed here. My bad.

The images above were processed exactly like that, and were indeed PMax output.

Here is a comparison of the same stack, processed with DMap (and without worrying much about the contrast threshold), in both g1.0 and g2.2 gammas:

Aside from the different alignment, I don’t see many differences in detail and color.

And therein lies the strongest part of Zerene: its retouching capabilities. But when I take a DMap stack and try retouching it with the PMax stack, I find this (the next GIF is the comparison between DMap-g2.2 and PMax-g2.2):

As the lighting was far from perfect and the lens was not a top performer, I expected those color shifts.
But I see a big difference between those images, and I don’t think PMax is a good candidate for retouching here. It’s difficult to see after downsizing so much, but look at the bright spots in the middle left part of the image (near the border, below the eye). There the highlights have invaded the darker parts of the bee.

And the yellow segments are much brighter in PMax than in DMap

Now the same comparison, but in gamma 1.0:

There are still problems in these lighting conditions, but they are much more restrained

I know a gif image has its limitations, but maybe you can get the idea.

Oh, and all the images have been stacked using v4 profiles, by the way.

Well, I would also like to know whether allowing 32-bit floating point images as input to the stack would make any difference in quality or color fidelity, now that we can export those files from RawTherapee.

Those GIFs are mesmerizing :stuck_out_tongue:. I had the same questions as @houz. @XavAL, I think he was asking about Zerene output but the input question is valid as well.

I will rephrase my sentence :wink: :

Would it make sense to allow input float TIFF images, and then have the possibility to save stacked images equally as float TIFF?

@afre and @houz, thank you for the warm welcome!

To clarify…

The most basic functionality is to read the pixel values and the profile from source, operate on the pixel values while just storing the profile, then write the pixel values and profile to output. For the test image with v4 YCC-RGB, Java’s ImageIO JPEG reader as used by Zerene Stacker simply does not recognize that the file contains a profile, so the image gets treated as sRGB. If I use Photoshop CC to load the image and save it as TIFF, then Zerene Stacker successfully reads the profile and propagates it to output. So in this case, the problem is associated with the ImageIO JPEG reader.

By default, in Trial and Personal Editions, Zerene Stacker does not honor the input profile for its own screen displays. It just treats the pixel values as if they were sRGB. Of course this never crashes, though it can produce more or less ghastly colors depending on the actual profile. There is also an option, turned on by default in Prosumer and Professional Editions, to honor the input profile by converting pixel values to sRGB according to the profile. This works fine with all the test images except the one with v4 YCC-RGB, converted to TIFF. In that case the profile is read correctly as discussed above, but then the conversion to sRGB fails with a Java exception complaining that “CMMException: LCMS error 13: Couldn’t link the profiles”. I don’t understand the details of Java’s color management well enough to know what that message really means.

The reasons for that are mostly historical. Interestingly, there’s a line item in some of my earliest design notes to do all calculations in a linear color space. But as soon as I tried to figure out how to actually do that, I ran into a problem that the JPEG files coming out of the camera that I was using for testing did not seem to conform to any simple profile. For example changing the exposure by 1 f-stop did not change pixel values by any constant factor. Instead it looked like the camera was trying to simulate an S-curve typical of film, with compression of the highlights. I couldn’t find a reference and couldn’t figure it out on my own, so I basically flagged that line item as “too difficult, revisit later”, and then the issue never got high enough on the priority list to actually do that. But priorities shift with new information, and this conversation may be just what’s needed to make that happen.

I think it makes great sense to both read and write float TIFF. It’s just a question of priorities, balancing cost against benefits to figure out what to work on. When I’ve looked at this issue in the past, it seemed like quite a bit of work for not much gain except in special cases like for example wanting to shoot with exposure bracketing, merge to HDR in linear space, focus stack the linear HDR, and then do tone mapping as a final step. There are enough other potential wrinkles in that process that I’ve always suggested other workflows that could be shoehorned into 16-bit TIFF, especially with “retain extended dynamic range”. Compared with providing an option to linearize internally, my guess at the moment is that float I/O is high cost, low gain. And both of those get balanced against a long list of other completely different capabilities, such as a simple method to stack from video, or a really good way of dealing with sensor dust.

No worries from this side. By both inclination and policy, I treat all questions and suggestions as an opportunity to learn and improve. Sometimes that means the code, sometimes the documentation, sometimes my own understanding, sometimes all of the above. This forum seems like a friendly place that focuses on the technology, which fits me just fine. Please let me know if I step near any lines!

This is very interesting. I notice that the bright yellows are handled much better by PMax in gamma 1.0. Output quality is generally the thing I value most highly, so this example makes a strong argument for the benefits of providing at least an option to linearize internally.

I agree that there’s a big difference, but I am more optimistic about the potential for retouching. The retouching tool in Zerene Stacker is unusual, possibly unique, in its ability to seamlessly copy image content even when the source and target areas have significantly different brightness and contrast. There is some discussion of this at http://www.photomacrography.net/forum/viewtopic.php?p=85715#85715 and in the surrounding thread. So, even though your images look quite different when flashed, I would expect good success in say retouching the bristles that are better in PMax, without altering the smoother areas that are better in DMap. I’ve tested this approach by harvesting images from the layers of your gif, and to my eye it works well. This depends on individual sensitivities, of course. The PMax output has much higher local contrast and that difference will be preserved by retouching, so some care might be required to avoid noticeable differences in contrast from one area of the bristles to another.

In any case, for the purposes of this thread, I think you’ve demonstrated the key point that for this stack at least, linear gives a better result.

–Rik

[Edit: fix typo]


Well, I have to say that when I started this thread, my intention was to understand why I shouldn’t expect good results from using my custom linear camera profile throughout the whole process, until my stacks were finished.

I wanted to know how I could get the results I wanted within RawTherapee, and now I have a clear idea of what to do and what not to do.

I didn’t mean to judge how well (or not so well) Zerene handles linear gamma images, but the discussion drifted that way, and I’m happy to have learnt a better way to handle my images from start to finish.

Finally, I want to thank everybody for their help, and for sharing their knowledge. Thank you all!
