OpenEXR file format output

Hello, I was searching the forums and documentation and I couldn’t find a way to export EXRs from the queue.
Was EXR at one point one of the output formats, and is it no more?
I found EXR mentioned here, but it is probably obsolete:
http://rawpedia.rawtherapee.com/Image_file_formats_and_compression#OpenEXR_16-bit_floating-point

Was EXR replaced by float TIFF?
It would be very helpful to have direct EXR output, because the float TIFF for some reason looks all kinds of wrong.

Thanks a lot

On the same page you are referring to, it says that the images can be converted externally to other, more efficient formats, such as 16-bit JPEG-2000 or OpenEXR.

http://rawpedia.rawtherapee.com/Image_file_formats_and_compression#Final_format

The place you are referring to just compares the file sizes.

Got it.
The problem is I probably don’t have enough technical understanding of how to get all the data from the CR2 file out of RawTherapee.
I would like to get a 16-bit FP EXR file - I reset everything, no tone mapping, linear curve, nothing at all, and just save it as a linear EXR file - that would be great.
I just don’t see how all the linear information could be stored in a 16-bit TIFF file.
I have the sun in my image, for example; I can see there is more information if I bring the exposure down, but that information is gone when I save it as a 16-bit TIFF - obviously, I would say. And converting that to a linear EXR using inverse sRGB gamma, for example, will not get the values back.
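Here is the standard inverse sRGB formula in Python, just to show why that can’t work - a value that was already clipped to 1.0 maps back to exactly 1.0:

```python
# Quick sketch: undoing the sRGB tone curve cannot bring back clipped data.
def srgb_to_linear(v):
    """Standard inverse sRGB transfer function, for v in 0.0-1.0."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))  # ~0.214 -- mid-tones map back fine
print(srgb_to_linear(1.0))  # 1.0    -- clipped white stays exactly at white
```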
So is there a way to get linear images out of RawTherapee? Sorry for my ignorance :slight_smile: thank you

I don’t know your camera or your selected raw format, but the original image was likely encoded as 14-bit unsigned integers by the camera, so a 16-bit unsigned integer TIFF has plenty of “room” for the original information.

I’m just guessing here, but if you’ve turned off all the RT processing, the only place I can surmise such information loss is in the color/tone conversion when the TIFF is saved. Evidence of that would be an ICC profile embedded in the TIFF that contains a non-linear tone curve. If you want a true “linear” export file, TIFF or OpenEXR, save it with a linear ICC profile conversion, in the case of RT, select one of the profiles with a gamma = 1.0.

https://rawpedia.rawtherapee.com/Color_Management#Output_Profile


The camera is a Canon EOS 80D
I think I have reset everything I can think of
The tone curves are linear

nothing in Tone Mapping

and this is my output profile
[screenshot]
I didn’t find the exact ones with the linear gamma as in the link you’ve sent - probably renamed by now?
but I created a new one like this
[screenshot]
so I hope it’s ok

the problem is still the same
there is a sun in my image
and this is how RT is showing it, and this is how it’s exported in the TIFF file, basically
[screenshot]
and when I bring the exposure down slightly, I can see there is more information there
[screenshot]
the only way to get the information and not touch overall exposure is to use some Highlight Compression

but that seems to me like I’m destroying the linearity of the light and I should not have to do that, no?

thanks a lot for your time and answers! :slight_smile:

this is my cr2 file if you would like to take a look at it
_MG_1547.CR2 (27.6 MB)

I’m no color expert at all, but I would have used something bigger than sRGB. Maybe ACES AP1?

Sure, I can use that, but in this case I don’t care about the color gamut, I just need it to be linear. So linear sRGB is fine for now.

Of course, I thought about linear-encoded ACES AP1.
Have you tried using the unclipped pp3 profile bundled with RT? I think its purpose is precisely what it says, though I don’t know if its output color profile is linear…

Yeah, I tried that one, but it doesn’t really do much more than what I’m already doing… this is kind of my starting point. Still not solving the highlight clipping issue.
It all looks like RT is just ignoring information above 1. Or let’s say above 100%.
I can see it on the histogram - the sun is just above 1, and to get it inside the histogram I would have to set the highlight compression to about 73; then it is sitting at 1.
Or I have to give RT some input profile or information about the camera/file that will map it correctly from the get-go… not sure…

This is kind of an ongoing theme in a lot of software that tries to deal with highlights above 1 from raw data - it is called either Highlight Compression or Highlight Recovery or Pink Highlights or whatever.
The process tries to “save” the information bleeding above 1 and “soft clip” it inside the 0-1 brightness range.
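Something like this, conceptually - a made-up soft-clip curve in Python, just to illustrate the idea (every software uses its own actual curve):

```python
def soft_clip(x, knee=0.8):
    """Compress values above `knee` smoothly toward 1.0 instead of hard-clipping.

    Purely illustrative -- real highlight-compression curves differ per software.
    """
    if x <= knee:
        return x
    span = 1.0 - knee
    # Values above the knee approach 1.0 asymptotically instead of clipping.
    return knee + span * (1.0 - span / (x - knee + span))

for v in (0.5, 0.9, 1.0, 2.0, 5.0):
    print(v, round(soft_clip(v), 3))  # 0.5, 0.867, 0.9, 0.971, 0.991
```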
But thanks for the suggestions! :slight_smile:

Have you tried tweaking the raw white points, in either the Raw tab in RT or in the camconst file?

In my opinion you must use a wider working profile, and then export to a linear profile.

In any case, you may wish to deactivate the Clip out-of-gamut colors option inside the Exposure tool, or you will lose any information outside the color space you’re using.

However, I can’t see the lost information you talk about:

This is the image as loaded:
[image: _MG_1547]

And now with an exposure compensation of -1.0:
[image: _MG_1547-1]

That sun is definitely clipped at raw level, so nothing to do there but recover highlights.

As a matter of fact, those images were with Clip out-of-gamut colors turned on. This is how your image looks with the option turned off:

[image: _MG_1547-2]
Aside from the pink cast, there’s no new information there.

Hello, I don’t think I must use a wider working profile - I don’t think the primaries are what affects this situation - but sure, I can switch to Adobe RGB or AP1, no problem. The tool settings have the same effect. And yes, I’m exporting with a linear profile.

I think I’m starting to understand the issue. Yes, you are right, there is nothing to do but recover highlights - that’s what I mean by lost information.

And yes, you will get this clamped image when you don’t have highlight reconstruction turned on.

And when you turn off Clip out-of-gamut colors, you will see the pink cast,

but

that pink cast doesn’t mean there is no information
it means there is only partial information, in only some channels
they don’t all meet at 1; some of the channels continue above it, but not uniformly - that’s why the pink cast

you can check the histogram, and you can see that as you move your cursor from the blue sky into the center of the sun, the sliders will leave the histogram
try these settings to see what I mean


and then try the HL compression at 74 - it will not be 100%, but you can see the sliders come back and the light around the sun gets a little bit more information inside

this little sun is taking me through an information wormhole… :slight_smile:

this is what I found - these articles on darktable go deep into the topic
there are actually a couple of ways to recover the highlights, and every software deals with it in its own way
but darktable probably gives you the most options, I guess - but as I said, sorry for my ignorance, I opened RT just now and started digging through this

https://www.darktable.org/usermanual/en/modules.html#highlight_reconstruction

more info

too much info :slight_smile:

if you search Google for “raw pink highlights” or something like that, you get a lot of hits where people are dealing with this in different software

I loaded your raw in my hack software, where I can reliably look at the image data straight out of the file, and the sun’s pixels are basically piled up at the sensor saturation point; no relevant data there except to “make white”. There are a few pixels in the “star spikes” that start to show gradation, and I assume that’s what you want to protect. But the center is just maxed out in all three channels.

Where the “pink” is made is when white balance multipliers are applied to the data of the three channels. In this case, since the saturated pixels have lost any gradation they might have had and all got piled up into the same value, the red and blue components get shifted according to the multipliers (the green white balance multiplier is usually 1.0), and the new RGB values typically make “pink”, or properly, magenta. If you set the white point to the smallest maximum of the three channels, the pixels go back to white, as the image data is clipped to that point for export. Inside good software that uses floating point data internally, however, those values pushed past white are still there, as values > 1.0, which by convention is usually the floating point white point. Highlight “reconstruction” can take some of that data and shift it back below 1.0, but now that’s made-up data. And your image really doesn’t have much to work with in the region of the sun.
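A tiny numeric sketch of that mechanism, with made-up multipliers (this is just the arithmetic, not RT’s actual code):

```python
import numpy as np

# Sensor-saturated pixel: all three channels piled up at the 14-bit maximum.
saturated = np.array([16383, 16383, 16383], dtype=np.float64)

# Example white-balance multipliers (made up; green is usually 1.0).
wb = np.array([2.1, 1.0, 1.5])     # R, G, B

balanced = saturated * wb          # channels no longer equal -> magenta
print(balanced)                    # [34404.3 16383.  24574.5]

# Clipping to the smallest channel maximum makes the pixel white again:
clipped = np.minimum(balanced, 16383.0)
print(clipped)                     # [16383. 16383. 16383.] -> equal, i.e. white
```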

Highlight reconstruction can be helpful for putting some definition back into regions of an image that were over-exposed. But if the region is a light source, I tend to just let that go to white oblivion. In the case of the sun, you’d have to screw on beaucoup neutral density filtration to get meaningful gradation, gradation you’re not seeing normally anyway…


Got it! Thanks very much for the insight and the explanation. I shall stop here and let the data go to white oblivion :smiley:
cheers

Following @ggbutcher, I would say that in the sun there is nothing to recover, as all the information in the 3 channels is lost in the raw. At the border of the star, some channels are unclipped, so using
highlight reconstruction (with highlight compression, as usual) will pull values back to 1 at the center and rebuild some data at the border (and RT is rather good at that).
There is nothing else that can improve the situation, except some painting.


Got it, yes, I was interested just in the immediate area around the sun. I know there is nothing in the middle; that wasn’t my concern. And by now, it has become more about learning what goes on behind the scenes technically, rather than visual issues. And yes, as I dig more into RT, it is a great piece of software. Thanks!

But back to the topic, any chance of getting OpenEXR file output straight out of RT?

Hello again, if I could please ask an additional question about the topic, @ggbutcher? Again, sorry, I’m lacking some fundamental knowledge here.

But to sum up the question:
How are stops related to pixel values?

The issue I have, that I cannot wrap my head around:
How come when I take a still digital camera and apply a linear curve profile to the raw data (CR2), I’m fitting everything into a 0-1 range in pixel values,
and when I take a digital movie camera like the ARRI Alexa Mini (.ari) and develop it to a linear curve, I can get up to 0-35 in pixel values?

And when I look at the stops the sensors can handle, still cameras range from about 6-14 stops and the Alexa has around 15, which is relatively not that far apart.
So how come there is such a big difference in linear pixel values?

When I take, for example, the latest Nikon or Sony still digital camera, open it in RT, and apply the proper linear profiles, I still fit all the data into a 0-1 range. How come?

Or is that related to 16-bit integer vs 16-bit float?

So again to sum up:
How are stops related to pixel values?

Thank you very much for any insight. This feels like a big puzzle piece for me that I cannot find anywhere :slight_smile:

Yes, it’s a bit of a head-hurter, even if one is familiar with some aspect of the tech. I’m a software person by trade, and I still had to do a lot of digging to piece it all together. Below is a missive that requires just a bit of math and digital encoding understanding, so please don’t hesitate to ask clarifying questions. Also, apologies for my English; I sense that it may not be your first language, and even though it’s my first I really don’t know it all that well… :smiley:

It helps to start where the measurement is done, at the sensor. Without getting into all the hardware things, essentially a photo-sensitive thing gets hit by light, and the circuitry takes the electrical response of that thing and turns it into an integer number representing the light’s intensity. 0 means ‘no-light’, or black, and the measurements progress linearly as light intensity increases, to a point where the photo-sensitive thing can no longer resolve a difference, called the saturation point. Light may get stronger, but the measurements of it just pile up at that value.

So, what the camera delivers in the raw file is an array of unsigned integers comprising the light measurements at each pixel (I’m going to ignore the bayer or xtrans filtration for this discussion). With today’s sensor technology, the measurements are usually encoded as 16-bit unsigned integers, even when the camera sensor resolves less. My Nikon D7000 and Z6 will both deliver up to 14-bit sensor data in 16-bit buckets, because that’s how computers are organized. So, even though the 16-bit bucket can express values from 0 to 65535, the values stored only go from 0 to 16383 (14 bits). Notice I’m not talking about stops yet, or the notion of white…

After reading the raw file, the raw processor software has this array of 16-bit integers with which to start work. Some raw processors keep the data in this format through the entire workflow, and that’s not such a bad thing because: 1) 16 bits is a lot of precision with which to work images, and 2) there’s at least two bits of headroom to allow the image values to be manipulated before they hit the top of the 16-bit ‘bucket’. The need for this headroom is seen in such operations as white balance, where the values recorded under the respective red, green, and blue CFA filters are multiplied by respective numbers, e.g., 0.899, 1.0, and 1.35 as the red, green, and blue multipliers. Two bits of headroom may not sound like much, but in the case of a 14-bit raw image a white balance multiplier of 4.0 (not common) would be needed to reach the 16-bit top.
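In rough numbers, with the 14-bit example from above:

```python
# 14-bit raw data in 16-bit buckets: how much headroom is there?
raw_max    = 2**14 - 1   # 16383, the largest value a 14-bit sensor delivers
bucket_max = 2**16 - 1   # 65535, the top of the 16-bit bucket

print(bucket_max / raw_max)   # ~4.0 -- only a 4x multiplier reaches the top
print(raw_max * 1.35)         # ~22117 -- a typical blue multiplier fits easily
```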

Still haven’t talked about ‘white’, but now seems like a good time to bring it in. In camera terms, there really isn’t a notion of “white”, but instead there’s that saturation point thing that might look like ‘white’. That’s only because when all three channels pile up there, it looks like white because all three channels are the same value. When we work with our raw data, we really want to defer our anchoring of white in the data until we go to display or export it, where white then becomes meaningful in terms of what the rendition media expects. Indeed, while we process, we ideally want our data to grow and shrink without bound so it retains all the energy relationships of the original light. Then, at display or export, we “put a pin in it” somewhere, and maybe use highlight reconstruction to drag data to the right back into the renderable range. This is what is referred to as “scene-referred” workflow, where all the work is done on the data in its original light energy relationships, or what’s called ‘linear’.

The integer buckets we’ve talked about to date have a problem with that sort of processing, as those 16-bit buckets have a maximum value that’s not that far away. And, when we do math to data that pushes it past that maximum, the data “wraps around” to zero, which looks exceedingly bad in rendered images. This is where floating point representation has value, as its maximums are well beyond anything we’ll encounter in image processing for the foreseeable future (do NOT start thinking about IP addresses, please… :smiley: ). Indeed, the common convention for using floating point to represent image data is to use the range 0.0 - 1.0. Doesn’t sound like a lot of space, but you can put a lot of digits to the right of a decimal point. The big advantage is that data can grow well past that 1.0; indeed, in scene-referred processing, 1.0 doesn’t have a real meaning (NOT white, get it?). Of note, some software uses a different range convention for its floating point representation, e.g., RawTherapee uses 0.0-65535.0, which I assume is to line up with the numeric values provided by the 16-bit integer raw files. G’MIC uses 0.0-255.0. Note they all have 0.0 as the lower anchor; black is black, no energy, so zero makes fundamental semantic sense.
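To see the “wrap around” problem concretely - a little numpy sketch with made-up values:

```python
import numpy as np

# 16-bit unsigned integers wrap around past their maximum of 65535...
a = np.array([60000], dtype=np.uint16)
print(a + 10000)              # [4464] -- wrapped to near zero, renders horribly

# ...while floating point values just keep going past "white":
b = np.array([0.9], dtype=np.float32)
print(b * 4.0)                # [3.6] -- beyond 1.0, but the data is still there
```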


I had to go to work, so I pushed “Reply” without fully answering your question…

So, with all that I wrote as a basis, stops of exposure are simply a doubling of light (or halving, in the other direction) from a given luminance. Each pixel has a luminance value that is described by an averaging of the three channel values (averaging is one way to do it, there are other equations that yield a more perceptually accurate luminance, but average is simple to understand for this discussion). So, if you take two pixels, average their respective channels to make luminance values, you can then say “pixel A is 1/3 stop brighter than pixel B”, or somesuch. It doesn’t matter whether the pixels values are in the range 0-65535, 0-255, or 0.0-1.0, any two pixels’ luminance can be compared in terms of stops if they are encoded in the same range.
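In code, that comparison is just a log2 of the luminance ratio (a quick sketch, using the simple channel average I mentioned):

```python
import math

def luminance(r, g, b):
    """Channel average; perceptual weightings exist, but this suffices here."""
    return (r + g + b) / 3.0

def stops_between(pixel_a, pixel_b):
    """How many stops brighter pixel_a is than pixel_b (negative = darker)."""
    return math.log2(luminance(*pixel_a) / luminance(*pixel_b))

# The encoding range doesn't matter, as long as both pixels share it:
print(stops_between((0.5, 0.5, 0.5), (0.25, 0.25, 0.25)))  # 1.0 -> one stop
print(stops_between((128, 128, 128), (64, 64, 64)))        # 1.0 -> same answer
```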

Now, looking back at your post, we need to clarify the term “linear”. The term is a bit overloaded in photography contexts, but the predominant meaning has to do with the original raw measurements of light. Those measurements express quantities that have a linear relationship, that is, “twice as bright” is expressed by two numbers, one whose quantity is twice the other. There are a few operations that change data while maintaining the light energy relationship; exposure is one of them, a simple multiplication. A tone curve that is a straight line (mathematicians everywhere are cringing… ) is essentially a linear, ‘energy-relationship-preserving’ operation. But, when you put a control point on that line and pull it north or south to make an actual curve, it becomes a non-linear operation, destroying the original energy relationships.
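Numerically, with a toy example:

```python
# A straight-line "curve" (a plain multiplication) preserves energy ratios...
linear = lambda x: 0.5 * x
print(linear(0.4) / linear(0.2))   # 2.0 -- twice as bright stays twice as bright

# ...while a bent curve (here a made-up gamma-like power) does not:
curved = lambda x: x ** 0.45
print(curved(0.4) / curved(0.2))   # ~1.37 -- the 2:1 energy relationship is gone
```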

Scaling is a multiplication (or division, if going the other direction), and it preserves the linear relationship. If your camera records measurements as integers between 0 and 16383, the measurements can be scaled to the 0.0 - 1.0 range by dividing each measurement by 16383, and the new values retain the linear relationship. The original data has two boundaries, black and saturation, and that range can be linearly mapped to any other range.
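For example, sticking with the 14-bit case:

```python
import numpy as np

# Made-up 14-bit raw measurements, from black to saturation:
raw = np.array([0, 1024, 8191, 16383], dtype=np.uint16)

# One division maps them into 0.0-1.0 and keeps every ratio intact:
scaled = raw / 16383.0
print(scaled)                                   # ~[0. 0.0625 0.5 1.]
print(raw[2] / raw[1], scaled[2] / scaled[1])   # same ratio before and after
```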

I hope I’m helping here…
