Developing a FOSS solution for inverting negatives

I am writing some image processing scripts using ImageMagick, with the intention that the output from those scripts be the input to RawTherapee.

The scripts take a linear RGB scan from a scanner, make adjustments in linear RGB, and the output can either be converted to sRGB or left as linear RGB.

I don’t fully understand colorspace, but I am working on that.

However I have a few questions:

  1. How do I tell RawTherapee that my input file (tiff) is linear RGB?
  2. Is RawTherapee’s internal colorspace linear RGB, i.e. what is stored in memory?
  3. If I decide instead to convert my input to sRGB (i.e. a standard tiff), what conversion does RawTherapee do internally? i.e. would it take the sRGB file and convert that to linear RGB in memory?
  4. Is there a developer forum, to discuss the RawTherapee code?

Assign a color profile, RT will use it when reading the image.

It converts between several colorspaces throughout the pipeline, I believe it ends with L*a*b*.
RawTherapee/color_management.svg at dev · Beep6581/RawTherapee · GitHub

You’re in it. You might also catch some of us for live chat in IRC.

Thanks Morgan, a few follow-ups.

  1. Where could I find a profile that represents linear RGB?

  2. If I convert my linear RGB file to sRGB and then let RawTherapee do its standard conversion from sRGB to its own internal working colorspace, will that conversion be lossless?

Hi @LaurenceLumi

If your scanner outputs the scans in a linear RGB working space, then the scanner itself should have embedded an appropriate ICC profile. Or if it doesn’t, then I’m guessing that you have a scanner profile that perhaps you made after scanning a target chart. If neither of these are the case, where did you get the linear RGB color space in which your script “make[s] adjustments in linear RGB”?

As an aside, imagemagick is not software that I would trust to do ICC profile conversions, though I use imagemagick for other purposes. Imagemagick documentation does talk about linear RGB, but if I recall correctly, that just means “assume sRGB” and then “linearize the sRGB TRC” or something similar.

There is no such thing as “linear RGB” per se. Rather there are various RGB color spaces with various TRCs. In other words, an RGB color space is defined both by the color space primaries and by the color space TRC. So without knowing what linear gamma RGB color space should be assigned to the original scans saved to disk by the scanner, there really isn’t any way to answer the question “Where could I find a profile that represents linear RGB”. Because there are many such profiles.

It’s the raw data from the scanner, e.g. a normalised intensity of 0.3 red is just that: 0.3. Normally the software would take the 0.3 value and gamma encode it as appropriate to store in the file, e.g. for sRGB it would become roughly 0.58, but there are parts of this I don’t understand.
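For reference, that encoding step can be sketched in a few lines of Python using the piecewise curve from the sRGB spec; a linear value of 0.3 encodes to roughly 0.584 (close to the simple 0.3 ** (1/2.2) approximation):

```python
def srgb_encode(c: float) -> float:
    """Encode a linear-light channel value (0..1) with the sRGB transfer curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(v: float) -> float:
    """Invert srgb_encode: recover the linear-light value."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(round(srgb_encode(0.3), 3))  # 0.584
```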

I thought of this as sRGB but with the gamma encoding removed.

Maybe the solution is to have the scan in sRGB and remove the gamma to convert to linear RGB before I apply my transformations in ImageMagick.

However I was worried that doing a conversion back and forth from linear to non-linear would result in clipping. (I am working in 16-bit.)
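As a sanity check on that worry, a linear↔sRGB round trip through 16-bit integers cannot clip (values stay inside 0..1); it only adds a rounding error on the order of one 16-bit code value. A small sketch, using the standard published sRGB curve constants:

```python
def srgb_encode(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

MAX = 65535  # 16-bit integer range

worst = 0.0
for code in range(0, MAX + 1, 7):               # sample the 16-bit range
    linear = code / MAX
    stored = round(srgb_encode(linear) * MAX)   # quantise the encoded value
    back = srgb_decode(stored / MAX)            # decode again
    worst = max(worst, abs(back - linear))

print(worst)  # on the order of one 16-bit step (~1.5e-5): rounding, not clipping
```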

@LaurenceLumi - Do you have ArgyllCMS or exiftool installed on your computer? Without actually being able to examine one of the scans, it’s difficult to get a handle on what color space should be assigned.

If you download the profiles in this folder: elles_icc_profiles/profiles at master · ellelstone/elles_icc_profiles · GitHub - or actually just this profile: elles_icc_profiles/sRGB-elle-V4-g10.icc at master · ellelstone/elles_icc_profiles · GitHub - and assign it to one of the scans before it’s been modified by any software, do the colors in the scan look correct?

No. At this point, I am more concerned that everything works in the “correct way”
i.e. if my setup is a little too green then that green is correctly passed along the pipeline to final print.

However there is quite a bit of this process that I don’t understand.

Thanks for the profile. Looking at it, it is just a gamma correction? Which is what I suspect ImageMagick does when it converts from linear RGB to sRGB.

I attached the profile to a few files:

  1. An original scan: it looks the same but brighter, and it looks identical to what I get if I convert it using IM from linear to sRGB, or close enough to just doing (pixel)^0.4545. I tried this with a few files.

I can now take a “linear RGB file”, add your profile in, and the results are identical to if I had created the final output from my scripts as sRGB.

To my mind the profile is telling RT what the values in the file would be if they were just encoded as sRGB, so that RT can store it internally in its own form.

My question now is: when I choose Color Management → no profile, what happens there?

This is actually what prompted me to ask these questions.

When I choose that option, the file looks OK from a gamma perspective, but the colours are brighter and more saturated.

As background, what I’m trying to do is implement some of the logic offered in ColorPerfect using ImageMagick, for final processing in RT.

I’m not sure what you mean by “looking at” the profile. Did you examine its contents using some sort of profile viewer? Or did you assign it to some images to see what happens? Or both?

If the result of attaching the profile to the scan is pretty much the same as running the scans through an imagemagick script, then it seems to me that your best bet is to not use ImageMagick. Instead just open the scans directly with RawTherapee and use “Color tab/Color Management/Input profile” to assign the linear gamma sRGB profile “sRGB-elle-V4-g10.icc” to the image, unless you have some other software that you are more comfortable using for assigning profiles.

Well, sort of. The sRGB profile to which I gave you the link is not “just a gamma correction”. It doesn’t apply any gamma correction to the image RGB values at all. All it does is tell LCMS what color space profile should be used to convert the image from RGB to XYZ, from whence it can be converted to some other ICC profile, such as your monitor profile or whatever output profile you might choose.

RawTherapee uses LCMS to color manage images. So when you assign an ICC profile to the image, LCMS uses that profile to convert the RGB channel values to your selected monitor profile in order to display the correct colors. Well, correct colors if the right profile is assigned to the image and if the chosen monitor profile correctly describes how your monitor displays colors, but that’s getting off-topic, maybe.

Anyway, RawTherapee has internal working space(s), one of which is LAB, which is the color space a lot of RT processing is done in. So by assigning the correct RGB color space profile to your scans, RT can correctly convert the RGB channel values to LAB and etc.

An RT dev would better be able to explain what RT is doing for this case. Is your monitor by chance a wide-gamut monitor?

Oh, I missed that! For anyone who might not recognize the product “colorperfect”, here’s a link:

I don’t have any experience with scanners, so hopefully other people might have advice on the topic.

Back when I still used Windows and PhotoShop, I learned a lot about color management and proper color mixing while trying to decipher Dunthorn’s articles on PhotoShop’s errors in various editing algorithms, and I still think his pages are well worth perusing even if the vocabulary and concepts might seem a bit strange in places:


Both: I looked at it using GNOME’s profile viewer, and assigned it using RT.

I ripped out the logic of my script for the test; I wanted to see what happens. My program needs to work in a linear colorspace, preferably RGB. For the test I just did

linearRGBin → do_nothing → linearRGBout → attachprofile

this is exactly the same as

linearRGBin → do_nothing → convert to sRGB → sRGBout

What I meant was: it’s an sRGB profile, but it is saying that the values in the file are not gamma-encoded per the sRGB spec - apply this curve to those values to get the values that match the sRGB gamma curve.

No just a thinkpad, a few years old nothing special.

The invert functionality doesn’t exist in RT, but I have already coded something that seems to work. Hopefully it will be of some use… :grinning:

But I got confused when I chose:

Color Management → no profile

I thought that I was somehow losing data by converting from linear RGB to sRGB.

Probably because the penny hasn’t dropped on how RGB colour spaces work under the hood.

Thanks very much for your help!

I think in general RT isn’t the right application as it seems you want to work in RGB, and specifically in linear RGB, and more to the point, you want to know what color space and what color space TRC the operations are using. Many RT operations work internally in LAB space.

As you have something coded up that works, that’s great! But if you are interested in an additional option, PhotoFlow does have an invert operation, and also allows you to choose whether to operate on linear or perceptually uniform RGB.


I have been going through some of the articles on your website, and I think the penny has finally dropped! :grinning: Well maybe…

I will attempt to explain what I couldn’t understand, partly because there is a fair bit of erroneous info out there; this may help others and may help explain what I am trying to do.

What I could not understand was how sRGB, AdobeRGB, ProPhotoRGB or any RGB colorspace could have a wider gamut, or a different gamut, if it was using the same numbers.

For example, if I used 4 bits per colour, I get 4096 colours (16 × 16 × 16), putting aside the fact that I might not be able to see all 4096 and that they may have different gamma corrections applied. Those numbers are fixed, so I couldn’t get my head around 4-bit AdobeRGB having a different gamut than, say, 4-bit sRGB.

Until the penny dropped that the definition of the colours used to create the colour is in fact different.
Or put another way, the “chromaticity” of the primaries is different: they are using a different Red, Green, and Blue to mix with.
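That penny-drop can be made concrete with two matrix multiplications. The matrices below are the standard published linear-RGB→XYZ matrices for sRGB and AdobeRGB (1998), both D65: the very same code triple (1, 0, 0) lands on a different XYZ point, i.e. a physically different red, in each space:

```python
# Standard linear RGB -> XYZ (D65) matrices; rows give X, Y, Z.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
ADOBERGB_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
]

def to_xyz(m, rgb):
    """Multiply a 3x3 matrix by an RGB triple."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

red = (1.0, 0.0, 0.0)             # the same code values...
print(to_xyz(SRGB_TO_XYZ, red))   # ...are a different colour in each space
print(to_xyz(ADOBERGB_TO_XYZ, red))
```

Note that white (1, 1, 1) maps to the same luminance (Y ≈ 1.0) in both: the spaces share a white point but mix it from different primaries.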

So in my case, when I said I have a linear RGB file (which is what you get when you ask VueScan or SilverFast for a “raw” file), it is actually a file where the “chromatic primaries” are the same as those used in sRGB, but without any of the normal gamma encoding applied. i.e. the intensities stored in the file are the ones actually measured, expressed in terms of the sRGB primaries, instead of being gamma-encoded as they would normally be.

Or simply: I had a bog-standard sRGB file, with the gamma encoding removed.

It’s now my guess that selecting
Color Management → no profile
expects linearly encoded values, but with some other values used for the “chromatic primaries”, which is the reason for the funny colours.

Anyway back to what I am trying to do:

So far I have been able to apply the following equation

[screenshot of the equation]

using ImageMagick, in both RGB and LAB colorspaces, on film scans of colour and black-and-white negatives.

In the equation, Jn represents the intensity measured on the film scan, Yp represents the gamma of the photographic paper (typically 1.8), Yc represents the gamma of the colorspace used to store the file, and K represents the exposure or gain applied.

It seems to work…

However K is destructive, in that the optimal value of K will in many cases result in some clipping.
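Since the equation itself is only visible in the screenshot, here is a hedged sketch of that clipping behaviour, assuming (purely as an illustration, not necessarily the actual formula) an inversion of the shape out = (K · Jn)^(Yp/Yc). Once K pushes any value above 1.0, the result clips when stored:

```python
# Hypothetical stand-in for the inversion equation: out = (K * Jn) ** (Yp / Yc)
# Jn = scanned intensity, Yp = paper gamma, Yc = storage gamma, K = gain.
Yp, Yc = 1.8, 2.2

def invert(jn, k):
    return (k * jn) ** (Yp / Yc)

scan = [0.05, 0.2, 0.5, 0.9]           # toy "negative" intensities
k = 1.4                                # a gain chosen for the shadows...

out = [invert(jn, k) for jn in scan]
clipped = [min(v, 1.0) for v in out]   # ...but it clips the brightest value
print(out)
print(clipped)
```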

Which is why I want to incorporate RT somewhere in the process, not just at the end. Of course I am not sure yet how to do that, and perhaps it’s impossible to do outside of RT. But that’s where I am.

I hope this babble makes sense.

ImageMagick is okay at doing color conversions as long as you provide it with your own ICC profiles, etc. The thing a user needs to get used to is that IM needs everything to be declared super explicitly all the time, whereas the apps we are used to in this forum have set-it-and-leave-it color management. So it is easy to miss or misunderstand something when you are using IM; you have to know exactly what you are doing.

  1. You can actually drag and drop the image into the post editor. No need to upload it elsewhere.

  2. Looking at the formula, though I am not a mathematician or programmer, it looks like a generalized form. E.g., K could be expanded, as exposure, gain, bias, etc., should not be described using just one variable and relation.

  3. Have you tried asking for advice in the IM forums? They are quite helpful and quick in responding.

Yes, exactly. At any given bit depth all RGB color spaces have the same number of unique colors, regardless of how big or how small that color space might be. RGB color spaces are three-dimensional volumes in XYZ space, so it doesn’t help that usually we are looking at 2D drawings. But in larger color spaces these “same number of colors” are spread further apart, and in smaller color spaces they are closer together. This is why people caution against editing in large color spaces such as ProPhotoRGB when working at 8-bit precision - compared to sRGB, the ProPhotoRGB colors (the same number of colors) are spread further apart.
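A rough way to see the “spread further apart” point numerically: compare how far one 8-bit code-value step along the red axis travels in XYZ for sRGB versus ProPhotoRGB. The primary coordinates below are the published matrix columns (sRGB is D65, ProPhoto is D50; the white-point difference and the TRCs are ignored here, since only relative step sizes matter for the illustration):

```python
import math

# XYZ coordinates of the red primary (first matrix column) for each space.
SRGB_RED = (0.4124, 0.2126, 0.0193)
PROPHOTO_RED = (0.7977, 0.2880, 0.0)

def step_length(primary, bits=8):
    """XYZ distance covered by one linear code-value step along the red axis."""
    step = 1 / (2 ** bits - 1)
    return math.sqrt(sum((step * c) ** 2 for c in primary))

print(step_length(SRGB_RED))      # smaller steps in the smaller space...
print(step_length(PROPHOTO_RED))  # ...noticeably bigger steps in ProPhoto
```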

This link allows a 3D view of sRGB compared to AdobeRGB (“ClayRGB”). The view was made using ArgyllCMS and does require allowing javascript to execute:

ClayRGB is the colorful semitransparent volume, and sRGB’s volume is indicated by the white grid. You can spin the color gamuts around in any direction. The color gamuts are shown in LAB space, even though they are defined in XYZ space.

I’m guessing the clipping will be in the highlights? Regardless, if you work at 32-bit floating point you can retrieve the otherwise clipped colors. ImageMagick can be built to work at 32-bit precision, but on most Linux distributions I’m guessing you might have to build it yourself and specify Q32. @afre - does IM clip at Q32? I seem to recall it doesn’t, but I’m not sure.

High bit depth GIMP 2.9 and PhotoFlow and also darktable all work at 32f, and will output 32f images that are not clipped if you choose a file format that supports floating point.

@afre is right that ImageMagick can do the ICC profile conversions if you pass along the correct parameters. My concern is with the ImageMagick internal settings for doing the conversions. LCMS can be set up for speed or for precision. Back in the days of LCMS1, ImageMagick used the “speed” settings, which led to less precise ICC profile conversions. I pointed this out to the IM devs, but didn’t get any response, so I dropped the topic.

I haven’t checked to see what the IM code uses today or whether it even makes a difference. For a while LCMS2 was making less precise conversions than LCMS1 for matrix-to-matrix profile conversions. But after some complaints and comparisons and demonstrations of artifacts produced by the LCMS2 code, the LCMS2 code was changed to produce more precise matrix-to-matrix conversions, and I think maybe today for matrix conversions the LCMS settings might not matter. But I’m not sure. So I don’t use IM for ICC profile conversions. Probably I should redo the tests for ImageMagick, but I just don’t feel like it :slight_smile: !

@Elle I haven’t used IM seriously in a long while, so I tend to refer people to their forums. Thoughts:

  1. The reason that Q32 doesn’t exist in binary form is that it is slow on most machines (there is a speed comparison chart somewhere), unless you compile your commands instead of using CLI only. This is why I use G’MIC nowadays. That and because G’MIC contributors and users tend to be better conversationalists :slight_smile:.

  2. In terms of development, unless you are a patron or sponsor (edit: or can carry a concise low-level conversation), the developers won’t be very responsive to or transparent with you. However, they are always trying to improve IM based on feedback. If you follow their change logs, you would see that they adjust things all of the time, including backpedaling on things if things don’t work out or if a better method is found.

Indeed! I was quite happy with sRGB but I have now realised I need to understand a little more what that actually means under the hood. I think I have got more of a handle on it…

[screenshot of the equation]

Not a mathematician either, but I can program, and I think I understand it. K is just the exposure of the positive in the enlarger; it’s the other terms that are more complex, or left out because they can’t be changed after the fact. The gamma represented by the paper is not actually constant, and if we are talking B&W its value can be changed. Plus of course the film can be changed to suit the scene.

Yes, they are quite helpful answering questions on how IM works, and as a result I think the script works. Not so much on the bigger-picture stuff.

I get the latter part, which in summary states that different “chromatic primaries” need greater precision to get reliable results.

But am I correct in saying that it is actually the choice of those “chromatic primaries” that gives the larger gamut, and that this is the fundamental difference? And that from a sensor perspective you need to be able to measure in terms of those primaries, and from a display perspective you need to be able to control the output in terms of those primaries?

Exactly. Which is what normally happens when making a print unless you dodge, or you apply a little bit of magic.

How does that work? If I go back to the 4-bit analogy, does that mean I use another 4 bits to hold values greater than 15? And then I could, for example, use the exposure tool or the highlight compression tool to bring the values back down below 15?

I built IM already but I never looked at the options so that’s not a problem. Can RT work this way?

Hi @LaurenceLumi - I’ll post some answers to your other questions tomorrow. But regarding floating point, the Wikipedia article has a succinct summary: Floating-point arithmetic - Wikipedia “In computing, floating-point arithmetic is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision.”

In other words, for any given precision, floating point is less accurate than integer, but the amount of memory used is the same. I tested this with GIMP by saving various bit-depth versions of the same large XCF layer stack to disk: For any given bit depth floating point and integer precision do use almost the same amount of disk space when storing the XCF files.

Somewhere on the internet is a nice article that I think is titled something like “What every computer scientist should know about floating-point arithmetic”. Maybe someone here knows the exact article, though no doubt there are many such articles. As a programmer you can get in trouble with floating point because what look like “exact numbers”, such as 0.1 or 0.3, aren’t exact in floating point, so comparisons get tricky.

As an end user of software, you don’t need to worry about it, except to know that 16-bit floating point is less precise than 16-bit integer, and 32-bit floating point is more precise than 16-bit integer, and 64-bit precision - floating or integer - is overkill for anything we do in the digital darkroom, as well as not being commonly supported.
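The precision ordering above (16-bit float < 16-bit integer < 32-bit float) and the comparison trap can both be seen with a few lines of Python. The mantissa-bit counts are from the IEEE 754 binary16/binary32 formats; comparing steps in [1.0, 2.0) against total integer levels is a rough illustration of why binary16 cannot distinguish all 65536 integer tonal levels:

```python
import math

# Distinct values each float format can represent between 1.0 and 2.0:
half_steps = 2 ** 10     # binary16: 10 explicit mantissa bits
single_steps = 2 ** 23   # binary32: 23 explicit mantissa bits
int16_levels = 2 ** 16   # tonal levels in a 16-bit integer pipeline

print(half_steps < int16_levels < single_steps)  # True

# And the classic comparison trap:
print(0.1 + 0.2 == 0.3)                          # False!
print(math.isclose(0.1 + 0.2, 0.3))              # True: compare with a tolerance
```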

Regarding building IM using Q32, check the documentation on the IM website - I think to get floating point you also need to use the hdri option to enable High Dynamic Range Images formats and output. Gentoo - the version of Linux that I use - allows the user to choose both Q32 and hdri. As an aside, if you can build IM I’m guessing you are pretty good at compiling software! I seem to recall that building IM was a bit tricky.

Regarding RT, RT uses 32-bit floating point internally. I’m not sure at what points RT code clips values that are >1.0 or <0.0. But it does do some clipping at least in some of the internal color space conversions. At this point RT does not output to disk in a floating point format, so even if it didn’t clip internally, you’d lose information upon export.

darktable, PhotoFlow, and GIMP 2.9 all process at 32f and don’t clip internally (though the precise details vary between the three programs), and all allow exporting to disk using file formats (16- and 32-bit floating point tiffs and/or openexr files) that support high dynamic range images.

As a caution, at floating point precision division can produce numbers beyond what the computer can actually represent, resulting in infinities or NaNs.


It has to do with how tones are typically represented using floating point. Most software I’ve dealt with represents the tone range using the range 0.0-1.0. Black would be, in RGB, 0.0, 0.0, 0.0; white would be 1.0, 1.0, 1.0.

So, when manipulations are performed, the channel components of highlights that get pushed past the white limit of 1.0 just keep on going, e.g., 1.002. With integer-based tone ranges, the white limit is usually the maximum positive value, e.g., 255 for 8-bit, and calculations that push values past the white limit do things like wrap around back to 0.

Now, when a floating point image thusly munged is to be displayed, these values > 1.0 need to be corralled back into the range below 1.0. The simple thing to do is to clip them back to 1.0, but the information is there to be dealt with more sophisticatedly, in various highlight reconstruction algorithms.
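A tiny sketch of that difference, using Python floats for the 0.0–1.0 tone range and plain integer arithmetic to mimic an unsigned 8-bit pipeline:

```python
# Floating point lets highlights sail past 1.0 and be recovered later:
pixel = 0.9
pushed = pixel * 1.2          # ~1.08: out of range but still meaningful
recovered = pushed / 1.2      # back to ~0.9: nothing was lost

# An unsigned 8-bit pipeline has no such headroom:
v = 200 + 100                 # "300" cannot exist in uint8...
wrapped = v % 256             # naive modular arithmetic wraps to 44
clipped = min(v, 255)         # careful code clips to 255 instead

print(pushed, recovered)
print(wrapped, clipped)
```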

I found the link to the reference @Elle cited, but it may quickly tell you more than you really wanted to know… :smiley:
