View/Modify color matrix in DCP files

Nice…

Could the standard 3x3 matrix be transformed in the same way? And if so, would it help solve its saturated fringing/gradient issues?

We consider the physical world of image capture to work linearly, so ideally we would like our processing to work linearly too. That would be possible if certain conditions were met in practice, but they typically are not. For instance, CFA spectral sensitivity functions would ideally be linearly related to the cone fundamentals in the retina. If they were, all we would need is a 3x3 matrix to bop around the needed colorimetric spaces.

Unfortunately that’s not the case: sensor SSFs are only an approximation to the ideal. The problem is overdetermined, so there is an infinite number of possibly ‘correct’ matrices; if we want just one, the best we can do is come up with a Compromise Color Matrix that minimizes potential errors according to some criterion. Think of it as fitting a curve (or a plane, or a solid) to a lot of noisy dots. We sometimes try to compensate for some of the biggest errors via a look-up table, but even that is just another form of curve fitting that only really works for the finite number of tones that were corrected manually.
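To make the “fitting a curve to noisy dots” analogy concrete, here is a minimal sketch (with made-up training data, not any real camera’s SSFs) of fitting a 3x3 compromise matrix by least squares:

```python
import numpy as np

# Hypothetical example: fit a 3x3 compromise color matrix by least squares.
# cam: N x 3 camera-space responses for N training patches (synthetic data),
# xyz: N x 3 corresponding reference XYZ values, perturbed by noise.
rng = np.random.default_rng(0)
true_M = np.array([[0.9, 0.2, -0.1],
                   [0.1, 1.0, -0.1],
                   [0.0, 0.1,  0.9]])
cam = rng.uniform(0.05, 1.0, size=(24, 3))           # 24 "patches"
xyz = cam @ true_M.T + rng.normal(0, 0.01, (24, 3))  # noisy targets

# Solve cam @ M.T ≈ xyz in the least-squares sense.
M_T, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
M = M_T.T
print(np.round(M, 2))  # close to true_M, but never exact: the fit is a compromise
```

With real spectral data the residuals never vanish, which is exactly why any single matrix is a compromise.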

So no, the standard 3x3 matrix cannot be made to work perfectly every time; it can only be made to work OK most of the time. All solutions that look more pleasing are just that: more pleasing, this one time, or perhaps in these types of conditions.

Jack

PS Some additional thoughts on this here.


Thanks for the link. I’ve translated the Arri LogC tone curve (EI 800) to the G’MIC parser and used it in PhotoFlow before the conversion to the linear Rec.2020 working space. I then used only levels and curves for tone manipulation, no saturation or channel mixer module.

(x > cut) ? c * log10(a * x + b) + d : e * x + f

-fill i=i/255;if(i>0.010591,i=0.247190*(log(5.555556*i+0.052272)/log(10))+0.385537,i=5.367655*i+0.092809);i*255
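For anyone who wants to play with the same curve outside G’MIC, here is the expression above written out in Python (constants taken directly from that line; this is a sketch, not PhotoFlow’s implementation):

```python
import math

# Arri LogC (EI 800) encode, constants copied from the g'mic expression above.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def linear_to_logc(x):
    """Scene-linear (0..1) -> LogC EI 800 signal (0..1)."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

# 18% gray lands at ~0.391, and linear 0 maps to the 0.0928 log black level.
print(round(linear_to_logc(0.18), 3))  # 0.391
print(round(linear_to_logc(0.0), 4))   # 0.0928
```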

DSC08363.pfi (35.8 KB)

However the log unbreak profile in darktable works great too
DSC08363.ARW.xmp (8.4 KB)

@age Hey, thanks for sharing that. I never really got how to use the unbreak profile… clearly you do… I guess I need to get up to speed… just need some time to read…

I took a look in your xmp…what is the curve that you used in the rgb curve module??

It restores and adds contrast, together with the levels tool.

It looked a lot like a basic ACR tone curve in shape… I was wondering if you created it by math or by dragging it visually… that was more my question…

Still not sure how to use the unbreak profile, but it seems powerful, especially with underexposed images; it seems to give you a better starting point…

I’m also interested in achieving this. I’ve been trying with LUTCalc and dcamprof, but my image is sometimes overexposed (after converting it back to Rec.709 with a LUT or with a color space transform in Resolve). Should I try to match middle gray from both curves at 18%, compensating with negative exposure (-2.4 EV on linear to EI 800 LogC)? Can you share your DCP?

Hi @comadrejo, and welcome!

… sometimes overexposed (after converting it back to Rec.709 with a LUT or with a color space transform in Resolve)

Isn’t that overexposure already visible in Resolve? What does the vectorscope show? Wouldn’t it be easier to fix this problem in Resolve?

Have fun!
Claes in Lund, Sweden

If it is overexposed after converting to Rec.709 in Resolve, then try using tone mapping in the color space transform.


Thank you for the warm welcome my friend!

Sorry to take so long to get back to this thread! I use a (non-free) LUT editing/mixing app called Lattice to create 65536-point 1D LUTs (0–1.0 scale) and reformat the data to the linear RawTherapee curve format:

linear
0.00000000 0.000000000
0.00005678 0.003456789
...
1.00000000 1.000000000

Lumariver/Dcamprof can build profiles using .rtc curve files, but the catch is that linear .rtc curves are assumed to be in sRGB gamma (i.e., designed to be applied to images that are encoded with sRGB gamma), so you need to apply an sRGB-to-linear transform before/beneath your logarithmic curve or else it will be too bright. In Lattice I build my logarithmic curve and then just apply it on top of an sRGB-to-linear transform before exporting.
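For reference, the sRGB-to-linear transform in question is just the standard piecewise sRGB decode; a minimal sketch in Python:

```python
def srgb_to_linear(v):
    """Invert the sRGB encoding (piecewise IEC 61966-2-1 curve), v in 0..1."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Composing your log curve on top of this "pre-compensates" for dcamprof
# treating linear .rtc curves as sRGB-encoded.
print(round(srgb_to_linear(0.5), 4))  # 0.214
```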

Now, the maximum linear curve size that dcamprof/Lumariver will write to a DCP is 8192 points, but I’ve found that when you’re using linear .rtc curves it’s best to feed it a curve with as many points as possible. I suspect that this has to do with the fact that dcamprof is transforming your curve with the linear to sRGB gamma adjustment before interpolating it, so, with the extreme nature of a linear to log curve, if you feed it an 8192 point curve you end up with harsher quantization errors in shadows. Also, I haven’t been able to get dcamprof to successfully compile profiles using tone curves that don’t include 0.0 0.0 and 1.0 1.0 points (like the standard log-c curve which usually starts at 0.0928 and ends around 0.95) but, oddly, Lumariver has no problem with them.

Yeah, the exposure difference you’re seeing could be because you’re missing the sRGB to linear transform beneath your LUT to prepare it for dcamprof’s linear to sRGB transform. That would seem to explain why you’re seeing about 2.4 stops of overexposure and the lighter, washed-out shadow tones. Try doing an sRGB to Log-C curve in LUTCalc. Because sRGB is only defined in an 8-bit linear range, you’ll need to add somewhere around 5 stops of exposure adjustment in LUTCalc to get the clipping point near Log-C 800’s specified 95%. I’ve found it frustrating to work in LUTCalc because for the life of me I can’t figure out how they calculate legal/extended range, and it’s hard to get black and white points to fall where I expect them to. My other concern with LUTCalc is that you can only export up to 16384-point curves, and that might or might not be enough to get smooth shadow tonality after dcamprof’s sRGB gamma correction. If you’re just pasting the data into a DCP file it might work, but if you’re using it to build a profile with dcamprof you might get rough shadow transitions.

Even with the correct gamma, though, you might find that you still want to use some negative exposure compensation, depending on whether you are metering your photos using still-photography or cinema conventions. If you go by your in-camera meter, or meter based on the ISO set on your still camera, you’ll typically be overexposing by about 2.5-3 stops in Log-C. ACR/Lightroom also adds another third of a stop in order to roll off and reconstruct the highlights. If you shoot a color checker image, the target values in Log-C are approximately .43 for middle gray in ACR (which is close to .40 on a rec709 display) and about .60 for the white patch. Sensor clipping should be around 95%, but in Lightroom it will vary some with white balance, etc.

Feel free to ask questions! I’m just starting to figure all this out myself, but I’ll be glad to help. I’ll try to post tomorrow about how to go about getting the transform from camera native to Arri wide gamut into the DCP profile.

Hi Nate,

The DNG spec says the following about ProfileToneCurve on p. 56:

ProfileToneCurve

It would appear that it should be applied in the linear version of the working/output space before applying gamma, and that’s what I do when I roll my own conversions.

Even so, applying the same curve equally to all three RGB channels results in chromaticity shifts (typically a saturation boost). One of the key strengths of DcamProf/Lumariver is that it implements its own neutral tone reproduction operator and works interactively with the ‘look’ LUT to undo chromaticity shifts introduced by the tone curve, if present, thus producing the desired neutral output.

Jack

Thanks Jack! Yeah, you’re absolutely correct, ultimately the DCP curve needs to be in linear gamma with respect to your working profile (which of course is linear Prophoto in ACR). What I was saying though was about how Lumariver/Dcamprof interprets all .rtc format tone curves as being in sRGB when building a profile. So if you want to load an .rtc curve with a linearly interpolated tone curve (as opposed to spline format) then you need to ensure that it is in fact designed to work with sRGB encoded images. The way I do that is to combine it on top of/after an sRGB to linear curve transform so that it’s correct after dcamprof “linearizes” it.

From the Lumariver manual:

The RawTherapee format doesn’t specify gamma, the curve is interpreted as being in sRGB, unless loaded as a custom spline when it will be interpreted as in the currently selected viewing gamma. While you can use the JSON format with the DCamProf command line tool and the .rtc format with RawTherapee, the main purpose of using these text formats is that the formats should be easy to hand-edit if needed, or imported/exported to/from a spreadsheet or some other technical software.

If you have a huge table in a spreadsheet or similar that you only want to be linearly interpolated (there’s a limit to how many spline handles you can have), you can’t load it as a spline, but you can load it as a custom curve. Make a .rtc file with all the values (X, Y from 0 to 1) but set “Linear” in the first row instead of “Spline”. If you need to specify a different gamma than sRGB you must use the JSON format, with “CurveType” set to “Linear” and “CurveGamma” to a desired real number.

If you want to specify a pure gamma curve, the JSON format supports that too, like this (gamma 1.8 in this example):

{ "TRC": 1.8 }

For normal display-referred DCP profiles I’d agree that the neutral TRO can be a good option, but for more technical scene-referred logarithmic profiles like we’re talking about it’s important to maintain the linear RGB ratios so that when the log curve is linearized using RGB curves (or an RGB LUT) later in the pipeline the colors fall back to their correct values.

This is difficult in ACR/Lightroom because the default behavior is to use a hue-clamped TRO. Thankfully, Lumariver provides an “RGB” tone reproduction operator that works with the profile tone curve and counteracts ACR’s default TRO to give something very close to a pure RGB response. I’m not sure if this is possible in dcamprof or not; it doesn’t seem to be listed in the online documentation with the other TRO options.

Other RAW converters like Raw Therapee or Darktable might default to a pure RGB TRO or might have their own way of doing it, but I’m not sure. If they do have a pure RGB tone curve response it would be best to use Dcamprof/Lumariver’s “simple” TRO rather than Neutral or RGB. If anyone knows how they work with DCP tone curves let us know!

So, just to clarify, what I’m talking about is making a DCP profile that essentially emulates Arri’s Log-C/Alexa Wide Gamut color and tonal response. The result is an image that compresses the full dynamic range and full color gamut of the sensor to fit within the range of an sRGB/rec709 display and editing pipeline in a way that is very similar to a film negative. The log-c curve is documented and can be easily reversed to scene linear data using widely available LUTs. The Alexa Wide Gamut color space is designed to work with this Log-C gamma. If you convert a linear image to Alexa Wide Gamut it appears desaturated and has some color shifts (cool reds, yellow greens, purplish blues), but in log gamma the effect is to anticipate and partially account for the RGB curve distortions to come in the linearization transform. The approach is not strictly accurate, but it is similar to how negative film works, where everything (theoretically!) falls into place gracefully when the expected pipeline is followed.
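To illustrate the “easily reversed” part: the Log-C curve is a fully invertible 1D transform, so the log encode and its exact inverse round-trip perfectly. A sketch using the EI 800 constants posted earlier in the thread:

```python
import math

# LogC EI 800 encode and its exact inverse (constants from the g'mic
# expression earlier in the thread). Demonstrates the reversibility claim.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def encode(x):
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def decode(t):
    breakpoint_t = E * CUT + F  # log-domain value at the linear/log crossover
    if t > breakpoint_t:
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E

for x in (0.0, 0.005, 0.18, 0.9):
    assert abs(decode(encode(x)) - x) < 1e-9
print("round trip OK")
```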

Gotcha, thanks!

<rant!!!>

I’m a bit unsettled about this LUTting and curving for the following reason: the means by which the tone and color characteristics of an image are described in its metadata is, conventionally, an ICC profile (ACES, OCIO, and Adobe folks, keep reading, I’m trying to describe the still image ‘lay of the land’… ). That profile describes the colorspace and tone to which the image was rendered, color in terms of a gamut, tone in terms of the departure from linear. We’ve beat color to death in recent threads, but tone needs some discussion.

In order to make the ICC transform work, the input image, in its gamut and tone, needs to first be transformed to linear XYZ (or Lab, but let’s not confuse things with that here), and then from that to the output colorspace and tone. So, in order to do step 1, the transform machine inverts the input profile’s tone curve to turn the image back into what it hopes is linear. Then in step 2 it applies the output profile’s tone curve to make the resulting image conform with the profile.
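A minimal sketch of that two-step machinery (not any real CMS; both input and output profiles are sRGB here, with the standard sRGB(D65)-to-XYZ matrix, so the round trip just returns the input):

```python
import numpy as np

# Step 1: invert the input profile's tone curve, matrix to linear XYZ.
# Step 2: matrix to the output primaries, apply the output tone curve.
M = np.array([[0.4124, 0.3576, 0.1805],   # sRGB (D65) -> XYZ
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def decode(v):  # sRGB -> linear (invert the input profile's tone curve)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def encode(v):  # linear -> sRGB (apply the output profile's tone curve)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

rgb_in = np.array([0.25, 0.5, 0.75])
xyz = M @ decode(rgb_in)                   # step 1
rgb_out = encode(np.linalg.inv(M) @ xyz)   # step 2
print(np.round(rgb_out, 4))  # == input, since both profiles are the same
```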

Thing is, at the conclusion of all the prior tone shenanigans, the output/display transform still thinks that the curved and LUTted image is linear to start its work.

Now, the ACES folk will not be embedding ICC profiles in their renders, they’ll be preparing them to look nice on specific media. We essentially do that in still images by saving our JPEGs to sRGB with the sRGB gamma, but we’re then assuming all the world will regard our image as sRGB in both tone and color. Fine for all us ‘digital granddads’ with our decade-old hardware, not so much for the whippersnappers with their HDR bigscreens… unless they’re color-managing, and then we’re back to niceness. Oh, display color-managing, BTW, is reliant on embedded ICC profiles, in 2020…

In rawproc, my current processing pipe is totally ‘linear’ clean up to the display/output transform (well, if the last tool is selected for display), and all my filmic and control-point curve noodlings are considered to still be ‘linear’ for the above purpose…

Need to keep the ‘display-referred’ state distinct, IMHO…

</rant…>

I totally agree, ideally there should only be one display-referred transform at the very end of the editing pipeline (or two if you perform tone and gamut mapping as separate steps before gamma correction and conversion to the display color space). And yes, as you said, I think that everything before that should happen in a scene-referred editing space, BUT I don’t think that it’s necessary or even ideal for the image to remain in linear gamma throughout the whole editing process. Log gamma is also scene-referred, and conversion from linear to log and back again really only requires a completely reversible 1D transform.

Honestly, (if you’ll permit me my own little rant, :stuck_out_tongue_winking_eye:) I just can’t understand why there aren’t any (besides Raw Photo Processor, kind of) digital photo processing apps that use logarithmic histograms and tone curves instead of linear- and/or gamma-encoded ones. Wouldn’t it be simply amazing to load a raw photo and be presented with a histogram that shows your image data scaled over a (labeled!!) 16-stop scene-referred EV scale?? You’d immediately be able to see that your image has, say, 9 stops of dynamic range, and could use the exposure slider to move those 9 stops back and forth throughout the 16-stop range. Instead of trying to “recover” highlight and shadow detail from the digital ether, you’d have a glorious tone curve tool with 16 stops of scene-referred range across the x-axis. If you wanted to emulate the density response of film, you could make an RGB tone curve that covers that entire 16-stop range and use scene-referred exposure to linearly move your image’s 9 stops up and down the film curve to emulate either over- or under-exposure of the film stock… there’s so much you can do when you can edit in a logarithmic space (working with linear data below a log transform) that is larger than your camera’s dynamic range instead of smaller. (This is ACES’ approach with their ACEScc and ACEScct working spaces.)
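The EV histogram idea is easy to prototype; a sketch (with synthetic image data, just to show the binning) that buckets linear values by stops relative to 18% middle gray over a 16-stop range:

```python
import numpy as np

# Scene-referred EV histogram: bucket linear pixel values by stops
# relative to middle gray (0.18). Image data here is synthetic.
rng = np.random.default_rng(1)
linear = rng.lognormal(mean=np.log(0.18), sigma=1.0, size=100_000)

ev = np.log2(np.clip(linear, 1e-8, None) / 0.18)
bins = np.arange(-10, 7)  # 16 one-stop bins, -10 EV .. +6 EV around mid gray
hist, edges = np.histogram(ev, bins=bins)
for lo, n in zip(edges[:-1], hist):
    print(f"{lo:+3d} EV: {'#' * (n // 2000)}")
```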

We’re taught to think in terms of stops and exposure values throughout the whole metering and capture process but when it comes to editing, there’s no sign of an EV value scale, no visual indication of the dynamic range of our image in stops, etc. It’s just crazy to me!

Logarithmic, scene-referred histograms and tone mapping curves just seem so intuitive and creatively liberating, but most photo editing apps are still stuck in 2003, when DSLRs topped out at 6.5 or 7 stops of linear range. Now that we can capture 14-bit files with 12 stops of scene-linear detail and 14+ total stops of scene tonality, it’s time to move beyond 8-bit tone curves. I mean… if you think about it in terms of negative film (or digital cinema cameras!), where middle gray density is about 5 stops up from the “noise floor”, the true middle gray in a 14-stop Sony Raw file is about 6-7 stops down from clipping (yeah, yeah, “but ETTR!!”:stuck_out_tongue_winking_eye:) which is 0.78% or 2/255 on a linear tone curve. Most raw processors don’t even let you put a curve point that close to zero, let alone give you enough precision to do anything useful with those 7 bits of data.
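(For anyone checking the arithmetic: 7 stops below clipping as a linear fraction really is that tiny.)

```python
# A value 7 stops below clipping, as a linear fraction and out of 255.
frac = 2 ** -7
print(f"{frac:.4%}")         # 0.7812%
print(round(frac * 255, 1))  # 2.0
```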

Sooo…yeah, now it’s so late that I think I’ve forgotten what I was intending to say originally😆

:stuck_out_tongue_winking_eye:


I’ve been meaning to mess with a log transform in this manner since a Slack conversation I had some time ago with troy_s. This, in my little mind, is all part and parcel of what I’ll call “post-demosaic tone management”, for which there are a variety of ways to skin the cat. All are directed to the goal of rendering a perceptually pleasing image.

I’m putting it this way because there are a lot of folks out there just looking for A Way That Works, and I think all the recent discourse on filmic, scene-vs-display referred, auto-matched, and even this is ‘wading in the weeds’ of the essential objective. The wading is good, don’t get me wrong, but there are a lot of folks standing on the shore afraid of drowning… :scream:

Excellent point! There are certainly a lot of great solutions out there that give results that people are really happy with. OCCD (obsessive compulsive color disorder) isn’t for everyone :grin:
