View/Modify color matrix in DCP files

how would you go about this on a given image?

Is this useful? LUT Generator | Tools | Learn & Help

Thought you might like this …good explanation of encoding Common RGB Color Spaces


This looks really cool as well I may have to try to master this stuff

https://opencolorio.org/

Y’might take a look at PhotoFlow; @Carmelo_DrRaw recently incorporated OCIO…

I saw that …came across this as well…given your work with spectra might be of some use to you down the line…GitHub - ampas/rawtoaces: RAW to ACES Utility

rawtoaces is where I got my monochromator-measured D7000 SSF dataset. I’m exchanging emails with them about their licensing; if that works out I’ll add their cameras to GitHub - butcherg/ssf-data: Spectral sensitivity data for digital cameras

Nice…

Could the standard 3x3 matrix be transformed in the same way? And if so, would it help solve its saturated fringing/gradient issues?

We consider the physical world of image capture to work linearly, so ideally we would like our processing to work linearly too. That would be possible if certain conditions were met in practice, but they typically are not. For instance, CFA spectral sensitivity functions would ideally be linearly related to the cone fundamentals in the retina. If they were, all we would need is a 3x3 matrix to bop around the needed colorimetric spaces.

Unfortunately that’s not the case: sensor SSFs are just an approximation to the ideal, and the problem is overdetermined, so there is an infinite number of possibly ‘correct’ matrices. If we want just one, the best we can do is come up with a Compromise Color Matrix that minimizes potential errors according to some criteria. Think of it as fitting a curve (or a plane, or a solid) to a lot of noisy dots. We sometimes try to compensate for some of the biggest errors via a look-up table, but even that is just another form of curve fitting that only really works for the finite number of tones that were corrected manually.
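To make the curve-fitting analogy concrete, here is a minimal sketch of how a compromise color matrix might be fit by least squares. The camera responses and target values here are made up (not real SSF data): measure the raw RGB of a set of training patches, pair them with their colorimetric values, and solve for the 3x3 matrix that minimizes the overall error.

```python
import numpy as np

# Hypothetical illustration: fit a 3x3 "compromise color matrix" by least
# squares. cam_rgb holds camera raw responses for N training patches,
# target_xyz the corresponding colorimetric values; both are synthetic here.
rng = np.random.default_rng(0)
true_m = np.array([[0.7, 0.2, 0.1],
                   [0.3, 0.6, 0.1],
                   [0.0, 0.1, 0.9]])
cam_rgb = rng.uniform(0.0, 1.0, size=(24, 3))                    # e.g. a 24-patch chart
target_xyz = cam_rgb @ true_m.T + rng.normal(0, 0.01, (24, 3))   # noisy "truth"

# Solve min ||cam_rgb @ M - target_xyz|| for M. Because real sensor SSFs are
# not a linear combination of the cone fundamentals, no single M fits exactly;
# the residual is the "noisy dots" error the post describes.
m_fit, *_ = np.linalg.lstsq(cam_rgb, target_xyz, rcond=None)
residual = np.abs(cam_rgb @ m_fit - target_xyz).max()
```

With perfectly linear (ideal) sensors the residual would be zero; in practice it never is, which is why the matrix can only be a compromise.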

So no, the standard 3x3 matrix cannot be made to work perfectly every time; it can only be made to work OK most of the time. All solutions that look more pleasing are just that: more pleasing, this one time, or perhaps in these types of conditions.

Jack

PS Some additional thoughts on this here.


Thanks for the link. I’ve translated the Arri LogC EI 800 tone curve to the g’mic parser and used it in PhotoFlow before the conversion to the linear Rec.2020 working space. I then used only levels and curves for tone manipulation, no saturation or channel mixer module.

(x > cut) ? c * log10(a * x + b) + d: e * x + f

-fill i=i/255;if(i>0.010591,i=0.247190*(log(5.555556*i+0.052272)/log(10))+0.385537,i=5.367655*i+0.092809);i*255
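For anyone who finds the g’mic syntax hard to read, the same curve (with the constants from the expression above, taking x as scene-linear input on a 0–1 scale) could be sketched in Python as:

```python
import math

# Arri LogC EI 800 constants as used in the g'mic expression above.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc_ei800(x: float) -> float:
    """Encode a scene-linear value (0-1) to LogC EI 800."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F
```

As a sanity check, 18% gray encodes to roughly 0.39 with these constants, which matches the middle-gray target discussed later in this thread.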

DSC08363.pfi (35.8 KB)

However the log unbreak profile in darktable works great too
DSC08363.ARW.xmp (8.4 KB)

@age Hey, thanks for sharing that. I never really got how to use the unbreak profile… clearly you do… I guess I need to get up to speed; just need some time to read…

I took a look at your XMP… what is the curve you used in the RGB curve module?

It restores and adds contrast, together with the levels tool.

It looked a lot like a basic ACR tone curve in shape… I was wondering if you created it by math or by dragging it visually; that was more my question…

Still not sure how to use the unbreak profile, but it seems powerful, especially with underexposed images; it seems to give you a better starting point…

I’m also interested in achieving this. I’ve been trying with LUTCalc and dcamprof, but my image is sometimes overexposed (after converting it back to Rec.709 with a LUT or with a color space transform in Resolve). Should I try to match middle gray from both curves at 18%, compensating with negative exposure (-2.4 EV on linear to EI 800 LogC)? Can you share your DCP?

Hi @comadrejo, and welcome!

… sometimes overexposed (after converting it back to rec709 with a lut or with color space transform in resolve)

Isn’t that overexposure visible already in resolve? What does the vectorscope show? Wouldn’t it be easier to fix this problem in resolve?

Have fun!
Claes in Lund, Sweden

If it is overexposed after converting to rec709 in Resolve, then try using tonemapping in the colorspace transform.


Thank you for the warm welcome my friend!

Sorry to take so long to get back to this thread! I use a (non-free) LUT editing/mixing app called Lattice to create 65536-point 1D LUTs (0–1.0 scale) and reformat the data to the linear RawTherapee curve format:

linear
0.00000000 0.000000000
0.00005678 0.003456789
...
1.00000000 1.000000000

Lumariver/dcamprof can build profiles using .rtc curve files, but the catch is that linear .rtc curves are assumed to be in sRGB gamma (i.e., designed to be applied to images encoded with sRGB gamma), so you need to apply an sRGB-to-linear transform before/beneath your logarithmic curve, or else the result will be too bright. In Lattice I build my logarithmic curve and then just apply it on top of an sRGB-to-linear transform before exporting.
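To illustrate the "sRGB-to-linear beneath the log curve" idea, here is a hypothetical Python sketch that generates a 65536-point linear .rtc curve: it undoes the sRGB gamma that dcamprof assumes, then applies a LogC-style curve using the EI 800 constants quoted earlier in this thread. This is a sketch of the concept, not Lattice’s actual pipeline.

```python
import math

def srgb_to_linear(v: float) -> float:
    # Inverse of the sRGB transfer function.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def logc_ei800(x: float) -> float:
    # LogC EI 800 constants from the g'mic expression earlier in the thread.
    cut, a, b, c, d, e, f = (0.010591, 5.555556, 0.052272,
                             0.247190, 0.385537, 5.367655, 0.092809)
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

n = 65536
lines = ["linear"]
for i in range(n):
    x = i / (n - 1)
    # Undo the sRGB gamma dcamprof assumes, then apply the log curve.
    y = logc_ei800(srgb_to_linear(x))
    lines.append(f"{x:.8f} {min(max(y, 0.0), 1.0):.9f}")
rtc_text = "\n".join(lines)  # write this out as the .rtc file
```

Note that the output never reaches 1.0, since LogC encodes its clip point around 95%; that is consistent with the exposure-offset discussion below.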

Now, the maximum linear curve size that dcamprof/Lumariver will write to a DCP is 8192 points, but I’ve found that when you’re using linear .rtc curves it’s best to feed it a curve with as many points as possible. I suspect this is because dcamprof transforms your curve with the linear-to-sRGB gamma adjustment before interpolating it, so, given the extreme nature of a linear-to-log curve, an 8192-point curve ends up with harsher quantization errors in the shadows. Also, I haven’t been able to get dcamprof to successfully compile profiles using tone curves that don’t include 0.0 0.0 and 1.0 1.0 points (like the standard LogC curve, which usually starts at 0.0928 and ends around 0.95), but, oddly, Lumariver has no problem with them.

Yeah, the exposure difference you’re seeing could be because you’re missing the sRGB-to-linear transform beneath your LUT to prepare it for dcamprof’s linear-to-sRGB transform. That would seem to explain the roughly 2.4 stops of overexposure and the lighter, washed-out shadow tones. Try building an sRGB-to-LogC curve in LUTCalc. Because sRGB is only defined over an 8-bit linear range, you’ll need to add somewhere around 5 stops of exposure adjustment in LUTCalc to get the clipping point near LogC EI 800’s specified 95%. I’ve found it frustrating to work in LUTCalc because, for the life of me, I can’t figure out how it calculates legal/extended range, and it’s hard to get black and white points to fall where I expect them to. My other concern with LUTCalc is that you can only export curves of up to 16384 points, and that might or might not be enough to get smooth shadow tonality after dcamprof’s sRGB gamma correction. If you’re just pasting the data into a DCP file it might work, but if you’re using it to build a profile with dcamprof you might get rough shadow transitions.

Even with the correct gamma, though, you might find that you still want some negative exposure compensation, depending on whether you meter your photos using still-photography or cinema conventions. If you go by your in-camera meter, or meter based on the ISO set on your still camera, you’ll typically be overexposing by about 2.5–3 stops in LogC. ACR/Lightroom also adds another third of a stop in order to roll off and reconstruct the highlights. If you shoot a color-checker image, the target values in LogC are approximately .43 for middle gray in ACR (which is close to .40 on a Rec.709 display) and about .60 for the white patch. Sensor clipping should be around 95%, but in Lightroom it will vary somewhat with white balance, etc.
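A quick back-of-the-envelope check of those numbers, again using the LogC EI 800 constants quoted earlier in this thread (the precise ACR offsets mentioned above are not reproduced here):

```python
import math

# LogC EI 800 constants from earlier in the thread.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc(x: float) -> float:
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_inv(y: float) -> float:
    return (10 ** ((y - D) / C) - B) / A if y > logc(CUT) else (y - F) / E

mid_gray = logc(0.18)                     # code value for 18% gray, ~0.39
clip_linear = logc_inv(0.95)              # scene-linear value at the ~95% clip point
headroom = math.log2(clip_linear / 0.18)  # stops between mid gray and clipping
```

With these constants, 18% gray lands around 0.39 and there are roughly 7.5 stops of headroom above it before the 95% clip point, which is why negative exposure compensation is usually needed when matching a still-camera meter to a LogC encode.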

Feel free to ask questions! I’m just starting to figure all this out myself, but I’ll be glad to help. I’ll try to post tomorrow about how to go about getting the transform from camera native to Arri wide gamut into the DCP profile.