So I am not sure if this would work for you, and it may just be a fluke with the two images I tried… I transcribed the base curve values into the rgb curve and took a snapshot to compare each, and there was little to no difference between the base curve and the transcribed curve… so maybe you can experiment… the base curve is in 0-100 and the rgb curve is in 0-255, so you need to do a bit of math to convert… but if that is your workflow you might be able to go that route and have your spline?
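In case the conversion isn't obvious, here's a rough sketch of the math I mean (the node values below are made up, not from any real camera preset):

```python
# Base curve nodes are entered on a 0-100 scale, rgb curve nodes on 0-255,
# so each coordinate just gets scaled by 255/100.
base_curve_nodes = [(0, 0), (25, 32), (50, 65), (100, 100)]  # made-up example nodes

rgb_curve_nodes = [(x * 255 / 100, y * 255 / 100) for x, y in base_curve_nodes]
print(rgb_curve_nodes)  # [(0.0, 0.0), (63.75, 81.6), (127.5, 165.75), (255.0, 255.0)]
```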
Thanks for answering @priort! Not sure I understand what you’re saying. Is your suggestion to use the rgb curve instead of the base curve? That’s what I’m resorting to now, just to get the cubic spline.
But from what I can gather the base curve would be the better option because it operates in linear space. But I’m not sure, hence the spamming of this question:
Have you checked the manual (module reference section) and/or the tooltips in darktable? Those should answer your questions.
And if a question doesn’t get an answer, there’s usually a reason for that, and it’s not lack of readers…
So why would spamming a question change anything? (apart from annoying those that could have an answer…)
I guess my hope is that someone would eventually pick it up. Sorry.
rgb curve says:
RGB is a linear color space designed to capture and display images in additive synthesis. It is related to the capture and display media and does not isolate color and lightness information in the same way that Lab and XYZ color spaces do. This module works in ProPhoto RGB.
Which sounds good, I guess?
base curve on the other hand mentions nothing about linear. I’m still not sure if rgb curve can be a just-as-good replacement for base curve.
As noted above, check the tooltips… they are identical except for the output, which is listed as non-linear for the base curve and linear for the rgb curve… I tested it on a few images and I could not see a difference in the output when I plugged in the same curve…
Base curve tooltip: [screenshot]
RGB curve tooltip: [screenshot]
I was suggesting it because you need the spline option… my suggestion was to port the base curve values for your camera as a starting point, or tweak them to get one, and then add your per-image tweaks…
By no means did I do any in-depth analysis, but the results were visually similar for me when I toggled basecurve vs rgb curve with the same values and chroma preservation setting… FWIW
I think the main difference is that with the legacy pipeline order, the base curve comes early in the pipeline, before input color profile, so it is applied in camera space. On the other hand, rgb levels/curves are applied in the working space, after input color profile:
With the current workflow, base curve is at the end, just like filmic (one would normally not enable both, this is only done for demonstration):
Functionality-wise, base curve has the exposure fusion, which rgb curve does not have; and, as you have noted, the way the curve is constructed is different.
By definition, once you’ve applied a curve that is anything other than a straight line, it’s not really linear any more. If the “rgb curve” module states that the output is linear for any setting other than what is effectively a nop, it’s lying.
I guess you could call it pseudo-linear - it’s linearly encoded, but it is no longer a linear representation of the scene. But in that manner it’s exactly the same as basecurve - basecurve’s output is linearly encoded image data, but it’s fundamentally no longer linear because of the inherent nature of a tone curve.
Ya, I found it a bit confusing… I just wanted to show the tooltip, to show it is not defined exactly the same as the basecurve, but as noted I didn’t see any difference in the output between the two…
These tooltips need some more work, at least to specify linear compared to what.
If the input is linear, and we apply a curve, the output is by definition non-linear. Surely?
(Unless the “curve” is simply a straight line that passes through the origin, of course.)
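For what it's worth, a trivial numeric illustration of that point (the gamma value here is arbitrary, it just stands in for "some curve"):

```python
# A straight line through the origin preserves proportionality
# (double the input -> double the output); any real curve does not.
def straight(x):
    return 0.5 * x           # linear: straight(2 * x) == 2 * straight(x)

def curve(x):
    return x ** (1 / 2.2)    # non-linear: curve(2 * x) != 2 * curve(x)

for f in (straight, curve):
    print(f.__name__, f(0.4) / f(0.2))
# straight -> exactly 2.0
# curve    -> about 1.37, so proportionality to the input is gone
```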
Edit: removed my late night comment with little value.
Once a curve has been applied, the data is no longer linear (and it is no longer scene-referred), so the linear part of the workflow has finished.
I guess that’s why for both tone curve and rgb curve the output is marked as " …, display-referred" in the tooltip (as their output must be clipped to the 0…1 range).
The difference between the two is in the encoding: in one, pixel value is proportional to light energy, in the other, the pixel values are proportional to log(light energy).
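A made-up illustration of the difference (normalised to a middle grey of 0.18, not any particular log curve):

```python
import math

# In a linear encoding each extra stop of light doubles the pixel value;
# in a log encoding each extra stop adds a constant amount.
for stops in range(-2, 3):
    energy = 0.18 * 2 ** stops     # relative light energy
    linear = energy                # value proportional to the energy
    logenc = math.log2(energy)     # value proportional to log(energy)
    print(f"{stops:+d} EV  linear={linear:.3f}  log={logenc:+.2f}")
```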
Yes. There are two independent concepts: linear versus non-linear, and clipped versus non-clipped. Pixel values may be linear but clipped, or non-linear and non-clipped. More usually, linear goes with non-clipped (scene-referred) and non-linear goes with clipping (more precisely, “a maximum white exists”) for display-referred.
Within the workflow, we might do no curves or clipping until the end. But we might do some curves earlier (making the values non-linear).
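A toy example, just to show the two properties really are independent (arbitrary numbers, arbitrary gamma):

```python
scene = 4.0  # a linear, unbounded scene value, in units of "diffuse white"

linear_clipped   = min(scene, 1.0)               # still linear, but a maximum white now exists
curved_unclipped = scene ** (1 / 2.2)            # non-linear, yet not clipped
curved_clipped   = min(scene, 1.0) ** (1 / 2.2)  # the usual display-referred combination
print(linear_clipped, curved_unclipped, curved_clipped)  # 1.0, ~1.88, 1.0
```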
Well, there are also two concepts of linear in this discussion (and others):
- encoding of the light values
- relation between input and output.
With your reasoning, a lot of modules would be non-linear (all of them, in fact, as soon as you use a partial mask with them…)
I think being linear is not a requirement.
Display Referred means the images being manipulated are immediately transformed into the colour space of the display being used to perform the image manipulations […] which means restricting the image colour and dynamic range available during the creative manipulation process.
Digital Cinematography cameras took this to the next stage, and output an image that was not processed into a given colour space, but output in a format designed to deliver the maximum capture range of the camera - colour and dynamic range. This Capture Referred, or Camera Referred image could be processed into Display Referred
Scene Referred simply means the image data is maintained in a format that as closely as possible represents the original scene, without effective restriction on colour or dynamic range. This is not necessarily the same as the raw image data as exported from the camera (after any necessary debayering, etc), but attempts to ‘correct’ the image to better match the scene the camera was originally pointing at, which may include white point correction, gamut correction, etc.
[…] Scene Referred is also often in Linear Light, which while suitable for computer graphic rendering, is not suitable for grading workflows.
The process used to get images into Scene Referred space is to effectively undo the Capture Referred, or Camera Referred image, and reverse engineer it back into Scene Referred space. The theory being any camera pointed at the same scene would generate the same image in Scene Referred space, within the limitations of the capturing camera’s imaging capabilities.
And then comes the part which, if I understand correctly, proponents of the scene-referred workflow will dispute:
The problem is neither Display Referred, or Scene Referred, workflows really work well in the real world, and so inevitably compromises have to be made.
In reality, using a Display Referred workflow, with a suitable viewing LUT to maintain the timeline images in a colour space that is greater than the display colour space, can be a far better workflow, with far less complexity and issues to overcome, with less image manipulations being performed, and so potentially the best final image quality.
Not true. In this case the pixel values are, at least compared to the scene, no longer proportional to anything, since that curve is not a log.
They are still linearly proportional to output light energy if the display is behaving properly. In this regard - basecurve, RGB curve, filmic, and even jandren’s sigmoidal transform output linearly encoded data. Nonlinear encoding doesn’t happen until the final output display transform (which is separate from basecurve/filmic/etc). At that point you go from linear->linear (common for most high-bitdepth TIFFs), linear->sRGB (most JPEGs), or possibly linear->PQ (if you’re planning on transcoding the image to a video clip for display on an HDR10 display, since that’s really the only widely deployed way to distribute PQ or HLG HDR content).
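To be concrete about the linear->sRGB step (this is just the standard sRGB transfer function, nothing darktable-specific):

```python
# Per-channel sRGB encoding of linear values in 0..1.
def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Middle grey (0.18 linear) encodes to roughly 0.46, which is why linearly
# encoded files look dark if you view them without a display transform.
print(srgb_encode(0.18))  # ~0.461
```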
Interesting quote there. As an FYI, DaVinci Resolve by default operates on nonlinear data in its internal timeline, and it’s considered one of the gold standards in color grading. It does, however, have a VFX module (Fusion), and since Fusion is dependent on physics-based rendering, Fusion does default to operating on linear data. (With linear<->nonlinear conversions performed when crossing the boundaries)
Interesting. In an old thread, I pondered about how we should go about using nonlinear and linear informed functions in a single workflow.
BTW - Is this discussion about (non-)linear going off-topic?
It’s not that simple. It can be non-linear compared to scene emission but still linearly encoded compared to the display OETF.
Which is why “linear” means nothing in itself, you need to say linear to what.
All in all, you get scene-referred, which means unbounded and linear compared to scene emission, and display-referred, which means bounded to medium white but doesn’t say whether it’s linearly encoded or not.