As an example, for Sony cameras, this is the effective transform applied after sRGB encoding (the S-curves look a little nicer in gamma-compressed space, which is likely why RT’s tone curve tool still operates there):
RT’s AMTC was quite far off here; I’m not sure whether that’s an error in my math or whether lens distortion threw it off.
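For reference, a minimal sketch of the idea behind those plots - pair each gray patch’s linear raw value (sRGB-encoded) with the value the camera JPEG actually produced for the same patch, then plot one against the other. The `raw_linear` / `jpeg_values` arrays below are made-up placeholders, not my actual measurements, and this is not the exact tool I used:

```python
import numpy as np
import matplotlib.pyplot as plt

def srgb_encode(x):
    """Standard sRGB OETF (linear toe, then the 1/2.4 power segment)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# Hypothetical matched gray-patch samples: linear raw values (white-balanced,
# matrixed, normalized 0..1) and the corresponding camera JPEG values (0..255).
raw_linear  = np.array([0.005, 0.02, 0.05, 0.10, 0.18, 0.35, 0.55, 0.75, 0.95])
jpeg_values = np.array([  12,   40,   78,  112,  145,  186,  215,  235,  252]) / 255.0

# The effective in-camera transform, viewed in gamma-compressed (sRGB-encoded) space:
# x = sRGB-encoded linear raw value, y = what the camera JPEG actually produced.
x = srgb_encode(raw_linear)
plt.plot(x, jpeg_values, "o-", label="effective camera tone curve")
plt.plot([0, 1], [0, 1], "--", label="identity (no extra curve)")
plt.xlabel("sRGB-encoded linear raw value")
plt.ylabel("camera JPEG value")
plt.legend()
plt.show()
```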
From a darktable perspective, this means that the only valid workflow is one based on the “legacy” model, where basecurve sits near the beginning of the processing pipeline - because a tone curve has already been applied. (To be clear: the legacy pipeline ordering with basecurve at the beginning, but with the basecurve module itself turned off - again, because the curve has already been applied.)
If a scene-linear workflow is desired, the tone curve needs to be reverse engineered and an appropriate ICC profile generated that embeds it (hopefully the gamut isn’t mangled too badly and THAT can at least be assumed). I’m planning on cleaning up and enhancing the tools used to generate the data from in the plots above over the holidays and putting them up on github.
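A minimal sketch of the reverse-engineering half, assuming you already have a sampled (monotonic) tone curve - the `curve_in` / `curve_out` arrays are placeholders for real measurements. The idea is just to invert and resample the curve into the kind of tabulated TRC an ICC profile carries (e.g. lcms2’s `cmsBuildTabulatedToneCurve16` accepts a 16-bit table like this):

```python
import numpy as np

# Hypothetical measured camera tone curve, sampled as (scene-linear in, encoded out),
# both normalized to 0..1 and monotonic. In practice these come from patch data.
curve_in  = np.array([0.0, 0.005, 0.02, 0.05, 0.10, 0.18, 0.35, 0.55, 0.75, 0.95, 1.0])
curve_out = np.array([0.0, 0.012, 0.05, 0.12, 0.25, 0.42, 0.62, 0.78, 0.89, 0.97, 1.0])

# The ICC TRC needs the *inverse* mapping (encoded value -> scene-linear value),
# so swap the axes and resample onto a dense grid.
n_entries = 4096
encoded_grid = np.linspace(0.0, 1.0, n_entries)
inverse_trc = np.interp(encoded_grid, curve_out, curve_in)

# Scale to 16-bit for a tabulated ICC tone curve.
trc_table_16bit = np.round(inverse_trc * 65535).astype(np.uint16)
```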
Edit: This gets into past discussions about the nature of the data at the output of basecurve/filmic/sigmoid - it’s linearly encoded but NOT scene-linear. I’ve used “display-linear” to describe this before. Put another way: if you linearize with a naive sRGB curve, you get display-linear data and must assume in your pipeline that you’re working with display-linear data. If you linearize with the camera’s actual response curve (calibrated using Robertson or Debevec), you get scene-linear data that is appropriate for a scene-linear pipeline. LuminanceHDR is probably easier to use for determining the response curve, but converting that response curve to an ICC profile is a little harder.
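For the response-curve route, a minimal sketch using OpenCV’s Debevec calibration (the filenames and exposure times are placeholders); LuminanceHDR’s response-curve recovery does essentially the same thing behind a GUI:

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same static scene, straight out of camera,
# plus their exposure times in seconds.
filenames = ["bracket_0.jpg", "bracket_1.jpg", "bracket_2.jpg", "bracket_3.jpg"]
times = np.array([1/500, 1/125, 1/30, 1/8], dtype=np.float32)
images = [cv2.imread(f) for f in filenames]

# Recover the camera response curve (256 entries per channel) with Debevec's method;
# cv2.createCalibrateRobertson() is a drop-in alternative.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)  # shape (256, 1, 3)

# Normalized, this is the linearization curve that maps the 8-bit encoded values
# back to scene-referred (rather than display-referred) data.
response = response / response.max()
```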