Following "There are no overexposed digital images" (there are only clipped dynamic ranges, a point many readers apparently missed), here is the second pass, on the mapping part.
This video explains the colour pipeline of an HDR-capable videogame engine and the challenges it faces. As it turns out, making video games HDR-ready has taken the same path as ACES or as darktable (not following the ACES standard to the letter, but rather its spirit): fully separate the master from the display, and deal with the display only at export time. Meanwhile, make minimal assumptions about the master.
An important concept that Alex Fry introduces is display mapping, which combines gamut and tone mapping. While you want them separated in your workflow, for better ergonomics, they are only two sides of the same coin and need to be dealt with together, and there is still no one-size-fits-all magic trick to achieve that.
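To make the coupling concrete, here is a minimal sketch (not the pipeline from the video; the curve and white point are placeholders) showing why the two operations interact: after a tone curve, a naive per-channel clip to the display gamut can still distort hue, so the gamut step cannot be designed without knowing what the tone step did.

```python
def tone_curve(x, white_point=4.0):
    """Toy Reinhard-style curve mapping [0, white_point] to [0, 1].
    The white point (4.0 here) is an arbitrary assumption."""
    return x * (1.0 + x / white_point**2) / (1.0 + x)

def display_map(rgb, white_point=4.0):
    """Tone-map each channel, then 'gamut-map' by clipping to [0, 1].
    The clip silently changes the R:G:B ratios of out-of-gamut pixels,
    which is why tone and gamut mapping are two sides of the same coin."""
    toned = [tone_curve(c, white_point) for c in rgb]
    return [min(max(c, 0.0), 1.0) for c in toned]
```

Note that `tone_curve(white_point)` maps exactly to 1.0, so in-range pixels pass through the clip untouched; only values above the white point get their hue skewed by the clip.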
A note about the filmic mapping shown in the video: it is the separated-channel variant (curve applied on independent R, G, B channels), not the chroma-preserving approach (which is very similar in spirit to the mapping they show in ICtCp).
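The difference between the two variants can be sketched in a few lines. This is a toy illustration, not darktable's actual filmic curve: a simple Reinhard stand-in, with the channel maximum used as a stand-in norm for the chroma-preserving case.

```python
def curve(x):
    """Placeholder tone curve (Reinhard); the real filmic curve differs."""
    return x / (1.0 + x)

def per_channel(rgb):
    """Separated-channel variant: the curve is applied to R, G, B
    independently. Each channel compresses by a different amount, so
    highlights desaturate and hues can skew."""
    return [curve(c) for c in rgb]

def chroma_preserving(rgb):
    """Ratio-preserving variant: curve a single norm (here the channel
    max, an arbitrary choice), then rescale all channels by the same
    factor, keeping the R:G:B ratios and therefore the chroma."""
    n = max(rgb)
    if n == 0.0:
        return [0.0, 0.0, 0.0]
    scale = curve(n) / n
    return [c * scale for c in rgb]
```

For a saturated red like `[4.0, 0.5, 0.1]`, the per-channel variant changes the R:G ratio (each channel is squashed differently), while the chroma-preserving variant keeps it at exactly 8:1.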
All in all, this is the second pass of explanation as to why you should stop caring about individual values or magic numbers (like middle grey at 18%), or about preserving brightness as if it were a constant thing, and instead only care about mapping full ranges, from one end to the other, and move the display mapping to the far end of the master editing pipeline (or even disable it entirely if a second piece of software is connected at the end of your pipeline).
It’s also quite a shame to see that video games are now better colour-managed than most photo editing software.
(Thanks to a certain Troy S. for finding the video).