Yes, as the others have suggested, it is not an exposure issue. Here is a screenshot of three waveforms. Left: raw, no demosaic, everything else off. Middle: raw + WB (D65), no demosaic, everything else off. Right: raw + WB (D65), VNG4 demosaic, everything else off.
The red channel overexposure only appears post-demosaic, so it is easily fixed in the raw converter by reducing exposure, since the data is all there (this was the bit that confused my edit, as in the dt workflow it is so rare that I ever have to reduce exposure). Instead of doing it in post, you could do what Carvac suggests and leave two stops of headroom, but wouldn't that produce a slightly noisier image? If you wanted a nice OOC JPEG, that is how it would have to be done.
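To make the mechanism concrete, here is a quick sketch of why the red channel can look clipped after white balance even though the raw data is fine. The multiplier value is made up for illustration, not taken from any real camera profile:

```python
# Hypothetical numbers: a raw red sample below clipping (1.0)
# gets pushed past 1.0 by a typical-ish D65 red WB multiplier.
raw_red = 0.85            # raw value, NOT clipped
wb_red_gain = 2.0         # assumed D65 red multiplier for illustration

after_wb = raw_red * wb_red_gain        # 1.7 -> displays as "overexposed"
shown = min(after_wb, 1.0)              # what the clipped preview shows

# Reducing exposure in the raw converter rescales before output,
# so the real data is recovered rather than thrown away:
exposure_stops = -1.0
recovered = after_wb * (2 ** exposure_stops)   # back below 1.0

print(after_wb, shown, recovered)
```

The same arithmetic explains the headroom approach: exposing two stops lower keeps `after_wb` under 1.0 from the start, at the cost of pushing the scene data closer to the noise floor.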
Do the bits matter here? I thought this was purely a gamut issue, and that the only way to see this image properly would be on a wide-gamut monitor. My understanding is that the main benefit of higher-bit monitors is greater clarity around banding. That is, if you see banding on an 8-bit monitor, you can't be 100% sure whether the banding is in the image or due to the screen's lack of bits to display the gradient properly (which would be extremely rare, as 8 bits is fine for 99% of real-world scenarios). But if you have a 10-bit monitor and see banding, you can be sure it is in the image, not the screen's lack of bits.

How wide does a gamut have to be before 10 bits starts becoming a requirement? I am looking to buy a new monitor soon, and I will be prioritising gamut size over number of bits. True 10-bit monitors (not 8 bits + 2 bits dithering) are probably priced out of my range anyway.
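For anyone who wants to see the bit-depth side of this in numbers, here is a small sketch of how quantisation limits a gradient. It just counts distinct output levels; the gamut connection is that a wider gamut stretches each of those steps over a larger colour distance, so the same 256 levels band sooner:

```python
# Quantise a smooth 0..1 ramp to a given bit depth and count
# how many distinct output values survive.
def quantize(x, bits):
    levels = 2 ** bits - 1
    return round(x * levels) / levels

ramp = [i / 9999 for i in range(10000)]   # densely sampled smooth gradient

steps_8 = len({quantize(v, 8) for v in ramp})    # 8-bit: 256 levels
steps_10 = len({quantize(v, 10) for v in ramp})  # 10-bit: 1024 levels

print(steps_8, steps_10)   # 256 1024
```

So a 10-bit panel has four times as many steps to spread across the gamut, which is why the banding question and the gamut question are related but separate purchases.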