Comparing the areas of gamuts plotted on the xy chromaticity diagram is not particularly meaningful, as that area is not normalized to any physiological or physical metric.
Likewise, “99% of [some colorspace]” and similar are mostly marketing terms.
Sure, most people can spot the expanded gamut in a side-by-side comparison, but it is not a coincidence that the market settled on “more-or-less sRGB”.
The usual way to put a meaningful metric on that area is perceptual difference, e.g. MacAdam ellipses. By that measure, the “pure green” corner usually missed by most (linear) colorspaces turns out not to be that important perceptually.
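To make that concrete, here is a minimal sketch (my own illustration, not from the comment above) comparing gamut triangle areas on the CIE 1931 xy diagram against the CIE 1976 u'v' diagram, which is closer to perceptually uniform (MacAdam ellipses are far more circular there). The primary chromaticities are the published sRGB and DCI-P3 values, and the xy → u'v' conversion is the standard CIE transform:

```python
# Compare RGB gamut triangle areas on the CIE 1931 xy diagram vs the
# CIE 1976 u'v' diagram (roughly perceptually uniform).

def xy_to_uv(x, y):
    """Standard CIE 1931 xy -> CIE 1976 u'v' transform."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

def triangle_area(pts):
    """Shoelace formula for a triangle given three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

# Published R, G, B primary chromaticities.
gamuts = {
    "sRGB":   [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "DCI-P3": [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
}

areas = {}
for name, prims in gamuts.items():
    areas[name] = (triangle_area(prims),
                   triangle_area([xy_to_uv(x, y) for x, y in prims]))
    print(f"{name}: xy area = {areas[name][0]:.4f}, "
          f"u'v' area = {areas[name][1]:.4f}")

# How much "bigger" is P3, in each diagram?
for i, diagram in enumerate(("xy", "u'v'")):
    ratio = areas["DCI-P3"][i] / areas["sRGB"][i]
    print(f"P3 / sRGB area ratio in {diagram}: {ratio:.2f}")
```

On these numbers, P3’s area advantage over sRGB drops from roughly +36% in xy to roughly +26% in u'v'; much of the extra xy area sits in the over-weighted green region, which is exactly the point about the “pure green” corner.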
Isn’t a version of this also true when it comes to editing for print? Monitors, even cheap non-OLED ones, are far more capable than even the most luxurious prints on the whitest paper under ideal gallery lighting.
In essence, you have to deliberately degrade your monitor’s output, lowering its contrast and brightness, just to bring it down to the level of print.
I believe OLED is worth it for the contrast alone. Setting aside the “pure black” argument, OLEDs also have higher overall contrast, which translates into a sharper image, or at least a higher level of perceived quality. I’d say even flatter photos, such as landscapes with a lot of fog, benefit from that improved contrast in a way that isn’t really perceivable on IPS monitors. It can also help your editing, for example by keeping you from piling on sharpening or local contrast to compensate for your monitor’s shortcomings.
I am actually a wee bit worried about that OLED contrast. If I buy an OLED screen, will I “need” to upgrade all my other screens? That’s what happened with my first “retina” screen many years ago.
Then again, the retina experience turned out to be a good thing in the end, so maybe I should buy an OLED after all… But 4K OLED screens are still very expensive, and only really available at 32" and larger.
On the plus side, since the new TV standards require it, OLED screens all come with accurate colors and good DCI-P3 coverage.