Due to the differences between display and capture, this isn’t actually true. If a display element “leaks” other colors (e.g. a red emitter also outputs some green and blue), that leakage pulls the primary inward toward the white point, shrinking the gamut triangle.
Meanwhile, because CFA elements “leak” on capture (e.g. red light also stimulates the green and blue channels), the primaries wind up outside the range of physically possible colors. The layman’s way of thinking about this:
A pure single-wavelength green stimulus will create nonzero R and B values in the capture. As a result, this “pure green” point lands inside the triangle, not at its border.
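To make that concrete, here’s a toy simulation with made-up Gaussian filter responses (the centers and widths are illustrative, not any real sensor’s data): even a monochromatic green stimulus produces a chromaticity with all three channels nonzero, i.e. a point strictly inside the triangle.

```python
import numpy as np

wavelengths = np.arange(380, 701, 1.0)  # nm

def gaussian(center, width):
    """Hypothetical bell-shaped spectral sensitivity for one CFA channel."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Broad, overlapping filters: each channel "leaks" into its neighbors.
R = gaussian(600, 50)
G = gaussian(530, 50)
B = gaussian(460, 50)

# A pure single-wavelength green stimulus at 530 nm.
stimulus = np.zeros_like(wavelengths)
stimulus[wavelengths == 530] = 1.0

raw = np.array([np.sum(R * stimulus),
                np.sum(G * stimulus),
                np.sum(B * stimulus)])
chromaticity = raw / raw.sum()
print(chromaticity)  # all three components nonzero -> inside the triangle
```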
As an example, Sony’s S-Gamut primaries all lie outside of the realm of physically possible colors - in S-Gamut, you’ll NEVER capture a fully saturated primary.
A basic color matrix (i.e. treating the camera colorspace as a triangle) does come reasonably close, though. Having a LUT helps a LOT for handling some of the colors where the graph you posted is exceptionally misshapen. Of course, you can’t fix metamerism, where two physically distinct spectra produce the same capture values.
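For reference, the “basic color matrix” step is just one 3×3 multiply per pixel. A minimal sketch (the matrix entries are made up, not from any real camera profile; rows sum to 1 so white is preserved):

```python
import numpy as np

# Illustrative camera-to-working-space matrix (invented numbers).
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

camera_rgb = np.array([0.2, 0.7, 0.3])       # white-balanced raw triple
linear_rgb = np.clip(M @ camera_rgb, 0, None)  # negatives = out of gamut
print(linear_rgb)
```

Note that saturated inputs can map to negative components, which is exactly where a LUT (or a smarter gamut-mapping step) earns its keep over a plain clip.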
I’ve wondered for a while if cameras could improve their colour discrimination by swapping out the dual green pixels for either two different greens, or an unfiltered pixel to compare against. I’ve also wondered what would happen if you had four complementary filters that each block a narrow band rather than pass one, and thus let more light through.
Most humans have three color receptors. We can trivially build cameras with more color discrimination, but to what end? Displays use three subpixels for the same reason.
Good printers have more inks, but subtractive color (in printers) works somewhat differently than additive color (in displays and cameras).
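The additive/subtractive difference is easy to show numerically: in a display, light from the subpixels sums, while each printer ink absorbs part of the light, so transmissions multiply. A toy sketch:

```python
import numpy as np

# Additive mixing (displays): emitted light sums per channel.
red_light   = np.array([1.0, 0.0, 0.0])
green_light = np.array([0.0, 1.0, 0.0])
additive = np.clip(red_light + green_light, 0, 1)
print(additive)      # red + green light -> yellow [1, 1, 0]

# Subtractive mixing (inks): transmissions multiply.
# Cyan ink absorbs red; yellow ink absorbs blue.
cyan_transmission   = np.array([0.0, 1.0, 1.0])
yellow_transmission = np.array([1.0, 1.0, 0.0])
subtractive = cyan_transmission * yellow_transmission
print(subtractive)   # cyan + yellow ink -> green [0, 1, 0]
```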
I just got a Samsung Frame TV, which has an “art mode” where it’ll display images in a pseudo-sleep mode. It’s really cool. It uses DCI-P3, so I’ll be giving this a go from darktable sometime soon.
Interestingly, nearly ALL of those replace CFA positions with “leakier” elements to try to improve light sensitivity. This always comes at the cost of color rendering.
I haven’t seen anyone try something involving narrower-band filters with the two greens having different centers. Of course, this would reduce light sensitivity, which would have a negative impact marketing-wise, and also make the color math much more… fun…
I’d be curious to see a comparison of the Phase One IQ3 and a traditional CFA in DCamProf, which gives a much more detailed report than a set of patch delta-Es.