Sounds like another insane hack deduced from empiricism that will work until it doesn’t.
A power transfer function (not a gamma, please) changes the variance in the picture. Depending on how your interpolation problem is formulated, that can get in the way of its assumptions.
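For instance, a minimal numpy sketch (the image and the exponent are made up for illustration) showing that the same data does not have the same variance once a power transfer function is applied:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-light image, values in [0, 1] (illustration only).
u = rng.uniform(0.05, 0.95, size=(256, 256))

gamma = 1.0 / 2.2  # a typical power transfer function exponent

print(np.var(u))          # variance of the linear data
print(np.var(u**gamma))   # variance after the power transform: not the same
```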
If the problem you are trying to solve is non-continuous reconstruction, that should be dealt with using spatial correction, and most likely chroma low-pass filtering. I don’t see how correcting intensity is going to change that, and it’s not excluded that you only got lucky on your example, with side-effects conveniently hiding the problem.
If we define the picture gradients as \nabla u(x, y) = \dfrac{\partial u}{\partial x} \vec{i} + \dfrac{\partial u}{\partial y} \vec{j}, then the gradients over each R, G, B plate should be linked by a linear relationship such as a \nabla u_R = b \nabla u_G = c \nabla u_B, with a, b, c real factors. So the interpolation problem can be reduced to propagating gradients between channels to reconstruct missing data, with some normalization factor to take relative intensity into account.
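As a toy illustration (not any particular demosaicing algorithm; the linear relationship and the a/b factor are made up for the example), here is what propagating green gradients into a half-sampled red channel looks like on a 1D scanline:

```python
import numpy as np

# Toy 1D "scanline": a smooth green channel and a red channel that is
# linearly related to it (the factors are made up for illustration).
x = np.linspace(0.0, 1.0, 64)
G = 0.4 + 0.3 * np.sin(2.0 * np.pi * x)
R = 0.8 * G + 0.05           # linear inter-channel relationship

# Gradients are proportional: dR = 0.8 * dG everywhere.
assert np.allclose(np.gradient(R), 0.8 * np.gradient(G))

# Demosaicing-style reconstruction: R is only sampled every other pixel
# (Bayer-like); propagate the green gradients to fill the holes.
R_sparse = R.copy()
R_sparse[1::2] = np.nan
ratio = 0.8                  # the normalization factor, assumed known here
R_hat = R_sparse.copy()
R_hat[1::2] = R_sparse[0::2] + ratio * (G[1::2] - G[0::2])

print(np.max(np.abs(R_hat - R)))  # ~0 on this toy example
```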
But re-expressing \nabla u^\gamma(x, y) = \dfrac{\partial u^\gamma}{\partial x} \vec{i} + \dfrac{\partial u^\gamma}{\partial y} \vec{j}, the chain rule gives \nabla u^\gamma = \gamma \, u^{\gamma - 1} \nabla u, so the scaling factor now depends on the local intensity: you completely void the a \nabla u_R = b \nabla u_G = c \nabla u_B model, and you are messing up the inter-channel correlation hypothesis, which is kinda important given that the available gradients from sensor readings are actually spatially shifted on the sensor plane.
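Numerically, on the same toy scanline as above, the per-pixel ratio between the transformed gradients is no longer a constant:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64)
G = 0.4 + 0.3 * np.sin(2.0 * np.pi * x)
R = 0.8 * G + 0.05        # linear inter-channel relationship, as before
gamma = 1.0 / 2.2

# Chain rule: grad(u^gamma) = gamma * u^(gamma - 1) * grad(u),
# so the factor between channels now depends on the local intensity u.
dGg = np.gradient(G**gamma)
dRg = np.gradient(R**gamma)

mask = np.abs(dGg) > 1e-3          # avoid dividing by near-zero gradients
ratio = dRg[mask] / dGg[mask]
print(ratio.min(), ratio.max())    # a spread of values, not one constant
```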
And the most successful demosaicing methods right now (albeit slow) use a mix of Laplacians and guided filtering (ARI, MLRI), which intrinsically use inter-channel covariance and intra-channel variance to express the R and B plates as a linear fit of the green plate, like R = \dfrac{cov(R, G)}{var(G) + \epsilon} G + \left(mean(R) - \dfrac{cov(R, G)}{var(G) + \epsilon} mean(G)\right). So it’s pretty clear that tinkering with pixel intensity will mess that up by changing the variance, which does not seem necessary either to get high PSNR.
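For reference, a rough numpy/scipy sketch of that local linear fit (a generic guided-filter step: `guided_fit`, the window radius and the box-filter smoothing of the coefficients are my own choices here, not the exact ARI/MLRI pipeline, which adds residual interpolation on top):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_fit(G, R, radius=2, eps=1e-4):
    """Fit R ~ a * G + b in local box windows, guided-filter style."""
    size = 2 * radius + 1
    mean_G = uniform_filter(G, size)
    mean_R = uniform_filter(R, size)
    # cov(R, G) = E[RG] - E[R]E[G]; var(G) = E[G^2] - E[G]^2, per window
    cov_RG = uniform_filter(R * G, size) - mean_R * mean_G
    var_G = uniform_filter(G * G, size) - mean_G**2
    a = cov_RG / (var_G + eps)
    b = mean_R - a * mean_G
    # Average the per-window coefficients, then rebuild R from the guide.
    return uniform_filter(a, size) * G + uniform_filter(b, size)

# If R really is a linear function of G, the fit recovers it almost exactly;
# apply a power transform to the inputs first and the local (a, b) drift.
rng = np.random.default_rng(0)
G = rng.uniform(0.1, 0.9, size=(64, 64))
R = 0.8 * G + 0.05
print(np.max(np.abs(guided_fit(G, R) - R)))  # small residual
```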