I loaded your raw into my hack software, where I can reliably look at the image data straight out of the file, and the sun's pixels are basically piled up at the sensor saturation point; there's no relevant data there except to "make white". A few pixels in the "star spikes" start to show gradation; I assume that's what you want to protect. But the center is just maxed out in all three channels.
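If you want to poke at the raw values yourself, here's a minimal sketch using the rawpy library (a LibRaw wrapper) that does roughly the same check my hack software does. The filename is just a placeholder, and note that LibRaw's reported white level can be a bit conservative for some cameras:

```python
import numpy as np
import rawpy  # LibRaw bindings: pip install rawpy

# "sun.ARW" is a stand-in for whatever raw file you're inspecting.
with rawpy.imread("sun.ARW") as raw:
    data = raw.raw_image_visible      # raw photosite values, margins trimmed
    colors = raw.raw_colors_visible   # per-photosite CFA index: 0=R, 1=G, 2=B, 3=G2
    sat = raw.white_level             # LibRaw's notion of the saturation point

    names = raw.color_desc.decode()   # e.g. "RGBG"
    for idx, name in enumerate(names):
        chan = data[colors == idx]
        clipped = np.count_nonzero(chan >= sat)
        print(f"{name}{idx}: max={chan.max()}  saturated photosites={clipped}")
```

Run that on a frame with the sun in it and you'll see the saturated counts pile up, which is exactly the "no data, just white" situation.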
The "pink" is made when the white balance multipliers are applied to the three channels' data. Since the saturated pixels have lost whatever gradation they had and all piled up at the same value, the red and blue components get shifted by their multipliers (the green multiplier is usually 1.0), and the new RGB values typically read as "pink", or properly, magenta. If you set the white point to the smallest maximum of the three channels, the pixels go back to white, because the image data is clipped to that point for export.

Inside good software that works in floating point internally, however, the values pushed past white are still there, as values > 1.0 (1.0 being, by convention, the floating point white point). Highlight "construction" can take some of that data and shift it back below 1.0, but now that's made-up data. And your image really doesn't have much to work with in the region of the sun.
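To make the arithmetic concrete, here's a toy sketch. The multipliers are invented for illustration; real gains depend on the camera and the illuminant:

```python
import numpy as np

# Raw data normalized so sensor saturation = 1.0 per channel.
wb = np.array([2.0, 1.0, 1.6])       # made-up R, G, B gains; green pinned at 1.0

sun = np.array([1.0, 1.0, 1.0])      # a fully saturated photosite triplet
balanced = sun * wb                  # [2.0, 1.0, 1.6] in floating point

# White point at the data maximum: green lands below R and B -> magenta.
print(balanced / balanced.max())     # [1.0, 0.5, 0.8]

# White point at the smallest channel maximum (green's 1.0): clip for
# export and the pixel goes back to plain white.
print(np.clip(balanced, 0.0, 1.0))   # [1.0, 1.0, 1.0]

# In float internals the >1.0 values survive; that's all highlight
# reconstruction has to work with -- here, nothing but the same flat value.
```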
Highlight construction can be helpful in putting some definition back into regions of an image that were overexposed. But if the region is a light source, I tend to just let it go to white oblivion. In the case of the sun, you'd have to screw on beaucoup neutral density filtration to get meaningful gradation, gradation you're not seeing normally anyway…
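For what it's worth, "let it go to white" doesn't have to be a hard clip. Here's a minimal sketch of one common flavor of the idea, desaturating over-range pixels toward neutral so they roll off to white instead of clipping to a hue. The Rec.709 luma weights are standard, but the knee value is an arbitrary choice of mine, not anyone's shipping algorithm:

```python
import numpy as np

def roll_to_white(rgb, knee=0.9):
    """Blend pixels whose max channel passes the knee toward their luma,
    so they fade to white rather than clipping to a color. rgb is float
    with a 1.0 white point; shape (..., 3)."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luma
    maxc = rgb.max(axis=-1, keepdims=True)
    t = np.clip((maxc - knee) / (1.0 - knee), 0.0, 1.0)  # 0 below knee, 1 at white
    out = rgb * (1.0 - t) + lum[..., None] * t           # blend toward neutral
    return np.clip(out, 0.0, 1.0)

# The magenta triplet from before rolls cleanly to white:
print(roll_to_white(np.array([2.0, 1.0, 1.6])))          # [1.0, 1.0, 1.0]
```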