- you have eyes,
- you have a safe testing framework (pipeline reordering is allowed),
- do your own tests and conclude.
Moving diffuse or sharpen before input color profile means you apply it in sensor RGB, which seems like a good idea since that is the space where the demosaicing issue arises (fixing problems close to where they are born is always a safe bet, because that’s where you have the best chance of tackling their actual cause instead of some remote side-effect). However, practice shows that it’s not necessarily the best space to do it: the white balance discrepancies between channels can create chromatic aberrations, given that all RGB channels are processed individually.
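To see why per-channel processing on unequal channels can turn a neutral edge into a colored one, here is a toy numpy sketch. Everything in it is made up for illustration: the channel gains are hypothetical, and the sharpener is a plain unsharp mask with clipping, not darktable's actual diffuse or sharpen (which is nonlinear in its own ways). The point is only that any nonlinearity (here, clipping the overshoot) lands at different levels on each channel, so the channel ratios drift at the edge:

```python
import numpy as np

# a 1-D neutral edge, stepping from 0.2 to 0.8
gray = np.where(np.arange(32) < 16, 0.2, 0.8)

# hypothetical per-channel gains: in sensor RGB the channels sit at
# very different levels until white balance equalizes them
gains = {"R": 1.2, "G": 1.0, "B": 0.5}

def sharpen(x, k=1.5):
    # toy unsharp mask: x + k * (x - box_blur(x)), clipped to [0, 1];
    # the clipping is the nonlinearity that breaks channel ratios
    blur = np.convolve(x, np.ones(3) / 3, mode="same")
    return np.clip(x + k * (x - blur), 0.0, 1.0)

out = {}
for ch, g in gains.items():
    sensor = g * gray                # channel as seen before white balance
    out[ch] = sharpen(sensor) / g    # sharpen per channel, then undo the gain

# away from the edge the channels still agree (the edge stays neutral),
# but on the overshoot pixel R clips while B does not, so after undoing
# the gains the R/G and B/G ratios no longer match: a colored fringe
print(out["R"][16], out["G"][16], out["B"][16])  # three different values
```

Do the same experiment with the clipping removed and the three channels come out identical again: a purely linear filter commutes with the gains, so it is the nonlinear parts of the sharpener that make the working space matter.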
So I have no definite answer here, except: try both and see what looks best. But frankly, if you have a sensor of more than 24 Mpx, you don’t even need demosaicing sharpening at all, AA filter or not: you will most likely export at 8 Mpx tops, so every square block of 4 px will get squeezed into the same output pixel anyway.
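The squeezing argument is easy to check numerically. A minimal sketch, assuming the worst case for demosaicing residue (a "maze" artifact alternating at the single-pixel level) and a simple 2×2 box average as the export downscale (real scalers use fancier filters, but the averaging intuition is the same):

```python
import numpy as np

# hypothetical worst-case demosaicing artifact: a pixel-level
# checkerboard swinging between 0.4 and 0.6 on a 0.5 gray field
h, w = 8, 8
maze = 0.5 + 0.1 * ((np.indices((h, w)).sum(axis=0) % 2) * 2 - 1)

# export-time downscale: average each 2x2 block into one output pixel
small = maze.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(maze.std())   # ~0.1 : the artifact is clearly visible at 100 %
print(small.std())  # ~0.0 : each 2x2 block averages back to flat gray
```

Every 2×2 block holds two high and two low pixels, so the alternation cancels exactly; anything that only lives at the single-pixel scale cannot survive a downscale that merges 4 px into one.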