So the original PEP 386 had the concept of "normalizing" a non-compliant version into a compliant one. It did/does this through a series of regexes and string replacements.
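To make that concrete, here's a rough sketch of the kind of regex and string-replacement normalization I mean. This is not the real PEP 386 code, just an illustration of the approach, and the particular replacements shown are only examples:

    import re

    def suggest_version(version):
        # Sketch only: lowercase, map common pre-release spellings onto
        # single-letter markers, and give a bare trailing letter an
        # implicit number (e.g. "2012a" -> "2012a0").
        v = version.lower().strip()
        for alias, canonical in (("alpha", "a"), ("beta", "b"), ("rc", "c")):
            v = v.replace(alias, canonical)
        v = re.sub(r"([abc])$", r"\g<1>0", v)
        return v

    print(suggest_version("1.0beta2"))  # -> 1.0b2
    print(suggest_version("2012a"))     # -> 2012a0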
The current rules (as I've implemented them, using the comments as what PEP 440 will contain as well) achieve a 95.26% compatibility rate when checked against every version on PyPI. By adding a "normalize" / "suggest" capability I've been able to get that number up to 98.76%. However, my worry is that in normalizing we're creating versions that mean something different than what their authors intended. One such instance is pytz, where "2012a" will be normalized into "2012a0" (see the sketch below). Is 95.26% compatibility "enough" that we can just say we're not going to do normalization? On the other hand, we're more or less just hoping that versions which happen to use a compatible scheme actually mean what we interpret them to mean, so maybe this is always a problem and normalization doesn't really make it worse.
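Here's a toy comparison (not the actual PEP 440 parser) of why the pytz case bothers me: once "2012a" becomes "2012a0", the suffix reads as a pre-release marker and sorts *before* a bare "2012", whereas pytz means "2012a" to be its first final release of 2012:

    import re

    def toy_key(version):
        # Toy sort key: split "release" from an optional "a/b/c + number"
        # pre-release suffix; final releases sort after pre-releases.
        release, pre_letter, pre_num = re.match(
            r"^(\d+(?:\.\d+)*)(?:(a|b|c)(\d+))?$", version).groups()
        release_key = tuple(int(part) for part in release.split("."))
        if pre_letter is None:
            return (release_key, 1, "", 0)
        return (release_key, 0, pre_letter, int(pre_num))

    print(sorted(["2012a0", "2012b0", "2012"], key=toy_key))
    # ['2012a0', '2012b0', '2012']  -- the normalized pytz versions now
    # look like pre-releases of a hypothetical "2012" final release.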
That said, I think if the normalization process is kept, then we should encode the exact transformation rules that are used. I also think that if we do it, we should mandate it as part of the spec rather than making it optional. I'm thinking of how HTML5 defined a spec, but also defined what browsers should do when pages don't follow that spec. If we have the capability to normalize (and it's reasonably implemented), I can't imagine a case where it wouldn't make more sense (or at least just as much sense) to normalize, so if we're normalizing at all, always normalizing seems like a sensible thing to me.