Of course, the results then might not have been as stark, which means the authors either would have chosen not to publish or the study wouldn't have been accepted for publication. It's crucial to keep in mind that study authors are under no compulsion to publish results they don't like, and obviously this can skew what gets out there. There are actually laws that require this reporting for drug trials, but an audit found only 20% compliance in the US.
Ben Goldacre is currently waging quite the campaign trying to get pharmaceutical companies to live up to the laws that require them to publish info on ALL of their clinical trials, not just the ones that produce flattering results. This comes in conjunction with his new book Bad Pharma that has apparently caused quite a stir (it's not out yet in the US....but it will be in January...in case you wondered what to get me for Christmas).
I suggest reading some of his blog posts if you want a crash course in publication bias and why it's so harmful to us. The quick example of course is the study on hormones and voting....do you really think a study showing that women's menstrual cycles did not affect their voting would have been published? Journals wouldn't find it interesting, and researchers who base their careers on finding ovulation/behavior links would likely not even submit it.
In the last chapter of his book Bad Science, Goldacre takes the media to task for this. He documents how, in the interest of making a better story, the most sensational science stories are almost never assigned to science writers. He then calls out journalists (by name) in the UK who published stories calling for more research on vaccine/autism links, then failed to report when such research was done (and found no link).
If you haven't read anything by him, I highly recommend it.