Jul 16, 2011

Studies of studies show that we get things wrong

Morons often like to claim that their truth has been suppressed: that they are like Galileo, a noble outsider, fighting the rigid and political domain of the scientific literature, which resists every challenge to orthodoxy.

Like many claims, this is one where it's possible to gather data.

Firstly, there are individual anecdotes that demonstrate the routine humdrum of medical fact being overturned.

We used to think that hormone-replacement therapy reduced the risk of heart attacks by around half, for example, because this was the finding of a small trial, and a large observational study. That research had limitations. The small trial looked only at "surrogate outcomes", blood markers that are associated with heart attack, rather than real-world attacks; the observational study was hampered by the fact that women who got prescriptions for HRT from their doctors were healthier to start with. But at the time, this research represented our best guess, and that's often all you have to work with.
When a large randomised trial looking at the real-world outcome of heart attacks was conducted, it turned out that HRT increased the risk by 29%. These findings weren't suppressed: they were greeted eagerly, and with some horror.

Even the supposed stories of outright medical intransigence turn out to be pretty weak on close examination: people claim that doctors were slow to embrace Helicobacter pylori as the cause of gastric ulcers, when in reality, it only took a decade from the first murmur of a research finding to international guidelines recommending antibiotic treatment for all patients with ulcers.

But individual stories aren't enough. This week Vinay Prasad and colleagues published a fascinating piece of research about research. They took all 212 academic papers published in the New England Journal of Medicine during 2009. Of those, 124 made some kind of claim about whether a treatment worked or not, so then they set about measuring how those findings fitted into what was already known. Two reviewers assessed whether the results were positive or negative in each study, and then, separately, whether these new findings overturned previous research.

Seventy-three of the studies looked at new treatments, so there was nothing to overturn. But the remaining 51 were very interesting because they were, essentially, evenly split: 16 upheld a current practice as beneficial, 19 were inconclusive, and crucially, 16 found that a practice believed to be effective was, in fact, ineffective, or vice versa.
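As a quick arithmetic check on the figures quoted above (the counts are those reported here; the percentages are simply derived from them, and the variable names are mine), a tally in Python might look like this:

    # Tally of the NEJM 2009 figures as reported above (Prasad and colleagues).
    total_articles = 212                 # all academic papers in NEJM during 2009
    made_a_claim = 124                   # papers claiming a treatment did or didn't work
    new_treatments = 73                  # nothing established to overturn
    existing_practices = made_a_claim - new_treatments   # the remaining 51

    upheld, inconclusive, reversal = 16, 19, 16
    assert upheld + inconclusive + reversal == existing_practices

    print(f"Reversals among tests of existing practice: {reversal / existing_practices:.0%}")   # about 31%
    print(f"Reversals among all claim-making papers:    {reversal / made_a_claim:.0%}")         # about 13%

In other words, roughly a third of the papers that re-examined an existing practice ended up overturning it.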

Is this unexpected? Not at all. If you like, you can look at the same problem from the opposite end of the telescope. In 2005, John Ioannidis gathered together all the major clinical research papers published in three prominent medical journals between 1990 and 2003: specifically, he took the "citation classics", the 49 studies that were cited more than 1,000 times by subsequent academic papers.

Then he checked to see whether their findings had stood the test of time, by conducting a systematic search in the literature, to make sure he was consistent in finding subsequent data. From his 49 citation classics, 45 found that an intervention was effective, but in the time that had passed, only half of these findings had been positively replicated. Seven studies, 16%, were flatly contradicted by subsequent research, and for a further seven studies, follow-up research had found that the benefits originally identified were present, but more modest than first thought.
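Again, purely to make those fractions explicit (the counts come from the summary above; the rounding is mine):

    # Follow-up fate of the positive citation classics, as summarised above (Ioannidis, 2005).
    positive_findings = 45          # classics reporting an effective intervention
    contradicted = 7                # flatly contradicted by later research
    more_modest = 7                 # benefit real, but smaller than first reported

    print(f"Flatly contradicted:        {contradicted / positive_findings:.0%}")                  # about 16%
    print(f"Contradicted or attenuated: {(contradicted + more_modest) / positive_findings:.0%}")  # about 31%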

Read more at The Guardian
