An article in today's Science Daily finally puts some numbers on a phenomenon that has been driving me batty since I started putting some serious time into reading medical studies 4 years ago. This study dealt with cancer trials, but as you will see if you read the article, the phenomenon discussed seems to be true of ALL medical research.
Science Daily: Some Cancer Trials May Have Incorrectly Reported Success
Here's what the researchers found after reviewing 75 articles published in 41 journals from 2002 to 2006. These were group-randomized trials related to cancer or cancer risk factors:
"... 88 percent of those studies reported statistically significant intervention effects that, because of analysis flaws, could be misleading to scientists and policymakers." This means that 9 out of 10 studies claimed success for drugs and treatments that were not supported by the data published with the study!
"Thirty-four of the articles, or 45 percent, reported the use of appropriate methods used to analyze the results." This means more than half of the studies did NOT use appropriate methods to analyze the results.
"Twenty-six articles, or 35 percent, reported only inappropriate methods were used in the statistical analysis." That is, more than one out of three used statistics in a manner that was completely wrong.
"Nine articles had insufficient information to even judge whether the analytic methods were appropriate or not." In short, one out of eight was so poorly written that it would never have been published in a peer reviewed journal had the reviewers understood statistics.
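The article's quotes don't spell out what the "analysis flaws" were, but the best-known error in group-randomized trials is analyzing individual patients as if they were independent when whole groups were randomized together. The following is a minimal simulation sketch, with made-up parameters, of how that one mistake manufactures "statistically significant" results even when the intervention does nothing at all:

```python
import random

def t_statistic(xs, ys):
    """Two-sample t statistic with pooled variance."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

def simulate(n_sims=2000, clusters_per_arm=5, people_per_cluster=50,
             cluster_sd=0.5, person_sd=1.0, seed=1):
    """Simulate null trials: zero true treatment effect, but each cluster
    (clinic, school, town) shares a random shift of its own."""
    rng = random.Random(seed)
    naive_hits = cluster_hits = 0
    for _ in range(n_sims):
        arms = []
        for _arm in range(2):
            clusters = []
            for _c in range(clusters_per_arm):
                shift = rng.gauss(0, cluster_sd)  # shared cluster effect
                clusters.append([rng.gauss(shift, person_sd)
                                 for _ in range(people_per_cluster)])
            arms.append(clusters)
        # Flawed analysis: pool every individual, pretend independence.
        flat_a = [x for cl in arms[0] for x in cl]
        flat_b = [x for cl in arms[1] for x in cl]
        if abs(t_statistic(flat_a, flat_b)) > 1.96:   # 5% cutoff, large df
            naive_hits += 1
        # Appropriate analysis: one summary mean per cluster.
        means_a = [sum(cl) / len(cl) for cl in arms[0]]
        means_b = [sum(cl) / len(cl) for cl in arms[1]]
        if abs(t_statistic(means_a, means_b)) > 2.306:  # 5% cutoff, df = 8
            cluster_hits += 1
    return naive_hits / n_sims, cluster_hits / n_sims

naive_rate, cluster_rate = simulate()
print(f"false-positive rate, individual-level analysis: {naive_rate:.2f}")
print(f"false-positive rate, cluster-level analysis:    {cluster_rate:.2f}")
```

With these (hypothetical) settings the individual-level analysis declares a nonexistent effect "significant" many times more often than the nominal 5 percent, while the cluster-level analysis stays close to 5 percent. That is the mechanism by which a flawed analysis turns nothing into a publishable positive result.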
The people who conducted this damning review of cancer research--people whose careers depend on getting grants, often from drug companies--bent over backward to avoid suggesting that the skewing of data was intentional.
But come on, people. Drug trials are largely funded by drug companies who will earn a lot of money if the drug succeeds and lose a lot of money if it doesn't. Do you really want me to believe that it is an accident that their misuse of statistics results in nine out of ten studies showing an intervention to be more effective than it really is?
I discussed this issue with one of my doctors recently. She laughed and said, "They only made me take one statistics course in medical school. The professor was terrible and I didn't learn a thing. I still find statistics baffling." What I found most troubling about this was that this doctor seemed to think this was somehow an endearing trait, like wearing striped shirts with polka dot pants.
It isn't. It is the reason that drug companies get away with publishing studies that anyone who has absorbed a single college-level statistics course (myself included) can immediately see do not use statistical methods correctly--and that misuse statistics, almost universally, in the direction that makes ineffective drugs look effective.
Is it too much to demand that the people who publish medical studies--studies whose value rests entirely on the statistics they contain--send those studies for review to people who understand statistics before accepting them for publication? When people's lives--and wallets--are at stake, it shouldn't be.
But don't hold your breath. As long as scientists are rewarded mostly for publishing research, and as long as journals prefer to publish studies that report positive findings, and as long as most research is paid for largely by people with a financial stake in the outcome, that is not going to happen.
March 27, 2008
1 comment:
I took that statistics class. I made 100% on the tests. I still don't understand it.