August 31, 2009

Studies Shed Light on Falsification and Bias in Medical Studies

First off, I apologize for the drop-off in post frequency. One "side effect" of getting a three-book contract from a major publisher, as exciting as it is, is that I have to write the two sequels to the first book I sold. That gives me less time to read and report on the diabetes news. I'm still at it, but I'm blogging only about issues that stand out to me as important or those that readers write me email about.

I'm pleased to see that the most recent, extremely poorly designed study intended to prove that low carb diets will give people heart attacks has not raised a furor.

My readers, and most other people who know anything about science, realized that genetically engineering a mouse to be unable to process fat without developing heart disease and then feeding that mouse fat and giving it heart disease proved nothing except that some people will stop at nothing to defend a discredited theory.

Unengineered mice do not get heart disease eating fats. A large number of studies have found that humans do not get more heart disease eating a low carb/high fat diet. You can find links to these studies and to those proving that cutting out fat does not lower heart attack risk in humans HERE.

I also read in another blog that the "high fat" diet these particular mice were fed included a lot of fructose, which we know has a toxic effect on the liver and cardiovascular system.

The way these scientists tortured study design to produce a finding satisfactory to those who share their religious beliefs (eating fat is bad!), along with something else that happened last week, made me think it was time to discuss a study I ran into last year that examined how often scientists consciously misrepresent and skew data. The findings of that study, which got almost no play in the press, were horrifying.

The other thing that brought this to my attention was a visit to a specialist who wanted to prescribe me Celebrex for a foot problem. If you'll recall, Celebrex is a drug very similar to Vioxx--the pain killer that turned out to be a patient killer, thanks to its ability to promote heart attack.

Celebrex is still on the market, thanks to research that made it look slightly safer than Vioxx, though the official FDA prescribing information includes a warning that it may promote heart attack. What this doctor did not seem to know was that much of the data that had shown Celebrex to be effective turns out to have been faked, as you can read in this March 10, 2009 report published in the Boston Globe:

Springfield doctor accused of fabricating research results.

The specialist who wanted me to take this dangerous drug practices out of the same hospital as the fraudulent researcher. Current prescribing recommendations for Celebrex make it crystal clear that it should never be prescribed to anyone with any possible heart disease, which means it should never be given to someone with diabetes. Once the fraudulent studies are deleted, the remaining data suggest that the very expensive Celebrex is no more effective than Tylenol, too. Nevertheless, $2 billion worth of Celebrex is still being prescribed each year, probably by doctors unaware of the fraudulent research supporting its effectiveness.

Which brings me to the study that looked at how common such fraud and other research malfeasance might be. You can read the study in its entirety here:

How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. Daniele Fanelli. PLoS ONE 4(5): e5738. doi:10.1371/journal.pone.0005738 (May 29, 2009).

This was a metastudy--a study that combined and analyzed data from other, smaller studies. It found,
A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices.
This is a horrific finding. Almost 2% of the scientists admitted (anonymously) to outright fraud. Up to a third admitted to "other questionable research practices," which included ignoring previously published studies whose results would cast doubt on their own, not publishing results unfavorable to a sponsor such as a drug company, and altering or modifying data "to improve the outcome."
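To see what a "pooled weighted average" across several surveys means, here is a minimal sketch. The survey sizes and admission counts below are invented for illustration only; they are not Fanelli's data, and Fanelli's actual meta-analysis used more sophisticated weighting than this simple sample-size pooling.

```python
# Illustrative only: computing a pooled admission rate across surveys.
# The (respondents, admissions) pairs below are made-up numbers,
# NOT the data from Fanelli's meta-analysis.
surveys = [
    (120, 2),   # survey 1: 120 respondents, 2 admitted fabrication
    (250, 5),
    (80, 1),
    (400, 9),
]

# Simple sample-size weighting: pooled rate = total admissions / total respondents.
total_n = sum(n for n, _ in surveys)
total_admissions = sum(k for _, k in surveys)
pooled = total_admissions / total_n

print(f"pooled admission rate: {pooled:.2%}")  # about 2% with these made-up numbers
```

The point of pooling is that larger surveys pull the combined estimate toward their own rates, which is why a single small survey with an unusual result does not dominate the headline figure.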

That is bad enough, but the study also looked at what happened when you asked scientists not to report on their own behavior but about that of their colleagues.

Do that and the incidence of reported fraud (outright falsification and fabrication--making things up) goes up to 14.12%--meaning one in seven scientists reports observing such behaviors in peers. The reported rate of other questionable research practices shoots up to 72%--meaning a whopping majority of these scientists believed they had observed these behaviors.

This should give you pause, because these statistics, as the authors point out, require people to admit to malfeasance, which we know many will not. The actual incidence of intentional research fraud is therefore likely to be much higher.

Another study on a similar topic analyzed the impact of drug company sponsorship, and of financial ties between researchers and drug companies, on the design and findings of studies of Actos and Avandia, the TZD drugs.

You can read this research in full here:

Factors Associated with Results and Conclusions of Trials of Thiazolidinediones. Gail Rattinger, Lisa Bero. PLoS ONE 4(6): e5826. doi:10.1371/journal.pone.0005826 (June 8, 2009).

The researchers here examined
61 published RCTs [random controlled trials] comparing a thiazolidinedione [Avandia or Actos] to another anti-diabetic drug or placebo for treatment of type 2 diabetes. Data on study design characteristics, funding source, author’s financial ties, results for primary outcomes, and author conclusions were extracted.
They found that trials that reported favorable glycemic control results for the glitazone were four times more likely to have a corresponding author with financial ties to the glitazone manufacturer (OR (95% CI) = 4.12 (1.05, 19.53); p = 0.04).
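For readers unfamiliar with odds ratios, here is a minimal sketch of how one is computed from a 2x2 table, with an approximate confidence interval. The counts are entirely hypothetical, chosen only to produce an odds ratio in the same ballpark as the one reported; they are not the Rattinger and Bero data.

```python
import math

# Hypothetical 2x2 table (invented counts, not the study's data):
#                              favorable result   unfavorable result
# author has financial ties:        a = 20             b = 8
# no financial ties:                c = 12             d = 21
a, b, c, d = 20, 8, 12, 21

# Odds ratio: odds of a favorable result with ties vs. without ties.
odds_ratio = (a * d) / (b * c)

# Approximate 95% confidence interval on the log-odds scale (Woolf's method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Note how a modest table like this can produce a large odds ratio with a wide confidence interval, just as in the study, where the interval ran from 1.05 all the way to 19.53: the association is statistically significant, but the data pin down its size only loosely.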

In addition, "trials with conclusions favoring the glitazone were less likely to be funded by a comparator drug company than a glitazone company." Studies showing favorable results for Avandia and Actos also were "less likely to be published in journals with higher impact factors," i.e. the journals that apply more rigorous peer review to articles before accepting them for publication.

The authors note one factor that makes it likely that the real impact of drug company money on study results is much higher: though 59% (36/61) of the trials studying Actos and Avandia were funded by industry, "39% (24/61) did not disclose any funding." This makes it very likely that many more of these studies were supported by, and skewed by, researcher financial ties with the drug companies.

Put the results of these two studies together and what emerges is not pretty. For every doctor whose fraudulent research, delivered in response to the marketing needs of his drug company paymasters, is exposed--as happened to Scott S. Reuben, MD of Baystate Medical Center in Springfield, MA, whose work impelled other doctors to prescribe the questionable drug Celebrex, dozens or perhaps even hundreds go undetected.

The fact that at least 59% of all studies of Actos and Avandia were sponsored by drug companies with a stake in proving them effective, or comparing them unfavorably with their own drugs, and that we don't know the sponsorship of another 39%, makes it crystal clear how little research about the effects of the drugs we take is untainted.

Scientists who are in the pay of drug companies face a lot of pressure to deliver the results their masters demand. The tools at their disposal include outright fraud and the more subtle techniques described by Fanelli in the first study above: neglecting to cite studies casting doubt on the desired result, "tweaking" statistics, not publishing studies whose results show poor outcomes with the drug, designing studies with flawed methodologies intended to produce the desired result, and so on.

All this should explain to you the reason I look with such skepticism on the drug and food studies that get such play in the press, and why I analyze their methodology looking for signs that the study was designed to produce a certain result. In many cases the bias is obvious to anyone who hasn't drunk the drug company Kool-Aid, though apparently not to the peer reviewers.

By the same token, what these studies have not looked at, but what would also yield useful information, is the extent to which drug company funding raises the likelihood that a study will be reported in the print and TV media outlets that are so dependent on drug company advertising for their revenue.

Very few of the studies I see that were done by scientists not affiliated with drug companies or other powerful interests make it into the health news headlines--especially not those studies that contradict the findings of the drug company supported research.

As readers of my blog know, I reported years before these topics hit the headlines about the link between Avandia and Actos and heart failure, macular edema (a cause of blindness) and osteoporosis. These studies were ignored by the mainstream press. The mainstream press has yet to publish a word about the problems with the DPP-4 inhibitors, though a considerable body of research suggests they may be promoting cancers. When Avandia was finally revealed for the questionable drug it is, the media jumped on the bandwagon suggesting Actos was a safe alternative, though the data connecting Actos to heart failure and broken bones is identical to that involving Avandia.

In short, it's a jungle out there, and your doctor is much too busy to do the detective work it takes to protect you from dangerous drugs whose utility is supported by fraudulent research.

2 comments:

Ben said...

The first study you mentioned is interesting, but doesn't appear to be that low carb. By my math, for a 2,000 kcal diet, it would be about 60 grams of carbs per day. Also, they replaced all the carbs with protein. So instead of testing a high-fat, low-carb diet, they tested a high-protein, medium-carb diet.

Jenny said...

Any diet up to about 100 g a day of carbohydrate is technically a low carb diet and, for most people, a ketogenic diet.

Exactly where the ketogenic boundary will be depends on a person's weight. The smaller you are, the lower it will be.

I agree replacing carb with protein is wrong. It is also very hard on mice, which didn't evolve to eat a lot of protein. There are a lot of other bloggers who have written in depth about that study, so I am not giving it all that much attention. Links to the other blogs are in my sidebar.