March 15, 2010

All Purpose Technique for Debunking Worthless Studies

Every time another study hits the news with a headline like "Lowering blood sugar doesn't help diabetics," I get a slew of emails asking me what I think of it. Unfortunately, what I think of most of them can't be printed here without the "adult content" warning popping up when you access the blog.

It is getting to where I have two choices. I can spend my very limited research time reading and debunking bad studies or I can spend it looking for information that might help people.

With this in mind, this post gives you the tools you need to do your own debunking.


1. What was the real question being asked by the researchers?

Most of the time studies reported with the headline "Lowering blood sugar doesn't prevent X in diabetics" are studies that asked, "Does using very expensive new drug Y that lowers blood sugar a very small amount prevent heart attacks?" This is a very different question but you wouldn't know it from the way the studies are reported.

To find out what a study really asked--and what it really found--look up the abstract of the study as displayed by the research journal that published it. You can usually find this abstract by googling the name of the researcher cited in the news releases along with a keyword.

Sometimes you have to see the whole article to understand what was really discovered. The journals Diabetes Care and Diabetes and many others make their full content available to the public six months after publication.

2. Who were the subjects in the study?

There is a huge difference between the health outcomes you will see when you impose tight control on a newly diagnosed 50-year-old person with Type 2 diabetes and those you will see when you attempt it in a 70-year-old who was diagnosed 20 years ago and has been running a fasting blood sugar of 195 mg/dl ever since.

Often the real headline should be, "People whose bodies are severely damaged by years of incompetent medical treatment respond poorly to drug cocktails that include drugs known to cause dangerous side effects."

3. How many subjects were in the study--and was it really a study?

Study size matters. A study of 150 people is easily skewed to produce the result a sponsor wants to see. A study with 20,000 people is not. Sponsors play a lot of statistical games with data from small studies because when a study is small you can coax just about any result you want to see out of the data if you use techniques that amplify numbers like using "risk" instead of "incidence." This is harder (though not impossible) to do with large sample sizes.

But when you see a large sample size, make sure that sample size reflects a single study. Metastudies that combine data from many studies to come up with a large total number of subjects are highly unreliable, as the methodology in the individual studies, and even the lab reference ranges used, are usually very different. Mixing apples and oranges makes for a delicious, though high carb, fruit salad. It's not so good for research conclusions.
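The point about sample size can be made concrete with a small simulation. The sketch below is purely illustrative (the patient counts, the 30 mg/dl noise level, and the number of simulated trials are all my own assumptions): it runs many fake trials of a drug that truly does nothing and shows the luckiest "effect" each sample size can produce by chance alone.

```python
import random
import statistics

random.seed(0)

def simulated_study_effect(n):
    # The drug truly does nothing: each patient's change in fasting
    # blood sugar is pure noise with a standard deviation of 30 mg/dl.
    changes = [random.gauss(0, 30) for _ in range(n)]
    return statistics.mean(changes)

# Run 200 simulated trials at each size and record the most flattering
# average "drop" a sponsor could cherry-pick for publication.
small_effects = [simulated_study_effect(150) for _ in range(200)]
large_effects = [simulated_study_effect(20000) for _ in range(200)]

print("Luckiest 150-person study:   ", round(max(small_effects), 1), "mg/dl")
print("Luckiest 20,000-person study:", round(max(large_effects), 1), "mg/dl")
```

With 150 subjects, noise alone can manufacture an apparent drop of several mg/dl; with 20,000 subjects the same do-nothing drug can barely budge the average, which is why a fluke result is so much harder to engineer in a large trial.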

4. What was the blood sugar of the study subjects like at the beginning and the end of the study?

Too many studies that prove some drug or intervention "doesn't work" start out with patients whose average A1c is 9%, and at the end of the study the average A1c is 8%. That is still well above the level now known to cause complications and heart attacks.

Lowering blood sugar WON'T prevent complications if blood sugar isn't lowered enough to prevent complications. That sounds self-evident, but researchers do not seem aware of what other researchers--usually not in the employ of drug companies--have found to be the level at which complications occur.

That level for heart disease appears to be a level where one hour readings after meals do not exceed 155 mg/dl (8.6 mmol/L). (Details HERE.) The research studies that identified toxic blood sugar levels for other complications are described HERE.

5. Was it a human study?

Shocking amounts of what doctors think they know about diabetes and diabetes dietary treatments come from rodent studies. Rodents have very different genomes from people, very different pancreases, and extremely different lipid metabolism.

6. If it was a diet study, what was the actual diet composition?

Many "low carb" studies involve people eating over 150 g of carb a day. Even some "Atkins" diet studies involve people eating over 100 g of carb a day.

Many diet studies purporting to find meat toxic are questionnaire studies where people are asked questions like, "How many servings of meat did you eat in the last month?" These questionnaires are standardized and do not include the question, "Did you eat fries with that?" Meat eaten with fries, bread, and a soda registers as "meat" or a "high protein diet" when these studies are reported.

7. Who sponsored the study?

If a study is paid for by a drug company, independent research confirms it is likely to come up with a positive finding. Often the study itself comes up with negative findings for the original question it was run to test, but the media will report the drug company spin. This means the headline "Drug X lowers Y" often turns out to ignore the rest: "in a statistically insignificant manner indistinguishable from chance."

By the same token, the headline "Drug X prevents hangnails!" ignores that the study was conducted to see if Drug X prevented heart attacks, which it did not; in fact, it increased them.

8. What was actually measured?

Conclusions are often drawn by measuring something that isn't as closely connected to the result as the reporting of the study would have you believe.

Studies of lipid lowering drugs may report only the LDL level attained by subjects, not whether lowering LDL decreased the incidence of heart attack. (It rarely does.) This is an example of using a "surrogate marker." Surrogate markers are factors believed to be connected to an outcome, but which often aren't.

Measuring A1c instead of incidence of complications when studying a drug's effectiveness is another example of using a surrogate marker.

Studies that tell you that food X lowers blood sugar often actually measure the level of some micronutrient in the food that resembles a pharmaceutical that lowers blood sugar, or that has an impact in a test tube, but the study does not measure whether people who eat the food experience lower blood sugars.

9. Is the finding an average and if so, what was the standard deviation?

Suppose Drug X lowers the blood sugar of one third of those who take it by 110 mg/dl, of another third by 100 mg/dl, and of the last third by 90 mg/dl. Drug Y lowers the blood sugar of one third of those who take it by 300 mg/dl and of all the rest by 0. The studies on both these drugs report that Drug X and Drug Y lower blood sugar by a mean of 100 mg/dl.

Both drugs report the same statistic, but the first drug is moderately helpful to everyone who takes it, while the second is useless for two thirds of those who take it.

This is why you can't tell much about any research finding by looking at means (averages). Unfortunately averages are the statistic used in most medical research.

If a standard deviation (SD) is given, that helps, though you will rarely see it discussed in the study or in the reporting. The SD shows you the size of the spread around the average; it would be much greater for Drug Y than for Drug X.

Medians are a much better measure, too, but you almost never see them used. In the case above, the median for Drug X is 100 mg/dl and for Drug Y it is 0 mg/dl.

When studying the impact of anything on blood sugar, it's also worth remembering that the higher the starting blood sugar, the more dramatic a statistical outcome the sponsor can achieve. This is why drug studies are always done with a population of people who start out with very poorly controlled diabetes. You can drop someone from an 11% A1c to a 9% A1c a lot easier with any intervention than you can drop someone from 8% to 6%.
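The mean/median/SD contrast above can be checked with Python's `statistics` module. The per-patient numbers below are hypothetical, chosen so each drug's mean reduction works out to exactly 100 mg/dl: Drug X helps everyone by roughly the same amount, while Drug Y helps one third enormously and the rest not at all.

```python
import statistics

# Hypothetical per-patient blood sugar reductions in mg/dl (nine patients each).
drug_x = [110, 110, 110, 100, 100, 100, 90, 90, 90]  # everyone helped a little
drug_y = [300, 300, 300, 0, 0, 0, 0, 0, 0]           # one third helped, rest not at all

for name, reductions in (("Drug X", drug_x), ("Drug Y", drug_y)):
    print(name,
          "mean:", statistics.mean(reductions),
          "median:", statistics.median(reductions),
          "SD:", round(statistics.pstdev(reductions), 1))
```

Identical means, wildly different stories: the median (0 for Drug Y) and the SD (roughly 141 for Drug Y versus roughly 8 for Drug X) expose the difference the average hides.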

10. Be alert for the telltale signs of statistically massaged results

There are many dodgy statistical techniques that can turn a very small number into a larger one. Report the change in relative "risk" of some condition rather than the change in its "incidence" in the same population and you get a number 10 to 100 times bigger than you started with.

Even dodgier, if your original result is statistically insignificant, you can measure something like "percent of change" rather than risk or incidence. The percent of change between two numbers that differ by a statistically insignificant amount may be reported as statistically significant. I have seen this done more than once. This secondary measurement is meaningless to anyone who understands statistics, but unfortunately, my years of reading studies have convinced me that the peers who review medical studies flunked statistics. More than once.
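A toy calculation shows how the "risk" trick works. All of the counts below are invented for illustration: two heart attacks among 1,000 patients on a drug versus three among 1,000 on placebo is a difference of a single event, yet restated as relative risk it becomes a headline-sized number.

```python
# Invented trial counts: 2 heart attacks per 1,000 on the drug,
# 3 per 1,000 on placebo -- a difference of a single event.
n = 1000
drug_events = 2
placebo_events = 3

# Absolute change in incidence: how many fewer patients per 1,000 were hit.
absolute_drop = (placebo_events - drug_events) / n

# Relative risk reduction: the same single event, restated as a
# percentage of the (tiny) placebo event rate.
relative_drop = (placebo_events - drug_events) / placebo_events

print(f"Absolute incidence change: {absolute_drop:.1%}")  # a tenth of a percent
print(f"Relative risk reduction:   {relative_drop:.1%}")  # "cuts risk by a third!"
```

Same data, two numbers that differ by a factor of several hundred, and only one of them makes the evening news.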


When you see criminally inept research getting play in the media, write letters to your newspaper, TV channel, or the publisher who printed it. They will pay the same attention to your letters as they do to mine--none. I've been writing them for years and have never received anything more than an automatically generated form reply. But perhaps if three thousand people sent in the same letter, it would have more impact. This blog gets a whole lot more than three thousand readers each month. Let the health media hear from you!

If you have more suggestions, please post them in the comments.


michael plunkett said...

Outstanding post. How to read a study was one of the overlooked gems in Taubes' GCBC--glad you brought it to the surface; it badly needed exposure.

Harold said...

An excellent blog. Unfortunately, for many it will be like trying to explain some basic principles of economics: so far over their heads that it is hopeless.
It seems to me that if your problem is an inability to efficiently metabolize carbs, avoiding them should be obvious, but apparently, and sadly for them, it is not.

Peter said...

That's a hard task Jenny, well worked at...

After a number of years of reading these studies the problems become so apparent that it feels as if it should be obvious to everyone what the flaws are and how the system works. But if you are at the start of the learning curve, it is so hard to see why the results shouldn't be taken at face value, even from the press release. If you never get up the learning curve you are pretty well doomed, certainly if you're heading for diabetes.

That list of things to look for always starts small and easy and just grows and grows, until it becomes daunting in its own right. I'm just wondering what someone intelligent, say my mother-in-law, would make of your post, read alongside, say, the full text of one of the Belfast nutrition group's flawed "sugar is fine for you" studies?

I think we tend to forget exactly how difficult what we do is for the intelligent lay person, who has not spent those years on it...

And yes, where does the time go?


Liz H. said...

Peter, that kitty cat is SO cute and loveable!!

Amanda said...

I just wanted to thank you for an incredibly informative blog and website.

I'm actually exploring the possibility that I have some insulin resistance and possibly the beginnings of PCOS due to my last 2 cycles being completely out of whack.

My hypochondria and the wonders of the internet have led me to believe that these are possibilities, with the symptoms I have and what I know.

Today I purchased a glucose monitor and have been using your website to monitor my glucose levels and see "what's what". Since I have no medical insurance I wanted to see if I could do a little self diagnosis prior to seeking a doctor's help.

It seems my blood sugar spikes to around 130 after 1.5 hours, but it hasn't come below 115 in 4 hours now. My sugar was a very low 62 prior to eating lunch today, so it seems I have numbers that wouldn't alarm a doctor (all under the magic 140) but it makes me think my body is having a hard time dealing with the excess glucose.

I'm going to continue to use your site and blog as a resource and monitor this further until I'm convinced that it's time to seek medical help.

Again, I just wanted to thank you for putting all this excellent info out there for people like me to take advantage of.


Lili said...

Ugh, I just saw this article, too. I love the conclusion! Trying to control bg and bp is pointless, so just don't try very hard.

Ed Terry said...

You left out my favorite technique: drawing a generalized conclusion from a very narrowly applicable set of results. I often find the introduction of a study a good indication of the researcher's bias. That lets me know what framework the results are being presented in. Very often, results that are valid in a very narrow reality and set of conditions are extrapolated into a generalized conclusion the data simply does not support.

lynn said...

"People whose bodies are severely damaged by years of incompetent medical treatment respond poorly to drug cocktails that include drugs known to cause dangerous side effects."

That made me laugh out loud! Witty AND true!