Every time another study hits the news with a headline like "Lowering blood sugar doesn't help diabetics," I get a slew of emails asking me what I think of it. Unfortunately, what I think of most of these studies can't be printed here without the "adult content" warning popping up when you access the blog.
It is getting to the point where I have two choices: I can spend my very limited research time reading and debunking bad studies, or I can spend it looking for information that might actually help people.
With this in mind, this post gives you the tools you need to do your own debunking.
WHEN YOU READ A STUDY THAT SEEMS TO PROVE SOMETHING YOU SUSPECT IS FALSE, ASK:

1. What was the real question being asked by the researchers?

Most of the time, studies reported with the headline "Lowering blood sugar doesn't prevent X in diabetics" are studies that asked, "Does using very expensive new drug Y, which lowers blood sugar a very small amount, prevent heart attacks?" This is a very different question, but you wouldn't know it from the way the studies are reported.
To find out what a study really asked--and what it really found--look up the abstract of the study as displayed by the research journal that published it. You can usually find this abstract by googling the name of the researcher cited in the news releases along with a keyword.
Sometimes you have to see the whole article to understand what was really discovered. The journals Diabetes Care and Diabetes, among many others, make their full content available to the public six months after publication.
2. Who were the subjects in the study?

There is a huge difference between the health outcomes you will see when you impose tight control on a newly diagnosed 50-year-old person with Type 2 diabetes and those you will see when you attempt it in a 70-year-old who was diagnosed 20 years ago and has been running a fasting blood sugar of 195 mg/dl ever since.
Often the real headline should be, "People whose bodies are severely damaged by years of incompetent medical treatment respond poorly to drug cocktails that include drugs known to cause dangerous side effects."
3. How many subjects were in the study--and was it really a study?

Study size matters. A study of 150 people is easily skewed to produce the result a sponsor wants to see. A study with 20,000 people is not. Sponsors play a lot of statistical games with data from small studies, because when a study is small you can coax just about any result you want out of the data if you use techniques that amplify numbers, like reporting "risk" instead of "incidence." This is harder (though not impossible) to do with large sample sizes.
But when you see a large sample size, make sure that sample size reflects a single study. Metastudies that combine data from many studies to come up with a large total number of subjects are highly unreliable, as the methodology in the individual studies--and even the lab reference ranges used--are usually very different. Mixing apples and oranges makes for a delicious, though high carb, fruit salad. It's not so good for research conclusions.
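To see why small studies are so much easier to skew, here is a rough simulation with made-up numbers: a "drug" that truly does nothing, tested over and over on 150 people versus 20,000 people. The subject counts and the 30 mg/dl noise figure are my own illustrative assumptions, not from any real trial.

```python
import random
import statistics

def best_apparent_effect(n, trials=200, seed=1):
    """Run `trials` fake studies of size n in which the drug truly does
    nothing, and return the largest apparent blood-sugar drop seen in
    any one of them. A sponsor who runs (or cherry-picks) many small
    studies can report that best-looking result."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        # Each subject's change in blood sugar is pure noise, SD 30 mg/dl.
        drug = [rng.gauss(0, 30) for _ in range(n // 2)]
        placebo = [rng.gauss(0, 30) for _ in range(n // 2)]
        apparent_drop = statistics.mean(placebo) - statistics.mean(drug)
        best = max(best, apparent_drop)
    return best

small = best_apparent_effect(150)
large = best_apparent_effect(20000)
print(f"best apparent effect, n=150:   {small:.1f} mg/dl")
print(f"best apparent effect, n=20000: {large:.1f} mg/dl")
```

With 150 subjects, pure chance routinely produces an "effect" of 10 mg/dl or more; with 20,000 subjects, the same noise averages out to a fraction of that, which is why a large single study is much harder to game.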
4. What was the blood sugar of the study subjects like at the beginning and the end of the study?

Too many studies that prove some drug or intervention "doesn't work" start out with patients whose average A1c is 9%, and at the end of the study the average A1c is 8%. That is still way over the level now known to cause complications and heart attack.
Lowering blood sugar WON'T prevent complications if blood sugar isn't lowered enough to prevent complications. That sounds self-evident, but researchers do not seem aware of what other researchers--usually not in the employ of drug companies--have found to be the level at which complications occur.
That level for heart disease appears to be one where one-hour readings after meals do not exceed 155 mg/dl (8.6 mmol/L). (Details HERE.) The research studies that identified toxic blood sugar levels for other complications are described HERE.
5. Was it a human study?

Shocking amounts of what doctors think they know about diabetes and diabetes dietary treatments come from rodent studies. Rodents have very different genomes from people, very different pancreases, and extremely different lipid metabolism.
6. If it was a diet study, what was the actual diet composition?

Many "low carb" studies involve people eating over 150 g of carb a day. Even some "Atkins" diet studies involve people eating over 100 g of carb a day.
Many diet studies purporting to find meat toxic are questionnaire studies in which people are asked questions like "How many servings of meat did you eat in the last month?" These questionnaires are standardized and do not include the question, "Did you eat fries with that?" Meat eaten with fries, bread, and a soda registers as "meat" or a "high protein diet" when these studies are reported.
7. Who sponsored the study?

Independent research confirms that when a study is paid for by a drug company, it is likely to come up with a positive finding. Often the study itself comes up with negative findings on the original question it was run to test, but the media will report the drug company's spin. This means the headline "Drug X lowers Y" often turns out to ignore the rest of the sentence: "in a statistically insignificant manner indistinguishable from chance."
By the same token, the headline "Drug X prevents hangnails!" ignores that the study was conducted to see if Drug X prevented heart attacks--which it did not, and which, in fact, it increased.
8. What was actually measured?

Conclusions are often drawn by measuring something that isn't as closely connected to the result as the reporting of the study would have you believe.
Studies of lipid lowering drugs may report only the LDL level attained by subjects, not whether lowering LDL decreased the incidence of heart attack. (It rarely does.) This is an example of using a "surrogate marker." Surrogate markers are factors believed to be connected to an outcome, but which often aren't.
Measuring A1c instead of incidence of complications when studying a drug's effectiveness is another example of using a surrogate marker.
Studies that tell you that food X lowers blood sugar often actually measure the level of some micronutrient in the food that resembles a pharmaceutical known to lower blood sugar, or that has an impact in a test tube, but they do not measure whether people who eat the food experience lower blood sugars.
9. Is the finding an average, and if so, what was the standard deviation?

Suppose Drug X lowers the blood sugar of one third of those who take it by 120 mg/dl, of another third by 100 mg/dl, and of the last third by 80 mg/dl. Drug Y lowers the blood sugar of one third of those who take it by 300 mg/dl and of all the rest by 0. The studies on both these drugs report that Drug X and Drug Y lower blood sugar by a mean of 100 mg/dl.

These drugs report the same statistic, but the first drug is moderately helpful to everyone who takes it, while the second drug is useless for two thirds of those who take it.
This is why you can't tell much about any research finding by looking at means (averages). Unfortunately averages are the statistic used in most medical research.
If there is a standard deviation (SD) given, that helps, though you will never see it discussed in the study or the reporting. The SD shows you the size of the spread around the average. The SD would be much greater for the second study than the first.
Medians are a much better measure, too. But you never see them used. In the case above, the median for Drug X is 100 mg/dl and for Drug Y it is 0 mg/dl.
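The mean/median/SD point above can be checked with a few lines of code. The per-patient numbers here are hypothetical, matching the Drug X and Drug Y example (with Drug X's thirds at 120, 100, and 80 mg/dl so its mean works out to exactly 100):

```python
import statistics

# Hypothetical per-patient blood sugar drops (mg/dl), 300 patients each.
# Drug X helps everyone moderately; Drug Y helps only one third, dramatically.
drug_x = [120] * 100 + [100] * 100 + [80] * 100
drug_y = [300] * 100 + [0] * 200

for name, drops in [("Drug X", drug_x), ("Drug Y", drug_y)]:
    print(name,
          "mean:", statistics.mean(drops),
          "median:", statistics.median(drops),
          "SD:", round(statistics.stdev(drops), 1))
```

Both drugs show the identical mean of 100 mg/dl, but the median (100 vs 0) and the much larger standard deviation for Drug Y immediately expose how differently the two drugs behave.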
When studying the impact of anything on blood sugar, it's also worth remembering that the higher the starting blood sugar, the more dramatic a statistical outcome the sponsor can achieve. This is why drug studies are always done with a population of people who start out with very poorly controlled diabetes. You can drop someone from an 11% A1c to a 9% A1c a lot easier with any intervention than you can drop someone from 8% to 6%.
10. Be alert for the telltale signs of statistically massaged results

There are many dodgy statistical techniques that can turn a very small number into a larger one. Report the relative "risk" of some condition rather than its "incidence" in the same population and you get a number 10 to 100 times bigger than you started with.

Even dodgier, if your original result is statistically insignificant, you can measure something like "percent of change" rather than risk or incidence. The percent of change between two numbers that vary by a statistically insignificant amount may be reported as statistically significant. I have seen this done more than once. This secondary measurement is meaningless to anyone who understands statistics, but unfortunately, my years of reading studies have convinced me that the peers who review medical studies flunked statistics. More than once.
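The risk-versus-incidence trick is just arithmetic. Here is a sketch with invented trial numbers (10,000 patients per arm, 10 vs 20 heart attacks--my own illustrative figures, not from any actual study):

```python
# Hypothetical trial: the drug "halves the risk" of heart attack,
# but look at what that means in absolute incidence.
treated_events, treated_n = 10, 10000   # 0.10% incidence
control_events, control_n = 20, 10000   # 0.20% incidence

treated_rate = treated_events / treated_n
control_rate = control_events / control_n

relative_risk_reduction = (control_rate - treated_rate) / control_rate
absolute_risk_reduction = control_rate - treated_rate

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"absolute risk reduction: {absolute_risk_reduction:.2%}")  # 0.10%
```

The press release says "cuts heart attack risk in half"; the absolute numbers say 1 fewer heart attack per 1,000 patients treated. Same data, a 500-fold difference in how impressive the headline number looks.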
==
When you see criminally inept research getting play in the media, write letters to your newspaper, TV channel, or the publisher who printed it. They will pay the same attention to your letters as they do to mine--none. I've been writing such letters for years and have never received anything more than an automatically generated form reply. But perhaps if three thousand people sent in the same letter, it would have more impact. This blog gets a whole lot more than three thousand readers each month. Let the health media hear from you!
If you have more suggestions, please post them in the comments.