Does the acai berry really slow aging? How about that daily glass of red wine to prevent heart disease? Newspapers are packed with new, exciting research from so-called “experts”, giving us simple, authoritative claims to improve our health. The fads are usually rooted in weak, reductionist science which can be quickly debunked.
As a health nut myself, I try to skim through the massive volume of health tips from a skeptic’s perspective, but it’s easy to get overwhelmed by all the contradictory information scattered about. Rather than blindly following the words of an unknown figure, I want to interpret scientific data on my own and make my own informed decisions. The question is, how do you separate the good from the bad?
Let’s break down a recent news story from the BBC, titled “Daily aspirin ‘prevents and possibly treats cancer’”. I chose this article arbitrarily; you can do the same kind of analysis on any health study. The complete article can be found here:
The first and most straightforward matter to consider is the kind of evidence behind the claim. This evidence should be in the form of a published scientific paper, not the words of a television nutritionist or a popular magazine article.
As you can see, this article is based on a research paper published in the scientific journal The Lancet by Professor Rothwell of Oxford. It’s legit. The paper can be found right here, as linked at the bottom of the article.
Oftentimes, articles will reference studies. If it’s a good study, it shouldn’t be hard to find in a medical database like PubMed, which you can search for free.
Let’s move on.
Next, consider the size of the study. Generally, the more test subjects used, the more reliable the results should be. This isn’t always the case due to something called clustering, but it’s a good rule of thumb to follow.
“51 trials involving more than 77,000 patients” does sound quite impressive. When a study combines the data of many trials together, as this one does, it’s called a meta-analysis. These are quite useful, because in individual trials, subtle but real trends may be dismissed as random chance. They can still be biased, though: companies trying to sell their products often select only the trials that support whatever they’re saying. Nevertheless, this particular study is still looking good.
But if you keep reading, things start going awry.
There is a very, very serious problem in these two paragraphs. Notice that the trials were not designed to find the relationship between cancer and aspirin. They were made to research the use of aspirin to prevent heart disease, and then researchers looked at how many participants developed and died from cancer.
Okay, so what’s the problem with that? What’s wrong with going back and looking at the data from a new angle?
In fact, this is a major no-no when it comes to the scientific method. Imagine you’re playing darts blindfolded, and you randomly throw a bunch of darts. A few of the darts land pretty close together, so you go and draw a target around them. Gee, you’re a pretty good darts player, right?
Whenever a large study is conducted, a lot of data is collected. If you sift through all the data, you’ll inevitably find some positive correlations due to random chance. This does not indicate whether or not the correlation actually exists; you need to conduct a separate study, with that correlation as your new hypothesis, in order to come to any conclusion. Therefore, in this study, the only true, scientific conclusion that can be reached would refer to the correlation between aspirin and heart disease prevention.
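To see how easily pure noise produces “findings”, here’s a small simulation. It is entirely illustrative and not based on the article’s data: both arms of every comparison share the same true event rate, yet scanning fifty after-the-fact subgroup comparisons still turns up a few “significant” results.

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible


def subgroup_pvalue(n=500, rate=0.10):
    """Simulate one subgroup comparison on pure noise.

    Both arms share the same true event rate, so any observed
    difference between them is random chance by construction.
    """
    a = sum(random.random() < rate for _ in range(n))  # events in arm A
    b = sum(random.random() < rate for _ in range(n))  # events in arm B
    pooled = (a + b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return 1.0
    z = abs(a - b) / n / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal-approx. p-value


# Dredge 50 subgroups of pure noise, counting "discoveries" at p < 0.05.
hits = sum(subgroup_pvalue() < 0.05 for _ in range(50))
print(f"'Significant' subgroup findings from pure noise: {hits} of 50")
```

At a 5% threshold you expect a couple of false positives out of 50 comparisons even though nothing real is there, which is exactly why a correlation spotted after the fact needs its own dedicated follow-up study.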
I’m not saying that this makes it a bad study; if you watch the video, Professor Rothwell plainly insists that further research must be done. Basically, the point of this was to give some grounds to continue researching the link between aspirin and cancer, not to tell people at risk of cancer to start taking aspirin. However, you should be wary when someone cites this kind of hindsight research to give you a bold piece of advice, often in the form, “eat/drink/take more of this”. This is the case with many “superfoods”, such as the new American fad, the chia seed.
Now we get into the meat of the article: the stats. It’s distressingly common for the press to manipulate data to create shocking headlines and convincing “facts”. I’ll walk through each of these statistics one by one. Upon careful reading and logical thinking, you’ll see that none of them actually say anything significant.
First of all, note that none of these stats are direct quotes. Journalists like to play around with the data themselves, making it much more dramatic and, therefore, interesting. Qualitative statements are more often quoted because they can be distorted and taken out of context, but when it comes to hard numbers, it’s much more attention-grabbing for these guys to cook up something of their own. Here’s my interpretation:
Ignore the “low (75-300mg) daily dose” for now, and note what the numbers actually are: about 9 cancer cases per thousand each year in the aspirin group versus 12 per thousand in the control group. Does that sound like a significant difference to you? I’d say it’s much more likely to be due to random chance.
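As a rough sanity check of that intuition, here’s a two-proportion z-test on those rates. The numbers below assume, purely for illustration, a single trial with 1,000 patients per arm; the pooled meta-analysis was far larger, and statistical significance depends heavily on sample size, so treat this only as a feel for how small the gap is.

```python
import math

def two_proportion_pvalue(events_a, n_a, events_b, n_b):
    """Two-sided p-value for the difference between two proportions,
    using the usual pooled normal approximation."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erfc(z / math.sqrt(2))

# 9 vs 12 cancer cases per 1000, in a hypothetical 1000-per-arm trial
pval = two_proportion_pvalue(9, 1000, 12, 1000)
print(f"p-value: {pval:.2f}")  # ≈ 0.51 -- nowhere near significance at this size
```

At this hypothetical size the difference is statistically indistinguishable from noise; with tens of thousands of patients the same gap could well reach significance, which is why the sample size always matters as much as the headline rates.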
You also have to consider the confounding variables; remember that all the patients were taking either aspirin or a dummy pill to see if it prevents heart disease. Wouldn’t the patients at high risk of heart disease be more likely to be taking aspirin? And if they are at high risk of heart disease, wouldn’t they also be trying to live healthier lifestyles, eating better, exercising more, and not taking drugs? And then wouldn’t these people have a slightly lower chance of getting cancer than the people not taking aspirin?
“Reduced the risk of a cancer death by 15%”…sound impressive? It isn’t. In fact, I think this is the most useless statement in the article. 15% is not an absolute number; it’s a relative number. And relative numbers depend on the initial probability of the event occurring. Let me explain:
This “cancer death” they refer to does not mean all kinds of cancer deaths in the world. It is referring to the specific subgroup of people who are taking aspirin and then develop cancer within five years. What percentage of their test group is that going to affect? Well, if we use the numbers from the previous statistic (9 in 1000 each year for the aspirin-taking group, 12 in 1000 each year for the non-aspirin-taking group), we can approximate that to be about 50 people per 1000, or 5%. Now multiply 5% by 0.85 (or 1 – 0.15) and what do you get? 4.25%. Suddenly, the change doesn’t look so exciting. Again, it’s more than likely due to random chance and confounding variables.
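The arithmetic above can be written out explicitly. This just reproduces the article’s back-of-the-envelope conversion from a relative reduction to an absolute one, using the 9-in-1000 and 12-in-1000 rates quoted earlier:

```python
# Average yearly cancer-death rate across both groups, per the article:
yearly_rate = (9 + 12) / 2 / 1000          # ~10.5 per 1000 per year
baseline_5yr = yearly_rate * 5             # ~5% risk over five years

relative_reduction = 0.15                  # the headline "15%"
with_aspirin = baseline_5yr * (1 - relative_reduction)

print(f"Baseline 5-year risk:    {baseline_5yr:.2%}")   # 5.25%
print(f"5-year risk on aspirin:  {with_aspirin:.2%}")   # 4.46%
# absolute reduction: under one percentage point
print(f"Absolute reduction:      {baseline_5yr - with_aspirin:.2%}")
```

A 15% relative drop on a roughly 5% baseline works out to well under one percentage point of absolute risk: the relative figure sounds far more dramatic than the absolute one.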
Furthermore, this is a very specific study, taking into account people who are probably somewhat more at risk of heart disease than normal (or else why would they want to take part in the study?). In the real population, the “die-from-cancer-after-taking-aspirin” scenario would probably occur even less often than the already small 5%. Do you think it would be valid to extrapolate the data from that small, specific test group to the entire world? I think not.
A similar argument could be made for this statistic. However, this one’s even more ambiguous because it doesn’t give us a concrete time range; it just indicates that the patients took aspirin for longer than five years. What does that mean? Did the cancer death reduction suddenly jump from 15% to 37% in the 6th year? The 15th? The 50th? Maybe the follow-up was so long that many participants died from other causes, so the proportion of deaths attributed to cancer fell.
I like how this says “in absolute numbers”, so it sounds like it’s crystal clear and simple to understand. Unfortunately, it once again succumbs to invalid extrapolation. The data it is based on comes from a very small subgroup of people who were taking aspirin and then developed cancer. Even in the study itself, this was a very small number (9 in 1000 each year for the aspirin-taking group, 12 in 1000 each year for the non-aspirin-taking group, in case you’ve forgotten). I’ll give them the benefit of the doubt and assume that they used more than just those 21 people to come up with the one-in-five statement, but even so, it is proportionately minuscule in comparison to the entire study. You can’t generalize from that small a subset of people; it just doesn’t work.
The rest of the article is pretty vague and doesn’t warrant further analysis. However, there are two more general points I’d like to highlight, because you’ll often encounter them when reading these types of articles.
First of all, the press likes to publish positive correlations. You wouldn’t be reading this article if the title was “Daily aspirin ‘does not prevent and treat cancer’”. As a result, when combining clinical trials to create a meta-analysis, the data is often skewed because the trials that are archived and easily available to the researchers are often the ones showing positive correlations. In this case, I don’t think it’s a major concern because the trials were not intended to measure cancer rates in the first place. A methodological shortcoming is actually beneficial here.
Secondly, the article notes the difference between different dosage levels. Higher doses, apparently, are more effective. This isn’t always the case, and if you run into this type of statement in your own readings, you want to make sure that there is valid evidence behind it. For instance, let’s say a study finds that vitamin C is helpful in curing the common cold. Some well-meaning (or not-so-well-meaning) person might then conclude that eating more oranges when you are sick will help you get better. Do you see the error? Although vitamin C may be important in the process of getting better, there is no evidence that people are deficient in vitamin C. Therefore, it is a logical fallacy to conclude that we need more of it, because we don’t know if having more than we already have will be of any use. It might even cause some nasty side effects.
If you examine these studies with a critical eye, you’ll find very few pieces of advice worth following. You might start to question whether you should be wasting time reading these articles in the first place, but, at the very least, it can be entertaining to see how gullible the general public is. You might even want to do some analysis yourself and see what you can come up with. In the end though, good health does not come from a magic pill. You’ve got to stick to the basics: eat your veggies, get regular exercise, sleep enough, and enjoy life.