Wednesday, April 28, 2010

How Can You Tell if a Hot Dog is Really Made of Beef?

(Answer: Trust. You have to trust the person who slapped the label on the hot dog.)

Madsen, Kreesten M. et al. 2002. "A population-based study of measles, mumps, and rubella vaccination and autism." New England Journal of Medicine 347: 1477-1482.

This paper by Madsen et al. is perhaps the one most widely cited as "scientific proof" that there is no link between the MMR vaccine and autism. On the face of it, the study appears to be a straightforward comparison of autism incidence rates between those who had received the MMR and those who hadn't. The authors studied half a million children born between 1991 and 1998 in Denmark. They identified MMR vaccination status (MMR or no MMR) and followed these children to see how many in each group were diagnosed with autism disorders. So far, so good.

The problem is that all risk calculations were done using person-years (persons x years) rather than by taking the number of children in each group and counting how many of them got autism.

Here were their "rules" for sorting person-years into the vaccinated and unvaccinated groups.

1. Children were counted in the unvaccinated person-years group until they were vaccinated. Both vaccinated and unvaccinated person-years came from some of the same children.
2. Early-diagnosis (and likely severe, congenital) autism cases were assigned to the unvaccinated group because they were diagnosed prior to vaccination.

So imagine this.

Jack: MMR at age 2. Diagnosed age 4. Followed until age 6.
Unvaccinated group: 2 person-years.
Vaccinated group: 4 person-years.
1 case of autism in vaccinated group.

Bob: MMR at age 2. Followed until age 6.
Unvaccinated group: 2 person-years.
Vaccinated group: 4 person-years.
0 cases of autism in vaccinated group.

Bill: MMR at age 2. Followed until age 6.
Unvaccinated group: 2 person-years.
Vaccinated group: 4 person-years.
0 cases of autism in vaccinated group.

Ben: MMR at age 2. Followed until age 6.
Unvaccinated group: 2 person-years.
Vaccinated group: 4 person-years.
0 cases of autism in vaccinated group.

Henry: Diagnosed age 1. MMR at age 2. Followed until age 4.
Unvaccinated group: 4 person-years.
Vaccinated group: 0 person-years.
1 case of autism in unvaccinated group.

Charlie: No MMR. Followed until age 4.
Unvaccinated group: 4 person-years.
Vaccinated group: 0 person-years.
0 cases of autism in unvaccinated group.


Unvaccinated group = 16 person-years.
Vaccinated group = 16 person-years.
One case of autism in each group. No difference between the two groups.

In reality, the above scenario should show 2 cases of autism in 5 vaccinated children vs. no cases of autism in 1 unvaccinated child. If early diagnosis of autism muddies the vaccinated picture, those cases should have been pulled from the two larger groups and analyzed separately. It would have provided a useful baseline of severe congenital cases to compare against later-onset cases that could possibly be triggered by the MMR.

So, taking Henry out of the scenario, there would be 1 case of autism amongst 4 vaccinated children vs. no cases of autism in 1 unvaccinated child. Notice how different this picture looks compared to the person-years tallies.
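To make the bookkeeping concrete, here is a minimal Python sketch of the toy scenario above. The names and tallies are the hypothetical ones from this post, not data from the paper.

# Each child: (got the MMR?, person-years counted as unvaccinated,
#              person-years counted as vaccinated, diagnosed with autism?,
#              diagnosis assigned to the unvaccinated group?)
children = {
    "Jack":    (True,  2, 4, True,  False),
    "Bob":     (True,  2, 4, False, False),
    "Bill":    (True,  2, 4, False, False),
    "Ben":     (True,  2, 4, False, False),
    "Henry":   (True,  4, 0, True,  True),   # diagnosed before the shot, so counted as "unvaccinated"
    "Charlie": (False, 4, 0, False, False),  # never got the MMR
}

# Person-years comparison (the person-years approach)
unvax_py = sum(c[1] for c in children.values())                # 16
vax_py = sum(c[2] for c in children.values())                  # 16
unvax_cases = sum(c[3] and c[4] for c in children.values())    # 1 (Henry)
vax_cases = sum(c[3] and not c[4] for c in children.values())  # 1 (Jack)
print(f"person-years: {unvax_cases} case in {unvax_py} unvaccinated PY vs. "
      f"{vax_cases} case in {vax_py} vaccinated PY")

# Head count (children, not years)
vax_kids = [c for c in children.values() if c[0]]
unvax_kids = [c for c in children.values() if not c[0]]
print(f"head count: {sum(c[3] for c in vax_kids)} of {len(vax_kids)} vaccinated children vs. "
      f"{sum(c[3] for c in unvax_kids)} of {len(unvax_kids)} unvaccinated children")

The first tally comes out dead even (1 case in 16 person-years on each side); the second comes out 2 of 5 vaccinated children vs. 0 of 1 unvaccinated child. Same six kids, very different picture.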

Please note I am not saying that the authors manipulated data until both groups were even. The imaginary scenario simply demonstrates that the use of person-years is NOT the same thing as a straightforward comparison of incidence in both groups. It provides a very distorted picture that does not reflect reality as we know it, and one malleable enough to offer whatever kind of results we want, simply based on how long each child is followed.

Other concerns include:

1. Vaccination status of children born before 1996 was inferred from a secondary database. How accurate was this inference method and its resulting data on vaccination status?

2. Many subjects were too young to be diagnosed with autism when the study ended. How much effect did this have on true autism rates in either group?

3. A LOT of calculations were "adjusted." We don't know how. That should always be a red flag, when we are asked to trust that the authors manipulated the data with integrity.

4. How did they diagnose autism? We already know they didn't distinguish congenital (and likely severe) autism from milder forms or from regressive autism. When investigating whether the MMR could be a factor in autism, severity and type of onset should be carefully defined.

5. The study did not exclude the use of other vaccines. Even if the person-years comparison were valid (in that it reflected reality well), the best one can say is that the MMR is no more likely to cause autism than the other vaccines, since the comparison group was not truly "unvaccinated."

The bottom line is, instead of presenting the straight data so we could judge for ourselves how many cases of autism were found in each group, the authors chose to present data that has been processed and diluted and manipulated beyond recognition. Instead of serving us what could have been a sirloin steak of science, we got a hot dog that claims to be made of beef--but in the end, we can't be sure. We just have to take the authors' word for it.

And whenever authors present unrecognizable data with a lot of question marks and ask you to trust them, don't. It's bad science.

Thursday, March 11, 2010

Thank God for Independent Journalism

In today's article, journalist Connie Howard writes a story so simple and so straightforward it boggles the mind that it hadn't been written before.

On February 2, 2010, the medical journal Lancet retracted their publication of a 1998 paper authored by Andrew Wakefield, MD. If you read major news coverage about this controversy, you would think Wakefield was the devil himself, a shamelessly greedy excuse of a man who used children unethically for personal financial gains. You would also note that in none of these articles did they give Dr. Wakefield a chance to defend himself. In none of the articles did anyone question potential conflicts of interest of those who are condemning him. No, it has been a vilify-Wakefield-fest from start to finish.

Large media conglomerates often do that. They jump on a hatred wagon, usually driven by the elite and powerful, and go hunting with their pitchforks and torches. I don't know what they are teaching journalism majors in school nowadays, but there is as much journalistic integrity in burning someone at the stake as there is in commercials and advertisements.

All I can say is, WOW. Thank you, Ms. Connie Howard, for telling the other side of the story, no matter how unpopular. Thank you, Vue Weekly, for having the courage to publish it.

Sunday, March 7, 2010

Obedience = Protection

The following article is a critique of this paper:
Glanz, J. M. et al. 2009. "Parental Refusal of Pertussis Vaccination Is Associated With an Increased Risk of Pertussis Infection in Children." Pediatrics 123(6): 1446-1451.

So this guy named Glanz (and his buddies) looked into the database of an HMO and found 156 lab confirmed pediatric cases of pertussis, with clear vaccination records, over a 10-year period. Then he randomly picked 595 kids who didn't get pertussis diagnoses, matched on age and gender to the lab confirmed pertussis kids, to serve as comparison cases.

Out of the 156 pertussis cases, he found that 18 of these kids (11%) had parents who refused the pertussis vaccine. The rest, 138 of the kids (89%), had parents who accepted vaccines--their children either got the vaccine or had medical exemptions to miss one or more doses.

Now get this. What he studied wasn't whether or not the kids were vaccinated. There were unvaccinated kids in both the acceptor group and the refusal group! What Glanz was really after was whether parents were accepting or refusing of physician vaccination recommendations. Were they obedient or not? Did they believe in vaccines or not?

Here is my problem. Did the authors really think Bordetella pertussis cares WHY a kid is vaccinated or unvaccinated? Why draw a relationship between parental beliefs and a disease, regardless of vaccination status? They might as well have studied parental religion and pertussis, or parental belief in astrology and pertussis.

See, if it were me, and I were interested in the effects of vaccination on the risk of pertussis, I would group the kids by the number of shots they had, period. Fully vaccinated in one group, partially in the second group, and unvaccinated in the third. I would assume the bacteria do not care WHY kids were unvaccinated or partially vaccinated, forget whether the parents had a bad attitude or not, and focus on the kids' actual biology.

Let's go back to the 595 comparison cases for a moment. They found that while a full 11% of lab confirmed pertussis cases were vaccine refusers, only half a percent (0.5%) of the 595 kids without pertussis diagnoses were vaccine refusers.

So the Glanz guy says, "Hey, look! Most of the vaccine refusers we found are in the group who got pertussis! Kids with rebel, vaccine-refusing parents are 23 times more likely to get pertussis than kids with nice, obedient parents who either vaccinated or didn't vaccinate as they were told to by their doctors."
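For what it's worth, the "23 times" figure is essentially an odds ratio built from those two percentages. Here is a rough back-of-the-envelope version in Python, assuming the 0.5% works out to about 3 of the 595 comparison kids (the exact split isn't given here); the paper's published number comes out of its matched analysis, so don't expect an exact match.

# Back-of-the-envelope odds ratio; the 3 vs. 592 control split is an assumption.
cases_refuser, cases_acceptor = 18, 138        # the 156 lab confirmed pertussis kids
controls_refuser, controls_acceptor = 3, 592   # the 595 comparison kids (assumed split)

odds_ratio = (cases_refuser / cases_acceptor) / (controls_refuser / controls_acceptor)
print(f"odds ratio is roughly {odds_ratio:.0f}")  # about 26, the same ballpark as the reported 23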

Breaking medical news! A parent's vaccine acceptance and compliance appears to extend magic protection over a child, regardless of actual immunity. Glanz has discovered a bacterial Passover.

(Eye rolling.) Nuff said, right?

Actually, I got more.

Glanz starts with a group of pertussis kids to find out whether their parents were vaccine acceptors or refusers. Fair enough. But when it comes time for conclusions, he reverses the relationship and uses vaccine refusal to predict the risk of pertussis.

Uh uh. You don't do that. At best, it is sloppy. At worst, it is disingenuous. Either way, the reversal is unscientific and makes the conclusion invalid.

If he wanted to use vaccine refusal to predict the risk of pertussis, he should have started by finding all the vaccine refusers in the HMO database. Next, he could have looked at how many lab confirmed pertussis cases there were amongst all the vaccine refusers vs. the randomly picked vaccine accepting controls. Then he could have calculated the chances that, given a refuser, he would also find lab confirmed pertussis in their chart.

Had he used this method, he might have found very different results. Why?

Not all refusers are exposed to pertussis, for one. What if all the pertussis cases occurred in one school, and that school happened to be in a neo-hippie community with an exceptionally high number of vaccine refusers? What if he took all the comparison cases with no diagnosed pertussis from different schools where pertussis was not going around, schools that happened to have very few refusers?

(He could have matched the control group on age, gender, AND school attended. But he didn't. You would think exposure would be an important variable to control for, but this is the same guy who thinks bacteria care about doctor's notes. So…eye rolling again.)

Now, if he had started with all the refusers, he could have followed all of them, exposed or not. Match controls on age, gender, and school, and you have a better idea of whether refusers truly have a higher risk than acceptors of having lab-confirmed pertussis. At least the conclusion wouldn't be completely incongruent with the study design.
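To see the difference in shape between his design and the one I'm describing, here is a toy Python sketch with completely made-up counts; the arithmetic, not the numbers, is the point.

# Exposure-first (start-with-the-refusers) design; all counts are hypothetical.
refusers_total, refusers_with_pertussis = 1000, 5         # made up
acceptors_total, acceptors_with_pertussis = 50000, 120    # made up, matched acceptors

risk_refusers = refusers_with_pertussis / refusers_total
risk_acceptors = acceptors_with_pertussis / acceptors_total
print(f"risk in refusers:  {risk_refusers:.3%}")
print(f"risk in acceptors: {risk_acceptors:.3%}")
print(f"relative risk is roughly {risk_refusers / risk_acceptors:.1f}")
# This answers "given a refuser, how likely is lab confirmed pertussis?" --
# which is the question the conclusion claims to answer.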

As it stands, the only thing you can possibly conclude from this study is that given a child with lab confirmed pertussis, you are more likely to find that his parents were vaccine refusers than if you were given a child with no diagnosed pertussis (but possibly with asymptomatic pertussis).

First, note that this increased likelihood exists ONLY when compared to a child with no diagnosed pertussis. If you are just looking at the pertussis kids alone, remember that 89% were vaccine acceptors and only 11% were vaccine refusers. That means, given a child with lab confirmed pertussis, you are about eight times (138/18 ≈ 7.7) more likely to find vaccine accepting parents than vaccine refusing parents.

Second, note that I keep saying "lab confirmed pertussis," and not just "pertussis" like Glanz liked to say. Glanz himself identified three groups of pertussis infections: 1) "frequent asymptomatic infections," where people have pertussis but show no symptoms; 2) symptomatic infection with no lab confirmation; and 3) symptomatic infection with lab confirmation. Now out of all three, he studied only the third type. Yet in his conclusion, he predicted the risk of pertussis infections in general. Science is supposed to be precise to avoid misleading people. It may sound like nitpicking, but imprecision and overgeneralization are very bad science.

Once you focus on lab confirmed pertussis, and not all pertussis infections, it becomes obvious that diagnostic bias could very well explain the difference between the group with lab confirmed pertussis and the group without diagnosed pertussis. It could be that physicians are more likely to order lab testing for children of vaccine refusers than children of vaccine acceptors. Indeed, Glanz acknowledged this bias.

But get this. He says this bias is roughly cancelled out by another bias: that vaccine refusers are less likely to attend clinics to begin with. He estimates that the diagnostic bias is about threefold (3x) and the clinic attendance bias about twofold (2x), so the biases cancel each other out and are "negligible." That is to say, doctors are 3 times more likely to order labwork on refusers than acceptors, but refusers are 2 times less likely than acceptors to see that doctor to begin with. So it's all okay.

How did he get this threefold vs. twofold estimate? He did a little side analysis of very young children with vague symptoms, counting how many refusers vs. acceptors went to a respiratory clinic, and how many refusers vs. acceptors got lab testing orders at the clinic.

Say what?

Very young children with vague symptoms do not represent the initial group, which was mostly older children unlikely to have only vague symptoms. It could be that in very young children with vague symptoms, the biases are 3x vs. 2x, but in older children with marked symptoms, the biases could be 5x vs. the same, or the same vs. 5x, which doesn't cancel out at all. Maybe refusers are more likely to skip clinics for vague symptoms, but if the symptoms are severe, they are just as likely as acceptors to go. Maybe doctors are more likely to order labs on refusers for vague symptoms, but when the symptoms are severe, doctors are equally likely to order labs for both groups. We don't know. Again, he is generalizing without the data to support it. Bad, bad science. Since this side analysis of his doesn't represent the core sample of his study, it really is completely useless.
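To see why "the biases roughly cancel" depends entirely on what the bias sizes actually are in the children being studied, here is a crude multiplicative sketch in Python. The model and the numbers are mine, not the paper's.

# Crude assumption: refusers are `testing` times more likely to get a lab order,
# and `attendance` times less likely to show up at the clinic at all, so the
# apparent odds ratio gets scaled by testing / attendance.
def apparent_odds_ratio(true_or, testing, attendance):
    return true_or * testing / attendance

true_or = 5.0  # pick any "true" value; the point is how the biases move it
for testing, attendance in [(3, 2), (5, 1), (1, 5)]:
    print(f"testing {testing}x, attendance {attendance}x -> "
          f"apparent OR {apparent_odds_ratio(true_or, testing, attendance):.1f}")
# 3x vs. 2x leaves a 1.5-fold distortion; 5x vs. 1x leaves a 5-fold one; 1x vs. 5x
# shrinks the ratio instead. Whether anything "cancels" depends entirely on the numbers.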

So we're back to the problem of diagnostic bias. It is just one of many possible factors that could explain why, given a child with lab confirmed pertussis, you are more likely to find a vaccine refusing parent than given a child without lab confirmed pertussis. You could have doctors who like to order more labwork for refusers, you could happen to have more refuser kids at a school with a pertussis outbreak, you could have refuser kids more likely to exhibit stronger symptoms of pertussis than acceptor kids (who are either vaccinated or have other medical problems behind their medical exemptions), you could have refusers who are more likely to attend clinics because of stronger symptoms, and so on.

One thing is certain. What you cannot conclude is what Glanz concluded: "Our study found a strong association between parental vaccine refusal and the risk of pertussis infection in children." The study design does not support this conclusion. The methodology doesn't support this conclusion. The data doesn't support this conclusion. It's a case of finding one thing and generalizing improperly to make it look like another. It's another case of pseudo-scientific sleight of hand. You got a waste basket for junk science? Crumple this one up and throw it in.

Wednesday, January 20, 2010

The Red Light District in Science

Tomorrow's issue of Nature talks about a need for strategic communication about global warming: how to address "gaps" without undermining the understanding.
"Climate science, like any active field of research, has some major gaps in understanding (see page 284). Yet the political stakes have grown so high in this field, and the public discourse has become so heated, that climate researchers find it hard to talk openly about those gaps."
There you have it: political stakes. When did science start having political stakes? Show me a science with political stakes, and I'll show you the world's oldest profession in a lab coat.

You know, I have heard this sort of question before. How do we debate honestly without giving the "other side" ammunition for attack? I've heard this in both religious communities and political parties, even in medical circles. But I have never heard this kind of question in a science.

Science is about the search for objective, empirical evidence. It's about letting the facts fall where they may. It's about the search for truth, not the careful and strategic protection of a belief. There is no "other side" in science; there is only an "other side" in politics, ego, careers, money, and legislation. Scientists are not afraid of attacks. It is understood that if a scientific perspective or conclusion is not defensible, it deserves to fall. Only scientists who are more invested in a political ideology than they are in the scientific method are afraid of attacks.

Shame on Nature for not knowing that. Or forgetting it.