Wednesday, May 20, 2009

What is Pseudoscience?

The real purpose of the scientific method is to make sure Nature hasn't misled you into thinking you know something you don't actually know.
-- Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance

When I googled "pseudoscience," I got a lot of pages describing pseudoscience as any group of ideas that is not accepted by the establishment and is not consistent with the established body of knowledge. In other words, most people use the word "pseudoscience" to name-call and disparage claims they do not believe and do not like. Sure, they talked a lot about how pseudoscience is not testable, but they never defined testability.

After all, there are different standards for tests. What is "testing" to one person is sloppy, inconclusive blather to another. In science, the quality of the data lies in how well others can independently "test" their validity and reproduce the same results. The more detail provided to evaluate and reproduce the methodology, the higher the quality of the paper. The more you have to take the authors' word for anything, the lower the quality. Requiring trust, or faith, in the methodology has no place in real science.

This motivated me to write my own page on how to identify pseudoscience. Rather than use the label to discredit non-conventional thinking, I would list testability criteria that can apply to popularly accepted knowledge as well. In addition, I would emphasize that there is no specific cut-off in the pseudoscience/science spectrum. Rather, it is important to understand that the less scientifically rigorous a conclusion is, the more pseudoscientific it is.

How to Recognize Pseudoscience:

The bottom line in identifying pseudoscience is recognizing claims and conclusions that are not supported by the evidence provided. Just like commercial products, evidence comes in varying qualities. Pseudoscience is the infomercial or used-car salesmanship of the science world: a lot of exaggerated, selective conclusions not supported by the quality of the product (or data).

Now all scientific papers have flaws. It is not that we expect perfection in the data. We just don't want any false advertising, if you will, that ignores the flaws.

As an example, I will use a study frequently cited by government and health authorities as further "scientific" evidence that thimerosal (mercury) in vaccines does not cause autism. In 2003, Madsen et al published a retrospective study looking at autism rates in Denmark before and after the removal of thimerosal from vaccines, covering 1971 to 2000. After the removal of thimerosal in 1992, autism rates actually skyrocketed. The authors concluded that thimerosal cannot be a causal factor in the development of autism. (Reference: Madsen KM, et al. Thimerosal and the Occurrence of Autism: Negative Ecological Evidence From Danish Population-Based Data. Pediatrics. 2003;112(3):604-606.)

I chose a paper popularly cited by authorities to show that my pseudoscience criteria are not biased against novel ideas, but apply to conventionally accepted claims as well. Real science is not attached to an ideological agenda, either for or against the establishment; it calls out flaws where they occur and demands that conclusions match the data.

1. Vague definitions / Lack of transparency and critical details: Pseudoscience does not use precise, objective, and transparent definitions that can be independently evaluated for validity. Its definitions are often murky and lacking in detail, forcing you to trust that the authors really measured what you think they measured.

For example, in Madsen et al's paper, the authors defined autism as meeting criteria for ICD-8 code 299, "psychosis proto-infantilis." Why did they not use ICD-8 code 308, "infantile autism," like everyone else? When I emailed the lead author for clarification, he simply replied that Denmark had always used ICD-8 code 299 for autism, skipped the use of ICD-9 altogether, and that for the specific diagnostic criteria I would have to consult a Danish child psychiatrist. In short, before 1994, for 23 of the 29 years in the study period, we have no clear definition of autism. Since Denmark departed from the ICD-8 coding used in other countries, the specific diagnostic criteria it used are critical. As it stands, we simply have to trust that what the Danes saw as "psychosis proto-infantilis" is the same animal we see as "autism."

2. Changing definitions. Pseudoscience lumps inconsistently defined measurements together. At best, this is sloppy and unreliable. At worst, it constitutes a sleight-of-hand. Imagine someone tells you that Chemical A caused a study subject to lose weight, but you find out that the "before" weight was taken on a different scale than the "after" weight. In real science, one would use exactly the same scale for an honest comparison.

Denmark removed thimerosal in 1992. In 1994, Denmark changed its diagnosis of autism from ICD-8 299 "psychosis proto-infantilis" to ICD-10 F84 "infantile autism." Before 1995, Denmark's autism rates counted only inpatient autism cases, those severe enough to be admitted for hospitalization. After 1995, Denmark started counting both hospitalized cases and outpatient cases. So autism rates skyrocketed after thimerosal was removed. Did they really skyrocket, or did it look like that because the definition of autism changed to count a lot more kids? We will never know. The data don't say.
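To see how much damage a definition change alone can do, here is a back-of-the-envelope sketch in Python. Every number in it is invented for illustration; the point is the mechanism, not the magnitudes.

```python
# Back-of-the-envelope sketch (all numbers invented) of how widening a
# case definition inflates a rate series. Before the change, only
# inpatient cases are counted; after, inpatient plus outpatient.
inpatient, outpatient, population = 60, 180, 1_000_000  # hypothetical

rate_old = inpatient / population * 100_000
rate_new = (inpatient + outpatient) / population * 100_000

print(f"counting inpatient only:         {rate_old:.0f} per 100,000")
print(f"counting inpatient + outpatient: {rate_new:.0f} per 100,000")
# A fourfold "rise" with zero change in the underlying condition --
# the scale changed, not the weight.
```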

3. No real or actual data provided / Insufficient or adjusted statistics. Pseudoscience presents data as graphs or some statistical artifact such as person-years or relative risk. It shows only "adjusted" data, even when a straight-up presentation of raw data would do. Then it omits the details that would allow independent evaluation of how the data were "adjusted" and whether that adjustment was valid.

In Madsen et al's paper, we only see a graph of incidence rates from 1971 to 2000. Any 5th grader who has watched PBS's Cyberchase can tell you that graphs can imply statistical significance, or a sharp rise or fall, where there is actually none. Was the rise significant or just a random fluctuation? The data don't say.
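For what it's worth, checking whether a rise in incidence is more than random fluctuation is not hard once you have raw counts and person-years, which is exactly what the paper withholds. Here is a minimal sketch using made-up numbers and SciPy's exact binomial test, a standard way to compare two Poisson rates by conditioning on the total count:

```python
# Exact comparison of two incidence rates (a conditional Poisson test).
# All counts and person-years below are hypothetical placeholders, NOT
# figures from Madsen et al; the paper gives only a graph.
from scipy.stats import binomtest

cases_before, py_before = 120, 900_000  # hypothetical cases, person-years
cases_after,  py_after  = 150, 850_000  # hypothetical

# Under H0 (equal rates), cases_after ~ Binomial(n, p), where p is the
# "after" period's share of total person-years.
n = cases_before + cases_after
p = py_after / (py_before + py_after)
result = binomtest(cases_after, n, p, alternative="greater")

print(f"rate before: {cases_before / py_before * 100_000:.1f} per 100,000")
print(f"rate after:  {cases_after / py_after * 100_000:.1f} per 100,000")
print(f"one-sided p-value that the rise is real: {result.pvalue:.3f}")
```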

4. No control group / Poor control group. Pseudoscience infers causation from a change in one group, without comparing it to another group that hasn't experienced the change. In science, the comparison group is called the control group. Pseudoscience often has no control group, or designs a control group that is so different from the study group that you can't pinpoint what caused any difference in results.

In the study in question, there simply was no control group. As a correlational study, it did not involve experimentation and therefore did not strictly follow the scientific method. All correlational and anecdotal studies are quasi-scientific or pseudoscientific right out of the gate, and need to be interpreted with many qualifiers and exceeding caution.

As it was, it would have been helpful to compare autism rates with those in other countries using similar diagnostic criteria, or to compare rates between different groups within Denmark, such as between vaccinated children and unvaccinated children never exposed to thimerosal. Without any comparison at all, the study is not much more than a glorified anecdote about one country instead of one person. Without controls, anecdotes of thousands of people are still anecdotes.
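As a sketch of what even the simplest such comparison might look like, here is a two-group contingency test in Python. The counts are hypothetical placeholders, not data from any real cohort; SciPy's fisher_exact is one standard way to ask whether two groups differ.

```python
# Minimal exposed-vs-unexposed comparison the study never made.
# All counts are hypothetical placeholders, not real cohort data.
from scipy.stats import fisher_exact

#            autism   no autism
exposed   = [    40,     99_960]  # hypothetical: thimerosal-exposed children
unexposed = [    35,     99_965]  # hypothetical: never exposed

odds_ratio, p_value = fisher_exact([exposed, unexposed])
print(f"odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")
# Without a table like this, "rates rose" is an anecdote about one
# country; with it, you can at least ask whether exposure matters.
```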

5. No or inadequate analysis of confounders. In science, confounders are factors that could have caused the result you see instead of, or in addition to, the factor you are studying (called the independent variable). Pseudoscience doesn't consider the impact of factors other than the independent variable.

To illustrate, in the Danish study, a possible confounder is that the amount of thimerosal Danes were exposed to was very low compared to other countries. It is possible that dose is critical to whether thimerosal causes autism, and that the Danes were simply unaffected by the low amount they got. Another possibility is that thimerosal is only one of several co-factors that play a role in causing autism, or one of many different causes. After all, autism spectrum disorder has many clinical presentations, and it is entirely possible it has multiple, complicated etiologies. Just because autism rates rose in the absence of thimerosal doesn't automatically mean thimerosal plays no role. Just because obesity rates rise despite a shortage of ice cream doesn't mean ice cream doesn't make you fat.
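To make the dose point concrete, here is a toy sketch in Python. Every number is invented; it simply shows how an exposure that only matters above some dose threshold would be invisible to a country that never used high doses.

```python
# Toy illustration (entirely invented numbers) of a dose-dependent
# effect hiding from a low-dose-only country.
strata = {
    # dose level: (cases, population) -- hypothetical
    "no dose":   (30, 100_000),
    "low dose":  (30, 100_000),   # what a low-dose country observes
    "high dose": (90, 100_000),   # the stratum it never observes
}

for label, (cases, pop) in strata.items():
    print(f"{label:>9}: {cases / pop * 100_000:.0f} per 100,000")

# The low-dose country compares 30 vs 30 per 100,000 and concludes
# "no effect" -- the signal lives entirely in a stratum it never saw.
```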

6. No or inadequate analysis of flaws and weaknesses. Pseudoscience admits to no flaws or holes, or acknowledges fewer than it should. Real science is self-critical, carefully placing necessary limitations and qualifiers on its conclusions.

Although Madsen et al acknowledged that changing the definition of autism to include outpatient cases after 1995 might have inflated the rates somewhat, the authors downplay the impact. They do not address the other vagaries of definition at all, nor any of the other flaws listed here. They do not qualify or limit their conclusions in light of these flaws.

7. Conclusions unwarranted by the data.  Pseudoscience likes to jump to conclusions. Real science is painfully slow, painfully tentative, and painfully precise about interpretation.

The only conclusion warranted by this study is that between 1994 and 2000, inpatient diagnoses of ICD-10 F84 childhood autism appear to have risen in Denmark despite no use of thimerosal in childhood vaccines, but it is unknown if this "rise" is significant or a result of random fluctuation. Certainly, no conclusions on causation or exoneration from causation can be inferred.

If you hold the data in this study against the rigorous standards of real science, they are too vague and inconsistent to support the conclusion that thimerosal cannot possibly cause autism. The study has all the outward trappings of "scientific" research, but none of the fundamental pillars of real science. It is a poster child of pseudoscience.

Summary

Whenever you read a paper where you don't have enough information to independently evaluate and "test" the data and statistics for validity, a red flag should go up. Pseudoscience can be found everywhere, from widely accepted research findings supported by authorities to "alternative"/paranormal/conspiracy theories.