Ice core vs. surface measurements
While I was investigating whether the planet was warming, I had always assumed CO2 levels were unquestionably rising. After all, we've all seen graphs like this:
(IPCC: Ice core proxy levels followed by direct measurements)
Since some "adjusting" is done to correlate CO2 amounts in air bubbles to the amount thought to be in the atmosphere at that time, ice core measurements are not as good as the real thing, but are thought to be valid proxies for direct measurements.
Then last year (2007), a German researcher named Ernst Beck published another graph, made from direct CO2 measurements, that looks like this:
(CO2 levels are the red line.)
He showed a peak back in the 1820s near 400 ppm, which throws the entire temperature-CO2 correlation out of whack. Not only that, he accused the authors of the conventional graphs of cherry-picking data that suited their ideological agenda, of "falsifying the history of CO2." Both Beck and the journal that published the study (Energy and Environment) were immediately attacked by global warming proponents as, to put it politely, unworthy of publication. Keeling, whose father's work is the cornerstone of the IPCC graph, called Energy and Environment a "forum for laundering pseudo-science."
Name-calling aside, is there any validity to Beck's data on CO2? He compiled 90,000 chemical measurements of CO2 from "180 technical papers published between 1812 and 1961."
"The compilation of data was selective. Nearly all of the air sample measurements that I used were originally obtained from rural areas or the periphery of towns, under comparable conditions of a height of approx. 2 m above ground at a site distant from potential industrial or military contamination...His critics, including Keeling, claim that these measurements are neither here nor there. They have too much variability and do not represent the true "background" level of CO2. The only reliable source of this true, "background" CO2 for that time period is in air bubbles trapped in antarctic ice. Everything else is just irrelevant noise. Note that they are not disputing the accuracy of the data. They are saying anything with that much variance is unacceptable.
...Discounting such unsatisfactory data [because of deficiencies in certain methods], in every decade since 1857 we can still identify several measurement series that contain hundreds of precise, continuous data."
It seems to me that 90,000 readings in 180 published papers should not be so easily dismissed. If nothing else, they show that measured CO2 levels had a huge amount of variance. Yet none of this variance is taken into consideration because all but one source of CO2 measurements (ice core proxies) are categorically rejected.
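The variance objection can be made concrete with a small sketch. The numbers below are invented for illustration, not Beck's or Keeling's actual data; the point is only to show how a rule of the form "reject any series whose spread exceeds some cutoff" discards an entire dataset wholesale:

```python
import statistics

# Hypothetical illustration of variance-based filtering.
# All values below are invented, in ppm; they are NOT real measurements.
chemical_readings = [310, 355, 290, 400, 335, 305, 370, 295]  # high spread
ice_core_proxies  = [284, 285, 286, 285, 287, 286, 285, 286]  # low spread

def spread(series):
    """Sample standard deviation of a series, in ppm."""
    return statistics.stdev(series)

# A cutoff like this (the value is arbitrary, chosen for illustration)
# keeps the proxy record and rejects the direct measurements outright.
THRESHOLD_PPM = 10

for name, series in [("chemical", chemical_readings),
                     ("ice core", ice_core_proxies)]:
    verdict = "rejected" if spread(series) > THRESHOLD_PPM else "kept"
    print(f"{name}: stdev = {spread(series):.1f} ppm -> {verdict}")
```

Whatever one thinks of the cutoff, notice that the filter never asks whether the high-variance readings are accurate; it asks only whether they are quiet.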
Now Beck admits that not all CO2 data is equal. He himself threw out data he felt was not representative because of faulty methodology. But who decides what is faulty? Who decides what is representative of the CO2 level in our atmosphere? How does one decide that one measurement (ice core) is representative, and the other (chemical readings near the surface) is not? Who gets to define "background level"? Of all the CO2 measurements out there, is the *one* source selected to represent the planet a matter of consensus as well? A vote by a panel of judges, like a beauty pageant? Is this how science is conducted now?
In science, one has to have a serious and evidence-supported justification for ignoring data. Whenever empirical data is rejected, it is a red flag. Without judging the rationales given for purposely excluding data, both the IPCC and Beck are waving it. Of course, the IPCC rejected a hell of a lot more data (90,000 direct measurements), so one could say their red flag is overwhelmingly larger than Beck's. The more data you reject, the better your reason for rejection must be.
Drive your data, change your definition
Speaking of red flags, whenever a graph changes its definitions midstream, alarms should sound loud and strong. This is especially true if the methodology changes at a pivotal point in the graph. For example, Jaworowski, a vocal critic of ice core proxies, highlights this change in the following graph.
(From: Jaworowski, Z., 2007, CO2: The greatest scientific scandal of our time, EIR Science)
Notice how after they change the definition of CO2 levels from ice core readings to actual measurements from CO2 stations, the curve rises exponentially. Yeah, that should turn on the ambulance sirens in any scientist's head. You can make the "trend" go in any direction you want simply by changing to a different set of data. It could be a defensible change. It could also be sleight of hand.
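The splice effect is easy to demonstrate. The series below are invented, not the actual ice core or station records; they only show that joining a flat series to a second series with a different baseline produces an apparent jump at the joint, regardless of any real trend:

```python
# Hypothetical sketch of a data-source splice. Both series are invented
# (ppm); neither is the real ice core or Mauna Loa record.
proxy_ppm  = [285, 285, 286, 286, 287]  # stand-in for a flat proxy series
direct_ppm = [315, 317, 320, 324, 329]  # stand-in for direct station data

# Concatenating the two series produces one "curve" with a step at the
# joint that comes from switching sources, not from either series alone.
spliced = proxy_ppm + direct_ppm
step = direct_ppm[0] - proxy_ppm[-1]
print(f"apparent jump at the splice point: {step} ppm")  # 28 ppm
```

Within each series the change is a few ppm; the largest single move in the spliced curve is the changeover itself. That is exactly why a mid-graph change of definition deserves scrutiny.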
I understand using proxies because CO2 measuring stations did not exist back then. But why use proxies when there were direct CO2 measurements during the same time period? Wouldn't direct CO2 measurements back then be more comparable to direct CO2 measurements taken today than proxies? Do they have a *really* good reason for rejecting all that data?
What is "background" CO2 anyway?
Keeling, quoting his father's pioneering work on "background" CO2, explains:
"Measurements of the concentration of atmospheric carbon dioxide extend over a period of more than a hundred years. It is characteristic of all the published data that the concentration is not constant even for locations well removed from local sources or acceptors of carbon dioxide. Recent extensive measurements over Scandinavia, reported currently in Tellus, emphasize this variability: observations vary from 280 to 380 parts per million of air. These measurements are in sharp contrast to those obtained in the present study. The total variations at desert and mountain stations near the Pacific coast of North America, 309 to 320 parts per million is nearly an order of magnitude less than for the Scandinavian data. The author is inclined to believe that this small variation is characteristic of a large portion of the earth's atmosphere, since it is relatively easier to explain the large variations in the Scandinavian data as being a result of local or regional factors than to explain in that way the uniformity over more than a thousand miles of latitude and a span of nearly a year, which has been observed near the Pacific coast."In other words, "background" CO2 is whatever source that has the least amount of variance of CO2. Why? Because the author is "inclined to believe" the smallest variation is representative of the earth's atmosphere. His definition of "background" is not based on actual atmospheric measurements showing it has very little variance. No, it is because it makes sense to him the background shouldn't vary all that much.
Keeling continues to say:
"The concept of the atmospheric background has been backed up by millions of measurements made by a community of hundreds of researchers."But he has no references for these millions of measurements (though he references other assertions in his critique). So I can't independently verify what he means by that. If the "background" definition has empirical support, this empirical data should be foremost in his argument. As it stands, it sounds like atmospheric background is a concept, widely accepted to be sure, but not very well defended. And in science, accepted and defended are two different things.
Incidentally, there are only five major CO2 measuring stations (atmospheric baseline observatories). Most of the data for mean monthly or annual CO2 levels come from the station on an active volcano (Earth's largest) called Mauna Loa, which last erupted in 1984. I assume climatologists have taken into account volcanic gases (one of which is CO2) as a potential confounder, and that this has nothing to do with the much higher readings of CO2 since they started taking direct measurements there.
So they rejected a huge amount of empirical data with a lot of variance for a proxy that has very little variance, barely climbing for centuries. Then they attached actual measurements, and CO2 levels leap. How much of it is an artifact of data exclusion and definition change?
I don't know the answer. But I shouldn't have had to ask the question.