Wednesday, November 19, 2008

The Elusive Standard Deviation

Question:
What is the standard deviation of the global mean surface temperature, per year?

The issue of global warming begins by asking the question: Is the planet warming? The answer provided by the consensus of climate scientists is a resounding, "Yes." The cardinal statistic supporting this assertion is a rise in the "global mean/average surface temperature." For example:
  • "Records from land stations and ships indicate that the global mean surface temperature warmed by between 1.0 and 1.7°F since 1850...Since the mid 1970s, the average surface temperature has warmed about 1°F."http://epa.gov/climatechange/science/recenttc.html
It stands to reason that any examination of the global warming problem has to begin here, with the "global mean/average surface temperature." We need to know how much this mean has risen, and if this rise is statistically significant.

The mean as a probability distribution
Read more here: http://en.wikipedia.org/wiki/Normal_distribution

Suppose you average the age of everyone in your household. You have three people aged 3, 14, and 43. The average (also called the "mean") age is 20 (3+14+43=60, divided by 3). Now suppose you average the age of everyone in the house next door. They have 5 people aged 18, 19, 20, 21, and 22. The average age in their house is also 20. The exact same average in two households can describe two very different situations. The fact of the matter is, the mean has limits in its ability to represent a group with one single point. One has to put that single point in the context of how spread out the data is; in statistics, that spread is called "variance." There is a huge amount of variance in your house, not so much in your neighbor's.

The smaller the variance, the more "truthful" the mean is in representing the population. You don't even have anyone in your household who is twenty years old; this mean does not "truthfully" represent your household. The average age of 20 is a more accurate representation of the population in your neighbor's house than in your house. That is because the ages in your neighbor's house are spread over only 4 years (18 to 22), while yours are spread over 40 years (3 to 43).
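
If you want to check the arithmetic yourself, here is a minimal sketch in Python. The ages are the made-up ones above; nothing here is climate data.

```python
from statistics import mean, pstdev

your_house     = [3, 14, 43]
neighbor_house = [18, 19, 20, 21, 22]

for name, ages in [("your house", your_house), ("neighbor's house", neighbor_house)]:
    # Same mean of 20 in both cases, but a very different spread around it.
    print(f"{name}: mean = {mean(ages):.1f}, SD = {pstdev(ages):.1f}")
```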

The mean, as a single point representing a population, sits at the center of a probability distribution. It represents the middle of a distribution of data shaped like a bell curve. Statisticians measure distance from that middle in standard deviations; in practice, virtually all of the data falls within three standard deviations on either side of the mean, so a bell curve effectively spans six standard deviations: three below the mean and three above it. When you go out one standard deviation from the mean, both under and over the mean, you capture 68% of the data in a typical bell curve. You have a 68% chance that the "true" mean lies within one standard deviation (SD), a 95% chance that it lies within two SDs, and a 99.7% chance it lies within three SDs (the SD is also denoted by σ, lowercase sigma).
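
A quick way to see where the 68/95/99.7 figures come from is to simulate them. This sketch just draws random numbers from a bell curve and counts how many land within one, two, and three SDs of the mean; it is an illustration only.

```python
import random

random.seed(0)
mu, sigma, n = 0.0, 1.0, 100_000
draws = [random.gauss(mu, sigma) for _ in range(n)]

for k in (1, 2, 3):
    inside = sum(1 for x in draws if abs(x - mu) <= k * sigma)
    print(f"within {k} SD: {inside / n:.1%}")   # roughly 68%, 95%, 99.7%
```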



When you compare the means of two different bell curves, the question is: what are the chances the difference is caused by random or meaningless fluctuations? The higher the variance of the two distributions, the more uncertain each "true" mean is, and the more likely the difference between the two uncertain means is random rather than significant. The quickest way to eyeball the significance of any comparison is to see if the difference is more than one SD. If the change is larger than one SD, it is likely to be significant. If the change is smaller than one SD, it is likely due to random fluctuations and errors in measurement.
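
Here is a minimal sketch of that eyeball rule in Python. It is the rule of thumb described above, not a formal significance test, and the sample numbers are invented for illustration.

```python
from statistics import mean, pstdev

def eyeball_significant(sample_a, sample_b):
    """Eyeball rule: call a difference in means 'likely significant'
    only if it is larger than one standard deviation of the data."""
    diff = abs(mean(sample_a) - mean(sample_b))
    sd = max(pstdev(sample_a), pstdev(sample_b))   # use the wider spread, to be conservative
    return diff > sd

a = [18, 19, 20, 21, 22]                              # made-up numbers, mean 20, SD ~1.4
print(eyeball_significant(a, [19, 20, 21, 22, 23]))   # False: shift of 1 is less than one SD
print(eyeball_significant(a, [23, 24, 25, 26, 27]))   # True:  shift of 5 is more than one SD
```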

Where's the mean? Where's the SD?

Now that we've gotten elementary statistics out of the way, let's look at that mean surface temperature again. The rise in the global mean surface temperature is oft cited and well known. But when you look at the graphs referenced by discussions of mean temperatures, you don't see any numbers for the means. You see a horizontal line labeled "0.00° C" and vertical bars reaching fractions of a degree below or above the zero. (For an example, look here.) One assumes that the bars represent the global mean temperatures, and they are going up. But upon closer examination, the bars represent not the global mean temperatures but "global temperature anomalies," or departures from the Big Zero in the middle.

How do you evaluate the mean, if you don't know what the mean is or what its variance looks like? What is the absolute value of the global mean surface temperature, and what is its standard deviation? I was able to find this table showing the absolute mean temperatures per year from 1880 to 2007. In addition, this GISS site states:
"For the global mean, the most trusted models produce a value of roughly 14 Celsius, i.e. 57.2 F, but it may easily be anywhere between 56 and 58 F and regionally, let alone locally, the situation is even worse."
So we can estimate the global mean to hover around 14° C. But what do these average temperatures represent without knowing their standard deviations?

Here is what I mean. A table that shows this:
1880: 13.88° C ± 10° C
2007: 14.73° C ± 10° C

reads very differently from a table that shows this:
1880: 13.88° C ± 0.25° C
2007: 14.73° C ± 0.25° C

Now given the fact that temperatures across the planet have a huge variance over the year, it is impossible that the standard deviation would be less than 1° C. (With an SD of 1° C, three standard deviations on either side of 14° C would put essentially the entire global temperature distribution between 11° C and 17° C, which we all know is patently false.) What is the likely ballpark of the standard deviation?

The hottest temperature on record in Canada is 45° C (113° F), and the coldest on record in Africa is -24° C (-11° F). It is not unreasonable to estimate that the bulk of the world's temperatures for the year falls roughly in this range. That range gives us roughly 30° C below and 30° C above the mean of 14° C; spreading it across three standard deviations on each side puts the SD in the ballpark of 10° C. (If the spread is actually wider, the SD would be even larger.)
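
Here is the same back-of-the-envelope reasoning written out, so the arithmetic is explicit. The temperature range is the rough assumption above, not a published figure.

```python
# Back-of-the-envelope version of the ballpark estimate above.
# Assumption (rough, not a published figure): the bulk of the world's
# annual temperatures falls roughly between -24 C and 45 C.
low, high = -24.0, 45.0

# If essentially all of the data sits within three SDs on either side of the
# mean, the full spread covers about six standard deviations.
sd_estimate = (high - low) / 6
print(f"estimated SD: roughly {sd_estimate:.0f} C")   # ~12 C, i.e. on the order of 10 C
```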

If that is the case, why would a change from 13.88° C ± 10° C to 14.73° C ± 10° C be considered significant? The difference is well within the margin of error. Why would climate scientists make such a big to-do about this fraction of a degree increase, in the context of the huge global variance? And why can't we find the exact SD? Surely it has been calculated. (You can google it until the cows come home, but you won't find that SD.)
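
In the same spirit, here is the comparison from the table above in numbers. The ±10° C figure is the rough ballpark estimate from the previous sketch, not a published value.

```python
# Comparing the change in the example table against the ballpark SD (an
# assumption carried over from the sketch above, not a published value).
mean_1880, mean_2007, sd_ballpark = 13.88, 14.73, 10.0

change = mean_2007 - mean_1880    # 0.85 C
print(f"change of {change:.2f} C against a spread of +/- {sd_ballpark} C")
print("larger than one SD" if abs(change) > sd_ballpark else "well within one SD")
```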


Who is the Big Zero?

It turns out that there is no such thing as the "global mean surface temperature" or its SD as statistical entities. How, then, do they know the mean is rising, if it doesn't exist? What are they comparing the "anomalies" to? Who is this Big Zero in all the graphs and data tables?

I was naive and ignorant enough to assume that climate scientists take temperature readings from weather stations all over the earth over the course of a year and average them into one temperature, then compare that average from year to year to observe a rising trend in this mean temperature. Not.

This is what really happens. Climate scientists take readings from weather stations and feed the data into a computer model that adjusts for all sorts of variables, including number of wet days, cloud cover, sunshine, diurnal temperature range, etc. Using computer modeling, they divide the world into a grid of 5° x 5° boxes, fill the boxes with known data, and interpolate the unknown data. They run the model for a while and come up with a single mean for a 30-year period, usually 1961-1990. This mean is called a climatology. The GISS site discusses the Elusive Absolute SATs (Surface Air Temperatures):
Q. If SATs cannot be measured, how are SAT maps created?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a 'climatology') hopefully represents a typical map for the particular month or day of the year.
There are differing methods and time periods used for modeling climatologies, resulting in different climatologies. Climate scientists pick the climatology most appropriate for the purpose and compare their annual mean temperatures (also calculated by computer models) to it. The climatology is the absolute standard by which all other temperature calculations are measured; it is the Big Zero. All annual temperatures are evaluated in terms of either hotter or colder than the climatology. The climatology itself does not have a standard deviation, because it is not a straightforward average but an "adjusted" figure, the result of a very educated computer guess. Scientists estimate the margin of error of these climatologies to be exceptionally small. The Climatic Research Unit (CRU) explains here.
How accurate are the hemispheric and global averages?
Annual values are approximately accurate to +/- 0.05°C (two standard errors) for the period since 1951. They are about four times as uncertain during the 1850s, with the accuracy improving gradually between 1860 and 1950 except for temporary deteriorations during data-sparse, wartime intervals. Estimating accuracy is far from a trivial task as the individual grid-boxes are not independent of each other and the accuracy of each grid-box time series varies through time (although the variance adjustment has reduced this influence to a large extent). The issue is discussed extensively by Folland et al. (2001a,b) and Jones et al. (1997). Both Folland et al. (2001a,b) references extend discussion to the estimate of accuracy of trends in the global and hemispheric series, including the additional uncertainties related to homogeneity corrections.
Why do climate scientists use climatologies, instead of a straight-up means? The same CRU site elaborates.
Why are the temperatures expressed as anomalies from 1961-90?
Stations on land are at different elevations, and different countries estimate average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90). For stations to be used, an estimate of the base period average must be calculated. Because many stations do not have complete records for the 1961-90 period several methods have been developed to estimate 1961-90 averages from neighbouring records or using other sources of data. Over the oceans, where observations are generally made from mobile platforms, it is impossible to assemble long series of actual temperatures for fixed points. However it is possible to interpolate historical data to create spatially complete reference climatologies (averages for 1961-90) so that individual observations can be compared with a local normal for the given day of the year.
Why do they talk about the mean surface temperature, if the mean doesn't exist? As best I can make out, the mean is a theoretical estimate (the climatology plus the anomaly) that is assumed to be a close correlate of global anomaly trends. If the anomalies go up, it is taken as undisputed that the mean has also gone up.
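
To make the bookkeeping concrete, here is a toy sketch in Python: work out a 1961-90 base-period average for one station, express each year as an anomaly from it, and recover an "absolute" value only by adding the climatology back on. The station and its numbers are invented, and the real CRU/GISS products involve gridding, interpolation, and many adjustments this sketch ignores.

```python
# Toy illustration of anomaly bookkeeping (one invented station, made-up numbers;
# real products grid, interpolate, and adjust the data first).

# Annual mean temperatures (degrees C) for a hypothetical station, by year.
station = {1961: 13.9, 1970: 14.0, 1980: 14.1, 1990: 14.2, 2007: 14.7}

# 1. The "climatology": the average over the 1961-1990 base period.
base_years = [y for y in station if 1961 <= y <= 1990]
climatology = sum(station[y] for y in base_years) / len(base_years)

# 2. Each year is reported as an anomaly: its departure from that base average.
anomalies = {y: t - climatology for y, t in station.items()}

# 3. An "absolute" mean only reappears by adding the climatology back on.
for year, anomaly in sorted(anomalies.items()):
    print(f"{year}: anomaly {anomaly:+.2f} C -> absolute {climatology + anomaly:.2f} C")
```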

So is the rise in global mean surface temperature significant?

It depends. Not on objective data, mathematical rigor, and the scientific method, but on personal values. Do you trust climate scientists and their computer modeling or not? It is a funny way to do science, because when it comes down to it, it is a matter of belief. Do you believe they have done a good job in "adjusting" all the variables in their computer models? Do you believe they have both the intellectual competence and the professional integrity to have factored in all the relevant variables accurately to get the "truth"?

If you are comparing computer-adjusted data with computer-adjusted data, how do you know if the difference between them is significant? You have to trust the person doing the adjusting.

Please don't get me wrong. I am not casting aspersions on climate scientists at all. I am describing an inherent subjectivity in all endeavors entirely dependent on computer modeling. What comes out is simply a function of what goes in. You can program the computer to spit out whatever number you want, and the decision of what goes in is ultimately subjective. Unlike other sciences, where the methodology interacts with reality and you get the results you get whether you like it or not, computers do not interact with the real world. The computer model does not get feedback from reality, only from the programmer. There is no way to cut the subjective input and values of the programmer out of the process.

The only check and balance that exists in a field built entirely on computer modeling is the community of scientists and their subjective approval of one another's programming. It is no wonder that "consensus" is used so much to describe climate change.

-----------------------------
Links for further reading:

Steve McIntyre's Climate Audit
Mathematician who "audits" climate modeling
20 Questions Statisticians Should Ask About Climate Change (pdf)
by Edward J. Wegman, statistician, George Mason University
William M. Briggs, Statistician: Blogs on Global Warming
Letter to Call for Review of the IPCC
by Vincent Gray, climate scientist and former IPCC Reviewer

3 comments:

  1. Helen, hope I've made it over from CA! Would be interested in your comments.

    I seem to have carved a niche over at CA as the resident historian so I tend to view things on a much longer time scale than most climate modellers and don't have the same number of theories.

    The reason I didn't want to post this at CA is that Steve M asked me not to, as it refers to the work of Ernst Beck, who is a taboo subject over there.

    The following is an attempt to put climate change into a broader historic context. As most sites have many non experts who are often confused by terminology it has been written in a straightforward manner, so apologies in advance to the experienced.

    Background

    Put simply, the official IPCC view is that current temperatures are ‘unprecedented’ and this has been caused by man-made co2 emissions from burning fossil fuels adding to the existing levels of ‘natural’ emissions. This has disrupted atmospheric ‘equilibrium’ (whereby co2 has previously been absorbed into natural ‘sinks’ such as oceans) and subsequently created a dramatic rise in atmospheric co2 concentrations. These range from 280ppm (parts per million) before 1750 (the start of the industrial age) as measured by ice core samples, through a 1900 figure of 295ppm (according to the work of GS Callandar) and 315ppm in 1958 as measured by Charles Keeling at Mauna Loa, to today’s reading of 380ppm, thereby ‘proving’ the relentless rise of warming man-made co2.

    The following graph compares known historic temperatures with known levels of human co2 emissions and consists of information from two separate data sets. Dr Mann has spent 15 years working on his ‘hockey sticks’ and the ‘spaghetti derivatives’ so readers should understand this is very much a first try!

    Graph 1 (temperatures in red) is the one chosen as our base line. It shows Hadley CET (Central England Temperatures) dating from 1660 to the current date (in degrees C). This is the longest temperature series in the world and covers much of what we know as the Little Ice Age, approximately 1350 to 1880. The figures have not been adjusted or smoothed, so they show actual peaks and troughs very well. Whilst CET is pretty good, the four stations represented within it do change and we would estimate a UHI (Urban Heat Island) effect over the last fifty years of at least 0.5C. However the graph has NOT been adjusted for this probable bias as it makes it easier to spot potentially rogue temperatures from other series. Consequently we consider this to be our benchmark.

    The blue line in the bottom right hand corner shows the actual CDIAC/IPCC total emissions of co2 by humans since 1750. Any man-made emissions are said to go straight to the ‘bottom line’ as an increase in concentration. Because of this immediate cause and effect we have therefore translated these additional emissions directly into an equivalent ppm amount. We had as guideline reference points the known ppm from today, back to 1958 when Charles Keeling first took his measurements. N.B. Natural emissions are annually 20 times greater than man's.

    After plotting all the known fixed points (please refer to graph) it appeared to show that either;
    1. Co2 has no relationship whatsoever to temperature; it can be equally as warm at the 280ppm pre-industrial levels as it can at today’s 380ppm levels.
    2. Alternatively, the estimate of 280ppm taken from ice cores is false and other co2 peaks and troughs need to be factored in to create any sort of co2/temperature relation.
    3. Alternatively, co2 lags temperature rise by up to 800 years (as suggested in ice cores), so the current rise in temperatures is a response to the Medieval Warm Period, not the modern warm period.
    It can also be seen that temperature rises appear to predate co2 increases and there were various times even in the Little Ice age when temperatures were as warm as today. Hadley CET are said to be ‘indicative’ of the Northern Hemisphere.

    http://cadenzapress.co.uk/download/mencken.xls

    It seemed worth investigating proposition 2 further as the graph seems to cry out for additional co2 spikes to be inserted in order to provide some correlation with fluctuating temperatures.

    Consequently the work of Beck was inspected in considerable detail (Beck believes there are many reliable co2 readings prior to 1958 and that these demonstrate much greater variability than the ice cores suggest). Some of Beck’s historic measurements were factored in, and this appeared to show some interesting correlation. There has been much criticism of Beck, and his claims have been somewhat derided by warmists. Consequently a detailed history of pre-1958 co2 measurements was also researched, in the context of the widespread taking of measurements from the period 1820 onwards. These were needed for scientific and practical reasons, for example within mines, hospitals and workplaces. The British Factories Act of 1889 set a limit of 900ppm inside cotton factories.

    From this research it is difficult to come to any conclusion other than that very many high-quality scientists, some Nobel winners, from Victorian times onwards were perfectly capable of taking very accurate measurements which showed variations from around 280 to 400ppm or so. Of course, if this interpretation is accepted it means the ice core measurements are incorrect.

    It is my intention to work on other graphs that more accurately examine Beck’s readings, whilst also trying to put co2 into its broader context, for example its insignificant numerical proportions compared to other greenhouse gases.

  2. Hello Helen

    Nice clear site!

    You said;
    "It seems to me there is a lot of speculation and assumption on both temperatures and carbon dioxide levels the further back we go."

    To be honest I have increasingly come to the view that many climate scientists make things up as they go along!

    The Hadley information I posted is completely factual however and the co2 data comes from Cdiac/IPCC so it is what they use in their reports.

    I'm just observing that the current high co2 levels obviously could have had no impact on historically high temperatures. Presumably someone must have put these two key sets of data together before, but I couldn't find it, so I had to do it myself.

    I am in contact with Beck and think he is possibly guilty of a little cherry picking to prove his point but there is probably something in it. I don't believe the ice core measurements!

    GS Callandar was also guilty of cherry picking when he first came up with the idea of AGW back in 1938 and had no shred of evidence to back up his hunch. He in turn greatly influenced Charles Keeling in 1955, who was young, inexperienced, newly qualified, had no idea what the figures meant, and believed what he was told: that the co2 figures were 285ppm in 1900. So when Keeling recorded 315 in 1957, he added 2 and 2 and came to 5!

    Getting back to global temperatures, you might be interested in the following graph as well; it is related to Zurich temperatures and is on the same template as Hadley. Zurich is another of those few places which have uninterrupted records going back a long way. As you can see, it very closely mirrors Hadley until the last 50 years, when it displays classic UHI effects.

    Not surprising, as Zurich has grown fourfold since then and engulfed the reporting station.

    http://cadenzapress.co.uk/download/combined_mencken.xls

    This is the weather station in Zurich

    http://weather.gladstonefamily.net/site/06660

    that provided the data. You can scroll out to see the way the station has changed; back in the 70's this was apparently a completely rural area. This sort of thing makes me very suspicious of the value of 'global temperatures'.

    I will let you know when I have the percentage of greenhouse gases graph ready.

    I post quite a lot on http://www.harmlessky.org; it is quite a small group but you would be welcome. We tend to be much more down to earth and have far fewer theories than CA!
    (It is the anthropogenic global warming debate, usually accessed from the right hand bar. It now runs to 19 pages; it was started off by a UK newspaper and was switched to HS as it was so popular.)

    TonyB

  3. Helen,

    I'm a real research scientist. Yes, I do it for a living. Because I work for the government, I am usually constrained in expressing any opinion related to CO2 emissions and global climate change. I work for the executive branch, and their policy on this matter was best summarized by the Director of the lab I work for in an all employees meeting a few years back. “I don’t care if CO2 based global warming is a scientific reality. It is a political reality, and this lab will focus its research efforts in this area!”

    Officially, I am not allowed to question the validity of all this global warming stuff. That’s right, your government researchers are not allowed to pursue the truth (your tax dollars at work). Carbon sequestration is big business these days, and it’s best not to get in their way.

    Normally, I don’t bother responding to blogs and such, but you seem to actually understand science and statistics! You don’t see that too much in global warming discussions, so I felt compelled to comment. My comments are as follows:

    1) One or two degrees Fahrenheit in a hundred years? Are you kidding me?! Science and statistics aside, who’s going to buy that? Just sit back and think about it for just a second. Can you tell me the average temperature in your town at any given moment to within a degree? I sure as hell can’t. Now you’re going to tell me that some guy at NASA or a university can use his fancy-ass numerical simulation to determine the average temperature of the whole damn world over a hundred years to within one degree?! And you’re surprised it’s a con?

    2) I’d appreciate it if you’d quit using the term “climate scientist.” Climatologists don’t do any experiments to test hypotheses, and therefore don’t use scientific method and don’t really qualify as scientists. Of course, I’m just as guilty of chasing fear bucks as your local climatologist (I’m ashamed to admit), but at least I do some experiments.

    3) You don’t have to use science to know that something is wrong with our world, and we’re either the cause, or making things worse. When we dump tons of waste heat, mercury, NOX, SOX, H2S, CO, metal chlorides, fluorides, unburned hydrocarbons, particles and an endless list of other toxic crap into our environment, does it make sense to spend hundreds of billions of our tax dollars on CO2 sequestration?

    Thanks.

    Fluffy, Ph.D.
