# Statistical Analysis Projects Future Temperatures In North America

Was it Benjamin Disraeli who said “There are three kinds of lies… “?

For the first time, researchers have been able to combine different climate models using spatial statistics to project future seasonal temperature changes in regions across North America.

They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.

The analysis, developed by statisticians at Ohio State University, examines groups of regional climate models, finds the commonalities between them, and determines how much weight each individual climate projection should get in a consensus climate estimate.
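The weighting idea described above can be illustrated with a minimal sketch. This is not the OSU method itself, just the common precision-weighting scheme: each model's projection is weighted by the inverse of its variance, so more certain models count more toward the consensus. All numbers are invented for illustration.

```python
# Hypothetical temperature-change projections (deg C) from three
# climate models, each with its own standard error. Values invented.
projections = [2.1, 2.8, 2.4]
std_errors = [0.5, 0.9, 0.4]

# Precision weighting: weight = 1 / variance, normalized to sum to 1,
# so the most certain model contributes most to the consensus.
precisions = [1.0 / s**2 for s in std_errors]
total = sum(precisions)
weights = [p / total for p in precisions]

consensus = sum(w * x for w, x in zip(weights, projections))
consensus_se = (1.0 / total) ** 0.5  # uncertainty of the combined estimate

print(round(consensus, 2), round(consensus_se, 2))
```

Note that the consensus always lands inside the range of the individual projections, and its standard error is smaller than that of any single model.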

Through maps on the statisticians’ website, people can see how their own region’s temperature will likely change by 2070 – overall, and for individual seasons of the year.

Given the complexity and variety of climate models produced by different research groups around the world, there is a need for a tool that can analyze groups of them together, explained Noel Cressie, professor of statistics and director of Ohio State’s Program in Spatial Statistics and Environmental Statistics.

OSU

### 7 Responses to Statistical Analysis Projects Future Temperatures In North America

1. Eric Baumholer

Wow, this is amazing. The computer ‘determines how much weight each individual climate projection should get in a consensus climate estimate.’

The problem with climatologists is that the consensus has always been wrong. Finally we have a computer which will arbitrate amongst the software written by the climatologists and tell us what the consensus really ought to be.

Oh, sigh of relief. Finally we’ll have a neutral machine/software arbiter which will come to a conclusion independent of what climatologists tell their dutiful machines and software to do.

Uhh, wait. Are they having 10,000 monkeys run the machine and write the software?

All the statistics in the world are unable to forecast the outcome of a single roll of a die – with any statistically significant accuracy.
The purpose of statistics is to analyze data *ex post facto*. For forecasts see your local Spiritualist or Prophet.

• Westchester Bill

Insurance companies in the United States have been going strong since the 19th century. Actuarial tables are projections based on a model of mortality rates. Statistics are OK. The mistake in the subject analysis is in using models to verify each other. Actuarial models are compared to actual mortality rates. At some point, climate models that predict warming from CO2 will have to reconcile the recent decade of not-warming, perhaps cooling, with the general increase in CO2.

• Roy Perez

It’s important to distinguish between descriptive statistics and inferential statistics. The SSES Web Project at Ohio State is using Bayesian inference, which derives a posterior probability from a prior probability and a likelihood function. In particular, the project’s Bayesian framework as applied to RCM (regional climate model) and GCM (global climate model) data is termed a “hierarchical data-process-parameter model”; you can review the SSES tutorial on Bayesian analysis and hierarchical modeling here.
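The prior-plus-likelihood update mentioned above can be shown in its simplest form. This is an illustrative one-dimensional normal-normal update, not the SSES code: a prior belief about a temperature-change parameter is combined with a single hypothetical model projection, and the posterior precision is the sum of the two precisions. All values are invented.

```python
# Hypothetical prior belief and observation (deg C change); invented values.
prior_mean, prior_sd = 2.0, 1.0   # prior: what we believed beforehand
obs_mean, obs_sd = 2.6, 0.5       # likelihood: one model's projection

# Conjugate normal-normal update: precisions (1/variance) add, and the
# posterior mean is a precision-weighted blend of prior and observation.
prior_prec = 1.0 / prior_sd**2
obs_prec = 1.0 / obs_sd**2

post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec
post_sd = post_prec ** -0.5

print(round(post_mean, 2), round(post_sd, 3))
```

The posterior mean sits between the prior mean and the observation, pulled toward whichever is more precise, and the posterior standard deviation is smaller than either input's.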

3. This should be interesting. From what little I remember of college chemistry, to determine the error range of a result you sum all the error ranges of the parts that went into that result. So, to determine the error range of this result they will sum the errors of all the models involved in finding the result. This pretty much guarantees an answer that describes current conditions, plus or minus 10 degrees or so.
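For the record, straight summation of error ranges gives only a worst-case bound; the standard rule for combining *independent* uncertainties is to add them in quadrature (square root of the sum of squares), which grows more slowly. A quick sketch with invented numbers:

```python
# Sketch: two ways of combining uncertainties from independent sources.
# Straight summation gives the worst-case bound; quadrature is the
# usual rule for independent errors. Numbers are invented.
errors = [1.0, 0.8, 0.5]

straight_sum = sum(errors)                     # worst-case bound
quadrature = sum(e**2 for e in errors) ** 0.5  # independent-error rule

print(straight_sum, round(quadrature, 3))
```

With these numbers the quadrature result (about 1.37) is well under the straight sum (2.3), so combined error ranges are usually tighter than a naive sum suggests.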

4. DaleinND

“They performed advanced statistical analysis on two different North American regional climate models …”

GIGO

• Gamecock

“They performed advanced statistical analysis on two different North American regional climate models …”

. . . then picked the result they liked.