Harvard Climate Statistician: Tree-ring temperature proxies ‘carry significant uncertainties’

No doubt Martin Tingley will soon be trashed by Michael Mann et al. The countdown begins… Ten… nine… eight…

From Climate Central:

The problem here is that people haven’t been taking comprehensive readings of temperatures in the high Arctic for very long, so they rely on proxies such as tree-ring thickness to stand in for temperature. And those proxies, said lead author Martin Tingley, an expert in climate statistics at Harvard, “carry significant uncertainties.”

9 thoughts on “Harvard Climate Statistician: Tree-ring temperature proxies ‘carry significant uncertainties’”

  1. A really excellent comment.

    It is my understanding that goodness-of-fit statistics, as demanded by Mr. McIntyre, are near zero for Mann’s work and the work of another fellow in Mann’s clique.

  2. I made this comment previously at Climate Audit re the Mann and Marcott studies grafting on instrumental temperature data with no calibration between the two. Such calibration has indeed been done using isotope values in tree rings, which are a more direct measure of temperature than ring width and so avoid, at least to a large degree, other influencing factors such as water supply and nitrogen. Her results confirm what we know from historical records: that the Roman Warm Period was ca. 1.5 °C warmer than the end of the 20th century:

    In fact the calibration of tree rings with temperature was done long ago by Libby and Pandolfi – Libby was a real scientist who developed the use of isotope measurements for dating tree rings. In one of their calibration studies they referenced the known dates of rings in reference German oaks against measurements of isotopes of oxygen, hydrogen and carbon, and against the temperature records from Basle and the CET back to 1658.
    http://www.pnas.org/content/71/6/2482.abstract
    From this and other studies they concluded that NH temperature fell by 1.5 °C over the last 1,800 years up to the 1970s – they also dated tree rings from the USA (redwood and bristlecone pines) and Japan (cedar).

    http://www.nature.com/nature/journal/v261/n5558/abs/261284a0.html
    http://onlinelibrary.wiley.com/doi/10.1029/JC081i036p06377/abstract
    http://www.nature.com/nature/journal/v266/n5601/abs/266415a0.html
    Their work invalidates the claims of Mann and Marcott concerning temperature spikes during the “recent” past. Since Libby also tested bristlecone pines – finding their use accurate when dated by isotopic methods cross-referenced to ice cores – it would be interesting to use her data to recalibrate the Mann99 and similar graphics that “disappeared” the MWP and LIA.

    They also accurately predicted the subsequent warming from the 1970s to 2000 and the current cooling cycle, which they forecast will result in a new mini ice age:
    http://www.climateconversation.wordshine.co.nz/2012/02/the-globe-is-cooling-what-it-does-when-its-not-warming/

    Note that, in contrast to the data miners who populate much of what is claimed to be climate science, Libby was a real scientist – a chemist who worked with Fermi on the Manhattan Project and was a pioneer in isotope studies.

  3. Then there is a basic issue:

    It’s the same with summer average temperatures in the Arctic. “From a statistics perspective,” he said, “it’s very different to ask ‘is this summer warmer than the summer of 1473?’ and ‘is this summer warmer than all summers in the past 600 years?’”

    With precise records, you could just look it up, but the uncertainties in climate proxies make it much more difficult — and without going into the virtues and technical details of different statistical techniques, it turns out that Bayesian analysis is ideally suited to answer that broader question.
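The two questions quoted above can be made concrete with a toy Monte Carlo sketch. All numbers here are made up for illustration – this is not Tingley and Huybers’ actual model – but it shows why “warmer than all 600 summers” is a far stricter event than “warmer than the summer of 1473” once proxy uncertainty is in play:

```python
import random

random.seed(0)

# Toy reconstruction: 600 past summers, each known only through a
# proxy with noise (invented means, 0.5 C proxy standard deviation).
past_means = [random.gauss(0.0, 0.3) for _ in range(600)]
proxy_sd = 0.5
recent = 1.2  # recent summer anomaly, assumed precisely measured

def prob_warmer_than_one(year_idx, n=10000):
    """P(recent > one particular past summer), sampling proxy noise."""
    hits = sum(recent > random.gauss(past_means[year_idx], proxy_sd)
               for _ in range(n))
    return hits / n

def prob_warmer_than_all(n=1000):
    """P(recent > every one of the 600 past summers) -- much stricter."""
    hits = 0
    for _ in range(n):
        if all(recent > random.gauss(m, proxy_sd) for m in past_means):
            hits += 1
    return hits / n

p_one = prob_warmer_than_one(0)   # close to 1 for these toy numbers
p_all = prob_warmer_than_all()    # far smaller: 600 chances to lose
assert p_all <= p_one
```

Beating one uncertain year is easy; beating all 600 at once is not, which is why the broader claim demands the heavier statistical machinery.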

    The answer, write Tingley and his co-author Peter Huybers, also of Harvard: “…we show that the magnitude and frequency of recent warm temperature extremes at high northern latitudes are unprecedented in the past 600 years.”

    So, there is something special about the fact that we are at the end of the warming period that started at the bottom of the Little Ice Age, about where their study starts?!?!?!

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    Wonder why they didn’t extend it back another 400 years to include at least part of the Medieval Warm Period?? God forbid they go back to the Roman Optimum!!!!

    As Doug points out, they even did a poor job of their slam dunk!!!

  4. Procedural Uncertainty vs Representational Uncertainty:

    Mann and Marcott and others follow the IPCC conception of uncertainty: the true uncertainty is taken to arise from the way you manipulate the basic data. 95% certainty means that the outcome occurs 95% of the time under the specified set-up. That is, with the uncertainties put into the model, the same result occurs when the model is run with all variables varying within the specified ranges.

    Representational certainty is the degree to which the program, model or proxy recreates the physical occurrence. These are not the same thing at all.

    Mann et al. use proxies which, they say, with statistical modification give a certain result. But what is the level of correlation between that result and actuality – i.e. do the pattern and magnitude give you the values (temperature and time, in this case) that would have been measured at those times and places? That is the certainty that interests us, and the one the average citizen and policy writer thinks he is being told. But he is not.

    The problem all the proxy studies run into is the step from the proxy to the thing the proxy is supposed to mimic. It sounds simple: if the proxy doesn’t mimic the object of interest, it has little direct value. It may still have some subjective value, and if it is the only thing we have, then we use it – while identifying up front its limitations and the uncertainty this brings.

    We aren’t arguing that the math is wrong. We are arguing that whatever the math is, and however certain we are that the answer we get is what we would get if we studied the matter more or in different ways, what the math tells us carries an uncertainty – often significant – with regard to what we are really interested in. The question with regard to Mann, then, is not whether the tree rings tell us something, or whether they would tell us the same thing if we took different trees in different areas (though in Mann’s case, the uniqueness of the Yamal samples makes or breaks his case), but what the tree-ring data tell us about the temperature of the planet while the tree rings were forming.

    Mann, Marcott and the IPCC do not address this issue when they speak of certainties.

    We have decent instrumental data back to 1880 at least. Various agencies have worked it and come to similar conclusions about temperature changes, though the GISTemp data, in particular, has representational (not procedural!) problems with infilling areas of low data density, such as the Arctic and South America. One would like to see some of these proxies, tree rings in particular, closely studied for this period. Then we could correlate the proxy response to the instrumental record – we’d be able to identify the Representational Certainty to lay on top of the Procedural Certainty.
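Such a proxy-to-instrumental correlation check is straightforward in principle. A minimal sketch, using synthetic data (invented anomalies – no real proxy or station series is used here), with plain Pearson correlation as a rough quantitative stand-in for representational certainty:

```python
import math
import random

random.seed(1)

# Synthetic "instrumental" record, 1880-2010 (made-up anomalies),
# and a synthetic "proxy" that tracks it imperfectly.
years = list(range(1880, 2011))
instrumental = [0.005 * (y - 1880) + random.gauss(0, 0.1) for y in years]
proxy = [t + random.gauss(0, 0.15) for t in instrumental]  # noisy stand-in

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(proxy, instrumental)
# r**2 is the fraction of instrumental variance the proxy "explains":
# low r over the overlap period would mean low representational certainty,
# regardless of how tight the procedural error bars are.
```

The point of the exercise is that r is measured against reality (the instrumental record), not against the internal consistency of the reconstruction procedure.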

    But we do not. The reason Mann and Marcott spliced the instrumental data onto their proxy data is that they cannot claim that similar proxy data laid down by nature since 1880 shows what the instrumental data shows. Since trees grow every day when conditions are favourable, you’d think this odd and ask what the problem is. Fine sediments with all their temperature-sensitive chemistry settle out of the water each time sediments heavier than the water are introduced into the column. For some reason, close studies in special locations where either tree rings or quiet, sediment-rich streams are to be found are not done or, at any rate, not reported. Mann, with all the attacks he has received, has not attempted a detailed tree-ring to instrumental-period correlation study that would get his detractors off his back.

    You have to wonder why.

    The 73 proxy records have been graphically displayed at WUWT. My old mentors would have suggested most were “all over the map”. From these records, however, mathematical machinations produced a smooth curve: the purity of Procedural Certainty showing its head. But when you look from one to another, and consider that each is individually supposed to tell you something meaningful about the planet at large, you wonder how that can be when each, at any given time, tells you a different story – not just a different temperature of the moment, but a different answer as to whether the temperature was going up or down. Is it that each proxy in its own area reacts differently depending on other conditions (like trees growing well one year because it is wet but cool, and another because it is warm but dry), so that the same temperature condition shows up differently? Or is the temperature in each location independent of the others and a function of a distant cause such as maritime conditions or wind patterns?

    Further, Representational Certainty applies to the group: if we were to take the 73 proxy locations today and use only their temperature records of the past 130 years, would they reproduce the global temperature record? If you were to take the number of proxy data points, then take an equal number of instrumental data points – compressed into a similar time distribution – over the same 130-year period, would you get an end result that looked like the GISTemp or HadCRUT4 temperature profile of the planet? In other words, use a portion of the instrumental-period data as a proxy for the whole, with a distribution, density and manipulation similar to the proxies.
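That subsampling test can itself be sketched in a few lines. A toy version on synthetic data (invented anomalies, not GISTemp or HadCRUT4): thin a dense series down to proxy-like density, smooth it the way reconstructions do, and compare the overall change it implies against the full record:

```python
import random

random.seed(2)

# Dense synthetic "instrumental" series: 130 yearly made-up anomalies.
full = [0.005 * i + random.gauss(0, 0.1) for i in range(130)]

# Mimic sparse proxy sampling: keep every 10th value (13 points),
# a crude stand-in for proxy time resolution.
sparse = full[::10]

def smooth(series, k=3):
    """Simple centered moving average (windows truncated at the edges)."""
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - k // 2), min(len(series), i + k // 2 + 1)
        window = series[lo:hi]
        out.append(sum(window) / len(window))
    return out

reconstructed = smooth(sparse)  # "proxy-style" version of the record

# Compare the overall warming implied by each version. A large gap
# between the two would mean the proxy-style treatment is losing
# real structure -- exactly the representational question at issue.
full_trend = full[-1] - full[0]
recon_trend = reconstructed[-1] - reconstructed[0]
```

Nothing here requires new field work – it only requires running the reconstruction pipeline over instrumental-era data thinned to proxy density, which is why the commenter finds its absence telling.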

    There are ways to estimate Representational Certainty. But there is little incentive to do so in the current political and financial environment of either research or government office. The likely result would only be to expose a discomforting lack of representational accuracy in climate proxies. Poor correlational accuracy means low certainty – despite what the models say.

    If it weren’t so, we’d have read about it already.

  5. This should be fun. The Climate Central article has a temperature map of the Arctic, claiming to show temperature measurements where none exist. McIntyre has already pointed out some major problems with Tingley and Huybers, who in turn have challenged the accuracy of tree ring proxies abused by Mann. Popcorn, please!
