Climategate 2.0: Revkin flirts with skepticism

It was apparently only for a moment, however.

From the Climategate 2.0 collection, Revkin questions (politely) Michael Mann’s motives in an e-mail to Tim Osborn:

this is very helpful, tim.

i look forward to talking a bit shortly.

a key question, to me, is whether this new analysis implies that there has been wishful (or at least selective) thinking in the paleo analyses done so far (mann and others)? In other words, is there any evidence in all of this that their bias against past variability is intentional?

we’re all always looking for what we want to find, to some extent, no?

The e-mail exchange follows.

date: Tue, 28 Sep 2004 08:47:04 -0400
from: Andy Revkin
subject: Re: couple things..
to: Tim Osborn

this is very helpful, tim.
i look forward to talking a bit shortly.
a key question, to me, is whether this new analysis implies that there has
been wishful (or at least selective) thinking in the paleo analyses done so
far (mann and others)? In other words, is there any evidence in all of this
that their bias against past variability is intentional?
we’re all always looking for what we want to find, to some extent, no?
At 08:10 AM 9/28/2004, you wrote:
>Dear Andy,
>
>I’d left before your email arrived yesterday, but I’ll be in the office
>most of Tuesday, till about 4.30pm here (what’s the time difference? 5
>hours maybe? that’d make it 11.30am your time).
>
>If you want a specific time, then 2pm here (is that 9am for you?) would be
>good – I’ll stay close to the phone around that time.
>
>In the meantime, some responses via email.
>
>At 01:14 28/09/2004, Andy Revkin wrote:
>>i’d appreciate your response to the following thoughts emailed by Mann.
>>particularly the spots I’ve underlined and highlighted with boldface.
>>thanks!!
>>
>>>1. This kind of analysis isn’t new…. There is a good discussion of this
>>>in the review paper by Jones and Mann…The same exact thing as what the
>>>authors describe has already been done using forced simulations
>>>(Rutherford et al, 2003, J. Climate) and long control simulations
>>>(Zorita et al, J. Climate). In these studies, the bias argued by the
>>>authors was found not to be significant. This is true because the range
>>>of low-frequency variability did not significantly exceed that present
>>>in the 20th century calibration period in those simulations, nor does it
>>>in any other simulations that have been done of the forced response of
>>>the climate over the past 1000 years (see Figure 8 of Jones and Mann, 2004).
>
>The von Storch et al. paper acknowledges some of these points and cites
>these papers in its penultimate paragraph. In our comment
>(Osborn/Briffa), we say that the size of this systematic error may be more
>pronounced in the new study than in others (or perhaps more than in the
>real world) because their model simulation exhibits lots of long-term
>climate change that the methods are unable to fully reconstruct.
>
>So what conclusions do we draw, that can be applied to reconstructions of
>real world climate?
>
>It seems that both von Storch and Mann agree that things are fine if
>pre-20th century climate did not deviate much from the range covered by
>the calibration period (often 1900-1980), because then the reconstructions
>would not be biased. And it seems that they agree that reconstruction
>methods would do poorly if the past deviations were larger.
>
>The question then is, would it be appropriate, before we begin to
>interpret our climate reconstructions, to make assumptions about what past
>climate may have been? If we assume it didn’t vary much, then our
>reconstructions may be fine. If we assume it did vary lots, then our
>reconstructions will be biased and will underestimate those
>variations. I’d prefer to make as few assumptions as possible! In
>particular, I’d prefer not to use climate model simulations as evidence
>for the amount of past climate change, because I would then be unable to
>use climate reconstructions to test the model simulations – that would be
>circular!
>
>>> The sensitivity of the model used in the present study is somewhat
>>> higher than those of other models, but this isn’t the main problem with
>>> their simulation. More problematic is the fact that the authors in the
>>> present case use a solar forcing that is about twice that used by other
>>> researchers in the field. Using this unusually large past forcing
>>> scenario, the authors obtain variations in previous centuries that are
>>> well outside the range of the modern period used to calibrate the
>>> reconstruction.
>
>More for von Storch et al. to answer. All I’d say is that the model
>sensitivity seems well within the IPCC range of reasonable values, and
>that though the solar variations may be stronger than others have used,
>the uncertainty range is so large that the variations used by von Storch
>et al. again fall within the accepted range.
>
>I’d re-iterate the point made in our comment, though, that their
>simulation is merely a reasonable case for which any valid reconstruction
>should perform adequately.
>
>>>Osborn and Briffa indeed mention that the authors’ arguments hinge on
>>>the much larger amount of low-frequency variability that is present in
>>>their simulation (as it influences the ‘redness’ of the spectrum of the
>>>model data).
>
>Indeed.
>
>>>In this regard, their conclusions would seem not to apply to the real world.
>
>Hold on! Mann is now making an assumption about the real world
>climate. If we knew the amount of long-term variability that the real
>climate showed, then we wouldn’t be needing to reconstruct it in the first
>place!
>
>In the supplementary information to the von Storch paper, they show
>similar results using a different model, and using weaker solar
>forcing. This is only mentioned briefly in their main paper, but does
>mean it is harder to simply dismiss von Storch et al.’s results as being
>dependent on one particular model.
>
>>>2. In Mann et al (1999), one of the studies the authors focus on, the
>>>method of uncertainty estimate does in fact take into account the
>>>potential loss of low-frequency variance due to the limited regression
>>>period, the very issue raised by the authors. In their accompanying
>>>commentary, Osborn and Briffa seem to be unaware of this or
>>>mischaracterize this when they state that this has not been taken into
>>>account in previous work.
>>> Mann et al (1999) examined the spectrum of the “residuals” over an
>>> older (“cross-validation”) period that was independent from the
>>> calibration period.
>
>That’s not what Mann et al. (1999) says – it says they looked at the
>“residuals” during the calibration period. Perhaps they made a mistake?
>
>Doesn’t matter, however, for two reasons (1) whether they looked at
>calibration or independent residuals, the periods of analysis would be
>relatively short and thus couldn’t tell us much about whether the
>multi-decadal to multi-centennial variations were adequately captured; and
>(2) the von Storch et al. paper implies the errors are systematic (i.e.
>reconstructed values are consistently smaller than the real values) yet
>the low-frequency errors considered by Mann et al. are random (sometimes
>too small, sometimes too large, hence the error range is applied equally
>either side of the reconstruction).
>
>On this basis, I don’t think that the bias found by von Storch has been
>adequately considered in previous work, though I accept that there may be
>scientific debate here, and things may turn out differently after further
>study.
>
>>> Where they found evidence for enhanced regression uncertainty in these
>>> residuals at the lowest (century) timescale resolved, they inflated the
>>> estimates of the uncertainties accordingly (to more than 1 degree C
>>> peak-to-peak). This inflated uncertainty, which accounts for potential
>>> low-frequency regression bias, in fact accommodates the range of
>>> potential bias shown by the authors in the present study.
>>>
>>>The conclusion in previous studies that late 20th century warmth is
>>>anomalous in a long-term context actually takes into account the
>>>expanded regression uncertainties at low-frequencies that are the
>>>subject of the present analysis. There is no inconsistency; it’s just a
>>>matter of different interpretation/spin.
>
>My previous comment applies here too – it seems to us that the errors
>previously published don’t take adequate account of the potential
>systematic underestimation of low-frequency variations. In which case (as
>we say), detection of unusual late-20th century warmth would need to be
>re-assessed. We don’t say that this re-assessment has been done yet, thus
>we don’t say that previous conclusions are necessarily
>wrong. Re-assessment of these issues is, anyway, an ongoing task that is
>done in light of all new evidence, not just this one study.
>
>>>4. It is curious that the authors focus on the results of the
>>>MBH98/MBH99 method, when in fact they demonstrate that this method
>>>performs better than the simple approaches generally used by other
>>>researchers in the field that make use of local regressions of
>>>temperature against proxy data (Bradley et al, ’93; Briffa et al, ’98;
>>>Jones et al, ’98; Crowley and Lowery, ’00; Esper et al ’02; Mann and
>>>Jones, ’03). Oddly, in this context, the authors argue for a more
>>>favorable comparison of their result to the reconstruction of Esper et
>>>al, even though this paper uses the approach the authors note as being more
>>>prone to the regression bias in question.
>>>
>>>In reality, applications of these different (local and
>>>pattern-based) approaches actually yield relatively similar past
>>>histories (see Figure 5 in Jones and Mann, ’04). Indeed, a paper in
>>>press in “Journal of Climate” by Rutherford et al (on which Osborn and
>>>Briffa are co-authors), shows that both the local regression method and
>>>pattern-based regression method yield essentially indistinguishable
>>>results when applied to the same network of proxy data.
>
>We were careful to make the point in our comment that it isn’t just the
>Mann et al. method that has this potential problem. Indeed we emphasise
>that other methods may be worse! We took care to point this out in case
>it wasn’t obvious from a cursory read of the von Storch paper.
>
>>>So, really there is nothing new here, and I’m very surprised that
>>>Science chose to publish this article. In order to believe that the
>>>results of this study have any real world implications at all, the
>>>authors would need to reconcile the extreme variability in their model
>>>simulation with the numerous other model simulations that indicate far
>>>less low-frequency variability in past centuries.
>
>As said before, I prefer not to use climate model evidence to justify the
>assumptions made in developing climate reconstructions, because then I
>can’t use the climate reconstructions to test the performance of climate
>models. That would be circular! Given that we should be testing how well
>the climate models do at simulating past climate, this would be an
>unfortunate loss!
>
>Hope that helps. Sorry if I went on a bit long.
>
>Speak to you at 9am/2pm.
>
>Cheers
>
>Tim
>
>Dr Timothy J Osborn
>Climatic Research Unit
>School of Environmental Sciences, University of East Anglia
>Norwich NR4 7TJ, UK
>
>e-mail: t.osborn@uea.ac.uk
>phone: +44 1603 592089
>fax: +44 1603 507784
>web: http://www.cru.uea.ac.uk/~timo/
>sunclock: http://www.cru.uea.ac.uk/~timo/sunclock.htm
Andrew C. Revkin, Environment Reporter, The New York Times
229 West 43d St. NY, NY 10036
Tel: 212-556-7326, Fax: 509-357-0965 (via http://www.efax.com, received as email)
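For readers who want to see the statistical point at issue concretely, the bias the emails debate (a regression calibrated on a short modern window losing amplitude when applied to the past) can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not a reproduction of any group's actual reconstruction method: a "true" temperature series with large centennial variability is recorded by a noisy proxy, the proxy is calibrated against temperature over the last 80 "years" only, and the resulting reconstruction systematically underestimates the true variability.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                      # years of synthetic "climate"
t = np.arange(n)
# "true" temperature: large low-frequency (centennial) swings plus weather noise
true_temp = 0.5 * np.sin(2 * np.pi * t / 400) + 0.1 * rng.standard_normal(n)
# proxy: a noisy linear recorder of temperature
proxy = true_temp + 0.3 * rng.standard_normal(n)

# calibrate on the last 80 "years" only (analogue of a 1900-1980 window)
cal = slice(n - 80, n)
slope, intercept = np.polyfit(proxy[cal], true_temp[cal], 1)
recon = slope * proxy + intercept

# proxy noise attenuates the regression slope below 1, so the
# reconstruction has less variance than the true series
print("slope:", slope)
print("recon std:", np.std(recon), "  true std:", np.std(true_temp))
```

The reconstruction's standard deviation comes out smaller than the true series' standard deviation: the classic "regression attenuation" effect, which is harmless if past climate stayed within the calibration range but biases the reconstruction low if past swings were larger, which is the scenario the von Storch et al. simulation explores.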
