Monumental fault in manmade global warming notion hiding in plain sight

by Russell Cook
December 24, 2011, JunkScience.com

I don’t mean promoters of the issue comically spinning failed predictions for more frequent hurricanes and warmer snow-less winters into covering any hot/cold/wet/dry extremes. Or Al Gore’s Texas-sized carbon footprint undermining demands for ours to be minuscule. Sure, the IPCC also has appearance problems as a supposedly ‘unbiased’ organization, caught red-handed with assessments authored by people in environmentalist groups, and its own “ClimateGate” scientists behaving badly doesn’t help, either.

We have an arguably more far-reaching problem – one that imperils the issue itself, and the mainstream media’s basic integrity.

This plain-sight problem is invisible when anyone accepts the issue as settled science. No hint of the problem is seen when the media moans about extreme weather and melting icecaps, while offering advice for sustainable lifestyles that use carbon-free renewable energy. Of course, no hint whatsoever is seen when armchair psychoanalysis is offered about public opinion, like when the NY Times’ David Brooks said, “…we have had a lot of information about global warming from Al Gore and many others. And, yet… support for a response to global warming has gone down.”

This monumental problem only becomes evident when we point to skeptic scientists claiming human activity is not a significant part of global warming. The immediate, predictable diatribe is, “Skeptics are few in number, don’t have published papers to their credit, and are on the payroll of big coal & oil.” The problem fades out of sight again when no one challenges those assertions.

Try asking instead, “You can prove any of that?”, and watch what happens.

If the response is that anyone defending skeptic scientists is an ignorant, mind-numbed talk radio listener / right-wing blog reader / Fox News zealot, or is a person who won’t give up their SUV to save the planet, then you see the problem plain as day. This is a sleight-of-hand shell game to ensure the public never thinks there may be legitimate scientific criticism.

A second opinion ought to be welcomed, especially if it’s good news that the little warming we do see is a natural process.

I’ll let others psychoanalyze the bizarre opposition to such news, and I’ll let the scientists explain the science and the actual number of people on each side of the issue. What I am able to do is show what the mainstream media buries, namely all the red flags surrounding accusations against skeptic scientists.

Al Gore’s 2006 movie, An Inconvenient Truth, exposed the heart of the problem, and although it didn’t start there, he faces tough questions about his role in the problem’s origins. After describing all kinds of potential climate disasters – which have thus far failed to happen – Gore spends a short stretch near the end of the movie equating skeptic scientists with tobacco industry ‘science experts’ who downplayed cigarette smoking health concerns. His comparison is quite effective: he literally spells out the words “reposition global warming as a theory rather than fact” in red letters across the screen, saying they were from a leaked memo no different than an old tobacco company’s leaked internal document, “Doubt is our product, since it is the best means of creating a controversy in the public’s mind.” Pick up a copy of his 2009 “Our Choice” book, and both sentences are spelled out in half-inch tall letters on pages 356 and 357.

There’s an enormous red flag here. A complete, in-context scan of the Brown & Williamson “Doubt is our product” memo is found on the internet within seconds, web sites quoting it link directly to the scan, and there is no question it was a top-down industry directive.

The “reposition global warming” memo literally cannot be found that way. The only web links to the otherwise incredibly hard to find Greenpeace archive scan of the memo are in my own online articles, and when astute readers look through this set of interoffice instructions for a small TV and radio campaign, it becomes abundantly obvious that the sentence has been taken out-of-context in order to portray it as the main goal of a sinister industry directive.

It gets worse. Gore’s companion book to his movie says the memo “was discovered by the Pulitzer Prize-winning reporter Ross Gelbspan”. Two problems are easily found. First (as Steve Milloy pointed out long ago), the Pulitzer organization does not recognize Gelbspan as a prize winner, and second, other book authors and reporters refer to the “reposition global warming” sentence prior to Gelbspan’s earliest mention of it in a December 1995 radio interview. Nevertheless, Gore’s 2009 “Our Choice” book again referred to him as a Pulitzer winner and said the memo was “uncovered by investigative journalist Ross Gelbspan” on page 358. Inexplicably, in his June 22 Rolling Stone article, Gore instead attributes the memo sentence to a 1991 NY Times article, which was not written by Gelbspan.

There are more red flags. The 1991 NY Times article says it received the memo in a packet provided by the Sierra Club. Yet, intensive searches through current and archive Sierra Club web pages yield not a solitary word about finding what any environmentalist would call the central ‘smoking gun’ evidence of a fossil fuel industry / skeptic scientist conspiracy.

On top of all that, many people point to Ross Gelbspan’s 1997 book, “The Heat is On”, as the first exposé of this memo sentence evidence. The other words he mentions a paragraph after it on page 34 of his book are from other memos in the packet, concerning targets of the coal industry PR campaign: “…older, less-educated men…” and “young, lower-income women”. Meanwhile, page 360 of Al Gore’s 1992 “Earth in the Balance” book says his Senate office received documents “…leaked from the National Coal Association…” which said, “People who respond most favorably to such statements are older, less-educated males from larger households, who are not typically active information-seekers… another possible target is younger, lower-income women…”

Identical words from the same memos in Gore’s Senate office as much as four years prior to Gelbspan, the man he credits with discovering them – a huge red flag if there ever was one. And not a word about this contradiction in the mainstream media. Had reporters taken just a few hours of their time to talk to a now-former employee of that coal organization – as I did just recently – they would have been told that these specific memos were a rejected proposal for the PR campaign, and were never actually implemented, thus they would not have been seen by other fossil fuel company executives. There was no industry directive to “reposition global warming”, period.

There is a sea of red flags to be analyzed, more than space allows here. Ross Gelbspan and John Passacantando, the head of the enviro-activist group Ozone Action from ’93 to 2000, claim to have obtained the “reposition global warming” memo in 1996 while jointly working to publicize it as evidence of skeptic scientists’ guilt, but they never say who gave it to them. Passacantando became the executive director of Greenpeace USA in 2000, merging Ozone Action into it, and his former co-workers now have influential positions elsewhere: Phil Radford is Greenpeace USA’s current director, Kelly Sims Gallagher is an official reviewer of IPCC reports, and Kalee Kreider is Al Gore’s spokesperson.

It is entertaining to note how Ms Kreider started working at Ozone Action in 1993 and transferred to Greenpeace in 1996, and was seen just a year later in a 1997 IPCC Regional Impacts of Climate Change Special Report, in its Annex H USA section for “Authors, Contributors, and Expert Reviewers”. Al Gore says this about her in “Our Choice”, page 411, “[she] has been of invaluable assistance in all of my climate work”. Considering she joined his current staff in 2006, and his “climate work” goes back to 1988, it might be worthwhile to ask him exactly what he meant there.

Legitimate scientific criticism could wipe out the so-called global warming crisis. What’s been the response for twenty years? Don’t debate skeptic scientists, assassinate their character – but hide the evidence proving their corruption.

The monumental fault in global warming is right there in plain sight, and the mainstream media either can’t spot it or offers strangely vague answers when I try to alert them about it. This issue showcases a genuine divide of inexcusable proportions: We have 1% of the media elite who have committed journalistic malfeasance for over twenty years, and we are the 99% who no longer trust them! Expose this problem for all to see, and not only do we knock down the politics of global warming, we also potentially put news reporting back to the way it should be done: telling the truth, the whole truth, and nothing but the truth.

Russell Cook’s collection of writings on this issue can be seen at “The ’96-to-present smear of skeptic scientists,” and you can follow him on Twitter at QuestionAGW.


12 responses to “Monumental fault in manmade global warming notion hiding in plain sight”

  1. We are witness to history. The greatest scam ever in world history. It began falling apart a few years ago and the email leaks were the frosting on the cake. But with money and power this huge at stake it MUST be resuscitated. Politicians can use this to raise taxes, set policies, build kingdoms and collapse entire countries. The lies, the conspiracies, the personal assassinations have just begun. There is too much at stake.

  2. An excellent article. I only have one question: since when does the mainstream media have integrity?

  3. Russell Cook’s otherwise excellent article didn’t point out that ClimateGate I and ClimateGate II, which should have brought the frenzy to a halt, were ignored by the media.

    • Indeed, as the commenter “mamapajamas” points out below, I could go on at great length there. Not for lack of trying directly on my part, when it comes to telling the MSM about ClimateGate – see my Dec 2009 piece “The Lack of Climate Skeptics on PBS’s ‘NewsHour’ ” http://www.americanthinker.com/blog/2009/12/the_lack_of_climate_skeptics_o.html , in which there is a link to my first appearance at the PBS Ombudsman web page. My gripe back then to the Ombudsman was the way the NewsHour didn’t mention ClimateGate I until a week after the news broke.

      I haven’t stopped – Dec 14th was my seventh appearance on the Ombudsman page, scroll down this page http://www.pbs.org/ombudsman/2011/12/the_mailbag_making_sene_of_cleancut_and_scruf_1.html to “The Relentless Russell Cook…” To their credit, the NewsHour did actually devote 125 words to ClimateGate II (indirectly as “controversial emails”), in a Nov 28th Durban climate conference discussion where the guest was the Washington Post’s Juliet Eilperin. She blew off the entire ClimateGate scandal as utterly insignificant.

  4. Wayne: Oh, there’s a lot more than that that Russell didn’t mention, but he doesn’t have 300+ pages to deal with all of the problems.

    This came to my attention as an obvious scam early on, virtually the moment I learned that the dire predictions being made were based upon computer models. I know little about climate, but I’ve been involved with computers since around 1967, and know that they can NOT make serious predictions when most of the parameters are sheer guesses. You can’t take a hypothesis with only a few known possible parameters, put them into a computer script, and expect a correct solution. It is, in fact, utterly impossible. Computers simply are not that smart. They add ones and zeros and store information. Everything else they do is a variation on those two functions. If the ones and zeros they add are not precisely correct, it is not possible for the end-of-job conclusion to be correct. Emphasis: It. Is. NOT. POSSIBLE. Computer climate models are a scam.

    • I have read that their “computer models” are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn’t work for predicting future climate. That’s why they have been consistently wrong.
      Computers are good at predicting strength of materials, heating and electrical problems that have specific, known variables. Climate is too complex, and probably has dozens of variables, known and unknown.
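The failure mode this reply describes can be sketched in a few lines. This is my own toy analogy, not an actual climate model, and every number in it is invented for illustration: a polynomial "tuned" until it reproduces noisy historical data will still fall apart when extrapolated forward.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(30) / 30.0                       # 30 "years", rescaled to [0, 1)
true_trend = 0.6 * x                           # an invented weak warming trend
temps = true_trend + rng.normal(0, 0.1, 30)    # plus observational noise

# "Tune" a 9th-degree polynomial until the historical record is matched:
coeffs = np.polyfit(x, temps, 9)
hind_err = np.max(np.abs(np.polyval(coeffs, x) - temps))

# Then "predict" the next 10 years by extrapolating the tuned fit:
x_fut = np.arange(30, 40) / 30.0
fut_err = np.max(np.abs(np.polyval(coeffs, x_fut) - 0.6 * x_fut))

print(hind_err < fut_err)   # a good hindcast is no guarantee of forecast skill
```

The point of the sketch is only the asymmetry: the in-sample fit looks impressive while the extrapolation error grows without bound.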

  5. A little global warming would be a great benefit to mankind. However, computer models cannot model the past climate, so why should we believe future predictions? Weathermen can’t get the weather right 5 days in advance. It’s about money and power.

  6. Well, to me it’s obvious, there IS little mainstream media in the historical sense of unbiased, informed, investigation or reporting. Most “reporters”, video or print, don’t do anything but parrot. It’s sad that many pay any attention to them.

  7. The Carbon Commodity Fraud is just one of many scams run by the monarch/monopolists who have been in control of America and Europe for over a century. They own the war industries, the main stream media and both sides of the two party puppet show that passes for our delusion of democracy. Another ‘science’ fraud is Hubbert’s Peak Oil which diverts all petroleum commerce through the elitist profit network. Green energy is another assault against science, as EVERY ‘sustainable’ system is non functional. The underlying disease for all of these symptoms is the FRAUDULENT FRACTIONAL RESERVE BANKING SYSTEM. This Ponzi scheme is owned and run by the globalists and every war and every depression in the last century has been intentional. Either you believe that hapless humanity stumbles from one expensive bloodbath blindly into the next expensive bloodbath….OR….you realize that a sinister small group of elites have carefully stage-set, directed and PROFITED by human carnage. This is further explained in “Fractional Reserve Banking Begat Faux Reality”. The elitists could not stop with just Faux Science, they needed to re-write history to give false provenance to this fraud. Visit http://www.FauxScienceSlayer.com for more on this Multi-level Fraud Marketing and demand a New Magna Carta. It is time to arrest and convict these robber barons.

  8. Zeus Crankypants

    jarmo | December 28, 2011 at 4:02 pm | Reply
    I have read that their “computer models” are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn’t work for predicting future climate. That’s why they have been consistently wrong.

    (here is a little essay I wrote a few years ago about the CRU and their data processing and computer modeling technologies. As a programmer myself, I quickly recognized that the quality of their computer modeling and data processing programs would have gotten any normal IT department fired)

    Who is Ian “Harry” Harris? He is a staff member at the Climatic Research Unit at the
    University of East Anglia. His short bio on the CRU staff page says this: “Dendroclimatology,
    climate scenario development, data manipulation and visualisation, programming.”
    (http://www.cru.uea.ac.uk/cru/people/). He was tasked with maintaining, modifying and
    rewriting programs from the existing climate modeling software suite that existed at CRU
    since at least the 1990′s. He kept copious notes of his progress from 2006 through 2009,
    including his notes and comments internally in the programs themselves and in a 314
    page document named “harry_read_me.txt.” If you revel in the minutia of programmer’s
    notes you can easily find this document on the internet.

    I will document 4 different aspects of Ian “Harry” Harris’ notes
    1) General comments, inaccurate data bases
    2) CRU Time Series 3.0 dataset
    3) a RUN dialog
    4) Faulty code

    (Quotes are verbatim, including typos, misspellings and language differences; any other
    mistakes are mine.)

    1) General comments from the “harry_read_me.txt.” about the CRU programs and data.

    Here is Ian “Harry” Harris talking about both the legacy programs and legacy climate
    databases and the new data he is trying to create.

    “Oh GOD if I could start this project again and actually argue the case for junking the
    inherited program suite!!”

    author note: This is the program suite that has been generating data for years for
    CRU and staff.

    “…knowing how long it takes to debug this suite – the experiment endeth here. The option
    (like all the anomdtb options) is totally undocumented so we’ll never know what we lost.”

    author note: Remember, Dr. Phil Jones, head of CRU initially said they never lost
    any data.

    “Sounds familiar, if worrying. am I the first person to attempt to get the CRU databases
    in working order?!! The program pulls no punches. I had already found that
    tmx.0702091313.dtb had seven more stations than tmn.0702091313.dtb, but that hadn’t
    prepared me for the grisly truth:”

    “Getting seriously fed up with the state of the Australian data. so many new stations have
    been introduced, so many false references.. so many changes that aren’t documented.
    Every time a cloud forms I’m presented with a bewildering selection of similar-sounding
    sites, some with references, some with WMO codes, and some with both. And if I look
    up the station metadata with one of the local references, chances are the WMO code will
    be wrong (another station will have it) and the lat/lon will be wrong too.”

    author note: How were they generating temperature data on their world grid in the
    past if they couldn’t even match up stations?

    “I am very sorry to report that the rest of the databases seem to be in nearly as poor a
    state as Australia was. There are hundreds if not thousands of pairs of dummy stations,
    one with no WMO and one with, usually overlapping and with the same station name and
    very similar coordinates. I know it could be old and new stations, but why such large
    overlaps if that’s the case? Aarrggghhh!”

    “So.. should I really go to town (again) and allow the Master database to be
    ‘fixed’ by this program? Quite honestly I don’t have time – but it just shows the state our
    data holdings have drifted into.
    Who added those two series together? When?
    Why? Untraceable, except anecdotally. It’s the same story for many other Russian
    stations, unfortunately – meaning that (probably) there was a full Russian update
    that did no data integrity checking at all.
    I just hope it’s restricted to Russia!!”

    author note: Fixed? What does that mean? And why the quotes? This is live data Ian
    is talking about.

    “This still meant an awful lot of encounters with naughty Master stations, when really I
    suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the
    nuclear option – to match every WMO possible, and turn the rest into new stations (er,
    CLIMAT excepted). In other words, what CRU usually do. It will allow bad
    databases to pass unnoticed, and good databases to become bad, but I really don’t think
    people care enough to fix ‘em,
    and it’s the main reason the project is nearly a
    year late.”

    author note: This is about the strongest statement Ian makes about the state of the
    data at CRU

    “The big question must be, why does it have so little representation in the low numbers?
    Especially given that I’m rounding erroneous negatives up to 1!! Oh, sod it. It’ll do. I
    don’t think I can justify spending any longer on a dataset, the previous version of which
    was completely wrong (misnamed) and nobody noticed for five years.”

    “This was used to inform the Fortran conversion programs by indicating the latitudepotential_
    sun and sun-to-cloud relationships. It also assisted greatly in understanding
    what was wrong – Tim was in fact calculating Cloud Percent, despite calling it Sun
    Percent!! Just awful.”

    author note: Dr. Tim Mitchell or Dr. Tim Osborn? CRU -
    http://www.cru.uea.ac.uk/~timm/index.html

    “They aren’t percentage anomalies! They are percentage anomalies /10. This could
    explain why the real data areas had variability 10x too low. BUT it shouldn’t be – they
    should be regular percentage anomalies! This whole process is too convoluted and
    created myriad problems of this kind. I really think we should change it.”

    “Am I the first person to attempt to get the CRU databases in working order?!!”

    “Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software suites
    - let’s have a go at producing CRU TS 3.0! since failing to do that will be the definitive
    failure of the entire project..”

    “OH FUCK THIS. It’s Sunday evening, I’ve worked all weekend, and just when
    I thought it was done I’m hitting yet another problem that’s based on the hopeless state of
    our databases. There is no uniform data integrity, it’s just a catalogue of issues that
    continues to grow as they’re found.”

    Remember, he is talking about legacy programs and legacy data.

    2) About the CRU Time Series 3.0 dataset.
    Remember all the comments I posted here about the HADCRUT3 dataset, which contains
    global temperature readings from 1850 onward, and the possible problems with the data in
    that database. Well, HADCRUT3 is built from CRUTEM3 and the Hadley SST data.
    CRUTEM3 is built partially from CRU TS 3.0, which is mentioned above. And much of
    the data used for climate modeling in the past was contained in earlier versions of this
    dataset (CRU TS 2.1, CRU TS 2.0, CRU TS 1.1 and CRU TS 1.0) used for earlier
    climate models. (See the history of CRU TS at http://csi.cgiar.org/cru/.)
    Evidently Ian “Harry” Harris managed to finally produce the CRU TS 3.0 dataset, and
    here is a question from Dr Daniel Kingston, addressed to “Tim.”

    So, you release a dataset that people have been clamouring for, and the buggers only
    start using it! And finding problems. For instance:

    Hi Tim (good start! -ed)
    I realise you are likely to be very busy at the moment, but we have come across
    something in the CRU TS 3.0 data set which I hope you can help out with.
    We have been looking at the monthly precipitation totals over southern Africa (Angola,
    to be precise), and have found some rather large differences between precipitation as
    specified in the TS 2.1 data set, and the new TS 3.0 version. Specifically, April 1967 for
    the cell 12.75 south, 16.25 east, the monthly total in the TS 2.1 data set is 251mm,
    whereas in TS 3.0 it is 476mm.

    The anomaly does not only appear in this cell, but also in a number of neighbouring cells.
    This is quite a large difference, and the new TS 3.0 value doesn’t entirely tie in
    with what we might have expected from the station-based precip data we have for this
    area.

    Would it be possible for you could have a quick look into this issue?
    Many thanks,
    Dr Daniel Kingston
    Post Doctoral Research Associate
    Department of Geography
    University College London

    And here is Ian “Harry” Harris’ answer.

    Well, it’s a good question! And it took over two weeks to answer. I wrote angola.m,
    which pretty much established that three local stations had been augmented for 3.0, and
    that April 1967 was anomalously wet. Lots of non-reporting stations (ie too few years to
    form normals) also had high values. As part of this, I also wrote angola3.m, which added
    two rather interesting plots: the climatology, and the output from the Fortran gridder I’d
    just completed. This raised a couple of points of interest:

    1. The 2.10 output doesn’t look like the climatology, despite there being no stations in the
    area. It ought to have simply relaxed to the clim, instead it’s wetter.

    2. The gridder output is lower than 3.0, and much lower than the stations!

    I asked Tim and Phil about 1., they couldn’t give a definitive opinion. As for 2., their
    guesses were correct, I needed to mod the distance weighting. As usual, see
    gridder.sandpit for the full info.

    So to CLOUD. For over a year, rumours have been circulating that money had been
    found to pay somebody for a month to recreate Mark New’s coefficients. But it never
    quite gelled. Now, at last, someone’s producing them! Unfortunately.. it’s me.
    The idea is to derive the coefficients (for the regressing of cloud against DTR) using the
    published 2.10 data. We’ll use 5-degree blocks and years 1951-2002, then produce
    coefficients for each 5-degree latitude band and month. Finally, we’ll interpolate to get
    half-degree coefficients. Apparently.

    Lots of ‘issues’. We need to exclude ‘background’ stations – those that were relaxed to the
    climatology. This is hard to detect because the climatology consists of valid values, so
    testing for equivalence isn’t enough. It might have to be the station files *shudder*.
    Using station files was OK, actually. A bigger problem was the inclusion of strings of
    consecutive, identical values (for cloud and/or dtr). Not sure what the source is, as they
    are not == to the climatology (ie the anoms are not 0). Discussed with Phil – decided to
    try excluding any cell with a string like that of >10 values. Cloud only for now. The
    result of that was, unfortunately, the loss of several output values,

    3) Run dialogs
    Ian “Harry” Harris did a very good job of documenting his different “runs” of the
    programs, clipping and pasting the “run time dialog” into his “harry_read_me.txt.”
    document. Run time dialog is the text, messages and input prompts that appear on the
    screen when you run the program. You can see below that the original programmers of
    the CRU program suite had a “lively” style of informative messages to the end user. Here
    is a message you get when running an “update” program to merge temperature reporting
    stations.

    Before we get started, an important question: If you are merging an update – CLIMAT,
    MCDW, Australian – do you want the quick and dirty approach? This will blindly match
    on WMO codes alone, ignoring data/metadata checks, and making any
    unmatched updates into new stations (metadata permitting)?

    Enter ‘B’ for blind merging, or : B

    Do you know what this program produced? Bad records, an incomplete dataset. Records
    with station identifiers missing, stations duplicated, no checks for missing data. And if
    the program had data it didn’t know what to do with, it turned the data into a new station,
    even if it didn’t really know what that data was in reference to.

    Remember, these are the legacy programs that CRU used to generate data. These were
    live programs, live data. Ian “Harry” Harris was trying to fix and modify these programs,
    because many of them produced invalid data.
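The "blind merging" behavior described above can be sketched as follows. This is a toy reconstruction of the idea, not CRU's actual code; the function name, station names, and WMO codes are hypothetical, chosen only to show how matching on a single key with no metadata checks produces silent overwrites and duplicate stations.

```python
def blind_merge(master, updates):
    """Merge station updates into a master list on WMO code alone,
    with no data/metadata sanity checks (toy illustration)."""
    merged = {wmo: dict(rec) for wmo, rec in master.items()}
    for wmo, record in updates.items():
        if wmo in merged:
            merged[wmo].update(record)   # blindly overwrite on a code match
        else:
            merged[wmo] = dict(record)   # unmatched update becomes a new station
    return merged

master  = {"94120": {"name": "DARWIN AIRPORT", "lat": -12.42}}
updates = {"94120": {"lat": -12.40},              # silently overwrites the master
           "00000": {"name": "DARWIN AIRPORT"}}   # same site, bad code
result = blind_merge(master, updates)
print(sorted(result))   # the same physical station now exists under two codes
```

Nothing in the merge step can notice that the two records describe the same site, which is exactly the duplicate-station pattern Harris complains about in the notes.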

    4) Example of faulty code.
    Here is one example, from Ian “Harry” Harris, about an already existing function, one
    that had been used to generate data in the past.

    Back to precip, it seems the variability is too low. This points to a problem with the
    percentage anomaly routines. See earlier escapades – will the Curse of Tim
    never be lifted?

    A reminder. I started off using a ‘conventional’ calculation
    absgrid(ilon(i),ilat(i)) = nint(normals(i,imo) + * anoms(ilon(i),ilat(i)) * normals(i,imo)
    / 100) which is: V = N + AN/100

    This was shown to be delivering unrealistic values, so I went back to anomdtb to see how
    the anomalies were contructed in the first place, and found this:
    DataA(XAYear,XMonth,XAStn) = nint(1000.0*((real(DataA(XAYear,XMonth,XAStn))
    / & real(NormMean(XMonth,XAStn)))-1.0)) which is: A = 1000((V/N)-1)

    So, I reverse engineered that to get this: V = N(A+1000)/1000
    And that is apparently also delivering incorrect values. Bwaaaahh!!

    Harry eventually fixed this, so in the future it would produce accurate data, but one
    wonders how many times data was pushed through this formula in the past and how
    much invalid data was generated from this faulty function.
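For readers who want to see why the two conventions quoted above clash, here is a small sketch (my own code and example numbers, not CRU's) of the encoding A = 1000((V/N) - 1) against the "conventional" percentage decode V = N + AN/100 and the reverse-engineered decode V = N(A+1000)/1000.

```python
# Toy recreation of the two mismatched anomaly conventions quoted above.

def encode_anomaly(value, normal):
    """anomdtb's encoding, per the quote: A = 1000 * ((V / N) - 1)."""
    return 1000.0 * ((value / normal) - 1.0)

def decode_wrong(anom, normal):
    """The 'conventional' percentage decode: V = N + A*N/100.
    Correct only if A were a plain percentage anomaly."""
    return normal + anom * normal / 100.0

def decode_right(anom, normal):
    """The reverse-engineered decode: V = N * (A + 1000) / 1000."""
    return normal * (anom + 1000.0) / 1000.0

normal, value = 50.0, 55.0           # e.g. a month 10% above its normal
a = encode_anomaly(value, normal)    # 100.0: ten times the plain percentage
print(decode_right(a, normal))       # 55.0 -- round-trips correctly
print(decode_wrong(a, normal))       # 100.0 -- treats the 100 as "+100%"
```

Feeding values produced under one convention into the other convention's decoder silently rescales the data, which is exactly the kind of mismatch the notes describe.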

    Epilog:
    Remember Ian “Harry” Harris was working on a legacy program suite, not some “quick
    and dirty methods.” A suite of programs and datasets used by CRU for climate modeling
    and in use for many years. If you want to, read his 314 pages of notes that detail better
    than I could all of the problems he ran into trying to work with those existing legacy
    programs.

    Does this information presented here disprove AGW? Of course not. There are many
    other scientific organizations besides the CRU. But it does highlight, with provable facts,
    that the CRU itself has been responsible for bad data and bad programs, and, as we have
    seen from the dust-up over the ignored Freedom of Information Act requests issued to
    CRU, responsible for trying to cover up its mistakes. This is bad science and unfair to
    all the honest scientists the world over who are diligently working on honest climate
    science.

    Addendum:

    You have to give Ian “Harry” Harris a lot of credit. Evidently he has been responsible for
    cleaning up a lot of the mistakes that have existed in climate-based datasets in the past.
    This little narrative represents some of his work with NCEP/NCAR Reanalysis. (National
    Centers for Environmental Prediction – NOAA – http://www.ncep.noaa.gov/)
    http://www.cru.uea.ac.uk/cru/data/ncep/

    1948-1957 Data Added (Ian Harris, 22 Jul 2008)
    2007 Data Added (Ian Harris, 17 Apr 2008)
    2006 Data Added (Ian Harris, 11 Mar 2007)
    2005 Data Added (Ian Harris, 13 Jan 2006)
    2004 Data Added (Ian Harris, 28 Nov 2005)
    2003 Data Added (Ian Harris, 11 May 2004)
    SURFACE TEMPERATURE ADDED (Ian Harris, 10 December 2003)
    WARNING NOTE ADDED FOR SURFACE FLUX TEMPERATURES (Ian Harris, 10
    December 2003)
    ALL DATASETS UPDATED TO 2002 (Ian Harris, 23 June 2003)
    LAND/SEA MASKS ADDED (Ian Harris, 16 December 2002)
    Land/Sea Masks for regular and Gaussian grids have been added.
    NEW WINDOW ONLINE (Ian Harris, 9 July 2002)
    The new Quarter-Spherical Window (0N-90N; 90W-90E) is now in use (see table
    below).
    The old window data (here) has now been entirely replaced.
    Please address any requests for new variables to me.
    BAD DATA REPLACED (Ian Harris, 23 May 2002)
    The TOVS Problem has been resolved and only corrected data appears on this site.
    Anyone wishing to access the old (potentially incorrect) data in order to evaluate
    the extent of the problem should contact me.

    The last entry in that narrative is interesting.

  9. Your article doesn’t address the science behind global warming. CO2 is released geologically (from volcanic vents and the like) and then sequestered by plants, forming fossil fuel reserves such as peat, coal, natural gas, and oil. Were this not the case, the CO2 record would show a perpetual increase in CO2, and backing yearly geologic emissions out from the present day, the earth would have had no CO2 for much of recorded history, which is absurd, considering that plants need CO2 and early historians ramble on about vegetation! That said, modern industry has mined and burned much of the fossil fuel reserves, releasing in 200 years what nature spent tens of millions of years storing. It’s in the air now, the increase has been verified, and it can’t be removed quickly. The gas, while clear in the visible spectrum, is somewhat reflective in the IR (heat) spectrum, trapping heat in the exact same way the windows of a car or greenhouse do on a sunny day. More heat means a higher vapor pressure for water, creating a mild feedback loop (water vapor also traps heat, so more humidity means yet more heat no longer radiated back into space). Adding more tint (greenhouse gases) without increasing the ability to radiate heat leads to higher temperatures. If the greenhouse effect were made up, then it would make no sense for the atmosphere to store heat so well, and there would be radical shifts between day and night temperatures (think hellish days and arctic nights). A car with clear windows is typically cooler than one with tinted ones. The science here is neither controversial nor complicated.
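The mechanism this comment describes qualitatively can be put in numbers with a standard single-layer energy-balance model. This is textbook physics, not anything from the article or the comment, and the atmospheric emissivity value is an assumed round number chosen for illustration.

```python
# Single-layer greenhouse sketch (standard textbook model, my own numbers).
S = 1361.0        # solar constant, W/m^2
albedo = 0.30     # fraction of sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

# Effective temperature with no IR-absorbing atmosphere at all:
T_e = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(round(T_e))          # ~255 K: well below freezing

# Add one IR-absorbing layer with emissivity eps (assumed value):
eps = 0.78
T_s = T_e * (2 / (2 - eps)) ** 0.25
print(round(T_s))          # ~288 K: roughly the observed global mean
```

Without the absorbing layer the surface would sit near 255 K; the assumed emissivity of 0.78 is the value that brings the simple model up to roughly the observed 288 K mean.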

    • Zeus Crankypants

      And you don’t address the screwed up computer models, software programs and defective data sets that were so nicely documented by Ian Harris in the above narrative. Nice deflection… and you added a little Tu Quoque fallacy into the mix. Well… my comment was about shoddy software and faulty computer models. Would you like to address my subject?
