<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" > <channel> <title> Comments on: Monumental fault in manmade global warming notion hiding in plain sight </title> <atom:link href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/feed/" rel="self" type="application/rss+xml" /> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/</link> <description>All the junk that’s fit to debunk.</description> <lastBuildDate>Fri, 06 Jan 2012 01:05:06 +0000</lastBuildDate> <sy:updatePeriod> hourly </sy:updatePeriod> <sy:updateFrequency> 1 </sy:updateFrequency> <generator>https://wordpress.org/?v=6.7.2</generator> <item> <title> By: Zeus Crankypants </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6405</link> <dc:creator><![CDATA[Zeus Crankypants]]></dc:creator> <pubDate>Fri, 06 Jan 2012 01:05:06 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6405</guid> <description><![CDATA[In reply to <a href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6404">Niko</a>. And you don't address the screwed up computer models, software programs and defective data sets that were so nicely documented by Ian Harris in the above narrative. Nice deflection... and you added a little Tu Quoque fallacy into the mix. Well... my comment was about shoddy software and faulty computer models. 
Would you like to address my subject?]]></description> <content:encoded><![CDATA[<p>In reply to <a href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6404">Niko</a>.</p> <p>And you don’t address the screwed up computer models, software programs and defective data sets that were so nicely documented by Ian Harris in the above narrative. Nice deflection… and you added a little Tu Quoque fallacy into the mix. Well… my comment was about shoddy software and faulty computer models. Would you like to address my subject?</p> ]]></content:encoded> </item> <item> <title> By: Niko </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6404</link> <dc:creator><![CDATA[Niko]]></dc:creator> <pubDate>Thu, 05 Jan 2012 07:36:26 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6404</guid> <description><![CDATA[Your article doesn't address the science behind global warming. CO2 is released geologically (from volcanic vents and the like) and then sequestered by plants, forming fossil fuel reserves such as peat, coal, natural gas, and oil. Were this not the case, the CO2 record would show a perpetual increase in CO2, and backing yearly geologic emissions out from the present day, the earth would have had no CO2 for much of recorded history--- absurd, considering that plants need CO2 and early historians ramble on about vegetation! That said, modern industry has mined and burned much of the fossil fuel reserves, releasing in 200 years what nature spent tens of millions of years storing. It's in the air now, the increase has been verified, it can't be removed quickly. The gas, while clear in the visible spectrum, is somewhat reflective in the IR (heat) spectrum, trapping heat in the exact same way the windows of a car or greenhouse do on a sunny day. 
More heat means a higher vapor pressure for water, creating a mild feedback loop (water vapor also traps heat, so more humidity, yet more heat no longer radiated back into space). Adding more tint (greenhouse gases), without increasing the ability to radiate heat, leads to higher temperatures. If the greenhouse effect were made up, then it would make no sense for the atmosphere to store heat so well, and there would be radical shifts between day and night temperatures (think hellish days and arctic nights). A car with clear windows is typically cooler than one with tinted ones. The science here is neither controversial nor complicated.]]></description> <content:encoded><![CDATA[<p>Your article doesn’t address the science behind global warming. CO2 is released geologically (from volcanic vents and the like) and then sequestered by plants, forming fossil fuel reserves such as peat, coal, natural gas, and oil. Were this not the case, the CO2 record would show a perpetual increase in CO2, and backing yearly geologic emissions out from the present day, the earth would have had no CO2 for much of recorded history— absurd, considering that plants need CO2 and early historians ramble on about vegetation! That said, modern industry has mined and burned much of the fossil fuel reserves, releasing in 200 years what nature spent tens of millions of years storing. It’s in the air now, the increase has been verified, it can’t be removed quickly. The gas, while clear in the visible spectrum, is somewhat reflective in the IR (heat) spectrum, trapping heat in the exact same way the windows of a car or greenhouse do on a sunny day. More heat means a higher vapor pressure for water, creating a mild feedback loop (water vapor also traps heat, so more humidity, yet more heat no longer radiated back into space). Adding more tint (greenhouse gases), without increasing the ability to radiate heat, leads to higher temperatures. 
If the greenhouse effect were made up, then it would make no sense for the atmosphere to store heat so well, and there would be radical shifts between day and night temperatures (think hellish days and arctic nights). A car with clear windows is typically cooler than one with tinted ones. The science here is neither controversial nor complicated.</p> ]]></content:encoded> </item> <item> <title> By: Zeus Crankypants </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6403</link> <dc:creator><![CDATA[Zeus Crankypants]]></dc:creator> <pubDate>Thu, 29 Dec 2011 18:51:24 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6403</guid> <description><![CDATA[<blockquote> jarmo | December 28, 2011 at 4:02 pm | Reply I have read that their “computer models” are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn’t work for predicting future climate. That’s why they have been consistently wrong. </blockquote> (Here is a little essay I wrote a few years ago about the CRU and their data processing and computer modeling technologies. As a programmer myself, I quickly recognized that the quality of their computer modeling and data processing programs would have gotten any normal IT department fired.) Who is Ian "Harry" Harris? He is a staff member at the Climatic Research Unit at the University of East Anglia. His short bio on the CRU staff page says this: "Dendroclimatology, climate scenario development, data manipulation and visualisation, programming." (http://www.cru.uea.ac.uk/cru/people/). He was tasked with maintaining, modifying and rewriting programs from the existing climate modeling software suite that had existed at CRU since at least the 1990s. 
He kept copious notes of his progress from 2006 through 2009, including his notes and comments internally in the programs themselves and in a 314-page document named "harry_read_me.txt." If you revel in the minutiae of programmer's notes you can easily find this document on the internet. I will document 4 different aspects of Ian "Harry" Harris' notes: 1) General comments, inaccurate databases 2) CRU Time Series 3.0 dataset 3) a RUN dialog 4) Faulty code Quotes are verbatim (including typos, misspellings and language differences); any other mistakes are mine. 1) General comments from the "harry_read_me.txt." about the CRU programs and data. Here is Ian "Harry" Harris talking about both the legacy programs and legacy climate databases and the new data he is trying to create. <blockquote> "Oh GOD if I could start this project again and actually argue the case for junking the inherited program suite!!" <em>author note: This is the program suite that has been generating data for years for CRU and staff.</em> "...knowing how long it takes to debug this suite - the experiment endeth here. The option (like all the anomdtb options) is totally undocumented so we'll never know what we lost." <em>author note: Remember, Dr. Phil Jones, head of CRU, initially said they never lost any data.</em> "Sounds familiar, if worrying. am I the first person to attempt to get the CRU databases in working order?!! The program pulls no punches. I had already found that tmx.0702091313.dtb had seven more stations than tmn.0702091313.dtb, but that hadn't prepared me for the grisly truth:" "Getting seriously fed up with the state of the Australian data. so many new stations have been introduced, so many false references.. so many changes that aren't documented. Every time a cloud forms I'm presented with a bewildering selection of similar-sounding sites, some with references, some with WMO codes, and some with both. 
And if I look up the station metadata with one of the local references, chances are the WMO code will be wrong (another station will have it) and the lat/lon will be wrong too." <em>author note: How were they generating temperature data on their world grid in the past if they couldn't even match up stations?</em> "I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that's the case? Aarrggghhh!" <strong> "So.. should I really go to town (again) and allow the Master database to be 'fixed' by this program? Quite honestly I don't have time - but it just shows the state our data holdings have drifted into.</strong> Who added those two series together? When? Why? Untraceable, except anecdotally. It's the same story for many other Russian stations, unfortunately - <strong>meaning that (probably) there was a full Russian update that did no data integrity checking at all.</strong> I just hope it's restricted to Russia!!" <em>author note: Fixed? What does that mean? And why the quotes? This is live data Ian is talking about.</em> "This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). <strong>In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em,</strong> and it's the main reason the project is nearly a year late." 
<em>author note: This is about the strongest statement Ian makes about the state of the data at CRU</em> "The big question must be, why does it have so little representation in the low numbers? Especially given that I'm rounding erroneous negatives up to 1!! Oh, sod it. It'll do. I don't think I can justify spending any longer on a dataset, the previous version of which was completely wrong (misnamed) and nobody noticed for five years." "This was used to inform the Fortran conversion programs by indicating the latitudepotential_ sun and sun-to-cloud relationships. It also assisted greatly in understanding what was wrong - Tim was in fact calculating Cloud Percent, despite calling it Sun Percent!! Just awful." <em>author note: Dr. Tim Mitchell or Dr. Tim Osborn? CRU - http://www.cru.uea.ac.uk/~timm/index.html</em> "They aren't percentage anomalies! They are percentage anomalies /10. This could explain why the real data areas had variability 10x too low. BUT it shouldn't be - they should be regular percentage anomalies! This whole process is too convoluted and created myriad problems of this kind. I really think we should change it." "Am I the first person to attempt to get the CRU databases in working order?!!" "Right, time to stop pussyfooting around the niceties of Tim's labyrinthine software suites - let's have a go at producing CRU TS 3.0! since failing to do that will be the definitive failure of the entire project.." <strong> "OH FUCK THIS. It's Sunday evening, I've worked all weekend, and just when I thought it was done I'm hitting yet another problem that's based on the hopeless state of our databases. There is no uniform data integrity, it's just a catalogue of issues that continues to grow as they're found."</strong> </blockquote> Remember, he is talking about legacy programs and legacy data. 2) About the CRU Time Series 3.0 dataset. 
Remember all the comments I posted here about the HADCRUT3 dataset, which contains global temperature readings from 1850 onward, and the possible problems with the data in that database. Well, HADCRUT3 is built from CRUTEM3 and the Hadley SST data. CRUTEM3 is built partially from CRU TS 3.0, which is mentioned above. And much of the data used for climate modeling in the past was contained in earlier versions of this dataset (CRU TS 2.1, 2.0, 1.1 and 1.0) used for earlier climate models (see the history of CRU TS at http://csi.cgiar.org/cru/). Evidently Ian "Harry" Harris finally managed to produce the dataset CRU TS 3.0, and here is a question from Dr Daniel Kingston, addressed to "Tim." <blockquote> So, you release a dataset that people have been clamouring for, and the buggers only <strong>start using it! And finding problems.</strong> For instance: <blockquote> Hi Tim (good start! -ed) I realise you are likely to be very busy at the moment, but we have come across something in the CRU TS 3.0 data set which I hope you can help out with. We have been looking at the monthly precipitation totals over southern Africa (Angola, to be precise), and have found some rather large differences between precipitation as specified in the TS 2.1 data set, and the new TS 3.0 version. Specifically, April 1967 for the cell 12.75 south, 16.25 east, the monthly total in the TS 2.1 data set is 251mm, whereas in TS 3.0 it is 476mm. The anomaly does not only appear in this cell, but also in a number of neighbouring cells. This is quite a large difference, and the <strong>new TS 3.0 value doesn't entirely tie in with what we might have expected from the station-based precip data we have for this area.</strong> Would it be possible for you could have a quick look into this issue? Many thanks, Dr Daniel Kingston Post Doctoral Research Associate Department of Geography University College London </blockquote> </blockquote> And here is Ian "Harry" Harris' answer. 
<blockquote> Well, it's a good question! And it took over two weeks to answer. I wrote angola.m, which pretty much established that three local stations had been augmented for 3.0, and that April 1967 was anomalously wet. Lots of non-reporting stations (ie too few years to form normals) also had high values. As part of this, I also wrote angola3.m, which added two rather interesting plots: the climatology, and the output from the Fortran gridder I'd just completed. This raised a couple of points of interest: 1. The 2.10 output doesn't look like the climatology, despite there being no stations in the area. It ought to have simply relaxed to the clim, instead it's wetter. 2. The gridder output is lower than 3.0, and much lower than the stations! I asked Tim and Phil about 1., they couldn't give a definitive opinion. As for 2., their guesses were correct, I needed to mod the distance weighting. As usual, see gridder.sandpit for the full info. So to CLOUD. For over a year, rumours have been circulating that money had been found to pay somebody for a month to recreate Mark New's coefficients. But it never quite gelled. Now, at last, someone's producing them! Unfortunately.. it's me. The idea is to derive the coefficients (for the regressing of cloud against DTR) using the published 2.10 data. We'll use 5-degree blocks and years 1951-2002, then produce coefficients for each 5-degree latitude band and month. Finally, we'll interpolate to get half-degree coefficients. Apparently. Lots of 'issues'. We need to exclude 'background' stations - those that were relaxed to the climatology. This is hard to detect because the climatology consists of valid values, so testing for equivalence isn't enough. It might have to be the station files *shudder*. Using station files was OK, actually. A bigger problem was the inclusion of strings of consecutive, identical values (for cloud and/or dtr). Not sure what the source is, as they are not == to the climatology (ie the anoms are not 0). 
Discussed with Phil - decided to try excluding any cell with a string like that of >10 values. Cloud only for now. The result of that was, unfortunately, the loss of several output values, </blockquote> 3) Run dialogs Ian "Harry" Harris did a very good job of documenting his different "runs" of the programs, clipping and pasting the "run time dialog" into his "harry_read_me.txt." document. Run time dialog is the text, messages and input prompts that appear on the screen when you run the program. You can see below that the original programmers of the CRU program suite had a "lively" style of informative messages to the end user. Here is a message you get when running an "update" program to merge temperature reporting stations. <blockquote> Before we get started, an important question: If you are merging an update - CLIMAT, MCDW, Australian - do you want the quick and dirty approach? This will blindly match on WMO codes alone, <strong>ignoring data/metadata checks</strong>, and making any unmatched updates into new stations (metadata permitting)? Enter 'B' for blind merging, or : B </blockquote> Do you know what this program produced? Bad records, an incomplete dataset. Records with station identifiers missing, stations duplicated, no checks for missing data. And if the program had data it didn't know what to do with, it turned the data into a new station, even if it didn't really know what that data was in reference to. Remember, these are the legacy programs that CRU used to generate data. These were live programs, live data. Ian "Harry" Harris was trying to fix and modify these programs, because many of them produced invalid data. 4) Example of faulty code. Here is one example, from Ian "Harry" Harris, about an already existing function, one that had been used to generate data in the past. <blockquote> Back to precip, it seems the variability is too low. This points to a problem with the percentage anomaly routines. 
See earlier escapades - <strong>will the Curse of Tim never be lifted? </strong> A reminder. I started off using a 'conventional' calculation absgrid(ilon(i),ilat(i)) = nint(normals(i,imo) + * anoms(ilon(i),ilat(i)) * normals(i,imo) / 100) which is: V = N + AN/100 This was shown to be delivering unrealistic values, so I went back to anomdtb to see how the anomalies were <strong>contructed in the first place</strong>, and found this: DataA(XAYear,XMonth,XAStn) = nint(1000.0*((real(DataA(XAYear,XMonth,XAStn)) / & real(NormMean(XMonth,XAStn)))-1.0)) which is: A = 1000((V/N)-1) So, I reverse engineered that to get this: V = N(A+1000)/1000 And that is apparently also delivering incorrect values. Bwaaaahh!! </blockquote> Harry eventually fixed this, so in the future it would produce accurate data, but one wonders how many times data was pushed through this formula in the past and how much invalid data was generated from this faulty function. Epilogue: Remember, Ian "Harry" Harris was working on a legacy program suite, not some "quick and dirty methods." A suite of programs and datasets used by CRU for climate modeling and in use for many years. If you want to, read his 314 pages of notes that detail better than I could all of the problems he ran into trying to work with those existing legacy programs. Does the information presented here disprove AGW? Of course not. There are many other scientific organizations besides the CRU. But it does highlight, with provable facts, that CRU itself has been responsible for bad data and bad programs and, as we have seen from the dust-up over the ignored Freedom of Information Act requests issued to CRU, for trying to cover up its mistakes. This is bad science and unfair to all the honest scientists the world over who are diligently working on honest climate science. Addendum: You have to give Ian "Harry" Harris a lot of credit. 
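One reason he deserves that credit: the convention mix-up he untangled in the faulty-code example above is easy to reproduce. The quoted anomdtb code stores anomalies in per-mille form, A = 1000((V/N)-1), so reading them back with the percentage formula V = N + AN/100 inflates values tenfold. Here is a quick sanity check in Python - my own illustration with invented numbers, not CRU code:

```python
# Per-mille anomaly convention quoted from anomdtb: A = 1000*((V/N) - 1)
def to_anomaly(value, normal):
    return round(1000.0 * ((value / normal) - 1.0))

# Harry's reverse-engineered inverse: V = N*(A + 1000)/1000
def from_anomaly(anom, normal):
    return normal * (anom + 1000) / 1000.0

# The 'conventional' formula he started with: V = N + A*N/100
# (correct only if A were a percentage anomaly, not per-mille)
def from_percent_anomaly(anom, normal):
    return normal + anom * normal / 100.0

normal = 250.0   # hypothetical monthly normal, e.g. mm of precipitation
value = 476.0    # hypothetical observed monthly total
a = to_anomaly(value, normal)            # 904 per mille
print(from_anomaly(a, normal))           # 476.0 -- the round trip recovers the value
print(from_percent_anomaly(a, normal))   # 2510.0 -- wildly unrealistic, as he reports
```

Nothing in the variable naming says which convention a stored number is in, which is exactly how a dataset ends up with "variability 10x too low" in one place and absurdly inflated values in another.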
Evidently he has been responsible for cleaning up a lot of the mistakes that have existed in climate based datasets in the past. This little narrative represents some of his work with NCEP/NCAR Reanalysis. (National Centers for Environmental Prediction - NOAA - http://www.ncep.noaa.gov/) http://www.cru.uea.ac.uk/cru/data/ncep/ <blockquote> 1948-1957 Data Added (Ian Harris, 22 Jul 2008) 2007 Data Added (Ian Harris, 17 Apr 2008) 2006 Data Added (Ian Harris, 11 Mar 2007) 2005 Data Added (Ian Harris, 13 Jan 2006) 2004 Data Added (Ian Harris, 28 Nov 2005) 2003 Data Added (Ian Harris, 11 May 2004) SURFACE TEMPERATURE ADDED (Ian Harris, 10 December 2003) WARNING NOTE ADDED FOR SURFACE FLUX TEMPERATURES (Ian Harris, 10 December 2003) ALL DATASETS UPDATED TO 2002 (Ian Harris, 23 June 2003) LAND/SEA MASKS ADDED (Ian Harris, 16 December 2002) Land/Sea Masks for regular and Gaussian grids have been added. NEW WINDOW ONLINE (Ian Harris, 9 July 2002) The new Quarter-Spherical Window (0N-90N; 90W-90E) is now in use (see table below). The old window data (here) has now been entirely replaced. Please address any requests for new variables to me. BAD DATA REPLACED (Ian Harris, 23 May 2002) The TOVS Problem has been resolved and only corrected data appears on this site. <strong>Anyone wishing to access the old (potentially incorrect) data in order to evaluate the extent of the problem should contact me.</strong> </blockquote> The last entry in that narrative is interesting.]]></description> <content:encoded><![CDATA[<blockquote><p> jarmo | December 28, 2011 at 4:02 pm | Reply<br /> I have read that their “computer models” are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn’t work for predicting future climate. That’s why they have been consistently wrong. 
</p></blockquote> <p>(Here is a little essay I wrote a few years ago about the CRU and their data processing and computer modeling technologies. As a programmer myself, I quickly recognized that the quality of their computer modeling and data processing programs would have gotten any normal IT department fired.)</p> <p>Who is Ian “Harry” Harris? He is a staff member at the Climatic Research Unit at the University of East<br /> Anglia. His short bio on the CRU staff page says this: “Dendroclimatology,<br /> climate scenario development, data manipulation and visualisation, programming.”<br /> (<a href="http://www.cru.uea.ac.uk/cru/people/" rel="nofollow ugc">http://www.cru.uea.ac.uk/cru/people/</a>). He was tasked with maintaining, modifying and<br /> rewriting programs from the existing climate modeling software suite that had existed at CRU<br /> since at least the 1990s. He kept copious notes of his progress from 2006 through 2009,<br /> including his notes and comments internally in the programs themselves and in a 314-page<br /> document named “harry_read_me.txt.” If you revel in the minutiae of programmer’s<br /> notes you can easily find this document on the internet.</p> <p>I will document 4 different aspects of Ian “Harry” Harris’ notes:<br /> 1) General comments, inaccurate databases<br /> 2) CRU Time Series 3.0 dataset<br /> 3) a RUN dialog<br /> 4) Faulty code</p> <p>Quotes are verbatim (including typos, misspellings and language differences); 
any other<br /> mistakes are mine.</p> <p>1) General comments from the “harry_read_me.txt.” about the CRU programs and data.</p> <p>Here is Ian “Harry” Harris talking about both the legacy programs and legacy climate<br /> databases and the new data he is trying to create.</p> <blockquote><p> “Oh GOD if I could start this project again and actually argue the case for junking the<br /> inherited program suite!!”</p> <p><em>author note: This is the program suite that has been generating data for years for<br /> CRU and staff.</em></p> <p>…knowing how long it takes to debug this suite – the experiment endeth here. The option<br /> (like all the anomdtb options) is totally undocumented so we’ll never know what we lost.”</p> <p><em>author note: Remember, Dr. Phil Jones, head of CRU initially said they never lost<br /> any data.</em></p> <p>“Sounds familiar, if worrying. am I the first person to attempt to get the CRU databases<br /> in working order?!! The program pulls no punches. I had already found that<br /> tmx.0702091313.dtb had seven more stations than tmn.0702091313.dtb, but that hadn’t<br /> prepared me for the grisly truth:”</p> <p>“Getting seriously fed up with the state of the Australian data. so many new stations have<br /> been introduced, so many false references.. so many changes that aren’t documented.<br /> Every time a cloud forms I’m presented with a bewildering selection of similar-sounding<br /> sites, some with references, some with WMO codes, and some with both. And if I look<br /> up the station metadata with one of the local references, chances are the WMO code will<br /> be wrong (another station will have it) and the lat/lon will be wrong too.”</p> <p><em>author note: How were they generating temperature data on their world grid in the<br /> past if they couldn’t even match up stations?</em></p> <p>“I am very sorry to report that the rest of the databases seem to be in nearly as poor a<br /> state as Australia was. 
There are hundreds if not thousands of pairs of dummy stations,<br /> one with no WMO and one with, usually overlapping and with the same station name and<br /> very similar coordinates. I know it could be old and new stations, but why such large<br /> overlaps if that’s the case? Aarrggghhh!”</p> <p><strong> “So.. should I really go to town (again) and allow the Master database to be<br /> ‘fixed’ by this program? Quite honestly I don’t have time – but it just shows the state our<br /> data holdings have drifted into.</strong> Who added those two series together? When?<br /> Why? Untraceable, except anecdotally. It’s the same story for many other Russian<br /> stations, unfortunately – <strong>meaning that (probably) there was a full Russian update<br /> that did no data integrity checking at all.</strong> I just hope it’s restricted to Russia!!”</p> <p><em>author note: Fixed? What does that mean? And why the quotes? This is live data Ian<br /> is talking about.</em></p> <p>“This still meant an awful lot of encounters with naughty Master stations, when really I<br /> suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the<br /> nuclear option – to match every WMO possible, and turn the rest into new stations (er,<br /> CLIMAT excepted). <strong>In other words, what CRU usually do. It will allow bad<br /> databases to pass unnoticed, and good databases to become bad, but I really don’t think<br /> people care enough to fix ’em,</strong> and it’s the main reason the project is nearly a<br /> year late.”</p> <p><em>author note: This is about the strongest statement Ian makes about the state of the<br /> data at CRU</em></p> <p>“The big question must be, why does it have so little representation in the low numbers?<br /> Especially given that I’m rounding erroneous negatives up to 1!! Oh, sod it. It’ll do. 
I<br /> don’t think I can justify spending any longer on a dataset, the previous version of which<br /> was completely wrong (misnamed) and nobody noticed for five years.”</p> <p>“This was used to inform the Fortran conversion programs by indicating the latitudepotential_<br /> sun and sun-to-cloud relationships. It also assisted greatly in understanding<br /> what was wrong – Tim was in fact calculating Cloud Percent, despite calling it Sun<br /> Percent!! Just awful.”</p> <p><em>author note: Dr. Tim Mitchell or Dr. Tim Osborn? CRU –<br /> <a href="http://www.cru.uea.ac.uk/~timm/index.html" rel="nofollow ugc">http://www.cru.uea.ac.uk/~timm/index.html</a></em></p> <p>“They aren’t percentage anomalies! They are percentage anomalies /10. This could<br /> explain why the real data areas had variability 10x too low. BUT it shouldn’t be – they<br /> should be regular percentage anomalies! This whole process is too convoluted and<br /> created myriad problems of this kind. I really think we should change it.”</p> <p>“Am I the first person to attempt to get the CRU databases in working order?!!”</p> <p>“Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software suites<br /> – let’s have a go at producing CRU TS 3.0! since failing to do that will be the definitive<br /> failure of the entire project..”</p> <p><strong> “OH FUCK THIS. It’s Sunday evening, I’ve worked all weekend, and just when<br /> I thought it was done I’m hitting yet another problem that’s based on the hopeless state of<br /> our databases. 
There is no uniform data integrity, it’s just a catalogue of issues that<br /> continues to grow as they’re found.”</strong> </p></blockquote> <p>Remember, he is talking about legacy programs and legacy data.</p> <p>2) About the CRU Time Series 3.0 dataset.<br /> Remember all the comments I posted here about the HADCRUT3 dataset, which contains<br /> global temperature readings from 1850 onward, and the possible problems with the data in<br /> that database. Well, HADCRUT3 is built from CRUTEM3 and the Hadley SST data.<br /> CRUTEM3 is built partially from CRU TS 3.0, which is mentioned above. And much of<br /> the data used for climate modeling in the past was contained in earlier versions of this<br /> dataset (CRU TS 2.1, 2.0, 1.1 and 1.0) used for<br /> earlier climate models (see the history of CRU TS at <a href="http://csi.cgiar.org/cru/" rel="nofollow ugc">http://csi.cgiar.org/cru/</a>).<br /> Evidently Ian “Harry” Harris finally managed to produce the dataset CRU TS 3.0, and<br /> here is a question from Dr Daniel Kingston, addressed to “Tim.”</p> <blockquote><p> So, you release a dataset that people have been clamouring for, and the buggers only<br /> <strong>start using it! And finding problems.</strong> For instance:</p> <blockquote> <p>Hi Tim (good start! -ed)<br /> I realise you are likely to be very busy at the moment, but we have come across<br /> something in the CRU TS 3.0 data set which I hope you can help out with.<br /> We have been looking at the monthly precipitation totals over southern Africa (Angola,<br /> to be precise), and have found some rather large differences between precipitation as<br /> specified in the TS 2.1 data set, and the new TS 3.0 version. 
Specifically, April 1967 for<br /> the cell 12.75 south, 16.25 east, the monthly total in the TS 2.1 data set is 251mm,<br /> whereas in TS 3.0 it is 476mm.</p> <p>The anomaly does not only appear in this cell, but also in a number of neighbouring cells.<br /> This is quite a large difference, and the <strong>new TS 3.0 value doesn’t entirely tie in<br /> with what we might have expected from the station-based precip data we have for this<br /> area.</strong></p> <p>Would it be possible for you could have a quick look into this issue?<br /> Many thanks,<br /> Dr Daniel Kingston<br /> Post Doctoral Research Associate<br /> Department of Geography<br /> University College London </p></blockquote> </blockquote> <p>And here is Ian “Harry” Harris’ answer.</p> <blockquote><p> Well, it’s a good question! And it took over two weeks to answer. I wrote angola.m,<br /> which pretty much established that three local stations had been augmented for 3.0, and<br /> that April 1967 was anomalously wet. Lots of non-reporting stations (ie too few years to<br /> form normals) also had high values. As part of this, I also wrote angola3.m, which added<br /> two rather interesting plots: the climatology, and the output from the Fortran gridder I’d<br /> just completed. This raised a couple of points of interest:</p> <p>1. The 2.10 output doesn’t look like the climatology, despite there being no stations in the<br /> area. It ought to have simply relaxed to the clim, instead it’s wetter.</p> <p>2. The gridder output is lower than 3.0, and much lower than the stations!</p> <p>I asked Tim and Phil about 1., they couldn’t give a definitive opinion. As for 2., their<br /> guesses were correct, I needed to mod the distance weighting. As usual, see<br /> gridder.sandpit for the full info.</p> <p>So to CLOUD. For over a year, rumours have been circulating that money had been<br /> found to pay somebody for a month to recreate Mark New’s coefficients. But it never<br /> quite gelled. 
Now, at last, someone’s producing them! Unfortunately.. it’s me.<br /> The idea is to derive the coefficients (for the regressing of cloud against DTR) using the<br /> published 2.10 data. We’ll use 5-degree blocks and years 1951-2002, then produce<br /> coefficients for each 5-degree latitude band and month. Finally, we’ll interpolate to get<br /> half-degree coefficients. Apparently.</p> <p>Lots of ‘issues’. We need to exclude ‘background’ stations – those that were relaxed to the<br /> climatology. This is hard to detect because the climatology consists of valid values, so<br /> testing for equivalence isn’t enough. It might have to be the station files *shudder*.<br /> Using station files was OK, actually. A bigger problem was the inclusion of strings of<br /> consecutive, identical values (for cloud and/or dtr). Not sure what the source is, as they<br /> are not == to the climatology (ie the anoms are not 0). Discussed with Phil – decided to<br /> try excluding any cell with a string like that of >10 values. Cloud only for now. The<br /> result of that was, unfortunately, the loss of several output values, </p></blockquote> <p>3) Run dialogs<br /> Ian “Harry” Harris did a very good job of documenting his different “runs” of the<br /> programs, clipping and pasting the “run time dialog” into his “harry_read_me.txt”<br /> document. Run time dialog is the text, messages and input prompts that appear on the<br /> screen when you run the program. You can see below that the original programmers of<br /> the CRU program suite had a “lively” style of informative messages to the end user. Here<br /> is a message you get when running an “update” program to merge temperature reporting<br /> stations.</p> <blockquote><p> Before we get started, an important question: If you are merging an update – CLIMAT,<br /> MCDW, Australian – do you want the quick and dirty approach? 
This will blindly match<br /> on WMO codes alone, <strong>ignoring data/metadata checks</strong>, and making any<br /> unmatched updates into new stations (metadata permitting)?</p> <p>Enter ‘B’ for blind merging, or : B </p></blockquote> <p>Do you know what this program produced? Bad records, an incomplete dataset. Records<br /> with station identifiers missing, stations duplicated, no checks for missing data. And if<br /> the program had data it didn’t know what to do with, it turned the data into a new station,<br /> even if it didn’t really know what that data was in reference to.</p> <p>Remember, these are the legacy programs that CRU used to generate data. These were<br /> live programs, live data. Ian “Harry” Harris was trying to fix and modify these programs,<br /> because many of them produced invalid data.</p> <p>4) Example of faulty code.<br /> Here is one example, from Ian “Harry” Harris, about an already existing function, one<br /> that had been used to generate data in the past.</p> <blockquote><p> Back to precip, it seems the variability is too low. This points to a problem with the<br /> percentage anomaly routines. See earlier escapades – <strong>will the Curse of Tim<br /> never be lifted? </strong></p> <p>A reminder. I started off using a ‘conventional’ calculation<br /> absgrid(ilon(i),ilat(i)) = nint(normals(i,imo) + * anoms(ilon(i),ilat(i)) * normals(i,imo)<br /> / 100) which is: V = N + AN/100</p> <p>This was shown to be delivering unrealistic values, so I went back to anomdtb to see how<br /> the anomalies were <strong>contructed in the first place</strong>, and found this:<br /> DataA(XAYear,XMonth,XAStn) = nint(1000.0*((real(DataA(XAYear,XMonth,XAStn))<br /> / & real(NormMean(XMonth,XAStn)))-1.0)) which is: A = 1000((V/N)-1)</p> <p>So, I reverse engineered that to get this: V = N(A+1000)/1000<br /> And that is apparently also delivering incorrect values. Bwaaaahh!! 
</p></blockquote> <p>Harry eventually fixed this, so in the future it would produce accurate data, but one<br /> wonders how many times data was pushed through this formula in the past and how<br /> much invalid data was generated from this faulty function.</p> <p>Epilog:<br /> Remember, Ian “Harry” Harris was working on a legacy program suite, not some “quick<br /> and dirty methods.” It was a suite of programs and datasets used by CRU for climate modeling<br /> and in use for many years. If you want to, read his 314 pages of notes, which detail better<br /> than I could all of the problems he ran into trying to work with those existing legacy<br /> programs.</p> <p>Does the information presented here disprove AGW? Of course not. There are many<br /> other scientific organizations besides the CRU. But it does highlight, with provable facts,<br /> that CRU itself has been responsible for bad data and bad programs and, as we<br /> have seen by the dust-up over the ignored Freedom of Information Act requests<br /> issued to CRU, for trying to cover up its mistakes. This is bad science and<br /> unfair to all the honest scientists the world over who are diligently working on honest<br /> climate science.</p> <p>Addendum:</p> <p>You have to give Ian “Harry” Harris a lot of credit. Evidently he has been responsible for<br /> cleaning up a lot of the mistakes that have existed in climate-based datasets in the past.<br /> This little narrative represents some of his work with NCEP/NCAR Reanalysis. 
(National<br /> Centers for Environmental Prediction – NOAA – <a href="http://www.ncep.noaa.gov/" rel="nofollow ugc">http://www.ncep.noaa.gov/</a>)<br /> <a href="http://www.cru.uea.ac.uk/cru/data/ncep/" rel="nofollow ugc">http://www.cru.uea.ac.uk/cru/data/ncep/</a></p> <blockquote><p> 1948-1957 Data Added (Ian Harris, 22 Jul 2008)<br /> 2007 Data Added (Ian Harris, 17 Apr 2008)<br /> 2006 Data Added (Ian Harris, 11 Mar 2007)<br /> 2005 Data Added (Ian Harris, 13 Jan 2006)<br /> 2004 Data Added (Ian Harris, 28 Nov 2005)<br /> 2003 Data Added (Ian Harris, 11 May 2004)<br /> SURFACE TEMPERATURE ADDED (Ian Harris, 10 December 2003)<br /> WARNING NOTE ADDED FOR SURFACE FLUX TEMPERATURES (Ian Harris, 10<br /> December 2003)<br /> ALL DATASETS UPDATED TO 2002 (Ian Harris, 23 June 2003)<br /> LAND/SEA MASKS ADDED (Ian Harris, 16 December 2002)<br /> Land/Sea Masks for regular and Gaussian grids have been added.<br /> NEW WINDOW ONLINE (Ian Harris, 9 July 2002)<br /> The new Quarter-Spherical Window (0N-90N; 90W-90E) is now in use (see table<br /> below).<br /> The old window data (here) has now been entirely replaced.<br /> Please address any requests for new variables to me.<br /> BAD DATA REPLACED (Ian Harris, 23 May 2002)<br /> The TOVS Problem has been resolved and only corrected data appears on this site.<br /> <strong>Anyone wishing to access the old (potentially incorrect) data in order to evaluate<br /> the extent of the problem should contact me.</strong> </p></blockquote> <p>The last entry in that narrative is interesting.</p> ]]></content:encoded> </item> <item> <title> By: jarmo </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6402</link> <dc:creator><![CDATA[jarmo]]></dc:creator> <pubDate>Wed, 28 Dec 2011 21:02:15 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6402</guid> <description><![CDATA[In reply to <a 
href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6397">mamapajamas</a>. I have read that their "computer models" are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn't work for predicting future climate. That's why they have been consistently wrong. Computers are good at predicting strength of materials, heating and electrical problems that have specific, known variables. Climate is too complex, and probably has dozens of variables, known and unknown.]]></description> <content:encoded><![CDATA[<p>In reply to <a href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6397">mamapajamas</a>.</p> <p>I have read that their “computer models” are based on manipulating past data to be able to predict historical climate and events relatively accurately. Unfortunately, the same computer logic doesn’t work for predicting future climate. That’s why they have been consistently wrong.<br /> Computers are good at predicting strength of materials, heating and electrical problems that have specific, known variables. Climate is too complex, and probably has dozens of variables, known and unknown.</p> ]]></content:encoded> </item> <item> <title> By: Joseph A Olson </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6401</link> <dc:creator><![CDATA[Joseph A Olson]]></dc:creator> <pubDate>Tue, 27 Dec 2011 15:40:02 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6401</guid> <description><![CDATA[The Carbon Commodity Fraud is just one of many scams run by the monarch/monopolists who have been in control of America and Europe for over a century. 
They own the war industries, the mainstream media and both sides of the two-party puppet show that passes for our delusion of democracy. Another 'science' fraud is Hubbert's Peak Oil, which diverts all petroleum commerce through the elitist profit network. Green energy is another assault against science, as EVERY 'sustainable' system is non-functional. The underlying disease for all of these symptoms is the FRAUDULENT FRACTIONAL RESERVE BANKING SYSTEM. This Ponzi scheme is owned and run by the globalists, and every war and every depression in the last century has been intentional. Either you believe that hapless humanity stumbles from one expensive bloodbath blindly into the next expensive bloodbath....OR....you realize that a sinister small group of elites has carefully stage-set, directed and PROFITED by human carnage. This is further explained in "Fractional Reserve Banking Begat Faux Reality". The elitists could not stop with just Faux Science; they needed to re-write history to give false provenance to this fraud. Visit http://www.FauxScienceSlayer.com for more on this Multi-level Fraud Marketing and demand a New Magna Carta. It is time to arrest and convict these robber barons.]]></description> <content:encoded><![CDATA[<p>The Carbon Commodity Fraud is just one of many scams run by the monarch/monopolists who have been in control of America and Europe for over a century. They own the war industries, the mainstream media and both sides of the two-party puppet show that passes for our delusion of democracy. Another ‘science’ fraud is Hubbert’s Peak Oil, which diverts all petroleum commerce through the elitist profit network. Green energy is another assault against science, as EVERY ‘sustainable’ system is non-functional. The underlying disease for all of these symptoms is the FRAUDULENT FRACTIONAL RESERVE BANKING SYSTEM. This Ponzi scheme is owned and run by the globalists, and every war and every depression in the last century has been intentional. 
Either you believe that hapless humanity stumbles from one expensive bloodbath blindly into the next expensive bloodbath….OR….you realize that a sinister small group of elites has carefully stage-set, directed and PROFITED by human carnage. This is further explained in “Fractional Reserve Banking Begat Faux Reality”. The elitists could not stop with just Faux Science; they needed to re-write history to give false provenance to this fraud. Visit <a href="http://www.FauxScienceSlayer.com" rel="nofollow ugc">http://www.FauxScienceSlayer.com</a> for more on this Multi-level Fraud Marketing and demand a New Magna Carta. It is time to arrest and convict these robber barons.</p> ]]></content:encoded> </item> <item> <title> By: eck </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6400</link> <dc:creator><![CDATA[eck]]></dc:creator> <pubDate>Tue, 27 Dec 2011 03:30:54 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6400</guid> <description><![CDATA[Well, to me it's obvious: there IS little mainstream media in the historical sense of unbiased, informed investigation or reporting. Most "reporters", video or print, don't do anything but parrot. It's sad that many pay any attention to them.]]></description> <content:encoded><![CDATA[<p>Well, to me it’s obvious: there IS little mainstream media in the historical sense of unbiased, informed investigation or reporting. Most “reporters”, video or print, don’t do anything but parrot. 
It’s sad that many pay any attention to them.</p> ]]></content:encoded> </item> <item> <title> By: Paul </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6399</link> <dc:creator><![CDATA[Paul]]></dc:creator> <pubDate>Tue, 27 Dec 2011 03:01:05 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6399</guid> <description><![CDATA[A little global warming would be a great benefit to mankind. However, computer models cannot model the past climate, so why should we believe their future predictions? Weathermen can't get the weather right 5 days in advance. It's about money and power.]]></description> <content:encoded><![CDATA[<p>A little global warming would be a great benefit to mankind. However, computer models cannot model the past climate, so why should we believe their future predictions? Weathermen can’t get the weather right 5 days in advance. It’s about money and power.</p> ]]></content:encoded> </item> <item> <title> By: Russell C </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6398</link> <dc:creator><![CDATA[Russell C]]></dc:creator> <pubDate>Mon, 26 Dec 2011 21:34:17 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6398</guid> <description><![CDATA[In reply to <a href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6396">Wayne Holbrook</a>. Indeed, as the commenter "mamapajamas" points out below, I could go on at great length there. Not for lack of trying directly on my part, when it comes to telling the MSM about ClimateGate - see my Dec 2009 piece "The Lack of Climate Skeptics on PBS's 'NewsHour' " http://www.americanthinker.com/blog/2009/12/the_lack_of_climate_skeptics_o.html , in which there is a link to my first appearance at the PBS Ombudsman web page. 
My gripe back then to the Ombudsman was the way the NewsHour didn't mention ClimateGate I until a week after the news broke. I haven't stopped - Dec 14th was my seventh appearance on the Ombudsman page; scroll down this page http://www.pbs.org/ombudsman/2011/12/the_mailbag_making_sene_of_cleancut_and_scruf_1.html to "The Relentless Russell Cook..." To their credit, the NewsHour did actually devote 125 words to ClimateGate II (indirectly as "controversial emails"), in a Nov 28th Durban climate conference discussion where the guest was the Washington Post's Juliet Eilperin. She blew off the entire ClimateGate scandal as utterly insignificant.]]></description> <content:encoded><![CDATA[<p>In reply to <a href="https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6396">Wayne Holbrook</a>.</p> <p>Indeed, as the commenter “mamapajamas” points out below, I could go on at great length there. Not for lack of trying directly on my part, when it comes to telling the MSM about ClimateGate – see my Dec 2009 piece “The Lack of Climate Skeptics on PBS’s ‘NewsHour’ ” <a href="http://www.americanthinker.com/blog/2009/12/the_lack_of_climate_skeptics_o.html" rel="nofollow ugc">http://www.americanthinker.com/blog/2009/12/the_lack_of_climate_skeptics_o.html</a> , in which there is a link to my first appearance at the PBS Ombudsman web page. 
My gripe back then to the Ombudsman was the way the NewsHour didn’t mention ClimateGate I until a week after the news broke.</p> <p>I haven’t stopped – Dec 14th was my seventh appearance on the Ombudsman page; scroll down this page <a href="http://www.pbs.org/ombudsman/2011/12/the_mailbag_making_sene_of_cleancut_and_scruf_1.html" rel="nofollow ugc">http://www.pbs.org/ombudsman/2011/12/the_mailbag_making_sene_of_cleancut_and_scruf_1.html</a> to “The Relentless Russell Cook…” To their credit, the NewsHour did actually devote 125 words to ClimateGate II (indirectly as “controversial emails”), in a Nov 28th Durban climate conference discussion where the guest was the Washington Post’s Juliet Eilperin. She blew off the entire ClimateGate scandal as utterly insignificant.</p> ]]></content:encoded> </item> <item> <title> By: mamapajamas </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6397</link> <dc:creator><![CDATA[mamapajamas]]></dc:creator> <pubDate>Mon, 26 Dec 2011 04:04:22 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6397</guid> <description><![CDATA[Wayne: Oh, there's a lot more than that that Russell didn't mention, but he doesn't have 300+ pages to deal with all of the problems. This came to my attention as an obvious scam early on, virtually the moment I learned that the dire predictions being made were based upon computer models. I know little about climate, but I've been involved with computers since around 1967, and know that they can NOT make serious predictions when most of the parameters are sheer guesses. You can't take a hypothesis with only a few known possible parameters, put them into a computer script, and expect a correct solution. It is, in fact, utterly impossible. Computers simply are not that smart. They add ones and zeros and store information. Everything else they do is a variation on those two functions. 
If the ones and zeros they add are not precisely correct, it is not possible for the end-of-job conclusion to be correct. Emphasis: It. Is. NOT. POSSIBLE. Computer climate models are a scam.]]></description> <content:encoded><![CDATA[<p>Wayne: Oh, there’s a lot more than that that Russell didn’t mention, but he doesn’t have 300+ pages to deal with all of the problems. </p> <p>This came to my attention as an obvious scam early on, virtually the moment I learned that the dire predictions being made were based upon computer models. I know little about climate, but I’ve been involved with computers since around 1967, and know that they can NOT make serious predictions when most of the parameters are sheer guesses. You can’t take a hypothesis with only a few known possible parameters, put them into a computer script, and expect a correct solution. It is, in fact, utterly impossible. Computers simply are not that smart. They add ones and zeros and store information. Everything else they do is a variation on those two functions. If the ones and zeros they add are not precisely correct, it is not possible for the end-of-job conclusion to be correct. Emphasis: It. Is. NOT. POSSIBLE. 
Computer climate models are a scam.</p> ]]></content:encoded> </item> <item> <title> By: Wayne Holbrook </title> <link>https://junkscience.com/2011/12/monumental-fault-in-manmade-global-warming-notion-hiding-in-plain-sight/#comment-6396</link> <dc:creator><![CDATA[Wayne Holbrook]]></dc:creator> <pubDate>Sun, 25 Dec 2011 00:40:02 +0000</pubDate> <guid isPermaLink="false">https://junkscience.com/?p=8275#comment-6396</guid> <description><![CDATA[Russell Cook's otherwise excellent article didn't point out that ClimateGate I and ClimateGate II, which should have brought the frenzy to a halt, were ignored by the media.]]></description> <content:encoded><![CDATA[<p>Russell Cook’s otherwise excellent article didn’t point out that ClimateGate I and ClimateGate II, which should have brought the frenzy to a halt, were ignored by the media.</p> ]]></content:encoded> </item> </channel> </rss>