by Steven Milloy and Michael Gough
August 25, 1997, Investor’s Business Daily
Bad science comes. Bad science goes. That’s the great thing about science. It’s only as permanent as it is sound. But what happens when bad science becomes law?
In June 1996, Tulane University researchers reported alarming results from so-called “endocrine disrupters” – manmade chemicals, like pesticides, PCBs and plastics, that allegedly disrupt hormonal systems and cause everything from cancer to infertility to attention-deficit disorder.
The endocrine disrupter issue had gained a great deal of publicity in March 1996 with the publication of the book “Our Stolen Future,” an alarmist compendium of anecdotes carefully woven together to support the theory that manmade chemicals are wreaking havoc with human hormonal processes.
Scientists panned “Our Stolen Future” from the outset. The book’s promoters saw their claims founder on balanced presentations of the endocrine disrupter issue by major media. By May 1996, the hormone hysteria had largely subsided.
But then Science, a pre-eminent scientific journal in the U.S., got into the act. It trumpeted the Tulane research, which claimed that mixtures of pesticides disrupt hormone systems up to 1,600 times more than individual pesticides alone.
Science not only published the study, but featured it in the news section of the magazine and published a favorable editorial. This was enormous fanfare in a journal where almost all articles are published without editorial notice. The media immediately picked up on this hoopla.
So did lawmakers. In the wake of the publicity, but before independent scientists could test the Tulane results, Congress included in the Food Quality Protection Act of 1996 a mandate that the Environmental Protection Agency develop an endocrine disruption screening program for pesticides. It marked a new world speed record for enacting science into law.
Soon after, the Tulane research started to unravel.
By November 1996, four laboratories had tried but failed to replicate the Tulane findings – a highly unusual outcome in the controlled setting of laboratory research. In January 1997, Science printed a letter from scientists at the U.S. National Institute of Environmental Health Sciences, Texas A&M University and Duke University reporting the Tulane results could not be replicated.
A month later, the highly regarded international science journal Nature published results from British researchers who could not replicate the Tulane findings.
The Tulane researchers initially responded to critics by claiming special conditions in their laboratory. But “special conditions” are the stuff of magic, not science. Scientific findings are supposed to describe the world in general, not some special place.
Last month, the Tulane team finally threw in the towel and retracted their study. In a letter to Science they wrote, “We . . . have not been able to replicate our initial results . . . (and) others have been unable to reproduce the results we reported.”
We may never know whether the Tulane research resulted from mistakes or fraud. In any case, the scientific community has rejected the study. But in stark contrast to the media coverage of the study when it was first published, neither Science nor the mainstream press has ballyhooed the rebuttal and retraction.
Worse, Tulane’s junk science survives in law. And EPA officials have stated that no changes in policy are forthcoming. So EPA will establish more onerous testing procedures, products will become more expensive or not available at all, and consumers will suffer. All for phantom protection from a nonexistent problem.
In the end, science corrected itself. Will Congress?
Michael Gough is director of risk studies at the Cato Institute. Steven Milloy is executive director of The Advancement of Sound Science Coalition.
I followed this story at the time. Several hundred pairs of chemicals were tested, and the most extreme pair was the one reported in the Science paper. When asked to replicate that finding, they could not. One way to look at this is that all the data were random; with ordinary variability, some pair will show a high value. If they had run the experiment again with all the pairs, there would again have been a high value, but for a different pair. Essentially, the first report was a random value that happened to be high. The PI asked the technician to replicate the work, and when the technician could not, the technician was blamed. The real problem was that the PI did not replicate the finding before rushing off to publish. I asked for the entire data set to see if my reading of the situation was correct. They would not give it to me. I asked the foundation that funded the work to help, and it would not.
Randomness happens and has consequences.
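To make the point about selecting extremes from noise concrete, here is a minimal simulation sketch in Python. Everything in it is hypothetical: the pair count (NUM_PAIRS = 300), the measure_all_pairs function, and the noise distribution are illustrative choices, not the Tulane data or protocol. It simply shows that if several hundred pairs are screened under pure noise and only the most extreme one is reported, that result tends not to replicate, while a fresh screen crowns a different “most extreme” pair.

```python
# Minimal sketch (hypothetical numbers, not the Tulane data): screening many
# pairs under pure noise and reporting only the most extreme one.
import random

random.seed(1)

NUM_PAIRS = 300  # hypothetical number of chemical pairs screened


def measure_all_pairs():
    """Simulate one screening run: every pair's 'potency' is pure noise."""
    return {pair: random.gauss(1.0, 0.5) for pair in range(NUM_PAIRS)}


# First run: pick the most extreme pair, as a headline-seeking screen might.
run1 = measure_all_pairs()
top_pair = max(run1, key=run1.get)
print(f"Run 1: pair #{top_pair} looks extreme at {run1[top_pair]:.2f}")

# Attempted replication of just that pair: it falls back toward the mean.
replication = random.gauss(1.0, 0.5)
print(f"Replication of pair #{top_pair}: only {replication:.2f}")

# Second full screen: there is again an extreme value, but for a different pair.
run2 = measure_all_pairs()
new_top = max(run2, key=run2.get)
print(f"Run 2: the new 'most extreme' pair is #{new_top} at {run2[new_top]:.2f}")
```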
A timely post, Gentlemen. I have just used this information in a comment on PhysOrg concerning their story “3-D images show flame retardants can mimic estrogens.”