My how time flies — and nothing gets done.
Below is Steve Milloy’s Senate testimony from March 6, 1995.
Testimony of Steven J. Milloy, President, Regulatory Impact Analysis Project, on S. 333, the Department of Energy Risk Management Act of 1995, before the U.S. Senate Committee on Energy and Natural Resources, March 6, 1995
Mr. Chairman, I want to thank you for the opportunity to testify on S. 333, the “Department of Energy Risk Management Act of 1995,” as amended. I would also like to commend you and the bill’s other sponsors, Senator Johnston and Senator Lott, for recognizing the importance of and need for science-based risk assessment. Science-based risk assessment is the one tool we have that can help us formulate environmental policy based on wisdom rather than fear.
I want to discuss with you today the state of risk assessment as it is practiced by the federal agencies. Importantly, risk assessment is not a new concept or process. The federal government has relied upon risk assessment since the days of the Manhattan Project when risk assessment was used to set radiation exposure limits for workers.
More importantly, science-based risk assessment is bipartisan in nature. For example, in the last Congress, Senator Johnston recognized the importance of science-based risk assessment in his amendments to the then-proposed EPA Cabinet bill. Consistent with this bipartisan support, the objective for S.333 should be to ensure that science-based risk assessment is used to formulate environmental policy.
However, based on what I have learned from my studies, today’s risk assessment process is too often not based in analytic science. Risk assessment has become a process that is based almost exclusively on policy and value judgments. Because the absence of science in risk assessment has largely been obscured, policy makers, the media and the public have not had a full and fair opportunity to evaluate how our environmental policy has been formulated to date.
TODAY, RISK ASSESSMENT IS DRIVEN BY POLICY AND VALUE JUDGMENTS, NOT ANALYTIC SCIENCE
Some may find my characterization of today’s risk assessment process surprising. But through simple analysis, it can easily be demonstrated that risk assessment has become a matter of policy and value judgments, not analytic science.
In 1992, EPA published a self-audit of the role of science in the EPA regulatory process. The report, entitled Safeguarding the Future: Credible Science, Credible Decisions, was prepared by a blue-ribbon panel of independent scientists. At the time Safeguarding the Future was released, one statement in the report greatly intrigued me. That statement was that “science should never be adjusted to fit policy.” In the context of Safeguarding the Future, the clear implication of this statement was that, in fact, science does get adjusted to fit policy. At the time I wondered, just how does this happen and why is it permitted to happen?
One year later, in 1993, the Department of Energy asked my firm to study the respective roles of science and policy in risk assessment. As a result of this study, I learned how science gets adjusted to fit policy as was implied in Safeguarding the Future. I would like to share this with you today in hopes that it will give you a better perspective on current risk assessment practice as you formulate the future of risk assessment.
Our report, which was sponsored solely by the Department of Energy and was published in October 1994, is entitled Choices in Risk Assessment: The Role of Science Policy in the Environmental Risk Management Process. The findings of Choices in Risk Assessment were reviewed not only by the Department of Energy, but by the Science Adviser to the EPA Administrator, and by senior staff of the Congressional Office of Technology Assessment and the Committee on Interagency Radiation Research and Policy Coordination. The primary findings presented in Choices in Risk Assessment are as follows:
Most environmental risks are so small that even their very existence cannot be proven. Despite our tremendous capabilities in science, we cannot see these risks and we cannot measure them.
Because environmental risks are so small, we create them through a risk assessment process which is entirely dependent upon policy and value judgments, not analytic science.
These policies and value judgments are inherently biased and are designed to achieve pre-determined regulatory outcomes and objectives.
Policy makers, the media and the public are unaware that it is policy and value judgments, not analytic science, that cause us to spend hundreds of billions of dollars on many of our environmental policies.
Getting back to Safeguarding the Future, it is through these risk assessment policies and value judgments that science is adjusted to fit policy. Let’s discuss some examples.
EXAMPLE #1: RADIATION RISK ASSESSMENT AND THE LINEAR NONTHRESHOLD MODEL
As a matter of reasonable scientific certainty, we know from actual human experiences that very high doses of radiation increase the risk of cancer. We know this from studies of the survivors of the atomic bomb explosions, from studies of miners who worked in underground uranium mines for many years, and from studies of women who used to paint watch dials and instrument control panels with radium paint in the 1930s and who would lick the brushes to get a better point.
However, we have little to no meaningful information concerning what, if any, risks there are from exposures to much lower levels of radiation such as those, for example, which may be experienced from radon in tap water, from working at or living near Department of Energy facilities, or from naturally-occurring radiation experienced by those who live in Denver, Colorado, those who drive on the roads of Southeastern Idaho, or those who work in and live near the oil fields of Alaska and Louisiana.
To overcome this lack of scientific knowledge and data, we use a device called the linear nonthreshold model that purports to enable us to guess about potential risks of low-level radiation exposures based on what we know of radiation risks from the ultra high-level radiation exposures I mentioned earlier. This model assumes, and federal agencies would have you believe, that the potential radiation-induced cancer risk from living in Denver, Colorado is somehow similar to surviving an atomic bomb explosion, or that taking daily showers in your home is somehow like working in an underground uranium mine. The linear nonthreshold model has never been scientifically validated and is essentially nothing more than a gimmick through which regulation can be justified.
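For readers unfamiliar with the model, the extrapolation it performs can be stated in a single line. The following is a schematic sketch only; the symbols are illustrative and are not drawn from any agency document:

```latex
% Schematic form of the linear nonthreshold (LNT) extrapolation.
%   R_h : excess cancer risk observed at a high dose D_h
%   d   : the (much lower) dose of regulatory interest
% The model assumes risk is strictly proportional to dose, with no
% threshold below which the excess risk vanishes:
\[
  R(d) = \frac{R_h}{D_h}\, d , \qquad d \ge 0 .
\]
% Hence a dose one-thousandth of D_h is assigned exactly one-thousandth
% of the observed high-dose risk -- by assumption, not by measurement.
```

The entire dispute described in this testimony is over whether the straight line through the origin is warranted at doses far below any that have actually been studied.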
EXAMPLE #2: PESTICIDES AND THE LINEAR NONTHRESHOLD MODEL
As another example of where the linear nonthreshold model is used to justify regulation, consider the case of pesticide residues in food. In 1993, the National Academy of Sciences (NAS) published a report entitled Pesticides in the Diets of Infants and Children in which the Academy stated that “pesticides can cause a range of adverse effects on human health, including cancer …” However, the Chairman of the NAS Committee which produced the report, Dr. Philip Landrigan, publicly acknowledged that there is absolutely no data that any health effects have ever been caused by any legal application of pesticides. So how does the National Academy of Sciences reach its conclusion that pesticides can cause cancer?
As part of the process of coming to market, each pesticide must undergo a battery of 120 tests to ensure safety. Some of these tests involve feeding specially-bred laboratory animals massive doses of pesticides to see whether the pesticides cause cancer in the animals. How high are these doses? They are the maximum amount of pesticide the animal can eat without being poisoned to death from just eating the pesticide. These doses are often many thousands of times greater than any human could ever be exposed to in an entire lifetime. If some of the animals get cancer from these massive doses, then, employing the linear nonthreshold model, we assume that normal, minute dietary exposures to pesticides cause cancer. Keep in mind, this assumption has never been validated despite wide use of pesticides in foods for over 50 years.
EXAMPLE #3: FLUORIDATED DRINKING WATER AND THE RESOLUTION OF CONFLICTING DATA
Often when federal agencies assess risk, they will have conflicting data on whether a substance poses a health risk. There may be “positive” studies which associate that substance with a health risk and “negative” studies which do not. Which studies do you believe? Federal agencies generally resolve this dilemma in summary fashion by assuming that negative data are utterly meaningless.
Now, in the spirit of Safeguarding the Future, I will point out to you that when the Public Health Service, the National Academy of Sciences and EPA were faced with this dilemma in the case of fluoridated drinking water, it was, conversely, the positive data which was assumed to be meaningless. Why did they completely reverse their standard risk assessment policy in the case of fluoridated drinking water? Do you think it had anything to do with the fact that the Public Health Service has been actively promoting fluoridated drinking water for 50 years? What would the public think if after 50 years of such promotion the government was forced to say “sorry, troops”?
EXAMPLE #4: WORKPLACE INDOOR AIR QUALITY AND WHAT CONSTITUTES “REPUTABLE SCIENTIFIC EVIDENCE”
A final example to show how science can be adjusted to fit policy involves the Occupational Safety and Health Administration (OSHA). In 1980, the Supreme Court vacated an OSHA standard for benzene, stating that although OSHA may use assumptions to set workplace standards, those assumptions must have some basis in reputable scientific evidence. In 1992, the U.S. Court of Appeals for the Eleventh Circuit vacated OSHA health standards for 428 substances because, again, OSHA employed assumptions which had no basis in reputable scientific evidence.
Just within the last year, OSHA proposed to regulate indoor air quality in the workplace at an OSHA-estimated cost of $8 billion per year. Attached to your copy of my testimony is the “science” underlying OSHA’s risk assessment for the $8 billion workplace indoor air quality proposal. This four-page, hand-written, tabular analysis purports to summarize data from a scientifically-unvalidated survey of twelve office buildings in San Francisco, California. Do you know how many office buildings there are in the U.S.? There are over 4,500,000 according to OSHA’s own estimates!
So, even assuming for the sake of argument that the data in the twelve-building survey are valid, does OSHA’s assumption that these twelve buildings are representative of the over 4,500,000 indoor workplace environments across the country constitute scientific evidence, especially insofar as it is supposed to justify a regulation which would cost building owners $8 billion per year? How big a guess is that? How does your answer change if I tell you that this analysis was not even subjected to independent peer review prior to its use by OSHA? Is an analysis “reputable,” as required by the Supreme Court, if it is not independently peer-reviewed?
CONCLUSION: THE FUTURE OF RISK ASSESSMENT IS UP TO CONGRESS
As I have tried to illustrate by example, today’s risk assessment process permits governmental agencies to produce any results to justify any regulation they want. What is worse is that these results are then labeled and publicized as “science” when in fact they are nothing more than policy and value judgments. Given that there is rarely a sufficient amount of data on which to make wholly scientific decisions, if an agency wants to regulate, all it has to do is apply the usual unvalidated assumptions to scanty data and then, automatically, both a risk and a basis for regulation are manufactured. If an agency doesn’t want to regulate, all it has to do is change assumptions, and the risk and basis for regulating disappear. Although agencies would have Congress, the media and the public believe that this type of risk assessment is a scientific process, as can easily be demonstrated, the process has little to do with science. This is what was carefully implied in Safeguarding the Future by the statement that science should not be adjusted to fit policy.
I would like to say that I applaud your efforts to make the risk assessment process and, thereby, the process through which we establish environmental policy, more scientific in nature. However, I must say that based on my analysis of the present system, neither the bill recently passed in the House (H.R. 1022) nor S. 333 can guarantee that environmental policy will be science-based. To date, the strength of these bills is the requirement that the policy and value judgments used in risk assessment be disclosed so that policy makers, the media and the public can more easily understand how the regulatory agencies arrive at their conclusions. As an example of why this is important: after 20 years and probably a billion dollars spent researching the potential health effects of dioxin, EPA still must rely entirely upon policy and value judgments to make up the risks to humans from dioxin.
However, mere disclosure does not guarantee rational regulatory decisions. As an example, I am currently working on a project sponsored by the National Environmental Policy Institute examining how the application of science to Superfund risk assessment would make Superfund cleanups more efficient in terms of costs and more effective in terms of real risk reduced. As I have read through EPA Superfund documents, at first glance, I am almost persuaded to commend EPA in the disclosure of the policies and value judgments used in Superfund risk assessments. However, as it turns out, this disclosure by EPA is merely pro forma. In the end, the actual cleanup decisions are almost invariably based upon the nonscientific policy and value judgments that have gone into the site risk assessments. Frankly, and perhaps more than the Superfund law itself, it is this type of implementation of Superfund by EPA that appears to have contributed most significantly to delaying as well as increasing the costs of cleanups. Thus, while disclosure is a cornerstone of improving the process, it is inadequate by itself to ensure that our environmental policies are based on scientific knowledge.
In closing, I would like to urge the Committee that now is the time to address risk assessment. We have reached the point in our efforts on environmental protection where, as a matter of scientific knowledge and data, environmental protection is largely no longer an issue of public health. Now more than ever we need to rely on science-based risk assessment to ensure that we are not spending our scarce resources unwisely or frivolously. To date, and as demonstrated in Safeguarding the Future and Choices in Risk Assessment, the federal agencies have not shown either the willingness or the capability to bring about this much-needed change on their own.
Thank you very much for this opportunity to share my thoughts with you and I offer my assistance to the Committee as it addresses the complex and important issue of risk assessment.
I will now take any questions that you may have.