I have covered this ground often before: how the EPA perverts epidemiology.
A recent exchange with a PhD math guy on the question of small-association epidemiology forces me to revisit what good epidemiology is, and why small associations are not proof of anything.
So, here goes.
First of all, epidemiology is the serious and important study of populations: what happens to them when they are exposed, usually to something thought to be detrimental.
Here is a Cliff's Notes version of the chapters on epidemiology in the 2nd and 3rd editions of the Reference Manual on Scientific Evidence, published by the Federal Judicial Center, the entity responsible for educating federal judges.
I wish I could make this simple, but it ain't.
For serious scholars on the issue of how the EPA cheats, these materials are essential. Give me a chance to show you. I didn't write these things; they were written by the premier experts in epidemiology on how studies should be done.
The short version of what is in these chapters is this:
1. Epidemiological studies look for "associations," meaning some increase in events (deaths, disease, harm of some kind) compared to the control population.
2. The associations can be positive or negative; for example, a 5% increase in disease or death is a Relative Risk (RR) of 1.05.
3. HONEST epidemiologists know that an RR of less than 2 indicates a relatively small association, one that fails to meet the requirements of the rules of observational epidemiology for proof of causation.
4. An association of less than 100%, that is, a Relative Risk of less than 2, is considered within the noise range of an observational epidemiological study.
5. So when the Relative Risk is less than 2, it reflects an association that is within the noise range created by the relatively crude and uncontrolled nature of observational epidemiological studies, as the sketch after this list illustrates.
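To make the arithmetic in these points concrete, here is a minimal sketch in Python. The event rates are hypothetical, chosen purely for illustration; the RR > 2 cutoff is the threshold discussed in the Reference Manual chapters.

```python
# Minimal sketch of the relative-risk arithmetic in points 2-5.
def relative_risk(exposed_rate: float, control_rate: float) -> float:
    """RR = event rate among the exposed / event rate among controls."""
    return exposed_rate / control_rate

# Hypothetical rates: 10.5 cases per 1,000 exposed vs 10 per 1,000 controls.
rr = relative_risk(0.0105, 0.0100)
print(f"RR = {rr:.2f}")                         # RR = 1.05, a 5% increase
print("Clears the RR > 2 threshold?", rr > 2)   # False: within the noise range
```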
EPA air pollution studies all show associations in this no-proof range. ALL IN THE RANGE OF NOISE FOR OBSERVATIONAL EPIDEMIOLOGICAL STUDIES.
I prefer to present the case through the experts in the field, as chosen by the federal judiciary to write their Reference Manual on Scientific Evidence, published in a 2nd edition in 2000 and a 3rd edition in 2011.
For the benefit of the reader, I did not include the whole epidemiology chapter from either edition. At the top of the excerpts from each chapter is the information on how to find the whole chapter.
I just hoped to engage the reader on the important stuff related to Relative Risk and proof of causation.
Click to access 2nd-and-3rd-epi-highlights-ref-manual.pdf
OK, so now we have an exchange between me, John, and a prominent PhD mathematician on the question of epidemiological effects that are measured as small associations. The question is: what do they mean, are they real, and should we consider them evidence?
Here is the debate, if you will, between me and the mathematician.
MATH GUY versus BIOLOGY GUY (Me) on the question of small associations in observational epidemiological studies.
Math guy: Many people, including me, would be pleased to reduce a risk that only had one chance in 3 of killing you,
John: That's not the point, since small associations, 33% for example, don't show risk unless the magnitude of the association exceeds 100%, a relative risk (RR) of 2 or more. Showing one chance in three, or whatever, is a small association, and when such associations fail to reach the threshold for reliability they show nothing; they don't get out of the noise range for the study.
Math guy: ...even though it is not "more likely than not" that it will.
John: "More likely than not" is a legal term of art; it just means that the evidence is now considered reliable. Until the evidence reaches that reliable stage, which is an RR of 2, a 100% difference in association, it is nothing; it means nothing, as in the comment I made above.
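For readers who want the arithmetic connecting RR > 2 to "more likely than not": epidemiologists express it through the attributable fraction among the exposed, (RR - 1)/RR, which crosses 50% exactly at RR = 2. A minimal sketch, with illustrative RR values:

```python
# Why RR = 2 is the crossover for "more likely than not":
# the attributable fraction among the exposed, (RR - 1) / RR, is the
# share of exposed cases beyond what the background rate would produce.
def attributable_fraction(rr: float) -> float:
    return (rr - 1) / rr

for rr in (1.05, 1.5, 2.0, 3.0):
    print(f"RR = {rr:.2f} -> attributable fraction = {attributable_fraction(rr):.0%}")

# Output: 5%, 33%, 50%, 67%. Only above RR = 2 is a given exposed case
# "more likely than not" attributable to the exposure.
```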
Math guy: The argument that RR must exceed 2 to meet a "more likely than not" standard is not an urban legend.
John: Before, in another communication, you said it was an urban legend, to make fun of me; now you find out from me that it is in a book for judges on epidemiology written by three authors of great national repute, including Leon Gordis, MD, DrPH. The rule was suggested by Gordis and his co-authors in their chapter guiding judges on how to evaluate epidemiological testimony and evidence. I am not making this stuff up.
Math guy: It is simple arithmetic (if the RR is causal, which I think is the real issue).
John: The issue is that an RR of less than 2 is not considered adequate to get out of the range of noise, so it is not anything, and certainly not evidence of causation.
Math guy: It is the argument that any RR < 2 should be considered not causal that strikes me as not worth using; it is neither true nor sensible.
John: That is where you are demonstrably wrong. It does make a difference, because of what we know about the reliability and relevance of the associations found in observational ecological epidemiological studies. In the low ranges, below a 100% association (or even higher for the more conservative), honest epidemiologists are reluctant to assert proof of causation. Associations with an RR of less than 2 (some say less than 3 or 4) mean nothing because of the crude nature of observational studies.
Math guy: Focusing less on whether the RR exceeds 2 (which is irrelevant for environmental risks, where "more likely than not" to kill you is not relevant)...
John: Yes, it is relevant; it is the question that is relevant, material, and dispositive (not to sound too lawyerly). The important question is: is the association large enough to support a reliable argument for proof of causation? If the magnitude of the association is inadequate, there is no causal argument; there is only a number for an association within the range of meaningless noise in an observational study.
Math guy: ...and more on whether the RR is causal might do much to move our understanding forward, IMO.
John: The only way an association (which is expressed as a Relative Risk) is considered causal is if it is big enough to overcome the crude nature of observational studies. I cannot change the history of how that rule was established.
I refer to the rules because they still pertain and are applied when politics is not affecting the conduct of researchers and journal editors. The rule on RR for observational studies is well established, not only in the Reference Manual but by many other experts and journal editors. I am not inventing the rule, but I will not deny that many published epidemiological studies violate it.
Some say that if they followed the rules on magnitude of association, epidemiologists would be out of business.
Math guy: Leon Gordis, whom I met a few decades ago, never advocated (to my knowledge) that RR < 2 implies lack of causation.
John: You think he didn't write the chapters I sent you, or did you not read them? I even sent you the page numbers of the important discussions. You think meeting the man trumps reading what he wrote? I think not.
Math guy: Nor does anyone else with a firm grasp of arithmetic.
John: This is not about arithmetic. I know what 1.05 means in arithmetic; you don't know what it means in epidemiology. In an observational study it means nothing.
Math guy: Again, if an exposure increases mortality rates by less than 100%, that could still be huge, even though RR < 2.
John: Do you really not understand? That's the game they play: a 10 percent association, projected onto a population of 300 million, is indeed a big number. However, 10%, an RR of 1.1, is in the range of meaningless noise in an ecological observational study. PERIOD.
Your colleagues in epidemiology, if they are honest, will explain it to you.
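Here is a rough sketch of the projection game John describes. The baseline death rate is a hypothetical figure, chosen only to show how a small RR applied to a large population yields a headline-sized count:

```python
# Sketch of the projection game: a hypothetical baseline death rate,
# chosen only for illustration, applied to a 300 million population.
population = 300_000_000
baseline_rate = 0.008        # hypothetical annual deaths per person
rr = 1.1                     # the 10% association John mentions

baseline_deaths = population * baseline_rate
excess_deaths = baseline_deaths * (rr - 1)

print(f"Baseline deaths:            {baseline_deaths:,.0f}")
print(f"'Excess' deaths at RR 1.1:  {excess_deaths:,.0f}")
# A six-figure headline number, produced by an RR that never leaves
# the noise range John describes.
```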
I think you give your associate too much credit by calling him "Math Guy." He doesn't seem to understand math any better than biology; he's quoting raw numbers with no context. One plus one may equal two in the raw conceptual field of math, but one apple plus one orange does not equal a banana. If he really were a mathematician, he'd know that a 33% increase in risk is not a 1 in 3 chance. Even a 500-fold increase in a tiny risk is still tiny: 500 times a one-in-a-million chance is still only a 0.05% chance. We haven't even gotten to the standard practice of backwards statistics: measuring how many people with a symptom were exposed to an agent rather than how many people exposed to the agent contracted the symptom. Anyone who understands math knows all epidemiological cause/effect claims are a joke.
https://xkcd.com/1252/
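To spell out that arithmetic, a short sketch distinguishing a relative increase from an absolute probability; the one-in-a-million baseline risk is hypothetical:

```python
# A relative increase is not an absolute probability.
# The one-in-a-million baseline risk below is hypothetical.
baseline = 1e-6                    # 1-in-a-million baseline risk

up_33_percent = baseline * 1.33    # a "33% increase in risk"
times_500 = baseline * 500         # 500 times the baseline

print(f"After a 33% increase: {up_33_percent:.8f} (nowhere near 1 in 3)")
print(f"At 500x the baseline: {times_500:.2%} (still only 0.05%)")
```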
The truth is that NO amount of relative risk implies a causal relationship. It implies correlation, and may be useful as a starting point for research of a more conclusive nature. It wouldn't matter if 100% of the people who took action A experienced a given result if that same group of people also took actions B, C, and D. In general population studies there are just too many confounding factors to trust any outcome; even 1:1 relationships are not a guarantee of causality.
Math Guy is laboring under the common misconception that epidemiologists use the same definition of "cause" that the rest of us do. In that respect, any epidemiologist who claims to have proven that one thing causes another is outright lying. I'm sure there are honest epidemiologists out there, but they never seem to make the news. In what other context would you use the word "cause" for an action that led to an outcome less than 25% of the time? According to one European study from 2006, 24.4% of male "heavy smokers" develop lung cancer (no word on what age they reached or how long they smoked prior to diagnosis, or any information on other exposures to environmental agents).
Your adversary is also lousy at debate. In addition to confusing correlation with causation, he uses argumentum ad ignorantiam (treating a lack of evidence against as proof) in the sentence "Leon Gordis…never advocated (to my knowledge) that RR < 2 implies lack of causation." He assumes that Leon Gordis never advocated this simply because he doesn't have evidence that he did.
He invokes argumentum ad populum in the sentence "Many people, including me, would be pleased to reduce a risk that only had one chance in 3 of killing you." "Many people" are wrong about many things, and what pleases them is irrelevant to the truth, even if he weren't so wrong on the math that got him to a "1 in 3 chance of killing you (eventually)."
He combines argumentum ab auctoritate and ad hominem in the sentence "Nor does anyone else with a firm grasp of arithmetic." He simultaneously implies that people who are good at math agree with him and that anyone who disagrees is bad at math, none of which has anything to do with the question at hand.
Bottom line: you're tossing your pearls before swine with this one, John.