I am like a broken record, but it's an important tune.
Population studies are done by epidemiologists: social studies, public health studies, studies about things that are good for us or bad for us.
In our world at JunkScience.com one of the focuses is reliable evidence, so we need to know the rules on population studies.
Cohort studies follow groups to see what happens.
Case-control studies look back on people who have had something happen.
Ecological (observational) studies just look at patterns of events in populations, without necessarily knowing anything about the individuals in those populations at all.
Now epidemiological studies are interesting and sometimes identify interesting things, but they are not Randomized Controlled Trials (RCTs).
Drug efficacy studies are an example of RCTs: a group of subjects and controls is selected and randomized, profiled to eliminate confounding factors, and then the study commences, usually with blinding of both subjects and researchers to avoid bias.
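The key move in an RCT, random assignment, can be sketched in a few lines of Python. This is only an illustration; the subject labels and group sizes are invented, not from any actual trial.

```python
import random

def randomize(subjects, seed=42):
    """Randomly split subjects into equal treatment and control arms."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Invented roster of 100 subjects.
subjects = [f"subject_{i}" for i in range(100)]
treatment, control = randomize(subjects)
# Because each subject had an equal chance of landing in either arm,
# confounders (age, smoking, etc.) are expected to balance out between arms.
```

The point of the randomization is exactly the point of the paragraph above: assignment is unrelated to any confounder, known or unknown.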
The EPA has sponsored millions of dollars' worth of ecological studies on the effects of air pollution; big populations are usually studied as regional groups, for example all residents of a metropolitan statistical area.
So the study proceeds without controls, and an endpoint is used; for air pollution it's deaths compared to air pollution, or maybe hospitalizations or emergency visits compared to air pollution or ozone.
Lots of confounders in ecological/observational studies.
In the study of toxicology the Bradford Hill Rules are pertinent.
From the Ask search tool:
Bradford Hill criteria for asserting toxicological causation
In 1965 Austin Bradford Hill proposed a series of considerations to help assess evidence of causation, which have come to be commonly known as the “Bradford Hill criteria”.
In contrast to the explicit intentions of their author, Hill’s considerations are now sometimes taught as a checklist to be implemented for assessing causality. Hill himself said “None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non.”
1. Strength: A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.
2. Consistency: Consistent findings observed by different persons in different places with different samples strengthens the likelihood of an effect.
3. Specificity: Causation is likely if a very specific population at a specific site has a specific disease with no other likely explanation. The more specific the association between a factor and an effect, the bigger the probability of a causal relationship.
4. Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).
5. Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.
6. Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).
7. Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that “… lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations”.
8. Experiment: “Occasionally it is possible to appeal to experimental evidence”.
9. Analogy: The effect of similar factors may be considered.
Now the rules look a lot like Koch's postulates for determining the cause of a disease, but why not? Biological studies should follow a certain methodology to reach the truth, then repeat to assure the reliability of the findings.
Epidemiology in the setting of toxicological studies is an effort to measure effects of toxins by studying exposed populations.
Here is where I borrow from a friend and ally of Milloy and me, Stan Young, PhD.
Stan Young
Rules for Observational Studies
Composite Rules/Checklist for Editors and Referees
1. The analysis protocol and a listing of the questions at issue, including any biological rationale, was made public or filed with a trusted 3rd party prior to study initiation and examination of the data.
2. The guidelines for reproducible research (Peng et al.) are followed: post the analysis protocol prior to the study, and make the analysis code and data available electronically.
3. The analysis protocol gives methods and strategies to be used to correct for multiple testing.
4. The analysis protocol gives methods and strategies to be used to correct for multiple modeling.
5. The analysis protocol was developed without access to the response variable(s) of interest, Rubin, 2007.
6. Account for selection bias, cross-sectional and/or longitudinal bias.
7. Investigate and demonstrate balance of important covariates between the treatment groups without access to the outcome, Rubin, 2007.
8. Important covariates are available for use in the study.
9. Results are replicated, either by using a test and holdout sample or by use of a separate data base.
10. Any claim made needs to meet a minimal standard of magnitude; for example, a risk ratio has to be 2 or larger to make a claim. See the Federal Judicial Center Reference Guide.
11. The quality of the research (Items 1-10), not the presence of “statistical significance” of a finding, is the condition of publication.
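Items 3 and 4 of the checklist concern correcting for multiple testing and multiple modeling. As a hedged sketch of what such a correction does (the p-values below are invented for illustration), here is the simplest version, a Bonferroni correction, in Python:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: with m tests, a raw p-value must beat
    alpha / m to count as significant. Returns a list of booleans."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Ask 20 questions of one data set: a raw p of 0.01 looks "significant"
# against alpha = 0.05 alone, but fails 0.05 / 20 = 0.0025 once you
# account for having asked 20 questions.
p_values = [0.01] + [0.20] * 19
survivors = bonferroni(p_values)
```

This is why an analysis protocol that lists every question in advance matters: the correction depends on knowing how many questions were asked.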
References
Peng RD, Dominici F, Zeger SL. Reproducible Epidemiologic Research. American Journal of Epidemiology 2006;163:783-789.
Reference Guide on Epidemiology, Federal Judicial Center, Reference Manual on Scientific Evidence, 2nd ed., (Federal Judicial Center, 2000), www.fjc.gov/public/pdf.nsf/lookup/sciman00.pdf/$file/sciman00.pdf (Pages 362-363 and 384)
Rubin D. The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Statistics in Medicine 2007; 26:20–36.
This is Stan Young’s summary of the meaning of Relative Risk.
Please note the preeminent experts he quotes at the end.
Stan Young on relative risk as a measure of causation
RR
Relative Risk or Risk Ratio (sometimes also called Hazard Ratio or Odds Ratio, but since the meaning of odds is quite different, especially in, for example, racing circles, this usage is to be avoided) is at the very heart of the dispute between epidemiology and real science. If X% of people exposed to a putative cause suffer a certain effect and Y% not exposed to the cause (or, alternatively, the general population) suffer the same effect, the RR is X/Y. If the effect is “bad”, then an RR greater than unity denotes a “bad” cause, while an RR less than unity suggests a beneficial cause (and likewise if they are both “good”). An RR of exactly unity suggests that there is no correlation. There are a number of problems in a simplistic application of RR. In particular:
1. Even where there is no correlation, the RR is never exactly unity, since both X and Y are estimates of statistical variates, so the question arises as to how much deviation from unity should be acceptable as significant.
2. X and Y, while inherently unrelated, might be correlated through a third factor, or indeed many others (for example, age). Sometimes such confounding factors might be known (or thought to be known) and (sometimes dubious) attempts are made to allow for them. Where they are not known they cannot be compensated for, by definition.
3. Sometimes biases are inherent in the method of measurement employed.
4. Statistical results are often subjected to a chain of manipulations and selections, which (whether designed to or not) can increase the deviation of the RR from unity.
5. Publication bias can give the impression of average RRs greater than 1.5 when there is no effect at all.
For these reasons most scientists (which includes scientifically inclined epidemiologists) take a fairly rigorous view of RR values. In observational studies, they will not normally accept an RR of less than 3 as significant and never an RR of less than 2. Likewise, for a putative beneficial effect, they never accept an RR of greater than 0.5. Sometimes epidemiologists choose to dismiss such caution as an invention of destructive skeptics, but this is not the case. For example:
In epidemiologic research, [increases in risk of less than 100 percent] are considered small and are usually difficult to interpret. Such increases may be due to chance, statistical bias, or the effects of confounding factors that are sometimes not evident. [Source: National Cancer Institute, Press Release, October 26, 1994.]
“As a general rule of thumb, we are looking for a relative risk of 3 or more before accepting a paper for publication.” – Marcia Angell, editor of the New England Journal of Medicine
“My basic rule is if the relative risk isn’t at least 3 or 4, forget it.” – Robert Temple, director of drug evaluation at the Food and Drug Administration.
“An association is generally considered weak if the odds ratio [relative risk] is under 3.0 and particularly when it is under 2.0, as is the case in the relationship of ETS and lung cancer.” – Dr. Kabat, IAQC epidemiologist
This strict view of RRs may be relaxed somewhat in special circumstances; for example in a fully randomized double-blind trial, as opposed to an observational study, which produces a result with a high level of significance.
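Point 1 in the list above, that an observed RR is a statistical estimate rather than an exact value, can be sketched in Python. The counts are invented for illustration; the confidence interval uses the standard log-scale approximation.

```python
import math

def relative_risk(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    """RR = X/Y, where X is the rate among the exposed and Y the rate
    among the unexposed, with an approximate 95% confidence interval
    computed on the log scale."""
    x = exposed_cases / exposed_n
    y = unexposed_cases / unexposed_n
    rr = x / y
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / exposed_cases - 1 / exposed_n
                   + 1 / unexposed_cases - 1 / unexposed_n)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Invented example: 30 of 1,000 exposed vs 20 of 1,000 unexposed.
rr, ci = relative_risk(30, 1000, 20, 1000)
# The point estimate is 1.5, but the interval runs from below unity
# (consistent with no effect) to above 2 -- exactly the ambiguity
# the RR-of-2-or-3 rule of thumb is meant to guard against.
```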
You can see why, if the EPA keeps putting up air pollution studies with RRs of less than 1.2, it would make me fight back, since they aren't proving anything, and they use their small associations to project hundreds of thousands of deaths from small particles, for example.
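To see how a small association gets projected into large death counts, here is a hedged sketch using the standard attributable-fraction formula for a whole-population exposure, (RR − 1)/RR. The RR and the baseline death count are invented round numbers, not the EPA's figures.

```python
def attributable_deaths(rr, baseline_deaths):
    """Deaths 'attributed' to an exposure affecting everyone:
    attributable fraction = (RR - 1) / RR, applied to baseline deaths."""
    paf = (rr - 1.0) / rr
    return paf * baseline_deaths

# Invented: an RR of 1.1 applied to 2,000,000 annual deaths yields
# roughly 182,000 "attributable" deaths, a headline number built
# entirely on an association far below the RR-of-2 threshold.
deaths = attributable_deaths(1.1, 2_000_000)
```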
Class dismissed.
Wanna see some more? Go to the top and search “epa epidemiology”; there is a series of essays.