This author discusses fallacious scientific methods and attitudes.
He does a good job.
I summarize another fine discussion of scientific fallacies in the book Judging Science by Foster and Huber (MIT Press, 1997), written in the wake of the SCOTUS decision in Daubert v. Merrell Dow and the Federal Judicial Center’s publication of the Reference Manual on Scientific Evidence (2nd ed. 2000 Mosby; 3rd ed. 2011 National Academy Press).
My commentary on Foster and Huber.
John Dale Dunn MD JD
Diplomate ABEM, ABLM
Admitted but inactive, Texas and Louisiana Bars
Consultant Emergency Services, Peer Review
Introduction to fallacious and erroneous science and the law.
In addition to the review of the Federal Judicial Center’s Reference Manual on Scientific Evidence (text and links in this folder), there are also excerpts from a book by Peter Huber, PhD and attorney, and Kenneth Foster, PhD, on the meaning of the new rules of admissibility for scientific evidence and testimony.
The section of the book excerpted focuses on fallacies in science and the intellectual, epistemological, political, social and psychological aspects of bad science.
First, however, anyone attempting to understand the current state of affairs should read the folder file on Angelo Codevilla’s essay on scientific pretense, along with Dwight D. Eisenhower’s 1961 farewell address, which discussed, after the military-industrial complex, the government-research complex. In that section Ike warns of the danger of big government funding research programs and of how such developments might corrupt the scientific process, which is not about authority and consensus but about skepticism and humility, the self-questioning that is essential for good science.
After reviewing the essay on scientific pretense, one might expand on the problem of oligarchies with Codevilla’s other essay, on the Ruling Class in America, which discusses the problem of an elitist, oligarchy-dominated government tainted by groupthink and statist agendas.
Kenneth Foster and Peter Huber, Judging Science (MIT Press, 1997)
The chapters of importance in this book discuss the judicial articulation of what is good science, followed by essays and discussion on:
Testability and Falsification—Chapter 3
Errors in Science—Chapter 4
Scientific Validity—Chapter 6
Peer Review and the Scientific Community—Chapter 7
That’s enough for this folder’s material; it will be summarized along with the excerpts from the book included in the folder.
The materials are valuable because they include original essays by many of the important figures in the philosophy of science. This summary is by John Dunn, but the original writers are better in their original discussions; consult them for more in-depth inquiry.
1. Karl Popper is quoted, and his teaching on good science is adhered to in Justice Blackmun’s Daubert opinion. Popper, a philosopher, emphasizes the deductive method of developing scientific concepts and solutions, which is heavily focused on evidence and on testing theories against evidence that might falsify them. Falsifiability is essential to a good scientific theory; otherwise Popper considers the theory non-science. Pp. 35-55.
2. Alvin Weinberg proposes a concept of trans-science: questions that are not practically verifiable or that exceed the sensitivity of the instruments and methodology. Pp. 55-56.
3. An example of trans-science is epidemiology in the range below proof of effect, for example with uncertain methodology or a relative risk of less than 2. P. 57.
4. Another trans-science demand in widespread rhetorical use is to prove no risk, that is, to prove a negative. P. 58.
5. Reliability and validity are not the same: for example, a reproducible and reliable measure may be invalid because of a poor instrument, poor methods, or bad underlying science. The first kind of error is easier to identify and correct than the second, which looks valid. Pp. 69-71.
6. Confounders produce validity errors and are the reason observational studies require large effects, on the order of 100 percent. The many confounders, listed at p. 71, include migration or maturation of the study group, attrition, selection, regression to the mean, sequence of effects, experimenter and subject biases and behavior, and even simple things like recall bias and over-reliance on recall.
7. A confidence interval is another measure of the reliability of the data, providing a range of accuracy or reliability around a result. Pp. 79, 81. But some say the confidence interval is too loose. One important consideration: if a confidence interval includes 1.0, there is no basis to argue for an effect. STUDIES RELIED ON BY THE US EPA THAT INCLUDE 1.0 IN THE CONFIDENCE INTERVAL (CI) ARE NOT RELIABLE SUPPORT FOR AN ASSERTION OF TOXICITY. A CONFIDENCE INTERVAL THAT INCLUDES 1.0 SHOWS A NULL EFFECT.
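The point about intervals spanning 1.0 can be sketched numerically. Below is a minimal Python illustration with hypothetical counts (not from the book or any EPA study) computing a 95 percent confidence interval for a relative risk by the standard log-RR (Katz) method; when the interval spans 1.0, the data do not demonstrate an effect.

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """95% CI for relative risk from a 2x2 table (Katz log method):
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR)
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical study: 30/1000 exposed cases vs 25/1000 unexposed
rr, lo, hi = relative_risk_ci(30, 970, 25, 975)
# rr = 1.2, but the interval (about 0.71 to 2.03) spans 1.0,
# so there is no basis to assert an effect.
```

Note that the point estimate (RR = 1.2) is both below 2 and inside an interval spanning 1.0, so both of the reliability objections above apply.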
8. When the signal (results) is in the range of the noise (background natural variability), the reliability of the research is compromised by signal-to-noise confusion. In studies with small effects, like the US EPA air pollution premature death studies, confirmation bias (also called tunnel vision), energized by intellectual passion and commitment to a political agenda, produces studies that do not justify the policies proposed and pursued or the regulatory regimes imposed. P. 84.
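The signal-to-noise problem can be illustrated with a small simulation. This is a sketch with hypothetical numbers (not from the book or any EPA dataset): when a claimed effect is small relative to natural variability, samples drawn from a population with no effect at all frequently look just as large.

```python
import random
import statistics

random.seed(1)          # fixed seed so the sketch is reproducible
true_effect = 0.1       # small "signal," hypothetical units
noise_sd = 1.0          # background natural variability

# Draw 1000 sample means (n = 25 each) from a no-effect (null) population
null_means = [
    statistics.mean(random.gauss(0.0, noise_sd) for _ in range(25))
    for _ in range(1000)
]

# Fraction of pure-noise sample means at least as large as the claimed effect
false_signals = sum(m >= true_effect for m in null_means) / 1000
# A substantial fraction (roughly a third here) means a result of this
# size cannot be distinguished from background variability.
```

The design point is that the standard error of the sample mean (noise_sd / sqrt(25) = 0.2) is twice the claimed effect, so noise alone routinely produces "findings" of that magnitude.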
9. Fallacies and fallacious thinking and research derive from reliance on authority, consensus, acceptance of a vote of those present, obfuscation or cover, and selection bias in the service of intellectual passion or ambition, or the “gold effect,” which is another form of intellectual passion combined with social-pressure consensus bias. All these biases, prejudices, and fallacies of thinking are in contravention of the gold standard for scientific inquiry: skeptical experimentation by researchers who are the strictest judges of the nature and reliability of their research and disciplined in analyzing whether their evidence is proof of a theory. P. 85.
10. Intellectual passion and the ego of the researcher are sources of bad science and one of the most important conflicts of interest. Ego produces a failure to test one’s theory adequately and produces confirmation bias: gathering supportive evidence while rejecting dissent, disagreement, and evidence that falsifies the favored theory. Researchers tend to mythologize themselves and their research, and lack the humility to recognize their own fallibility or see the limits and weaknesses of their research. Their investment in their careers and stature makes them rigid and uncritical in their assertions of theory or positing of solutions or answers. P. 86.
11. Sick science is characterized by:
a. The maximum effect is produced by a phenomenon of barely detectable intensity.
b. Observations are made near the threshold of visibility of the eyes or instruments.
c. There are claims of great accuracy (and significance).
d. Ad hoc excuses are used to nullify any dissent or criticism.
e. The supporters rise and then fall.
12. Another characteristic of sick science is the cargo cult syndrome—pretense of scientific methodology that has no substance. P 89.
13. Another characteristic of sick science is reporting effects considered ominous that are in the range of background. E.g., EMF (electromagnetic fields), which were proposed to cause terrible carcinogenic effects at intensities in the range of the earth’s magnetic field.
14. Patterns of error carry over into policy making: for example, ignoring opportunity benefits; fear of introducing new technologies on the precautionary principle; ignoring safety risks associated with a proposed regulatory regime or remedy; ignoring large existing benefits in favor of fear of risk or the precautionary principle; or, MOST IMPORTANT, IGNORING THE UNINTENDED CONSEQUENCES OF PROPOSED SOLUTIONS, EITHER IN TERMS OF COMPLIANCE COSTS OR DIRECT AND KNOWN RISKS AND DETRIMENTS.
15. Procrustean data torturing is no different from opportunistic data torturing, and certainly no less pernicious and deceitful. P. 99.
16. The seven deadly sins of knowledge, or the cognitive illusions that are nefarious:
b. magical thinking
c. predictability in hindsight
d. anchoring or tunnel vision
e. ease of deception
f. probability blindness or chance ignorance
g. the game of conjuring linkages and ignoring the weak links in a chain. Pp. 118-119.
17. Reliability refers to the reproducibility of the data. Reliability is measured in terms of sensitivity and specificity. Bayes’ theorem yields positive and negative predictive values, both of which depend on sensitivity and specificity (and on prevalence). Pp. 113-115.
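The Bayes’ theorem point can be sketched with a short Python function. The test characteristics below are hypothetical, not from the book; the sketch shows how predictive values follow from sensitivity, specificity, and prevalence, and why even a good test yields a modest positive predictive value when a condition is rare.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical test: 90% sensitive, 95% specific, 1% prevalence
ppv, npv = predictive_values(0.90, 0.95, 0.01)
# ppv is only about 0.15: most positives are false positives
# npv is above 0.99: a negative result is highly reassuring
```

The design choice here, expressing Bayes’ theorem through the four cell probabilities of a 2x2 table, makes the dependence on prevalence explicit.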
18. Back to Popper: the soundness of a theory depends on
a. the conclusions being internally consistent
b. avoiding tautological statements that prove nothing but merely restate the assertion
c. looking for scientific advances in the theory
d. testing the theory with experiments
19. The theory must be logically consistent, falsifiable, must assert something new, or novel, and it must be verified by experimental evidence (p 138, 139).
20. There is a fistful of fallacies:
a. indirect cause asserted
b. necessary causes are not always sufficient cause
c. temporal or post hoc causation is not real causation
d. ecological fallacy transfers observations about populations to individuals
e. the faggot fallacy piles up small and suspect items of proof or evidence and attempts validation by the bundle or the height of the pile
f. weight of evidence fallacy is similar to e. and relies on the pile
g. bellman’s fallacy is another form of the pile fallacy
h. fallacy of risk is the confusion of absolute and relative risk and using one or the other to deceive
i. inappropriate extrapolation is the assumption that one knows the trends and can project
j. new syndrome fallacy is novelty to an extreme
k. insignificant significance—overemphasizing the importance of statistical significance in proof of a theory
l. Fallacy of ignoring large effects in small studies because they fail a statistical significance test.
m. Positive results are fallaciously given more significance
n. Denial of medical mistakes. (All of these are on p. 143.)
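Item h above, the confusion of absolute and relative risk, can be made concrete with a short Python sketch. The numbers are hypothetical: a "doubled" relative risk sounds alarming, yet the same data can amount to a tiny absolute difference.

```python
def risks(p_exposed, p_unexposed):
    """Contrast relative and absolute risk for the same data."""
    rr = p_exposed / p_unexposed   # relative risk
    ar = p_exposed - p_unexposed   # absolute risk difference
    nnh = 1 / ar                   # number needed to harm
    return rr, ar, nnh

# Hypothetical: risk "doubles" (RR = 2), but only from
# 1-in-10,000 to 2-in-10,000
rr, ar, nnh = risks(0.0002, 0.0001)
# rr = 2.0, ar = 0.0001: one extra case per 10,000 exposed.
# Quoting only rr, or only ar, can each be used to deceive.
```

Reporting both figures (and the number needed to harm) is the usual guard against this fallacy.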
21. There are good rules for reading and evaluating a paper as a reviewer. P 149-150
22. Feinstein dissects fallacious and alarming medical reports on reserpine causing breast cancer, coffee causing pancreatic cancer, and alcohol and breast cancer. Feinstein reviews how the studies on these reports were flawed. P 156.
It is important to note that the book Judging Science is an exceptional effort by extraordinary authors, and this writer cannot do them justice. The book’s sections are excerpted by necessity.
Buying the book is the best choice for anyone compelled to learn the intricacies of the legal management of scientific evidence and the theories of science that underlie any reasonable discussion of scientific reliability and veracity.
Here we go.