Nature Articles

Joe Bast sent me this summary of Nature articles with his brief comments. I liked it.

Thanks Joe:

Friends,

Attached are 25 pages from some recent issues of NATURE that I found especially interesting.

* “Modelling the effects of subjective and objective decision making in scientific peer review,” by In-Uck Park, Mike W. Peacey, and Marcus R. Munafò, Nature, Vol. 506, February 6, 2014, pp. 93-96. This lengthy (4 pages) and heavily footnoted article appears as a “Letter,” so I’m not sure it’s peer reviewed. The authors are economists in the UK and South Korea. This is an important article about the collapse of peer review. It repeatedly cites Ioannidis’s pioneering work in this field, and points out that “it has been shown that increased popularity of a particular research theme reduces the reliability of published results, and that findings published in prestigious journals are less reliable and more likely to be retracted.” They also cite research showing “a mismatch between the claims made in the abstracts [of academic articles], and the strength of evidence for those claims based on a neutral analysis of the data, consistent with the occurrence of herding.”

* “Drought and fire change sink to source,” by Jennifer K. Balch, reports on a study that found the Amazon forest biome released more carbon than it took up in 2010, a major drought year, due to forest fires and reduced photosynthesis rates. Since many computer models forecast more drought due to global warming, she warns that “if drought and fire frequencies increase in the future, they may override the Amazon’s function as a carbon sink.”

* “Carbon dioxide storage is secure.” Vivian Scott, a scientist at Edinburgh University in the UK, writes on behalf of six cosignatories to object to an article in a previous issue that warned that seabed fractures pose a threat to plans to sequester carbon dioxide under the North Sea in the Sleipner gas field.

* “Make supply chains climate-smart,” by Anders Levermann, a “professor of dynamics of the climate system at the Potsdam Institute for Climate Impact Research, Germany,” describes efforts to “track the flows of specific goods at a scale appropriate for the effects of natural disasters” so that those effects can be observed and then prepared for and defended against. While the author repeats the usual mantra of AGW alarmism and cites no new data, Robert Carter in Australia will be happy to know work along these lines is underway.

* “Statistical Errors,” by Regina Nuzzo, in the February 13th issue of NATURE, is a fascinating article about the value of P-values. “P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume.” John Dunn may find this relevant to his recent debate with others over the role of confidence levels and statistical significance.

* “Managing forests in uncertain times,” by Valentin Bellassen and Sebastiaan Luyssaert (both names are spelled correctly). The authors are Frenchmen who seem to be writing far outside their area of competence. They report that “the world’s forests have absorbed as much as 30% (2 petagrams of carbon per year; Pg C yr-1) of annual global anthropogenic CO2 emissions – about the same amount as the oceans.” Carbon dioxide fertilization and anthropogenic nitrogen emissions are “accelerating tree growth worldwide,” but this trend may be endangered by “forest fires, infestations, droughts and storms” thought to be linked to global warming. “A quantified understanding of how all these drivers shape the forest carbon sink is lacking. And predictions of how they will change during this century remain uncertain.”

* “A green illusion,” by Kamel Soudani and Christophe François, reports on a study appearing in the Letters section (“Amazon forests maintain…” see below) that warns that previous estimates of the “greenness of Amazon forests” based on satellite observations “are in fact an optical artifact.”

* “A two-fold increase in carbon cycle sensitivity to tropical temperature variations,” by Xuhui Wang et al., is full of computer modeling gobbledygook, but I think they found that “most of the models used do not correctly capture the response of tropical carbon fluxes to climate variability,” which “calls into question their ability to predict the future evolution of the carbon cycle and its feedbacks to climate.”

* “Amazon forests maintain consistent canopy structure and greenness during the dry season,” by Douglas C. Morton et al., “Here we show that the apparent green up of Amazon forests in optical remote sensing data resulted from seasonal changes in near-infrared reflectance, an artifact of variations in sun-sensor geometry. Correcting this bidirectional reflectance effect eliminated seasonal changes in surface reflectance, consistent with independent lidar observations and model simulations with unchanging canopy properties.”

Joe

Joseph Bast

President

The Heartland Institute

One South Wacker Drive #2740

Chicago, IL 60606

Phone 312/377-4000

Email jbast@heartland.org

Web site http://www.heartland.org


4 responses to “Nature Articles”

  1. ‘ “Statistical Errors,” by Regina Nuzzo, in the February 13th issue of NATURE, is a fascinating article about the value of P-values. “P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume.” ‘

    There is new research in the area of reliability of p-values. The new research indicates they are much less repeatable/reliable than previously thought. STILL, they are relatively reliable so long as the researcher
    a. posts the analysis protocol before examining the data (in pool, you call your shot);
    b. asks only one question, OR the analysis corrects for asking multiple questions;
    c. explains in the protocol exactly how the analysis will be adjusted, AND allows the analyst no flexibility in the adjustment process.

    Most of the unreliability of reported p-values comes from how the data are treated before analysis, from multiple testing (point b above), and from multiple modeling (point c).

    Simple rule of thumb: Randomized clinical trials are relatively reliable. Observational studies, as currently conducted, are so unreliable as to be essentially worthless.
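The multiple-questions point (b) above can be illustrated with a short simulation — a hypothetical sketch, not part of the original comment. Under a true null hypothesis a valid p-value is uniform on [0, 1], so asking 20 questions and keeping the smallest p-value inflates the false-positive rate far above the nominal 5%, while a Bonferroni-style correction restores it:

```python
import random

random.seed(0)

TRIALS = 10_000   # simulated null studies
TESTS = 20        # questions asked per study
ALPHA = 0.05

# Under a true null hypothesis a valid p-value is uniform on [0, 1],
# so each test's p-value can be modeled as a uniform draw.
def study(n_tests):
    return [random.random() for _ in range(n_tests)]

# Fraction of null studies that report at least one "significant" result.
false_pos_single = sum(study(1)[0] < ALPHA for _ in range(TRIALS)) / TRIALS
false_pos_multi = sum(min(study(TESTS)) < ALPHA for _ in range(TRIALS)) / TRIALS
false_pos_bonf = sum(min(study(TESTS)) < ALPHA / TESTS for _ in range(TRIALS)) / TRIALS

print(f"one question:             {false_pos_single:.3f}")  # near 0.05
print(f"20 questions, naive:      {false_pos_multi:.3f}")   # near 1 - 0.95**20, about 0.64
print(f"20 questions, Bonferroni: {false_pos_bonf:.3f}")    # near 0.05 again
```

Picking the best of twenty uncorrected p-values turns a 5% error rate into roughly a 64% one, which is exactly why pre-registering a single question (or correcting for the number asked) matters.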

    • The big problem with statistics in general is that the figures are highly dependent on the subjective view of potential outcomes. If I knock a glass off a counter, I could say that it will either break or not break and conclude that there is a 50% chance of either. Conversely, I could say that there are an infinite number of possible landing configurations and only a finite number of landing configurations that will lead to breakage, and thus conclude that there is a near zero chance of the glass breaking. The exact same logic could show that there is a near 100% chance of breaking. Sound math and sound science are not always the same thing.

      The calculation of a P-value begins and ends with assumptions. There are usually a few assumptions thrown in the middle as well. The statistical method can never be more reliable than the underlying assumptions, not the least of which are the arbitrary threshold values. The entire purpose of p-values and standard deviation is to add an air of legitimacy to correlation studies. The adoption of these questionable standards allows their practitioners to claim to have proven that A causes B without even claiming to know how A causes B. The real-world consequences range from bad diet advice to the approval of expensive, often dangerous medications with questionable effectiveness.

      No amount of peer review will prevent inaccurate studies from being published so long as the community at large continues to place their faith in the arbitrary standards that govern P-values. The use of P-values in the courtroom is even more disastrous.

      • Observational studies are a real problem. Period. There is a lack of intellectual honesty in observational studies.

        Randomized studies are very different. VERY DIFFERENT. Here treatment is assigned at random to the objects, so the characteristics of the objects, different as they are, are averaged out over the objects in a group. Randomization is one of the great achievements of mankind for getting at true cause and effect. p-values are effective in randomized studies.

        The really startling observation made by me and the director of NISS is that where observational claims have been tested in randomized trials, the claims from observational studies NEVER replicated. Some observational study claims are correct, but not very many. Can observational studies be done better? YES. But very often the goal is to get something published (and get government grants). Making a claim that will replicate is entirely secondary. Write your congressman and have them cut off funding of observational studies.
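Why randomization “averages out” the characteristics of the objects can be seen in a toy sketch (hypothetical, with made-up numbers): a hidden confounder such as age balances between the treatment and control groups on average, even though nobody measured or controlled for it.

```python
import random

random.seed(42)

# A hidden characteristic (age, here) that nobody measures or controls for.
ages = [random.randint(20, 80) for _ in range(1000)]

# Random assignment: shuffle the subjects, then split into two groups.
random.shuffle(ages)
treatment, control = ages[:500], ages[500:]

def avg(xs):
    return sum(xs) / len(xs)

# The two group averages come out close, so the confounder is balanced
# between groups without anyone having known it existed.
print(round(avg(treatment), 1), round(avg(control), 1))
```

An observational study, by contrast, lets subjects sort themselves into groups, so nothing guarantees that hidden characteristics like this one balance out.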

        • I write my congressmen to request they eliminate funding for all studies. It’s not one of their enumerated powers, so they have no authority to pick and choose which researchers they want to succeed. The same is true of charity and investment.

          That being said, even in a well-controlled, randomized trial, the nature of P-values is highly subjective. Even extremely strong correlation is not the same as causation. The decision to place faith in an arbitrarily chosen threshold value regardless of context gives a false impression of objectivity often at the cost of subjective, but sound judgment.

          Averaging hides aberrations in data that might prove problematic to the hypothesis under test. The assumption that errors and biases can be averaged out is one of the biggest problems in research. In general, averages yield less information the more information you put into them. Even when conducted well, randomization and averaging are highly dependent on sufficient sample size. Faced with the high cost of large studies, many researchers resort to meta-analysis, and as a result, introduce multiple additive errors.
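The point about averaging can be made concrete with a made-up example: two data sets share the same mean, but one hides large aberrations that only show up when you look past the average at the spread.

```python
# Two hypothetical data sets with identical averages.
clean = [9, 10, 10, 10, 11, 10, 10, 10, 10, 10]
aberrant = [10, 10, 10, 10, 10, 10, 10, 10, -35, 55]  # two wild outliers

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    """Sample standard deviation."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

print(mean(clean), mean(aberrant))                           # both 10.0
print(round(spread(clean), 2), round(spread(aberrant), 2))   # ~0.47 vs ~21.21
```

A report that gave only the averages would make the two data sets look identical; the aberrations that might sink the hypothesis are visible only in the raw data or the dispersion.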

          Even if the researcher does everything right and is free from any internal bias, the methodology itself is insufficient to show causality, especially if the relationship is anything less than 1 to 1. Until the physical mechanism for causation is known, it is wrong to use the word cause. Humanity has a long history of things we used to “know”. False certainty has a habit of standing in the way of progress and diverting attention. It is important to be honest about the limitations of research, both with the public and with ourselves. Spouting P-values to an uninformed public to gin up support for the belief that a claim has been proven is patently dishonest even if we’re right in the end.
