  1. Ben Lovett Active Member


    Are you over-reliant on p-values?

    George Cobb, Professor Emeritus of Mathematics and Statistics at Mount
    Holyoke College



    From Science: http://www.sciencemag.org/news/sifter/statisticians-urge-scientists-move-past-p-values


    A common statistical technique is being overused, FiveThirtyEight reports, and it could let incorrect results sneak through. A p-value of less than 0.05 is often taken to mean that a result is "statistically significant," a boundary that can make the difference between a scientific paper being published or rejected. Such high stakes create heated discussions, and when a group of statisticians got together to write a report about the technique, it produced a year-long debate. The resulting statement, released today by the American Statistical Association, describes the many limitations of p-values, such as their inability to distinguish between small and large differences. In other words, a result may have a low enough p-value to be "statistically significant," but that doesn't mean it's important. P-values still have their place, many of the experts say. But they should only be one tool in a scientist's toolkit, rather than the bar by which all research is judged.
    and also at http://fivethirtyeight.com/features...-agree-on-its-time-to-stop-misusing-p-values/

    and the full paper (ASA Statement on Statistical Significance and P-values) is available at:

    http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108
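
    To make the "small versus large differences" point above concrete, here is a minimal Python sketch of my own (not from the article or the statement, and assuming NumPy and SciPy are installed): with a large enough sample, even a practically negligible difference between two simulated groups produces a p-value far below 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n = 200_000                                          # very large sample per group
    group_a = rng.normal(loc=0.00, scale=1.0, size=n)
    group_b = rng.normal(loc=0.02, scale=1.0, size=n)    # tiny true difference (0.02 SD)

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"observed mean difference: {group_b.mean() - group_a.mean():.4f}")
    print(f"p-value: {p_value:.2e}")                     # far below 0.05 despite a trivial effect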

    The statement expands on six principles (principle 2 is illustrated in the short simulation after the list):

    1. P-values can indicate how incompatible the data are with a specified statistical model.
    2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
    3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
    4. Proper inference requires full reporting and transparency.
    5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
    6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
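
    As a small illustration of principle 2, here is another minimal sketch of my own (same NumPy/SciPy assumption): every simulated experiment below has a true null hypothesis by construction, yet roughly 5% of runs still return p < 0.05, so a small p-value cannot be read as the probability that the hypothesis is true.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_experiments, n_per_group = 10_000, 30

    significant = 0
    for _ in range(n_experiments):
        a = rng.normal(size=n_per_group)       # both groups drawn from the
        b = rng.normal(size=n_per_group)       # same distribution: the null is true
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            significant += 1

    # close to 0.05 even though nothing is going on in any of the experiments
    print(f"'significant' results under a true null: {significant / n_experiments:.3f}")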


    Ben
     
  2. gavw Active Member

    And for that reason most reputable journals now, rightly, demand that p-values be reported alongside their respective effect sizes (such as Cohen's d). It is also worth remembering that statistical significance and clinical significance are not the same thing.
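
    For anyone who wants to see what reporting both looks like, a minimal sketch on made-up data (assuming NumPy and SciPy; this is the standard pooled-standard-deviation form of Cohen's d, not a formula any particular journal mandates):

    import numpy as np
    from scipy import stats

    def cohens_d(a, b):
        """Cohen's d using the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(2)
    treated = rng.normal(loc=0.3, scale=1.0, size=50)   # hypothetical outcome scores
    control = rng.normal(loc=0.0, scale=1.0, size=50)

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"p = {p_value:.3f}, Cohen's d = {cohens_d(treated, control):.2f}")

    The d value describes the size of the difference in standard-deviation units, which the p-value alone does not convey.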

    Further reading:
    http://rsos.royalsocietypublishing.org/content/1/3/140216
     
  3. Griff Moderator

  4. BEN-HUR Well-Known Member

    This issue also made its way to Retraction Watch...

    We're using a common statistical test all wrong. Statisticians want to fix that. (http://retractionwatch.com/2016/03/...est-all-wrong-statisticians-want-to-fix-that/)


     
  5. NewsBot The Admin that posts the news.

    Articles:
    1. Public release, 15-Mar-2016: Misleading p-values showing up more often in biomedical journal articles, Stanford study finds
     