Welcome to the Podiatry Arena forums



Statisticians Found One Thing They Can Agree On: It's Time To Stop Misusing P-Values

Discussion in 'General Issues and Discussion Forum' started by Ben Lovett, Mar 9, 2016.

  1. Ben Lovett

    Ben Lovett Active Member

    Are you over-reliant on p-values?

    George Cobb, Professor Emeritus of Mathematics and Statistics at Mount
    Holyoke College

    From Science: http://www.sciencemag.org/news/sifter/statisticians-urge-scientists-move-past-p-values

    A common statistical technique is being overused, FiveThirtyEight reports, and it could let incorrect results sneak through. A p-value of less than 0.05 is often taken to mean that a result is "statistically significant," a boundary that can make the difference between a scientific paper being published or rejected. Such high stakes create heated discussions, and when a group of statisticians got together to write a report about the technique, it produced a year-long debate. The resulting statement, released today by the American Statistical Association, describes the many limitations of p-values, such as their inability to distinguish between small and large differences. In other words, a result may have a low enough p-value to be "statistically significant", but that doesn't mean it's important. P-values still have their place, many of the experts say. But they should only be one tool in a scientist's toolkit, rather than the bar by which all research is judged.
    and also at http://fivethirtyeight.com/features...-agree-on-its-time-to-stop-misusing-p-values/

    and the full paper (ASA Statement on Statistical Significance and P-values) is available at:


    It expands on six principles:

    1. P-values can indicate how incompatible the data are with a specified statistical model.
    2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
    3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
    4. Proper inference requires full reporting and transparency.
    5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
    6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
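
    Principles 3 and 5 are easy to demonstrate numerically: holding a fixed (and clinically trivial) effect constant, the p-value of a simple test shrinks below the 0.05 threshold purely because the sample grows. A minimal sketch in standard-library Python, using a one-sample z-test as the example (the effect size, SD, and sample sizes here are illustrative choices, not from the ASA statement):

```python
import math

def two_sided_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of H0: true mean = 0,
    given an observed mean `effect`, known SD `sd`, and sample size `n`."""
    z = effect / (sd / math.sqrt(n))
    # Standard normal CDF via the error function: Phi(z) = (1 + erf(z/sqrt(2))) / 2
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# The same tiny effect (0.1 units, SD = 1) at growing sample sizes:
for n in (25, 100, 400, 10_000):
    print(f"n = {n:>6}: p = {two_sided_p(effect=0.1, sd=1.0, n=n):.4f}")
```

    The effect never changes, yet the result crosses the conventional 0.05 line somewhere between n = 100 and n = 400: the p-value is reflecting sample size, not the size or importance of the effect.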

  2. gavw

    gavw Active Member

    And for that reason most reputable journals now, rightly, demand that p-values be reported alongside their respective effect sizes (such as Cohen's d). It is also worth remembering that statistical significance and clinical significance are not the same thing.
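
    The effect-size point can be made concrete: Cohen's d is just the mean difference scaled by the pooled standard deviation, so unlike a p-value it does not shrink or grow simply because more data were collected. A minimal sketch in standard-library Python (the two sample groups are made-up illustrative data):

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical measurements from two treatment groups:
group_a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]
group_b = [4.0, 4.2, 3.9, 4.1, 4.0, 3.8]
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
```

    By the usual rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large; reporting d alongside the p-value tells the reader whether a "significant" difference is also a meaningful one.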

    Further reading:
  3. Griff

    Griff Moderator

  4. BEN-HUR

    BEN-HUR Well-Known Member

    This issue also made its way to Retraction Watch...

    We're using a common statistical test all wrong. Statisticians want to fix that. (http://retractionwatch.com/2016/03/...est-all-wrong-statisticians-want-to-fix-that/)

  5. NewsBot

    NewsBot The Admin that posts the news.

    Misleading p-values showing up more often in biomedical journal articles, Stanford study finds
