Archives of Physical Medicine and Rehabilitation | Systematic Review | Volume 101, Issue 12, P2219-2226, December 2020

Novel Effect Size Interpretation Guidelines and an Evaluation of Statistical Power in Rehabilitation Research

Published: April 06, 2020



      Objectives

      First, to establish empirically based effect size interpretation guidelines for rehabilitation treatment effects. Second, to evaluate statistical power in rehabilitation research.

      Data Sources

      The Cochrane Database of Systematic Reviews was searched through June 2019.

      Study Selection

      Meta-analyses included in the Cochrane Database of Systematic Reviews that listed “rehabilitation” as a keyword and clearly evaluated a rehabilitation intervention.

      Data Extraction

      We extracted Cohen’s d effect sizes and associated sample sizes for treatment and comparison groups. Two independent investigators classified the interventions into 4 categories using the Rehabilitation Treatment Specification System (RTSS). The 25th, 50th, and 75th percentile values of the effect size distribution were used to establish interpretation guidelines for small, medium, and large effects, respectively. A priori power analyses established the sample sizes needed to detect these empirically based values. Post hoc power analyses using median sample sizes assessed whether the “typical” rehabilitation study was sufficiently powered to detect the empirically based values, and additional post hoc power analyses established the statistical power of each test based on its reported sample size and effect size.
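      The percentile approach described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the demo effect sizes are hypothetical, and the quartile cut points simply fall out of Python's standard `statistics.quantiles`.

```python
import statistics

def effect_size_benchmarks(effect_sizes):
    """Derive small/medium/large benchmarks from the 25th, 50th, and
    75th percentiles of a distribution of absolute Cohen's d values."""
    magnitudes = sorted(abs(d) for d in effect_sizes)
    # statistics.quantiles with n=4 returns the three quartile cut points
    q25, q50, q75 = statistics.quantiles(magnitudes, n=4, method="inclusive")
    return {"small": q25, "medium": q50, "large": q75}

# Illustrative only: a made-up distribution of extracted effect sizes
demo = [0.05, 0.10, 0.12, 0.20, 0.25, 0.33, 0.40, 0.55, 0.70]
print(effect_size_benchmarks(demo))
# → {'small': 0.12, 'medium': 0.25, 'large': 0.4}
```

      With real data, the same three cut points would be computed separately within each RTSS intervention category, yielding category-specific guidelines.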

      Data Synthesis

      We analyzed 3381 effect sizes extracted from 99 meta-analyses. Across intervention categories, interpretation guidelines for small effects ranged from 0.08 to 0.15; medium effects, from 0.19 to 0.36; and large effects, from 0.41 to 0.67. We present the sample sizes needed to detect these values based on a priori power analyses. Post hoc power analyses revealed that a “typical” rehabilitation study lacks sufficient power to detect the empirically based values, and post hoc power analyses using reported sample sizes and effects indicated that the studies were underpowered, with median power ranging from 0.14 to 0.23.
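      The a priori and post hoc calculations above can be sketched for a two-group comparison. This is an assumption-laden illustration, not the authors' method: it uses the standard normal approximation to the two-sample t-test (the article does not state which software or formulas were used), and the function names are invented for this example.

```python
import math
from statistics import NormalDist

Z = NormalDist()  # standard normal, for critical values and power

def n_per_group_for_power(d, target_power=0.80, alpha=0.05):
    """A priori: per-group n needed to detect Cohen's d (normal approximation)."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    z_beta = Z.inv_cdf(target_power)
    return math.ceil(2.0 * ((z_alpha + z_beta) / d) ** 2)

def power_two_sample(d, n_per_group, alpha=0.05):
    """Post hoc: approximate power of a two-sample test for a given d and n."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return Z.cdf(noncentrality - z_alpha)

# Detecting a d of 0.36 (the upper "medium" guideline above) at 80% power
# requires roughly 122 participants per group under this approximation:
print(n_per_group_for_power(0.36))      # → 122
# whereas a hypothetical small trial with 25 per group would be underpowered:
print(round(power_two_sample(0.36, 25), 2))
```

      Swapping in the noncentral t distribution (e.g., via statsmodels' `TTestIndPower`) would give slightly larger required sample sizes, but the qualitative conclusion is the same.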


      Conclusions

      This study presented novel, empirically based interpretation guidelines for small, medium, and large rehabilitation treatment effects. The observed effect size distributions differed across intervention categories, indicating that researchers should use category-specific guidelines. Furthermore, many published rehabilitation studies are underpowered.


      List of abbreviations:

      RTSS (Rehabilitation Treatment Specification System)

