Special Communication | Volume 93, Issue 8, Supplement, S185-S199, August 2012

Toward Improved Evidence Standards and Methods for Rehabilitation: Recommendations and Challenges


      Johnston MV, Dijkers MP. Toward improved evidence standards and methods for rehabilitation: recommendations and challenges.
      Interventions and programs for people with disability should be based on the best—the most discriminating and rigorous—methods of systematic review and knowledge translation possible. Extant systems for systematic review and practice recommendations have excellent features, but severe difficulties are encountered when attempting to apply them to disability and rehabilitation. This article identifies issues in evidence synthesis and linked practice recommendations and describes both new and long-tested methods to address them. Evidence synthesis in disability and rehabilitation can be improved by:
        • explicating criteria for evaluating nonrandomized evidence, including the regression discontinuity, interrupted time series, and single-subject designs, as well as state-of-the-art methods of analysis of observational studies;
        • greater use of meta-analysis;
        • considering effect size, direction of biases, and dose-response relationships;
        • employing more discriminating methods of evaluating flaws in masking, considering also measurement reliability and objectivity;
        • considering overall biases and conflicts of interest;
        • increased attention to the composition of review panels; and
        • greater transparency in reporting the bases of reviewers' judgments.
      Review methods need to be developed for assistive technology and for measurement procedures. Application to practice can be improved by attention to treatment alternatives, explicit evaluation of generalizability, synthesizing clinical experience as a source of evidence, and a focus on the best—rather than the ideally most rigorous—evidence. Study outcomes should be measured and reviewed in terms meaningful to the persons served. In sum, methods are available to improve evidence synthesis and the application of resulting knowledge. We recommend that these methods be employed.
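      The abstract recommends greater use of meta-analysis in evidence synthesis. As an illustrative sketch only (the data and function name below are hypothetical, not from the article), a DerSimonian-Laird random-effects pooling of study effect sizes can be computed with nothing beyond the standard library:

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    effects:   per-study effect estimates (e.g., standardized mean differences)
    variances: per-study sampling variances
    Returns (pooled_effect, ci_low, ci_high, tau2).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity across studies
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Hypothetical effect sizes from three small, heterogeneous rehabilitation trials
pooled, lo, hi, tau2 = random_effects_meta([0.6, 0.1, 0.9], [0.04, 0.04, 0.09])
```

      The random-effects model widens the confidence interval when studies disagree (tau2 > 0), which matches the article's concern with heterogeneous rehabilitation populations.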


      List of Abbreviations:

      AAN (American Academy of Neurology), ACRM (American Congress of Rehabilitation Medicine), AT (assistive technology), D&R (disability and rehabilitation), EBM (evidence-based medicine), EBP (evidence-based practice), GRADE (Grading of Recommendations Assessment, Development and Evaluation), IES (Institute of Education Sciences), PEDro (Physiotherapy Evidence Database), RCT (randomized controlled trial), RDD (regression discontinuity design), SR (systematic review), SSD (single subject/case design), TSRD (time series research design)
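      Among the nonrandomized designs the abstract endorses, the regression discontinuity design (RDD) estimates a treatment effect as the jump in outcome at an assignment cutoff on a running variable (e.g., a severity score that determines program admission). A minimal noise-free sketch, with entirely hypothetical data:

```python
def linfit(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def rdd_effect(running, outcome, cutoff):
    """Estimate the treatment effect as the discontinuity at the cutoff."""
    left = [(x, y) for x, y in zip(running, outcome) if x < cutoff]
    right = [(x, y) for x, y in zip(running, outcome) if x >= cutoff]
    a_l, b_l = linfit([x for x, _ in left], [y for _, y in left])
    a_r, b_r = linfit([x for x, _ in right], [y for _, y in right])
    # Difference between the two fitted regression lines at the cutoff
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Hypothetical severity scores decide admission at cutoff 0;
# treatment shifts the outcome up by exactly 2 points.
xs = [i / 10 for i in range(-20, 21)]
ys = [1 + 0.5 * x + (2 if x >= 0 else 0) for x in xs]
effect = rdd_effect(xs, ys, 0.0)  # recovers the jump of 2.0
```

      In practice RDD analyses add noise modeling, bandwidth selection, and flexible functional forms on each side of the cutoff; the sketch shows only the core identification idea.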
