Treatment Fidelity: Its Importance and Reported Frequency in Aphasia Treatment Studies

Jacqueline J. Hinckley and Natalie F. Douglas
University of South Florida, Tampa

Correspondence to Jacqueline J. Hinckley: jhinckle@usf.edu
Editor: Swathi Kiran; Associate Editor: Leanne Togher
Received July 30, 2012; Accepted December 22, 2012
DOI: 10.1044/1058-0360(2012/12-0092)

Purpose: Treatment fidelity is a measure of the reliability of the administration of an intervention in a treatment study. It is an important aspect of the validity of a research study, and it has implications for the ultimate implementation of evidence-supported interventions in typical clinical settings.
Method: Aphasia treatment studies published in the last 10 years in 3 journals were reviewed using coding techniques that were adapted from Gresham, Gansle, Noell, Cohen, and Rosenblum (1993). The following items were noted: identifying information, study design, description of both the dependent and independent variables, and whether a measure of treatment fidelity was explicitly included.
Results: Of the aphasia treatment studies published in the last 10 years, 14% explicitly reported treatment fidelity. Most studies reporting treatment fidelity used checking of videotaped sessions by independent raters. Of the reviewed studies, 45% provided sufficient treatment description to support replication.
Conclusion: Treatment fidelity is widely acknowledged as being critical to research validity and is a foundation for the implementation of evidence-based practices, but only a small percentage of aphasia treatment studies published in the last 10 years explicitly reported treatment fidelity. Recommendations for research practices include increased attention to matters of treatment fidelity in the peer review process and explicit incorporation of 3 levels of treatment fidelity in treatment research.
Key Words: reproducibility of results, evidence-based practice, speech-language pathology, aphasia

Treatment fidelity is the establishment of the reliability of the independent variable in a treatment study. Treatment fidelity has implications for the internal validity of a research study, and it also bears on our ultimate ability to apply evidence-based practices to real clinical settings. Thus, it is important for disciplines to self-monitor their reporting practices for treatment fidelity.

Moncher and Prinz (1991) included two concepts in their basic definition of treatment fidelity: Treatment integrity refers to how well a treatment condition was implemented as planned (Vermilyea, Barlow, & O'Brien, 1984; Yeaton & Sechrest, 1981), and treatment differentiation refers to whether the treatment conditions being studied differed from each other sufficiently so that the intended manipulation of the independent variable can be assumed to have occurred. Both of these concepts are important to consider because it is possible to administer a treatment as planned without differentiation from a comparison treatment, or to successfully differentiate two treatments in a research study without implementing the treatment with a high degree of integrity (Kazdin, 1986).

Specific methods and examples for assessing treatment fidelity in the field of speech-language pathology have been described (Hinckley, 2007; Kaderavek & Justice, 2010; Schlosser, 2002). Three basic approaches for verifying treatment fidelity are the report of independent raters, clinician self-report, and participant self-report.
Direct observation of live treatment sessions and/or the review and rating of recorded treatment sessions are frequently chosen methods for assessing adherence to a treatment protocol with independent raters. A less direct measure of treatment fidelity is to ask clinicians to indicate after sessions whether they included all of the required components of the administered treatment. Participants can also be asked to report whether they received all of the components of the assigned treatment; for example, participant self-report could be an option in a study addressing the outcomes of a family education program.

Determining treatment fidelity in a behavioral treatment study is of concern because of the possibility of therapist drift, in which a clinician modifies the original treatment protocol in small and gradual ways, often unintentionally or unknowingly, in an attempt to respond to a client's specific behaviors (Peterson, Homer, & Wonderlich, 1982; Waller, 2009). Another motivation for the application of treatment fidelity measures stems from ethical considerations: in some types of studies, there could be a potential for harm to the participants if the treatment protocol is not adhered to (Peterson et al., 1982).

Treatment fidelity measures also bear on the internal validity of a study. Effect size is one analytical tool for studying the effects and outcomes of a treatment, and internal validity in an outcome study is highly correlated with effect size (Smith, Glass, & Miller, 1980). To enhance the internal validity of a treatment study, we must ensure that treatment potency and fidelity are managed explicitly. A study that does not report treatment fidelity measures faces a greater chance of both Type II and Type III errors. If a treatment is not administered completely and accurately in a treatment study, there is a potential decrease in statistical power, which then increases the chance of a Type II, or false negative, error (Frances, Sweeney, & Clarkin, 1985). Without reporting treatment fidelity, the treatment that was actually implemented is undefined, potentially leading to a Type III error. That is, when treatment fidelity measures are not carried out, it is not possible to know whether the results of the study are attributable to the planned treatment or to the treatment that was actually implemented (Dobson & Cook, 1980; Linnan & Steckler, 2002). If treatment fidelity is not measured, researchers may unwittingly report outcomes that are associated with a treatment that is different from the one being described.

Failure to monitor adherence to the treatment protocol or to measure treatment fidelity can also affect the actual outcome of a study. In a review of 181 experimental studies published between 1980 and 1990 in seven behavioral intervention journals, Gresham, Gansle, Noell, Cohen, and Rosenblum (1993) found moderate positive correlations between the degree of treatment fidelity and the level of treatment outcome. A specific example is a study of special education students receiving direct social skills training by different teachers across different classrooms. The outcomes of students whose teachers administered the training with the highest fidelity levels were superior to the outcomes of students whose teachers were less compliant with the treatment protocol (McEvoy, Shores, Wehby, Johnson, & Fox, 1990). If treatment fidelity had not been measured, the outcomes of all teachers, regardless of treatment fidelity, might have been compiled, thereby altering the overall study outcome. Treatment fidelity, then, can affect the internal validity of a study and potentially the outcome of the study itself.

In building a scientific basis for clinical practice, we must be certain that a treatment that may ultimately become an evidence-based practice has been consistently administered, in order to ensure that the conclusions of the study are valid. These individual studies may be entered into systematic reviews or meta-analyses on which clinical practice guidelines are built. Recommendations for clinical practice will come from this research; thus, a lack of treatment fidelity reporting could affect the treatment that is ultimately received by large numbers of individuals (Bhar & Beck, 2009; Cherney, Patterson, Raymer, Frymark, & Schooling, 2008). Treatments that can be measured for adherence to a protocol are likely to be sufficiently well described to be replicated, which is a critical step in the research process.
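The statistical power argument above can be made concrete with a small simulation. The sketch below is illustrative only and is not part of the original study: it assumes a two-group design, a true standardized effect of 0.8 when the protocol is delivered exactly as intended, and a simple dilution model in which off-protocol delivery proportionally reduces the realized effect. The sample size, effect size, and dilution assumption are all hypothetical.

```python
# Illustrative sketch (not from the original article): how imperfect treatment
# fidelity can inflate Type II error in a two-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimate_power(fidelity, true_effect=0.8, n_per_group=20, n_sims=5000, alpha=0.05):
    """Proportion of simulated trials reaching p < alpha on a two-sample t test."""
    realized_effect = true_effect * fidelity  # assumed dilution of the effect
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(realized_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            hits += 1
    return hits / n_sims

for fidelity in (1.0, 0.8, 0.6):
    print(f"fidelity {fidelity:.0%}: estimated power = {estimate_power(fidelity):.2f}")
```

Under these assumptions, estimated power drops noticeably as fidelity declines, which is the Type II risk described above; the unreported drift also leaves the implemented treatment undefined, the Type III risk.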
In addition, because of the specificity of the procedures, treatments that are described in detail and are measured for fidelity will be those that can most readily be translated into clinical practice. Even well-researched therapeutic interventions can be difficult to transfer into typical clinical settings; studies need to incorporate specific strategies to address the problem of ultimate intervention implementation (e.g., Burgio et al., 2001). Once a treatment is supported by evidence, practitioners will need to understand and be able to implement the core components of the treatment in real settings. A critical bridge between the accumulated evidence for a treatment and its implementation in real practice is an understanding of its core components, which typically begins with the establishment of fidelity and the measure with which fidelity has been assessed (Fixsen, Naoom, Blasé, Friedman, & Wallace, 2005; Frances et al., 1985).

An important phase of treatment research involves investigating the outcomes of a treatment in typical practice settings, where we will need to know more about the ways in which a particular treatment can be modified without compromising the desired effect. Using tools and methods for treatment fidelity will allow those tools to become the focus of clinical research that addresses outcomes associated with differing levels of treatment fidelity. Reporting treatment fidelity from the early stages of treatment research will facilitate empirical investigations of potential adjustments to the treatment and could help to bridge the research-to-practice gap. As we build our clinical literature into evidence-based systematic reviews, meta-analyses, and practice guidelines, we will need a strong body of literature that includes the reporting of treatment fidelity. Explicitly reporting these measures, including treatment protocols or manuals, can speed the process of bringing an investigational treatment to practice.

As the disciplines of education, health care, and psychology strive to implement evidence-based practices, they have begun to examine the reporting of fidelity measures. Two possible outcomes of a review of reported treatment fidelity are to determine need areas in a discipline's literature or to show progress. Moncher and Prinz (1991) reviewed 359 studies in clinical psychology, behavior therapy, psychiatry, and marital and family treatment that were published between 1980 and 1988. Over this 9-year review period, there was a fivefold increase in reported treatment fidelity from the beginning to the end of the period; in this case, improvement in treatment fidelity practices was observed across journals over time. In contrast, two reviews focused on autism and applied behavior analysis showed little increase in the frequency of treatment fidelity reporting. In the first review, studies published in the Journal of Applied Behavior Analysis (JABA) between 1980 and 1990 were assessed, with 16% of these studies reporting treatment fidelity (Gresham et al., 1993). A follow-up review of 60 studies focused on interventions for children with autism in JABA and other journals between 1993 and 2003 showed that little increase in the frequency of treatment fidelity reporting had occurred since the first study, with only 18% of the reviewed studies reporting treatment fidelity (Wheeler, Baggett, Fox, & Blevins, 2006).

Other disciplines have recently begun the process of self-evaluation for reporting of treatment fidelity, such as clinical psychology (Perepletchikova, Treat, & Kazdin, 2007) and school psychology (Cochrane & Laux, 2008; Sanetti, Gritter, & Dobey, 2011). These reviews of the psychology and autism literatures suggest that, although the importance of treatment fidelity is widely acknowledged, it is infrequently reported. To our knowledge, no literature reviews of treatment fidelity have been completed in other closely related fields such as occupational or physical therapy.

Ten evidence-based systematic reviews of aphasia treatment are listed on the evidence-based practice Compendium of the American Speech-Language-Hearing Association (ASHA, n.d.). Aphasia treatment is an area in speech-language pathology that is developing an important clinical literature and practice recommendations. Aphasia has been reported to be one of the most common consequences of stroke, affecting at least one third of stroke patients during the acute and chronic phases, with worldwide attention on practice recommendations (Salter, Teasell, Bhogal, Zettler, & Foley, 2011). Given the importance of treatment fidelity to the validity of our clinical literature and its ultimate implementation in typical clinical settings, the purpose of this study was to assess the frequency with which treatment fidelity is reported in aphasia treatment studies.

Method

We sampled journals that would accept aphasia treatment studies within their scope. We included the journal Aphasiology, an international journal focused on language impairment and disability as a result of brain damage, and two ASHA journals, the American Journal of Speech-Language Pathology (AJSLP) and the Journal of Speech, Language, and Hearing Research (JSLHR). We reviewed studies published in all three journals during the last 10 years (2002–2011) using the following criteria: (a) an empirical study of an intervention administered across multiple sessions, and (b) self-identified as a treatment study. Articles that were reviews of previous work, republications of older studies (e.g., Clinical Aphasiology Conference Classics), and retrospective studies were excluded. A total of 149 studies across all three journals met these inclusion and exclusion criteria and were entered into our review.

We reviewed each of the 149 studies for identifying information, general description of the study design, description of the dependent and independent variables, and indication of whether any measure of treatment fidelity was explicitly included. If a measure of treatment fidelity was included, we documented what was reported. We categorized the study design based on groupings used by Gresham et al. (1993; see Table 1). Dependent and independent variables were listed in terms used by the study author(s). We also coded whether the treatment was operationally described at a level sufficient for replication. Raters were asked to consider, "Could you replicate/implement this treatment based on the description in this publication?" If sufficient description was offered to allow for implementation, "yes" was coded. "Yes" was also coded if the study used a treatment for which there were additional published references or resources.
Binary (yes/no) coding was used to indicate whether the studies reported any measure of treatment fidelity. Measures such as observations of treatment adherence and use of a training manual, as well as measures of procedural reliability, were coded as "yes." For those studies that indicated treatment fidelity, additional details were recorded about how treatment fidelity was established, the sources used for treatment fidelity, and how implementers of the treatment were trained. Both authors of this paper served as raters. Each rater read and scored each article and then compared ratings. Agreement on ratings of replicability for the independent variable was 100%. Agreement on the presence of reported treatment fidelity was initially 98%. Disagreements were discussed until consensus was reached on all coded items.
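The agreement figures reported above correspond to simple point-by-point percent agreement between the two raters' binary codes. The sketch below is a minimal illustration with hypothetical codes; it is not the authors' coding procedure or software.

```python
# Minimal sketch (hypothetical data): point-by-point percent agreement between
# two raters' binary codes (e.g., "reported any measure of treatment fidelity").
def percent_agreement(rater_a, rater_b):
    """Proportion of items on which both raters assigned the same code."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical fidelity codes for six reviewed studies (True = fidelity reported).
rater_a = [True, False, False, True, False, False]
rater_b = [True, False, False, False, False, False]
print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 83% for this toy example
```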

Results

There were 134 aphasia treatment studies in the journal Aphasiology that met criteria during the 10-year period between 2002 and 2011. There were fewer eligible aphasia treatment studies in the two ASHA journals, with seven studies published in AJSLP and eight studies published in JSLHR, for an average of 14.9 eligible treatment studies per year across all three journals. A complete table with all of the studies reviewed and their rated characteristics, as well as a complete reference list of the studies, is provided in the online supplementary materials.

Table 1. Study design coding categories used in our review of aphasia treatment studies.
Group: Use of an experimental group and a control group, or a comparison of two or more treatment groups
Withdrawal: Within-subject designs in which changes are compared across phases of the study (e.g., A/B/A/B)
Multiple baseline: Comparisons both within and between subjects and/or behaviors
Alternating treatments: Alternating treatment components across sessions or days
Changing criterion: Within-series change strategies in which the dependent variables are brought under the control of established and shifting criteria

Overall, the mean number of participants per study across all three journals was three (range = 1–20 participants). The majority of studies incorporated a multiple baseline design, followed by other single-subject designs. Based on the description in the article alone, 67 of the 149 studies (45%) were judged to provide sufficient treatment description to allow for replication. This number included studies for which previous detailed publications were cited. Some studies described fairly complex treatments that lacked detail; others described treatments that required clinician decision making regarding tasks and trials, and criteria for these decisions were not offered.

Twenty-one of the 149 studies (14%) explicitly reported some aspect of treatment fidelity. Figure 1 shows the number of published aphasia treatment studies across all three journals for each of the 10 years, and the number of studies per year that explicitly reported treatment fidelity. Because submission patterns, editorial policies, and publication practices may differ among journals, Figure 2 shows the number of aphasia treatment studies and those reporting fidelity in the two ASHA journals (AJSLP and JSLHR), and Figure 3 shows the same data for Aphasiology.

Figure 1. Number of studies reported each year across all three journals (American Journal of Speech-Language Pathology; Journal of Speech, Language, and Hearing Research; and Aphasiology) and the number of studies per year that explicitly reported treatment fidelity.

Figure 2. Total number of aphasia treatment studies, with the number reporting treatment fidelity, in the American Speech-Language-Hearing Association journals: American Journal of Speech-Language Pathology and Journal of Speech, Language, and Hearing Research. No aphasia treatment studies meeting eligibility criteria appeared in these journals in 2002, 2005, or 2009.

Figure 3. Total number of aphasia treatment studies, with the number reporting treatment fidelity, in Aphasiology.

The majority of the studies explicitly reporting treatment fidelity (13/21) checked adherence to steps in the treatment protocol by having one or more raters review videotapes from a sample (10%–20%) of the training sessions and indicate whether each step was observed during the videotaped sessions. When this form of treatment fidelity was used, the percentage of treatment steps completed in the sampled sessions was reported. When more than one rater was used for checking protocol steps in sampled sessions, point-to-point agreement between the raters was reported. Five of the 21 studies explicitly reporting treatment fidelity conducted supervision of the treatment during its implementation in the study. Two of the 21 studies described the use of a training manual as a way to conduct treatment fidelity. One of the 21 studies used training before initiation of the treatment study via role playing (Melton & Bourgeois, 2005). Among all of these 21 studies, only one (Hickey, Bourgeois, & Olswang, 2004) reported using two forms of treatment fidelity together: a training manual and independent ratings of training adherence.
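As a concrete illustration of the most commonly reported approach (rating a sample of recorded sessions against a step checklist), the following sketch computes the percentage of protocol steps observed across sampled sessions. The protocol steps and ratings are hypothetical and are not drawn from any of the reviewed studies.

```python
# Illustrative sketch (hypothetical protocol steps and ratings): percentage of
# treatment-protocol steps observed in a sample of recorded sessions.
PROTOCOL_STEPS = ["present stimulus", "elicit response", "give feedback", "record accuracy"]

# Each sampled session maps step -> whether the rater observed it on the recording.
sampled_sessions = [
    {"present stimulus": True, "elicit response": True, "give feedback": True,  "record accuracy": True},
    {"present stimulus": True, "elicit response": True, "give feedback": False, "record accuracy": True},
]

def fidelity_percentage(sessions, steps):
    """Percentage of protocol steps observed across all sampled sessions."""
    observed = sum(session[step] for session in sessions for step in steps)
    total = len(sessions) * len(steps)
    return 100.0 * observed / total

print(f"Protocol steps completed: {fidelity_percentage(sampled_sessions, PROTOCOL_STEPS):.1f}%")  # 87.5%
```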

Discussion

The purpose of this paper was to describe the reporting of treatment fidelity among aphasia treatment studies that were published in three journals from 2002 to 2011. Of the 149 studies reviewed, 21 (14%) reported some measure of treatment fidelity. The primary method used in these studies was review of video samples of treatment sessions for evaluation of adherence to treatment protocol steps. The percentage of studies in this review reporting measures of treatment fidelity was similar to the percentage reported in reviews of treatment fidelity in the school-based intervention literature, in which 16% of studies reported treatment fidelity (Gresham et al., 1993; Peterson et al., 1982), but less than the 62% of studies reporting treatment fidelity in the school psychology literature (Sanetti et al., 2011). There was no apparent trend toward an increase in reporting of treatment fidelity across the 10-year period reviewed in any journal. This is similar to reviews of the autism literature (Gresham et al., 1993; Wheeler et al., 2006), but unlike the trend toward an increased frequency of reporting seen in the psychology literature (Moncher & Prinz, 1991; Sanetti et al., 2011).

Although the principles and methods for conducting treatment fidelity have been described in the speech-language pathology literature (Hinckley, 2007; Kaderavek & Justice, 2010; Schlosser, 2002), our study is the first effort to actively self-monitor our reporting practices for treatment fidelity. Once we assess our own reporting practices in this area, we can take steps to improve our clinical literature by incorporating treatment fidelity measures.

One way to incorporate treatment fidelity measures is to attend more consistently to treatment fidelity reporting during the peer review process. For example, we considered the publication information for the journals in which our reviewed studies were published. The journal Aphasiology does not currently have an explicit statement in its information for authors that addresses treatment fidelity. AJSLP and JSLHR specify the Consolidated Standards of Reporting Trials (CONSORT; Schulz, Altman, & Moher, 2010) statement and the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND; Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004) statement as guidelines for clinical research studies. Each of these has at least an indirect reference to treatment fidelity. The CONSORT statement specifies that reports of clinical trials should indicate the number of participants who received treatment as allocated. The TREND statement is a 22-item checklist intended to guide the reporting of nonrandomized clinical trials (http://www.cdc.gov/trendstatement/); it includes reporting of deviations from the protocol as planned, along with reasons for the deviations. By improving our expectations for the reliability of the independent variable, we enhance the validity of our clinical research.

In addition to improving our reporting of treatment fidelity, we can improve our treatment fidelity practices by attending to three recommended levels of treatment fidelity (Lichstein, Riedel, & Grieve, 1994). First, treatment delivery should be monitored, including through measures of treatment fidelity, to ensure that clinicians are delivering the treatment in the intended manner. Strategies to ensure fidelity of treatment delivery include use of a detailed, scripted treatment manual; structured training; supervisory monitoring and feedback; and delivery and accuracy checklists (Burgio et al., 2001). A second recommended level of treatment fidelity is treatment receipt, or a report by the person receiving the treatment. Measures of treatment receipt could include either a performance measure (for example, completion of homework) or a self-reported measure about the treatment components. The third recommended level of treatment fidelity is treatment enactment. Measures of treatment enactment could include direct observation of the treatment as it is being delivered and/or reviews of clinician documentation completed during treatment administration.
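As one way to operationalize the three levels described above, the sketch below shows how a study record might capture delivery, receipt, and enactment measures side by side. The field names and example values are hypothetical, not a standard instrument.

```python
# Minimal sketch (hypothetical fields): recording the three levels of treatment
# fidelity described by Lichstein, Riedel, and Grieve (1994) for one participant.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FidelityRecord:
    # Treatment delivery: was the protocol administered as intended?
    delivery_steps_completed_pct: float      # e.g., from a session checklist
    # Treatment receipt: report by the person receiving the treatment.
    receipt_homework_completed_pct: float    # performance-based measure
    receipt_self_report: str                 # participant's own account
    # Treatment enactment: observation of delivery and/or clinician documentation.
    enactment_notes: List[str] = field(default_factory=list)

record = FidelityRecord(
    delivery_steps_completed_pct=92.0,
    receipt_homework_completed_pct=80.0,
    receipt_self_report="completed all assigned home practice",
    enactment_notes=["session observed live", "clinician log reviewed"],
)
print(record)
```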
Examples of treatment fidelity measures encompassing all of these levels have been developed in the field of psychology (Gearing et al., 2011). As the use and reporting of fidelity measures increase, clinicians may gain greater access to treatment procedures that would allow them to use evidence-supported practices more readily. At present, speech-language pathologists can access practice guidelines with recommendations for the use of particular evidence-based treatments via Internet sites such as ASHA's Evidence-Based Compendium (http://www.asha.org/members/ebp/compendium) and the Academy of Neurologic Communication Disorders & Sciences (http://www.ancds.org/index.php/practice-guidelines-9).

Unfortunately, gaining access to information about the detailed procedures of the evidence-supported treatments recommended by those practice guidelines can be more challenging. The ultimate goal of treatment research is to develop evidence-supported practices that can be implemented in typical clinical settings. This implementation requires that the core components of the intervention not only be well studied but also be accessible to a typical clinician. Measures and reports of treatment fidelity are a beginning step toward implementation of evidence-based practices, a goal that is important to both researchers and clinicians.

References

American Speech-Language-Hearing Association. (n.d.). Compendium of EBP guidelines and systematic reviews. Retrieved from http://www.asha.org/members/ebp/compendium

Bhar, S. S., & Beck, A. T. (2009). Treatment integrity of studies that compare short-term psychodynamic psychotherapy with cognitive-behavior therapy. Clinical Psychology: Science and Practice, 16, 370–378.

Burgio, L., Corcoran, M., Lichstein, K. L., Nichols, L., Czaja, S., Gallagher-Thompson, D., & Schulz, R. (2001). Judging outcomes in psychosocial interventions for dementia caregivers: The problem of treatment implementation. The Gerontologist, 41(4), 481–489.

Cherney, L. R., Patterson, J. P., Raymer, A., Frymark, T., & Schooling, T. (2008). Evidence-based systematic review: Effects of intensity of treatment and constraint-induced language therapy for individuals with stroke-induced aphasia. Journal of Speech, Language, and Hearing Research, 51, 1282–1299.

Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366.

Dobson, D., & Cook, T. J. (1980). Avoiding Type III error in program evaluation: Results from a field experiment. Evaluation and Program Planning, 3, 269–276.

Fixsen, D. L., Naoom, S. F., Blasé, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication #231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.

Frances, A., Sweeney, J., & Clarkin, J. (1985). Do psychotherapies have specific effects? American Journal of Psychotherapy, 39, 159–174.

Gearing, R. E., El-Bassel, N., Ghesquiere, A., Baldwin, S., Gillies, J., & Ngeow, E. (2011). Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clinical Psychology Review, 31(1), 79–88.

Gresham, F. M., Gansle, K. A., Noell, G. H., Cohen, S., & Rosenblum, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22, 254–273.

Hickey, E., Bourgeois, M., & Olswang, L. (2004). Effects of training volunteers to converse with nursing home residents with aphasia. Aphasiology, 18(5–7), 625–637.

Hinckley, J. J. (2007). Treatment integrity and problem-solving. In A. M. Guilford, S. Graham, & J. Scheuerle (Eds.), The speech-language pathologist: From novice to expert (pp. 64–75). Upper Saddle River, NJ: Pearson.

Kaderavek, J. N., & Justice, L. M. (2010). Fidelity: An essential component of evidence-based practice in speech-language pathology. American Journal of Speech-Language Pathology, 19, 369–379.
Kazdin, A. E. (1986). Comparative outcome studies of psychotherapy: Methodological issues and strategies. Journal of Consulting and Clinical Psychology, 54, 95–105.

Lichstein, K. L., Riedel, B. W., & Grieve, R. (1994). Fair tests of clinical trials: A treatment implementation model. Advances in Behaviour Research and Therapy, 16, 1–29.

Linnan, L., & Steckler, A. (2002). Process evaluation for public health interventions and research: An overview. In L. Linnan & A. Steckler (Eds.), Process evaluation for public health interventions and research (pp. 1–23). San Francisco, CA: Jossey-Bass.

McEvoy, M. A., Shores, R. E., Wehby, J. H., Johnson, S. M., & Fox, J. J. (1990). Special education teachers' implementation of procedures to promote social interaction: Reported treatment fidelity among children in integrated settings. Education and Training in Mental Retardation, 25, 267–276.

Melton, A., & Bourgeois, M. (2005). Training compensatory memory strategies via the telephone for persons with TBI. Aphasiology, 19(3–5), 353–364.

Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247–266.

Perepletchikova, F., Treat, T. A., & Kazdin, A. E. (2007). Treatment integrity in psychotherapy research: Analysis of the studies and examination of the associated factors. Journal of Consulting and Clinical Psychology, 75, 829–841.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15, 477–492.

Salter, K., Teasell, R., Bhogal, S., Zettler, L., & Foley, N. (2011). Aphasia. EBRSR: Evidence-based review of stroke rehabilitation. Retrieved from http://www.ebrsr.com/reviews_details.php?Aphasia-3

Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of school interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40, 72–84.

Schlosser, R. W. (2002). On the importance of being earnest about treatment integrity. Augmentative and Alternative Communication, 18, 36–44.

Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomized trials. Annals of Internal Medicine, 152, 1–6.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy. Baltimore, MD: Johns Hopkins University Press.

Vermilyea, B. B., Barlow, D. H., & O'Brien, G. T. (1984). The importance of assessing treatment integrity: An example in the anxiety disorders. Journal of Behavioral Assessment, 6, 1–11.

Waller, G. (2009). Evidence-based treatment and therapist drift. Behaviour Research and Therapy, 47, 119–127.

Wheeler, J. J., Baggett, B. A., Fox, J., & Blevins, L. (2006). Treatment integrity: A review of intervention studies conducted with children with autism. Focus on Autism and Other Developmental Disabilities, 21, 45–54.

Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatment: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156–167.