Checklists for the Common Guidelines for Education Research and Development

This document includes a series of six checklists, one for each of the six types of research outlined in the Common Guidelines for Education Research and Development. The Guidelines, developed by the Institute of Education Sciences at the U.S. Department of Education and the National Science Foundation, explain those agencies' shared expectations for education research and development. The checklists, created by EvaluATE, are distillations of key points from the Guidelines.

The checklists are intended to support use of the Guidelines, enabling users to quickly reference a type of research and determine whether they are following the Guidelines' expectations. As such, they provide an overview of and orientation to the Guidelines. They do not replace that report, nor do they expand or elaborate on the report's content. The checklists' content has been extracted (usually verbatim) from the full report. All checklist users are strongly encouraged to read the complete Guidelines, available from http://bit.ly/nsfies_guide.

The checklists cover the following six types of research:

1. Foundational Research: to advance the frontiers of education and learning; develop and refine theory and methodology; and provide fundamental knowledge about teaching and/or learning
2. Early-Stage or Exploratory Research: to investigate approaches to education problems in order to establish the basis for design and development of new interventions or strategies, and/or to provide evidence for whether an established intervention or strategy is ready to be tested in an efficacy study
3. Design and Development Research: to develop new or improved interventions or strategies to achieve well-specified learning goals or objectives, including making refinements on the basis of small-scale testing
4. Efficacy Research: to determine whether an intervention or strategy can improve outcomes under ideal conditions (e.g., with more implementation support, highly trained personnel, and/or more homogeneous participants than is typical)
5. Effectiveness Research: to estimate the impacts of an intervention or strategy when implemented under conditions of routine practice (i.e., conditions similar to what would occur if a study were not being conducted)
6. Scale-Up Research: to estimate the impacts of an intervention or strategy under conditions of routine practice and across a broad spectrum of populations and settings, sufficiently diverse to broadly generalize findings

TYPE 1: FOUNDATIONAL RESEARCH
To advance the frontiers of education and learning; develop and refine theory and methodology; and provide fundamental knowledge about teaching and/or learning.

Justification

Policy and/or Practical Significance
- Address important research questions related to education and learning
- Have clear implications for policy and/or practice (a direct relationship to student outcomes is not required)

Theoretical and Empirical Basis
- Outline the study's theoretical and empirical bases
- Explain why the research is needed
- Describe whether and how the study will:
  - identify or explore important new constructs in education and learning
  - extend understanding of current constructs
  - explain understanding of relationships among the constructs under investigation, and/or
  - extend research methodologies for advancing the evidence base to support improved policy or practice

Evidence

Project Outcomes
- Advances in theory, methodology, and/or understanding of important constructs in education
- Findings that could serve as a basis for future studies

Research Plan
- Define the study's key conjectures or hypotheses, questions, and objectives, derived from the study's theoretical and empirical justifications
- Describe the study design in detail, including:
  - population of interest
  - sampling or selection methods
  - sample size
  - data analysis methods
- Describe plans for data management and analysis, curating, and sharing
- Describe the plan for disseminating findings
- For studies that include hypothesis testing: Identify the minimum relevant mean difference or relationship between variables and the sample size required to ensure adequate statistical power to detect true differences or relationships of this magnitude or larger (a brief sample-size sketch follows this checklist)
- For qualitative studies: Justify the sample size and selection plan
- For studies analyzing secondary data: Describe the source and availability of data and the sequence of modeling planned
- For studies collecting primary data:
  - Describe the instruments and protocols
  - Provide evidence from the literature to support assumptions that guide the sample design
  - Describe strategies for ensuring validity and reliability of outcome measures
  - Describe how findings will be triangulated

External Feedback
- Subject the study to a series of external, critical reviews of its design and activities via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
- Ensure the external review is sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings
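The hypothesis-testing item above asks researchers to identify the minimum relevant mean difference and the sample size needed to detect it with adequate power. The Guidelines do not prescribe a tool for this calculation; the snippet below is a minimal sketch using Python's statsmodels, assuming a simple two-group comparison with no clustering and a hypothetical minimum relevant difference of 0.25 standard deviations.

```python
# Minimal power-analysis sketch (assumptions: simple two-group design, no clustering;
# the 0.25 SD minimum relevant difference is a hypothetical placeholder).
from statsmodels.stats.power import TTestIndPower

min_relevant_effect = 0.25  # smallest standardized mean difference worth detecting
power_analysis = TTestIndPower()

# Solve for the per-group sample size that yields 80% power at alpha = .05 (two-sided).
n_per_group = power_analysis.solve_power(
    effect_size=min_relevant_effect,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 252 per group
```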

TYPE 2: EARLY-STAGE OR EXPLORATORY RESEARCH
To investigate approaches to education problems in order to establish the basis for design and development of new interventions or strategies, and/or to provide evidence for whether an established intervention or strategy is ready to be tested in an efficacy study.

Justification

Policy and/or Practical Significance
- Describe the practical education problem or issue on which the study is focused
- Provide a rationale for studying the problem
- Explain how the research will generate important knowledge to inform the development, improvement, or evaluation of education programs, policies, or practices

Theoretical and Empirical Basis
- Describe the theoretical or empirical rationale for the project, including citations of supporting evidence
- For research on an existing intervention, explain why it should be studied through early-stage or exploratory research rather than an efficacy study

Evidence

Project Outcomes
- Evidence regarding one or both of the following:
  - Malleable factors' association with education outcomes
  - Whether malleable factors and conditions moderate and/or mediate the relationship between malleable factors and education outcomes
- Explanation of the relationship between factors and education outcomes, in the form of one of the following:
  - A well-specified conceptual framework that supports a link between the malleable factors and education outcomes
  - A theoretical explanation for the factors' and conditions' moderation and/or mediation of the relationship between malleable factors and learner outcomes
- Determination, based on empirical evidence and the conceptual framework, of whether there is a basis for pursuing a Design and Development project or an Efficacy study, or whether further foundational/exploratory research is needed before proceeding to Efficacy or Effectiveness testing

Research Plan
- Define the study's hypotheses or research questions, derived from the study's theoretical and empirical justifications
- Describe the research design, demonstrating how it is appropriate for the hypotheses or research questions
- Justify the proposed context and sample for the study
- If secondary analyses are proposed, describe data sources
- Describe data collection procedures and instruments, including evidence of and strategies for ensuring reliability and validity
- If applicable, describe a plan to study the opportunities for interventions to address education challenges
- Describe data analysis procedures
- Describe the reporting plan

External Feedback
- Subject the study to a series of external, critical reviews of its design and activities via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
- Ensure the external review is sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings

TYPE 3: DESIGN AND DEVELOPMENT RESEARCH
To develop new or improved interventions or strategies to achieve well-specified learning goals or objectives, including making refinements on the basis of small-scale testing.

Justification

Policy and/or Practical Significance
- Specify the practical problem the intervention will address
- Justify the importance of the problem
- Describe how the intervention differs from existing practice
- Explain why the project has the potential to improve education outcomes or increase efficiencies in the education system or institutional setting

Theoretical and Empirical Basis
- Describe the theoretical or empirical justification for the intervention
- If the theoretical basis rests on evidence related to individual components, explain how combining these components in a new intervention is expected to achieve intended outcomes
- Provide a well-explicated theory of action or logic model for the intervention, including key components and their relationships, both theoretical and operational (a minimal structured sketch follows this checklist)

Evidence

Project Outcomes
- Fully developed version of the intervention, including all materials necessary for implementation
- Well-specified theory of action, including evidence supporting or refuting key assumptions of the intervention's original theoretical basis
- Description of the major design iterations and resulting evidence to support key assumptions about the theory of action
- Description and empirical evidence of the adjustments to the theory of action and intervention design that resulted from design testing
- Measures with evidence of technical quality for assessing the implementation of the intervention under typical conditions
- Pilot data on the intervention's promise for generating intended education outcomes

Research Plan
- Describe the method for developing the intervention to the point where it can be used by the intended end users
- Describe methods for collecting evidence on the feasibility of implementation by end users under typical conditions
- Describe the method for obtaining pilot data on the intervention's promise for achieving intended outcomes

External Feedback
- Subject the project's design and activities to a series of external, critical reviews via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
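The Guidelines ask for a well-explicated theory of action or logic model but do not prescribe a format. The sketch below is one hypothetical way to make key components and their relationships explicit; the intervention, component names, and links are illustrative placeholders, not content from the Guidelines.

```python
# Hypothetical logic model for an illustrative tutoring intervention, written as a
# plain data structure so components, mediators, outcomes, and the assumed causal
# links among them are stated explicitly. All names are placeholders.
logic_model = {
    "inputs": ["tutor training", "curriculum materials", "coaching time"],
    "activities": ["weekly small-group tutoring", "progress monitoring"],
    "mediators": ["targeted skill practice", "student engagement"],
    "outcomes": {
        "proximal": ["curriculum-aligned assessment scores"],
        "distal": ["state reading assessment scores"],
    },
    # Each pair records an assumed relationship to be tested or revised during
    # design iterations and small-scale testing.
    "links": [
        ("weekly small-group tutoring", "targeted skill practice"),
        ("targeted skill practice", "curriculum-aligned assessment scores"),
        ("student engagement", "curriculum-aligned assessment scores"),
    ],
}

# Print the assumed causal links in a readable form.
for source, target in logic_model["links"]:
    print(f"{source} -> {target}")
```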

TYPE 4: EFFICACY RESEARCH
To determine whether an intervention or strategy can improve outcomes under ideal conditions (e.g., with more implementation support, highly trained personnel, and/or more homogeneous participants than is typical).

Justification

Policy and/or Practical Significance
- Describe the intervention to be tested
- Specify the practical problem the proposed intervention will address
- Justify the importance of the problem
- Describe how the intervention differs from other approaches to addressing the problem
- Explain why and how the intervention will be studied under ideal conditions rather than routine practice
- Identify the implementation settings and populations

Theoretical and Empirical Basis
- Justify the research through one or more of the following:
  - Empirical evidence of the promise of the intervention from a well-designed and implemented pilot study
  - Empirical evidence from an Early-Stage research study supporting the intervention's theory of action
  - Evidence that the intervention is widely used even though its efficacy has not been established
- If the study is a replication of a study with a different population, provide:
  - Evidence of positive impacts from a previous well-designed and implemented efficacy study
  - Justification for studying the intervention with a new population

Evidence

Project Outcomes
- Descriptions of the study goals, design and implementation, data collection and quality, and analysis and findings [1]
- Reliable estimates of the intervention's average impact
- If possible, estimates for sample subgroups (e.g., by setting, population group, or cohort)
- Documentation of the implementation of the intervention and the counterfactual condition in sufficient detail for readers to judge the applicability of the findings
- Discussion of the implications of the findings for the intervention's theory of action
- If favorable impacts are found, description of the intervention's organizational supports, tools, and procedures that were key features of implementation
- If no favorable impacts are found, discussion of possible reasons

[1] As outlined in the What Works Clearinghouse Reporting Guide at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=235

Research Plan*
- Identify and justify the following:
  - Study design used to estimate the intervention's causal impact on the outcomes of interest
  - Key outcomes of interest and the minimum size of impact that would have policy or practical relevance
  - Study setting(s) and target population(s)
  - Sample, including the power it provides for detecting impact
  - Data collection plan, including information about:
    - Procedures
    - Measures
    - Evidence on and strategies for ensuring reliability and validity
    - Plans for collecting data on implementation, comparison group practices, and study context
  - Analysis plan
  - Reporting plan

*The Guidelines include the following additional guidance regarding the design of Efficacy, Effectiveness, and Scale-Up Research:
- Use designs that will yield impact estimates with strong causal validity and that, for example, could meet What Works Clearinghouse standards without reservations (see http://ies.ed.gov/ncee/wwc/). Generally, and when feasible, include random assignment to treatment and comparison groups. Use quasi-experimental designs, such as matched comparison groups or regression discontinuity designs, only when there is direct, compelling evidence demonstrating the implausibility of common threats to internal validity.
- Study sample size and allocation to condition should be such that the minimum true impact detectable with 80 percent power and a 95 percent confidence interval is no larger than the minimum size of impact relevant for policy or practice. If that is not the case, provide a rationale for conducting the study despite its not meeting this standard. (A brief sketch of this check follows this list.)
- Primary outcome measures should include student outcomes sensitive to the performance change the intervention is intended to bring about, student outcomes not strictly aligned with the intervention, and student outcomes of practical interest to educators and policymakers. Outcomes should be pre-specified, have been demonstrated as reliable and valid for the intended purposes, and be based on data-collection methods that have been shown to yield reliable data.
- Measure the strength and qualities of implementation to address whether the intervention's impact estimates may be linked to how it was implemented. Measure comparison group practices and/or conditions to support a clear characterization of the contrast between the intervention and comparison condition. Identify the measures, their validity and reliability, and how data will be collected.
- Specify analytic models that reflect the sample design and maximize the likelihood of obtaining unbiased, efficient estimates of average impacts and the confidence intervals around those impacts.
- Describe additional analyses conducted to explore variability in the intervention's impacts and possible implications for the theory of change, e.g., subgroup analyses (expected in Effectiveness and Scale-Up studies), exploration of co-variation between impact estimates and fidelity of implementation or intervention contrasts, and evidence of possible moderator and mediator effects.
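The sample-size guidance above can be checked directly: given the planned sample, the minimum detectable effect at 80 percent power and a 95 percent confidence level should be no larger than the effect size judged relevant for policy or practice. The snippet below is a minimal sketch assuming a two-group, individually randomized design with no clustering or covariate adjustment; the group sizes and the 0.20 SD policy-relevant threshold are hypothetical.

```python
# Minimal minimum-detectable-effect check (assumptions: two-group individually
# randomized design, no clustering or covariates; all numbers are hypothetical).
from statsmodels.stats.power import TTestIndPower

n_treatment = 300
n_comparison = 300
policy_relevant_effect = 0.20  # hypothetical smallest impact (in SD) worth acting on

power_analysis = TTestIndPower()
mdes = power_analysis.solve_power(
    nobs1=n_treatment,
    ratio=n_comparison / n_treatment,
    alpha=0.05,          # two-sided test at the 95% confidence level
    power=0.80,
    alternative="two-sided",
)
print(f"Minimum detectable effect size: {mdes:.3f} SD")
if mdes > policy_relevant_effect:
    print("Sample is too small for the policy-relevant effect; enlarge it or justify the study.")
```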

External Feedback
- Subject the project to a series of external, critical reviews of its design and activities via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
- Ensure the external review is sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings

TYPE 5: EFFECTIVENESS RESEARCH
To estimate the impacts of an intervention or strategy when implemented under conditions of routine practice (i.e., conditions similar to what would occur if a study were not being conducted).

Justification

Policy and/or Practical Significance
- Describe the intervention to be tested
- Specify the practical problem the intervention will address
- Justify the importance of the problem
- Describe how the intervention differs from other approaches to addressing the problem
- Explain why and how the intervention will improve education outcomes or increase efficiencies in the education system
- Explain why the intervention will be studied under typical, rather than ideal, conditions
- Identify the implementation setting(s) and population(s)

Theoretical and Empirical Basis
- Provide empirical evidence of the intervention's efficacy, as demonstrated by one or more of the following:
  - Statistically significant and substantively important impact estimates from either:
    - One study that includes multiple sites or settings [2]
    - Two studies that include one site or setting [2]
  - Evidence that the intervention is widely used even though its efficacy has not been established

[2] Studies must meet guidelines for evidence for impact studies (i.e., Efficacy, Effectiveness, and Scale-Up Research).

Evidence

Project Outcomes
- Descriptions of the study goals, design and implementation, data collection and quality, and analysis and findings [3]
- Reliable estimates of the intervention's average impact
- If possible, estimates for sample subgroups (e.g., by setting, population group, or cohort)
- Documentation of the implementation of the intervention and the counterfactual condition in sufficient detail for readers to judge the applicability of the findings
- Discussion of the implications of the findings for the intervention's theory of action
- If favorable impacts are found, description of the intervention's organizational supports, tools, and procedures that were key features of implementation
- If no favorable impacts are found, discussion of possible reasons

[3] As outlined in the What Works Clearinghouse Reporting Guide at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=235

Research Plan*
- Identify and justify the following:
  - Study design used to estimate the intervention's causal impact on the outcomes of interest
  - Key outcomes of interest and the minimum size of impact that would have policy or practical relevance
  - Study setting(s) and target population(s)
  - Sample, including the power it provides for detecting impact
  - Data collection plan, including information about:
    - Procedures
    - Measures
    - Evidence on and strategies for ensuring reliability and validity
    - Plans for collecting data on implementation, comparison group practices, and study context
  - Analysis plan
  - Reporting plan

*The Guidelines include the following additional guidance regarding the design of Efficacy, Effectiveness, and Scale-Up Research:
- Use designs that will yield impact estimates with strong causal validity and that, for example, could meet What Works Clearinghouse standards without reservations (see http://ies.ed.gov/ncee/wwc/). Generally, and when feasible, include random assignment to treatment and comparison groups. Use quasi-experimental designs, such as matched comparison groups or regression discontinuity designs, only when there is direct, compelling evidence demonstrating the implausibility of common threats to internal validity.
- Study sample size and allocation to condition should be such that the minimum true impact detectable with 80 percent power and a 95 percent confidence interval is no larger than the minimum size of impact relevant for policy or practice. If that is not the case, provide a rationale for conducting the study despite its not meeting this standard.
- Primary outcome measures should include student outcomes sensitive to the performance change the intervention is intended to bring about, student outcomes not strictly aligned with the intervention, and student outcomes of practical interest to educators and policymakers. Outcomes should be pre-specified, have been demonstrated as reliable and valid for the intended purposes, and be based on data-collection methods that have been shown to yield reliable data.
- Measure the strength and qualities of implementation to address whether the intervention's impact estimates may be linked to how it was implemented. Measure comparison group practices and/or conditions to support a clear characterization of the contrast between the intervention and comparison condition. Identify the measures, their validity and reliability, and how data will be collected.
- Specify analytic models that reflect the sample design and maximize the likelihood of obtaining unbiased, efficient estimates of average impacts and the confidence intervals around those impacts.
- Describe additional analyses conducted to explore variability in the intervention's impacts and possible implications for the theory of change, e.g., subgroup analyses (expected in Effectiveness and Scale-Up studies), exploration of co-variation between impact estimates and fidelity of implementation or intervention contrasts, and evidence of possible moderator and mediator effects. (A minimal subgroup-analysis sketch follows this list.)
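The subgroup and moderator analyses mentioned in the final bullet above are commonly implemented as interaction terms in the impact model. The snippet below is a minimal sketch using statsmodels formulas; the variable names (score, treatment, female, baseline) and the data file are hypothetical, and a real effectiveness study would use an analytic model that mirrors its actual sample design (e.g., clustering).

```python
# Minimal subgroup/moderator sketch (hypothetical variable names and data file;
# a real analysis would reflect the study's sample design, e.g., clustered errors).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # hypothetical analysis file

# Average impact: regress the outcome on a treatment indicator plus a baseline covariate.
average_impact = smf.ols("score ~ treatment + baseline", data=df).fit()

# Moderation: the treatment-by-subgroup interaction tests whether the impact differs
# for the subgroup (here, a hypothetical binary 'female' indicator).
moderated_impact = smf.ols("score ~ treatment * female + baseline", data=df).fit()

print(average_impact.params["treatment"])           # estimated average impact
print(moderated_impact.params["treatment:female"])  # difference in impact for the subgroup
```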

External Feedback
- Subject the project to a series of external, critical reviews of its design and activities via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
- Ensure the external review is sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings

TYPE 6: SCALE-UP RESEARCH
To estimate the impacts of an intervention or strategy under conditions of routine practice and across a broad spectrum of populations and settings, sufficiently diverse to broadly generalize findings.

Justification

Policy and/or Practical Significance
- Describe the intervention to be tested
- Specify the practical problem the intervention will address
- Justify the importance of the problem
- Describe how the intervention differs from other approaches to addressing the problem
- Explain why and how the intervention will improve education outcomes or increase efficiencies in the education system
- Explain why the intervention will be studied under typical conditions with a broad sample, rather than ideal or routine conditions
- Identify the implementation setting(s) and population(s)

Theoretical and Empirical Basis
- Provide empirical evidence of the intervention's efficacy, as demonstrated by one or more of the following:
  - Statistically significant and substantively important impact estimates from either:
    - One study that includes multiple sites or settings [4]
    - Two studies that include one site or setting [4]

[4] Studies must meet guidelines for evidence for impact studies (i.e., Efficacy, Effectiveness, and Scale-Up Research).

Evidence

Project Outcomes
- Descriptions of the study goals, design and implementation, data collection and quality, and analysis and findings [5]
- Reliable estimates of the intervention's average impact
- If possible, estimates for sample subgroups (e.g., by setting, population group, or cohort)
- Documentation of the implementation of the intervention and the counterfactual condition in sufficient detail for readers to judge the applicability of the findings
- Discussion of the implications of the findings for the intervention's theory of action
- If favorable impacts are found, description of the intervention's organizational supports, tools, and procedures that were key features of implementation
- If no favorable impacts are found, discussion of possible reasons

[5] As outlined in the What Works Clearinghouse Reporting Guide at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=235

Research Plan*
- Identify and justify the following:
  - Study design used to estimate the intervention's causal impact on the outcomes of interest
  - Key outcomes of interest and the minimum size of impact that would have policy or practical relevance
  - Study setting(s) and target population(s)
  - Sample, including the power it provides for detecting impact
  - Data collection plan, including information about:
    - Procedures
    - Measures
    - Evidence on and strategies for ensuring reliability and validity
    - Plans for collecting data on implementation, comparison group practices, and study context
  - Analysis plan
  - Reporting plan

*The Guidelines include the following additional guidance regarding the design of Efficacy, Effectiveness, and Scale-Up Research:
- Use designs that will yield impact estimates with strong causal validity and that, for example, could meet What Works Clearinghouse standards without reservations (see http://ies.ed.gov/ncee/wwc/). Generally, and when feasible, include random assignment to treatment and comparison groups. Use quasi-experimental designs, such as matched comparison groups or regression discontinuity designs, only when there is direct, compelling evidence demonstrating the implausibility of common threats to internal validity.
- Study sample size and allocation to condition should be such that the minimum true impact detectable with 80 percent power and a 95 percent confidence interval is no larger than the minimum size of impact relevant for policy or practice. If that is not the case, provide a rationale for conducting the study despite its not meeting this standard.
- Primary outcome measures should include student outcomes sensitive to the performance change the intervention is intended to bring about, student outcomes not strictly aligned with the intervention, and student outcomes of practical interest to educators and policymakers. Outcomes should be pre-specified, have been demonstrated as reliable and valid for the intended purposes, and be based on data-collection methods that have been shown to yield reliable data.
- Measure the strength and qualities of implementation to address whether the intervention's impact estimates may be linked to how it was implemented. Measure comparison group practices and/or conditions to support a clear characterization of the contrast between the intervention and comparison condition. Identify the measures, their validity and reliability, and how data will be collected.
- Specify analytic models that reflect the sample design and maximize the likelihood of obtaining unbiased, efficient estimates of average impacts and the confidence intervals around those impacts. (A minimal multi-site model sketch follows this list.)
- Describe additional analyses conducted to explore variability in the intervention's impacts and possible implications for the theory of change, e.g., subgroup analyses (expected in Effectiveness and Scale-Up studies), exploration of co-variation between impact estimates and fidelity of implementation or intervention contrasts, and evidence of possible moderator and mediator effects.
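For a scale-up study spanning many sites, one common way to specify an analytic model that reflects the sample design is a multilevel model with site-level random effects. The snippet below is a minimal sketch using statsmodels' MixedLM; the variable names and data file are hypothetical, and designs that randomize whole sites or classrooms would need the corresponding nesting structure and assignment-level inference.

```python
# Minimal multi-site impact model sketch (hypothetical variables and data file).
# A random intercept for each site accounts for the clustering of students within sites.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multisite_outcomes.csv")  # hypothetical analysis file

model = smf.mixedlm(
    "score ~ treatment + baseline",  # fixed effects: treatment impact plus a covariate
    data=df,
    groups=df["site"],               # random intercept by site
)
result = model.fit()
print(result.summary())
```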

External Feedback
- Subject the project to a series of external, critical reviews of its design and activities via one or more of the following strategies:
  - Peer review of the proposed project
  - Ongoing monitoring and review by the grant-making agency's personnel
  - External review panels or advisory boards proposed by the project and/or the agency
  - Third-party evaluator
  - Peer review of publications and conference presentations resulting from the project
- Ensure the external review is sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings