Evaluation: Designs and Approaches

July 2004

The choice of a design for an outcome evaluation is often influenced by the need to compromise between cost and certainty. Generally, the more certain you want to be about your program's outcomes and impact, the more costly the evaluation. It is part of an evaluator's job to help you make an informed decision about your evaluation design. This publication describes two of the basic choices that must be made when designing an evaluation. The first is the choice of an evaluation's design. The second concerns choosing a quantitative, qualitative, or mixed approach.

Issues to Consider When Selecting a Design

There are a number of important issues to consider before selecting an evaluation design. These include the following:

- Complex evaluations cost more, but allow for greater confidence in their findings. Complex evaluation designs are more difficult to implement and thus require greater expertise in research methods and analysis.

- Complex designs are not without their own problems. Increasing the complexity of an evaluation can also increase the chances that something will go wrong, especially if there is not enough evaluation expertise involved in the effort.

- No evaluation design is immune to threats to its validity. There is a long list of possible complications associated with any evaluation study. However, your evaluator will help you maximize the quality of your evaluation study.

- Don't assume that an intervention that works in one setting will always work in others. It's unlikely that any intervention will work equally well with all types of people and in all settings.

- Some evaluation is better than none. Though you may not have the money or resources to conduct the evaluation of your dreams, start somewhere, even if that means using the least rigorous design.

Four Commonly Used Designs for Outcome Evaluations

There are four commonly used designs for outcome evaluations. They are described below, starting with the least expensive design, which also provides the least certain results, and ending with the most costly but also most rigorous design.

One-Group, Post-Test Only Design (least expensive, least rigorous)

In the one-group, post-test only design, a post-test, such as a survey, is administered to the program or study participants after the intervention has been delivered. Although relatively inexpensive, this design does not measure changes in knowledge, attitudes, or behavior. Nor does it allow comparisons between people who took part in the program and people who did not.

One-Group, Pre-Test and Post-Test Design

The one-group, pre-test and post-test design is more informative than the post-test only design because it provides information on changes in knowledge, attitudes, or behavior of the program participants or study subjects that occurred during the time in which the intervention took place. All things being equal, this design can provide some evidence that the intervention produced these changes. However, it cannot conclusively demonstrate this. The changes may have occurred for other reasons. For example, they may be a consequence of the fact that participants were older and more mature at the post-test than they were at the pre-test. Or participants may have been exposed to another intervention (such as a national public education campaign) during the period in which they participated in the program.
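As a rough illustration of what the one-group, pre-test and post-test design yields, the following Python sketch computes the average pre-to-post change in scores for a single group of participants. The scores and variable names are hypothetical, invented only for the example.

# Hypothetical pre-test and post-test scores for one group of participants
# (e.g., a knowledge survey administered before and after a program).
pre_scores = [62, 70, 55, 68, 74, 60, 66, 71]
post_scores = [70, 75, 60, 72, 80, 63, 70, 78]

# Change score for each participant: post-test minus pre-test.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

mean_change = sum(changes) / len(changes)
print(f"Average change from pre-test to post-test: {mean_change:.1f} points")

# Note: a positive average change is consistent with the program having an
# effect, but without a comparison or control group this design cannot rule
# out other explanations (maturation, outside events, and so on).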

Pre-Test and Post-Test with Comparison Group Design

In evaluations using this design, the pre-tests and post-tests are administered both to the intervention group (that is, the people participating in the program) and to another, similar group that does not participate (and, consequently, does not receive the intervention). The addition of a comparison group helps determine whether any changes in knowledge, attitudes, or behavior can be attributed to the intervention.

The more similarity there is between the people who receive the intervention and those who do not, the more confidence there is that changes detected in the intervention group, but not the comparison group, were actually a result of the program or intervention. Thus, the comparison group should be as similar to the intervention group as possible in terms of gender, race or ethnicity, socioeconomic status, and education. This design is both more expensive and more complex than the two designs described previously. And it still leaves room for alternative explanations, because the intervention and comparison groups may differ in some undetected but important ways.

Pre-Test and Post-Test with Control Group Design (most expensive, most rigorous)

This design offers the greatest possibility of attributing evaluation outcomes to program activities. By randomly assigning individuals from the same target population to either an intervention or a control group, all members of that target population have an equal chance of becoming a member of either group. This should ensure that members of the intervention and control groups are similar with respect to the key variables that might affect their performance on the pre-test and post-test. This type of evaluation is more complex and expensive than the other three, but it provides the highest degree of certainty that it was the intervention that caused any changes detected by the evaluation.
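To make the random-assignment idea concrete, here is a minimal Python sketch. All of the data, the assumed program effect, and the names are hypothetical; the point is only to show assignment by chance and the comparison of average change between the two groups.

import random

# Hypothetical participant IDs drawn from the same target population.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Randomly assign each participant to the intervention or control group,
# so that everyone has an equal chance of ending up in either group.
random.seed(42)  # fixed seed only so the sketch is reproducible
shuffled = random.sample(participants, k=len(participants))
intervention_group = set(shuffled[:10])
control_group = set(shuffled[10:])

# Hypothetical pre-test scores; post-test scores include some natural drift
# (maturation, chance) plus an assumed program effect for the intervention group.
pre = {p: random.randint(50, 75) for p in participants}
post = {}
for p in participants:
    natural_drift = random.randint(-2, 4)
    program_effect = 6 if p in intervention_group else 0  # assumed, for illustration
    post[p] = pre[p] + natural_drift + program_effect

def mean_change(group):
    """Average post-test-minus-pre-test change for a group of participants."""
    return sum(post[p] - pre[p] for p in group) / len(group)

print(f"Intervention group average change: {mean_change(intervention_group):+.1f}")
print(f"Control group average change:      {mean_change(control_group):+.1f}")

# Because assignment was random, the two groups are similar on average, so a
# much larger change in the intervention group is more plausibly attributable
# to the intervention itself.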

Quantitative, Qualitative, and Mixed Approaches to Evaluation

Another choice to make when designing an evaluation is whether to gather quantitative data, qualitative data, or both.

Quantitative Data

Quantitative data can be counted, measured, and reported in numerical form and answer questions such as who, what, where, and how much. For example, a quantitative evaluation of a school-based violence prevention program might use disciplinary reports to discover that the intervention resulted in a 10 percent decrease in incidents of physical fighting on the campus.

The quantitative approach is useful for describing concrete phenomena and for statistically analyzing results, such as calculating the percentage decrease in cigarette use among 8th-grade students. Some examples of quantitative data include test scores, attendance rates, drop-out rates, and survey rating scales.

Advantages of collecting quantitative data include the following:

- Data collection instruments can be used with large numbers of study participants.
- Data collection instruments can be standardized, allowing for easy comparison within and across studies.
- Data are easily compiled for analysis.
- Findings can be presented succinctly.
- Findings are more widely accepted as being scientific and applicable than those from qualitative evaluations.
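As a minimal illustration of the kind of arithmetic behind figures like the percentage decreases mentioned above, this short Python sketch computes a percent change from hypothetical incident counts; the numbers are invented for the example.

# Hypothetical counts of physical-fighting incidents from disciplinary reports,
# before and after the violence prevention program.
incidents_before = 50
incidents_after = 45

# Percent change relative to the baseline period.
percent_change = (incidents_after - incidents_before) / incidents_before * 100
print(f"Change in incidents: {percent_change:+.1f}%")  # -10.0% in this example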

Qualitative Data

Qualitative data are reported in narrative form. Examples of qualitative data include written descriptions of program activities, testimonials of program effects, comments about how a program was or was not helpful, case studies, analyses of existing files, focus groups, key informant interviews, and observational studies.

Qualitative evaluations might not yield results that are accepted as being as scientifically rigorous as those from quantitative evaluations. But the qualitative approach can provide important insights into how well a program is working and what can be done to increase its impact. Qualitative data can also provide information about how participants, including the people responsible for operating the program as well as the target audience, feel about the program. For example, a qualitative evaluation of a school-based violence prevention program might use teacher interviews to gather representative comments about the program's impact, such as "I really think I learned a lot in the program. And I've been able to intervene in several situations that might otherwise have resulted in fights among students."

Benefits of collecting qualitative data include the following:

- It promotes understanding of diverse stakeholder perspectives (e.g., what the program means to different people).
- It may shed light on unanticipated outcomes.
- Stakeholders, funders, policymakers, and the public may find quotes and anecdotes easier to understand and more appealing than statistical data.
- It can generate new ideas about how to make the program work better.

Mixed-Method Evaluations

The ideal evaluation combines quantitative and qualitative methods. A mixed-method approach offers a range of perspectives on a program's processes and outcomes. Benefits of this type of approach include the following:

- It increases the validity of your findings by allowing you to examine the same phenomenon in different ways.
- It can result in better data collection instruments. For example, focus groups can be invaluable in the development or selection of a questionnaire used to gather quantitative data.
- It promotes greater understanding of the findings. Quantitative data can show that change occurred and how much change took place, while qualitative data can help you and others understand what happened and why.
- It offers something for everyone. Some stakeholders may respond more favorably to a presentation featuring charts and graphs. Others may prefer anecdotes and stories.

By using different sources and methods at various points in the evaluation process, your evaluation team can build on the strengths of each type of data collection and minimize the weaknesses of any single approach.

References: Additional evaluation resources can be found in the Evaluation Toolkit. This publication is based on material from Locating, Hiring, and Managing an Evaluator, a web-based course designed and implemented by the Northeast Center for the Application of Prevention Technologies (CAPT), and from Are You Making Progress? Increasing Accountability Through Evaluation, a web-based course developed and implemented by CAPT and the National Coordinator Training and Technical Assistance Center, Health and Human Development Programs, Education Development Center, Inc.