Research Methods: An Overview

There are many different methodologies that can be used to conduct educational research. The type of methodology selected by a researcher emanates directly from the research question that is being asked. In addition, some of the differing techniques for conducting educational research reflect different paradigms in scientific thought. In this entry, a review of the most commonly used methodologies is presented; in addition, the strengths and weaknesses of various methods are compared and contrasted.

OVERVIEW OF RESEARCH METHODS IN EDUCATION

Research methodologies can be classified in many different ways. For example, some researchers distinguish between quantitative and qualitative studies; others distinguish between experimental and non-experimental research; still others distinguish between research that is conducted in laboratories and research conducted in the field (i.e., in classrooms). Obviously, there are many ways to categorize research methods. However, there is also much overlap among such categorizations. For example, a non-experimental study can be either quantitative or qualitative, and an experimental study can include some qualitative components. This entry does not attempt to classify these methodologies; rather, the various methods are first briefly described and then compared and contrasted.

[Table 1, summarizing the strengths and weaknesses of the research designs, appears in the original. Illustration by GGS Information Services. Cengage Learning, Gale.]

Correlational Research. Correlational research involves quantitatively studying the relations between and among variables. One of the hallmarks of correlational research is that cause-and-effect relations cannot be determined. Researchers who engage in correlational research do not manipulate variables; rather, they collect data on existing variables and examine relations between those variables. A number of different statistical techniques can be used to analyze correlational data. An example of a correlational study would be an examination of the statistical relations between middle school students' standardized examination scores in mathematics and the students' demographic characteristics (e.g., gender, ethnicity, socioeconomic status, etc.).

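To make the idea concrete, here is a minimal sketch of a correlational analysis in Python; the data and variable names are entirely hypothetical, and SciPy's Pearson correlation stands in for whatever statistical technique a real study would use.

```python
# A minimal correlational sketch using hypothetical data: standardized math
# scores and a socioeconomic-status index for eight students.
from scipy.stats import pearsonr

math_scores = [512, 480, 545, 601, 433, 570, 498, 525]   # hypothetical scores
ses_index = [2.1, 1.8, 2.9, 3.4, 1.2, 3.1, 2.0, 2.6]     # hypothetical SES measure

r, p_value = pearsonr(math_scores, ses_index)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A nonzero r describes an association only; as the entry notes,
# correlational designs cannot establish cause and effect.
```
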
Experimental Research. In an experiment, participants are randomly assigned to one of several treatments. One of the most basic experimental designs involves random assignment to either an experimental group (which receives some kind of treatment) or a control group (which does not receive the treatment). If the differences in treatment between the experimental and the control group are tightly controlled, and if subsequent to the experiment there are measurable differences between the two groups that were not present before the experiment, then researchers often conclude that the experimental manipulation caused the differences to occur. Many researchers and government agencies consider true experiments to represent the gold standard in research; however, it is extraordinarily difficult to conduct true experiments in actual educational settings (i.e., schools). The primary reason for this difficulty is the fact that students can rarely be randomly assigned to conditions or classrooms in school settings. It is also important to distinguish between small-scale experiments and larger-scale clinical trials. Small-scale experiments can occur in settings such as laboratories or classrooms, whereas larger-scale clinical trials often occur across many classrooms or schools. An example of an experiment would be a study examining the effects of a video presentation on learning multiplication skills. Students in a classroom where all students are learning about multiplication could be randomly assigned either to watch a video that demonstrates multiplication skills or to watch an unrelated video (e.g., a video about how to make ice cream sundaes); the students would probably be asked to view the videos in a highly controlled environment, where the experimental and control conditions could be as similar as possible (except for the video presentation). If on a post-test the students who watched the multiplication video outperformed the other students, then a researcher could conclude that the video caused the improved performance.

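The core logic of random assignment followed by a post-test comparison can be sketched in a few lines of Python; the roster, scores, and the t-test below are all hypothetical stand-ins for whatever a real study would collect and run.

```python
# A sketch of a two-group true experiment: random assignment to a treatment
# (multiplication video) or control (unrelated video) condition, then a
# comparison of hypothetical post-test scores.
import random
from scipy.stats import ttest_ind

students = [f"student_{i:02d}" for i in range(30)]
random.shuffle(students)                       # random assignment is the key step
treatment_group, control_group = students[:15], students[15:]

# Hypothetical post-test scores (number correct out of 25) for each group.
treatment_scores = [18, 20, 17, 19, 21, 16, 18, 20, 19, 17, 22, 18, 19, 20, 17]
control_scores = [14, 16, 13, 15, 17, 12, 15, 14, 16, 13, 15, 14, 16, 15, 13]

t_stat, p_value = ttest_ind(treatment_scores, control_scores)
print(f"Treatment vs. control: t = {t_stat:.2f}, p = {p_value:.4f}")
# Because assignment was random, a reliable post-test difference supports
# a causal interpretation of the video's effect.
```
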
Quasi-Experimental Research. In quasi-experimental studies, researchers do not randomly assign participants to groups (Cook & Campbell, 1979). Quasi-experimentation is used often in educational research, because it is often impossible, and sometimes unethical, to randomly assign students to settings. In quasi-experimental studies, researchers attempt to control for differences between non-randomly assigned groups in a number of ways. Two of the most common methods are (a) matching and (b) statistical control. The following example illustrates both. A researcher is interested in comparing the effects of a traditional third-grade reading curriculum with the effects of an enhanced version of the curriculum that includes extra homework assignments. If the two versions of the curricula are being administered in different classrooms, the researcher can try to match similar classrooms on certain variables. For example, the researcher might decide to match classrooms on the teacher's years of experience, wherein highly experienced teachers (e.g., those with 20 or more years of teaching experience) might be paired, so that for each pair of highly experienced teachers, one is assigned to each condition. In addition, the researcher can statistically control for variables that are related to the outcome. If the researcher knows that variables such as socioeconomic status and prior reading ability are related to reading achievement, then the researcher can statistically control for these variables in order to better assess the unique effects of the new curriculum.

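A rough sketch of statistical control under hypothetical data: regressing the reading outcome on a curriculum indicator plus SES and prior reading ability estimates the curriculum effect with those covariates held constant. The numbers and variable names are invented for illustration.

```python
# Statistical control via multiple regression (ordinary least squares).
import numpy as np

# One row per classroom: [enhanced_curriculum (0/1), ses, prior_reading]
X = np.array([
    [1, 2.3, 51], [1, 1.9, 47], [1, 3.0, 55], [1, 2.5, 50],
    [0, 2.2, 52], [0, 2.0, 46], [0, 2.9, 54], [0, 2.4, 49],
], dtype=float)
y = np.array([63, 58, 70, 64, 57, 53, 64, 59], dtype=float)  # reading outcome

X_design = np.column_stack([np.ones(len(y)), X])   # add an intercept column
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# coefs[1] estimates the curriculum effect *holding SES and prior reading
# constant* -- the "unique effect" the entry describes.
print(f"Adjusted curriculum effect: {coefs[1]:.2f} points")
```
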
Qualitative Research. Qualitative research represents a broad framework for conducting educational studies. Whereas quantitative research focuses on measurable variations between and among variables, qualitative studies focus on holistic descriptions of learners and teachers in naturalistic settings. Fraenkel and Wallen (1996) describe five general characteristics of qualitative research studies:

1. Researchers collect their data in naturalistic settings (e.g., classrooms), by observing and participating in regular activities.
2. Data are collected via words or pictures (not via numerical or quantifiable indicators).
3. Processes (e.g., how individuals communicate with each other about a lesson) are as important as products (e.g., whether or not students obtain the correct answers to a problem).
4. Most qualitative researchers do not start out with specific hypotheses; rather, they use inductive methods to generate conclusions from their observations.
5. Qualitative researchers care about participants' perceptions; investigators are likely to question participants in depth about their beliefs, attitudes, and thought processes.

A variety of methods can be used to conduct qualitative studies. For example, qualitative researchers can collect their data from direct observations, from analyses of video or audio recordings, from interviews, or from long-term ethnographic studies. There are also a variety of ways of analyzing qualitative data. Generally, researchers carefully examine their data and identify themes that emerge from the data. Sometimes several researchers will analyze the same sources of data and then compare their conclusions, examining the extent to which they agree or disagree (inter-rater reliability); in other studies, one researcher will conduct all of the analyses and will also critically examine how his or her own biases may affect the interpretations. Software packages have been developed to assist qualitative researchers with data analysis. Two of the most commonly used packages are NVivo and NUD*IST.

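As one illustration of checking agreement between coders, here is a minimal Python sketch that computes raw agreement and Cohen's kappa for two hypothetical coders' theme labels; real qualitative projects would typically rely on the software packages mentioned above.

```python
# Inter-rater agreement for two coders who labeled the same eight episodes.
# Cohen's kappa corrects raw agreement for agreement expected by chance.
from collections import Counter

coder_a = ["engagement", "confusion", "engagement", "off-task",
           "engagement", "confusion", "off-task", "engagement"]
coder_b = ["engagement", "confusion", "off-task", "off-task",
           "engagement", "engagement", "off-task", "engagement"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: product of each coder's marginal proportions,
# summed over the theme categories.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[t] / n) * (freq_b[t] / n) for t in freq_a)

kappa = (observed - expected) / (1 - expected)
print(f"Raw agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```
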
Longitudinal and Cross-Sectional Research. Many research studies in education focus on developmental issues (i.e., how individuals change over time). For example, it is known that the reading strategies that young children use are different from the reading strategies adopted by older children (Pressley & Harris, 2006). There are several different methods that can be used to examine such developmental phenomena. In a longitudinal study, researchers collect data on the same individuals over a number of different time periods, or waves. Thus the same group of students might complete study assessments at the end of first grade, second grade, third grade, and fourth grade; researchers can then examine changes in student data across those four years. In a cross-sectional study, researchers collect data on individuals of differing ages or developmental levels at the same time. Thus data are collected for many students, at one time interval only. For example, a researcher might give assessments to 200 first graders, 200 second graders, 200 third graders, and 200 fourth graders all at the same time. The researcher can then compare the results of students in these four different grades and try to draw some conclusions about developmental differences. Most researchers agree that, when possible, longitudinal studies provide better developmental data than cross-sectional studies. The primary advantage of longitudinal studies is that the same individuals are assessed at different time points; it is therefore easier to make inferences about true development over time, since the distinct data points represent the same individuals across different time periods. However, longitudinal research is often difficult to conduct: it is very expensive, and it is often hard to track individuals over time, since many of the students who participate in the first wave of data collection may have moved or may not want to participate in later waves of the study.

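The difference between the two designs is easy to see in the shape of the data. The small Python sketch below uses invented numbers: the longitudinal layout supports within-student change scores, while the cross-sectional layout supports only between-group comparisons.

```python
# Longitudinal layout: the same hypothetical students measured at every wave
# (reading scores at the end of grades 1 through 4).
longitudinal = {
    "ana":   [41, 49, 58, 66],
    "ben":   [38, 44, 53, 61],
    "carla": [45, 52, 60, 69],
}
gains = [scores[-1] - scores[0] for scores in longitudinal.values()]
print(f"Mean within-student gain, grade 1 to 4: {sum(gains) / len(gains):.1f}")

# Cross-sectional layout: different students sampled once, grouped by grade,
# so only between-group (grade-level) means can be compared.
cross_sectional = {1: [40, 43, 39], 2: [47, 50, 46], 3: [56, 58, 55], 4: [64, 67, 63]}
grade_means = {g: sum(s) / len(s) for g, s in cross_sectional.items()}
print("Grade means (between-student comparison):", grade_means)
```
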
Design Experiments. When researchers conduct design experiments, they examine the effects of educational interventions in actual classrooms while the interventions are being implemented. As results are obtained and analyzed, the intervention is changed and continuously re-evaluated (Brown, 1992). Cobb, Confrey, diSessa, Lehrer, and Schauble (2003) identified five overarching features of design experiments:

1. The purpose of design experiments is to develop theories about learning (including how learning is supported).
2. Design experiments involve an intervention, or the introduction of a new instructional technique.
3. In design experiments, researchers attempt to develop new theoretical perspectives, but they must also test and refine their theories along the way.
4. Design experiments have iterative designs; as theories change during the study, the design of the study must be revised and altered accordingly.
5. The theories that are developed in design experiments should affect future instruction.

An example of a design experiment might be a study of a new curriculum designed to teach adolescents about HIV and pregnancy prevention. The curriculum might be introduced into the classroom setting; then, after initial presentation of the first few units, the researchers might collect data and make some alterations to the next units based on those data. This process can continue until the curriculum is substantially improved.

It is important to note that in design experiments, the changes in instruction that occur across iterations are often confounded with the teacher's growing familiarity with the approach as a whole. This can be problematic, because it hinders researchers' ability to make causal inferences.

Microgenetic Research. In microgenetic research studies, the same individual is observed intensively over a long period of time; this could be for many weeks or even months. Data are collected in order to examine both large-scale and small-scale changes in learners' use of strategies over time (Kuhn, 1995). Data can be analyzed via either quantitative or qualitative methods, depending on the types of data that are collected. As noted by Chinn (2006), most educational research using a microgenetic approach has examined learners' use of cognitive strategies (e.g., problem solving). Microgenetic studies are time consuming and can be expensive, but they also can provide researchers with rich and detailed information concerning cognitive processes in learners. An example of a microgenetic study would be an examination of a kindergartener's strategy use in solving simple addition problems over a three-month period.

Single-Subject Research. In a single-subject study, there is only one participant. Researchers generally examine a variable at a baseline stage (prior to the start of an intervention), and then later examine how this variable changes at different time intervals as an intervention is introduced. In single-subject research, control or comparison groups are not used. Researchers are particularly interested in whether or not patterns replicate over time within the same subject; in addition, researchers also examine whether or not similar patterns can be generated in new subjects. Single-subject studies are particularly common in the special education literature, although this methodology can be used in other areas of educational research as well. An example of a single-subject study would be an examination of the effect of classical music on the ability of a learning-disabled child to solve single-digit addition problems. First, the child's baseline addition skills would be assessed; then, the student's skills in the presence of music would be measured. The music might then be alternately started and stopped several times, while the student's problem-solving skills are continuously assessed.

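A hypothetical sketch of how data from such an alternating (ABAB) single-subject design might be organized and summarized in Python follows; the session counts are invented for illustration.

```python
# ABAB single-subject data: the music intervention is alternately withdrawn
# and reintroduced, and the analyst looks for the problem-solving rate to
# rise and fall with each phase.
phases = [
    ("A1 baseline",     [3, 4, 3, 4]),   # correct problems per session, no music
    ("B1 intervention", [6, 7, 7, 8]),   # classical music playing
    ("A2 withdrawal",   [4, 3, 4, 4]),   # music stopped again
    ("B2 intervention", [7, 8, 8, 7]),   # music reintroduced
]

for label, sessions in phases:
    mean = sum(sessions) / len(sessions)
    print(f"{label}: mean correct = {mean:.1f}")
# Replication of the jump from every A phase to the following B phase,
# within the same child, is what supports an intervention effect here.
```
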
Action Research. Action research is research that is conducted by classroom teachers, examining their own practices. The goal of action research is to examine one's practices critically and then to make changes to those practices based on the results of the research. Action research can be conducted by a single teacher or by a group of educators working together. Ferrance (2000) summarizes five steps in action research:

1. Identify the problem or question that is going to be investigated.
2. Gather data to help answer the driving question. Data can be collected in many forms (e.g., interviews with students, surveys, journals, video or audio tapes, samples of student work, etc.).
3. Interpret the data by critically examining all data sources and identifying major themes.
4. Evaluate results; in particular, examine whether or not the research question has been answered.
5. Take next steps: develop additional research questions, or make changes to instructional methods.

Action research can improve instruction for students; in addition, it can empower teachers, since it is a tool that allows them to judge their own efforts and evaluate the outcomes of their practices.

STRENGTHS AND WEAKNESSES OF RESEARCH DESIGNS

Each of the aforementioned research designs has both strengths and weaknesses. Some of these differences are obvious, but others are not. Table 1 presents some examples of the key strengths and weaknesses of the various research methodologies discussed in this entry. This is not an exhaustive list; rather, it is provided to demonstrate that each methodology is complex and has both pros and cons. When researchers and consumers of research evaluate the strengths and weaknesses of various designs, there are many issues to consider. Specifically, there are several key questions that can serve as a framework for evaluating research designs. The main questions are discussed below.

The Research Question. The research question is by far the most important consideration when selecting and evaluating a research design. In all educational studies, the major research question should be articulated before the methodology is selected; the appropriate methodology should then be chosen based on that question. Most social scientists agree that a preferred methodology should not be used as a framework to guide research. For example, a large school district might want to know whether high school students' foreign language pronunciation is better after two years of studying Spanish or two years of studying French. The research question might be: What is the relation between studying French versus studying Spanish and foreign language pronunciation after two years of study? The researcher then must decide which research design is the most appropriate to answer this specific question. In this example, the researcher can easily eliminate several options. For instance, a true experiment would be impossible, since students cannot be randomly assigned to Spanish or French classes. In addition, the researcher might decide that qualitative and microgenetic studies are inappropriate, since the researcher is not interested in the processes or developmental trends that occur over time. There are several other questions that the researcher must also address that will help to finalize the decision.

The Sample Being Studied. Researchers must consider the nature of their samples when selecting a methodology. This is an important consideration because some methodologies are challenging to implement with certain populations. For example, most studies that use survey-based methodologies require the participants to be able to read the survey items. If the sample included young children, or individuals with impaired visual abilities, this might preclude the use of a self-administered survey. Likewise, if the researcher is studying a large sample with more than 1,000 participants, in many cases this would prohibit the investigator from implementing single-subject designs, since the sample is so large.

Resources Available to Do the Research. Many resources are needed to complete research studies. Novice researchers often do not realize the costs involved with educational studies. A college student doing a small study for a research methods course will certainly not have the same resources available as an experienced investigator with a multimillion-dollar grant. Resources involve more than money. Another important consideration is personnel. Some research methodologies require more personnel than others. For example, a microgenetic study might be carried out by one investigator who can focus on the progress of a few subjects. In contrast, a large experimental study that requires collection of large amounts of data from many participants will require many more personnel. Thus if fewer resources are available, a researcher might not be able to use the ideal methodology to conduct a study. Time is another important resource that often affects the type of methodology that is chosen for a particular study. A design experiment that involves continuous evaluation of progress and setting of goals might be ideal if a researcher has enough time to devote to a long-term study. Some studies (e.g., longitudinal studies) take a long time to complete.
Thus a researcher who is interested in examining developmental issues, but who does not have a lot of time and funding, might select a cross-sectional methodology instead.

The Intended Audience for the Research. Different audiences will benefit from different kinds of research studies. If the audience is practitioners, then action research might be highly appropriate. First, teachers can be directly involved in action research studies; second, other educators might be more willing to accept results obtained from one of their peers via action research than results from unknown researchers. Certain funding agencies might be interested in funding only some types of studies. For example, there is much debate among educational researchers about the advantages and disadvantages of using experimental designs in educational research; whereas many funding agencies encourage experimental studies, many educational researchers argue that true experiments are sometimes difficult to implement in actual classroom settings.

Using Mixed Methods. Many educational issues are multifaceted and complex; consequently, a single methodology often will not yield all of the essential information that researchers desire. Given the strengths and weaknesses of the various designs, and the many decisions that researchers must make before choosing a methodology, a number of scholars in recent years have begun to use mixed methods in educational research. When researchers use mixed methods, they use a variety of different methodologies within the same study. A mixed methods study is usually challenging, because the researchers must be able to utilize multiple designs appropriately. Some mixed methods studies involve two or more methodologies being carried out simultaneously, whereas others involve a succession of different studies, all designed to answer one general research question.

An example is a study conducted by Turner and her colleagues (Turner et al., 2002). In that study, the researchers were interested in examining the relations between early adolescents' perceptions of the classroom environment and the students' use of avoidance strategies (e.g., avoidance of asking for help from the teacher) in math classrooms. The researchers realized that the use of multiple methods would help them best answer their research question. Therefore, they conducted a study in which longitudinal survey data were collected from a sample of more than 1,000 students. The researchers also randomly selected nine classrooms in which they conducted observations. The final analysis of data included quantitative results from the surveys as well as qualitative results from detailed discourse analyses of the classrooms. Each source of data provided different types of information, which allowed the researchers to examine a variety of indicators of students' use of avoidance strategies. The quantitative survey data allowed the researchers to examine the relations of both student characteristics (e.g., gender) and students' perceptions of classroom environments to the use of avoidance strategies; the observational data allowed the researchers to examine the discourse patterns in classrooms with different types of learning environments.

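As a loose illustration of the mixed-methods idea, the Python sketch below joins hypothetical classroom-level survey means with hypothetical observation codes; it is not a reconstruction of the Turner et al. analysis, only a sketch of how two data streams might be lined up.

```python
# Mixed-methods alignment: quantitative survey means joined with qualitative
# observation codes, keyed by classroom. All values are hypothetical.
survey_means = {            # mean avoidance-strategy score per classroom
    "room_101": 2.8,
    "room_102": 1.6,
    "room_103": 2.1,
}
observation_codes = {       # dominant discourse pattern coded by observers
    "room_101": "performance-focused talk",
    "room_102": "supportive help-seeking norms",
    "room_103": "mixed messages about mistakes",
}

for room in survey_means:
    print(f"{room}: avoidance = {survey_means[room]:.1f}, "
          f"observed climate = {observation_codes[room]}")
# Each data source answers a different part of the question: the surveys
# index how much students avoid help; the observations suggest why.
```
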
In summary, research methodology is a complex topic. This entry has described some of the most basic issues in the research enterprise, some of the methods that educational researchers use in their work, and some of the complexities involved in deciding on an appropriate methodology. Ultimately, the methodology that is chosen will be determined by the specific research question and by the resources that are available. Most research studies have limitations, which often are related to the design of the study. Research can always be improved, and it is important for scholars engaged in educational research to critically evaluate their designs and to acknowledge the limitations of their studies. As new researchers replicate previous studies, they often will attempt to eliminate the design problems encountered by previous researchers. This is one of the most important ways in which educational researchers can continue to improve and enhance knowledge about teaching and learning.

BIBLIOGRAPHY

Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions. Journal of the Learning Sciences, 2, 141-178.

Chinn, C. (2006). The microgenetic method: Current work and extensions to classroom research. In J. L. Green, G. Camilli, & P. B. Elmore (Eds.), Handbook of complementary methods in education research (pp. 439-456). Mahwah, NJ: Erlbaum.

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32, 9-13.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

Ferrance, E. (2000). Action research. Providence, RI: Northeast and Islands Regional Educational Laboratory at Brown University.

Fraenkel, J. R., & Wallen, N. E. (1996). How to design and evaluate research in education (3rd ed.). New York: McGraw-Hill.

Kuhn, D. (1995). Microgenetic study of change: What has it told us? Psychological Science, 6, 133-139.

Pressley, M., & Harris, K. R. (2006). Cognitive strategies instruction: From basic research to classroom instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed., pp. 265-286). Mahwah, NJ: Erlbaum.

Turner, J. C., Midgley, C., Meyer, D. K., Gheen, M., Anderman, E. M., Kang, Y., & Patrick, H. (2002). The classroom environment and students' reports of avoidance strategies in mathematics: A multimethod study. Journal of Educational Psychology, 94, 88-106.

Copyright 2003-2009 The Gale Group, Inc. All rights reserved.