CHAPTER 3 RESEARCH DESIGN AND METHODOLOGY

3.1. Introduction.

In chapter two the literature related to the problem investigated was reviewed. The main purpose of this was to give a theoretical framework to the study, to outline the current global situation with regard to practical work, and to give reasons for suggesting the use of the microchemistry kits in a Mozambican context. This chapter discusses the strategies, the instruments and the procedures used to collect and analyze the data. It also gives an overview of the instruments and procedures used to answer the research questions and to attain the aims of the study.

In this study, the strategies used refer mainly to the research design, which is concerned with defining the research paradigm and the methods used in the research. It is generally agreed that there are two main approaches to educational research, quantitative and qualitative (Yin, 1984; Fraenkel & Wallen, 1990; Welman & Kruger, 2001). In this research a further paradigm has also been considered: a pragmatic or eclectic paradigm based on a mixture of methods from both the qualitative and quantitative stances. The instruments refer to the tools used to collect data. In science education the instruments generally used to collect data are questionnaires, interviews, diagnostic tests and observation schedules. The procedures refer to the methods used to collect, organize and analyze the data. Depending on the research paradigm used, a research program can employ statistics, detailed descriptions, triangulation of data and other methods. The strategies, instruments and procedures used to collect the data were chosen by taking the nature of the research problem, the research questions and the objectives of the research into account.

3.2. Research Design.

Taking the research problem and the approach used to answer the research questions into consideration, this research relies mainly on a pragmatic paradigm. The use of a diagnostic test to analyze learners' understanding of chemistry before and after a certain period of intervention tends to classify the study as quantitative research: the manipulation of situations, the reliance on statistical indices, and the preference for obtaining a meaningful sample size are all features that Fraenkel & Wallen (1990) and Schumacher & McMillan (1993) consider characteristic of a quantitative paradigm. Within a quantitative paradigm this research can be classified by definition as experimental, because independent and dependent variables, the existence of experimental and comparison groups, and pre-tests and post-tests are all components of the study (Babbie, 1995; Huysamen, 2001; Welman & Kruger, 2001). Nevertheless, the study cannot be fully classified as experimental research, because the sample was not randomly chosen. Random selection is normally done to equate the units of analysis (in this case the pre-test and post-tests) in terms of known and unknown nuisance variables (Babbie, 1995; Huysamen, 2001; Welman & Kruger, 2001). Therefore, the research may be considered quasi-experimental.

The use of a questionnaire as an instrument to investigate both teachers' and learners' opinions about practical work classifies the study as qualitative: the study of participants' feelings, thoughts, beliefs and ideals, the preference for narrative description, and the holistic description of complex phenomena are all aspects that Guba & Lincoln (1983), Schumacher & McMillan (1993) and Babbie (1995) consider features of the qualitative paradigm. In addition, according to Borg & Gall (1989, p. 404): "Qualitative methods are probably the best means for discovering educational problems and enabling researchers to better understand the total environment in which education takes place."

This research therefore adopts a pragmatic approach, because it uses methods from both the quantitative and the qualitative paradigms. This methodological approach was chosen because it was thought to be the best way of obtaining the information required: it would enable the research questions to be answered and allow the aims of the study to be achieved.

3.3. Sampling.

For the purpose of this study four of the five government schools available in the city of Beira were used. One school was left out because it has been run by a private company, and therefore has the features of a semi-private school. This type of non-probability sampling is known as incidental or convenience sampling (Fraenkel & Wallen, 1990; Schumacher & McMillan, 1993; Welman & Kruger, 2001). According to Welman & Kruger (2001, p. 62): "An accidental sample is the most convenient collection of members of a population (units of analysis) that are near and readily available for research purposes."

The four schools were divided into two groups of two schools each: the first two were considered comparison schools (i.e. the comparison group), and the other two were considered experimental schools (i.e. the experimental group). For the purpose of the study, one Grade 9 class from each school was used. The classes were selected according to availability. These classes were required to answer a questionnaire and take a diagnostic test before the intervention. After eight weeks of intervention, learners from both the experimental and comparison groups were required to re-take the same diagnostic test, whilst only learners from the two experimental schools were asked to answer a new questionnaire.

All junior secondary school chemistry teachers from the four schools were asked to answer a questionnaire before the intervention (Appendix F shows the profile of the teachers). After eight weeks of intervention, only the two teachers from the experimental group were asked to answer another questionnaire. The table below gives an overview of the samples used in each school for the diagnostic test for learners and the questionnaires for teachers and learners.

Table 6: Learners and Teachers Involved in the Study for each School.

                             Teachers'          Learners'          Diagnostic test
                             questionnaire      questionnaire      (learners)
Group          School        Before   After     Before   After     Pre-test   Post-test
Comparison     School A        4        0         41       0          48         48
               School B        4        0         44       0          44         35
Experimental   School C        5        1         31      37          41         40
               School D        5        1         47      50          48         48
Total          4 schools      18        2        163      87         181        171

As shown above, one questionnaire was answered by teachers and another by learners before the intervention. After the intervention, two different questionnaires were administered to teachers and learners from the experimental schools: one for teachers and another for learners.

The same diagnostic test was administered to learners of both groups, before the intervention (i.e. pre-test) and after the intervention (i.e. post-test).

An effort was made to obtain a sample that is as representative as possible of the diverse learning environments of the area of study, the city of Beira. As a result, two schools were based in an urban area (schools B and D) and the other two schools were based in a suburban area (schools A and C). Furthermore, schools A and D had laboratories while schools B and C did not. The underlying idea is that "practical work might be carried out in a laboratory, or outside in the field, or in an ordinary classroom" (Millar et al., 2000, p.36). A relatively large number of learners was chosen as the sample for the study to make it more representative and to increase the probability of generalizing the results (Fraenkel & Wallen, 1990; Schumacher & McMillan, 1993; Welman & Kruger, 2001).

3.4. Research Instruments.

In this research two main instruments were used to gather data: a content-based diagnostic test for learners, and questionnaires for teachers and for learners. The same test was administered before and after the intervention. Four different questionnaires were designed and administered: one for teachers and one for learners before the intervention, and one for teachers and one for learners after the intervention. The next section gives an overview of the content of these instruments and of how they were used to collect data.

3.4.1. Questionnaires.

According to Welman & Kruger (2001, p.148), "In social studies a questionnaire is used to measure opinions, attitudes, scores, intelligence, etc." Thus, the questionnaire was the most suitable instrument to measure teachers' and learners' opinions about the importance of practical work in both the experimental and comparison groups, and teachers' and learners' opinions about the microchemistry kits used, for the experimental group, before and after the intervention.

3.4.1.1. Designing the Questionnaires.

Two questionnaires were designed to be administered before the intervention: one 5-item open-ended questionnaire for teachers (Appendix G), and one 3-item questionnaire for learners (Appendix H). Both questionnaires were aimed at establishing teachers' and learners' opinions about the problems faced when teaching and learning chemistry, and about the importance of practical work in chemistry teaching and learning. After eight weeks of intervention another two questionnaires were designed. These were aimed at finding out from teachers whether or not the microchemistry kits contributed to achieving the aims of practical work (Appendix I), as well as learners' opinions about the microchemistry kits used during the eight weeks of intervention (Appendix J). In order to establish the face validity of the questionnaires (Sanders, 1994), after being designed they were submitted to two university lecturers to check whether they would gather the intended information, as well as language issues (i.e. wording and sentence structure).

3.4.1.2. Piloting the Questionnaires.

In a warning about poorly designed questionnaires, Oppenheim (1966, p. 3) states: "Fact gathering can be an exciting and tempting activity to which a questionnaire opens a quick and seemingly easy avenue: the weaknesses in the design are frequently not realized until the results have to be interpreted, if then." To prevent this problem, all the questionnaires were piloted before being administered to the targeted teachers and learners. Four junior secondary school chemistry teachers from two schools, one private and the other semi-private, and ten Grade 9 learners from the same schools where the study was conducted were used for piloting. An effort was made to ensure that "Respondents in pilot studies should be as similar as possible to those in the main inquiry" (Oppenheim, 1966, p.29). The main outcomes of the piloting were to determine the time required to answer the questionnaires, to rephrase some questions that had produced ambiguous answers, and to reduce the length of the learners' questionnaire by removing some questions found irrelevant to the study.

3.4.1.3. Data Collection.

After approval from the provincial and district educational authorities, the next stage was to administer the questionnaires to the selected sample. With the approval of the school principals, arrangements were made with the chemistry teachers of the classes involved in the study. It was decided that the questionnaires should be administered during the long lunch break (30 minutes) or during a weekly class meeting. The questionnaires for learners, both before and after the intervention, were administered in a group, with all learners sitting in a classroom and allowed thirty minutes to complete them. Most of the learners answered in less than the thirty minutes given to them. The questionnaires for teachers, both before and after the intervention, were administered individually. Each teacher was given the questionnaire and the researcher observed them answering it. Some teachers answered it in groups of two or three, but they were not permitted to talk to each other. The main idea was that the researcher should observe the teachers answering the questionnaire; teachers were not permitted to chat with their colleagues or to take the questionnaire home.

3.4.1.4. Data Analysis.

The questionnaires administered to teachers and learners, before and after the intervention period, were open-ended. Thus, once the responses had been collected, it was necessary to create a coding system based on the patterns revealed in the answers. Based on this coding system, the data were then summarized in tables. The interpretation of the data was done quantitatively, using descriptive statistics (frequencies and percentages), as well as qualitatively, based on the interpretation of all responses obtained from the questionnaires. The data gathered from the questionnaires answer two research questions, namely: teachers' and learners' opinions about practical work, and teachers' and learners' opinions of the microchemistry kits.

3.4.2. Diagnostic Test.

According to Fraser (1991, p.16), "Classroom tests are to determine how much and what each pupil knows about the learning content taught over a certain period of time." Although this statement refers to the use of tests as a tool to assess learners in a normal teaching situation, the rationale is applicable to the intended use of the diagnostic test in this study.

3.4.2.1. Designing and Face Validity.

In order to test Grade 9 learners' understanding of specific chemistry topics and concepts, a diagnostic content-based test was designed. The test was based on the topic of acids and bases and other related concepts and topics taught in Grade 8, the previous year of study, and in Grade 9 before the study started. The twenty-five-question diagnostic test covered the topic of acids and bases and concepts derived from groups VI and V of the periodic table of the elements, as well as other related topics. The concepts and questions selected for the diagnostic test were designed according to the demands of the Grade 8 and Grade 9 syllabuses.

In order to improve the validity of the instrument (Sanders, 1998), the diagnostic test was submitted to two university chemistry lecturers for face validity and content validity. The lecturers looked at three main aspects: firstly, whether the questions were testing the intended content; secondly, grammar, wording and sentence construction; and finally, whether the questions were at a suitable level of comprehension for Grade 9 learners. The final version of the test (taking the experts' suggestions into consideration) was translated from English into Portuguese. The translated version was then submitted to two Portuguese-speaking university chemistry lecturers to check the same aspects as the two English-speaking university chemistry lecturers had done. In addition, they had to check whether the questions were suitable for Grade 9 learners, taking the Mozambican context into consideration. As a result of this process of checking face validity, involving Mozambican and South African university chemistry lecturers, significant changes were made to the content and structure of the diagnostic test. After that the instrument was considered ready to be administered; however, it still had to be piloted.
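The frequency-and-percentage summaries described for the questionnaire data in section 3.4.1.4 can be sketched in a few lines of Python. The response categories below are invented for illustration only; the actual categories emerged from the coding of the real answers:

```python
from collections import Counter

# Hypothetical coded responses to one open-ended item: during the
# coding step each learner's answer was assigned a category label.
coded = ["lack of equipment", "lack of equipment", "abstract content",
         "lack of equipment", "abstract content", "language difficulty"]

counts = Counter(coded)
total = len(coded)

# Frequency and percentage per category, as summarized in the tables.
summary = {cat: (n, round(100 * n / total, 1)) for cat, n in counts.items()}
print(summary)
```

The same tally would then be interpreted qualitatively alongside the full text of the responses, as described above.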

3.4.2.2. Piloting the Diagnostic Test.

Before administering the instrument to the selected sample, a pilot study was conducted. Ten Grade 9 learners were selected from the four schools, but from classes other than those involved in the main study. After answering the diagnostic test (Appendix K), the learners were asked to provide comments on a three-question survey (see Appendix L). In addition, some of the answers given were probed in order to find the reasons for their answering in the way they did. Three main aspects arose from the pilot study and the probing process (which took the form of an interview):

1. The number of questions in the original diagnostic test (Appendix K) was increased in the final version (Appendix M), which was eventually administered to the sample groups of the main study. This was done because, in the probing process, learners gave different reasons and explanations for the answers given. Thus, in order to increase the range of options, and to ascertain that the learners fully understood them, questions 5, 6, 7, 8, 10 and 12 were given more options.

2. The estimated time to answer the test was increased to 90 minutes (a double period), against an initial estimate of 75 minutes. This was done because of the extra options introduced to the six questions mentioned above.

3. The sequencing of the questions was changed, alternating the easier questions with those considered difficult. This was done not to make the diagnostic test easier or harder, but to make the process of testing learners' understanding of the selected topic as smooth as possible.

3.4.2.3. Data Collection.

Before administering the diagnostic test, a formal request to carry out the research was sent to the provincial education authorities. With the approval of the educational authorities at provincial and district level, the principals of the schools were notified of the research. Only after that were the teachers selected and arrangements made to administer the test in a double period. In two schools this was easy, because the timetable already included one double period for chemistry each week. In the other two schools there were only three single periods per week, so in order to administer the test it was necessary to negotiate with the teacher of the lesson immediately before or after the chemistry period.

The diagnostic test was answered in the blank spaces left after each question. The chemistry teacher of the class and the researcher invigilated the test. During the test, learners were allowed to ask questions and to use the periodic table of the elements and calculators. They were not allowed to use their chemistry textbooks or chemistry exercise books, and were not allowed to talk to each other. After 90 minutes they had to hand in the test, whether they had answered all the questions or not. The test was marked by the researcher using a memorandum, in which each correct answer was allocated one point. An effort was made to include only the content prescribed by the syllabus in the test, and the questions were designed to match the intellectual skills and abilities of the learners (Fraser, 1991).

3.4.2.4. Data Analysis.

When designing the study, an effort was made from the start to have a sample that was representative, so that the results could be generalized (Fraenkel & Wallen, 1990; Schumacher & McMillan, 1993; Welman & Kruger, 2001; Huysamen, 2001). Two types of statistical analysis were conducted with the data gathered from the diagnostic test.

The first was descriptive statistics, in which the final scores of the experimental group were compared with those of the comparison group. For that purpose, tables and bar graphs are used to illustrate the differences.

The second was a statistical test called analysis of covariance (ANCOVA). This test was conducted to find out whether, after eight weeks of intervention, there was a significant difference between the comparison group (i.e. lessons that occurred without any external intervention) and the experimental group (i.e. lessons that occurred with the use of the microchemistry kits). The rationale for using ANCOVA is to establish whether the differences observed in all schools between pre-test and post-test are statistically significant. In the ANCOVA, the schools are treated as covariates, the pre-test as the independent variable, the post-test as the dependent variable, and the intervention as the fixed factor (for details of the model used and the results of the ANCOVA test, see Appendices N and O). ANCOVA is a parametric test used when the data are normally distributed. One of the advantages of using ANCOVA is that it increases the power of the statistical test in showing the differences between the groups under study (Sanders, 1995).
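The adjustment that the ANCOVA performs can be illustrated with a minimal numpy sketch. The scores below are invented purely for illustration, and only the pre-test covariate and a single intervention indicator are modelled (the full model described above also includes the schools as covariates and a formal significance test):

```python
import numpy as np

# Invented pre-test scores and group labels for eight learners
# (0 = comparison group, 1 = experimental group).
pre = np.array([10.0, 12.0, 15.0, 9.0, 11.0, 14.0, 13.0, 8.0])
group = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

# Post-test scores constructed with a known intervention effect of +3,
# so the recovered coefficient can be checked against the truth.
post = 5.0 + 0.8 * pre + 3.0 * group

# Design matrix: intercept, pre-test covariate, intervention indicator.
X = np.column_stack([np.ones_like(pre), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

# beta[2] estimates the post-test difference between the groups after
# adjusting for pre-test scores -- the quantity ANCOVA tests.
print(beta)
```

The real analysis additionally tests whether this adjusted difference is statistically significant, which the least-squares fit alone does not do.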

3.5. Concluding Remarks.

The data gathered from the two instruments will be presented and discussed in different chapters: the information obtained from the questionnaires will be described in chapter 4, and the information collected from the diagnostic test in chapter 5. Although these instruments are frequently used in a quantitative approach, in this research they are also used qualitatively, because, irrespective of frequencies, the actual information obtained also counts. Hence, during the study, opportunistic data, such as classroom observations, field notes and informal interviews with teachers and learners, were also collected. These sources of information allow thick descriptions of facts and phenomena, as well as triangulation of data and inferences based on the whole range of available data.

The risk of using ANCOVA in this research was that, at the beginning, the classes involved in the research were not equivalent, a fact which can be observed from the results of the pre-test. One of the prerequisites for the use of ANCOVA is that the scores should be equivalent at the start; only then can one really check whether the differences between the comparison group and the experimental group after the intervention are statistically significant (Sanders, 1995).

The next chapter (chapter 4) presents and analyses the data collected from the questionnaires administered to teachers and learners before and after the intervention.