DO YOU HAVE THESE CONCERNS?

FACULTY CONCERNS, ADDRESSED

Many faculty members express reservations about online course evaluations. To increase faculty buy-in, it is essential to understand the underlying reasons for possible resistance and to provide answers that help defuse concerns. The following are research-based answers to four major faculty concerns about course evaluations.

From Chapter 7: Online Ratings, in Hativa, N., Theall, M., & Franklin, J. (2013). Student ratings of instruction. Oron Publications.

Concern 1: The online method leads to a lower response rate, which may have negative consequences for faculty.

Participation in online ratings is voluntary and requires student motivation to invest time and effort in completing the forms. Faculty are concerned that these conditions will produce a lower response rate that may reduce the reliability and validity of the ratings and may have negative consequences for them.

The majority of studies on this issue found that online ratings do indeed produce a lower response rate than in-class ratings (Avery, Bryant, Mathios, Kang, & Bell, 2006; Benton, Webster, Gross, & Pallett, 2010; IDEA, 2011; Nulty, 2008). One explanation is that in-class surveys are administered to a captive audience; moreover, students in class are encouraged to participate by the mere presence of the instructor, by his or her expressed pressure to respond, and by peer pressure. In contrast, in online ratings students lack motivation or compulsion to complete the forms, or they may experience inconvenience and technical problems (Sorenson & Johnson, 2003).

To address this concern, we have provided the following resources:
- Top 10 Ways to Increase Response Rates
- Strategies for Student Participation

Concern 2: Dissatisfied or less successful students participate in the online method at a higher rate than other students.

Faculty are concerned that students who are unsuccessful, dissatisfied, or disengaged may be particularly motivated to participate in online ratings in order to rate their teachers low, blaming them for their own failure, disengagement, or dissatisfaction. Consequently, students with a low opinion of the instructor would participate in online ratings at a substantially higher rate than more satisfied students. If this concern were correct, the majority of respondents in online surveys would rate the instructor and the course low, and the rating distribution would be skewed towards the lower end of the rating scale.

However, there is robust research evidence to the contrary (for both paper and online methods): the distribution of student ratings on the Overall Teaching item is strongly skewed towards the higher end of the scale. Online score distributions have the same shape as the paper distributions, with a long tail at the low end of the scale and a peak at the high end. In other words, unhappy students do not appear to be more likely to complete online ratings than they were to complete paper ratings (Linse, 2012). The strong evidence that the majority of instructors are rated above the mean of the rating scale indicates that the majority of participants in online ratings are the more satisfied students, refuting faculty concerns about a negative response bias. Indeed, substantial research evidence shows that the better students, those with higher cumulative GPA or higher SAT scores, are more likely to complete online SRI forms than less successful students (Adams & Umbach, 2012; Avery et al., 2006; Layne, DeCristoforo, & McGinty, 1999; Porter & Umbach, 2006; Sorenson & Reiner, 2003).

The author examined this issue at her university for all undergraduate courses in two large schools, Engineering and Humanities, with 110 and 230 participating courses, respectively (Hativa, Many, & Dayagi, 2010). At the beginning of the semester, all students in each school were sorted into four GPA levels: the lowest 20% of GPAs in a school formed the Poor group, the highest 20% the Excellent group, and the two intermediate levels formed the Fair and Good groups, with 30% of the students in each. The response rates for the Poor, Fair, Good, and Excellent groups were, respectively, 35%, 43%, 43%, and 50% in the School of Humanities, and 48%, 60%, 66%, and 72% in the School of Engineering (a brief illustrative sketch of this kind of grouping appears after Concern 4 below). In sum, this faculty concern is refuted and even reversed: the higher the GPA, the larger the response rate in the online method, so the least successful students appear to participate in online ratings at a lower rate than better students.

Concern 3: The lower response rate (as in Concern 1) and the higher participation rate of dissatisfied students (as in Concern 2) will result in lower instructor ratings in online administration than in in-class administration.

Faculty members are concerned that if the response rate is low (e.g., less than 40%, as happens frequently in online ratings), the majority of respondents may be students with a low opinion of the course and the teacher, lowering the true mean rating of the instructor.

Research findings on differences in average rating scores between the two methods of survey delivery are inconsistent. Several studies found no significant differences (Avery et al., 2006; Benton et al., 2010; IDEA, 2011; Linse, 2010; Venette, Sellnow, & McIntyre, 2010). Other studies found that ratings were consistently lower online than on paper, but that the difference was either small and not statistically significant (Kulik, 2005) or large and statistically significant (Chang, 2004).

The conflicting findings among the different studies can be explained by differences in the size of the populations examined (from dozens to several thousand courses), in the instruments used (some of which may be of lower quality), and in the research methods. Nonetheless, the main source of variance between the findings is probably whether participation in SRI is mandatory or selective. If not all courses participate in the rating procedure, but only those selected by the department or self-selected by the instructor, the selected courses and their mean ratings may not be representative of the full course population and should not be used as a valid basis for comparison.

The author examined this issue in two studies that compared mean instructor ratings in paper and online SRI administration, based on data from her university, where course participation is mandatory. The results of both studies, presented graphically, reveal a strong decrease in annual mean and median ratings from paper to online administration. The lower online ratings cannot be explained by a negative response bias, that is, by a higher participation rate of dissatisfied students, because, as shown above, many more good students than poor students participate in online ratings. A reasonable explanation is that online ratings are more sincere, honest, and free of teacher influence and social desirability bias than in-class ratings.

The main implication is that comparisons of course or teacher ratings should take place only within the same method of measurement, either on paper or online. Ratings obtained by the two methods should never be compared with each other. The best way to avoid improper comparisons is to use a single rating method throughout all courses in an institution, or at least within a particular school or department.

Concern 4: The lower response rate and the higher participation rate of dissatisfied students in online administration will result in fewer and mostly negative written comments.

Faculty members are concerned that because the majority of expected respondents are dissatisfied students, the majority of written comments will be negative (Sorenson & Reiner, 2003). An additional concern is that because of the smaller proportion of respondents in online surveys, the total number of written comments will be significantly reduced compared with in-class ratings. The fewer the comments written by students, the lower the quality of the feedback teachers receive as a resource for improvement.

There is a consensus among researchers that although mean online response rates are lower than in paper administration, more respondents write comments online than on paper. Johnson (2003) found that while 63% of online rating forms included written student comments, less than 10% of in-class forms did. Altogether, the overall number of online comments appears to be larger than in the paper survey.

In support: "On average, classes evaluated online had more than five times as much written commentary as the classes evaluated on paper, despite the slightly lower overall response rates for the classes evaluated online" (Hardy, 2003, p. 35). In addition, comments written online were found to be longer, to present more information, and to contain fewer socially desirable responses than comments in the paper method (Alhija & Fresko, 2009). Altogether, the larger number of written comments and their increased length and detail in the online method provide instructors with more beneficial information, and thus the quality of online written responses is better than that of in-class survey comments.

There are four possible explanations for the larger number and better quality of online comments:
- No time constraints: during an online response session, students are not constrained by time and can write as many comments, at any length, as they wish.
- Preference for typing over handwriting: students seem to prefer typing comments (in online ratings) to handwriting them.
- Increased confidentiality: some students are concerned that the instructor will identify their handwriting if comments are written on paper.
- Prevention of instructor influence: students feel more secure and free to write honest, candid responses online.

Regarding the favorability of the comments, students were found to submit positive, negative, and mixed written comments in both methods of rating delivery, with no predominance of negative comments in online ratings (Hardy, 2003). For low-rated teachers, those perceived by students as poor teachers, written comments do appear to be predominantly negative. In contrast, high-rated teachers receive few negative comments and predominantly positive comments. In sum, faculty beliefs about written comments are refuted: students write more comments online, of better quality, and these comments are not mostly negative but rather reflect the general quality of the instructor as perceived by students.
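Illustrative sketch: the GPA grouping described under Concern 2 can be made concrete with a minimal Python sketch. This is not code from the cited studies; the function name, the 20/30/30/20 band split, and the synthetic data below are assumptions used only to show how response rates per GPA band could be computed from per-student records.

```python
import random

# A minimal, hypothetical sketch (not taken from the cited studies): sort students
# by cumulative GPA, split them into the 20/30/30/20 bands described under
# Concern 2 (Poor / Fair / Good / Excellent), and compute the online response
# rate within each band.

def response_rates_by_gpa(students):
    """students: list of (gpa, responded) pairs, where responded is True/False."""
    ranked = sorted(students, key=lambda s: s[0])  # lowest GPA first
    n = len(ranked)
    bands = [("Poor", 0.20), ("Fair", 0.50), ("Good", 0.80), ("Excellent", 1.00)]
    rates, start = {}, 0
    for label, upper in bands:
        end = round(n * upper)              # cumulative cut point for this band
        group = ranked[start:end]
        responded = sum(1 for _, r in group if r)
        rates[label] = 100.0 * responded / len(group) if group else 0.0
        start = end
    return rates

# Illustrative synthetic data: (cumulative GPA, submitted the online form?)
random.seed(0)
students = [(round(random.uniform(2.0, 4.0), 2), random.random() < 0.5)
            for _ in range(200)]
print(response_rates_by_gpa(students))  # e.g. {'Poor': ..., 'Fair': ..., ...}
```

In practice, an analysis like the one reported by Hativa, Many, and Dayagi (2010) would replace the synthetic records with actual student GPA and response data and compute the bands separately for each school.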

References

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576-591.

Alhija, F. N. A., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students' written comments? Studies in Educational Evaluation, 35(1), 37-44.

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? The Journal of Economic Education, 37(1), 21-37.

Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. H. (2010). An analysis of IDEA student ratings of instruction using paper versus online survey methods, 2002-2008 data (IDEA Technical Report No. 16). The IDEA Center.

Chang, T. S. (2004). The results of student ratings: Paper vs. online. Journal of Taiwan Normal University, 49(1), 171-186.

Hardy, N. (2003). Online ratings: Fact and fiction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 31-38). San Francisco: Jossey-Bass.

Hativa, N., Many, A., & Dayagi, R. (2010). The whys and wherefores of teacher evaluation by their students [Hebrew]. Al Hagova, 9, 30-37.

IDEA. (2011). Paper versus online survey delivery (IDEA Research Notes No. 4). The IDEA Center.

Johnson, T. D. (2003). Online student ratings: Will students respond? In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 49-59). San Francisco: Jossey-Bass.

Kulik, J. A. (2005). Online collection of student evaluations of teaching. Retrieved April 2012, from http://www.umich.edu/~eande/tq/onlinetqexp.pdf

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221-232.

Linse, A. R. (2010, February 22). [Building in-house online course eval system]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.

Linse, A. R. (2012, April 27). [Early release of the final course grade for students who have completed the SRI form for that course]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment and Evaluation in Higher Education, 33, 301-314.

Porter, R. S., & Umbach, P. D. (2006). Student survey response rates across institutions: Why do they vary? Research in Higher Education, 47(2), 229-247.

Sorenson, D. L., & Johnson, T. D. (Eds.). (2003). Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96). San Francisco: Jossey-Bass.

Sorenson, D. L., & Reiner, C. (2003). Charting the uncharted seas of online student ratings of instruction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 1-24). San Francisco: Jossey-Bass.

Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 97-111.