Interpreting ACER Test Results


This document briefly explains the different reports provided by the online ACER Progressive Achievement Tests (PAT). More detailed information can be found in the relevant teacher manuals.

Understanding the PAT scales

Each learning area assessed by one of the PATs has a PAT scale. Each scale consists of pat units that are named after that scale:

- PAT Reading Comprehension scale: patc
- PAT Maths and PAT Maths Plus: patm
- PAT Science scale: patsc
- PAT Written and Dictated Spelling: pats
- PAT Punctuation and Grammar: patpg

A PAT scale locates the difficulty of the questions and the achievement of the students on the same scale. Easy questions and low-achieving students are located lower down the scale; hard questions and high-achieving students are located higher up the scale.

When students sit a test, their raw score (the number of questions they answered correctly) is converted into a scale score, which is a location on the PAT scale. Low-achieving students have low scale scores and high-achieving students have high scale scores. The question difficulties show the locations of the questions on the same PAT scale as the students' scale scores: easy questions have lower locations than harder questions.

All the questions from all the tests in the same PAT learning area are located on the one scale. This is why student achievement can be compared and monitored over time using scale scores, regardless of which test students sat in that learning area.

Qualities of scale scores

Scale scores are measures on an interval scale. This means that a difference of 5 pat units in the middle of the PAT scale (for example, from 50 to 55) is equivalent to the same difference on any other part of the scale (for example, from 15 to 20 or from 85 to 90 pat units). Scale scores:

- allow comparison of students' results on test booklets of varying difficulty
- enable the tracking of students' development in the skills measured by the test from year to year
- provide a common achievement scale for all tests within the same learning area, giving teachers the flexibility to match test difficulty to student achievement and to measure growth over time.
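Because scale scores sit on an interval scale, growth from year to year can be compared directly even when students sat different test booklets. A minimal sketch, using entirely hypothetical scores in pat units:

```python
# Hypothetical scale scores (pat units) for one student across three years.
# The year labels and values are illustrative only.
scores = {"Year 3": 98.0, "Year 4": 106.5, "Year 5": 112.0}

years = list(scores)
for prev, curr in zip(years, years[1:]):
    growth = scores[curr] - scores[prev]
    print(f"{prev} -> {curr}: {growth:+.1f} pat units")
```

The same difference in pat units represents the same amount of growth wherever it occurs on the scale, which is what makes this kind of year-on-year comparison meaningful.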

Understanding the individual reports

The online individual report graphically shows the student's scale score (their location on the scale) and the question difficulties (the locations of the test questions) on the same scale. The student's scale score is the dotted line. In the example above, the student's scale score is 111.4. The numbers are the questions in the test: easier questions are lower on the scale and harder questions are higher. The circled numbers are questions this student answered correctly, the numbers in red squares are questions this student answered incorrectly, and the numbers in white squares are questions this student missed.

The analytic strength of the PATs comes from locating student achievement and question difficulty on the same scale. The questions get harder as you move up the scale, so typically students should answer easier questions correctly and harder questions incorrectly. Students who show atypical responses, such as the illustrated example, require further teacher investigation: this student has tended to get the easy questions wrong and the harder questions right.

Questions located around a student's scale score are questions the student has a fifty per cent chance of answering correctly; these are skills the student is currently consolidating. Questions 10 or more pat units below the student's scale score are questions they are likely to answer correctly; these are skills the student has largely mastered. Questions 10 or more pat units above the student's scale score are questions the student is currently likely to find too challenging. Teachers can use the question difficulty locations to help identify what students have typically mastered and what they need to learn next.
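The 10-pat-unit bands described above can be sketched as a small helper. The function name and category labels are illustrative, not part of the PAT reports:

```python
def classify_question(question_difficulty: float, scale_score: float) -> str:
    """Classify a question relative to a student's scale score using the
    10-pat-unit bands described in the text (labels are illustrative)."""
    gap = question_difficulty - scale_score
    if gap <= -10:
        return "largely mastered"    # likely to be answered correctly
    if gap >= 10:
        return "too challenging"     # currently likely to be answered incorrectly
    return "consolidating"           # roughly a 50% chance of success

# A student with a scale score of 111.4, as in the example report:
for difficulty in (95.0, 108.0, 125.0):
    print(difficulty, classify_question(difficulty, 111.4))
```

Running this classifies the question at 95.0 as largely mastered, 108.0 as consolidating and 125.0 as too challenging, mirroring how a teacher would read the individual report.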

Margins of error

All test scores have an associated margin of error. Statistical principles can be used to estimate the size of the likely error on any given test score. These margins of error are often expressed as +/- (plus or minus) a particular value, or as shading or dotted lines on a diagram. In the individual online report, the margin of error for the student's scale score is identified by the dark grey shading either side of the red dotted line (the student's scale score). The grey shading on this report shows that this student's score could have been somewhere between 108 and 114 on this scale. This is one standard error, which means there is a 68% chance of the student's true score falling somewhere in this range. Small differences in scale scores should not be given more importance than they deserve. Each PAT manual provides tables showing the error for each scale score.

It is important to understand that the errors are very large when students get most of the questions in a test right, or most of them wrong. These students have a scale location that is broadly indicative only and is problematic to use when measuring improvement over time. If monitoring over time is required, it is recommended that teachers give such students a test better matched to their ability: use the student's indicative scale score to find a test with question difficulties in the same range. Diagrams illustrating question difficulty for each of the tests can be found in each PAT manual.

The individual report also provides a stanine and percentile rank for the student. See the sections What are norms?, What is a percentile rank? and What are stanines?
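The one-standard-error band, and the caution about small differences, can be sketched as follows. The helper names are illustrative, and the +/- 3 pat unit error is taken from the report example above (111.4, band 108 to 114):

```python
def score_band(scale_score: float, standard_error: float) -> tuple[float, float]:
    """One-standard-error band: ~68% chance the true score lies inside it."""
    return (scale_score - standard_error, scale_score + standard_error)

def bands_overlap(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """If two students' error bands overlap, the difference between their
    scores should not be over-interpreted."""
    return a[0] <= b[1] and b[0] <= a[1]

# The report's example: a scale score of 111.4 with an error of about +/- 3.
band = score_band(111.4, 3.0)
print(band)
print(bands_overlap(score_band(111.4, 3.0), score_band(109.0, 3.0)))
```

Here the two students' bands overlap, so the 2.4 pat unit gap between their scores is within the margin of error and should not be read as a real difference.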

Understanding the group reports

The group report shows the response each student in your class gave to each question in the test. This helps you to identify patterns of strength and weakness in the way students have responded to the questions, for the whole class as well as for individuals.

The questions are numbered along the top; if you hover over a question number, a window will pop up showing that question. The question difficulty is the location of the question on the PAT scale for this learning area: comparing question difficulties across the test, lower numbers indicate the easier questions and higher numbers the harder questions. The question classification is the sub-strand of the learning area that the question assesses. The percentage correct is the percentage of students in your class who answered the question correctly.

You can sort the group report in different ways to help you identify patterns in student responses that can inform your teaching:

- by correct and incorrect answers
- by students' scale scores, which ranks the students by their achievement on this test
- by stanines (see What are norms? and What are stanines?)
- by percentile ranks (see What is a percentile rank?)
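The percentage-correct column of a group report is a straightforward calculation. A minimal sketch with invented class data (question labels, students and responses are all hypothetical):

```python
# Hypothetical class responses per question:
# 1 = correct, 0 = incorrect, None = question missed (not attempted).
responses = {
    "Q1": [1, 1, 1, 0, 1],
    "Q2": [1, 0, None, 0, 1],
}

for question, answers in responses.items():
    correct = sum(a == 1 for a in answers)
    pct = 100 * correct / len(answers)  # missed questions count as not correct
    print(f"{question}: {pct:.0f}% correct")
```

A question with a high difficulty location but a high percentage correct (or the reverse) is exactly the kind of pattern the group report is designed to surface.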

What are norms?

All the PATs have norm reference samples consisting of Australian students from all states and territories and from all school systems: government, Catholic and independent. A random, stratified sampling frame was used to select the schools. PATs were administered to students in different year levels of the norm reference sample, so you can compare your students' achievements on the test with the norm reference sample by year level.

The norm data has a normal distribution on each PAT scale. This means:

- A few students are located on the higher parts of the scale, because they got most of the test right.
- Most students are located in a large bulge around the part of the scale that shows they answered about half the questions correctly and about half incorrectly.
- A few students are located on the lower parts of the scale, because they got most of the test wrong.

The following diagram shows the distribution of the norm reference group as a curved line on the scale. You can see where the line bulges to show the majority of students. The numbers inside the curved line indicate the percentage of students in the norm reference group with locations on that part of the scale.

The norm data for the PATs is reported as percentile ranks and as stanines. These are different ways of describing the distribution of achievement of the norm reference sample on a PAT scale. You do not need to use both forms of norm data. There are reports in the PAT manual that present the norm data visually; you may prefer to use the visual information and ignore the percentile ranks and stanines. If you find the norm reference data confusing, you do not need to use it at all: you can gain a great deal of useful information from the PATs just by using students' scale scores and question difficulties.

What is a percentile rank?

A percentile rank is associated with a scale score. Take any student's scale score: the corresponding percentile rank indicates the percentage of students in the norm reference group for that year level with lower scale scores. The red line on the diagram above shows that a student with a scale score of 70 has a percentile rank of 40, meaning 40 per cent of the norm reference group had scale scores below 70.

Percentiles should not be used to measure progress over time; use scale scores for this purpose. It is important to remember that the norm data is indicative only. Percentile ranks show how your students compare with the norm sample, but this information should not be over-interpreted.
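The definition above (the percentage of the norm group with lower scale scores) can be sketched directly. The norm group here is a tiny invented sample chosen so the result matches the document's example of a scale score of 70 at the 40th percentile:

```python
def percentile_rank(scale_score: float, norm_scores: list[float]) -> float:
    """Percentage of the norm reference group with lower scale scores."""
    below = sum(s < scale_score for s in norm_scores)
    return 100 * below / len(norm_scores)

# A toy norm group of 10 scale scores (illustrative values only):
norm = [58, 62, 66, 69, 72, 75, 79, 82, 86, 91]
print(percentile_rank(70, norm))  # 40.0
```

Four of the ten norm scores fall below 70, so the percentile rank is 40, matching the example in the text.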

What are stanines?

Stanines divide the student achievement distribution for the norm reference group into nine categories, with stanine 1 the lowest, stanine 5 the midpoint and stanine 9 the highest. Your students' scale scores are used to match them to one of the stanines in the norm reference group distribution.

- Students in stanine 9 are located well above the norm reference group bulge, on the same part of the scale as the small number of students in the norm reference group who got most of the test right.
- Students in stanine 5 are located on the same part of the scale as the bulge, that is, the largest number of students in the norm reference group. These students answered about half the questions correctly and about half incorrectly.
- Students in stanine 1 are located well below the norm reference group bulge, on the same part of the scale as the small number of students in the norm reference group who got most of the test wrong.

Stanines are useful for describing distributions of achievement; however, it is recommended that only differences of two or more stanines be regarded as indicating a real difference in performance. Stanines should not be used to measure progress; use scale scores for this purpose. It is important to remember that the norm data is indicative only. Stanines show how your students compare with the norm sample, but this information should not be over-interpreted.

Technical information

All ACER PAT teacher manuals provide detailed technical information. Refer to the appropriate chapter in each manual for information about the construction of each PAT scale, validity, reliability and the characteristics of the norm reference groups.
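One conventional way stanines relate to percentile ranks is through the standard stanine cut-points (stanine 1 covers roughly the bottom 4% of the distribution, stanine 5 the middle 20%, stanine 9 the top 4%). This is the general stanine convention, not a statement about how ACER derives its stanines; consult the PAT manuals for the authoritative mapping. A sketch under that assumption:

```python
import bisect

# Conventional stanine boundaries as cumulative percentages of the
# distribution (assumed here; check the relevant PAT manual).
BOUNDARIES = [4, 11, 23, 40, 60, 77, 89, 96]

def stanine(percentile_rank: float) -> int:
    """Map a percentile rank to a stanine (1-9) via the standard cut-points."""
    return bisect.bisect_right(BOUNDARIES, percentile_rank) + 1

print(stanine(2))   # 1: bottom of the distribution
print(stanine(50))  # 5: the middle bulge
print(stanine(97))  # 9: top of the distribution
```

This also illustrates why only a difference of two or more stanines is meaningful: two students a few percentile points apart can land in adjacent stanines purely because a cut-point falls between them.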