Using learning styles to evaluate computer-based instruction


Computers in Human Behavior 21 (2005) 287-306
www.elsevier.com/locate/comphumbeh
doi:10.1016/j.chb.2004.02.011

Using learning styles to evaluate computer-based instruction

L. Michele Miller
Department of Cognitive Science, University of California, Irvine, Irvine, CA 92697-5100, USA
E-mail address: millerlm@uci.edu

Available online 18 March 2004

Abstract

This study compared two instruments while evaluating the effects of learning style on performance when using a computer-based instruction (CBI) system to teach introductory probability and statistics. The Gregorc Style Delineator (GSD) and the Kolb Learning Style Inventory (LSI) were used to measure learning style. Results indicated that there was an effect of learning style when using the GSD: students identified as Concrete Sequential learned significantly less than students identified as Concrete Random. There was no effect according to LSI styles. Lack of an ordering preference dimension in the LSI is discussed as a possible explanation. Findings from other studies evaluating CBI and recommendations are also discussed.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Learning style; Computer-based instruction; Online instruction; Human-computer interaction

1. Introduction

The idea that people have particular dispositions that influence their behavior in certain situations is not a new one. Early in the 20th century, Thorndike (1913) recognized the importance of individual differences in learning scenarios. In 1937, Allport formally introduced these predilections as styles, expanding on Jung's theory of psychological types (Allport, 1937). Sternberg asserts that the core definition of style (that is, its reference to habitual patterns or preferred ways of doing something, e.g., thinking, learning, teaching, that are consistent over long periods of time and across many areas of activity) remains virtually the same (Sternberg & Grigorenko, 2001, p. 2).

Most instructors do not plan their lessons to accommodate all learning styles, although research on styles has found that the match between a teacher's instructional style and the learning style of a student affects performance in a classroom environment. Gregorc (1984) found that individuals learned with ease when the learning environment was compatible with their learning style, but learning was thought of as a challenge, hard, or distasteful when there was a mismatch (p. 54). Dunn, Griggs, Olson, and Beasly (1995) conducted a meta-analysis of studies over a 10-year period involving 3181 students and concluded that matching teaching style with students' learning styles resulted in a measurable increase in performance. In a similar vein, Federico (1991) found that students learn more efficiently when pedagogical procedures are adapted to the students' individual differences. As computer-based instruction becomes more prevalent in learning environments, it seems logical to extend the matching findings: one would expect the performance of individuals to vary according to the match between their learning style preferences and the environment provided by the CBI system. If so, this provides an opportunity to facilitate the optimal pairing of instruction and learning styles.

1.1. Learning style instruments

Although there are instructors and researchers who have realized the importance of learning styles, the community has not settled on one single instrument to measure a person's learning preferences. This could be because there has not yet been one that integrates all relevant individual differences. Bokoros, Goldstein, and Sweeney (1992) conducted a factor analysis on five cognitive style instruments. They found three underlying factors: decision-making functionality, information-processing functionality, and an attentional focus dimension. The first factor involves high-level cognitive processes that control cognitive operations and arriving at decisions. Factor 2 involves a receiving function, "ordering and encoding sensory input" (Bokoros et al., 1992, p. 104). The third factor determines the general focus of attention, whether one prefers external stimuli or one's own thoughts and ideas. Of the five instruments, two measured learning styles: the Gregorc Style Delineator (GSD) and the Kolb Learning Style Inventory (LSI). As these two are the focus of this investigation, a description of each is provided below, including the Bokoros et al. (1992) loadings and examples of research where each has been used.

1.2. Gregorc Style Delineator (GSD)

The GSD was designed to aid an individual to "recognize and identify the channels through which he/she receives and expresses information efficiently, economically, and effectively" (Gregorc, 1982a, p. 1). There are two dimensions within the model: Perception and Ordering (Gregorc, 1982b). The Perception dimension deals with how one grasps information, either Abstractly or Concretely. The Ordering dimension involves the way one prefers to arrange and refer to information: Sequentially or Randomly.

Although most people are able to function with all channels depending on the situation, people tend to be strongly predisposed to a combination of fewer than all four. In the model, four combinations are used: Concrete Sequential (CS), Abstract Sequential (AS), Abstract Random (AR), and Concrete Random (CR).

The GSD consists of 10 items in which four words are listed vertically. An individual ranks the words in order of self-descriptiveness on a scale of 1-4 (4 being most descriptive), then the numbers are totaled to give values for each mediation channel: CS, AS, AR, and CR. The test-retest correlation coefficients range from .85 to .88 (Gregorc, 1982b). In the factor analysis of Bokoros et al. (1992), the GSD loads on factor 1, relating to cognitive operations and decision making (r = .655), and on factor 2, relating to the ordering and encoding of sensory input (r = .653).

The GSD has been used in a variety of studies. Moore (1990) found a significant relationship between intuitive musical ability and the Abstract Random learning style. Davidson, Savenye, and Orr (1992) discovered that AS students did best and AR students did worst in a course covering educational computer technologies, even though a variety of teaching and assessment methods were used. After studying 4546 students over 4 years, the results of Drysdale, Ross, and Shultz (2001) showed that those who learn sequentially did better in science and math-related courses. Their results also showed that those identified as random learners did significantly better in fine arts courses. Ross, Drysdale, and Shultz (2001) demonstrated that sequentials did significantly better than randoms in courses covering basic computer system knowledge and standard computer application software (word processing, database, spreadsheet, Internet).

1.3. Kolb Learning Style Inventory (LSI)

The LSI is based on Kolb's Experiential Learning Model. In this model, knowledge is created from grasping and transforming one's experiences (Kolb, 1984). There are two modes of grasping experience: Concrete Experience (CE) and Abstract Conceptualization (AC). There are two modes of transforming experience: Reflective Observation (RO) and Active Experimentation (AE). This results in four learning styles. Divergers favor CE and RO. Assimilators favor AC and RO. Convergers favor AC and AE. Accommodators favor CE and AE.

The LSI consists of 12 questions about the ways in which one learns best. Each question has four answers, which are ranked by an individual in terms of best fit on a scale of 1-4 (4 being best). The numbers are totaled to give scores for CE, AC, RO, and AE. Then (AE - RO) and (AC - CE) are calculated and used as the abscissa and ordinate, respectively, on a graph that determines one's ultimate learning style. The test-retest correlation coefficients range from .91 to .99 (Veres, Sims, & Locklear, 1991). In the factor analysis of Bokoros et al. (1992), the LSI loaded only on factor 1 (r = .635), relating to cognitive operations and decision making.
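Both instruments reduce ranked responses to per-channel totals, and the LSI then classifies a learner by the quadrant of two difference scores. A minimal Python sketch of that arithmetic follows, using hypothetical response data; the actual GSD/LSI items are proprietary, and the published LSI grid does not place its cut points exactly at zero, so the zero thresholds below are a simplifying assumption:

```python
# Sketch of the scoring arithmetic described above; illustrative only.

def score_gsd(items):
    """items: 10 dicts mapping channel -> rank (1-4, 4 = most descriptive).
    Returns the total per mediation channel (range 10-40 each)."""
    totals = {"CS": 0, "AS": 0, "AR": 0, "CR": 0}
    for item in items:
        for channel, rank in item.items():
            totals[channel] += rank
    return totals

def classify_kolb(ce, ro, ac, ae):
    """Kolb LSI: (AE - RO) is the abscissa and (AC - CE) the ordinate;
    the quadrant determines the style (zero cut points assumed here)."""
    x, y = ae - ro, ac - ce
    if x >= 0:
        return "Converging" if y >= 0 else "Accommodating"  # AC+AE / CE+AE
    return "Assimilating" if y >= 0 else "Diverging"        # AC+RO / CE+RO

print(classify_kolb(ce=20, ro=25, ac=35, ae=40))  # -> Converging
```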

The LSI has been widely used; the Experiential Learning Theory Bibliography (Kolb, 2003) contains 1728 references. To demonstrate the types of findings, consider three computer-related studies. Buch and Bartley (2002) found, from the results of a training delivery mode preference survey, that convergers show a stronger preference for computer-based delivery and assimilators show a stronger preference for print-based delivery. Sein and Robey (1991) discovered learning style-based performance differences when using analogical vs. abstract models to learn a text-based electronic mail system: convergers and assimilators performed better with the abstract model; accommodators and divergers performed better with the analogical model. When learning relational database Structured Query Language, Jain (1996) demonstrated that abstract learners form more accurate and complete mental models than concrete learners. Note that these findings involve media type and mental models, not order-related aspects.

1.4. Questions

In the factor analysis of Bokoros et al. (1992), the GSD loaded on the first two of three factors (decision making and information processing), while the Kolb LSI loaded only on the first factor. According to the design of Bokoros et al., the second factor involves how one deals with the ordering and encoding of incoming information. Since the LSI is missing this component, it seems it would have no way of distinguishing information-processing preferences during active experimentation or reflective observation. It could also impact the LSI's ability to identify organizational patterns in one's preferred mode of grasping experience: abstract conceptualization or concrete experience. These aspects characterize distinguishing features of computer-based instruction styles. If the Kolb instrument does not take these into account, then it is possible that using the LSI to evaluate effects of learning style on performance in computer-based instruction systems could result in missing a valid effect of individual differences. The following two experiments seek to answer these questions:

1. Will there be an effect of learning style on performance in an assessment-based CBI course teaching introductory probability and statistics that allows students to choose their own path through ready-to-learn material?
2. If the GSD and LSI are used, will both show significant effects of learning style on performance?

2. Method: Experiment 1

2.1. Participants

Out of an introductory Probability & Statistics class (Psych 10A at the University of California, Irvine) with 54 students, 36 agreed to participate (30 females, 6 males). These students were paid $10 for their time.

2.2. Course description

Psych 10A is the first quarter of a three-quarter sequence covering Probability & Statistics. (A quarter lasts 10 weeks plus a final examination week.) It covers the basics of descriptive statistics (histogram, mean, median, mode, variance, correlation, etc.) and elementary probability.

This quarter was the first time that ALEKS was used in teaching the course. ALEKS stands for Assessment and Learning in Knowledge Spaces and is based on theoretical research in Knowledge Spaces (ALEKS Corporation, 2002). In Knowledge Space theory, the knowledge state of a student with respect to a particular field of information can be represented by a particular subset of questions or problems that the subject is capable of answering (Falmagne, Koppen, Villano, Doignon, & Johannesen, 1990).

ALEKS is accessed from an Internet web site, and students are able to use the system from any computer that can connect to the Internet and run a Java script. During the first session with the system the student goes through a tutorial on how to use the various features. Then an assessment is given to evaluate their initial knowledge. From then on the student is allowed to work at their own pace through the various topics. When starting a new topic, the student is shown a problem and given the choice to try to solve it or to view an explanation of the solution. Once a student answers five consecutive questions correctly, the topic is considered mastered and the student can choose a new topic. After approximately 10 h of work, an assessment is triggered. The topics in which the student misses any questions have to be re-practiced. More advanced topics cannot be chosen until the basic, underlying concepts have been learned. (The mastery and reassessment rules are sketched below.)

The students are expected to spend an average of 5 h per week using the system. If they have difficulty, they can go to the office hours of the professor and/or teaching assistant or attend one of the periodic group problem sessions scheduled by the instructor. The instructor provides pacing for the course and ensures that students are doing their own work by requiring in-class assessments during the fourth, eighth, and final weeks.
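The mastery cycle just described (five consecutive correct answers for mastery, reassessment after roughly 10 hours of work, prerequisite gating of advanced topics) can be sketched as a small state machine. This is a hypothetical illustration of those rules as stated in the text, not ALEKS's actual implementation:

```python
# Illustrative sketch of the mastery/assessment rules described above.

class Topic:
    def __init__(self, name, prerequisites=()):
        self.name = name
        self.prerequisites = prerequisites  # topics that must be mastered first
        self.streak = 0
        self.mastered = False

    def ready(self, mastered_names):
        """A topic is 'ready to learn' once all prerequisites are mastered."""
        return not self.mastered and all(p in mastered_names for p in self.prerequisites)

    def record_answer(self, correct):
        self.streak = self.streak + 1 if correct else 0
        if self.streak >= 5:  # five consecutive correct answers
            self.mastered = True

ASSESSMENT_INTERVAL_HOURS = 10  # approximate trigger for a new assessment

def needs_assessment(hours_since_last_assessment):
    return hours_since_last_assessment >= ASSESSMENT_INTERVAL_HOURS
```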

2.3. Procedure

The participants met during the second week of the course to complete a short survey and two learning style inventories: the LSI (Kolb, 1993) and the GSD (Gregorc, 1982c). The order of the learning style surveys was balanced across the students: half had the GSD before the LSI, while the other half had the LSI before the GSD. The additional survey consisted of Likert-style questions (scale from 1 to 5, 5 being highest) regarding computer comfort, CBI experience, knowledge of statistics, mathematical ability, and motivation to do well in the course. Students were also asked to indicate their handedness and gender. After the course was over, the amount learned by each student was determined by subtracting the initial assessment score from the final assessment score.

3. Results: Experiment 1

3.1. Effects of learning styles

ANOVA was used to determine if the amount of material learned by the students differed according to their learning styles. This showed that there was no effect of Kolb learning style on the amount of material learned (F(3, 32) = .704, p = .557) or on the final assessment scores (F(3, 32) = .821, p = .492). There was also no effect of Kolb style on the total amount of time spent (F(3, 32) = .351, p = .789). Table 1 contains the means for each learning style.

Although there was no difference in amount of time spent (F(3, 32) = 1.336, p = .28), there was a difference in amount learned according to the learning styles identified by the GSD (F(3, 32) = 2.992, p = .045). Univariate contrasts showed that students with the Concrete Sequential learning style learned significantly less than students with the Abstract Random (p = .011) or Concrete Random (p = .047) styles. Table 1 contains these means also. In terms of course material, the Abstract Random students learned 21.2% more topics than the Concrete Sequential students and the Concrete Random students learned 15.6% more topics than the Concrete Sequential students.

Table 1. Experiment 1: Performance means by learning style

Learning style     N   Initial assessment   Final assessment   Amount learned   Time spent (hours)
Kolb
  Accommodating    3   12.3                 88.7               76.4             32.1
  Assimilating    16   13.1                 75.5               62.4             37.2
  Converging      10   13.5                 76.1               62.6             32.9
  Diverging        7    4.7                 75.4               70.7             36.5
Gregorc
  CS              16   14.6                 70.6               56.0^a           33.3
  AS               5   16.4                 84.2               67.8             44.5
  AR               7    3.9                 81.1               77.2^a           37.1
  CR               8    9.0                 80.6               71.6^a           32.8

^a Significantly different, α = .05 level.
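A minimal sketch of this style-group comparison, using hypothetical scores and SciPy's one-way ANOVA (the paper does not say which statistics package was actually used):

```python
import pandas as pd
from scipy import stats

# Hypothetical records: one row per student, with the GSD style and the
# learning measure used in the paper (final minus initial assessment score).
df = pd.DataFrame({
    "gsd_style": ["CS", "CS", "AS", "AS", "AR", "AR", "CR", "CR"],
    "initial":   [14.0, 15.0, 16.0, 17.0,  4.0,  3.0,  9.0, 10.0],
    "final":     [70.0, 72.0, 84.0, 85.0, 81.0, 80.0, 80.0, 82.0],
})
df["amount_learned"] = df["final"] - df["initial"]

# One-way ANOVA: does amount learned differ across the four GSD styles?
groups = [g["amount_learned"].to_numpy() for _, g in df.groupby("gsd_style")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_val:.3f}")
```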

3.2. Survey measure effects

Further analysis was conducted on the survey measures to determine if the students' backgrounds could have played a role in the learning differences. Tables 2 and 3 contain the data. Of particular interest was whether the students' math/statistics background (or lack of it) was responsible for the differences. One of the survey measures asked the students: "How would you rate your mathematical abilities compared to other students at your academic level? (1) Worse than All, (2) Worse than Most, (3) Average, (4) Better than Most, (5) Better than All." An ANOVA showed that there was no significant difference in the amount learned by math rating (F(2, 33) = .652, p = .528) or in the students' math ability rating by GSD style (F(3, 32) = .582, p = .631). Additionally, there was not a significant effect of math ability rating on the final assessment score (F(2, 33) = 1.558, p = .226). There was no interaction between Gregorc style and math ability rating on the amount of material learned (p = .4).

Regarding their statistics background, another survey question asked the students: "On a scale from 1 to 5, how would you rate your knowledge of statistics prior to taking this course? 1 = No knowledge, 5 = Extremely knowledgeable." An ANOVA showed that there was no significant difference in the amount learned by statistics knowledge rating (F(3, 32) = 2.181, p = .11) or in the students' statistics knowledge rating by GSD style (F(3, 32) = .545, p = .655). There was also no effect of statistics knowledge on final assessment scores (F(3, 32) = .955, p = .426). Finally, there was no interaction of statistics knowledge and Gregorc style on amount of material learned (p = .946). These analyses indicate that the difference in amount learned is not due to the students' varying backgrounds in mathematics and statistics.

Table 2. Experiment 1: Performance means by survey measures

Characteristic       N   Initial assessment   Final assessment   Amount learned   Time spent (hours)
Computer comfort
  2                  1    3.0                 87.0               84.0             43.7
  3                 12   10.2                 76.3               66.1             34.4
  4                 17   15.3                 76.1               60.8             35.6
  5                  6    5.0                 77.8               72.8             35.8
CBI experience
  1                 13   15.7                 76.5               60.8             32.1
  2                  5   12.0                 84.2               72.2             35.5
  3                  6    7.2                 75.2               68.0             37.0
  4                 10    9.0                 73.4               64.4             36.1
  5                  2    9.0                 81.0               72.0             49.6
Stats knowledge
  1                  6    3.2                 70.3               67.1             70.3
  2                 17   10.9                 78.2               67.3             78.2
  3                  8   22.3                 74.4               52.1             74.4
  4                  5    6.4                 83.2               76.8             83.2
Math rating
  2                  8    1.9                 72.0               70.1             45.7^a
  3                 19   13.9                 75.7               61.8             32.5^a
  4                  9   15.1                 83.1               68.0             32.7^a
Motivation rating
  4                 17   11.7                 77.8               66.1             30.1^a
  5                 19   11.4                 75.8               64.4             40.2^a
Handedness
  Left               6    8.3                 70.7               62.3             26.4^a
  Right             30   12.2                 78.0               65.8             37.3^a
Gender
  Female            30   11.7                 76.4               64.6             36.7
  Male               6   10.5                 78.7               68.2             29.0

The survey questions used a Likert scale from 1 to 5, 5 being highest.
^a Significantly different, α = .05 level.

Table 3. Experiment 1: Survey means by learning style^a

Styles            N   Computer comfort   CBI experience   Stats knowledge   Math ability   Motivation
Kolb
  Accommodating   3   3.00               2.00             2.00              3.00           4.67
  Assimilating   16   3.75               2.50             2.56              3.13           4.69
  Converging     10   4.20               2.60             2.30              3.10           4.50
  Diverging       7   3.57               2.71             2.00              2.71           4.14
Gregorc
  CS             16   4.00               2.44             2.31              3.19           4.50
  AS              5   3.60               3.40             2.80              2.80           4.80
  AR              7   3.57               2.14             2.29              2.86           4.71
  CR              8   3.63               2.50             2.13              3.00           4.25

^a None significant at the α = .05 level.

3.3. Correlations

Correlational analysis provided additional evidence that performance was affected by learning style as identified by the Gregorc styles, but not by the Kolb styles. The results in Table 4 show that there was no significant correlation between the Kolb dimensions and any of the performance measures. On the other hand, the GSD Concrete Sequential score was negatively correlated with the final assessment score. Thus, those who were identified as having very strong characteristics of the CS type scored lower on the final assessment than those who showed weaker CS characteristics. There was also a positive correlation between the Concrete Random score and the final assessment score. Thus, students with strong CR characteristics scored higher on the final assessment than students with weaker CR characteristics. These correlations support the finding that students identified as CS learned significantly less than the CR students and that there was no effect of Kolb learning style.

Table 4. Experiment 1: Correlations

                      1      2      3      4      5      6      7      8      9     10     11     12     13     14     15
GSD scores
 1. CS
 2. AS               .24
 3. AR              -.53^a -.52^a
 4. CR              -.58^a -.41^b -.11
Performance
 5. Amt. learned    -.26   -.21    .10    .32
 6. Final assess.   -.36^b  .15   -.11    .34^b  .63^a
 7. Time spent      -.15   -.09    .08    .07   -.08   -.05
Survey measures
 8. Computer comfort .47^a  .07   -.27   -.17    .00   -.02    .00
 9. CBI experience  -.05    .05   -.17    .13    .11   -.07    .26   -.05
10. Stats knowledge -.08    .23   -.15   -.02    .00    .19    .10    .23    .19
11. Math ability     .06    .29   -.24   -.03   -.03    .29   -.37^b  .17    .10    .47^a
12. Motivation      -.13    .07    .31   -.18   -.04   -.07    .43^a -.13    .00    .16   -.04
LSI scores
13. CE              -.19   -.06   -.14    .32    .06    .03    .10   -.38^b  .08   -.08   -.15   -.04
14. RO               .27   -.16    .11   -.30    .03   -.12   -.01    .00    .15   -.13   -.02   -.28   -.32
15. AC              -.16    .25    .05   -.06   -.29    .03    .18    .17   -.20    .42^b  .28    .44^a  .09   -.55^a
16. AE               .03   -.01   -.02    .06    .16    .07   -.22    .17   -.02   -.16   -.08   -.07   -.62^a -.18   -.39^a

^a Significant at the α = .01 level. ^b Significant at the α = .05 level.
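The entries in Table 4 are pairwise correlations between instrument scores and performance measures. A minimal sketch of one such computation with SciPy, on hypothetical data (the numbers below are invented, not the study's):

```python
from scipy import stats

# Hypothetical paired observations: GSD CS score vs. final assessment score.
cs_scores    = [31, 28, 25, 22, 30, 19, 24, 27]
final_scores = [62, 70, 75, 83, 66, 88, 78, 71]

r, p = stats.pearsonr(cs_scores, final_scores)  # Pearson product-moment r
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the CS finding
```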

4. Discussion: Experiment 1

These results provide answers to the questions raised earlier: (1) there was an effect of learning style on course performance in a non-linear CBI course teaching introductory probability and statistics: students with the CS learning style learned significantly less than students with the AR or CR styles; (2) only the GSD, and not the LSI, showed significant effects of learning style on performance.

Before further analyzing these results, the issue of an unexpected variable must be dealt with: the students were allowed to bring notes to the in-class assessments (i.e., assessments 1 and 2, and the final assessment). Thus, the difference between the learning styles may not truly reflect compatibility with the CBI system so much as how adept the students were at creating and using their notes. The experiment was continued during the second quarter of the course, where all students were provided with formula pages created by the instructor, but were not allowed to use their own notes. This scenario is described in Experiment 2.

5. Method: Experiment 2

5.1. Participants

Thirty-one students from Experiment 1 agreed to participate in Experiment 2. (Of the 36 students that participated in Experiment 1, one did not enroll in the second quarter, three dropped, and one decided to cease participating.) Three additional students agreed to participate, for a total of 34 participants.

5.2. Course description

The course was again taught using the ALEKS system (see Experiment 1 for a description). It was the second quarter of a three-quarter sequence covering Probability & Statistics. (A quarter lasts 10 weeks plus a final examination week.) It built on the basics learned in the first quarter and continued through the next levels of statistics and probability, covering topics such as regression lines, correlation, normal distributions, and expectation and variance of random variables. The students were not allowed to use their own notes during in-class assessments. Instead, the instructor provided them with two double-sided pages containing relevant formulas.

5.3. Procedure

Subjects who had not participated in Experiment 1 met during the second week to complete a short survey and the two learning style inventories used in Experiment 1: the LSI (Kolb, 1993) and the GSD (Gregorc, 1982c). They were paid $10 for their time. For students who had participated in Experiment 1, the learning style and survey information was reused to analyze their performance in the second quarter of the course. After the course was over, the amount learned by each student was determined by subtracting the initial assessment score from the final assessment score.

Table 5. Experiment 2: Performance means by learning style

Learning style     N   Initial assessment   Final assessment   Amount learned   Time spent (hours)
Kolb
  Accommodating    4    2.8                 83.5               80.7             37.5
  Assimilating    15   10.6                 70.1               59.5             37.9
  Converging       8   17.3                 80.6               63.3             36.4
  Diverging        7    1.0                 70.4               69.4             40.4
Gregorc
  CS              13   11.4                 67.3               55.9^ab          33.2
  AS               4   15.8                 73.3               57.5             47.7
  AR               9    5.8                 75.7               69.9^b           44.2
  CR               8    6.5                 84.4               77.9^a           34.0

^a Significantly different, α = .05 level.
^b Marginally significant difference, α = .09 level.

6. Results: Experiment 2

6.1. Effects of learning styles

As in Experiment 1, there was no effect of Kolb learning style in Experiment 2 on the amount of material learned (F(3, 30) = 1.4, p = .262) or on the final assessment score (F(3, 30) = .969, p = .42).

Table 6. Experiment 2: Performance means by survey measures

Survey measure       N   Initial assessment   Final assessment   Amount learned   Time spent (hours)
Computer comfort
  2                  1    0.0                 78.0               78.0             43.4
  3                 12    5.9                 75.4               69.5             35.1
  4                 18   12.5                 71.8               59.3             37.5
  5                  3    6.3                 82.7               76.3             50.3
CBI experience
  1                 12   14.2                 72.5               58.3             34.6
  2                  4    0.0                 80.5               80.5             35.7
  3                  7    2.1                 76.3               74.2             39.9
  4                  9   12.3                 70.1               57.8             39.1
  5                  2    9.5                 83.5               74.0             51.4
Stats knowledge
  1                  5    3.8                 75.4               71.6             38.6
  2                 19    9.1                 74.5               65.4             36.1
  3                  6   11.2                 66.3               55.1             42.0
  4                  4   14.0                 83.3               69.3             40.3
Math rating
  2                  9    1.2^a               67.1               65.9             42.6
  3                 19    9.2                 74.7               65.5             37.7
  4                  6   21.7^a               83.5               61.8             32.0
Motivation rating
  4                 16    6.9                 74.0               67.1             32.4^a
  5                 18   11.3                 74.4               63.1             42.9^a
Handedness
  Left               5   10.8                 63.0               52.2             27.9
  Right             29    9.0                 76.2               67.2             39.7
Gender
  Female            29    8.8                 73.1               64.3             38.9
  Male               5   12.0                 80.6               68.6             32.5

^a Significantly different, α = .05 level.

There was also no effect of Kolb learning style on the total amount of time spent (F(3, 30) = .092, p = .964). Table 5 contains the means by learning style.

Similar to Experiment 1, there was no effect of the Gregorc learning style on total amount of time spent (F(3, 30) = 2.071, p = .125), but there was an effect on the amount of material learned (F(3, 30) = 2.787, p = .058). Univariate contrasts showed that students with the CS learning style learned significantly less than students with the CR style (p = .013). The difference between the CS and the AR students, significant in Experiment 1, was only marginal in Experiment 2 (p = .091). Table 5 contains these means also. In terms of course material, the CR students learned 22% more material than the CS students and the AR students learned 14% more than the CS students. An additional marginally significant difference was found in Experiment 2: students with the AS learning style learned less than students with the CR style (p = .081). Perhaps without the use of their notes, the AS students had more difficulty learning from the system than the CR students. So, even though notes were not allowed and there were slightly fewer students participating, the findings of Experiment 1 are supported.

6.2. Survey measure effects

The survey measures were analyzed with the performance data to determine if the students' mathematics/statistics backgrounds played a role in the learning differences of Experiment 2. Tables 6 and 7 contain the data. As in Experiment 1, an ANOVA showed that there was no significant difference in the amount learned by math rating (F(2, 31) = .087, p = .917) or in the students' math ability rating by GSD style (F(3, 30) = .448, p = .721). Again, there was no interaction between Gregorc style and math ability rating on the amount of material learned (p = .813) and there was no effect of math ability rating on the final assessment (F(2, 31) = 1.425, p = .256). Regarding their statistics knowledge, an ANOVA showed that there was no significant difference in the amount learned by knowledge rating (F(3, 30) = .719, p = .548). There was also no effect in the students' statistics knowledge rating by their GSD style (F(3, 30) = .677, p = .573). Additionally, there was no effect of statistics knowledge on the final assessment scores (F(3, 30) = .651, p = .589). Finally, there was no interaction of statistics knowledge and Gregorc style on amount of material learned (p = .606). These findings are comparable to those found in Experiment 1 and again verify that the difference in amount learned is not due to the students' varying backgrounds in mathematics and statistics.

Table 7. Experiment 2: Survey means by learning style^a

Styles            N   Computer comfort   CBI experience   Stats knowledge   Math ability   Motivation
Kolb
  Accommodating   4   3.25               1.75             2.00              3.00           4.50
  Assimilating   15   3.80               2.53             2.53              3.00           4.73
  Converging      8   4.00               3.00             2.38              3.00           4.38
  Diverging       7   3.29               2.57             1.71              2.57           4.29
Gregorc
  CS             13   3.77               2.54             2.23              3.08           4.46
  AS              4   3.75               3.25             2.75              2.75           4.75
  AR              9   3.67               2.22             2.33              2.78           4.67
  CR              8   3.50               2.63             2.00              2.88           4.38

^a None significant at the α = .05 level.

Table 8. Experiment 2: Correlations

                      1      2      3      4      5      6      7      8      9     10     11     12     13     14     15
GSD scores
 1. CS
 2. AS               .32
 3. AR              -.55^a -.58^a
 4. CR              -.58^a -.47^a -.04
Performance
 5. Amt. learned    -.23   -.20   -.07    .42^b
 6. Final assess.   -.21    .04   -.20    .39^b  .72^a
 7. Time spent      -.21   -.23    .26   -.02    .17   -.02
Survey measures
 8. Computer comfort .39^b -.01   -.14   -.15   -.10    .01    .19
 9. CBI experience   .05    .08   -.12   -.01    .08    .02    .24    .04
10. Stats knowledge  .00    .20   -.03   -.12   -.10    .04    .10    .25    .15
11. Math ability     .10    .27   -.26   -.06   -.06    .29   -.25    .00    .09    .36^b
12. Motivation      -.27    .04    .25   -.11   -.10    .01    .37^b -.19    .13    .29   -.04
LSI scores
13. CE              -.18   -.27   -.02    .37^b  .24    .05    .14   -.34   -.14   -.27   -.15   -.08
14. RO               .37^b  .03    .05   -.51^a -.19   -.24   -.11    .00    .16   -.13    .04   -.08   -.41^b
15. AC              -.05    .24    .01   -.12   -.37^b -.06    .09    .23   -.13    .48^a  .18    .45^a -.09   -.39^b
16. AE              -.17    .02   -.04    .27    .29    .24   -.08    .09    .10   -.05   -.04   -.21   -.44^a -.25   -.40^b

^a Significant at the α = .01 level. ^b Significant at the α = .05 level.

6.3. Correlations

The negative correlation between CS scores and the final assessment scores found in Experiment 1 was not significant in Experiment 2 (see Table 8). This may be due to some of the CS students becoming more accustomed to the instructional system, resulting in better learning strategies and higher scores on the final assessment. As in Experiment 1, there was a significant positive correlation between the CR score and the score on the final assessment, again indicating that a student with strong CR characteristics scored higher on the final than a student with weaker CR characteristics. One new phenomenon was found in Experiment 2: there was a significant positive correlation between the amount learned and the CR score, showing that the stronger the CR characteristics, the more the student learned with the ALEKS system.

7. Discussion: Experiment 2

7.1. Findings are not due to ceiling effects

This experiment replicated the major findings of Experiment 1, thus answering the questions posed earlier: (1) there was an effect of learning style on course performance in a non-linear CBI course teaching introductory probability and statistics; (2) only the GSD, and not the LSI, showed significant effects of learning style on performance. The learning style effect showed that students with the CS learning style learned significantly less than the students with the CR style.

Before looking at the data, one might argue that this could be due to ceiling effects: that the initial scores of the CS students could be so high as to limit the maximum possible amount learned. However, a one-way ANOVA shows no significant difference by Gregorc learning style in the initial assessment scores (Experiment 1: F(3, 32) = 1.138, p = .349; Experiment 2: F(3, 30) = .6, p = .62). Tables 1 and 5 contain the means. Additionally, there is no significant difference by learning style in the final assessment scores (Experiment 1: F(3, 32) = 2.264, p = .1; Experiment 2: F(3, 30) = 1.452, p = .247). Interestingly, there is a pattern to the means: in both experiments, the mean of the CS initial assessment is the second highest and the mean of their final assessments is the lowest. If there were a ceiling effect, the CS final assessment scores would be near 100%.

One could also argue that perhaps the CS students suffered from general computer anxiety, which interfered with their learning. However, the positive correlation between the CS scores on the GSD instrument and their ratings of computer comfort disproves this (Tables 4 and 8). Thus, it seems that lack of compatibility between the format of the CBI system and the preferences of the CS students is a more likely explanation for the learning style effect.

7.2. Effect of Gregorc learning style

Gregorc (1982b) gives several pages of characteristics of individuals with dominant CS preferences. These individuals are said to approach experiences in an ordered, sequential manner. They are averse to change and do not quickly adapt to new conditions. They prefer predictable environments and dislike surprises. They also have a tendency to be inflexible and rigid. Additionally, it has been found that CS individuals prefer traditional methods of instruction with deadlines and preselected assignments (Andrews & Wheeler, 1994). These characteristics could explain the CS students' lower level of material learned. The ALEKS system was an unexpected change from the traditional method of teaching the course (traditional meaning multiple lectures per week in combination with reading/homework assignments and periodic paper exams). In addition, the ALEKS system does not force a strict linear path through the material. The course content is divided into sections, which are presented in a pie format. The topics that the student is ready for are displayed when the mouse hovers over a particular piece of the pie. These are determined by the last assessment and the topics the student has mastered by practice since that assessment. At any time, the students are able to choose, from any piece of the pie, which displayed topic they would like to work on. After approximately 10 hours of work, the system requires the student to go through an assessment. The results of this reset the student's pie with the ready-to-learn topics. If the assessment shows that the student doesn't really understand one of the previously completed topics, they will be required to go through the problems again. The mismatch between the format of the ALEKS system and the preferences of CS individuals could be the explanation for the lower amount learned by the CS students in these experiments.

On the other hand, many aspects of the ALEKS format are compatible with the characteristics of the CR style. Gregorc (1982b) also describes the characteristics of a CR individual. They prefer a stimulus-rich environment with freedom of movement and expression. They are not averse to change and often instigate it themselves, as they do not like to be entrenched in any one place for long periods of time. When told they cannot do something, they are challenged to prove they can. Additionally, Andrews and Wheeler (1994) found that CR students prefer to choose their instruction methods and due dates in a flexible framework. This ideal match-up seems to explain why the CR students learned a significantly higher amount of material than the CS students.

7.3. Possible explanation for lack of Kolb LSI effect

Another Experiment 1 finding that was replicated is the lack of an effect of Kolb learning style on the amount of material learned. This refutes the assertion made in Federico (2000) that attitudes toward CBI aligned along Kolb learning styles would affect learning. He found that individuals with Accommodating and Assimilating profiles had the most agreeable attitudes toward CBI, while those with Converging and Diverging profiles had the least agreeable attitudes. However, students with these styles did not perform significantly differently in the experiments reported here.

A possible explanation for this lack of effect goes back to the factor analysis in Bokoros et al. (1992). The Kolb Learning Style Inventory did not load on the factor involving an individual's ordering and organizational preferences. Thus, the LSI seems to have no way of measuring whether an individual prefers a highly structured environment with a required linear path or a less structured environment where one can choose one's own path and determine one's own pace. So, although the instrument distinguishes between modes of grasping and transforming experience, it does not seem to be capable of identifying the characteristics that result in individual differences in the amount of material learned when using a low-structure, non-linear CBI system.

8. General discussion

8.1. Non-significant results using the LSI

These experiments have shown that it is important to consider the focus of a learning style instrument before using it to determine whether there are learning style effects in a particular environment. The findings here call into question the CBI study findings that have used the Kolb instrument and found no effect. Melara (1996) found no effect of Kolb learning style in a study comparing hierarchical (in which a concept could only be viewed when all subordinate concepts had been viewed) vs. network (concepts could be viewed in any order) structure of material to be learned. Dalkir (1998) found that there was no significant difference in learning from a CBI system with Kolb learning styles, while there was with the Entwistle ASI. Reed, Oughton, Ayersman, Ervin, and Giessler (2000) found that there was no effect of Kolb learning style on the nature of hypermedia system use (linear vs. non-linear traversal) or the amount of time spent with the system, although there was an effect using the Group Embedded Figures Test. Kettanurak, Ramamurthy, and Haseman (2001) found that there was not a significant effect of Kolb learning style on user attitude toward low-interactivity (less navigational control) vs. high-interactivity (more navigational control) multimedia environments. McWilliams (2001) found no significant difference between performance and Kolb learning styles with computer-based training, using participants from a variety of backgrounds such as law firms, financial institutions, manufacturing companies, and university computer centers. The lack of effect in these studies could be due to other factors, but these studies all involve order and structure aspects that the Kolb instrument does not seem to measure.

By contrast, a review of the approximately 40 publications that list Gregorc as a keyword turns up four studies involving computer use with order/structure aspects, and all of them found a significant effect (Ross, 2000; Ross & Shultz, 1999; Ross et al., 2001; Willis, 1995). The point is not to suggest that the GSD is in general better than the LSI, but rather to show that any model with only two dimensions may lack the critical factor necessary for evaluation within a specific learning environment.

8.2. Multi-dimensional instruments

To avoid missing learning preferences that may be critical to evaluating the compatibility of computer-based instruction systems, perhaps future research should choose style instruments that focus on more than two dimensions. One example is Sternberg's Theory of Mental Self-government (Sternberg, 1988). This model contains five dimensions and avoids labeling by instead categorizing an individual's characteristics for each style within the theory on a continuum from very low to very high. Sternberg argues that societal government grew out of the various types of self-government, and he draws his style dimensions from those aspects. The dimensions are: functions (legislative, executive, and judicial), forms (monarchic, hierarchic, oligarchic, and anarchic), levels (global and local), scope (internal and external), and leanings (liberal and conservative). Another multidimensional instrument is the Learning Style Orientation Inventory (Towler & Dipboye, 2003). This measure was designed using empirical methods so as to ensure construct validity and internal consistency reliability. It consists of learning preference statements, which an individual scores on a scale from 1 to 5 to indicate how characteristic each is of their personal preference. The model involves five learning factors: discovery, experience, group orientation, observation, and structure. Since multidimensional instruments are available, future CBI research should avoid the trap of using narrowly focused style instruments when attempting to find compatibility effects.

8.3. Compatibility of CBI formats

Many of the style models give information on the preferred learning and testing formats for each learning style. For example, Sternberg (1994) lists common traditional teaching methods and the styles that are most compatible with them, such as solving given problems (most compatible: executive) and working on projects (most compatible: legislative). He also lists common traditional assessment methods and the styles most compatible with them (e.g., individuals with a legislative style prefer essays rather than multiple-choice exams). This type of information needs to be created for CBI environments.

Some studies exist that explore the compatibility of CBI formats with learning style types. Lee and Lehman (1993) examined a hypermedia environment with and without instructional cues (explicit display of unviewed subtopics when an individual attempted to go to another topic). They found that active learners were able to adequately learn the material regardless of whether the instructional cues were present. However, neutral and passive learners who received

instructional cues performed better on the achievement post-test than those who did not receive them. Shute (1993) taught basic principles of electricity using an intelligent tutoring system with two modes: rule application (where the rule was given to an individual who answered a question incorrectly) and rule induction (where individuals were given the variables involved in a rule and were expected to generate the rule themselves). The results showed that individuals who explored more did better in the rule-induction environment and those who explored less performed better in the rule-application environment. Beishuizen, Stoutjesdijk, and van Putten (1994) studied students learning from a hypertext format of a textbook chapter on memory. They found that content-structuring help was detrimental to the performance of individuals who were deep processors and beneficial to individuals who were shallow processors. In their second experiment they discovered that deep processors who were primarily self-regulators and surface processors who were primarily externally regulated performed better at a hypermedia search task than self-regulating surface processors and externally regulated deep processors. Recker and Pirolli (1995) studied a non-linear hypertext environment used to teach recursion programming in LISP. Individuals with low ability made more programming errors after learning with the non-linear hypertext environment than with a linearly presented version of the same material. After observing the navigation choices of the two groups, they concluded that the low-ability individuals may have been overwhelmed by the complexity and high degree of control within the hypertext system. Ross and Shultz (1999) used a linear format in which next and previous buttons were used to traverse a CBI course on cardiopulmonary resuscitation. Using the GSD as the learning style instrument, they found that AS students performed the best, while AR students performed the worst. More experiments like these need to be performed to build a store of knowledge about the compatibility of computer-based instruction formats with human learning styles.

Understanding the compatibility of CBI formats for different styles allows us to create instructional systems that are effective for all types of students. Sternberg has found that teachers and students alike confuse mismatches in styles of teaching and learning with lack of ability (Sternberg, 1994, p. 36). In addition, teachers have been found to give more favorable evaluations to students who have similar styles (Sternberg & Grigorenko, 1995). Computer systems have the capability of being completely objective. CBI designers should put effort into designing systems that meet the needs of all styles of learning/thinking. There are several ways of doing this. There could be one interface with aspects that appeal to all styles: for example, a hypermedia system that provides linear as well as non-linear traversal. Another strategy could test the learner before beginning instruction and provide them with the most compatible interface. An alternative strategy could analyze the learner's use of the system with statistical and artificial neural network techniques, such as those used by Dalkir (1998), to make system interface modifications dynamically.
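As a concrete illustration of the second strategy (measure the style first, then select the interface), here is a hypothetical Python sketch; the style-to-format mapping is purely illustrative and is not a prescription drawn from any of the studies above:

```python
# Hypothetical mapping from a measured GSD style to CBI interface settings;
# illustrative only, not a validated design.

def choose_interface(gsd_style: str) -> dict:
    """Select interface settings based on a learner's dominant GSD style."""
    sequential = gsd_style in ("CS", "AS")  # Sequential orderers
    return {
        "navigation": "linear" if sequential else "free choice",
        "pacing": "fixed deadlines" if sequential else "self-paced",
        "topic_menu": "single next step" if sequential else "all ready-to-learn topics",
    }

print(choose_interface("CS"))  # -> linear navigation with fixed deadlines
```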
The number of computerized instructional systems is rapidly increasing. The Globewide Network Academy (GNA), a nonprofit organization, has been formed to promote access to worldwide distance education programs. The GNA Distance Learning Catalog (GNA, 2003) contains links to 4937 distance education programs and 32,573 distance education courses.

As schools and universities supplement (or replace) traditional courses with CBI content, the creators must ensure that the CBI formats appeal to all style types. By doing this, we can help prevent student alienation, or worse, a new form of student discrimination.

References

ALEKS Corporation. (2002). ALEKS Corporation web site: http://www.aleks.com.

Allport, G. W. (1937). Personality, a psychological interpretation. New York: Holt.

Andrews, S. V., & Wheeler, P. J. (1994). Personalizing instructional supervision: Differentiating support structures for teachers. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA (ERIC Document Reproduction Service No. ED375081).

Beishuizen, J., Stoutjesdijk, E., & van Putten, K. (1994). Studying textbooks: Effects of learning styles, study task, and instruction. Learning & Instruction, 4(2), 151-174.

Bokoros, M. A., Goldstein, M. B., & Sweeney, M. M. (1992). Common factors in five measures of cognitive style. Current Psychology: Research & Reviews, 11(2), 99-109.

Buch, K., & Bartley, S. (2002). Learning style and training delivery mode preference. Journal of Workplace Learning, 14(1), 5-10.

Dalkir, K. L. (1998). Improving user modeling via the integration of learner characteristics and learner behaviors (Doctoral dissertation, Concordia University, 1998). Dissertation Abstracts International, 59, 462.

Davidson, G. V., Savenye, W. C., & Orr, K. B. (1992). How do learning styles relate to performance in a computer applications course. Journal of Research on Computing in Education, 24(3), 348-358.

Drysdale, M. T., Ross, J. L., & Shultz, R. A. (2001). Cognitive learning styles and academic performance in 19 first-year university courses: Successful students versus students at risk. Journal of Education for Students Placed at Risk, 6(3), 271-289.

Dunn, R., Griggs, S. A., Olson, J., & Beasly, M. (1995). A meta-analytic validation of the Dunn and Dunn model of learning style preferences. Journal of Educational Research, 88(6), 353-362.

Falmagne, J.-C., Koppen, M., Villano, M., Doignon, J.-P., & Johannesen, L. (1990). Introduction to knowledge spaces: How to build, test, and search them. Psychological Review, 97, 201-224.

Federico, P.-A. (1991). Student cognitive attributes and performance in a computer-managed instructional setting. In R. Dillon & J. Pellegrino (Eds.), Instruction: Theoretical and applied perspectives (pp. 16-46). New York: Praeger.

Federico, P.-A. (2000). Learning styles and student attitudes toward various aspects of network-based instruction. Computers in Human Behavior, 16(4), 359-379.

GNA (2003). Globewide Network Academy Distance Learning Catalog. Retrieved November 11, 2003, from the GNA web site: http://www.gnacademy.org/mason/catalog/browse.html.

Gregorc, A. F. (1982a). Gregorc Style Delineator: Development, technical and administration manual. Columbia, CT: Gregorc Associates.

Gregorc, A. F. (1982b). An adult's guide to style. Columbia, CT: Gregorc Associates.

Gregorc, A. F. (1982c). Gregorc Style Delineator. Columbia, CT: Gregorc Associates.

Gregorc, A. F. (1984). Style as a symptom: A phenomenological perspective. Theory into Practice, 23(1), 51-55.

Jain, S. K. (1996). The effects of individual cognitive learning styles and troubleshooting experience on the development of mental models in teaching a database query language to novices (Doctoral dissertation, George Washington University, 1996). Dissertation Abstracts International, 56, 3414.
Kettanurak, V., Ramamurthy, K., & Haseman, W. D. (2001). User attitude as a mediator of learning performance improvement in an interactive multimedia environment: An empirical investigation of the degree of interactivity and learning styles. International Journal of Human-Computer Studies, 54(4), 541-583.

Kolb, D. A. (1984). Experiential learning. Englewood Cliffs, NJ: Prentice-Hall.

Kolb, D. A. (1993). Learning Style Inventory. Boston, MA: Hay Group.

Kolb, D. A. (2003). Experiential learning style bibliography. Retrieved November 11, 2003, from: http://www.learningfromexperience.com/html/research_library.html.

Lee, Y. B., & Lehman, J. D. (1993). Instructional cuing in hypermedia: A study with active and passive learners. Journal of Educational Multimedia and Hypermedia, 2(1), 25-37.

McWilliams, V. M. (2001). Exploring the relationship between computer-based training, learning styles, and cognitive styles (Doctoral dissertation, University of New Mexico, 2001). Dissertation Abstracts International, 62, 539.

Melara, G. E. (1996). Investigating learning styles on different hypertext environments: Hierarchical-like and network-like structures. Journal of Educational Computing Research, 14(4), 313-328.

Moore, B. R. (1990). The relationship between curriculum and learner: Music composition and learning style. Journal of Research in Music Education, 38(1), 24-38.

Recker, M. M., & Pirolli, P. (1995). Modeling individual differences in students' learning strategies. Journal of the Learning Sciences, 4(1), 1-38.

Reed, W. M., Oughton, J. M., Ayersman, D. J., Ervin, J. R., Jr., & Giessler, S. F. (2000). Computer experience, learning style, and hypermedia navigation. Computers in Human Behavior, 16(6), 609-628.

Ross, J. L. (2000). An exploratory analysis of post-secondary student achievement comparing a Web-based and a conventional course learning environment (Doctoral dissertation, University of Calgary, 2000). Dissertation Abstracts International, 61(5-A), 1809.

Ross, J. L., & Shultz, R. A. (1999). Can computer-aided instruction accommodate all learners equally? British Journal of Educational Technology, 30(1), 5-24.

Ross, J. L., Drysdale, M. T., & Shultz, R. A. (2001). Cognitive learning styles and academic performance in two postsecondary computer application courses. Journal of Research on Technology in Education, 33(4), 400-412.

Sein, M. K., & Robey, D. (1991). Learning style and the efficacy of computer training methods. Perceptual & Motor Skills, 72(1), 243-248.

Shute, V. J. (1993). A comparison of learning environments: All that glitters. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools: Technology in education (pp. 47-73). Hillsdale, NJ: Lawrence Erlbaum.

Sternberg, R. J. (1988). Mental self-government: A theory of intellectual styles and their development. Human Development, 31, 197-224.

Sternberg, R. J. (1994). Allowing for thinking styles. Educational Leadership, 52(3), 36-40.

Sternberg, R. J., & Grigorenko, E. (1995). Styles of thinking in the school. European Journal for High Ability, 6, 201-219.

Sternberg, R. J., & Grigorenko, E. (2001). A capsule history of theory and research on styles. In R. J. Sternberg & L.-F. Zhang (Eds.), Perspectives on thinking, learning, and cognitive styles (pp. 1-21). Mahwah, NJ: Lawrence Erlbaum.

Thorndike, E. L. (1913). Educational psychology: Vol. II. The psychology of learning. New York: Teachers College.

Towler, A. J., & Dipboye, R. L. (2003). Development of a learning style orientation measure. Organizational Research Methods, 6(2), 216-235.

Veres, J., II, Sims, R., & Locklear, T. (1991). Improving the reliability of Kolb's revised learning style inventory. Educational and Psychological Measurement, 51, 143-150.
Willis, J. H. (1995). Stress, cognitive style, and job satisfaction of computer programmers (Doctoral dissertation, United States International University, 1995). Dissertation Abstracts International, 55(7-B), 3002.