Chapter 5
Assessing the practicality of the prototype

This chapter elaborates the assessment of the practicality of the prototype in Shanghai. Two studies were organized for this assessment. The first study focused on primary target group users, and the second study focused on other users who were related to, or interested in, the program. This chapter starts with an introduction in Section 5.1. The detailed design of the assessment studies is elaborated in Section 5.2. The results with primary target group users and other users are presented in Sections 5.3 and 5.4 respectively. Finally, the chapter ends with the main conclusions.

5.1 Introduction

Through four rounds of prototyping, as elaborated in Chapter 3, a satisfactory prototype was developed. Based on the expert appraisal at the ECNU (see Section 3.6.2), the prototype was found to be valid and to have the potential to be practical for the intended target users. In order to verify to what extent the prototype would be practical, it was assessed in Shanghai. According to van den Akker (1999) and Nieveen (1997, 1999), practicality usually refers to the extent to which users consider a system appealing and usable under normal conditions. These normal conditions usually refer to the authentic context in which the intended target users work. Based on this guideline, two studies were organized in person in Shanghai to assess the practicality of the program, each involving a different kind of user. The primary target group users, teacher-designers of the subjects of Biology or Geography, took part in the first study. Other users, including teacher-designers of other subjects, CBL designers from secondary schools and software developers from education-related computer companies, took part in the second study. The overall research question was:

To what extent is the CASCADE-MUCH program practical for both primary target group users and other users in the context of Shanghai?

With regard to the first study, the following sub-questions were posed:
1. Are the four components (content, support, interface and scenario) perceived to be practical by the primary target group users?
2. Do novice designers and experienced designers use the program differently?

With the second study, the following sub-questions were posed:
1. Are the four components (content, support, interface and scenario) perceived to be practical by the other users?
2. Is it worthwhile to extend the program to support other subjects/applications?

5.2 Design of the assessment studies

5.2.1 Participants

Study 1: with primary target group users
With the intent of triangulation (Miles & Huberman, 1994), the following criteria were used to select participants for the first study. First, the participants should be real intended target users, i.e. Biology or Geography teachers (or teaching researchers at district educational colleges), because only these two subjects were supported by the program. Second, some participants should have experience of multimedia curriculum design and some should not, because the program was designed to be usable for both novice and experienced designers. Third, the participants had to have basic computer skills; otherwise they might have initial difficulties in using the program. Based on these criteria, six participants were invited and agreed to participate in the study. Their general characteristics are briefly summarized in Table 5.1.

Assessing the practicality of the prototype 145 Table 5.1: General characteristics of the participants in study 1 (n=6) Participants Experienced designer Computer skills A Biology teaching researcher B Biology teacher C Geography teaching researcher D Geography teacher E Biology teaching researcher F Biology teaching researcher Of the six participants, four (A, B, E and F) were Biology teachers or teaching researchers (who worked at district educational colleges), two (C and D) were Geography teachers or teaching researchers; two (B and D) were secondary school teachers, and four (A, C, E and F) were teaching researchers from district educational colleges. Two (E and F) of them were experienced designers, who gained experiences of multimedia curriculum design during the MCB project (see Section 1.2.1). Both of them were responsible for the development of the instructional scenario. Four (A, B, C and D) of them were novice designers, who had never developed instructional scenarios before, but had experience in developing slide shows with PowerPoint or simple instructional software. All of them had basic computer skills and were able to use computers. Study 2: with other users Because the second study aimed at collecting general opinions of other users who were not intended target users but were interested in the CASCADE- MUCH program, participant selection was rather broad. In general, those people who were related to multimedia curriculum design or were interested in CASCADE-MUCH were considered to be proper candidates. Three kinds of people were invited to take part in the workshop. They were: 1. teacher-designers of other subjects; 2. computer-based learning (CBL) designers at secondary schools; 3. software developers at education-related computer companies. It was assumed that winners of a CBL instructional software design competition in the Pudong new area of Shanghai, who were teachers or CBL designers at primary or secondary schools, would be interested in multimedia instructional design. An invitation letter for this study was sent to each winner (about 30 in total). In addition, invitation letters were also sent to some education-related computer companies because they were making similar

educational software and might also be interested. Eventually, 13 participants expressed interest in joining the workshop after receiving the invitation letters. Their general characteristics are summarized in Table 5.2.

Table 5.2: General characteristics of the participants in study 2 (n=13)

Participants                 Organization         Computer skills
Mathematics teachers (n=2)   Primary schools      yes
History teacher (n=1)        Secondary school     yes
Physics teachers (n=2)       Secondary schools    yes
Chemistry teacher (n=1)      Secondary school     yes
Biology teacher (n=1)        Secondary school     yes
CBL designers (n=3)          Secondary schools    yes
Software developers (n=3)    Computer companies   yes

The seven subject teachers had experience in designing presentation slides with PowerPoint or simple multimedia instructional software, but had no experience of instructional scenario development for multimedia curricula. The three CBL designers at secondary schools were usually computer literacy teachers, but also provided technical support for subject teachers who wanted to design instructional software. The three software developers at education-related computer companies were computer programmers or CBL designers.

5.2.2 Procedures and activities

In general, the procedures, activities and instruments of the two studies were similar, but slight differences existed. In this section, the procedures and activities of the two studies are described.

Study 1: with primary target group users
The first study was organized in three small groups at two separate times. Each group consisted of two participants. The first group included one Biology teaching researcher and one Biology teacher (A and B), who were both novice designers. The second group included one Geography teaching researcher and one Geography teacher (C and D), who were novice designers as well. The research activities of these two groups were held successively at one institute (Huangpu District Educational College) in one afternoon. The third group included two Biology teaching researchers (E and F), who were experienced designers of instructional scenarios. The research activity of this group was held one week later at another institute (Putuo District Educational

College). Although the first two groups and the third group were organized at two different times and places, they are described together in this section, since the procedures, activities and instruments were similar. In total, the whole process took about two and a half hours. The procedure of each group consisted of the following steps:

1. Introduction (about 10 minutes)
During the introduction, the aims of the program and the workshop, and the procedure and time schedule of the workshop, were briefly explained.

2. Assignment (about one and a half hours)
After the introduction, each participant was invited to carry out an assignment: to make an instructional scenario for any knowledge unit in their textbooks. The knowledge unit was selected by the participants themselves. Each participant worked individually on one computer. During this process, participants were encouraged to ask questions, and could chat with their peers and/or with the evaluator. Meanwhile, the evaluator observed their working processes and made notes of their questions, errors, comments and suggestions. The evaluator did not disturb their activities during this process unless the participants ran into errors or needed personal support.

3. Questionnaire (about 15 minutes)
After the participants tentatively finished their assignments, the evaluator helped them check whether the instructional scenarios had been successfully produced in Microsoft Word. If so, they were reminded to answer the questions on the questionnaire. A detailed description of the questionnaire is given in Section 5.2.3.

4. Discussion (about 20 minutes)
After the participants had finished their questionnaires, a further in-depth discussion followed. The discussion focused mainly on the advantages and disadvantages of the program, and on how to improve the program in the future.

Study 2: with other users
The second study was held as a workshop in a computer lab at a secondary school (Jinhua Senior Secondary School). The main process of this workshop was similar to that of the first study. Some slight differences are explained below.

First, two assistants were invited to help the evaluator organize the workshop. They installed the program on each computer before the workshop, made video recordings and took observational notes during the workshop, and took minutes during the discussion. Second, one more session (a presentation) was added after the introduction. The main aim of the presentation was to give the participants a general idea of what CASCADE-MUCH is, and more specifically to demonstrate its aims, structure, functionality and design elements (content selection, representation, organization and interface design). The presentation took about half an hour. Third, the assignment was slightly changed, as the users were not intended target users. The teacher-designers of other subjects were encouraged to develop an instructional scenario based on a section or a knowledge unit in the textbooks from their specific subject areas. The CBL designers and software developers were advised to make instructional scenarios for any subject they liked.

5.2.3 Data collection and analysis

Table 5.3 illustrates the instruments used for data collection during the two studies and the data analysis methods applied.

Table 5.3: Overview of instruments and data analysis

Instruments                                Study 1 (n=6)     Study 2 (n=13)
Likert scale questions on questionnaire    Mean              Mean, S.d., t-score
Open-ended questions on questionnaire      S&D               S&D
Built-in log files                         Figures, tables   S&D
Observational notes                        S&D               S&D
Discussion minutes                         S&D               S&D
Video                                      -                 S&D

Note: S.d. = standard deviation; S&D = summary and discussion

During the first study, the instruments used for data collection included a questionnaire, built-in log files, observational notes and discussion minutes. The questionnaire consisted of two parts (see Appendix F). The first part contained 30 Likert scale questions covering the four components (content, support, interface and scenario) of the program. The second part included five open-ended questions. In the first study, only mean scores were calculated for the Likert scale questions, since the number of participants was small. In the second study, means, standard deviations and t-scores were calculated. The answers to the open-ended questions on the questionnaire were summarized.
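As an illustration of how such per-item statistics can be computed, the sketch below aggregates Likert answers into n, mean and standard deviation per question. This is a minimal sketch, not code from the study: the answer matrix is hypothetical, and None is assumed to mark a question that a participant left unanswered (which is why n varies per item in the tables that follow).

```python
from statistics import mean, stdev

# Hypothetical Likert answers (4 = agree ... 1 = disagree); rows are
# participants, columns are questionnaire items. None marks a skipped item.
answers = [
    [4, 3, 4, None],
    [3, 3, 3, 3],
    [4, 4, 3, 2],
]

def summarize_item(scores):
    """Return n, mean and standard deviation for one item, ignoring skips."""
    valid = [s for s in scores if s is not None]
    sd = stdev(valid) if len(valid) > 1 else 0.0
    return len(valid), mean(valid), sd

for item, scores in enumerate(zip(*answers), start=1):  # transpose to items
    n, m, sd = summarize_item(scores)
    print(f"Q{item}: n={n}  mean={m:.1f}  s.d.={sd:.1f}")
```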

The built-in log files were used to record how users really used the program, and to find out the differences (if any) between novice designers and experienced designers. The built-in log files mainly traced the following variables:
- The paths of walking through the program. This variable indicated whether novice designers and experienced designers walked through the program differently, and helped to find out why some screens (if any) were visited several times or not at all.
- The time spent on each screen. On the one hand, this variable indicated whether some screens were harder to understand or more interesting to some users; on the other hand, it also showed whether novice and experienced designers spent different amounts of time on each screen.
- The support tools utilized. This variable indicated what kinds of support were utilized (very often) and might be useful for users.

The data in each built-in log file were visualized with figures or tables. In this way, the similarities and differences between the participants could easily be found. The observational notes contained the questions the users asked, the opinions they expressed, the comments and suggestions they proposed, and the errors they encountered. The discussion minutes provided insight into the comments and suggestions the participants made during the discussion and their reasons for making them.

In the second study, the instruments used for data collection were similar. One additional instrument (video) was used to make a live record of some episodes of the workshop. The video would help the evaluator, and also other people interested in the workshop, to review what had happened during the workshop. In addition, the built-in log files were mainly designed for comparing the differences between novice and experienced designers. The data collected with this instrument were not systematically analyzed, since all participants in this study were novice designers. But the built-in log files still provided some additional information, as summarized in Section 5.4.2.
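To make concrete how the three variables could be derived from such log files, the following sketch processes a list of time-stamped events. The event format, field names and sample data are assumptions made for this illustration; the actual format of the CASCADE-MUCH log files is not documented here.

```python
from collections import Counter

# Hypothetical log: (seconds since start, screen number, action), where
# "enter" marks arrival at a screen and other actions name a support tool.
events = [
    (0, 1, "enter"), (10, 2, "enter"), (27, 2, "keyword"),
    (29, 3, "enter"), (55, 1, "enter"), (65, 1, "edit_panel"),
]

# Variable 1: the path of walking through the program.
route = [screen for _, screen, action in events if action == "enter"]

# Variable 2: time spent on each screen (time until the next screen change;
# the final screen has no exit event, so its time is unknown here).
enters = [(t, s) for t, s, a in events if a == "enter"]
time_per_screen = Counter()
for (t, screen), (t_next, _) in zip(enters, enters[1:]):
    time_per_screen[screen] += t_next - t

# Variable 3: the support tools utilized.
tools_used = Counter(a for _, _, a in events if a != "enter")

print("walking route:", route)                       # [1, 2, 3, 1]
print("seconds per screen:", dict(time_per_screen))  # {1: 10, 2: 19, 3: 26}
print("support tools:", dict(tools_used))            # {'keyword': 1, 'edit_panel': 1}
```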

5.3 Results with primary target group users

5.3.1 Perceived practicality of the four components

Content
Four statements on the questionnaire related to the component of content. The participants' perceptions of the practicality of the content, based on the answers to the four-point Likert scale questions/statements, are summarized in Table 5.4. The original answer data are shown in Appendix H.

Table 5.4: Perceived practicality of the content (n=6)

Content                                                                          Mean*   Max   Min
1. I could easily understand the content on each screen.                         3.3     4     2
2. The content fits my practical needs for multimedia curriculum
   (or learning materials) design.                                               3.0     4     2
3. I could easily understand the explanations of keywords/models.                3.5     4     3
4. I learned some useful information from CASCADE-MUCH.                          3.3     4     2
Overall                                                                          3.3

Note: *4 = agree; 3 = slightly agree; 2 = slightly disagree; 1 = disagree

The overall mean of the answers to these four Likert scale questions was 3.3, which indicates that the participants perceived the content to be rather practical, although some participants slightly disagreed with some specific statements.

Support
The program provides four broad categories of support: information, advice, tools and training. The participants' perceptions of the practicality of the support, based on the answers to the Likert scale questions on the questionnaire, are summarized in Table 5.5.

Table 5.5: Perceived practicality of the support

Support                                                                          n   Mean*   Max   Min
5. The help function provided me with useful models.                             6   3.3     4     2
6. The suggestions gave me some valuable expert advice.                          5   3.4     4     3
7. The previews indicated to me what would be affected by the
   current settings.                                                             5   3.4     4     2
8. The additional explanations of suggestions/previews helped me
   understand why the suggestions/previews were given.                           5   3.0     4     1
9. The tips provided me with useful information.                                 5   3.8     4     3
10. The Edit Panel helped me easily make instructional scenarios.                6   4.0     4     4
11. I could easily export scenarios into Microsoft Word.                         6   3.7     4     2
12. I think the explanations/examples of keywords are practical.                 6   3.3     4     2
13. The concept-mapping tool is useful for me to make content selection.         5   3.4     4     3
Overall                                                                              3.5

Note: *4 = agree; 3 = slightly agree; 2 = slightly disagree; 1 = disagree

Generally speaking, the results in Table 5.5 indicate that the participants perceived the component of support to be practical: the overall mean of the answers to all questions was 3.5 on a four-point scale, and even the lowest mean score was still as high as 3.0 (Q8). However, some participants (slightly) disagreed with some specific statements, such as Q8, which received the minimum score of 1. An encouraging point was that all participants agreed that the Edit Panel could be helpful when making instructional scenarios (Q10). In addition, some questions, such as Q6 and Q7, were not answered by all participants, probably because some categories of support were not utilized.

Interface
The participants' perceptions of the practicality of the interface, based on the answers to the Likert scale questions on the questionnaire, are summarized in Table 5.6.

Table 5.6: Perceived practicality of the interface (n=6)

Interface                                                                        Mean*   Max   Min
14. User tasks on each screen are clear for me.                                  3.7     4     3
15. The meanings of buttons on each screen are clear for me.                     3.5     4     3
16. I like the fonts and colors on each screen.                                  3.2     4     1
17. The navigation tools (linear and browser) are easy for me to use.            3.8     4     3
18. I think the amount of information on each screen is proper.                  3.7     4     3
19. I think each screen has a consistent design.                                 3.8     4     3
20. I believe that the interface is consistent with other computer programs.     3.7     4     3
21. The interface is easy to learn for me.                                       3.5     4     3
22. The interface is easy to use for me.                                         3.7     4     3
23. I feel the program is error free.                                            3.2     4     2
Overall                                                                          3.6

Note: *4 = agree; 3 = slightly agree; 2 = slightly disagree; 1 = disagree

In general, the participants agreed on the practicality of the interface: the overall mean of the answers to all questions was 3.6, and the lowest means, 3.2 (Q16 and Q23), were still higher than 'slightly agree'. One point of concern, however, is that Q16 received the minimum score of 1 and its mean was also relatively low, which indicates that the fonts and colors need improvement. In addition, the relatively low mean of Q23 shows that some errors occurred while the participants were using the program.

Scenario
The scenario is the major product of the program. The participants' perceptions of the practicality of the scenario, based on the answers to the questionnaire, are presented in Table 5.7.

Table 5.7: Perceived practicality of the scenario (n=6)

Scenario                                                                         Mean*   Max   Min
24. The scenario includes what I intended to include.                            3.3     4     2
25. The scenario is well structured.                                             3.5     4     2
26. I can easily modify the scenario within Microsoft Word.                      3.7     4     3
27. I am satisfied with the produced scenario.                                   3.8     4     3
28. The meanings of the elements in the scenario are clear for me.               3.3     4     2
29. The produced scenario can help me easily discuss my wishes with
    computer programmers.                                                        3.5     4     3
30. I believe that the scenario would be easy to understand for
    computer programmers.                                                        3.7     4     3
Overall                                                                          3.5

Note: *4 = agree; 3 = slightly agree; 2 = slightly disagree; 1 = disagree

Although the scenarios produced within one and a half hours during the workshop were rather preliminary, the results in Table 5.7 show that the participants perceived their scenarios to be practical. The overall mean of the answers to all questions was 3.5, and even the lowest means were as high as 3.3. However, the minimum scores indicate that some statements, such as Q24, Q25 and Q28, were slightly disagreed with by some specific participants.

Comparison of mean scores between novice and experienced designers
Figure 5.1 provides the specific means of the novice and experienced designers. The figure shows that the mean scores reported by the experienced designers were generally higher than those of the novice designers, with the exception of the component of interface. This indicates that the experienced designers were more satisfied with the other three components (content, support and scenario) than the novice designers. However, the differences on these three components between the novice and experienced designers were not statistically significant. The t-score of Q8, which had the biggest difference, was 1.55, whereas for a sample size of 3 and a probability of 10%, the t-score must be at least 2.92 for the difference to be statistically significant with 90% certainty (cf. Bhattacharyya & Johnson, 1977; Krathwohl, 1998). With regard to the component of interface, the difference between the experienced and the novice designers was smaller and not significant at all.
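The chapter does not spell out which t-test variant was used; a common choice for two small independent groups is the pooled-variance (Student's) t statistic sketched below. The sample values are made up for illustration and are not the study data; the resulting t would be compared against the critical value from a t table, such as the 2.92 cited above.

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Student's t statistic for two independent samples, pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    pooled_var = ((na - 1) * variance(sample_a) +
                  (nb - 1) * variance(sample_b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical Likert answers of the two groups to one question.
novice = [3, 2, 4, 3]
experienced = [4, 4]

print(f"t = {pooled_t(novice, experienced):.2f}")  # compare with the t-table cutoff
```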

Figure 5.1: Means of the answers given by the novice and experienced designers [a line chart plotting, per question Q1-Q30, the mean scores of the novice designers and of the experienced designers, grouped by the four components content, support, interface and scenario]

Conclusion
The answers to all Likert scale questions on the questionnaire showed that the four components (content, support, interface and scenario) were perceived to be rather practical by the primary target group users. In addition, no statistically significant difference was found between the novice and experienced designers with regard to the perceived practicality of the four components.

5.3.2 Actual use of the program

The results in the built-in log files reflect how participants really used the program. The results were recorded through the variables walking routes, time spent on each screen and support tools utilized.

Walking routes
Figure 5.2 shows examples of two participants (B and E) walking through the Designer's Aid part. A comparison of the walking routes of the six participants showed that these two routes were rather representative. Of the six participants, four walked through the Designer's Aid part in a way similar to Figure 5.2 (A), and two in a way similar to Figure 5.2 (B). Figure 5.2 (A) illustrates how participant B started from screen 1 and gradually stepped forward to the last screen (screen 16). He then returned to the first screen and quickly stepped forward to screen 13. After that he returned to screen 1 again and stepped forward to screen 10. Figure 5.2 (B) illustrates how

participant E stepped back and forward in a more flexible way. It was found that no obvious differences existed between novice and experienced designers. Of the four novice designers, three adopted a walking route similar to Figure 5.2 (A), and one adopted a route similar to Figure 5.2 (B). Of the two experienced designers, one adopted a route similar to Figure 5.2 (A), and the other (participant E) adopted the route of Figure 5.2 (B).

Figure 5.2: Examples of walking routes [two step-by-step traces through screens 1-16 of the Designer's Aid part: (A) the largely linear route of participant B; (B) the more flexible route of participant E]

Notes:
- The numbers on the curve show the time (in seconds) spent on the screens.
- The underlined screen numbers show that the user clicked keyword(s) on those screens.
- The items in brackets [] show that the user visited those parts at that point (e.g. [Main, Edit Panel], [Mindman, Edit Panel]).
- The screen dumps of screens 1 to 16 can be found in Appendix A-2.

Time spent on each screen
It was assumed that novice and experienced designers might spend significantly different amounts of time on each screen, since they had different levels of expertise and might be interested in different aspects. Figure 5.3 shows the time (in seconds) each participant spent on each screen. No significant differences were found between the novice and the experienced designers. All participants spent a similar amount of time on each screen, with the exception of participant A, who spent relatively more time on screens 7 and 11; the other participants showed no obvious differences in the time spent per screen. The results also show that the participants spent relatively more time on screens 1, 5, 7 and 11. Screen 1 is the starting screen of the program, introducing what will be analyzed in the analysis part. Screen 5 is about learner analysis and includes a wizard, which might cost users more time. Screen 7 is a summary of the analysis part, and screen 11 is about content selection. Relatively little time was spent on screens 6, 10 and 12. Screen 6 is for specifying the designer's role, screen 10 is the starting screen of the content elaboration, and screen 12 is about content representation.

Figure 5.3: Time spent on each screen [time in seconds per screen (1-16) for each of the six participants]
Note: A, B, C, D = novice designers; E, F = experienced designers

Support tools utilized
In addition to the walking route and the time spent on each screen, the support tools utilized were also recorded in the built-in log files. Table 5.8 summarizes how many times each support tool was used by each participant. It shows that although minor differences existed between individuals, no obvious differences existed between the novice (A, B, C and D) and the experienced (E and F) designers. From this table, it can be seen that keywords were frequently clicked by all participants with the exception of participant D. The Edit (Panel) tool was utilized at least once by each participant. Each

participant used the Microsoft Word tool at least once, either in the Designer's Aid part or in the Edit Panel part, to produce instructional scenarios in Microsoft Word. Four participants utilized the browser tool to visit screens arbitrarily. Of the six participants, only participant E intentionally sought help, suggestions and previews. This does not mean, however, that the other participants never used these kinds of support: they could also get related help when they clicked keywords, and suggestions and previews popped up automatically at specific times when available, which was not recorded by the built-in log file. Furthermore, five of the six participants used tips, and four of them attempted to use the concept-mapping tool 'Mindman' to produce concept maps.

Table 5.8: Support tools utilized by the participants
(counts per participant, in the column order: Designer's Aid - Kw, Edit, Word, Brws, Help, Sgt, Prv, Tips, Mm; Edit Panel - Word)

A: 13 3 2 8 1 y 9
B: 6 8 1 3 y 7
C: 3 1 1 2 2
D: 1 3
E: 8 1 1 19 2 3 1 1 y
F: 5 2 1 y 1

Notes:
- A, B, C, D = novice designers; E, F = experienced designers
- Kw/Edit/Word/Brws/Help/Sgt/Prv/Tips = the number of times Keywords/Edit/Word/Browser/Help/Suggestion/Preview/Tips were clicked
- Mm = whether the concept-mapping tool (Mindman) was clicked
- Support tools that were never intentionally used have no entry

Furthermore, the table indicates that participant E was an active user, who tried each kind of support tool several times. By comparison, participant D was a more passive user, who only utilized the Edit tool once and the Word tool three times, without trying any other support tools.

Summary and conclusion
This section presented the results for the three variables recorded in the built-in log files. With regard to the walking routes, it was found that two generalized walking routes (linear and flexible) were frequently adopted by the participants, but it was not the case that the novice designers commonly adopted one route and the experienced designers the other: no obvious differences were found in the walking routes between the novice and the experienced designers. With regard to the time spent on each screen, it

seemed that no obvious differences existed between the novice and the experienced designers in this respect either. Likewise, no distinguishable differences were found between the novice and the experienced designers with regard to the support tools utilized. In conclusion, although individual users showed minor differences, no significant overall differences existed between the novice and the experienced designers with regard to these three variables.

5.3.3 Comments and suggestions

General comments and suggestions were collected through the open-ended questions on the questionnaire and during the discussion. In this section, these comments and suggestions are summarized.

Perceived advantages
Participant E, one of the two experienced teacher-designers, agreed that the program indeed had more advantages than the traditional approach of developing instructional scenarios on paper. She said that the program was easy and simple to use, the content was easy to understand, and users could easily produce instructional scenarios by selecting some options. She believed that with the program users could immediately get help, explanations of keywords and expert advice when they met difficulties. She further added: "In practice, mostly it is very difficult to find an expert to help you solve a problem very soon. But with this program, problems can be easily solved." Participant F said that the program was easy to use, and that instructional scenarios were easier to produce and to modify. He believed that the resulting instructional scenarios would help teacher-designers and computer programmers to communicate, because the instructional scenarios were based on systematic multimedia instructional design. Although the four novice designers had no experience in instructional scenario development, they expressed similar opinions on the advantages of the program. Participant A mentioned that the instructional scenario he produced was well structured and easy to read; with the program, it was not necessary to spend much time organizing the structure during the development of an instructional scenario. Participants B and D agreed that a provisional instructional scenario could be produced very quickly with the program. Participant C mentioned that she had never made an instructional scenario, and had never used such a computer support system before. She felt some difficulty in using it, but believed she could gradually get used to it if she spent

more time on it.

Doubted usefulness
The four novice designers (A, B, C and D) mentioned that they had never developed instructional scenarios before. In practice, they were familiar with making instructional presentation slides with Microsoft PowerPoint, on which they could put text, pictures and even video clips together, and transitions between slides could be accompanied by special effects. For classroom teaching, they thought that these kinds of presentations were usually sufficient and worked well. In addition, they thought that such presentation slides were easy to make. Sometimes they made instructional software with Authorware, but not very often; in that case, they worked together with computer teachers in their schools, because they thought development with Authorware required programming skills. However, in both cases they did not first develop instructional scenarios, but made the instructional materials directly. These four novice designers thought that the program was not useful for making presentation slides or instructional software in practice. However, they agreed that what they had developed so far was very limited instructional software, mostly used for one lesson, or only for a part of one lesson. For more complex multimedia projects, for instance a multimedia curriculum covering a whole course, this just-do-it approach would not work efficiently. In such a case, this program would become useful. The two experienced teacher-designers did not doubt the usefulness of the CASCADE-MUCH program. They believed that it would be helpful for teachers making instructional scenarios for multimedia curricula.

Expected materials
Four of the six participants (B, C, E and F) mentioned that they needed more learning materials/resources when making instructional scenarios. They suggested that more lesson materials be added to the program. Participants E and F said that the lesson materials might include any texts, pictures, animations and/or video clips relevant to the knowledge units of the subjects. With such ready-to-use lesson materials, they would not need to make them again if they perceived them to be appropriate for a multimedia curriculum. Participant B thought that teachers particularly needed lesson materials in preparation for classroom teaching. He believed that the program would become more useful and more attractive if it could provide more lesson materials. Participant C mentioned that when she made

presentation slides with PowerPoint, she particularly needed some related supporting materials. If such materials could be found in this program, teachers would be more willing to use it.

Simplified version
Two participants (C and D) suggested that the program should have a simplified version, so that new users of the program, or novice designers of multimedia curricula, could start with it easily. They thought that the current program was complicated and might not be very easy to learn or to use for new users or novice designers. They believed that both a simplified (or 'foolproof') version and a comprehensive version should be provided. The simplified version might include some basic functionality, so that new users could start with fewer difficulties. The comprehensive version might include more tools and broader functionality, which experienced teacher-designers could use to make more complicated instructional scenarios. More discussion on this topic can be found in Section 6.2.5.

Replaced product
Participant E thought that it might be better if the program could directly produce a final multimedia curriculum rather than the intermediate product of an instructional scenario. She said that if teacher-designers could directly construct multimedia curricula with the program, the process of multimedia curriculum development would become simpler and more efficient, because the teacher-designers would then not need to discuss and negotiate with computer programmers. An elaborated discussion of this issue can be found in Section 6.2.4.

5.4 Results with other users

5.4.1 Perceived practicality of the four components

As mentioned in Section 5.2.1, 13 other users, including teachers of other subjects, CBL designers at secondary schools and software developers at computer companies, took part in this workshop. Although not all of them were teacher-designers, they were advised to use the program as if they were intended target users. The questionnaire was the same as that used in the first study. The mean and standard deviation of each question are presented in Table 5.9. Furthermore, in order to make the results easier to read, they are also presented in Figure 5.4.

Table 5.9: Perceived practicality of the four components by other users

Content                                                                          n    Mean*   S.d.
1. I could easily understand the content on each screen.                         13   3.6     0.5
2. The content fits my practical needs for multimedia curriculum
   (or learning materials) design.                                               13   3.2     0.9
3. I could easily understand the explanations of keywords/models.                13   3.3     0.8
4. I learned some useful information from CASCADE-MUCH.                          13   3.3     0.9
Sub-total                                                                             3.3     0.8

Support                                                                          n    Mean*   S.d.
5. The help function provided me with useful models.                             13   3.1     0.9
6. The suggestions gave me some valuable expert advice.                          13   3.2     0.6
7. The previews indicated to me what would be affected by the
   current settings.                                                             13   3.2     0.6
8. The additional explanations of the suggestions/previews helped me
   understand why the suggestions/previews were given.                           11   3.2     0.6
9. The tips provided me with useful information.                                 12   3.5     0.5
10. The support tool of Edit Panel could help me easily make
    instructional scenarios.                                                     12   3.5     0.7
11. I could easily export scenarios into Microsoft Word.                         13   3.5     0.8
12. I think the explanations/examples of keywords are practical.                 12   3.4     0.9
13. The concept-mapping tool is useful for me to make content selection.         12   3.5     0.7
Sub-total                                                                             3.4     0.7

Interface                                                                        n    Mean*   S.d.
14. User tasks on each screen are clear to me.                                   13   3.2     0.7
15. The meanings of buttons on each screen are clear to me.                      13   3.3     0.6
16. I like the fonts and colors on each screen.                                  13   2.7     0.9
17. The navigation tools (linear and browser) are easy for me to use.            11   3.3     0.5
18. I think the amount of information on each screen is proper.                  13   3.1     0.8
19. I think each screen has a consistent design.                                 12   3.1     0.9
20. I believe that the interface is consistent with other computer programs.     12   2.8     0.6
21. The interface is easy to learn.                                              13   3.5     0.5
22. The interface is easy to use.                                                13   3.5     0.5
23. I feel the program is error free.                                            11   2.5     0.9
Sub-total                                                                             3.1     0.7

Scenario                                                                         n    Mean*   S.d.
24. The scenario includes what I intended to include.                            11   3.1     0.8
25. The scenario is well structured.                                             12   3.3     0.7
26. I can easily modify the scenario within Microsoft Word.                      12   3.4     0.5
27. I am satisfied with the produced scenario.                                   11   3.3     0.6
28. The meanings of the elements in the scenario are clear for me.               12   3.4     0.5
29. The produced scenario can help me easily discuss my wishes with
    computer programmers.                                                        12   3.2     0.4
30. I believe that the scenario would be easy to understand for
    computer programmers.                                                        12   3.0     0.7
Sub-total                                                                             3.2     0.6
Total                                                                                 3.2     0.7

Note: *4 = agree; 3 = slightly agree; 2 = slightly disagree; 1 = disagree

Generally speaking, the four components of the program were perceived to be rather practical by the 13 other users (7 teacher-designers of other subjects, 3 CBL designers at secondary schools and 3 software developers at computer companies), based on the data collected with the questionnaire. The average mean of the answers to all questions was 3.2, and the average standard deviation was 0.7. Specifically, the overall means of the four components (content, support, interface and scenario) were 3.3, 3.4, 3.1 and 3.2, and the standard deviations were 0.8, 0.7, 0.7 and 0.6 respectively. In particular, the participants agreed more with the practicality of the component of support (mean = 3.4) than with that of the component of interface (mean = 3.1). However, the t-score was 1.11, which shows that the difference between these two means was not large enough (it should be at least 1.78 for a sample size of 13 and a probability of 10%) to reach statistical significance (cf. Bhattacharyya & Johnson, 1977; Krathwohl, 1998).

Figure 5.4: Illustration of means [the mean of each question Q1-Q30, grouped by the four components content, support, interface and scenario, plotted against the average mean]
Note: the figure marks both the average mean and the mean of each question

Some specific questions had rather low means, as indicated in Figure 5.4. Three means in the interface part were lower than 3.0, and Q23 got the lowest score (2.5). The t-score of Q23 was -2.19 (< -1.78), which indicates that the difference between the mean of Q23 and the average mean was statistically significant; further improvements need to be made to remove the errors. The differences between the other means and the average mean were not statistically significant, since the t-score of Q16, which got the second lowest score (2.7), was -1.56 (> -1.78). Nevertheless, some improvements, particularly related to the component of interface, were expected to be made.
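The exact formula behind these t-scores is not given in the chapter; one common way to test a single question's mean against a reference value is the one-sample t statistic sketched below, using only the summary values reported in Table 5.9. Because rounding and the precise test variant used in the study are unknown, the result of this sketch need not reproduce the reported -2.19 exactly.

```python
from math import sqrt

def one_sample_t(sample_mean, sample_sd, n, reference_mean):
    """One-sample t: distance of a question's mean from a reference mean,
    in standard-error units."""
    return (sample_mean - reference_mean) / (sample_sd / sqrt(n))

# Reported summary values for Q23 (mean 2.5, s.d. 0.9, n = 11) against the
# average mean of 3.2; |t| > 1.78 would count as significant at the 10% level.
print(f"t = {one_sample_t(2.5, 0.9, 11, 3.2):.2f}")
```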

5.4.2 Actual use of the program

The data collected in the built-in log files reflect how the other users really used the program. In this section, a summary of the data is given.

Walking routes
It was found that the other users used the program in a way similar to that of the target users. Of the 13 users, six used the program in a linear way, as shown in Figure 5.2 (A), and the other seven used the program in a more flexible way, as shown in Figure 5.2 (B). The data show that neither way was used markedly more frequently than the other.

Time spent on each screen
Figure 5.5 shows the average time spent on each screen by the 13 users. The figure indicates that users spent relatively more time on screens 1, 2, 8, 11 and 12, and relatively less time on screens 9, 10 and 13. These two groups of data demonstrate that the users spent more time on the starting screens of the prototype and less time on the following screens. However, they spent relatively more time on the screen of the first design element, content selection (screen 11), which indicates that the users paid much attention when they started to actually develop an instructional scenario.

Figure 5.5: Average time spent on each screen [average time in seconds per screen (1-16) for the 13 users]

Support tools utilized
Table 5.10 lists the support tools utilized by each user in the second study. The table shows results similar to those of Table 5.8 in the first study. Keywords were frequently clicked by most users, and each user utilized the Edit Panel tool at least once to actually edit an instructional scenario. The instructional scenario was also exported at least once to Microsoft Word, either from the Designer's Aid part or from the Edit Panel part. The support of help, suggestions and previews was intentionally used relatively rarely. However, this does not mean that these types of support were not used: users could visit help after clicking keywords, and available suggestions and previews could pop up automatically. The concept-mapping tool Mindman was used by almost every user. In conclusion, the other users could make instructional scenarios with the prototype; the explanations and examples of the keywords seemed to be helpful, since the users often clicked the keywords;

and the browser and Mindman might have been helpful for them as well.

Table 5.10: Support tools utilized by other users
(counts per participant, in the column order: Designer's Aid - Kw, Edit, Word, Brws, Help, Sgt, Prv, Tips, Mm; Edit Panel - Word)

P1: 5 2 1 5 1 1 y 2
P2: 9 1 2 1 2 y 3
P3: 1 1 1 2 y 2
P4: 3 1 2 1 y 1
P5: 15 9 9 2 2 y 3
P6: 7 1 15 1 3 4
P7: 17 3 1 65 1 2 1 y 1
P8: 1 4 6
P9: 7 13 4 2 3 y 1
P10: 2 3 3 1 1 4
P11: 1 3 1 9 1 1 y 1
P12: 13 3 8 1 2 y 2
P13: 2 1 2 3 1 7

Notes:
- P1-P13 = participants 1 to 13
- Kw/Edit/Word/Brws/Help/Sgt/Prv/Tips = the number of times Keywords/Edit/Word/Browser/Help/Suggestion/Preview/Tips were clicked
- Mm = whether the concept-mapping tool (Mindman) was clicked
- Support tools that were never intentionally used have no entry

5.4.3 Expected extension to other subjects

The current version of the program supports two subjects: Biology and Geography. During the second assessment study, the teacher-designers of other subjects commonly mentioned that the program should be extended to support other subjects as well. In this section, some reasons for extending it to other subjects are discussed.

The need for multimedia support in other subjects
Many teacher-designers mentioned that not only these two subjects, but also other subjects could benefit from the support of multimedia. For example, one of the Physics teachers said that in Physics there are a lot of natural phenomena to be explained and experiments to be done. Some phenomena are hard to explain, and some experiments are expensive or unsafe to do. In such cases, learners as well as teachers usually expect that computers can provide a virtual learning environment in which

they can observe phenomena or do experiments safely. The History teacher also mentioned that in History there are a lot of historical events that can be better presented with multimedia; some of them can even be presented with games or simulations. In addition, other teachers also expressed that they wanted to use multimedia to improve or enhance their classroom teaching. When they found that only two subjects were supported by the program, they commonly expressed the expectation that more subjects would be added so that they could use it too.

Possibilities for supporting other subjects
Meanwhile, the prototype has the potential to support other subjects as well. For example, a Physics teacher said: "All subjects have commonalities. I think this program is also suitable for the subject of Physics. Of course, the curriculum standard and the supporting resources/materials should be changed to Physics." The Chemistry teacher said: "This program could be used for the subject of Chemistry, but some things should be supported, such as experiments, equations and some special signs." In addition, the program had the following two advantages, which would also apply to other subjects:
- The produced instructional scenarios were well structured and easy to modify, since the scenarios could be exported to Microsoft Word.
- The program could help teacher-designers save time and energy, because it automatically produces instructional scenarios after teacher-designers have entered content.

Furthermore, the available supporting materials made it more convenient for teacher-designers to make instructional scenarios. In addition, some participants also gave suggestions during the workshop on how to extend the program to support other subjects. The Chemistry teacher suggested that several versions of the program should be developed separately to support the various subjects; in this way, each subject version could closely integrate the specific characteristics of that subject and serve it well. The History teacher, however, suggested that it might also be possible to integrate several subjects in the same program. He said: "The current version supports the two subjects of Biology and Geography; other subjects can also be added to it in a similar way." He thought that in this way teachers of one subject might also learn something useful from other subjects.

Because of these potential advantages and possibilities, all seven teacher-designers clearly expressed that they would be willing to use the program in the future if they were to develop instructional scenarios. In addition, two CBL designers and one software developer mentioned that they would be glad to use it as well. In conclusion, the prototype seems to be helpful for other subjects, and it also seems feasible to extend it to other subjects.

5.4.4 Other comments and suggestions

The CBL designers and software developers were not intended or potential target users, but they were a group of related people who were interested in such a computer support program. During the workshop, they proposed many comments and suggestions from their particular points of view. In this section, these comments and suggestions are summarized.

CBL designers
One CBL designer mentioned that one of the most significant advantages of the program was that it could help teacher-designers learn what a systematic approach to multimedia instructional design is. He said that although some teachers had experience in using multimedia software in their classroom teaching, and some of them had even developed presentation slides, most of them did not know how to systematically design and develop a multimedia instructional program. Those teachers who had experience in simple software development mostly developed instructional software based on their own experience, not on a systematic approach. He said that the program provided a systematic approach to multimedia instructional design, including detailed analysis and design. He believed that this approach would be helpful for teachers who want to develop multimedia instructional programs, because they can learn from the program. Related discussion can be found in Section 6.2.5. Another CBL designer made three comments on the program. First, he thought that the Chinese name of the program (a Computer Support System for Multimedia Curriculum Design) was too complex to understand; he suggested simplifying it, for example to 'a Computer Support System for Scenario Development'. Second, he thought that the program would be useful for more complex multimedia projects, but not for the development of simple instructional software. He said that, in practice, teachers used to make instructional software or presentation packages by themselves; they did not need to make instructional scenarios in advance, because usually the

instructional software was simple or small. This opinion was similar to that of some teacher-designers in the first study with primary target group users, but not consistent with that of the first CBL designer. More discussion on this issue can be found in Section 6.2.5. Third, he thought that the most important thing for teachers was to provide them with teaching or lesson materials, not a support system for scenario development. He suggested that the materials part of the program be enhanced. More discussion on this issue is provided in Section 6.2.2.

Software developers
One software developer suggested that the concepts of workspace and project could be introduced into the program to manage the various presentation forms of multimedia. He said that in some programming tools, such as Visual C++, a workspace can include any number of projects, which can be of the same or different categories; projects of different categories can have corresponding development platforms; and any project can be included in one or many workspaces. Although the concepts of workspace and project were not used in the program, some ideas in the program were similar to those concepts. For instance, an instructional scenario can be seen as a workspace, and a presentation form of multimedia can be treated as a project. An instructional scenario can involve many presentation forms, and one presentation form can be included in many instructional scenarios as well, since most presentation forms can be stored in separate files.

Another software developer suggested that the concept of collaboration should be integrated into the program. He said that with the development of computer networks, collaborative work at different sites is becoming feasible, and many computer systems, such as Lotus Notes, support collaborative work. Teacher-designers and computer programmers may work at different sites, even within one multimedia project group; in this way collaborative work becomes possible. If the program can support collaborative work, teacher-designers and computer programmers can work together remotely and effectively. However, since the current version of the program is a CD-ROM based application, this good suggestion is hard to implement. It might be interesting to support collaboration in the future, when the program becomes a web-based application.
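To make the suggested workspace/project analogy concrete, the sketch below models the many-to-many relationship described above: an instructional scenario (the 'workspace') references any number of presentation forms (the 'projects'), and one presentation form, stored in its own file, can be shared by several scenarios. All class and variable names are illustrative assumptions; the program itself contains no such code.

```python
from dataclasses import dataclass, field

@dataclass
class PresentationForm:
    """A presentation form of multimedia, stored in its own file
    (comparable to a 'project')."""
    name: str
    file_path: str

@dataclass
class Scenario:
    """An instructional scenario (comparable to a 'workspace') that can
    include any number of presentation forms."""
    title: str
    forms: list[PresentationForm] = field(default_factory=list)

# One presentation form can be included in many scenarios without copying.
animation = PresentationForm("cell division animation", "media/cell_division.avi")
lesson_a = Scenario("Biology: cell division", [animation])
lesson_b = Scenario("Biology: revision unit", [animation])
print(lesson_a.forms[0] is lesson_b.forms[0])  # True: shared, not duplicated
```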