E-LEARNING TOOL: A REVIEW ON TRENDS IN AUTOMATED PROGRAMMING CODES ASSESSMENT

Muhammad Firdaus Zul Kafli (1), Mohd Fadzli Marhusin (2), Shaharudin Ismail (3), Zul Hilmi Abdullah (4)
Faculty of Science and Technology, Universiti Sains Islam Malaysia (USIM)

ABSTRACT

Student assessment is carried out to gauge progress in the learning process. A ratio of one lecturer to a small number of students is considered a practical arrangement; however, student numbers are increasing, which can lead to a significant increase in the lecturer's workload. Teaching a programming course to a large number of students is challenging. There is a trade-off between giving students many assessments to reinforce their understanding of the subject matter and the extra work required of the lecturer to mark those assessments. Since the assessment of programming code deals chiefly with programming logic, it is desirable to use an automated assessment tool to relieve this unnecessary load. Over the past few years, studies have offered solutions for the automated marking of programming code, each with its own characteristics and mechanisms. This study explores trends in this class of e-learning tool (automated programming code marking systems) by reviewing the existing research and describing the ideal features of such tools as part of a larger e-learning environment. We hope this study will inspire the future development of e-learning tools for automated programming code marking.

KEYWORDS: E-Learning, Automated Assessment, Automated Grading

1.0 INTRODUCTION

Programming assignments are assignments in which students write code and submit it for assessment. The assessment provides the lecturer with a feedback channel showing how well learning goals are being met; it both guides students' learning and gives learner and lecturer feedback about the learning process (Ihantola et al., 2010). However, the number of students in this field grows every year, and manually evaluating assignments, quizzes, and projects becomes too costly in terms of workload and efficiency (Jackson & Usher, 1997a), which in turn affects how much knowledge students can absorb. Hence it is desirable to use an automated assessment tool to relieve this unnecessary load.

E-learning has made headway alongside the development and improvement of the World Wide Web, gaining new significance under a new name: E-learning 2.0 (Downes, 2005). The current trend in automated assessment is towards more complex software tasks and towards increasingly automated aids for delivering a programming course via an e-learning system such as Moodle (MOODLE, 2013). Although tests and quizzes in objective formats (true/false or multiple-choice questions) are easy to implement, evaluating an assessment in the form of programming code requires a more dynamic method. This is because each student's program code can be unique even when it answers the given question and produces the correct output. The ideal features are explained in a later section. With such a tool, a lecturer need not be concerned about an increasing number of students taking a programming subject. The purpose of this paper is to explore the trends in Automated Programming Code Assessment (APCA) by reviewing the existing studies and describing the ideal features for future APCA.

Such a tool would automatically evaluate each submission by comparing the submitted code with an answer scheme, which could comprise one or more possible solutions.

2.0 THE TRENDS OF AUTOMATED PROGRAMMING CODES ASSESSMENT (APCA)

Automated programming code assessment has become an important assisting tool for instructors in programming courses, allowing students' programming assignments to be assessed automatically. Many tools have been developed over the years, and we have categorized them into four phases.

2.1 The Pioneer: The First Automated Programming Codes Assessment

Automated programming code assessment started a long time ago, since lecturers have long asked students to solve programming questions. The first automated code assessment is reported in (Hollingsworth, 1960). Using that system, students submitted programs written in assembly language; the grader ran each program and produced one of two outputs, either "wrong answer" or "program complete". The benefits of the system were the efficient use of computing resources and the ability for more students to submit their programming assignments. Over time, automated programming code assessment evolved. Forsythe and Wirth, and separately Naur, presented systems that examine programs written in ALGOL (Forsythe & Wirth, 1965; Naur, 1964). These systems operated by using a grader program to test submitted programs, focusing on three main functions: supplying test data, keeping track of running time, and maintaining a grade book. Next, Hext and Winings (1969) proposed several new ideas, which required modifications to both the compiler and the operating system. Their technique compares stored test data with the data obtained by executing the student's assignment and produces a report that includes detailed test results. Remarkably, the authors noted that the method might also be used to check for cheating or plagiarism. Although the system is fascinating and still relevant today, its security component remained an open question.

2.2 APCA Using Tools: The Second Generation of APCA

Developing the pioneer APCA systems required a great deal of expertise. The second generation of APCA, however, can be labelled tool-based: these systems were developed using pre-existing tool sets and utilities supplied by the operating system or programming environment. They also separated a user interface, typically a GUI serving the user, from an engine running in the background.

Kassandra was a system designed to ease the burden on teaching assistants of evaluating programming assessments, i.e., Fortran, Maple, Matlab, and Oberon assignments, at ETH Zurich (Matt, 1994). Its main purpose was to check the accuracy of programs produced by students.

The Kassandra system was based on the observation that a number of assignments could be assessed easily and fairly: accuracy was tested by comparing the results with those supplied by the instructors. The system also responded to errors committed by students. Kassandra additionally provided a feature for students to check their own scores, and all students could return to their assignments for correction or review. Security-wise, distinct programs were created for students and lecturers, and the generated results were stored in a log file accessible only to lecturers.

The ASSYST system developed by Jackson and Usher (1997b) introduced a scheme that analyses programming submissions against a number of criteria. ASSYST decided whether programming assignments were correct by comparing them against a set of predefined test data, by measuring efficiency in the use of CPU time, and by computing metric scores for complexity and style. Furthermore, the system provided mechanisms to manage submissions, generate reports, and assign weightings to particular aspects of the tests.

Ceilidh was developed by the Learning Technology Research (LTR) group at the University of Nottingham and was first used in 1988; it was mainly designed to evaluate C and C++ assignments (Foxley, Higgins, Tsintsifas, & Symeonidis, 2001). The Unix-based Ceilidh has three main components (Benford, Burke, Foxley, Gutteridge, & Zin, 2010). One of its advantages was that it could handle more than one submission from each student for each assignment, allowing students to improve their code in order to obtain higher scores. The functions available for course administration included a progress monitor. In addition, it was able to examine different types of assignments and present the results of its analysis in a way that was clear and interesting to students (Foxley, Higgins, Burke, Gibbon, & Zin, 1997).

PC^2, otherwise known as the Programming Contest Control system, was developed at California State University, Sacramento (Ashoo, Boudreau, & Lane, 2011). It is a popular system for hosting programming competitions around the world; both Internet-based and site-based competition modes are possible. PC^2 lets contestants submit programming source code to the judges via a network; the judges can then retrieve the source code, recompile it, and execute the program. PC^2 also offers automated marking based on the program's specified input and output.

These second-generation APCAs continued to evolve and develop. The PC^2 project was supplemented with a graphical user interface and became a starting point for interacting with more users over a network. Some of the systems described above evolved into the third type of APCA: web-based applications.

2.3 APCA in Web-Based Applications: The Third Generation

The third generation of APCA builds on developments in web technology and introduces more sophisticated testing approaches. Students submit their programming assignments through an e-learning website such as Moodle (MOODLE, 2013).

CourseMarker was a flexible, secure, and user-friendly system developed at the University of Nottingham as a successor to Ceilidh (Higgins, Hegazy, Symeonidis, & Tsintsifas, 2003). It had a number of additional capabilities, e.g., the assessment of diagram-based work. Despite a slightly dated interface, lecturers were given a wide variety of statistical data about the results (Rawles, Joy, & Evans, 2002).

In addition, CourseMarker included a plagiarism detector, which compares each submission it marks with those submitted by the other students (Higgins, Symeonidis, & Tsintsifas, 2002). Whereas the second-generation Ceilidh gave only a simple indication of the number of marks for a submission, CourseMarker provides the student with richer feedback, presenting a percentage mark (with an optional alphabetic scale) and allowing the student to interactively identify their weaknesses.

The BOSS Online Submission System was developed in the Computer Science Department at the University of Warwick (The University of Warwick, 2009). The system was designed to facilitate the delivery of online programming exercises while allowing each submission to be assessed immediately (Joy, Griffiths, & Boyatt, 2005). BOSS offered a user-friendly environment in which students could test their programming exercises; once satisfied, they could securely submit their assignments to a particular lecturer (Douce, Livingstone, & Orwell, 2005). BOSS performed a form of preliminary checking when assignments were submitted, and its assessment was also based on the comparison of textual output (Joy, Chan, & Luck, 2000). The BOSS system provided dedicated graphical user interfaces for students and lecturers, and its latest version included a web-based application, enabling lecturers to review submissions using a traditional web browser. Like CourseMarker, BOSS also had a plagiarism detector (Douce et al., 2005).

2.4 Secured APCA: The Fourth Generation

Most of the systems described above address main functionality requirements that are still needed today. Unfortunately, they do not provide adequate security. In particular, opportunities for students to exploit loopholes in these systems cause special concern, given that learning environments are generally intended to encourage and stimulate experimentation (Luck & Joy, 1999). Newer frameworks, such as that of Dawson-Howe (1996), handle the process of submission and testing by sending email messages containing programs, data, and results, which avoids some of the security loopholes. Dawson-Howe's work also goes further than others in that it includes simple database management facilities for maintaining submissions and grades and for generating simple reports.

One of the previously described systems that implemented security was BOSS2 (The University of Warwick, 2009). In a system where student work is stored in a central file system, security implementation is a must. In BOSS2, verification and feedback are handled by computing an authentication code for each submitted file using the Snefru algorithm (Merkle, 1990), a secure hash function that maps an input file to a fixed-length byte array.
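As a rough illustration of this style of submission authentication, the sketch below records and re-checks a digest for a submitted file. It is only a sketch: SHA-256 from Python's standard hashlib stands in for Snefru (which is not available in the standard library), and the file name and contents are hypothetical.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Map a submitted file to a fixed-length hex digest (SHA-256 here,
    standing in for Snefru, which is not in the standard library)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(path: Path, recorded: str) -> bool:
    """Check that the stored copy still matches the digest recorded at submission time."""
    return digest(path) == recorded

# Hypothetical usage: record a digest when the file is submitted,
# then re-check it before marking.
submission = Path("assignment1_student42.c")
submission.write_text('#include <stdio.h>\nint main(void) { return 0; }\n')
recorded = digest(submission)
assert verify(submission, recorded)
```

Any collision-resistant hash would serve the same purpose of detecting later modification of the stored submission.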

3.0 IDEAL FEATURES FOR FUTURE APCA

After reviewing the trends in APCA, we can conclude that its development has moved increasingly towards reliable and secure systems. In this paper we therefore propose ideal features for the future development of APCA from the perspectives of programming languages, architecture, and security.

3.1 Programming Language

The programming language is one of the most important factors to be considered in an APCA. Most automated assessment tools today either target only Java or at least support Java (Ihantola et al., 2010), which fits the trend of Java being used as an introductory programming language in many institutions. Other automated assessment tools support languages such as C, C++, Python, and Pascal. Future APCA should be language independent (Roberts & Verbyla, 2003). In particular, if the assessment is based on output comparison, any language that can be executed can be assessed automatically once the expected output scheme has been set up in a configuration file (a minimal sketch of this approach is given at the end of Section 3.2).

3.2 Architecture of APCA

Starting from stand-alone computer-based applications, a client-server architecture was later proposed so that global information within the system-wide environment would be easily accessible. However, this architecture has weaknesses (Kannadiga & Zulkernine, 2005). First, the addition of a new client causes an incremental load on the centralized server, raising a scalability issue. Second, communications with the centralized server can overload the network; clustering and round-robin server configurations could overcome this, but such solutions can be costly. Third, some clients contain platform-specific components. These problems have led many researchers (Balasubramaniyan, Garcia-Fernandez, Isacoff, Spafford, & Zamboni, 1998; Kannadiga & Zulkernine, 2005; Marhusin, Cornforth, & Larkin, 2008; Spafford & Zamboni, 2000; Sulaiman, 2010) to enhance software systems using a multi-agent approach. The features of a multi-agent system (MAS), being proactive, reactive, social, truthful, benevolent, adaptive, autonomous, and rational (Bellifemine, Caire, & Greenwood, 2007), are among the reasons for the adoption of this approach in software systems (Masrom, Rahman, Shafie, Baykara, & Mastorakis, 2009; Soh, Jiang, & Ansorge, 2004). A MAS is a multi-platform environment in which an agent can be added or removed with minimal impact on the system.

An agent-based automated programming code assessment tool was introduced by Masrom et al. (2009), who described the main components and functional features of the system. The main components include the student, the lecturer, and the assessment, while the functional features cover the roles played by the agent-based components, such as the tasks of the central agent, the student and lecturer agents, and the agent that performs the assessment.
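To make the ideas in Sections 3.1 and 3.2 concrete, the following is a minimal sketch of the core routine an assessment agent might run: it grades any executable submission by output comparison against test cases read from a configuration file. The JSON configuration format, file names, and command lines are illustrative assumptions, not taken from any of the systems reviewed above.

```python
import json
import subprocess
from pathlib import Path

def grade_submission(command: list[str], config_path: Path) -> float:
    r"""Run an executable submission against test cases listed in a JSON
    configuration file and return the fraction of tests whose standard
    output matches the expected output exactly.

    Assumed configuration format:
    {"tests": [{"stdin": "2 3\n", "expected_stdout": "5\n"}, ...]}
    """
    tests = json.loads(config_path.read_text())["tests"]
    passed = 0
    for test in tests:
        try:
            result = subprocess.run(
                command,                  # e.g. ["./a.out"] or ["python", "solution.py"]
                input=test["stdin"],
                capture_output=True,
                text=True,
                timeout=5,                # guard against non-terminating submissions
            )
        except subprocess.TimeoutExpired:
            continue                      # a timed-out run counts as a failed test
        if result.stdout == test["expected_stdout"]:
            passed += 1
    return passed / len(tests)

if __name__ == "__main__":
    # Hypothetical usage: grade a compiled C submission against assignment1.json.
    score = grade_submission(["./a.out"], Path("assignment1.json"))
    print(f"Score: {score:.0%}")
```

Because only standard output is compared, the same routine can grade C, Java, or Python submissions alike, which is the language independence argued for in Section 3.1; a central agent could dispatch such calls to assessment agents running on different hosts.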

3.3 Security

Because all of these systems are centralized on a server, security measures are a must. To strengthen the communication channel, multi-level security encryption can be implemented (Sulaiman, Sharma, Ma, & Tran, 2011). In theory, a stronger encryption algorithm and a longer key cost more in terms of speed and bandwidth. Since the number of students varies, it is usually desirable to apply the lightest encryption algorithm and key that are still strong enough to keep the submission of programming code secure. The same considerations apply to data storage. When data are stored on a single machine, using the strongest security mechanism is not a problem (Ferguson & Schneier, 2003). However, when data must be transferred from one agent to another on different hosts, a suitable encryption algorithm and key strength must be determined (Sulaiman, 2010; Sulaiman et al., 2011).
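As a rough sketch of this trade-off (and not of the multilayer scheme in Sulaiman et al., 2011), the code below encrypts a submission with AES-GCM from the third-party cryptography package before it is transferred between agents. The key length is the parameter traded against speed; the submission contents and key-distribution details are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_submission(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a submission for transfer between agents; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)                       # 96-bit nonce, as recommended for GCM
    return nonce, AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_submission(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Recover the submission on the receiving agent (the key is shared out of band)."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# The key length (128, 192, or 256 bits) is the strength/speed trade-off
# discussed above; 128 bits is usually the lightest option that is still strong.
key = AESGCM.generate_key(bit_length=128)

code = b"#include <stdio.h>\nint main(void) { return 0; }\n"   # hypothetical submission
nonce, ct = encrypt_submission(key, code)
assert decrypt_submission(key, nonce, ct) == code
```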

4.0 ISSUES AND CHALLENGES

The use of APCA tools in grading students' assignments poses important issues and challenges. One issue is that the increased complexity of automated assessment methods, together with the non-disclosure of technical details, leaves students confused about how the scoring works (Bennett, 2011). Individual feedback is central to guiding learning (Queensland, 2002), and every good assessment should provide high-quality feedback to ensure students' understanding; yet providing such feedback to hundreds or thousands of students simultaneously is a daunting prospect. Feedback response time is also very important: from the students' point of view, feedback received only at the end of the subject leaves no opportunity to apply the improved understanding.

5.0 CONCLUSION

Student assessment is carried out to gauge progress in the learning process. A ratio of one lecturer to a small number of students is considered a practical arrangement; however, student numbers are increasing, which can lead to a significant increase in the lecturer's workload. Teaching a programming course to a large number of students is challenging. There is a trade-off between giving students many assessments to reinforce their understanding of the subject matter and the extra work required of the lecturer to mark them. Since the assessment of programming code deals chiefly with programming logic, it is desirable to use an automated assessment tool to relieve this unnecessary load. Over the past few years, studies have offered solutions for the automated marking of programming code, each with its own characteristics and mechanisms.

In this paper, we have reviewed the trend of APCA from 1960, starting with (Hollingsworth, 1960), until 2005 with BOSS2, and have described it in detail across four generations. The widespread use of the Internet has influenced APCA to evolve from host-based systems to online submission systems. Like other transformations, this one also brings many issues and challenges, most of them concerning security. In the fourth generation of APCA, the new approach introduced by Dawson-Howe (1996) overcame some of the security loopholes. Based on this review, we have described the ideal characteristics of APCA for future development, especially from the perspectives of programming languages, architecture, and security. APCA should continue as an online submission system and adopt an agent-based design, which will improve information integrity and ease maintenance of the tool. From the security perspective, multi-level security encryption should be implemented to protect data confidentiality.

REFERENCES

Ashoo, Samir E., Boudreau, Troy, & Lane, Douglas A. (2011). Welcome to the PC2 Home Page! Version 9.2.3. Retrieved May 12, 2013, from http://www.ecs.csus.edu/pc2.

Balasubramaniyan, J. S., Garcia-Fernandez, J. O., Isacoff, D., Spafford, E., & Zamboni, D. (1998). An Architecture for Intrusion Detection Using Autonomous Agents. Paper presented at the Proceedings of the 14th Annual Computer Security Applications Conference.

Bellifemine, Fabio Luigi, Caire, Giovanni, & Greenwood, Dominic. (2007). Developing Multi-Agent Systems with JADE (Wiley Series in Agent Technology). John Wiley & Sons.

Benford, Steve, Burke, Edmund, Foxley, Eric, Gutteridge, Neil, & Zin, Abdullah Mohd. (2010). The Ceilidh Courseware System. Retrieved 22 May 2013, from http://www.cs.nott.ac.uk/~ceilidh/papers/cal.cat.

Bennett, Randy Elliot. (2011). Automated Scoring of Constructed-Response Literacy and Mathematics Items. Paper presented at the Advancing Consortium Assessment Reform (ACAR), Washington, D.C.

Dawson-Howe, Kenneth. (1996). Automatic Submission and Administration of Programming Assignments. SIGCSE Bulletin, 27, 51-53.

Douce, Christopher, Livingstone, David, & Orwell, James. (2005). Automatic Test-Based Assessment of Programming: A Review. ACM Journal of Educational Resources in Computing, 5(2).

Downes, Stephen. (2005). E-learning 2.0. eLearn, 2005(10), 1. doi:10.1145/1104966.1104968

Ferguson, Niels, & Schneier, Bruce. (2003). Practical Cryptography. Wiley.

Forsythe, George E., & Wirth, Niklaus. (1965). Automatic grading programs. Commun. ACM, 8(5), 275-278.

Foxley, Eric, Higgins, Colin, Burke, Edmund, Gibbon, Cleveland, & Zin, Abdullah Mohd. (1997). The Ceilidh System: An Overview and Some Experiences of Use. Paper presented at the Asian Technology Conference in Mathematics.

Foxley, Eric, Higgins, Colin, Tsintsifas, Athanasios, & Symeonidis, Pavlos. (2001). The CourseMaster Automated Assessment System: A Next Generation Ceilidh. Paper presented at the Conference on Computer Assisted Assessment to Support the ICS Disciplines, University of Warwick.

Hext, J. B., & Winings, J. W. (1969). An automatic grading scheme for simple programming exercises. Commun. ACM, 12(5), 272-275.

Higgins, Colin, Hegazy, Tarek, Symeonidis, Pavlos, & Tsintsifas, Athanasios. (2003). The CourseMarker CBA System: Improvements over Ceilidh. Education and Information Technologies, 8(3), 304.

Higgins, Colin, Symeonidis, Pavlos, & Tsintsifas, Athanasios. (2002). The Marking System for CourseMaster. Paper presented at the Proceedings of the Seventh Annual Conference on Integrating Technology into Computer Science.

Hollingsworth, Jack. (1960). Automatic graders for programming classes. Commun. ACM, 3(10), 528-529.

Ihantola, Petri, Ahoniemi, Tuukka, Karavirta, Ville, & Seppälä, Otto. (2010). Review of recent systems for automatic assessment of programming assignments. Paper presented at the Proceedings of the 10th Koli Calling International Conference on Computing Education Research, Koli, Finland.

Jackson, David, & Usher, Michelle. (1997a). Grading student programs using ASSYST. SIGCSE Bulletin, 29(1), 335-339. ACM Press.

Jackson, David, & Usher, Michelle. (1997b). Grading student programs using ASSYST. Paper presented at the Proceedings of the Twenty-Eighth SIGCSE Technical Symposium on Computer Science Education, San Jose, California, USA.

Joy, Mike, Chan, Pui-Shan, & Luck, Michael. (2000). Networked Submission and Assessment. Paper presented at the 1st Annual Conference of the LTSN Centre for Information and Computer Science, Newtownabbey.

Joy, Mike, Griffiths, Nathan, & Boyatt, Russell. (2005). The BOSS Online Submission and Assessment System. Journal on Educational Resources in Computing (JERIC), 5(2).

Kannadiga, Pradeep, & Zulkernine, Mohammad. (2005). DIDMA: A Distributed Intrusion Detection System Using Mobile Agents. Paper presented at the Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks.

Luck, Michael, & Joy, Mike. (1999). A Secure On-line Submission System. Software: Practice and Experience, 29(8), 721-740.

Marhusin, Mohd Fadzli, Cornforth, David, & Larkin, Henry. (2008). An overview of recent advances in intrusion detection. Paper presented at the 8th IEEE International Conference on Computer and Information Technology (CIT'08), University of Technology, Sydney, Australia.

Masrom, S., Rahman, A. S. A., Shafie, A. S., Baykara, N. A., & Mastorakis, N. E. (2009). Computer assisted assessment for computer programming course with agent based architecture. Paper presented at the WSEAS International Conference Proceedings, Mathematics and Computers in Science and Engineering.

Matt, Urs von. (1994). Kassandra: The Automatic Grading System. SIGCUE Outlook, 22, 26-40.

Merkle, Ralph C. (1990). A fast software one-way hash function. Journal of Cryptology, 3(1), 43-58.

MOODLE. (2013). Moodle.org: open-source community-based tools for learning. Retrieved 10 May 2013, from https://moodle.org/

Naur, Peter. (1964). Automatic Grading of Students' ALGOL Programming. BIT Numerical Mathematics, 4(3), 177-188.

Queensland, University of. (2002). Assessing large classes. Retrieved 22 September 2013, from http://www.cshe.unimelb.edu.au/assessinglearning/03/large.html

Rawles, Simon, Joy, Mike, & Evans, Michael. (2002). Computer-Assisted Assessment in Computer Science: Issues and Software. Research Report RR-387, Department of Computer Science, University of Warwick.

Roberts, Graham H. B., & Verbyla, Janet L. M. (2003). An online programming assessment tool. Paper presented at the Proceedings of the Fifth Australasian Conference on Computing Education, Volume 20, Adelaide, Australia.

Soh, Leen-Kiat, Jiang, Hong, & Ansorge, Charles. (2004). Agent-based cooperative learning: a proof-of-concept experiment. Paper presented at the Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (SIGCSE'04), Norfolk, Virginia, USA.

Spafford, Eugene H., & Zamboni, Diego. (2000). Intrusion detection using autonomous agents. Computer Networks, 34(4), 547-570. doi:10.1016/s1389-1286(00)00136-5

Sulaiman, Rossilawati. (2010). MAgSeM: A Multi-agent Security Framework for Secure Cyber Services (PhD thesis). University of Canberra, Australia.

Sulaiman, Rossilawati, Sharma, Dharmendra, Ma, Wanli, & Tran, Dat. (2011). A new security model using multilayer approach for E-health services. Journal of Computer Science, 7(11), 1691-1703. doi:10.3844/jcssp.2011.1691.1703

The University of Warwick. (2009). BOSS Online Submission System. Retrieved April 5, 2013, from http://www.dcs.warwick.ac.uk/boss/about.php.