A student diagnosing and evaluation system for laboratory-based academic exercises

Maria Samarakou, Emmanouil Fylladitakis and Pantelis Prentakis
Technological Educational Institute (T.E.I.) of Athens, Greece
marsam@teiath.gr, et.e.f.gr@gmail.com, prentakis@gmail.com

Andreas Papadakis
School of Pedagogical & Technological Education (ASPETE), Greece
andreas.papadakis@gmail.com

John Gelegenis
Technological Educational Institute (T.E.I.) of Athens, Greece
jgelegenis@teiath.gr

Grammatiki Tsaganou
University of Athens, Greece
gram@di.uoa.gr

Nikolaos Tselikas
University of Peloponnese (UoP), Greece
ntsel@uop.gr

Abstract: Monitoring and evaluating a group of students during computer-based laboratory exercises is a challenging task, especially when the evaluation takes place in real-time classroom conditions. A diagnosis usually requires taking into account both the comprehension of theoretical principles and the student's competence with the scientific tools. In this paper, an artificial intelligence educational system using fuzzy logic is presented, capable of diagnosing students, providing support and evaluating them based not only on the end result but on their performance across the entire exercise. A preliminary build of the system described in this study has been used to monitor, diagnose, assist and evaluate students receiving training on the fuzzy logic toolbox of the MathWorks MatLab software suite.

Introduction

Monitoring and evaluating a group of students during theoretical education and/or practical experiments is a challenging task, especially when it takes place in real classroom conditions. In the context of assisting or evaluating the performance of students in modules (and especially laboratory ones), it is usually necessary to diagnose both the student's capability of using the laboratory tool and his/her sound comprehension of the theoretical principles (which he/she is expected to apply using the lab tool).
Experiments are quite important for technological education. In several modules the students are expected to apply their theoretical knowledge and perform computer-based exercises (design, simulation) using software tools. Such exercises are usually one-size-fits-all, with rather limited possibilities of adaptation and customization. Furthermore, the evaluation of the performance is solely based on the final result; the process followed by each student, as well as the potential weak points, remains opaque to the tutor. A question that often arises is whether the evaluation should be based only on the final result or take into account the intermediate steps that have been followed. Additional characteristics, such as the total time needed to solve the problem, the number of commands executed and the route the student has followed, are usually
ignored or, in the best case, only qualitatively considered. However, e-learning environments and technology allow the diagnosis and evaluation to take place based on a multitude of parameters rather than just the end results (McConnell, 1999). This paper describes an automated intelligent monitoring, diagnostic, assistance and assessment system, imperceptible to the students, which uses artificial intelligence and, specifically, fuzzy logic in order to diagnose potential traits and weaknesses of the student and provide personalized support, taking into account the entire problem-solving process rather than just the end result. As the proposed system is capable of diagnosing the student through every step of the educational process, the recorded data and results can also be used to derive a personalized student profile, leading to a more accurate and objective evaluation of a student's capabilities (Bai & Chen, 2008; Cheng, 1998; Stathacopoulou, Grigoriadou, Samarakou, & Mitropoulos, 2007).

System description

The proposed system consists of four main subsystems, briefly listed below:
- Monitoring subsystem, which monitors and records the actions of a student during the exercise. Sophisticated logging functionality is a necessity.
- Diagnosing subsystem, which determines and evaluates the initial knowledge of the student.
- Modeling subsystem, which creates the student model based on individual knowledge profiles.
- Evaluation subsystem, which evaluates a student's performance based not only on the final result but on several other factors.

The base architecture of the proposed system can be seen in fig. 1.

Fig. 1 Base architecture of the proposed system

2.1 Monitoring subsystem

The monitoring and logging subsystem follows and logs every action a student takes during the laboratory exercise.
All recordings are performed with the student's knowledge and consent, yet at a level imperceptible to the user, without any intervention in the educational process. Although this subsystem has been primarily designed with laboratory/experimental exercises in mind, it may also be used as a support module for diagnosing subsystems used purely for theoretical education. The information to be recorded may include, but is not limited to:
- Which examination questions and problems the student addresses, and in what order.
- The time intervals corresponding to each problem addressed, as well as those between problems and questions.
- The number of times the student seeks advice in the software's documentation files and/or online.
- The errors the student commits, especially the number of similar or identical errors.
- The number and list of software commands executed.

2.2 Diagnosing subsystem

The base objective of the diagnostic module is to output the cognitive profile of a student, representing his/her prior knowledge relative to the educational goals set by the exercise.
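To make the cognitive profile concrete, the following is a minimal sketch of the kind of record the diagnosing subsystem could output, written in Python for brevity (the exercises themselves use MatLab). All field names and value ranges are illustrative assumptions, not the actual format used by the system.

```python
from dataclasses import dataclass, field

# Minimal sketch of a cognitive profile record. All field names and
# value ranges are hypothetical; the system's actual format differs.
@dataclass
class CognitiveProfile:
    student_id: str
    prior_knowledge: float = 0.0      # 0-100: knowledge of the exercise subject
    tool_skill: float = 0.0           # 0-100: familiarity with the lab software
    knowledge_gaps: list = field(default_factory=list)  # topics answered incorrectly
    contradictions: int = 0           # contradictory answers/actions observed

profile = CognitiveProfile("s001", prior_knowledge=65.0, tool_skill=40.0,
                           knowledge_gaps=["defuzzification"])
```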
Generally, any other information which may be relevant to the nature of a specific exercise can be logged and then taken advantage of by the rest of the subsystems. The process of building the cognitive profile of a student is based on several characteristics, such as the student's prior knowledge of the particular subject, knowledge gaps, contradictory answers and actions, and even the student's attitude during the exercise and his/her willingness to participate. As such, it is a base requirement of this subsystem to investigate possible ways to motivate the students into engaging in the diagnostic process, which will extract a model of the current educational status of the student (Self, 1993). The diagnosis of a student during a laboratory exercise has more to do with the skill level in using the software tool(s) required for the completion of the exercise. Initially, the implementation of some simple diagnostic exercises is enough to draw conclusions regarding the degree of a student's familiarity with the tool and/or specific types of exercises, mainly from the number of tries and incorrect answers. Therefore, the system proposed in this study requires the specification of specific standards for the assembly of a student's cognitive profile. The minimum required standards which need to be set are the set of rules which leads to the initial diagnosis; the artificial intelligence technique(s) used to derive an accurate diagnosis (such as case-based reasoning, fuzzy logic, neural networks, etc.); and, finally, the format and structure required for the proper generation of a usable cognitive profile.

2.3 Modeling subsystem

The modeling subsystem is based on the cognitive profile assembled by the diagnostic subsystem in order to design a specific student model.
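In its simplest form, the rule set leading to the initial diagnosis (subsection 2.2) could be sketched as below; the thresholds on tries and incorrect answers are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of simple diagnostic rules mapping the number of tries and
# incorrect answers on a warm-up exercise to a familiarity level.
# The thresholds are hypothetical assumptions.
def diagnose_familiarity(tries: int, errors: int) -> str:
    if errors == 0 and tries <= 2:
        return "high"
    if errors <= 2 and tries <= 5:
        return "medium"
    return "low"
```

Under these rules, a student who solves the warm-up exercise in one try with no errors would be diagnosed as highly familiar with the tool, while repeated incorrect answers would lower the diagnosed familiarity.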
The model includes, besides the initial cognitive profile, the necessary parameters for the design of the feedback which will be presented to the student, allowing for the engagement of the student in the process of diagnosis. The feedback is given to the student after the initial diagnosis as an individual activity, either during the exercise or after its completion, and is adjusted according to the specific educational characteristics of the student (Dimitrova, 2003; Susan, 1997). Feedback may be given in the form of assistance, in the form of suggestions or tips during or before an exercise, in the form of didactic instruction, or even as simple examples, in order to achieve the best possible learning and diagnostic result. For the means of this study, a simplified modeling subsystem has been designed which follows a simple set of rules to rank students into one of several gradations of skill (beginner, standard, adept, expert) in order to be able to give feedback and customized assistance.

2.4 Evaluation subsystem

The participation of students throughout the entire process depends on their individual decisions, responses and actions, their willingness to participate, their compliance with instructions, and the encouragement offered by the system in various phases (Bloom, Hastings, & Madaus, 1971). The very core of the evaluation module is the student model. Using artificial intelligence techniques, details of the initial and final cognitive profiles can be reviewed. The assessment of students during an exercise takes into account all the actions a student performs, logging every action and command. Aside from the correctness of the executed commands, the appropriateness of the student's choices and behavior is also used in correlation with the cognitive profile of the student (on the subject of the exercise) (Samarakou, Papadakis, Prentakis, Karolidis, & Athineos, 2009). The standards required for the evaluation of the students can be separated into two distinct categories.
The first category comprises the recordings relative to the motivation of the student and his/her involvement in the modeling process. This stage requires:
- Recording details of student involvement in the process of diagnosis and possibly updates to the original student cognitive profile.
- Recording details of student involvement in the construction process of the model.
- Recording details of student engagement in the review of his/her model; steps leading to changes in thinking and, therefore, changes in the model.

The second category requires raw navigation and interaction data, such as:
- Recording information on getting help (frequency, type, etc.).
- Recording the user's tendency to move between earlier and later stages of the exercise.
- Logging the time intervals corresponding to any given activity.
- Any other supplementary information which may be associated with the activity.

Mathworks MatLab experimental exercise

The purpose of this exercise is to familiarize the students with the potential use of fuzzy logic in conventional systems (mechanical, electrical, etc.) by using the Fuzzy Logic toolbox of MatLab. The objectives of this scenario are for the students to be capable of:
- Recognizing and describing the modules that can be used to build a fuzzy logic system, as well as their types and characteristics.
- Understanding the range of possibilities and advantages offered by fuzzy logic systems software.
- Being familiar with the use of the Fuzzy Logic toolbox of MatLab.

The preliminary test of the system combined both theory and practical exercises, designed so as to clearly identify the questions and answers aimed at specific parts of the educational module. The contribution of the theoretical and practical parts of the educational module to the evaluation of the student can be chosen specifically for each individual module or class, or even per student, based partially or solely on the judgment of the educator.

3.1 Data collection and diagnosis

It is important that data collection occurs continuously during the time frame of the exercise. Specifically, the system logs every important action the user takes, from entering commands to minimizing the application window.
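This continuous logging could be sketched as a simple append-only event log, shown here in Python; the event names and record fields are illustrative assumptions, not the system's actual schema.

```python
import time

# Sketch of an append-only action log: every important user action is
# recorded with a timestamp so other subsystems can process it later.
# Event names and record fields are hypothetical.
class ActionLog:
    def __init__(self):
        self.events = []

    def record(self, action: str, detail: str = "") -> None:
        self.events.append({"t": time.time(), "action": action, "detail": detail})

    def count(self, action: str) -> int:
        return sum(1 for e in self.events if e["action"] == action)

log = ActionLog()
log.record("command", "fuzzy")                     # e.g. opening the FIS editor
log.record("help_lookup", "membership functions")  # student consults documentation
log.record("command", "ruleview")
```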
The collected data is stored in an editable format, which may be read and/or processed later by other subsystems or different tools. The continuous data collection and logging capabilities enable dynamic feedback from the system in response to a number of actions performed by the students. For example, feedback can be used to provide supplementary material to the student or to inform the student regarding time-related issues. It can also be used for many other types of feedback, even for adjusting the difficulty of the entire examination, which however is out of the scope of this experimental setup. The data observed during the experiment is extensive. Even in this preliminary experiment, it includes:
- The overall duration of the test.
- The time corresponding to each part of the test.
- The number of times the user sought assistance via the software's library or online.
- The total number of errors and their classification into categories.
- The nature and number of recurring errors, if any.
- The number of commands used and their sequence.

At the end of each test, the monitoring system gives an estimated correctness factor by simply comparing the results to those obtained when the exercise is performed by an expert.
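The comparison against an expert's reference run might be computed along the following lines, yielding a correctness mark together with the time and command-count deviations from the expert; the exact metrics are illustrative assumptions, as the system's internal formulas are not detailed here.

```python
# Sketch of deriving evaluation inputs from a student run and an expert
# reference run: an automatic correctness mark, the relative time
# deviation, and the command-count deviation. The metrics are
# hypothetical simplifications.
def evaluation_inputs(student: dict, expert: dict):
    correct = sum(1 for s, e in zip(student["results"], expert["results"]) if s == e)
    mark = 100.0 * correct / len(expert["results"])          # percent of matching results
    time_dev = (student["time"] - expert["time"]) / expert["time"]  # relative extra time
    cmd_dev = student["commands"] - expert["commands"]       # extra commands used
    return mark, time_dev, cmd_dev

student = {"results": [1, 0, 1, 1], "time": 50.0, "commands": 14}
expert = {"results": [1, 1, 1, 1], "time": 40.0, "commands": 10}
mark, time_dev, cmd_dev = evaluation_inputs(student, expert)
# mark = 75.0, time_dev = 0.25, cmd_dev = 4
```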
3.2 Fuzzy logic evaluation

Three basic components form the fuzzy logic evaluation subsystem, performing the fuzzification, the inference (using a set of rules) and the defuzzification (fig. 2).

Fig. 2 Fuzzy logic based evaluation system

The fuzzification process takes into account three types of data, namely the automatic test mark obtained by comparing the student's results to those achieved by an expert, the deviation of each student's time from the time required by the expert and, finally, the deviation between the number of commands entered by the student and the minimum required to successfully complete the exercise. During the inference process, which is based on sets of rules created by experts, the system interprets the results derived from the fuzzification process by using a specific set of rules for the interpretation of each set of data (Sevarac, Devedzic, & Jovanovic, 2012). After the defuzzification process, the system creates a cognitive model of each student with two separate aspects: the theoretical knowledge of the student and the expertise of the student with the Fuzzy Logic toolbox of MatLab. Each of the two aspects is expressed in the range [0, 100], which can be divided into any number of linguistic values. For the means of this experiment, the results of both aspects were divided into 5 equally sized areas (0 to 100 in steps of 20), which are then translated into linguistic values:
- 0-20: awful
- 20-40: bad
- 40-60: acceptable
- 60-80: competent
- 80-100: expert

Conclusions

The testing and evaluation of the proposed system was performed on a few select cases at first and then in classes of 12 students, with the evaluation results of the system being compared to those of an attending supervisor. The results were very encouraging, with the fuzzy logic system working as intended.
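The fuzzification-inference-defuzzification pipeline of subsection 3.2, together with the mapping of a 0-100 score to the five linguistic values, can be sketched compactly in Python. The triangular membership functions and the single averaging rule below are illustrative assumptions, far simpler than the experts' actual rule base.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b (hypothetical shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def linguistic(score: float) -> str:
    """Map a 0-100 score to the five linguistic values used in the paper."""
    labels = ["awful", "bad", "acceptable", "competent", "expert"]
    return labels[min(int(score // 20), 4)]

def evaluate(mark: float, time_dev: float, cmd_dev: float) -> float:
    # Fuzzification: degree to which each input is "good" (hypothetical ranges).
    good_mark = tri(mark, 50, 100, 150)        # a high mark is good
    good_time = tri(time_dev, -1.0, 0.0, 1.0)  # time close to the expert's is good
    good_cmds = tri(cmd_dev, -10, 0, 10)       # few extra commands is good
    # Inference: a single illustrative rule averaging the three degrees.
    activation = (good_mark + good_time + good_cmds) / 3
    # Defuzzification: scale the activation back to a crisp 0-100 score.
    return 100 * activation

score = evaluate(mark=75, time_dev=0.25, cmd_dev=4)
print(linguistic(score))  # prints: competent
```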
The student profiles and marking were accurate, with very slight deviations from those of the supervisor when the supervisor was fully focused on a single student. When a single professor had to supervise an entire 12-student class, deviations became apparent; however, after checking the log files, one could see that the artificial intelligence software was taking into account points that the professor had missed. Issues did arise; for example, when a student performed an exercise just as accurately as, but faster than, the expert his results were being compared to, the evaluation was incorrect. Clearly, there will be many similar problems as the system is developed and especially as the inputs and the inference rules increase in both number and complexity. However, these problems can be gradually ironed out via careful rule composition and troubleshooting/trial runs.

Acknowledgement

This research has been co-funded by the European Union (European Social Fund) and Greek national resources under the framework of the Archimedes III: Funding of Research Groups in TEI of Athens project of the Education & Lifelong Learning Operational Programme.
References

Bai, S.-M., & Chen, S.-M. (2008). Evaluating students' learning achievement using fuzzy membership functions and fuzzy rules. Expert Systems with Applications, 34(1), 399-410. doi: 10.1016/j.eswa.2006.09.010

Bloom, B. S., Hastings, J. T., & Madaus, G. F. (1971). Handbook on formative and summative evaluation of student learning. McGraw-Hill.

Cheng, C. H., & Yang, K. L. (1998). Using fuzzy sets in education grading system. Journal of Chinese Fuzzy Systems Association, 4(2), 81-89.

Dimitrova, V. (2003). STyLE-OLM: Interactive Open Learner Modelling. PhD thesis, University of Leeds, UK.

McConnell, D. (1999). Examining a collaborative assessment process in networked lifelong learning. Journal of Computer Assisted Learning, 15(3), 232-243. doi: 10.1046/j.1365-2729.1999.153097.x

Samarakou, M., Papadakis, A., Prentakis, P., Karolidis, D., & Athineos, S. (2009). A Fuzzy Model for Enhanced Student Evaluation. The International Journal of Learning, 16(10), 103-118.

Self, J. (1993). Model-based cognitive diagnosis. User Modeling and User-Adapted Interaction, 3(1), 89-106. doi: 10.1007/bf01099426

Sevarac, Z., Devedzic, V., & Jovanovic, J. (2012). Adaptive neuro-fuzzy pedagogical recommender. Expert Systems with Applications, 39(10), 9797-9806. doi: 10.1016/j.eswa.2012.02.174

Stathacopoulou, R., Grigoriadou, M., Samarakou, M., & Mitropoulos, D. (2007). Monitoring students' actions and using teachers' expertise in implementing and evaluating the neural network-based fuzzy diagnostic model. Expert Systems with Applications, 32(4), 955-975. doi: 10.1016/j.eswa.2006.02.023

Susan, B. (1997). See Yourself Write: A Simple Student Model to Make Students Think. Springer.