Multimedia Intelligent Tutoring System For Context-Free Grammar


Rhodora L. Reyes
Software Technology Department, College of Computer Studies, De La Salle University, Professional Schools Inc.
ccsrlr@ccs.dlsu.edu.ph

Carlo Galvey, Ma. Christine Gocolay, Eden Ordona, Conrado Ruiz, Jr.
Software Technology Department, College of Computer Studies, De La Salle University, Professional Schools Inc.

ABSTRACT
This paper presents a multimedia intelligent tutoring system that teaches context-free grammar. The tutor model of this ITS is composed of a set of teaching strategies and an algorithm that determines which teaching action to deploy given the goals of the system and the current state of the student model. The student model uses the Constraint-Based Modeling (CBM) approach in diagnosing the learner. CBM reduces the complexity of student modeling by focusing only on the differences between the student's solution and the ideal solution, so the analysis is reduced to pattern matching. The underlying assumption is that no correct solution of a problem traverses a problem state that violates the fundamental ideas or concepts of the domain. The system also includes features for simulating the created context-free grammar to aid in teaching.

Keywords
Intelligent Tutoring Systems, Constraint-Based Modeling.

1. INTRODUCTION
The demand for high-quality education at low cost intensifies as computers simultaneously become cheaper, more powerful, and more user-friendly; interest in computer-based instructional environments therefore grows [10]. An important consideration in developing such environments is how individualized instruction can be provided to learners. An Intelligent Tutoring System is a computer-based tutor that provides individualized instruction through diagnosis, adaptive instruction, and remediation of its individual learners [7].
The Context-free Grammar Multimedia Intelligent Tutoring System (CFG-MINTS) [5] focuses its instruction on context-free grammars. Its primary function is to introduce and familiarize students with context-free grammars through instruction and remediation. It evaluates the learning process of the student to effectively diagnose and correct his errors and misconceptions. CFG-MINTS also has an external mechanism, the MINTS Authoring Tool (MINTSAT), used to create and update the curriculum database containing the lessons, exercises, definitions, descriptions, examples, and explanations used by CFG-MINTS. The rest of this paper gives an overview of CFG-MINTS, followed by a detailed discussion of its main modules, including MINTSAT. The results of testing the system with learners are then discussed, and finally the conclusion and recommendations are presented.

2. CFG-MINTS: AN OVERVIEW
CFG-MINTS is made up of the STUDENT MODEL, the INTERFACE MODEL, and the TUTOR MODEL. The TUTOR MODEL is composed mainly of the instructional planner and the curriculum database. The STUDENT MODEL, on the other hand, calls the Parsing Module, Simulation Model, and Evaluation Module (see Figure 1). Based on its knowledge of the student, the TUTOR MODEL decides what kind of instructional intervention should be taken. This enables the creation of an instructional plan for the particular student by consulting the knowledge domain. The process includes the selection of an appropriate objective, the strategies to be used to attain that objective, and the specification of the plans to support the strategy in meeting the objective. In deciding the next action, the available materials are also taken into consideration by checking the contents of the curriculum database [9]. After this, a specific action such as introducing a new concept, presenting an exercise, or reviewing a lesson is carried out.
Any instructional tool selected is presented through the INTERFACE MODEL. Multimedia is incorporated to improve instruction. If an exercise is to be given to the student, the INTERFACE MODEL communicates with the STUDENT MODEL. Based on the present student action, the history of student actions, and the rules for accessing the system's model state, the STUDENT MODEL attempts to ascertain the knowledge of the student. It determines what the student knows and does not know about the context-free grammar lesson, and it checks whether what he knows about context-free grammars is correct. The result of the STUDENT MODEL is a knowledge-state representation that the TUTOR MODEL uses to set its next action as an instructor. The STUDENT MODEL component's tasks are to syntactically analyze and simulate the student's solution.

Figure 1. Functional View of CFG-MINTS. (The figure shows the TUTOR MODEL — an Instructional Planner with Supervisory, Pedagogic, Correction, Thematic, and Teaching knowledge sources, plus the Curriculum and Student databases — connected through the Interface to the STUDENT MODEL, which contains the Parsing Module, Simulation Model, and Constraint Manager/Evaluation.)

3. CFG-MINTS COMPONENTS
The primary components of CFG-MINTS are the student model and the tutor model. This section discusses the details of these components.

3.1 CFG-MINTS Tutor Model
The tutor model is made up of an INSTRUCTIONAL PLANNER, a STUDENT DATABASE, and a CURRICULUM DATABASE [9], as shown in Figure 1. The INSTRUCTIONAL PLANNER is the core component of the TUTOR MODEL, while the STUDENT and CURRICULUM DATABASES are supporting components used by the planner in teaching the students. The student database contains personal information about the student, including his performance and the lessons he has taken, while the curriculum database contains all the presentation materials, including lessons, exercises, and explanations. This section focuses on the instructional planner.

3.1.1 The Instructional Planner
The INSTRUCTIONAL PLANNER is the main component of the TUTOR MODEL. It is made up of different knowledge sources, based on the design presented in [12]. The design divides the component into five levels of abstraction: the PEDAGOGIC, THEMATIC, TEACHING, SUPERVISORY, and CORRECTION KNOWLEDGE SOURCES.

The PEDAGOGIC KNOWLEDGE SOURCE checks the gap between the last and current session, the difficulty of the concept, and the learning level achieved under the concept. The type and performance of the student are also considered in its set of conditions. The default action of the pedagogic rules is to set the strategy for the next segment of a session. These strategies include the presentation of the system, a general review, a brief review of the concepts taken last session, a recollection of the last concept studied, the resumption of a concept previously started, the introduction of a new concept, an exercise, and the termination of the session. These strategies are treated as objectives by the TEACHING KNOWLEDGE SOURCE, which provides the plans to achieve them. The TEACHING KNOWLEDGE SOURCE is primarily responsible for carrying out the presentation of an object.

Whenever the objective requires a new concept to be taught, a call to the THEMATIC KNOWLEDGE SOURCE is made. It searches through the concepts and prioritizes them as follows:
1. the concepts that have reached the acceptance learning level during the session;
2. the most recently interrupted concepts;
3. the acceptance concepts after the current concept;
4. the interrupted concepts that have the most post-requisites not at the integration level;
5. the acceptance concepts prior to the current concept; and
6. the concepts that have not reached the integration level.
Compared to a simple syllabus-based method, this prioritization treats the topics as interrelated (rather than individual) objects that may affect each other's significance at a given point in a teaching session.

The SUPERVISORY KNOWLEDGE SOURCE focuses on detecting and resolving conflicts between the objectives of the tutor and those of the student. Every time an interaction from the student occurs, a supervisor goal is created. If there are no conflicts, the tutor's planned activity goes on. Otherwise, conflict-resolution plans decide whether to make a local change and notify the TEACHING KNOWLEDGE SOURCE, or to notify the PEDAGOGIC KNOWLEDGE SOURCE that a change in strategy is needed.
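The six-way prioritization above amounts to classifying each concept into the first matching category. A minimal sketch of that idea follows; it is not the authors' code, and every field name is hypothetical:

```python
# Illustrative sketch of the THEMATIC KNOWLEDGE SOURCE's priority order.
# Each concept falls into the first category it satisfies; a lower class
# number means higher teaching priority. Field names are hypothetical.

def thematic_rank(concept):
    if concept.get("reached_acceptance_this_session"):
        return 1
    if concept.get("recently_interrupted"):
        return 2
    if concept.get("acceptance_after_current"):
        return 3
    if concept.get("interrupted_with_most_postrequisites"):
        return 4
    if concept.get("acceptance_before_current"):
        return 5
    return 6  # concepts that have not reached the integration level

def next_concept(concepts):
    """Pick the highest-priority concept; ties keep syllabus order."""
    return min(concepts, key=thematic_rank)
```

Because `min` keeps the first of equally ranked items, syllabus order acts as the tie-breaker, matching the idea that topics remain interrelated rather than individually scheduled.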
During a teaching session, the TUTOR MODEL puts forward exercises to check the student's comprehension of the concepts. It then expects to receive from the STUDENT MODEL a token representing the error the student committed. This activates the CORRECTION KNOWLEDGE SOURCE, whose goal is to remediate the student's misconception.

3.2 CFG-MINTS Student Model
Student modeling aims to analyze student solutions to a context-free grammar problem and to give an evaluation by pointing out errors and the possible misconceptions behind them. Several processes are required to arrive at this result. The task of building a student model is extremely difficult and laborious, due to the huge search spaces involved and the small amount of information to start from. Several researchers have

pointed to the inherent intractability of the task [4], [6]. If the goal is to model the student's knowledge completely and precisely, student modeling is bound to be intractable. However, a student model can be useful even when it is not complete and accurate [6]. Even simple and constrained modeling is sufficient for instructional purposes; this claim is supported by findings that human teachers also use very loose models of their learners, yet are highly effective in what they do [4].

CFG-MINTS uses Constraint-Based Modeling (CBM) [6] to form models of its students. CBM reduces the complexity of student modeling by focusing on faults only. Domain knowledge is represented in the form of state constraints, where a constraint defines a set of equivalent problem states. An equivalence class triggers the same instructional action; hence the states in an equivalence class are pedagogically equivalent. The assumption is that no correct solution of a problem traverses a problem state that violates the fundamental ideas or concepts of the domain. A violated constraint signals an error, which comes from incomplete or incorrect knowledge. CFG-MINTS models students by comparing the student's solution to the ideal one. Each constraint has a unique number and contains the relevance and satisfaction patterns. The Student Model starts its analysis by loading the database corresponding to the category of the exercise currently given to the student. The exercise can be of true-or-false, direct-answer, matching, or context-free-grammar-creation type. The production rules given by the student as his solution are converted into a list for manipulation.
Pattern matching is done between the student's solution and the ideal solutions found in the database per production rule (see Figure 2): each student variable is bound to a database variable, and if their patterns correspond, the rules match. If no ideal solution in the database matches the student's solution wholly, the ideal solution that matches it most closely is fetched from the database. The Student Model then applies perturbations to the student's solution with the goal of successfully pattern-matching it against that ideal solution. There are three perturbations (see Figure 3): modify a variable/terminal, delete a variable/terminal, and insert a variable/terminal.

Figure 3. Three types of perturbation: modify, delete, and insert a variable/terminal.

If pattern matching succeeds, the Student Model determines the student's error based on his solution and what the database was expecting. If deductive inferencing fails, the system assumes the student's solution is incorrect and still tries to determine the error from the solution and what the database was expecting. In determining the error, the system searches for the error associated with the ideal solution currently being pattern-matched. On this basis, the system looks for the specific production rule in the student's solution that did not match the expected production. For example, given

Production rule in the student's solution: act -> act + bat - cat
Production rule in the ideal solution: A -> A * C - B

the two rules fail to match at the entry + of the student's solution and the entry * of the ideal solution. This entry determines the specific error of the student.
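The per-rule matching and the three perturbations can be sketched as follows. This is a minimal illustration, not the authors' implementation: rules are token lists, and the variable sets are passed in explicitly rather than deduced.

```python
# Sketch of per-rule pattern matching with variable binding, plus the
# three perturbations (modify, delete, insert) described in the text.

def match(student, ideal, student_vars, ideal_vars):
    """True if the rules match token-by-token, with student variables
    bound consistently to ideal variables and terminals identical."""
    if len(student) != len(ideal):
        return False
    binding = {}
    for s, t in zip(student, ideal):
        s_is_var, t_is_var = s in student_vars, t in ideal_vars
        if s_is_var != t_is_var:        # variable vs. terminal mismatch
            return False
        if s_is_var:
            if binding.setdefault(s, t) != t:   # inconsistent binding
                return False
        elif s != t:                    # terminals must be identical
            return False
    return True

def perturbations(rule, symbols):
    """Yield every rule one edit away: delete, modify, or insert a token."""
    for i in range(len(rule)):
        yield rule[:i] + rule[i + 1:]                 # delete token i
        for tok in symbols:
            yield rule[:i] + [tok] + rule[i + 1:]     # modify token i
    for i in range(len(rule) + 1):
        for tok in symbols:
            yield rule[:i] + [tok] + rule[i:]         # insert before i
```

With the paper's example, `act -> act + bat` fails to match `A -> A * C` directly, but the modify perturbation that replaces `+` with `*` produces a rule that does match, localizing the error to that token.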
It is also on this basis that the system determines the associated misconception.

Figure 2. Pattern matching between the ideal and the student's solution: the database production rule A -> B + A is pattern-matched against the student production rule <expr> -> <term> + <expr>; in the first instance, the A of the database rule binds with the <expr> of the student rule.

When the student production rule and the database production rule do not match, deduction is applied. The deduction process enables the system to recognize whether a production rule made by the student is the same as one of the existing production rules in the database but merely appears different. This process includes the application of perturbation.

3.2.1 Domain Model
To analyze the student's solution, the Student Model has to determine what the student is trying to do and what the student is actually doing. The set of ideal solutions forms the domain model, and from this domain theory an explanation of the student's plan is derived. A student model in CFG-MINTS contains general information about the student, a history of previously solved problems, and information about the usage of constraints, as demonstrated in the solutions produced by the student. Constraint-based modeling reduces the complexity of student modeling by focusing on faults only. Domain knowledge is represented in the form of state constraints, where a constraint defines a set of equivalent problem states. A state constraint is an ordered pair (Cr, Cs), where Cr is the relevance condition and Cs is the satisfaction condition. Cr identifies the problem states in which the constraint is relevant, while Cs identifies the class of relevant states in which the constraint is satisfied. Each constraint specifies a property of the domain that is shared by all correct paths. In other words, if Cr is satisfied in a problem state, then for that problem state to be a correct one it must also satisfy Cs. Conditions may be any logical formulas and hence may consist of various tests on the problem state.

3.2.2 Parsing Module
The parsing module's task is to change the student's solution from text format into sets of tokens in production rules. It checks the syntax, deduces the variables from the terminals, and loads the solution into memory in the internal data type. If the system finds no error, the next step is to assign a token type to every token. It first finds the variables (non-terminals), then the special symbols; the remaining tokens are assigned as terminals. The final output of the Parsing Module is a syntactically correct production with every token assigned a token type.

3.2.3 Determining Errors and Misconceptions
The analysis of the student's context-free grammar solution leads to the determination of errors when the student's plan does not match the domain database. The error and misconception domain design is composed of three levels: the General Error Level, the Specific Error Level, and the Misconception Level. Each ideal rule is compared to all the student rules/productions after performing pattern matching. The percentage of correctly matched tokens is the Satisfaction Condition. The Relevance Condition is computed by taking the highest Satisfaction Condition obtained after applying the different perturbations. If there are several possible solutions, the system chooses the solution most closely matched to the student's solution: the average of (Satisfaction Condition + Relevance Condition) over the rules in a solution is computed.
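The (Cr, Cs) pair can be sketched directly as a pair of predicates over a problem state. This is a minimal illustration of the definition above, not the system's constraint base; the sample constraint and state fields are hypothetical.

```python
# Minimal sketch of constraint-based diagnosis: a state constraint is an
# ordered pair (Cr, Cs) of predicates; it is violated when the relevance
# condition Cr holds but the satisfaction condition Cs does not.

def violated(constraints, state):
    """Return ids of constraints that are relevant but not satisfied."""
    return [cid for cid, (cr, cs) in constraints.items()
            if cr(state) and not cs(state)]

# Hypothetical constraint: every symbol on a rule's right-hand side must
# be a declared variable or terminal of the grammar.
constraints = {
    1: (
        lambda s: len(s["rhs"]) > 0,                                       # Cr
        lambda s: all(t in s["vars"] | s["terminals"] for t in s["rhs"]),  # Cs
    ),
}

ok_state = {"rhs": ["E", "+", "id"], "vars": {"E"}, "terminals": {"+", "id"}}
bad_state = {"rhs": ["E", "+", "x"], "vars": {"E"}, "terminals": {"+", "id"}}
```

Because every state where Cr fails is vacuously acceptable, a constraint only ever flags states it explicitly claims to govern, which is what makes the equivalence classes pedagogically meaningful.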
The solution with the highest average is chosen as the solution the student is attempting. If the average is 200%, the match is perfect. If it is not a perfect match, the modeler goes through each ideal rule. An ideal rule is perfectly matched to a student rule if both its Satisfaction Condition and its Relevance Condition are 100%, in which case the modeler moves to the next rule. Otherwise there is a violation, and the violated constraint is looked up in the constraint rules table in the database (see Table 1).

Table 1: Constraint Rules Table

Constraint | Solution No. | Production No. | Token No. | Misconception | General Error | Specific Error
1 (Autonumber) | 343 | 2 | 5 | 001 | 01 | 01

If the Relevance Condition is 100%, the system tries to find the particular token/symbol that caused the error, looks it up in the constraints table, and returns the appropriate general error, specific error, and misconception. These errors have their own tables holding their textual explanations. Otherwise, it fires the violation for the entire production.

If more than one error occurs, all errors are shown to the student. However, remediation only addresses the error that appeared the most times; if there is a tie, or every error appeared only once, the system chooses the first one, on the assumption that the succeeding errors may have been caused by the first. The student model can detect missing symbols, additional/unneeded symbols, and wrong symbols, but only one per production. The reason is that perturbing the student's answer too much may increase the resulting relevance condition while matching the ideal solution to the wrong student rule.

As an example, suppose the student is asked to construct a grammar that accepts valid arithmetic expressions over the alphabet Σ = { id, *, +, (, ) }. The ideal solutions are:

Solution 1:
E -> E + E
E -> E * E
E -> ( E )
E -> id

Solution 2:
E -> E + T
E -> T
T -> T * F
T -> F
F -> ( E )
F -> id

And the student's solution is:

expr -> expr + id
expr -> term
term -> term * factor *
factor -> expr )
factor -> id

The student's solution is matched to the second ideal solution, and the student model detects the following errors:
1. The id in the first production should be a non-terminal; the production should be <var1> -> <var1> + <var2>.
2. The * after factor should be deleted; the production should look like <var1> -> <var1> * <var2>.
3. The production <var1> -> <var2> is missing.
4. The symbol ( should be inserted after -> and before expr in the fourth production.

The student model then looks up these violated constraints in the constraint table in the database and returns their respective error codes: the specific error, general error, and misconception. The Student Model sends the error code, as x-xx-xx-xxx (Problem Type - General Error - Specific Error - Misconception), to the Tutor Model for evaluation and remediation.
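Two small pieces of the reporting step can be sketched as follows: packing a diagnosis into the x-xx-xx-xxx code, and picking which of several errors to remediate. The field widths are inferred from the stated format and the numeric values are illustrative, not real CFG-MINTS codes.

```python
from collections import Counter

# Sketch of the error code the Student Model sends to the Tutor Model:
# Problem Type - General Error - Specific Error - Misconception.
def error_code(problem_type, general, specific, misconception):
    return f"{problem_type:d}-{general:02d}-{specific:02d}-{misconception:03d}"

def pick_remediation(errors):
    """All errors are shown, but only one is remediated: the most frequent
    error, with ties (or all-singletons) falling back to the first one."""
    counts = Counter(errors)
    top = max(counts.values())
    return next(e for e in errors if counts[e] == top)
```

The fallback to the first error encodes the paper's assumption that later errors may have been caused by the first.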

3.2.4 Simulation Model
The main function of this model is to simulate a context-free grammar through a graphical parse tree on the screen, its left-most and right-most derivations, a short listing of accepted strings, and the parsing of strings given a grammar. The input of the model is a syntactically correct grammar from the Parsing Module, and it shows the simulation to the student. This helps the student identify and analyze his solution.

Initially the start symbol is the only node in the parse-tree frame of the screen. When the user selects a rule, either through the menu or the list of productions, the system tries to find a non-leaf node that can be expanded. It does this using depth-first search; whether the left-most or right-most sibling is expanded first depends on the choice the student makes in the menu, and by default it is left to right. For example, given the grammar E -> E + T | T, T -> T * F | F, F -> ( E ) | id, if the production E -> E + T is fired with E as the start symbol, the output on the screen is as shown in Figure 4.

Figure 4. Sample parse tree.

However, if a conflict is predicted, meaning the node's children would overlap with other nodes, a conflict algorithm is used to resolve the problem. The first case is that the left-most child would overlap with another node; the parent then moves itself, and any of its siblings or other nodes on its left side, to the right. For example, if the production T -> T * F is fired on a given parse tree, the output on the screen is shown in Figure 5.

Figure 5. Sample conflict.

If the conflict is found on the other side (the right-most child), the model still moves the siblings and other nodes to the right of the parent, but it does not move the parent itself. If the conflict exists on both sides, it first tries to resolve one side at a time until it reaches a state where there are no longer any conflicts (Figure 6).

Figure 6. Conflict detection on both sides.

The conflict algorithm is also recursive, but only over a node's children: if a node to be moved has children, the move also applies to all its descendants.
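The rule-firing step described above (depth-first search for the first expandable node carrying the rule's left-hand side) can be sketched as follows. This is an illustration under stated assumptions, not the system's code; the layout and conflict-resolution logic is omitted.

```python
# Sketch of firing a production on a parse tree: find the first
# unexpanded node whose symbol is the rule's left-hand side, searching
# depth-first, left-to-right by default (right-to-left when leftmost=False).

class Node:
    def __init__(self, symbol):
        self.symbol = symbol
        self.children = None  # None marks a node that may still expand

def fire(node, lhs, rhs, leftmost=True):
    """Expand the first unexpanded `lhs` node found depth-first.
    Returns True if a node was expanded."""
    if node.children is None:
        if node.symbol == lhs:
            node.children = [Node(s) for s in rhs]
            return True
        return False
    kids = node.children if leftmost else list(reversed(node.children))
    return any(fire(c, lhs, rhs, leftmost) for c in kids)

def frontier(node):
    """Left-to-right leaves of the tree (the current sentential form)."""
    if node.children is None:
        return [node.symbol]
    return [s for c in node.children for s in frontier(c)]
```

With the paper's grammar, firing E -> E + T on the start symbol E and then T -> T * F expands the T child, so the tree's frontier becomes E + T * F, mirroring the Figure 4 example.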
Another feature of the simulation model is its capability to parse a given string and generate its parse tree when the string is accepted. Because of the innate relationship between derivations and parse trees, the system first tries to find either a left-most or a right-most derivation for the string; each line of the derivation corresponds to firing a particular rule to obtain the next derivation line. Using this attribute, the system applies the rule-firing procedure of the previous section to generate the parse tree on the screen.

The derivation of a particular string is found by generating possible strings and comparing them to the input string. This is obviously expensive in computation and time, which is the reason for incorporating several pruning techniques that cut off branches that do not lead to the desired result. The derivation starts with the start symbol. The system then assigns the next possible rules that can be fired to the component rules, with all the rule-fired tags set to 'not fired'. It chooses a particular rule based on the order in the grammar, top-down. When a rule has been chosen, its tag is set to 'in use'; the rule is then fired and a new derivation line is created. This continues until a string has been generated or any of the pruning conditions is true:

1. The length of the derivation line is greater than that of the given input string.
Note that the length of the derivation line is not necessarily its token count: if nullable symbols exist, the length is the token count minus the number of nullable symbols.

2. The derivation line is α1β1, where α1 is a string composed only of terminals, and the input string is α2β2, where α2 is the prefix of the input string whose length equals that of α1, and α1 ≠ α2.

3. The derivation line is α1β1, where β1 is a string composed only of terminals, and the input string is α2β2, where β2 is the suffix of the input string whose length equals that of β1, and β1 ≠ β2.

4. The derivation line is αxβ, where α and β are strings and x is a terminal in Σ, and x is not found in the input string.

When any of these conditions is true, the system backtracks: it deletes the current derivation line and sets the tag of the rule to 'rule fired', then chooses another rule to traverse. When the system is at the first line and all of its possible rules have been set to 'rule fired', the string is said to be not accepted by the grammar.

The generation of accepted strings is another feature of the simulation model. It uses the derivation-generation algorithm discussed in the previous paragraphs, but with no pruning conditions, and the generated strings are saved in an array. The procedure stops when the number of generated strings reaches 30.

4. CONCLUSION
CFG-MINTS was presented in this paper as a multimedia intelligent tutoring system for teaching context-free grammar. Its tutor model determines how to tutor, what instructional tools to try, and why and how often to interrupt the student during the instructional process. Depending on the objective of the tutor, as well as on the current state of the Student Model, the system arrives at the best possible plan of action to be taken in the instructional process. The constraint-based approach was used in student modeling to reduce its complexity. Another feature of the system is the simulation model, which allows the student to visualize and analyze his or her solution in order to correct his or her misconceptions.
5. REFERENCES
[1] Alessi, S. & Trollip, S. (1991). Computer-Based Instruction: Methods and Development. Englewood Cliffs, New Jersey: Prentice Hall, Inc., pp. 17-85.
[2] Anderson, J., Boyle, C., Corbett, A. & Lewis, M. (1990). Cognitive modeling and intelligent tutoring. Artificial Intelligence, pp. 7-49.
[3] Apostol, R., Kua, T., Mendoza, R. & Tan, C. (1996). GURU: A Tutor Model for Pascal Programming. Undergraduate Thesis, College of Computer Studies, De La Salle University, Manila.
[4] Galvey, C., Gocolay, M.C., Ordona, E., Ruiz, C. & Reyes, R. (1999). Multimedia Intelligent Tutoring System for Context-Free Grammar. Undergraduate Thesis, Software Technology Department, College of Computer Studies, De La Salle University, Professional Schools Inc., Manila, Philippines.
[5] Holt, P., Dubs, S., Jones, M. & Greer, J. The State of Student Modeling. In Student Modeling: The Key to Individualized Instruction, pp. 3-35.
[6] Ohlsson, S. (1994). Constraint-Based Student Modeling. Springer-Verlag, New York, pp. 167-189.
[7] Reyes, R.L. (1998). A Domain Theory Extension of a Student Modeling System for Pascal Programming. Lecture Notes in Intelligent Tutoring Systems, ITS '98 Conference. Springer-Verlag, San Antonio, Texas.
[8] Reyes, R.L. et al. (1998). A Self-Extending Tutor Model for Pascal Programming. Lecture Notes in Intelligent Tutoring Systems, ITS '98.
[9] Reyes, R. (1999). Adaptive Web-Based Intelligent Tutoring System for C Programming. Proceedings of the 1999 International Conference on Computers in Education, Chiba, Japan.
[10] Sison, R.C. (1994). Intelligent Tutoring Systems: Specific Design Issues for a Rule-Learning Student Model. In 6th De La Salle University Computer Conference, Manila, pp. I-19-32.
[11] Sleeman, D. (1997). Intelligent Tutoring Systems. http://www.cis.unisa.edu.au/acrc/is_its.html, accessed July 1999.
[12] Spohrer, J. & Soloway, E. Novice Mistakes: Are the Folk Wisdoms Correct? Communications of the ACM, pp. 624-632.
[13] Verdejo, M. (1992). A framework for instructional planning and discourse modeling in intelligent tutoring systems. New Directions for Intelligent Tutoring Systems, pp. 16-170.