Reinforcement Learning-based Spoken Dialog Strategy Design for In-Vehicle Speaking Assistant


Chin-Han Tsai 1, Yih-Ru Wang 1, Yuan-Fu Liao 2
1 Department of Communication Engineering, National Chiao Tung University, yrwang@cc.nctu.edu.tw
2 Department of Electrical Engineering, National Taipei University of Technology, yfliao@ntut.edu.tw

Abstract

In this paper, the simulated annealing Q-learning (SA-Q) algorithm is adopted to automatically learn the optimal dialogue strategy of a spoken dialogue system. Several simulations and experiments considering different user behaviors and speech recognizer performance are conducted to verify the effectiveness of the SA-Q learning approach. Moreover, the automatically learned strategy is applied to an in-vehicle speaking assistant prototype system with real user response inputs to enable a driver to easily control various in-car equipment, including a GPS-based navigation system.

Keywords: reinforcement learning, Q-learning, dialogue strategy, spoken dialog system

1. Introduction

Speech, especially spoken language, is the most natural, powerful and universal human-machine/computer interface. For example, an in-vehicle spoken dialogue system (SDS) could assist a driver in safely navigating the car or accessing real-time traffic information (see Fig. 1). Likewise, in elderly homecare it is crucial to provide elderly people with a friendly SDS for requesting various services or operating complex assistive equipment.

Usr: I want to know the nearest gas station.
Sys: The nearest gas station is at the corner of the King plaza and about 1 km away.
Usr: Please activate the navigation system.
Sys: GPS is on, please turn left at the next street.

Figure 1. The application scenario of a spoken dialogue system for GPS-based car navigation.

Current state-of-the-art SDSs are often mixed-initiative slot-filling systems [1]. This means that both the user and the system may take the initiative to provide information or ask follow-up questions in a dialog session to jointly complete certain tasks. These kinds of SDSs are useful in domains where certain pieces of information need to be elicited from the user, resulting in a set of slots to be filled, which are usually used to make a database query or update. For example, an SDS-based railway ticket reservation system might need the following slots to be filled to successfully reserve a ticket: (i) the departure city, (ii) the arrival city, (iii) the date on which to travel and (iv) the time at which to travel. However, automatically designing an efficient mixed-initiative dialog strategy that helps the user fill these slots quickly is far from trivial.

In fact, the dialog flow of an SDS can be mapped onto a Markov decision process (MDP) [2] with an appropriate set of states and actions. The automatic learning of the dialog strategy can then be described as an intelligent control problem in which an agent learns to improve the performance of the SDS by interacting with different underlying speakers/environments.
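To make the MDP view concrete, here is a minimal sketch of how a slot-filling dialog state can be encoded as an MDP state. The slot names follow the railway-reservation example above, and the four slot statuses are those used in the simulations of Section 3; the encoding itself is an illustrative assumption, not the paper's exact representation.

```python
# Hypothetical state encoding for a slot-filling dialog: the MDP state is the
# tuple of all slot statuses. Slot names follow the railway example above.
SLOTS = ("departure_city", "arrival_city", "date", "time")
STATUS = ("Unknown", "Unconfirmed", "Grounded", "Cancelled")

initial_state = tuple("Unknown" for _ in SLOTS)

# The state space has |STATUS| ** len(SLOTS) = 4^4 = 256 states here;
# with 10 slots it grows to 4^10, roughly one million states.
n_states = len(STATUS) ** len(SLOTS)

def is_goal(state):
    # The dialog succeeds once every necessary slot is Grounded.
    return all(s == "Grounded" for s in state)
```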

Therefore, in this paper, a reinforcement-learning-based simulated annealing Q-learning (SA-Q) [3] approach is used to automatically learn the optimal dialogue strategy of a spoken dialogue system (see Fig. 2) and, more specifically, the spoken dialogue strategy of an intelligent in-vehicle speaking assistant. Moreover, a probabilistic user model (also shown in Fig. 2) covering various user behaviors is built to train the SA-Q learning algorithm. The learned strategy is finally applied to an in-vehicle speaking assistant prototype system, allowing a driver to use spoken language to control a global positioning system (GPS)-based navigation system and other complex in-car equipment.

Figure 2. The block diagram of the reinforcement learning-based dialogue strategy design for spoken dialogue systems.

The organization of this paper is as follows. The SA-Q learning algorithm is briefly described in Section 2. Several simulations and experiments considering different user behaviors are presented in Section 3 to verify the effectiveness of the SA-Q learning method. The automatically learned strategy is applied to the in-vehicle speaking assistant prototype system in Section 4. Finally, conclusions are given in the last section.

2. Reinforcement learning-based dialog strategy design

As shown in Fig. 2, the SA-Q learning method [3] is a reinforcement learning algorithm that does not need a model of its working environment and can be used to adapt system behavior on-line. It is therefore well suited to handling the varied user behaviors encountered when implementing SDSs.

The Q-learning algorithm works by estimating the strength of association between states and actions (so-called state-action pairs). The quality value Q(s,a) is defined as the expected discounted sum of future rewards obtained by taking action a in state s and following an optimal policy thereafter. Once these values have been learned, the optimal action in any state is the one with the highest Q-value. The values Q(s,a) can be found as follows. First, when the system is in state s and takes action a, the optimal quality function satisfies

Q^*(s,a) = R(s,a) + \gamma \sum_{s' \in S} T(s,a,s') \max_{a'} Q^*(s',a'),   (1)

where T(s,a,s') is the transition probability from the current state s to the next state s', R(s,a) is the reward given for taking action a in state s, and \gamma is the discount factor applied to future rewards. The value of the optimal action in state s is then

V^*(s) = \max_{a} Q^*(s,a),   (2)

and the optimal strategy of the dialogue system is

\pi^*(s) = \arg\max_{a} Q^*(s,a).   (3)

Q-values are estimated on the basis of experience as follows: (i) from the current state s, select an action a; this yields an immediate payoff r and a next state s'. (ii) Update Q(s,a) based on this experience:

Q(s,a) := (1-\alpha) Q(s,a) + \alpha [ r + \gamma \max_{a'} Q(s',a') ]
        = Q(s,a) + \alpha [ r + \gamma \max_{a'} Q(s',a') - Q(s,a) ],   (4)

where \alpha is the learning rate. (iii) Repeat steps (i) and (ii) until convergence.

This algorithm is guaranteed to converge to the optimal strategy when each state is visited potentially infinitely often and the learning rate satisfies \sum_t \alpha_t = \infty and \sum_t \alpha_t^2 < \infty. For large state and action spaces and finite computing time, however, its performance depends strongly on the time course of \alpha, so choosing the learning rate is an important issue in Q-learning. Therefore, in this paper, simulated annealing Q-learning (SA-Q) [3] is used.

The simulated annealing (SA) algorithm is usually applied to a search procedure in order to control the balance between exploration and exploitation. In the SA algorithm, the transition probability between states i and j is set to

P(i,j) = 1, if f(j) > f(i);
P(i,j) = \exp( (f(j) - f(i)) / t ), otherwise,   (5)

where f(i) and f(j) are cost-function values and t is the temperature parameter. The same criterion can be applied to the Q-learning method to control the balance between exploration and exploitation.
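As a reference point before the annealing rule is added, the following is a minimal tabular sketch of the update in Eq. (4). It assumes a hypothetical environment object with reset() and step() methods and hashable states (e.g., tuples of slot statuses), and uses a plain epsilon-greedy rule as a placeholder for the SA rule developed next.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=10_000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning per Eq. (4). `env` is a hypothetical environment
    with reset() -> state and step(action) -> (next_state, reward, done)."""
    Q = defaultdict(float)  # Q[(state, action)], zero-initialized

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Placeholder epsilon-greedy rule; SA-Q replaces this (Eqs. (6)-(7)).
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])

            s_next, r, done = env.step(a)

            # Eq. (4): Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = 0.0 if done else max(Q[(s_next, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```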
In SA-Q learning, at each step the system may either follow the action a_p recommended by the current policy (exploitation) or switch to a randomly proposed action a_r (exploration). The probability of switching to the random action a_r is set to

P(a_p, a_r) = 1, if Q(s, a_r) > Q(s, a_p);
P(a_p, a_r) = \exp( (Q(s, a_r) - Q(s, a_p)) / t ), otherwise.   (6)

Because a_p is the greedy action, the Q function satisfies Q(s, a_r) \le Q(s, a_p), so that

P(a_p, a_r) = \exp( (Q(s, a_r) - Q(s, a_p)) / t ).   (7)

Furthermore, the temperature-dropping criterion

t_{k+1} = \lambda t_k,  k = 0, 1, 2, ...,  \lambda \in [0.5, 1.0],

is used to update the temperature t each time the goal state is reached. SA-Q learning can in fact be viewed as an \epsilon-greedy method with a dynamically decreasing \epsilon.
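A sketch of the selection rule of Eqs. (6)-(7) under the same assumptions as the previous snippet (tabular Q stored in a dict keyed by (state, action)):

```python
import math
import random

def sa_q_select(Q, state, actions, t):
    """SA-Q action selection (Eqs. (6)-(7)): propose the greedy action a_p,
    then accept a random action a_r with Metropolis probability."""
    a_p = max(actions, key=lambda a: Q[(state, a)])  # exploitation
    a_r = random.choice(actions)                     # exploration proposal
    if Q[(state, a_r)] > Q[(state, a_p)]:
        return a_r
    # Eq. (7): exp((Q(s,a_r) - Q(s,a_p)) / t) <= 1 since a_p is greedy
    if random.random() < math.exp((Q[(state, a_r)] - Q[(state, a_p)]) / t):
        return a_r
    return a_p

def drop_temperature(t, lam=0.9):
    # t_{k+1} = lambda * t_k with lambda in [0.5, 1.0], applied each time
    # a training dialogue reaches the goal state.
    return lam * t
```

In the training loop above, sa_q_select would replace the epsilon-greedy branch, and drop_temperature would be called at the end of each successful episode.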

3. Simulations and experiments of the automatic spoken dialogue strategy design

In this section, several simulations and experiments were conducted to verify the effectiveness of dialogue strategy learning by the SA-Q learning method. A general dialogue system with N slots and K tasks was used in all simulations; that is, the spoken dialogue system has K functions, and activating a specific function requires filling a different number of slots, some necessary and some unnecessary. In the following simulations, K was set to 4 and N to 6. As shown in Fig. 3, the functions share some joint slots and have some disjoint slots. The unnecessary slots may carry the task identifier; for example, in the intelligent in-vehicle speaking assistant the user may say "I want to activate the navigation function." However, if the user supplies distinctive necessary slots, for example the destination or the routing option in a car navigation task, the system should immediately figure out the requested function without needing the information in the unnecessary slot.

FIGURE 3. The general slot diagram of a spoken dialogue system.

In all of the following simulations, each slot takes one of four possible values: Unknown, Unconfirmed, Grounded or Cancelled. The total number of possible states is therefore 4^10, about 1M. Since the number of states is so large, automatically finding the optimal strategy is a very difficult, if not impossible, problem. The possible actions the system may take are: (i) greeting; (ii) query the task; (iii) confirm the task; (iv) give the slot information; (v) confirm the slot information; (vi) close the system. This yields a total of 151 possible actions in our simulations.

To find a suitable spoken dialogue strategy with the SA-Q learning method, real user (environment) responses to each system move and action are needed. For practical reasons, however, it is reasonable to first train an initial dialogue system with a simulated user behavior model and then further train the system on-line. Therefore, a probabilistic user model, shown in Fig. 2, was built for our simulations. The system reward function R(s,a) is given as

R = W_D D + W_R RF + W_M MIS + W_cancel CS + W_G G,   (8a)

which depends on the distance D between the current state and the goal state, the number of mismatched slots between system query and user answer MIS, whether the user forces the system to close CS, and whether the task is complete G. The distance D is defined as

D = (R_u U_i + R_g G_i + R_c C_i) / N_i + R_f M_i,   (8b)

where U_i, G_i and C_i are the numbers of Unconfirmed, Grounded and Cancelled necessary slots, N_i is the number of necessary slots, M_i is the number of Unknown unnecessary slots, and i is the task index. The weighting factors were set to the following values in our simulations:

R_u = 0.5, R_g = 1, R_c = 1, R_f = 3;
W_D = 1, W_R = 85, W_M = 6, W_cancel = 85, W_G = 20, W_T = 1.

However, building a good user model is itself a difficult problem and is beyond the scope of this paper; a simple probabilistic user model was therefore used in all simulations.
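A sketch of the reward computation of Eqs. (8a)-(8b) follows. Both the exact form of D and the meaning of the RF term are not fully legible in the source, so this is an assumed reading, not the paper's verified definition.

```python
# Weighting factors from the paper's simulations.
R_U, R_G, R_C, R_F = 0.5, 1.0, 1.0, 3.0
W_D, W_R, W_M, W_CANCEL, W_G = 1.0, 85.0, 6.0, 85.0, 20.0

def distance_to_goal(unconfirmed, grounded, cancelled, n_necessary, n_unknown_unnec):
    # Eq. (8b) as reconstructed: slot-status counts for the current task i.
    return (R_U * unconfirmed + R_G * grounded + R_C * cancelled) / n_necessary \
           + R_F * n_unknown_unnec

def reward(d, rf, mismatched, closed_by_user, task_complete):
    # Eq. (8a); `rf` stands for the RF term, whose definition is unclear
    # in the source, so it is left as an explicit argument here.
    return (W_D * d + W_R * rf + W_M * mismatched
            + W_CANCEL * closed_by_user + W_G * task_complete)
```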
In the following, three experiments were carried out to examine the effectiveness and robustness of the dialogue strategy found by the SA-Q learning algorithm, considering (i) the convergence of the SA-Q learning algorithm, (ii) different user behaviors and (iii) the performance of the speech recognizer.

EXPERIMENT 1: This simulation examines the convergence of the SA-Q learning algorithm. The recognition rate of the speech recognizer was therefore assumed to be 100%, and the confidence-measure threshold for Grounded was set to 0.65, meaning that 65% of user answers set the corresponding slot to Grounded and 35% set it to Unconfirmed. The probability that the user gives the information for each necessary slot after the greeting was 0.8, and the probability that the user answers a system query was also 0.8. Fig. 4 shows the convergence curve of SA-Q learning. Because the size of the state space was 2^20, the system needed a large number of training dialogue sessions (epochs) to converge to the optimal strategy.

FIGURE 4. Convergence of the SA-Q learning algorithm.
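The probabilistic user model behind these simulations is only loosely specified in the text; the following toy sketch is merely consistent with the Experiment 1 parameters (the structure and names are assumptions):

```python
import random

# Toy user model matching the Experiment 1 settings (hypothetical structure).
P_ANSWER = 0.8         # prob. the user answers a system query
CONF_THRESHOLD = 0.65  # fraction of answers confident enough to be Grounded

def simulate_answer(state, queried_slot):
    """Return an updated copy of `state` (a dict slot -> status) after one
    user turn in which the system queried `queried_slot`."""
    new_state = dict(state)
    if random.random() < P_ANSWER:
        confident = random.random() < CONF_THRESHOLD
        new_state[queried_slot] = "Grounded" if confident else "Unconfirmed"
    return new_state  # unchanged if the user did not answer
```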

If we further analyze the learned dialogue strategy, a reasonable strategy emerges, as shown in Fig. 5. It is worth noting that the system always tries to ask the user for the task-specific slots, since it has learned that the speech recognizer is perfect.

FIGURE 5. A typical example of the dialogue strategy automatically learned by the SA-Q learning algorithm.

EXPERIMENT 2: In this experiment, we examine whether different user behaviors affect the learned dialogue strategy. Four user behavior models were used in the simulations: (i) User1 gives only one answer no matter how many slots are asked; (ii) User2 gives fewer than three answers; (iii) User3 gives all the answers the system asks for; (iv) User4 gives not only the answers the system asks for but also the other slots needed for the specified task. Using these four user behavior models, four dialogue strategies were learned. Tables 1 to 4 show the performance of applying the four learned strategies to the different users. From these tables we can see that (i) performance was best when the training user and the test user were the same; (ii) the strategy trained with User2 was the most robust; and (iii) User4 should never be used to train the system, because with such a user the system does not need to do anything. The average objective values are also shown in Tables 1 to 4. As can be seen, the weighting factors defined in Eqs. (8a) and (8b) enable the system to find an optimal dialogue strategy that minimizes the average number of dialogue turns.

EXPERIMENT 3: In this experiment, the robustness of the SA-Q algorithm was studied with respect to the recognition performance and confidence level of the speech recognizer. In Tables 5 to 9, several experiments using user model User3 test the learned dialogue strategies under different recognition rates and confidence measures in both the training and evaluation phases. The results show that the dialogue strategies trained under matched conditions did not, in fact, have the best performance. To increase the robustness of the system, it is instead preferable to train the dialogue strategy under a lower recognition rate and confidence level. Moreover, if the recognition rate and confidence measure of the speech recognizer were too low, the system could learn nothing.

4. In-vehicle speaking assistant prototype system

Since the SA-Q learning algorithm performs very well, a prototype spoken dialogue system, the in-vehicle speaking assistant, was built; it was first trained using the probabilistic user models and then further trained with real users. The block diagram of the system is shown in Fig. 6. The major functions of the system include: (i) GPS/GIS car navigation assistance; (ii) points-of-interest database queries, e.g., gas stations and parking lots; (iii) mobile phone directory assistance, i.e., direct dialing by name and phone number query. The assignment of slots to functions is shown in Fig. 7; there are three functions and 18 slots in total.

FIGURE 6. The block diagram of the in-vehicle speaking assistant prototype system.

FIGURE 7. The assignment between the functions and slots.
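Fig. 7 itself is not reproduced here, so the following sketch uses hypothetical slot names merely to illustrate this kind of function-to-slot assignment, together with the task-inference shortcut described in Section 3 (a distinctive necessary slot immediately identifies the task):

```python
# Hypothetical function-to-slot assignment in the spirit of Fig. 7;
# the actual 18 slot names are not given in the text.
TASK_SLOTS = {
    "navigation": {"destination", "routing_option"},
    "poi_query": {"poi_type", "search_radius"},
    "phone_directory": {"contact_name", "phone_number"},
}

def infer_task(filled_slots):
    """Return the task uniquely identified by the already-filled slots, if any."""
    candidates = [task for task, slots in TASK_SLOTS.items()
                  if filled_slots & slots]
    return candidates[0] if len(candidates) == 1 else None

# Mentioning a destination alone identifies the navigation task, so the
# system can skip querying the task-identifier slot.
assert infer_task({"destination"}) == "navigation"
```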

The spoken dialogue strategy found by SA-Q learning is shown in Fig. 8. As can be seen from Fig. 8, the learned dialogue strategy is reasonable and makes sense: since the system expects medium recognition performance and confidence level, it first tries to identify the task when the task is unknown, and otherwise directly asks for the remaining slot information once the underlying task is known. Finally, Fig. 9 shows a snapshot of the prototype system in action.

FIGURE 8. The dialogue strategy learned by SA-Q learning.

FIGURE 9. The in-vehicle speaking assistant prototype system in action.

5. Conclusions

In this paper, the SA-Q learning algorithm was shown to be capable of automatically learning the optimal dialogue strategy for a spoken dialogue system. The automatically learned dialogue strategy was then applied to an in-vehicle speaking assistant prototype system to enable a driver to easily control various in-car equipment, including a GPS-based navigation system. The extension of the SA-Q learning algorithm to more complex spoken dialogue systems is currently being explored.

6. References

[1] E. Levin, R. Pieraccini, W. Eckert, G. Fabbrizio and S. Narayanan, "Spoken Language Dialogue: from Theory to Practice," Proc. ASRU99, IEEE Workshop, Keystone, Colorado, Dec. 1999.
[2] E. Levin, R. Pieraccini and W. Eckert, "Using Markov decision process for learning dialogue strategies," Proc. ICASSP 98, Seattle, WA, May 1998.
[3] Maozu Guo, Yang Liu and Jacek Malec, "A new Q-learning algorithm based on the metropolis criterion," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 34, no. 5, Oct. 2004.
[4] C. Watkins, Learning from Delayed Rewards, Ph.D. Thesis, Psychology Department, Cambridge University, Cambridge, England, 1989.

Acknowledgements

This work was supported by the National Science Council of the R.O.C. under grant no. NSC E.

Table 1. Performance of the four learned dialogue strategies under user behavior model User1 (SR: success rate; NQ: average number of queries used in the dialogue; O: average objective measure).

Table 2. Performance of the four learned dialogue strategies under user behavior model User2 (SR, NQ, O as in Table 1).

Table 3. Performance of the four learned dialogue strategies under user behavior model User3 (SR, NQ, O as in Table 1).

Table 4. Performance of the four learned dialogue strategies under user behavior model User4 (SR, NQ, O as in Table 1).

Table 5. Performance of strategy 3 (trained under recognition rate R = 1.00 and confidence measure C = 0.90), evaluated under varying recognition rates R.

Table 6. Performance of strategy 3 (trained under R = 1.00 and C = 0.65), evaluated under varying R.

Table 7. Performance of strategy 3 (trained under R = 0.90 and C = 0.90), evaluated under varying R.

Table 8. Performance of strategy 3 (trained under R = 0.90 and C = 0.65), evaluated under varying R.

Table 9. Performance of strategy 3 (trained under R = 0.60 and C = 0.65), evaluated under varying R.
