A Production Scheduling Strategy for an Assembly Plant based on Reinforcement Learning


DRANIDIS D., KEHRIS E.
Computer Science Department
CITY LIBERAL STUDIES - Affiliated College of the University of Sheffield
13 Tsimiski st., Thessaloniki
GREECE

Abstract: - A reinforcement learning algorithm for the development of a system scheduling policy that controls a manufacturing system is investigated. The manufacturing system is characterized by considerable operation and setup times. The reinforcement learning algorithm learns to develop a scheduling policy that satisfies demand while keeping a given production mix. This paper discusses the reinforcement learning algorithm used, the state space representation and the structure of the neural network employed (necessitated by the large state space of the problem). Despite the difficulty of the task assigned to the reinforcement learning algorithm, the results show that the learned policy demonstrates the desired properties.

Key-Words: - Scheduling policies, Manufacturing systems, Reinforcement learning, Neural networks

1 Introduction

Production scheduling deals with the way the resources of a manufacturing system (i.e. the workstations, personnel and support systems) are assigned over time to a set of activities so as to best meet a set of objectives. The inherent complexity of current manufacturing systems, the frequently unstructured nature of managerial decision-making and the usually contradicting goals that have to be achieved through production scheduling render the problem not amenable to analytical solution. On the other hand, heuristic algorithms that obtain good production workplans have been developed; these are usually evaluated through the use of simulation models that mimic the dynamic behavior of the real manufacturing system.

Production scheduling is often regarded as a two-level decision-making process [1,4]: at the upper level system loading is decided, while at the lower level the workstation loading is determined. It has been observed and reported in the literature that system scheduling (or system loading) plays a more important role than workstation control. In this paper, we investigate the possibility of utilizing a reinforcement learning (RL) algorithm for developing a system scheduling policy for a manufacturing system.

The structure of this paper is as follows: section 2 describes the general concept of reinforcement learning, section 3 presents the manufacturing system for which we are going to develop a scheduling policy, and section 4 describes the simulation model developed for the simulation of the manufacturing system. The RL agent developed for the specific manufacturing system is described in section 5, and section 6 presents the results obtained by controlling the manufacturing system with the RL agent. These results and the conclusions derived from them are discussed in section 7.

2 Reinforcement Learning

RL algorithms approximate dynamic programming on an incremental basis. In contrast to dynamic programming, RL algorithms do not require a model of the dynamics of the system and can be used on-line in an operating environment. A reinforcement learning agent senses its environment, takes actions and receives rewards depending on the effect of its actions on the environment. The agent has no knowledge of the dynamics of the environment (it cannot predict the consequences of its actions). Rewards provide the necessary information for the agent to adapt its actions.
The aim of a reinforcement learning agent is to maximize the total reward received from the environment. RL agents do not have any memory and thus do not keep track of their past actions; they decide based only on the knowledge they have about the current state of the environment. Figure 1 illustrates the interaction of the RL agent with the simulation system.

Fig. 1. The interaction of the RL agent with the simulation system: the agent observes the state $s_t$, takes an action $\alpha_t$ and receives a reward $r_t$.

If $s_t \in S$ ($S$ a finite set of states) is the state of the system at time $t$, the RL agent decides the next action $\alpha_t \in A$ ($A$ a finite set of actions) according to its current policy $\pi : S \to A$. A usual policy is to take greedy actions, i.e. to choose the actions that return the maximum expected long-term reward. The estimated long-term reward is represented by a function $Q : S \times A \to \mathbb{R}$, so a greedy action corresponds to the action associated with

$$V(s_t) = \max_{\alpha_t} Q(s_t, \alpha_t) \qquad (1)$$

where $V : S \to \mathbb{R}$ is called the value function and represents the long-term reward the agent will receive if, beginning from state $s_t$, it follows the greedy policy. The policy of an RL agent is not constant, since the Q values change continually during the on-line learning procedure. The following formula describes the update of $Q(s_t, \alpha_t)$:

$$Q(s_t, \alpha_t) \leftarrow r_t + \gamma\, Q(s_{t+1}, \alpha_{t+1}) \qquad (2)$$

The new estimate of Q is the immediately received reward $r_t$ plus the estimated Q value of taking the next action in the next state, discounted by the discount factor $\gamma$. During learning, a random exploration strategy is also performed: with some small probability, random actions are chosen instead of greedy actions, to allow the network to explore new regions of the state-action space. It is proved [10] that under certain conditions the algorithm converges and the final policy of the agent is the optimal policy $\pi^*$; the conditions require that the system can be described as a stationary Markov Decision Process (MDP) and that a table is used for storing the Q values. In real-world problems the state space $S$ is usually too large, or even continuous, so the Q function cannot be represented in tabular form. In these cases a function approximator, usually a neural network, is employed to approximate the Q function.
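To make equations (1) and (2) concrete, the following is a minimal tabular Q-learning sketch on a made-up two-state, two-action environment; it is an illustration, not the paper's implementation (which replaces the table with a backpropagation network because of the large state space). The environment dynamics and the step-size parameter ALPHA are our own assumptions; setting ALPHA to 1 recovers the literal update of equation (2).

```python
import random
from collections import defaultdict

GAMMA = 0.9     # discount factor gamma of equation (2)
ALPHA = 0.5     # step size; ALPHA = 1.0 gives the literal update (2)
EPSILON = 0.1   # probability of a random exploratory action
ACTIONS = (0, 1)

Q = defaultdict(float)   # Q[(state, action)] -> long-term reward estimate

def greedy_action(state):
    """Equation (1): the action maximising Q in the given state."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def env_step(state, action):
    """Hypothetical dynamics: reward 1 when the action matches the state."""
    return (1.0 if action == state else 0.0), random.randint(0, 1)

state = 0
for _ in range(5000):
    # epsilon-greedy exploration strategy
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = greedy_action(state)
    reward, next_state = env_step(state, action)
    # Equation (2): Q(s_t, a_t) <- r_t + gamma * Q(s_{t+1}, a_{t+1})
    target = reward + GAMMA * Q[(next_state, greedy_action(next_state))]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```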
2.1 Related work

Reinforcement learning has been successfully applied to many application areas. The most impressive application is the TD-Gammon system [8], which achieved master-level play in backgammon by applying the TD(λ) reinforcement learning algorithm (the Temporal Difference algorithm [7]). TD-Gammon uses a backpropagation network for approximating the value function; the neural network receives the full representation of the board, and the problem is clearly a Markov decision problem.

RL algorithms have also been successfully applied to control problems. Known applications are briefly described below. Crites and Barto [2] successfully apply RL algorithms in the domain of elevator dispatching: a team of RL agents (employing neural networks) is used to improve the performance of multiple elevator systems. Zhang and Dietterich [3] apply RL methods to incrementally improve a repair-based job-shop scheduler. They use the TD(λ) algorithm (the same algorithm used in TD-Gammon) to learn an evaluation function over states of scheduling. Their system has the disadvantage that it does not learn on-line (concurrently with the simulation). Mahadevan et al. [6] introduce a different algorithm for average-reward RL (called SMART) and apply it to the control of a production-inventory system with multiple product types, where the RL agent has to decide between the two actions of producing or maintaining the system in order to avoid costly repairs.

The work we present in this paper is closely related to Mahadevan et al. [6], since it concerns a manufacturing production system. However, the task assigned to the RL agent in our case is considerably harder due to the specific characteristics of the manufacturing system and the demanding objectives to be met by the system scheduling policy.

3 System description

3.1 Description of the assembly plant

The manufacturing system described in this paper is a simplification of an existing assembly plant. We consider an assembly plant that consists of ten different workstations and produces two types of printed circuit boards (PCBs), referred to as Type A and Type B. Table 1 shows the existing workstations. Parts waiting for processing by a workstation are temporarily stored at its local buffer, which has a capacity of five parts. Due to the limited buffer capacities the system suffers from workstation blocking. Reflow soldering (workstation 3) and wave soldering (workstation 7) are continuous-process machines and are limited only by the physical size of the moving belt the boards are placed on; the capacity of those workstations reflects the number of lots which can simultaneously be processed on them.

Workstation Id   Work area                 Comments
1                Solder paste painting
2                Surface mounting          Setup time: 9 (sec)
3                Reflow soldering          Continuous process
4                SMD vision control
5                Assembly
6                Assembly
7                Wave soldering            Continuous process
8                Final assembly
9                Vision control
10               Integrated circuit test   Setup time: 18 (sec)

Table 1: Production resources

The automatic surface mounting (workstation 2) and integrated circuit testing (workstation 10) machines require a setup operation whenever a change in the type of board is encountered; Table 1 gives the setup time for each of these machines. The process plans of the two board types are given in Table 2, in terms of the sequence of workstations each type has to visit to complete its assembly, while the durations of the corresponding operations are given in Table 3. It is evident from the durations of the processes that the setup operation is a time-consuming activity.

Table 2: Process plans (workstation id sequence per board type)

Table 3: Processing Times (in sec)
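The workstation data of Table 1 can be captured in a small configuration structure. The sketch below encodes only what the text states (ten workstations, buffer capacity of five, setup times of 9 and 18 seconds at workstations 2 and 10, two continuous-process stations); the Python form is illustrative, and the process plans and processing times of Tables 2 and 3 are deliberately not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    ws_id: int
    work_area: str
    setup_time: float = 0.0   # seconds, incurred on a board-type change
    continuous: bool = False  # belt-limited continuous-process machine
    buffer_capacity: int = 5  # local input buffer (blocks when full)

PLANT = [
    Workstation(1, "Solder paste painting"),
    Workstation(2, "Surface mounting", setup_time=9.0),
    Workstation(3, "Reflow soldering", continuous=True),
    Workstation(4, "SMD vision control"),
    Workstation(5, "Assembly"),
    Workstation(6, "Assembly"),
    Workstation(7, "Wave soldering", continuous=True),
    Workstation(8, "Final assembly"),
    Workstation(9, "Vision control"),
    Workstation(10, "Integrated circuit test", setup_time=18.0),
]
```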

3.2 Scheduling policy objectives

The system scheduling policy is responsible for deciding the time instances at which a part is input into the manufacturing plant, as well as the part type. The objective of the scheduling policy is to ensure demand satisfaction and a balanced production rate of the required types. Balanced production is necessary because the assembly plant feeds successive production stages.

The aim of this work is to investigate the possibility of deriving an RL agent that is capable of developing workplans for the given assembly plant that satisfy the demand while keeping a good balance of the production mix. A simulation program has been developed that mimics the dynamics of the assembly plant, and an RL agent has then been built that determines the loading policy of the simulated plant.

4 The Simulation Program

The simulation model built for the manufacturing system is based on the FMSLIB simulation library [5], a generic software library written in C that facilitates the simulation of flexible manufacturing systems (FMS) and their real-time control strategies. FMSLIB employs the three-phase approach [9] and provides facilities for modelling the physical structure, the part flow and the system and workstation loading policies of a family of FMSs. FMSLIB currently supports the following simulation entities:

parts of different types
machines
workstations (a group of machines)
limited capacity buffers
non-accumulating conveyors

FMSLIB advocates the separate development of the conceptually different views of the simulated system. This approach facilitates modular program development, program readability and maintainability, and the evaluation of different strategies (dispatching, control, etc.) on the same system. A simulation program based on FMSLIB is comprised of the following modules:

Physical Structure (Equipment) - Contains the descriptions of the machines, conveyors, buffers and workstations that make up the simulated system.

Operational Logic - Contains the descriptions of the feeding policies for the machines, workstations and conveyors of the simulated system, i.e. determines the input buffer policy for each machine.

Input Data - Provides system-related static data such as the demand and the machine processing times.

Part path - Describes the part flow through the system. This module explicitly describes the equipment required by each part type at each stage of its manufacturing process.

Data Collection - Defines the user-defined parts of the system for which data collection is necessary.

Control Strategy - Determines the scheduling policy to be implemented for the control of the system, i.e. it determines which part type will be introduced into the system and when. In this paper, the control strategy is implemented by the neural network which is trained by the RL agent.

The separation of the different views of the simulated system advocated by FMSLIB greatly facilitated the integration of the software that implements the RL agent (written in C++) with the simulation code.
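FMSLIB itself is a C library and its actual API is not shown here; the following Python sketch only illustrates the design choice the module list describes, namely that the control strategy is a pluggable view that can be swapped (e.g. for an RL-driven one) without touching the physical-structure or part-flow code. All names are illustrative assumptions.

```python
class ControlStrategy:
    """Control-strategy view: which part type enters the system, and when."""
    def decide(self, sim_state):
        raise NotImplementedError

class AlternatingMix(ControlStrategy):
    """Trivial baseline strategy that alternates the two board types."""
    def __init__(self):
        self.last = "B"
    def decide(self, sim_state):
        self.last = "A" if self.last == "B" else "B"
        return self.last

class Simulation:
    """Physical structure and part flow stay fixed; the strategy plugs in."""
    def __init__(self, control):
        self.control = control
        self.entered = []
    def run(self, steps):
        for t in range(steps):
            part = self.control.decide({"time": t})
            if part is not None:
                self.entered.append(part)
        return self.entered

print(Simulation(AlternatingMix()).run(6))  # ['A', 'B', 'A', 'B', 'A', 'B']
```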
5 The RL agent

In order to define a reinforcement learning agent one has to define a suitable representation of the state and a suitable reward function. Both definitions are very important for the success of an RL agent. Due to the large state space, a neural network approximator is used for representing the Q function; specifically, the backpropagation learning algorithm is used for updating the weights of the network. The input to the network is described in the following section.

5.1 State representation

One of the most important decisions when designing an RL agent is the representation of the state. In the system described in this paper this is one of the major concerns, since a complete representation is not possible due to the complexity of the problem. Therefore we choose to include the following information in the state representation:

state of machines. Each machine may be found in one of four distinct states: idle, working, setup or blocked. A separate input unit is used for each one of these states.

state of input buffers. Buffers are of limited capacity. We use two units for each buffer: one unit for the level of the buffer, and a second one which turns on when the buffer is full.

elapsed simulation time $t/T$, where $T$ is the total production time. 10 units are used for the representation of time; these units encode the time as thermometer units.

feeding rates $e_t(i)/P(i)$ for each type $i$ of production parts, where $e_t(i)$ is the number of parts of type $i$ entered into the system by time $t$ and $P(i)$ is the total demand for type $i$.

producing rates $p_t(i)/P(i)$ for each type $i$ of production parts, where $p_t(i)$ is the number of parts of type $i$ produced by time $t$.

For each of the continuous rate measures 10 units are used, similarly to the encoding of the simulation time. A sketch of the resulting encoding is given below.
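Under stated assumptions (one-hot machine states, buffer level normalised by capacity plus a "full" flag, and 10-unit thermometer codes for time and the four rate measures), the encoding of section 5.1 might look as follows. The exact unit layout of the original network is not specified in the paper, so this is a plausible reading rather than the actual implementation.

```python
import numpy as np

MACHINE_STATES = ("idle", "working", "setup", "blocked")

def thermometer(x, n=10):
    """Encode x in [0, 1] as n units, the first floor(x * n) of which are on."""
    x = min(max(x, 0.0), 1.0)
    return [1.0 if x * n >= k + 1 else 0.0 for k in range(n)]

def encode_state(machines, buffers, t, T, entered, produced, demand):
    """machines: a state name per machine; buffers: (level, capacity) pairs;
    entered/produced/demand: per part type, e_t(i), p_t(i) and P(i)."""
    units = []
    for m in machines:                    # one unit per machine state
        units.extend(1.0 if m == s else 0.0 for s in MACHINE_STATES)
    for level, capacity in buffers:       # buffer level + "buffer full" unit
        units.append(level / capacity)
        units.append(1.0 if level == capacity else 0.0)
    units.extend(thermometer(t / T))      # elapsed simulation time t / T
    for i in sorted(demand):              # feeding and producing rates
        units.extend(thermometer(entered[i] / demand[i]))
        units.extend(thermometer(produced[i] / demand[i]))
    return np.array(units)

x = encode_state(["idle"] * 10, [(0, 5)] * 10, t=0, T=1000,
                 entered={"A": 0, "B": 0}, produced={"A": 0, "B": 0},
                 demand={"A": 10, "B": 7})
print(x.shape)   # (110,) for ten machines, ten buffers and two part types
```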

5.2 Actions

The RL agent has to decide between three actions: entering a part of type A, entering a part of type B, or doing nothing. The decision is based on the comparison of the outputs of three separate neural networks, which are trained simultaneously during the simulation. All three networks receive the same state input.
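A sketch of this selection scheme is given below: three independent approximators, one per action, all fed the same state vector, with the agent taking the action whose network reports the highest Q estimate. Linear models stand in for the paper's backpropagation networks, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS = 110   # size of the state encoding sketched above
ACTIONS = ("enter_A", "enter_B", "do_nothing")

# one weight vector per action network (linear stand-ins for real MLPs)
weights = {a: rng.normal(scale=0.01, size=N_INPUTS) for a in ACTIONS}

def q_values(state):
    """Evaluate all three action networks on the same state input."""
    return {a: float(weights[a] @ state) for a in ACTIONS}

def select_action(state, epsilon=0.1):
    """Greedy over the three network outputs, with epsilon exploration."""
    if rng.random() < epsilon:
        return ACTIONS[rng.integers(len(ACTIONS))]
    q = q_values(state)
    return max(q, key=q.get)

print(select_action(rng.random(N_INPUTS)))
```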
5.3 The reward function

Special care has to be taken when defining the reward function. The implicit mapping of reward functions to scheduling policies has to be monotonic: higher rewards should correspond to better scheduling policies. Taking into consideration the scheduling policy objectives, the reward is calculated with the following formula:

$$r_\tau = -\max_i \frac{\left| p_\tau(i) - d_i \tau \right|}{P(i)} \qquad (3)$$

where $\tau$ denotes simulation time, to distinguish it from the decision epochs (the times at which the RL agent takes decisions), which are denoted by $t$. The term $d_i = P(i)/T$ is the ideal production rate of part type $i$. In the ideal case, in which the production is balanced, $p_\tau(i) - d_i \tau$ tends to zero; thus the RL agent is punished with the maximum distance between the desired and the actual amount of production.

Simulation steps do not coincide with the decision epochs of the RL agent, since during the simulation states occur in which there is only one possible action; at these states the simulation proceeds without consulting the RL controller. However, rewards are calculated at each simulation step and accumulated until the next decision epoch.
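A short sketch of equation (3) and of the accumulation of rewards between decision epochs follows. The demand figures are taken from the experiment of section 6, and the treatment of the distance as a negative reward reflects the paper's statement that the agent is punished by it; the sign convention is our reading of the garbled original formula.

```python
def reward(tau, produced, demand, T):
    """Equation (3): r_tau = -max_i |p_tau(i) - d_i * tau| / P(i),
    with d_i = P(i) / T the ideal production rate of type i."""
    return -max(abs(produced[i] - (demand[i] / T) * tau) / demand[i]
                for i in demand)

# Rewards are computed at every simulation step and summed until the next
# decision epoch, where the accumulated value is awarded to the last action.
demand, T = {"A": 10, "B": 7}, 1000.0
accumulated = 0.0
for tau, produced in [(10, {"A": 0, "B": 0}), (20, {"A": 1, "B": 0})]:
    accumulated += reward(tau, produced, demand, T)
print(round(accumulated, 3))   # accumulated reward for this decision epoch
```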

6 Experimental results

After training the RL agent we conducted several experiments to test its performance on different manufacturing scenarios. Fig. 2 illustrates the derived scheduling policy for the task of producing 10 parts of type A and seven parts of type B. The selection of small production targets increases the impact of the initial transient period on overall performance. In the graph of Fig. 2 one can qualitatively compare the ideal accumulative productions $d_i \tau$ (shown in the figure as two straight lines, one for each type of production part) with the actual accumulative productions $p_\tau(i)$ (shown as stepwise functions). It can be observed that, despite the initial transient period, the RL agent produces a schedule that quickly approximates the ideal productivity rates.

Fig. 2. Actual accumulative productions versus ideal accumulative productions (actual and ideal accumulative production of types A and B over time).

7 Discussion

The manufacturing system presented in this paper is characterized by time-consuming processing and setup times, as well as by possible workstation blockages due to limited buffer capacities. As a result of these characteristics, there is a considerable time delay between the entrance of a part into the system and its exit (production). This implies that the consequences of the actions taken at the entry level are observed with considerable time delay. This fact has been identified and tackled appropriately by the RL agent.

Furthermore, the system goes through a long transient period due to the fact that it is initially empty. Thus, the behaviour observed by the RL agent at the beginning of the simulation is considerably different from that at steady state. This means that the RL agent should be able to distinguish between the transient and the steady state. In addition to these characteristics, the objective of the system scheduling policy is to satisfy the demand while producing the part types at a given production mix and keeping the work in progress low. Demand satisfaction is favored by keeping the setups at a minimum level, while achieving the desired production mix requires an interchange of part types at feeding, which results in setup operations. The work in progress may be kept low by adjusting the feeding to the system's production capacity. As a consequence, the RL agent has to strike a balance between batch processing and product mixing while not feeding the system continuously. It becomes obvious from these observations that the RL agent is assigned a hard problem.

An open question remains whether the manufacturing system can be represented as a Markov Decision Process, since many rewards occur between successive decision epochs and the system is not fully visible (due to the abstraction of the representation). Mahadevan et al. [6] argue that these kinds of problems can be represented as Semi-Markov Decision Processes. Although we do not use the same average-reward RL algorithm, we do accumulate rewards between successive decision epochs and award them to the preceding action.

As already mentioned, the system used for the experiments is a simplification of an existing assembly plant. The actual system consists of multiple machines per workstation and produces 17 part types (2 types were used for this study).
Representation of the actual system would require an even larger state-action space and would considerably lengthen learning times. One of our future goals is to examine ways of adapting the RL agent already developed to deal with the increased complexity of the real problem.

References:
[1] R. Akella, Y. F. Choong, and S. B. Gershwin, Performance of hierarchical production scheduling policy, IEEE Transactions on Components, Hybrids, and Manufacturing Technology, Vol. 7, No. 3, 1984.
[2] R. Crites and A. Barto, Improving Elevator Performance Using Reinforcement Learning, in D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, MIT Press, 1996.

[3] W. Zhang and T. G. Dietterich, A reinforcement-learning approach to job-shop scheduling, in Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995.
[4] R. Graves, Hierarchical scheduling approach in flexible assembly systems, in Proceedings of the 1987 IEEE Conference on Robotics and Automation, Raleigh, NC, Vol. 1, 1987.
[5] E. Kehris, Z. Doulgeri, An FMS simulation development environment for real time control strategies, XVI European Conference on Operational Research, Brussels.
[6] S. Mahadevan, N. Marchalleck, T. K. Das, and A. Gosavi, Self-improving factory simulation using continuous-time average-reward reinforcement learning, in Proceedings of the 13th International Conference on Machine Learning, 1996.
[7] R. S. Sutton, Learning to predict by the methods of temporal differences, Machine Learning, 3, 1988.
[8] G. Tesauro, TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play, Neural Computation, 6, 1994.
[9] K. Tocher, The Art of Simulation, Van Nostrand Company, Princeton, NJ.
[10] C. J. C. H. Watkins and P. Dayan, Q-learning, Machine Learning, 8, 1992.
