Learning and Planning with Tabular Methods


1 Carnegie Mellon School of Computer Science. Deep Reinforcement Learning and Control. Learning and Planning with Tabular Methods. Lecture 6, CMU. Katerina Fragkiadaki

2 What can I learn by interacting with the world? Previous week: the agent learned to estimate value functions and optimal policies from experience. (Agent-environment loop: state S_t, action A_t, reward R_t.)

3 Model-free RL. (Diagram: interaction with the environment yields experience; direct RL methods update the value function, which is greedified into a policy; the model-learning and planning path is unused.)

4 What can I learn by interacting with the world? Two weeks ago: we did not interact with the environment! We knew the true environment (dynamics and rewards) and just used them to plan and estimate the value functions v_*, q_* (value iteration, policy iteration using exhaustive state sweeps of Bellman backup operations; very slow when there are many states). Planning: any computational process that uses a model to create or improve a policy. (Diagram: Model → Planning → Policy.)

5 Planning. (Diagram: the same loop, highlighting the planning path; the model is used in simulation/planning to improve the value function and policy.)

6 What can I learn by interacting with the world? This lecture: model-based RL, where we combine both learning from experience and planning: 1. If the model is unknown, we will learn the model. (Agent-environment loop: state S_t, action A_t, reward R_t.)

7 What can I learn by interacting with the world? This lecture: model-based RL, where we combine both learning from experience and planning: 1. If the model is unknown, we will learn the model. 2. Learn value functions using both real and simulated experience. (Agent-environment loop: state S_t, action A_t, reward R_t.)

8 What can I learn by interacting with the world? This lecture: we will combine both learning from experience and planning: 1. If the model is unknown, we will learn the model. 2. Learn value functions using both real and simulated experience. 3. Learn value functions online using model-based lookahead search. (Agent-environment loop: state S_t, action A_t, reward R_t.)

10 Model-based RL. (Diagram: both paths active; real experience drives direct RL and model learning, and the learned model supports planning.)

11 Advantages of Model-Based RL. Advantages: model learning transfers across tasks and environment configurations (learning physics); better exploits experience in the case of sparse rewards; it is probably what the brain does (more to come); helps exploration, since we can reason about model uncertainty. Disadvantages: first learn a model, then construct a value function: two sources of approximation error.

12 What is a Model? Model: anything the agent can use to predict how the environment will respond to its actions; concretely, the transition (dynamics) function T(s'|s, a) and the reward function R(s, a). (Diagram: (s, a) → (r, s').) This includes transitions of the state of the environment and of the state of the agent.

13 What is a Model? Model: anything the agent can use to predict how the environment will respond to its actions, concretely: 1. the transition function (dynamics); 2. the reward function. Distribution model: a description of all possibilities and their probabilities, T(s'|s, a) for all (s, a, s'). Sample model, a.k.a. a simulation model: produces sample experiences for a given (s, a); often much easier to come by. Both types of models can be used to produce hypothetical experience ("what if..."), as sketched below.
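
The distinction is easy to see in code. Below is a minimal Python sketch of the two model types for a small discrete MDP; the class names and the dictionary representation are illustrative, not from the lecture.

```python
import random

# Distribution model: explicit probabilities T(s'|s,a) and rewards R(s,a).
# trans[(s, a)] maps next state s' -> probability; rew[(s, a)] is a scalar.
class DistributionModel:
    def __init__(self, trans, rew):
        self.trans, self.rew = trans, rew

    def probabilities(self, s, a):
        return self.trans[(s, a)]          # the full distribution over s'

# Sample model (simulation model): only needs to *generate* experience.
class SampleModel:
    def __init__(self, trans, rew):
        self.trans, self.rew = trans, rew

    def sample(self, s, a):
        states, probs = zip(*self.trans[(s, a)].items())
        s_next = random.choices(states, weights=probs)[0]
        return self.rew[(s, a)], s_next    # one hypothetical (r, s') pair
```

A sample model is often much easier to come by because it never has to enumerate the distribution: a physics simulator or a game engine is a sample model even when its transition probabilities are not written down anywhere.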

14 Model Learning. 1. If the model is unknown, we will learn the model. 2. Learn value functions using both real and simulated experience. 3. Learn value functions online using model-based lookahead search. (Agent-environment loop: state S_t, action A_t, reward R_t.)

15 Model Learning. Goal: estimate the model M̂ from experience {S_1, A_1, R_2, ..., S_T}. This can be thought of as a supervised learning problem: S_1, A_1 → R_2, S_2; S_2, A_2 → R_3, S_3; ...; S_{T-1}, A_{T-1} → R_T, S_T. Learning s, a → r is a regression problem; learning s, a → s' is a density estimation problem. Pick a loss function (e.g. mean-squared error, KL divergence) and find the parameters that minimize the empirical loss.
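
As a concrete reading of the two learning problems, here is a sketch of the corresponding empirical losses over a dataset of transitions (s, a, r, s'); `reward_pred` and `trans_probs` stand for any parametric predictor (table, linear model, neural network) and are illustrative names.

```python
import numpy as np

def reward_loss(reward_pred, data):
    # Learning s, a -> r is regression: empirical mean-squared error.
    return np.mean([(reward_pred(s, a) - r) ** 2 for s, a, r, s_next in data])

def transition_loss(trans_probs, data):
    # Learning s, a -> s' is density estimation: negative log-likelihood,
    # i.e. cross-entropy between empirical and predicted next-state distributions.
    return -np.mean([np.log(trans_probs(s, a)[s_next]) for s, a, r, s_next in data])
```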

16 Examples of Models for T(s'|s, a). Table lookup model (tabular): bookkeeping a probability of occurrence for each transition (s, a, s'). Alternatively, the transition function is approximated through some function approximator. (Figure: example tabular transition probabilities, e.g. 0.5 from the current state.)

17 A supervised learning problem? To look far ahead into the future you need to chain your dynamics predictions. Data is sequential, so i.i.d. assumptions break and errors accumulate over time! Solutions: hierarchical dynamics models, linear local approximations, etc. (later lectures). (Diagram: (s, a) → (r, s').)

18 Examples of Models for T(s'|s, a). Table lookup model (tabular): bookkeeping a probability of occurrence for each transition (s, a, s') (this lecture). Transition function approximated through some features (later lectures).

19 (Diagram: the loop again, highlighting model learning; experience from environment interaction is used to fit the model, which then supports simulation and planning.)

20 Table Lookup Model. The model is an explicit MDP ⟨T̂, R̂⟩. Count visits N(s, a) to each state-action pair: T̂(s'|s, a) = (1/N(s, a)) Σ_{t=1}^T 1(S_t, A_t, S_{t+1} = s, a, s') and R̂(s, a) = (1/N(s, a)) Σ_{t=1}^T 1(S_t, A_t = s, a) R_t. Alternatively, at each time-step t, record the experience tuple ⟨S_t, A_t, R_{t+1}, S_{t+1}⟩; to sample the model, randomly pick a tuple matching ⟨s, a, ·, ·⟩.
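
A minimal tabular implementation of these estimates, keeping the counts N(s, a), N(s, a, s') and the reward sums; the class and method names are illustrative.

```python
import random
from collections import defaultdict

# Table lookup model: empirical counts give T̂(s'|s,a) and R̂(s,a) as above.
class TableLookupModel:
    def __init__(self):
        self.n = defaultdict(int)         # N(s, a)
        self.n_next = defaultdict(int)    # N(s, a, s')
        self.r_sum = defaultdict(float)   # sum of rewards observed after (s, a)

    def update(self, s, a, r, s_next):
        self.n[(s, a)] += 1
        self.n_next[(s, a, s_next)] += 1
        self.r_sum[(s, a)] += r

    def t_hat(self, s, a, s_next):        # T̂(s'|s, a)
        return self.n_next[(s, a, s_next)] / self.n[(s, a)]

    def r_hat(self, s, a):                # R̂(s, a)
        return self.r_sum[(s, a)] / self.n[(s, a)]

    def sample(self, s, a):
        # Sample-model view: draw s' proportionally to the observed counts.
        cands = [(sn, c) for (ss, aa, sn), c in self.n_next.items()
                 if (ss, aa) == (s, a)]
        states, counts = zip(*cands)
        return self.r_hat(s, a), random.choices(states, weights=counts)[0]
```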

21 A simple Example. Two states A, B; no discounting; 8 episodes of experience: (A, 0, B, 0); (B, 1); (B, 1); (B, 1); (B, 1); (B, 1); (B, 1); (B, 0). We have constructed a table lookup model from the experience.
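
Working through the counts (reading each line above as one episode that ends after its last reward): A is visited once and always transitions to B with reward 0, so T̂(B|A) = 1 and R̂(A) = 0; B is visited eight times and always terminates, with rewards summing to 6, so R̂(B) = 6/8 = 0.75.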

22 Planning with a Model. Given a model M̂ = ⟨T̂, R̂⟩, solve the MDP ⟨S, A, T̂, R̂⟩ using your favorite planning algorithm: value iteration, policy iteration, tree search.

23 Planning with a Model. Given a model M̂ = ⟨T̂, R̂⟩, solve the MDP ⟨S, A, T̂, R̂⟩ using your favorite planning algorithm: value iteration, policy iteration, tree search. Curse of dimensionality!

24 Planning with a Model. Given a model M̂ = ⟨T̂, R̂⟩, solve the MDP ⟨S, A, T̂, R̂⟩ using your favorite planning algorithm: value iteration, policy iteration, tree search, sample-based planning (next).

25 Sample-based Planning. Use the model only to generate samples, not its transition probabilities and expected immediate rewards directly. Sample experience from the model: S_{t+1} ~ T̂(S_{t+1} | S_t, A_t), R_{t+1} ~ R̂(R_{t+1} | S_t, A_t). Apply model-free RL to the samples, e.g. Monte-Carlo control, Sarsa, or Q-learning (see the sketch below). Sample-based planning methods are often more efficient: rather than exhaustive state sweeps, we focus on what is likely to happen.
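
A sketch of sample-based planning with Q-learning as the model-free learner, assuming the `sample(s, a) -> (r, s')` interface of the table-lookup model above; for brevity, any sampled next state absent from `states` is treated as terminal with value zero.

```python
import random

def plan_with_samples(model, states, actions, n_updates, alpha=0.1, gamma=0.95):
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(n_updates):
        # Pick a hypothetical (s, a), then let the model dream one step ahead.
        s, a = random.choice(states), random.choice(actions)
        r, s_next = model.sample(s, a)
        future = max(q[(s_next, b)] for b in actions) if s_next in states else 0.0
        q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])   # Q-learning update
    return q
```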

26 Sample-based planning. (Diagram: the model generates simulated experience, which direct RL methods use to update the value function.)

27 A Simple Example. Construct a table-lookup model from real experience, then apply model-free RL to sampled experience. Real experience: (A, 0, B, 0); (B, 1); (B, 1); (B, 1); (B, 1); (B, 1); (B, 1); (B, 0). Sampled experience: (B, 1); (B, 0); (B, 1); (A, 0, B, 1); (B, 1); (A, 0, B, 1); (B, 1); (B, 0). E.g. Monte-Carlo learning gives V(A) = 1, V(B) = 0.75.

28 Planning with an Inaccurate Model. Given an imperfect model ⟨T̂, R̂⟩ ≠ ⟨T, R⟩, the performance of model-based RL is limited to the optimal policy for the approximate MDP ⟨S, A, T̂, R̂⟩; i.e., model-based RL is only as good as the estimated model. When the model is inaccurate, the planning process will compute a suboptimal policy. Solution 1: when the model is wrong, use model-free RL. Solution 2: reason explicitly about model uncertainty.

29 Combine real and simulated experience. 1. If the model is unknown, we will learn the model. 2. Learn value functions using both real and simulated experience. 3. Learn value functions online using model-based lookahead search. (Agent-environment loop: state S_t, action A_t, reward R_t.)

30 Real and Simulated Experience. We consider two sources of experience. Real experience, sampled from the environment (the true MDP): S' ~ T(s'|s, a), R = r(s, a). Simulated experience, sampled from the model (an approximate MDP): S' ~ T̂(S'|S, A), R ~ R̂(R|S, A).

31 Integrating Learning and Planning. Model-Free RL: no model; learn value function (and/or policy) from real experience.

32 Integrating Learning and Planning. Model-Free RL: no model; learn value function (and/or policy) from real experience. Model-Based RL (using Sample-Based Planning): learn a model from real experience; plan value function (and/or policy) from simulated experience.

33 Integrating Learning and Planning. Model-Free RL: no model; learn value function (and/or policy) from real experience. Model-Based RL (using Sample-Based Planning): learn a model from real experience; plan value function (and/or policy) from simulated experience. Dyna: learn a model from real experience; learn and plan value function (and/or policy) from both real and simulated experience.

34 Dyna. (Diagram: the full loop; real experience drives both direct RL and model learning, while planning with simulated experience from the model also updates the value function.)

35 Dyna-Q Algorithm
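
The algorithm box on this slide is an image and is not reproduced in this transcription. The following is a minimal Python sketch of tabular Dyna-Q along the lines of Sutton and Barto's pseudocode, with a deterministic last-observation model; `env.reset()` and `env.step(a) -> (s', r, done)` are assumed interfaces, not part of the slides.

```python
import random
from collections import defaultdict

def dyna_q(env, actions, episodes, n_planning=50, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)                     # Q(s, a), zero-initialized
    model = {}                                 # (s, a) -> (r, s'), last observation
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection from the real state
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: q[(s, b)]))
            s_next, r, done = env.step(a)      # real experience
            # direct RL: one-step Q-learning on the real transition
            # (terminal states keep Q = 0, since no action is ever taken there)
            target = r + gamma * max(q[(s_next, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            model[(s, a)] = (r, s_next)        # model learning (deterministic)
            # planning: n updates on simulated experience replayed from the model
            for _ in range(n_planning):
                (ps, pa), (pr, pn) = random.choice(list(model.items()))
                ptarget = pr + gamma * max(q[(pn, b)] for b in actions)
                q[(ps, pa)] += alpha * (ptarget - q[(ps, pa)])
            s = s_next
    return q
```

With n_planning = 0 this reduces to plain Q-learning; the maze slides that follow compare exactly these two settings.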

36 Dyna-Q on a Simple Maze

37 Midway in 2nd Episode. WITHOUT PLANNING (N=0). (Maze figure: start S, goal G.)

38 Midway in 2nd Episode. WITHOUT PLANNING (N=0) vs. WITH PLANNING (N=50). (Maze figures comparing the learned policies: start S, goal G.)

39 Dyna-Q with an Inaccurate Model. The changed environment is harder.

40 Dyna-Q with an Inaccurate Model (Cont.) The changed environment is easier.

41 Sampling-based look-ahead search. 1. If the model is unknown, we will learn the model. 2. Learn value functions using both real and simulated experience. 3. Learn value functions online using model-based lookahead search. (Agent-environment loop: state S_t, action A_t, reward R_t.)

42 Forward Search. (Diagram: as before, but planning now outputs an action from the given current state s rather than a complete policy.)

43 Forward Search. Prioritizes the state the agent is currently in! Uses a model of the MDP to look ahead (exhaustively). Builds a search tree with the current state s_t at the root. Focuses on the sub-MDP starting from now, which is often dramatically easier than solving the whole MDP. (Figure: lookahead tree rooted at s_t with terminal leaves T.)

44 Why Forward search? Why don't we learn a value function directly for every state offline, so that we do not waste time online? Because the environment has very many states (consider Go: 10^170, chess: 10^48, the real world...). It is very hard to compute a good value function for each one of them, and most you will never even visit. Thus it makes sense, conditioned on the current state you are in, to estimate the value function of the relevant part of the state space online! Focus your resources: use online forward search to pick the best action. Disadvantage: nothing is learned from episode to episode.

45 Simulation-based Search I. Forward search paradigm using sample-based planning. Simulate episodes of experience starting from now with the model. Apply model-free RL to the simulated episodes. (Figure: sampled trajectories branching from the root s_t.)

46 Simulation-Based Search II. Simulate K episodes of experience from now with the model: {S_t^k, A_t^k, R_{t+1}^k, ..., S_T^k}_{k=1}^K ~ M̂. Apply model-free RL to the simulated episodes: Monte-Carlo control → Monte-Carlo search.

47 Simple Monte-Carlo Search. Given a model M̂ and a simulation policy π: for each action a ∈ A, simulate K episodes from the current (real) state s_t: {s_t, a, R_{t+1}^k, S_{t+1}^k, A_{t+1}^k, ..., S_T^k}_{k=1}^K ~ M̂, π. Evaluate the action value at the root by the mean return (Monte-Carlo evaluation): Q(s_t, a) = (1/K) Σ_{k=1}^K G_t^k → q_π(s_t, a). Select the current (real) action with maximum value: a_t = argmax_{a ∈ A} Q(s_t, a).
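
A sketch of simple Monte-Carlo search; `model.sample`, a `model.terminal` predicate, and the simulation policy `pi` are assumed interfaces for this sketch.

```python
def mc_search(model, s_t, actions, pi, K=100, gamma=1.0, max_depth=200):
    def rollout(s, a):
        # Simulate one episode from (s, a) under the fixed simulation policy pi.
        g, discount = 0.0, 1.0
        for _ in range(max_depth):
            r, s = model.sample(s, a)
            g += discount * r
            discount *= gamma
            if model.terminal(s):
                break
            a = pi(s)
        return g
    # Q(s_t, a) = mean return of K simulated episodes starting with action a.
    q = {a: sum(rollout(s_t, a) for _ in range(K)) / K for a in actions}
    return max(q, key=q.get)    # a_t = argmax_a Q(s_t, a)
```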

48 Monte-Carlo Tree Search (Evaluation). Given a model M̂, simulate K episodes from the current state s_t using the current simulation policy π: {s_t, A_t^k, R_{t+1}^k, S_{t+1}^k, ..., S_T^k}_{k=1}^K ~ M̂, π. Build a search tree containing the visited states and actions. Evaluate Q(s, a) by the mean return of episodes from (s, a), for all states and actions in the tree: Q(s, a) = (1/N(s, a)) Σ_{k=1}^K Σ_{u=t}^T 1(S_u, A_u = s, a) G_u → q_π(s, a). After the search is finished, select the current (real) action with maximum value in the search tree: a_t = argmax_{a ∈ A} Q(s_t, a).

49 Monte-Carlo Tree Search (Simulation). In MCTS, the simulation policy improves. Each simulation consists of two phases (in-tree, out-of-tree). Tree policy (improves): pick actions to maximize Q(s, a). Default policy (fixed): pick actions randomly. Repeat (for each simulation): evaluate states by Monte-Carlo evaluation of Q(s, a); improve the tree policy, e.g. by greedy(Q), as in Monte-Carlo control applied to simulated experience. Converges on the optimal search tree: Q(S, A) → q_*(S, A).
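
A common concrete choice of improving tree policy is UCB1, which trades the current Monte-Carlo estimate against an exploration bonus; the slide does not commit to a specific rule, so treat this as one standard instantiation with illustrative names.

```python
import math

def ucb1_action(q, n, N_s, actions, c=1.4):
    # q[a]: Monte-Carlo estimate Q(s, a); n[a]: visits to (s, a);
    # N_s: total visits to s; c: exploration constant.
    def score(a):
        if n[a] == 0:
            return float("inf")        # try each action at least once
        return q[a] + c * math.sqrt(math.log(N_s) / n[a])
    return max(actions, key=score)
```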

50 Case Study: the Game of Go. The ancient oriental game of Go is 2500 years old. Considered to be the hardest classic board game. Considered a grand challenge task for AI (John McCarthy). Traditional game-tree search has failed in Go.

51 Rules of Go. Usually played on a 19x19 board, also 13x13 or 9x9. Simple rules, complex strategy. Black and white place stones down alternately. Surrounded stones are captured and removed. The player with more territory wins the game.

52 Position Evaluation in Go. How good is a position s? Reward function (undiscounted): R_t = 0 for all non-terminal steps t < T; R_T = 1 if Black wins, 0 if White wins. The policy π = ⟨π_B, π_W⟩ selects moves for both players. Value function (how good is position s): v_π(s) = E_π[R_T | S = s] = P[Black wins | S = s], and v_*(s) = max_{π_B} min_{π_W} v_π(s).

53 Monte-Carlo Evaluation in Go. (Figure: from the current position s, four simulated games are played out; two end in Black wins, so V(s) = 2/4 = 0.5.)
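
In code, this evaluation is just a win-rate over rollouts; `simulate_to_end(s, policy)` is an assumed helper that plays both sides to termination and returns 1 on a Black win, 0 otherwise.

```python
def mc_value(s, policy, simulate_to_end, n_rollouts=100):
    # V(s) = (# Black wins) / (# rollouts), e.g. 2 wins / 4 rollouts = 0.5.
    wins = sum(simulate_to_end(s, policy) for _ in range(n_rollouts))
    return wins / n_rollouts
```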

54 Applying Monte-Carlo Tree Search

55 Applying Monte-Carlo Tree Search (Cont.)

56 Applying Monte-Carlo Tree Search (Cont.)

57 Applying Monte-Carlo Tree Search (Cont.)

58 Applying Monte-Carlo Tree Search (Cont.)

59 Advantages of MC Tree Search. Highly selective best-first search. Evaluates states dynamically (unlike, e.g., DP). Uses sampling to break the curse of dimensionality. Computationally efficient, anytime, and parallelizable.

60 Combining offline and online value function estimation. Use policy networks as priors on Q(s, a): a_t = argmax_a (Q(s_t, a) + u(s_t, a)), where u(s, a) ∝ P(s, a) / (1 + N(s, a)) and P(s, a) = π(a|s) is the policy-network prior. Use fast, light policy networks for the rollouts (instead of a random policy). Use a value function approximation computed offline to evaluate leaf nodes in the tree: v(s_L) = (1 - λ) v_θ(s_L) + λ z_L.
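
Putting the slide's formulas into a sketch (an AlphaGo-style combination; the constant and the names are illustrative, not from the slides):

```python
def select_action(q, prior, n, actions, c=5.0):
    # a_t = argmax_a Q(s, a) + u(s, a), with u(s, a) proportional to the
    # policy-network prior P(s, a) and decaying with visit count N(s, a).
    return max(actions, key=lambda a: q[a] + c * prior[a] / (1 + n[a]))

def leaf_value(v_theta, z_rollout, lam=0.5):
    # v(s_L) = (1 - lambda) * v_theta(s_L) + lambda * z_L: blend the offline
    # value network with the online rollout outcome.
    return (1 - lam) * v_theta + lam * z_rollout
```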

61 Combining offline and online value function estimation
