Reinforcement Learning: A Brief Tutorial. Doina Precup


Reinforcement Learning: A Brief Tutorial. Doina Precup, Reasoning and Learning Lab, McGill University, http://www.cs.mcgill.ca/~dprecup. With thanks to Rich Sutton.

Outline: the reinforcement learning problem; what to learn: policies and value functions; Monte Carlo estimation for value functions; Markov Decision Processes; dynamic programming methods; temporal-difference learning methods; learning optimal control.

The General Problem: Control Learning. Consider learning to choose actions, e.g.: a robot learning to dock on a battery charger; choosing actions to optimize factory output; playing Backgammon, Go, Poker, ...; choosing medical tests and treatments for a patient with a chronic illness; conversation; portfolio management; flying a helicopter; queue/router control. All of these are sequential decision-making problems.

Reinforcement Learning Problem. [Agent-environment diagram: the agent receives state s_t and reward r_t, emits action a_t, and the environment returns r_{t+1} and s_{t+1}.] At each discrete time t, the agent (learning system) observes state s_t ∈ S and chooses action a_t ∈ A. It then receives an immediate reward r_{t+1} and the state changes to s_{t+1}.

Example: Backgammon (Tesauro, 1992-1995). [Board diagram: points numbered 1-24; white pieces move counterclockwise, black pieces move clockwise.] The states are board positions in which the agent can move, the actions are the possible moves, and the reward is 0 until the end of the game, when it is ±1 depending on whether the agent wins or loses.

Supervised Learning. Training info: desired (target) outputs. [Block diagram: inputs → supervised learning system → outputs.] Error = (target output − actual output).

Reinforcement Learning (RL). Training info: evaluations (rewards/penalties). [Block diagram: inputs → reinforcement learning system → outputs (actions).] Objective: get as much reward as possible.

Key Features of RL. The learner is not told what actions to take; instead, it finds out what to do by trial-and-error search. The environment is stochastic. The reward may be delayed, so the learner may need to sacrifice short-term gains for greater long-term gains. The learner has to balance the need to explore its environment against the need to exploit its current knowledge.

The Power of Learning from Experience. Expert examples are expensive and scarce; experience is cheap and plentiful!

Agent's Learning Task. Execute actions in the environment, observe the results, and learn a policy (strategy, way of behaving) π : S × A → [0, 1], with π(s, a) = P(a_t = a | s_t = s). If the policy is deterministic, we write it more simply as π : S → A, with π(s) = a giving the action chosen in state s. Note that the target function is π : S → A, but we have no training examples of the form ⟨s, a⟩; training examples are of the form ⟨s, a, r, s', ...⟩. Reinforcement learning methods specify how the agent should change the policy as a function of the rewards received over time.

The Objective: Maximize Long-Term Return. Suppose the sequence of rewards received after time step t is r_{t+1}, r_{t+2}, .... We want to maximize the expected return E{R_t} for every time step t. Episodic tasks: the interaction with the environment takes place in episodes (e.g. games, trips through a maze, etc.), and R_t = r_{t+1} + r_{t+2} + ... + r_T, where T is the time when a terminal state is reached.

The Objective: Maximize Long-Term Return. Discounted continuing tasks: R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ... = Σ_{k=1}^∞ γ^{k-1} r_{t+k}, where γ is a discount factor for later rewards (between 0 and 1, usually close to 1). The discount factor is sometimes viewed as an inflation rate or a probability of dying.
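
To make the discounted return concrete, here is a minimal sketch (an illustration, not part of the tutorial) that computes R_t for a finite reward sequence; the reward list and the value of γ are made up for this example.

# Illustration only: discounted return R_t = sum_k gamma^(k-1) * r_{t+k}
# for a finite (truncated) reward sequence.
def discounted_return(rewards, gamma):
    # rewards[0] is r_{t+1}, rewards[1] is r_{t+2}, and so on
    return sum(gamma ** k * r for k, r in enumerate(rewards))

rewards = [0.0, 0.0, 1.0, -1.0, 5.0]   # hypothetical rewards r_{t+1}, ..., r_{t+5}
print(discounted_return(rewards, gamma=0.9))   # 0.81 - 0.729 + 5 * 0.6561 = 3.3615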

The Objective: Maximize Long-Term Return. Average-reward tasks: R_t = lim_{T→∞} (1/T)(r_{t+1} + r_{t+2} + ... + r_T).

Example: Mountain-Car. [Diagram: an underpowered car in a valley must reach the goal at the top of the hill, working against gravity.] States: position and velocity. Actions: accelerate forward, accelerate backward, coast. Two reward formulations: reward = −1 for every time step until the car reaches the top; or reward = +1 at the top, 0 otherwise, with γ < 1. In both cases, the return is maximized by minimizing the number of steps to the top of the hill.

Example: Pole Balancing. Avoid failure: the pole falling beyond a given angle, or the cart hitting the end of the track. Episodic task formulation: reward = +1 for each step before failure; return = number of steps before failure. Discounted continuing task formulation: reward = −1 upon failure, 0 otherwise, with γ < 1; return = −γ^k if there are k steps before failure.

Graduate School Example. [State-transition diagram: four states — Unemployed (U), Grad School (G), Industry (I), Academia (A) — with per-state rewards (e.g., r = +10 in Academia), actions n = do nothing, i = apply to industry, g = apply to grad school, a = apply to academia, and transition probabilities on each arc.] What is the best policy?

Finding a Good Policy. The problem seems difficult to solve even for toy examples. Since we do not have expert-labeled examples, ideas from supervised learning do not apply immediately. One way to address the problem is to search for a good policy in the space of all possible policies. To do this, we need a measure of the quality of a policy.

State Value Function. The value of a state s under policy π is the expected return when starting from s and choosing actions according to π: V^π(s) = E_π{ R_0 | s_0 = s } = E_π{ Σ_{k=1}^∞ γ^{k-1} r_k | s_0 = s }. If the state space is finite, the collection of values of all states, V^π, can be represented as a vector of size equal to the number of states. This vector is called the state-value function.

State-Action Value Function. Analogously, the value of taking action a in state s under policy π is: Q^π(s, a) = E_π{ Σ_{k=1}^∞ γ^{k-1} r_k | s_0 = s, a_0 = a }. Q^π can be represented as a matrix of size |S| × |A|; this is called the action-value function.

Policies and Value Functions. Value functions define a partial order over policies: π_1 ≥ π_2 if and only if V^{π_1}(s) ≥ V^{π_2}(s) for all s ∈ S. So a policy is better than another policy if and only if it generates at least the same amount of return at all states. If π_1 has higher value than π_2 at some states and lower value at others, the two policies are not comparable. Computing the value of a policy will be helpful in searching for a good one.

Monte Carlo Methods. Suppose we have an episodic task. The agent behaves according to some policy π for a while, generating several trajectories. Compute V^π(s) by averaging the observed returns after s on the trajectories in which s was visited. Two main approaches: every-visit, which averages returns for every time a state is visited in an episode, and first-visit, which averages returns only for the first time a state is visited in an episode.

Implementation of Monte Carlo Policy Evaluation. Suppose that we have n + 1 returns from state s. Then: V_{n+1}(s) = (1/(n+1)) Σ_{i=1}^{n+1} R_i(s) = (1/(n+1)) ( Σ_{i=1}^{n} R_i(s) + R_{n+1}(s) ) = (n/(n+1)) V_n(s) + (1/(n+1)) R_{n+1}(s) = V_n(s) + (1/(n+1)) ( R_{n+1}(s) − V_n(s) ). If we do not want to keep counts of how many times states have been visited, we can use a learning-rate version: V(s_t) ← V(s_t) + α_t ( R_t − V(s_t) ).
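
As an illustration of the incremental update above (not code from the tutorial), the sketch below performs every-visit Monte Carlo policy evaluation with the learning-rate form of the update; the episode format, a list of (state, reward) pairs where the reward is the one received after leaving that state, and the constant step size α are assumptions made for this example.

# Illustration only: every-visit Monte Carlo policy evaluation with the
# learning-rate update V(s) <- V(s) + alpha * (R - V(s)).
from collections import defaultdict

def mc_policy_evaluation(episodes, gamma=0.9, alpha=0.1):
    V = defaultdict(float)
    for episode in episodes:
        G = 0.0
        # Walk the episode backwards so G is the return observed from each state.
        for state, reward in reversed(episode):
            G = reward + gamma * G
            V[state] += alpha * (G - V[state])
    return V

# Hypothetical usage with two made-up episodes:
episodes = [[("s0", 0.0), ("s1", 0.0), ("s2", 1.0)],
            [("s0", 0.0), ("s2", 1.0)]]
print(dict(mc_policy_evaluation(episodes)))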

Monte Carlo Estimation of Action Values. We use the same idea: Q^π(s, a) is the average of the returns obtained by starting in state s, doing action a, and then choosing actions according to π. Like the state-value version, it converges asymptotically if every state-action pair is visited. But π might not choose every action in every state! Exploring starts: every state-action pair has a non-zero probability of being the starting pair.

Representing Value Functions. If the state space is finite, V^π can be represented as an array with one entry for every state. If the state space is infinite, use your favorite function approximator that can represent real-valued functions: a linear function approximator with non-linear basis functions, nearest neighbor, neural networks, locally weighted regression, regression trees, ... Some choices are better than others, theoretically and in practice.

Sparse, Coarse Coding. Main idea: we want linear function approximators (because they have good convergence guarantees, as we will see later) but with lots of features, so they can represent complex functions. [Figure: (a) narrow generalization, (b) broad generalization, (c) asymmetric generalization.] Coarse means that the receptive fields are typically large; sparse means that just a few units are active at any given time. E.g., CMACs, sparse distributed memories, etc.

Markov Decision Processes. A general framework for non-linear optimal control, extensively studied since the 1950s. In optimal control, it specializes to the Riccati equations for linear systems and the Hamilton-Jacobi-Bellman equations for continuous-time problems. In operations research: planning, scheduling, logistics, inventory control, sequential design of experiments, finance, marketing, queuing and telecommunications. In artificial intelligence (last 15 years): probabilistic planning.

Markov Decision Processes (MDPs). A set of states S, and a set of actions A(s) available in each state s. Markov assumption: s_{t+1} and r_{t+1} depend only on s_t and a_t, and not on anything that happened before t. Rewards: r_s^a = E{ r_{t+1} | s_t = s, a_t = a }. Transition probabilities: p_{ss'}^a = P( s_{t+1} = s' | s_t = s, a_t = a ). Rewards and transition probabilities form the model of the MDP.
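
One concrete way to hold such a model in code is sketched below (an illustration, not from the tutorial): the rewards r_s^a and transition probabilities p_{ss'}^a are stored in dictionaries keyed by state and action, and the tiny two-state MDP is made up.

# Illustration only: a tabular MDP model as dictionaries, mirroring r_s^a and p_{ss'}^a.
R = {  # R[s][a] = expected immediate reward r_s^a
    "s0": {"stay": 0.0, "go": 1.0},
    "s1": {"stay": 2.0, "go": 0.0},
}
P = {  # P[s][a] = {s': p_{ss'}^a}
    "s0": {"stay": {"s0": 1.0}, "go": {"s1": 0.9, "s0": 0.1}},
    "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}},
}

def expected_backup(V, s, a, gamma=0.9):
    # One-step lookahead: r_s^a + gamma * sum_{s'} p_{ss'}^a * V(s')
    return R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())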

Optimal Policies and Optimal Value Functions. In an MDP, there is a unique optimal value function: V*(s) = max_π V^π(s). This result was proved by Bellman in the 1950s. There is also at least one deterministic optimal policy: π* = argmax_π V^π. It is obtained by greedily choosing the action with the best value at each state. Note that value functions are measures of long-term performance, so the greedy choice is not myopic.

Bellman Equations. Values can be written in terms of successor values. E.g.: V^π(s) = E_π{ r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ... | s_t = s } = E_π{ r_{t+1} + γ V^π(s_{t+1}) | s_t = s } = Σ_{a ∈ A} π(s, a) [ r_s^a + γ Σ_{s' ∈ S} p_{ss'}^a V^π(s') ]. This is a system of linear equations whose unique solution is V^π. Bellman optimality equation for the value of the optimal policy: V*(s) = max_{a ∈ A} [ r_s^a + γ Σ_{s' ∈ S} p_{ss'}^a V*(s') ]. This produces a nonlinear system, but still with a unique solution.

Dynamic Programming. Main idea: turn the Bellman equations into update rules. For instance, value iteration approximates the optimal value function by doing repeated sweeps through the states: 1. Start with some initial guess, e.g. V_0. 2. Repeat: V_{k+1}(s) ← max_{a ∈ A} [ r_s^a + γ Σ_{s' ∈ S} p_{ss'}^a V_k(s') ]. 3. Stop when the maximum change between two iterations is smaller than a desired threshold (the values stop changing). In the limit as k → ∞, V_k → V*, and any of the maximizing actions will be optimal.
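
A minimal tabular value-iteration sketch follows (an illustration, not code from the tutorial); it assumes a model stored as dictionaries R[s][a] and P[s][a] = {s': prob}, as in the MDP sketch above.

# Illustration only: value iteration over a tabular model.
def value_iteration(R, P, gamma=0.9, theta=1e-6):
    V = {s: 0.0 for s in R}                      # initial guess V_0
    while True:
        delta = 0.0
        for s in R:
            new_v = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                        for a in R[s])
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < theta:                        # values have (nearly) stopped changing
            break
    # Greedy policy with respect to the final value estimates
    pi = {s: max(R[s], key=lambda a: R[s][a] +
                 gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
          for s in R}
    return V, pi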

Illustration: Rooms Example. Four actions, which fail 30% of the time. No rewards until the goal is reached; γ = 0.9. [Figure: value estimates after iteration #1, #2, and #3.]

Policy Iteration. 1. Start with an initial policy π_0. 2. Repeat: (a) compute V^{π_i} using policy evaluation; (b) compute a new policy π_{i+1} that is greedy with respect to V^{π_i}; until V^{π_i} = V^{π_{i+1}}.
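
The sketch below (again an illustration under the same assumed R, P dictionary model) alternates the two steps: iterative policy evaluation followed by greedy policy improvement, stopping when the policy no longer changes.

# Illustration only: policy iteration over a tabular model R[s][a], P[s][a] = {s': prob}.
def policy_iteration(R, P, gamma=0.9, eval_theta=1e-8):
    pi = {s: next(iter(R[s])) for s in R}        # arbitrary initial policy
    V = {s: 0.0 for s in R}
    while True:
        # (a) Policy evaluation: sweep V(s) <- r_s^pi(s) + gamma * sum_{s'} p * V(s')
        while True:
            delta = 0.0
            for s in R:
                a = pi[s]
                v = R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < eval_theta:
                break
        # (b) Policy improvement: act greedily with respect to V
        new_pi = {s: max(R[s], key=lambda a: R[s][a] +
                         gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
                  for s in R}
        if new_pi == pi:                         # policy (hence value) stopped changing
            return V, pi
        pi = new_pi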

Generalized Policy Iteration. Any combination of policy evaluation and policy improvement steps, even if they are not complete. [Diagram: evaluation drives V toward V^π while improvement drives π toward greedy(V), converging to π* and V*.]

Model-Based Reinforcement Learning. Usually, the model of the environment (rewards and transition probabilities) is unknown. Instead, the learner observes transitions in the environment and learns an approximate model: estimated rewards r̂_s^a and transition probabilities p̂_{ss'}^a. Note that this is a classical machine learning problem! Pretend the approximate model is correct and use it to compute the value function as above. This is a very useful approach if the models have intrinsic value and can be applied to new tasks (e.g. in robotics).

Asynchronous Dynamic Programming. Updating all states in every sweep may be infeasible for very large environments, and some states might be more important than others. A more efficient idea: repeatedly pick states at random and apply a backup, until some convergence criterion is met. Often states are selected along trajectories experienced by the agent. This procedure will naturally emphasize states that are visited more often, and hence are more important.

Dynamic Programming Summary. In the worst case, it scales polynomially in |S| and |A|. Linear programming solution methods for MDPs also exist and have better worst-case bounds, but usually scale worse in practice. Dynamic programming is routinely applied to problems with millions of states. However, if the model of the environment is unknown, computing it based on simulations may be difficult.

The Curse of Dimensionality. The number of states grows exponentially with the number of state variables (the dimensionality of the problem). To solve large problems, we need to sample the states, and values have to be generalized to unseen states using function approximation.

Reinforcement Learning: Using Experience Instead of Dynamics. Consider a trajectory with actions selected according to policy π. The Bellman equation is: V^π(s_t) = E_π[ r_{t+1} + γ V^π(s_{t+1}) | s_t ], which suggests the dynamic programming update: V(s_t) ← E_π[ r_{t+1} + γ V(s_{t+1}) | s_t ]. In general, we do not know this expected value. But, by choosing an action according to π, we obtain an unbiased sample of it: r_{t+1} + γ V(s_{t+1}). In RL, we make an update towards the sample value, e.g. half-way: V(s_t) ← ½ V(s_t) + ½ ( r_{t+1} + γ V(s_{t+1}) ).

Temporal-Difference (TD) Learning (Sutton, 1988). We want to update the prediction for the value function based on its change from one moment to the next, called the temporal difference. Tabular TD(0): V(s_t) ← V(s_t) + α ( r_{t+1} + γ V(s_{t+1}) − V(s_t) ), t = 0, 1, 2, ..., where α ∈ (0, 1) is a step-size or learning-rate parameter. Gradient-descent TD(0): if V is represented using a parametric function approximator, e.g. a neural network with parameter θ: θ ← θ + α ( r_{t+1} + γ V_θ(s_{t+1}) − V_θ(s_t) ) ∇_θ V_θ(s_t), t = 0, 1, 2, ....
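
The tabular update translates almost line for line into code. The sketch below is an illustration (not from the tutorial) and assumes a Gym-style environment interface, env.reset() returning a state and env.step(a) returning (next_state, reward, done, info), together with a fixed behavior policy(state) -> action.

# Illustration only: tabular TD(0) policy evaluation under an assumed
# Gym-style environment interface and a fixed policy.
from collections import defaultdict

def td0(env, policy, num_episodes=1000, alpha=0.1, gamma=0.9):
    V = defaultdict(float)
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done, _ = env.step(a)
            target = r if done else r + gamma * V[s_next]   # terminal value is 0
            V[s] += alpha * (target - V[s])                  # TD(0) update
            s = s_next
    return V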

Eligibility Traces (TD(λ)). [Diagram: the TD error δ_t at time t is sent backwards to recently visited states s_{t-1}, s_{t-2}, s_{t-3}, ..., with exponentially decaying eligibility e_t.] On every time step t, we compute the TD error: δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t). Shout δ_t backwards to past states; the strength of your voice decreases with temporal distance by γλ, where λ ∈ [0, 1] is a parameter.
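
A tabular TD(λ) sketch with accumulating eligibility traces is given below; it is an illustration under the same assumed Gym-style environment and fixed policy as the TD(0) sketch.

# Illustration only: tabular TD(lambda) with accumulating eligibility traces.
from collections import defaultdict

def td_lambda(env, policy, num_episodes=1000, alpha=0.1, gamma=0.9, lam=0.8):
    V = defaultdict(float)
    for _ in range(num_episodes):
        e = defaultdict(float)                    # eligibility traces, reset per episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done, _ = env.step(a)
            v_next = 0.0 if done else V[s_next]
            delta = r + gamma * v_next - V[s]     # TD error delta_t
            e[s] += 1.0                           # mark the current state as eligible
            for state in list(e):
                V[state] += alpha * delta * e[state]   # "shout" delta to past states
                e[state] *= gamma * lam                # voice fades by gamma * lambda
            s = s_next
    return V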

Example: TD-Gammon. [Network diagram: a backgammon position (198 input units) feeds 40-80 hidden units, which output the predicted probability of winning, V_t; learning is driven by the TD error V_{t+1} − V_t.] Start with a random network, play millions of games against itself, and learn the value function from this experience using TD learning. This approach obtained the best player among people and computers. Note that classical dynamic programming is not feasible for this problem!

RL Algorithms for Control. TD learning (as above) is used to compute values for a given policy π. Control methods aim to find the optimal policy. In this case, the behavior policy will have to balance two important tasks: exploring the environment in order to get information, and exploiting the existing knowledge by taking the action that currently seems best.

Exploration. In order to obtain the optimal solution, the agent must try all actions. ε-soft policies ensure that each action has at least probability ε of being tried at every step. Softmax exploration makes action probabilities conditional on the values of the different actions. More sophisticated methods offer exploration bonuses, in order to make the data acquisition more efficient. This is an area of on-going research...
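
The sketch below shows the two exploration schemes mentioned on this slide, ε-greedy and softmax (Boltzmann) selection, as an illustration only; the Q-table format Q[s][a] and the temperature parameter tau are assumptions made for this example.

# Illustration only: epsilon-greedy and softmax (Boltzmann) action selection
# from a table of action values Q[s][a].
import math
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)              # explore
    return max(actions, key=lambda a: Q[s][a])     # exploit the current estimates

def softmax_action(Q, s, actions, tau=1.0):
    prefs = [math.exp(Q[s][a] / tau) for a in actions]
    total = sum(prefs)
    probs = [p / total for p in prefs]
    return random.choices(actions, weights=probs)[0]   # sample in proportion to value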

A Spectrum of Solution Methods. Value-based RL: use a function approximator to represent the value function, then use a policy that is based on the current values. Sarsa is an incremental version of generalized policy iteration; Q-learning is an incremental version of value iteration. Actor-critic methods: use one function approximator for the value function and another to represent the policy. The value function is the critic, which computes the TD error signal; the policy is the actor, and its parameters are updated directly based on the feedback from the critic (e.g., policy gradient methods).
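
For concreteness, the sketch below gives the tabular Q-learning update, with a comment noting how Sarsa differs; it is an illustration under the same assumed Gym-style environment and ε-greedy exploration as the earlier sketches, not code from the tutorial.

# Illustration only: tabular Q-learning with epsilon-greedy exploration.
from collections import defaultdict
import random

def q_learning(env, actions, num_episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(lambda: defaultdict(float))
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:                       # epsilon-greedy behavior
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[s][act])
            s_next, r, done, _ = env.step(a)
            best_next = 0.0 if done else max(Q[s_next][act] for act in actions)
            # Q-learning backs up toward the greedy (max) value of the next state;
            # Sarsa would instead use Q[s_next][a_next] for the action actually taken.
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = s_next
    return Q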

Summary: What RL Algorithms Do. Continual, on-line learning. Many RL methods can be understood as trying to solve the Bellman optimality equations in an approximate way.

Success Stories. TD-Gammon (Tesauro, 1992). Elevator dispatching (Crites and Barto, 1995): better than the industry standard. Inventory management (Van Roy et al.): 10-15% improvement over industry standards. Job-shop scheduling for NASA space missions (Zhang and Dietterich, 1997). Dynamic channel assignment in cellular phones (Singh and Bertsekas, 1994). Robotic soccer (Stone et al., Riedmiller et al., ...). Helicopter control (Ng, 2003). Modelling neural reward systems (Schultz, Dayan and Montague, 1997).

Reference Books. For RL: Sutton & Barto, Reinforcement Learning: An Introduction, http://www.cs.ualberta.ca/~sutton/book/the-book.html. For MDPs: Puterman, Markov Decision Processes. For theory on RL with function approximation: Bertsekas & Tsitsiklis, Neuro-Dynamic Programming.