Reinforcement Learning


Reinforcement Learning. Slides from R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. http://www.cs.ualberta.ca/~sutton/book/the-book.html http://rlai.cs.ualberta.ca/rlai/rlaicourse/rlaicourse.html R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction

The Agent-Environment Interface. At each discrete time step t, the agent observes state s_t and selects action a_t; one step later it receives reward r_{t+1} and finds itself in state s_{t+1}, producing the trajectory s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, s_{t+2}, a_{t+2}, r_{t+3}, s_{t+3}, a_{t+3}, ...

The Agent Learns a Policy. Reinforcement learning methods specify how the agent changes its policy as a result of experience. Roughly, the agent's goal is to get as much reward as it can over the long run.

Getting the Degree of Abstraction Right. Time steps need not refer to fixed intervals of real time. Actions can be low level (e.g., voltages to motors), high level (e.g., accept a job offer), mental (e.g., shift in focus of attention), etc. States can be low-level sensations, or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being "surprised" or "lost"). An RL agent is not like a whole animal or robot, which consists of many RL agents as well as other components. The environment is not necessarily unknown to the agent, only incompletely controllable. Reward computation is in the agent's environment because the agent cannot change it arbitrarily.

Goals and Rewards. Is a scalar reward signal an adequate notion of a goal? Maybe not, but it is surprisingly flexible. A goal should specify what we want to achieve, not how we want to achieve it. A goal must be outside the agent's direct control, thus outside the agent. The agent must be able to measure success: explicitly, and frequently during its lifespan.

Returns. Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze. The return is the total reward, R_t = r_{t+1} + r_{t+2} + ... + r_T, where T is a final time step at which a terminal state is reached, ending an episode.

Returns for Continuing Tasks. Continuing tasks: interaction does not have natural episodes. Discounted return: R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ... = Σ_{k=0..∞} γ^k r_{t+k+1}, where γ, 0 ≤ γ ≤ 1, is the discount rate.
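The discounted return above can be computed directly from its definition; a minimal sketch, where the reward sequence and discount rate are made-up illustrative values:

```python
# A truncated discounted return: R_t = sum over k of gamma**k * r_{t+k+1}.
def discounted_return(rewards, gamma):
    """Discounted sum of a finite reward sequence r_{t+1}, r_{t+2}, ..."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Four rewards of 1 with gamma = 0.5: 1 + 0.5 + 0.25 + 0.125 = 1.875
R = discounted_return([1.0, 1.0, 1.0, 1.0], gamma=0.5)
```

With γ = 1 this reduces to the undiscounted episodic return from the previous slide.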

An Example. Avoid failure: the pole falling beyond a critical angle, or the cart hitting the end of the track. As an episodic task where the episode ends upon failure: reward = +1 for each step before failure, so return = number of steps before failure. As a continuing task with discounted return: reward = −1 upon failure, 0 otherwise, so return = −γ^k for k steps before failure. In either case, return is maximized by avoiding failure for as long as possible.

Another Example. Get to the top of the hill as quickly as possible: reward = −1 for each step the goal is not reached. Return is maximized by minimizing the number of steps taken to reach the top of the hill.

A Unified Notation. In episodic tasks, we number the time steps of each episode starting from zero. We usually do not have to distinguish between episodes, so we write s_t instead of s_{t,j} for the state at step t of episode j. Think of each episode as ending in an absorbing state that always produces a reward of zero. We can then cover both episodic and continuing cases by writing R_t = Σ_{k=0..∞} γ^k r_{t+k+1}, where γ = 1 is allowed only if a zero-reward absorbing state is always reached.

The Markov Property. By "the state" at step t, the book means whatever information is available to the agent at step t about its environment. The state can include immediate sensations, highly processed sensations, and structures built up over time from sequences of sensations. Ideally, a state should summarize past sensations so as to retain all essential information, i.e., it should have the Markov Property: Pr{s_{t+1} = s', r_{t+1} = r | s_t, a_t, r_t, s_{t−1}, a_{t−1}, ..., r_1, s_0, a_0} = Pr{s_{t+1} = s', r_{t+1} = r | s_t, a_t} for all s', r, and histories.

Markov Decision Processes. If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP). If the state and action sets are finite, it is a finite MDP. To define a finite MDP, you need to give: the state and action sets; the one-step dynamics, defined by transition probabilities P^a_{ss'} = Pr{s_{t+1} = s' | s_t = s, a_t = a}; and expected rewards R^a_{ss'} = E{r_{t+1} | s_t = s, a_t = a, s_{t+1} = s'}.
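In code, a finite MDP is just two tables. The states, actions, and numbers below are hypothetical (loosely in the spirit of the recycling robot on the next slide), not values from the slides:

```python
# A hypothetical finite MDP as plain tables.
# P[(s, a)] lists (next_state, probability) pairs for the one-step dynamics;
# R[(s, a, s2)] is the expected reward for that transition.
P = {
    ("high", "search"): [("high", 0.7), ("low", 0.3)],
    ("low", "recharge"): [("high", 1.0)],
}
R = {
    ("high", "search", "high"): 2.0,
    ("high", "search", "low"): 2.0,
    ("low", "recharge", "high"): 0.0,
}

def expected_reward(s, a):
    """One-step expected reward: sum over s' of P(s'|s,a) * R(s,a,s')."""
    return sum(prob * R[(s, a, s2)] for s2, prob in P[(s, a)])
```

Any of the Bellman computations that follow can be written directly against tables of this shape.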

An Example Finite MDP: Recycling Robot. At each step, the robot has to decide whether it should (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge. Searching is better but runs down the battery; if the robot runs out of power while searching, it has to be rescued (which is bad). Decisions are made on the basis of the current energy level: high or low. Reward = number of cans collected.

Value Functions. The value of a state is the expected return starting from that state; it depends on the agent's policy: V^π(s) = E_π{R_t | s_t = s}. The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π: Q^π(s, a) = E_π{R_t | s_t = s, a_t = a}.

Bellman Equation for a Policy π. The basic idea: R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ... = r_{t+1} + γ R_{t+1}. So: V^π(s) = E_π{r_{t+1} + γ V^π(s_{t+1}) | s_t = s}. Or, without the expectation operator: V^π(s) = Σ_a π(s, a) Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ V^π(s')].

More on the Bellman Equation. This is a set of equations (in fact, linear), one for each state. The value function for π is its unique solution. (Backup diagrams for V^π and Q^π omitted.)
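Because the system is linear, V^π can be found by solving it directly or, as sketched here, by simple fixed-point iteration (iterative policy evaluation). The two-state chain is made up, with the policy already folded into the dynamics:

```python
# Iterative policy evaluation on a made-up two-state chain: from state 0 the
# agent moves to state 1 with reward 1; from state 1 back to state 0 with
# reward 0. Bellman equations: V0 = 1 + gamma*V1,  V1 = 0 + gamma*V0.
gamma = 0.9
V = [0.0, 0.0]
for _ in range(1000):
    V = [1.0 + gamma * V[1], 0.0 + gamma * V[0]]
# Exact solution of the linear system: V0 = 1/(1 - gamma**2), V1 = gamma*V0
```

Each sweep applies the Bellman backup once; the iteration contracts toward the unique solution mentioned above.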

Gridworld. Actions: north, south, east, west; deterministic. An action that would take the agent off the grid leaves its position unchanged but yields reward = −1. Other actions produce reward = 0, except actions that move the agent out of the special states A and B as shown. State-value function for the equiprobable random policy; γ = 0.9.

Optimal Value Functions. For finite MDPs, policies can be partially ordered: π ≥ π' if and only if V^π(s) ≥ V^{π'}(s) for all s. There is always at least one policy (possibly many) that is better than or equal to all the others. This is an optimal policy; we denote them all π*. Optimal policies share the same optimal state-value function: V*(s) = max_π V^π(s). Optimal policies also share the same optimal action-value function: Q*(s, a) = max_π Q^π(s, a). This is the expected return for taking action a in state s and thereafter following an optimal policy.

Bellman Optimality Equation for V*. The value of a state under an optimal policy must equal the expected return for the best action from that state: V*(s) = max_a E{r_{t+1} + γ V*(s_{t+1}) | s_t = s, a_t = a} = max_a Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ V*(s')]. V* is the unique solution of this system of nonlinear equations. (Backup diagram omitted.)

Bellman Optimality Equation for Q*: Q*(s, a) = E{r_{t+1} + γ max_{a'} Q*(s_{t+1}, a') | s_t = s, a_t = a} = Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ max_{a'} Q*(s', a')]. Q* is the unique solution of this system of nonlinear equations. (Backup diagram omitted.)

Why Optimal State-Value Functions are Useful. Any policy that is greedy with respect to V* is an optimal policy. Therefore, given V*, one-step-ahead search produces the long-term optimal actions. E.g., back to the gridworld.

What About Optimal Action-Value Functions? Given Q*, the agent does not even have to do a one-step-ahead search: it simply takes π*(s) = arg max_a Q*(s, a).

Solving the Bellman Optimality Equation. Finding an optimal policy by solving the Bellman Optimality Equation requires: accurate knowledge of the environment dynamics; enough space and time to do the computation; and the Markov Property. How much space and time do we need? Polynomial in the number of states (via dynamic programming methods; Chapter 4), BUT the number of states is often huge (e.g., backgammon has about 10^20 states). We usually have to settle for approximations. Many RL methods can be understood as approximately solving the Bellman Optimality Equation.
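One standard dynamic-programming solver is value iteration, which repeatedly applies the Bellman optimality backup until the values settle. The two-state, two-action deterministic MDP below is made up for illustration:

```python
# Value iteration on a made-up 2-state, 2-action deterministic MDP.
# dynamics[s][a] = (next_state, reward)
gamma = 0.9
dynamics = [
    [(0, 0.0), (1, 1.0)],   # state 0: action 0 stays (r=0), action 1 moves to state 1 (r=1)
    [(1, 2.0), (0, 0.0)],   # state 1: action 0 stays (r=2), action 1 moves back (r=0)
]

V = [0.0, 0.0]
for _ in range(2000):
    # Bellman optimality backup: V(s) = max_a [r + gamma * V(s')]
    V = [max(r + gamma * V[s2] for s2, r in dynamics[s]) for s in range(2)]

# A policy greedy with respect to the converged V is optimal.
policy = [max(range(2), key=lambda a: dynamics[s][a][1] + gamma * V[dynamics[s][a][0]])
          for s in range(2)]
```

Here the optimal behavior is to move to state 1 and then stay, so V converges to V(1) = 2/(1 − γ) = 20 and V(0) = 1 + γ·20 = 19.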

TD Prediction. Policy Evaluation (the prediction problem): for a given policy π, compute the state-value function V^π. Both Monte Carlo and TD methods use updates of the form V(s_t) ← V(s_t) + α [target − V(s_t)], where the target is an estimate of the return: the actual return R_t for constant-α Monte Carlo, and r_{t+1} + γ V(s_{t+1}) for TD.

Simplest TD Method: TD(0): V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]. (Backup diagrams comparing TD and Monte Carlo omitted.)
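The TD(0) update fits in a few lines of code. A minimal sketch on a made-up deterministic chain, where the step size and episode count are also illustrative:

```python
# Tabular TD(0) on a made-up chain: 0 -> 1 -> terminal, reward +1 on each
# transition, gamma = 1, so the true values are V(0) = 2 and V(1) = 1.
alpha, gamma = 0.1, 1.0
V = {0: 0.0, 1: 0.0, "T": 0.0}   # value of the terminal state stays 0

for _ in range(2000):
    s = 0
    while s != "T":
        s2 = 1 if s == 0 else "T"                    # deterministic dynamics
        r = 1.0
        V[s] += alpha * (r + gamma * V[s2] - V[s])   # TD(0) update
        s = s2
```

Note that V(0) is updated toward 1 + V(1) before the episode ends: TD learns from each transition, without waiting for the final outcome.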

Example: Driving Home

Driving Home. Changes recommended by Monte Carlo methods (α = 1) vs. changes recommended by TD methods (α = 1).

Advantages of TD Learning. TD methods do not require a model of the environment, only experience. TD methods can be fully incremental: you can learn before knowing the final outcome (less memory, less peak computation), and you can learn without the final outcome, from incomplete sequences.

Random Walk Example. Values learned by TD(0) after various numbers of episodes.

TD and MC on the Random Walk. Data averaged over 100 sequences of episodes.

Optimality of TD(0). Batch updating: train completely on a finite amount of data, e.g., train repeatedly on 10 episodes until convergence. Compute updates according to TD(0), but only update estimates after each complete pass through the data. For any finite Markov prediction task, under batch updating, TD(0) converges for sufficiently small α. Constant-α MC also converges under these conditions, but to a different answer!

Random Walk under Batch Updating. After each new episode, all previous episodes were treated as a batch, and the algorithm was trained until convergence. All repeated 100 times.

Learning An Action-Value Function: Q-Learning. Q-learning updates the action-value estimate toward the best available value of the next state: Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)].
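A minimal sketch of tabular Q-learning on a made-up three-state chain. Because the update is off-policy, experience here is gathered with a purely random behavior policy; all states, rewards, and hyperparameters are illustrative:

```python
import random
random.seed(0)

n_states, gamma, alpha = 3, 0.9, 0.5
Q = [[0.0, 0.0] for _ in range(n_states)]

def step(s, a):
    """Made-up dynamics: action 1 advances toward the goal (reward +1 on
    arrival at state n_states), action 0 ends the episode with reward 0."""
    if a == 0:
        return s, 0.0, True
    s2 = s + 1
    return (s2, 1.0, True) if s2 == n_states else (s2, 0.0, False)

for _ in range(3000):
    s, done = 0, False
    while not done:
        a = random.randrange(2)                      # random behavior policy
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])        # Q-learning update
        s = s2
```

Even though the behavior is random, Q converges toward Q*: here Q(2,1) → 1, Q(1,1) → γ·1 = 0.9, Q(0,1) → γ² = 0.81.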

Sarsa: On-Policy TD Control. First learn an action-value function from each transition (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}): Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)]. Turn this into a control method by always updating the policy to be greedy with respect to the current estimate.
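A matching Sarsa sketch on a made-up chain, with an ε-greedy policy that is updated implicitly by acting greedily on the current Q. The environment, rewards, and hyperparameters are all illustrative:

```python
import random
random.seed(1)

n_states, gamma, alpha, eps = 3, 0.9, 0.5, 0.2
Q = [[0.0, 0.0] for _ in range(n_states)]

def policy(s):
    """eps-greedy with respect to the current Q."""
    if random.random() < eps:
        return random.randrange(2)
    return 0 if Q[s][0] >= Q[s][1] else 1

def step(s, a):
    """Made-up dynamics: action 1 advances (reward +1 per step, episode ends
    at state n_states); action 0 ends the episode with reward 0."""
    if a == 0:
        return s, 0.0, True
    s2 = s + 1
    return (s2, 1.0, True) if s2 == n_states else (s2, 1.0, False)

for _ in range(3000):
    s, a, done = 0, policy(0), False
    while not done:
        s2, r, done = step(s, a)
        if done:
            Q[s][a] += alpha * (r - Q[s][a])
        else:
            a2 = policy(s2)   # on-policy: bootstrap from the action actually taken next
            Q[s][a] += alpha * (r + gamma * Q[s2][a2] - Q[s][a])
            s, a = s2, a2
```

Unlike Q-learning's max over next actions, Sarsa bootstraps from a_{t+1} as actually chosen, so it evaluates the ε-greedy policy it is following.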

Windy Gridworld. Undiscounted, episodic task; reward = −1 on every step until the goal is reached.

Results of Sarsa on the Windy Gridworld

Cliffwalking. ε-greedy, ε = 0.1.