CS 5522: Artificial Intelligence II Reinforcement Learning


CS 5522: Artificial Intelligence II Reinforcement Learning Instructor: Alan Ritter Ohio State University [These slides were adapted from CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]

Reinforcement Learning

Reinforcement Learning [diagram: the agent observes state s and reward r from the environment and sends back actions a] Basic idea: Receive feedback in the form of rewards. The agent's utility is defined by the reward function. Must (learn to) act so as to maximize expected rewards. All learning is based on observed samples of outcomes!
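
As a rough sketch of this interaction loop (the env and agent interfaces below are hypothetical placeholders, not anything from the course code), the agent only ever learns from the sampled transitions and rewards it observes:

```python
# Minimal agent-environment interaction loop: the agent acts, the environment
# returns a reward and next state, and learning uses only these observed samples.
def run_episode(env, agent):
    s = env.reset()                      # initial state (hypothetical interface)
    total_reward, done = 0.0, False
    while not done:
        a = agent.act(s)                 # agent chooses an action
        s_next, r, done = env.step(a)    # feedback: next state and reward
        agent.observe(s, a, s_next, r)   # learn from the observed sample
        total_reward += r
        s = s_next
    return total_reward
```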

Example: Learning to Walk [Kohl and Stone, ICRA 2004]: Initial, A Learning Trial, After Learning [1K Trials]

Example: Learning to Walk [Kohl and Stone, ICRA 2004] Initial [Video: AIBO WALK initial]

Example: Learning to Walk [Kohl and Stone, ICRA 2004] Training [Video: AIBO WALK training]

Example: Learning to Walk [Kohl and Stone, ICRA 2004] Finished [Video: AIBO WALK finished]

Example: Toddler Robot [Tedrake, Zhang and Seung, 2005] [Video: TODDLER 40s]

The Crawler! [Demo: Crawler Bot (L10D1)] [You, in Project

Video of Demo Crawler Bot

Reinforcement Learning Still assume a Markov decision process (MDP): a set of states s ∈ S, a set of actions (per state) A, a model T(s, a, s′), and a reward function R(s, a, s′). Still looking for a policy π(s). New twist: we don't know T or R, i.e. we don't know which states are good or what the actions do. We must actually try out actions and states to learn.

Offline (MDPs) vs. Online (RL): Offline Solution vs. Online Learning

Model-Based Learning

Model-Based Learning Model-based idea: learn an approximate model based on experiences, then solve for values as if the learned model were correct. Step 1: Learn the empirical MDP model: count outcomes s′ for each (s, a); normalize to give an estimate of the transition model T(s, a, s′); discover each reward R(s, a, s′) when we experience (s, a, s′). Step 2: Solve the learned MDP, for example with value iteration, as before. (A code sketch follows below.)
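
A minimal sketch of these two steps (the data format below, a list of (s, a, s′, r) tuples, is an assumption for illustration, not the course's project code):

```python
from collections import defaultdict

# Step 1: learn the empirical MDP model from observed transitions.
# 'experiences' is assumed to be a list of (s, a, s_next, r) tuples.
def learn_model(experiences):
    counts = defaultdict(lambda: defaultdict(int))   # counts[(s, a)][s_next]
    rewards = {}                                     # estimated R(s, a, s_next)
    for s, a, s_next, r in experiences:
        counts[(s, a)][s_next] += 1
        rewards[(s, a, s_next)] = r                  # rewards assumed deterministic
    T_hat = {}
    for (s, a), outcomes in counts.items():
        total = sum(outcomes.values())
        for s_next, c in outcomes.items():
            T_hat[(s, a, s_next)] = c / total        # normalize the counts
    return T_hat, rewards

# Step 2: solve the learned MDP (e.g., run value iteration) as if T_hat and
# the estimated rewards were the true model.
```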

Example: Model-Based Learning. Input policy π over states A, B, C, D, E; assume γ = 1.

Observed episodes (training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10

Learned model:
T(s, a, s′): T(B, east, C) = 1.00; T(C, east, D) = 0.75; T(C, east, A) = 0.25; ...
R(s, a, s′): R(B, east, C) = -1; R(C, east, D) = -1; R(D, exit, x) = +10; ...
(For example, 3 of the 4 observed (C, east, ·) transitions go to D, so T(C, east, D) = 3/4 = 0.75.)

Example: Expected Age. Goal: compute the expected age of CSE 5522 students. Known P(A): compute the expectation directly from the distribution. Without P(A), instead collect samples [a_1, a_2, ..., a_N]. Unknown P(A), model-based: estimate the distribution from the samples, then compute the expectation under the estimate. Why does this work? Because eventually you learn the right model. Unknown P(A), model-free: average the samples directly. Why does this work? Because samples appear with the right frequencies. (The estimators are written out below.)
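
The three estimators this slide refers to, reconstructed here in standard form (the original slide shows them as equations, possibly with slightly different notation):

Known P(A):   E[A] = \sum_a P(a)\, a

Model-based:  \hat{P}(a) = \frac{\mathrm{num}(a)}{N}, \qquad E[A] \approx \sum_a \hat{P}(a)\, a

Model-free:   E[A] \approx \frac{1}{N} \sum_{i=1}^{N} a_i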

Model-Free Learning

Passive Reinforcement Learning

Passive Reinforcement Learning Simplified task: policy evaluation. Input: a fixed policy π(s). You don't know the transitions T(s, a, s′) and you don't know the rewards R(s, a, s′). Goal: learn the state values. In this case the learner is along for the ride: no choice about what actions to take; just execute the policy and learn from experience. This is NOT offline planning! You actually take actions in the world.

Direct Evaluation Goal: compute values for each state under π. Idea: average together observed sample values. Act according to π; every time you visit a state, write down what the sum of discounted rewards turned out to be; average those samples. This is called direct evaluation.
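
A minimal sketch of direct evaluation, assuming episodes are given as lists of (state, reward) pairs in visit order, where the reward is the one received on the transition out of that state (a simplification chosen here for illustration):

```python
from collections import defaultdict

# Direct evaluation: average the observed returns (sums of discounted rewards)
# from every visit to each state, over episodes generated by the fixed policy.
def direct_evaluation(episodes, gamma=1.0):
    returns = defaultdict(list)
    for episode in episodes:
        G = 0.0
        # walk backwards so G accumulates the discounted return from each state onward
        for state, reward in reversed(episode):
            G = reward + gamma * G
            returns[state].append(G)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}
```

On the four training episodes in the example below this yields V(A) = -10, V(B) = +8, V(C) = +4, V(D) = +10, V(E) = -2, matching the output values shown.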

Example: Direct Evaluation. Input policy π over states A, B, C, D, E; assume γ = 1.

Observed episodes (training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10

Output values: V(A) = -10, V(B) = +8, V(C) = +4, V(D) = +10, V(E) = -2

Problems with Direct Evaluation What's good about direct evaluation? It's easy to understand; it doesn't require any knowledge of T or R; and it eventually computes the correct average values, using just sample transitions. What's bad about it? It wastes information about state connections; each state must be learned separately; so it takes a long time to learn. (Output values from the example: V(A) = -10, V(B) = +8, V(C) = +4, V(D) = +10, V(E) = -2.) If B and E both go to C under this policy, how can their values be different?

Why Not Use Policy Evaluation? Simplified Bellman updates calculate V for a fixed policy: each round, replace V with a one-step-look-ahead layer over V. [diagram: a one-step look-ahead tree from s through π(s) to successors s′] This approach fully exploited the connections between the states. Unfortunately, we need T and R to do it! Key question: how can we do this update to V without knowing T and R? In other words, how do we take a weighted average without knowing the weights?
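
The update referred to here is the standard fixed-policy Bellman backup (reconstructed; the slide shows it as an equation):

V_{k+1}^{\pi}(s) \leftarrow \sum_{s'} T(s, \pi(s), s')\,\bigl[\, R(s, \pi(s), s') + \gamma\, V_k^{\pi}(s') \,\bigr]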

Sample-Based Policy Evaluation? We want to improve our estimate of V by computing these averages (written out below). Idea: take samples of outcomes s′ (by doing the action!) and average over the samples s′_1, s′_2, s′_3, ... Almost! But we can't rewind time to get sample after sample from state s.
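
The averages in question, reconstructed in standard notation:

\text{sample}_i = R(s, \pi(s), s'_i) + \gamma\, V_k^{\pi}(s'_i), \qquad V_{k+1}^{\pi}(s) \approx \frac{1}{n} \sum_{i=1}^{n} \text{sample}_i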

Temporal Difference Learning Big idea: learn from every experience! Update V(s) each time we experience a transition (s, a, s′, r); likely outcomes s′ will contribute updates more often. Temporal difference learning of values: the policy is still fixed, we are still doing evaluation! Move values toward the value of whatever successor occurs: a running average. The sample of V(s), the update to V(s), and the equivalent rewritten update are given below.
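
Reconstructed in standard notation (the slide shows these as equations):

Sample of V(s):  \text{sample} = R(s, \pi(s), s') + \gamma\, V^{\pi}(s')

Update to V(s):  V^{\pi}(s) \leftarrow (1 - \alpha)\, V^{\pi}(s) + \alpha\, \text{sample}

Same update:  V^{\pi}(s) \leftarrow V^{\pi}(s) + \alpha\,\bigl(\text{sample} - V^{\pi}(s)\bigr)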

Exponential Moving Average The running interpolation update (written out below) makes recent samples more important and forgets about the past (distant past values were wrong anyway). A decreasing learning rate (alpha) can give converging averages.
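
The running interpolation update and its unrolled form, which shows the exponentially decaying weights on older samples (reconstructed; this is the standard exponential moving average):

\bar{x}_n = (1 - \alpha)\, \bar{x}_{n-1} + \alpha\, x_n

\bar{x}_n = \frac{x_n + (1-\alpha)\, x_{n-1} + (1-\alpha)^2\, x_{n-2} + \cdots}{1 + (1-\alpha) + (1-\alpha)^2 + \cdots}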

Example: Temporal Difference Learning. States A, B, C, D, E; assume γ = 1, α = 1/2. Observed transitions: (B, east, C, -2), then (C, east, D, -2).

State values:            A    B    C    D    E
Initially:               0    0    0    8    0
After (B, east, C, -2):  0   -1    0    8    0
After (C, east, D, -2):  0   -1    3    8    0
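
Worked out with α = 1/2 and γ = 1, the two TD updates are:

V(B) \leftarrow (1 - \tfrac{1}{2})\cdot 0 + \tfrac{1}{2}\,\bigl(-2 + 1\cdot V(C)\bigr) = \tfrac{1}{2}(-2 + 0) = -1

V(C) \leftarrow (1 - \tfrac{1}{2})\cdot 0 + \tfrac{1}{2}\,\bigl(-2 + 1\cdot V(D)\bigr) = \tfrac{1}{2}(-2 + 8) = 3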

Problems with TD Value Learning TD value learning is a model-free way to do policy evaluation, mimicking Bellman updates with running sample averages. However, if we want to turn values into a (new) policy, we're sunk. Idea: learn Q-values, not values; that makes action selection model-free too!

Active Reinforcement Learning

Active Reinforcement Learning Full reinforcement learning: optimal policies (like value iteration). You don't know the transitions T(s, a, s′) and you don't know the rewards R(s, a, s′), but you choose the actions now. Goal: learn the optimal policy / values. In this case the learner makes choices! Fundamental tradeoff: exploration vs. exploitation. This is NOT offline planning! You actually take actions in the world and find out what happens.

Detour: Q-Value Iteration Value iteration: find successive (depth-limited) values. Start with V_0(s) = 0, which we know is right. Given V_k, calculate the depth k+1 values for all states. But Q-values are more useful, so compute them instead: start with Q_0(s,a) = 0, which we know is right, and given Q_k, calculate the depth k+1 q-values for all q-states. (Both updates are written out below.)
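
The two updates, reconstructed in standard form:

V_{k+1}(s) \leftarrow \max_a \sum_{s'} T(s, a, s')\,\bigl[\, R(s, a, s') + \gamma\, V_k(s') \,\bigr]

Q_{k+1}(s, a) \leftarrow \sum_{s'} T(s, a, s')\,\Bigl[\, R(s, a, s') + \gamma \max_{a'} Q_k(s', a') \,\Bigr]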

Q-Learning Q-Learning: sample-based Q-value iteration. Learn Q(s,a) values as you go: receive a sample (s, a, s′, r); consider your old estimate Q(s,a); consider your new sample estimate; and incorporate the new estimate into a running average (see the update and sketch below). [Demo: Q-learning gridworld (L10D2)]
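
The update being described is the standard Q-learning running average: sample = R(s, a, s′) + γ max_{a′} Q(s′, a′), and Q(s, a) ← (1 − α) Q(s, a) + α · sample. A minimal tabular sketch follows (the environment interface below is an assumed placeholder, not the course's project code):

```python
import random
from collections import defaultdict

# Tabular Q-learning: sample-based Q-value iteration.
# env is assumed to expose reset() -> state, step(action) -> (next_state, reward, done),
# and actions(state) -> list of legal actions; these names are placeholders.
def q_learning(env, episodes=1000, alpha=0.5, gamma=1.0, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(s, a)], initialized to 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration (one simple way to "explore enough")
            acts = env.actions(s)
            if random.random() < epsilon:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # sample estimate of Q(s, a) from the observed transition
            future = 0.0 if done else max(Q[(s_next, a2)] for a2 in env.actions(s_next))
            sample = r + gamma * future
            # incorporate the new estimate into a running average
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
            s = s_next
    return Q
```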

Video of Demo Q-Learning -- Gridworld

Video of Demo Q-Learning -- Crawler

Q-Learning Properties Amazing result: Q-learning converges to the optimal policy -- even if you're acting suboptimally! This is called off-policy learning. Caveats: you have to explore enough; you have to eventually make the learning rate small enough, but not decrease it too quickly. Basically, in the limit, it doesn't matter how you select actions (!)
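
One standard way to make these caveats precise (a condition from the general Q-learning convergence theory, not stated explicitly on the slide): every state-action pair must be tried infinitely often, and the learning rates \alpha_t used for each (s, a) must satisfy

\sum_t \alpha_t = \infty \qquad \text{and} \qquad \sum_t \alpha_t^2 < \infty,

for example \alpha_t = 1/t.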