Reinforcement Learning


Reinforcement Learning [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Reinforcement Learning

Reinforcement Learning [Diagram: agent-environment loop; the environment provides state s and reward r, the agent takes actions a.] Basic idea: Receive feedback in the form of rewards. The agent's utility is defined by the reward function. Must (learn to) act so as to maximize expected rewards. All learning is based on observed samples of outcomes!

Example: Learning to Walk [Kohl and Stone, ICRA 2004] [Video panels: Initial, A Learning Trial, After Learning (1K Trials)]

The Crawler! [Demo: Crawler Bot (L10D1)] [You, in Project 3]

Reinforcement Learning Still assume a Markov decision process (MDP): a set of states s ∈ S, a set of actions (per state) A, a model T(s,a,s'), a reward function R(s,a,s'). Still looking for a policy π(s). New twist: we don't know T or R, i.e. we don't know which states are good or what the actions do. Must actually try actions and states out to learn.
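To make the "unknown T and R" point concrete, here is a minimal sketch (names and interface are my own, not from the slides) of an environment that hides its dynamics; the learner can only sample transitions by acting:

```python
import random

class HiddenMDP:
    """An MDP whose transition model T and reward function R are hidden.
    The agent cannot inspect them; it can only sample outcomes via step()."""

    def __init__(self, transitions):
        # transitions: dict mapping (s, a) -> list of (prob, next_state, reward)
        self._transitions = transitions

    def actions(self, s):
        """Actions available in state s (this much the agent is allowed to know)."""
        return [a for (state, a) in self._transitions if state == s]

    def step(self, s, a):
        """Try action a in state s; sample (s', r) from the hidden T and R."""
        outcomes = self._transitions[(s, a)]
        weights = [p for p, _, _ in outcomes]
        _, s_next, r = random.choices(outcomes, weights=weights, k=1)[0]
        return s_next, r
```

Everything that follows (model-based learning, direct evaluation, TD, Q-learning) consumes only the (s, a, s', r) samples such an interface returns.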

Offline (MDPs) vs. Online (RL) Offline Solution Online Learning

Model-Based Learning

Model-Based Learning Model-based idea: Learn an approximate model based on experiences, then solve for values as if the learned model were correct. Step 1: Learn the empirical MDP model. Count outcomes s' for each (s, a); normalize to give an estimate of T̂(s,a,s'); discover each R̂(s,a,s') when we experience (s, a, s'). Step 2: Solve the learned MDP. For example, use value iteration, as before.

Example: Model-Based Learning Input policy π (states A, B, C, D, E). Assume: γ = 1. Observed Episodes (Training): Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10. Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10. Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10. Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10. Learned model: T(B, east, C) = 1.00, T(C, east, D) = 0.75, T(C, east, A) = 0.25; R(B, east, C) = -1, R(C, east, D) = -1, R(D, exit, x) = +10.
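A short sketch (variable names assumed) that reproduces the learned model above: count the outcome s' for every (s, a) in the four episodes, then normalize the counts.

```python
from collections import Counter, defaultdict

# The four observed episodes from the slide, as (s, a, s', r) transitions.
episodes = [
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "A", -1), ("A", "exit", "x", -10)],
]

counts = defaultdict(Counter)   # (s, a) -> Counter over observed next states
rewards = {}                    # (s, a, s') -> observed reward
for episode in episodes:
    for s, a, s_next, r in episode:
        counts[(s, a)][s_next] += 1
        rewards[(s, a, s_next)] = r

# Normalize the counts to get the estimated transition model T_hat.
T_hat = {(s, a, s_next): n / sum(ctr.values())
         for (s, a), ctr in counts.items()
         for s_next, n in ctr.items()}

print(T_hat[("C", "east", "D")])    # 0.75, as on the slide
print(T_hat[("C", "east", "A")])    # 0.25
print(rewards[("B", "east", "C")])  # -1
```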

Example: Expected Age Goal: Compute the expected age of students in this class. If P(A) is known: E[A] = Σ_a P(a) · a. Without P(A), instead collect samples [a_1, a_2, ..., a_N]. Unknown P(A), model-based: estimate P̂(a) = num(a)/N, then E[A] ≈ Σ_a P̂(a) · a. Unknown P(A), model-free: E[A] ≈ (1/N) Σ_i a_i. Why does the model-based estimate work? Because eventually you learn the right model. Why does the model-free estimate work? Because samples appear with the right frequencies.
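A quick sketch of the two estimators on the expected-age example; the distribution and sample size below are made up for illustration.

```python
import random
from collections import Counter

random.seed(0)
P = {20: 0.35, 21: 0.35, 22: 0.30}                       # known distribution P(A)
samples = random.choices(list(P), weights=list(P.values()), k=1000)

# Known P(A): exact expectation E[A] = sum over a of P(a) * a
exact = sum(p * a for a, p in P.items())

# Model-based: estimate P_hat(a) from sample frequencies, then take the expectation
freq = Counter(samples)
model_based = sum((n / len(samples)) * a for a, n in freq.items())

# Model-free: average the samples directly
model_free = sum(samples) / len(samples)

print(exact, model_based, model_free)  # all close to 20.95
```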

Model-Free Learning

Passive Reinforcement Learning

Passive Reinforcement Learning Simplified task: policy evaluation. Input: a fixed policy π(s). You don't know the transitions T(s,a,s'). You don't know the rewards R(s,a,s'). Goal: learn the state values. In this case: the learner is along for the ride; no choice about what actions to take; just execute the policy and learn from experience. This is NOT offline planning! You actually take actions in the world.

Direct Evaluation Goal: Compute values for each state under π. Idea: Average together observed sample values. Act according to π. Every time you visit a state, write down what the sum of discounted rewards turned out to be. Average those samples. This is called direct evaluation.

Example: Direct Evaluation Input policy π (states A, B, C, D, E). Assume: γ = 1. Observed Episodes (Training): Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10. Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10. Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10. Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10. Output values: A = -10, B = +8, C = +4, D = +10, E = -2.
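A sketch of direct evaluation on these four episodes (episode encoding assumed): every time a state is visited, record the sum of discounted rewards from that point on, then average per state. It reproduces the output values above.

```python
from collections import defaultdict

gamma = 1.0
episodes = [
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "D", -1), ("D", "exit", "x", +10)],
    [("E", "north", "C", -1), ("C", "east", "A", -1), ("A", "exit", "x", -10)],
]

returns = defaultdict(list)  # state -> observed returns from visits to that state
for episode in episodes:
    G = 0.0
    # Walk the episode backwards so G is always the reward-to-go from this step.
    for s, a, s_next, r in reversed(episode):
        G = r + gamma * G
        returns[s].append(G)

values = {s: sum(gs) / len(gs) for s, gs in returns.items()}
print(values)  # {'D': 10.0, 'C': 4.0, 'B': 8.0, 'E': -2.0, 'A': -10.0}
```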

Problems with Direct Evaluation What's good about direct evaluation? It's easy to understand. It doesn't require any knowledge of T or R. It eventually computes the correct average values, using just sample transitions. What's bad about it? It wastes information about state connections. Each state must be learned separately, so it takes a long time to learn. (Output values: A = -10, B = +8, C = +4, D = +10, E = -2.) If B and E both go to C under this policy, how can their values be different?

Why Not Use Policy Evaluation? Simplified Bellman updates calculate V for a fixed policy. Each round, replace V with a one-step-look-ahead layer over V: V^π_{k+1}(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π_k(s') ]. This approach fully exploited the connections between the states. Unfortunately, we need T and R to do it! Key question: how can we do this update to V without knowing T and R? In other words, how do we take a weighted average without knowing the weights?

Sample-Based Policy Evaluation? We want to improve our estimate of V^π by computing these averages without the model. Idea: Take samples of outcomes s' (by doing the action!) and average: sample_i = R(s, π(s), s'_i) + γ V^π_k(s'_i), and V^π_{k+1}(s) ← (1/n) Σ_i sample_i. Almost! But we can't rewind time to get sample after sample from state s.

Temporal Difference Learning Big idea: learn from every experience! Update V(s) each time we experience a transition (s, a, s', r). Likely outcomes s' will contribute updates more often. Temporal difference learning of values: policy still fixed, still doing evaluation! Move values toward the value of whatever successor occurs: a running average. Sample of V(s): sample = R(s, π(s), s') + γ V^π(s'). Update to V(s): V^π(s) ← (1 - α) V^π(s) + α · sample. Same update: V^π(s) ← V^π(s) + α (sample - V^π(s)).
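A minimal sketch of that update (function and variable names are mine): nudge V(s) toward the one-sample target r + γ·V(s') with learning rate α.

```python
def td_update(V, s, r, s_next, alpha=0.5, gamma=1.0):
    """One temporal-difference update for a transition (s, pi(s), s', r)
    observed while following the fixed policy pi."""
    sample = r + gamma * V.get(s_next, 0.0)               # one-sample estimate of V(s)
    V[s] = (1 - alpha) * V.get(s, 0.0) + alpha * sample   # move the running average toward it
    return V[s]
```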

Exponential Moving Average The running interpolation update: mean_n = (1 - α) · mean_{n-1} + α · x_n. Makes recent samples more important and forgets about the past (distant past values were wrong anyway). A decreasing learning rate (α) can give converging averages.

Example: Temporal Difference Learning Assume: γ = 1, α = 1/2. Observed transitions: B, east, C, -2 and C, east, D, -2. Values over states A, B, C, D, E: initially 0, 0, 0, 8, 0; after the first transition 0, -1, 0, 8, 0; after the second transition 0, -1, 3, 8, 0.
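Plugging the slide's numbers into the update (α = 1/2, γ = 1, initial values A = B = C = E = 0 and D = 8):

```python
V = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 8.0, "E": 0.0}
alpha, gamma = 0.5, 1.0

# Observe B, east, C, -2:  V(B) <- (1 - 1/2)*0 + 1/2*(-2 + 1*0) = -1
V["B"] = (1 - alpha) * V["B"] + alpha * (-2 + gamma * V["C"])

# Observe C, east, D, -2:  V(C) <- (1 - 1/2)*0 + 1/2*(-2 + 1*8) = 3
V["C"] = (1 - alpha) * V["C"] + alpha * (-2 + gamma * V["D"])

print(V)  # {'A': 0.0, 'B': -1.0, 'C': 3.0, 'D': 8.0, 'E': 0.0}
```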

Problems with TD Value Learning TD value learning is a model-free way to do policy evaluation, mimicking Bellman updates with running sample averages. However, if we want to turn values into a (new) policy, we're sunk: π(s) = argmax_a Q(s,a), where Q(s,a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ] requires T and R. Idea: learn Q-values, not values. Makes action selection model-free too!

Active Reinforcement Learning

Active Reinforcement Learning Full reinforcement learning: optimal policies (like value iteration). You don't know the transitions T(s,a,s'). You don't know the rewards R(s,a,s'). You choose the actions now. Goal: learn the optimal policy / values. In this case: the learner makes choices! Fundamental tradeoff: exploration vs. exploitation. This is NOT offline planning! You actually take actions in the world and find out what happens.

Detour: Q-Value Iteration Value iteration: find successive (depth-limited) values. Start with V_0(s) = 0, which we know is right. Given V_k, calculate the depth k+1 values for all states: V_{k+1}(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_k(s') ]. But Q-values are more useful, so compute them instead. Start with Q_0(s,a) = 0, which we know is right. Given Q_k, calculate the depth k+1 q-values for all q-states: Q_{k+1}(s,a) ← Σ_{s'} T(s,a,s') [ R(s,a,s') + γ max_{a'} Q_k(s',a') ].
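A sketch of Q-value iteration over a known model, using an assumed representation of T and R as a dict mapping (s, a) to a list of (prob, s', reward) outcomes:

```python
from collections import defaultdict

def q_value_iteration(T, gamma=0.9, iterations=100):
    """Repeatedly apply
    Q_{k+1}(s,a) = sum_{s'} T(s,a,s') * [ R(s,a,s') + gamma * max_{a'} Q_k(s',a') ]."""
    Q = defaultdict(float)          # Q_0(s, a) = 0, which we know is right
    actions = defaultdict(list)     # state -> actions available in that state
    for (s, a) in T:
        actions[s].append(a)

    for _ in range(iterations):
        Q_next = defaultdict(float)
        for (s, a), outcomes in T.items():
            Q_next[(s, a)] = sum(
                p * (r + gamma * max((Q[(s2, a2)] for a2 in actions[s2]), default=0.0))
                for p, s2, r in outcomes
            )
        Q = Q_next
    return Q
```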

Q-Learning Q-learning: sample-based Q-value iteration. Learn Q(s,a) values as you go. Receive a sample (s, a, s', r). Consider your old estimate: Q(s,a). Consider your new sample estimate: sample = r + γ max_{a'} Q(s',a'). Incorporate the new estimate into a running average: Q(s,a) ← (1 - α) Q(s,a) + α · sample. [Demo: Q-learning gridworld (L10D2)] [Demo: Q-learning crawler (L10D3)]
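A sketch of the Q-learning update plus an ε-greedy action chooser (the names and the legal_actions callable are assumptions, not from the slides):

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # Q(s, a) estimates, start at 0

def q_learning_update(s, a, s_next, r, legal_actions, alpha=0.1, gamma=0.9):
    """Blend the old estimate Q(s,a) with the sample r + gamma * max_a' Q(s', a')."""
    sample = r + gamma * max((Q[(s_next, a2)] for a2 in legal_actions(s_next)),
                             default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

def epsilon_greedy(s, legal_actions, epsilon=0.1):
    """Explore a random action with probability epsilon; otherwise exploit Q."""
    acts = list(legal_actions(s))
    if random.random() < epsilon:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(s, a)])
```

Because the sample takes a max over a' rather than using the action actually chosen next, the update learns about the optimal policy even while exploring, which is the off-policy property noted on the next slide.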

Q-Learning Properties Amazing result: Q-learning converges to the optimal policy -- even if you're acting suboptimally! This is called off-policy learning. Caveats: you have to explore enough; you have to eventually make the learning rate small enough, but not decrease it too quickly. Basically, in the limit, it doesn't matter how you select actions (!)