Announcements o Homework 3 o Due 2/18 at 11:59pm o Project 2 o Due 2/22 at 4:00pm o Tutoring: read @260 on Piazza, we now have 1:1 tutoring available

CS 188: Artificial Intelligence Reinforcement Learning Instructor: Sergey Levine & Anca Dragan University of California, Berkeley [Slides by Dan Klein, Pieter Abbeel, Anca Dragan, Sergey Levine. http://ai.berkeley.edu.]

Before: Markov Decision Processes o Still assume a Markov decision process (MDP): o A set of states s ∈ S o A set of actions (per state) A o A model T(s,a,s') o A reward function R(s,a,s')

Reinforcement Learning

Example: Prescription Problem [Diagram: four prescriptions with known cure probabilities P(cure) = 0.2, 0.4, 0.9, 0.1; from start, outcome cured gives +1 and outcome dead gives -1]

Example: Prescription Problem [Same diagram, but now the cure probabilities are unknown: P(cure) = ? for each prescription; outcomes cured +1, dead -1]

Let's Play! [Same diagram with unknown P(cure) for each prescription] http://iosband.github.io/2015/07/28/beat-the-bandit.html

What Just Happened? o That wasn't planning, it was learning! o Specifically, reinforcement learning o There was an MDP, but you couldn't solve it with just computation o You needed to actually act to figure it out o Important ideas in reinforcement learning that came up o Exploration: you have to try unknown actions to get information o Exploitation: eventually, you have to use what you know o Regret: even if you learn intelligently, you make mistakes o Sampling: because of chance, you have to try things repeatedly o Difficulty: learning can be much harder than solving a known MDP

Reinforcement Learning o Still assume a Markov decision process (MDP): o A set of states s ∈ S o A set of actions (per state) A o A model T(s,a,s') o A reward function R(s,a,s') o Still looking for a policy π(s) o New twist: don't know T or R o I.e. we don't know which states are good or what the actions do o Must actually try actions and states out to learn

Reinforcement Learning [Diagram: the Agent sends actions a to the Environment; the Environment returns a state s and a reward r] o Basic idea: o Receive feedback in the form of rewards o Agent's utility is defined by the reward function o Must (learn to) act so as to maximize expected rewards o All learning is based on observed samples of outcomes!

Cheetah

Atari Two Minute Lectures

Robots

Robots

The Crawler! [Demo: Crawler Bot (L10D1)] [You, in Project 3]

Video of Demo Crawler Bot

Reinforcement Learning o Still assume a Markov decision process (MDP): o A set of states s ∈ S o A set of actions (per state) A o A model T(s,a,s') o A reward function R(s,a,s') o Still looking for a policy π(s) o New twist: don't know T or R o I.e. we don't know which states are good or what the actions do o Must actually try actions and states out to learn

Offline (MDPs) vs. Online (RL) Offline Solution Online Learning

Model-Based Learning

Model-Based Learning o Model-Based Idea: o Learn an approximate model based on experiences o Solve for values as if the learned model were correct o Step 1: Learn empirical MDP model o Count outcomes s' for each s, a o Normalize to give an estimate T̂(s,a,s') o Discover each R̂(s,a,s') when we experience (s, a, s') o Step 2: Solve the learned MDP o For example, use value iteration, as before
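
A minimal sketch of Step 1 (the counting and normalization), assuming experiences arrive as (s, a, s', r) tuples; the names estimate_model, counts, and rewards are illustrative, not from the slides.

```python
from collections import defaultdict

def estimate_model(experiences):
    """Build empirical T-hat and R-hat from observed (s, a, s', r) transitions."""
    counts = defaultdict(lambda: defaultdict(int))   # counts[(s, a)][s'] = number of times seen
    rewards = {}                                     # rewards[(s, a, s')] = observed reward

    for s, a, s_next, r in experiences:
        counts[(s, a)][s_next] += 1
        rewards[(s, a, s_next)] = r                  # assumes a deterministic reward per transition

    T_hat = {}
    for (s, a), outcomes in counts.items():
        total = sum(outcomes.values())
        for s_next, n in outcomes.items():
            T_hat[(s, a, s_next)] = n / total        # normalize counts to probabilities
    return T_hat, rewards
```

Fed the four training episodes from the next slide, this recovers T(B, east, C) = 1.00 and T(C, east, D) = 0.75, matching the learned model shown there.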

Example: Model-Based Learning
Input policy π over gridworld states A, B, C, D, E. Assume: γ = 1
Observed Episodes (Training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10
Learned Model:
T(s,a,s'): T(B, east, C) = 1.00, T(C, east, D) = 0.75, T(C, east, A) = 0.25
R(s,a,s'): R(B, east, C) = -1, R(C, east, D) = -1, R(D, exit, x) = +10

Example: Expected Age Goal: Compute expected age of cs188 students Known P(A): E[A] = Σ_a P(a) · a Without P(A), instead collect samples [a_1, a_2, ..., a_N] Unknown P(A), Model Based: P̂(a) = num(a)/N, so E[A] ≈ Σ_a P̂(a) · a Unknown P(A), Model Free: E[A] ≈ (1/N) Σ_i a_i Why does this work? Because eventually you learn the right model. Why does this work? Because samples appear with the right frequencies.
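
A tiny sketch of the two estimators from this slide, assuming ages arrive as a plain list of samples; the function name and the sample data are illustrative.

```python
from collections import Counter
import random

def expected_age(samples):
    """Estimate E[A] from sampled ages, the model-based way and the model-free way."""
    N = len(samples)

    # Model-based: estimate P-hat(a) from counts, then take the weighted sum.
    counts = Counter(samples)
    p_hat = {a: n / N for a, n in counts.items()}
    model_based = sum(p_hat[a] * a for a in p_hat)

    # Model-free: average the samples directly.
    model_free = sum(samples) / N

    return model_based, model_free

ages = [random.choice([19, 20, 21, 22]) for _ in range(1000)]  # made-up sample data
print(expected_age(ages))
```

For this simple quantity the two estimates are the same sum regrouped; the distinction matters once the learned model is used for further computation, as in the model-based learning of an MDP above.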

Model-Free Learning

Passive Reinforcement Learning

Passive Reinforcement Learning o Simplified task: policy evaluation o Input: a fixed policy π(s) o You don't know the transitions T(s,a,s') o You don't know the rewards R(s,a,s') o Goal: learn the state values o In this case: o Learner is along for the ride o No choice about what actions to take o Just execute the policy and learn from experience o This is NOT offline planning! You actually take actions in the world.

Direct Evaluation o Goal: Compute values for each state under π o Idea: Average together observed sample values o Act according to π o Every time you visit a state, write down what the sum of discounted rewards turned out to be o Average those samples o This is called direct evaluation
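
A minimal sketch of direct evaluation, assuming each episode is a list of (state, reward) pairs in the order visited, with the reward received upon leaving that state; names are illustrative.

```python
from collections import defaultdict

def direct_evaluation(episodes, gamma=1.0):
    """Average the observed discounted return from every visit to every state."""
    returns = defaultdict(list)

    for episode in episodes:                 # episode = [(s0, r0), (s1, r1), ...]
        G = 0.0
        # Walk backwards so G accumulates the return from each state onward.
        for s, r in reversed(episode):
            G = r + gamma * G
            returns[s].append(G)

    return {s: sum(gs) / len(gs) for s, gs in returns.items()}
```

Run on the four training episodes on the next slide, this reproduces the output values shown there.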

Example: Direct Evaluation
Input policy π over gridworld states A, B, C, D, E. Assume: γ = 1
Observed Episodes (Training):
Episode 1: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 2: B, east, C, -1; C, east, D, -1; D, exit, x, +10
Episode 3: E, north, C, -1; C, east, D, -1; D, exit, x, +10
Episode 4: E, north, C, -1; C, east, A, -1; A, exit, x, -10
Output Values: A = -10, B = +8, C = +4, D = +10, E = -2

Problems with Direct Evaluation o What's good about direct evaluation? o It's easy to understand o It doesn't require any knowledge of T, R o It eventually computes the correct average values, using just sample transitions o What's bad about it? o It wastes information about state connections o Each state must be learned separately o So, it takes a long time to learn [Output values from the previous slide: A = -10, B = +8, C = +4, D = +10, E = -2] If B and E both go to C under this policy, how can their values be different?

Why Not Use Policy Evaluation? o Simplified Bellman updates calculate V for a fixed policy: o Each round, replace V with a one-step-look-ahead layer over V: V_{k+1}^π(s) ← Σ_s' T(s,π(s),s') [ R(s,π(s),s') + γ V_k^π(s') ] o This approach fully exploited the connections between the states o Unfortunately, we need T and R to do it! o Key question: how can we do this update to V without knowing T and R? o In other words, how do we take a weighted average without knowing the weights?
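
For contrast with the sample-based methods that follow, here is a minimal sketch of the exact update above when T and R are known, assuming T and R are dicts keyed by (s, a, s') as in the learned-model example; all names are illustrative.

```python
def policy_evaluation(states, policy, T, R, gamma=1.0, iterations=100):
    """Repeatedly apply the fixed-policy Bellman update using a known model."""
    V = {s: 0.0 for s in states}
    for _ in range(iterations):
        V_new = {}
        for s in states:
            a = policy[s]
            # Weighted average over successors: the weights are exactly T(s, a, s').
            V_new[s] = sum(T.get((s, a, s2), 0.0) * (R.get((s, a, s2), 0.0) + gamma * V[s2])
                           for s2 in states)
        V = V_new
    return V
```

The sample-based and temporal-difference updates on the next slides replace that weighted average with averages over successors that actually occur.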

Sample-Based Policy Evaluation? o We want to improve our estimate of V by computing these averages: V_{k+1}^π(s) ← Σ_s' T(s,π(s),s') [ R(s,π(s),s') + γ V_k^π(s') ] o Idea: Take samples of outcomes s' (by doing the action!) and average: sample_1 = R(s,π(s),s'_1) + γ V_k^π(s'_1), sample_2 = R(s,π(s),s'_2) + γ V_k^π(s'_2), ..., sample_n = R(s,π(s),s'_n) + γ V_k^π(s'_n), then V_{k+1}^π(s) ← (1/n) Σ_i sample_i Almost! But we can't rewind time to get sample after sample from state s.

Temporal Difference Learning o Big idea: learn from every experience! o Update V(s) each time we experience a transition (s, a, s', r) o Likely outcomes s' will contribute updates more often o Temporal difference learning of values o Policy still fixed, still doing evaluation! o Move values toward value of whatever successor occurs: running average o Sample of V(s): sample = R(s,π(s),s') + γ V^π(s') o Update to V(s): V^π(s) ← (1-α) V^π(s) + α · sample o Same update: V^π(s) ← V^π(s) + α · (sample - V^π(s))
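
A minimal sketch of the TD update as one would code it, assuming a stream of (s, a, s', r) transitions generated by following the fixed policy; names are illustrative.

```python
from collections import defaultdict

def td_value_learning(transitions, gamma=1.0, alpha=0.5):
    """Temporal-difference policy evaluation from a stream of (s, a, s', r) transitions."""
    V = defaultdict(float)                           # V(s) starts at 0 for every state
    for s, a, s_next, r in transitions:
        sample = r + gamma * V[s_next]               # value of what actually happened
        V[s] = (1 - alpha) * V[s] + alpha * sample   # move V(s) toward the sample
    return dict(V)
```

Seeding V with the initial values from the Example: Temporal Difference Learning slide below (V(D) = 8, everything else 0) and feeding the two observed transitions reproduces the updates shown there: V(B) becomes -1, then V(C) becomes 3.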

Exponential Moving Average o Exponential moving average o The running interpolation update: x̄_n = (1-α) · x̄_{n-1} + α · x_n o Makes recent samples more important: x̄_n = [x_n + (1-α) x_{n-1} + (1-α)^2 x_{n-2} + ...] / [1 + (1-α) + (1-α)^2 + ...] o Forgets about the past (distant past values were wrong anyway) o Decreasing learning rate (alpha) can give converging averages

Example: Temporal Difference Learning
Assume: γ = 1, α = 1/2
States: gridworld with A, B, C, D, E; initial values A = 0, B = 0, C = 0, D = 8, E = 0
Observed transition B, east, C, -2: V(B) updates from 0 to -1 (other values unchanged)
Observed transition C, east, D, -2: V(C) updates from 0 to 3 (other values unchanged)

Active Reinforcement Learning

Problems with TD Value Learning o TD value learning is a model-free way to do policy evaluation, mimicking Bellman updates with running sample averages o However, if we want to turn values into a (new) policy, we're sunk: π(s) = argmax_a Q(s,a), where Q(s,a) = Σ_s' T(s,a,s') [ R(s,a,s') + γ V(s') ] o Idea: learn Q-values, not values o Makes action selection model-free too!

Detour: Q-Value Iteration o Value iteration: find successive (depth-limited) values o Start with V_0(s) = 0, which we know is right o Given V_k, calculate the depth k+1 values for all states: V_{k+1}(s) ← max_a Σ_s' T(s,a,s') [ R(s,a,s') + γ V_k(s') ] o But Q-values are more useful, so compute them instead o Start with Q_0(s,a) = 0, which we know is right o Given Q_k, calculate the depth k+1 q-values for all q-states: Q_{k+1}(s,a) ← Σ_s' T(s,a,s') [ R(s,a,s') + γ max_a' Q_k(s',a') ]
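
A minimal sketch of Q-value iteration with a known model, assuming T and R are dicts keyed by (s, a, s') and a single global action set; all names are illustrative.

```python
def q_value_iteration(states, actions, T, R, gamma=1.0, iterations=100):
    """Compute Q_k(s, a) by repeatedly applying the q-state Bellman update."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(iterations):
        Q_new = {}
        for s in states:
            for a in actions:
                # Expectation over successors, each valued by its best next action.
                Q_new[(s, a)] = sum(
                    T.get((s, a, s2), 0.0)
                    * (R.get((s, a, s2), 0.0)
                       + gamma * max(Q[(s2, a2)] for a2 in actions))
                    for s2 in states)
        Q = Q_new
    return Q
```

Q-learning, next, is the sample-based version of exactly this update.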

Q-Learning o Q-Learning: sample-based Q-value iteration o Learn Q(s,a) values as you go o Receive a sample (s,a,s',r) o Consider your old estimate: Q(s,a) o Consider your new sample estimate: sample = R(s,a,s') + γ max_a' Q(s',a') o Incorporate the new estimate into a running average: Q(s,a) ← (1-α) Q(s,a) + α · sample [Demo: Q-learning gridworld (L10D2)] [Demo: Q-learning crawler (L10D3)]
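
A minimal sketch of the tabular Q-learning update, assuming transitions arrive one at a time as the agent acts; the names make_q_learner and actions(s') are illustrative, with actions(s') standing in for whatever enumerates the legal actions in s'.

```python
from collections import defaultdict

def make_q_learner(actions, gamma=1.0, alpha=0.5):
    """Return a Q-table plus an update function implementing tabular Q-learning."""
    Q = defaultdict(float)                     # Q(s, a) starts at 0

    def update(s, a, s_next, r):
        # New sample estimate: reward plus discounted value of the best next action.
        # Terminal states have no legal actions, so their value defaults to 0.
        sample = r + gamma * max((Q[(s_next, a2)] for a2 in actions(s_next)), default=0.0)
        # Fold the sample into the running average for (s, a).
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

    return Q, update
```

Calling update(s, a, s', r) after every observed transition is all the algorithm needs; how the actions are chosen is a separate question, taken up under exploration below.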

Video of Demo Q-Learning -- Gridworld

Video of Demo Q-Learning -- Crawler

Q-Learning: act according to current policy (and also explore) o Full reinforcement learning: optimal policies (like value iteration) o You don't know the transitions T(s,a,s') o You don't know the rewards R(s,a,s') o You choose the actions now o Goal: learn the optimal policy / values o In this case: o Learner makes choices! o Fundamental tradeoff: exploration vs. exploitation o This is NOT offline planning! You actually take actions in the world and find out what happens

Q-Learning Properties o Amazing result: Q-learning converges to optimal policy -- even if you're acting suboptimally! o This is called off-policy learning o Caveats: o You have to explore enough o You have to eventually make the learning rate small enough o ... but not decrease it too quickly o Basically, in the limit, it doesn't matter how you select actions (!)

Exploration vs. Exploitation

How to Explore? o Several schemes for forcing exploration o Simplest: random actions (ε-greedy) o Every time step, flip a coin o With (small) probability ε, act randomly o With (large) probability 1-ε, act on current policy o Problems with random actions? o You do eventually explore the space, but keep thrashing around once learning is done o One solution: lower ε over time
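
A minimal sketch of ε-greedy action selection on top of a Q-table, assuming Q is keyed by (s, a) and legal_actions(s) enumerates the choices; names are illustrative.

```python
import random

def epsilon_greedy(Q, s, legal_actions, epsilon=0.1):
    """With probability epsilon act randomly, otherwise act greedily w.r.t. Q."""
    actions = legal_actions(s)
    if random.random() < epsilon:
        return random.choice(actions)                       # explore
    return max(actions, key=lambda a: Q.get((s, a), 0.0))   # exploit current estimate
```

Lowering epsilon over time (for example, a schedule like ε = 1/t) is one way to implement the slide's fix for the thrashing problem once learning is done.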