Reinforcement Learning. CS 188: Artificial Intelligence, Fall 2008. Model-Free Learning. Q-Learning. Q-Learning Properties. Exploration / Exploitation

CS 188: Artificial Intelligence, Fall 2008
Lecture 12: Reinforcement Learning
Dan Klein, UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore.

Reinforcement Learning [DEMO]
- Reinforcement learning: we still have an MDP:
  - a set of states s ∈ S
  - a set of actions A (per state)
  - a model T(s, a, s')
  - a reward function R(s, a, s')
- We are still looking for a policy π(s).
- New twist: we don't know T or R.
  - That is, we don't know which states are good or what the actions do.
  - We must actually try out actions and states to learn.

Model-Free Learning [DEMO: Grid Q's]
- Temporal-difference learning: update each time we experience a transition s, π(s) → s'.
- Frequent outcomes will contribute more updates over time.

Q-Learning
- Learn Q*(s, a) values.
- Receive a sample (s, a, s', r).
- Consider your old estimate: Q(s, a).
- Consider your new sample estimate: sample = r + γ max_a' Q(s', a').
- Incorporate the new estimate into a running average (see the code sketch below):
  Q(s, a) ← (1 − α) Q(s, a) + α · sample

Q-Learning Properties
- Q-learning will converge to the optimal policy:
  - if you explore enough,
  - and if you make the learning rate small enough (but don't decrease it too quickly!).
- Basically, it doesn't matter how you select actions (!).
- Neat property: Q-learning learns the optimal q-values regardless of action-selection noise (with some caveats). [DEMO: Grid Q's]

Exploration / Exploitation [DEMO: RL Pacman]
- There are several schemes for forcing exploration.
- Simplest: random actions (ε-greedy).
  - Every time step, flip a coin.
  - With probability ε, act randomly.
  - With probability 1 − ε, act according to the current policy.
- Problems with random actions? You do explore the space, but you keep thrashing around once learning is done.
  - One solution: lower ε over time.
  - Another solution: exploration functions.
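The following is a minimal sketch of the tabular Q-learning update above combined with ε-greedy action selection. The environment interface (reset, actions, step) is a hypothetical stand-in, not part of the course projects.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=100, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration.

    `env` is a hypothetical environment with reset() -> s, actions(s) -> list,
    and step(s, a) -> (s_next, reward, done). Q maps (state, action) -> value.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: with probability epsilon act randomly,
            # otherwise act greedily with respect to the current Q estimates.
            if random.random() < epsilon:
                a = random.choice(env.actions(s))
            else:
                a = max(env.actions(s), key=lambda act: Q[(s, act)])

            s_next, r, done = env.step(s, a)

            # New sample estimate: r + gamma * max_a' Q(s', a').
            sample = r
            if not done:
                sample += gamma * max(Q[(s_next, a2)] for a2 in env.actions(s_next))

            # Running average: blend the old estimate with the new sample.
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
            s = s_next
    return Q
```

Lowering ε (and the learning rate α) over episodes, as suggested above, is a one-line change to this loop.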

Exploration Functions [DEMO: Crawler Q's]
- When to explore?
  - Random actions explore a fixed amount.
  - Better idea: explore areas whose badness is not (yet) established.
- An exploration function takes a value estimate u and a visit count n and returns an optimistic utility, e.g. f(u, n) = u + k / n (the exact form is not important).

Generalizing Across States
- Q-learning produces tables of q-values, but in realistic situations we cannot possibly learn about every single state!
  - There are too many states to visit them all in training.
  - There are too many states to hold the q-tables in memory.
- Instead, we want to generalize:
  - learn about some small number of training states from experience,
  - and generalize that experience to new, similar states.
- This is a fundamental idea in machine learning, and we'll see it over and over again.

Example: Pacman
- Let's say we discover through experience that a particular state is bad. In naïve q-learning, we know nothing about nearly identical states or their q-states, not even one that differs only trivially. [figures: three almost identical Pacman states]

Feature-Based Representations
- Solution: describe a state using a vector of features.
- Features are functions from states to real numbers (often 0/1) that capture important properties of the state.
- Example features: distance to the closest ghost, distance to the closest dot, number of ghosts, 1 / (distance to dot)², "is Pacman in a tunnel?" (0/1), etc., or even "is it the exact state on this slide?"
- We can also describe a q-state (s, a) with features (e.g. "this action moves closer to food").

Linear Feature Functions
- Using a feature representation, we can write a q-function (or value function) for any state using a few weights:
  Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + … + w_n f_n(s, a)
- Advantage: our experience is summed up in a few powerful numbers.
- Disadvantage: states may share features but be very different in value!
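Below is a small runnable sketch of such a linear feature-based q-function. The toy state representation and the two features are illustrative assumptions, not the features used in the Pacman projects.

```python
def q_value(weights, features, state, action):
    """Linear q-function: Q(s, a) = sum_i w_i * f_i(s, a)."""
    return sum(w * f(state, action) for w, f in zip(weights, features))

# Toy state: positions along a 1-D corridor with one dot and one ghost.
def closeness_to_dot(state, action):
    pos = state["pacman"] + action            # action is -1, 0, or +1
    return 1.0 / (1.0 + abs(state["dot"] - pos))

def closeness_to_ghost(state, action):
    pos = state["pacman"] + action
    return 1.0 / (1.0 + abs(state["ghost"] - pos))

features = [closeness_to_dot, closeness_to_ghost]
weights = [4.0, -2.0]                         # dots are good, ghosts are bad

state = {"pacman": 3, "dot": 5, "ghost": 1}
best_action = max([-1, 0, 1], key=lambda a: q_value(weights, features, state, a))
print(best_action)                            # prints 1: move toward the dot
```

Every state along the corridor gets a q-value from just two weights, which is exactly the advantage (and the risk) noted above.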

Function Approximation (Example: Q-Pacman)
- Q-learning with linear q-functions: after each transition (s, a, r, s'), compute the difference between the sample and the current estimate,
  difference = [r + γ max_a' Q(s', a')] − Q(s, a)
  and update either the exact q-value, Q(s, a) ← Q(s, a) + α · difference, or, in the approximate case, every weight: w_i ← w_i + α · difference · f_i(s, a).
- Intuitive interpretation: adjust the weights of the active features. E.g. if something unexpectedly bad happens, disprefer all states with that state's features. (A code sketch of this update follows the least-squares discussion below.)
- Formal justification: online least squares.

Linear Regression
- [figures: scatter plots of example points with fitted lines, in one and two input dimensions]
- Given examples (x_i, y_i), predict the value y for a new point x.

Ordinary Least Squares (OLS)
- [figure: a fitted line with the vertical gap between each observation and its prediction marked as the error, or residual]
- Least squares chooses the weights that minimize the sum of squared errors over the examples.

Minimizing Error
- Imagine we had only one point x with features f(x) and target y:
  error(w) = ½ (y − Σ_k w_k f_k(x))²
  ∂ error / ∂ w_m = −(y − Σ_k w_k f_k(x)) f_m(x)
  w_m ← w_m + α (y − Σ_k w_k f_k(x)) f_m(x)
- Approximate q-update explained: the sample r + γ max_a' Q(s', a') plays the role of the target y and Q(s, a) is the prediction, so the weight update above is one step of online gradient descent on the squared error.
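Here is a minimal sketch of that approximate q-update. It reuses the linear q_value function from the earlier sketch; the transition arguments are whatever the (hypothetical) environment produced.

```python
def q_value(weights, features, state, action):
    """Linear q-function: Q(s, a) = sum_i w_i * f_i(s, a)."""
    return sum(w * f(state, action) for w, f in zip(weights, features))

def approx_q_update(weights, features, s, a, r, s_next, next_actions,
                    alpha=0.05, gamma=0.9):
    """One approximate Q-learning step: w_i <- w_i + alpha * difference * f_i(s, a).

    `next_actions` is the list of actions available in s_next (empty if terminal).
    """
    # Target (the new sample estimate): r + gamma * max_a' Q(s', a').
    target = r
    if next_actions:
        target += gamma * max(q_value(weights, features, s_next, a2)
                              for a2 in next_actions)
    # Difference between the target and the current prediction.
    difference = target - q_value(weights, features, s, a)
    # Gradient step: nudge each weight in proportion to its active feature.
    return [w + alpha * difference * f(s, a) for w, f in zip(weights, features)]
```

When the difference is very negative (something unexpectedly bad happened), every weight whose feature was active on (s, a) is pushed down, so all states sharing those features become dispreferred, exactly the intuitive interpretation above.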

Overfitting [DEMO: Helicopter]
- [figure: a high-degree polynomial fit that passes through the training points but oscillates wildly between them]

Policy Search [DEMO]
- Problem: often the feature-based policies that work well aren't the ones that approximate V or Q best.
  - E.g. your value functions from project 2 were probably horrible estimates of future rewards, but they still produced good decisions.
  - We'll see this distinction between modeling and prediction again later in the course.
- Solution: learn the policy that maximizes rewards rather than the value that predicts rewards.
- This is the idea behind policy search, such as what controlled the upside-down helicopter.

Policy Search (continued)
- Simplest policy search (a code sketch follows below):
  - Start with an initial linear value function or q-function.
  - Nudge each feature weight up and down and see if your policy is better than before.
- Problems:
  - How do we tell the policy got better? We need to run many sample episodes!
  - If there are a lot of features, this can be impractical.
- Advanced policy search:
  - Write a stochastic (soft) policy, i.e. one that assigns each action a probability that depends smoothly on the weights.
  - It turns out you can efficiently approximate the derivative of the returns with respect to the parameters w (details in the book; optional material).
  - Take uphill steps, recalculate derivatives, and so on.
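The following is a rough sketch of that simplest policy search: coordinate-wise hill climbing on the feature weights. The evaluator run_episodes(env, weights) is a hypothetical stand-in that would roll out the policy induced by the weights over many sample episodes and return the average reward; it is the expensive step the slide warns about.

```python
def hill_climb_policy_search(env, weights, run_episodes, step=0.1, iterations=20):
    """Simplest policy search: nudge each weight up and down, keep what helps.

    `run_episodes(env, weights)` is assumed to estimate the average return of
    the policy induced by `weights` from many sample episodes.
    """
    weights = list(weights)
    best_score = run_episodes(env, weights)
    for _ in range(iterations):
        for i in range(len(weights)):
            for delta in (step, -step):
                candidate = list(weights)
                candidate[i] += delta
                score = run_episodes(env, candidate)   # many sample episodes!
                if score > best_score:                 # keep the nudge only if
                    weights, best_score = candidate, score  # the policy improved
    return weights
```

With many features, each sweep tries two candidates per weight and every candidate needs many episodes to evaluate, which is why the slide calls this impractical and motivates gradient-based (soft-policy) methods.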

Take a Deep Breath
- We're done with search and planning!
- Next, we'll look at how to reason with probabilities:
  - diagnosis
  - tracking objects
  - speech recognition
  - robot mapping
  - and lots more!
- Last part of the course: machine learning.