REINFORCEMENT LEARNING

REINFORCEMENT LEARNING ADAM ECK (SUPPLEMENTED BY LEEN-KIAT SOH) CSCE 990: Advanced MAS

Machine Learning
3 primary types of machine learning:
- Supervised Learning: learning how to predict and classify (decision trees, neural networks, SVMs)
- Unsupervised Learning: learning how to group and find relationships (clustering: k-means, spectral)
- Reinforcement Learning (RL): learning how to act and make decisions (Q-learning, RMax, REINFORCE)

Reinforcement Learning
Learn rewards based on environment feedback:
- Positive rewards
- Negative rewards

Single Agent Reinforcement Learning
Framework: Markov Decision Process
- States S: description of the environment
- Actions A: actions taken to change the environment
- Transitions T(s, a, s′): model dynamic changes in the environment
- Reward R(s, a): numeric result of an action
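To make the four components concrete, here is a minimal Python sketch of an MDP as plain data structures; the toy two-state environment and all names are illustrative assumptions, not part of the lecture.

```python
# Illustrative sketch of the MDP tuple (S, A, T, R); the toy environment below
# is an assumption made up for this example.
from typing import Dict, List, Tuple

States = List[str]
Actions = List[str]
Transition = Dict[Tuple[str, str], Dict[str, float]]   # T(s, a, s') stored as P(s' | s, a)
Reward = Dict[Tuple[str, str], float]                  # R(s, a)

S: States = ["cold", "warm"]
A: Actions = ["heat", "wait"]
T: Transition = {
    ("cold", "heat"): {"warm": 0.9, "cold": 0.1},
    ("cold", "wait"): {"cold": 1.0},
    ("warm", "heat"): {"warm": 1.0},
    ("warm", "wait"): {"cold": 0.5, "warm": 0.5},
}
R: Reward = {
    ("cold", "heat"): -1.0, ("cold", "wait"): -2.0,
    ("warm", "heat"):  0.0, ("warm", "wait"):  1.0,
}
```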

Single Agent Reinforcement Learning
Reinforcement Learning Problem:
- Given S and A
- Need to learn R (and maybe T):
  - a mapping of state/action pairs to reward values and probabilities of next states
  - learned from the history (state/action/reward sequence) H = s0, a0, r0, s1, a1, r1, s2, …
- Use the learned rewards to form a policy π:
  - a plan of actions maximizing rewards
  - determines how the agent acts, given the current state
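As a hypothetical illustration of the history H and the quantity a policy tries to maximize, a discounted return over observed (state, action, reward) triples might look like this; the example values and γ are made up.

```python
# Hypothetical sketch: the history H = s0, a0, r0, s1, a1, r1, ... stored as
# (state, action, reward) triples, and the discounted return computed from it.
def discounted_return(history, gamma=0.9):
    """Sum of gamma^t * r_t over the observed history."""
    return sum((gamma ** t) * r for t, (_, _, r) in enumerate(history))

H = [("s0", "a0", 0.0), ("s1", "a1", 1.0), ("s2", "a2", 5.0)]
print(discounted_return(H))  # 0.0 + 0.9*1.0 + 0.81*5.0 = 4.95
```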

Examples: Web server allocation (Tesauro et al., 2007)
- Learn how many servers to assign to applications based on incoming requests
- Goal: maximize SLA revenue
Source: (Tesauro et al., 2007)

Examples: Ad hoc networks (Dowling et al., 2005)
- Learn how to route packets through a distributed network
- Goal: maximize packet delivery and adapt to changing network conditions (e.g., node failure)
Source: (Dowling et al., 2005)

Examples: Smart Grid (O'Neill et al., 2010)
- Learn how to allocate energy to residences and optimize the schedule of energy usage
- Goal: reduce the cost of energy usage
Source: (O'Neill et al., 2010)

Examples: Modular Robots (Varshavskaya et al., 2008)
- Each robot module learns how to operate with a team
- Goal: move a robot consisting of multiple modules across an open space
Source: (Varshavskaya et al., 2008)

Examples: Poker Agents
- Learn how to play based on opponents' behavior and available cards
- Goal: maximize winnings

Running Example
[Figure: a grid maze used as the running example, with per-cell values such as 0.33 and 0.]

Example Comparison
- Web Server Allocation: States S = # incoming requests; Actions A = # servers to assign; Transitions T = change in requests over time; Rewards R = revenue ($$$)
- Ad Hoc Networks: States S = have packet? packet transmitted?; Actions A = transmit, find neighbors; Transitions T = transmission success probability; Rewards R = cost of sending, decay in learning
- Smart Grid: States S = price of energy, user demand; Actions A = allocation of energy; Transitions T = change in price and demand; Rewards R = user's utility of allocation
- Modular Robots: States S = positions of all robots; Actions A = move module; Transitions T = change in team configuration; Rewards R = +/- for moving in the correct/incorrect direction
- Poker Agents: States S = cards, opponent model; Actions A = raise, check, fold; Transitions T = changes in cards and model; Rewards R = chips won
- Maze: States S = grid location; Actions A = movement (N, S, E, W); Transitions T = change in location; Rewards R = inverse of distance to goal

Types of RL
- Model-free RL: learn rewards for the controller; ignore model parameters. Example: riding a bicycle
- Model-based RL: learn the underlying model of the environment, then solve it (often learn an MDP). Example: playing poker

Types of RL
Use model-free RL when:
- You only care about rewards (and not dynamics)
- The environment is very simple with fixed transitions, or very complex
- You are more concerned with fast learning than optimal performance
Use model-based RL when:
- You want to consider dynamics
- The environment is moderately complex with stochastic transitions
- You are more concerned with optimal performance and can afford a longer learning phase

Types of RL
- Web server allocation (Tesauro et al., 2007): model-free (function approximation with the SARSA rule)
- Ad hoc networks (Dowling et al., 2005): model-based (CRL)
- Smart Grid (O'Neill et al., 2010): model-free (Q-Learning)
- Modular Robots (Varshavskaya et al., 2008): model-free (but the dynamics are assumed known a priori)

Types of RL
- Poker Agents: model-based if modeling opponents (we want to determine how the opponent will respond); model-free if focused only on cards
- Robotic Maze: model-free if actuators are perfect; model-based if actuators can fail

Q-Learning
- Q-Learning: classic model-free RL algorithm (Watkins, 1989)
- Simple but powerful and effective
- Learns the reward function as a table, based on the current state and chosen action
- Guaranteed convergence to the true reward function with enough exploration
- Assumes discrete state/action spaces

Q-Learning
Learned rewards are stored as a Q-table indexed by states and actions, holding reward values Q(s, a).
Initialize the table with:
- Equal values
- Random values
- A priori information

Q-Learning
Update the Q-table after every action:
  Q(s, a) ← (1 − α)·Q(s, a) + α·[R(s, a) + γ·max_{a′ ∈ A} Q(s′, a′)]
- α = learning rate: balances old knowledge with new information
- γ = discount rate: determines how forward-thinking the agent is (myopic vs. non-myopic) and accounts for uncertainty in future rewards
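A minimal sketch of this update for a tabular Q-function, assuming discrete states and actions; the defaultdict initialization and parameter values are illustrative choices, not the only ones.

```python
# Minimal tabular Q-learning update (a sketch, not a full agent).
from collections import defaultdict

Q = defaultdict(float)  # Q[(s, a)] -> learned value; zero-initialized here

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Q(s,a) <- (1 - alpha)*Q(s,a) + alpha*[r + gamma * max_a' Q(s',a')]."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)
```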

Q-Learning
Policy for choosing actions: pick the action with the highest reward in the current state:
  π(s) = argmax_{a ∈ A} Q(s, a)
This looks myopic, but is actually non-myopic: future rewards are already considered in the Q-table (assuming γ > 0).
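Continuing the hypothetical Q-table sketch above, extracting the greedy policy is a one-line argmax:

```python
def greedy_policy(s, actions):
    """pi(s) = argmax_a Q(s, a), using the Q-table from the previous sketch."""
    return max(actions, key=lambda a: Q[(s, a)])
```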

RMax
- RMax: popular model-based RL algorithm (Brafman and Tennenholtz, 2002)
- Simple but powerful and effective
- Represents learned functions as tables
- Assumes discrete state/action spaces
- Also learns state transitions
- Probably Approximately Correct (PAC) learning algorithm: converges in polynomial time

RMax
Maintain tables for both rewards and transitions, still indexed by state/action pairs, as in Q-Learning.
Initialization:
- Assume all rewards are equal to the same value: the maximum possible reward value (RMax)
- Assume fixed transitions to a special state (we don't know in advance which states lead to which other states)

RMax
Update the tables after a fixed number k of interactions with the environment for a state/action pair (often k = 5, 10, 20, etc.)
Reward updates:
- Store the first reward experienced for a state/action pair
- Store the expected reward over k iterations for a state/action pair
- Calculate probabilities of different rewards based on the k rewards
Transition updates:
- Count the number of state transitions after each state/action pair
- Calculate probabilities based on the first k transitions
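A rough sketch of that bookkeeping, assuming discrete states and actions; the choice of k, the RMax constant, and the use of empirical means are illustrative simplifications of the algorithm.

```python
# Sketch of RMax-style bookkeeping: optimistic defaults until a state/action
# pair has been tried k times, then empirical estimates.
from collections import defaultdict

RMAX = 10.0   # assumed maximum possible reward
K = 10        # visits before a state/action pair counts as "known"

counts = defaultdict(int)        # N(s, a)
reward_sum = defaultdict(float)  # total reward observed for (s, a)
next_counts = defaultdict(int)   # N(s, a, s')

def record(s, a, r, s_next):
    counts[(s, a)] += 1
    reward_sum[(s, a)] += r
    next_counts[(s, a, s_next)] += 1

def estimated_R(s, a):
    """Optimistic RMax value until (s, a) is known, then the empirical mean reward."""
    if counts[(s, a)] < K:
        return RMAX
    return reward_sum[(s, a)] / counts[(s, a)]

def estimated_T(s, a, s_next):
    """Empirical transition probability once (s, a) is known, else 0 here
    (the full algorithm routes unknown pairs to a special optimistic state)."""
    if counts[(s, a)] < K:
        return 0.0
    return next_counts[(s, a, s_next)] / counts[(s, a)]
```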

RMax
Policy for choosing actions: build an MDP model based on what has been learned and solve it, maximizing current and future rewards from the current state while considering state transitions (future rewards are discounted since transitions are uncertain):
  V(s) = max_{a ∈ A} [R(s, a) + γ·Σ_{s′ ∈ S} T(s, a, s′)·V(s′)]
  π(s) = argmax_{a ∈ A} [R(s, a) + γ·Σ_{s′ ∈ S} T(s, a, s′)·V(s′)]
The forward search can be limited to n future actions.
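A compact value-iteration sketch of that solve step, reusing the estimated_R and estimated_T helpers from the previous sketch; the fixed iteration count stands in for a proper convergence test.

```python
def value_iteration(states, actions, R, T, gamma=0.9, iters=100):
    """Solve the learned MDP: V(s) = max_a [R(s,a) + gamma * sum_s' T(s,a,s') V(s')]."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(R(s, a) + gamma * sum(T(s, a, s2) * V[s2] for s2 in states)
                    for a in actions)
             for s in states}
    pi = {s: max(actions,
                 key=lambda a: R(s, a) + gamma * sum(T(s, a, s2) * V[s2] for s2 in states))
          for s in states}
    return V, pi

# e.g., V, pi = value_iteration(S, A, estimated_R, estimated_T)
```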

Exploration vs. Exploitation
Difficult problem: should I keep learning, or use what I've learned?
- Use what I've learned: more current reward, less future reward
- Additional learning: more future reward, less current reward
Exploration: try to learn about uncertain state/action pairs
Exploitation: maximize rewards based on learned information

Exploration vs. Exploitation
Different methods balance exploration and exploitation (Vermorel and Mohri, 2005):
- ε-greedy: explore a random action with probability ε (e.g., 10%); exploit the best action with probability 1 − ε
- Softmax, similar to humans (Daw et al., 2006): choose actions with probabilities based on the value of their rewards; higher rewards are more likely to be chosen
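Minimal sketches of both selection rules over the Q-table used earlier; the ε and temperature values are illustrative.

```python
import math
import random

def epsilon_greedy(s, actions, Q, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)               # explore
    return max(actions, key=lambda a: Q[(s, a)])    # exploit

def softmax_action(s, actions, Q, temperature=1.0):
    prefs = [math.exp(Q[(s, a)] / temperature) for a in actions]
    total = sum(prefs)
    return random.choices(actions, weights=[p / total for p in prefs])[0]
```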

Continuous RL
- Both Q-Learning and RMax assume discrete state/action spaces
- This is a valid assumption in many MAS: continuous spaces can be converted into discrete ones by assigning bins to ranges of continuous values
- What if the spaces stay continuous? Then function approximation is needed: learn a generic model of the reward (and maybe transition) function output based on its inputs, with no tables
- Common approach: neural networks
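A small sketch of the binning idea mentioned above for turning a continuous value into a discrete state index; the range and bin count are arbitrary examples.

```python
def to_bin(value, low, high, n_bins):
    """Map a continuous value in [low, high] to a discrete bin index."""
    value = min(max(value, low), high)   # clamp into the range
    return min(int((value - low) / (high - low) * n_bins), n_bins - 1)

# e.g., a sensor reading in [0.0, 5.0] mapped onto one of 10 discrete states
state = to_bin(3.7, 0.0, 5.0, 10)   # -> 7
```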

Neural Networks
[Diagram: inputs X1, X2, X3 feed a hidden layer through weights, producing the output f(X1, X2, X3).]
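For concreteness, a minimal NumPy forward pass matching the diagram's shape (three inputs, one hidden layer, one output); the weights are random placeholders rather than trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights: 4 hidden units, 3 inputs
W2 = rng.normal(size=(1, 4))   # output weights

def forward(x):
    h = np.tanh(W1 @ x)        # hidden-layer activations
    return (W2 @ h).item()     # scalar output f(x1, x2, x3)

print(forward(np.array([0.5, -1.0, 2.0])))
```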

Continuous RL
REINFORCE (Williams, 1992):
- Train a neural network to learn both the reward function R and the policy π
  - The reward function predicts rewards based on the current state and action inputs
  - The policy probabilistically chooses actions given the current state input, based on the learned rewards (similar to Softmax, but done implicitly within the neural network)
- Use eligibility backpropagation to train the policy (different from how neural networks are used in supervised learning)
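A rough sketch of the REINFORCE idea using a linear softmax policy instead of the neural-network policy described on the slide; the feature sizes, learning rate, and episode format are assumptions for illustration, not Williams's original implementation.

```python
import numpy as np

def softmax_policy(theta, phi):
    """Action probabilities pi(a | phi) under a linear softmax policy."""
    prefs = theta @ phi
    expp = np.exp(prefs - prefs.max())
    return expp / expp.sum()

def reinforce_update(theta, episode, gamma=0.99, lr=0.01):
    """One policy-gradient update from an episode of (features, action, reward)."""
    G = 0.0
    for phi, a, r in reversed(episode):      # accumulate discounted returns backwards
        G = r + gamma * G
        probs = softmax_policy(theta, phi)
        grad = -np.outer(probs, phi)         # gradient of log pi(a | phi) w.r.t. theta
        grad[a] += phi
        theta = theta + lr * G * grad        # push probability toward high-return actions
    return theta

theta = np.zeros((3, 4))                                  # 3 actions, 4 state features
episode = [(np.array([1.0, 0.0, 0.5, -0.2]), 1, 1.0)]     # a toy one-step episode
theta = reinforce_update(theta, episode)
```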

Summary
- Use RL to learn how to act and make decisions, maximizing rewards learned from interactions with the environment
- Different types of algorithms:
  - Model-free: focus just on rewards (e.g., Q-Learning)
  - Model-based: learn a full model of the environment, then solve the model (e.g., RMax)
- Exploration vs. exploitation: control learning vs. using what has been learned

More on RL: Model-free vs. Model-based
The main difference between model-free and model-based RL is that model-based RL also learns the underlying dynamics of the environment (the stochastic T function in fully observable environments), whereas that knowledge is ignored in model-free RL. T is very rarely deterministic in the real world, but in Q-learning the update does not happen until s′ is known, so there is no need to consider T. The other advantage of learning T explicitly is that the agent can actually do planning: in model-based RL with T, it can project possible future states during planning. That isn't explicitly possible with model-free algorithms such as Q-learning.

More on RL: Model-free vs. Model-based
In Shoham's book, belief-based learning is when the agent considers the probabilities of each possible action of the other agents. This is an improvement because often the total reward (and thus the Q function) depends not just on the agent's own action, but on the actions of the other agents. Belief-based learning could be considered model-based learning if the agent learns the Pr_i function while it operates in the environment. If Pr_i is fixed from the start (e.g., to a uniform distribution, or some informed prior), then it wouldn't be model-based learning. Although, some might argue that any RL is model-based if the agent has a model of the environment, not necessarily only if it learns that model.

More on RL: Model-free vs. Model-based
Even more philosophical: in a stochastic game setting (Shoham's book), the transition function represents which normal-form game (i.e., which payout table) appears next after the agents choose and execute their actions. In single agent learning, the agent is really playing a game against nature (so there is only one column in the payout table for the agent itself), and nature determines the stochastic next game (i.e., state of the environment). So in that case, learning the T function in a single agent learning problem is equivalent to learning the Pr_i function; both describe what nature will play. Model-based?

More on RL
Videos of AlphaGo: explanatory clips before it beat the Go world champion Lee Sedol
- https://deepmind.com/alpha-go
Videos of DeepMind playing Atari games earlier, before it moved on to Go
- https://www.youtube.com/watch?v=v1eynij0rnk
- https://www.youtube.com/watch?v=r3pb-zdekvg
- http://www.theverge.com/2016/6/9/11893002/google-ai-deepmind-atari-montezumas-revenge

More on RL: Learning vs. Planning?
What is the difference between RL and planning (specifically Q-Learning vs. MDP or POMDP planning)? The internal math looks very similar: for both, we create a Q-table (also the Value network learned by AlphaGo) from which we determine a policy of actions to take (also the Policy network learned by AlphaGo). As they work longer and longer, both improve over time. The difference between the two is what powers the improvement, and which direction through time they gain that improvement.

More on RL: Learning vs. Planning?
Mitchell's definition of learning: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
In RL, the tasks T are whatever the agent is trying to do, the performance measure P is usually the discounted cumulative reward, and the experience E is the (s′, r) pairs of state transitions and rewards the agent observes after it takes action a in state s. The more experience E, the better the agent performs, by learning how the environment changes and how it is rewarded for those changes.
In planning, T and P are the same, but the experience E isn't necessary: the agent already knows what (s′, r) it can get after taking action a in state s. Instead, the agent improves from having more *time* to consider future (s′, r) pairs, that is, more contingencies of what it might encounter.
So the difference is planning for more possible experiences *in the future*, rather than gaining information from the experiences *it recently saw in the past*.


More Information
Great general reference: Sutton, R.S. and Barto, A.G. 1998. Reinforcement Learning: An Introduction. MIT Press: Cambridge, MA.
Available online for free at: http://webdocs.cs.ualberta.ca/~sutton/book/thebook.html

References
- Brafman, R.I. and Tennenholtz, M. 2002. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3, 213-231.
- Daw, N.D. et al. 2006. Cortical substrates for exploratory decisions in humans. Nature, 441, 876-879.
- Dowling, J. et al. 2005. Using feedback in collaborative reinforcement learning to adaptively optimize MANET routing. IEEE Transactions on SMC, Part A, 35(3), 360-372.
- O'Neill, D. et al. 2010. Residential demand response using reinforcement learning. Proc. of SmartGridComm '10, 409-414.
- Tesauro, G. et al. 2007. On the use of hybrid reinforcement learning for autonomic resource allocation. Cluster Computing, 10, 287-299.
- Vermorel, J. and Mohri, M. 2005. Multi-armed bandit algorithms and empirical evaluation. Proc. of ECML '05, 437-448.
- Varshavskaya, P. et al. 2008. Automated design of adaptive controllers for modular robots using reinforcement learning. IJRR, 27, 505-526.
- Watkins, C.J. 1989. Learning from Delayed Rewards. Ph.D. thesis, Cambridge University.
- Williams, R.J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8, 229-256.