CS 188: Artificial Intelligence, Spring 2011. Lecture 12: Q-Learning Wrap-Up and Probability.


CS 188: Artificial Intelligence, Spring 2011. Lecture 12: Probability. 3/2/11.
Pieter Abbeel, UC Berkeley. Many slides adapted from Dan Klein.

Announcements
- P3 due on Monday (3/7) at 4:59pm.
- W3 going out tonight.
- Midterm Tuesday 3/15, 5pm-8pm. Closed notes, books, laptops. May use a one-page (two-sided) cheat sheet of your own design (group design OK but not recommended).
- Monday 3/14: no lecture at the usual 5:30-7:00pm time. Midterm review? Practice midterm?

Today
- MDPs and reinforcement learning: generalization (one of the most important concepts in machine learning!) and policy search.
- Next, we'll start studying how to reason with probabilities: diagnosis, tracking objects, speech recognition, robot mapping, lots more!
- Third part of the course: machine learning.

The Story So Far: MDPs and RL
Things we know how to do:
- We can solve small MDPs exactly, offline.
- We can estimate values $V^\pi(s)$ directly for a fixed policy $\pi$.
- We can estimate $Q^*(s,a)$ for the optimal policy while executing an exploration policy.
Techniques:
- Value and policy iteration
- Temporal difference learning
- Q-learning
- Exploratory action selection

Q-Learning
- In realistic situations, we cannot possibly learn about every single state! Too many states to visit them all in training; too many states to hold the Q-tables in memory.
- Instead, we want to generalize: learn about some small number of training states from experience, and generalize that experience to new, similar states.
- This is a fundamental idea in machine learning, and we'll see it over and over again.

Example: Pacman
Let's say we discover through experience that this state is bad. In naïve Q-learning, we know nothing about this state or its q-states, or even this one! [Slide images: three nearly identical Pacman states.]
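For concreteness, here is a minimal sketch of the tabular Q-learning update and epsilon-greedy exploratory action selection listed above (names like alpha, gamma, q_values are illustrative, not from the course projects). The per-(state, action) table q_values is exactly what fails to scale in the Pacman example:

    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
    q_values = defaultdict(float)           # Q(s, a) table; unseen pairs default to 0

    def q_update(s, a, r, s_next, next_actions):
        # One Q-learning update from an observed transition (s, a, r, s').
        # Pass an empty next_actions if s' is terminal.
        sample = r + gamma * max((q_values[(s_next, a2)] for a2 in next_actions),
                                 default=0.0)
        q_values[(s, a)] = (1 - alpha) * q_values[(s, a)] + alpha * sample

    def select_action(s, actions):
        # Epsilon-greedy exploratory action selection.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q_values[(s, a)])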

Feature-Based Representations
- Solution: describe a state using a vector of features.
- Features are functions from states to real numbers (often 0/1) that capture important properties of the state.
- Example features: distance to closest ghost; distance to closest dot; number of ghosts; 1 / (distance to closest dot)^2; is Pacman in a tunnel? (0/1); etc.
- Can also describe a q-state (s, a) with features (e.g., action moves closer to food).

Linear Feature Functions
- Using a feature representation, we can write a q function (or value function) for any state using a few weights:
  $V(s) = w_1 f_1(s) + w_2 f_2(s) + \dots + w_n f_n(s)$
  $Q(s,a) = w_1 f_1(s,a) + w_2 f_2(s,a) + \dots + w_n f_n(s,a)$
- Advantage: our experience is summed up in a few powerful numbers.
- Disadvantage: states may share features but be very different in value!

Function Approximation
- Q-learning with linear q-functions: on a transition $(s, a, r, s')$, compute
  $\text{difference} = [r + \gamma \max_{a'} Q(s',a')] - Q(s,a)$
  Exact Q's: $Q(s,a) \leftarrow Q(s,a) + \alpha \cdot \text{difference}$
  Approximate Q's: $w_i \leftarrow w_i + \alpha \cdot \text{difference} \cdot f_i(s,a)$
- Intuitive interpretation: adjust weights of active features. E.g., if something unexpectedly bad happens, disprefer all states with that state's features.
- Formal justification: online least squares.

Example: Q-Pacman
[Slide: a worked update of the linear Q-function on one Pacman transition, with the feature values and weights filled in.]

Linear regression
[Figure: linear regression fits to sample data points. Given examples $(x_i, y_i)$, predict $y$ for a new point $x$.]
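A minimal sketch of the approximate Q-learning update above, with toy, made-up features standing in for the real Pacman features the slide names:

    def features(s, a):
        # Feature vector f(s, a) for a q-state. Hypothetical toy features; the
        # real ones (distance to closest dot, etc.) come from the game state.
        return {"bias": 1.0, "closer-to-food": 1.0 if a == "toward-food" else 0.0}

    weights = {"bias": 0.0, "closer-to-food": 0.0}

    def q_value(s, a):
        # Q(s, a) = sum_i w_i * f_i(s, a)
        return sum(weights[k] * v for k, v in features(s, a).items())

    def q_update(s, a, r, s_next, next_actions, alpha=0.5, gamma=0.9):
        # difference = [r + gamma * max_a' Q(s', a')] - Q(s, a)
        target = r + gamma * max((q_value(s_next, a2) for a2 in next_actions),
                                 default=0.0)
        difference = target - q_value(s, a)
        # w_i <- w_i + alpha * difference * f_i(s, a): only active features move
        for k, v in features(s, a).items():
            weights[k] += alpha * difference * v

A single surprising transition now shifts the value of every state sharing the active features, which is both the advantage and the disadvantage noted above.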

Ordinary Least Squares (OLS)
[Figure: data points and a fitted line; for one point, the gap between observation and prediction is labeled "error or residual".]

Minimizing Error
- Squared error on one observation $(x, y)$: $\text{error}(w) = \frac{1}{2}\big(y - \sum_k w_k f_k(x)\big)^2$
- Gradient step on the weights: $w_m \leftarrow w_m + \alpha \big(y - \sum_k w_k f_k(x)\big) f_m(x)$
- Value update explained: the approximate Q-update is exactly this online least-squares step, with target $y = r + \gamma \max_{a'} Q(s', a')$ and prediction $Q(s, a)$:
  $w_m \leftarrow w_m + \alpha\,[r + \gamma \max_{a'} Q(s', a') - Q(s, a)]\, f_m(s, a)$

Overfitting
[Figure: a degree-15 polynomial fit to a handful of points oscillates wildly between them; low training error, poor generalization.]

Policy Search
- Problem: often the feature-based policies that work well aren't the ones that approximate V / Q best.
- Solution: learn the policy that maximizes rewards rather than the value that predicts rewards.
- This is the idea behind policy search, such as what controlled the upside-down helicopter.

Policy Search (continued)
- Simplest policy search: start with an initial linear value function or Q-function; nudge each feature weight up and down and see if your policy is better than before.
- Problems: How do we tell the policy got better? Need to run many sample episodes! If there are a lot of features, this can be impractical.
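One way to read the "nudge each weight" recipe as code (a sketch only; evaluate_policy is an assumed stand-in for averaging returns over many sample episodes, which is exactly the expensive step flagged above):

    def policy_search(weights, evaluate_policy, step=0.1, rounds=100):
        # Hill-climbing in weight space: try each weight up and down, and keep
        # whichever perturbation yields a better-scoring policy.
        best = evaluate_policy(weights)
        for _ in range(rounds):
            for k in list(weights):
                for delta in (step, -step):
                    trial = dict(weights)
                    trial[k] += delta
                    score = evaluate_policy(trial)   # many episodes in practice
                    if score > best:
                        weights, best = trial, score
        return weights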

MDPs and RL Outline
- Markov Decision Processes (MDPs): formalism; value iteration; expectimax search vs. value iteration; policy evaluation and policy iteration.
- Reinforcement Learning:
  - Model-based learning
  - Model-free learning: direct evaluation [performs policy evaluation]; temporal difference learning [performs policy evaluation]; Q-learning [learns optimal state-action value function Q*]; policy search [learns optimal policy from a subset of all policies]

To Learn More About RL
- Online book: Sutton and Barto, http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
- Graduate-level courses at Berkeley with reading material/lecture notes online:
  http://inst.eecs.berkeley.edu/~cs294-4/fa08/
  http://www.cs.berkeley.edu/~russell/classes/cs294/s11/

Take a Deep Breath
- We're done with Part I: Search and Planning!
- Part II: Probabilistic Reasoning: diagnosis; tracking objects; speech recognition; robot mapping; genetics; error-correcting codes; lots more!
- Part III: Machine Learning

Part II: Probabilistic Reasoning
- Probability: random variables; joint and marginal distributions; conditional distribution; inference by enumeration; product rule, chain rule, Bayes rule; independence.
- Distributions over LARGE numbers of random variables: representation; independence; inference; variable elimination; sampling; hidden Markov models.
- You'll need all this stuff A LOT for the next few weeks, so make sure you go over it now and know it inside out! Over the next few weeks we will learn how to make these work computationally efficiently for LARGE numbers of random variables.

Inference in Ghostbusters
- A ghost is in the grid somewhere. Sensor readings tell how close a square is to the ghost: on the ghost: red; 1 or 2 away: orange; 3 or 4 away: yellow; 5+ away: green.
- Sensors are noisy, but we know P(Color | Distance). For example, at distance 3:

  P(red | 3)     0.05
  P(orange | 3)  0.15
  P(yellow | 3)  0.5
  P(green | 3)   0.3

Uncertainty
- General situation:
  - Evidence: the agent knows certain things about the state of the world (e.g., sensor readings or symptoms).
  - Hidden variables: the agent needs to reason about other aspects (e.g., where an object is or what disease is present).
  - Model: the agent knows something about how the known variables relate to the unknown variables.
- Probabilistic reasoning gives us a framework for managing our beliefs and knowledge.

Random Variables
- A random variable is some aspect of the world about which we (may) have uncertainty:
  - R = Is it raining?
  - D = How long will it take to drive to work?
  - L = Where am I?
- We denote random variables with capital letters.
- Like variables in a CSP, random variables have domains:
  - R in {true, false} (sometimes written as {+r, ¬r})
  - D in [0, ∞)
  - L in possible locations, maybe {(0,0), (0,1), ...}

Probability Distributions
- Unobserved random variables have distributions:

  T     P          W       P
  warm  0.5        sun     0.6
  cold  0.5        rain    0.1
                   fog     0.3
                   meteor  0.0

- A distribution is a TABLE of probabilities of values.
- A probability (lower-case value) is a single number, and must obey $P(X = x) \ge 0$ and $\sum_x P(X = x) = 1$.

Joint Distributions
- A joint distribution over a set of random variables $X_1, \dots, X_n$ specifies a real number for each assignment (or outcome): $P(X_1 = x_1, \dots, X_n = x_n)$.

  T     W     P
  hot   sun   0.4
  hot   rain  0.1
  cold  sun   0.2
  cold  rain  0.3

- Size of the distribution if n variables have domain sizes d? $d^n$
- Must obey: $P(x_1, \dots, x_n) \ge 0$ and $\sum_{(x_1, \dots, x_n)} P(x_1, \dots, x_n) = 1$
- For all but the smallest distributions, impractical to write out.

Probabilistic Models
- A probabilistic model is a joint distribution over a set of random variables.
- Probabilistic models: (random) variables with domains; assignments are called outcomes; joint distributions say whether assignments (outcomes) are likely; normalized: sums to 1.0; ideally, only certain variables directly interact.
- Compare constraint satisfaction problems: variables with domains; constraints state whether assignments are possible; ideally, only certain variables directly interact.

  Distribution over T, W       Constraint over T, W
  T     W     P                T     W     P
  hot   sun   0.4              hot   sun   T
  hot   rain  0.1              hot   rain  F
  cold  sun   0.2              cold  sun   F
  cold  rain  0.3              cold  rain  T

Events
- An event is a set E of outcomes: $P(E) = \sum_{(x_1, \dots, x_n) \in E} P(x_1, \dots, x_n)$
- From a joint distribution, we can calculate the probability of any event:
  - Probability that it's hot AND sunny? 0.4
  - Probability that it's hot? 0.4 + 0.1 = 0.5
  - Probability that it's hot OR sunny? 0.4 + 0.1 + 0.2 = 0.7
- Typically, the events we care about are partial assignments, like P(T = hot).
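The event probabilities above can be read straight off the joint table; a small sketch using the slide's numbers:

    P = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
         ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

    def prob(event):
        # P(E) = sum of P(outcome) over the outcomes in event E.
        return sum(p for outcome, p in P.items() if event(outcome))

    print(prob(lambda o: o == ("hot", "sun")))               # hot AND sunny: 0.4
    print(prob(lambda o: o[0] == "hot"))                     # hot: 0.5
    print(prob(lambda o: o[0] == "hot" or o[1] == "sun"))    # hot OR sunny: 0.7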

Marginal Distributions
- Marginal distributions are sub-tables which eliminate variables.
- Marginalization (summing out): combine collapsed rows by adding, e.g. $P(t) = \sum_w P(t, w)$.

  T     W     P            T     P          W     P
  hot   sun   0.4          hot   0.5        sun   0.6
  hot   rain  0.1          cold  0.5        rain  0.4
  cold  sun   0.2
  cold  rain  0.3

Conditional Probabilities
- A simple relation between joint and conditional probabilities; in fact, this is taken as the definition of a conditional probability:
  $P(a \mid b) = \dfrac{P(a, b)}{P(b)}$
- For example, from the joint above: $P(W = \text{sun} \mid T = \text{cold}) = \frac{P(\text{cold}, \text{sun})}{P(\text{cold})} = \frac{0.2}{0.5} = 0.4$

Conditional Distributions
- Conditional distributions are probability distributions over some variables given fixed values of others. From the joint distribution above:

  P(W | T = hot)     P(W | T = cold)
  sun   0.8          sun   0.4
  rain  0.2          rain  0.6

Normalization Trick
- A trick to get a whole conditional distribution at once: select the joint probabilities matching the evidence, then normalize the selection (make it sum to one).

  Joint                Select W = rain       Normalize: P(T | rain)
  hot   sun   0.4      hot   rain  0.1       hot   0.25
  hot   rain  0.1      cold  rain  0.3       cold  0.75
  cold  sun   0.2
  cold  rain  0.3

- Why does this work? The sum of the selection is P(evidence)! (Here, P(rain).)

Probabilistic Inference
- Probabilistic inference: compute a desired probability from other known probabilities (e.g., a conditional from the joint).
- We generally compute conditional probabilities, e.g. P(on time | no reported accidents) = 0.90. These represent the agent's beliefs given the evidence.
- Probabilities change with new evidence: P(on time | no accidents, 5 a.m.) = 0.95; P(on time | no accidents, 5 a.m., raining) = 0.80. Observing new evidence causes beliefs to be updated.

Inference by Enumeration
- P(sun)? P(sun | winter)? P(sun | winter, hot)?

  S       T     W     P
  summer  hot   sun   0.30
  summer  hot   rain  0.05
  summer  cold  sun   0.10
  summer  cold  rain  0.05
  winter  hot   sun   0.10
  winter  hot   rain  0.05
  winter  cold  sun   0.15
  winter  cold  rain  0.20
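A sketch of the select / sum-out / normalize recipe applied to the table above, answering all three queries:

    P = {("summer", "hot", "sun"): 0.30, ("summer", "hot", "rain"): 0.05,
         ("summer", "cold", "sun"): 0.10, ("summer", "cold", "rain"): 0.05,
         ("winter", "hot", "sun"): 0.10, ("winter", "hot", "rain"): 0.05,
         ("winter", "cold", "sun"): 0.15, ("winter", "cold", "rain"): 0.20}

    def infer(query_index, evidence):
        # P(Query | evidence): select entries consistent with the evidence,
        # sum out everything but the query, then normalize.
        totals = {}
        for outcome, p in P.items():
            if all(outcome[i] == v for i, v in evidence.items()):
                q = outcome[query_index]
                totals[q] = totals.get(q, 0.0) + p
        z = sum(totals.values())        # the sum of the selection is P(evidence)
        return {x: p / z for x, p in totals.items()}

    print(infer(2, {}))                       # P(W): sun 0.65, rain 0.35
    print(infer(2, {0: "winter"}))            # P(W | winter): sun 0.5, rain 0.5
    print(infer(2, {0: "winter", 1: "hot"}))  # P(W | winter, hot): sun 2/3, rain 1/3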

Inference by Enumeration (general case)
- Evidence variables: $E_1, \dots, E_k = e_1, \dots, e_k$. Query variable: $Q$. Hidden variables: $H_1, \dots, H_r$. (Together: all the variables.)
- We want: $P(Q \mid e_1, \dots, e_k)$
- First, select the entries consistent with the evidence.
- Second, sum out H to get the joint of the query and the evidence:
  $P(Q, e_1, \dots, e_k) = \sum_{h_1, \dots, h_r} P(Q, h_1, \dots, h_r, e_1, \dots, e_k)$
- Finally, normalize the remaining entries to conditionalize.
- Obvious problems: worst-case time complexity $O(d^n)$; space complexity $O(d^n)$ to store the joint distribution.
- (This works fine with multiple query variables, too.)

The Product Rule
- Sometimes we have conditional distributions but want the joint: $P(x, y) = P(x \mid y)\, P(y)$
- Example:

  P(W)         P(D | W)            P(D, W)
  sun   0.8    wet | sun   0.1     wet   sun   0.08
  rain  0.2    dry | sun   0.9     dry   sun   0.72
               wet | rain  0.7     wet   rain  0.14
               dry | rain  0.3     dry   rain  0.06

The Chain Rule
- More generally, we can always write any joint distribution as an incremental product of conditional distributions:
  $P(x_1, x_2, x_3) = P(x_1)\, P(x_2 \mid x_1)\, P(x_3 \mid x_1, x_2)$
  $P(x_1, \dots, x_n) = \prod_i P(x_i \mid x_1, \dots, x_{i-1})$
- Why is this always true?

Bayes Rule
- Two ways to factor a joint distribution over two variables: $P(x, y) = P(x \mid y)\, P(y) = P(y \mid x)\, P(x)$ ("That's my rule!")
- Dividing, we get: $P(x \mid y) = \dfrac{P(y \mid x)\, P(x)}{P(y)}$
- Why is this at all helpful? It lets us build one conditional from its reverse; often one conditional is tricky but the other one is simple; it is the foundation of many systems we'll see later (e.g., ASR, MT).
- In the running for most important AI equation!

Inference with Bayes Rule
- Example: diagnostic probability from causal probability:
  $P(\text{cause} \mid \text{effect}) = \dfrac{P(\text{effect} \mid \text{cause})\, P(\text{cause})}{P(\text{effect})}$
- Example: m is meningitis, s is stiff neck. From the example givens, $P(+m \mid +s) = \frac{P(+s \mid +m)\, P(+m)}{P(+s)}$.
- Note: the posterior probability of meningitis is still very small.
- Note: you should still get stiff necks checked out! Why?

Ghostbusters, Revisited
- Let's say we have two distributions:
  - Prior distribution over ghost location: P(G). Let's say this is uniform.
  - Sensor reading model: P(R | G). Given: we know what our sensors do. R = reading color measured at (1,1); e.g., P(R = yellow | G = (1,1)) = 0.1.
- We can calculate the posterior distribution P(G | r) over ghost locations given a reading using Bayes' rule:
  $P(g \mid r) \propto P(r \mid g)\, P(g)$
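A sketch of this posterior computation; the grid and the likelihood values are made-up stand-ins (only P(yellow | G = (1,1)) = 0.1 appears on the slide), since the mechanics are the point:

    locations = [(0, 0), (0, 1), (1, 0), (1, 1)]
    prior = {g: 1.0 / len(locations) for g in locations}       # uniform P(G)
    # Hypothetical P(R = yellow | G = g) for each location g:
    likelihood = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.1}

    unnorm = {g: likelihood[g] * prior[g] for g in locations}  # P(r | g) P(g)
    z = sum(unnorm.values())                                   # P(r), summing out g
    posterior = {g: p / z for g, p in unnorm.items()}          # P(g | r)
    print(posterior)   # {(0,0): 0.5, (0,1): 0.3, (1,0): 0.1, (1,1): 0.1}

With a uniform prior, the posterior is just the normalized likelihood, which is why the print above mirrors the sensor model.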

Independence
- Two variables are independent in a joint distribution if $P(x, y) = P(x)\, P(y)$ for all $x, y$.
- This says the joint distribution factors into a product of two simpler ones.
- Usually variables aren't independent! We can, however, use independence as a modeling assumption; independence can be a simplifying assumption; empirical joint distributions are at best close to independent.
- What could we assume for {Weather, Traffic, Cavity}?
- Independence is like something from CSPs: what?

Example: Independence?
- Is this joint the product of its marginals?

  T     W     P            T     P          W     P          T     W     P(T)P(W)
  warm  sun   0.4          warm  0.5        sun   0.6        warm  sun   0.3
  warm  rain  0.1          cold  0.5        rain  0.4        warm  rain  0.2
  cold  sun   0.2                                            cold  sun   0.3
  cold  rain  0.3                                            cold  rain  0.2

- No: for example, P(warm, sun) = 0.4 but P(warm) P(sun) = 0.3, so T and W are not independent here.

Example: Independence
- N fair, independent coin flips: each flip $X_i$ has P(H) = 0.5, P(T) = 0.5.
- With independence, the joint factors as $P(X_1, \dots, X_n) = \prod_i P(X_i)$: n small tables instead of one table with $2^n$ entries.
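The independence check from the example can be done mechanically: compute the marginals, then compare the joint to their product (a sketch using the table above):

    joint = {("warm", "sun"): 0.4, ("warm", "rain"): 0.1,
             ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

    P_T, P_W = {}, {}
    for (t, w), p in joint.items():
        P_T[t] = P_T.get(t, 0.0) + p        # sum out W
        P_W[w] = P_W.get(w, 0.0) + p        # sum out T

    independent = all(abs(joint[(t, w)] - P_T[t] * P_W[w]) < 1e-9
                      for (t, w) in joint)
    print(independent)  # False: P(warm, sun) = 0.4 but P(warm) P(sun) = 0.5 * 0.6 = 0.3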