Learning from Observations
Chapter 18, Sections 1-3

Outline
- Learning agents
- Inductive learning
- Decision tree learning
- Measuring learning performance

Learning
Learning is essential for unknown environments, i.e., when the designer lacks omniscience.
Learning is useful as a system construction method, i.e., expose the agent to reality rather than trying to write it all down.
Learning modifies the agent's decision mechanisms to improve performance.

Learning agents
[Figure: architecture of a learning agent. A Critic judges sensor input against an external performance standard and gives feedback to the Learning element; the Learning element sets learning goals, makes changes to the knowledge used by the Performance element, and asks a Problem generator to propose exploratory experiments; the Performance element maps Sensors to Effectors acting on the Environment.]

Learning element
Design of the learning element is dictated by
- what type of performance element is used
- which functional component is to be learned
- how that functional component is represented
- what kind of feedback is available

Example scenarios:

Performance element   Component           Representation            Feedback
Alpha-beta search     Eval. fn.           Weighted linear function  Win/loss
Logical agent         Transition model    Successor-state axioms    Outcome
Utility-based agent   Transition model    Dynamic Bayes net         Outcome
Simple reflex agent   Percept-action fn.  Neural net                Correct action

Supervised learning: correct answers for each instance
Reinforcement learning: occasional rewards

Inductive learning (a.k.a. Science)
Simplest form: learn a function from examples (tabula rasa).
f is the target function; an example is a pair (x, f(x)), e.g., for tic-tac-toe, a board position x paired with its value f(x) = +1.
Problem: find a hypothesis h such that h ≈ f, given a training set of examples.
(This is a highly simplified model of real learning:
- ignores prior knowledge
- assumes a deterministic, observable environment
- assumes examples are given
- assumes that the agent wants to learn f. Why?)
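To make "find h such that h ≈ f" concrete, here is a minimal sketch of pure inductive learning; the candidate hypothesis space and the example pairs are invented for illustration:

```python
# Minimal sketch of inductive learning: given example pairs (x, f(x)),
# return a hypothesis from a (made-up) candidate space that agrees with
# f on every training example.
examples = [(1, 1), (2, 4), (3, 9)]  # pairs (x, f(x)); f is the target function

candidates = [
    lambda x: x,       # h1: identity
    lambda x: 2 * x,   # h2: doubling
    lambda x: x ** 2,  # h3: squaring -- the only consistent hypothesis here
]

def consistent(h, examples):
    # h is consistent if it agrees with f on all examples
    return all(h(x) == fx for x, fx in examples)

h = next(h for h in candidates if consistent(h, examples))
print(h(4))  # prediction for an unseen input: 16
```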
Inductive learning method
Construct/adjust h to agree with f on the training set.
(h is consistent if it agrees with f on all examples.)
E.g., curve fitting:
[Figure sequence: the same data points (x, f(x)) fitted by hypotheses of increasing complexity: a straight line, higher-degree polynomials, and a wiggly curve passing through every point exactly.]
Ockham's razor: maximize a combination of consistency and simplicity.
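A sketch of Ockham's razor in the curve-fitting setting, assuming NumPy; the data points and the consistency tolerance are invented for illustration:

```python
import numpy as np

# Try hypotheses in order of increasing complexity (polynomial degree)
# and keep the simplest one that is (near-)consistent with the examples.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.1, 1.1, 3.9, 9.2, 15.8])  # noisy samples of roughly x**2

for degree in range(len(xs)):
    coeffs = np.polyfit(xs, ys, degree)                 # least-squares fit
    worst = np.max(np.abs(np.polyval(coeffs, xs) - ys))
    if worst < 0.5:                                     # "consistent enough"
        print(f"simplest (near-)consistent hypothesis: degree {degree}")
        break
```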
Attribute-based representations
Examples are described by attribute values (Boolean, discrete, continuous, etc.).
E.g., situations where I will/won't wait for a table:

Example  Alt  Bar  Fri  Hun  Pat   Price  Rain  Res  Type     Est    Target: WillWait
X1       T    F    F    T    Some  $$$    F     T    French   0-10   T
X2       T    F    F    T    Full  $      F     F    Thai     30-60  F
X3       F    T    F    F    Some  $      F     F    Burger   0-10   T
X4       T    F    T    T    Full  $      T     F    Thai     10-30  T
X5       T    F    T    F    Full  $$$    F     T    French   >60    F
X6       F    T    F    T    Some  $$     T     T    Italian  0-10   T
X7       F    T    F    F    None  $      T     F    Burger   0-10   F
X8       F    F    F    T    Some  $$     T     T    Thai     0-10   T
X9       F    T    T    F    Full  $      T     F    Burger   >60    F
X10      T    T    T    T    Full  $$$    F     T    Italian  10-30  F
X11      F    F    F    F    None  $      F     F    Thai     0-10   F
X12      T    T    T    T    Full  $      F     F    Burger   30-60  T

Classification of examples is positive (T) or negative (F).

Decision trees
One possible representation for hypotheses.
E.g., here is the "true" tree for deciding whether to wait:

Patrons?
  None: No
  Some: Yes
  Full: WaitEstimate?
    >60: No
    30-60: Alternate?
      No: Reservation?
        No: Bar? (No: No; Yes: Yes)
        Yes: Yes
      Yes: Fri/Sat? (No: No; Yes: Yes)
    10-30: Hungry?
      No: Yes
      Yes: Alternate?
        No: Yes
        Yes: Raining? (No: No; Yes: Yes)
    0-10: Yes

Expressiveness
Decision trees can express any function of the input attributes.
E.g., for Boolean functions, truth table row = path to leaf:

A  B  A xor B
F  F  F
F  T  T
T  F  T
T  T  F

Trivially, there is a consistent decision tree for any training set with one path to a leaf for each example (unless f is nondeterministic in x), but it probably won't generalize to new examples.
Prefer to find more compact decision trees.

Hypothesis spaces
How many distinct decision trees with n Boolean attributes?
= number of Boolean functions
= number of distinct truth tables with 2^n rows
= 2^(2^n)
E.g., with 6 Boolean attributes, there are 2^64 = 18,446,744,073,709,551,616 trees (a two-line check follows).
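The counting argument, checked in a couple of lines (the loop bound is arbitrary):

```python
# Each Boolean function of n attributes is one way of filling the output
# column of a truth table with 2**n rows, so there are 2**(2**n) of them,
# and a decision tree can express every one.
for n in range(1, 7):
    print(n, 2 ** (2 ** n))  # 4, 16, 256, 65536, 4294967296, 18446744073709551616
```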
How many purely conjunctive hypotheses (e.g., Hungry ∧ ¬Rain)?
Each attribute can be in (positive), in (negative), or out
⇒ 3^n distinct conjunctive hypotheses

A more expressive hypothesis space
- increases the chance that the target function can be expressed
- increases the number of hypotheses consistent with the training set
⇒ may get worse predictions

Decision tree learning
Aim: find a small tree consistent with the training examples.
Idea: (recursively) choose the "most significant" attribute as the root of each (sub)tree.

function DTL(examples, attributes, default) returns a decision tree
    if examples is empty then return default
    else if all examples have the same classification then return the classification
    else if attributes is empty then return Mode(examples)
    else
        best ← Choose-Attribute(attributes, examples)
        tree ← a new decision tree with root test best
        for each value v_i of best do
            examples_i ← {elements of examples with best = v_i}
            subtree ← DTL(examples_i, attributes − best, Mode(examples))
            add a branch to tree with label v_i and subtree subtree
        return tree

Choosing an attribute
Idea: a good attribute splits the examples into subsets that are (ideally) all positive or all negative.
[Figure: splitting the 12 examples on Patrons? gives None (all negative), Some (all positive), and Full (mixed); splitting on Type? gives French, Italian, Thai, and Burger subsets that are each half positive, half negative.]
Patrons? is the better choice: it gives information about the classification. (A runnable version of DTL follows.)
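A runnable sketch of the DTL pseudocode, with information gain playing the role of Choose-Attribute. The dict-of-attributes example format, the function names, and the "WillWait" target label are illustrative choices, not fixed by the text:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy in bits of a list of classification labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(examples, attr, target):
    """Entropy at this node minus the expected entropy after splitting on attr."""
    before = entropy([e[target] for e in examples])
    after = 0.0
    for v in {e[attr] for e in examples}:
        subset = [e for e in examples if e[attr] == v]
        after += len(subset) / len(examples) * entropy([e[target] for e in subset])
    return before - after

def mode(examples, target):
    """Most common classification among the examples."""
    return Counter(e[target] for e in examples).most_common(1)[0][0]

def dtl(examples, attributes, default, target="WillWait"):
    """Decision-tree learning, mirroring the DTL pseudocode above.
    Returns a classification or a nested dict {attr: {value: subtree}}.
    Unlike the pseudocode, it branches only on values seen in examples."""
    if not examples:
        return default
    classes = {e[target] for e in examples}
    if len(classes) == 1:
        return classes.pop()
    if not attributes:
        return mode(examples, target)
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    for v in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = dtl(subset, [a for a in attributes if a != best],
                            mode(examples, target), target)
    return tree
```

On the 12 restaurant examples, information gain selects Patrons? at the root, matching the hand computation under "Information contd." below.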
Information
Information answers questions.
The more clueless I am about the answer initially, the more information is contained in the answer.
Scale: 1 bit = answer to a Boolean question with prior ⟨0.5, 0.5⟩.
Information in an answer when the prior is ⟨P_1, ..., P_n⟩ is
    H(⟨P_1, ..., P_n⟩) = Σ_i −P_i log_2 P_i
(also called the entropy of the prior).

Information contd.
Suppose we have p positive and n negative examples at the root:
⇒ H(⟨p/(p+n), n/(p+n)⟩) bits are needed to classify a new example.
E.g., for the 12 restaurant examples, p = n = 6, so we need 1 bit.
An attribute splits the examples E into subsets E_i, each of which (we hope) needs less information to complete the classification.
Let E_i have p_i positive and n_i negative examples:
⇒ H(⟨p_i/(p_i+n_i), n_i/(p_i+n_i)⟩) bits are needed to classify a new example
⇒ the expected number of bits per example over all branches is
    Σ_i ((p_i + n_i)/(p + n)) · H(⟨p_i/(p_i+n_i), n_i/(p_i+n_i)⟩)
For Patrons?, this is 0.459 bits; for Type?, it is (still) 1 bit (both figures are checked in the sketch at the end of this section).
⇒ choose the attribute that minimizes the remaining information needed.

Example contd.
Decision tree learned from the 12 examples:

Patrons?
  None: No
  Some: Yes
  Full: Hungry?
    No: No
    Yes: Type?
      French: Yes
      Italian: No
      Thai: Fri/Sat? (No: No; Yes: Yes)
      Burger: Yes

Substantially simpler than the "true" tree: a more complex hypothesis isn't justified by a small amount of data.

Performance measurement
How do we know that h ≈ f? (Hume's Problem of Induction)
1) Use theorems of computational/statistical learning theory.
2) Try h on a new test set of examples (use the same distribution over example space as the training set).
Learning curve = % correct on the test set as a function of training set size.
[Figure: learning curve for DTL on the restaurant data; % correct on the test set rises from about 0.4 toward 1 as the training set grows from 0 to 100 examples.]

Performance measurement contd.
The learning curve depends on
- realizable (can express the target function) vs. non-realizable; non-realizability can be due to missing attributes or a restricted hypothesis class (e.g., a thresholded linear function)
- redundant expressiveness (e.g., loads of irrelevant attributes)
[Figure: % correct vs. number of examples; the realizable curve approaches 1, the redundant curve rises more slowly, and the nonrealizable curve plateaus below 1.]

Summary
Learning is needed for unknown environments (and lazy designers).
Learning agent = performance element + learning element.
The learning method depends on the type of performance element, the available feedback, the type of component to be improved, and its representation.
For supervised learning, the aim is to find a simple hypothesis approximately consistent with the training examples.
Decision tree learning uses information gain.
Learning performance = prediction accuracy measured on a test set.
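To make the 0.459-bit and 1-bit figures from "Information contd." concrete, a small check; the per-branch positive/negative counts are read off the restaurant table, and the H helper mirrors the entropy formula above:

```python
import math

def H(*probs):
    # entropy of a prior in bits, treating 0 * log 0 as 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Patrons? splits the 12 examples into None (0+, 2-), Some (4+, 0-), Full (2+, 4-)
remainder_patrons = (2/12)*H(0, 1) + (4/12)*H(1, 0) + (6/12)*H(2/6, 4/6)
# Type? splits them into French (1+, 1-), Italian (1+, 1-), Thai (2+, 2-), Burger (2+, 2-)
remainder_type = 2 * (2/12)*H(1/2, 1/2) + 2 * (4/12)*H(2/4, 2/4)
print(round(remainder_patrons, 3), remainder_type)  # 0.459 and 1.0 bits
```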