Machine Learning: ML for NLP
Lecturer: Kevin Koidl; Assistant Lecturer: Alfredo Maldonado


https://www.cs.tcd.ie/kevin.koidl/cs4062/
kevin.koidl@scss.tcd.ie, maldonaa@tcd.ie
2017

Outline
- Does TC (text categorisation) and NLP need machine learning?
- What can machine learning do for us ("what has machine learning ever done for us?")
- What is machine learning?
- How do we design machine learning systems?
- What is a well-defined learning problem?
- An example

Why Machine Learning?
- Progress in algorithms and theory
- Growing flood of online data
- Computational power is available
- Rich application potential
- The knowledge acquisition bottleneck

Well-Defined Learning Problems
Learning = improving with experience at some task: improve over task T, with respect to performance measure P, based on experience E.

Example (checkers):
- T: play checkers
- P: percentage of games won in a tournament
- E: games played against itself

Machine Learning Definitions
- "Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed." (Arthur Samuel, 1959)
- "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." (Tom M. Mitchell, 1998)

Machine Learning Challenges
- How can a computer program acquire experience?
- How can this experience be codified?
- What are examples of a codified experience?

User Interface Agents
An (artificial) agent may help users cope with increasing information:
- "An agent is a computer system that is situated in some environment and that is capable of autonomous action in its environment in order to meet its design objectives." (Wooldridge, 2002)
- Definition of a rational agent: "A rational agent should select an action that is expected to maximize its performance measure, given the evidence provided." [Peter Norvig, 2003]

Do Agents Need Machine Learning?
Practical concerns:
- Large amounts of language data have become available (on the web and elsewhere), and one needs to be able to make sense of them all.
- Knowledge engineering methods do not seem able to cope with the growing flood of data.
- Machine learning can be used to automate knowledge acquisition and inference.
Theoretical contributions:
- Reasonably solid foundations (theory and algorithms).

Machine Learning Categories
Main machine learning categories:
- Supervised learning: the computer receives input and output data (labelled data) and learns a mapping between them.
- Unsupervised learning: the input data has no labels; the learning algorithm has to identify structure in the input data.

Supervised Learning
Supervised machine learning problem categories:
- Regression problem: continuous output. For example, predict a percentage grade (e.g. 76%) based on hours studied.
- Classification problem: discrete output. For example, predict a letter grade (e.g. A) based on hours studied.

Supervised Learning Models
Typical machine learning models:
- Support Vector Machines (SVMs)
- Gaussian processes
- Artificial Neural Networks (ANNs), trained with e.g. back-propagation
- Decision trees and random forests (for classification and regression)
What is the goal of a model, and how do I select the right one?

Application Niches for Machine Learning
- ML for text classification, for use in, for instance, self-customizing programs: a newsreader that learns user interests.
- Data mining: using historical data to improve decisions (medical records to medical knowledge; analysis of customer behaviour).
- Software applications we cannot program by hand: autonomous driving, speech recognition.
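The regression/classification distinction can be illustrated with the "hours studied" example above. The following sketch uses an invented toy dataset and a plain least-squares fit; it is for illustration only, not part of the lecture's material.

```python
# Toy illustration of regression vs. classification (data invented).
hours = [1, 2, 3, 4, 5, 6]           # hours studied (input)
scores = [35, 48, 55, 68, 74, 88]    # percentage grades (continuous targets)

# Regression: fit score = w*hours + b by ordinary least squares.
n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(scores) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, scores)) / \
    sum((x - mean_x) ** 2 for x in hours)
b = mean_y - w * mean_x
predicted_score = w * 4.5 + b        # continuous output (about 71.5)

# Classification: map the same input to a discrete label
# (grade boundaries invented for illustration).
def letter_grade(score):
    return "A" if score >= 70 else "B" if score >= 50 else "F"

predicted_label = letter_grade(predicted_score)  # discrete output: "A"
```

The same input (hours studied) feeds both problems; only the output type changes, which is exactly what separates regression from classification.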

Examples: A Data Mining Problem

Patient records are sequences of snapshots over time, e.g. Patient103:

                          time=1    time=2    ...  time=n
  Age:                    23        23             23
  FirstPregnancy:         no        no             no
  Anemia:                 no        no             no
  Diabetes:               no        YES            no
  PreviousPrematureBirth: no        no             no
  Ultrasound:             ?         abnormal       ?
  Elective C-Section:     ?         no             no
  Emergency C-Section:    ?         ?              Yes

Given: 9714 patient records, each describing a pregnancy and birth; each patient record contains 215 features.
Learn to predict: classes of future patients at high risk for an emergency Caesarean section.

Examples: Data Mining Results

A rule learned from such records:

  If   no previous vaginal delivery, and
       abnormal 2nd-trimester ultrasound, and
       malpresentation at admission
  Then the probability of an emergency C-section is 0.6

Accuracy over training data: 26/41 = 0.63; over test data: 12/20 = 0.60.

Other Prediction Problems

Customer purchase behaviour, e.g. Customer103:

                   time=t0    time=t1    ...  time=tn
  Sex:             M          M               M
  Age:             53         53              53
  Income:          $50k       $50k            $50k
  Own House:       Yes        Yes             Yes
  MS Products:     Word       Word            Word
  Computer:        386 PC     Pentium         Pentium
  Purchase Excel?: ?          ?               Yes
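How such a rule's accuracy figures (26/41 on training data, 12/20 on test data) are computed can be sketched in a few lines: take the records the rule covers, and measure how often the predicted outcome actually occurred. The field names and the tiny record set below are invented for illustration.

```python
# Minimal sketch: evaluating an if-then rule on labelled records (data invented).
def rule_fires(record):
    """Antecedent of the learned rule (field names invented for illustration)."""
    return (not record["previous_vaginal_delivery"]
            and record["abnormal_2nd_trimester_ultrasound"]
            and record["malpresentation_at_admission"])

def rule_accuracy(records):
    """Fraction of covered records where the predicted outcome holds."""
    covered = [r for r in records if rule_fires(r)]
    if not covered:
        return None
    hits = sum(r["emergency_c_section"] for r in covered)
    return hits / len(covered)

records = [
    {"previous_vaginal_delivery": False, "abnormal_2nd_trimester_ultrasound": True,
     "malpresentation_at_admission": True, "emergency_c_section": True},
    {"previous_vaginal_delivery": False, "abnormal_2nd_trimester_ultrasound": True,
     "malpresentation_at_admission": True, "emergency_c_section": False},
    {"previous_vaginal_delivery": True, "abnormal_2nd_trimester_ultrasound": False,
     "malpresentation_at_admission": False, "emergency_c_section": False},
]
print(rule_accuracy(records))  # rule covers 2 records, 1 correct -> 0.5
```

Running the same function on held-out records instead of training records gives the test-set figure, which is typically a little lower, as in the 0.63 vs 0.60 result above.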

Process optimisation, e.g. Product72:

                          time=t0              time=t1           time=tn
  Stage:                  mix                  cook              cool
  Control:                mixing speed 60rpm   temperature 325   fan speed medium
  Viscosity:              1.3                  3.2               1.3
  Fat content:            15%                  12%               12%
  Density:                2.8                  1.1               1.2
  Spectral peak:          2800                 3200              3100
  Product underweight?:   ??                   ??                Yes

Problems Too Difficult to Program by Hand
- ALVINN (Pomerleau, 1994): a learned system that drives at 70 mph.

Software That Adapts to Its User
- Recommendation services, Bayesian spam filtering, etc.

Perspectives

Common applications use first-generation algorithms (neural nets, decision trees, regression) applied to well-formatted databases.

Advanced applications, areas of active research:
- Learn across full mixed-media data
- Learn across multiple internal databases, plus the web and newsfeeds
- Learn by active experimentation
- Learn decisions rather than predictions
- Cumulative, lifelong learning
- Deep learning

Defining Learning

ML has been studied from various perspectives (AI, control theory, statistics, information theory, ...). From an AI perspective, the general definition is formulated in terms of agents and tasks, e.g.:

  "[An agent] is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with E." (Mitchell, 1997, p. 2)

From the statistics perspective: model fitting, ...

Agent Programs

Agent = architecture + program, where the architecture comprises sensors and actuators and the program is the decision process. Examples: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning (intelligent) agents.

Simple Reflex Agent

[Figure: simple reflex agent architecture]

Model-based Reflex Agent

[Figure: model-based reflex agent architecture]

Learning Agents

[Figure: learning agent architecture. A critic compares behaviour against a performance standard and gives feedback to the learning element, which sets learning goals, makes changes to the performance element's knowledge, and drives a problem generator; the agent interacts with the environment through sensors and effectors.]

The Architecture in Some Detail
- Performance element: responsible for selecting appropriate actions.
- Learning element: responsible for making improvements.

- Critic: evaluates action selection against a performance standard.
- Problem generator: suggests actions that might lead to new and instructive experiences.

Designing a Machine Learning System

Main design decisions:
- Training experience: how will the system access and use data?
- Target function: what exactly should be learnt?
- Hypothesis representation: how will we represent the concepts to be learnt?
- Inductive inference: what specific algorithm should be used to learn the target concepts?

Accessing and Using Data

How will the system be exposed to its training experience? Some distinctions:
- Direct or indirect access:
  - indirect access: records of past experiences, corpora
  - direct access: situated agents, reinforcement learning
- Source of feedback ("teacher"):
  - supervised learning
  - unsupervised learning
  - mixed: semi-supervised ("transductive"), active learning

Determining the Target Function

The target function specifies the concept to be learnt. In supervised learning, the target function is assumed to be specified through annotation of training data or some form of feedback:
- a corpus of words annotated for word senses, e.g. f : W x S -> {0, 1}
- a database of medical data
- user feedback in spam filtering
- assessment of outcomes of actions by a situated agent

Representing Hypotheses and Data

The goal of the learning algorithm is to induce an approximation f^ of a target function f. The data used in the induction process need to be represented uniformly, e.g. representation of text as a bag of words, Boolean vectors, etc.
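The two text representations just mentioned can be sketched directly. The functions below are a minimal illustration (whitespace tokenisation, no stemming or stop-word handling, which a real system would add):

```python
# Minimal sketch: two uniform representations of text.
def bag_of_words(text):
    """Map a document to word counts (bag-of-words representation)."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def boolean_vector(text, vocabulary):
    """Map a document to a 0/1 vector over a fixed vocabulary."""
    words = set(text.lower().split())
    return [1 if w in words else 0 for w in vocabulary]

doc = "to be or not to be"
print(bag_of_words(doc))                             # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
print(boolean_vector(doc, ["be", "or", "machine"]))  # [1, 1, 0]
```

Both discard word order entirely, which is itself a representational bias: as the next section notes, the choice of representation constrains which hypotheses can be learnt at all.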

The choice of representation often constrains the space of available hypotheses, and hence the possible f^'s. E.g., the approximation to be learnt could map conjunctions of Boolean literals to categories, or it could assume that the co-occurrence of words does not matter for categorisation, etc.

Deduction and Induction
- Deduction (conclusion guaranteed): from general premises to a conclusion. E.g.: if x = 4 and y = 1, then 2x + y = 9.
- Induction (conclusion likely): from instances to generalisations. E.g.: all of the swans we have seen are white; therefore, (we expect) all swans to be white.

Machine learning algorithms produce models that generalise from the instances presented to the algorithm. But all (useful) learners have some form of inductive bias:
- in terms of representation, as mentioned above,
- but also in terms of their preferences in generalisation procedures, e.g. prefer simpler hypotheses, or prefer shorter hypotheses, or incorporate domain (expert) knowledge, etc.

Given a function f^ : X -> C trained on a set of instances D_c describing a concept c, we say that the inductive bias of f^ is a minimal set of assertions B such that for any set of instances X:

  for all x in X: (B and D_c and x) |- f^(x)

Choosing an Algorithm

The induction task is a search, in a large space of hypotheses, for a hypothesis (or model) that fits the data and the sample of the target function available to the learner. The choice of learning algorithm is conditioned by the choice of representation. Since the target function is not completely accessible to the learner, the algorithm needs to operate under the inductive learning assumption: an approximation that performs well over a sufficiently large set of instances will perform well on unseen data.

Note: Computational Learning Theory

Computational learning theory deals in a precise manner with the concepts highlighted above, namely what it means for an approximation (a learnt function) to perform well, and what counts as a sufficiently large set of instances. An influential framework is the probably approximately correct (PAC) learning framework proposed by Valiant (1984). For an accessible introduction to several aspects of machine learning, see (Domingos, 2012). For some interesting implications, see the no-free-lunch theorems and the Extended Bayesian Framework (Wolpert, 1996).

An Example: Learning to Play (Mitchell, 1997)

Learning to play draughts (checkers):
- Task? (target function, data representation)
- Training experience?
- Performance measure?

A Target Function

A target function for a draughts (checkers) player, f : Board -> R:
- if b is a final board state that is won, then f(b) = 100
- if b is a final board state that is lost, then f(b) = -100
- if b is a final board state that is drawn, then f(b) = 0
- if b is not a final state in the game, then f(b) = f(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game.

How feasible would it be to implement this directly? Not very feasible: how can we evaluate intermediate game states without playing out the whole game?

Representation

A collection of rules? A neural network? A polynomial function of board features? Here: an approximation as a linear combination of features,

  f^(b) = w0 + w1*bp(b) + w2*rp(b) + w3*bk(b) + w4*rk(b) + w5*bt(b) + w6*rt(b)

where:
- bp(b): number of black pieces on board b
- rp(b): number of red pieces on b
- bk(b): number of black kings on b
- rk(b): number of red kings on b
- bt(b): number of red pieces threatened by black (i.e., which can be taken on black's next turn)
- rt(b): number of black pieces threatened by red

Training Experience

Distinctions:
- f(b): the true target function
- f^(b): the learnt function
- f_train(b): the training value
A training set contains instances and their corresponding training values.

Problem: how do we estimate training values? A simple rule:

  f_train(b) <- f^(Successor(b))

where Successor(b) denotes the next board state following the program's move and the opponent's response. Note that f^(Successor(b)) is an estimate of the value of board state b. Does f^(b) tend to become more or less accurate for board states closer to the end of the game?

Example: Choosing a Function Approximation

We learn the target function by approximation f^(b), based on a set of training examples, each describing a board state b together with its training value f_train(b). Each training example is an ordered pair <b, f_train(b)>, for example:

  <<bp = 3, rp = 0, bk = 1, rk = 0, bt = 0, rt = 0>, +100>

Here f_train(b) is +100: black has won (no red pieces remain).
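The linear evaluation function above translates almost directly into code. In the sketch below the weight values are arbitrary placeholders (the whole point of the next section is how to learn them), and the board is encoded simply as a dictionary of the six feature counts:

```python
# Sketch of the linear evaluation function f^(b).
# Weights are placeholder values, to be learnt; board encoding is illustrative.
FEATURES = ["bp", "rp", "bk", "rk", "bt", "rt"]

def f_hat(board, weights):
    """weights = [w0, w1, ..., w6]; board maps each feature name to its count."""
    return weights[0] + sum(w * board[name]
                            for w, name in zip(weights[1:], FEATURES))

board = {"bp": 3, "rp": 0, "bk": 1, "rk": 0, "bt": 0, "rt": 0}
weights = [0, 5, -5, 10, -10, 2, -2]   # placeholder values
print(f_hat(board, weights))            # 0 + 5*3 - 5*0 + 10*1 + ... = 25
```

Note how cheap f^(b) is to evaluate compared with the true f(b), which would require playing the game out to a final state.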

How do we learn the weights?

Algorithm 1: Least Mean Squares (LMS)

  LMS(c : learning rate)
    for each training instance <b, f_train(b)> do
      compute error(b) for the current approximation (using current weights):
        error(b) = f_train(b) - f^(b)
      for each board feature t_i in {bp(b), rp(b), ...} do
        update weight w_i:
          w_i <- w_i + c * t_i * error(b)
      done
    done

LMS minimises the squared error between the training data and the current approximation:

  E = sum over b in D_train of (f_train(b) - f^(b))^2

Notice that if error(b) = 0 (i.e. target and approximation match), no weights change. Similarly, if t_i = 0 (i.e. feature t_i does not occur), the corresponding weight does not get updated. This weight update rule can be shown to perform a gradient descent search for the minimal squared error (i.e. weight updates are proportional to -grad E, where grad E = [dE/dw0, dE/dw1, ...]). That the LMS update rule implements gradient descent can be seen by differentiating E:

  dE/dw_i = d/dw_i sum_b (f_train(b) - f^(b))^2
          = sum_b 2 (f_train(b) - f^(b)) * d/dw_i (f_train(b) - f^(b))
          = -2 sum_b (f_train(b) - f^(b)) * t_i
          = -2 sum_b error(b) * t_i

so moving each w_i by c * t_i * error(b) steps in the direction of -dE/dw_i.

Learning Agent Architecture
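Algorithm 1 is easy to run end to end on the six checkers features. The sketch below uses a tiny invented training set of two (board, training value) pairs; in the real setting the training values would come from self-play via f_train(b) <- f^(Successor(b)).

```python
# Runnable sketch of the LMS weight-update rule (training data invented).
FEATURES = ["bp", "rp", "bk", "rk", "bt", "rt"]

def f_hat(board, w):
    return w[0] + sum(wi * board[name] for wi, name in zip(w[1:], FEATURES))

def lms_update(w, board, f_train, c=0.01):
    """One LMS step: w_i <- w_i + c * t_i * error(b), with t_0 = 1 for the bias."""
    error = f_train - f_hat(board, w)
    features = [1] + [board[name] for name in FEATURES]
    return [wi + c * ti * error for wi, ti in zip(w, features)]

# Two invented <board, f_train(b)> pairs: a won and a lost final state.
data = [
    ({"bp": 3, "rp": 0, "bk": 1, "rk": 0, "bt": 0, "rt": 0}, 100.0),
    ({"bp": 0, "rp": 2, "bk": 0, "rk": 1, "bt": 0, "rt": 0}, -100.0),
]

w = [0.0] * 7
for _ in range(500):                  # repeated passes shrink the squared error
    for board, target in data:
        w = lms_update(w, board, target)

print(round(f_hat(data[0][0], w)))    # approaches +100
print(round(f_hat(data[1][0], w)))    # approaches -100
```

As the derivation above predicts, each pass multiplies the remaining error by a factor below one (the learning rate times the feature norm stays small), so the approximation converges to the training values.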

[Figure: learning agent architecture instantiated for checkers. The critic turns feedback from the performance standard into training instances {<b, f_train(b)>, ...} using f_train(b) <- f^(Successor(b)); the learning element changes the hypothesis f^; the performance element produces a solution (b1, ..., bn); the problem generator proposes a new problem (e.g. an initial board); sensors and effectors connect the agent to the environment.]

Design Choices: Summary
- Determine type of training experience: games against experts / games against self / table of correct moves
- Determine target function: board -> move / board -> value
- Determine representation of learned function: polynomial / linear function of six features / artificial neural network
- Determine learning algorithm: gradient descent / linear programming
- Completed design

Mapping and Structure

Some target functions (especially in NLP) fit more naturally into a transducer pattern, and naturally have a signature

  f : sequence over vocabulary Sigma -> sequence over (Sigma x labels C)

e.g. POS tagging (part-of-speech tagging):

  last week IBM bought Lotus  ->  last/JJ week/NN IBM/NNP bought/VBD Lotus/NNP
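The transducer signature can be made concrete with a deliberately naive tagger: a fixed word-to-tag lexicon (invented here, and covering only the example sentence; a real tagger would learn tag assignments from an annotated corpus and handle ambiguity and unknown words).

```python
# Minimal sketch of the sequence-transducer signature for POS tagging.
# The lexicon is invented for illustration; a real tagger learns it from data.
LEXICON = {"last": "JJ", "week": "NN", "IBM": "NNP",
           "bought": "VBD", "Lotus": "NNP"}

def tag(sentence):
    """f : sequence over Sigma -> sequence over Sigma x C."""
    return [(w, LEXICON.get(w, "NN")) for w in sentence.split()]

print(tag("last week IBM bought Lotus"))
# [('last', 'JJ'), ('week', 'NN'), ('IBM', 'NNP'), ('bought', 'VBD'), ('Lotus', 'NNP')]
```

The input and output sequences have the same length, which is what distinguishes this pattern from the tree-valued functions discussed next.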

Targeting Sequences and Trees

Other functions do not fit this pattern either, but instead have a signature

  f : sequence over vocabulary Sigma -> tree over (Sigma x labels C)

e.g. parsing "last week IBM bought Lotus" into the tree

  [S [NP last week] [NP IBM] [VP [VBD bought] [NP Lotus]]]

Issues in Machine Learning
- What algorithms can approximate functions well (and when)?
- How does the number of training examples influence accuracy?
- How does the complexity of the hypothesis representation impact it?
- How does noisy data influence accuracy?
- What are the theoretical limits of learnability?
- How can prior knowledge of the learner help?
- What clues can we get from biological learning systems?
- How can systems alter their own representations?

Some Application Examples We Will See in Some Detail

Applications of supervised learning in NLP:
- Text categorisation
- POS tagging (briefly)
- Word-sense disambiguation (briefly)
Unsupervised learning:
- Keyword selection, feature-set reduction
- Word-sense disambiguation (revisited)

References

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10):78-87.

Mitchell, T. M. (1997). Machine Learning. McGraw-Hill.

Pomerleau, D. A. (1994). Neural Network Perception for Mobile Robot Guidance. Kluwer, Dordrecht, Netherlands.

Valiant, L. (1984). A theory of the learnable. Communications of the ACM, 27(11):1134-1142.

Wolpert, D. H. (1996). The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341-1390.

Wooldridge, M. (2002). An Introduction to MultiAgent Systems. John Wiley & Sons.