Statistical Pattern Recognition


Statistical Pattern Recognition
- Introduction
- Bayesian decision theory
- Maximum likelihood and Bayesian parameter estimation
- Nonparametric techniques
- Linear Discriminant Functions
- Stochastic Methods
- Algorithm-independent machine learning
- Unsupervised Learning and Clustering

Textbooks
- Pattern Classification, 2nd ed., Richard O. Duda, Peter E. Hart, and David G. Stork
- Pattern Recognition, 4th ed., Theodoridis and Koutroumbas
- Statistical Pattern Recognition, 3rd ed., Andrew R. Webb and Keith D. Copsey
- Pattern Recognition and Machine Learning, Bishop
- Introduction to Statistical Pattern Recognition, 2nd ed., Fukunaga
- A Statistical Approach to Neural Networks for Pattern Recognition, R. A. Dunne
- Pattern Recognition and Image Analysis, Gose and Johnsonbaugh

Grading Criteria
- Midterm exam: 25%
- Homework, computer assignments, and projects: 30%
- Final exam: 45%
Course website: http://ivut.iut.ac.ir (e-books available; registration in the course system is required)

Chapter 1: Introduction to Pattern Recognition
- Machine Perception
- An example
- Pattern Recognition Systems
- The Design Cycle
- Learning and Adaptation
- Conclusion
All materials used in this course were taken from the textbook Pattern Classification by Duda et al., John Wiley & Sons, 2001 (Djamel Bouchaffra).

Pattern Recognition
"The real power of human thinking is based on recognizing patterns. The better computers get at pattern recognition, the more humanlike they will become." (Ray Kurzweil, NY Times, Nov 24, 2003)

What is a Pattern?
"A pattern is the opposite of a chaos; it is an entity, vaguely defined, that could be given a name." (Watanabe)

Recognition
Identification of a pattern as a member of a category we already know or are familiar with:
- Classification (known categories)
- Clustering (creation of new categories)

Pattern Recognition
Given an input pattern, make a decision about the category or class of the pattern. Pattern recognition is a very broad subject with many applications. In this course we will study a variety of techniques for solving pattern recognition problems and discuss their relative strengths and weaknesses.

Pattern Class
- A collection of similar (not necessarily identical) objects
- A class is defined by class samples (paradigms, exemplars, prototypes)
- Inter-class variability vs. intra-class variability

Pattern Class Model
Different descriptions, typically mathematical in form, for each class/population. Given a pattern, choose the best-fitting model and assign the pattern to the class associated with that model.

Intra-class and Inter-class Variability
Examples: the letter T in different typefaces; the same face under different expressions and poses.

Machine Perception
Build a machine that can recognize patterns:
- Speech recognition
- Fingerprint identification
- OCR (Optical Character Recognition)
- DNA sequence identification

Pattern Recognition Applications

Problem | Input | Output
Speech recognition | Speech waveforms | Spoken words, speaker identity
Non-destructive testing | Ultrasound, eddy current, acoustic emission waveforms | Presence/absence of flaw, type of flaw
Detection and diagnosis of disease | EKG, EEG waveforms | Types of cardiac conditions, classes of brain conditions
Natural resource identification | Multispectral images | Terrain forms, vegetation cover
Aerial reconnaissance | Visual, infrared, radar images | Tanks, airfields
Character recognition (page readers, zip code, license plate) | Optical scanned image | Alphanumeric characters

Pattern Recognition Applications (continued)

Problem | Input | Output
Identification and counting of cells | Slides of blood samples, micro-sections of tissues | Type of cells
Inspection (PC boards, IC masks, textiles) | Scanned image (visible, infrared) | Acceptable/unacceptable
Manufacturing | 3-D images (structured light, laser, stereo) | Identify objects, pose, assembly
Web search | Key words specified by a user | Text relevant to the user
Fingerprint identification | Input image from fingerprint sensors | Owner of the fingerprint, fingerprint classes
Online handwriting retrieval | Query word written by a user | Occurrence of the word in the database

An Example
Sort incoming fish on a conveyor belt according to species, using optical sensing. The two species are sea bass and salmon.

Problem Analysis
Set up a camera and take some sample images to extract features:
- Length
- Lightness
- Width
- Number and shape of fins
- Position of the mouth, etc.
This is the set of all suggested features to explore for use in our classifier.

Preprocessing
Use a segmentation operation to isolate the fish from one another and from the background. Information from a single fish is sent to a feature extractor, whose purpose is to reduce the data by measuring certain features. The features are then passed to a classifier.

Figure 1.1: The objects to be classified are first sensed by a transducer (camera), whose signals are preprocessed; then the features are extracted and finally the classification is emitted (here either "salmon" or "sea bass"). Although the information flow is often chosen to be from the source to the classifier ("bottom-up"), some systems employ top-down flow as well, in which earlier levels of processing can be altered based on the tentative or preliminary response of later levels (gray arrows). Yet others combine two or more stages into a unified step, such as simultaneous segmentation and feature extraction.

Classification
Select the length of the fish as a possible feature for discrimination. Consider the histograms of the length feature for the two categories: no single threshold value l* (decision boundary) will unambiguously discriminate between the two categories; using length alone, we will have some errors. The marked value l* leads to the smallest number of errors, on average.

Length alone is a poor feature! Select the lightness as a possible feature instead. Again, no single threshold value x* (decision boundary) on the lightness histograms will unambiguously discriminate between the two categories; using lightness alone, we will still have some errors. The marked value x* leads to the smallest number of errors, on average.
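Finding the minimum-error threshold x* can be sketched in a few lines of Python. This is an illustrative sketch only: the lightness values are invented, and `best_threshold` is a hypothetical helper, not code from the textbook.

```python
# Exhaustive search for the single-feature threshold x* that minimizes
# training errors. Decision rule: classify as sea bass if lightness > x*,
# otherwise salmon. All feature values below are hypothetical.

def best_threshold(salmon, sea_bass):
    """Return (x_star, errors) minimizing misclassifications on the data."""
    candidates = sorted(set(salmon) | set(sea_bass))
    best = (None, len(salmon) + len(sea_bass))
    for t in candidates:
        errors = sum(1 for x in salmon if x > t)       # salmon called sea bass
        errors += sum(1 for x in sea_bass if x <= t)   # sea bass called salmon
        if errors < best[1]:
            best = (t, errors)
    return best

salmon = [1.0, 2.1, 2.8, 3.0, 3.5, 4.2]      # hypothetical lightness values
sea_bass = [3.8, 4.5, 5.0, 5.6, 6.1, 7.0]

x_star, errs = best_threshold(salmon, sea_bass)
print(x_star, errs)   # the overlapping point forces at least one error
```

Because the two histograms overlap, even the best x* leaves one training error, which is exactly the situation in the lightness figure.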

Threshold decision boundary and cost relationship: move our decision boundary toward smaller values of lightness in order to minimize the expected cost (reduce the number of sea bass that are classified as salmon). This is the task of decision theory.
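The cost-driven shift of the boundary can be made concrete. In this sketch (invented data; `min_risk_threshold` is a hypothetical helper), charging five times as much for a sea bass sold as salmon moves the chosen threshold toward smaller lightness, exactly as the slide describes.

```python
# Cost-sensitive threshold selection: penalize "sea bass classified as
# salmon" more heavily and watch the boundary move to smaller lightness.
# Rule: lightness > t means sea bass. All values are hypothetical.

def min_risk_threshold(salmon, sea_bass, cost_fp=1.0, cost_fn=1.0):
    # cost_fp: salmon called sea bass; cost_fn: sea bass called salmon
    candidates = sorted(set(salmon) | set(sea_bass))
    best_t, best_cost = None, float("inf")
    for t in candidates:
        cost = cost_fp * sum(1 for x in salmon if x > t)
        cost += cost_fn * sum(1 for x in sea_bass if x <= t)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

salmon = [1.0, 2.1, 2.8, 3.0, 3.5, 4.2]
sea_bass = [3.2, 4.5, 5.0, 5.6, 6.1, 7.0]

t_equal = min_risk_threshold(salmon, sea_bass)               # symmetric costs
t_risky = min_risk_threshold(salmon, sea_bass, cost_fn=5.0)  # bass-as-salmon costly
print(t_equal, t_risky)   # the costly error pushes the boundary left
```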

Adopt the lightness and add the width of the fish, giving the feature vector x = [x1, x2] (lightness, width). The feature extractor has thus reduced the image of each fish to a point, or feature vector, x in a two-dimensional feature space.

The two features of lightness and width for sea bass and salmon: the dark line might serve as a decision boundary of our classifier. The overall classification error on the data shown is lower than if we use only one feature as in Fig. 1.3, but there will still be some errors.
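A straight-line boundary in the two-feature space can be learned with a classic perceptron. This is a minimal sketch on invented, linearly separable (lightness, width) pairs; `train_perceptron` and the data are illustrative, not from the text.

```python
# A minimal perceptron learning a linear decision boundary
# w1*lightness + w2*width + b = 0. Labels: +1 = sea bass, -1 = salmon.
# Training data are hypothetical and linearly separable.

def train_perceptron(data, epochs=10000, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        updated = False
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified point
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
                updated = True
        if not updated:       # converged: every training point is correct
            break
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

data = [((2.0, 3.0), -1), ((2.5, 2.5), -1), ((3.0, 3.5), -1),   # salmon
        ((5.0, 6.0), +1), ((5.5, 5.0), +1), ((6.0, 6.5), +1)]   # sea bass

w, b = train_perceptron(data)
print(w, b)
```

For separable data the perceptron convergence theorem guarantees zero training errors; the real fish data overlap, which is why some errors remain in the figure.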

We might add other features that are not correlated with the ones we already have, taking care not to reduce performance by adding noisy features. Ideally, the best decision boundary is the one that provides optimal performance, such as in the following figure:


However, our satisfaction is premature, because the central aim of designing a classifier is to correctly classify novel input. This is the issue of generalization!

The decision boundary shown might represent the optimal trade-off between performance on the training set and simplicity of the classifier.

Generalization
One approach would be to get more training samples to obtain a better estimate of the true underlying characteristics, for instance the probability distributions of the categories. We might also be satisfied with slightly poorer performance on the training samples if it means our classifier will have better performance on novel patterns (a very complex recognizer versus a simpler classifier?).
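The train-versus-novel-pattern trade-off can be shown in miniature. In this sketch (invented 1-D data with one deliberately mislabeled training point), a memorizing 1-nearest-neighbor rule achieves zero training error yet generalizes worse than a simple fixed threshold; the helpers are hypothetical.

```python
# Overfitting in miniature: a 1-NN classifier memorizes the training set
# (zero training error) but a mislabeled training point costs it on test
# data, while a deliberately simple threshold rule generalizes perfectly.

def nn_predict(train, x):
    # 1-nearest-neighbor: return the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def threshold_predict(x):
    # deliberately simple model: class 1 iff x > 5
    return 1 if x > 5 else 0

train = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 0),
         (6.0, 1), (7.0, 1), (8.0, 1), (9.0, 1),
         (6.5, 0)]                         # mislabeled point (label noise)
test = [(1.5, 0), (3.5, 0), (4.5, 0), (6.2, 1), (6.6, 1), (7.5, 1), (8.5, 1)]

nn_train_err = sum(nn_predict(train, x) != y for x, y in train)
nn_test_err = sum(nn_predict(train, x) != y for x, y in test)
th_train_err = sum(threshold_predict(x) != y for x, y in train)
th_test_err = sum(threshold_predict(x) != y for x, y in test)
print(nn_train_err, nn_test_err)   # memorizer: perfect on train, errs on test
print(th_train_err, th_test_err)   # simple rule: one train error, none on test
```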

Decisions are fundamentally task- or cost-specific, and creating a single general-purpose artificial pattern recognition device is a profoundly difficult challenge. Classification is, at base, the task of recovering the model that generated the patterns; different classification techniques are useful depending on the type of candidate models. In statistical pattern recognition we focus on the statistical properties of the patterns (generally expressed as probability densities).

Although neural network pattern classification can be considered its own discipline because of its somewhat different intellectual pedigree, we will consider it a close descendant of statistical pattern recognition. If instead the model consists of some set of crisp logical rules, then we employ the methods of syntactic pattern recognition, where rules or grammars describe our decisions.

Representation
A central aspect of virtually every pattern recognition problem is achieving a good representation. In some cases patterns should be represented as vectors of real-valued numbers, in others as ordered lists of attributes, in yet others as descriptions of parts and their relations, and so forth. We seek a representation in which the patterns that lead to the same action are somehow close to one another, yet far from those that demand a different action.

We might wish to favor a small number of features, which might lead to simpler decision regions and a classifier that is easier to train. We might also wish to have features that are robust, i.e., relatively insensitive to noise or other errors. In practical applications we may need the classifier to act quickly, or to use few electronic components, little memory, or few processing steps.

Difficulties of Representation
How do you instruct someone (or some computer) to recognize caricatures in a magazine, let alone find a human figure in a misshapen piece of work? "A program that could distinguish between male and female faces in a random snapshot would probably earn its author a Ph.D. in computer science." (Penzias, 1989) A representation could consist of a vector of real-valued numbers, an ordered list of attributes, or parts and their relations.

Good Representation
- Should have some invariant properties (e.g., w.r.t. rotation, translation, scale)
- Accounts for intra-class variations
- Ability to discriminate between the pattern classes of interest
- Robustness to noise/occlusion
- Leads to simple decision making (e.g., a simple decision boundary)

Representation
Each pattern is represented as a point in the d-dimensional feature space. Features are domain-specific and should be invariant to translation, rotation, and scale. A good representation has small intra-class variation, large inter-class separation, and a simple decision rule.
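"Small intra-class variation, large inter-class separation" can be scored numerically. The sketch below uses a crude criterion in the spirit of Fisher's ratio, (difference of class means)^2 / (sum of class variances), on invented fish data; the function name and numbers are illustrative.

```python
# A crude separability score: (mean difference)^2 / (sum of variances).
# Larger is better: big inter-class separation, small intra-class spread.
# All feature values below are hypothetical.

from statistics import mean, pvariance

def separability(class_a, class_b):
    return ((mean(class_a) - mean(class_b)) ** 2
            / (pvariance(class_a) + pvariance(class_b)))

length_salmon = [4.0, 5.5, 6.0, 7.5]   # lengths overlap heavily...
length_bass = [5.0, 6.5, 7.0, 8.5]
light_salmon = [2.0, 2.5, 3.0, 3.5]    # ...lightness separates cleanly
light_bass = [5.5, 6.0, 6.5, 7.0]

s_length = separability(length_salmon, length_bass)
s_light = separability(light_salmon, light_bass)
print(s_length, s_light)   # lightness scores far higher than length
```

Ranking candidate features by such a score is one simple way to pick a representation before designing the decision rule.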

Analysis by Synthesis
A central technique when we have insufficient training data is to incorporate knowledge of the problem domain. In the ideal case one has a model of how each pattern is generated; in speech recognition, for example, a model of the possible "dee" sounds that might be uttered by different people.

So a physiological model (or so-called motor model) for production of the utterances is appropriate, and different (say) from that for "doo" and indeed all other utterances. If this underlying model of production can be determined from the sound (and that is a very big "if"), then we can classify the utterance by how it was produced. That is to say, the production representation may be the best representation for classification.

Related fields
Pattern classification differs from classical statistical hypothesis testing, wherein the sensed data are used to decide whether or not to reject a null hypothesis in favor of some alternative hypothesis. It differs, too, from image processing.

Pattern Recognition Systems
Sensing: use of a transducer (camera or microphone); the PR system depends on the bandwidth, resolution, sensitivity, and distortion of the transducer.
Segmentation and grouping: patterns should be well separated and should not overlap.

Feature extraction: discriminative features, invariant with respect to translation, rotation, and scale.
Classification: use the feature vector provided by the feature extractor to assign the object to a category.
Post-processing: exploit context (input-dependent information other than the target pattern itself) to improve performance.


Fingerprint Classification
Assign fingerprints to one of the pre-specified types: plain arch, tented arch, right loop, left loop, accidental, pocket whorl, plain whorl, double loop.

Fingerprint Enhancement
To address the problem of poor-quality fingerprints (noisy image → enhanced image).

Pattern Recognition System Performance
- Error rate (probability of misclassification) on independent test samples
- Speed
- Cost
- Robustness
- Reject option
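The last two items interact: a reject option trades some coverage for a lower error rate on the patterns the system does decide. A minimal sketch, assuming hypothetical per-pattern confidence scores and an invented `evaluate` helper:

```python
# Error rate with a reject option: the classifier abstains whenever its
# confidence falls below a threshold; rejected patterns are routed to a
# human (or better sensor) instead of counting as errors. Data invented.

def evaluate(predictions, labels, reject_threshold=0.7):
    """predictions: list of (predicted_class, confidence) pairs."""
    errors = rejects = 0
    for (pred, conf), truth in zip(predictions, labels):
        if conf < reject_threshold:
            rejects += 1
        elif pred != truth:
            errors += 1
    accepted = len(labels) - rejects
    error_rate = errors / accepted if accepted else 0.0
    return error_rate, rejects / len(labels)

preds = [("salmon", 0.95), ("bass", 0.60), ("salmon", 0.80),
         ("bass", 0.90), ("salmon", 0.65), ("bass", 0.85)]
truth = ["salmon", "salmon", "bass", "bass", "bass", "bass"]

err_rate, rej_rate = evaluate(preds, truth)
print(err_rate, rej_rate)   # error rate on accepted patterns, reject rate
```

Raising the threshold rejects more patterns but typically lowers the error rate on the rest; the right trade-off depends on the cost of an error versus the cost of a rejection.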

The Sub-problems of Pattern Classification
Noise: we define noise in very general terms: any property of the sensed pattern due not to the true underlying model but instead to randomness in the world or the sensors.
Overfitting: while an overly complex model may allow perfect classification of the training samples, it is unlikely to give good classification of novel patterns, a situation known as overfitting.

Model selection: how do we know when a hypothesized model differs significantly from the true model underlying our patterns, and thus a new model is needed?
Prior knowledge: information about the production of the patterns, or the form of the underlying categories.
Missing features: e.g., occlusion by another object.

Context: we might be able to use context (input-dependent information other than from the target pattern itself) to improve our recognizer, as in: "How m ch info mation are y u mi sing?"

Invariances: invariance to transformations such as translation, orientation, size, the rate at which the pattern evolves, deformations, and discrete symmetries. How do we determine whether an invariance is present? How do we efficiently incorporate such knowledge into our recognizer?
Evidence pooling: if we have several component classifiers and they disagree, how should a "super" classifier pool the evidence from the component recognizers to achieve the best decision?
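One simple way a "super" classifier can pool disagreeing components is a weighted majority vote. The sketch below is illustrative only; the `pool` helper and the weights (which might, say, reflect each component's validation accuracy) are assumptions, not a method from the text.

```python
# A minimal "super classifier": pool disagreeing component classifiers
# by weighted majority vote. Component outputs below are hypothetical.

from collections import defaultdict

def pool(votes, weights=None):
    """votes: one class label per component classifier."""
    weights = weights or [1.0] * len(votes)
    tally = defaultdict(float)
    for label, w in zip(votes, weights):
        tally[label] += w
    return max(tally, key=tally.get)   # label with the largest pooled weight

votes = ["salmon", "bass", "bass"]        # three components disagree
print(pool(votes))                        # unweighted: the majority wins
print(pool(votes, [0.9, 0.4, 0.4]))       # trusting component 1 flips it
```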

Costs and risks: we often design our classifier to recommend actions that minimize some total expected cost or risk. The simplest risk is the classification error, but the notion of risk is far more general. How do we incorporate knowledge about such risks, and how will they affect our classification decision?
Computational complexity: the computational complexity of different algorithms is important, especially for practical applications.

The Design Cycle
- Data collection
- Feature choice
- Model choice
- Training
- Evaluation
- Computational complexity


Data collection: how do we know when we have collected an adequately large and representative set of examples for training and testing the system?
Feature choice: depends on the characteristics of the problem domain; features should be simple to extract, invariant to irrelevant transformations, and insensitive to noise.
Model choice: we may be unsatisfied with the performance of our fish classifier and want to jump to another class of model.

Training: use the data to determine the classifier; there are many different procedures for training classifiers and choosing models.
Evaluation: measure the error rate (or performance) and, if needed, switch from one set of features to another.
Computational complexity: what is the trade-off between computational ease and performance? (How does an algorithm scale with the number of features, patterns, or categories?)

Learning and Adaptation
Supervised learning: a teacher provides a category label or cost for each pattern in the training set.
Unsupervised learning: the system forms clusters or natural groupings of the input patterns.
Reinforcement learning: in reinforcement learning, or learning with a critic, no desired category signal is given; instead, the only teaching feedback is that the tentative category is right or wrong. This is analogous to a critic who merely states that something is wrong, but does not say specifically how it is wrong.
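The unsupervised case, forming "natural groupings" without labels, can be illustrated with a tiny 1-D k-means loop. This is a sketch under stated assumptions: the data points, the two initial centers, and the `kmeans_1d` helper are all invented for illustration.

```python
# Unsupervised learning in miniature: 1-D k-means forms clusters with no
# category labels at all. Data and initial centers are hypothetical.

def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                  # assignment step: nearest center
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # update step
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]   # two obvious natural groupings
centers, clusters = kmeans_1d(points, centers=[0.0, 6.0])
print(sorted(centers))
```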

Supervised Classification

Unsupervised Classification

Models for Pattern Recognition
- Template matching
- Statistical (geometric)
- Syntactic (structural)
- Artificial neural network (biologically motivated?)
- Hybrid approach

Template Matching
Match a stored template against the input scene.
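The basic operation, sliding a template over the scene and scoring each position, fits in a few lines. The binary "scene", the 2x2 template, and the `match` helper below are all invented for illustration; real systems use grayscale correlation on far larger images.

```python
# Template matching sketch: slide a small binary template over a binary
# scene and report the position with the most matching pixels.

def match(scene, template):
    th, tw = len(template), len(template[0])
    best_score, best_pos = -1, None
    for r in range(len(scene) - th + 1):
        for c in range(len(scene[0]) - tw + 1):
            score = sum(scene[r + i][c + j] == template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

template = [[1, 1],
            [1, 0]]
scene = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]

pos, score = match(scene, template)
print(pos, score)   # exact match of all 4 template pixels at (1, 1)
```

The weakness the comparison slide notes is visible here: the score collapses if the pattern in the scene is rotated, scaled, or deformed relative to the rigid template.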

Deformable Template: Corpus Callosum Segmentation
- Shape training set
- Prototype and variation learning
- Prototype registration to the low-level segmented image
- Prototype warping

Structural Pattern Recognition
Decision-making when features are non-numeric or structural: describe complicated objects in terms of simple primitives and structural relationships (e.g., a scene decomposed into object and background, each described by its parts).

Syntactic Pattern Recognition
Recognition: pattern → preprocessing → primitive/relation extraction → syntax/structural analysis.
Training: patterns + class labels → preprocessing → primitive selection → grammatical/structural inference.

Chromosome Grammars
Terminals: VT = {…} (a set of curve-segment primitives); non-terminals: VN = {A, B, C, D, E, F}. Pattern classes: median, submedian, acrocentric, telocentric.

Chromosome Grammars
An image of human chromosomes, and the hierarchical structural description of a submedian chromosome.

Artificial Neural Networks
Massive parallelism is essential for complex pattern recognition tasks (e.g., speech and image recognition). Humans take only a few hundred milliseconds for most cognitive tasks, which suggests parallel computation. Biological networks achieve good performance via dense interconnection of simple computational elements (neurons):
- Number of neurons: 10^10 to 10^12
- Interconnections per neuron: 10^3 to 10^4
- Total number of interconnections: about 10^14

Artificial Neural Networks
Nodes in neural networks are nonlinear, typically analog. A node with inputs x1, ..., xd and weights w1, ..., wd computes the output Y = f(w1 x1 + ... + wd xd − θ), where θ is an internal threshold or offset.
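A single such node is easy to write out. With a hard-threshold f and hand-picked weights (my choice for illustration, not values from the text), one node implements a 2-input AND gate:

```python
# One artificial neuron: Y = f(w . x - theta) with a hard threshold f.
# The weights and theta below are hand-picked so the node computes AND.

def neuron(x, w, theta):
    s = sum(wi * xi for wi, xi in zip(w, x)) - theta   # net activation
    return 1 if s > 0 else 0                           # threshold nonlinearity

w, theta = [1.0, 1.0], 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron(x, w, theta))   # fires only when both inputs are 1
```

A single node can only carve the input space with one linear boundary, which is why networks of such nodes are needed for harder decision regions.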

Multilayer Perceptron
Feed-forward nets with one or more layers (hidden) between the input and output nodes: d inputs, a first hidden layer with NH1 units, a second hidden layer with NH2 units, and c outputs. A three-layer net can generate arbitrarily complex decision regions. These nets can be trained by the backpropagation training algorithm.
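A forward pass through such a net shows why hidden layers matter. The weights below are hand-picked (no training involved) so the net computes XOR, a decision region no single linear-threshold node can produce; the construction is a standard illustration, not from the text.

```python
# A fixed two-layer feed-forward net computing XOR with hand-picked
# weights, demonstrating the nonlinear decision regions a hidden layer
# buys. Thresholds appear as negative bias terms inside step().

def step(s):
    return 1 if s > 0 else 0

def mlp_xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # OR-like hidden unit
    h2 = step(x1 + x2 - 1.5)     # AND-like hidden unit
    return step(h1 - h2 - 0.5)   # output unit: OR and not AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, mlp_xor(a, b))   # XOR truth table: 0, 1, 1, 0
```

In practice such weights are learned by backpropagation rather than set by hand; the fixed weights here just isolate the representational point.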

Super Classifier
Pool the evidence from component recognizers (classifier combination, mixture of experts, evidence accumulation).

Statistical Pattern Recognition
Recognition: pattern → preprocessing → feature extraction → classification.
Training: patterns + class labels → preprocessing → feature selection → learning.

Statistical Pattern Recognition
- Patterns are represented in a feature space
- A statistical model governs pattern generation in feature space
- Given training patterns from each class, the goal is to partition the feature space

Approaches to Statistical Pattern Recognition
Prior information complete → Bayes decision theory → "optimal" rules.
Prior information incomplete:
- Supervised learning:
  - Parametric approach → plug-in rules
  - Nonparametric approach → density estimation; geometric rules (k-NN, MLP)
- Unsupervised learning:
  - Parametric approach → mixture resolving
  - Nonparametric approach → cluster analysis (hard, fuzzy)
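Of the geometric rules in the taxonomy, k-nearest-neighbor is the simplest to write down. A minimal sketch on invented 2-D feature vectors (the `knn_predict` helper and the data are illustrative):

```python
# k-nearest-neighbor: the classic nonparametric geometric rule. Classify
# a query point by majority vote among its k closest training points.
# Training data below are hypothetical (lightness, width) pairs.

from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [((1.0, 1.0), "salmon"), ((1.5, 2.0), "salmon"), ((2.0, 1.5), "salmon"),
         ((5.0, 5.0), "bass"), ((5.5, 4.5), "bass"), ((6.0, 5.5), "bass")]

print(knn_predict(train, (1.2, 1.4)))   # near the salmon cluster
print(knn_predict(train, (5.4, 5.1)))   # near the bass cluster
```

Note the contrast with the parametric branch: k-NN assumes no density model at all, at the cost of storing the whole training set and scanning it at decision time.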

Comparing Pattern Recognition Models
- Template matching: assumes very small intra-class variability; learning is difficult for deformable templates.
- Syntactic: primitive extraction is sensitive to noise; describing a pattern in terms of primitives is difficult.
- Statistical: assumes a density model for each class.
- Neural network: parameter tuning and local minima in learning.
In practice, statistical and neural network approaches work well.