Ensemble Learning. Hong Chang. Institute of Computing Technology, Chinese Academy of Sciences. Machine Learning Methods (Fall 2012)



Outline
1. Introduction
2. Voting
3. Stacking
4. Bagging
5. Boosting

Rationale There is no single learning algorithm that in any domain always induces the most accurate learner. Each learning algorithm dictates a certain model with a set of assumptions, leading to the corresponding model bias. If the assumptions do not hold for the data, the model bias leads to error. Ensemble learning: We construct a group of base learners which, when combined, has higher accuracy than the individual learners. The base learners are usually not chosen for their accuracy, but for their simplicity. The base learners should be accurate on different instances, specializing in different subdomains of the problem, so that they can complement each other.

Differences between Base Learners Different learning algorithms: different algorithms make different assumptions about the data and lead to different classifiers. Different hyperparameters of the same algorithm: e.g., number of hidden units in a multilayer perceptron, K in K -nearest neighbor classifier, error threshold in a decision tree, initial state of an iterative procedure, etc. Different representations of the same input object or event: multiple sources of information are combined, e.g., both acoustic input and video sequence of lip movements for speech recognition. Different training sets: multiple base learners are trained either in parallel or serially using different training sets. Different subtasks: the main task is defined in terms of a number of subtasks solved by different base learners.

Combining Base Learners Multiexpert combination methods (parallel style): The base learners work in parallel. Given an instance, they all give their decisions which are then combined to give the final decision. E.g., voting, mixture of experts, stacked generalization. Multistage combination methods (sequential style): The base learners work serially. The base learners are sorted in increasing complexity: a complex base learner is not used unless the preceding simpler base learners are not confident. E.g., cascading.

Why Ensembles Are Superior to Single Learners The generalization ability of an ensemble is usually much stronger than that of a single learner, for several reasons: the training data might not provide sufficient information for choosing a single best learner; the search processes of the learning algorithms might be imperfect; and the hypothesis space being searched might not contain the true target function, whereas an ensemble can provide a good approximation to it.

Model Selection vs. Model Averaging
Model selection works better if one model is significantly more accurate than the other models, i.e., there is no ambiguity about which single model is best.
Equally weighted averaging works better if all models have similar prediction accuracy but are different, i.e., there is some ambiguity about which single model is best.
Keys to the success of model ensembles (averaging): all models are reasonably accurate, and the models are diverse (they make different predictions).

More Comments Empirical studies on popular ensemble methods: [BK99], [TW99], [OM99], among others. Zhou et al. [ZWT02]: "many could be better than all" (selective ensembles). Ensemble methods have been designed for classification, regression, clustering, and many other kinds of machine learning tasks. Remaining unsatisfactory points: the comprehensibility of ensembles [ZJC03] and measures of diversity [KW03]. A good reference book by Prof. Zhihua Zhou: Ensemble Methods: Foundations and Algorithms, Boca Raton, FL: Chapman & Hall/CRC, 2012.

Voting

Voting (2) Voting takes a convex combination of the base learners:
$$y = f(d_1, \ldots, d_L \mid \Phi) = \sum_{j=1}^{L} w_j d_j$$
where $w_j$ and $d_j$ are the weight and prediction of learner $j$, with $w_j \geq 0$ and $\sum_{j=1}^{L} w_j = 1$; $\Phi = (w_1, \ldots, w_L)^T$ are the parameters and $y$ is the final prediction.
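As a concrete illustration (not part of the original slides), the sketch below computes this convex combination in NumPy; the prediction values and weights are hypothetical.

```python
import numpy as np

# Hypothetical predictions d_j of L = 3 base learners on a single instance.
d = np.array([0.9, 1.1, 1.4])

# Convex-combination weights: w_j >= 0 and sum_j w_j = 1.
w = np.array([0.5, 0.3, 0.2])
assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)

# Final prediction y = sum_j w_j * d_j.
y = np.dot(w, d)
print(y)  # 0.5*0.9 + 0.3*1.1 + 0.2*1.4 = 1.06
```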

Voting for Classification For class $C_i$:
$$y_i = \sum_{j=1}^{L} w_j d_{ji}$$
where $d_{ji}$ is the vote of learner $j$ for $C_i$ and $w_j$ is the weight of its vote. Simple voting (a.k.a. plurality voting; majority voting for 2 classes): $w_j = \frac{1}{L}$. Bayesian model combination:
$$P(C_i \mid x) = \sum_{\text{all models } M_j} P(C_i \mid x, M_j)\, P(M_j)$$
So the weights $w_j$ can be seen as approximating the prior model probabilities $P(M_j)$.

Analysis Let there be $L$ independent two-class classifiers, where $E[d_j]$ and $\mathrm{var}(d_j)$ are the expected value and variance of $d_j$ for classifier $j$. Expected value and variance of the output:
$$E[y] = E\left[\frac{\sum_j d_j}{L}\right] = \frac{1}{L}\, L\, E[d_j] = E[d_j]$$
$$\mathrm{var}(y) = \mathrm{var}\left(\frac{\sum_j d_j}{L}\right) = \frac{1}{L^2} \mathrm{var}\left(\sum_j d_j\right) = \frac{1}{L} \mathrm{var}(d_j)$$
As $L$ increases, the expected value (and hence the bias) does not change, but the variance (and hence the mean squared error) decreases, leading to an increase in accuracy. General case (non-independent classifiers):
$$\mathrm{var}(y) = \frac{1}{L^2} \mathrm{var}\left(\sum_j d_j\right) = \frac{1}{L^2} \left[\sum_j \mathrm{var}(d_j) + 2 \sum_j \sum_{i<j} \mathrm{cov}(d_j, d_i)\right]$$
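The variance argument can be checked with a small simulation (a sketch assuming independent Bernoulli votes, which are not part of the original slides): averaging $L$ independent outputs leaves the mean unchanged and shrinks the variance by roughly a factor of $L$.

```python
import numpy as np

rng = np.random.default_rng(0)
L, trials = 10, 100_000

# d_j: L independent two-class votes per trial, each Bernoulli(0.7)
# (mean 0.7, variance 0.21); the parameters are illustrative assumptions.
d = rng.binomial(1, 0.7, size=(trials, L))

single = d[:, 0]            # a single base classifier
averaged = d.mean(axis=1)   # ensemble output y = (1/L) * sum_j d_j

print(single.mean(), averaged.mean())  # both approx. 0.7: the bias is unchanged
print(single.var(), averaged.var())    # approx. 0.21 vs. 0.21 / L = 0.021
```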

Stacking In a typical stacking [Wol92] implementation, a number of first-level individual learners are generated from the training data set by employing different learning algorithms. The individual learners are then combined by a second-level learner, which is called the meta-learner. Stacking is closely related to information fusion methods.

Stacking Algorithm
Input: data set $D = \{(x^{(1)}, y^{(1)}), \ldots, (x^{(N)}, y^{(N)})\}$;
first-level learning algorithms $L_1, \ldots, L_T$;
second-level learning algorithm $L$.
Process:
For $t = 1, \ldots, T$:
  $h_t = L_t(D)$   % train the first-level individual learner $h_t$
End
$D' = \emptyset$   % generate a new data set
For $i = 1, \ldots, N$:
  For $t = 1, \ldots, T$:
    $z_{it} = h_t(x^{(i)})$
  End
  $D' = D' \cup \{((z_{i1}, \ldots, z_{iT}), y^{(i)})\}$
End
$h' = L(D')$   % train the second-level learner $h'$
Output: $H(x) = h'(h_1(x), \ldots, h_T(x))$
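A minimal Python sketch of this procedure is shown below, using scikit-learn estimators as hypothetical first- and second-level learners; for simplicity the meta-features $z_{it}$ are computed on the training data itself, whereas practical stacking usually obtains them by cross-validation to avoid overfitting the meta-learner.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

# First-level learners L_1, ..., L_T, each trained on the full data set D.
first_level = [DecisionTreeClassifier(max_depth=3, random_state=0),
               GaussianNB(),
               KNeighborsClassifier(n_neighbors=5)]
learners = [clf.fit(X, y) for clf in first_level]

# New data set D': meta-features z_it = h_t(x_i), paired with the original labels.
Z = np.column_stack([h.predict(X) for h in learners])

# Second-level (meta) learner h' trained on D'.
meta = LogisticRegression().fit(Z, y)

def H(x):
    """Stacked prediction H(x) = h'(h_1(x), ..., h_T(x))."""
    z = np.column_stack([h.predict(x) for h in learners])
    return meta.predict(z)

print(H(X[:5]), y[:5])
```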

Bagging Bagging [Bre96], a short form for bootstrap aggregating, is a voting method whereby the base learners are made different by training on slightly different training sets. Different training sets are generated by bootstrap, which draws N instances randomly from a training set X of size N with replacement. Bagging can be seen as a special case of model averaging which helps to reduce variance and hence improve accuracy. Unstable algorithms (e.g., decision trees and multilayer perceptrons) that cause large changes in the generated learner (i.e., high variance) with small changes in the training set can particularly benefit from bagging.

Bagging Algorithm
Input: data set $D = \{(x^{(1)}, y^{(1)}), \ldots, (x^{(N)}, y^{(N)})\}$;
base learning algorithm $L$;
number of learning rounds $T$.
Process:
For $t = 1, \ldots, T$:
  $D_t = \mathrm{Bootstrap}(D)$   % generate a bootstrap sample from $D$
  $h_t = L(D_t)$   % train a base learner $h_t$ from the bootstrap sample
End
Output: $H(x) = \arg\max_{y \in Y} \sum_{t=1}^{T} \mathbb{1}(y = h_t(x))$
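A minimal Python sketch of this algorithm follows, assuming decision trees (an unstable learner) as the base learning algorithm L; scikit-learn's BaggingClassifier provides an off-the-shelf implementation of the same idea.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)
T, N = 25, len(X)

# T base learners, each trained on a bootstrap sample D_t of size N drawn with replacement.
learners = []
for t in range(T):
    idx = rng.integers(0, N, size=N)                          # Bootstrap(D)
    h_t = DecisionTreeClassifier(random_state=t).fit(X[idx], y[idx])
    learners.append(h_t)

def H(x):
    """Majority vote: H(x) = argmax_y sum_t 1(y = h_t(x))."""
    votes = np.stack([h.predict(x) for h in learners])        # shape (T, n_instances)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(np.mean(H(X) == y))  # training accuracy of the bagged ensemble
```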

Analysis The bootstrap samples usually overlap more than the cross-validation samples, and hence their estimates are more dependent. Probability that a given instance is not chosen after $N$ random draws: $(1 - \frac{1}{N})^N \approx e^{-1} \approx 0.368$. So each bootstrap sample contains only approximately 63.2% of the distinct instances. Multiple bootstrap samples are used to maximize the chance that the system is trained on all the instances. Majority voting is usually used to predict the most-voted class. A variant of bagging, Random Forests [Bre01], is a powerful ensemble method.
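These figures are easy to verify numerically; the short sketch below uses an arbitrary N = 1000.

```python
import numpy as np

N = 1000  # arbitrary sample size for illustration
print((1 - 1/N) ** N)      # approx. e^{-1} = 0.368: probability an instance is never drawn
print(1 - (1 - 1/N) ** N)  # approx. 0.632: expected fraction of distinct instances per bootstrap sample

# Empirical check: one bootstrap sample of size N drawn with replacement.
rng = np.random.default_rng(0)
draws = rng.integers(0, N, size=N)
print(len(np.unique(draws)) / N)  # close to 0.632
```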

Boosting In bagging, generating complementary base learners is left to chance and to the instability of the learning algorithm. In boosting [Sch90][FR97], complementary base learners are generated by training each new learner on the mistakes of the previous learners. Boosting combines weak learners (learners whose accuracy is only required to be better than random guessing, i.e., > 1/K for K-class classification problems; weak but not too weak) to produce a strong learner.

AdaBoost AdaBoost [FR97] (a short form for adaptive boosting) is an iterative procedure that generates a sequence of base learners each focusing on the errors of previous ones. The original algorithm is AdaBoost.M1, but many variants of AdaBoost have also been proposed. AdaBoost modifies the probabilities of drawing instances for classifier training as a function of the error of the previous base learner. Initially all N instances have the same probability of being drawn. Moving from one iteration to the next iteration, the probability of a correctly classified instance is decreased and that of a misclassified instance is increased. The success of AdaBoost is due to its property of increasing the margin, making the aim of AdaBoost similar to that of SVM.

AdaBoost Algorithm
Input: data set $D = \{(x^{(1)}, y^{(1)}), \ldots, (x^{(N)}, y^{(N)})\}$;
base learning algorithm $L$;
number of learning rounds $T$.
Process:
$W_1(i) = 1/N$   % initialize the weight distribution
For $t = 1, \ldots, T$:
  $h_t = L(D, W_t)$
  $\epsilon_t$ = the error of $h_t$
  $\alpha_t = \frac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t}$   % determine the weight of $h_t$
  $W_{t+1}(i) = \frac{W_t(i) \exp(-\alpha_t\, y^{(i)} h_t(x^{(i)}))}{Z_t}$   % update the weight distribution, where $Z_t$ is a normalization factor
End
Output: $H(x) = \mathrm{sign}(f(x)) = \mathrm{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right)$
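A minimal Python sketch of these updates for a two-class problem with labels in {-1, +1} is given below, using depth-1 decision trees (stumps) as the base learner; the data set, base learner, and number of rounds are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=500, random_state=0)
y = 2 * y01 - 1                          # relabel classes to {-1, +1} as the update rule assumes
N, T = len(X), 50

W = np.full(N, 1.0 / N)                  # W_1(i) = 1/N
learners, alphas = [], []
for t in range(T):
    h_t = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=W)
    pred = h_t.predict(X)
    eps = np.sum(W[pred != y])           # weighted error of h_t
    if eps >= 0.5:                       # the weak learner must beat random guessing
        break
    eps = max(eps, 1e-10)                # numerical guard in case the stump is perfect
    alpha = 0.5 * np.log((1 - eps) / eps)      # alpha_t
    W = W * np.exp(-alpha * y * pred)          # unnormalized W_{t+1}(i)
    W /= W.sum()                               # divide by the normalization factor Z_t
    learners.append(h_t)
    alphas.append(alpha)

def H(x):
    """H(x) = sign(sum_t alpha_t h_t(x))."""
    f = sum(a * h.predict(x) for a, h in zip(alphas, learners))
    return np.sign(f)

print(np.mean(H(X) == y))  # training accuracy of the boosted ensemble
```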

E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1-2):105-139, 1999.
L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
L.I. Kuncheva and C.J. Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 51(2):181-207, 2003.
D. Opitz and R. Maclin. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169-198, 1999.
R.E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
K.M. Ting and I.H. Witten. Issues in stacked generalization. Journal of Artificial Intelligence Research, 10:271-289, 1999.
D.H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-260, 1992.

Z.H. Zhou, Y. Jiang, and S.F. Chen. Extracting symbolic rules from trained neural network ensembles. AI Communications, 16(1):3-15, 2003.
Z.H. Zhou, J. Wu, and W. Tang. Ensembling neural networks: Many could be better than all. Artificial Intelligence, 137(1-2):239-263, 2002.