autobagging: Learning to Rank Bagging Workflows with Metalearning

Fábio Pinto, Vítor Cerqueira, Carlos Soares and João Mendes-Moreira
INESC TEC / Faculdade de Engenharia, Universidade do Porto
Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
fhpinto@inesctec.pt, vmac@inesctec.pt, csoares@fe.up.pt, jmoreira@fe.up.pt

Abstract. Machine Learning (ML) has been successfully applied to a wide range of domains and applications. One of the techniques behind most of these successful applications is Ensemble Learning (EL), the field of ML that gave birth to methods such as Random Forests or Boosting. The complexity of applying these techniques, together with the market scarcity of ML experts, has created the need for systems that enable a fast and easy drop-in replacement for ML libraries. Automated machine learning (autoML) is the field of ML that attempts to answer these needs. We propose autobagging, an autoML system that automatically ranks 63 bagging workflows by exploiting past performance and metalearning. Results on 140 classification datasets from the OpenML platform show that autobagging can yield better performance than the Average Rank method and achieve results that are not statistically different from an ideal model that systematically selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autobagging is publicly available as an R package on CRAN.

Keywords: automated machine learning, metalearning, bagging, classification

1 Introduction

Ensemble Learning (EL) has proven itself as one of the most powerful techniques in Machine Learning (ML), leading to state-of-the-art results across several domains. Methods such as bagging, boosting or Random Forests are considered some of the favourite algorithms among data science practitioners. However, getting the most out of these techniques still requires significant expertise, and it is often a complex and time-consuming task. Furthermore, since the number of ML applications is growing exponentially, there is a need for tools that boost the data scientist's productivity. The resulting research field that aims to answer these needs is Automated Machine Learning (autoML).

In this paper, we address the problem of how to automatically tune an EL algorithm, covering all components within it: generation (how to generate the models and how many), pruning (which technique should be used to prune the ensemble and how many models should be discarded) and integration (which model(s) should be selected and combined for each prediction). We focus specifically on the bagging algorithm [2] and on four components of the algorithm: 1) the number of models that should be generated; 2) the pruning method; 3) how many models should be pruned; and 4) which dynamic integration method should be used. For the remainder of this paper, we call a set of these four elements a bagging workflow.
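
To make the notion concrete, a bagging workflow can be represented as a simple four-field record, one field per component listed above. The sketch below is in Python purely for illustration (the system itself ships as an R package), and the example values are taken from the grid described later in Section 3.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class BaggingWorkflow:
    """The four components of a bagging workflow tuned by autobagging."""
    n_models: int            # how many models to generate
    pruning: Optional[str]   # pruning method, e.g. "MDSQ", "BB", or None
    cut_point: Optional[float]  # fraction of models to discard when pruning
    integration: Optional[str]  # dynamic integration, e.g. "OLA", "KNORA-E", or None

# One candidate workflow, with values from the grid used in Section 3.
wf = BaggingWorkflow(n_models=100, pruning="BB", cut_point=0.25,
                     integration="KNORA-E")
```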

Our proposal is autobagging, a system that combines a learning-to-rank approach with metalearning to tackle the problem of automatically generating bagging workflows. Ranking is a common task in information retrieval. For instance, to answer the query of a user, a search engine ranks a plethora of documents according to their relevance. In our case, the query is replaced by a new dataset and autobagging acts as the ranking engine. Figure 1 shows an overall schema of the proposed system. We leverage the historical predictive performance of each workflow on several datasets, where each dataset is characterised by a set of metafeatures. This metadata is then used to generate a metamodel, using a learning-to-rank approach. Given a new dataset, we are able to collect metafeatures from it and feed them to the metamodel. Finally, the metamodel outputs an ordered list of the workflows, taking into account the characteristics of the new dataset.

Fig. 1. Learning to Rank with Metalearning. The red lines represent offline tasks and the green ones represent online ones.

We tested the approach on 140 classification datasets from the OpenML platform for collaborative ML [12] and 63 bagging workflows, which include two pruning techniques and two dynamic selection techniques. Results show that autobagging has a better performance than two strong baselines: bagging with 100 trees and the average rank. Furthermore, testing the top 5 workflows recommended by autobagging guarantees an outcome that is not statistically different from the oracle, an ideal method that for each dataset always selects the best workflow. For the purpose of reproducibility and generalizability, autobagging is available as an R package (https://github.com/fhpinto/autobagging).
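
The online path of Figure 1 reduces to a few lines of control flow. The sketch below is only illustrative: the metafeature extractor and the metamodel are taken as arguments, since the paper does not show the package's concrete API.

```python
def rank_workflows(new_dataset, workflows, extract_metafeatures, metamodel):
    """Online phase of Figure 1 (sketch): score each candidate workflow for a
    new dataset with the already-trained metamodel and return them ranked.

    `extract_metafeatures(dataset, workflow)` plays the role of the phi(d, a)
    function of Section 2, and `metamodel.predict` the learned ranking
    function; both are assumptions passed in as arguments here.
    """
    scored = [(float(metamodel.predict(extract_metafeatures(new_dataset, wf))), wf)
              for wf in workflows]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest relevance first
    return [wf for _, wf in scored]
```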

2 autobagging

We approach the problem of algorithm selection as a learning-to-rank problem [7]. Let us take D as the dataset set and A as the algorithm set. Y = {1, 2, ..., l} is the label set, where each value is a relevance score that expresses the relative performance of a given algorithm; therefore l ≻ l-1 ≻ ... ≻ 1, where ≻ represents an order relationship. Furthermore, D_m = {d_1, d_2, ..., d_m} is the set of datasets for training, where d_i is the i-th dataset; A_i = {a_{i,1}, a_{i,2}, ..., a_{i,n_i}} is the set of algorithms associated with dataset d_i; and y_i = {y_{i,1}, y_{i,2}, ..., y_{i,n_i}} is the set of labels associated with dataset d_i, where n_i denotes the size of A_i and y_i. Here a_{i,j} represents the j-th algorithm in A_i, and y_{i,j} ∈ Y represents the j-th label in y_i, i.e., the relevance score of a_{i,j} with respect to d_i. Finally, the meta-dataset is denoted as S = {((d_i, A_i), y_i)}, i = 1, ..., m.

We use metalearning to generate the metafeature vectors x_{i,j} = φ(d_i, a_{i,j}) for each dataset-algorithm pair, where i = 1, 2, ..., m; j = 1, 2, ..., n_i; and φ represents the metafeature extraction functions. These metafeatures can describe d_i, a_{i,j}, or even the relationship between both. Therefore, taking x_i = {x_{i,1}, x_{i,2}, ..., x_{i,n_i}}, we can represent the meta-dataset as S = {(x_i, y_i)}, i = 1, ..., m. Our goal is to train a meta ranking model f(d, a) = f(x) that is able to assign a relevance score to a new dataset-algorithm pair d and a, given x.
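
As a minimal sketch of how such a grouped meta-dataset can be fed to a pairwise ranker (XGBoost is the implementation the paper reports using in Section 3), consider the following; the features and labels are random placeholders, and only the group bookkeeping, one query group per dataset, is the point:

```python
import numpy as np
import xgboost as xgb

# Toy meta-dataset: m datasets (queries), each paired with n workflows,
# described by p metafeatures. Random placeholder values throughout.
m, n, p = 10, 63, 143
X = np.random.rand(m * n, p)             # one row per (dataset, workflow) pair x_{i,j}
y = np.random.randint(0, 5, size=m * n)  # relevance scores y_{i,j}

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([n] * m)  # rows i*n .. (i+1)*n - 1 belong to dataset d_i

params = {"objective": "rank:pairwise", "eta": 0.1, "max_depth": 6}
metamodel = xgb.train(params, dtrain, num_boost_round=100)
```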

2.1 Metafeatures

We approach the problem of generating metafeatures to characterize d and a with the aid of a framework for systematic metafeature generation [10]. Essentially, this framework regards a metafeature as a combination of three components: a meta-function, a set of input objects and a post-processing function. The framework establishes how to systematically generate metafeatures from all possible combinations of object and post-processing alternatives that are compatible with a given meta-function. Thus, the development of metafeatures for a metalearning (MtL) approach simply consists of selecting a set of meta-functions (e.g., entropy, mutual information and correlation), and the framework systematically generates the set of metafeatures that represent all the information that can be obtained with those meta-functions from the data.

For this task in particular, we selected a set of meta-functions that are able to characterize the datasets as completely as possible (measuring information regarding the target variable, the categorical and numerical features, etc.), the algorithms, and the relationship between the datasets and the algorithms (which can be seen as landmarkers [9]). Therefore, the set of meta-functions used is: skewness, Pearson's correlation, Maximal Information Coefficient (MIC [11]), entropy, mutual information, eta squared (from the ANOVA test) and the rank of each algorithm [1]. Each meta-function is used to systematically measure information from all possible combinations of input objects available for this task. We define the input objects available as: discrete descriptive data of the datasets, continuous descriptive data of the datasets, discrete output data of the datasets, and five sets of predictions (discrete predicted data) for each dataset (naive Bayes; decision trees with depth 1, 2 and 3; and majority class).

For instance, if we take the example of using entropy as meta-function, it is possible to measure information in discrete descriptive data, discrete output data and discrete predicted data (if the base-level problem is a classification task). After computing the entropy of all these objects, it might be necessary to aggregate the information in order to keep the tabular form of the data. Take for example the aggregation required for the entropy values computed for each discrete attribute. Therefore, we chose a palette of aggregation functions to capture several dimensions of these values and minimize the loss of information caused by aggregation. In that sense, the post-processing functions chosen were: average, maximum, minimum, standard deviation, variance and histogram binning.

Given these meta-functions, the available input objects and the post-processing functions, we are able to generate a set of 131 metafeatures. To this set we add eight metafeatures: the number of examples of the dataset, the number of attributes and the number of classes of the target variable, plus five landmarkers (the ones already described above) estimated using accuracy as error measure. Furthermore, we add four metafeatures to describe the components of each workflow: the number of trees, the pruning method, the pruning cut point and the dynamic selection method. In total, autobagging uses a set of 143 metafeatures.
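
The generation loop of this framework is mechanical, which is what makes it systematic. Below is a minimal sketch under simplifying assumptions (entropy as the only meta-function, two toy input objects, and no histogram binning); names such as systematic_metafeatures are illustrative, not the package's API.

```python
import numpy as np
from scipy.stats import entropy

# Post-processing (aggregation) functions from Section 2.1
# (histogram binning omitted for brevity).
POST = {"avg": np.mean, "max": np.max, "min": np.min,
        "sd": np.std, "var": np.var}

def systematic_metafeatures(name, meta_fn, input_objects):
    """Apply one meta-function to every column of every compatible input
    object, then summarize the resulting values with each aggregator."""
    out = {}
    for obj_name, columns in input_objects.items():
        values = [meta_fn(col) for col in columns]
        for agg_name, agg in POST.items():
            out[f"{name}.{obj_name}.{agg_name}"] = agg(values)
    return out

# Toy discrete input objects: entropy is compatible with discrete data only.
rng = np.random.default_rng(0)
discrete = {
    "descriptive": [rng.integers(0, 3, 100), rng.integers(0, 5, 100)],
    "output": [rng.integers(0, 2, 100)],
}
shannon = lambda col: entropy(np.bincount(col) / len(col))
print(systematic_metafeatures("entropy", shannon, discrete))
```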

2.2 Metatarget

In order to be able to learn a ranking meta-model f(d, a), we need to compute a metatarget that assigns a score z to each dataset-algorithm pair (d, a), so that F : (D, A) → Z, where F is the set of ranking meta-models and Z is the metatarget set. To compute z, we use a cross-validation error estimation methodology (4-fold cross-validation in the experiments reported in this paper, Section 3), in which we estimate the performance of each bagging workflow on each dataset using Cohen's kappa score [4]. On top of the estimated kappa scores, for each dataset, we rank the bagging workflows. This ranking is the final form of the metatarget and it is then used for learning the meta-model.
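
A compact way to reproduce this metatarget computation for a single dataset, under the assumption that each workflow is available as a scikit-learn-compatible estimator (here plain BaggingClassifier variants stand in for the full workflows, which would also involve pruning and dynamic integration):

```python
from scipy.stats import rankdata
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

kappa = make_scorer(cohen_kappa_score)

def metatarget(workflow_models, X, y):
    """Estimate each workflow's kappa with 4-fold CV, then rank the
    workflows within the dataset (rank 1 = highest estimated kappa)."""
    scores = [cross_val_score(model, X, y, cv=4, scoring=kappa).mean()
              for model in workflow_models]
    return rankdata([-s for s in scores])

# Toy stand-in for three workflows that differ only in ensemble size.
X, y = load_iris(return_X_y=True)
models = [BaggingClassifier(DecisionTreeClassifier(), n_estimators=n)
          for n in (50, 100, 200)]
print(metatarget(models, X, y))  # per-workflow ranks for this dataset
```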

3 Experiments

Our experimental setup comprises 140 classification datasets extracted from the OpenML platform for collaborative machine learning [12]. We limited the datasets extracted to a maximum of 5000 instances, a minimum of 300 instances and a maximum of 1000 attributes, in order to speed up the experiments and exclude datasets that could be too small for some of the bagging workflows that we wanted to test.

Regarding bagging workflows, we limited the hyperparameters of the bagging workflows to four: number of models generated, pruning method, pruning cut point and dynamic selection method. Specifically, each hyperparameter could take the following values:

- Number of models: 50, 100 or 200. Decision trees were chosen as the learning algorithm.
- Pruning method: Margin Distance Minimization (MDSQ) [8], Boosting-Based Pruning (BB) [8] or none.
- Pruning cut point: 25%, 50% or 75%.
- Dynamic integration method: Overall Local Accuracy (OLA), a dynamic selection method [13]; K-Nearest-Oracles-Eliminate (KNORA-E) [6], a dynamic combination method; or none.

The combination of all the hyperparameters described above generates 63 valid workflows; the counting is spelled out in the sketch at the end of this section. We tested these bagging workflows on the datasets extracted from OpenML with 4-fold cross-validation, using Cohen's kappa as evaluation metric. We used the XGBoost learning-to-rank implementation for gradient boosting of decision trees [3] to learn the metamodel, as described in Section 2.

As baselines, at the base-level, we used: 1) bagging with 100 decision trees; 2) the average rank method, which basically is a model that always predicts the bagging workflow with the best average rank in the meta training set; and 3) the oracle, an ideal model that always selects the best bagging workflow for each dataset. As for the meta-level, we used the average rank method as baseline.

As evaluation methodology, we used an approach similar to the leave-one-out methodology. However, each test fold consists of all the algorithm-dataset pairs associated with the test dataset. The remaining examples are used for training purposes. The evaluation metric at the meta-level is the Mean Average Precision at 10 (MAP@10) and at the base-level, as mentioned before, we use Cohen's kappa. The methodology recommended by Demšar [5] was used for statistical validation of the results.
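
To see where the number 63 comes from, note that a pruning cut point is only meaningful when a pruning method is actually used. A minimal enumeration sketch:

```python
from itertools import product

n_models = [50, 100, 200]
# A cut point only applies when a pruning method is used:
# 2 methods x 3 cut points + "no pruning" = 7 alternatives.
pruning = [(m, c) for m in ("MDSQ", "BB")
           for c in (0.25, 0.50, 0.75)] + [(None, None)]
integration = ["OLA", "KNORA-E", None]

workflows = list(product(n_models, pruning, integration))
print(len(workflows))  # 3 * 7 * 3 = 63
```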

3.1 Results

Figure 2 shows a loss curve, relating the average loss in terms of performance with the number of workflows tested following the ranking suggested by each method. The loss is calculated as the difference between the performance of the best algorithm ranked by the method and the ground-truth ranking. The losses over all datasets are then averaged for aggregation purposes.

Fig. 2. Loss curve comparing autobagging with the Average Rank baseline.

We can see, as expected, that the average loss decreases for both methods as the number of workflows tested increases. In terms of comparison between autobagging and the Average Rank method, it is possible to visualize that autobagging shows a superior performance for all the values of the x axis. Interestingly, this result is particularly noticeable in the first tests. For instance, if we test only the top 1 workflow recommended by autobagging, on average, the kappa loss is half of the one we should expect from the suggestion made by the average rank method.

We evaluated these results to assess their statistical significance using Demšar's methodology. Figures 3 and 4 show the Critical Difference (CD) diagrams for both the meta- and the base-level.

Fig. 3. Critical Difference diagram (with α = 0.05) of the experiments at the meta-level.

Fig. 4. Critical Difference diagram (with α = 0.05) of the experiments at the base-level.

At the meta-level, using MAP@10 as evaluation metric, autobagging presents a clearly superior performance in comparison with the Average Rank. The difference is statistically significant, as one can see in the CD diagram. This result is in accordance with the performance visualized in Figure 2 for both methods.

At the base-level, we compare autobagging with three baselines, as mentioned before: bagging with 100 decision trees, the Average Rank method and the oracle. We test three versions of autobagging, taking the top 1, 3 and 5 bagging workflows ranked by the meta-model. For instance, in autobagging@3, we test the top 3 bagging workflows ranked by the meta-model and choose the best. Starting at the tail of the CD diagram, both the Average Rank method and autobagging@1 show a superior performance to bagging with 100 decision trees. Furthermore, autobagging@1 also shows a superior performance to the Average Rank method. This result confirms the indications visualized in Figure 2. The CD diagram also shows that autobagging@3 and autobagging@5 have a similar performance. However, and we must highlight this result, autobagging@5 shows a performance that is not statistically different from the oracle. This is extremely promising, since it shows that the performance of autobagging excels if the user is able to test the top 5 bagging workflows ranked by the system.

4 Conclusion

This paper presents autobagging, an autoML system that makes use of a learning-to-rank approach and metalearning to automatically suggest a bagging ensemble specifically designed for a given dataset. We tested the approach on 140 classification datasets and the results show that autobagging is clearly better than the baselines to which it was compared. In fact, if the top five workflows suggested by autobagging are tested, results show that the system achieves a performance that is not statistically different from the oracle, a method that systematically selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autobagging is publicly available as an R package.

Acknowledgements

This work was partly funded by the ECSEL Joint Undertaking, the framework programme for research and innovation Horizon 2020 (2014-2020), under grant agreement number 662189-MANTIS-2014-1.

References

1. Brazdil, P., Carrier, C.G., Soares, C., Vilalta, R.: Metalearning: Applications to Data Mining. Springer Science & Business Media (2008)
2. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123-140 (1996)
3. Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. In: KDD '16, pp. 785-794. ACM (2016)
4. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1), 37-46 (1960)
5. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. JMLR 7(Jan), 1-30 (2006)
6. Ko, A.H., Sabourin, R., Britto Jr., A.S.: From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition 41(5), 1718-1731 (2008)
7. Liu, T.Y.: Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3(3), 225-331 (2009)
8. Martínez-Muñoz, G., Hernández-Lobato, D., Suárez, A.: An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2), 245-259 (2009)
9. Pfahringer, B., Bensusan, H., Giraud-Carrier, C.: Tell me who can learn you and I can tell you who you are: Landmarking various learning algorithms. In: ICML, pp. 743-750 (2000)
10. Pinto, F., Soares, C., Mendes-Moreira, J.: Towards automatic generation of metafeatures. In: PAKDD, pp. 215-226. Springer (2016)
11. Reshef, D.N., Reshef, Y.A., Finucane, H.K., Grossman, S.R., McVean, G., Turnbaugh, P.J., Lander, E.S., Mitzenmacher, M., Sabeti, P.C.: Detecting novel associations in large data sets. Science 334(6062), 1518-1524 (2011)
12. Vanschoren, J., van Rijn, J.N., Bischl, B., Torgo, L.: OpenML: networked science in machine learning. ACM SIGKDD Explorations Newsletter 15(2), 49-60 (2014)
13. Woods, K., Kegelmeyer Jr., W.P., Bowyer, K.: Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(4), 405-410 (1997)