autoBagging: Learning to Rank Bagging Workflows with Metalearning
Fábio Pinto, Vítor Cerqueira, Carlos Soares and João Mendes-Moreira
INESC TEC / Faculdade de Engenharia, Universidade do Porto
Rua Dr. Roberto Frias, s/n, Porto, Portugal
fhpinto@inesctec.pt, vmac@inesctec.pt, csoares@fe.up.pt, jmoreira@fe.up.pt

Abstract. Machine Learning (ML) has been successfully applied to a wide range of domains and applications. One of the techniques behind most of these successful applications is Ensemble Learning (EL), the field of ML that gave birth to methods such as Random Forests or Boosting. The complexity of applying these techniques, together with the market scarcity of ML experts, has created the need for systems that enable a fast and easy drop-in replacement for ML libraries. Automated machine learning (autoML) is the field of ML that attempts to answer these needs. We propose autoBagging, an autoML system that automatically ranks 63 bagging workflows by exploiting past performance and metalearning. Results on 140 classification datasets from the OpenML platform show that autoBagging can yield better performance than the Average Rank method and achieve results that are not statistically different from an ideal model that systematically selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autoBagging is publicly available as an R package on CRAN.

Keywords: automated machine learning, metalearning, bagging, classification

1 Introduction

Ensemble Learning (EL) has proven itself one of the most powerful techniques in Machine Learning (ML), leading to state-of-the-art results across several domains. Methods such as bagging, boosting or Random Forests are considered some of the favourite algorithms among data science practitioners. However, getting the most out of these techniques still requires significant expertise, and it is often a complex and time-consuming task.
Furthermore, since the number of ML applications is growing exponentially, there is a need for tools that boost the data scientist's productivity. The resulting research field that aims to answer these needs is Automated Machine Learning (autoML). In this paper, we address the problem of how to automatically tune an EL algorithm, covering all components within it: generation (how to generate the models and how many), pruning (which technique should be used to prune the ensemble and how many models should be discarded) and integration (which model(s) should be selected and combined for each prediction). We focus specifically on the bagging algorithm [2] and four components of the algorithm: 1) the number of models that should be generated; 2) the pruning method; 3) how many models should be pruned; and 4) which dynamic integration method should be used. For the remainder of this paper, we call a set of these four elements a bagging workflow.

Our proposal is autoBagging, a system that combines a learning-to-rank approach with metalearning to tackle the problem of automatically generating bagging workflows. Ranking is a common task in information retrieval. For instance, to answer the query of a user, a search engine ranks a plethora of documents according to their relevance. In our case, the query is replaced by a new dataset and autoBagging acts as the ranking engine. Figure 1 shows an overall schema of the proposed system. We leverage the historical predictive performance of each workflow on several datasets, where each dataset is characterised by a set of metafeatures. This metadata is then used to generate a metamodel, using a learning-to-rank approach. Given a new dataset, we collect metafeatures from it and feed them to the metamodel. Finally, the metamodel outputs an ordered list of the workflows, taking into account the characteristics of the new dataset.

Fig. 1. Learning to Rank with Metalearning. The red lines represent offline tasks and the green ones represent online tasks.

We tested the approach on 140 classification datasets from the OpenML platform for collaborative ML [12] and 63 bagging workflows, which include two pruning techniques and two dynamic selection techniques. Results show that autoBagging has a better performance than two strong baselines, bagging with 100 trees and the Average Rank method.
Furthermore, testing the top 5 workflows recommended by autoBagging guarantees an outcome that is not statistically different from the Oracle, an ideal method that always selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autoBagging is available as an R package on CRAN.
2 autoBagging

We approach the problem of algorithm selection as a learning-to-rank problem [7]. Let D be the set of datasets and A the set of algorithms. Y = {1, 2, ..., l} is the label set, where each value is a relevance score representing the relative performance of a given algorithm. Therefore, l ≻ l-1 ≻ ... ≻ 1, where ≻ represents an order relationship. Furthermore, D_m = {d_1, d_2, ..., d_m} is the set of datasets for training and d_i is the i-th dataset; A_i = {a_{i,1}, a_{i,2}, ..., a_{i,n_i}} is the set of algorithms associated with dataset d_i; and y_i = {y_{i,1}, y_{i,2}, ..., y_{i,n_i}} is the set of labels associated with dataset d_i, where n_i denotes the size of A_i and of y_i, a_{i,j} denotes the j-th algorithm in A_i, and y_{i,j} ∈ Y denotes the j-th label in y_i, representing the relevance score of a_{i,j} with respect to d_i. Finally, the meta-dataset is denoted as S = {((d_i, A_i), y_i)}_{i=1}^{m}.

We use metalearning to generate the metafeature vectors x_{i,j} = φ(d_i, a_{i,j}) for each dataset-algorithm pair, where i = 1, 2, ..., m; j = 1, 2, ..., n_i; and φ represents the metafeature extraction functions. These metafeatures can describe d_i, a_{i,j}, or even the relationship between both. Therefore, taking x_i = {x_{i,1}, x_{i,2}, ..., x_{i,n_i}}, we can represent the meta-dataset as S = {(x_i, y_i)}_{i=1}^{m}. Our goal is to train a meta ranking model f(d, a) = f(x) that is able to assign a relevance score to a given new dataset-algorithm pair d and a, given x.

2.1 Metafeatures

We approach the problem of generating metafeatures to characterize d and a with the aid of a framework for systematic metafeature generation [10]. Essentially, this framework regards a metafeature as a combination of three components: a meta-function, a set of input objects and a post-processing function. The framework establishes how to systematically generate metafeatures from all possible combinations of object and post-processing alternatives that are compatible with a given meta-function.
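The formal setup above can be made concrete with a short sketch. The paper ships an R package, but the illustration below is in Python, and all names (build_meta_dataset, phi, relevance) are our own inventions, not the authors' code: each dataset acts as one query group, and every dataset-workflow pair contributes one feature vector with a relevance label.

```python
# Illustrative sketch of the meta-dataset S = {(x_i, y_i)}: one feature
# vector x_ij = phi(d_i, a_j) per dataset-workflow pair, labelled with
# the workflow's relevance score y_ij on that dataset.

def build_meta_dataset(datasets, workflows, phi, relevance):
    """phi(d, a) extracts metafeatures; relevance(d, a) returns y_ij."""
    X, y, groups = [], [], []
    for d in datasets:
        for a in workflows:
            X.append(phi(d, a))
            y.append(relevance(d, a))
        # learning-to-rank treats each dataset as one query group
        groups.append(len(workflows))
    return X, y, groups

# Toy usage: two "datasets", three "workflows", dummy functions
X, y, groups = build_meta_dataset(
    datasets=[[1, 2, 3], [4, 5]],
    workflows=["w50", "w100", "w200"],
    phi=lambda d, a: [len(d), len(a)],
    relevance=lambda d, a: len(a),  # placeholder relevance score
)
```

The `groups` list is the piece that distinguishes ranking from ordinary regression: it tells a learning-to-rank trainer which rows belong to the same query (here, the same dataset).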
Thus, the development of metafeatures for a metalearning approach simply consists of selecting a set of meta-functions (e.g., entropy, mutual information and correlation), and the framework systematically generates the set of metafeatures that represent all the information that can be obtained with those meta-functions from the data. For this task in particular, we selected a set of meta-functions that are able to characterize the datasets as completely as possible (measuring information regarding the target variable, the categorical and numerical features, etc.), the algorithms, and the relationship between the datasets and the algorithms (which can be seen as landmarkers [9]). Therefore, the set of meta-functions used is: skewness, Pearson's correlation, the Maximal Information Coefficient (MIC [11]), entropy, mutual information, eta squared (from the ANOVA test) and the rank of each algorithm [1].

Each meta-function is used to systematically measure information from all possible combinations of input objects available for this task. We define the available input objects as: discrete descriptive data of the datasets, continuous descriptive data of the datasets, discrete output data of the datasets, and five sets of predictions (discrete predicted data) for each dataset (naive Bayes; decision trees with depth 1, 2 and 3; and majority class). For instance, if we take the example of using entropy as a meta-function, it is possible to measure information in discrete descriptive data, discrete output data and discrete predicted data (if the base-level problem is a classification task). After computing the entropy of all these objects, it might be necessary to aggregate the information in order to keep the tabular form of the data. Take, for example, the aggregation required for the entropy values computed for each discrete attribute. We therefore chose a palette of aggregation functions to capture several dimensions of these values and minimize the loss of information caused by aggregation. In that sense, the post-processing functions chosen were: average, maximum, minimum, standard deviation, variance and histogram binning.

Given these meta-functions, the available input objects and the post-processing functions, we are able to generate a set of 131 metafeatures. To this set we add eight metafeatures: the number of examples of the dataset, the number of attributes and the number of classes of the target variable, plus five landmarkers (the ones already described above) estimated using accuracy as the error measure. Furthermore, we add four metafeatures to describe the components of each workflow: the number of trees, the pruning method, the pruning cut point and the dynamic selection method. In total, autoBagging uses a set of 143 metafeatures.

2.2 Metatarget

In order to be able to learn a ranking meta-model f(d, a), we need to compute a metatarget that assigns a score z to each dataset-algorithm pair (d, a), so that F : (D, A) → Z, where F is the set of ranking meta-models and Z is the metatarget set.
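As an illustration of the systematic generation described in Section 2.1, the sketch below (a hypothetical Python rendering, not the package's implementation) applies one meta-function (entropy) to every compatible input object, here the discrete attributes of a toy dataset, and aggregates the per-object values with a set of post-processing functions:

```python
import math
from statistics import mean, pstdev

def entropy(column):
    """Meta-function: Shannon entropy of one discrete input object."""
    n = len(column)
    counts = {}
    for v in column:
        counts[v] = counts.get(v, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def systematic_metafeatures(columns, meta_function,
                            post=(min, max, mean, pstdev)):
    """Apply one meta-function to every compatible object, then
    post-process the per-object values into a fixed-length vector."""
    values = [meta_function(col) for col in columns]
    return [f(values) for f in post]

# Two discrete descriptive attributes of a toy dataset
cols = [["a", "a", "b", "b"], ["x", "x", "x", "y"]]
mf = systematic_metafeatures(cols, entropy)  # [min, max, mean, pstdev]
```

The post-processing step is what keeps the meta-dataset tabular: however many attributes a dataset has, each (meta-function, object type) combination contributes a fixed number of aggregated values.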
To compute z, we use a cross-validation error estimation methodology (4-fold cross-validation in the experiments reported in this paper, Section 3), in which we estimate the performance of each bagging workflow on each dataset using Cohen's kappa score [4]. On top of the estimated kappa scores, for each dataset, we rank the bagging workflows. This ranking is the final form of the metatarget, and it is then used for learning the meta-model.

3 Experiments

Our experimental setup comprises 140 classification datasets extracted from the OpenML platform for collaborative machine learning [12]. We limited the datasets extracted to a maximum of 5000 instances, a minimum of 300 instances and a maximum of 1000 attributes, in order to speed up the experiments and exclude datasets that could be too small for some of the bagging workflows that we wanted to test.

Regarding bagging workflows, we limited the hyperparameters of the bagging workflows to four: number of models generated, pruning method, pruning cut point and dynamic selection method. Specifically, each hyperparameter could take the following values:

- Number of models: 50, 100 or 200. Decision trees were chosen as the learning algorithm.
- Pruning method: Margin Distance Minimization (MDSQ) [8], Boosting-Based Pruning (BB) [8] or none.
- Pruning cut point: 25%, 50% or 75%.
- Dynamic integration method: Overall Local Accuracy (OLA), a dynamic selection method [13]; K-Nearest-Oracles-Eliminate (KNORA-E) [6], a dynamic combination method; or none.

The combination of all the hyperparameters described above generates 63 valid workflows. We tested these bagging workflows on the datasets extracted from OpenML with 4-fold cross-validation, using Cohen's kappa as the evaluation metric. We used the XGBoost learning-to-rank implementation for gradient boosting of decision trees [3] to learn the metamodel, as described in Section 2. As baselines at the base-level, we used: 1) bagging with 100 decision trees; 2) the Average Rank method, which is a model that always predicts the bagging workflow with the best average rank in the meta training set; and 3) the Oracle, an ideal model that always selects the best bagging workflow for each dataset. At the meta-level, we used the Average Rank method as the baseline.

As evaluation methodology, we used an approach similar to leave-one-out. However, each test fold consists of all the algorithm-dataset pairs associated with the test dataset. The remaining examples are used for training. The evaluation metric at the meta-level is the Mean Average Precision at 10 (MAP@10) and, at the base-level, as mentioned before, we use Cohen's kappa. The methodology recommended by Demšar [5] was used for statistical validation of the results.

3.1 Results

Figure 2 shows a loss curve, relating the average loss in terms of performance with the number of workflows tested following the ranking suggested by each method.
The loss is calculated as the difference between the performance of the best workflow ranked by the method and the performance of the best workflow in the ground-truth ranking. The losses over all datasets are then averaged for aggregation purposes. We can see, as expected, that the average loss decreases for both methods as the number of workflows tested increases. Comparing autoBagging with the Average Rank method, autoBagging shows a superior performance for all values of the x axis. Interestingly, this result is particularly noticeable in the first tests. For instance, if we test only the top 1 workflow recommended by autoBagging, on average, the kappa loss is half of the one we should expect from the suggestion made by the Average Rank method.
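Our reading of this loss curve can be sketched as follows (an illustrative Python reconstruction under our own assumptions; `loss_at_k` and `average_loss_curve` are invented names, not the authors' code): for each dataset, the loss at k is the gap between the best true score of any workflow and the best true score among the method's top-k suggestions, averaged over datasets.

```python
# Sketch of the Figure 2 loss curve: loss at k per dataset, averaged.

def loss_at_k(ranked_workflows, true_scores, k):
    best_overall = max(true_scores.values())
    best_in_top_k = max(true_scores[w] for w in ranked_workflows[:k])
    return best_overall - best_in_top_k

def average_loss_curve(per_dataset, max_k):
    """per_dataset: list of (ranked_workflows, true_scores) pairs."""
    return [
        sum(loss_at_k(r, s, k) for r, s in per_dataset) / len(per_dataset)
        for k in range(1, max_k + 1)
    ]

# Toy example: one dataset, three workflows, kappa-like true scores
ranking = ["w2", "w1", "w3"]              # method's suggested order
scores = {"w1": 0.80, "w2": 0.70, "w3": 0.60}
curve = average_loss_curve([(ranking, scores)], max_k=3)
# loss is 0.10 at k=1 and drops to 0 once the true best enters the top-k
```

By construction the curve is non-increasing in k, which matches the behaviour visible in Figure 2 for both methods.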
Fig. 2. Loss curve comparing autoBagging with the Average Rank baseline.

We evaluated these results to assess their statistical significance using Demšar's methodology. Figures 3 and 4 show the Critical Difference (CD) diagrams for the meta-level and the base-level, respectively.

Fig. 3. Critical Difference diagram (with α = 0.05) of the experiments at the meta-level.

Fig. 4. Critical Difference diagram (with α = 0.05) of the experiments at the base-level.

At the meta-level, using MAP@10 as the evaluation metric, autoBagging presents a clearly superior performance in comparison with the Average Rank. The difference is statistically significant, as one can see in the CD diagram. This result is in accordance with the performance of both methods visualized in Figure 2. At the base-level, we compare autoBagging with three baselines, as mentioned before: bagging with 100 decision trees, the Average Rank method and the Oracle. We test three versions of autoBagging, taking the top 1, 3 and 5 bagging workflows ranked by the meta-model. For instance, in autoBagging@3, we test the top 3 bagging workflows ranked by the meta-model and choose the best. Starting at the tail of the CD diagram, both the Average Rank method and autoBagging@1 show a superior performance to bagging with 100 decision trees. Furthermore, autoBagging@1 also shows a superior performance to the Average Rank method. This result confirms the indications visualized in Figure 2. The CD diagram also shows that autoBagging@3 and autoBagging@5 have a similar performance. However, and we must highlight this result, autoBagging@5 shows a performance that is not statistically different from the Oracle. This is extremely promising, since it shows that the performance of autoBagging excels if the user is able to test the top 5 bagging workflows ranked by the system.
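The meta-level metric used above, MAP@10, can be computed as in the sketch below. Note that this assumes one common definition of average precision at a cutoff with binary relevance judgements; the authors' exact implementation (in particular, which workflows count as relevant for a dataset) may differ in detail.

```python
# Sketch of Mean Average Precision at k (MAP@k) with binary relevance.

def average_precision_at_k(ranked, relevant, k=10):
    """Average precision of one ranked list against a relevant set."""
    hits, precision_sum = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / i
    return precision_sum / min(len(relevant), k) if relevant else 0.0

def map_at_k(queries, k=10):
    """queries: list of (ranked_list, relevant_set), one per test dataset."""
    return sum(average_precision_at_k(r, rel, k)
               for r, rel in queries) / len(queries)

# Toy usage: two test datasets with small workflow rankings
q = [(["w1", "w2", "w3"], {"w1", "w3"}),
     (["w2", "w1", "w3"], {"w1"})]
score = map_at_k(q, k=3)
```

Because average precision rewards placing relevant items early, MAP@10 is a natural fit for this setting: a practitioner will only try the first few suggested workflows, so errors near the top of the ranking should cost more.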
4 Conclusion

This paper presents autoBagging, an autoML system that makes use of a learning-to-rank approach and metalearning to automatically suggest a bagging ensemble specifically designed for a given dataset. We tested the approach on 140 classification datasets, and the results show that autoBagging is clearly better than the baselines to which it was compared. In fact, if the top five workflows suggested by autoBagging are tested, results show that the system achieves a performance that is not statistically different from the Oracle, a method that systematically selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autoBagging is publicly available as an R package.

Acknowledgements. This work was partly funded by the ECSEL Joint Undertaking, the framework programme for research and innovation Horizon 2020, under the MANTIS grant agreement.

References

1. Brazdil, P., Giraud-Carrier, C., Soares, C., Vilalta, R.: Metalearning: Applications to Data Mining. Springer Science & Business Media (2008)
2. Breiman, L.: Bagging predictors. Machine Learning 24(2) (1996)
3. Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. In: KDD '16. ACM (2016)
4. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1) (1960)
5. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. JMLR 7(Jan), 1-30 (2006)
6. Ko, A.H., Sabourin, R., Britto Jr., A.S.: From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition 41(5) (2008)
7. Liu, T.Y.: Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3(3) (2009)
8. Martínez-Muñoz, G., Hernández-Lobato, D., Suárez, A.: An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2) (2009)
9. Pfahringer, B., Bensusan, H., Giraud-Carrier, C.: Tell me who can learn you and I can tell you who you are: Landmarking various learning algorithms. In: ICML (2000)
10. Pinto, F., Soares, C., Mendes-Moreira, J.: Towards automatic generation of metafeatures. In: PAKDD. Springer (2016)
11. Reshef, D.N., Reshef, Y.A., Finucane, H.K., Grossman, S.R., McVean, G., Turnbaugh, P.J., Lander, E.S., Mitzenmacher, M., Sabeti, P.C.: Detecting novel associations in large data sets. Science 334(6062) (2011)
12. Vanschoren, J., van Rijn, J.N., Bischl, B., Torgo, L.: OpenML: Networked science in machine learning. ACM SIGKDD Explorations Newsletter 15(2) (2014)
13. Woods, K., Kegelmeyer Jr., W.P., Bowyer, K.: Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(4) (1997)
Product Feature-based Ratings foropinionsummarization of E-Commerce Feedback Comments Vijayshri Ramkrishna Ingale PG Student, Department of Computer Engineering JSPM s Imperial College of Engineering &
More informationLarge-Scale Web Page Classification. Sathi T Marath. Submitted in partial fulfilment of the requirements. for the degree of Doctor of Philosophy
Large-Scale Web Page Classification by Sathi T Marath Submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy at Dalhousie University Halifax, Nova Scotia November 2010
More informationData Stream Processing and Analytics
Data Stream Processing and Analytics Vincent Lemaire Thank to Alexis Bondu, EDF Outline Introduction on data-streams Supervised Learning Conclusion 2 3 Big Data what does that mean? Big Data Analytics?
More informationImpact of Cluster Validity Measures on Performance of Hybrid Models Based on K-means and Decision Trees
Impact of Cluster Validity Measures on Performance of Hybrid Models Based on K-means and Decision Trees Mariusz Łapczy ski 1 and Bartłomiej Jefma ski 2 1 The Chair of Market Analysis and Marketing Research,
More informationA new way to share, organize and learn from experiments
Mach Learn (2012) 87:127 158 DOI 10.1007/s10994-011-5277-0 Experiment databases A new way to share, organize and learn from experiments Joaquin Vanschoren Hendrik Blockeel Bernhard Pfahringer Geoffrey
More informationSemi-supervised methods of text processing, and an application to medical concept extraction. Yacine Jernite Text-as-Data series September 17.
Semi-supervised methods of text processing, and an application to medical concept extraction Yacine Jernite Text-as-Data series September 17. 2015 What do we want from text? 1. Extract information 2. Link
More informationExperiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling
Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Notebook for PAN at CLEF 2013 Andrés Alfonso Caurcel Díaz 1 and José María Gómez Hidalgo 2 1 Universidad
More informationMatching Similarity for Keyword-Based Clustering
Matching Similarity for Keyword-Based Clustering Mohammad Rezaei and Pasi Fränti University of Eastern Finland {rezaei,franti}@cs.uef.fi Abstract. Semantic clustering of objects such as documents, web
More informationMachine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler
Machine Learning and Data Mining Ensembles of Learners Prof. Alexander Ihler Ensemble methods Why learn one classifier when you can learn many? Ensemble: combine many predictors (Weighted) combina
More informationUSER ADAPTATION IN E-LEARNING ENVIRONMENTS
USER ADAPTATION IN E-LEARNING ENVIRONMENTS Paraskevi Tzouveli Image, Video and Multimedia Systems Laboratory School of Electrical and Computer Engineering National Technical University of Athens tpar@image.
More informationExtracting Opinion Expressions and Their Polarities Exploration of Pipelines and Joint Models
Extracting Opinion Expressions and Their Polarities Exploration of Pipelines and Joint Models Richard Johansson and Alessandro Moschitti DISI, University of Trento Via Sommarive 14, 38123 Trento (TN),
More informationCultivating DNN Diversity for Large Scale Video Labelling
Cultivating DNN Diversity for Large Scale Video Labelling Mikel Bober-Irizar mikel@mxbi.net Sameed Husain sameed.husain@surrey.ac.uk Miroslaw Bober m.bober@surrey.ac.uk Eng-Jon Ong e.ong@surrey.ac.uk Abstract
More informationBeyond the Pipeline: Discrete Optimization in NLP
Beyond the Pipeline: Discrete Optimization in NLP Tomasz Marciniak and Michael Strube EML Research ggmbh Schloss-Wolfsbrunnenweg 33 69118 Heidelberg, Germany http://www.eml-research.de/nlp Abstract We
More informationEvidence for Reliability, Validity and Learning Effectiveness
PEARSON EDUCATION Evidence for Reliability, Validity and Learning Effectiveness Introduction Pearson Knowledge Technologies has conducted a large number and wide variety of reliability and validity studies
More informationClass-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification
Class-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification Tomi Kinnunen and Ismo Kärkkäinen University of Joensuu, Department of Computer Science, P.O. Box 111, 80101 JOENSUU,
More informationContent-free collaborative learning modeling using data mining
User Model User-Adap Inter DOI 10.1007/s11257-010-9095-z ORIGINAL PAPER Content-free collaborative learning modeling using data mining Antonio R. Anaya Jesús G. Boticario Received: 23 April 2010 / Accepted
More informationHuman Emotion Recognition From Speech
RESEARCH ARTICLE OPEN ACCESS Human Emotion Recognition From Speech Miss. Aparna P. Wanare*, Prof. Shankar N. Dandare *(Department of Electronics & Telecommunication Engineering, Sant Gadge Baba Amravati
More informationP. Belsis, C. Sgouropoulou, K. Sfikas, G. Pantziou, C. Skourlas, J. Varnas
Exploiting Distance Learning Methods and Multimediaenhanced instructional content to support IT Curricula in Greek Technological Educational Institutes P. Belsis, C. Sgouropoulou, K. Sfikas, G. Pantziou,
More informationRule discovery in Web-based educational systems using Grammar-Based Genetic Programming
Data Mining VI 205 Rule discovery in Web-based educational systems using Grammar-Based Genetic Programming C. Romero, S. Ventura, C. Hervás & P. González Universidad de Córdoba, Campus Universitario de
More information(Sub)Gradient Descent
(Sub)Gradient Descent CMSC 422 MARINE CARPUAT marine@cs.umd.edu Figures credit: Piyush Rai Logistics Midterm is on Thursday 3/24 during class time closed book/internet/etc, one page of notes. will include
More informationSemi-Supervised Face Detection
Semi-Supervised Face Detection Nicu Sebe, Ira Cohen 2, Thomas S. Huang 3, Theo Gevers Faculty of Science, University of Amsterdam, The Netherlands 2 HP Research Labs, USA 3 Beckman Institute, University
More informationActive Learning. Yingyu Liang Computer Sciences 760 Fall
Active Learning Yingyu Liang Computer Sciences 760 Fall 2017 http://pages.cs.wisc.edu/~yliang/cs760/ Some of the slides in these lectures have been adapted/borrowed from materials developed by Mark Craven,
More informationExposé for a Master s Thesis
Exposé for a Master s Thesis Stefan Selent January 21, 2017 Working Title: TF Relation Mining: An Active Learning Approach Introduction The amount of scientific literature is ever increasing. Especially
More informationAn Effective Framework for Fast Expert Mining in Collaboration Networks: A Group-Oriented and Cost-Based Method
Farhadi F, Sorkhi M, Hashemi S et al. An effective framework for fast expert mining in collaboration networks: A grouporiented and cost-based method. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 27(3): 577
More informationSemi-Supervised GMM and DNN Acoustic Model Training with Multi-system Combination and Confidence Re-calibration
INTERSPEECH 2013 Semi-Supervised GMM and DNN Acoustic Model Training with Multi-system Combination and Confidence Re-calibration Yan Huang, Dong Yu, Yifan Gong, and Chaojun Liu Microsoft Corporation, One
More informationAutomating the E-learning Personalization
Automating the E-learning Personalization Fathi Essalmi 1, Leila Jemni Ben Ayed 1, Mohamed Jemni 1, Kinshuk 2, and Sabine Graf 2 1 The Research Laboratory of Technologies of Information and Communication
More informationA Comparison of Two Text Representations for Sentiment Analysis
010 International Conference on Computer Application and System Modeling (ICCASM 010) A Comparison of Two Text Representations for Sentiment Analysis Jianxiong Wang School of Computer Science & Educational
More informationAxiom 2013 Team Description Paper
Axiom 2013 Team Description Paper Mohammad Ghazanfari, S Omid Shirkhorshidi, Farbod Samsamipour, Hossein Rahmatizadeh Zagheli, Mohammad Mahdavi, Payam Mohajeri, S Abbas Alamolhoda Robotics Scientific Association
More informationActivity Recognition from Accelerometer Data
Activity Recognition from Accelerometer Data Nishkam Ravi and Nikhil Dandekar and Preetham Mysore and Michael L. Littman Department of Computer Science Rutgers University Piscataway, NJ 08854 {nravi,nikhild,preetham,mlittman}@cs.rutgers.edu
More informationIndian Institute of Technology, Kanpur
Indian Institute of Technology, Kanpur Course Project - CS671A POS Tagging of Code Mixed Text Ayushman Sisodiya (12188) {ayushmn@iitk.ac.in} Donthu Vamsi Krishna (15111016) {vamsi@iitk.ac.in} Sandeep Kumar
More informationA GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING
A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING Yong Sun, a * Colin Fidge b and Lin Ma a a CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queensland
More informationBug triage in open source systems: a review
Int. J. Collaborative Enterprise, Vol. 4, No. 4, 2014 299 Bug triage in open source systems: a review V. Akila* and G. Zayaraz Department of Computer Science and Engineering, Pondicherry Engineering College,
More informationLip reading: Japanese vowel recognition by tracking temporal changes of lip shape
Lip reading: Japanese vowel recognition by tracking temporal changes of lip shape Koshi Odagiri 1, and Yoichi Muraoka 1 1 Graduate School of Fundamental/Computer Science and Engineering, Waseda University,
More informationTime series prediction
Chapter 13 Time series prediction Amaury Lendasse, Timo Honkela, Federico Pouzols, Antti Sorjamaa, Yoan Miche, Qi Yu, Eric Severin, Mark van Heeswijk, Erkki Oja, Francesco Corona, Elia Liitiäinen, Zhanxing
More informationMulti-label classification via multi-target regression on data streams
Mach Learn (2017) 106:745 770 DOI 10.1007/s10994-016-5613-5 Multi-label classification via multi-target regression on data streams Aljaž Osojnik 1,2 Panče Panov 1 Sašo Džeroski 1,2,3 Received: 26 April
More informationEvaluating and Comparing Classifiers: Review, Some Recommendations and Limitations
Evaluating and Comparing Classifiers: Review, Some Recommendations and Limitations Katarzyna Stapor (B) Institute of Computer Science, Silesian Technical University, Gliwice, Poland katarzyna.stapor@polsl.pl
More informationAssessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2
Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2 Ted Pedersen Department of Computer Science University of Minnesota Duluth, MN, 55812 USA tpederse@d.umn.edu
More informationFragment Analysis and Test Case Generation using F- Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing
Fragment Analysis and Test Case Generation using F- Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing D. Indhumathi Research Scholar Department of Information Technology
More informationOptimizing to Arbitrary NLP Metrics using Ensemble Selection
Optimizing to Arbitrary NLP Metrics using Ensemble Selection Art Munson, Claire Cardie, Rich Caruana Department of Computer Science Cornell University Ithaca, NY 14850 {mmunson, cardie, caruana}@cs.cornell.edu
More informationMining Association Rules in Student s Assessment Data
www.ijcsi.org 211 Mining Association Rules in Student s Assessment Data Dr. Varun Kumar 1, Anupama Chadha 2 1 Department of Computer Science and Engineering, MVN University Palwal, Haryana, India 2 Anupama
More informationarxiv: v1 [cs.lg] 15 Jun 2015
Dual Memory Architectures for Fast Deep Learning of Stream Data via an Online-Incremental-Transfer Strategy arxiv:1506.04477v1 [cs.lg] 15 Jun 2015 Sang-Woo Lee Min-Oh Heo School of Computer Science and
More informationDeveloping True/False Test Sheet Generating System with Diagnosing Basic Cognitive Ability
Developing True/False Test Sheet Generating System with Diagnosing Basic Cognitive Ability Shih-Bin Chen Dept. of Information and Computer Engineering, Chung-Yuan Christian University Chung-Li, Taiwan
More informationChapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA. 1. Introduction. Alta de Waal, Jacobus Venter and Etienne Barnard
Chapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA Alta de Waal, Jacobus Venter and Etienne Barnard Abstract Most actionable evidence is identified during the analysis phase of digital forensic investigations.
More information