A practical way of handling missing values in combination with tree-based learners

V.J.J. Kusters, S.J.J. Leemans, D.M.M. Schunselaar, F. Staals
Eindhoven University of Technology, Den Dolech 2, 5612 AZ Eindhoven

Abstract

In this age of information, techniques for analysing the huge amounts of data that are generated every day are becoming increasingly important. We would often like to use simple tree-based learners on these data sets. Unfortunately, the performance of these algorithms decreases rapidly as more missing values, or nulls, are present in the data set. In this paper we present a practical way of imputing missing values before learning a tree-based model. This method is then applied to the Census-Income (KDD) data set with various amounts of missing values. Additionally, to confirm our expectation that imputation is less useful when using a noise-tolerant learner, the same procedure is executed with a Naive Bayes learner. The proposed method is shown to greatly increase the accuracy of tree-based learners on a data set with many missing values.

1 Introduction

As our civilization continues to rely more and more on computer systems, the amount of information that is collected and generated increases vastly. Since data sets quickly become too large to analyse by hand, computer tools are required to classify data and find relationships. Various methods exist for the classification of tuples into categories, but tree-based learners are among the easiest to use. They match the way a human would tackle a classification problem, making them easy to understand. Consequently, tree-based learners are widely used in industry to solve classification problems.

Many real-world data sets contain missing values. Unfortunately, the presence of large amounts of missing values severely impacts the accuracy of tree-based learners. In this paper we present a practical method of imputing missing values prior to learning a model. Our hypothesis is:

Hypothesis 1.1. A method that imputes missing values before applying a tree-based learner produces a more accurate model than a method that applies a tree-based learner directly.

We test our hypothesis on the Census-Income (KDD) data set provided by Asuncion and Newman [1]. This data set contains census data of the American Department of Commerce. Every tuple represents a person and his or her age, education level, race, etcetera. We learn classification models on this data set to categorise persons based on their yearly income: a person belongs to one class if their income is at most $50,000, and to the other class otherwise. Furthermore, we examine the effects of missing value imputation on the results of a Naive Bayes learner.

In our experiment we compare two methods of learning a classification model on our data set. One method imputes the missing values before learning a classification model, while the other method learns the classification model directly. A detailed description of both methods and the required preprocessing steps is given in Section 2. The results of our experiment show that the method which imputes missing values results in a significantly more accurate model when used with a decision tree. When we use a Naive Bayes learner, there is no significant change in accuracy. Detailed results are presented in Section 3.

2 Approach

To test our hypothesis, we set up our experiment as depicted in Figure 1. We apply a learner to the data set directly (the normal method) and apply the same learner after imputing missing values (the imputation method), and compare the results. Both methods learn a model on the training set; given a tuple, these models classify the tuple into one of the two income classes.

The normal method immediately learns a classification model on the training set. Validation of the results is done by applying the model to the test set. This method is shown on the right side of Figure 1.

The imputation method learns the missing value models and imputes the missing values in the training set before learning the classification model. We use the approach by Quinlan [6] to impute the missing values. This approach involves learning a model for every attribute of the data set: each attribute model predicts that attribute from the remaining attributes, and its predictions replace the attribute's missing values. We use the models we learned on the training set and apply them to the test set to validate the results. This method is shown on the left side of Figure 1.

We use a decision tree learner to compute the classification model. The learner is the default decision tree provided by RapidMiner [9] and is similar to the C4.5 decision tree presented by Quinlan [8]. To learn the missing value models we use the same decision tree learner that is used to compute the classification model. Note that decision trees are sensitive to missing values. This is intentional: to achieve good results, we want to use a learner that works well on a particular data set. If a learner that is not sensitive to missing values produced good results on our data set, we could simply use that learner directly; i.e., there would be no need to impute missing values in the first place.

We have focused our research on imputing nominal attributes. Various other authors, such as Fujikawa and Ho [3], have done research on the imputation of numerical attributes, but this is beyond the scope of our paper.

Figure 1: A model of our test setup. The left side shows the imputation method (learn missing value models, impute missing values in the training and test sets, then learn and validate the classification model); the right side shows the normal method (learn and validate the classification model directly).

Finally, we run the same procedure again, but replace the tree-based learner with a Naive Bayes learner and compare the results.
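The paper performs this imputation with RapidMiner's MissingValueImputation operator (see the Implementation step below). Purely as an illustration of the per-attribute idea described above, the following is a minimal sketch in Python using pandas and scikit-learn, neither of which is used in the paper; the DataFrame layout and the use of DecisionTreeClassifier as a stand-in for RapidMiner's decision tree are assumptions.

```python
# Hedged sketch of per-attribute imputation with decision trees (Quinlan-style).
# Assumes a pandas DataFrame of nominal attributes; pandas and scikit-learn are
# illustrative stand-ins for the RapidMiner operator used in the paper.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def impute_with_trees(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for target in df.columns:
        missing = df[target].isna()
        if not missing.any():
            continue  # nothing to impute for this attribute
        predictors = [c for c in df.columns if c != target]
        # Encode the predictor attributes; their own nulls become a category.
        X = pd.get_dummies(df[predictors].fillna("unknown"))
        # Learn a model for this attribute on the rows where it is observed...
        tree = DecisionTreeClassifier()
        tree.fit(X[~missing], df.loc[~missing, target])
        # ...and use its predictions to replace the missing values.
        df.loc[missing, target] = tree.predict(X[missing])
    return df
```

Note that in the paper the attribute models are learned on the training set only and then applied to the test set, so a faithful reimplementation would fit these per-attribute trees on the training data and reuse them when imputing the test data.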

However, before we can execute the experiment in Figure 1, some preprocessing steps have to be executed. The following steps are executed in the order in which they are listed; an illustrative sketch of the scripted steps is given after the list.

Sampling. To increase the speed of computation, we consider only an initial portion of the rows of the data set and remove the rest. Since the original data set is not ordered in any particular way, this is equivalent to taking a random sample.

Missing Value Introduction. We introduce extra missing values to investigate the effect of increasing amounts of nulls on missing value imputation. For every experiment, we choose a probability p; each value in the data set then has probability p of being converted to a null. Magnani refers to this as the Missing Completely At Random method [5]. p is chosen from the set {0, 0.05, 0.10, 0.15, 0.20, 0.25}.

Duplication. Since the minority class (income above $50,000) contains only 6% of the rows in the data set, a naive classification approach would classify everything into the majority class. This would result in an accuracy of 94%, and with such a classification it would not be possible to assess the results of the missing value imputation. To prevent the learner from classifying everything into the majority class, we duplicate the rows of the minority class 15 times, after which the classes in the resulting data set are of roughly equal size. After duplicating the rows, we shuffle the data set to prevent clustering of the duplicated rows.

Attribute Filtering. One of the attributes in our data set is the instance weight attribute. According to the attribute description accompanying the data set, this attribute should not be used for classification. Hence, it was removed.

Validation. We split our data set into two disjoint sets: a training set, which we use to learn our model, and a test set, on which we validate our model. Since we randomised the data set after duplicating the minority class rows, we can now take the first n rows for the training set and the last n rows for the test set. To make sure our sample is of sufficient size we tested with n = 2000 and n = 7500.

Implementation. The sampling, missing value introduction, and duplication operations described in the previous paragraphs were implemented in a custom Python script. The attribute filtering and validation steps were implemented by various operators in RapidMiner [9]. After these steps we use the RapidMiner MissingValueImputation operator to learn, apply and store the missing value models.
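The custom script mentioned in the Implementation step is not included in the paper, so the following is only an illustrative sketch, assuming pandas and NumPy, of the missing value introduction (MCAR with probability p), the minority class duplication, the shuffle, and the train/test split; the column name income_class and the label '>50000' are hypothetical stand-ins for whatever names the actual data set uses, and excluding the class column from null introduction is also an assumption.

```python
# Illustrative sketch of the scripted preprocessing steps (not the paper's code).
# Column name "income_class" and label ">50000" are assumed for illustration.
import numpy as np
import pandas as pd

def introduce_mcar(df: pd.DataFrame, p: float, class_col: str, seed: int = 0) -> pd.DataFrame:
    """Convert each attribute value (not the class label) to a null with probability p."""
    rng = np.random.default_rng(seed)
    df = df.copy()
    attrs = [c for c in df.columns if c != class_col]
    mask = rng.random((len(df), len(attrs))) < p
    df[attrs] = df[attrs].mask(mask)  # masked cells become NaN (nulls)
    return df

def duplicate_minority(df: pd.DataFrame, class_col: str, label: str, times: int = 15) -> pd.DataFrame:
    """Duplicate the minority-class rows (so they appear `times` times) and shuffle."""
    minority = df[df[class_col] == label]
    df = pd.concat([df] + [minority] * (times - 1), ignore_index=True)
    return df.sample(frac=1, random_state=0).reset_index(drop=True)

# Example use: p = 0.10, then take the first/last n rows (n = 2000 or 7500).
# df = duplicate_minority(introduce_mcar(df, 0.10, "income_class"), "income_class", ">50000")
# n = 2000
# train, test = df.iloc[:n], df.iloc[-n:]
```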
3 Experimental Validation

In our experiment we found support for the following claims.

Claim 3.1. The accuracy of a decision tree learner declines significantly as the number of missing values in a data set increases.

Claim 3.2. Missing value imputation becomes more effective for a decision tree learner as the number of missing values in a data set increases.

Claim 3.3. The accuracy of a Naive Bayes learner does not decline significantly as the number of missing values in a data set increases.

Claim 3.4. Missing value imputation has no significant effect on the accuracy of a Naive Bayes learner.

Applying the procedure described in the previous section for varying amounts of missing values and learners yields the results in Table 1. Figure 2a shows a decline of the decision tree's accuracy when the number of missing values in the data set increases, which supports Claim 3.1. Additionally, the difference between the accuracies of both methods increases as the data set contains more missing values, which supports Claim 3.2. The difference is especially clear with the larger sample of 7500 tuples: initially the difference is less than 1%, but when the amount of introduced missing values is increased to 25%, the accuracy difference grows to 18%. The smaller sample of 2000 tuples exhibits the same, though less extreme, behaviour.

Figure 2b shows a radically different behaviour: there is almost no difference between the imputation method and the normal method. Furthermore, the accuracy of the Naive Bayes learner shows no significant decrease when the number of missing values increases; the difference between the accuracies of the 0% and 25% experiments is less than a percent for the larger sample. This supports both Claims 3.3 and 3.4.

Learner         Introduction   MVI   Accuracy (n = 2000)   Accuracy (n = 7500)
decision tree    0%            no    83.75%                90.69%
decision tree    0%            yes   83.65%                90.51%
decision tree    5%            no    77.50%                73.03%
decision tree    5%            yes   79.40%                78.15%
decision tree   10%            no    73.45%                78.76%
decision tree   10%            yes   79.40%                85.87%
decision tree   15%            no    71.15%                76.51%
decision tree   15%            yes   74.35%                83.11%
decision tree   20%            no    73.00%                67.08%
decision tree   20%            yes   80.05%                84.31%
decision tree   25%            no    70.40%                66.68%
decision tree   25%            yes   79.95%                84.92%
Naive Bayes      0%            no    85.85%                84.99%
Naive Bayes      0%            yes   86.10%                84.92%
Naive Bayes      5%            no    83.55%                83.96%
Naive Bayes      5%            yes   83.55%                84.25%
Naive Bayes     10%            no    83.25%                84.85%
Naive Bayes     10%            yes   83.10%                85.23%
Naive Bayes     15%            no    85.80%                84.25%
Naive Bayes     15%            yes   85.10%                84.27%
Naive Bayes     20%            no    85.60%                83.73%
Naive Bayes     20%            yes   85.50%                83.05%
Naive Bayes     25%            no    84.05%                84.11%
Naive Bayes     25%            yes   83.55%                84.01%

Table 1: Decision tree and Naive Bayes results for various amounts of introduced missing values (MVI = missing value imputation applied).

Figure 2: Results on the Census-Income data set. (a) Decision tree results; (b) Naive Bayes results. Each panel plots accuracy against the percentage of extra missing values introduced, for sample sizes 2000 and 7500, with and without missing value imputation.
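As a final illustration, the sketch below shows how the accuracy comparison behind Table 1 could be scripted, reusing the impute_with_trees helper sketched in Section 2. scikit-learn's DecisionTreeClassifier and BernoulliNB on one-hot encoded attributes are assumed stand-ins for RapidMiner's decision tree and Naive Bayes operators, so the exact accuracies will differ from Table 1; only the qualitative comparison is meant to carry over.

```python
# Hedged sketch: compare the normal method and the imputation method for both
# learners. Assumes pandas DataFrames `train` and `test` of nominal attributes
# plus a class column named "income_class" (an assumed name), and the
# impute_with_trees() helper sketched earlier.
import pandas as pd
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

CLASS_COL = "income_class"  # assumed column name

def encode(train_X: pd.DataFrame, test_X: pd.DataFrame):
    """One-hot encode nominal attributes; remaining nulls become an 'unknown' level."""
    both = pd.get_dummies(pd.concat([train_X, test_X]).fillna("unknown"))
    return both.iloc[:len(train_X)], both.iloc[len(train_X):]

def accuracy(learner, train: pd.DataFrame, test: pd.DataFrame) -> float:
    """Fit on the training set and return the accuracy on the test set."""
    X_train, X_test = encode(train.drop(columns=CLASS_COL), test.drop(columns=CLASS_COL))
    learner.fit(X_train, train[CLASS_COL])
    return (learner.predict(X_test) == test[CLASS_COL]).mean()

def impute_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Impute the nominal attributes (not the class label) with per-attribute trees."""
    out = impute_with_trees(df.drop(columns=CLASS_COL))
    out[CLASS_COL] = df[CLASS_COL]
    return out

def run_comparison(train: pd.DataFrame, test: pd.DataFrame) -> None:
    # Simplification: the paper learns the attribute models on the training set
    # and reuses them on the test set; here each set is imputed independently.
    imputed_train, imputed_test = impute_frame(train), impute_frame(test)
    for name, learner in [("decision tree", DecisionTreeClassifier()),
                          ("Naive Bayes", BernoulliNB())]:
        normal = accuracy(learner, train, test)
        with_mvi = accuracy(learner, imputed_train, imputed_test)
        print(f"{name}: normal {normal:.2%}, with imputation {with_mvi:.2%}")
```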

4 Related Work

Several authors have done research related to decision trees and missing values. Magnani and Zamboni [5] distinguish three types of missingness: missing completely at random, missing at random, and not missing at random. In our research the missing completely at random method was used. Furthermore, other authors have investigated the effect of replacing missing values. Buck [2] describes a method that replaces missing values of an attribute A with the mean value of attribute A. Several articles have also been written about the actual imputation of missing values: a method using clustering is given by Fujikawa and Ho [3]; Zhang et al. [11] describe a method using non-parametric regression; and a method that replaces missing values by the literal string "unknown" is presented by Quinlan [7]. Schoier [10] makes a distinction between hot-deck imputation, in which the missing value imputation is based only upon information in the data set itself, and cold-deck imputation, in which information from other data sets is also used to impute the missing values. A completely different way of handling missing values is presented by Gorodetsky et al. [4]: their work presents a method that lets classification learners work around missing values instead of imputing them.

5 Conclusions and Future Work

This paper supports the claim that imputing missing values prior to computing a decision tree results in a more accurate classification model. It shows that the accuracy improves significantly if the data set contains a substantial amount of missing values. As we expected, imputing missing values does not improve the accuracy of Naive Bayes, which is less sensitive to missing values. Our work only examines the effect of missing value imputation when used with a decision tree and a Naive Bayes learner. Further research with other classification learners, other missing value imputation learners, other data sets, and larger amounts of missing values is needed in order to increase our knowledge of the advantages and limitations of missing value imputation.

References

[1] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007.
[2] S.F. Buck. A method of estimation of missing values in multivariate data suitable for use with an electronic computer. Journal of the Royal Statistical Society, Series B (Methodological), 1960.
[3] Y. Fujikawa and T.B. Ho. Cluster-based algorithms for dealing with missing values. Lecture Notes in Computer Science.
[4] V. Gorodetsky, O. Karsaev, and V. Samoilov. Direct mining of rules from data with missing values. In T.Y. Lin, S. Ohsuga, C.J. Liau, X.T. Hu, and S. Tsumoto (eds.), Foundations of Data Mining and Knowledge Discovery, Studies in Computational Intelligence, Springer.
[5] M. Magnani and M.A. Zamboni. Techniques for dealing with missing data in knowledge discovery tasks. Retrieved from web.cs.unibo.it, 15-01-2007.
[6] J.R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[7] J.R. Quinlan. Unknown attribute values in induction. In Proceedings of the Sixth International Workshop on Machine Learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1989.
[8] J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[9] RapidMiner, http://rapidminer.com/.
[10] G. Schoier. On partial nonresponse situations: the hot deck imputation method. Retrieved May 14, 2004.

[11] C. Zhang, X. Zhu, J. Zhang, Y. Qin, and S. Zhang. GBKII: An imputation method for missing values. Lecture Notes in Computer Science, 4426:1080, 2007.
