Evaluation and Comparison of Performance of different Classifiers

Bhavana Kumari 1, Vishal Shrivastava 2
ACE&IT, Jaipur

Abstract: Many businesses, such as insurance companies, credit card providers, banks and retailers, rely on direct marketing. Data mining can help such institutions set marketing targets: its techniques narrow the target audience and improve the likelihood of a response. The proposed work evaluates the performance of two data mining techniques, decision trees and discriminant analysis, on the task of predicting whether a client will subscribe to a term deposit. A publicly available UCI dataset is used to train and test both algorithms and to compare their performance. The results show that the decision tree performs better than the discriminant analysis algorithm.

Keywords: Decision Tree, Discriminant Analysis, Data Mining, ROC, Classification

I. INTRODUCTION
Data mining is a process that uses a variety of data analysis tools to discover patterns and relationships in data that may be used to make valid predictions [1, 2]. The techniques most commonly used in data mining are artificial neural networks, genetic algorithms, rule induction, nearest neighbour and memory-based reasoning, logistic regression, discriminant analysis and decision trees. Data mining (DM) has also been known historically as data fishing, data dredging or knowledge discovery in databases and, depending on the domain, as business intelligence, information discovery, information harvesting or data pattern processing [3]. A formal definition is:

Definition: Knowledge Discovery in Databases (KDD) is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data.

1.1 Machine Learning and Classifiers
This section introduces the field of machine learning, in particular classifiers and classifier performance, followed by a discussion of classifier comparison and the problems related to it. Machine learning comprises a number of paradigms and algorithms for classification and learning, each with its own objectives, goals, strengths and weaknesses. An important family of algorithms is those that learn a classifier from examples: they learn from data in order to classify instances of data into different categories (classes). Although many of these algorithms differ greatly in constitution, they share a common interface: they are typically configurable, and they produce a classifier from a set of training data.

A classifier is built by letting a learning algorithm generalize from a set of data, often referred to as training data. The training data consist of a number of instances, each described by a set of attributes; a particular instance is therefore described by a set of attribute values. Attributes can hold numerical values, Boolean values or values of other types. One of the attributes is usually designated the target attribute, which corresponds to the class of the instance. In other words, a classifier should be able to predict the value of the target attribute of an instance, given the values of some or all of its other attributes. This describes one type of classification.
Other types include concept classification, where the target attribute is a Boolean value (yes/no or true/false), and numerical prediction, where the value of the target attribute is computed from the values of the other attributes.
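To make this representation concrete, here is a small illustrative MATLAB sketch; the attribute names and values are invented for illustration and are not from the paper. It builds a toy training set as a table whose rows are instances, whose columns are attributes, and whose last column is the target attribute.

    % Toy training data: each row is an instance, each column an attribute.
    age     = [25; 40; 33; 51];                        % numerical attribute
    balance = [1200; 300; 2500; 80];                   % numerical attribute
    married = [true; false; true; false];              % Boolean attribute
    y       = categorical(["no"; "no"; "yes"; "no"]);  % target attribute (the class)

    data = table(age, balance, married, y);
    disp(data)  % each instance is described by its attribute values plus its class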

1.2 Classifier Comparison
One way to find a good solution for a classifier learning problem is to compare the performance of different classifiers on the same data. A simple comparison can be made by training a number of classifiers on the same data set and comparing their accuracy on the test data.

1.3 Evaluation Tactics
The main difficulty in predicting the expected performance of a classifier on a new problem is the limited amount of data available and the fact that the sample may not be representative enough. A common remedy is therefore to perform a single train-test split of the data, build the model on the training set and evaluate its performance on the test split. There are several methods for splitting a dataset; the one used in the proposed work is described below.

1.3.1 Hold-out Method
In the hold-out method, the data are divided into a training set and a testing set; typically 2/3 of the data are assigned to the training set and 1/3 to the testing set. Hold-out, or simple, validation relies on this single split of the data and is the simplest kind of validation. In the proposed work the split is created with the MATLAB function cvpartition: a model is fitted using the training set only, and is then asked to predict the output values of the data in the testing set, output values it has never seen before. The advantage of this method is that it is usually preferable to the residual method and takes no longer to compute. However, its evaluation can have high variance: the result may depend heavily on which data points end up in the training set and which in the test set, and can therefore differ significantly depending on how the division is made. The proposed work uses this method for splitting the dataset; a minimal sketch follows Table 1.1 below.

Fig. 1.1: Hold-out method for partitioning the dataset

1.4 Confusion Matrix
Almost all performance metrics are expressed in terms of the elements of the confusion matrix generated by the model on a test sample. Table 1.1 presents the structure of a confusion matrix for a two-class problem with classes positive and negative. A column represents an actual class, while a row represents a predicted class; P and N are the total numbers of positive and negative instances in the test set, while p and n are the numbers of instances classified as positive and negative. True Positives (TP) are the correct predictions that an instance is positive, i.e. cases in which the classifier's positive prediction coincides with a positive value of the target attribute. True Negatives (TN) are the correct predictions that an instance is negative, occurring when both the classifier and the target attribute indicate the absence of the positive class. False Positives (FP) are the incorrect predictions that an instance is positive, and False Negatives (FN) are the incorrect predictions that an instance is negative. Table 1.1 shows the confusion matrix for a two-class classifier.

Table 1.1: The confusion matrix returned by a classifier

                            Actual positive (P)     Actual negative (N)
    Predicted positive (p)  True positives (TP)     False positives (FP)
    Predicted negative (n)  False negatives (FN)    True negatives (TN)
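To make the hold-out method of Section 1.3.1 concrete, the following minimal MATLAB sketch partitions a dataset with cvpartition into 60% training and 40% testing, the split used later in this paper. The table data and its label column y are carried over from the sketch above; this is an illustration, not the authors' exact code.

    % Hold-out split: 60% of instances for training, 40% for testing.
    rng(1);                              % fix the random seed for a reproducible split
    c = cvpartition(height(data), 'HoldOut', 0.4);

    trainData = data(training(c), :);    % used to build the learner
    testData  = data(test(c), :);        % never seen during training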
1.5 Data Set
The proposed work uses the bank direct marketing dataset extracted from the UCI repository. It has 16 attributes and 45,211 instances. For training and testing, 60% of the overall data are used for training and the remaining 40% for testing the accuracy of the selected classification algorithms. A detailed description of the dataset is summarized in Table 1.2.

The classifiers have to predict whether a client will subscribe to a term deposit (variable y). The bank direct marketing dataset contains 45,211 observations capturing 16 attributes/features.

Table 1.2: Bank direct marketing dataset, output variable (desired target)
    y: has the client subscribed to a term deposit? (binary: "yes", "no")

II. METHODOLOGY
In the proposed work two classifiers were implemented, their individual performance was measured, and the two were compared to conclude which one gives the better results. The two classifiers, a decision tree and discriminant analysis, were implemented in MATLAB. The next question is which dataset to use for evaluating the classifiers; for this, the bank direct marketing dataset from the University of California at Irvine (UCI) Machine Learning Repository was used to evaluate the performance of the decision tree and discriminant analysis classification models. For the evaluation, the dataset was divided into two parts, a training set and a test set, using hold-out validation. The classifiers learn from the training set and make predictions on the test set; the predictions are summarized in a confusion matrix, from which three performance measures are calculated: accuracy, sensitivity and specificity. The learning process in the proposed work comprises:
1. a data preprocessor, and
2. a learning algorithm.

2.1 Overview of the learning scheme in the proposed work
Fig. 2.1 shows the details. In the first stage, learning scheme evaluation, the performances of the different classifiers are evaluated on the bank direct marketing dataset, either to determine whether a certain classifier performs well enough for prediction purposes or to select the best from a set of competing schemes. As Fig. 2.1 shows, the bank direct marketing dataset is divided into two parts: a training set for building learners with the given learning schemes, and a test set for evaluating the performance of the classifiers. It is very important that the test data are not used in any way to build the learners. A minimal sketch of this training stage follows Fig. 2.1.

Fig. 2.1: Learning scheme
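Continuing the sketch above, the two learners of the proposed work can be built on the training split only, using fitctree and fitcdiscr from MATLAB's Statistics and Machine Learning Toolbox. This is a hedged illustration: fitcdiscr requires numeric predictors, so the categorical bank attributes are assumed to have been encoded numerically beforehand.

    % Build both learners on the training split only (see Fig. 2.1).
    % 'y' names the response column; the remaining columns are the inputs.
    treeModel = fitctree(trainData, 'y');   % decision tree classifier
    discModel = fitcdiscr(trainData, 'y');  % discriminant analysis classifier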

At the prediction stage, a learning scheme is selected according to the performance report of the first stage, and its predictions are produced in the form of a confusion matrix. The central problem of the learning scheme is how to divide the historical data into a training set and a test set. As stated above, the test set must be independent of the construction of the learner; this is a prerequisite for evaluating the performance of a learner on new data. For this purpose the hold-out method is used to estimate how accurately a predictive model will perform in practice: the dataset is partitioned into complementary subsets, the analysis is performed on one subset, and it is validated on the other. The details of the hold-out method are given in the Introduction.

2.2 Prediction
The trained classifier is used to make predictions on the test set. The predicted values are compared with the actual values to compute the confusion matrix, which is used to visualize the performance of a machine learning technique. The proposed work analyses the performance of different classification techniques to select the one with the most accurate results on the bank direct marketing dataset, choosing two very commonly used machine learning techniques: decision tree classification and discriminant analysis. Brief descriptions of the two classification techniques follow, and a minimal sketch of the prediction step is given at the end of this subsection.

2.2.1 Decision Trees
Decision trees are considered one of the most popular approaches for representing classifiers, and researchers from various disciplines such as statistics, machine learning, pattern recognition and data mining have dealt with the issue of growing a decision tree from available data. Decision trees classify instances by sorting them based on feature values: each node in a decision tree represents a feature of the instance to be classified, and each branch represents a value that the node can assume. Instances are classified starting at the root node and are sorted according to their feature values. Decision tree rules provide model transparency, so that a user can understand the basis of the model's predictions and therefore be comfortable acting on them and explaining them to others.

2.2.2 Discriminant Analysis
Discriminant analysis is a statistical method for analysing a dataset in which one or more independent variables determine an outcome that is measured with a dichotomous variable (one with only two possible outcomes). It is a generalized linear type of model that uses statistical analysis to predict an event from known factors. For example, a discriminant analysis can predict whether a customer will buy a product based on age, gender, geography and other demographic data.

2.2.3 Confusion Matrix
A confusion matrix contains information about the actual and the predicted classifications made by a classification system.
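The prediction step of Section 2.2 can be sketched as follows, reusing the models and test split from the earlier fragments; predict and confusionmat are standard Statistics and Machine Learning Toolbox functions.

    % Predict on the held-out test set and summarize each model's output
    % as a confusion matrix (rows: actual class, columns: predicted class).
    ytrue     = testData.y;
    ypredTree = predict(treeModel, testData);
    ypredDisc = predict(discModel, testData);

    cmTree = confusionmat(ytrue, ypredTree);
    cmDisc = confusionmat(ytrue, ypredDisc);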

2.2.4 Three Measures and ROC for Performance
The accuracy of classification is defined as the ratio of correctly classified cases to the total number of cases N, i.e. the sum of TP and TN divided by N: Accuracy = (TP + TN) / (TP + TN + FP + FN). Sensitivity measures the correctness of the model on the positive class and is defined as the percentage of positive instances predicted correctly: Sensitivity = TP / (TP + FN). Specificity likewise measures the correctness of the model on the negative class and is defined as the percentage of negative instances predicted correctly: Specificity = TN / (TN + FP).

ROC curves: ROC stands for receiver operating characteristic, a term used in signal detection to characterize the trade-off between hit rate and false alarm rate over a noisy channel (Witten and Frank, 1999).

III. THE EXPERIMENTAL RESULTS
The performance of each classification model is evaluated using three statistical measures: classification accuracy, sensitivity and specificity. These measures are calculated from the confusion matrix, which records the actual and predicted classifications made by a classification system in terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), as defined in Section 1.4. The percentages of correct and incorrect classifications are obtained by comparing the actual and predicted values of the variables. Table 3.1 shows the confusion matrix for a two-class classifier.

Table 3.1: Confusion matrix

                       Actual Cm1              Actual Cm2
    Predicted Cm1      True positives (TP)     False positives (FP)
    Predicted Cm2      False negatives (FN)    True negatives (TN)
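The three measures of Section 2.2.4 follow directly from the 2x2 confusion matrix, and an ROC curve (cf. Figs. 3.1 and 3.2 below) can be drawn with perfcurve. The sketch below continues the earlier fragments and treats "yes" as the positive class; that choice, and the categorical label type, are assumptions made for illustration.

    % Locate the positive class explicitly: confusionmat orders rows and
    % columns by class and returns that order as a second output.
    [cm, order] = confusionmat(ytrue, ypredTree);   % cm(i,j): actual i, predicted j
    p = find(order == 'yes');  n = find(order == 'no');
    TP = cm(p,p);  FN = cm(p,n);  FP = cm(n,p);  TN = cm(n,n);

    accuracy    = (TP + TN) / (TP + TN + FP + FN);
    sensitivity = TP / (TP + FN);
    specificity = TN / (TN + FP);

    % ROC curve: perfcurve needs per-instance scores, here the class
    % posterior estimates returned by predict.
    [~, score] = predict(treeModel, testData);
    yesCol = treeModel.ClassNames == 'yes';
    [fpr, tpr, ~, auc] = perfcurve(ytrue, score(:, yesCol), 'yes');
    plot(fpr, tpr), xlabel('False positive rate'), ylabel('True positive rate')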

3.1 Dataset
The bank direct marketing dataset node is connected directly to an Excel sheet file containing the source data. The dataset was explored as ordinal data types. The type node specifies the field metadata and properties that are important for modelling, including the usage type, options for handling missing values, and the role of each attribute for modelling purposes (input or output). As previously stated, the first 16 attributes are defined as input attributes and the output attribute (y) is defined as the target. In the experiment, the classifier takes the 16 input attributes and must predict, for each person, whether they subscribed to the term deposit (yes) or not (no). In the full dataset the actual values of y are 39,922 instances of "no" and 5,289 instances of "yes", as shown in Table 3.2.

Table 3.2: Dataset values for attribute y
    Value    Count     Percent
    no       39,922    88.30%
    yes      5,289     11.70%

The first step is data preprocessing, in which the data are divided into a training set and a test set; the classifier learns from the training set which attribute values correspond to clients who subscribed to the term deposit (yes) or did not (no). The training set contains 23,948 instances of "no" and 3,179 instances of "yes", as shown in Table 3.3.

Table 3.3: Training set values for attribute y
    Value    Count     Percent
    no       23,948    88.28%
    yes      3,179     11.72%

The classifier then uses what it has learned to predict the attribute y on the test set, in which the actual values of y are 15,974 for "no" and 2,110 for "yes", as shown in Table 3.4.

Table 3.4: Actual test set values for attribute y
    Value    Count     Percent
    no       15,974    88.33%
    yes      2,110     11.67%

We then compare the predictions of each classifier with the actual outputs of Table 3.4, and evaluate the accuracy, sensitivity and specificity of the classifier from its confusion matrix. The class distributions above can be reproduced with a short script, sketched below.
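Distributions like those in Tables 3.2-3.4 can be printed with the Statistics and Machine Learning Toolbox function tabulate, whose output uses the same Value/Count/Percent layout; a short sketch reusing the variables from the earlier fragments:

    % Class distribution of the target attribute y.
    tabulate(data.y)        % full dataset (cf. Table 3.2)
    tabulate(trainData.y)   % 60% training split (cf. Table 3.3)
    tabulate(testData.y)    % 40% test split (cf. Table 3.4)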

3.2 Classifier Predictions
3.2.1 Discriminant Classifier
Table 3.5 shows the confusion matrix generated by the discriminant classifier.

Table 3.5: Confusion matrix for the discriminant classifier
                   Predicted C1    Predicted C2
    Actual C1      14,417          1,557
    Actual C2      1,093           1,017

    Accuracy of discriminant classifier    = 85.30%
    Sensitivity of discriminant classifier = 90.25%
    Specificity of discriminant classifier = 92.95%

Fig. 3.1: ROC curve for the discriminant classifier

3.2.2 Decision Tree
Table 3.6 shows the confusion matrix generated by the decision tree.

Table 3.6: Confusion matrix for the decision tree
                   Predicted C1    Predicted C2
    Actual C1      15,030          944
    Actual C2      1,104           1,006

    Accuracy of decision tree classifier    = 88.67%
    Sensitivity of decision tree classifier = 94.09%
    Specificity of decision tree classifier = 93.15%

Fig. 3.2: ROC curve for the decision tree

3.3 Complete Results
Table 3.7 compares the performance of the two classifiers.

Table 3.7: Comparison of classifier performance
    Classifier       Accuracy (%)    Sensitivity (%)    Specificity (%)
    Discriminant     85.30           90.25              92.95
    Decision tree    88.67           94.09              93.15

IV. CONCLUSION
Bank direct marketing and business decisions are more important than ever for preserving the relationship with the best customers. For a business to succeed and survive, customer care and marketing strategies are needed, and data mining and predictive analytics can support such strategies. These applications are influential in almost every field containing complex data and large procedures, and have proven able to reduce the numbers of false positive and false negative decisions. The proposed work evaluated and compared the classification performance of two different data mining models, a decision tree and discriminant analysis, on the bank direct marketing dataset, classifying for bank deposit subscription. The purpose is to increase campaign effectiveness by identifying the main characteristics that affect success (the deposit being subscribed by the client). The classification performance of the two models was measured using three statistical measures: classification accuracy, sensitivity and specificity. The dataset was partitioned into training and test sets in the ratio of 60% to 40%, respectively. The experimental results show the effectiveness of the models; the decision tree achieved slightly better performance than discriminant analysis.

REFERENCES
1. C. X. Ling and C. Li, "Data Mining for Direct Marketing: Problems and Solutions," Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD '98), New York City, 27-31 August 1998, pp. 73-79.
2. G. Dimitoglou, J. A. Adams and C. M. Jim, "Comparison of the C4.5 and a Naïve Bayes Classifier for the Prediction of Lung Cancer Survivability," Journal of Computing, Vol. 4, No. 2, 2012, pp. 1-9.
3. U. M. Fayyad, G. Piatetsky-Shapiro and P. Smyth, "From Data Mining to Knowledge Discovery in Databases," AI Magazine, Vol. 17, No. 3, 1996, pp. 37-54.
4. T. Velmurugan and T. Santhanam, "Performance Evaluation of K-Means and Fuzzy C-Means Clustering Algorithms for Statistical Distribution of Input Data Points," European Journal of Scientific Research, Vol. 46, 2010.
5. Jayaprakash et al., "Performance Characteristics of Data Mining Applications Using MineBench," National Science Foundation (NSF).