Analysis of Clustering and Classification Methods for Actionable Knowledge


Available online at www.sciencedirect.com

ScienceDirect

Materials Today: Proceedings XX (2016) XXX XXX

www.materialstoday.com/proceedings

PMME 2016

Analysis of Clustering and Classification Methods for Actionable Knowledge

Arumugam P a, Christy V b*

a Associate Professor, Department of Statistics, Manonmanium Sundarnar University, Tirunelveli-627012, Tamilnadu, India
b Research Scholar, Department of Statistics, Manonmanium Sundarnar University, Tirunelveli-627012, Tamilnadu, India

Abstract

Data mining has become a vital aspect of data analysis. A study of data mining that combines clustering with a neural-network-based approach yields usage-trend analysis, and its quality depends strongly on how well the requests are clustered. Clustering performed before classification is termed a cluster classifier. Numerous classification techniques exist for data modeling, and knowledge-based approaches have recently become a key force in data classification. Here we perform a four-way comparison of Logistic Regression (LR), Classification and Regression Trees (CART), Random Forest (RF) and Neural Network (NN) models using continuous and categorical dependent variables for classification. A customer relationship management (CRM) data set is used to run these models, and several measures of classification accuracy are used to compare their performance. The experimental results on the test data are used to assess predictive accuracy, and actionable knowledge is derived from the most efficient method.

2214-7853 2016 Elsevier Ltd. All rights reserved. Selection and Peer-review under responsibility of International Conference on Processing of Materials, Minerals and Energy (July 29th-30th) 2016, Ongole, Andhra Pradesh, India.

* Corresponding author. Tel.: +0-91-9900058913. E-mail address: christy.eben@gmail.com

Keywords: Data Mining; Random Forest; Actionable Knowledge; Clustering; Classification.

1. Introduction

Statistical methods such as regression analysis, multivariate analysis and pattern-recognition models have been applied to a wide range of decisions in many disciplines, and machine learning algorithms have also been used to derive business insights. In this analysis a CRM data set is used to evaluate the methodology. Clustering before classification works well in defining the data classification [3]. A survey of clustering algorithms shows that the self-organizing map is a focus of current research on real-world data. The objective of the study is to define a hybrid clustering method based on principal component analysis and neural gas combined with the self-organizing map [4,7,9], and to use the resulting clusters in classification algorithms such as logistic regression, classification and regression trees, neural networks and random forests [5,6,8]. In this paper the predictive accuracy of the classification methods is compared on the CRM data set. The main objective of this work is to provide actionable insight that helps the business improve its productivity.

2. Related Works

Evans and Pfahringer introduced the concept of clustering for classification [3]; their paper shows that clustering prior to classification is beneficial when a sophisticated classifier is used. Ngai et al. [5] compared data mining techniques such as logistic regression, linear discriminant analysis and decision trees, and found decision trees to be an appropriate model for customer relationship management. Senthamarai Kannan and Manoj [7] addressed the detection of outliers in multivariate data using techniques such as the Mahalanobis distance and Cook's distance. Parneet Kaur et al. [6] compared classification methods including the multilayer perceptron neural network, several tree methods and naïve Bayes. Yang et al. [10] extracted actionable knowledge from decision trees.
Decision trees identify the features that are most discerning for identifying classes. Two well-known ensemble methods are boosting and bagging. Breiman and Cutler's randomForest package for R (2013) adds an additional layer of randomness to bagging [2]; combining learning models in this way increases classification accuracy.

3. Proposed Methodology

The effectiveness of the proposed approach has been analyzed using the CRM data set. Almost 126 variables were extracted, and data-reduction methods such as factor analysis and principal component analysis were used to reduce them to about 15 variables for the analysis [5].

3.1. Proposed Algorithm for Clustering

1. Initialise all weight vectors randomly.
2. Choose a random data point from the training data and present it to the self-organizing map.

Fig. 1. Flowchart of proposed work.

3. Find the Best Matching Unit (BMU) in the map, i.e. the most similar node based on the Mahalanobis distance Δ.
4. Determine the nodes within the neighbourhood of the BMU; the size of the neighbourhood decreases with each iteration.
5. Adjust the weights of the nodes in the BMU neighbourhood towards the chosen data point; the learning rate decreases with each iteration based on neural gas, and the magnitude of the adjustment is proportional to the proximity of the node to the BMU.
6. Repeat steps 2 to 5 for N iterations or until convergence.

Here the self-organizing algorithm is used with the Mahalanobis distance measure, as follows. The standardized Mahalanobis distance depends on estimates of the mean, standard deviation, and correlation of the data. The classical approach is to compute the Mahalanobis distance MD_i for each observation x_i as

MD_i = sqrt( (x_i − µ)ᵀ Σ⁻¹ (x_i − µ) ),  (1)

where µ and Σ are the mean vector and covariance matrix respectively.

3.2. Comparison of Classification Methods

Regression analysis is used to fit data where the relation between independent and dependent variables is nonlinear and the specific form of the nonlinear relationship is unknown. Logistic regression has a binary outcome Y, and the conditional probability Pr(Y = 1 | X = x) is a function of x; any unknown parameters in the function are estimated by maximum likelihood. The most obvious idea is to let p(x) be a linear function of x, so that every increment of a component of x adds or subtracts so much to the probability. The conceptual problem is that p must lie between 0 and 1, while linear functions are unbounded. The easiest modification of log p that has an unbounded range is the logistic transformation log(p/(1 − p)). Formally, the logistic regression model is

log( p(x) / (1 − p(x)) ) = β₀ + x · β.  (2)

Decision tree methods are widely accepted, although they also have some weak points in data mining, such as data fragmentation. The CART procedure derives the conditional distribution of the outcome y given the independent variables x. The partitioning procedure searches over all values of the independent variables to find the split that best partitions the data into child nodes; each node split is formed based on conditional probability. The best partition is the one that minimizes the weighted variance, and the distribution f(y | θᵢ) of y given x represents the case where x falls in the region corresponding to the i-th terminal node.

Neural networks work with both categorical and continuous variables and have many advantages over the classical models used to analyze data; in particular, they handle nonlinearity in the data well. The NN method imitates the structure of a biological neural network. Processing elements (PEs) are the neurons of a neural network: each neuron receives one or more inputs, processes them, and generates one or more outputs. The main components of information processing in a neural network are the inputs, the weights, the combination function (a weighted average of all input data entering a processing element), the transformation function, and the outputs.

In a random forest, a large number of decision trees are generated randomly for the same data set and used simultaneously for prediction. A random forest is a classifier consisting of a collection of tree-structured classifiers {h(x, Θ_k), k = 1, ...}, where the {Θ_k} are iid (independent, identically distributed) random vectors and each tree casts a unit vote for the most likely class at input x.
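The paper runs its four-way comparison of LR, CART, RF and NN in R. Purely as an illustration, the same comparison can be sketched in Python with scikit-learn; the synthetic data and parameter choices below are assumptions, not the paper's CRM set:

```python
# Illustrative four-way comparison of LR, CART, RF and NN on synthetic
# data (the paper's own analysis used R on a CRM data set).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}
# Fit each model and score it on the held-out test split.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

On the paper's clustered CRM data, the random forest came out best; on other data the ranking may of course differ.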
Given an ensemble of classifiers h₁(x), h₂(x), ..., h_K(x), with the training set drawn at random from the distribution of the random vector (Y, X), the margin function is defined as

mg(X, Y) = av_k I(h_k(X) = Y) − max_{j ≠ Y} av_k I(h_k(X) = j),  (3)

where I(·) is the indicator function. The margin measures the extent to which the average number of votes at (X, Y) for the right class exceeds the average vote for any other class. In random forests, h_k(X) = h(X, Θ_k), and with a large number of trees the behaviour of the ensemble follows the Strong Law of Large Numbers and depends on the tree structures used.

In this section the classification analysis follows the finding that clustering prior to classification is beneficial when a sophisticated classifier is used, so classification after clustering is applied in our analysis. Random forest gave the best classification accuracy on the clusters derived from the proposed hybrid algorithm, compared with LR, CART and NN.

3.3. Proposed Algorithm for Actionable Knowledge

The random forest algorithm builds multiple decision trees using a concept called bagging. Bagging is the idea of collecting a random sample of observations into a bag; multiple bags are made up of observations randomly selected from the dataset.
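As a minimal sketch of the bagging idea just described, each bag is a bootstrap sample drawn with replacement from the data set (the data and function name here are hypothetical):

```python
# Minimal bagging sketch: draw bootstrap "bags" with replacement
# (illustrative only; the paper's analysis used R's randomForest).
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(20)          # stand-in for 20 observations

def make_bags(data, n_bags, bag_size):
    """Each bag is a random sample of observations drawn with replacement."""
    return [rng.choice(data, size=bag_size, replace=True) for _ in range(n_bags)]

bags = make_bags(data, n_bags=5, bag_size=len(data))
# Sampling with replacement means a bag can repeat observations and
# typically omits some of them (the "out-of-bag" points).
oob = [set(data.tolist()) - set(bag.tolist()) for bag in bags]
print(len(bags), [len(o) for o in oob])
```

In a random forest, one tree is grown per bag, and the out-of-bag points can be used to estimate that tree's error.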

Fig. 2. Random forest variable importance.

Step 1. Create random subsets with random values.
Step 2. Build 100 decision trees from the random subsets.
Step 3. Compute the error of each of the 100 decision tree rules and find the minimum-error rule.
Step 4. Compute the loss probability of each individual variable and concentrate only on high probability values (> 0.8).
Step 5. Sort the corresponding rule variables by loss probability.
Step 6. Extract the top 3 variables with the highest loss probability.

Although a large number of random forest packages exist for R, the randomForest package is used in this analysis. Initially 100 trees are grown using the randomForest package; the proposed methodology is then coded in R to obtain the deal-loss indicators.

4. Results

The CRM data set, with 87 variables, is taken as input to the proposed clustering method. The grasping error is a weighted sum of different translational and rotational deviations from the ideal grasping posture, and lower values are better; values around 1.0 indicate very good performance. Table 1 reports the comparison between SOM and the proposed algorithm. The proposed algorithm achieves the overall lowest error, 0.96, and is better than the classic SOM algorithm.

Table 1. Proposed clustering method, error results (m is the number of principal components)

Methods    m=3    m=6    m=9
SOM        1.2    1.3    1.5
Proposed   0.96   1.0    1.6

With three principal components the proposed methodology reaches the 0.96 error rate, compared with the SOM clusters. After applying the proposed methodology to the CRM data, three clusters were formed; based on their profiles they are named the top, middle and low clusters.
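The paper extracts its deal-loss indicators in R from the forest's rules. Purely as an illustration of the "grow 100 trees, rank variables, keep the top 3" idea in the steps of Section 3.3, a sketch with scikit-learn's impurity-based importances (synthetic data, assumed parameters) might look like:

```python
# Illustrative sketch of Steps 1-6: grow 100 trees, rank variables,
# keep the top 3 (synthetic data; the paper used R's randomForest and
# a loss-probability ranking, for which importances are a stand-in).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank the variables by importance and keep the three strongest.
order = np.argsort(forest.feature_importances_)[::-1]
top3 = order[:3].tolist()
print("top 3 variable indices:", top3)
```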

Table 2. Classification methods, Receiver Operating Characteristic (ROC) results

Data Set         LR     CART   RF     NN
Top cluster      72%    73%    83%    81%
Medium cluster   78%    79%    82%    79%
Low cluster      68%    61%    81%    76%
CRM              75%    78%    83%    80%

Fig. 3. CRM data set ROC curve for CART, RF and NN.

The three clusters of the CRM data set, as well as the entire CRM data set, are used to run the classification methods. A comparison of all classifiers, produced in R, is shown in Fig. 3 using ROC curves. A ROC curve is a plot of the true positive rate against the false positive rate for the different possible cut points of the test data for the model. The area under the curve (AUC) is a measure of model accuracy and is related to the Gini coefficient by Gini = 2·AUC − 1. The performance comparison of the methods in terms of accuracy is shown in Table 2.

Table 3. Snapshot of actionable knowledge, deal-loss indicators

ID - Cluster PB   1. Loss Indicator   2. Loss Indicator   3. Loss Indicator
S90345 C1.02      Business Line       Business Unit       Price
S67846 C2.03      Account WR          Product WR          Business Unit
S65793 C3.01      Parent Loss         Service             Business Line

The random forest method is used to give actionable knowledge to the end users. The proposed methodology extracts the loss-indicator variables from the selected random forest rules. The deal-loss indicators obtained with the R program are shown in Table 3; for each deal it gives a detailed representation of the deal-loss indicators.
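The AUC reported in Table 2 and its relation to the Gini coefficient can be checked numerically on a tiny made-up score set (the labels and scores below are hypothetical, not the paper's CRM results):

```python
# Tiny check of ROC AUC and the Gini relation Gini = 2*AUC - 1
# (scores and labels here are made up, not the paper's CRM results).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
auc = roc_auc_score(y_true, scores)   # fraction of (pos, neg) pairs ranked correctly
gini = 2 * auc - 1
print(round(auc, 3), round(gini, 3))
```

Here 8 of the 9 positive-negative score pairs are ranked correctly, so the AUC is 8/9.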

5. Conclusions and Future Enhancement

In this paper classification methods are used to analyze the CRM data set. Prior to classification, the proposed hybrid clustering method is used to cluster the data, and the derived clusters are used in the classification analysis. The random forest classification method performs effectively, with 83% accuracy, and the proposed deal-loss indicator algorithm is therefore based on the random forest algorithm. The framework consists of algorithms for extracting and selecting conditions/rules and for extracting frequent variable interactions/conditions from tree ensembles; the methods can be applied to both classification and regression problems. The proposed algorithm has been implemented using user-defined R functions. The low-error loss rules have been extracted and analyzed for deal indicators, a conclusion that can be valuable to the rule-mining area. The extracted deal-loss indicators leverage the random forest rules and give actionable insights to the end users. The proposed methodology is simple and understandable and can be interfaced with various techniques; it can be applied to any kind of business to derive actionable knowledge that improves the business process. The future is therefore promising for other areas such as sports and healthcare, subject to the availability of large datasets.

Acknowledgements

The contributions and guidance in this paper belong to the listed authors. The authors thank Manonmanium Sundarnar University for its continuous encouragement, guidance and support during this work.

References

[1] P. Arumugam, V. Christy, A hybrid method for data mining, International Journal of Research and Scientific Innovation (IJRSI) 3(7) (2016) 87-94.
[2] L. Breiman, A. Cutler, Random forests for classification and regression, R package randomForest version 4.6-7, 2013.
[3] R. Evans, B. Pfahringer, Clustering for classification, in: Proceedings of the 7th International Conference on Information Technology in Asia (CITA 11), IEEE, 12-13 July 2011, pp. 1-8.
[4] P. Hanafizadeh, M. Mirzazadeh, Visualizing market segmentation using self-organizing maps and fuzzy Delphi method: ADSL market of a telecommunication company, Expert Systems with Applications 38(1) (2011) 198-205.
[5] E.W.T. Ngai, Li Xiu, D.C.K. Chau, Application of data mining techniques in customer relationship management: a literature review and classification, Expert Systems with Applications 36 (2009) 2592-2602.
[6] Parneet Kaur, Manpreet Singh, Gurpreet Singh Josan, Classification and prediction based data mining algorithms to predict slow learners in education sector, 3rd International Conference on Recent Trends in Computing (ICRTC) 57 (2015) 500-508.
[7] K. Senthamarai Kannan, K. Manoj, Outlier detection in multivariate data, in: Proceedings of the International Conference on Mathematics and its Applications 2014, University College of Engineering, Villupuram, 2014, pp. 1150-1159.
[8] S. Vijayarani, S. Deepa, Protein sequence classification in data mining: a study, International Journal of Information Technology, Modeling and Computing (IJITMC) 2 (2014) 1-8.
[9] Xiufen Fang, Guisong Liu, Ting-zhu Huang, Principal components analysis neural gas algorithm for anomalies clustering, WSEAS Transactions on Systems 9 (2010).
[10] Qiang Yang, Jie Yin, Charles Ling, Rong Pan, Extracting actionable knowledge from decision trees, IEEE Transactions on Knowledge and Data Engineering 19(1) (2007) 43-56.