An Analysis of Classification Algorithms in Offline Handwritten Digit Recognition


Logan A. Helms, Jonathon Daniele

Abstract: The construction and implementation of computerized systems capable of classifying visual input with speed and accuracy comparable to that of the human brain has remained an open problem in computer science for over 40 years. Here, two classification methods for offline handwritten digit recognition are analyzed: the naïve Bayes classifier and the feedforward neural network. The analysis presented in this paper suggests that a feedforward neural network combined with adaptive methods can achieve better accuracy than a naïve Bayes classifier when used as the classification algorithm in offline handwritten digit recognition.

Keywords: Backpropagation, Classification Algorithms, Feedforward Neural Networks, Handwritten Digit Recognition, Machine Learning, Naïve Bayes Classifier.

This paper was submitted for review on December 3, 2014. The authors are undergraduate students with the Department of Computer Science, University of North Carolina Wilmington, Wilmington, NC 28403.

I. INTRODUCTION

Much progress has been made in the last several years in machine learning techniques for pattern recognition [1]. However, computerized systems continue to lag behind humans in perceptual performance. The main message of this paper is that an artificial neural network (ANN) built with adaptive methods can achieve greater accuracy than a naïve Bayes classifier when used as the classification algorithm in an offline handwritten digit recognition system. This is made possible by relying on adaptive methods rather than hand-crafted feature extraction techniques [1].

Feature extraction is commonly defined as determining what information is most relevant to a given problem [2]. Feature extraction admits only the relevant inputs into the system, creating the best model for the given information [2]. While this might be the ideal situation, hand-crafting feature extraction algorithms for the variability of real-world problems is a unique and daunting task for each problem [1]. Many studies of pattern recognition are dedicated to feature extraction techniques for particular problems [1]. However, much of the success in handwritten digit recognition can be attributed to advances in machine learning techniques and large training sets [1]. The availability of large training sets has allowed designers to focus more on real-world problems and less on hand-crafted feature extraction algorithms [1].

II. OFFLINE HANDWRITTEN DIGIT RECOGNITION

There are two states in which a handwriting recognition classification technique may be tested: an online state or an offline state. In an online system, the classifier receives not only the image itself but also additional feedback such as stylus/pen pressure, multiple connected characters, and stroke direction. The classification is also expected to occur in real time, or near real time, and provide immediate results. An offline system is usually trained and tested on individual characters, examines a far more limited group of possible feature sets, and is not expected to provide such immediate feedback. The relevance of offline handwriting recognition is evident in check reading systems in banks worldwide. Check reading systems verify the amount and other pertinent handwritten information that previously required a human, saving banks time and money [1].

III. CLASSIFICATION ALGORITHMS
A. Naïve Bayes Classifier

A.1 Bayes' theorem and conditional probability

Bayes' theorem allows a probability model to be constructed from known outcomes:

(1)  p(B \mid A) = \frac{p(A \mid B)\, p(B)}{p(A)}

However, this formula only accounts for the probability of a single feature. A naïve approach allows the model to be constructed with a set of features for each class instead of a single feature:

(2)  p(A \mid \{f_1, \ldots, f_n\}) \propto p(f_1 \mid A)\, p(f_2 \mid A) \cdots p(f_n \mid A)\, p(A)

By using a set of known features from a set of known classes, we may estimate the probability of a feature occurring in a given class by a simple count of the feature's occurrences divided by the count of the class's occurrences within the training set:

(3)  p(F_k \mid A_i) = \frac{\sum_{j=1}^{n} [F_k \in x_j \wedge x_j \in A_i]}{\sum_{j=1}^{n} [x_j \in A_i]}
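To make the counting in (1)-(3) concrete, the following Python sketch estimates the class priors and per-pixel conditional probabilities from a training set of binarized images and classifies a new image by maximizing the (log) posterior. It is a minimal illustration under stated assumptions (binary pixel features, a Laplace smoothing term, and hypothetical function names), not the authors' implementation.

```python
import numpy as np

def train_nbc(images, labels, n_classes=10, threshold=128, alpha=1.0):
    """Estimate p(class) and p(feature = 1 | class) by counting, as in eq. (3)."""
    feats = (images.reshape(len(images), -1) >= threshold).astype(np.float64)
    priors = np.zeros(n_classes)
    likelihoods = np.zeros((n_classes, feats.shape[1]))
    for c in range(n_classes):
        members = feats[labels == c]
        priors[c] = len(members) / len(feats)
        # Laplace smoothing (alpha) keeps unseen features from zeroing the product in (2).
        likelihoods[c] = (members.sum(axis=0) + alpha) / (len(members) + 2 * alpha)
    return priors, likelihoods

def classify_nbc(image, priors, likelihoods, threshold=128):
    """Pick the class maximizing eq. (2), using log-probabilities for stability."""
    x = (image.reshape(-1) >= threshold).astype(np.float64)
    log_post = np.log(priors) + (
        x @ np.log(likelihoods).T + (1 - x) @ np.log(1 - likelihoods).T
    )
    return int(np.argmax(log_post))
```

Working in log space avoids the numerical underflow that the long product in (2) would otherwise cause over several hundred pixel features.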

A.2 The independence assumption and naïve Bayes classification

Assuming the independence of features within a class is the key distinction responsible for the unexpectedly accurate results of a properly applied naïve Bayes classifier (NBC). The NBC model makes the independence assumption in order to build a conditional probability set for each feature, given that the feature belongs to some class. While this assumption would seem to be violated in many real-world applications, the independence assumption has consistently produced accurate results in practice [3].

A.3 Some statistical requirements

It is important to note that, when a roughly even class distribution can reasonably be expected, as is the case for the digits 0-9, a decision should be made about how any uneven distribution within the training set will be corrected. An appropriate training set, such as the MNIST database used here, maintains a relatively even distribution between classes and avoids introducing erroneous probability counts. Because class imbalance has serious implications for the accuracy of the feature probabilities, and thus for the efficacy of the constructed model, it remains important to examine the training set in light of known statistical models and the body of previous research available. That said, the rising interest in machine learning techniques, the much lower cost of computing equipment compared to just a decade ago, and the wide availability of hardware capable of maintaining and exploiting large, curated data sets have together contributed to a marked rise in the quality of, and access to, accurate training data [1].

A.4 Additional considerations

While the assumption of feature independence provides at least the appearance of greater freedom in implementation, it introduces a new set of issues. For example, in high-variability systems, the inclusion of extraneous features or noise when building a model may create enough error to drop the accuracy of an NBC to near the level of random guessing [4]. Similarly, normalization above and beyond size and color features may also be required before applying either the training or the testing methodology. In the case of the MNIST data set, while all images are contained within the same 28 × 28 grid of pixels with grayscale values in [0, 255], the characters are not deslanted to align with a common axis [1]. By allowing every feature to be considered independent, and therefore valid for inclusion, the NBC makes no distinction between informative and uninformative features, placing the onus on the designer either to hand-pick feature sets or to construct a methodology for doing so prior to training. Since the purpose of a computational approach to classification is, overall, a reduction in complexity and effort relative to the task, this additional complexity raises the question of the comparative effort required to implement different classification techniques, i.e., an NBC versus an artificial neural network. While the NBC algorithm itself is simple, the complexity of data preparation proved far greater than that required for the ANN constructed as part of this research.

Fig. 1. A model of an artificial neuron.

Feature extraction or identification is almost certainly a necessary step prior to training the classifier, i.e., part of the preprocessing. Similarly, when examining the test set it is important to apply the same techniques used to preprocess the training data; a short sketch illustrating the class-balance check and a shared preprocessing step appears below.
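The two practical points above, verifying the class balance of the training set and applying identical preprocessing to training and test data, can be made concrete with a short sketch. The binarization threshold and the function names are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np

def class_distribution(labels, n_classes=10):
    """Fraction of training samples per digit class; a roughly even spread
    avoids skewing the prior and likelihood counts (Section III-A.3)."""
    counts = np.bincount(labels, minlength=n_classes)
    return counts / counts.sum()

def binarize(images, threshold=128):
    """Shared preprocessing: map 28x28 grayscale pixels in [0, 255] to
    discrete on/off features. The same call must be reused on the test set."""
    return (images.reshape(len(images), -1) >= threshold).astype(np.uint8)

# Hypothetical usage, assuming train_images / train_labels hold raw MNIST data:
# print(class_distribution(train_labels))   # expect values near 0.1 each
# train_feats = binarize(train_images)
# test_feats  = binarize(test_images)       # identical transformation
```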
The distinction between preparing a database and preparing the input for training or testing is often overlooked, or the two are lumped together under the same heading. The latter may even be included as part of the classification algorithm itself. However, the authors found a clear distinction between the preparation of the data set (done by LeCun), the preprocessing required to apply the classification techniques (required for implementation), and the application of the classification algorithms themselves. These distinctions and the reasoning behind them are examined further in Section V.

Computational complexity for an NBC is a function of the number of features being extracted and the number of classes. Consider a training set consisting of n distinct classes, each class having k features. To build the model for classification, it is necessary to make k passes n times; model construction is therefore linear in the number of feature-class pairs, i.e., O(nk).

B. Artificial Neural Networks

One of the most common classification methods for pattern recognition is the neural network. ANNs for pattern recognition, such as the feedforward neural network used in this case study, are nonlinear processors that map inputs nonlinearly to outputs [2]. ANNs achieve this through nonlinear processing within their neurons [2].

B.1 Artificial Neuron

The neuron is the basic element of an ANN. The artificial neuron is inspired by neurons in biological nervous systems, in particular the human brain [5]. Fig. 1 shows an illustration of the artificial neuron considered in this paper. Biological neurons receive electrical or chemical signals through synapses that either excite or inhibit the signal being sent to the neuron [5]. In an attempt to mimic this observation, ANNs use a positive weight to represent an excitatory synaptic connection and a negative weight to represent an inhibitory synaptic connection [5]. Inputs are multiplied by the synaptic weights and summed for each neuron, as modeled by (4).

(4)  \sum_{i=0}^{m} x_i w_i

The sum is then passed to an activation function, φ, which produces the neuron's output. The activation function is a nonlinear sigmoidal function that is both continuous and monotonic, with an upper and a lower bound [2]. A frequent choice of activation function is of the form

(5)  \varphi(x) = \frac{1}{1 + e^{-x}}

where x is the sum in (4). This function has an upper bound of 1 and a lower bound of 0, as depicted in Fig. 2.

Fig. 2. A graph of a sigmoidal function.

B.2 Feedforward Neural Network

Feedforward neural networks propagate from input neurons to output neurons in one direction, without cycles, as shown in Fig. 3. There can be any number of hidden layers with any number of neurons. A feedforward neural network used for classification has one output neuron for each class [2]. Input neurons accept a single input and have no activation function; an input neuron's input is therefore essentially its output. The input neurons' outputs are propagated to a hidden layer of neurons through the synaptic weights that connect the input layer to the hidden layer. Each neuron in the hidden layer sums its weighted inputs and then computes its activation function to produce the hidden neuron's output. Outputs from neurons in the hidden layer are propagated to the next layer of neurons as inputs. This process continues until the neurons in the output layer have produced an output, which is considered the network's response to the given input vector [5].

Fig. 3. A feedforward neural network.

Training: There are two types of training for neural networks, supervised and unsupervised. For the ANN in this paper, supervised training is used. Supervised training is the process of presenting the network with an input vector together with the target for that input vector. The target is used to calculate the error signal for each output neuron:

(6)  \delta_{pj} = (T_{pj} - O_{pj})\, O_{pj} (1 - O_{pj})

where T_{pj} is the target for output neuron j for pattern p, and O_{pj} is the actual output of output neuron j for pattern p [5]. Similarly, the error signal for each hidden neuron is calculated as

(7)  \delta_{pj} = O_{pj} (1 - O_{pj}) \sum_{k} \delta_{pk} W_{kj}

where \delta_{pk} is the error signal of the post-synaptic neuron k and W_{kj} is the synaptic weight from pre-synaptic neuron j to post-synaptic neuron k [5]. The error signal of each post-synaptic neuron is used to adjust the weights that connect the pre-synaptic neurons to the post-synaptic neurons. The synaptic weight adjustments are computed as

(8)  \Delta W_{ji}(t) = \eta\, \delta_{pj}\, O_{pi}

where η is the learning rate of the network, \delta_{pj} is the error signal of the jth post-synaptic neuron, and O_{pi} is the output of the ith pre-synaptic neuron [5]. The learning rate η is a network parameter that controls the speed at which the weights are adjusted [2]. Once the weight adjustments have been computed, the weights are updated as

(9)  W_{ji}(t + 1) = W_{ji}(t) + \Delta W_{ji}(t)

where W_{ji}(t + 1) is the updated weight connecting the ith pre-synaptic neuron to the jth post-synaptic neuron. It is worth mentioning that all weights are adjusted simultaneously [2].

The computational complexity of a forward propagation through a feedforward neural network depends on the number of layers and the number of neurons in each layer. Consider a network with an input vector of size m connected to a single hidden layer of n neurons and an output layer of p neurons. Each of the n neurons in the hidden layer sums the weighted inputs from each of the m inputs in the input vector.
Each of the p neurons in the output layer sums the weighted inputs from each of the n outputs of the hidden layer. Thus the computational complexity of a forward propagation through the described feedforward neural network is on the order of O(m + m·n + n·p). The computational complexity of backpropagating the error through the same network is on the order of O(n·p + m·n + m).
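The forward pass in (4)-(5) and the update rules in (6)-(9) can be summarized in a short sketch. This is a minimal single-hidden-layer illustration with assumed array names and shapes (bias inputs omitted for brevity), not the code used for the experiments in Section IV.

```python
import numpy as np

def sigmoid(x):
    # Eq. (5): squashes the weighted sum of eq. (4) into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    """Forward pass for one pattern: input -> hidden -> output (eqs. 4-5)."""
    h = sigmoid(W1 @ x)       # hidden activations, shape (n,)
    o = sigmoid(W2 @ h)       # output activations, shape (p,)
    return h, o

def backprop_step(x, target, W1, W2, lr=0.25):
    """One supervised weight update for a single pattern, per eqs. (6)-(9)."""
    h, o = forward(x, W1, W2)
    delta_out = (target - o) * o * (1 - o)          # eq. (6)
    delta_hid = h * (1 - h) * (W2.T @ delta_out)    # eq. (7)
    W2 += lr * np.outer(delta_out, h)               # eqs. (8)-(9)
    W1 += lr * np.outer(delta_hid, x)
    return W1, W2

# Hypothetical usage for MNIST-sized inputs (784 pixels, 300 hidden, 10 outputs):
# rng = np.random.default_rng(0)
# W1 = rng.normal(scale=0.01, size=(300, 784))
# W2 = rng.normal(scale=0.01, size=(10, 300))
# W1, W2 = backprop_step(x, one_hot_target, W1, W2, lr=0.25)
```

Computing both outer products before applying them mirrors the note after (9) that all weights for a pattern are adjusted simultaneously.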

IV. RESULTS AND COMPARISON

A. Database: MNIST set

Fig. 4. Samples from the MNIST database.

The database used to train and test the classifiers in this analysis was constructed from two NIST databases of handwritten digits [1]. One of the NIST databases contained samples collected from Census Bureau employees, while the other contained samples collected from high school students [1]. Understandably, the samples from Census Bureau employees were cleaner than those from high school students. The designers of the MNIST database mixed the two NIST databases to build a reliable training and test set [1]. Each sample is centered in a 28 × 28-pixel grayscale image. The grayscale values range from 0 to 255 and are stored as bytes in the MNIST database; each sample therefore occupies 784 bytes. The training set contains 60,000 samples and the test set contains 10,000 samples. The MNIST database, along with the details of the designers' work, is available at http://www.research.att.com/yann/ocr/mnist.

B. Results

Naïve Bayes classifier: The authors' implementation of a naïve Bayes classification technique achieved a success rate of only 13.76% at this time. Since the distribution of classes in the data is essentially even, this is barely better than assigning labels at random. However, this does not discredit the efficacy of such classification in general. Testing with the WEKA suite of tools for statistical analysis and classification, a naïve Bayes classifier achieved 69.65% accuracy, and a multinomial naïve Bayes classifier (MNBC) achieved 83.65% accuracy. Summary tables for the WEKA NBC and MNBC may be found at the end of this paper in Fig. 5 and Fig. 6, respectively. The WEKA toolkit is available at http://www.cs.waikato.ac.nz/ml/weka/. The problem with using an NBC here almost certainly arises from the number of non-discrete features in each digit [6]. The time to build the system model was approximately 3 seconds for the implemented NBC and approximately 4.7 seconds for the NBC in the WEKA toolkit, while the MNBC took approximately 0.5 seconds. As is evident from the rising accuracy of the tested methodologies, an NBC can be quite successful; however, the required preprocessing remains the same as, or greater than, that of the simple NBC implemented for this paper.

Feedforward neural network: The other classification algorithm tested was a feedforward neural network trained with backpropagation, as described in Section III. Two versions were compared: a network with two layers of weights (one hidden layer) and a network with three layers of weights (two hidden layers). Both networks were trained for 20 epochs using the entire MNIST training set [1]. The learning rate was 0.25 for all 20 epochs. Accuracy on the MNIST test set was 84.01% for the two-layer network with 300 hidden neurons, and 97.60% for the three-layer network with 300 neurons in the first hidden layer and 100 neurons in the second hidden layer. Training time for both networks was also measured. Both neural networks could be trained in less than an hour per epoch; in total, less than 18 hours of CPU time was needed to train either network for 20 epochs. It is worth mentioning that training time depends on the designer's implementation and is irrelevant to end users [1].
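For reference, the MNIST files described in Section IV-A store the images and labels in a simple binary (IDX) format: a big-endian integer header followed by one unsigned byte per pixel or label. The sketch below reads them into arrays; the file names are those of the standard distribution and are assumptions here, not part of the authors' experimental code.

```python
import gzip
import numpy as np

def load_idx_images(path):
    """Read an MNIST image file: 16-byte big-endian header
    (magic, count, rows, cols) followed by one unsigned byte per pixel."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        magic, count, rows, cols = np.frombuffer(f.read(16), dtype=">i4")
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(count, rows, cols)   # e.g. (60000, 28, 28)

def load_idx_labels(path):
    """Read an MNIST label file: 8-byte header (magic, count), then one
    byte per label in [0, 9]."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        magic, count = np.frombuffer(f.read(8), dtype=">i4")
        labels = np.frombuffer(f.read(), dtype=np.uint8)
    return labels

# Hypothetical usage with the file names of the standard MNIST distribution:
# train_images = load_idx_images("train-images-idx3-ubyte.gz")
# train_labels = load_idx_labels("train-labels-idx1-ubyte.gz")
```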
V. CONCLUSIONS

In considering implementations for this research paper, the decision was made to keep as close as possible to a naïve Bayes approach and to classify the MNIST data set as-is. To perform an objective analysis, it is important to start both classification techniques from the same ground, so making observations based on the state in which the MNIST data set originally resides was a focus. While the ANN may have required more effort strictly to set up the classification algorithm, it did not require the extensive preprocessing and feature extraction needed to prepare the data for the NBC. The separation of the processes required to prepare data for classification marks a clear distinction in utility between the classification methodologies, above and beyond the accuracy and training time that are commonly discussed. Using the MNIST data set as the first stage of data processing sets a common ground for the further effort required to design and implement an NBC and an ANN. Generalizing available data to a common form encourages methodologies that can function with minimal customization to a specific problem. With that in mind, measuring the complexity of implementing an NBC versus an ANN should clearly include any preprocessing necessary for even a minimally successful application of the technique, let alone comparable levels of accuracy. The neural network performed strongly on the data with no further preprocessing, while the NBC did not. By a similar line of reasoning, while both classifiers are fully functional once trained, the NBC's test data requires the same preprocessing as its training data. This means that, while the processing required by the neural network grows

linearly with any test data, the processing required by the naïve Bayes classifier also grows by whatever factor the preprocessing introduces, and that factor is often substantial.

The final area of consideration, the time complexity of the classification algorithms, does show the largest advantage of a naïve Bayes classifier. The complexity of the implemented ANN, given earlier in this paper, is simply the complexity of propagating through the network. This is much higher than that of the NBC, whose complexity is the number of features multiplied by the number of classes. However, in testing both classifiers on the 10,000 test samples from the MNIST data set, actual runtimes were minimal in both cases.

The strength of neural networks will continue to grow as the size and availability of training databases continue to grow [1]. Although the same may be said of a naïve Bayes classifier, as the training data increases the neural network remains in a better position to predict results with higher accuracy and no further preprocessing, while the naïve Bayes classifier's requirements for preprocessing, such as feature extraction and noise suppression, grow linearly along with the training data. The neural network's ability to function on data without the overhead of preprocessing also applies to the test data under examination. Taken together, these advantages lead to the conclusion that a feedforward neural network retains the greater advantage in this comparison.

REFERENCES

[1] Y. LeCun, "Gradient-Based Learning Applied to Document Recognition," Proc. IEEE, vol. 86, Nov. 1998, pp. 2278-2324.
[2] S. Samarasinghe, "Neural Networks for Nonlinear Pattern Recognition," in Neural Networks for Applied Sciences and Engineering, 1st ed. Boca Raton, FL: Auerbach, 2007, pp. 69-110.
[3] H. Zhang, "Exploring Conditions for the Optimality of Naïve Bayes," International Journal of Pattern Recognition and Artificial Intelligence, vol. 19, Mar. 2005, pp. 183-198.
[4] L. Jiang, "Learning Instance Weighted Naïve Bayes from Labeled and Unlabeled Data," Journal of Intelligent Information Systems, vol. 38, Feb. 2012, pp. 257-268.
[5] G. A. Tagliarini, "Optimization Using Neural Networks," IEEE Transactions on Computers, vol. 40, Jul. 1991, pp. 1347-1358.
[6] N. Rooney, D. Patterson, and M. Galushka, "A Comprehensive Review of Recursive Naïve Bayes Classifiers," Intelligent Data Analysis, vol. 8, Jan. 2004, pp. 615-628.

Fig. 5. Summary for WEKA NBC.

Fig. 6. Summary for WEKA MNBC.