Netpix: A Method of Feature Selection Leading to Accurate Sentiment-Based Classification Models


Netpix: A Method of Feature Selection Leading to Accurate Sentiment-Based Classification Models

James B. Steck

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Data Mining

Central Connecticut State University
New Britain, Connecticut

23 September 2005

Thesis Advisor: Dr. Daniel T. Larose, Department of Mathematical Sciences

ABSTRACT

The online DVD rental company Netflix advertises that their service is "the best way to rent movies." [1] Though Netflix claims they enable customers to find and discover movies they will enjoy, consistently renting movies that meet personal tastes and standards still remains an elusive task. An intelligent data mining model that recommends movies according to each viewer's personal preference (his or her "net picks," so to speak) would likely increase customer satisfaction. Researchers have proposed several techniques that accurately classify the underlying sentiment found in reviews. In several cases, these techniques rely on adjectives as likely indicators of subjectivity, sentiment, or opinion. This thesis describes a method that extracts useful features from a collection of movie reviews and uses them to build data mining models capable of accurately classifying a new review as either Good or Bad. The experiments described in this thesis use attribute selection methods in WEKA [2] to evaluate each feature's relevance with respect to the task of movie review classification. Subsets of the ranked features are then programmatically input to Bayesian-based classifiers in WEKA to generate classification results. These methods are shown to produce highly accurate classification models, with results often more competitive than those reported in the current literature.

[1] 1997-2005 Netflix, Inc. All rights reserved.
[2] WEKA (Waikato Environment for Knowledge Analysis) 3.4 Data Mining Software.

ACKNOWLEDGEMENTS

I am thankful for the wonderful support I received from my family while completing this thesis. They listened to me repeatedly describe my setbacks, possible approaches, and intermediate results to this particular problem more times than I can possibly count. I am especially grateful to my wife Daphne, whose editorial comments and love for good movies inspired the completion of this document. She also manages our Netflix queue.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
I. INTRODUCTION
II. BACKGROUND
   1. Data Extraction
   2. Method of Ranking and Classification
   3. Bayesian Algorithms
   4. WEKA Attribute Evaluation Methods
III. RELATED WORK
IV. EXPERIMENTS AND RESULTS
   1. Naive Bayes: Classifying Ranked Adjectives (Unigrams)
   2. Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams)
   3. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams)
   4. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams and Bigrams)
V. CONCLUSIONS
VI. REFERENCES

I. INTRODUCTION

There has been an increasing amount of interest devoted to exploring computational methods and models capable of identifying user sentiment found in natural language documents. Sentiment analysis seeks to determine the subjectivity, opinion, or polarity found in unstructured online information sources such as product reviews, movie reviews, or political commentary. Possible applications should efficiently summarize huge amounts of data to detect underlying sentiment; examples include a recommendation system or review evaluator that produces a recommendation such as "buy" or "do not buy," or "good" or "bad." An important first step in developing such an application might be to view it as a dichotomous problem; this way, possible solutions attempt to identify the underlying tone of the text as either positive or negative. A document-level classification model may provide an acceptable solution; however, the task of first determining a relevant set of features to serve as input to the classifier presents a significant challenge.

A review of the published research literature (Bai, Padman, Airoldi, 2004; Mullen, Collier, 2004; Pang, Lee, 2004; Pang, Lee, Vaithyanathan, 2002; Sista, Srinivasan, 2004; Turney, 2002) reveals that methods designed to classify document sentiment are often represented as bag-of-words solutions.

[Figure 1: Document Collection → Feature Space]

That is, individual features typically correspond to words, pairs of words, parts

of speech, or sentences extracted from the document collection, as illustrated in Figure 1. Additionally, the attribute values associated with each feature are usually represented as frequency counts, polarity measures, distance measures, or Boolean values. Regardless of the chosen feature selection and representation, however, each approach must confront a high-dimensional feature space, where individual features may be useful, redundant, correlated, or simply irrelevant to the task of classification. As a result, each solution generally differentiates itself by its ability to reduce the dimensionality of the feature space, ultimately leading to a desirable level of classification accuracy. This thesis demonstrates an effective approach to reducing the dimensionality of a high-dimensional feature space, from which optimal feature subsets are identified, thereby leading to accurate document-level sentiment classification models.

Each of the four experiments described in this document begins by specifying a feature type, which is programmatically extracted from the document collection to create a feature space, as shown in Figure 1. One or more of WEKA's attribute selection and ranking methods are then applied to the feature space to produce a ranked feature space. Programmatic methods select subsets of the ranked feature space, build input files for one or more WEKA classifiers, and report classification results. A conceptual overview of the process is shown in Figure 2.
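The pipeline just described (feature space → ranked feature space → feature subsets → classification) can be sketched in a few lines. The snippet below is an illustrative Python sketch with hypothetical stand-in functions; the thesis implements these steps with Perl scripts driving WEKA.

```python
# Sketch of the experimental pipeline, with hypothetical stand-ins:
# `score` plays the role of a WEKA attribute evaluator, and `classify`
# the role of a WEKA classifier run on a generated input file.
def rank_features(feature_space, score):
    # rank attributes in descending order of individual worth
    return sorted(feature_space, key=score, reverse=True)

def subset_results(ranked, classify, step=100, limit=4000):
    # classify with nested subsets of the ranked space: top 100, 200, ...
    results = {}
    for size in range(step, min(limit, len(ranked)) + 1, step):
        results[size] = classify(ranked[:size])
    return results
```

With step=100 and limit=4000 this evaluates forty increasingly larger subsets, matching the configuration of perl-weka.pl described in the BACKGROUND section.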

[Figure 2: Feature Space → Ranked Feature Space → Feature Space Subsets → WEKA]

This thesis presents the classification results from each experiment using graphical summaries. These results are quite compelling, as they outperform several state-of-the-art approaches recently published in machine learning conference proceedings. The proposed methods of feature space reduction and document classification presented in this thesis are evaluated using a set of one thousand positive and one thousand negative movie reviews available from the Cornell Natural Language Processing Group (Pang, Lee, 2004).

The remaining four sections of the document are organized as follows:

The BACKGROUND section explains in detail the programmatic methods developed in Perl [3] to extract features, derive classification input files, and select ranked feature subsets.

[3] Perl 5.8.4 from ActiveState Corporation.

The RELATED WORK section describes related research focused on developing data mining models capable of classifying sentiment at the document level. Note that

these papers all contain experimental results obtained by classifying the same (or a similar) set of movie reviews explored in this thesis.

The EXPERIMENTS AND RESULTS section contains four experiments, which together present conclusive results used to support the thesis statement. These experiments are presented in a progressive manner, so that each one leads to improved classification accuracy. The first experiment extracts adjectives from the movie reviews and provides evidence that selecting and ranking the features improves classification accuracy using WEKA's Naive Bayes [4] classifier, as compared to using the baseline (non-ranked) set of features. Similarly, the second experiment extracts adjectives and adverbs; when compared to the baseline set of features, selection and ranking techniques are again shown to improve classification accuracy using the Naive Bayes classifier. The third experiment uses the Bayesian Networks (Bayes Net) classifier, which further improves classification accuracy using a feature space comprised of adjectives and adverbs. Finally, the fourth experiment extracts both unigrams [5] and bigrams from the movie reviews, which maximizes classification accuracy using the Bayes Net classifier. That is, of all the experiments performed in this thesis, this approach leads to the best classification accuracy when classifying the movie review data set.

[4] The Naive Bayes classifier is based on Bayes' Rule and naively assumes independence of events, given the class.
[5] A unigram is a feature which represents a single word, such as "wonderful." Similarly, a bigram defines a two-word phrase (or pair), such as "wonderful movie."

The CONCLUSIONS section summarizes the experimental results to provide evidence that the thesis statement has been clearly supported.

II. BACKGROUND

This section provides background information to enhance understanding of the experiments performed in the EXPERIMENTS AND RESULTS section. It describes the programmatic methods, developed in Perl, by which a specified feature space can be derived from the set of movie review documents. It also discusses the automated methods used to generate the ARFF [6] files that are input to WEKA classifiers, and the process used to select feature subsets.

1. Data Extraction

Unless indicated otherwise, all classification experiments use the data set named polarity dataset v2.0, which was made available (see reference indicating the Web site) by Pang and Lee in June 2004 for use in sentiment analysis experiments (Pang, Lee, 2004). This data set represents a movie review corpus consisting of one thousand positive and one thousand negative movie reviews extracted from the Internet Movie Database (IMDb). [7] Pang and Lee developed tools to automatically pre-classify the reviews with "pos" (positive) or "neg" (negative) categorical tags. Each movie review is stored in an individual file where, according to Pang, the actual review text has been processed in an attempt to remove any information indicative of its rating. For example, the pos subdirectory contains the following five individual review files:

[6] WEKA requires input files in ARFF format.
[7] Internet Movie Database: http://www.imdb.com/.

cv004_11636.txt
cv000_29590.txt
cv001_1843.txt
cv002_15918.txt
cv003_11664.txt
...

A Perl script named perl-extract.pl was developed to read the set of reviews and automatically extract specified parts of speech from the text. To accomplish this task, perl-extract.pl uses the Lingua::EN::Tagger [8] Perl class, which assigns part-of-speech tags to English text based on statistics found in the Penn Treebank Project. [9] For example, the snippet of text "bizarre, not only," extracted from a movie review, is shown next with the part-of-speech tags applied by Tagger:

bizarre/jj ,/ppc not/rb only/rb

In this case, the unigrams are tagged as follows: "bizarre" is an adjective (/JJ), the comma is punctuation (/PPC), and both "not" and "only" are adverbs (/RB). As a result, all individual words (unigrams) in the reviews matching a specified part of speech, such as adjectives, can be programmatically identified and extracted into a keywords list. The script reports all extracted unigrams in descending order, according to each word's frequency of occurrence within the movie review corpus. In the following list, for example, perl-extract.pl reports that the adjective "good" occurs 2,321 times across 1,150 different movie reviews in the corpus.

[8] Developed by Aaron Coburn.
[9] Penn Treebank: http://www.cis.upenn.edu/~treebank/home.html.
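The extraction step can be illustrated with a short sketch. The snippet below is written in Python rather than the thesis's Perl, assumes the text has already been tagged (as by Lingua::EN::Tagger), and uses a hypothetical function name.

```python
import re

def extract_by_tag(tagged_text, tag):
    # tokens look like "bizarre/jj" or "not/rb"; collect words whose
    # part-of-speech tag matches the one requested (case-insensitive)
    pairs = re.findall(r"(\S+)/(\S+)", tagged_text)
    return [word for word, t in pairs if t.lower() == tag.lower()]

snippet = "bizarre/jj ,/ppc not/rb only/rb"
adjectives = extract_by_tag(snippet, "jj")   # ["bizarre"]
adverbs = extract_by_tag(snippet, "rb")      # ["not", "only"]
```

Run over the whole corpus with tag "jj", this yields the adjective keywords list; frequency counts follow from tallying the extracted words per review.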

...
more,2879.1393
good,2321.1150
most,2060.1110
...

In addition to building a list of unigrams and associated frequency counts that occur within the corpus, perl-extract.pl also generates files in ARFF format, as required by WEKA for classification. Let the set of k features extracted from the movie reviews be represented as F = {f_1, f_2, f_3, ..., f_k}. In addition, each movie review d_i is represented as the vector d_i = {B_1(d_i), B_2(d_i), B_3(d_i), ..., B_k(d_i)}, where the Boolean function B_j(d_i) evaluates to 1 or 0 according to whether the i-th document contains or does not contain the j-th feature, respectively. This way, perl-extract.pl builds an ARFF file where each movie review is represented as a comma-separated Boolean vector. For example, suppose a keyword file contains only the keywords "more," "good," and "most," as shown in the previous example. In this case, generating the corresponding ARFF file produces a set of instances, where each line represents a specific movie review, as follows:

...
0,1,1,pos
0,0,0,neg
1,0,1,neg
1,1,0,pos
...

Note that the first review, identified as positive, contains the words "good" and "most," but does not contain the word "more." Similarly, only two of the three words, "more" and "good," are found in the last review, which again happens to be a positive review.
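The Boolean representation above is straightforward to reproduce. A minimal Python sketch with hypothetical names (the thesis does this inside perl-extract.pl while writing the ARFF file):

```python
def boolean_vector(review_words, keywords):
    # B_j(d) = 1 if the j-th keyword appears anywhere in review d, else 0
    present = set(review_words)
    return [1 if kw in present else 0 for kw in keywords]

keywords = ["more", "good", "most"]
review = "a good movie , most of the time".split()
vector = boolean_vector(review, keywords)   # [0, 1, 1]
```

Joined with commas and followed by the class label, each vector becomes one @data line of the ARFF file (here, "0,1,1,pos").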

2. Method of Ranking and Classification

The previous section described the general process by which one or more parts of speech can be extracted from the full set of reviews to create a keywords list and corresponding ARFF file. Specifying the entire set of unigrams as input to a classifier, however, is not likely to produce optimal results, because many features found in text documents are likely irrelevant to the classification task (Chakrabarti, 2003; Witten, Frank, 2000). In addition, Larose states that the inclusion of variables which are highly correlated may lead to components being double counted (Larose, 2006). For example, according to perl-extract.pl and Tagger, 7,781 unique adjectives can be found in the set of two thousand movie reviews. Although Naive Bayes handles high-dimensional data sets extremely well, specifying this entire list as a set of inputs does not necessarily lead to optimal classification accuracy. More importantly, specifying large data sets as input to a WEKA classification or selection task may lead to a java.lang.OutOfMemoryError, [10] where the result is nothing more than a hung application.

[10] In this study, the machine running WEKA has 512 MB of memory.

The Perl script perl-weka.pl is used to address several of these weaknesses. First, it takes the entire attribute set, as generated by perl-extract.pl, and assigns a specific selection method in WEKA that measures the individual worth of each attribute with respect to the task of classifying a review as Good or Bad. The list of measurements associated with each feature is ranked in decreasing order, from which perl-weka.pl chooses a subset of the most promising attributes and then generates the corresponding ARFF file as input to a particular WEKA classifier. For example, perl-weka.pl can be configured to read the list of 7,781

adjectives, from which it chooses subsets in increments of one hundred, up to the top four thousand adjectives in the list. This way, forty increasingly larger subsets (of size one hundred to four thousand) of attributes are input to the specified WEKA classifier, such as Naive Bayes, where movie review classification is performed. With this efficient approach, larger portions of the solution space are programmatically searched, which ultimately leads to identifying the most promising classification models.

3. Bayesian Algorithms

This thesis uses the Naive Bayes and Bayes Net WEKA learning schemes to classify the movie review data. This section briefly describes these two algorithms. Mitchell (1997) notes that probabilistic learning algorithms such as Naive Bayes are very effective at classifying text documents. In addition, although Naive Bayes naively assumes independence between terms, it is widely used because of its simplicity and quick training times (Chakrabarti, 2003). Assuming that the predictor variables are independent given the class value, the Naive Bayes classification is expressed as follows (Larose, 2006):

$\theta_{NB} = \arg\max_{\theta} \; p(\theta) \prod_{i=1}^{m} p(X_i = x_i \mid \theta)$

where θ takes on the class value, either pos or neg. Furthermore, because any single zero probability value will render this product zero, the probability of each cell must be adjusted (Larose, 2006). WEKA avoids zero-valued cells by adding 0.5 (the default virtual value) to each cell. Consider the sample data set shown in Table 1.

Table 1: Sample ARFF Data

@attribute only {0,1}
@attribute okay {0,1}
@attribute political {0,1}
@attribute quick {0,1}

@attribute new {0,1}
@attribute straightforward {0,1}
@attribute CLASS {neg, pos}
@data
0,1,0,0,1,0,neg
0,0,0,1,0,0,neg
1,0,0,1,0,0,neg
1,0,0,0,0,0,neg
1,1,0,0,0,0,neg
0,0,0,0,1,0,neg
1,0,0,0,0,0,neg
1,0,0,0,0,0,neg
0,0,0,1,0,0,neg
1,1,0,0,0,0,neg
0,0,0,0,1,0,pos
0,0,0,0,0,0,pos
0,0,0,0,1,0,pos
0,0,1,0,1,0,pos
0,0,1,0,1,0,pos
0,0,1,0,1,1,pos
1,0,0,0,1,1,pos
0,0,0,0,1,0,pos
0,0,0,0,0,0,pos
0,0,0,0,0,0,pos

According to this data, the probability cell values for the "only" attribute are derived in Table 2. For example, given that the review is negative, the conditional probability of the word "only" occurring is p(only = 1 | CLASS = neg) = 6/10 = 0.60, based on frequency counts derived from Table 1. However, the adjusted cell probability produced internally by Naive Bayes to avoid potential zero-valued cells is p(only = 1 | CLASS = neg) = (6 + 1)/(10 + 2) = 0.58. Larose presents a detailed example describing how WEKA's Naive Bayes classifier derives its probabilities for a classification problem (Larose, 2006).
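The adjusted cell probability in the worked example can be checked directly. The sketch below follows the arithmetic shown in the text, where one virtual count is added to the cell and two to the class total (WEKA's own default virtual value is 0.5 per cell; the helper name is hypothetical):

```python
def adjusted_probability(cell_count, class_total, n_cells=2, virtual=1.0):
    # smooth the raw estimate cell_count / class_total by adding a
    # virtual count to every cell of the attribute, as in the example
    return (cell_count + virtual) / (class_total + n_cells * virtual)

raw = 6 / 10                              # p(only=1 | neg) = 0.60 unsmoothed
adjusted = adjusted_probability(6, 10)    # (6+1)/(10+2) ≈ 0.58
```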

Table 2: Probability Cell Values for "only"

        only = 1    only = 0
neg     6/10        4/10
pos     1/10        9/10

The term independence assumption made by Naive Bayes may in fact be too overreaching for some application tasks and therefore not applicable (Chakrabarti, 2003; Larose, 2006; Witten, Frank, 2000). The Bayes Net classifier provides an alternative way to apply Bayesian analysis without making the independence assumption. Although Bayesian Networks have shown only mild improvements over other schemes when classifying the Reuters [11] data set, Chakrabarti suspects that this learning scheme will show marked improvements when confronted with more complex data sets (Chakrabarti, 2003). In the context of the movie review classification task, the graph structure of the network consists of one node for the class variable C and a node for each of the k input variables, as shown in Figure 3. In this scenario, the predictor variables are parents of the class node C. The relationship between the variables in the network is defined as follows (Larose, 2006):

$p(X_1 = x_1, X_2 = x_2, \ldots, X_m = x_m) = \prod_{i=1}^{m} p(X_i = x_i \mid \text{parents}(X_i))$

[11] Reuters is a widely used data set for text categorization: http://www.daviddlewis.com/resources/testcollections/reuters21578/.
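The factorization can be made concrete with a tiny hypothetical network: a single predictor X1 (standing in for the attribute "only") that is a parent of the class node C, in the spirit of the structure described above. The probabilities below are estimated from the frequency counts in Table 1, and the joint distribution is the product of each node's probability given its parents:

```python
# p(X1): 7 of the 20 reviews contain "only"
p_x1 = {1: 7/20, 0: 13/20}
# p(C | X1): of the 7 reviews with only=1, 6 are neg and 1 is pos, etc.
p_c_given_x1 = {
    (1, "neg"): 6/7,  (1, "pos"): 1/7,
    (0, "neg"): 4/13, (0, "pos"): 9/13,
}

def joint(x1, c):
    # p(X1=x1, C=c) = p(X1=x1) * p(C=c | X1=x1)
    return p_x1[x1] * p_c_given_x1[(x1, c)]

total = sum(joint(x, c) for x in (0, 1) for c in ("neg", "pos"))  # sums to 1
```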

[Figure 3: Network structure with predictor nodes X_1, X_2, X_3, ..., X_k and class node C]

Like Naive Bayes, Bayes Net uses a simple estimation method and also adds 0.5 (by default) to probability cells to avoid the zero-valued cell problem. Larose presents an in-depth example using Bayes Net that derives the probabilities for a small data set classification task (Larose, 2006).

4. WEKA Attribute Evaluation Methods

Witten and Frank (2000) point out that irrelevant attributes can degrade the performance of a learning scheme; however, by reducing the dimensionality of the input space, the performance of the learning algorithm can be improved. This thesis performs classification experiments by applying two different WEKA evaluation measures to the individual attributes in the feature space. The set of movie review attributes is evaluated according to the two WEKA evaluators shown in Table 3. In particular, the relevance of each attribute is evaluated, with respect to the task of classification, before actual learning commences.

Table 3: WEKA Attribute Evaluators

InfoGainAttributeEval
SymmetricalUncertAttributeEval

The output produced by each attribute evaluator is combined with WEKA's Ranker method, where the attributes are sorted (ranked) in descending order according to the evaluator's measurement criterion. Next, the measurement criterion for each of the two WEKA evaluation methods is described using a small sample data set.

The WEKA method InfoGainAttributeEval measures the information gain of each attribute S, with respect to the class T, using the formula

$\mathrm{Gain}(S) = H(T) - H_S(T)$

The function H(T) measures entropy using $H(T) = -\sum_j p_j \log_2(p_j)$, and $H_S(T) = \sum_{i=1}^{k} P_i H_S(T_i)$ measures the entropy at attribute S, which splits the data into k partitions T_i (Larose, 2005). Using the sample ARFF data shown in Table 1, the InfoGainAttributeEval measurement is calculated for the first attribute in the file, "only." First, the entropy is calculated using the class values p_pos = 0.5 and p_neg = 0.5:

$H(T) = -\sum_j p_j \log_2(p_j) = -\frac{10}{20}\log_2\frac{10}{20} - \frac{10}{20}\log_2\frac{10}{20} = (0.5) + (0.5) = 1.0$

Next, the entropy for the "only" attribute is calculated:

$P_1 = \frac{7}{20}, \quad H_{only}(T_1) = -\frac{6}{7}\log_2\frac{6}{7} - \frac{1}{7}\log_2\frac{1}{7} = 0.1906 + 0.4011 = 0.5917$

$P_0 = \frac{13}{20}, \quad H_{only}(T_0) = -\frac{4}{13}\log_2\frac{4}{13} - \frac{9}{13}\log_2\frac{9}{13} = 0.5232 + 0.3673 = 0.8905$

$H_{only}(T) = \frac{7}{20}(0.5917) + \frac{13}{20}(0.8905) = 0.7859$

The information gain for the "only" attribute is therefore measured as 1.0 − 0.7859 = 0.2141 bits. Using this measure, attributes resulting in the highest information gain are considered better. In fact, when using this measure the WEKA output shows the information gain for all six attributes, as shown in the next table. Note that "only" has the highest information gain (0.214).

Ranked attributes:
0.214  1 only
0.191  5 new
0.169  2 okay
0.169  3 political
0.169  4 quick
0.108  6 straightforward

Similarly, SymmetricalUncertAttributeEval measures the symmetrical uncertainty of each attribute S, with respect to the class T, using the function

$\mathrm{SymmU}(S) = \frac{2(H(T) - H_S(T))}{H(T) + H(S)}$

Using values derived in the previous example, the symmetric uncertainty measurement for the "only" attribute is calculated from the data in Table 1:

$H(S) = -\frac{7}{20}\log_2\frac{7}{20} - \frac{13}{20}\log_2\frac{13}{20} = 0.5301 + 0.4041 = 0.9341$

$\mathrm{SymmU}(S) = \frac{2(H(T) - H_{only}(T))}{H(T) + H(S)} = \frac{2(1.0 - 0.7859)}{1.0 + 0.9341} = \frac{0.4282}{1.9341} = 0.2214$

Note that the information measure for the "only" attribute using SymmetricalUncertAttributeEval is 0.2214. Again, the WEKA output shows that, according to this measure, "only" is the best attribute (0.221), as indicated in the next table:

Ranked attributes:
0.221  1 only
0.21   3 political
0.21   2 okay
0.21   4 quick
0.192  5 new
0.147  6 straightforward
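Both worked examples above can be verified with a few lines of Python (an illustrative check of the arithmetic, not WEKA's implementation):

```python
import math

def entropy(probs):
    # H = -sum p * log2(p), skipping zero-probability terms
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_T = entropy([10/20, 10/20])              # class entropy = 1.0
H_T1 = entropy([6/7, 1/7])                 # partition where only = 1
H_T0 = entropy([4/13, 9/13])               # partition where only = 0
H_S_T = (7/20) * H_T1 + (13/20) * H_T0     # weighted entropy of the split
gain = H_T - H_S_T                         # information gain ≈ 0.2141

H_S = entropy([7/20, 13/20])               # entropy of the attribute itself
symm_u = 2 * (H_T - H_S_T) / (H_T + H_S)   # symmetric uncertainty ≈ 0.2214
```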

III. RELATED WORK

This section surveys published research papers which focus on the same task of accurately classifying sentiment at the document level. In fact, each paper is concerned with classifying the same movie review data set as this thesis. Pang, Lee, and Vaithyanathan (2002) report that classifying documents according to sentiment is a more difficult problem than traditional topic-based [12] classification. For example, Mitchell describes a topic-based classification task which achieves 89% accuracy classifying twenty thousand news articles into one of twenty different categories (Mitchell, 1997). Pang et al. acknowledge that, despite the use of several machine learning and feature extraction methods, their sentiment-based classification results are not comparable to the topic-based accuracies reported elsewhere. They speculate that the reduced accuracy obtained for the sentiment-based problem using a simple bag-of-words classification approach occurs because reviews frequently contain a "thwarted expectations" narrative. This phrase refers to reviews containing contrasting sentiment, where the reviewer's expectations do not correspond with his or her actual experience. Similarly, Turney (2002) observes that classifying documents according to sentiment is likely impacted by positive reviews containing unpleasant text and bad reviews containing pleasant text. Finally, Mullen and Collier (2004) also recognize the challenge of classifying document-level sentiment from decontextualized snippets where, for example, negatively-toned reviews often contain positive phrases. Mullen and Collier also note that the use of sarcasm, contrast, and digression in reviews further impedes more accurate classification of sentiment. Pang et al.
(2002) use Naive Bayes, Maximum Entropy, and Support Vector Machines (SVMs) learning schemes to classify a set of seven hundred positive and seven hundred negative 12 Yahoo! organizes documents into a tree-like hierarchy organized by topic.

movie reviews (polarity dataset 1.0) [13] according to sentiment, using three-fold cross validation [14]. They report that both Maximum Entropy and SVM classifiers are sometimes shown to outperform Naive Bayes in certain natural text classification tasks. In their experiments, they extract eight different feature sets from the reviews, which include unigrams, bigrams, unigrams + bigrams, and adjectives. They achieve their best results with unigrams (including presence and negation) input to an SVM classifier, where they reach 82.9% accuracy. In contrast, when using the most frequent 2,633 adjectives, they report 75.1% accuracy using SVM.

Pang and Lee (2004) extend their previous work by identifying only the subjective sentences found in the movie reviews, from which data is extracted and input to different learning schemes. First, they train a Naive Bayes subjectivity detector classification model against a collection of ten thousand sentences, which are derived from information sources independent of the movie reviews. This model achieves 92% accuracy correctly classifying individual sentences as being either subjective or objective. This subjectivity detector model is then applied to the movie review corpus to discard the objective sentences likely to contain misleading text, which might prove harmful to overall sentiment-based document classification accuracy. In contrast to their earlier experiments, this approach is applied to a larger data set (polarity dataset 2.0) [15] consisting of one thousand positive and one thousand negative movie reviews. A reduced movie review data set is generated where, on average, each review is reduced to about 60% of the full review according to word count, after discarding objective sentences. Next, the reduced movie review data set is input to a document-level Naive Bayes classifier

[13] Polarity dataset v1.0, released July, 2002.
[14] Three-fold cross validation is used because of the long training times associated with Maximum Entropy.
[15] Polarity dataset v2.0, released June, 2004.

where it achieves 86.4% accuracy; in contrast, a smaller accuracy rate of 82.8% results when the full movie review set is input to Naive Bayes. Pang et al. thus demonstrate that their subjectivity extraction method, when input to Naive Bayes, achieves about a 3.6% absolute improvement in classification accuracy (86.4% versus 82.8%).

Mullen and Collier (2004) extract features from a variety of different information sources, including movie reviews [13] and record reviews. These different features include unigrams, lemmas [16], semantic orientation measures for phrases developed by Turney (2002), and semantic measures for adjectives based on Osgood's theory of semantic differentiation proposed by Kamps and Marx (2002). They report movie review classification results for twelve experiments where each of the information sources is specified individually, or combined with one or more other sources, to derive vectors of real-valued input to an SVM classifier configured with a linear kernel. In addition, classification results are reported for two hybrid SVM models, where SVM output from earlier experiments serves as input to a second downstream SVM model. They report a maximum classification accuracy of 86.0% using a hybrid SVM model with ten-fold cross validation, where input is derived from semantic orientation measures combined with lemmas. They state that their classification results using a hybrid SVM are the best published results, to date, using the movie review data set.

Bai, Padman, and Airoldi (2004) propose a two-stage approach (Two-Stage Markov Blanket) for classifying movie review sentiment using a Bayesian algorithm. During the first stage, the conditional dependencies between the features are encoded into a Markov Blanket Directed Acyclic Graph (MB DAG) corresponding to the target class. Next, a meta-heuristic strategy (Tabu Search) is applied to the MB DAG to enable exploration of the solution space,

[16] In this case, lemmas are derived using the Conexor FDG parser.

which precisely adjusts the model while avoiding local minima. Learning dependent patterns between features enables the network to discard redundant predictors, resulting in an accurate classifier that uses a very small subset of available features. In their experiments, 7,717 features are extracted from the reviews, consisting of all unigrams occurring in eight or more documents. Their Two-Stage Markov Blanket identifies a set of 22 predictors and achieves 87.52% classification accuracy. In other words, their Two-Stage Markov Blanket discards 99.71% (7,695/7,717) of the features, while still attaining a high level of accuracy.

Sista and Srinivasan (2004) describe a movie review classification approach that requires identifying both positive and negative features contained in the movie reviews [13]. To do so, they exploit knowledge contained in the General Inquirer [17] (GI) lexicon and the WordNet (Miller, et al., 1993) database. First, they remove word sense disambiguation information from the GI lexicon to create a list of 8,640 words. Next, they design a set of rules to match annotation used by the GI lexicon, from which they select only the words determined as having either a positive or negative connotation. This step derives positive and negative lexicon tables containing 5,977 and 2,200 entries, respectively. Finally, the negative lexicon table is extended by adding similar entries found in WordNet according to the synonym relationship. After final derivation and cross-checking, their polarized lexicon contains a set of 5,977 positive and 3,700 negative words. The movie reviews are parsed to create feature vectors in ARFF format, where attribute entries correspond only to the words found in the polarized lexicon. Finally, the ARFF files are input to several WEKA classifiers including Naive Bayes, Multinomial Naive Bayes, and SMO [18]. They

[17] General Inquirer: http://www.wjh.harvard.edu/~inquirer/.
[18] According to the WEKA documentation, SMO implements Platt's sequential minimal optimization algorithm for training a SVM.

report a best classification accuracy of 84.20% using the Multinomial Naive Bayes classifier with the reviews split into 71% training and 39% test sets.
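Sista and Srinivasan's lexicon-restricted representation can be sketched as follows. The four-word lexicon and the sample reviews below are invented, and the output imitates a single ARFF data row of binary word-presence attributes plus the class label; this is an illustration of the idea, not their actual pipeline.

```python
# Sketch of building polarized-lexicon feature vectors. Only words in the
# lexicon become attributes; each review yields a row of 0/1 presence flags
# plus its class label, analogous to one ARFF data line.
LEXICON = ["good", "bad", "enjoyable", "dull"]   # invented mini-lexicon

def arff_row(review_text, label):
    """One ARFF-style data row: presence flag per lexicon word, then class."""
    words = set(review_text.lower().split())
    flags = [str(int(w in words)) for w in LEXICON]
    return ",".join(flags + [label])

print(arff_row("a good and enjoyable film", "pos"))  # → "1,0,1,0,pos"
print(arff_row("dull plot and bad acting", "neg"))   # → "0,1,0,1,neg"
```

Restricting the attribute set to the polarized lexicon keeps the feature space small and focused on opinion-bearing words, at the cost of ignoring any sentiment cue absent from the lexicon.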

IV. EXPERIMENTS AND RESULTS

This section presents a set of four experiments. Each one describes the approach, configuration, and experimental results obtained when classifying the movie review corpus. In general, the experiments are ordered progressively to show incremental improvements in classification accuracy.

The first experiment compares classification results produced by Naive Bayes using selected and ranked input versus baseline (non-ranked) input. Results indicate that applying selection and ranking methods to the input (in this case, adjectives) has a beneficial effect on classification accuracy. In the second experiment, both adjectives and adverbs are extracted from the reviews and then WEKA's selection and ranking methods are used to identify input to the Naive Bayes classifier. In this experiment, the classification accuracy is again shown to increase, compared to using baseline input, as a result of using the selected and ranked input corresponding to the two parts of speech. In the third experiment, selected and ranked adjectives and adverbs are once again input to a classifier; however, in this case a parameter of the Bayes Net classifier is fine-tuned to show an additional increase in classification accuracy. Finally, in the fourth experiment, both unigrams and bigrams are extracted from the reviews, selection and ranking methods are applied, and the result is classified by Bayes Net. This configuration produces the best classification results.

1. Naive Bayes: Classifying Ranked Adjectives (Unigrams)

This experiment produces classification results by extracting adjectives from the movie reviews and using them as input to the Naive Bayes classifier in WEKA. In this case, perl-

extract.pl is configured to extract all adjectives from the reviews identified by the parts of speech Tagger, according to the definitions in Table 4.

Table 4: Adjective Tags Produced by Tagger

  Tag  Definition              Example
  JJ   Adjective               happy, bad
  JJR  Adjective, comparative  happier, worse
  JJS  Adjective, superlative  happiest, worst

Parsing the movie reviews generates a keywords file containing 7,781 adjectives. An exhaustive review of the keywords file to assess the accuracy of the words identified as adjectives by Tagger is not performed; however, 202 entries are subjectively deleted where the keyword is either not grammatically correct, contains a special character, or identifies an actor or director. For example, deleted words include ex, arnold, pacino, altman, and d'angelo. After editing, the final adjectives-only keywords file contains 7,579 entries.

When using ten-fold cross validation, Naive Bayes achieves 80.20% accuracy when all 7,579 adjectives are specified as input to the classifier. This result is considered the baseline accuracy. As described in the WEKA Attribute Evaluation Methods section, two separate approaches are defined which evaluate and rank the adjectives in descending order of importance. In each case, according to the given method's evaluation criterion, the most important adjectives appear near the top of the list and the least important adjectives are found closer to the bottom. Perl-classify.pl is configured to programmatically select increasingly larger subsets of the four thousand most important adjectives from the keywords list, in increments of two hundred. For each subset of adjectives, an ARFF file is generated as

input to WEKA, where the movie reviews are classified using Naive Bayes combined with ten-fold cross validation.

The Naive Bayes classification results obtained using the two attribute evaluation and ranking methods in WEKA are shown in Figure 4. This illustration plots Naive Bayes classification accuracy against the number of inputs, when selecting attribute subsets using the evaluators InfoGainAttributeEval and SymmetricalUncertAttributeEval.

[Figure 4: Naive Bayes classification accuracy (81-90%) versus number of ranked adjective inputs (100-3,900), for InfoGainAttributeEval and SymmetricalUncertAttributeEval.]

When using 100 to 1,900 inputs, Naive Bayes achieves the best results using input ranked by SymmetricalUncertAttributeEval. It achieves greater than 87.00% accuracy over the range of 1,000 to 1,500 inputs, peaking at 87.30% accuracy using 1,300 inputs. Throughout the remaining input range, Naive Bayes classification accuracy improves slightly (approximately +0.15%) using input ranked by InfoGainAttributeEval. Results also indicate that classification accuracy appears to flatten out over the range of 1,900 to 4,000 inputs, independent of ranking method. In fact, similar test results (not shown in Figure 4) indicate that over the remaining range of inputs, Naive Bayes classification accuracy continues declining to a low of 80.20% accuracy, which equals the baseline accuracy.
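The adjective-extraction step that produced these inputs can be sketched as a simple filter over Tagger output, keeping only tokens whose tag appears in Table 4. The tagged sentence below is invented for illustration and is not actual corpus data:

```python
# Sketch of the adjective-extraction step: keep tokens whose Tagger tag is
# one of the Table 4 adjective tags. The tagged sentence is invented.
ADJECTIVE_TAGS = {"JJ", "JJR", "JJS"}

def extract_adjectives(tagged_tokens):
    """Return the words tagged as adjectives (JJ, JJR, JJS)."""
    return [word for word, tag in tagged_tokens if tag in ADJECTIVE_TAGS]

tagged = [("the", "DT"), ("acting", "NN"), ("was", "VBD"),
          ("worse", "JJR"), ("than", "IN"), ("the", "DT"),
          ("bad", "JJ"), ("script", "NN")]
print(extract_adjectives(tagged))  # → ['worse', 'bad']
```

In the thesis pipeline this filtering is performed by perl-extract.pl over the whole corpus, with the resulting keyword list then cleaned by hand as described above.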

It is now determined that the use of evaluation and ranking methods in WEKA leads to a 7.10% (87.30% versus 80.20%) absolute improvement in classification accuracy. When this result is compared to the baseline case, it is clear that using attribute evaluation and ranking methods has proven beneficial, as indicated by the classification results reported by Naive Bayes. In other words, the baseline classification error rate is 19.80% (100 - 80.20) and the classification error rate obtained using evaluated and ranked adjectives is 12.70% (100 - 87.30), which equals a

  (19.80 - 12.70) / 19.80 = 7.10 / 19.80 = 0.3586,

or approximately a 35.86% reduction in classification error rate. In the next section, evaluation and ranking methods are applied to adjectives and adverbs, which have been extracted from the set of reviews.

2. Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams)

This experiment logically extends the methodology introduced in the previous section, where adjectives are extracted, evaluated, ranked, and then input to Naive Bayes to enhance classification accuracy. In this case, words identified as either adjectives or adverbs are simultaneously extracted from the reviews and then evaluated and ranked. In addition to the adjective tags (refer to Table 4), perl-extract.pl also extracts the adverbs from the review corpus according to the parts of speech Tagger definitions shown in Table 5.

Table 5: Adverb Tags Produced by Tagger

  Tag  Definition           Example
  RB   Adverb               often, not, very, here
  RBR  Adverb, comparative  faster
  RBS  Adverb, superlative  fastest
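The relative error-rate reductions reported throughout these experiments all follow the same formula: (baseline error - new error) / baseline error, where error is 100 minus accuracy. A small helper reproduces the arithmetic:

```python
def error_reduction(baseline_acc, new_acc):
    """Relative reduction in error rate when accuracy improves.

    Both accuracies are percentages; the error rate is 100 - accuracy.
    """
    baseline_err = 100.0 - baseline_acc
    new_err = 100.0 - new_acc
    return (baseline_err - new_err) / baseline_err

# Experiment 1: baseline 80.20% versus ranked-adjective 87.30% accuracy.
print(round(error_reduction(80.20, 87.30), 4))  # → 0.3586
```

The same function applied to the figures below (81.45% baseline versus 89.10% ranked adjectives and adverbs) yields the 0.4124 reduction quoted there.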

The reviews are parsed and a total of 8,928 unigrams are extracted matching the adjective and adverb tags, as defined by Tagger (refer to Table 4 and Table 5 for the complete listing). This combined keyword list contains 7,579 [19] adjectives and 1,349 adverbs. By default, when all adjectives and adverbs are input to Naive Bayes, it produces an accuracy of 81.45% using ten-fold cross validation. Therefore, the baseline accuracy, when using all input, is 81.45%.

Perl-classify.pl selects subsets of the four thousand most important adjectives and adverbs from the keywords list, as measured by each of the two WEKA evaluation and ranking methods described in the WEKA Attribute Evaluation Methods section. Classification results using Naive Bayes, combined with ten-fold cross validation, are shown in Figure 5.

[Figure 5: Naive Bayes classification accuracy (81-90%) versus number of ranked adjective and adverb inputs (100-3,900), for InfoGainAttributeEval and SymmetricalUncertAttributeEval.]

Over the range of 1,100 to 4,000 inputs, the Naive Bayes classifier, using input selected by SymmetricalUncertAttributeEval, consistently achieves the highest classification accuracy. Specifically, Naive Bayes achieves a maximum accuracy of 89.10% using 1,500 evaluated and ranked inputs. Much like the first experiment, the plotted trend shown in Figure 5

[19] The same 202 entries incorrectly identified as adjectives are subjectively deleted from the list.

suggests that beyond the level of about 1,900 inputs, classification accuracy begins to slowly degrade.

These results show that the combination of both adjectives and adverbs, when evaluated and ranked as input to Naive Bayes, improves the maximum classification accuracy by about 1.80% absolute (89.10% versus 87.30%), as compared to using only selected and ranked adjectives. In other words, including adverbs as part of the feature space leads to an increase in accuracy. These promising results indicate that a 7.65% absolute increase (89.10% versus 81.45%) in accuracy is achieved by ranking the adjectives and adverbs, as compared to simply using the complete (baseline) set of inputs. Furthermore, in this case the baseline error rate is 18.55% (100 - 81.45) and the classification error rate obtained using selected and ranked adjectives and adverbs is 10.90% (100 - 89.10). This results in a

  (18.55 - 10.90) / 18.55 = 7.65 / 18.55 = 0.4124,

or a 41.24% reduction in the classification error rate. Again, the use of evaluation and ranking methods to choose features is shown to improve classification accuracy. Because the first two experiments both clearly achieve their highest classification accuracy using input selected according to SymmetricalUncertAttributeEval, further experiments are performed using only this evaluation method.

3. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams)

In this section, additional classification tests are performed using evaluated and ranked adjectives and adverbs as input to a classifier. This time, however, WEKA's Bayes Net classifier is specified instead of Naive Bayes. By default, Bayes Net sets the option SimpleEstimator -A 0.5 to initialize the cells in the probability tables to 0.5, which prevents zero-based probabilities from occurring. The initial count for each value in the tables is specified using the

-A option. Perl-classify.pl is configured to test a smaller input range, 1,200 to 2,200, in increments of fifty, according to the SymmetricalUncertAttributeEval evaluation criterion. Bayes Net is specified and the -A option is tested using values of 0.5, 0.005, 0.007, and 0.009; however, results are only shown for values of 0.5 (default) and 0.007.

Figure 6 shows the movie review classification results for Naive Bayes and Bayes Net, with -A set to both 0.5 and 0.007. The Naive Bayes results closely resemble those obtained in the Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams) section and are presented for comparative purposes. Specifically, Naive Bayes is shown to reach a maximum accuracy of just over 89.00%.

[Figure 6: Naive Bayes versus Bayes Net (-A = 0.5 and -A = 0.007) classification accuracy (87-93%) over 1,200 to 2,200 ranked adjective and adverb inputs.]

Bayes Net, when using -A equal to 0.5, appears to slightly outperform Naive Bayes. Over the range of inputs shown in the illustration, Bayes Net is, on average, about 0.80% more accurate. Bayes Net, however, performs extremely well with -A set to 0.007, where it achieves 92.25% accuracy using 1,850 inputs. In fact, Figure 6 shows that it performs above 92.00% over the range of 1,650 to 1,850 inputs. In absolute terms, Bayes Net (-A = 0.007) classification accuracy outperforms Bayes Net (default options) by 2.40% (92.25% versus 89.85%) and Naive Bayes by 3.15% (92.25% versus 89.10%). Recall that the previous experiment in section Naive

Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams) achieved 89.10% accuracy, which equals an error rate of 10.90% (100 - 89.10). Therefore, compared to those results, the use of Bayes Net in this experiment has led to a

  (10.90 - 7.75) / 10.90 = 3.15 / 10.90 = 0.2890,

or a 28.90% reduction in classification error rate.

4. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams and Bigrams)

All three previous experiments extracted individual words (unigrams) from the reviews, corresponding to one or more defined parts of speech. For example, the third experiment extracted both adjectives and adverbs from the reviews to form a set of keywords. Now consider a bigram, which represents a two-word sequence where the word pair must contain an adjective. For example, the text snippet not a good one is identified by Tagger as:

  not/rb a/det good/jj one/nn

In this case, in addition to extracting the individual adjective good, perl-extract.pl also extracts the bigrams not_good and good_one and includes them in the set of keywords.

There are three specific rules used by perl-extract.pl to extract bigrams from the reviews. Two bigram rules are applied to adjectives and operate against only the three leading contextual word positions. Rule 1 examines the three positions preceding an adjective and attempts to match negations, where words such as not or wasn't are present. Table 6 shows three examples where Rule 1 is satisfied, forming the bigrams not_good, isnt_enough, and wasnt_funny, respectively. This rule is similar to the one used by Pang, Lee, and

Vaithyanathan (2002), where they apply the tag NOT_ to every word following a negation word and leading up to a punctuation mark.

Table 6: Bigrams, Rule 1

  Position 3  Position 2  Position 1  Adjective  Example
  but         not         a           good       but not a good
  there       is          n't         enough     there isn't enough
  was         n't         very        funny      wasn't very funny

Next, Rule 2 is applied to an adjective only when Rule 1 does not match a negation to form a bigram. Again, the first three leading word positions are examined, moving from right to left preceding the adjective. Rule 2 attempts to match the first leading word that is not a preposition, conjunction, or punctuation mark to form a bigram. According to Rule 2, the example text shown in Table 7 leads to the formation of the bigram just_bizarre.

Table 7: Bigrams, Rule 2

  Position 3  Position 2  Position 1  Adjective  Example
  movie       was         just        bizarre    movie was just bizarre

Finally, a third rule attempts to match the single trailing word position, adjacent to each adjective. Rule 3 only forms bigrams when the trailing position is identified as a noun part of speech. Therefore, as the examples in Table 8 show, the snippet good acting forms the bigram good_acting; however, the text string bad as does not match Rule 3, and therefore no bigram is formed.

Table 8: Bigrams, Rule 3

  Adjective  Position +1  Example
  good       acting       good acting
  bad        as           bad as
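The three rules can be sketched as follows. The negation word list, the Rule 2 skip-tag set, and the lowercase tag conventions (taken from the not/rb a/det good/jj one/nn example) are assumptions for illustration; the actual perl-extract.pl logic may differ in detail:

```python
# Illustrative implementation of the three bigram rules described above.
# Input is a list of (word, tag) pairs using the lowercase tag style of the
# Tagger example in the text (jj* = adjective, nn* = noun). The negation
# list and the Rule 2 skip-tags are assumptions, not the exact script's sets.
NEGATIONS = {"not", "nt", "isnt", "wasnt", "dont", "didnt", "no", "never"}
SKIP_TAGS = {"in", "cc", "det", ",", ".", ":"}  # prepositions, conjunctions, punctuation

def extract_bigrams(tagged):
    bigrams = []
    for i, (word, tag) in enumerate(tagged):
        if not tag.startswith("jj"):
            continue
        leading = tagged[max(0, i - 3):i]
        # Rule 1: a negation within the three leading positions.
        negs = [w for w, _ in leading if w in NEGATIONS]
        if negs:
            bigrams.append(f"{negs[-1]}_{word}")
        else:
            # Rule 2: nearest leading word that is not a skip-tagged token.
            for w, t in reversed(leading):
                if t not in SKIP_TAGS:
                    bigrams.append(f"{w}_{word}")
                    break
        # Rule 3: the single trailing word, only when it is a noun.
        if i + 1 < len(tagged) and tagged[i + 1][1].startswith("nn"):
            bigrams.append(f"{word}_{tagged[i + 1][0]}")
    return bigrams

sentence = [("not", "rb"), ("a", "det"), ("good", "jj"), ("one", "nn")]
print(extract_bigrams(sentence))  # → ['not_good', 'good_one']
```

Applied to the Table 7 example, movie/nn was/vbd just/rb bizarre/jj, the sketch produces just_bizarre via Rule 2, and for bad/jj as/in it produces nothing, matching Table 8.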

By definition, the three bigram rules indicate that each adjective will not necessarily form bigrams by combining with leading and/or trailing word tokens. That is, the rules do not require that each adjective form a bigram.

Now, all adjectives and adverbs (unigrams), and bigrams corresponding to Rules 1, 2, and 3, are extracted from the reviews, which results in a list containing 105,616 unique keywords. Next, a subset of keywords is chosen from the full keyword list, where each keyword is required to have a document frequency greater than or equal to four. That is, the keyword must be contained in at least four of the two thousand reviews. This arbitrary selection process reduces the full set of keywords to a manageable set of 8,549.

Perl-classify.pl is configured to test the keywords in the range 1,100 to 2,000, in increments of ten, according to the SymmetricalUncertAttributeEval evaluation criterion. The Bayes Net classifier is again specified, but this time using the -A option with values of 0.001, 0.003, and 0.005. The highly accurate classification results using Bayes Net with -A = 0.003 are shown in Figure 7.

[Figure 7: Bayes Net (-A = 0.003) classification accuracy (93-96%) over 1,100 to 2,000 ranked unigram and bigram inputs.]

Classification accuracy is shown to be greater than 95% over the range of 1,300 to 1,420 inputs, and achieves a maximum accuracy of 95.50% using 1,380 inputs. Compared to results