Netpix: A Method of Feature Selection Leading to Accurate Sentiment-Based Classification Models

Netpix: A Method of Feature Selection Leading to Accurate Sentiment-Based Classification Models

James B. Steck

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Data Mining

Central Connecticut State University
New Britain, Connecticut

23 September 2005

Thesis Advisor: Dr. Daniel T. Larose
Department of Mathematical Sciences

ABSTRACT

The online DVD rental company Netflix advertises that their service is "the best way to rent movies." [1] Though Netflix claims they enable customers to find and discover movies they will enjoy, consistently renting movies that meet personal tastes and standards remains an elusive task. An intelligent data mining model that recommends movies according to each viewer's personal preference (his or her "net picks," so to speak) would likely increase customer satisfaction. Researchers have proposed several techniques that accurately classify the underlying sentiment found in reviews. In several cases, these techniques rely on adjectives as likely indicators of subjectivity, sentiment, or opinion. This thesis describes a method that extracts useful features from a collection of movie reviews and uses them to build data mining models capable of accurately classifying a new review as either Good or Bad. The experiments described in this thesis use attribute selection methods in WEKA [2] to evaluate each feature's relevance with respect to the task of movie review classification. Subsets of the ranked features are then programmatically input to Bayesian-based classifiers in WEKA to generate classification results. These methods are proven to produce highly accurate classification models, with results often more competitive than those reported in current literature.

[1] Netflix, Inc. All rights reserved.
[2] WEKA (Waikato Environment for Knowledge Analysis) 3.4 Data Mining Software.

ACKNOWLEDGEMENTS

I am thankful for the wonderful support I received from my family while completing this thesis. They listened to me repeatedly describe my setbacks, possible approaches, and intermediate results to this particular problem more times than I can possibly count. I am especially grateful to my wife Daphne, whose editorial comments and love for good movies inspired the completion of this document. She also manages our Netflix queue.

TABLE OF CONTENTS

ABSTRACT ... 2
ACKNOWLEDGEMENTS ... 3
TABLE OF CONTENTS ... 4
I. INTRODUCTION ... 5
II. BACKGROUND ... 10
  1. Data Extraction ... 10
  2. Method of Ranking and Classification ... 13
  3. Bayesian Algorithms ... 14
  4. WEKA Attribute Evaluation Methods ... 17
III. RELATED WORK ... 21
IV. EXPERIMENTS AND RESULTS ... 26
  1. Naive Bayes: Classifying Ranked Adjectives (Unigrams) ... 26
  2. Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams) ... 29
  3. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams) ... 31
  4. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams and Bigrams) ... 33
V. CONCLUSIONS
VI. REFERENCES ... 39

I. INTRODUCTION

There has been an increasing amount of interest devoted to exploring computational methods and models capable of identifying user sentiment found in natural language documents. Sentiment analysis seeks to determine the subjectivity, opinion, or polarity found in unstructured on-line information sources such as product reviews, movie reviews, or political commentary. Possible applications should efficiently summarize huge amounts of data to detect underlying sentiment, and may include a recommendation system or review evaluator which produces a recommendation such as "buy" or "do not buy," or "good" or "bad." An important first step to developing such an application might be to view it as a dichotomous problem; this way, possible solutions attempt to identify the underlying tone of the text as either positive or negative. A document-level classification model may provide an acceptable solution; however, the task of first determining a relevant set of features to serve as input to the classifier presents a significant challenge.

A review of published research literature (Bai, Padman, Airoldi, 2004; Mullen, Collier, 2004; Pang, Lee, 2004; Pang, Lee, Vaithyanathan, 2002; Sista, Srinivasan, 2004; Turney, 2002) reveals that methods designed to classify document sentiment are often represented as bag-of-words solutions.

[Figure 1: Document Collection -> Feature Space]

That is, individual features typically correspond to words, pairs of words, parts of speech, or sentences extracted from the document collection, as illustrated in Figure 1. Additionally, the attribute values associated with each feature are usually represented as frequency counts, polarity measures, distance measures, or Boolean values. Regardless of the chosen feature selection and representation, however, each approach must confront a high-dimension feature space, where the individual features may be useful, redundant, correlated, or simply irrelevant to the task of classification. As a result, each solution generally differentiates itself based on its ability to reduce the dimensionality of the feature space, ultimately leading to a desirable level of classification accuracy.

This thesis demonstrates an effective approach to reducing the dimensionality of a highly-dimensional feature space from which optimal feature subsets are identified, thereby leading to accurate document-level sentiment classification models. Each of the four experiments described in this document begins by specifying a feature type, which is programmatically extracted from the document collection to create a feature space, as shown in Figure 1. One or more of WEKA's attribute selection and ranking methods are then applied to the feature space to produce a ranked feature space. Programmatic methods select subsets of the ranked feature space, build input files for one or more WEKA classifiers, and report classification results. A conceptual overview of the process is shown in Figure 2.

[Figure 2: Feature Space -> Ranked Feature Space -> Feature Space Subsets, produced by WEKA selection and ranking]

This thesis presents the classification results from each experiment using graphical summaries. These results are quite compelling, as they outperform several state-of-the-art approaches recently published in machine learning conference proceedings. The proposed methods of feature space reduction and document classification presented in this thesis are evaluated using a set of one thousand positive and one thousand negative movie reviews available from the Cornell Natural Language Processing Group (Pang, Lee, 2004).

The remaining four sections of the document are organized as follows:

The BACKGROUND section explains in detail the programmatic methods developed in Perl [3] to extract features, derive classification input files, and select ranked feature subsets.

The RELATED WORK section describes related research focused on developing data mining models capable of classifying sentiment at the document level.

[3] Perl from ActiveState Corporation.

Note that these papers all contain experimental results obtained by classifying the same (or similar) set of movie reviews explored in this thesis.

The EXPERIMENTS AND RESULTS section contains four experiments, which together present conclusive results used to support the thesis statement. These experiments are generally presented in a progressive manner, so that the introduction of each one leads to improved classification accuracy. The first experiment extracts adjectives from the movie reviews and provides evidence that selecting and ranking the features improves classification accuracy using WEKA's Naive Bayes [4] classifier, as compared to using the baseline (non-ranked) set of features. Similarly, the second experiment extracts adjectives and adverbs. When compared to using the baseline set of features, selection and ranking techniques are again shown to improve classification accuracy using the Naive Bayes classifier. The third experiment uses the Bayesian Networks (Bayes Net) classifier, which further improves the classification accuracy using a feature space comprised of adjectives and adverbs. Finally, the fourth experiment extracts both unigrams [5] and bigrams from the movie reviews, which maximizes the classification accuracy using the Bayes Net classifier. That is, of all the experiments performed in this thesis, this approach leads to the best classification accuracy when classifying the movie review data set.

[4] The Naive Bayes classifier is based on Bayes' Rule and naively assumes independence of events, given the class.
[5] A unigram is a feature which represents a single word, such as "wonderful." Similarly, a bigram defines a two-word phrase (or pair), such as "wonderful movie."

The CONCLUSIONS section summarizes the experimental results to provide evidence that the thesis statement has been clearly supported.

II. BACKGROUND

This section provides background information to enhance understanding of the experiments performed in the EXPERIMENTS AND RESULTS section. It describes the programmatic methods developed in Perl by which a specified feature space can be derived from the set of movie review documents. It also discusses the automated methods used to generate the ARFF [6] files which are input to WEKA classifiers, and the process used to select feature subsets.

1. Data Extraction

Unless indicated otherwise, all classification experiments use the data set named polarity dataset v2.0, which was made available (see reference indicating Web site) by Pang and Lee in June 2004 for use with sentiment analysis experiments (Pang, Lee, 2004). This data set represents a movie review corpus consisting of one thousand positive and one thousand negative movie reviews extracted from the Internet Movie Database (IMDb). [7] Pang and Lee developed tools to automatically pre-classify the reviews with pos (positive) or neg (negative) categorical tags. Each movie review is stored in an individual file where, according to Pang, the actual review text has been "processed down" in an attempt to remove any information indicative of its rating.

[6] WEKA requires input files in ARFF format.
[7] Internet Movie Database:

For example, the pos subdirectory contains the following five individual review files:

cv004_11636.txt
cv000_29590.txt
cv001_1843.txt
cv002_15918.txt
cv003_11664.txt
...

A Perl script named perl-extract.pl is developed to read the set of reviews and automatically extract specified parts of speech from the text. To accomplish this task, perl-extract.pl uses the Lingua::EN::Tagger [8] Perl class, which assigns part-of-speech tags to English text based on statistics found in the Penn Treebank Project. [9] For example, the snippet of text "bizarre, not only," extracted from a movie review, is shown next with the part-of-speech tags applied by Tagger:

bizarre/JJ ,/PPC not/RB only/RB

In this case, the unigrams are tagged as follows: "bizarre" is an adjective (/JJ), the comma is punctuation (/PPC), and both "not" and "only" are adverbs (/RB). As a result, all individual words (unigrams) in the reviews matching a specified part of speech, such as an adjective, can be programmatically identified and extracted into a keywords list. The script reports all extracted unigrams in descending order, according to the word's frequency of occurrence within the movie review corpus.

[8] Developed by Aaron Coburn.
[9] Penn Treebank:
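To make this extraction step concrete, the following minimal Perl sketch tags a directory of reviews and tallies adjectives. It is an illustrative reconstruction based on the description above, not the actual perl-extract.pl source; the directory layout, counting structure, and output format are assumptions.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Lingua::EN::Tagger;

my $tagger = Lingua::EN::Tagger->new;
my (%word_count, %doc_count);

# Walk every review file and tally adjectives (tags JJ, JJR, JJS).
foreach my $file (glob "reviews/{pos,neg}/*.txt") {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my $text = do { local $/; <$fh> };
    close $fh;

    # get_readable() returns tokens in word/TAG form, e.g. "bizarre/JJ".
    my $tagged = $tagger->get_readable($text);
    my %seen;
    while ($tagged =~ m{(\w[\w']*)/(JJ[RS]?)\b}g) {
        my $adj = lc $1;
        $word_count{$adj}++;                       # occurrences in the corpus
        $doc_count{$adj}++ unless $seen{$adj}++;   # reviews containing the word
    }
}

# Report keywords in descending order of corpus frequency.
for my $adj (sort { $word_count{$b} <=> $word_count{$a} } keys %word_count) {
    print "$adj, $word_count{$adj}, $doc_count{$adj}\n";
}
```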

In the following list, for example, perl-extract.pl reports that the adjective "good" occurs 2,321 times in 1,150 different movie reviews found within the movie review corpus:

more, ...
good, 2321, 1150
most, ...

In addition to building a list of unigrams and associated frequency counts that occur within the corpus, perl-extract.pl also generates files in ARFF format, as required by WEKA for classification. Let the set of k features extracted from the movie reviews be represented as $F = \{f_1, f_2, f_3, \ldots, f_k\}$. In addition, each movie review $d_i$ is represented as the vector $d_i = \{B_1(d_i), B_2(d_i), B_3(d_i), \ldots, B_k(d_i)\}$, where the Boolean function $B_j(d_i)$ evaluates to 1 or 0 corresponding to whether the $i$th document either contains or does not contain the $j$th feature, respectively. This way, perl-extract.pl builds an ARFF file where each movie review is represented as a comma-separated Boolean vector. For example, suppose a keyword file contains only the keywords "more," "good," and "most," as shown in the previous example. In this case, generating the corresponding ARFF file produces a set of instances, where each line represents a specific movie review, as follows:

...
0,1,1,pos
0,0,0,neg
1,0,1,neg
1,1,0,pos
...

Note that the first review, identified as positive, contains the words "good" and "most," but does not contain the word "more." Similarly, only two of the three words, "more" and "good," are found in the last review, which again happens to be a positive review.
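The ARFF-generation step can be sketched as follows. This is a hedged illustration, not the thesis code: the write_arff subroutine, its arguments, and the relation name are hypothetical, though the Boolean-vector layout matches the example above.

```perl
use strict;
use warnings;

# Hypothetical helper: write a Boolean presence vector per review, in the
# ARFF layout shown above. Names and data structures are illustrative.
sub write_arff {
    my ($keywords, $reviews, $out_path) = @_;
    open my $out, '>', $out_path or die "Cannot write $out_path: $!";

    print $out "\@relation movie_reviews\n\n";
    print $out "\@attribute $_ {0,1}\n" for @$keywords;
    print $out "\@attribute CLASS {pos,neg}\n\n\@data\n";

    for my $review (@$reviews) {
        # 1 if the review contains the keyword, 0 otherwise.
        my @bits = map { $review->{words}{$_} ? 1 : 0 } @$keywords;
        print $out join(',', @bits, $review->{class}), "\n";
    }
    close $out;
}

# The toy three-keyword example from the text.
write_arff(
    [qw(more good most)],
    [ { words => { good => 1, most => 1 }, class => 'pos' },
      { words => { more => 1 },            class => 'neg' } ],
    'reviews.arff',
);
```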

2. Method of Ranking and Classification

The previous section described the general process by which one or more parts of speech can be extracted from the full set of reviews to create a keywords list and corresponding ARFF file. Specifying the entire set of unigrams as input to a classifier, however, is not likely to produce optimal results, because many features found in text documents are likely irrelevant to the classification task (Chakrabarti, 2003; Witten, Frank, 2000). In addition, Larose states that the inclusion of variables which are highly correlated may lead to components being double counted (Larose, 2006). For example, according to perl-extract.pl and Tagger, 7,781 unique adjectives can be found in the set of two thousand movie reviews. Although Naive Bayes handles highly-dimensional data sets extremely well, specifying this entire list as a set of inputs does not necessarily lead to optimal classification accuracy. More importantly, specifying large data sets as input to a WEKA classification or selection task may lead to a java.lang.OutOfMemoryError, [10] where the result is nothing more than a hung application.

The Perl script perl-weka.pl is used to address several of these weaknesses. First, it takes the entire attribute set, as generated by perl-extract.pl, and assigns a specific selection method in WEKA that measures the individual worth of each attribute with respect to the task of classifying a review as good or bad. The list of measurements associated with each feature is ranked in decreasing order, from which perl-weka.pl chooses a subset of the most promising attributes and then generates the corresponding ARFF file as input to a particular WEKA classifier. For example, perl-weka.pl can be configured to read the list of 7,781 adjectives, from which it chooses subsets in increments of one hundred, from the top four thousand adjectives in the list. This way, forty increasingly larger subsets (size one hundred to four thousand) of attributes are input to the specified WEKA classifier, such as Naive Bayes, where movie review classification is performed. With this efficient approach, larger portions of the solution space are programmatically searched, which ultimately leads to identifying the most promising classification models.

[10] In this study, the WEKA platform contains 512MB of memory.
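A minimal sketch of this subset loop is shown below, assuming a ranked keyword file and the write_arff routine from the previous sketch. The -t (training file) and -x (folds) switches are WEKA's standard command-line options; the file names and output parsing are illustrative assumptions.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Ranked keywords, best first, one per line (produced by a WEKA evaluator).
open my $rk, '<', 'ranked.txt' or die "Cannot open ranked.txt: $!";
chomp(my @ranked = <$rk>);
close $rk;

my @reviews;   # parsed review vectors, populated as in the extraction sketch

for (my $n = 100; $n <= 4000; $n += 100) {
    my @subset = @ranked[0 .. $n - 1];
    my $arff   = "subset_$n.arff";
    write_arff(\@subset, \@reviews, $arff);

    # WEKA's command line: -t names the training file, -x the number of
    # cross-validation folds.
    my $out = `java weka.classifiers.bayes.NaiveBayes -t $arff -x 10`;
    my ($acc) = $out =~ /Correctly Classified Instances\s+\d+\s+([\d.]+)\s*%/;
    print "$n inputs: $acc%\n";
}
```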

3. Bayesian Algorithms

This thesis uses the Naive Bayes and Bayes Net WEKA learning schemes to classify the movie review data. This section briefly describes these two algorithms. Mitchell (1997) says that probabilistic learning algorithms such as Naive Bayes are very effective at classifying text documents. In addition, although Naive Bayes naively assumes independence between terms, it is widely used because of its simplicity and quick training times (Chakrabarti, 2003). Assuming that the predictor variables are independent given the class value, conditional independence is expressed as follows (Larose, 2006):

$$\theta_{\text{Naive Bayes}} = \arg\max_{\theta} \, p(\theta) \prod_{i=1}^{m} p(X_i = x_i \mid \theta)$$

where $\theta$ takes on the class value, either pos or neg. Furthermore, because any single zero probability value will render this product to be zero, the probability of each cell must be adjusted (Larose, 2006). WEKA avoids zero-based cells by adding 0.5 (the default virtual value) to each cell. Consider the sample data set shown in Table 1.

Table 1: Sample ARFF

@attribute only {0,1}
@attribute okay {0,1}
@attribute political {0,1}
@attribute quick {0,1}
@attribute new {0,1}
@attribute straightforward {0,1}
@attribute CLASS {neg,pos}

@data
0,1,0,0,1,0,neg
0,0,0,1,0,0,neg
1,0,0,1,0,0,neg
1,0,0,0,0,0,neg
1,1,0,0,0,0,neg
0,0,0,0,1,0,neg
1,0,0,0,0,0,neg
1,0,0,0,0,0,neg
0,0,0,1,0,0,neg
1,1,0,0,0,0,neg
0,0,0,0,1,0,pos
0,0,0,0,0,0,pos
0,0,0,0,1,0,pos
0,0,1,0,1,0,pos
0,0,1,0,1,0,pos
0,0,1,0,1,1,pos
1,0,0,0,1,1,pos
0,0,0,0,1,0,pos
0,0,0,0,0,0,pos
0,0,0,0,0,0,pos

According to this data, the probability cell values for the "only" attribute are derived in Table 2. For example, given that the review is negative, the conditional probability of the word "only" occurring is $p(\text{only}=1 \mid \text{CLASS}=\text{neg}) = 6/10 = 0.60$, based on frequency counts derived from Table 1. However, the adjusted cell probability produced internally by Naive Bayes to avoid potential zero-based cells is $p(\text{only}=1 \mid \text{CLASS}=\text{neg}) = (6 + 0.5)/(10 + 2 \times 0.5) = 6.5/11 \approx 0.59$. Larose presents a detailed example describing how WEKA's Naive Bayes classifier derives its probabilities for a classification problem (Larose, 2006).
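A small sketch of this adjustment, applied to the counts for "only" (summarized in Table 2 below), follows. The arithmetic of adding the 0.5 virtual value to each cell, and therefore 2 x 0.5 to the class total, is our reading of WEKA's default SimpleEstimator behavior, shown here for illustration only.

```perl
use strict;
use warnings;

# Cell counts for "only" from Table 1: six of the ten negative reviews
# contain the word, one of the ten positive reviews does.
my %count       = ( 'neg,1' => 6, 'neg,0' => 4, 'pos,1' => 1, 'pos,0' => 9 );
my %class_total = ( neg => 10, pos => 10 );

sub cond_prob {
    my ($class, $value, $virtual) = @_;
    # Add the virtual value to each of the two cells for this class.
    return ( $count{"$class,$value"} + $virtual )
         / ( $class_total{$class} + 2 * $virtual );
}

printf "raw      p(only=1|neg) = %.4f\n", cond_prob('neg', 1, 0);    # 0.6000
printf "adjusted p(only=1|neg) = %.4f\n", cond_prob('neg', 1, 0.5);  # 0.5909
```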

Table 2: Probability Cell Values for "only"

      only = 1   only = 0
neg   6/10       4/10
pos   1/10       9/10

The term independence assumption made by Naive Bayes may in fact be too overreaching for some application tasks, and therefore not applicable (Chakrabarti, 2003; Larose, 2006; Witten, Frank, 2000). The Bayes Net classifier provides an alternative way to apply Bayesian analysis without making the independence assumption. Although Bayesian Networks have shown only mild improvements compared to other schemes classifying the Reuters [11] data set, Chakrabarti suspects that this learning scheme will show marked improvements when confronted with more complex data sets (Chakrabarti, 2003). In the context of the movie review classification task, the graph structure of the network consists of one node for the class variable C and a node for each of the k input variables, as shown in Figure 3. In this scenario, the predictor variables are parents of the class node C. The relationship between variables in the network is defined as follows (Larose, 2006):

$$p(X_1 = x_1, X_2 = x_2, \ldots, X_m = x_m) = \prod_{i=1}^{m} p(X_i = x_i \mid \text{parents}(X_i))$$

[11] Reuters is a widely used data set for text categorization:

[Figure 3: Bayesian network structure with class node C and predictor nodes X1 through Xk]

Like Naive Bayes, Bayes Net uses a simple estimation method and also adds 0.5 (by default) to probability cells to avoid the zero-based cell problem. Larose presents an in-depth example using Bayes Net that derives the probabilities for a small data set classification task (Larose, 2006).

4. WEKA Attribute Evaluation Methods

Witten and Frank (2000) point out that irrelevant attributes can degrade the performance of a learning scheme; however, by reducing the dimensionality of the input space, the performance of the learning algorithm can be improved. This thesis performs classification experiments by applying two different WEKA evaluation measures to individual attributes in the feature space. The set of movie review attributes is evaluated according to the two WEKA evaluators shown in Table 3. In particular, the relevance of each attribute is evaluated, with respect to the task of classification, before actual learning commences.

Table 3: WEKA Attribute Evaluators

InfoGainAttributeEval
SymmetricalUncertAttributeEval

The output produced by each attribute evaluator is combined with WEKA's Ranker method, where the attributes are then sorted (ranked) in descending order according to the evaluator's measurement criterion. Next, the measurement criterion for each of the two WEKA evaluation methods is described using a small sample data set.

The WEKA method InfoGainAttributeEval measures the information gain of each attribute $S$, with respect to the class $T$, using the formula

$$\text{Gain}(S) = H(T) - H_S(T)$$

The function $H(T)$ measures entropy using $H(T) = -\sum_j p_j \log_2(p_j)$, and $H_S(T) = \sum_{i=1}^{k} P_i H_S(T_i)$ measures the entropy at attribute $S$, which splits the data into partitions $T_i$ (Larose, 2005).

Using the sample ARFF data shown in Table 1, the InfoGainAttributeEval measurement is calculated for the first attribute in the file, "only." First, the entropy is calculated using class values $p_{\text{pos}} = 0.5$ and $p_{\text{neg}} = 0.5$:

$$H(T) = -\left(0.5 \log_2 0.5 + 0.5 \log_2 0.5\right) = 0.5 + 0.5 = 1.0$$

Next, the entropy for the "only" attribute is calculated:

$$P_1 = \frac{7}{20}, \qquad H_{\text{only}}(T_1) = -\left(\tfrac{6}{7}\log_2\tfrac{6}{7} + \tfrac{1}{7}\log_2\tfrac{1}{7}\right) = 0.5917$$

$$P_0 = \frac{13}{20}, \qquad H_{\text{only}}(T_0) = -\left(\tfrac{4}{13}\log_2\tfrac{4}{13} + \tfrac{9}{13}\log_2\tfrac{9}{13}\right) = 0.8905$$

$$H_{\text{only}}(T) = \tfrac{7}{20}(0.5917) + \tfrac{13}{20}(0.8905) = 0.2071 + 0.5788 = 0.7859$$

The information gain, according to the "only" attribute, is measured as $1.0 - 0.7859 = 0.2141$ bits. Using this measure, attributes resulting in the highest information gain are considered better. In fact, when using this measure the WEKA output shows the information gain for all six attributes, as shown in the next table. Note that "only" has the highest information gain (0.214).

Ranked attributes:
0.214  only
0.191  new
0.169  okay
0.169  political
0.169  quick
0.108  straightforward

Similarly, SymmetricalUncertAttributeEval measures the symmetrical uncertainty of each attribute $S$, with respect to the class $T$, using the function

$$\text{SymmU}(S) = \frac{2\left(H(T) - H_S(T)\right)}{H(T) + H(S)}$$

Using values derived in the previous example, the symmetric uncertainty measurement for the "only" attribute is calculated using the data from Table 1:

$$H(S) = -\left(\tfrac{7}{20}\log_2\tfrac{7}{20} + \tfrac{13}{20}\log_2\tfrac{13}{20}\right) = 0.9341$$

$$\text{SymmU}(S) = \frac{2\left(H(T) - H_{\text{only}}(T)\right)}{H(T) + H(\text{only})} = \frac{2(1.0 - 0.7859)}{1.0 + 0.9341} = \frac{0.4282}{1.9341} = 0.2214$$

Note that the information measure of the "only" attribute using SymmetricalUncertAttributeEval is 0.221. Again, the WEKA output shows that, according to this measure, "only" is the best attribute (0.221), as indicated in the next table:

Ranked attributes:
0.221  only
0.210  political
0.210  okay
0.210  quick
0.192  new
0.147  straightforward
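Both measures are straightforward to reproduce. The following Perl sketch recomputes the information gain and symmetric uncertainty of the "only" attribute from the Table 1 counts (7 reviews contain it: 6 neg, 1 pos; 13 do not: 4 neg, 9 pos). It is illustrative only, since WEKA performs these calculations internally.

```perl
use strict;
use warnings;

# Entropy of a class distribution given raw counts.
sub entropy {
    my @counts = @_;
    my $total  = 0;
    $total += $_ for @counts;
    my $h = 0;
    for my $c (@counts) {
        next unless $c;
        my $p = $c / $total;
        $h -= $p * log($p) / log(2);
    }
    return $h;
}

my $h_class = entropy(10, 10);                  # H(T)   = 1.0
my $h_split = (7 / 20)  * entropy(6, 1)         # H_S(T) = 0.7859
            + (13 / 20) * entropy(4, 9);
my $h_attr  = entropy(7, 13);                   # H(S)   = 0.9341

my $gain  = $h_class - $h_split;                # InfoGain = 0.214
my $symmu = 2 * $gain / ($h_class + $h_attr);   # SymmU    = 0.221

printf "InfoGain = %.3f, SymmU = %.3f\n", $gain, $symmu;
```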

III. RELATED WORK

This section surveys published research papers which focus on the same task of accurately classifying sentiment at the document level. In fact, each paper is concerned with the task of classifying the same movie review data set as this thesis.

Pang, Lee, and Vaithyanathan (2002) report that classifying documents according to sentiment is a more difficult problem than traditional topic-based [12] classification. For example, Mitchell describes a topic-based classification task which achieves 89% accuracy classifying twenty thousand news articles into one of twenty different categories (Mitchell, 1997). Pang et al. acknowledge that despite the use of several machine learning and feature extraction methods, their sentiment-based classification results are not comparable to topic-based accuracies reported elsewhere. They speculate that the reduced accuracy obtained for the sentiment-based problem using a simple bag-of-words classification approach occurs because reviews frequently contain a "thwarted expectations" narrative. This phrase refers to reviews containing contrasting sentiment, where the reviewers' expectations do not correspond with their actual experience. Similarly, Turney (2002) observes that classifying documents according to sentiment is likely impacted by positive reviews containing unpleasant text and bad reviews containing pleasant text. Finally, Mullen and Collier (2004) also recognize the challenge of classifying document-level sentiment from decontextualized snippets where, for example, negatively-toned reviews often contain positive phrases. Mullen and Collier also note that the use of sarcasm, contrast, and digression contained in reviews further impedes more accurate classification of sentiment.

[12] Yahoo! organizes documents into a tree-like hierarchy organized by topic.

Pang et al. (2002) use Naive Bayes, Maximum Entropy, and Support Vector Machine (SVM) learning schemes to classify a set of seven hundred positive and seven hundred negative

movie reviews (polarity dataset v1.0) [13] according to sentiment, using three-fold cross-validation. [14] They report that both Maximum Entropy and SVM classifiers are sometimes shown to outperform Naive Bayes in certain natural text classification tasks. In their experiments, they extract eight different feature sets from the reviews, which include unigrams, bigrams, unigrams + bigrams, and adjectives. They achieve their best results with unigrams (including presence and negation) input to an SVM classifier, where they reach 82.9% accuracy. In contrast, when using the most frequent 2,633 adjectives, they report 75.1% accuracy using SVM.

Pang and Lee (2004) extend their previous work by identifying only the subjective sentences found in the movie reviews, from which data is extracted and input to different learning schemes. First, they train a Naive Bayes subjectivity detector classification model against a collection of ten thousand sentences, which are derived from information sources independent of the movie reviews. This model achieves 92% accuracy correctly classifying individual sentences as being either subjective or objective. This subjectivity detector model is then applied to the movie review corpus to discard the objective sentences likely to contain misleading text, which might prove harmful to overall sentiment-based document classification accuracy. In contrast to their earlier experiments, this approach is applied to a larger data set (polarity dataset v2.0) [15] consisting of one thousand positive and one thousand negative movie reviews. A reduced movie review data set is generated where, on average, each review is reduced to about 60% of the full review according to word count, after discarding objective sentences.

[13] Polarity dataset v1.0, released July 2002.
[14] Three-fold cross-validation is used because of the long training times associated with Maximum Entropy.
[15] Polarity dataset v2.0, released June 2004.

Next, the reduced movie review data set is input to a document-level Naive Bayes classifier, where it achieves 86.4% accuracy; in contrast, a smaller accuracy rate of 82.8% results when the full movie review set is input to Naive Bayes. Pang et al. thus demonstrate that their subjectivity extraction method, when input to Naive Bayes, achieves about a 3.5% absolute improvement in classification accuracy.

Mullen and Collier (2004) extract features from a variety of different information sources, including movie reviews [13] and record reviews. These different features include unigrams, lemmas, [16] semantic orientation measures for phrases developed by Turney (2002), and semantic measures for adjectives based on Osgood's theory of semantic differentiation, proposed by Kamps and Marx (2002). They report movie review classification results for twelve experiments where each of the information sources is specified individually, or combined with one or more other sources, to derive vectors of real-valued input to an SVM classifier configured with a linear kernel. In addition, classification results are reported for two Hybrid SVM models, where SVM output from earlier experiments serves as input to a second downstream SVM model. They report a maximum classification accuracy of 86.0% using a Hybrid SVM model with ten-fold cross-validation, where input is derived from semantic orientation measures combined with lemmas. They state that their classification results using a Hybrid SVM are the best published results, to date, using the movie review data set.

Bai, Padman, and Airoldi (2004) propose a two-stage approach (Two-Stage Markov Blanket) for classifying movie review sentiment using a Bayesian algorithm. During the first stage, the conditional dependencies between the features are encoded into a Markov Blanket Directed Acyclic Graph (MB DAG) corresponding to the target class.

[16] In this case, lemmas are derived using the Conexor FDG parser.

Next, a meta-heuristic strategy (Tabu Search) is applied to the MB DAG to enable exploration of the solution space, which precisely adjusts the model while avoiding local minima. Learning dependent patterns between features enables the network to discard redundant predictors, resulting in an accurate classifier that uses a very small subset of available features. In their experiments, 7,717 features are extracted from the reviews, consisting of all unigrams occurring in eight or more documents. Their Two-Stage Markov Blanket identifies a set of 22 predictors and achieves 87.52% classification accuracy. In other words, their Two-Stage Markov Blanket discards 99.71% (7,695/7,717) of the features, while still attaining a high level of accuracy.

Sista and Srinivasan (2004) describe a movie review classification approach that requires identifying both positive and negative features contained in the movie reviews. [13] To do so, they exploit knowledge contained in the General Inquirer [17] (GI) lexicon and the WordNet (Miller, et al., 1993) database. First, they remove word sense disambiguation information from the GI lexicon to create a list of 8,640 words. Next, they design a set of rules to match annotation used by the GI lexicon, from which they select only the words determined as having either a positive or negative connotation. This step derives positive and negative lexicon tables containing 5,977 and 2,200 entries, respectively. Finally, the negative lexicon table is extended by adding similar entries found in WordNet according to the synonym relationship. After final derivation and cross-checking, their polarized lexicon contains a set of 5,977 positive and 3,700 negative words. The movie reviews are parsed to create feature vectors in ARFF format, where attribute entries correspond only to the words found in the polarized lexicon. Finally, the ARFF files are input to several WEKA classifiers, including Naive Bayes, Multinomial Naive Bayes, and SMO. [18]

[17] General Inquirer:
[18] According to the WEKA documentation, SMO implements Platt's sequential minimal optimization algorithm for training a SVM.

They report a best classification accuracy of 84.20% using the Multinomial Naive Bayes classifier, with the reviews split into separate training and test sets.

IV. EXPERIMENTS AND RESULTS

This section presents a set of four experiments. Each one describes the approach, configuration, and experimental results obtained when classifying the movie review corpus. In general, the experiments are ordered progressively to show incremental improvements in classification accuracy.

The first experiment compares classification results produced by Naive Bayes using selected and ranked input versus baseline (non-ranked) input. Results indicate that applying selection and ranking methods to the input (in this case, adjectives) has a beneficial effect on classification accuracy. In the second experiment, both adjectives and adverbs are extracted from the reviews, and WEKA's selection and ranking methods are then used to identify input to the Naive Bayes classifier. In this experiment, the classification accuracy is again shown to increase, compared to using baseline input, as a result of using the selected and ranked input corresponding to the two parts of speech. In the third experiment, selected and ranked adjectives and adverbs are once again input to a classifier; however, in this case a parameter of the Bayes Net classifier is fine-tuned to show an additional increase in classification accuracy. Finally, in the fourth experiment, both unigrams and bigrams are extracted from the reviews, selection and ranking methods are applied, and the result is classified by Bayes Net. This configuration produces the best classification results.

1. Naive Bayes: Classifying Ranked Adjectives (Unigrams)

This experiment produces classification results by extracting adjectives from the movie reviews and using them as input to the Naive Bayes classifier in WEKA.

In this case, perl-extract.pl is configured to extract all adjectives from the reviews identified by the part-of-speech Tagger, according to the definitions in Table 4.

Table 4: Adjective Tags Produced by Tagger

Tag   Definition               Example
JJ    Adjective                happy, bad
JJR   Adjective, comparative   happier, worse
JJS   Adjective, superlative   happiest, worst

Parsing the movie reviews generates a keywords file containing 7,781 adjectives. An exhaustive review of the keywords file to assess the accuracy of the words identified as adjectives by Tagger is not performed; however, 202 entries are subjectively deleted where the keyword is either not grammatically correct, contains a special character, or identifies an actor or director. For example, deleted words include "ex," "arnold," "pacino," "altman," and "d'angelo." Note that after editing, the final adjectives-only keywords file contains 7,579 entries.

When using ten-fold cross-validation, Naive Bayes achieves 80.20% accuracy when all 7,579 adjectives are specified as input to the classifier. This result is considered the baseline accuracy. As described in the WEKA Attribute Evaluation Methods section, two separate approaches are defined which evaluate and rank the adjectives in descending order of importance. In each case, according to the given method's evaluation criterion, the most important adjectives appear near the top of the list and the least important adjectives are found closer to the bottom. The script perl-classify.pl is configured to programmatically select increasingly larger subsets of the four thousand most important adjectives from the keywords list, in increments of two hundred.

For each subset of adjectives, an ARFF file is generated as input to WEKA, where the movie reviews are classified using Naive Bayes combined with ten-fold cross-validation. The Naive Bayes classification results obtained using the two attribute evaluation and ranking methods in WEKA are shown in Figure 4. This illustration plots Naive Bayes classification accuracy against the number of inputs, when selecting attribute subsets using the evaluators InfoGainAttributeEval and SymmetricalUncertAttributeEval.

[Figure 4: NaiveBayes Classification: Ranked Adjectives. Classification accuracy vs. number of inputs, for InfoGainAttrEval and SymmetricalUncertAttrEval]

When using 100 to 1,900 inputs, Naive Bayes achieves the best results using input ranked by SymmetricalUncertAttributeEval. It is shown to achieve greater than 87.00% accuracy over the range of 1,000 to 1,500 inputs, peaking at 87.30% accuracy using 1,300 inputs. Throughout the remaining input range, Naive Bayes classification accuracy improves slightly (+~0.15%) using input ranked by InfoGainAttributeEval. Results also indicate that classification accuracy appears to flatten out over the range of 1,900 to 4,000 inputs, independent of ranking method. In fact, similar test results (not shown in Figure 4) indicate that over the remaining range of inputs, Naive Bayes classification accuracy continues declining to a low of 80.20% accuracy. This equals the baseline accuracy.

It is now determined that the use of evaluation and ranking methods in WEKA leads to a 7.10% (87.30% versus 80.20%) absolute improvement in classification accuracy. When this result is compared to the baseline case, it is clear that using attribute evaluation and ranking methods has proven beneficial, as indicated by the classification results reported by Naive Bayes. In other words, the baseline classification error rate is 19.80% (100 - 80.20). The classification error rate obtained using evaluated and ranked adjectives is 12.70% (100 - 87.30), which equals a ~35.86% reduction in classification error rate:

$$\frac{19.80 - 12.70}{19.80} = \frac{7.10}{19.80} = 0.3586$$

In the next section, evaluation and ranking methods are applied to adjectives and adverbs, which have been extracted from the set of reviews.

2. Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams)

This experiment logically extends the methodology introduced in the previous section, where adjectives are extracted, evaluated, ranked, and then input to Naive Bayes to enhance classification accuracy. In this case, words identified as either adjectives or adverbs are simultaneously extracted from the reviews and then evaluated and ranked. In addition to the adjective tags (refer to Table 4), perl-extract.pl also extracts the adverbs from the review corpus according to the Tagger part-of-speech definitions shown in Table 5.

Table 5: Adverb Tags Produced by Tagger

Tag   Definition            Example
RB    Adverb                often, not, very, here
RBR   Adverb, comparative   faster
RBS   Adverb, superlative   fastest

The reviews are parsed and a total of 8,928 unigrams are extracted matching the adjective and adverb tags, as defined by Tagger (refer to Table 4 and Table 5 for the complete listing). This combined keyword list contains 7,579 adjectives [19] and 1,349 adverbs. By default, when all adjectives and adverbs are input to Naive Bayes, it produces an accuracy of 81.45% using ten-fold cross-validation. Therefore, the baseline accuracy, when using all input, is 81.45%.

The script perl-classify.pl selects subsets of the four thousand most important adjectives and adverbs from the keywords list, as measured by each of the two WEKA evaluation and ranking methods described in the WEKA Attribute Evaluation Methods section. Classification results using Naive Bayes, combined with ten-fold cross-validation, are shown in Figure 5.

[Figure 5: NaiveBayes Classification: Ranked Adjectives and Adverbs. Classification accuracy vs. number of inputs, for InfoGainAttrEval and SymmetricalUncertAttrEval]

Over the range of 1,100 to 4,000 inputs, the Naive Bayes classifier, using input selected by SymmetricalUncertAttributeEval, consistently achieves the highest classification accuracy. Specifically, Naive Bayes achieves a maximum accuracy of 89.10% using 1,500 evaluated and ranked inputs.

[19] The same 202 entries incorrectly identified as adjectives are subjectively deleted from the list.

Much like the first experiment, the plotted trend shown in Figure 5 suggests that beyond the level of about 1,900 inputs, classification accuracy begins to slowly degrade. These results show that the combination of both adjectives and adverbs, when evaluated and ranked as input to Naive Bayes, improves the absolute maximum classification accuracy by ~2% (89.10% versus 87.30%), as compared to using only selected and ranked adjectives. In other words, including adverbs as part of the feature space leads to an increase in accuracy. These promising results indicate that a 7.65% absolute increase (89.10% versus 81.45%) in accuracy is achieved by ranking the adjectives and adverbs, as compared to simply using the complete (baseline) set of inputs. Furthermore, in this case the baseline error rate is 18.55% (100 - 81.45) and the classification error rate obtained using selected and ranked adjectives and adverbs is 10.90% (100 - 89.10). This results in a 41.24% reduction in the classification error rate:

$$\frac{18.55 - 10.90}{18.55} = \frac{7.65}{18.55} = 0.4124$$

Again, the use of evaluation and ranking methods to choose features is shown to improve classification accuracy. Because the first two experiments both clearly achieve their highest classification accuracy using input selected according to SymmetricalUncertAttributeEval, further experiments are performed using only this evaluation method.

3. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams)

In this section, additional classification tests are performed using evaluated and ranked adjectives and adverbs as input to a classifier. This time, however, WEKA's Bayes Net classifier is specified instead of Naive Bayes. By default, Bayes Net sets the option SimpleEstimator -A 0.5 to initialize the cells in the probability tables to 0.5, which prevents zero-based probabilities from occurring.

The initial count for each value in the tables is specified using the -A option. The script perl-classify.pl is configured to test a smaller input range, 1,200 to 2,200, in increments of fifty, according to the SymmetricalUncertAttributeEval evaluation criterion. Bayes Net is specified and the -A option is tested using values of 0.5, 0.005, 0.007, and 0.009; however, results are only shown for values of 0.5 (the default) and 0.007.

Figure 6 shows the movie review classification results for Naive Bayes and Bayes Net, with -A set to both 0.5 and 0.007. The Naive Bayes results closely resemble those obtained in the Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams) section and are presented for comparative purposes. Specifically, Naive Bayes is shown to reach a maximum accuracy just over 89.00%.

[Figure 6: NaiveBayes vs. BayesNet Classification: Ranked Adjectives and Adverbs. Classification accuracy vs. number of inputs, for NaiveBayes, BayesNet (-A = 0.5), and BayesNet (-A = 0.007)]

Bayes Net, when using -A equal to 0.5, appears to slightly outperform Naive Bayes. Over the range of inputs shown in the illustration, Bayes Net is, on average, about 0.80% more accurate. Bayes Net, however, performs extremely well with -A set to 0.007, where it achieves 92.25% accuracy using 1,850 inputs. In fact, Figure 6 shows that it performs above 92.00% over the range of 1,650 to 1,850 inputs. In absolute terms, Bayes Net (-A = 0.007) classification accuracy outperforms Bayes Net (default options) by 2.4% (92.25% versus 89.85%) and Naive Bayes by 3.15% (92.25% versus 89.10%).
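For reference, a Bayes Net run with the tuned virtual count can be launched from Perl in the same style as the earlier sketch. The option string below reflects our reading of the WEKA command line (-E names the estimator, and options after -- are passed to it); the exact flag syntax is an assumption and may differ between WEKA versions.

```perl
# Launch Bayes Net over the 1,850-input ARFF file with the tuned estimator.
# The file name is illustrative; -t and -x are as in the earlier sketch.
my $cmd = 'java weka.classifiers.bayes.BayesNet -t subset_1850.arff -x 10 '
        . '-E weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.007';
print `$cmd`;
```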

Recall that the previous experiment, in section Naive Bayes: Classifying Ranked Adjectives and Adverbs (Unigrams), achieved 89.10% accuracy, which equals an error rate of 10.90% (100 - 89.10). Therefore, compared to those results, the use of Bayes Net in this experiment has led to a 28.90% reduction in classification error rate:

$$\frac{10.90 - 7.75}{10.90} = \frac{3.15}{10.90} = 0.2890$$

4. Bayes Net: Classifying Ranked Adjectives and Adverbs (Unigrams and Bigrams)

All three previous experiments extracted individual words (unigrams) from the reviews, corresponding to one or more defined parts of speech. For example, the third experiment extracted both adjectives and adverbs from the reviews to form a set of keywords. Now consider a bigram, which represents a two-word sequence where the word pair must contain an adjective. For example, the text snippet "not a good one" is identified by Tagger as:

not/RB a/DET good/JJ one/NN

In this case, in addition to extracting the individual adjective "good," perl-extract.pl also extracts the bigrams not_good and good_one and includes them in the set of keywords.

There are three specific rules used by perl-extract.pl to extract bigrams from the reviews; a sketch implementing them follows the rule descriptions below. Two bigram rules are applied to adjectives and operate against only the three leading contextual word positions. Rule 1 examines the three leading positions preceding an adjective and attempts to match negations, where words such as "not" or "wasn't" are present. Table 6 shows three examples where Rule 1 is satisfied, forming the bigrams not_good, isnt_enough, and wasnt_funny, respectively.

This rule is similar to the one used by Pang, Lee, and Vaithyanathan (2002), where they apply the tag NOT_ to every word following a negation word and leading up to a punctuation mark.

Table 6: Bigrams, Rule 1

Position 3   Position 2   Position 1   Adjective   Example
but          not          a            good        but not a good
there        is           n't          enough      there isn't enough
was          n't          very         funny       wasn't very funny

Next, Rule 2 is applied to an adjective only when Rule 1 does not match a negation to form a bigram. Again, the first three leading word positions are examined, moving from right to left preceding the adjective. Rule 2 attempts to match the first leading word that is not a preposition, conjunction, or punctuation mark to form a bigram. According to Rule 2, the example text shown in Table 7 leads to the formation of the bigram just_bizarre.

Table 7: Bigrams, Rule 2

Position 3   Position 2   Position 1   Adjective   Example
movie        was          just         bizarre     movie was just bizarre

Finally, a third rule attempts to match the single trailing word position, adjacent to each adjective. Rule 3 only forms bigrams when the trailing position is identified as a noun part of speech. Therefore, as the examples in Table 8 show, the snippet "good acting" forms the bigram good_acting; however, the text string "bad as" does not match Rule 3, and therefore no bigram is formed.

Table 8: Bigrams, Rule 3

Adjective   Position +1   Example
good        acting        good acting
bad         as            bad as
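The following Perl sketch implements the three rules as described above. It is illustrative rather than the thesis code: the negation word list, the skip-tag set, and the handling of multiple candidate matches are assumptions based on the rule descriptions.

```perl
use strict;
use warnings;

# Negation words and tags to skip (prepositions, conjunctions, punctuation);
# both lists are assumptions based on the rule descriptions.
my %negation = map { $_ => 1 } qw(not isnt wasnt doesnt didnt never no);
my %skip_tag = map { $_ => 1 } qw(IN CC PPC PP);

sub bigrams_for_adjective {
    my ($tokens, $i) = @_;   # $tokens: arrayref of [word, tag]; $i: adjective index
    my @bigrams;

    # Rule 1: look for a negation in the three positions before the adjective.
    my ($neg) = grep { $negation{ $tokens->[$_][0] } }
                grep { $_ >= 0 } ($i - 3 .. $i - 1);
    if (defined $neg) {
        push @bigrams, $tokens->[$neg][0] . '_' . $tokens->[$i][0];
    }
    else {
        # Rule 2: nearest leading word (right to left) that is not a
        # preposition, conjunction, or punctuation mark.
        for my $j (reverse grep { $_ >= 0 } ($i - 3 .. $i - 1)) {
            next if $skip_tag{ $tokens->[$j][1] };
            push @bigrams, $tokens->[$j][0] . '_' . $tokens->[$i][0];
            last;
        }
    }

    # Rule 3: the single trailing word, only when it is tagged as a noun.
    if ($i + 1 < @$tokens && $tokens->[$i + 1][1] =~ /^NN/) {
        push @bigrams, $tokens->[$i][0] . '_' . $tokens->[$i + 1][0];
    }
    return @bigrams;
}

# "movie was just bizarre" -> just_bizarre (Rule 2; no negation, no trailing noun)
my @tokens = ( ['movie','NN'], ['was','VBD'], ['just','RB'], ['bizarre','JJ'] );
print join(' ', bigrams_for_adjective(\@tokens, 3)), "\n";
```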

By definition, the three bigram rules indicate that each adjective will not necessarily form bigrams by combining with leading and/or trailing word tokens. That is, the rules do not require that each adjective form a bigram.

Now, all adjectives and adverbs (unigrams), and bigrams corresponding to Rules 1, 2, and 3, are extracted from the reviews, which results in a list containing 105,616 unique keywords. Next, a subset of keywords is chosen from the full keyword list, where each keyword is required to have a document frequency greater than or equal to four. That is, the keyword must be contained in at least four of the two thousand reviews. This arbitrary selection process reduces the full set of keywords to a manageable set of size 8,549. The script perl-classify.pl is configured to test the keywords in the range of 1,100 to 2,000, in increments of ten, according to the SymmetricalUncertAttributeEval evaluation criterion. The Bayes Net classifier is again specified, but this time using the -A option with values including 0.001 and 0.003. The highly accurate classification results are shown in Figure 7.

[Figure 7: BayesNet Classification: Ranked Adjectives and Adverbs (Unigrams and Bigrams). Classification accuracy vs. number of inputs]

Classification accuracy is shown to be greater than 95% over the range of 1,300 to 1,420 inputs, and achieves a maximum accuracy of 95.50% using 1,380 inputs. Compared to results


More information

WE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON T BELIEVE WHAT HAPPENED NEXT

WE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON T BELIEVE WHAT HAPPENED NEXT WE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON T BELIEVE WHAT HAPPENED NEXT PRACTICAL APPLICATIONS OF RANDOM SAMPLING IN ediscovery By Matthew Verga, J.D. INTRODUCTION Anyone who spends ample time working

More information

On-Line Data Analytics

On-Line Data Analytics International Journal of Computer Applications in Engineering Sciences [VOL I, ISSUE III, SEPTEMBER 2011] [ISSN: 2231-4946] On-Line Data Analytics Yugandhar Vemulapalli #, Devarapalli Raghu *, Raja Jacob

More information

Radius STEM Readiness TM

Radius STEM Readiness TM Curriculum Guide Radius STEM Readiness TM While today s teens are surrounded by technology, we face a stark and imminent shortage of graduates pursuing careers in Science, Technology, Engineering, and

More information

Maximizing Learning Through Course Alignment and Experience with Different Types of Knowledge

Maximizing Learning Through Course Alignment and Experience with Different Types of Knowledge Innov High Educ (2009) 34:93 103 DOI 10.1007/s10755-009-9095-2 Maximizing Learning Through Course Alignment and Experience with Different Types of Knowledge Phyllis Blumberg Published online: 3 February

More information

*Net Perceptions, Inc West 78th Street Suite 300 Minneapolis, MN

*Net Perceptions, Inc West 78th Street Suite 300 Minneapolis, MN From: AAAI Technical Report WS-98-08. Compilation copyright 1998, AAAI (www.aaai.org). All rights reserved. Recommender Systems: A GroupLens Perspective Joseph A. Konstan *t, John Riedl *t, AI Borchers,

More information

Target Language Preposition Selection an Experiment with Transformation-Based Learning and Aligned Bilingual Data

Target Language Preposition Selection an Experiment with Transformation-Based Learning and Aligned Bilingual Data Target Language Preposition Selection an Experiment with Transformation-Based Learning and Aligned Bilingual Data Ebba Gustavii Department of Linguistics and Philology, Uppsala University, Sweden ebbag@stp.ling.uu.se

More information

Intra-talker Variation: Audience Design Factors Affecting Lexical Selections

Intra-talker Variation: Audience Design Factors Affecting Lexical Selections Tyler Perrachione LING 451-0 Proseminar in Sound Structure Prof. A. Bradlow 17 March 2006 Intra-talker Variation: Audience Design Factors Affecting Lexical Selections Abstract Although the acoustic and

More information

Indian Institute of Technology, Kanpur

Indian Institute of Technology, Kanpur Indian Institute of Technology, Kanpur Course Project - CS671A POS Tagging of Code Mixed Text Ayushman Sisodiya (12188) {ayushmn@iitk.ac.in} Donthu Vamsi Krishna (15111016) {vamsi@iitk.ac.in} Sandeep Kumar

More information

Lecture 1: Basic Concepts of Machine Learning

Lecture 1: Basic Concepts of Machine Learning Lecture 1: Basic Concepts of Machine Learning Cognitive Systems - Machine Learning Ute Schmid (lecture) Johannes Rabold (practice) Based on slides prepared March 2005 by Maximilian Röglinger, updated 2010

More information

Extracting Verb Expressions Implying Negative Opinions

Extracting Verb Expressions Implying Negative Opinions Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence Extracting Verb Expressions Implying Negative Opinions Huayi Li, Arjun Mukherjee, Jianfeng Si, Bing Liu Department of Computer

More information

The taming of the data:

The taming of the data: The taming of the data: Using text mining in building a corpus for diachronic analysis Stefania Degaetano-Ortlieb, Hannah Kermes, Ashraf Khamis, Jörg Knappen, Noam Ordan and Elke Teich Background Big data

More information

Movie Review Mining and Summarization

Movie Review Mining and Summarization Movie Review Mining and Summarization Li Zhuang Microsoft Research Asia Department of Computer Science and Technology, Tsinghua University Beijing, P.R.China f-lzhuang@hotmail.com Feng Jing Microsoft Research

More information

Evidence for Reliability, Validity and Learning Effectiveness

Evidence for Reliability, Validity and Learning Effectiveness PEARSON EDUCATION Evidence for Reliability, Validity and Learning Effectiveness Introduction Pearson Knowledge Technologies has conducted a large number and wide variety of reliability and validity studies

More information

CS 446: Machine Learning

CS 446: Machine Learning CS 446: Machine Learning Introduction to LBJava: a Learning Based Programming Language Writing classifiers Christos Christodoulopoulos Parisa Kordjamshidi Motivation 2 Motivation You still have not learnt

More information

CLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH

CLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH ISSN: 0976-3104 Danti and Bhushan. ARTICLE OPEN ACCESS CLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH Ajit Danti 1 and SN Bharath Bhushan 2* 1 Department

More information

OCR for Arabic using SIFT Descriptors With Online Failure Prediction

OCR for Arabic using SIFT Descriptors With Online Failure Prediction OCR for Arabic using SIFT Descriptors With Online Failure Prediction Andrey Stolyarenko, Nachum Dershowitz The Blavatnik School of Computer Science Tel Aviv University Tel Aviv, Israel Email: stloyare@tau.ac.il,

More information

Calibration of Confidence Measures in Speech Recognition

Calibration of Confidence Measures in Speech Recognition Submitted to IEEE Trans on Audio, Speech, and Language, July 2010 1 Calibration of Confidence Measures in Speech Recognition Dong Yu, Senior Member, IEEE, Jinyu Li, Member, IEEE, Li Deng, Fellow, IEEE

More information

Objectives. Chapter 2: The Representation of Knowledge. Expert Systems: Principles and Programming, Fourth Edition

Objectives. Chapter 2: The Representation of Knowledge. Expert Systems: Principles and Programming, Fourth Edition Chapter 2: The Representation of Knowledge Expert Systems: Principles and Programming, Fourth Edition Objectives Introduce the study of logic Learn the difference between formal logic and informal logic

More information

The Internet as a Normative Corpus: Grammar Checking with a Search Engine

The Internet as a Normative Corpus: Grammar Checking with a Search Engine The Internet as a Normative Corpus: Grammar Checking with a Search Engine Jonas Sjöbergh KTH Nada SE-100 44 Stockholm, Sweden jsh@nada.kth.se Abstract In this paper some methods using the Internet as a

More information

THE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING

THE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING SISOM & ACOUSTICS 2015, Bucharest 21-22 May THE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING MarilenaăLAZ R 1, Diana MILITARU 2 1 Military Equipment and Technologies Research Agency, Bucharest,

More information

University of Groningen. Systemen, planning, netwerken Bosman, Aart

University of Groningen. Systemen, planning, netwerken Bosman, Aart University of Groningen Systemen, planning, netwerken Bosman, Aart IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document

More information

Enhancing Unlexicalized Parsing Performance using a Wide Coverage Lexicon, Fuzzy Tag-set Mapping, and EM-HMM-based Lexical Probabilities

Enhancing Unlexicalized Parsing Performance using a Wide Coverage Lexicon, Fuzzy Tag-set Mapping, and EM-HMM-based Lexical Probabilities Enhancing Unlexicalized Parsing Performance using a Wide Coverage Lexicon, Fuzzy Tag-set Mapping, and EM-HMM-based Lexical Probabilities Yoav Goldberg Reut Tsarfaty Meni Adler Michael Elhadad Ben Gurion

More information

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Notebook for PAN at CLEF 2013 Andrés Alfonso Caurcel Díaz 1 and José María Gómez Hidalgo 2 1 Universidad

More information

Machine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler

Machine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler Machine Learning and Data Mining Ensembles of Learners Prof. Alexander Ihler Ensemble methods Why learn one classifier when you can learn many? Ensemble: combine many predictors (Weighted) combina

More information

Beyond the Pipeline: Discrete Optimization in NLP

Beyond the Pipeline: Discrete Optimization in NLP Beyond the Pipeline: Discrete Optimization in NLP Tomasz Marciniak and Michael Strube EML Research ggmbh Schloss-Wolfsbrunnenweg 33 69118 Heidelberg, Germany http://www.eml-research.de/nlp Abstract We

More information

Emotions from text: machine learning for text-based emotion prediction

Emotions from text: machine learning for text-based emotion prediction Emotions from text: machine learning for text-based emotion prediction Cecilia Ovesdotter Alm Dept. of Linguistics UIUC Illinois, USA ebbaalm@uiuc.edu Dan Roth Dept. of Computer Science UIUC Illinois,

More information

Learning Methods in Multilingual Speech Recognition

Learning Methods in Multilingual Speech Recognition Learning Methods in Multilingual Speech Recognition Hui Lin Department of Electrical Engineering University of Washington Seattle, WA 98125 linhui@u.washington.edu Li Deng, Jasha Droppo, Dong Yu, and Alex

More information

Artificial Neural Networks written examination

Artificial Neural Networks written examination 1 (8) Institutionen för informationsteknologi Olle Gällmo Universitetsadjunkt Adress: Lägerhyddsvägen 2 Box 337 751 05 Uppsala Artificial Neural Networks written examination Monday, May 15, 2006 9 00-14

More information

11/29/2010. Statistical Parsing. Statistical Parsing. Simple PCFG for ATIS English. Syntactic Disambiguation

11/29/2010. Statistical Parsing. Statistical Parsing. Simple PCFG for ATIS English. Syntactic Disambiguation tatistical Parsing (Following slides are modified from Prof. Raymond Mooney s slides.) tatistical Parsing tatistical parsing uses a probabilistic model of syntax in order to assign probabilities to each

More information

Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2

Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2 Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2 Ted Pedersen Department of Computer Science University of Minnesota Duluth, MN, 55812 USA tpederse@d.umn.edu

More information

Heuristic Sample Selection to Minimize Reference Standard Training Set for a Part-Of-Speech Tagger

Heuristic Sample Selection to Minimize Reference Standard Training Set for a Part-Of-Speech Tagger Page 1 of 35 Heuristic Sample Selection to Minimize Reference Standard Training Set for a Part-Of-Speech Tagger Kaihong Liu, MD, MS, Wendy Chapman, PhD, Rebecca Hwa, PhD, and Rebecca S. Crowley, MD, MS

More information

Speech Emotion Recognition Using Support Vector Machine

Speech Emotion Recognition Using Support Vector Machine Speech Emotion Recognition Using Support Vector Machine Yixiong Pan, Peipei Shen and Liping Shen Department of Computer Technology Shanghai JiaoTong University, Shanghai, China panyixiong@sjtu.edu.cn,

More information

The Smart/Empire TIPSTER IR System

The Smart/Empire TIPSTER IR System The Smart/Empire TIPSTER IR System Chris Buckley, Janet Walz Sabir Research, Gaithersburg, MD chrisb,walz@sabir.com Claire Cardie, Scott Mardis, Mandar Mitra, David Pierce, Kiri Wagstaff Department of

More information

Software Maintenance

Software Maintenance 1 What is Software Maintenance? Software Maintenance is a very broad activity that includes error corrections, enhancements of capabilities, deletion of obsolete capabilities, and optimization. 2 Categories

More information

Australian Journal of Basic and Applied Sciences

Australian Journal of Basic and Applied Sciences AENSI Journals Australian Journal of Basic and Applied Sciences ISSN:1991-8178 Journal home page: www.ajbasweb.com Feature Selection Technique Using Principal Component Analysis For Improving Fuzzy C-Mean

More information

Proof Theory for Syntacticians

Proof Theory for Syntacticians Department of Linguistics Ohio State University Syntax 2 (Linguistics 602.02) January 5, 2012 Logics for Linguistics Many different kinds of logic are directly applicable to formalizing theories in syntax

More information

Memory-based grammatical error correction

Memory-based grammatical error correction Memory-based grammatical error correction Antal van den Bosch Peter Berck Radboud University Nijmegen Tilburg University P.O. Box 9103 P.O. Box 90153 NL-6500 HD Nijmegen, The Netherlands NL-5000 LE Tilburg,

More information

Learning Computational Grammars

Learning Computational Grammars Learning Computational Grammars John Nerbonne, Anja Belz, Nicola Cancedda, Hervé Déjean, James Hammerton, Rob Koeling, Stasinos Konstantopoulos, Miles Osborne, Franck Thollard and Erik Tjong Kim Sang Abstract

More information

Web as Corpus. Corpus Linguistics. Web as Corpus 1 / 1. Corpus Linguistics. Web as Corpus. web.pl 3 / 1. Sketch Engine. Corpus Linguistics

Web as Corpus. Corpus Linguistics. Web as Corpus 1 / 1. Corpus Linguistics. Web as Corpus. web.pl 3 / 1. Sketch Engine. Corpus Linguistics (L615) Markus Dickinson Department of Linguistics, Indiana University Spring 2013 The web provides new opportunities for gathering data Viable source of disposable corpora, built ad hoc for specific purposes

More information

Predicting Student Attrition in MOOCs using Sentiment Analysis and Neural Networks

Predicting Student Attrition in MOOCs using Sentiment Analysis and Neural Networks Predicting Student Attrition in MOOCs using Sentiment Analysis and Neural Networks Devendra Singh Chaplot, Eunhee Rhim, and Jihie Kim Samsung Electronics Co., Ltd. Seoul, South Korea {dev.chaplot,eunhee.rhim,jihie.kim}@samsung.com

More information

LQVSumm: A Corpus of Linguistic Quality Violations in Multi-Document Summarization

LQVSumm: A Corpus of Linguistic Quality Violations in Multi-Document Summarization LQVSumm: A Corpus of Linguistic Quality Violations in Multi-Document Summarization Annemarie Friedrich, Marina Valeeva and Alexis Palmer COMPUTATIONAL LINGUISTICS & PHONETICS SAARLAND UNIVERSITY, GERMANY

More information

Robust Sense-Based Sentiment Classification

Robust Sense-Based Sentiment Classification Robust Sense-Based Sentiment Classification Balamurali A R 1 Aditya Joshi 2 Pushpak Bhattacharyya 2 1 IITB-Monash Research Academy, IIT Bombay 2 Dept. of Computer Science and Engineering, IIT Bombay Mumbai,

More information

A Vector Space Approach for Aspect-Based Sentiment Analysis

A Vector Space Approach for Aspect-Based Sentiment Analysis A Vector Space Approach for Aspect-Based Sentiment Analysis by Abdulaziz Alghunaim B.S., Massachusetts Institute of Technology (2015) Submitted to the Department of Electrical Engineering and Computer

More information

Parsing of part-of-speech tagged Assamese Texts

Parsing of part-of-speech tagged Assamese Texts IJCSI International Journal of Computer Science Issues, Vol. 6, No. 1, 2009 ISSN (Online): 1694-0784 ISSN (Print): 1694-0814 28 Parsing of part-of-speech tagged Assamese Texts Mirzanur Rahman 1, Sufal

More information

Leveraging Sentiment to Compute Word Similarity

Leveraging Sentiment to Compute Word Similarity Leveraging Sentiment to Compute Word Similarity Balamurali A.R., Subhabrata Mukherjee, Akshat Malu and Pushpak Bhattacharyya Dept. of Computer Science and Engineering, IIT Bombay 6th International Global

More information

Module 12. Machine Learning. Version 2 CSE IIT, Kharagpur

Module 12. Machine Learning. Version 2 CSE IIT, Kharagpur Module 12 Machine Learning 12.1 Instructional Objective The students should understand the concept of learning systems Students should learn about different aspects of a learning system Students should

More information

CHAPTER 4: REIMBURSEMENT STRATEGIES 24

CHAPTER 4: REIMBURSEMENT STRATEGIES 24 CHAPTER 4: REIMBURSEMENT STRATEGIES 24 INTRODUCTION Once state level policymakers have decided to implement and pay for CSR, one issue they face is simply how to calculate the reimbursements to districts

More information

Transfer Learning Action Models by Measuring the Similarity of Different Domains

Transfer Learning Action Models by Measuring the Similarity of Different Domains Transfer Learning Action Models by Measuring the Similarity of Different Domains Hankui Zhuo 1, Qiang Yang 2, and Lei Li 1 1 Software Research Institute, Sun Yat-sen University, Guangzhou, China. zhuohank@gmail.com,lnslilei@mail.sysu.edu.cn

More information

Exploration. CS : Deep Reinforcement Learning Sergey Levine

Exploration. CS : Deep Reinforcement Learning Sergey Levine Exploration CS 294-112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?

More information

Machine Learning and Development Policy

Machine Learning and Development Policy Machine Learning and Development Policy Sendhil Mullainathan (joint papers with Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Ziad Obermeyer) Magic? Hard not to be wowed But what makes

More information

Truth Inference in Crowdsourcing: Is the Problem Solved?

Truth Inference in Crowdsourcing: Is the Problem Solved? Truth Inference in Crowdsourcing: Is the Problem Solved? Yudian Zheng, Guoliang Li #, Yuanbing Li #, Caihua Shan, Reynold Cheng # Department of Computer Science, Tsinghua University Department of Computer

More information

Online Updating of Word Representations for Part-of-Speech Tagging

Online Updating of Word Representations for Part-of-Speech Tagging Online Updating of Word Representations for Part-of-Speech Tagging Wenpeng Yin LMU Munich wenpeng@cis.lmu.de Tobias Schnabel Cornell University tbs49@cornell.edu Hinrich Schütze LMU Munich inquiries@cislmu.org

More information

Ensemble Technique Utilization for Indonesian Dependency Parser

Ensemble Technique Utilization for Indonesian Dependency Parser Ensemble Technique Utilization for Indonesian Dependency Parser Arief Rahman Institut Teknologi Bandung Indonesia 23516008@std.stei.itb.ac.id Ayu Purwarianti Institut Teknologi Bandung Indonesia ayu@stei.itb.ac.id

More information

Word Segmentation of Off-line Handwritten Documents

Word Segmentation of Off-line Handwritten Documents Word Segmentation of Off-line Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department

More information

Using Web Searches on Important Words to Create Background Sets for LSI Classification

Using Web Searches on Important Words to Create Background Sets for LSI Classification Using Web Searches on Important Words to Create Background Sets for LSI Classification Sarah Zelikovitz and Marina Kogan College of Staten Island of CUNY 2800 Victory Blvd Staten Island, NY 11314 Abstract

More information

OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS

OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS Václav Kocian, Eva Volná, Michal Janošek, Martin Kotyrba University of Ostrava Department of Informatics and Computers Dvořákova 7,

More information

The University of Amsterdam s Concept Detection System at ImageCLEF 2011

The University of Amsterdam s Concept Detection System at ImageCLEF 2011 The University of Amsterdam s Concept Detection System at ImageCLEF 2011 Koen E. A. van de Sande and Cees G. M. Snoek Intelligent Systems Lab Amsterdam, University of Amsterdam Software available from:

More information

GACE Computer Science Assessment Test at a Glance

GACE Computer Science Assessment Test at a Glance GACE Computer Science Assessment Test at a Glance Updated May 2017 See the GACE Computer Science Assessment Study Companion for practice questions and preparation resources. Assessment Name Computer Science

More information

Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation

Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Alistair Conkie AT&T abs - Research 180 Park Avenue, Florham Park,

More information

BANGLA TO ENGLISH TEXT CONVERSION USING OPENNLP TOOLS

BANGLA TO ENGLISH TEXT CONVERSION USING OPENNLP TOOLS Daffodil International University Institutional Repository DIU Journal of Science and Technology Volume 8, Issue 1, January 2013 2013-01 BANGLA TO ENGLISH TEXT CONVERSION USING OPENNLP TOOLS Uddin, Sk.

More information

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING Yong Sun, a * Colin Fidge b and Lin Ma a a CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queensland

More information

The Good Judgment Project: A large scale test of different methods of combining expert predictions

The Good Judgment Project: A large scale test of different methods of combining expert predictions The Good Judgment Project: A large scale test of different methods of combining expert predictions Lyle Ungar, Barb Mellors, Jon Baron, Phil Tetlock, Jaime Ramos, Sam Swift The University of Pennsylvania

More information

Axiom 2013 Team Description Paper

Axiom 2013 Team Description Paper Axiom 2013 Team Description Paper Mohammad Ghazanfari, S Omid Shirkhorshidi, Farbod Samsamipour, Hossein Rahmatizadeh Zagheli, Mohammad Mahdavi, Payam Mohajeri, S Abbas Alamolhoda Robotics Scientific Association

More information

Chapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA. 1. Introduction. Alta de Waal, Jacobus Venter and Etienne Barnard

Chapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA. 1. Introduction. Alta de Waal, Jacobus Venter and Etienne Barnard Chapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA Alta de Waal, Jacobus Venter and Etienne Barnard Abstract Most actionable evidence is identified during the analysis phase of digital forensic investigations.

More information

Multi-Lingual Text Leveling

Multi-Lingual Text Leveling Multi-Lingual Text Leveling Salim Roukos, Jerome Quin, and Todd Ward IBM T. J. Watson Research Center, Yorktown Heights, NY 10598 {roukos,jlquinn,tward}@us.ibm.com Abstract. Determining the language proficiency

More information

The Strong Minimalist Thesis and Bounded Optimality

The Strong Minimalist Thesis and Bounded Optimality The Strong Minimalist Thesis and Bounded Optimality DRAFT-IN-PROGRESS; SEND COMMENTS TO RICKL@UMICH.EDU Richard L. Lewis Department of Psychology University of Michigan 27 March 2010 1 Purpose of this

More information

Disambiguation of Thai Personal Name from Online News Articles

Disambiguation of Thai Personal Name from Online News Articles Disambiguation of Thai Personal Name from Online News Articles Phaisarn Sutheebanjard Graduate School of Information Technology Siam University Bangkok, Thailand mr.phaisarn@gmail.com Abstract Since online

More information

Chapter 2 Rule Learning in a Nutshell

Chapter 2 Rule Learning in a Nutshell Chapter 2 Rule Learning in a Nutshell This chapter gives a brief overview of inductive rule learning and may therefore serve as a guide through the rest of the book. Later chapters will expand upon the

More information