2014 International Conference on Future Internet of Things and Cloud

Cross-lingual Short-Text Document Classification for Facebook Comments

Mosab Faqeeh, Nawaf Abdulla, Mahmoud Al-Ayyoub, Yaser Jararweh and Muhannad Quwaider
Jordan University of Science and Technology, Irbid, Jordan
Emails: {mos3b.faqeeh, nawaf.iqbal, malayyoub, yaser.amd, quwaider}@gmail.com

Abstract: Document Classification (DC) is one of the fundamental problems in text mining. Plenty of works exist on DC with interesting approaches and excellent results; however, most of them focus on long-text documents written in a single language, with English being the most studied language. This work is concerned with the natural step beyond such works, which is cross-lingual DC for short-text documents. Specifically, we consider two languages, Arabic and English, and compare the performance of some of the most popular document classifiers on two datasets of short Facebook comments. Apart from limited attempts, the addressed problem has not been studied well enough. The results are encouraging and new insights are obtained.

Index Terms: document classification; cross-lingual text analysis; social network comments; support vector machine; decision tree; naive Bayes; k-nearest neighbor

I. INTRODUCTION

The Document Classification (DC) problem is concerned with automatically placing text documents in categories/classes based on their contents. It is one of the fundamental problems in many fields such as text mining, machine learning, natural language processing and information retrieval, with a vast range of applications such as spam filtering [1], authorship authentication [2], [3], gender identification [4], dialect identification [5], [6], native language identification [7] and sentiment analysis [8], [9], [10].

The DC problem gained more importance due to the explosion in the size of text data available on the Web over the past two decades. Not only has this expansion forced people to consider scalability issues (giving rise to important fields such as Big Data), it has also produced special challenges for the DC problem. One such challenge is the short length of the text documents to be classified. Short-text documents (excerpts) exist all over the Web, from website summaries to image captions. Most importantly, the comments written on social networks and mobile applications have a rather short length. The prevalence of short-text documents and the additional challenges they pose call for special attention to the problem of Short-text Document Classification (SDC) [11].

In general DC, we are given a large-enough dataset of manually labeled training documents, and the objective is to build a classification model based on this dataset capable of accurately predicting the class of an unlabeled document. While this description applies to all supervised learning problems, DC has some special characteristics requiring special attention. For example, DC uses word occurrences in documents to build a feature vector for each document in what is known as the bag-of-words (BOW) approach [12]. Such an approach results in a dataset of sparse vectors with high dimensionality due to the high number of features/attributes (i.e., the different words in a natural language). Not all classifiers can handle such high-dimensional datasets. However, there are several exceptions, such as Naive Bayes (NB), Support Vector Machines (SVM) and Decision Trees (DT), which perform very well for the DC problem [1], even with the Arabic language [13].
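To make the BOW representation concrete, the following minimal sketch builds sparse word-count vectors for a handful of toy comments. It is purely illustrative: scikit-learn's CountVectorizer is an assumption made for the example and is not a tool used in this work.

```python
# Minimal bag-of-words sketch (illustrative only; scikit-learn is an
# assumption here, not part of the authors' setup).
from sklearn.feature_extraction.text import CountVectorizer

comments = [
    "the weather is very hot today",
    "this restaurant serves great food",
    "cold and rainy weather all week",
]

vectorizer = CountVectorizer()             # one feature per distinct word
X = vectorizer.fit_transform(comments)     # sparse document-term matrix

print(vectorizer.get_feature_names_out())  # the vocabulary (features)
print(X.toarray())                         # word counts per document
```

Even this tiny corpus yields a vocabulary larger than any single comment, which is exactly the sparsity and dimensionality issue discussed above.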
It is worth mentioning here that the short length of the documents can cause serious degradation in the performance of classifiers due to the small number of words in each short-text document and their low occurrence rate across documents, leading to a small number of common words [11]. Hence, SDC is a problem worth studying on its own.

The two languages under consideration are Arabic and English. Most existing works on NLP in general consider English text, from text processing tools to optimized classifiers. Arabic, on the other hand, is largely understudied despite being one of the six official languages of the UN and the native language of 420 million people living in the Arab world, which spans regions of the Middle East and North Africa (MENA) in addition to parts of East Africa (Horn of Africa) [14]. Moreover, the amount of Arabic content on the Web and the number of Arabic-speaking users are growing rapidly [14]. Finally, Arabic is a morphologically rich language with many challenging aspects. The importance of the Arabic language and the interesting challenges associated with studying it make it one of the most interesting languages to study.

The rest of this paper is organized as follows. In Section II, we present an up-to-date coverage of the addressed problem while, in Section III, we present our research methodology and experimental results. Finally, we conclude the paper and discuss future work guidelines in Section IV.

II. LITERATURE REVIEW

Since DC is a fundamental problem on which many other problems in fields such as machine learning, data mining, information retrieval and natural language processing are formulated, there are numerous papers addressing different aspects of it. Since the English language is the most studied language in the literature, making it the focal point of the

textbooks on NLP and DC, we will simply refer the interested readers to surveys with thorough coverage of DC for the English language such as [15], [12], [1], [16], [17]. Below, we mention some of the interesting works on Arabic DC before discussing the most relevant works in terms of SDC.

Since the number and quality of features used to express documents have a direct effect on categorization algorithms, the following discusses the main ideas and techniques of feature reduction and selection and their impact on DC. Duwairi et al. [18] compared three reduction techniques (stemming, light stemming, and word clustering). KNN was selected for training and testing, and the results showed that light stemming yielded the highest accuracy and the lowest model construction time. Another study [19] compared 17 Feature Subset Selection (FSS) metrics, examining their effect in terms of precision, recall, F1-measure, and model building time. The results in general revealed that the Odds Ratio (OR) worked better than the others.

Some studies focused on other techniques, such as N-grams and different distance measures, and demonstrated their effect on Arabic DC. For instance, [20] used a statistical method called Maximum Entropy (ME) for the classification of Arabic news articles. The author showed that the Dice measure using N-grams outperforms the Manhattan distance. A similar classifier was used in [21], but different selection and reduction techniques were applied; the author used normalization, stemming and stop-word removal to increase the ultimate accuracy.

Many studies focused on the exploited classifiers, and the set of classifiers used keeps growing alongside pre-processing tasks such as stemming, weighting techniques, N-grams, and so forth. In [22], the authors classified a dataset collected from different Arabic web sites utilizing the Naive Bayes (NB) algorithm and achieved 69% accuracy. In [23], the author used three classification algorithms, K-Nearest Neighbor (KNN), SVM and NB, for classifying each document into one of nine classes (Computer, Economics, Education, Engineering, Law, Medicine, Politics, Religion and Sports). In [24], only the NB and KNN classifiers were used to classify Arabic text collected from online Arabic newspapers such as Aljazeera, Al-Ahram, Al-Dostor, etc. The work in [25] proposed a hybrid algorithm for Arabic stemming and compared it to other proposed stemmers, for example the Khoja stemmer; the outcomes were promising and acceptable. Moreover, [26] studied four different term-frequency weighting schemes: raw Term Frequency (TF), which improves recall; Inverse Document Frequency (IDF), which improves precision; Term Frequency-Inverse Document Frequency (TF-IDF), which improves both recall and precision; and Weighted Inverse Document Frequency (WIDF). The experiments showed that, in general, NB was the best, followed by KNN and Rocchio.
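For concreteness, the weighting schemes just mentioned can be written down in a few lines. The snippet below is a rough, hypothetical illustration of raw TF, IDF and their product TF-IDF on a toy corpus; it is not code from any of the cited studies, and the WIDF variant is omitted.

```python
# Rough illustration of the TF, IDF and TF-IDF weighting schemes discussed
# above (hypothetical example; not code from the cited studies).
import math
from collections import Counter

docs = [
    ["weather", "hot", "weather"],
    ["food", "tasty", "food", "food"],
    ["weather", "cold"],
]

N = len(docs)
# Document frequency: in how many documents does each term appear?
df = Counter(term for doc in docs for term in set(doc))

def tf_idf(doc):
    tf = Counter(doc)                                    # raw term frequency
    return {t: tf[t] * math.log(N / df[t]) for t in tf}  # tf * idf

for doc in docs:
    print(tf_idf(doc))
```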
Researchers have not stopped investigating the impact of several factors on the classification accuracy. These factors include corpora size and diversity, feature selection and reduction, targeted language, classification approach, etc. Hence, studies like [27], [28], [29], [30], [31], [32] are comprehensive references for that matter. Most of them illustrate the importance of choosing certain criteria during the training and testing phases, and some of them suggested new ideas and vectors which handle the classification process quite differently. For instance, [33] proposed a distance-based classifier in which each category is represented by a vector containing the words of the training documents after applying pre-processing and stemming operations. Finally, for comprehensive comparative studies of the different tools for Arabic text preprocessing, attribute selection and reduction, and classification, the interested reader is referred to the works of Said et al. [34], Saad [32], and Khorsheed and Al-Thubaity [13].

The SDC problem has gained a lot of interest due to the widespread use of short-text documents (mainly within Web services and mobile applications) and the special challenges they pose [11]. The main approach for SDC, which is followed here, is the BOW approach. One advantage of the BOW approach is its language independence, which means that no special language-specific tools and lexicons are required to apply it. According to [11], [35], previous works on SDC are limited by their dependence on either language-specific syntactic and semantic features or meta-data injected into each document from external sources such as search engines, WordNet, Wikipedia, etc. This makes them suitable for a very small number of languages, such as English and Chinese, since the required tools to calculate these features are often unavailable in other languages. This makes BOW the approach of choice in this work, since we are interested in studying the differences in SDC between Arabic and English and we do not want our results to be biased by the effectiveness and quality of the available language-specific tools and lexicons, especially considering the huge gap between the English NLP field, with its diverse resources and rich studies, and the Arabic one, which is still at its early stages of development.

Finally, the idea of conducting cross-lingual studies in NLP, where the performance of an NLP procedure is evaluated across different natural languages, has been considered before in the literature, such as in [11], where the authors considered the SDC problem for English and Korean. Examples in which both Arabic and English are considered span many subjects, including authorship authentication [36], [37], [38] and subjectivity and sentiment analysis [39], [40], [8], [41], [42], [43].

III. METHODOLOGY

This section presents the methodology followed in this work, starting with the manual collection, filtering and annotation of the datasets, followed by text processing.

A. Dataset

The objective of this work is to study the problem of cross-lingual DC on short texts typically found on social networks.

Facebook posts are perfect for the purposes of this work due to their abundance and short length. Moreover, Facebook is one of the most popular social networks at this time, with a diverse user base. This means that collecting a large-enough dataset for each language on various topics is feasible. To ensure consistency, we collected 1,000 comments for each topic of each language. In this pilot study, we limit ourselves to two languages (Arabic and English) and two soft or light topics (weather and food). Future extensions of this work will include more languages and more topics.

The Facebook posts are collected using a specialized crawler equipped with basic processing and filtering capabilities. The collected posts are then manually filtered to remove irrelevant, repeated or very short (less than 3 words in length) posts. The documents of the resulting datasets are indeed much shorter than typical DC datasets. Table I shows some length statistics about the collected datasets. The table shows that, on average, each Arabic post contains 39.1 characters or 8.7 words spread across 2.4 lines. An English post, on the other hand, is longer with 62.7 characters or 14.1 words spread across 3 lines. With such short-text posts, it is expected that typical DC classifiers would fail due to the intuitive assumption that such short posts would not contain enough unique keywords to distinguish between classes. Surprisingly, the considered classifiers performed with an unexpectedly high accuracy level, as shown in Subsection III-B.

Finally, the table shows a tendency to use relatively short words (about 4.4 characters/word for both languages). While such a number is expected for the Arabic language, as shown by previous works like [44], it is a little surprising for the English language, especially since it is usually assumed that English words are noticeably longer (in terms of number of characters) than Arabic ones [37]. The main justification lies within the pattern in which social network posts are written. Specifically, since such posts are often written hastily using a smart phone or a tablet PC, it is natural for them to be short and filled with abbreviations and spelling mistakes. This further complicates the job of the document classifiers.

Basic text pre-processing steps are applied to the collected posts to remove unwanted characters and stop words. Stemming is then used to ensure that the dimensionality of the problem remains within reasonable bounds. The stemmers used are the Arabic light stemmer and the Porter stemmer, both of which are widely used in the literature.
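As a purely hypothetical illustration of the filtering and pre-processing just described, the sketch below drops posts shorter than three words, removes stop words and applies stemming. NLTK is assumed only for the example; the English side uses its Porter stemmer, and for the Arabic dataset an Arabic light stemmer would take its place.

```python
# Hedged sketch of the pre-processing pipeline (NLTK is an assumption for
# illustration; the paper does not specify an implementation).
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(posts, min_words=3):
    cleaned = []
    for post in posts:
        tokens = post.lower().split()
        if len(tokens) < min_words:          # discard very short posts
            continue
        tokens = [stemmer.stem(t) for t in tokens if t not in stop_words]
        cleaned.append(" ".join(tokens))
    return cleaned

print(preprocess(["so hot", "the weather is really freezing outside today"]))
```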
B. Experiments and Results

This section presents the details of the experiments conducted on the collected datasets and analyzes the results, providing some insights into the addressed problem of cross-lingual short-text document classification. As mentioned in the previous subsection, we have two datasets written in two languages. The two datasets are consistent in the sense that they cover the same topics and are of the same size. We intend to investigate the differences in performance between the following four document classifiers: Support Vector Machines (SVM), Naive Bayes (NB), K-Nearest Neighbor (KNN) and Decision Trees (DT). With the exception of the KNN classifier, these classifiers are known to perform very well for the DC problem [1], and we aim to investigate whether this remains true for the SDC problem. The reason for adding the KNN classifier is its high sensitivity to the sparsity and high dimensionality of the DC problem, in what is known as the curse of dimensionality; this issue is further amplified in SDC due to the increased sparsity of the dataset.

Several libraries exist in the literature with tested and verified implementations of the classifiers under consideration, such as WEKA (http://www.cs.waikato.ac.nz/ml/weka/), RapidMiner (http://rapidminer.com/), Bow (http://www.cs.cmu.edu/~mccallum/bow/), MALLET (http://mallet.cs.umass.edu/), KNIME (http://www.knime.org/) and LingPipe (http://alias-i.com/lingpipe/). The experiments of this work are conducted with the WEKA tool due to the added functionality it offers for text processing and analysis. The classifiers under consideration are executed on both datasets using the holdout testing method, in which two thirds of the dataset is used for training and the remaining third is reserved for testing.

To measure the performance of the classifiers under consideration, four widely used accuracy measures are exploited: precision (p), recall (r), F-measure (F) and accuracy (a). To explain these measures, it is generally assumed for a binary classification problem such as ours that there is a positive class and a negative one. Precision is the ratio of the true positives to the total number of positives predicted by the classifier; the higher the precision, the more accurate the prediction of the positive class. Recall, on the other hand, divides the true positives by the total number of actual positives in that class; a high recall means that a high number of comments from a class are assigned to their correct class. The F-measure is the harmonic mean of p and r. Finally, the accuracy is the ratio of the correctly classified documents regardless of their class. The following are the formulas for these measures [45]:

p = TP / (TP + FP)
r = TP / (TP + FN)
F = 2pr / (p + r)
a = (TP + TN) / (TP + FP + TN + FN)

where TP, FP, TN and FN are the numbers of true positives, false positives, true negatives and false negatives, respectively. True positives and true negatives are the correctly classified comments, whereas false positives are the comments that are incorrectly classified as positive and false negatives are the comments that are incorrectly classified as negative.
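The evaluation protocol just described can be summarized in code. The sketch below is a hypothetical stand-in (the experiments themselves were run in WEKA): it performs the two-thirds/one-third holdout split, trains the four classifiers, and reports p, r, F and a. scikit-learn is assumed only for illustration, and the toy texts and labels are invented.

```python
# Hypothetical re-creation of the evaluation protocol (the paper used WEKA;
# scikit-learn stands in here purely for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Toy stand-in for the two-class (weather vs. food) comment datasets.
texts = ["hot sunny weather today", "tasty grilled food",
         "cold rainy weather", "delicious food place"] * 50
labels = ["weather", "food", "weather", "food"] * 50

X = CountVectorizer().fit_transform(texts)   # bag-of-words features
X_tr, X_te, y_tr, y_te = train_test_split(   # holdout: 2/3 train, 1/3 test
    X, labels, test_size=1/3, random_state=0)

classifiers = {
    "SVM": LinearSVC(),
    "NB": MultinomialNB(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "p=%.2f" % precision_score(y_te, y_pred, pos_label="weather"),
          "r=%.2f" % recall_score(y_te, y_pred, pos_label="weather"),
          "F=%.2f" % f1_score(y_te, y_pred, pos_label="weather"),
          "a=%.2f" % accuracy_score(y_te, y_pred))
```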

TABLE I
STRUCTURE OF THE COLLECTED REVIEWS.

                               Arabic    Arabic   Arabic    English   English   English
                               Weather   Food     Dataset   Weather   Food      Dataset
Number of lines                2.6       2.2      2.4       3.3       2.7       3
Average number of characters   43.8      34.3     39.1      67.8      57.7      62.7
Average number of words/post   10        7.5      8.7       15        13.15     14.1

Fig. 1. The accuracy measures of the four classifiers under consideration on the two datasets: (a) Precision, (b) Recall, (c) F-Measure, (d) Accuracy.

Figure 1(d) shows the accuracy of the four classifiers under consideration on the Arabic as well as the English datasets. As the figure shows, the employed classifiers achieve higher accuracies on the Arabic dataset than on the English one, despite the consistency in terms of the covered topics and the fact that the English comments are slightly longer. This observation can be attributed to an inherent characteristic of the language (e.g., the Arabic keywords of both domains may possess more discriminative power than their English counterparts) or to the more positive effect stemming has on Arabic due to its morphological nature. The figure shows that using Arabic light stemming improves the accuracy of all classifiers by 4% to 9%, while using the Snowball English stemmer has a lower impact, with increases between 1% and 3%. The gap, as can be seen from the figure, between the classifiers' accuracies on the Arabic dataset and the English dataset ranges from 5% to 11%. Similar trends can be observed from Figures 1(a), 1(b) and 1(c) for the precision, recall and F-measure values. From these figures, it can be observed that while each classifier obtains higher precision than recall, the gap (between the Arabic dataset and the English dataset) in recall is higher than that in precision. Finally, these figures confirm the superiority of the SVM classifier in SDC and the inferiority of the KNN and DT classifiers.

IV. CONCLUSIONS AND FUTURE WORK

This work focused on Short-text Document Classification (SDC) in a cross-lingual setting. Specifically, we considered two languages, Arabic and English, and compared the performance of some of the most popular document classifiers on two datasets of short Facebook comments. The results showed interesting trends and observations. For future work, we plan to involve more classifiers, expand the dataset's classes and size, and explore other languages besides Arabic and English. Furthermore, embracing other social networks like Twitter could be the subject of our future goals.

REFERENCES

[1] C. C. Aggarwal and C. Zhai, "A survey of text classification algorithms," in Mining Text Data. Springer, 2012, pp. 163-222.
[2] P. Juola, "Authorship attribution," Foundations and Trends in Information Retrieval, vol. 1, no. 3, pp. 233-334, 2006.
[3] E. Stamatatos, "A survey of modern authorship attribution methods," Journal of the American Society for Information Science and Technology, vol. 60, no. 3, pp. 538-556, 2009.
[4] N. Cheng, R. Chandramouli, and K. Subbalakshmi, "Author gender identification from text," Digital Investigation, vol. 8, no. 1, pp. 78-88, 2011.
[5] O. F. Zaidan and C. Callison-Burch, "The Arabic online commentary dataset: an annotated dataset of informal Arabic with high dialectal content," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2. Association for Computational Linguistics, 2011, pp. 37-41.
[6] O. F. Zaidan and C. Callison-Burch, "Arabic dialect identification," Computational Linguistics, vol. 40, no. 1, pp. 171-202, 2013.
[7] J. Tetreault, J. Burstein, and C. Leacock, Eds., Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Atlanta, Georgia: Association for Computational Linguistics, June 2013. [Online]. Available: http://www.aclweb.org/anthology/w13-17
[8] A. Abbasi, H. Chen, and A. Salem, "Sentiment analysis in multiple languages: Feature selection for opinion classification in web forums," ACM Transactions on Information Systems (TOIS), vol. 26, no. 3, p. 12, 2008.
[9] N. Abdulla, N. Mahyoub, M. Shehab, and M. Al-Ayyoub, "Arabic sentiment analysis: Corpus-based and lexicon-based," in Proceedings of the IEEE Conference on Applied Electrical Engineering and Computing Technologies (AEECT), 2013.
[10] M. N. Al-Kabi, N. A. Abdulla, and M. Al-Ayyoub, "An analytical study of Arabic sentiments: Maktoob case study," in The 8th International Conference for Internet Technology and Secured Transactions (ICITST). IEEE, 2013, pp. 89-94.
[11] K. Kim, B.-S. Chung, Y. Choi, S. Lee, J.-Y. Jung, and J. Park, "Language independent semantic kernels for short-text classification," Expert Systems with Applications, vol. 41, no. 2, pp. 735-743, 2014.
[12] T. Joachims, Learning to Classify Text Using Support Vector Machines: Methods, Theory and Algorithms. Kluwer Academic Publishers, 2002.
[13] M. S. Khorsheed and A. O. Al-Thubaity, "Comparative evaluation of text classification techniques using a large diverse Arabic dataset," Language Resources and Evaluation, vol. 47, no. 2, pp. 513-538, 2013.
[14] N. A. Abdulla, M. Al-Ayyoub, and M. N. Al-Kabi, "An extended analytical study of Arabic sentiments," International Journal of Big Data Intelligence (IJBDI), to appear.
[15] F. Sebastiani, "Machine learning in automated text categorization," ACM Computing Surveys (CSUR), vol. 34, no. 1, pp. 1-47, 2002.
[16] V. Korde and C. N. Mahender, "Text classification and classifiers: A survey," International Journal of Artificial Intelligence & Applications, vol. 3, no. 2, 2012.
[17] K. Aas and L. Eikvil, "Text categorisation: A survey," Raport NR, vol. 941, 1999.
[18] R. Duwairi, M. N. Al-Refai, and N. Khasawneh, "Feature reduction techniques for Arabic text categorization," Journal of the American Society for Information Science and Technology, vol. 60, no. 11, pp. 2347-2352, 2009.
[19] A. Mesleh, "Feature sub-set selection metrics for Arabic text classification," Pattern Recognition Letters, vol. 32, no. 14, pp. 1922-1929, 2011.
[20] L. Khreisat, "Arabic text classification using N-gram frequency statistics: a comparative study," in Conference on Data Mining (DMIN'06), 2006, p. 79.
[21] A. El-Halees, "Arabic text classification using maximum entropy," The Islamic University Journal (Series of Natural Studies and Engineering), vol. 15, pp. 157-167, 2007.
[22] M. El Kourdi, A. Bensaid, and T.-e. Rachidi, "Automatic Arabic document categorization based on the naïve Bayes algorithm," in Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages. Association for Computational Linguistics, 2004, pp. 51-58.
[23] A. M. Mesleh, "Chi square feature extraction based SVMs Arabic language text categorization system," Journal of Computer Science, vol. 3, no. 6, p. 430, 2007.
[24] W. Hadi, F. Thabtah, S. ALHawari, and J. Ababneh, "Naive Bayesian and k-nearest neighbour to categorize Arabic text data," in European Simulation and Modeling Conference, 2008, pp. 196-200.
[25] M. Hadni, A. Lachkar, and S. A. Ouatik, "A new and efficient stemming technique for Arabic text categorization," in Multimedia Computing and Systems (ICMCS), 2012 International Conference on. IEEE, 2012, pp. 791-796.
[26] G. Kanaan, R. Al-Shalabi, S. Ghwanmeh, and H. Al-Ma'adeed, "A comparison of text-classification techniques applied to Arabic text," Journal of the American Society for Information Science and Technology, vol. 60, no. 9, pp. 1836-1844, 2009.
[27] S. Alsaleem, "Automated Arabic text categorization using SVM and NB," Int. Arab J. e-Technol., vol. 2, no. 2, pp. 124-128, 2011.
[28] F. Harrag, E. El-Qawasmeh, and P. Pichappan, "Improving Arabic text categorization using decision trees," in Networked Digital Technologies, 2009. NDT'09. First International Conference on. IEEE, 2009, pp. 110-115.
[29] F. Harrag, E. El-Qawasmah, and A. M. S. Al-Salman, "Stemming as a feature reduction technique for Arabic text categorization," in Programming and Systems (ISPS), 2011 10th International Symposium on. IEEE, 2011, pp. 128-133.
[30] M. F. Umer and M. Khiyal, "Classification of textual documents using learning vector quantization," Information Technology Journal, vol. 6, no. 1, 2007.
[31] I. Hmeidi, B. Hawashin, and E. El-Qawasmeh, "Performance of KNN and SVM classifiers on full word Arabic articles," Advanced Engineering Informatics, vol. 22, no. 1, pp. 106-111, 2008.
[32] M. K. Saad, "The impact of text preprocessing and term weighting on Arabic text classification," Master's thesis, Computer Engineering, The Islamic University-Gaza, 2010.
[33] R. M. Duwairi, "Machine learning for Arabic text categorization," Journal of the American Society for Information Science and Technology, vol. 57, no. 8, pp. 1005-1010, 2006.
[34] D. Said, N. M. Wanas, N. M. Darwish, and N. Hegazy, "A study of text preprocessing tools for Arabic text categorization," in The Second International Conference on Arabic Language, 2009, pp. 230-236.
[35] B. Sriram, D. Fuhry, E. Demir, H. Ferhatosmanoglu, and M. Demirbas, "Short text classification in Twitter to improve information filtering," in Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2010, pp. 841-842.
[36] A. Abbasi and H. Chen, "Applying authorship analysis to Arabic web content," in Intelligence and Security Informatics. Springer, 2005, pp. 183-197.
[37] A. Abbasi and H. Chen, "Applying authorship analysis to extremist-group web forum messages," IEEE Intelligent Systems, vol. 20, no. 5, pp. 67-75, 2005.
[38] E. Stamatatos, "Author identification: Using text sampling to handle the class imbalance problem," Information Processing & Management, vol. 44, no. 2, pp. 790-799, 2008.
[39] K. Ahmad, D. Cheng, and Y. Almas, "Multi-lingual sentiment analysis of financial news streams," in Proc. of the 1st Intl. Conf. on Grid in Finance, 2006.
[40] Y. Almas and K. Ahmad, "A note on extracting sentiments in financial news in English, Arabic & Urdu," in The Second Workshop on Computational Approaches to Arabic Script-based Languages, 2007, pp. 21-22.
[41] C. Banea, R. Mihalcea, and J. Wiebe, "Multilingual subjectivity: are more languages better?" in Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, 2010, pp. 28-36.
[42] M. Rushdi-Saleh, M. T. Martín-Valdivia, L. A. Ureña-López, and J. M. Perea-Ortega, "Bilingual experiments with an Arabic-English corpus for opinion mining," in Proceedings of Recent Advances in Natural Language Processing, 2011, pp. 740-745.
[43] M. Rushdi-Saleh, M. T. Martín-Valdivia, L. A. Ureña-López, and J. M. Perea-Ortega, "OCA: Opinion corpus for Arabic," Journal of the American Society for Information Science and Technology, vol. 62, no. 10, pp. 2045-2054, 2011.
[44] A. Alwajeeh, M. Al-Ayyoub, and I. Hmeidi, "On authorship authentication of Arabic articles," in The Fifth International Conference on Information and Communication Systems (ICICS 2014), 2014.
[45] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2005.