Convolutional Neural Networks for Text Categorization Using Concept Generation
Marlene Grace Verghese D*, P. Vijaya Pal Reddy
Department of Information Technology, SRKR Engineering College, Bhimavaram, India
Department of Computer Science and Engineering, Matrusri Engineering College, Hyderabad, India
*Corresponding author: E-Mail: degala.marlene@gmail.com

ABSTRACT
Text categorization is the task of assigning documents to a fixed number of pre-defined categories. A concept is a grouping of semantically related items under a unique name. The high dimensionality and sparsity of the document representation can be reduced using concepts, and a conceptual representation of text can be generated using WordNet. In this paper, an empirical evaluation of Convolutional Neural Networks (CNN) for text categorization is performed. The Convolutional Neural Networks exploit one-dimensional structures of the text, such as words, concepts and their combination, to improve category label prediction. The Reuters data set is evaluated on four categories of data with a K-Nearest Neighbour (KNN) classifier and with Convolutional Neural Networks. Representing the text as a combination of words and concepts results in better classification performance with the CNN than representing it as words or concepts individually. The influence of Term Frequency and Inverse Document Frequency on text categorization is also observed on the data set using CNN and KNN. Weighting words and concepts by the product of Term Frequency (TF) and Inverse Document Frequency (IDF) results in better classification performance with Convolutional Neural Networks than with the K-Nearest Neighbour classifier.
KEY WORDS: Text Categorization, Convolutional Neural Networks, K-Nearest Neighbour, Term Frequency, Inverse Document Frequency, WordNet.

1. INTRODUCTION
With the advent of the Internet, the number of internet users grew explosively; according to statistics it exceeded three billion by the end of 2015. The amount of available information increased accordingly, and people became unable to make use of such large volumes of it. Text categorization is the main means of handling and organizing text data: it assigns one or more classes to a document according to its content. WordNet contains a set of synsets, where a synset is a group of words with similar meaning. WordNet establishes different relationships among synsets, such as hypernymy, hyponymy or the ISA relation, and it can be used in applications such as Natural Language Processing, Text Processing and Artificial Intelligence. Deep Neural Networks have inspired work on various Natural Language Processing (NLP) tasks. The Recursive NN captures the semantics of a sentence through a tree structure, which reduces its effectiveness when the semantics of a whole document must be considered; to address this, recent studies apply the Convolutional Neural Network (CNN) model to NLP. The problems of high dimensionality and sparsity of data are addressed using Deep Neural Networks (Joachims, 1998). Word embedding is the generation of concepts from words; many tools are available for word embeddings, such as word2vec, sent2vec and GloVe, and word embeddings are an important concept in deep neural networks. In the bag-of-words model, an object is represented as a vector that contains words and their weights.
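
As an illustration of the bag-of-words representation just described, the following is a minimal sketch (not taken from the paper) that builds a vector of term weights for a tiny corpus; the documents, vocabulary and helper names are hypothetical placeholders.

from collections import Counter

# Hypothetical toy corpus; in the paper the documents come from the Reuters collection.
documents = [
    "neural networks improve text categorization",
    "text categorization assigns documents to categories",
]

# Fixed vocabulary built over the whole corpus.
vocabulary = sorted({word for doc in documents for word in doc.split()})

def bag_of_words_vector(doc):
    """Represent a document as a vector of raw term counts over the vocabulary."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocabulary]

for doc in documents:
    print(bag_of_words_vector(doc))
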
Word embeddings are used to generate concept vectors from word vectors; with concept vectors, a semantic relationship among objects is established. The number of times a term appears in an object is called its Term Frequency (TF). Inverse Document Frequency (IDF) measures how many other documents a term occurs in. Term Frequency - Inverse Document Frequency (TF-IDF) assigns a high value to a term that appears in few other documents within the corpus but occurs many times within a given document.
Related Works: The state-of-the-art methods for text categorization had long been linear predictors with either bag-of-word or bag-of-n-gram vectors (BOW) as input, as in (Joachims, 1998; Yang, 2004). More recently, non-linear methods that can make effective use of word order have been shown to produce more accurate predictors than the traditional BOW-based linear models, as in (Dai and Le, 2015; Zhang, 2015); of particular interest here is the one-hot CNN proposed in (Johnson and Zhang, 2015). For text classification, documents are represented with sets of features such as uni-grams, bi-grams and n-grams. However, the traditional bag-of-words representation fails to identify the semantic relationships among the terms in a document. Features such as second-order n-grams and tree structures (Aggarwal and Zhai, 2012) have been proposed to capture these semantic relations, but they suffer from data sparsity, which reduces classifier performance. Recent developments in deep neural networks address these problems in NLP tasks: word embeddings reduce the problem of data sparsity and, as in (Baroni, 2014; Bengio, 2003), capture the semantic and syntactic relations among the terms in the document.
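
The idea that concept vectors can be derived from word embeddings and used to measure semantic relatedness can be sketched as follows; the tiny hand-written embeddings below are placeholders, not the vectors used in the paper (in practice they would come from a tool such as word2vec or GloVe).

import math

# Placeholder 3-dimensional embeddings; a real system would load word2vec/GloVe vectors.
embeddings = {
    "car":  [0.9, 0.1, 0.0],
    "auto": [0.8, 0.2, 0.1],
    "bank": [0.1, 0.9, 0.3],
}

def concept_vector(words):
    """Average the embeddings of a group of related words to get one concept vector."""
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    for w in words:
        for i, x in enumerate(embeddings[w]):
            vec[i] += x
    return [x / len(words) for x in vec]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vehicle = concept_vector(["car", "auto"])   # a concept grouping semantically related words
print(cosine(vehicle, embeddings["car"]))   # high similarity: same concept
print(cosine(vehicle, embeddings["bank"]))  # low similarity: unrelated term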

As in (Bengio, 2013), the Recursive Neural Network (RNN) was proposed as a more effective model for representing sentences in a semantic space. However, the RNN uses tree structures to represent each sentence of a document, which is not suitable for long sentences, and it has heavy time complexity; the RNN model stores the semantics of a sentence word by word using hidden layers, as in (Bottou, 1999). Text categorization involves three topics: feature engineering, feature selection and machine learning algorithms. The BOW model is the usual choice for feature engineering; other features such as noun phrases and POS tags have been proposed (Cai and Hofmann, 2003), as well as tree kernels (Charniak and Johnson, 2005). Identifying suitable features in the documents can improve the performance of the classification system. A commonly used step in text classification is the elimination of stop words from the document, and approaches such as information gain, chi-square indexing and mutual information are used to estimate the importance of a feature. Various machine learning algorithms are then used to build a learning model for classification, but these methods suffer from data sparsity. Deep neural networks and representation learning have been used to overcome the problems of high dimensionality and data sparsity in document representation (Aggarwal and Zhai, 2012; Hinton and Salakhutdinov, 2006). Representing a word by a neuron is known as embedding the word as a vector; word embeddings are used to measure the semantic relationship between two words through their word vectors, and with word embeddings in neural networks the performance of classification models improves. As in (Huang, 2012), semi-supervised recursive autoencoders are used to identify sentiment terms in sentences. As in (Kalchbrenner and Blunsom, 2013), an RNN is used for paraphrase detection. As in (Klementiev, 2012), sentiment is analyzed using recursive neural tensor networks. As in (Le and Mikolov, 2014), language models are built using RNNs. In (Mikolov, 2013), an RNN is used for dialogue act classification.
2. PROPOSED MODEL
The proposed model consists of several phases: pre-processing the raw training and test data, constructing a vector space model from the terms and concepts of the documents, building a classification model using a Convolutional Neural Network or a K-Nearest Neighbour model, and finally assigning a class label to each test document using the classification model. The steps are explained as follows.
Pre Processing: Pre-processing involves four phases. In the first phase, non-content words are removed from the text. In the second phase, words are converted into their root forms. In the third phase, each word is tagged with Part-Of-Speech (POS) information. In the fourth phase, stop words and noisy words are removed from the text. The flow of pre-processing is shown in Fig. 1, and the proposed model is presented in Fig. 2. The model represents the training and test documents using terms and concepts generated with WordNet. The text documents are pre-processed using the techniques above, and the pre-processed texts are input to one of the classifiers, either the K-Nearest Neighbour classifier or the Convolutional Neural Network.
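
The four pre-processing phases can be sketched with NLTK as follows; this is an illustrative pipeline under the assumption that a simple whitespace tokenizer, the Porter stemmer, the NLTK POS tagger and the NLTK stop-word list approximate the steps described above (the authors' exact tools are not specified).

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# One-time downloads; both tagger resource names are requested to cover old and new NLTK versions.
nltk.download("stopwords", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("averaged_perceptron_tagger_eng", quiet=True)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    # Phase 1: keep only content-bearing alphabetic tokens (simple whitespace tokenization).
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    # Phase 3 (done before stemming so the tagger sees real words): attach POS information.
    tagged = nltk.pos_tag(tokens)
    # Phase 2: reduce each word to its root form with the Porter stemmer.
    stemmed = [(stemmer.stem(w), tag) for (w, tag) in tagged]
    # Phase 4: remove stop words and other noisy (very short) tokens.
    return [(w, tag) for (w, tag) in stemmed if w not in stop_words and len(w) > 2]

print(preprocess("The convolutional networks were trained on the Reuters documents"))
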
The classification model is generated using one of these classifiers, and the pre-processed test documents are input to the classification model, which labels each test document with its suitable class label.
Convolutional Neural Network: A convolutional neural network (CNN) (Aggarwal and Zhai, 2012) is a feedforward neural network with convolution layers interleaved with pooling layers, originally developed for image processing. In a convolution layer, a small region of data at every location is converted to a low-dimensional vector that preserves information relevant to the task, loosely termed an embedding. The embedding function is shared among all locations, so that useful features can be detected irrespective of where they occur. In its simplest form, a one-hot CNN works as follows: a document is represented as a sequence of one-hot vectors; a convolution layer converts small regions of the document to low-dimensional vectors at every location; a pooling layer aggregates the region embeddings into a document vector by taking the component-wise maximum or average; and the top layer classifies the document vector with a linear model. The one-hot CNN and its semi-supervised extension have been shown to be superior to a number of previous methods.
Figure.1. The pre-processing flow for a document
WordNet: WordNet is like a thesaurus for the English language. It has many applications in fields such as natural language processing, text processing and information retrieval. WordNet is useful for finding the semantic relationship between words in a document; many algorithms consider the length and depth of a word's position in WordNet, using synsets to estimate the closeness of words that are close in meaning. WordNet-based text categorization has two stages. The first stage is the learning phase, in which a new text is obtained by combining the terms with their relevant concepts; this enables selecting or creating categorical profiles based on characteristic features. The second stage is the classification phase, in which weights are given to the features in the categorical profiles.
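
A minimal sketch of generating concepts from terms with WordNet through NLTK is shown below; mapping each term to the name of its first noun synset is one simple sense-selection choice (the paper does not state which strategy is used), and the example terms are illustrative.

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def term_to_concept(term):
    """Map a term to a WordNet concept: here, the name of its first synset, if any."""
    synsets = wn.synsets(term)
    return synsets[0].name() if synsets else None

for t in ["car", "automobile", "bank"]:
    print(t, "->", term_to_concept(t))
# "car" and "automobile" share the synset car.n.01, so they map to the same concept;
# grouping terms under shared concepts is what reduces dimensionality and sparsity.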

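To make the one-hot CNN described above concrete, the following Keras sketch stacks a convolution layer over the token sequence, a global max-pooling layer, and a linear (softmax) classifier on top; the vocabulary size, region size, embedding dimension and number of categories are assumed illustrative values, not the authors' settings, and the random batch only demonstrates the expected shapes.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000    # illustrative vocabulary size
SEQ_LEN = 200        # documents padded/truncated to this many tokens
NUM_CLASSES = 4      # e.g. the four categories used in the paper

model = models.Sequential([
    # Each document is a sequence of one-hot vectors over the vocabulary.
    tf.keras.Input(shape=(SEQ_LEN, VOCAB_SIZE)),
    # Convolution layer: embeds each small text region into a low-dimensional vector.
    layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    # Pooling layer: aggregates region embeddings into one document vector (max pooling).
    layers.GlobalMaxPooling1D(),
    # Top layer: linear classifier (with softmax) over the document vector.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Tiny random batch just to show the input/output shapes expected by the model.
x = np.random.randint(0, 2, size=(8, SEQ_LEN, VOCAB_SIZE)).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, NUM_CLASSES, size=8), NUM_CLASSES)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)   # (1, NUM_CLASSES)
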
Term Frequency-Inverse Document Frequency (TF-IDF): To compute the weights of the terms in a document, two measures are used: Term Frequency (TF) and Term Frequency - Inverse Document Frequency (TF-IDF). The term frequency tf(t,d) counts the number of times that term t occurs in document d:
TF(t,d) = f(t,d)
The objective of Term Frequency - Inverse Document Frequency is to emphasize terms that occur many times within the document (Term Frequency) and few times in other documents (Inverse Document Frequency):
TF-IDF(t,d) = (tf(t,d) / |d|) × log(N / df(t))
where tf(t,d) is the frequency of term t in document d, |d| is the word count of the document, df(t) is the number of documents in the corpus that contain term t, and N is the total number of documents in the whole corpus.
Figure.2. The proposed model for Text Categorization
Algorithm:
Input: Training dataset and Test dataset
Step 1: Pre-process the data of both the training and test datasets using the pre-processing techniques above
Step 2: Identify unique content terms from the training and test datasets
Step 3: Identify unique concepts from the identified terms using WordNet
Step 4: Represent each document of the training and test datasets in the vector space model using terms and concepts with their corresponding weights
Step 5: Construct a classifier from the vector space model of the training documents using the convolutional neural network
Step 6: Identify the class label of each test document by inputting its vector space representation to the learnt classifier
Evaluation and Discussions: A series of experiments is carried out on the dataset in order to categorize the documents into predefined categories using the algorithm above and to estimate the accuracy of the classification model.
Dataset Description: The experiments were performed on the Reuters dataset, which contains four categories of data, namely CRAN, CISI, CACM and MED. For the empirical evaluation only 800 documents are considered, selected based on the minimum number of sentences in the document. Of the 800 documents, 640 were used as the training set and the remainder as the test set. After applying the pre-processing techniques, the vector representations of the documents are input to the KNN and CNN models for learning the classification model.
Evaluation Measures: The performance of the obtained classification model is measured using precision, recall and the F1 measure, computed as follows:
Precision = X / (X + Y)
Recall = X / (X + Z)
F1 = 2 × Precision × Recall / (Precision + Recall)
where X is the number of documents retrieved and relevant, Y is the number of documents retrieved but not relevant, and Z is the number of relevant documents that are not retrieved. The macro-averaged F-measure is calculated as the average F1 value over all categories.
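
The weighting and evaluation formulas above translate directly into code; the sketch below implements tf, tf-idf, precision, recall and F1 exactly as defined in this section (the toy corpus and counts are made up for illustration, not taken from the experiments).

import math

def tf(term, doc_tokens):
    """TF(t, d): number of times term t occurs in document d."""
    return doc_tokens.count(term)

def tf_idf(term, doc_tokens, corpus):
    """TF-IDF(t, d) = (tf(t, d) / |d|) * log(N / df(t)), as defined above."""
    n_docs = len(corpus)
    df = sum(1 for d in corpus if term in d)
    if df == 0:
        return 0.0
    return (tf(term, doc_tokens) / len(doc_tokens)) * math.log(n_docs / df)

def precision(x, y):
    """x = documents retrieved and relevant, y = retrieved but not relevant."""
    return x / (x + y)

def recall(x, z):
    """z = relevant documents that are not retrieved."""
    return x / (x + z)

def f1(p, r):
    return 2 * p * r / (p + r)

corpus = [["neural", "network", "text"], ["text", "category"], ["wordnet", "concept"]]
print(tf_idf("text", corpus[0], corpus))   # "text" appears in 2 of the 3 documents
p, r = precision(120, 40), recall(120, 30)
print(p, r, f1(p, r))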

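As a point of comparison for the algorithm above, a scikit-learn sketch of the K-Nearest Neighbour baseline over TF-IDF term vectors is shown below; it covers only the term representation and the KNN branch (Steps 1-2, 4-6), the concept-generation and CNN branches are omitted, and the toy documents and labels are placeholders standing in for the 640/160 Reuters split described earlier.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

# Placeholder documents and category labels; the real experiments use 800 Reuters documents.
train_docs = ["aerodynamic flow over a wing", "library catalog indexing",
              "enzyme reaction in cells", "query processing in databases"]
train_labels = ["CRAN", "CISI", "MED", "CACM"]
test_docs = ["turbulent flow simulation", "protein enzyme study"]
test_labels = ["CRAN", "MED"]

# Steps 1-2 and 4: pre-process and represent documents as weighted term vectors (TF-IDF).
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(test_docs)

# Step 5: build the classifier (KNN here; the paper's other branch uses a CNN).
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)

# Step 6: label the test documents and report the macro-averaged F-measure.
predicted = knn.predict(X_test)
print(predicted)
print(f1_score(test_labels, predicted, average="macro"))
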
3. RESULTS
The efficiency of each classifier is measured on the test set using the precision, recall and F1 measures. Out of the 800 documents, 640 are used as the training set and the remaining 160 as the test set. The results of the experiments are given in the following tables.

Table.1. Precision, Recall and F1 measure values using the K-Nearest Neighbour approach for terms, concepts and their combination

                      Term Frequency              Term Frequency * Inverse Document Frequency
                      Precision  Recall  F1        Precision  Recall  F1
Terms                 0.59       0.62    0.61      0.67       0.71    0.69
Concepts              0.41       0.48    0.44      0.56       0.62    0.59
Terms and Concepts    0.64       0.68    0.66      0.72       0.76    0.74

Table.2. Precision, Recall and F1 measure values using the Convolutional Neural Network approach for terms, concepts and their combination

                      Term Frequency              Term Frequency * Inverse Document Frequency
                      Precision  Recall  F1        Precision  Recall  F1
Terms                 0.69       0.74    0.71      0.73       0.82    0.77
Concepts              0.58       0.65    0.61      0.65       0.76    0.70
Terms and Concepts    0.74       0.79    0.77      0.78       0.86    0.82

The proposed Convolutional Neural Network approach is compared with the widely used traditional K-Nearest Neighbour method. The experimental results show that the Convolutional Neural Network gives better results than the traditional method on all four datasets and provides a reliable approach to the semantic representation of texts; the Convolutional Neural Network captures more contextual information from the features than the K-Nearest Neighbour (K-NN) method.

4. CONCLUSION
Our model captures contextual information and constructs the representation of text using a Convolutional Neural Network for text categorization, and it gives the best results across four different text classification datasets. This paper presents a new approach to text categorization that incorporates background knowledge, namely WordNet, into the text representation. The experimental results on the Reuters 21578 dataset show that using background knowledge about the relationships between words is particularly effective in raising the F1 value. A challenging issue is that a word has multiple synonyms with somewhat different meanings, so it is difficult to find the correct synonym automatically. The combination of terms and concepts generated using WordNet results in better classification of documents with the Convolutional Neural Network than with the K-Nearest Neighbour approach. A possible extension is to use more suitable weighting techniques for the representation of terms and concepts; it also remains to experiment with other Deep Neural Network approaches and term representation techniques.

REFERENCES
Aggarwal C.C and Zhai C, A survey of text classification algorithms, In Mining Text Data, Springer US, 2012, 163-222.
Baroni M, Dinu G and Kruszewski G, Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors, In ACL, 1, 2014, 238-247.
Bengio Y, Courville A and Vincent P, Representation learning: A review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (8), 2013, 1798-1828.
Bengio Y, Ducharme R, Vincent P and Jauvin C, A neural probabilistic language model, Journal of Machine Learning Research, 3, 2003, 1137-1155.
Bottou L, Learning of gradient in networks using CNN, In Proc. of Neuro-Nîmes, 91, 1999.
Cai L and Hofmann T, Text categorization by boosting automatically extracted concepts, In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003, 182-189.
Charniak E and Johnson M, Coarse-to-fine n-best parsing and MaxEnt discriminative re-ranking, In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, 2005, 173-180.
Collobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K and Kuksa P, Natural language processing (almost) from scratch, Journal of Machine Learning Research, 12, 2011, 2493-2537.

Cover T.M and Thomas J.A, Elements of Information Theory, John Wiley & Sons, 2012.
Dai A.M and Le Q.V, Semi-supervised sequence learning, In Advances in Neural Information Processing Systems, 2015, 3079-3087.
Hingmire S, Chougule S, Palshikar G.K and Chakraborti S, Document classification by topic labeling, In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2013, 877-880.
Hinton G.E and Salakhutdinov R.R, Reducing the dimensionality of data with neural networks, Science, 313 (5786), 2006, 504-507.
Huang E.H, Socher R, Manning C.D and Ng A.Y, Improving word representations via global context and multiple word prototypes, In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Long Papers, Association for Computational Linguistics, 1, 2012, 873-882.
Joachims T, Text categorization with support vector machines: Learning with many relevant features, In European Conference on Machine Learning, Springer Berlin Heidelberg, 1998, 137-142.
Johnson R and Zhang T, Semi-supervised convolutional neural networks for text categorization via region embedding, In Advances in Neural Information Processing Systems, 2015, 919-927.
Kalchbrenner N and Blunsom P, Recurrent convolutional neural networks for discourse compositionality, arXiv preprint arXiv:1306.3584, 2013.
Klementiev A, Titov I and Bhattarai B, Inducing crosslingual distributed representations of words, Proceedings of COLING, 2012.
Le Q.V and Mikolov T, Distributed representations of sentences and documents, In ICML, 14, 2014, 1188-1196.
Mikolov T, Sutskever I, Chen K, Corrado G.S and Dean J, Distributed representations of words and phrases and their compositionality, In Advances in Neural Information Processing Systems, 2013, 3111-3119.
Mikolov T, Yih W.T and Zweig G, Linguistic regularities in continuous space word representations, In HLT-NAACL, 13, 2013, 746-751.
Yang, Semi-supervised RNN classification of text with word embedding, JMLR Research, 2004, 361-397.
Zhang X, Zhao J and LeCun Y, Character-level convolutional networks for text classification, In Advances in Neural Information Processing Systems, 2015, 649-657.