Twitter Sentiment Analysis with Recursive Neural Networks


Ye Yuan, You Zhou
Department of Computer Science
Stanford University
Stanford, CA 94305
{yy0222, youzhou}@stanford.edu

Abstract

In this paper, we explore the application of Recursive Neural Networks to the task of sentiment analysis on tweets. Tweets, being a form of communication heavily infused with symbols and shorthand, are an especially challenging target for sentiment analysis. In this project, we experiment with several kinds of neural nets and analyze how well each model suits the data set, where both the nature of the data and the model structure come into play. The neural net structures we experimented with include a one-hidden-layer Recursive Neural Net (RNN), a two-hidden-layer RNN, and a Recursive Neural Tensor Net (RNTN). Different layer choices, such as ReLU, tanh, and dropout, also yield many insights, since different combinations of them affect performance in different ways.

1 Introduction

Sentiment analysis has been a popular topic in machine learning. It is largely applied to data that comes with self-labeled information, such as movie reviews on IMDb: a scalar score accompanies the review text a user writes, which provides a good and reliable label of the text's polarity. The ability to identify the positive or negative sentiment behind a piece of text is even more interesting when it comes to social data. Twitter receives new user data literally every second. If our model could predict sentiment labels for incoming live tweets, we would be able to track the most recent user attitudes towards a variety of topics, from satisfaction with a commercial flight to brand image.

We used a logistic regression baseline model and two more complex neural networks, the Recursive Neural Network (RNN) and the Recursive Neural Tensor Network (RNTN). Considering the nature of tweets, we first preprocessed the tweets and built a binarized parse tree as the input to the RNNs. We tuned our hyper-parameters and applied regularization methods such as L2 regularization and dropout to optimize performance.

2 Related Work

Researchers have applied traditional machine learning techniques to the sentiment analysis problem on Twitter data. Agarwal et al. [1] proposed a method that incorporates tree structure to help feature engineering. Deep learning researchers, on the other hand, have a more natural way to train directly on tree-structured data using recursive neural networks [2]. Furthermore, more complex models such as the Matrix-Vector RNN and the Recursive Neural Tensor Network proposed by Socher et al. [4] have been shown to achieve promising performance on sentiment analysis tasks. This motivates us to apply deep learning methods to Twitter data.

3 Technical Approach and Models

3.1 Preprocessing

Due to the specific format of tweets (for example, the 140-character limit) and their mostly casual nature, the vocabulary used in tweets is very different from the formal English found in popular NLP datasets such as the Wall Street Journal corpus. Tweets contain many emoticons, abbreviations, and creative ways of expressing excitement such as long tailing (e.g., "happyyyy"). We normalize all letters to lowercase and perform abstractions such as representing any @USERNAME as a <user> token and converting a single #hashtag input into a <hashtag> token plus a token carrying the actual tag value ("hashtag"). Our preprocessing script is based on the Stanford NLP Twitter preprocessing script [6].

3.2 Logistic Regression Baseline

First, we establish our baseline as a simple logistic regression model over a bag-of-words representation. Besides extracting words (unigrams) from the tweets, we also include word bigrams as input features to introduce some context information into the model. The model is trained with stochastic gradient descent.

Our task is a multi-class classification problem, so the baseline is a combined model with a positive classifier, a negative classifier, and a neutral classifier. To output a final label, the model looks at the three scores produced by the three sub-classifiers and chooses the label with the highest score. Each tweet is represented by a sparse vector of word counts, denoted by $x$. Each sub-classifier learns a weight vector $w$ from the training examples by minimizing the hinge loss

$$\mathrm{Loss}_{\mathrm{hinge}}(x, y, w) = \max\big(0,\; 1 - (w \cdot \phi(x))\, y\big)$$

3.3 Recursive Neural Networks: Two-Layer RNN and One-Layer RNTN

We use the cross-entropy loss defined as $CE(\theta) = -\sum_i y_i \log \hat{y}_i$, where $y$ is the one-hot representation of the actual label, $\hat{y}$ is the probability prediction output by the softmax layer, and $\theta$ denotes the model parameters.

3.3.1 Two-Layer RNN

Forward propagation:

$$\hat{y} = \mathrm{softmax}(\theta), \qquad \theta = U h^{(2)} + b^{(s)}$$
$$h^{(2)} = \mathrm{ReLU}(z^{(2)}), \qquad \mathrm{ReLU}(z) = \max(z, 0)$$
$$z^{(2)} = W^{(2)} h^{(1)} + b^{(1)}$$
$$h^{(1)} = \mathrm{ReLU}(z^{(1)}), \qquad z^{(1)} = W^{(1)} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix} + b$$

where $h^{(1)} \in \mathbb{R}^d$ is either the word vector at a leaf node or a function of the $h^{(1)}$ vectors of its children, $h^{(2)} \in \mathbb{R}^D$, and $\hat{y} \in \mathbb{R}^n$. Here $d$ is the dimension of the word vectors, $D$ is the dimension of the hidden layer, and $n$ is the dimension of the output layer.

[Figure 1: Example two-layer Recursive Neural Network structure]

Back propagation. For the root node:

$$\delta_3 = \frac{\partial CE}{\partial \theta} = \hat{y} - y$$
$$\delta_2^{(2)} = U^{T} \delta_3 \circ \mathrm{ReLU}'(z^{(2)}), \qquad \delta_2^{(1)} = W^{(2)T} \delta_2^{(2)} \circ \mathrm{ReLU}'(z^{(1)}), \qquad \delta_{\mathrm{below}} = W^{(1)T} \delta_2^{(1)}$$
$$\frac{\partial CE}{\partial U} = \delta_3\, h^{(2)T}, \qquad \frac{\partial CE}{\partial b^{(s)}} = \delta_3$$
$$\frac{\partial CE}{\partial W^{(2)}} = \delta_2^{(2)}\, h^{(1)T}, \qquad \frac{\partial CE}{\partial b^{(1)}} = \delta_2^{(2)}$$
$$\frac{\partial CE}{\partial W^{(1)}} = \delta_2^{(1)} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix}^{T}, \qquad \frac{\partial CE}{\partial b} = \delta_2^{(1)}$$

For intermediate nodes:

$$\delta_2^{(1)} = \delta_{\mathrm{above}} \circ \mathrm{ReLU}'(z^{(1)}), \qquad \delta_{\mathrm{below}} = W^{(1)T} \delta_2^{(1)}$$
$$\frac{\partial CE}{\partial W^{(1)}} = \delta_2^{(1)} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix}^{T}, \qquad \frac{\partial CE}{\partial b} = \delta_2^{(1)}$$

Note that $\delta_{\mathrm{above}}$ refers to either the first half or the second half of the $\delta_{\mathrm{below}}$ passed down from the node above, depending on whether the current node is a left or a right child.
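To make the forward equations of Section 3.3.1 concrete, here is a minimal NumPy sketch of the computation at a single internal node. The function and variable names are our own illustration rather than the authors' code, and back-propagation through structure is omitted.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(theta):
    e = np.exp(theta - np.max(theta))  # subtract max for numerical stability
    return e / e.sum()

def rnn2_node_forward(h_left, h_right, W1, b, W2, b1, U, bs):
    """Forward pass at one internal node of the two-hidden-layer RNN.

    h_left, h_right : (d,)  child vectors (word vectors at the leaves)
    W1 : (d, 2d),  b  : (d,)  composition layer producing h^(1)
    W2 : (D, d),   b1 : (D,)  second hidden layer producing h^(2)
    U  : (n, D),   bs : (n,)  softmax layer (n = 3 sentiment classes)

    Returns the node vector h1 (passed up the tree) and the class
    distribution y_hat predicted at this node.
    """
    children = np.concatenate([h_left, h_right])  # [h_left; h_right]
    h1 = relu(W1 @ children + b)                  # h^(1) = ReLU(z^(1))
    h2 = relu(W2 @ h1 + b1)                       # h^(2) = ReLU(z^(2))
    y_hat = softmax(U @ h2 + bs)                  # softmax(U h^(2) + b^(s))
    return h1, y_hat
```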

3.3.2 One-Layer Recursive Neural Tensor Network

The general structure of the RNTN described in [4] is similar to that of the RNN. We remove the hidden layer $h^{(2)}$ and use tanh as the activation function for $h^{(1)}$. The important parts of the model formulation follow.

Forward propagation:

$$\hat{y} = \mathrm{softmax}\big(U h^{(1)} + b^{(s)}\big), \qquad h^{(1)} = \tanh(z^{(1)})$$
$$z^{(1)}_k = \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix}^{T} V^{(1)[k]} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix} + \left( W \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix} + b \right)_k$$

Back propagation:

$$\frac{\partial CE}{\partial V^{(1)[k]}} = \delta_{2,k}^{(1)} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix} \begin{bmatrix} h_{\mathrm{left}} \\ h_{\mathrm{right}} \end{bmatrix}^{T}$$

Due to space limitations, we omit the remaining derivatives, which are similar to those in Section 3.3.1.
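As with the RNN above, the following is a minimal NumPy sketch of the RNTN composition at one node. The names are our own hypothetical choices; the sketch illustrates the tensor term $z^{(1)}_k$ from Section 3.3.2 rather than reproducing the authors' implementation.

```python
import numpy as np

def rntn_node_forward(h_left, h_right, V, W, b, U, bs):
    """Forward pass at one internal node of the one-layer RNTN.

    h_left, h_right : (d,)   child vectors
    V  : (d, 2d, 2d)         tensor; slice V[k] gives the quadratic term of z_k
    W  : (d, 2d),  b : (d,)  ordinary RNN composition term
    U  : (n, d),  bs : (n,)  softmax layer (n = 3 sentiment classes)
    """
    c = np.concatenate([h_left, h_right])    # [h_left; h_right], shape (2d,)
    quad = np.einsum('i,kij,j->k', c, V, c)  # quadratic term: z_k = c^T V[k] c + ...
    h1 = np.tanh(quad + W @ c + b)           # h^(1) = tanh(z^(1))
    scores = U @ h1 + bs
    y_hat = np.exp(scores - scores.max())
    y_hat /= y_hat.sum()                     # softmax prediction
    return h1, y_hat
```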

4 Experiment

4.1 Data Set

We used the SemEval-2013 data set collected by York University [7], which consists of 6092 rows of training data. We further divided the training data into a training set of size 4874 and a dev set of size 1218. Examples in the original data set are classified with five labels: negative, objective, neutral, objective-or-neutral, and positive. The difference between the objective-or-neutral and objective/neutral labels is not very well defined, so for the purpose of our project we treated the objective, neutral, and objective-or-neutral classes all as neutral examples.

4.2 Evaluation Metric

Naturally, we chose accuracy as our performance metric for this classification task. At the same time, we also use the average F1 score of the positive and negative classes as a metric, so that precision and recall on these two class labels are integrated into a single number.

4.3 RNN Input Format

A recursive neural network requires the training data to have a pre-determined tree structure. We used the Stanford PCFG parser [3] to build estimates of the actual optimal tree structures. We ran the parser with a caseless probabilistic context-free grammar model, which works better than the standard PCFG models on less strictly grammatical input such as tweets.

Moreover, our recursive neural network assumes that each non-leaf node has two children, so we binarized our parse trees using a binarizer based on Michael Collins's English head finder. After these steps, all non-leaf nodes in our parse trees have at most two children. It is still possible for a node to have only one child, for example NP -> N. We chose to "soft delete" such nodes in our implementation: cost and errors are passed directly to the next level without modification at this level.

4.4 Regularization

Neural networks are much more powerful than our baseline logistic regression model because they can learn complex intermediate units (neurons) and capture nonlinear interactions between inputs. For the same reason they are also prone to over-fitting: they are powerful enough to fit the noise in the training data as well as the underlying pattern. In order to generalize the model to unseen data, we put a lot of emphasis on regularization.

First of all, we applied a standard L2 penalty on the U and W parameters, as well as on the V parameters of the RNTN, to avoid overfitting. Furthermore, we experimented with the dropout regularization described by Srivastava et al. [5]. The idea is to randomly omit half of the neurons at training time in each iteration, which achieves an effect similar to training an ensemble of $2^N$ thinned networks, with $N$ being the number of neurons. We applied dropout to the softmax layer of the RNN and RNTN models (a small sketch is given after the hyper-parameter list below).

4.5 Results

We initialized our word vectors with GloVe word vectors pre-trained on 2 billion tweets, published by the Stanford NLP group [8]. Experimenting with different combinations of layers for each neural net model, the optimal combination for each model is:

                       Drop-out   ReLU   Tanh
One-hidden-layer RNN   Yes        Yes    No
Two-hidden-layer RNN   Yes        Yes    No
RNTN                   Yes        No     Yes

Hyper-parameters also play a significant role in performance. The parameters we tuned include:

epochs: number of training epochs
step: step size
wvecdim: word vector dimension
middledim: dimension of the second hidden layer (only applies to RNN2 and the RNTN)
minibatch: size of the minibatch
rho: regularization strength
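As an illustration of the dropout described in Section 4.4, here is a small sketch of a dropout mask applied to the vector feeding the softmax layer. The helper is our own and uses the "inverted" variant, which rescales at training time so that no scaling is needed at test time; Srivastava et al. [5] instead scale the weights at test time.

```python
import numpy as np

def dropout(h, p_keep=0.5, train=True, rng=np.random):
    """Inverted dropout on the activations feeding the softmax layer.

    At training time each unit is kept with probability p_keep and the
    surviving activations are rescaled by 1/p_keep; at test time the
    vector is returned unchanged.
    """
    if not train:
        return h
    mask = (rng.rand(*h.shape) < p_keep) / p_keep  # 0 or 1/p_keep per unit
    return h * mask
```

For example, during training the prediction at a node becomes softmax(U @ dropout(h2) + bs), while at test time dropout(h2, train=False) simply returns h2.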

For the one-hidden-layer RNN: Among the three net structures, the RNN suits this data set best. Since labels at the word and phrase levels are not complete, RNN2 and the RNTN do not get much leeway when fitting the data. However, being such a shallow net structure, the one-hidden-layer RNN also suffers severely from over-fitting the training data. Because of this, we adjust the regularization strength to correct the over-fitting, as can be seen in Figure 2. The best performance is obtained at reg = $8 \times 10^{-4}$.

[Figure 2: Examples of how regularization strength affects performance in the RNN. (a) reg = $8 \times 10^{-4}$ (b) reg = $10^{-3}$ (c) reg = $5 \times 10^{-4}$]

The confusion matrix of the RNN gives us more insight into its performance. We can see that the model is not good at predicting the negative label, due to the lack of negative training data: barely any instance is classified as negative. It does a decent job on the neutral and positive labels. The same problem appears in the RNN2 and RNTN models, because the same imbalanced training data is used to train all three models.

[Figure 3: RNN confusion matrix]

For the two-hidden-layer RNN: In the two-hidden-layer RNN, over-fitting is not as severe as in the one-hidden-layer RNN, but a more appropriate regularization strength can still improve performance; see Figure 4. In RNN2, reg = $1 \times 10^{-3}$ gives the best performance.
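The reg values quoted above enter the objective through the L2 penalty of Section 4.4. A minimal sketch of how such a penalty is typically folded into the cost and gradients follows; the parameter dictionary and function names are our own illustration, not the project's code.

```python
import numpy as np

def add_l2_penalty(cost, grads, params, reg=8e-4):
    """Add an L2 penalty of strength `reg` to the cost and its gradients.

    params, grads : dicts of weight matrices (e.g. 'W1', 'W2', 'U', 'V')
    and their gradients; bias vectors are usually left unregularized.
    """
    for name, w in params.items():
        cost += 0.5 * reg * np.sum(w * w)    # (reg / 2) * ||W||^2
        grads[name] = grads[name] + reg * w  # gradient of the penalty
    return cost, grads
```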

[Figure 4: Examples of how regularization strength affects performance in RNN2. (a) reg = $8 \times 10^{-4}$ (b) reg = $1 \times 10^{-3}$]

Apart from regularization, the dimension of the middle hidden layer also comes into play, since the two-hidden-layer RNN has one more tunable layer than the one-hidden-layer RNN. We can see that, despite the general over-fitting phenomenon, a middle dimension of 25 gives better performance in terms of both over-fitting and dev accuracy (Figure 5).

[Figure 5: Examples of how middledim affects performance in RNN2. (a) middledim = 25 (b) middledim = 35]

From the confusion matrix we can see that the model is still not good at predicting the negative label; it tends to mislabel negative instances, which is not a surprise given the dominating amount of positive training data. Although the average dev accuracy of RNN2 is not as good as that of the RNN, it improves on the neutral and positive labels by mislabeling fewer positive instances as neutral.

[Figure 6: RNN2 confusion matrix]
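The confusion matrices in Figures 3 and 6 also determine the average F1 metric from Section 4.2. Below is a small sketch of reading that metric off a confusion matrix; the helper, its assumed label ordering, and the example counts are purely illustrative.

```python
import numpy as np

LABELS = ('negative', 'neutral', 'positive')  # assumed ordering of the classes

def average_pos_neg_f1(confusion):
    """Average F1 of the positive and negative classes (Section 4.2 metric).

    confusion[i, j] = number of dev examples whose true label is LABELS[i]
    and whose predicted label is LABELS[j].
    """
    f1s = []
    for cls in ('positive', 'negative'):
        k = LABELS.index(cls)
        tp = confusion[k, k]
        predicted = confusion[:, k].sum()  # everything predicted as cls
        actual = confusion[k, :].sum()     # everything truly cls
        precision = tp / predicted if predicted else 0.0
        recall = tp / actual if actual else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        f1s.append(f1)
    return sum(f1s) / len(f1s)

# Hypothetical 3x3 confusion matrix (rows = true label, columns = prediction).
example = np.array([[ 30,  80,  40],
                    [ 20, 500, 150],
                    [ 10, 120, 268]])
print(average_pos_neg_f1(example))
```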

RNTN: Theoretically, the RNTN could perform better than the RNN and RNN2. However, due to the lack of word- and phrase-level labels in the dataset, the RNTN model is under-fit. With the other hyper-parameters tuned to their best values, we try to adjust the dimension of the middle hidden layer so that the model fits the data properly. We can see in Figure 7 that the lower dimension performs better.

[Figure 7: Examples of how middledim affects performance in the RNTN. (a) middledim = 25 (b) middledim = 35]

Results at a glance:

                       Dev ACC   Avg F1 score
One-hidden-layer RNN   63.71     0.512
Two-hidden-layer RNN   62.45     0.517
RNTN                   59.32     0.483

For reference, when running the same models on the sentiment treebank, the accuracy on the dev set is as follows. We can see that with a better-labeled data set, these models can deliver quality performance.

                       Dev ACC
One-hidden-layer RNN   84.17
Two-hidden-layer RNN   80.68

5 Conclusion

In summary, sentiment analysis on Twitter data calls for careful pre-processing and a model that properly fits the data set. The balance of the data set and the availability of labels at intermediate levels play significant roles in training such models. The imbalance of our data set led to poor performance in predicting negative labels across all models, and the insufficient intermediate-level (word- and phrase-level) labels led to under-fitting in the RNTN. Another take-home lesson is to tune the hyper-parameters for a better data fit. Shallow neural nets such as the one-hidden-layer RNN could easily overfit our data set; by increasing the regularization strength, we were able to obtain decent performance from the one-hidden-layer RNN. Future work to improve the models could include experimenting with data sets that contain more intermediate-level information, and more fine-tuning of the hyper-parameters, some of which depend heavily on the nature of the data set.

6 References

[1] Agarwal, Xie, Vovsha, Rambow, Passonneau. (2011) Sentiment Analysis of Twitter Data. Proceedings of the Workshop on Language in Social Media (LSM 2011).

[2] Goller, Kuchler. (1996) Learning Task-Dependent Distributed Representations by Backpropagation Through Structure. Proceedings of the International Conference on Neural Networks (ICNN-96).

[3] Klein, Manning. (2003) Accurate Unlexicalized Parsing. Proceedings of the 41st Meeting of the Association for Computational Linguistics, pp. 423-430.

[4] Socher, Perelygin, Wu, Chuang, Manning, Ng, Potts. (2013) Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Vol. 1631.

[5] Srivastava, Hinton, Krizhevsky, Sutskever, Salakhutdinov. (2014) Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15.

[6] Script for preprocessing tweets by Romain Paulus. http://nlp.stanford.edu/projects/glove/preprocesstwitter.rb Retrieved on April 30th, 2015.

[7] SemEval-2013 Task 2 Data. http://www.cs.york.ac.uk/semeval-2013/task2/ Retrieved on April 28th, 2015.

[8] Twitter GloVe word vectors. http://nlp.stanford.edu/projects/glove/ Retrieved on May 6th, 2015.