Multiclass Classification of Tweets and Twitter Users Based on Kindness Analysis


CS229 Final Project Report

Wanzi Zhou (wanziz@stanford.edu), Chaosheng Han (hcs@stanford.edu), Xinyuan Huang (xhuang9@stanford.edu)

I. Introduction

Nowadays social networks such as Twitter and Facebook are indispensable in people's daily lives, so it is important to keep the social community healthy. Establishing a kindness assessment mechanism is very helpful for maintaining a healthy environment, and it could be used in applications such as a rewarding system or parental controls for children using social networks. Our goal is to set up a kindness rating system for tweets and Twitter users. To accomplish this, we decompose the task into two stages: first, for a stream of tweets, we run unsupervised learning algorithms to classify them into three clusters: positive, negative, and neutral; second, we choose a group of Twitter users and apply the trained model to assess their kindness.

II. Related Work

In 2015, Cheng et al. [1] from Stanford and Cornell developed a logistic regression model that uses labeled posts to predict antisocial behavior in online discussion communities. Their study focuses on spotting whether or not a user is a troll, which is a binary classification problem. Earlier, in 2012, Sood et al. [2] from Pomona College and Yahoo developed a model for the automatic identification of personal insults on social news sites; this is also supervised learning, and likewise a binary classification problem. They had their data labeled via Amazon Mechanical Turk. Meanwhile, sentiment analysis of Twitter data has been a popular topic in machine learning. Bifet and Frank [3] performed supervised learning with a multinomial naive Bayes classifier to predict the sentiment and opinion of tweets. Pak and Paroubek [4] improved this model by cleaning the input data more carefully. Agarwal et al. [5] from Columbia University further explored tweets with a 3-way classification into positive, negative, and neutral. All the studies mentioned above are supervised; however, it is infeasible to label enough training data in a short time. Thus, unlike prior work, we propose to give each tweet or Twitter user a kindness rating, leading to an unsupervised multiclass classification or regression problem.

III. Dataset and Features

Twitter has always been a great resource for natural language processing researchers. It offers a sufficiently large amount of data with outstanding qualities: real-life conversations, uniform length (at most 140 characters), rich variety, and a real-time data stream. With the Twitter API, we captured a random sample of tweets over 24 continuous hours on a regular day and picked out all the English tweets. After the above procedures, we obtained 5895 tweets as our dataset for this project.

We first used lexicon features. We collected two lexicons, one of positive words [7] such as "amazing" (236 words) and one of negative words [6] such as "bastard" (723 words). We clean the data by converting all letters to lowercase and discarding punctuation. For every tweet in the dataset, we compare its words against both dictionaries and obtain a 959-dimensional feature vector, where each entry counts the number of times the corresponding word appears in the tweet. We then use these features for the learning stage.
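For concreteness, the count-based features could be computed as in the following minimal sketch. This is not the authors' code: the lexicon file names and the `tweets` list are hypothetical placeholders.

import re
import numpy as np

def load_lexicon(path):
    # One lowercase word per line.
    with open(path) as f:
        return [line.strip().lower() for line in f if line.strip()]

negative_words = load_lexicon("negative_words.txt")  # negative lexicon [6]
positive_words = load_lexicon("positive_words.txt")  # positive lexicon [7]
vocab = negative_words + positive_words              # one feature per word
index = {w: i for i, w in enumerate(vocab)}

def tweet_features(tweet):
    # Lowercase, drop punctuation, and count occurrences of lexicon words.
    tokens = re.sub(r"[^\w\s]", " ", tweet.lower()).split()
    x = np.zeros(len(vocab))
    for tok in tokens:
        if tok in index:
            x[index[tok]] += 1.0
    return x

# `tweets` stands for the list of 5895 collected English tweets.
X = np.vstack([tweet_features(t) for t in tweets])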

For a second try, since our data has no labels, we want the features to be more reasonable and objective, so that the later unsupervised learning can lead to a better result. We therefore also considered the semantics of, and relations between, words in order to assign a different weight to each word in the dictionary. To achieve this, we used word2vec with an online text corpus from the Data Compression Programs page maintained by Matt Mahoney [8]. We pre-processed the data file to obtain roughly 17 million words of all kinds (including positive words such as "optimistic" and negative words such as "bastard" from our dictionaries), among which there are 50,000 unique words. We then built a skip-gram model and trained it with SGD optimization to obtain a 50,000 x 128 word-embedding matrix, where each row is the embedding vector of one of the 50,000 words. We extracted the vectors for the words in our positive and negative dictionaries from this matrix. Then, for each dictionary word, we computed its cosine similarity with every other dictionary word, taking its average similarity to the positive words and to the negative words as measures of how "positive" or "negative" it is. Based on these measures, we assigned different weights when building the 959-dimensional feature vector of every tweet for the learning stage.
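The weighting step might look like the sketch below. The embedding matrix `E`, the `word_index` mapping, and the particular weighting formula are assumptions on our part; the report does not spell out the exact formula.

import numpy as np

def cosine(u, v):
    # Cosine similarity with a small guard against zero vectors.
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def avg_similarity(word, group, E, word_index):
    # Average cosine similarity of `word` to every other word in `group`.
    u = E[word_index[word]]
    sims = [cosine(u, E[word_index[w]])
            for w in group if w != word and w in word_index]
    return float(np.mean(sims)) if sims else 0.0

def lexicon_weights(vocab, positive_words, negative_words, E, word_index):
    # One plausible weight: how much closer a word sits to the positive
    # vocabulary than to the negative one.
    return {w: avg_similarity(w, positive_words, E, word_index)
             - avg_similarity(w, negative_words, E, word_index)
            for w in vocab if w in word_index}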
IV. Unsupervised Learning Model

For this project we use three methods to implement the unsupervised clustering: K-means, principal component analysis (PCA) combined with K-means, and a Gaussian mixture model (GMM) fit with the EM algorithm. We then compare the results of these methods.

i. K-means

Since the data we obtained is unlabeled, we perform unsupervised learning, classifying the tweets into three clusters: positive, negative, and neutral. We first try the straightforward K-means clustering. The input is the feature vector obtained from feature extraction, which encodes each tweet's use of positive and negative words. We run K-means on all 5895 tweets. We initialize the cluster centroids using the prior knowledge that the K centroids should be well separated from each other, and we add a random component to centroid generation to avoid local minima. We then repeat the following two steps until convergence (a minimal implementation is sketched below):

For every i = 1, ..., 5895, set
$$c^{(i)} = \arg\min_j \, \lVert x^{(i)} - \mu_j \rVert^2.$$
For every j = 1, ..., K, set
$$\mu_j := \frac{\sum_{i=1}^{m} 1\{c^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{c^{(i)} = j\}}.$$

To choose the number of clusters K, we visualize the clustering results in a two-dimensional space whose two axes are the normalized sums of the positive and negative feature counts, respectively. We use this 2D projection as a criterion for the optimal K, based on the fact that if the samples are well clustered in a low-dimensional space, they must be clustered at least as well in a higher-dimensional space. According to our clustering results with K = 2, 3, 4, 5, shown in Section V, the optimal number of clusters is K = 3. Looking into the values of the three cluster centroids, one of them is extremely close to the zero vector while the positive and negative components of the other two are distinctly recognizable, which shows that the three clusters correspond to the three categories positive, negative, and neutral discussed in the previous section.
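A minimal NumPy sketch of these two updates follows, assuming `X` is the tweet-by-word feature matrix from Section III; the plain random initialization here simplifies the separated-centroid scheme described above.

import numpy as np
from scipy.spatial.distance import cdist

def kmeans(X, K=3, tol=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids from random distinct data points.
    mu = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    prev = np.inf
    while True:
        d = cdist(X, mu, "sqeuclidean")   # squared distances ||x(i) - mu_j||^2
        c = d.argmin(axis=1)              # assignment step
        for j in range(K):                # update step: cluster means
            if np.any(c == j):
                mu[j] = X[c == j].mean(axis=0)
        distortion = d[np.arange(len(X)), c].sum()
        if prev - distortion < tol:       # converged (threshold 1e-8)
            return c, mu
        prev = distortion

labels, centroids = kmeans(X, K=3)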

ii. PCA

After trying straightforward K-means, we reasoned that applying PCA (principal component analysis) before the K-means step might reduce the computation time. We first shrink the 959-dimensional (723 negative + 236 positive) feature vector by eliminating the words that never appear in the dataset. Then we normalize the feature data to zero mean and unit variance in each component. Afterwards, we calculate the empirical covariance matrix $\Sigma$ of the feature data and project the data onto a k-dimensional subspace (k < m): we choose $u_1, \ldots, u_k$ to be the top k eigenvectors of $\Sigma$ and represent each feature vector in the basis of the $u_i$.
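In code, the reduction and the subsequent clustering might look like the sketch below, reusing the `kmeans` function above. The choice k = 10 is an arbitrary stand-in; the report's actual value did not survive conversion.

import numpy as np

def pca_project(X, k):
    # Normalize to zero mean and unit variance per component.
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    Sigma = np.cov(Xs, rowvar=False)     # empirical covariance matrix
    _, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
    U = eigvecs[:, -k:]                  # top-k eigenvectors as the new basis
    return Xs @ U

X_kept = X[:, X.sum(axis=0) > 0]   # drop words that never occur in the data
Z = pca_project(X_kept, k=10)      # k = 10 is a hypothetical choice
labels_pca, _ = kmeans(Z, K=3)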

iii. Gaussian Mixture Model

To reflect the correlation between the individual components of the feature vector, we also use the Expectation-Maximization (EM) algorithm to learn a Gaussian mixture model. Since we have already demonstrated that K = 3 is the optimal number of clusters, we use three Gaussians, one each for the neutral, positive, and negative clusters. Our goal is to maximize the log-likelihood

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{m} \log \sum_{z^{(i)}=1}^{k} p\big(x^{(i)} \mid z^{(i)}; \mu, \Sigma\big)\, p\big(z^{(i)}; \phi\big),$$

where $x^{(i)}$ is the feature vector of tweet i and $z^{(i)} \in \{1, 2, 3\}$ is its corresponding latent variable in the GMM. The parameters $\phi, \mu, \Sigma$ of our GMM are estimated by repeating the following two steps of the EM algorithm until convergence:

E-step: "guess" the values of the $z^{(i)}$ by setting $w_j^{(i)} = p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)$.

M-step: update the parameters $\phi_j, \mu_j, \Sigma_j$ for every j.
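As a functional stand-in for a hand-rolled EM loop, scikit-learn's GaussianMixture runs the same E/M iteration; the covariance regularization and tolerance below are our assumptions, not settings from the report.

from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      tol=1e-8, reg_covar=1e-6, random_state=0)
gmm.fit(X_kept)                    # EM until the log-likelihood stabilizes
labels_gmm = gmm.predict(X_kept)   # hard cluster labels
resp = gmm.predict_proba(X_kept)   # E-step responsibilities w_j^(i)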

V. Experimental Results & Discussion

Word2vec. [Figure: 2D distance map of the dictionary words in the word2vec model; x-axis: average word2vec distance towards negative; y-axis: average word2vec distance towards positive.] Comparing the distributions of positive and negative words in this 2D distance map, positive words tend to appear more toward the upper left of the map than negative words, which gives us a quantitative description of how "positive" or "negative" a word can be.

K-means. [Figure: 2D projections of the K-means clustering results for K = 2, 3, 4, 5, each with convergence threshold 1e-8.] As mentioned in Section IV, we use the 2D visualized result to judge the effectiveness of clustering for different cluster numbers K. For K = 2, the clustering is biased in either the negative or the positive direction, which is clearly not a good result. For K = 3, the three clusters are symmetric and well separated from each other. For K = 4, 5, ..., we begin to see some finer structure inside the clusters, while the clustering at the far ends of the two directions follows the same pattern as K = 3. Therefore, we consider K = 3 our optimal number of clusters. Taking a deeper look at the three centroids' values, we find that the green cluster represents the tweets containing more positive words and the blue cluster represents the tweets containing more negative words. The red cluster contains tweets that are mostly neutral, i.e., that do not contain many positive or negative words. The result shows that K-means with K = 3 does a good job of discerning negative, neutral, and positive tweets.

With cluster number K = 3, the next figure shows the result of applying K-means with PCA. [Figure: K-means with PCA; clusters labeled neutral, negative, and positive.] Surprisingly, K-means with PCA does not give us a satisfactory result, as it fails to distinguish among negative, neutral, and positive tweets. We think this is because, as the dimension of the feature vector shrinks, we lose some of the nontrivial information in the original tweets. Although some words in the dictionary can be strongly correlated, such as "cock", "c-o-c-k", and "cocks", the amount of redundancy only justifies a modest reduction in dimension, since most of the words are unrelated to one another.

Gaussian mixture model. [Figure: 3-way classification with the Gaussian mixture model.] The classification plot is similar to that of K-means with K = 3, but the neutral region is smaller, while the positive and negative regions expand considerably.

Comparison. [Figure: learning curves (relative error versus iteration count) of K-means and the GMM.] Both algorithms converge quickly, and the GMM converges even faster than K-means. We then list the number of tweets assigned to each category by K-means and the GMM.

Table 1: Multiclass classification of the 5895 tweets

             K-Means    GMM
  Positive        46
  Negative         7       9
  Neutral       5848    5575

Comparing the results from K-means and the Gaussian mixture model, we find that most tweets online are neutral. The GMM classifies a noticeably larger proportion of tweets as positive or negative than K-means does, i.e., it is better at recognizing positive or negative tweets. This is because, with its hard assignments, K-means can only realize spherical clusters, while the GMM is probabilistic: it incorporates the covariance structure of the data and adapts itself to elliptical clusters.

Application. We applied our trained GMM to the recent tweets of three US politicians, Barack Obama, Donald Trump, and Hillary Clinton (a sketch of this scoring step follows the conclusion). The result shows that they all follow the same pattern: while most of their tweets are neutral, their proportion of positive tweets is significantly higher than that of the general public. This is an expected result, because politicians intuitively tend to convey positive ideas and information to the public.

Table 2: Model test on three US politicians

             Barack Obama   Donald Trump   Hillary Clinton
  Positive              5              5                47
  Negative              8
  Neutral              55              4                45

VI. Conclusion & Future Work

So far, the basic structure of the model is understood: we have implemented the classification of tweets into positive, neutral, and negative, and compared different unsupervised learning methods. We found that classifying tweets into three clusters (positive, neutral, and negative) is currently the most reasonable choice. Most tweets are neutral, and a small portion are either positive or negative. K-means with PCA does not do as well as K-means alone; we think this is because PCA removes nontrivial information from the feature vectors. Compared to K-means, the Gaussian mixture model performs better at classifying tweets into the three clusters because it accounts for the correlation between different feature components. We tested our model on three US politicians, and the results align with intuition. Our next step is to take the tweet histories of a set of individual Twitter users and build a model that assigns each user a kindness score, thereby establishing the kindness assessment system. We would also like to dig deeper into the gap between positive and negative words at a psychological level.
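The per-user scoring step mentioned in the Application paragraph might look like the following sketch. The mapping from mixture components to category names is an assumption; in practice it would be read off the fitted component means.

import numpy as np

def kindness_profile(user_tweets, gmm, component_names):
    # component_names, e.g. {0: "neutral", 1: "positive", 2: "negative"},
    # must be determined by inspecting the fitted component means.
    # Assumes the same feature pipeline (tweet_features, kept columns)
    # that was used when fitting the GMM.
    X_user = np.vstack([tweet_features(t) for t in user_tweets])
    labels = gmm.predict(X_user)
    counts = np.bincount(labels, minlength=len(component_names))
    return {component_names[j]: int(counts[j]) for j in component_names}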

References

[1] Cheng, Justin, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. "Antisocial behavior in online discussion communities." arXiv preprint arXiv:1504.00680 (2015).

[2] Sood, Sara Owsley, Elizabeth F. Churchill, and Judd Antin. "Automatic identification of personal insults on social news sites." Journal of the American Society for Information Science and Technology 63.2 (2012): 270-285.

[3] Bifet, Albert, and Eibe Frank. "Sentiment knowledge discovery in Twitter streaming data." International Conference on Discovery Science. Springer Berlin Heidelberg, 2010.

[4] Pak, Alexander, and Patrick Paroubek. "Twitter as a Corpus for Sentiment Analysis and Opinion Mining." LREC. Vol. 10. 2010.

[5] Agarwal, Apoorv, et al. "Sentiment analysis of Twitter data." Proceedings of the Workshop on Languages in Social Media. Association for Computational Linguistics, 2011.

[6] http://www.frontgatemedia.com/a-list-of-723-bad-words-to-blacklist-and-how-to-use-facebooks-moderation-tool/

[7] http://www.the-benefits-of-positive-thinking.com/list-of-positive-words.html

[8] Matt Mahoney, Data Compression Programs, http://www.mattmahoney.net/dc/