Computing Semantic Relatedness using Wikipedia Taxonomy by Spreading Activation


May Sabae Han and Ei Ei Mon

May Sabae Han is a PhD candidate at the University of Technology (Yatanarpon Cyber City), Myanmar (e-mail: may.utycc@gmail.com). Ei Ei Mon is a Lecturer at the University of Technology (Yatanarpon Cyber City), Myanmar (e-mail: eieimonucsy@gmail.com).

Abstract — Semantic relatedness is the degree of nearness of two documents or two terms, based on the sameness of their meaning or semantic content. A method for computing semantic relatedness is crucial in web mining applications such as search engines, recommendation systems, and information retrieval systems, and also in the area of natural language processing. This paper proposes a new technique for measuring semantic relatedness between two terms. Whereas most systems use ontologies to retrieve semantically related information, our method uses Wikipedia as a source of structured world knowledge about the terms of interest. It applies spreading activation, a popular approach in associative retrieval, over the taxonomy of Wikipedia categories to obtain measures of semantic relatedness. The proposed method is evaluated on a benchmark dataset to compare our approach with some existing approaches.

Keywords — Information retrieval, semantic relatedness, spreading activation strategy, Wikipedia taxonomy

I. INTRODUCTION

The similarity of two terms is also referred to as the relatedness between them. Semantic relatedness is the degree of nearness of two documents or two terms, based on the sameness of their meaning or semantic content. Determining semantic relatedness is crucial in web mining applications such as search engines, recommendation systems, and information retrieval systems, and also in natural language processing tasks such as word sense disambiguation and text classification. Humans can easily judge whether two words are closely related in some way. For example, people can decide that student and university are more related than student and car. How are seagulls related to the sea? People can rate the degree of semantic relatedness of two different words by drawing on a large amount of background knowledge about the concepts these terms denote. For a computer, however, determining this relatedness is a difficult task: to compute semantic relatedness automatically, the computer must be provided with an external source of world knowledge.

Prior work on semantic relatedness relied either on purely statistical techniques that make no use of background knowledge, or on lexical resources that incorporate very limited knowledge about the world. A number of measures, such as cosine similarity, Dice's coefficient, and the Jaccard index, have been defined to compute this relatedness. Among these, the most widely applied is the cosine similarity measure, which has been used in content-matching scenarios such as document matching, ontology mapping, document clustering, multimedia search, and as part of web service matchmaking frameworks [1]. Many natural language processing tasks require external sources of lexical semantic knowledge such as WordNet. Traditionally, these resources have been built manually by experts in a time-consuming and expensive manner.
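For illustration, the following is a minimal Python sketch of two of these classical measures over bag-of-words term vectors; the toy documents and the whitespace tokenization are assumptions made only for the example, not part of the proposed method.

    import math
    from collections import Counter

    def cosine_similarity(a, b):
        # Cosine of the angle between two term-frequency vectors.
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def jaccard_index(a, b):
        # Shared terms over all terms occurring in either text.
        union = set(a) | set(b)
        return len(set(a) & set(b)) / len(union) if union else 0.0

    doc1 = Counter("the student studies at the university".split())
    doc2 = Counter("the student drives a car".split())
    print(cosine_similarity(doc1, doc2), jaccard_index(doc1, doc2))

Such measures compare surface term overlap only, which is exactly the limitation that knowledge-based methods like the one proposed here try to overcome.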
Wikipedia has recently come to provide a wide range of knowledge, including proper nouns in different areas of expertise that are not described in WordNet, and it includes a large volume of articles about almost every entity in the world. Wikipedia therefore provides a semantic network for computing semantic relatedness in a more structured fashion than a search engine and with more coverage than WordNet. Wikipedia articles are categorized, and this taxonomy of categories provides a hierarchical structure or network; Wikipedia also provides an article link graph. As a result, many researchers have recently used Wikipedia as an active source of knowledge for measuring semantic similarity.

In this paper, we propose a method to compute semantic relatedness using structured knowledge extracted from the English version of Wikipedia. The method uses the taxonomy of categories in Wikipedia as a semantic network, treating every article in Wikipedia as a concept. Our system applies a spreading activation strategy over the network of Wikipedia categories to evaluate semantic relatedness.

The rest of the paper is organized as follows. Section 2 reviews related Wikipedia-based techniques for computing semantic relatedness. Section 3 describes the Wikipedia taxonomy. Section 4 discusses the spreading activation strategy. Section 5 explains the proposed technique for computing semantic relatedness. Section 6 presents an experiment and evaluation comparing our method with existing Wikipedia-based semantic relatedness measures, and Section 7 concludes.

II. RELATED WORK

The depth and coverage of Wikipedia have received a lot of attention from researchers, who have used it as a knowledge source for computing semantic relatedness. Explicit Semantic Analysis (ESA) [3] represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. ESA uses machine learning techniques to explicitly represent the meaning of any text as a weighted vector of

Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). However, ESA does not use the link structure and other structured knowledge from Wikipedia, although these contain valuable information about relatedness between articles.

Yazdani et al. [4] build a network of concepts from Wikipedia documents and use a random walk approach to compute distances between documents. Three algorithms for distance computation are proposed: hitting/commute time, personalized PageRank, and truncated visiting probability. Four types of weighted links in the document network are considered: actual hyperlinks, lexical similarity, common category membership, and common template use. The resulting network is used to solve three benchmark semantic tasks, namely word similarity, paraphrase detection between sentences, and document similarity, by mapping pairs of data to the network and then computing a distance between these representations.

Lu Zhiqiang et al. [5] use snippets from Wikipedia to calculate the semantic similarity between words, using cosine similarity and tf-idf; a stemming algorithm and stop-word removal are applied when preprocessing the snippets. Hajian and White [6] extract a multi-tree for each entity from the Wikipedia category network and use this multi-tree model to measure semantic similarity; the method achieves the highest correlation score among many state-of-the-art approaches. Similar to our approach, it extracts the categories of each term, except that it also extracts the categories of the pages to which the page of the original term links. The extracted categories are marked as the child nodes of the multi-tree; the two multi-trees are then combined, and a multi-tree similarity algorithm is applied to the combined tree to compute similarity.

Milne and Witten [7] measure semantic relatedness using the hyperlink structure of Wikipedia. Each article is represented by a list of its incoming and outgoing links, and relatedness is computed with tf-idf using link counts weighted by the probability of each link occurring. In WikiRelate [8], the two articles corresponding to the two terms are retrieved first; the categories of these articles are then extracted and mapped onto the category network. Given the set of paths found between the category pairs, Strube and Ponzetto compute relatedness by selecting the shortest path or, for information content based measures, the path that maximizes information content.

Stephan Gouws et al. [9] propose the Target Activation Approach (TAA) and the Agglomerative Approach (AA) for computing semantic relatedness by spreading activation energy over the hyperlink structure of Wikipedia. Relatedness between two nodes is measured as either 1) the ratio of initial energy that reaches the target node, or 2) the amount of overlap between the nodes' individual activation vectors when spreading from both nodes individually; the second method adapts the Wikipedia Link-based Measure (WLM) to use spreading activation. Another method that uses the web to compute semantic similarity between words is proposed by Turney [10]: it defines a pointwise mutual information measure over the number of hits returned by a web search engine to recognize synonyms.

Among the existing methods, WikiRelate is the most similar to our approach: as in WikiRelate, we use the Wikipedia taxonomy as a semantic network.
The difference is that we compute semantic relatedness by spreading activation over the taxonomy, while WikiRelate uses the shortest path length between two terms, information content, and text overlap based on this taxonomy.

III. WIKIPEDIA TAXONOMY

Wikipedia is a free online encyclopedia that grows through the collaborative efforts of volunteers over the Internet: anyone can contribute by writing or editing articles. As of March 2008, the English Wikipedia contained more than 2,300,000 articles. The articles are organized in categories, which can likewise be created and edited, and the categories themselves are organized into a hierarchy. Wikipedia's category and page network can thus be seen as a large semantic network. The project in [11] developed a Wikipedia taxonomy with over 473K unique Wikipedia categories and over 995K edges in the taxonomic tree of Wikipedia categories; its program is more proof of concept than production grade, and enhancements and improvements are welcomed. The work described in this paper uses this taxonomy, released on October 29, 2010, as a semantic network for computing semantic relatedness. Figure 1 shows an example of the Wikipedia taxonomy, in which Football within the oval shape represents an article and the rounded rectangles are categories. The dotted lines express the links between an article and its categories, and the solid lines indicate the links between categories.

IV. SPREADING ACTIVATION

Spreading activation is a method for searching associative networks, neural networks, or semantic networks. The search process is initiated by labelling a set of source nodes (e.g., concepts in a semantic network) with weights or "activation" and then iteratively propagating, or "spreading", that activation out to other nodes linked to the source nodes. Most often these weights are real values that decay as activation propagates through the network. When the weights are discrete, the process is often referred to as marker passing: activation may originate from alternative paths, identified by distinct markers, and terminate when two alternative paths reach the same node. Spreading activation models are used in cognitive psychology to model the fan-out effect. Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and the terms contained in those documents [12], and it has produced significant results in word sense disambiguation. In Wikipedia, the links between categories show associations between the concepts of articles and hence can be used for finding concepts related to a given concept. The algorithm starts with a set of activated nodes and, in each iteration, spreads the activation of those nodes to associated nodes. The spread of activation may be directed by adding constraints such as distance constraints, fan-out constraints, path constraints, and thresholds; these parameters are mostly domain specific [13].
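To make the mechanism concrete, here is a minimal, self-contained Python sketch of constrained spreading activation over a toy category network; the network, the seed node, and the parameter values are invented for illustration and are not the paper's actual taxonomy.

    # Toy associative network: each node lists its parent categories.
    # Node names, decay and distance constraint are illustrative only.
    parents = {
        "Tennis": ["Racquet sports", "Olympic sports"],
        "Racquet sports": ["Sports by type"],
        "Olympic sports": ["Sports by type"],
        "Sports by type": ["Sports"],
    }

    def spread_activation(seeds, decay=0.5, max_path_length=2):
        # Start from the seed activations and spread them to parent
        # nodes, decaying by decay**p at path length p (distance constraint).
        activation = dict(seeds)
        frontier = dict(seeds)
        for p in range(1, max_path_length + 1):
            next_frontier = {}
            for node, value in frontier.items():
                for parent in parents.get(node, []):
                    next_frontier[parent] = (next_frontier.get(parent, 0.0)
                                             + value * decay ** p)
            for node, value in next_frontier.items():
                activation[node] = activation.get(node, 0.0) + value
            frontier = next_frontier
        return activation

    print(spread_activation({"Tennis": 1.0}))

The maximum path length plays the role of the distance constraint described above, while the exponential decay penalises activation that travels over longer paths.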

Fig. 1 An example of Wikipedia taxonomy (the figure shows the article Football with categories including Games, Sports by type, Team sports, Ball games, Football, Sports originating in England, Sports originating in the United Kingdom, 19th-century introductions, and Introductions by year)

V. SEMANTIC RELATEDNESS MEASURING

To compute semantic relatedness between two terms, we first extract the Wikipedia categories of each term. We then use all the extracted categories as child nodes of the Wikipedia category tree and apply the spreading activation method to this tree to obtain a semantic relatedness value. The node input function, node output function, and semantic relatedness function are:

    I_j = \sum_{i=1}^{N} O_i    (1)

    O_j = A_j \cdot d^p    (2)

    relatedness = \frac{1}{n+m} \sum_{k=1}^{n+m} A_k    (3)

where the variables are defined as follows:
    O_i : output of node i connected to node j
    A_j : activation value of node j
    p : path length
    d : decay factor
    I_j : input to node j from its child nodes (the activation value of node j)
    N : number of nodes connected to node j
    Act : set of activation values

The activation process is iterative. All the original nodes take their occurrences as their initial activation values, and the activation value of every other node is initialized to one if it is included in the categories of the two target words and to zero otherwise. Every node propagates its activation to its parents; the propagated value O_j is obtained as a function of the node's activation level. The path length p is initialized to 1 and increased by one after each propagation step, and the process iterates until the maximum path length p_max is reached. Then the highest activation value among the nodes associated with each original node is retrieved into a set Act = {A_1, A_2, ..., A_{n+m}}, and the relatedness value is the average of the values in the Act set, computed using equation (3).

A. Distance Constraint and Decay Factor

In each iteration of the spreading activation process, a node's activation value is multiplied by a decay factor d, 0 < d < 1. This factor decays the activation of each node exponentially in the path length: with a path length of one, activation is decayed by d; with a path length of two, by d^2; and so on. This penalises activation transfer over longer paths. In the process of computing relatedness we therefore use this decay factor together with another parameter, the path length, bounded by a maximum path length that limits how far activation can spread.

The following example shows the computation of semantic relatedness between the two words tennis and football. First we assign d = 0.1 and extract the categories of each word:

    Categories of Tennis: OLYMPIC SPORTS, RACQUET SPORTS, TENNIS
    Categories of Football: FOOTBALL, 19TH-CENTURY INTRODUCTIONS
    Combined categories of the two words: OLYMPIC SPORTS, RACQUET SPORTS, TENNIS, FOOTBALL, 19TH-CENTURY INTRODUCTIONS

Initial activation values of 2.0 are then assigned to these categories; a code sketch of the full iteration is given below.
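Continuing in the same style, this minimal sketch shows how equations (1)-(3) combine into a relatedness score; the toy parent links and the use of all activated nodes as the Act set (rather than the per-term maxima described above) are simplifying assumptions, so it does not reproduce the 0.67 reported next.

    # Sketch of eqs. (1)-(3); parent links and Act handling are assumptions.
    parents = {
        "TENNIS": ["RACQUET SPORTS"],
        "FOOTBALL": ["BALL GAMES"],
        "19TH-CENTURY INTRODUCTIONS": ["INTRODUCTIONS BY YEAR"],
    }

    def relatedness(initial, d=0.1, p_max=2):
        activation = dict(initial)            # seeds keep their occurrences
        for p in range(1, p_max + 1):
            inputs = {}
            for i, a_i in activation.items():
                o_i = a_i * d ** p            # eq. (2): O_j = A_j * d^p
                for j in parents.get(i, []):
                    inputs[j] = inputs.get(j, 0.0) + o_i  # eq. (1): summed input
            for j, i_j in inputs.items():
                activation[j] = activation.get(j, 0.0) + i_j
        act = list(activation.values())       # simplified Act set
        return sum(act) / len(act)            # eq. (3): average over Act

    combined = ["OLYMPIC SPORTS", "RACQUET SPORTS", "TENNIS",
                "FOOTBALL", "19TH-CENTURY INTRODUCTIONS"]
    print(relatedness({c: 2.0 for c in combined}))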

After evaluating two iterations with equations (1), (2), and (3), the activation values of the categories are 0.951, 0.653, 0.654, 0.728, 0.728, 0.505, and 0.446. Finally, we obtain the semantic relatedness value by averaging these seven activation values according to equation (3): the semantic relatedness score between Tennis and Football is 0.67. The higher the score, the more related the two words are.

VI. EVALUATION

Whether the scores produced by a semantic relatedness measure are meaningful can be determined by comparing its results with human judgements. Three datasets are popular for evaluating the computation of semantic relatedness between two terms; the proposed method is evaluated on the benchmark dataset of Rubenstein and Goodenough (1965) (R&G). We downloaded the XML dump of Wikipedia articles released on December 01, 2012, ignoring the history, talk pages, user pages, etc. We also downloaded the Wikipedia taxonomy database, released on October 29, 2010, which includes the two tables provided by the project described in [11]: a category table with 473,639 categories and a taxonomy table representing 995,863 links between categories.

Figure 2 shows the performance of the proposed method as the correlation between our results and the human judgements from the standard R&G dataset. We computed relatedness using maximum path length p_max = 2 and decay factors d = 0.1, 0.15, and 0.2. From this experiment, we observed that the best result is obtained with d = 0.1.

A. Comparison to Alternative Methods

We evaluate our approach by comparing it to the methods described in the related work. The best score obtained by our method is shown in bold. The experiment shows that the result of our method depends strongly on the decay factor.

TABLE I
ACCURACY OF SEMANTIC RELATEDNESS MEASURES FOR RUBENSTEIN AND GOODENOUGH'S DATASET

    Method                        Correlation
    WikiRelate                    0.52
    Using snippets (a = 0)        0.60
    Using snippets (a = 0.1)      0.61
    PMI                           0.53
    Proposed method (d = 0.1)     0.60
    Proposed method (d = 0.15)    0.57
    Proposed method (d = 0.2)     0.59

Table I compares the proposed method with three approaches, WikiRelate, PMI, and the method using snippets from Wikipedia, by their correlations with standard manually defined human judgements. The Pearson correlation coefficient is used to measure the correlation between the results produced by each semantic relatedness method and the human ranking; only the best measures obtained by the different approaches are shown. Evaluating on the R&G dataset, we see a consistent trend: our approach outperforms WikiRelate. WikiRelate used the Wikipedia category taxonomy, as in the proposed method, to compute the semantic relatedness between two words; it calculated semantic relatedness using three kinds of measures: path based measures (lch and wup), information content based measures (res), and text overlap based measures. The PMI method uses pointwise mutual information to sort a list of important neighbour words of the two words when computing their semantic similarity; the correlation coefficient between PMI and the R&G human judgements is 0.534, which is considerably less accurate than the proposed approach. The method using snippets from Wikipedia outperformed PMI. Our result is less accurate than that method when it is run with threshold a = 0.1, but our result with decay factor d = 0.1 matches its result with a = 0.
The experiment also shows that our approach produces a more accurate result than the snippet-based method when the latter, run with threshold a = 0.11, scores 0.597. From these comparisons, we conclude that the proposed method produces more accurate results than methods such as WikiRelate and PMI, and matches the method that uses Wikipedia snippets.

Fig. 2 Performance of the proposed method using different values of the decay factor
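As a reference for the metric itself, here is a minimal pure-Python sketch of the Pearson correlation coefficient used in this evaluation; the two score lists are made-up placeholders, not the actual R&G pairs or system outputs.

    def pearson(x, y):
        # Pearson correlation between two equal-length score lists.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        std_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        std_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (std_x * std_y)

    # Hypothetical system scores vs. human judgements for five word pairs.
    system = [0.67, 0.12, 0.45, 0.88, 0.30]
    human = [3.5, 0.8, 2.1, 3.9, 1.5]
    print(round(pearson(system, human), 3))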

VII. CONCLUSION

In this paper, we proposed a new method for computing semantic relatedness by spreading activation over the Wikipedia taxonomy. The method shows that semantically related terms can be found with the help of Wikipedia, a large knowledge source. Our future work will experiment with pairs of phrases to calculate their semantic relatedness. We can also experiment with a more recently updated taxonomy, which would give larger coverage of categories and hence a more accurate measure. A potential extension is to use the method within an information retrieval system in order to produce semantically related results for the user.

ACKNOWLEDGMENT

I would like to express my special thanks to my supervisor, Dr. Ei Ei Mon, Lecturer at our university, UTYCC (University of Technology, Yatanarpon Cyber City), for her kind-hearted support. I would like to show my deepest thanks to all my teachers and colleagues who gave a hand, directly or indirectly, during the laborious process of completing this work. I also highly appreciate the moral support of my parents. Finally, I am grateful to the developers of the Wikipedia taxonomy project on sourceforge.net for kindly sharing their project.

REFERENCES
[1] R. Thiagarajan, G. Manjunath, and M. Stumptner, "Computing semantic similarity using ontologies," International Semantic Web Conference (ISWC), Karlsruhe, Germany, 2008.
[2] K. Sapkota, L. Thapa, and S. Pandey, "Efficient information retrieval using measures of semantic similarity," Nepal Engineering College.
[3] E. Gabrilovich and S. Markovitch, "Computing semantic relatedness using Wikipedia-based explicit semantic analysis," in Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI 07), pp. 6-12.
[4] M. Yazdani and A. Popescu-Belis, "A random walk framework to compute textual semantic similarity: a unified model for three benchmark tasks."
[5] L. Zhiqiang, S. Werimin, and Y. Zhenhua, "Measuring semantic similarity between words using Wikipedia," International Conference on Web Information Systems and Mining, 2009, pp. 251-254.
[6] B. Hajian and T. White, "Measuring semantic similarity using a multi-tree model," 2011.
[7] D. Milne and I. H. Witten, "An effective, low-cost measure of semantic relatedness obtained from Wikipedia links," in Proc. of the AAAI Workshop on Wikipedia and Artificial Intelligence: an Evolving Synergy, AAAI Press, Chicago, USA, pp. 25-30.
[8] M. Strube and S. P. Ponzetto, "WikiRelate! Computing semantic relatedness using Wikipedia," in Proc. of the National Conference on Artificial Intelligence, vol. 21, 2006.
[9] S. Gouws, G. van Rooyen, and H. A. Engelbrecht, "Measuring conceptual similarity by spreading activation over Wikipedia's hyperlink structure," in Proc. of the 2nd Workshop on Collaboratively Constructed Semantic Resources, Coling 2010, Beijing, August 2010, pp. 46-54.
[10] P. D. Turney, "Mining the Web for synonyms: PMI-IR versus LSA on TOEFL," in Proc. of the 12th European Conference on Machine Learning, pp. 491-502.
[11] http://sourceforge.net/projects/wikicategory/files/?source=navbar
[12] http://wikipedia.org/spreading_acitvation.htm
[13] Z. S. Syed, T. Finin, and A. Joshi, "Wikipedia as an ontology for describing documents," Association for the Advancement of Artificial Intelligence, 2008, pp. 136-144.
[14] M. Strube and S. P. Ponzetto, "Deriving a large scale taxonomy from Wikipedia," in Proc. of the 22nd National Conference on Artificial Intelligence, Vancouver, B.C., Canada, 22-26 July 2007.