Short Text Similarity with Word Embeddings
1 Short Text Similarity with Word Embeddings. CS 6501 Advanced Topics in Information. Tom Kenter and Maarten de Rijke, University of Amsterdam, Amsterdam, The Netherlands. Presented by Jibang Wu, Apr 19th, 2017.
2 Outline
1 Introduction: Why Short Text Similarity? How Do Traditional Approaches Fail?
2 Methodology: From Word-level to Text-level Semantics; Saliency-weighted Semantic Similarity; Learning Algorithm
3 Summary: Experiment; Analysis; Conclusion
5 Introduction: Why Short Text Similarity?
Example: "The procedure is generally performed in the second or third trimester." vs. "The technique is used during the second and, occasionally, third trimester of pregnancy."
Word-level similarity is not enough (query-query similarity, query-image caption similarity), and we cannot easily go from word-level to text-level similarity: text structure should be taken into account.
10 Introduction: How Do Traditional Approaches Fail?
Lexical Matching (largest common substring, edit distance, lexical overlap):
1 "United States" vs. "United Kingdom"
2 "United States" vs. "USA"
FAILED: the second pair should be the better match.
Linguistic Analysis (parse trees following grammar features): not all texts are necessarily parseable (e.g., tweets), and high-quality parses are usually expensive to compute at run time.
Structured Semantic Knowledge (WordNet, Wikipedia): not available for all languages, nor for domain-specific terms.
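The lexical-matching failure above can be reproduced directly. A minimal Levenshtein edit distance (a standard dynamic-programming implementation, not code from the paper) rates "United Kingdom" as closer to "United States" than "USA" is, even though "USA" is the semantic match:

```python
# Minimal Levenshtein edit distance (dynamic programming, one row at a time).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

d1 = edit_distance("United States", "United Kingdom")
d2 = edit_distance("United States", "USA")
print(d1, d2)  # d1 < d2: lexically, "United Kingdom" looks like the better match
```

This is exactly the failure mode: surface-form distance contradicts meaning.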
13 Introduction: How do we represent the meaning of a word?
Naive approach: one-hot representation, stored in a vector the size of the vocabulary.
Example: hotel = [ ], motel = [ ]
Dimensionality: 20K (speech), 500K (dictionary), 13M (Google 1T).
Problems: wastes memory; hard to show semantic similarity.
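The "hard to show semantic similarity" problem is easy to demonstrate: under one-hot encoding every pair of distinct words is orthogonal, so cosine similarity is 0 regardless of meaning. A small sketch with a toy three-word vocabulary:

```python
# One-hot vectors: any two distinct words are orthogonal, so "hotel" and
# "motel" look exactly as unrelated as "hotel" and "banana".
import math

vocab = ["hotel", "motel", "banana"]

def one_hot(word):
    return [1.0 if w == word else 0.0 for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine(one_hot("hotel"), one_hot("motel")))   # 0.0
print(cosine(one_hot("hotel"), one_hot("banana")))  # 0.0
```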
15 Introduction: How do we represent the meaning of a word?
Word embeddings: distributional-similarity-based representations. Build a dense vector for each word type, chosen so that it is good at predicting other words appearing in its context.
Example: hotel = [ ], motel = [ ]
Dimensionality: (Word2vec) 300 (GloVe). A neural network trained on extensive unlabeled context.
Advantages: efficient in memory and computation; easy to show semantic similarity.
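With dense vectors, cosine similarity does reflect relatedness. The three-dimensional vectors below are made-up toy values (real embeddings have hundreds of dimensions and are learned from corpora), chosen only to illustrate the contrast with one-hot encoding:

```python
# Toy dense embeddings (made-up values): related words get nearby vectors,
# so cosine similarity now tracks semantic similarity.
import math

emb = {
    "hotel":  [0.7, 0.5, 0.1],
    "motel":  [0.6, 0.6, 0.2],
    "banana": [-0.3, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

print(cosine(emb["hotel"], emb["motel"]))   # high: related words
print(cosine(emb["hotel"], emb["banana"]))  # low: unrelated words
```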
16 Introduction: Intuitions.
19 Methodology: From Word-level to Text-level Semantics
Semantic space: represent each sentence by the sum of the vectors of its words, $\vec{S}_1 = \sum_{w \in S_1} \vec{w}$ and $\vec{S}_2 = \sum_{w \in S_2} \vec{w}$.
Is this averaged sum a good proxy for sentence similarity?
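The sum-of-embeddings baseline is a one-liner per sentence: add up the word vectors, then compare sentences by cosine similarity. A minimal sketch with made-up three-dimensional vectors for illustration:

```python
# Sum-of-embeddings sentence representation: S = sum of word vectors,
# compared by cosine similarity. Toy vectors, made up for illustration.
import math

emb = {
    "procedure": [0.8, 0.2, 0.1], "technique": [0.7, 0.3, 0.1],
    "second":    [0.1, 0.9, 0.0], "trimester": [0.2, 0.1, 0.9],
}

def sentence_vec(words, dims=3):
    v = [0.0] * dims
    for w in words:
        for i, x in enumerate(emb.get(w, [0.0] * dims)):  # OOV words contribute nothing
            v[i] += x
    return v

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

s1 = sentence_vec(["procedure", "second", "trimester"])
s2 = sentence_vec(["technique", "second", "trimester"])
print(cosine(s1, s2))  # close to 1: the sentences share most of their meaning
```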
21 Methodology: From Word-level to Text-level Semantics
Unweighted semantic similarity:
1 For each pair of terms (w1, w2) in S1 and S2, compute the cosine similarity.
2 Build a fully connected, unweighted, bipartite graph.
3 Compute a maximum bipartite matching.
4 Separate the matched word pairs into bins by similarity level.
Problems: not all terms are equally important, and longer texts have a higher probability of chance matches.
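The four steps above can be sketched compactly. Note two simplifications in this sketch: it uses a greedy approximation instead of an exact maximum bipartite matching, and the two-dimensional vectors and bin thresholds are made-up illustration values, not the paper's settings:

```python
# Sketch of the unweighted-similarity pipeline: pairwise cosines, a greedy
# approximation of maximum bipartite matching, then binning by similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def match_and_bin(s1, s2, emb, thresholds=(0.9, 0.5)):
    # Steps 1-2: cosine similarity for every (w1, w2) pair across the sentences.
    pairs = sorted(((cosine(emb[w1], emb[w2]), w1, w2)
                    for w1 in s1 for w2 in s2), reverse=True)
    # Step 3: greedily take the best-scoring pair, never reusing a word.
    used1, used2, matched = set(), set(), []
    for sim, w1, w2 in pairs:
        if w1 not in used1 and w2 not in used2:
            used1.add(w1); used2.add(w2); matched.append(sim)
    # Step 4: count matched pairs per similarity bin (high / medium / low).
    bins = [0, 0, 0]
    for sim in matched:
        bins[0 if sim >= thresholds[0] else 1 if sim >= thresholds[1] else 2] += 1
    return bins

emb = {"car": [1.0, 0.1], "auto": [0.9, 0.2], "red": [0.1, 1.0], "crimson": [0.2, 0.9]}
print(match_and_bin(["car", "red"], ["auto", "crimson"], emb))  # [2, 0, 0]
```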
23 Methodology: Saliency-weighted Semantic Similarity
From BM25:
$$r(q, d) = \sum_{w \in q \cap d} IDF(w) \cdot \frac{c(w, d) \cdot (k_1 + 1)}{c(w, d) + k_1 \cdot \left(1 - b + b \cdot \frac{n}{n_{avg}}\right)}$$
where c(w, d) counts literal matches of word w in document d.
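The BM25 formula translates to a few lines of code. In this sketch the IDF table, the toy document, and the average length are made-up inputs; only the scoring formula itself follows the slide:

```python
# BM25 scoring: IDF-weighted, saturating term-frequency score with length
# normalization. idf is assumed precomputed; n_avg is the average doc length.
def bm25(query, doc, idf, n_avg, k1=1.2, b=0.75):
    n = len(doc)
    score = 0.0
    for w in set(query) & set(doc):       # only literally matching words count
        c = doc.count(w)                   # c(w, d): term frequency in the doc
        score += idf[w] * c * (k1 + 1) / (c + k1 * (1 - b + b * n / n_avg))
    return score

idf = {"trimester": 2.3, "second": 1.1, "the": 0.05}
doc = ["the", "second", "trimester"]
print(bm25(["second", "trimester"], doc, idf, n_avg=3.0))
```

Note the limitation the slide points out: only literal matches contribute, which is what the saliency-weighted variant fixes next.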
24 Methodology: Saliency-weighted Semantic Similarity
$$f_{sts}(s_l, s_s) = \sum_{w \in s_l} IDF(w) \cdot \frac{sem(w, s_s) \cdot (k_1 + 1)}{sem(w, s_s) + k_1 \cdot \left(1 - b + b \cdot \frac{|s_s|}{avg_{sl}}\right)}$$
$$sem(w, s_s) = \max_{w' \in s_s} f_{sem}(w, w')$$
where f_sem(w, w') returns the semantic match score from the word embeddings. Common words have smaller IDF(w) than rare words. Summands in different score ranges are binned together.
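A sketch of f_sts: BM25's literal term frequency c(w, d) is replaced by sem(w, s_s), the best embedding match of w anywhere in the short text. Cosine similarity stands in for f_sem, and the embeddings, IDF values, and avg_sl are toy numbers for illustration:

```python
# Saliency-weighted semantic similarity: BM25-style scoring where the term
# frequency is replaced by the best embedding match sem(w, s_s).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def f_sts(s_l, s_s, emb, idf, avg_sl, k1=1.2, b=0.75):
    score = 0.0
    for w in s_l:
        sem = max(cosine(emb[w], emb[w2]) for w2 in s_s)  # sem(w, s_s)
        score += idf[w] * sem * (k1 + 1) / (
            sem + k1 * (1 - b + b * len(s_s) / avg_sl))
    return score

emb = {"procedure": [0.8, 0.2], "technique": [0.7, 0.3], "trimester": [0.1, 0.9]}
idf = {"procedure": 1.5, "trimester": 2.0}
print(f_sts(["procedure", "trimester"], ["technique", "trimester"], emb, idf, avg_sl=2.0))
```

Unlike plain BM25, "procedure" still earns credit here because "technique" is a close embedding match, while its IDF weight keeps common words from dominating.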
26 Methodology: Learning Algorithm
Models: pre-trained out-of-the-box word embeddings:
Word2vec, 300 dimensions, by Mikolov et al.
Word2vec, 400 dimensions, by Baroni et al.
GloVe, 300 dimensions, trained on an 840-billion-token corpus
GloVe, 300 dimensions, trained on a 42-billion-token corpus
Auxiliary word embeddings trained on INEX (1.2 billion tokens), based on either Word2vec or GloVe.
A learning algorithm optimizes the parameter settings for predicting short text similarity.
27 Methodology: Learning Algorithm. Binary classifier from supervised learning.
29 Summary: Experiment
Experiment setup:
Dataset: Microsoft Research Paraphrase (MSR) Corpus, 5801 sentence pairs annotated with binary labels, divided into a training set of 4076 pairs and a test set of 1725 pairs.
Out-of-vocabulary words: ignored in training, mapped randomly at run time.
Parameter settings for f_sts: k_1 = 1.2, b = 0.75, with IDF calculated from the INEX data.
Three bin thresholds by similarity level: highly similar, medium, unlikely.
Features: saliency-weighted semantic network and unweighted semantic network.
30 Summary: Experiment
Experiment results. Abbreviations: OoB = out-of-the-box vectors; aux = auxiliary vectors; w2v = Word2vec; glv = GloVe; unwghtd = unweighted semantic feature; swsn = saliency-weighted semantic feature.
The best model uses all features and all word embedding models. The method overall outperforms previous approaches.
32 Summary: Analysis. Performance across sentence length: the model performs better on sentences that are alike in length, and tends to predict dissimilarity when texts differ substantially in length.
33 Summary: Analysis. Performance across levels of lexical overlap: at low lexical overlap, the algorithm shows the benefit of semantic matching over lexical matching.
35 Summary: Conclusion
Advantages: word-embedding-based unsupervised learning; substitutes for methods based on external semantic knowledge; crucial applications in search and query suggestion.
Limitations: the order of words is not taken into account, and context awareness is important in real applications.
36 Appendix: Citations
Kenter, Tom, and Maarten de Rijke. "Short Text Similarity with Word Embeddings." Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM). ACM, 2015.
Mikolov, Tomas, et al. "Distributed Representations of Words and Phrases and Their Compositionality." Advances in Neural Information Processing Systems, 2013.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. "GloVe: Global Vectors for Word Representation." EMNLP, 2014.
37 More
38 More: Mainstream Algorithms
Word2Vec: predict surrounding words in a window of radius m around every word.
Continuous bag-of-words (CBOW): predicts the word given its context; several times faster to train than the skip-gram; slightly better accuracy for frequent words.
Skip-gram: predicts the context given a word; works well with small amounts of training data; represents even rare words or phrases well.
Global Vectors for Word Representation (GloVe): combines the advantages of global matrix factorization and local context window methods.
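The "predict surrounding words in a window of radius m" step determines the training data. A minimal sketch of how skip-gram extracts its (center, context) training pairs from a token stream (the actual model then learns vectors from these pairs):

```python
# Skip-gram training pairs: for each word, pair it with every other word
# inside a window of radius m.
def skipgram_pairs(tokens, m=2):
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - m), min(len(tokens), i + m + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"], m=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

CBOW uses the same windows but inverts the prediction direction: the context words jointly predict the center word.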
39 More: Window-based co-occurrence matrix (slide from Stanford CS224n).
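The co-occurrence matrix on the slide is built by counting, for each word, how often every other word appears within a fixed window of it. A minimal sketch over a toy token stream (the slide's own corpus is not reproduced here):

```python
# Window-based co-occurrence counts: counts[(w1, w2)] is how often w2 appears
# within `window` positions of w1.
from collections import Counter

def cooccurrence(tokens, window=1):
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[(w, tokens[j])] += 1
    return counts

counts = cooccurrence(["i", "like", "deep", "learning"], window=1)
print(counts[("like", "deep")])  # 1
```

GloVe factorizes a (weighted, log-transformed) matrix of exactly these global counts, which is what "global matrix factorization" refers to on the previous slide.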
40 More: Feature Highlights. W(woman) - W(man) ≈ W(aunt) - W(uncle): gender is encoded as a consistent vector offset.
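That offset regularity is what makes analogy arithmetic work: W(uncle) + (W(woman) - W(man)) lands nearest to W(aunt). A sketch with toy two-dimensional vectors deliberately constructed to exhibit the gender offset (real embeddings show this only approximately):

```python
# Analogy arithmetic: uncle + (woman - man) should be closest to aunt.
# Toy vectors, constructed so the second dimension carries the gender offset.
import math

emb = {
    "man":   [1.0, 0.0], "woman": [1.0, 1.0],
    "uncle": [0.0, 0.1], "aunt":  [0.0, 1.1],
    "king":  [0.9, 0.05],
}

def nearest(vec, exclude):
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
    # Exclude the query words themselves, as is standard for analogy evaluation.
    return max((w for w in emb if w not in exclude), key=lambda w: cos(vec, emb[w]))

query = [u + w - m for u, w, m in zip(emb["uncle"], emb["woman"], emb["man"])]
print(nearest(query, exclude={"uncle", "woman", "man"}))  # aunt
```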
More informationModeling function word errors in DNN-HMM based LVCSR systems
Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford
More informationWeb as Corpus. Corpus Linguistics. Web as Corpus 1 / 1. Corpus Linguistics. Web as Corpus. web.pl 3 / 1. Sketch Engine. Corpus Linguistics
(L615) Markus Dickinson Department of Linguistics, Indiana University Spring 2013 The web provides new opportunities for gathering data Viable source of disposable corpora, built ad hoc for specific purposes
More informationGraph Alignment for Semi-Supervised Semantic Role Labeling
Graph Alignment for Semi-Supervised Semantic Role Labeling Hagen Fürstenau Dept. of Computational Linguistics Saarland University Saarbrücken, Germany hagenf@coli.uni-saarland.de Mirella Lapata School
More informationCross Language Information Retrieval
Cross Language Information Retrieval RAFFAELLA BERNARDI UNIVERSITÀ DEGLI STUDI DI TRENTO P.ZZA VENEZIA, ROOM: 2.05, E-MAIL: BERNARDI@DISI.UNITN.IT Contents 1 Acknowledgment.............................................
More informationAutoencoder and selectional preference Aki-Juhani Kyröläinen, Juhani Luotolahti, Filip Ginter
ESUKA JEFUL 2017, 8 2: 93 125 Autoencoder and selectional preference Aki-Juhani Kyröläinen, Juhani Luotolahti, Filip Ginter AN AUTOENCODER-BASED NEURAL NETWORK MODEL FOR SELECTIONAL PREFERENCE: EVIDENCE
More informationTowards a MWE-driven A* parsing with LTAGs [WG2,WG3]
Towards a MWE-driven A* parsing with LTAGs [WG2,WG3] Jakub Waszczuk, Agata Savary To cite this version: Jakub Waszczuk, Agata Savary. Towards a MWE-driven A* parsing with LTAGs [WG2,WG3]. PARSEME 6th general
More informationQuickStroke: An Incremental On-line Chinese Handwriting Recognition System
QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents
More informationLearning a Cross-Lingual Semantic Representation of Relations Expressed in Text
Learning a Cross-Lingual Semantic Representation of Relations Expressed in Text Achim Rettinger, Artem Schumilin, Steffen Thoma, and Basil Ell Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
More informationSINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY (TF-IDF)
SINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY (TF-IDF) Hans Christian 1 ; Mikhael Pramodana Agus 2 ; Derwin Suhartono 3 1,2,3 Computer Science Department,
More informationIterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages
Iterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages Nuanwan Soonthornphisaj 1 and Boonserm Kijsirikul 2 Machine Intelligence and Knowledge Discovery Laboratory Department of Computer
More informationIntroduction to Causal Inference. Problem Set 1. Required Problems
Introduction to Causal Inference Problem Set 1 Professor: Teppei Yamamoto Due Friday, July 15 (at beginning of class) Only the required problems are due on the above date. The optional problems will not
More informationPREDICTING SPEECH RECOGNITION CONFIDENCE USING DEEP LEARNING WITH WORD IDENTITY AND SCORE FEATURES
PREDICTING SPEECH RECOGNITION CONFIDENCE USING DEEP LEARNING WITH WORD IDENTITY AND SCORE FEATURES Po-Sen Huang, Kshitiz Kumar, Chaojun Liu, Yifan Gong, Li Deng Department of Electrical and Computer Engineering,
More informationIndian Institute of Technology, Kanpur
Indian Institute of Technology, Kanpur Course Project - CS671A POS Tagging of Code Mixed Text Ayushman Sisodiya (12188) {ayushmn@iitk.ac.in} Donthu Vamsi Krishna (15111016) {vamsi@iitk.ac.in} Sandeep Kumar
More informationBridging Lexical Gaps between Queries and Questions on Large Online Q&A Collections with Compact Translation Models
Bridging Lexical Gaps between Queries and Questions on Large Online Q&A Collections with Compact Translation Models Jung-Tae Lee and Sang-Bum Kim and Young-In Song and Hae-Chang Rim Dept. of Computer &
More informationAnalyzing sentiments in tweets for Tesla Model 3 using SAS Enterprise Miner and SAS Sentiment Analysis Studio
SCSUG Student Symposium 2016 Analyzing sentiments in tweets for Tesla Model 3 using SAS Enterprise Miner and SAS Sentiment Analysis Studio Praneth Guggilla, Tejaswi Jha, Goutam Chakraborty, Oklahoma State
More informationSARDNET: A Self-Organizing Feature Map for Sequences
SARDNET: A Self-Organizing Feature Map for Sequences Daniel L. James and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 dljames,risto~cs.utexas.edu
More informationLongest Common Subsequence: A Method for Automatic Evaluation of Handwritten Essays
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 6, Ver. IV (Nov Dec. 2015), PP 01-07 www.iosrjournals.org Longest Common Subsequence: A Method for
More informationModule 12. Machine Learning. Version 2 CSE IIT, Kharagpur
Module 12 Machine Learning 12.1 Instructional Objective The students should understand the concept of learning systems Students should learn about different aspects of a learning system Students should
More informationThe Importance of Social Network Structure in the Open Source Software Developer Community
The Importance of Social Network Structure in the Open Source Software Developer Community Matthew Van Antwerp Department of Computer Science and Engineering University of Notre Dame Notre Dame, IN 46556
More information11/29/2010. Statistical Parsing. Statistical Parsing. Simple PCFG for ATIS English. Syntactic Disambiguation
tatistical Parsing (Following slides are modified from Prof. Raymond Mooney s slides.) tatistical Parsing tatistical parsing uses a probabilistic model of syntax in order to assign probabilities to each
More informationConstructing Parallel Corpus from Movie Subtitles
Constructing Parallel Corpus from Movie Subtitles Han Xiao 1 and Xiaojie Wang 2 1 School of Information Engineering, Beijing University of Post and Telecommunications artex.xh@gmail.com 2 CISTR, Beijing
More informationTRANSFER LEARNING IN MIR: SHARING LEARNED LATENT REPRESENTATIONS FOR MUSIC AUDIO CLASSIFICATION AND SIMILARITY
TRANSFER LEARNING IN MIR: SHARING LEARNED LATENT REPRESENTATIONS FOR MUSIC AUDIO CLASSIFICATION AND SIMILARITY Philippe Hamel, Matthew E. P. Davies, Kazuyoshi Yoshii and Masataka Goto National Institute
More informationMeasuring the relative compositionality of verb-noun (V-N) collocations by integrating features
Measuring the relative compositionality of verb-noun (V-N) collocations by integrating features Sriram Venkatapathy Language Technologies Research Centre, International Institute of Information Technology
More informationSwitchboard Language Model Improvement with Conversational Data from Gigaword
Katholieke Universiteit Leuven Faculty of Engineering Master in Artificial Intelligence (MAI) Speech and Language Technology (SLT) Switchboard Language Model Improvement with Conversational Data from Gigaword
More informationMandarin Lexical Tone Recognition: The Gating Paradigm
Kansas Working Papers in Linguistics, Vol. 0 (008), p. 8 Abstract Mandarin Lexical Tone Recognition: The Gating Paradigm Yuwen Lai and Jie Zhang University of Kansas Research on spoken word recognition
More informationBeyond the Pipeline: Discrete Optimization in NLP
Beyond the Pipeline: Discrete Optimization in NLP Tomasz Marciniak and Michael Strube EML Research ggmbh Schloss-Wolfsbrunnenweg 33 69118 Heidelberg, Germany http://www.eml-research.de/nlp Abstract We
More informationEdIt: A Broad-Coverage Grammar Checker Using Pattern Grammar
EdIt: A Broad-Coverage Grammar Checker Using Pattern Grammar Chung-Chi Huang Mei-Hua Chen Shih-Ting Huang Jason S. Chang Institute of Information Systems and Applications, National Tsing Hua University,
More informationChapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA. 1. Introduction. Alta de Waal, Jacobus Venter and Etienne Barnard
Chapter 10 APPLYING TOPIC MODELING TO FORENSIC DATA Alta de Waal, Jacobus Venter and Etienne Barnard Abstract Most actionable evidence is identified during the analysis phase of digital forensic investigations.
More information