Employing External Rich Knowledge for Machine Comprehension

Employing External Rich Knowledge for Machine Comprehension (IJCAI-16)
Bingning Wang, Shangmin Guo, Kang Liu, Shizhu He, Jun Zhao
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Presented by: Dushyanta Dhyani

Outline
1 Problem Definition
2 Challenges
3 Datasets
4 Approach
5 Experiments
6 Results

Problem Definition: Machine Comprehension
Given a document and a question about it, select the correct answer from a set of candidates.
(Illustrative example from [1].)

Challenges
The nature of this task calls for a supervised learning approach, so the availability of labeled data is a major bottleneck. Deep architectures, which have proven to capture rich semantic understanding of text, require large amounts of training data.

Existing Datasets
Machine Comprehension Test (MCTest) [2]
Children's Book Test (CBT), part of Facebook's bAbI project [3]
CNN/Daily Mail dataset [4]

Dataset: MCTest
A collection of 660 stories and associated questions, collected using Amazon Mechanical Turk.
Each question is labeled as "one" or "multiple" to indicate the number of sentences in the document that are related to it.
Each question has four candidate answers, which may span single or multiple words.
Questions may be factoid or non-factoid.
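
To make the data format concrete, here is a minimal sketch of one MCTest example. The field names and story text are illustrative (the official distribution ships as TSV files, not this schema):

```python
# One MCTest example, as described on this slide: a story, plus questions
# each carrying a "one"/"multiple" label and four candidate answers.
example = {
    "story": "Sally went to the park with her dog. ...",
    "questions": [
        {
            "question": "Where did Sally go?",
            "label": "one",           # "one" or "multiple" supporting sentences
            "candidates": ["the park", "the store", "school", "home"],
            "answer": "the park",
        },
        # ... more questions for the same story
    ],
}
```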

Proposed Approach: External Supervision!
Given the small amount of data, deep architectures might not perform well on MCTest alone.
Use additional data to train auxiliary models that provide external supervision.
Use a traditional recurrent neural network with attention and incorporate the supervision from those models.

Proposed Approach
Transform machine comprehension into the standard question-answering task, subdivided into two stages:
Answer Selection (AS): an attention-based RNN selects the supporting sentence.
Answer Generation: the question is combined with each candidate answer to form a statement, and each statement is ranked by its semantic similarity to the sentence selected in the previous stage.
External supervision is utilized in both stages; a minimal sketch of the pipeline follows.
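
A minimal sketch of the two-stage pipeline, assuming hypothetical `select_sentence`, `to_statement`, and `score_entailment` components (these names are mine, not from the paper):

```python
def answer(document_sentences, question, candidates,
           select_sentence, to_statement, score_entailment):
    """Two-stage MC pipeline: answer selection, then answer generation.

    select_sentence : scores each sentence against the question (AS stage)
    to_statement    : rule-based question + candidate -> declarative statement
    score_entailment: similarity/entailment between sentence and statement
    """
    # Stage 1: pick the most confident supporting sentence.
    support = max(document_sentences,
                  key=lambda s: select_sentence(question, s))
    # Stage 2: rank candidates by entailment with the supporting sentence.
    return max(candidates,
               key=lambda a: score_entailment(support, to_statement(question, a)))
```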

Approach: Notation
The document is denoted $D$, with sentences $\{s_0, s_1, \ldots, s_n\}$.
The document's questions are $Q = \{q_0, q_1, \ldots, q_m\}$.
Each $q_i$ has four candidate answers $A_i = \{a_{i0}, \ldots, a_{i3}\}$.

Approach: Mathematical Formulation
The task of selecting the relevant answer to a given question can be factored as:
$$P(a \mid q, D) = P(s \mid q, D)\, P(a \mid q, s)$$
Thus the task divides into two components:
Answer Selection: select a supporting sentence $s$ given the question and the document.
Answer Generation: given the question and the supporting sentence, select the best candidate answer.

Approach: Mathematical Formulation
Objective function: regularized log-likelihood
$$L_1(\theta; D_{\text{train}}) = \log \prod_{i=1}^{|D_{\text{train}}|} \prod_{j=1}^{|Q|} P(a_{ij} \mid q_{ij}, D_i) - \lambda\, g(\theta)$$

Approach: External Answer Selection (AS)
If we have an external AS model with parameters $\theta_{AS}$, the AS process can be written as
$$s_{AS} = \operatorname*{argmax}_{s \in D} P(s \mid q; \theta_{AS})$$
Thus the external AS component can first be trained on an external AS resource and then re-fit on MCTest. To balance the trade-off between the external AS model and the domain-specific AS model, a hyper-parameter $\eta$ is introduced, and the objective function to maximize becomes:
$$L_2(\theta_{+AS}; D_{\text{train}}) = \log \prod_{i=1}^{|D_{\text{train}}|} \prod_{j=1}^{|Q|} \left[ P(a_{ij} \mid q_{ij}) + \eta\, L_{AS}(q_{ij}, D_i) \right] - \lambda\, g(\theta_{+AS})$$

Approach: External Answer Selection (AS) Model (quoted from the paper)
... We adopt a smaller neural network architecture that uses a semantically expressive recurrent neural network (RNN) to model the question and candidate supporting sentences. ... In MCTest, most sentences and questions are no more than 10 tokens long, [so] the gradient ...
Attention information from the question is added to the candidate sentence's output representation as follows:
$$s_t \propto h_t^{T} W_{qo} h_{q_n}, \qquad \tilde{h}_t = s_t h_t$$
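
A minimal numpy sketch of this attention step, assuming $h_{q_n}$ is the question RNN's last hidden state and $h_t$ are the sentence RNN's per-step outputs (the shapes and the softmax normalization are my reading of the slide, not taken verbatim from the paper):

```python
import numpy as np

def attend(H_s, h_q, W_qo):
    """Weight sentence hidden states by their relevance to the question.

    H_s : (T, d) per-timestep sentence hidden states h_t
    h_q : (d,)   last hidden state of the question RNN, h_{q_n}
    W_qo: (d, d) bilinear attention matrix
    """
    scores = H_s @ W_qo @ h_q            # s_t ∝ h_t^T W_qo h_{q_n}
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # normalize to a distribution
    return weights[:, None] * H_s        # h̃_t = s_t * h_t

# toy usage
T, d = 8, 16
rng = np.random.default_rng(0)
H_tilde = attend(rng.normal(size=(T, d)), rng.normal(size=d),
                 rng.normal(size=(d, d)))
```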

Approach: External Answer Selection (AS) Model
(Model architecture figure from the paper.)

Approach: External Answer Selection (AS) Model (quoted from the paper)
... For the question, we use the last output vector as its representation; for the candidate supporting sentence, we average the output variable $\tilde{y}_t$ at each time step to get the final sentence representation $o_s$. ...
The question-sentence pair score is obtained as:
$$\text{SCORE}(q, s) = \operatorname{cosine}(o_q, o_s)$$
To obtain the distribution over question-sentence pairs, i.e. $P(s \mid q, D; \theta_{RNN})$, the similarity scores of all q-s pairs are passed through a softmax.
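
A minimal sketch of this scoring step (a straightforward reading of the slide, not the authors' code):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def sentence_distribution(o_q, sentence_reprs):
    """P(s | q, D): softmax over the cosine scores of all q-s pairs."""
    scores = np.array([cosine(o_q, o_s) for o_s in sentence_reprs])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()
```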

Approach: External Answer Selection (AS) Model
For training, a cross-entropy loss function is used:
$$L_{AS}(q, D) = \sum_{s \in D} P(s \mid q, D; \theta_{RNN}) \log Q(s \mid q, D)$$
where $Q(s \mid q, D)$ is the supporting-sentence probability predicted by the external LSTM AS model.
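
In code, this cross-entropy between the in-domain distribution and the external model's distribution looks roughly as follows (a sketch; the variable names are mine):

```python
import numpy as np

def as_loss(p_rnn, q_external, eps=1e-12):
    """L_AS(q, D) = sum_s P(s|q,D; theta_RNN) * log Q(s|q,D).

    p_rnn     : distribution over sentences from the in-domain RNN
    q_external: distribution predicted by the external LSTM AS model
    """
    return float(np.sum(p_rnn * np.log(q_external + eps)))
```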

Approach: External Answer Selection (AS) Model
WikiQA was selected as the external training corpus because:
It matches the MCTest narrative style.
It contains not only factoid questions but also non-factoid questions.
It is relatively large (more than 20K sentences).
All named entities in a question or answer are replaced with their types (i.e. PERSON, ORGANIZATION, LOCATION).
An attention-based LSTM model is used (similar to the answer selection model explained previously).
Instead of cosine similarity, the Geometric mean of Euclidean and Sigmoid Dot product (GESD) is used to measure the similarity between two representations:
$$\text{GESD}(x, y) = \frac{1}{1 + \lVert x - y \rVert} \cdot \frac{1}{1 + \exp(-\gamma (x y^{T} + c))}$$
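
A direct implementation of GESD (the default values of $\gamma$ and $c$ below are placeholders; the slide does not give them):

```python
import numpy as np

def gesd(x, y, gamma=1.0, c=1.0):
    """Geometric mean of Euclidean and Sigmoid Dot product similarity.

    GESD(x, y) = 1/(1 + ||x - y||) * 1/(1 + exp(-gamma * (x.y + c)))
    gamma and c are hyper-parameters (values here are assumptions).
    """
    euclidean = 1.0 / (1.0 + np.linalg.norm(x - y))
    sigmoid_dot = 1.0 / (1.0 + np.exp(-gamma * (x @ y + c)))
    return euclidean * sigmoid_dot
```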

Approach: External Answer Generation Knowledge
At this stage we have the supporting-sentence probability and, consequently, the most confident supporting sentence $s$. This sentence must be combined with the question $q_i$ to get the final answer.
The problem is transformed into an RTE problem.
RTE (Recognizing Textual Entailment): determining the truth of one text fragment given another (true) text fragment.
Thus, each question-answer pair is first transformed into a statement, and then an external RTE-enhanced method is used to measure the relationship between the supporting sentence and the candidate statement.

Approach: External Answer Generation Knowledge
Question-Answer Pair to Statement Transformation
A rule-based system is designed to perform this transformation.
Stanford CoreNLP is used to get the constituency tree and the named entities of the question.
If there exists an NNP with child nodes DT+NN in the constituency parse tree, or a named entity of type PERSON, these words are transformed to a special symbol PERSON.
Additional rules convert each question based on the POS of a constituent or the dependency relation between two words.

Approach: External Answer Generation Knowledge
Question-Answer Pair to Statement Transformation
For example: if the question type is "why", the POS of the root in the dependency tree is VB, and the root is located between the question word "why" and the named entity PERSON, then all the words before PERSON are deleted and "because" + answer is appended.
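
A toy sketch of this kind of rule, operating at the string level only (the paper's system inspects real parse trees from Stanford CoreNLP, so this is illustrative, not their implementation):

```python
def why_rule(question, answer):
    """Toy 'why' rule: turn a why-question plus its answer into a
    declarative statement. Real rules check POS tags and the dependency
    tree; here we only approximate the word-level rewrite, and we do
    not re-conjugate the verb (real systems need more rules for that)."""
    tokens = question.rstrip("?").split()
    if tokens[0].lower() != "why":
        return None
    # Drop "why" and the auxiliary after it (e.g. "did"), keep the rest.
    statement = " ".join(tokens[2:])
    return f"{statement} because {answer}"

print(why_rule("Why did James go to the store?", "he was hungry"))
# -> "James go to the store because he was hungry"
```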

Approach: External Answer Generation Knowledge - RTE
Recognizing Textual Entailment
The premise and hypothesis may have no words in common, or their linguistic representations might be very different. Thus two models are used: a linguistic-feature-based model and an external RTE model.
Let the parameters learned from the external RTE resource be $\theta_{RTE}$ and those learned from the linguistic features be $\theta_1$.
The inference from the two models is combined as follows:
$$P(a \mid s, D) = \beta P(s_q \mid s; \theta_1) + (1 - \beta) P(s_q \mid s; \theta_{RTE})$$
When the entailment cannot be inferred from simple linguistic features, the external RTE model is used to judge the entailment probability. $\beta$ is not a hyper-parameter but is computed as
$$\beta = \operatorname{similarity}(s_q, s)$$
Two types of similarity features are used: constituency match and dependency match.
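
A small sketch of this adaptive interpolation (the function names are placeholders):

```python
def entailment_prob(statement, sentence,
                    similarity, p_linguistic, p_rte):
    """Interpolate a linguistic-feature model and an external RTE model.

    beta = similarity(statement, sentence): when surface similarity is
    high, trust the linguistic features; when the texts share little,
    fall back on the externally trained RTE model.
    """
    beta = similarity(statement, sentence)   # assumed to lie in [0, 1]
    return (beta * p_linguistic(statement, sentence)
            + (1.0 - beta) * p_rte(statement, sentence))
```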

Approach: External Answer Generation Knowledge
Finally, the combined objective function to be maximized is:
$$L_3(\theta_{+AS+RTE}; D_{\text{train}}) = \log \prod_{i=1}^{|D_{\text{train}}|} \prod_{j=1}^{|Q|} \left[ P(a_{ij} \mid q_{ij}) + \eta\, L_{AS}(q_{ij}, D_i) \right] - \lambda\, g(\theta_{+AS+RTE})$$
where
$$P(a_{ij} \mid q_{ij}) = P(s \mid q, D; \theta_{RNN}) \left[ \beta P(s_q \mid s; \theta_1) + (1 - \beta) P(s_q \mid s; \theta_{RTE}) \right]$$

Approach: Designing the External RTE Model
The Stanford Natural Language Inference (SNLI) dataset is used to train the RTE model.
The RTE model is similar in design to the answer selection (AS) model discussed earlier.

Experiments: Evaluation Measures
For answer selection, MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are used.
For answer generation / RTE, simple accuracy is used as the evaluation measure.
Data
MCTest is inherently divided into two parts, MC160 and MC500 (660 stories in total).
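
For reference, minimal implementations of the two ranking metrics (these are the standard definitions, not tied to the paper's evaluation script):

```python
def mean_reciprocal_rank(ranked_lists):
    """ranked_lists: per query, a list of 0/1 relevance labels in rank order."""
    return sum(1.0 / (labels.index(1) + 1) for labels in ranked_lists
               if 1 in labels) / len(ranked_lists)

def mean_average_precision(ranked_lists):
    """Average precision per query, averaged across queries."""
    ap_sum = 0.0
    for labels in ranked_lists:
        hits, precisions = 0, []
        for i, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                precisions.append(hits / i)
        ap_sum += sum(precisions) / max(hits, 1)
    return ap_sum / len(ranked_lists)

# toy usage: two queries, relevant item at rank 1 and rank 3
print(mean_reciprocal_rank([[1, 0, 0], [0, 0, 1]]))    # (1 + 1/3) / 2
print(mean_average_precision([[1, 0, 0], [0, 0, 1]]))  # same value here
```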

Experiments: Baselines
1 Sliding Window: slides a window over the document to get a bag-of-words similarity between question + hypothesized answer and the document.
2 Sliding Window + Word Distance: a word-distance score is simply subtracted from the sliding-window score.
3 Sliding Window + Word Distance + RTE: uses an off-the-shelf RTE system in addition to the above.
4 Dynamic Memory Networks.
5 A discourse parser to model the relationship between two selected sentences.
6 Extensive features, with frame-argument matching and syntax matching as similarity scores.
7 An enhancement of the sliding-window method.
8 A structural SVM that models the alignment between document sentences and the statement as a hidden variable.
A sketch of the sliding-window baseline follows.
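
A minimal sketch of the sliding-window baseline in the spirit of Richardson et al. [2] (the original additionally weights each word by inverse frequency; this simplified version just counts overlaps):

```python
def sliding_window_score(passage_tokens, query_tokens):
    """Max bag-of-words overlap between the query (question + candidate
    answer) and any window of the passage. Simplified: the MCTest
    baseline [2] also weights words by inverse counts."""
    query = set(query_tokens)
    w = len(query)
    best = 0
    for i in range(max(len(passage_tokens) - w + 1, 1)):
        window = passage_tokens[i:i + w]
        best = max(best, sum(1 for tok in window if tok in query))
    return best

def choose_answer(passage_tokens, question_tokens, candidates):
    return max(candidates,
               key=lambda a: sliding_window_score(
                   passage_tokens, question_tokens + a.split()))
```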

Results: MCTest
(Results table from the paper.)

Results: External Answer Selection Supervision
From the equation
$$L_2(\theta_{+AS}; D_{\text{train}}) = \log \prod_{i=1}^{|D_{\text{train}}|} \prod_{j=1}^{|Q|} \left[ P(a_{ij} \mid q_{ij}) + \eta\, L_{AS}(q_{ij}, D_i) \right] - \lambda\, g(\theta_{+AS})$$
(Figure from the paper.)

References I
[1] Phil Blunsom. Teaching Machines to Read and Comprehend. Lisbon Machine Learning Summer School, 2015. http://lxmls.it.pt/2015/lxmls15.pdf
[2] Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013.
[3] Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

References II
[4] Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), 2015.