Modelling word behaviour. Foundations for Natural Language Processing, Lecture 11: Syntax and parsing. Long-range dependencies. Phrasal categories


Modelling word behaviour
Foundations for Natural Language Processing, Lecture 11: Syntax and parsing
Alex Lascarides (slides from Alex Lascarides, Sharon Goldwater, Mark Steedman and Philipp Koehn)
27 February 2018

We've seen various ways to model word behaviour:
- Bag-of-words models: ignore word order entirely.
- N-gram models: capture a fixed-length history to predict word sequences.
- HMMs: also capture a fixed-length history, using latent variables.
Useful for various tasks, but a really accurate model of language needs more than a fixed-length history!

Long-range dependencies
The form of one word often depends on (agrees with) another, even when arbitrarily long material intervenes:
- Sam/Dogs sleeps/sleep soundly
- Sam, who is my cousin, sleeps soundly
- Dogs often stay at my house and sleep soundly
- Sam, the man with red hair who is my cousin, sleeps soundly
We want models that can capture these dependencies.

Phrasal categories
We may also want to capture substitutability at the phrasal level.
POS categories indicate which words are substitutable. For example, substituting adjectives:
- I saw a red cat
- I saw a former cat
- I saw a billowy cat
Phrasal categories indicate which phrases are substitutable. For example, substituting noun phrases:
- Dogs sleep soundly
- My next-door neighbours sleep soundly
- Green ideas sleep soundly

Theories of syntax
A theory of syntax should explain which sentences are well-formed (grammatical) and which are not.
Note that well-formed is distinct from meaningful. Famous example from Chomsky: Colorless green ideas sleep furiously
However, we'll see shortly that the reason we care about syntax is mainly for interpreting meaning.

Theories of syntax
We'll look at two theories of syntax that handle one or both of the phenomena above (long-range dependencies, phrasal substitutability):
- Context-free grammar (and variants): today, next class
- Dependency grammar: following class
These can be viewed as different models of language behaviour. As with other models, we will look at:
- What each model can capture, and what it cannot.
- Algorithms that provide syntactic analyses for sentences using these models (i.e., syntactic parsers).

Reminder: Context-free grammar
Two types of grammar symbols:
- terminals (t): words.
- Non-terminals (NT): phrasal categories like S, NP, VP, PP, with S being the start symbol. In practice, we sometimes distinguish pre-terminals (POS tags), a type of NT.
Rules of the form NT → β, where β is any string of NTs and ts. (Strictly speaking, that's a notation for a rule.)
There's also an abbreviated notation for sets of rules with the same LHS: NT → β1 | β2 | β3 ...
A CFG in Chomsky Normal Form only has rules of the form NTi → NTj NTk or NTi → tj.

CFG example
S → NP VP (Sentences)
NP → D N | Pro | PN (Noun phrases)
D → Pos | Art | NP 's (Determiners)
VP → Vi | Vt NP | Vp NP VP (Verb phrases)
Pro → i | we | you | he | she | him | her (Pronouns)
Pos → my | our | your | his | her (Possessive pronouns)
PN → Robin | Jo (Proper nouns)
Art → a | an | the (Articles)
N → man | duck | park | telescope (Nouns)
Vi → sleep | run | duck (Intransitive verbs)
Vt → eat | break | see | saw (Transitive verbs)
Vp → saw | heard (Verbs with NP VP args)
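As a concrete illustration (added to these notes, not part of the original slides), the toy grammar above can be written down directly in Python as a mapping from each non-terminal to its alternative right-hand sides; the parser sketches added later in these notes are driven from this table. The left-recursive D → NP 's rule is deliberately omitted here, since it would send the naive top-down recognizer sketched later into an infinite loop, a problem the lecture returns to.

    # Sketch only: one possible encoding of the lecture's toy CFG.
    # Each non-terminal maps to a list of alternative right-hand sides;
    # any symbol that never appears as a key is treated as a terminal (a word).
    TOY_GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["D", "N"], ["Pro"], ["PN"]],
        "D":   [["Pos"], ["Art"]],          # left-recursive "NP 's" alternative omitted
        "VP":  [["Vi"], ["Vt", "NP"], ["Vp", "NP", "VP"]],
        "Pro": [["i"], ["we"], ["you"], ["he"], ["she"], ["him"], ["her"]],
        "Pos": [["my"], ["our"], ["your"], ["his"], ["her"]],
        "PN":  [["Robin"], ["Jo"]],
        "Art": [["a"], ["an"], ["the"]],
        "N":   [["man"], ["duck"], ["park"], ["telescope"]],
        "Vi":  [["sleep"], ["run"], ["duck"]],
        "Vt":  [["eat"], ["break"], ["see"], ["saw"]],
        "Vp":  [["saw"], ["heard"]],
    }

    def is_terminal(symbol, grammar=TOY_GRAMMAR):
        """A symbol is a terminal iff the grammar has no rule expanding it."""
        return symbol not in grammar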

Example syntactic analysis
To show that a sentence is well-formed under this CFG, we must provide a parse. One way to do this is by drawing a tree:
(S (NP (Pro i)) (VP (Vt saw) (NP (D (Art the)) (N man))))
You can think of a tree like this as proving that its leaves are in the language generated by the grammar.

Structural ambiguity
Some sentences have more than one parse: structural ambiguity.
(S (NP (Pro he)) (VP (Vt saw) (NP (D (Pos her)) (N duck))))
(S (NP (Pro he)) (VP (Vp saw) (NP (Pro her)) (VP (Vi duck))))
Here, the structural ambiguity is caused by POS ambiguity in several of the words. (Both are types of syntactic ambiguity.)

Attachment ambiguity
Some sentences have structural ambiguity even without part-of-speech ambiguity. This is called attachment ambiguity.
It depends on where different phrases attach in the tree. Different attachments have different meanings:
- I saw the man with the telescope
- he ate the pizza on the floor
- Good boys and girls get presents from Santa
The next slides show trees for the first example: prepositional phrase (PP) attachment ambiguity. (Trees slightly abbreviated...)

Attachment ambiguity
One reading attaches the PP to the object NP (the man has the telescope):
(S (NP i) (VP (Vt saw) (NP (NP the man) (PP (P with) (NP the telescope)))))
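To make the structural ambiguity concrete, here is a small snippet (an illustration added to these notes, not part of the lecture) using NLTK; it encodes just enough of the grammar to parse "he saw her duck" and prints both trees. The ChartParser it uses anticipates the chart-parsing algorithms discussed at the end of the lecture.

    import nltk

    # A fragment of the toy grammar, just enough for "he saw her duck".
    ambiguous_grammar = nltk.CFG.fromstring("""
        S   -> NP VP
        NP  -> D N | Pro
        D   -> Pos
        VP  -> Vt NP | Vp NP VP | Vi
        Pro -> 'he' | 'her'
        Pos -> 'her'
        N   -> 'duck'
        Vt  -> 'saw'
        Vp  -> 'saw'
        Vi  -> 'duck'
    """)

    parser = nltk.ChartParser(ambiguous_grammar)
    for tree in parser.parse("he saw her duck".split()):
        print(tree)   # one tree per reading: "her duck (the bird)" vs "her ducking"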

Attachment ambiguity
The other reading attaches the PP to the VP (the telescope is the instrument of seeing):
(S (NP i) (VP (VP (Vt saw) (NP the man)) (PP (P with) (NP the telescope))))

Parsing algorithms
Goal: compute the structure(s) for an input string given a grammar.
Ultimately, we want to use the structure to interpret meaning.
As usual, ambiguity is a huge problem:
- For correctness: we need to find the right structure to get the right meaning.
- For efficiency: searching all possible structures can be very slow; we want to use parsing for large-scale language tasks (e.g., parsing used to create Google's "infoboxes").

Example: search space for top-down parser earch strategies tart with node. Choose one of many possible expansions. Each of which has children with many possible expansions... aux............... depth-first search: explore one branch of the search space at a time, as far as possible. If this branch is a dead-end, parser needs to backtrack. breadth-first search: expand all possible branches in parallel (or simulated parallel). Requires storing many incomplete parses in memory at once. best-first search: score each partial parse and pursue the highest-scoring options first. (Will get back to this when discussing statistical parsing.) etc Alex Lascarides FNLP Lecture 11 16 Alex Lascarides FNLP Lecture 11 17 Recursive Descent Parsing A recursive descent parser treats a grammar as a specification of how to break down a top-level goal (find ) into subgoals (find ). It is a top-down, depth-first parser: Blindly expand nonterminals until reaching a terminal (word). If multiple options available, choose one but store current state as a backtrack point (in a stack to ensure depth-first.) If terminal matches next input word, continue; else, backtrack. RD Parsing algorithm tart with subgoal =, then repeat until input/subgoals are empty: If first subgoal in list is a non-terminal A, then pick an expansion A B C from grammar and replace A in subgoal list with B C If first subgoal in list is a terminal w: If input is empty, backtrack. If next input word is different from w, backtrack. If next input word is w, match! i.e., consume input word w and subgoal w and move to next subgoal. If we run out of backtrack points but not input, no parse is possible. Alex Lascarides FNLP Lecture 11 18 Alex Lascarides FNLP Lecture 11 19

Recursive descent example
Consider a very simple example. The grammar contains only these rules:
S → NP VP
VP → V
NN → bit
V → bit
NP → DT NN
DT → the
NN → dog
V → dog
The input sequence is: the dog bit
Operations: Expand (E), Match (M), Backtrack to step n (Bn)

Step  Op.  Subgoals     Input
0          S            the dog bit
1     E    NP VP        the dog bit
2     E    DT NN VP     the dog bit
3     E    the NN VP    the dog bit
4     M    NN VP        dog bit
5     E    bit VP       dog bit
6     B4   NN VP        dog bit
7     E    dog VP       dog bit
8     M    VP           bit
9     E    V            bit
10    E    bit          bit
11    M

Further notes
The above sketch is actually a recognizer: it tells us whether the sentence has a valid parse, but not what the parse is. For a parser, we'd need more details to store the structure as it is built.
We only had one backtrack here, but in general things can be much worse! See Inf2a Lecture 17 for a much longer example showing the inefficiency.
If we have left-recursive rules like NP → NP PP, we get an infinite loop!

Shift-reduce Parsing
Search strategy and directionality are orthogonal properties. Shift-reduce parsing is depth-first (like RD) but bottom-up (unlike RD).
A basic shift-reduce recognizer repeatedly:
- Whenever possible, reduces one or more items from the top of the stack that match the RHS of a rule, replacing them with the LHS of the rule.
- When that's not possible, shifts an input symbol onto the stack.
Like the RD parser, it needs to maintain backtrack points.
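Again as an illustration added to these notes (not code from the lecture): a backtracking shift-reduce recognizer over the same dict-of-rules encoding. For simplicity this sketch also tries shifting as a backtracking alternative even when a reduction was possible, which is slightly more permissive than the reduce-whenever-possible strategy on the slide; it assumes the grammar has no empty productions or unary rule cycles.

    def sr_recognize(words, grammar, start="S"):
        """Bottom-up, depth-first (shift-reduce) recognizer:
        True iff `words` can be reduced to the start symbol."""
        rules = [(tuple(rhs), lhs)                   # (RHS, LHS) pairs, for matching the stack
                 for lhs, rhss in grammar.items() for rhs in rhss]

        def search(stack, remaining):
            if stack == [start] and not remaining:
                return True
            for rhs, lhs in rules:                   # reduce: an RHS matching the top of the stack
                n = len(rhs)
                if n <= len(stack) and tuple(stack[-n:]) == rhs:
                    if search(stack[:-n] + [lhs], remaining):
                        return True
            if remaining:                            # shift: move the next word onto the stack
                return search(stack + [remaining[0]], remaining[1:])
            return False

        return search([], list(words))

    # The tiny grammar from the recursive-descent example, in the same encoding:
    BIT_GRAMMAR = {
        "S":  [["NP", "VP"]],
        "VP": [["V"]],
        "NN": [["bit"], ["dog"]],
        "V":  [["bit"], ["dog"]],
        "NP": [["DT", "NN"]],
        "DT": [["the"]],
    }
    print(sr_recognize("the dog bit".split(), BIT_GRAMMAR))   # True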

Shift-reduce example
Same example grammar and sentence.
Operations: Reduce (R), Shift (S), Backtrack to step n (Bn)

Step  Op.  Stack         Input
0                        the dog bit
1     S    the           dog bit
2     R    DT            dog bit
3     S    DT dog        bit
4     R    DT V          bit
5     R    DT VP         bit
6     S    DT VP bit
7     R    DT VP V
8     R    DT VP VP
9     B6   DT VP bit
10    R    DT VP NN
11    B4   DT V          bit
12    S    DT V bit
13    R    DT V V
14    R    DT V VP
15    B3   DT dog        bit
16    R    DT NN         bit
17    R    NP            bit
...

Note that at steps 9 and 11 we skipped over backtracking to 7 and 5 respectively, as there were actually no choices to be made at those points.

Depth-first parsing in practice
Depth-first parsers are very efficient for unambiguous structures.
- Widely used to parse/compile programming languages, which are constructed to be unambiguous.
But they can be massively inefficient (exponential in sentence length) if faced with local ambiguity.
- Blind backtracking may require re-building the same structure over and over: so simple depth-first parsers are not used in NLP.
- But: if we use a probabilistic model to learn which choices to make, we can do very well in practice (coming next week...)

Breadth-first search using dynamic programming
With a CFG, you should be able to avoid re-analysing any substring, because its analysis is independent of the rest of the parse:
[he]NP [saw her duck]VP
Chart parsing algorithms exploit this fact:
- They use dynamic programming to store and reuse sub-parses, composing them into a full solution.
- So multiple potential parses are explored at once: a breadth-first strategy.

Parsing as dynamic programming
For parsing, the subproblems are analyses of substrings, memoised in a chart (aka well-formed substring table, WFST).
Chart entries are indexed by start and end positions in the sentence, and correspond to:
- either a complete constituent (sub-tree) spanning those positions (if working bottom-up),
- or a prediction about what complete constituent might be found (if working top-down).

What's in the chart?
We assume indices between each word in the sentence:
0 he 1 saw 2 her 3 duck 4
The chart is a matrix where cell [i, j] holds information about the word span from position i to position j:
- the root node of any constituent(s) spanning those words,
- pointers to its sub-constituents,
- (depending on the parsing method) predictions about what constituents might follow the substring.

Algorithms for Chart Parsing
There are many different chart parsing algorithms, including:
- the CKY algorithm, which memoises only complete constituents;
- various algorithms that also memoise predictions/partial constituents, often using mixed bottom-up and top-down approaches, e.g. the Earley algorithm described in J&M, and left-corner parsing.
We'll look at CKY parsing and statistical parsing next time...
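As a preview of CKY (again an illustrative sketch added to these notes, not the next lecture's code): for a grammar in Chomsky Normal Form, the recognizer below fills exactly the kind of chart described above, where cell (i, j) holds the set of non-terminals that can span words i..j.

    from collections import defaultdict

    def cky_recognize(words, lexical_rules, binary_rules, start="S"):
        """CKY recognizer for a CNF grammar.
        lexical_rules: list of (A, word) pairs for rules A -> word
        binary_rules:  list of (A, B, C) triples for rules A -> B C
        Returns True iff `start` can span the whole input."""
        n = len(words)
        chart = defaultdict(set)                 # chart[(i, j)] = labels spanning words[i:j]
        for i, w in enumerate(words):            # width-1 cells come from lexical rules
            for a, word in lexical_rules:
                if word == w:
                    chart[(i, i + 1)].add(a)
        for width in range(2, n + 1):            # build wider cells from narrower ones
            for i in range(n - width + 1):
                j = i + width
                for k in range(i + 1, j):        # try every split point
                    for a, b, c in binary_rules:
                        if b in chart[(i, k)] and c in chart[(k, j)]:
                            chart[(i, j)].add(a)
        return start in chart[(0, n)]

    # Hypothetical CNF fragment covering one reading of "he saw her duck":
    print(cky_recognize(
        "he saw her duck".split(),
        lexical_rules=[("NP", "he"), ("Vt", "saw"), ("D", "her"), ("N", "duck")],
        binary_rules=[("S", "NP", "VP"), ("VP", "Vt", "NP"), ("NP", "D", "N")]))   # True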