
AI Programming CS662-2013S-13 Statistical Natural Language Processing David Galles Department of Computer Science University of San Francisco

13-0: Outline n-grams; applications of n-grams; review: context-free grammars; probabilistic CFGs; information extraction.

13-1: Advantages of IR approaches Recall that IR-based approaches use the bag-of-words model. TF-IDF is used to account for word frequency and takes information about common words into account. Can deal with grammatically incorrect sentences. Gives us a degree of correctness, rather than just yes or no.

13-2: Disadvantages of IR No use of structural information, not even co-occurrence of words. Can't deal with synonyms or with resolving pronoun references. Very little semantic analysis.

13-3: Advantages of classical NLP Classical NLP approaches use a parser to generate a parse tree. This can then be used to transform the text into a form that can be reasoned with. Identifies sentence structure. Easier to do semantic interpretation. Can handle anaphora, synonyms, etc.

13-4: Disadvantages of classical NLP Doesn't take frequency into account. No way to choose between different parses for a sentence. Can't deal with incorrect grammar. Requires a lexicon. Maybe we can incorporate both statistical information and structure.

13-5: n-grams The simplest way to add structure to our IR approach is to count the occurrence not only of single tokens, but of sequences of tokens. So far, we've considered words as tokens. A token is sometimes called a "gram"; an n-gram model considers the probability that a sequence of n tokens occurs in a row. More precisely, it is the probability P(token_i | token_{i-1}, token_{i-2}, ..., token_{i-n+1}).

13-6: n-grams We could also choose to count bigrams, or 2-grams. The sentence "Every good boy deserves fudge" contains the bigrams "every good", "good boy", "boy deserves", "deserves fudge". We could continue this approach to 3-grams, 4-grams, or 5-grams. Longer n-grams give us more accurate information about content, since they include phrases rather than single words. What's the downside here?
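
As a concrete illustration (not from the slides), here is a minimal Python sketch that pulls the bigrams out of a tokenized sentence and counts them; the same function handles any n:

    from collections import Counter

    def ngrams(tokens, n):
        """Return every run of n consecutive tokens, in order."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    tokens = "every good boy deserves fudge".split()
    print(Counter(ngrams(tokens, 2)))
    # Counter({('every', 'good'): 1, ('good', 'boy'): 1,
    #          ('boy', 'deserves'): 1, ('deserves', 'fudge'): 1})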

13-7: Sampling theory We need to be able to estimate the probability of each n-gram occurring. We could do this by collecting a corpus and counting the distribution of words in the corpus. If the corpus is too small, these counts may not be reflective of an n-gram's true frequency, and many n-grams will not appear at all in our corpus. For example, if we have a lexicon of 20,000 words, there are: 20,000^2 = 400 million distinct bigrams, 20,000^3 = 8 trillion distinct trigrams, and 20,000^4 = 1.6 x 10^17 distinct 4-grams.

13-8: Application: segmentation One application of n-gram models is segmentation: splitting a sequence of characters into tokens, or finding word boundaries. Examples: speech-to-text systems; Chinese and Japanese text; genomic data; documents where some other character stands in for the space. The algorithm for doing this is called Viterbi segmentation. (Like parsing, it's a form of dynamic programming.)

13-9: Viterbi segmentation

Input: a string S, a 1-gram distribution P (assumed to give a probability, possibly smoothed, for any substring).

    def viterbi_segment(S, P):
        n = len(S)
        words = [None] * (n + 1)
        best = [0.0] * (n + 1)
        best[0] = 1.0
        for i in range(1, n + 1):
            for j in range(0, i):
                word = S[j:i]                # the substring from j to i
                w = len(word)
                if P[word] * best[i - w] >= best[i]:
                    best[i] = P[word] * best[i - w]
                    words[i] = word
        # now get the best words, walking backwards from the end
        result = []
        i = n
        while i > 0:
            result.insert(0, words[i])       # push words[i] onto the front of result
            i = i - len(words[i])
        return result, best[n]

13-10: Example Input: cattlefish. P(cat) = 0.1, P(cattle) = 0.3, P(fish) = 0.1; all other 1-grams are 0.001.
best[0] = 1.0
i=1, j=0: word = c, w=1; 0.001 * 1.0 >= 0.0, so best[1] = 0.001, words[1] = c
i=2, j=0: word = ca, w=2; 0.001 * 1.0 >= 0.0, so best[2] = 0.001, words[2] = ca
i=2, j=1: word = a, w=1; 0.001 * 0.001 < 0.001

13-11: Example
i=3, j=0: word = cat, w=3; 0.1 * 1.0 > 0.0, so best[3] = 0.1, words[3] = cat
i=3, j=1: word = at, w=2; 0.001 * 0.001 < 0.1
i=3, j=2: word = t, w=1; 0.001 * 0.001 < 0.1

13-12: Example
i=4, j=0: word = catt, w=4; 0.001 * 1.0 > 0.0, so best[4] = 0.001, words[4] = catt
i=4, j=1: word = att, w=3; 0.001 * 0.001 < 0.001
i=4, j=2: word = tt, w=2; 0.001 * 0.001 < 0.001
i=4, j=3: word = t, w=1; 0.001 * 0.1 < 0.001

13-13: Example
i=5, j=0: word = cattl, w=5; 0.001 * 1.0 > 0.0, so best[5] = 0.001, words[5] = cattl
i=5, j=1: word = attl, w=4; 0.001 * 0.001 < 0.001
i=5, j=2: word = ttl, w=3; 0.001 * 0.001 < 0.001
i=5, j=3: word = tl, w=2; 0.001 * 0.1 < 0.001
i=5, j=4: word = l, w=1; 0.001 * 0.001 < 0.001

13-14: Example
i=6, j=0: word = cattle, w=6; 0.3 * 1.0 > 0.0, so best[6] = 0.3, words[6] = cattle
etc...

13-15: Example
best: [1.0, 0.001, 0.001, 0.1, 0.001, 0.001, 0.3, 0.001, 0.001, 0.001, 0.03]
words: [-, c, ca, cat, catt, cattl, cattle, cattlef, cattlefi, cattlefis, fish]
i = 10: push fish onto result, i = i - 4 = 6
i = 6: push cattle onto result, i = i - 6 = 0
Result: [cattle, fish], with probability best[10] = 0.03
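
As a quick sanity check, here is a sketch of running the viterbi_segment function from 13-9 on this example, using a defaultdict to stand in for "all other 1-grams are 0.001":

    from collections import defaultdict

    P = defaultdict(lambda: 0.001)                  # all other 1-grams are 0.001
    P.update({"cat": 0.1, "cattle": 0.3, "fish": 0.1})

    words, prob = viterbi_segment("cattlefish", P)
    print(words, prob)                              # ['cattle', 'fish'] 0.03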

13-16: What's going on here? The Viterbi algorithm is searching through the space of all combinations of substrings. States with high probability mass are pursued. The best array is used to prevent the algorithm from repeatedly expanding portions of the search space. This is an example of dynamic programming (like chart parsing).

13-17: Application: language detection n-grams have also been successfully used to detect the language a document is in. Approach: consider letters as tokens, rather than words. Gather a corpus in a variety of different languages (Wikipedia works well here). Process the documents, and count all two-grams. Estimate the probability of each two-gram in language L as its count divided by the total number of two-grams; call this P_L. Assumption: different languages have characteristic two-grams.

13-18: Application: language detection To classify a document by language: find all two-grams in the document; call this set T = {t_1, ..., t_n}. For each language L, the likelihood that the document is of language L is: P_L(t_1) * P_L(t_2) * ... * P_L(t_n). The language with the highest likelihood is the most probable language. (This is a form of Bayesian inference; we'll spend more time on this later in the semester.)
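
A minimal sketch of this classifier (the tiny training strings are made up; a real system would estimate P_L from a large corpus such as Wikipedia, and summing log probabilities avoids numerical underflow):

    import math
    from collections import Counter

    def char_bigrams(text):
        text = text.lower()
        return [text[i:i + 2] for i in range(len(text) - 1)]

    def train(corpus_text):
        # Estimate P_L: count each two-gram and divide by the total number of two-grams.
        counts = Counter(char_bigrams(corpus_text))
        total = sum(counts.values())
        return {bg: c / total for bg, c in counts.items()}

    def log_likelihood(document, P_L, unseen=1e-6):
        # log(P_L(t1) * P_L(t2) * ... * P_L(tn)), smoothing bigrams unseen in training.
        return sum(math.log(P_L.get(bg, unseen)) for bg in char_bigrams(document))

    models = {
        "english": train("the cat sat on the mat and the dog chased the cat"),
        "spanish": train("el gato se sento en la alfombra y el perro persiguio al gato"),
    }
    doc = "the dog and the cat"
    print(max(models, key=lambda L: log_likelihood(doc, models[L])))    # english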

13-19: Going further n-grams and segmentation provide some interesting ideas: we can combine structure with statistical knowledge; probabilities can be used to help guide search; probabilities can help a parser choose between different outcomes. But no structure is used apart from collocation. Maybe we can apply these ideas to grammars.

13-20: Reminder: CFGs Recall context-free grammars from the last lecture: a single non-terminal on the left, anything on the right. S -> NP VP; VP -> Verb | Verb PP; Verb -> run | sleep. We can construct sentences that have more than one legal parse: "Squad helps dog bite victim". CFGs don't give us any information about which parse to select.

13-21: Probabilistic CFGs A probabilistic CFG is just a regular CFG with probabilities attached to the right-hand sides of rules. For each non-terminal, the probabilities of its rules have to sum up to 1. They indicate how often a particular non-terminal derives that right-hand side.

13-22: Example
S -> NP VP (1.0)
PP -> P NP (1.0)
VP -> V NP (0.7)
VP -> VP PP (0.3)
P -> with (1.0)
V -> saw (1.0)
NP -> NP PP (0.4)
NP -> astronomers (0.1)
NP -> stars (0.18)
NP -> saw (0.04)
NP -> ears (0.18)
NP -> telescopes (0.1)

13-23: Disambiguation The probability of a parse tree is just the product of the probabilities of the rules used to derive it. This lets us compare two parses and say which is more likely.

13-24: Disambiguation Parse 1 (the PP attaches to the object NP):
[S [NP astronomers] [VP [V saw] [NP [NP stars] [PP [P with] [NP ears]]]]]
P1 = 1.0 * 0.1 * 0.7 * 1.0 * 0.4 * 0.18 * 1.0 * 1.0 * 0.18 = 0.0009072
Parse 2 (the PP attaches to the VP):
[S [NP astronomers] [VP [VP [V saw] [NP stars]] [PP [P with] [NP ears]]]]
P2 = 1.0 * 0.1 * 0.3 * 0.7 * 1.0 * 0.18 * 1.0 * 1.0 * 0.18 = 0.00068
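
A small sketch (not part of the slides) that scores these two parses under the grammar from 13-22; each tree is written as nested tuples of the form (label, child, child):

    # Rule probabilities from 13-22, keyed by (left-hand side, right-hand-side symbols).
    RULE_PROB = {
        ("S", ("NP", "VP")): 1.0,   ("PP", ("P", "NP")): 1.0,
        ("VP", ("V", "NP")): 0.7,   ("VP", ("VP", "PP")): 0.3,
        ("P", ("with",)): 1.0,      ("V", ("saw",)): 1.0,
        ("NP", ("NP", "PP")): 0.4,  ("NP", ("astronomers",)): 0.1,
        ("NP", ("stars",)): 0.18,   ("NP", ("saw",)): 0.04,
        ("NP", ("ears",)): 0.18,    ("NP", ("telescopes",)): 0.1,
    }

    def tree_prob(tree):
        # Probability of a parse = product of the probabilities of the rules it uses.
        if isinstance(tree, str):        # a leaf word contributes no rule of its own
            return 1.0
        label, children = tree[0], tree[1:]
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        p = RULE_PROB[(label, rhs)]
        for child in children:
            p *= tree_prob(child)
        return p

    parse1 = ("S", ("NP", "astronomers"),
                   ("VP", ("V", "saw"),
                          ("NP", ("NP", "stars"),
                                 ("PP", ("P", "with"), ("NP", "ears")))))
    parse2 = ("S", ("NP", "astronomers"),
                   ("VP", ("VP", ("V", "saw"), ("NP", "stars")),
                          ("PP", ("P", "with"), ("NP", "ears"))))

    print(tree_prob(parse1))    # about 0.0009072
    print(tree_prob(parse2))    # about 0.0006804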

13-25: Faster parsing We can also use probabilities to speed up parsing. Recall that both top-down and chart parsing proceed in a primarily depth-first fashion: they choose a rule to apply, and based on its right-hand side, they choose another rule. Probabilities can be used to better select which rule to apply, or which branch of the search tree to follow. This is a form of best-first search.

13-26: Information Extraction An increasingly common application of parsing is information extraction. This is the process of creating structured information (database or knowledge base entries) from unstructured text.

13-27: Information Extraction Example: suppose we want to build a price-comparison agent that can visit sites on the web and find the best deals on flatscreen TVs. Or suppose we want to build a database about video games. We might do this by hand, or we could write a program that parses Wikipedia pages and inserts knowledge such as madeby(Blizzard, WorldOfWarcraft) into a knowledge base.

13-28: Extracting specific information A program that fetches HTML pages and extracts specific information is called a scraper. Simple scrapers can be built with regular expressions. For example, prices typically have a dollar sign, some digits, a period, and two digits: \$[0-9]+\.[0-9]{2}. This approach will work, but it has several limitations: it can only handle simple extractions, and it is brittle and page-specific.
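
For instance, a minimal Python sketch of this kind of scraping (the HTML snippet is made up; note that the dollar sign and the period must be escaped in the pattern):

    import re

    html = '<span class="price">$1299.99</span> was <b>$1499.99</b>'
    price_pattern = re.compile(r"\$[0-9]+\.[0-9]{2}")
    print(price_pattern.findall(html))      # ['$1299.99', '$1499.99']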

13-29: Steps in information extraction A more robust system will need to take advantage of sentence structure. A typical system will have the following components: sentence segmenter, tokenizer, part-of-speech tagger, chunker, named-entity detector, relation extractor.
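
One way to realize most of this pipeline is with an off-the-shelf toolkit. A sketch using NLTK (the sentence is made up, and NLTK's pretrained models must be downloaded separately):

    import nltk

    text = "Barack Obama visited the University of San Francisco. He spoke on Tuesday."
    for sent in nltk.sent_tokenize(text):       # sentence segmenter
        tokens = nltk.word_tokenize(sent)       # tokenizer
        tagged = nltk.pos_tag(tokens)           # part-of-speech tagger
        tree = nltk.ne_chunk(tagged)            # chunker / named-entity detector
        print(tree)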

13-30: POS tagging There are a number of approaches to part-of-speech tagging. We can write rules based on a word's structure ("-ed" suggests a past-tense verb). We can learn rules based on labeled data; always picking the most common tag is a ZeroR-style baseline. We can use contextual information (n-grams). And we can combine them, and learn more complex rules.
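
A sketch of combining these ideas using NLTK's simple backoff taggers (the suffix rules are illustrative only, and the default tag NN plays the role of the most-common-tag ZeroR baseline):

    import nltk

    baseline = nltk.DefaultTagger("NN")         # most common tag, ZeroR-style

    rule_tagger = nltk.RegexpTagger(
        [(r".*ed$", "VBD"),                     # "-ed" suggests a past-tense verb
         (r".*ing$", "VBG"),                    # gerunds
         (r".*ly$", "RB"),                      # adverbs
         (r"[0-9]+$", "CD")],                   # numbers
        backoff=baseline)

    print(rule_tagger.tag("the dogs barked loudly".split()))
    # [('the', 'NN'), ('dogs', 'NN'), ('barked', 'VBD'), ('loudly', 'RB')]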

13-31: Chunking A chunk is a larger part of a sentence, such as a noun phrase. This will help us identify entities and relations. We can identify chunks with a chunk grammar, e.g. NP: <DT>?<JJ>*<NN>. Once we've tagged words with parts of speech, we use a parser to identify chunks. This can be done top-down or bottom-up.
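
For example, NLTK's RegexpParser can apply such a chunk grammar to tagged tokens (a sketch; NLTK's notation wraps the pattern in braces, and the tagged sentence here is supplied by hand):

    import nltk

    grammar = "NP: {<DT>?<JJ>*<NN>}"        # optional determiner, any adjectives, a noun
    chunker = nltk.RegexpParser(grammar)

    tagged = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"),
              ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
    print(chunker.parse(tagged))
    # (S (NP the/DT little/JJ yellow/JJ dog/NN) barked/VBD at/IN (NP the/DT cat/NN))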

13-32: Named Entities These are noun phrases that refer to specific individuals, places, or organizations. How can we identify them, and determine what type of entity they are? E.g. University of San Francisco: NP, Organization; Barack Obama: NP, Person. Maybe we have a gazetteer (lookup table), but this is very brittle. We can also build a classifier to label entities. Input: a token with a part-of-speech label. Output: whether it is a Named Entity, and its type.

13-33: Relation extraction Once we have Named Entities, we would like to know relations between them, e.g. In(USF, San Francisco). We can write a set of augmented regular expressions to do this. <ORG> (.+) VP in (.+) <CITY> will match <organization> verb-phrase "in" blah <city>. There will be false positives; getting this highly accurate takes some care. We can trade off precision and recall here: more restrictive regular expressions might miss some relations, but avoid adding false positives.
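
A rough sketch of this kind of pattern over flat text (simplified: entities are assumed to have been marked inline by the earlier pipeline steps, and the sentence is made up):

    import re

    sentence = "[ORG The University of San Francisco] is located in [CITY San Francisco]."

    # <ORG> ... "in" ... <CITY>, a crude stand-in for the augmented regular expression
    pattern = re.compile(r"\[ORG ([^\]]+)\].* in \[CITY ([^\]]+)\]")
    match = pattern.search(sentence)
    if match:
        org, city = match.group(1), match.group(2)
        print("In(%s, %s)" % (org, city))   # In(The University of San Francisco, San Francisco)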

13-34: Summary We can combine the best of probabilistic and classical NLP approaches. n-grams take advantage of co-occurrence information (segmenting, language detection). CFGs can be augmented with probabilities, which speeds parsing and deals with ambiguity. Information extraction is an increasingly common application. Still no discussion of semantics; just increasingly complex syntax processing.