
Natural Language Processing with Deep Learning CS224N/Ling284 Lecture 8: Recurrent Neural Networks and Language Models Abigail See

Announcements. Assignment 1: grades will be released after class. Assignment 2: coding session next week on Monday; details on Piazza. Midterm logistics: fill out the form on Piazza if you can't do the main midterm, have special requirements, or have another special case.

Announcements. Default Final Project (PA4) releases late tonight. Read the handout, look at the code, and decide which project you want to do. You may not understand all the technical parts yet, but you'll get an overview. You don't yet have the Azure resources you need to run the code. Project proposal due next week (Thurs Feb 8); details released later today. Everyone submits their teams; custom final project teams also describe their project.

Call for participation

Overview. Today we will introduce a new NLP task, Language Modeling, which motivates us to introduce a new family of neural networks, Recurrent Neural Networks (RNNs): THE most important idea for the rest of the class!

Language Modeling. Language Modeling is the task of predicting what word comes next: "the students opened their ___" (books? minds? laptops? exams?). More formally: given a sequence of words x^(1), ..., x^(t), compute the probability distribution of the next word x^(t+1): P(x^(t+1) | x^(t), ..., x^(1)), where x^(t+1) can be any word in the vocabulary V. A system that does this is called a Language Model.
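To make the definition concrete, a next-word predictor also assigns a probability to any whole piece of text via the chain rule of probability (a standard identity, written out here; it is not copied from the slide above):

\[
P\left(x^{(1)}, \dots, x^{(T)}\right) = \prod_{t=1}^{T} P\left(x^{(t)} \mid x^{(t-1)}, \dots, x^{(1)}\right)
\]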

You use Language Models every day!

n-gram Language Models. "the students opened their ___" Question: how do we learn a Language Model? Answer (pre-Deep Learning): learn an n-gram Language Model! Definition: an n-gram is a chunk of n consecutive words. Unigrams: "the", "students", "opened", "their"; bigrams: "the students", "students opened", "opened their"; trigrams: "the students opened", "students opened their"; 4-grams: "the students opened their". Idea: collect statistics about how frequent different n-grams are, and use these to predict the next word.

n-gram Language Models. First we make a simplifying assumption: x^(t+1) depends only on the preceding n-1 words, so P(x^(t+1) | x^(t), ..., x^(1)) ≈ P(x^(t+1) | x^(t), ..., x^(t-n+2)) (assumption). This is the probability of an n-gram divided by the probability of an (n-1)-gram: P(x^(t+1), x^(t), ..., x^(t-n+2)) / P(x^(t), ..., x^(t-n+2)) (definition of conditional probability). Question: how do we get these n-gram and (n-1)-gram probabilities? Answer: by counting them in some large corpus of text, i.e. ≈ count(x^(t+1), x^(t), ..., x^(t-n+2)) / count(x^(t), ..., x^(t-n+2)) (statistical approximation).

n-gram Language Models: Example. Suppose we are learning a 4-gram Language Model: "as the proctor started the clock, the students opened their ___". We discard all but the last 3 words and condition on "students opened their". In the corpus: "students opened their" occurred 1000 times; "students opened their books" occurred 400 times, so P(books | students opened their) = 0.4; "students opened their exams" occurred 100 times, so P(exams | students opened their) = 0.1. Should we have discarded the "proctor" context?
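A minimal sketch (my own toy example, not course code) of learning these probabilities by counting, as described above:

```python
# Count n-grams in a corpus and estimate P(word | context) by relative frequency.
from collections import Counter, defaultdict

def train_ngram_lm(tokens, n=4):
    """Count n-grams, indexed by their (n-1)-word context."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i : i + n - 1])   # the preceding n-1 words
        counts[context][tokens[i + n - 1]] += 1  # the word that followed
    return counts

def prob(counts, context, word):
    """P(word | context) = count(context, word) / count(context)."""
    total = sum(counts[context].values())
    return counts[context][word] / total if total else 0.0

corpus = "the students opened their books because the students opened their exams".split()
lm = train_ngram_lm(corpus, n=4)
print(prob(lm, ("students", "opened", "their"), "books"))  # 0.5 on this toy corpus
```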

Problems with n-gram Language Models. Sparsity Problem 1: what if "students opened their w" never occurred in the data? Then w has probability 0! (Partial) solution: add a small δ to the count for every w in the vocabulary. This is called smoothing. Sparsity Problem 2: what if "students opened their" never occurred in the data? Then we can't calculate the probability for any w! (Partial) solution: just condition on "opened their" instead. This is called backoff. Note: increasing n makes sparsity problems worse. Typically we can't have n bigger than 5.
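A rough sketch (my own, building on the toy counting code above) of what add-δ smoothing and one step of backoff could look like; the function names and δ value are illustrative:

```python
def smoothed_prob(counts, context, word, vocab, delta=0.01):
    """Add-δ smoothing: every word in the vocabulary gets a small pseudo-count."""
    total = sum(counts[context].values()) + delta * len(vocab)
    return (counts[context][word] + delta) / total

def backoff_prob(counts_n, counts_n_minus_1, context, word, vocab, delta=0.01):
    """If the full (n-1)-word context was never seen, back off to a shorter context."""
    if sum(counts_n[context].values()) == 0:
        return smoothed_prob(counts_n_minus_1, context[1:], word, vocab, delta)
    return smoothed_prob(counts_n, context, word, vocab, delta)
```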

Problems with n-gram Language Models. Storage: we need to store a count for all possible n-grams, so model size is O(exp(n)). Increasing n makes the model size huge!

n-gram Language Models in practice. You can build a simple trigram Language Model over a 1.7 million word corpus (Reuters business and financial news) in a few seconds on your laptop*. Conditioning on "today the", we get a probability distribution over the next word: company 0.153, bank 0.153, price 0.077, italian 0.039, emirate 0.039, ... Sparsity problem: not much granularity in the probability distribution. Otherwise, it seems reasonable! * Try for yourself: https://nlpforhackers.io/language-models/

Generating text with an n-gram Language Model. You can also use a Language Model to generate text: condition on the current context, get the probability distribution over the next word, sample a word from it, and repeat. Conditioning on "today the" gives (company 0.153, bank 0.153, price 0.077, italian 0.039, emirate 0.039, ...); suppose we sample "price".

Conditioning on "the price" gives (of 0.308, for 0.050, it 0.046, to 0.046, is 0.031, ...); we sample "of".

Conditioning on "price of" gives (the 0.072, 18 0.043, oil 0.043, its 0.036, gold 0.018, ...); we sample "gold".

So far we have generated "today the price of gold".

Generating text with an n-gram Language Model. Continuing, we get: "today the price of gold per ton, while production of shoe lasts and shoe industry, the bank intervened just after it considered and rejected an imf demand to rebuild depleted european stocks, sept 30 end primary 76 cts a share." Incoherent! We need to consider more than 3 words at a time if we want to generate good text. But increasing n worsens the sparsity problem and exponentially increases the model size.
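A minimal sketch (my own, reusing the toy counts `lm` from the earlier snippet) of this generate-by-sampling loop:

```python
import random

def generate(counts, seed, n=4, num_words=20):
    """Repeatedly sample the next word from the n-gram distribution."""
    words = list(seed)
    for _ in range(num_words):
        context = tuple(words[-(n - 1):])
        dist = counts[context]
        if not dist:                 # unseen context: stop (a real model would back off)
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(lm, ("the", "students", "opened")))
```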

How to build a neural Language Model? Recall the Language Modeling task. Input: a sequence of words x^(1), ..., x^(t). Output: the probability distribution of the next word, P(x^(t+1) | x^(t), ..., x^(1)). How about a window-based neural model? We saw this applied to Named Entity Recognition in Lecture 4.

A fixed-window neural Language Model. Example: "as the proctor started the clock, the students opened their ___". We discard the earlier context and keep only a fixed window of the last few words, "the students opened their".

A fixed-window neural Language Model (diagram): the window of input words "the students opened their" (as one-hot vectors) is mapped to word embeddings, the embeddings are concatenated and fed through a hidden layer, and the output layer produces an output distribution over the vocabulary for the next word (e.g. "books", "laptops", ..., "zoo").

A fixed-window neural Language Model. Improvements over the n-gram LM: no sparsity problem; model size is O(n), not O(exp(n)). Remaining problems: the fixed window is too small; enlarging the window enlarges the weight matrix W; the window can never be large enough! Also, each word position x^(i) uses different rows of W, so we don't share weights across the window. We need a neural architecture that can process input of any length.
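A minimal sketch (my own, in PyTorch) of the fixed-window neural LM described above; the layer sizes and names are illustrative, not taken from the slides:

```python
import torch
import torch.nn as nn

class FixedWindowLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, window=4, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(window * embed_dim, hidden_dim)  # W: grows with the window
        self.out = nn.Linear(hidden_dim, vocab_size)             # U

    def forward(self, window_ids):              # (batch, window) word ids
        e = self.embed(window_ids)              # (batch, window, embed_dim)
        e = e.flatten(start_dim=1)              # concatenated word embeddings
        h = torch.tanh(self.hidden(e))          # hidden layer
        return self.out(h)                      # logits over the next word

model = FixedWindowLM(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (1, 4)))  # e.g. "the students opened their"
probs = torch.softmax(logits, dim=-1)             # output distribution
```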

Recurrent Neural Networks (RNNs): a family of neural architectures. Core idea: apply the same weights repeatedly. (Diagram: an input sequence of any length is processed step by step into a sequence of hidden states, which can optionally produce an output at each step.)

An RNN Language Model (diagram): the input words "the students opened their" (one-hot vectors) are mapped to word embeddings; at each step the RNN computes a hidden state from the current word embedding and the previous hidden state, starting from the initial hidden state h^(0); the final hidden state produces the output distribution over the next word (e.g. "books", "laptops", ..., "zoo"). Note: this input sequence could be much longer, but this slide doesn't have space!
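For reference, the usual equations of a vanilla RNN Language Model like the one in this diagram (standard notation; the symbols are not reproduced from the slide itself):

\[
\begin{aligned}
e^{(t)} &= E\, x^{(t)} && \text{word embedding of the one-hot input } x^{(t)} \\
h^{(t)} &= \sigma\!\left(W_h h^{(t-1)} + W_e e^{(t)} + b_1\right) && \text{hidden state, with } h^{(0)} \text{ the initial hidden state} \\
\hat{y}^{(t)} &= \operatorname{softmax}\!\left(U h^{(t)} + b_2\right) && \text{output distribution over the vocabulary}
\end{aligned}
\]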

An RNN Language Model. RNN advantages: can process input of any length; model size doesn't increase for longer input; computation for step t can (in theory) use information from many steps back; the weights are shared across timesteps, so representations are shared. RNN disadvantages: recurrent computation is slow; in practice, it is difficult to access information from many steps back. More on these next week.
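A minimal sketch (my own, in PyTorch) of such an RNN Language Model; sizes and names are illustrative, not from the course:

```python
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)  # same weights at every step
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, h0=None):     # (batch, seq_len) word ids
        e = self.embed(word_ids)              # word embeddings
        h, h_last = self.rnn(e, h0)           # hidden state at every step
        return self.out(h), h_last            # next-word logits at every step

model = RNNLM(vocab_size=10_000)
logits, _ = model(torch.randint(0, 10_000, (1, 4)))     # "the students opened their"
next_word_probs = torch.softmax(logits[:, -1], dim=-1)  # distribution for the next word
```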

Training an RNN Language Model. Get a big corpus of text, which is a sequence of words x^(1), ..., x^(T). Feed it into the RNN-LM and compute the output distribution ŷ^(t) for every step t, i.e. predict the probability distribution of every word given the words so far. The loss function on step t is the usual cross-entropy between our predicted probability distribution ŷ^(t) and the true next word y^(t) = x^(t+1): J^(t)(θ) = CE(y^(t), ŷ^(t)) = − Σ_{w∈V} y_w^(t) log ŷ_w^(t) = − log ŷ^(t)_{x^(t+1)}. Average this over all steps to get the overall loss for the entire training set: J(θ) = (1/T) Σ_{t=1}^{T} J^(t)(θ).

Training an RNN Language Model (worked example on the corpus "the students opened their exams"): on step 1 the loss J^(1)(θ) is the negative log probability of "students"; on step 2, of "opened"; on step 3, of "their"; on step 4, of "exams"; and so on. The overall loss is the average J(θ) = (1/T) (J^(1)(θ) + J^(2)(θ) + J^(3)(θ) + J^(4)(θ) + ...).

Training an RNN Language Model. However: computing the loss and gradients across the entire corpus is too expensive! Recall: Stochastic Gradient Descent allows us to compute the loss and gradients for a small chunk of data and then update. → In practice, we take x^(1), ..., x^(T) to be a sentence: compute the loss for a sentence (actually, usually a batch of sentences), compute gradients, update the weights, and repeat.
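A minimal sketch (my own) of that SGD loop, reusing the RNNLM module from the earlier sketch; batching and data loading are simplified:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                  # cross-entropy over the vocabulary
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(batch):                             # batch: (batch_size, seq_len) word ids
    inputs, targets = batch[:, :-1], batch[:, 1:]  # the target at each step is the next word
    logits, _ = model(inputs)
    loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                # backpropagation through time
    optimizer.step()
    return loss.item()

loss = train_step(torch.randint(0, 10_000, (32, 20)))  # one batch of 32 "sentences"
```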

Backpropagation for RNNs. Question: what's the derivative of the loss J^(t)(θ) w.r.t. the repeated weight matrix W_h? Answer: the gradient w.r.t. a repeated weight is the sum of the gradients w.r.t. each time it appears: ∂J^(t)/∂W_h = Σ_{i=1}^{t} ∂J^(t)/∂W_h|_(i). Why?

Multivariable Chain Rule: for a function f(x, y) where x(t) and y(t) both depend on t, d/dt f(x(t), y(t)) = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt). Source: https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/differentiating-vector-valued-functions/a/multivariable-chain-rule-simple-version

Backpropagation for RNNs: proof sketch. In our example, the repeated weight W_h is used at every step i. Applying the multivariable chain rule, ∂J^(t)/∂W_h = Σ_{i=1}^{t} (∂J^(t)/∂W_h|_(i)) · (∂W_h|_(i)/∂W_h), and ∂W_h|_(i)/∂W_h = 1, so the derivative is just the sum of the gradients at each time W_h appears. Source: https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/differentiating-vector-valued-functions/a/multivariable-chain-rule-simple-version

Backpropagation for RNNs. Question: how do we calculate this sum? Answer: backpropagate over timesteps i = t, ..., 0, accumulating gradients as you go. This algorithm is called "backpropagation through time".

Generating text with an RNN Language Model. Just like an n-gram Language Model, you can use an RNN Language Model to generate text by repeated sampling; the sampled output becomes the next step's input. (Diagram: starting from "my", the model samples "favorite", feeds it back in, samples "season", and so on, producing "my favorite season is spring".)
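A minimal sketch (my own) of this repeated-sampling loop with the RNNLM defined earlier; the id-to-word vocabulary is assumed to exist and is not shown:

```python
import torch

@torch.no_grad()
def generate(model, start_id, num_words=5):
    word_ids = [start_id]
    h = None
    for _ in range(num_words):
        logits, h = model(torch.tensor([[word_ids[-1]]]), h)  # feed the last sample back in
        probs = torch.softmax(logits[0, -1], dim=-1)          # distribution over the next word
        word_ids.append(torch.multinomial(probs, num_samples=1).item())
    return word_ids

ids = generate(model, start_id=0)  # decode the ids to words to get text like "my favorite season is spring"
```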

Generating text with an RNN Language Model. Let's have some fun! You can train an RNN-LM on any kind of text, then generate text in that style. RNN-LM trained on Obama speeches: Source: https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0

Generating text with an RNN Language Model. Let's have some fun! You can train an RNN-LM on any kind of text, then generate text in that style. RNN-LM trained on Harry Potter: Source: https://medium.com/deep-writing/harry-potter-written-by-artificial-intelligence-8a9431803da6

Generating text with an RNN Language Model. Let's have some fun! You can train an RNN-LM on any kind of text, then generate text in that style. RNN-LM trained on Seinfeld scripts: Source: https://www.avclub.com/a-bunch-of-comedy-writers-teamed-up-with-a-computer-to-1818633242

Generating text with an RNN Language Model. Let's have some fun! You can train an RNN-LM on any kind of text, then generate text in that style. (Character-level) RNN-LM trained on paint colors: Source: http://aiweirdness.com/post/160776374467/new-paint-colors-invented-by-neural-network

Evaluating Language Models. The traditional evaluation metric for Language Models is perplexity: the inverse probability of the dataset, normalized by the number of words, perplexity = ∏_{t=1}^{T} (1 / P_LM(x^(t+1) | x^(t), ..., x^(1)))^(1/T). Lower is better! In Assignment 2 you will show that minimizing perplexity and minimizing the loss function are equivalent.
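As a quick check of that equivalence (standard algebra, not reproduced from the slide), the perplexity can be rewritten in terms of the cross-entropy loss J(θ) defined earlier:

\[
\text{perplexity}
= \prod_{t=1}^{T} \left( \frac{1}{P_{\mathrm{LM}}\!\left(x^{(t+1)} \mid x^{(t)}, \dots, x^{(1)}\right)} \right)^{1/T}
= \exp\!\left( \frac{1}{T} \sum_{t=1}^{T} -\log \hat{y}^{(t)}_{x^{(t+1)}} \right)
= \exp\!\big(J(\theta)\big),
\]

so minimizing the average cross-entropy loss is the same as minimizing perplexity.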

RNNs have greatly improved perplexity. (Table on the slide: moving from an n-gram model to increasingly complex RNNs, perplexity improves; lower is better.) Source: https://research.fb.com/building-an-efficient-neural-language-model-over-a-billion-words/

Why should we care about Language Modeling? Language Modeling is a subcomponent of other NLP systems: speech recognition (use an LM to generate the transcription, conditioned on audio), Machine Translation (use an LM to generate the translation, conditioned on the original text), summarization (use an LM to generate the summary, conditioned on the original text). These systems are called conditional Language Models. Language Modeling is also a benchmark task that helps us measure our progress on understanding language.

Recap. Language Model: a system that predicts the next word. Recurrent Neural Network: a family of neural networks that take sequential input of any length, apply the same weights on each step, and can optionally produce an output on each step. Recurrent Neural Network Language Model: we've shown that RNNs are a great way to build an LM. But RNNs are useful for much more!

RNNs can be used for tagging, e.g. part-of-speech tagging and named entity recognition: the RNN outputs a tag for each input word, e.g. the/DT startled/VBN cat/NN knocked/VBN over/IN the/DT vase/NN.
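A minimal sketch (my own) of using an RNN for tagging, with one output per input word; the sizes and the tag-set size are illustrative:

```python
import torch
import torch.nn as nn

num_tags = 45                                  # e.g. the size of a POS tag set
rnn = nn.RNN(input_size=100, hidden_size=200, batch_first=True)
tagger = nn.Linear(200, num_tags)

embeddings = torch.randn(1, 7, 100)            # embeddings of "the startled cat knocked over the vase"
hidden_states, _ = rnn(embeddings)
tag_logits = tagger(hidden_states)             # (1, 7, num_tags): one distribution per word
predicted_tags = tag_logits.argmax(dim=-1)     # one tag id per word
```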

RNNs can be used for sentence classification, e.g. sentiment classification: encode the sentence ("overall I enjoyed the movie a lot") with an RNN and classify the sentence encoding (here, as positive). How do we compute the sentence encoding? Basic way: use the final hidden state. Usually better: take an element-wise max or mean of all the hidden states.
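A minimal sketch (my own) of those two sentence-encoding options; sizes are illustrative:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=100, hidden_size=200, batch_first=True)
embeddings = torch.randn(1, 7, 100)            # "overall I enjoyed the movie a lot"
hidden_states, h_last = rnn(embeddings)        # (1, 7, 200), (1, 1, 200)

enc_final = h_last[-1]                         # basic way: the final hidden state
enc_max = hidden_states.max(dim=1).values      # usually better: element-wise max ...
enc_mean = hidden_states.mean(dim=1)           # ... or element-wise mean of all hidden states

classifier = nn.Linear(200, 2)                 # e.g. positive / negative sentiment
logits = classifier(enc_max)
```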

RNNs can be used to generate text, e.g. in speech recognition, machine translation, and summarization. (Diagram: conditioned on the input, the RNN generates "what's the weather" one word at a time, starting from a <START> token and feeding each generated word back in as the next input.) Remember: these are called conditional language models. We'll see Machine Translation in much more detail later.

RNNs can be used as an encoder module, e.g. in question answering and machine translation. Example (question answering): Context: "Ludwig van Beethoven was a German composer and pianist. A crucial figure ..."; Question: "what nationality was Beethoven?"; Answer: "German". Question encoding = element-wise max of the RNN's hidden states. Here the RNN acts as an encoder for the question; the encoder is part of a larger neural system.

A note on terminology. The RNN described in this lecture is the "vanilla RNN". Next lecture: you will learn about other RNN flavors like GRU and LSTM, and about multi-layer RNNs. By the end of the course, you will understand phrases like "stacked bidirectional LSTM with residual connections and self-attention".

Next time: problems with RNNs (vanishing gradients), which motivate fancy RNN variants: LSTM, GRU, multi-layer, bidirectional.