Words: Pronunciations and Language Models


Steve Renals
Informatics 2B: Learning and Data, Lecture 9, 19 February 2009

Overview
- Words: the lexicon, the pronunciation dictionary, the out-of-vocabulary rate, pronunciation modelling
- Language modelling: n-gram language models, the zero probability problem and smoothing

HMM speech recognition
Recorded speech is converted to acoustic features and decoded into text (a transcription). The decoder searches a space defined by the acoustic model (trained on acoustic data), the lexicon and the language model.

Pronunciation dictionary
- Words and their pronunciations provide the link between the sub-word HMMs and the language model
- Written by human experts; typically based on phones
- Constructing a dictionary involves:
  1. Selecting the words in the dictionary: we want to ensure high coverage of the words in the test data
  2. Representing the pronunciation(s) of each word: explicit modelling of pronunciation variation
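To make the lexicon's role concrete, here is a minimal sketch of a pronunciation dictionary as a mapping from words to phone sequences; the entries and phone symbols are hypothetical illustrations, not taken from any particular dictionary.

```python
# Minimal sketch of a pronunciation dictionary (hypothetical entries and
# phone symbols, written in a CMUdict-like style); purely illustrative.
PRON_DICT = {
    "ticket":    [["t", "ih", "k", "ax", "t"]],
    "tomato":    [["t", "ax", "m", "aa", "t", "ow"],    # one accent variant
                  ["t", "ax", "m", "ey", "t", "ow"]],   # another accent variant
    "edinburgh": [["eh", "d", "ih", "n", "b", "r", "ax"]],
}

def pronunciations(word):
    """Return the list of phone sequences for a word, or None if it is OOV."""
    return PRON_DICT.get(word.lower())

# Each phone sequence indexes the corresponding sub-word HMMs, so the
# dictionary is the bridge between the acoustic model and the language model.
print(pronunciations("tomato"))
print(pronunciations("glasgow"))   # None: an out-of-vocabulary word
```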

Out-of-vocabulary (OOV) rate
- OOV rate: the percentage of word tokens in the test data that are not contained in the ASR system dictionary
- The training vocabulary requires pronunciations for all words in the training data (since training constructs an HMM for each training utterance)
- Select the recognition vocabulary to minimize the OOV rate (by testing on development data)
- The recognition vocabulary may be different from the training vocabulary
- Empirical result: each OOV word results in 1.5-2 extra errors (>1 because of the loss of contextual information)

Multilingual aspects
- Many languages are morphologically richer than English; this has a major effect on vocabulary construction and language modelling
- Compounding (e.g. German): decompose compound words into their constituent parts, and carry out pronunciation and language modelling on the decomposed parts
- Highly inflected languages (e.g. Arabic, Slavic languages): specific components for modelling inflection (e.g. factored language models)
- Inflecting and compounding languages (e.g. Finnish)
- All these approaches aim to reduce ASR errors by cutting the OOV rate through modelling at the morph level; this also addresses data sparsity
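A minimal sketch of how the OOV rate might be measured when selecting a recognition vocabulary on development data; the toy vocabulary and development tokens below are illustrative assumptions.

```python
# Minimal sketch of OOV-rate measurement; the vocabulary and the development
# tokens are toy examples, not data from the lecture.
def oov_rate(vocabulary, test_tokens):
    """Percentage of word tokens not covered by the recognition vocabulary."""
    vocab = {w.lower() for w in vocabulary}
    oov = sum(1 for w in test_tokens if w.lower() not in vocab)
    return 100.0 * oov / len(test_tokens)

recognition_vocab = ["one", "two", "three", "ticket", "tickets", "to",
                     "edinburgh", "london", "leeds"]
dev_tokens = "two tickets to glasgow please".split()   # "glasgow", "please" are OOV

print(f"OOV rate: {oov_rate(recognition_vocab, dev_tokens):.1f}%")   # 40.0%
```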

Vocabulary size for different languages

[Figure 7: vocabulary growth curves, plotting the number of unique word forms (word types, in millions) against corpus size (word tokens, in millions) for Finnish, Estonian, Turkish and English.]

To improve the word models, one could attempt to increase the vocabulary (recognition lexicon) of these models. High coverage of the training-set vocabulary might also reduce the OOV rate of the recognition data (test set), but this may be difficult to obtain. The corpora used for Finnish, Estonian and Turkish are the datasets used for training the language models; for comparison, a curve for English is also shown (text from the New York Times magazine). While there are fewer than 200,000 different word forms in the 40-million-word English corpus, the corresponding values for Finnish and Estonian corpora of the same size exceed 1.8 million and 1.5 million words, respectively. The rate of growth remains high: the entire Finnish LM training data of 150 million words contains more than 4 million unique word forms, ten times the size of the (rather large) word lexicon currently used in the Finnish experiments.

OOV rate for different languages

[Figure 8: for growing amounts of training data, the proportion of new (out-of-vocabulary) words in the test set, for Finnish, Estonian, Turkish and English.]

Assuming that the entire vocabulary of the training set is used as the recognition lexicon, the words in the test set that do not occur in the training set are OOVs. The test sets are the same as those used in the speech recognition experiments; for English, a held-out subset of the New York Times corpus was used. The proportions of OOVs are fairly high for Finnish and Estonian: at 25 million words of training data, the OOV rates are 3.6% and 4.4%, respectively, compared with 1.7% for Turkish and only 0.74% for English. If the entire 150-million-word Finnish corpus were used (i.e., a lexicon of more than 4 million words), the OOV rate for the test set would still be 1.5%.

M. Creutz et al., Morph-based speech recognition and modeling of out-of-vocabulary words across languages, ACM Transactions on Speech and Language Processing, 5(1), article 3, 2007. http://doi.acm.org/10.1145/1322391.1322394

Single and multiple pronunciations
Words may have multiple pronunciations:
  1. Accent, dialect: tomato, zebra -- global changes to the dictionary based on consistent pronunciation variations
  2. Phonological phenomena: handbag -> [h ae m b ae g]; I can't stay -> [ah k ae n s t ay]
  3. Part of speech: project, excuse
This seems to imply many pronunciations per word, including:
  1. A global transform based on speaker characteristics
  2. Context-dependent pronunciation models, encoding phonological phenomena
BUT state-of-the-art large vocabulary systems average about 1.1 pronunciations per word: most words have a single pronunciation.

Consistency vs fidelity
- Empirical finding: adding pronunciation variants can result in reduced accuracy
- Adding pronunciations gives more flexibility to word models and increases the number of potential ambiguities: more possible state sequences can match the observed acoustics
- Speech recognition uses a consistent rather than a faithful representation of pronunciations
- A consistent representation requires only that the same word has the same phonemic representation (possibly with alternates): the training data need only be transcribed at the word level
- A faithful phonemic representation requires a detailed phonetic transcription of the training speech (much too expensive for large training data sets)

Modelling pronunciation variability
- State-of-the-art systems absorb variations in pronunciation into the acoustic models
- Context-dependent acoustic models may be thought of as giving a broad-class representation of word context
- Cross-word context-dependent models can implicitly represent cross-word phonological phenomena
- Hain (2002): a carefully constructed single-pronunciation dictionary (using the most common alignments) can result in a more accurate system than a multiple-pronunciation dictionary

Mathematical framework
HMM framework for speech recognition. Let W be a word sequence from the universe of possible utterances, and let X be the observed acoustics; we want to find

  W* = argmax_W P(W | X) = argmax_W [P(X | W) P(W)] / P(X) = argmax_W P(X | W) P(W)

Words are composed of a sequence of HMM states Q:

  W* = argmax_W sum_Q P(X | Q, W) P(Q, W)
     ≈ argmax_W sum_Q P(X | Q) P(Q | W) P(W)
     ≈ argmax_W max_Q P(X | Q) P(Q | W) P(W)

Three levels of model
- Acoustic model P(X | Q): the probability of the acoustics given the phone states; context-dependent HMMs using state clustering, phonetic decision trees, etc.
- Pronunciation model P(Q | W): the probability of the phone states given the words; may be as simple as a dictionary of pronunciations, or a more complex model
- Language model P(W): the probability of a sequence of words; typically an n-gram

Language modelling
- Basic idea: the language model is the prior probability of the word sequence, P(W)
- Use a language model to disambiguate between similar acoustics when combining linguistic and acoustic evidence: "never mind the nudist play" / "never mind the new display"
- Use hand-constructed networks in limited domains
- Statistical language models: cover ungrammatical utterances, are computationally efficient, are trainable from huge amounts of data, and can assign a probability to a sentence fragment as well as to a whole sentence

Finite-state network
[Diagram: a hand-written finite-state network over the words one, two, three, ticket, tickets, to, and, Edinburgh, London, Leeds]
- Typically hand-written
- Does not have wide coverage or robustness

Statistical language models
- For use in speech recognition a language model must be statistical, have wide coverage, and be compatible with left-to-right search algorithms
- Only a few grammar-based models have met these requirements (e.g. Chelba and Jelinek, 2000), and they do not yet scale as well as simple statistical models
- n-grams are (still) the state-of-the-art language model for ASR
  - Unsophisticated, linguistically implausible: short, finite context; modelling solely at the shallow word level
  - But: wide coverage, able to deal with ungrammatical strings, statistical and scalable
- The probability of a word depends only on the identity of that word and of the preceding n-1 words. These short sequences of n words are called n-grams.

Bigram language model
Word sequence W = w_1, w_2, ..., w_M:

  P(W) = P(w_1) P(w_2 | w_1) P(w_3 | w_1, w_2) ... P(w_M | w_1, w_2, ..., w_{M-1})

Bigram approximation: consider only one word of context:

  P(W) ≈ P(w_1) P(w_2 | w_1) P(w_3 | w_2) ... P(w_M | w_{M-1})

The parameters of a bigram model are the conditional probabilities P(w_i | w_j). Maximum likelihood estimates are obtained by counting:

  P(w_i | w_j) ≈ c(w_j, w_i) / c(w_j)

where c(w_j, w_i) is the number of observations of w_j followed by w_i, and c(w_j) is the number of observations of w_j (irrespective of what follows).

Bigram network
[Diagram: arcs labelled P(one | start of sentence), P(ticket | one), P(Edinburgh | one), P(end of sentence | Edinburgh), ...]
- n-grams can be represented as probabilistic finite-state networks
- Only some arcs (and nodes) are shown for clarity: in a full model there is an arc from every word to every word
- Note the special start-of-sentence and end-of-sentence probabilities
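The maximum likelihood estimate above is just counting and dividing. Here is a minimal sketch, assuming a toy two-sentence corpus and <s>/</s> sentence markers (both illustrative choices, not part of the lecture).

```python
# Minimal sketch of maximum-likelihood bigram estimation by counting;
# the toy corpus and the <s>/</s> markers are illustrative assumptions.
from collections import Counter

sentences = [["one", "ticket", "to", "edinburgh"],
             ["two", "tickets", "to", "london"]]

unigram_counts = Counter()
bigram_counts = Counter()
for sent in sentences:
    words = ["<s>"] + sent + ["</s>"]
    unigram_counts.update(words[:-1])                   # c(w_j): context counts
    bigram_counts.update(zip(words[:-1], words[1:]))    # c(w_j, w_i)

def p_bigram(w_i, w_j):
    """Maximum likelihood estimate P(w_i | w_j) = c(w_j, w_i) / c(w_j)."""
    if unigram_counts[w_j] == 0:
        return 0.0
    return bigram_counts[(w_j, w_i)] / unigram_counts[w_j]

print(p_bigram("ticket", "one"))   # 1.0 in this toy corpus
print(p_bigram("london", "one"))   # 0.0 -- the zero probability problem
```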

The zero probability problem
- Maximum likelihood estimation is based on counts of words in the training data
- If an n-gram is not observed, it will have a count of 0 and its maximum likelihood probability estimate will be 0
- The zero probability problem: just because something does not occur in the training data does not mean that it will not occur
- As n grows larger, the data grow sparser, and there will be more zero counts
- Solution: smooth the probability estimates so that unobserved events do not have zero probability
- Since probabilities sum to 1, this means that some probability mass is redistributed from observed to unobserved n-grams

Smoothing language models
What is the probability of an unseen n-gram?
- Add-one smoothing: add one to all counts and renormalize. This discounts non-zero counts and redistributes mass to zero counts. Since most n-grams are unseen (for large n there are more types than tokens!) it gives too much probability to unseen n-grams (discussed in Manning and Schütze)
- Absolute discounting: subtract a constant from the observed (non-zero count) n-grams, and redistribute the subtracted probability over the unseen n-grams (zero counts)
- Kneser-Ney smoothing: a family of smoothing methods based on absolute discounting that are at the state of the art (Goodman, 2001)
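As an illustration of the simplest of these methods, here is a sketch of add-one smoothing applied to the bigram counts from the previous sketch; deriving the vocabulary from the observed bigrams is an assumption made purely for the example.

```python
# Minimal sketch of add-one (Laplace) smoothing for bigrams, reusing
# unigram_counts and bigram_counts from the earlier sketch. The vocabulary
# definition below is an illustrative assumption.
vocabulary = {w for (w_j, w_i) in bigram_counts for w in (w_j, w_i)}
V = len(vocabulary)

def p_add_one(w_i, w_j):
    """P(w_i | w_j) with add-one smoothing: (c(w_j, w_i) + 1) / (c(w_j) + V)."""
    return (bigram_counts[(w_j, w_i)] + 1) / (unigram_counts[w_j] + V)

# Unseen bigrams now receive a small but non-zero probability.
print(p_add_one("london", "one"))   # > 0, unlike the maximum likelihood estimate
print(p_add_one("ticket", "one"))   # the seen bigram is discounted accordingly
```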

Backing off
How is the probability distributed over unseen events?
- Basic idea: estimate the probability of an unseen n-gram using the (n-1)-gram estimate
- Use successively less context: trigram -> bigram -> unigram
- Back-off models redistribute the probability freed by discounting the n-gram counts
For a bigram:

  P(w_i | w_j) = (c(w_j, w_i) - D) / c(w_j)    if c(w_j, w_i) > C
               = P(w_i) b_{w_j}                otherwise

where C is the count threshold, D is the discount, and b_{w_j} is the back-off weight required for normalization.

References
- Fosler-Lussier (2003): pronunciation modelling tutorial
- Hain (2002): implicit pronunciation modelling by context-dependent acoustic models
- Gotoh and Renals (2003): language modelling tutorial (and see references within)
- Manning and Schütze (1999): good coverage of n-gram models
- Jelinek (1991): review of early attempts to go beyond n-grams
- Chelba and Jelinek (2000): example of a probabilistic grammar-based language model
- Goodman (2001): state-of-the-art smoothing for n-grams
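As a closing illustration, here is a minimal sketch of the back-off bigram estimate from the "Backing off" slide above, again reusing the counts from the earlier sketches; the values of D and C, and the particular way the back-off weight b_{w_j} is computed, are illustrative assumptions rather than a prescribed recipe.

```python
# Minimal sketch of an absolute-discounting back-off bigram, reusing
# unigram_counts and bigram_counts from the earlier sketches. The discount D,
# threshold C and the unigram back-off below are illustrative assumptions.
D, C = 0.5, 0
total_tokens = sum(unigram_counts.values())

def p_unigram(w_i):
    return unigram_counts[w_i] / total_tokens

def backoff_weight(w_j):
    """Mass freed by discounting the seen bigrams of w_j, normalised over the
    unigram mass of the unseen continuations (so probabilities sum to 1)."""
    seen = [w_i for (ctx, w_i) in bigram_counts
            if ctx == w_j and bigram_counts[(ctx, w_i)] > C]
    freed = sum(D for _ in seen) / unigram_counts[w_j]
    unseen_mass = 1.0 - sum(p_unigram(w) for w in seen)
    return freed / unseen_mass if unseen_mass > 0 else 0.0

def p_backoff(w_i, w_j):
    """P(w_i|w_j) = (c(w_j,w_i) - D)/c(w_j) if c(w_j,w_i) > C, else P(w_i) * b_{w_j}."""
    if bigram_counts[(w_j, w_i)] > C:
        return (bigram_counts[(w_j, w_i)] - D) / unigram_counts[w_j]
    return p_unigram(w_i) * backoff_weight(w_j)

print(p_backoff("ticket", "one"))   # discounted maximum likelihood estimate
print(p_backoff("london", "one"))   # backed off to the unigram estimate
```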