Lecture 9: Speech Recognition

EE E6820: Speech & Audio Processing & Recognition
Lecture 9: Speech Recognition

Dan Ellis <dpwe@ee.columbia.edu>
Michael Mandel <mim@ee.columbia.edu>
Columbia University Dept. of Electrical Engineering
http://www.ee.columbia.edu/~dpwe/e6820
April 7, 2009

1. Recognizing speech
2. Feature calculation
3. Sequence recognition
4. Large vocabulary, continuous speech recognition (LVCSR)

Outline

1. Recognizing speech
2. Feature calculation
3. Sequence recognition
4. Large vocabulary, continuous speech recognition (LVCSR)

Recognizing speech

Example utterance: "So, I thought about that and I think it's still possible"
[Spectrogram: frequency 0-4000 Hz over the ~3 s utterance]

What kind of information might we want from the speech signal?
- words
- phrasing, speech acts (prosody)
- mood / emotion
- speaker identity

What kind of processing do we need to get at that information?
- time scale of feature extraction
- signal aspects to capture in features
- signal aspects to exclude from features

Speech recognition as Transcription

Transcription = speech to text: find a word string to match the utterance.
Gives a neat objective measure: word error rate (WER) in %, which can be a sensitive measure of performance.

Reference:   THE CAT SAT ON THE   MAT
Recognized:  *** CAT SAT AN THE A MAT

Three kinds of errors: deletion (THE), substitution (ON -> AN), insertion (A).

WER = (S + D + I) / N
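To make the metric concrete, here is a minimal sketch of WER computed by Levenshtein alignment over word lists; the helper name is illustrative, not part of the lecture's materials:

```python
import numpy as np

def word_error_rate(ref, hyp):
    """WER = (S + D + I) / N via Levenshtein alignment of word lists."""
    r, h = ref.split(), hyp.split()
    # d[i, j] = minimum edits to turn the first i ref words into the first j hyp words
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)   # all deletions
    d[0, :] = np.arange(len(h) + 1)   # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])  # substitution (or match)
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[-1, -1] / len(r)

# The slide's example: one deletion, one substitution, one insertion, N = 6
print(word_error_rate("THE CAT SAT ON THE MAT", "CAT SAT AN THE A MAT"))  # 0.5
```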

Problems: Within-speaker variability

Timing variation:
- word duration varies enormously
- fast speech "reduces" vowels
[Spectrogram with phone alignment of "SO I THOUGHT ABOUT THAT AND I THINK IT'S STILL POSSIBLE"]

Speaking style variation:
- careful/casual articulation
- soft/loud speech

Contextual effects:
- speech sounds vary with context, role: "How do you do?"

Problems: Between-speaker variability

Accent variation: regional / mother tongue
Voice quality variation: gender, age, huskiness, nasality
Individual characteristics: mannerisms, speed, prosody
[Spectrograms, 0-8000 Hz, of the same utterance from two speakers, male (mbma0) and female (fjdm2)]

Problems: Environment variability

Background noise: fans, cars, doors, papers
Reverberation: "boxiness" in recordings
Microphone/channel: huge effect on relative spectral gain
[Spectrograms: the same speech captured by a close mic vs. a tabletop mic]

How to recognize speech?

Cross-correlate templates?
- waveform? spectrogram?
- time-warp problems

Match short segments and handle the time warp later:
- model with slices of ~10 ms
- pseudo-stationary model of words
[Spectrogram with phone segments: sil g w eh n sil]
- other sources of variation...

Probabilistic formulation

The probability that a segment label is correct gives the standard form of speech recognizers.

Feature calculation: s[n] -> X_m (m = n/H) transforms the signal into an easily-classified domain.

Acoustic classifier: p(q^i | X) calculates the probability of each mutually-exclusive state q^i.

Finite-state acceptor (i.e. HMM):

Q* = argmax_{q_0, q_1, ..., q_L} p(q_0, q_1, ..., q_L | X_0, X_1, ..., X_L)

i.e. the MAP match of an allowable state sequence to the probabilities.

Standard speech recognizer structure

Fundamental equation of speech recognition:

Q* = argmax_Q p(Q | X, Θ) = argmax_Q p(X | Q, Θ) p(Q | Θ)

- X = acoustic features
- p(X | Q, Θ) = acoustic model
- p(Q | Θ) = language model
- argmax_Q = search over sequences

Questions:
- what are the best features?
- how do we model them?
- how do we find/match the state sequence?

Outline

1. Recognizing speech
2. Feature calculation
3. Sequence recognition
4. Large vocabulary, continuous speech recognition (LVCSR)

Feature Calculation

Goal: find a representational space most suitable for classification:
- waveform: voluminous, redundant, variable
- spectrogram: better, still quite variable
- ...?

Pattern recognition: the representation is an upper bound on performance
- maybe we should use the waveform...
- or maybe the representation can do all the work

Feature calculation is intimately bound to the classifier:
- pragmatic strengths and weaknesses

Features develop by slow evolution:
- current choices are more historical than principled

Features (1): Spectrogram

Plain STFT as features, e.g.

X_m[k] = S[mH, k] = Σ_n s[n + mH] w[n] e^{-j 2π k n / N}

[Male and female example spectrograms, 0-8000 Hz, with one feature-vector slice marked]

Similarities between corresponding segments, but still large differences.
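A minimal NumPy sketch of this front end, assuming 16 kHz audio with a 32 ms window (N = 512) and 10 ms hop (H = 160); the function name and defaults are illustrative:

```python
import numpy as np

def stft_features(s, N=512, H=160):
    """Magnitude STFT: |X_m[k]| = |sum_n s[n + mH] w[n] e^{-j 2 pi k n / N}|."""
    w = np.hanning(N)                              # analysis window
    frames = [s[m * H : m * H + N] * w
              for m in range((len(s) - N) // H + 1)]
    return np.abs(np.fft.rfft(frames, axis=1))     # shape (num_frames, N//2 + 1)
```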

Features (2): Cepstrum

Idea: decorrelate and summarize the spectral slices:

X_m[l] = IDFT{ log |S[mH, k]| }

- good for Gaussian models
- greatly reduces the feature dimension

[Male and female examples: spectrum vs. cepstrum]
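Continuing the sketch above, the real cepstrum of each frame is just the inverse DFT of the log magnitude spectrum, truncated to the low-quefrency coefficients:

```python
import numpy as np

def cepstra(mag_spec, n_ceps=13):
    """Real cepstrum per frame: IDFT of the log magnitude spectrum,
    truncated to the first n_ceps coefficients (the smooth envelope)."""
    log_spec = np.log(np.maximum(mag_spec, 1e-10))  # floor to avoid log(0)
    c = np.fft.irfft(log_spec, axis=1)              # IDFT along frequency
    return c[:, :n_ceps]                            # low quefrency only
```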

Features (3): Frequency axis warp

A linear frequency axis gives equal space to 0-1 kHz and 3-4 kHz, but their perceptual importance is very different.

Warp the frequency axis closer to a perceptual axis: mel, Bark, constant-Q, ... e.g. by summing power into warped bands:

X[c] = Σ_{k=l_c}^{u_c} |S[k]|²

[Male and female examples: spectrum vs. auditory spectrum (audspec)]
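A rough illustration of that band summation, assuming the usual mel formula mel(f) = 2595 log10(1 + f/700) and simple rectangular bands (real front ends typically use overlapping triangular filters):

```python
import numpy as np

def mel_bands(mag_spec, sr=16000, n_bands=20):
    """Sum spectral power into mel-spaced bands: X[c] = sum_{k=l_c}^{u_c} |S[k]|^2."""
    n_bins = mag_spec.shape[1]
    hz = np.linspace(0, sr / 2, n_bins)
    mel = 2595 * np.log10(1 + hz / 700)              # Hz -> mel
    edges = np.linspace(0, mel[-1], n_bands + 1)     # equal-width bands in mel
    idx = np.digitize(mel, edges[1:-1])              # band index of each FFT bin
    power = mag_spec ** 2
    return np.stack([power[:, idx == c].sum(axis=1) for c in range(n_bands)], axis=1)
```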

Features (4): Spectral smoothing

Generalizing across different speakers is helped by smoothing (i.e. blurring) the spectrum.

The truncated cepstrum is one way: an MMSE approximation to log |S[k]|.

LPC modeling is a little different: an MMSE approximation to |S[k]|, which prefers detail at spectral peaks.

[Male example: audspec vs. PLP-smoothed features, with one frame's spectral envelopes compared in dB]

Features (5): Normalization along time

Idea: capture feature variation, not absolute level. Hence: calculate the average level and subtract it:

Ŷ[n, k] = X̂[n, k] − mean_n{ X̂[n, k] }

This factors out a fixed channel frequency response: if x[n] = h_c * s[n], then

X̂[n, k] = log |X[n, k]| = log |H_c[k]| + log |S[n, k]|

[Male and female examples: PLP features before and after mean normalization]
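The subtraction itself is one line; a sketch operating on any log-domain feature matrix (frames × channels):

```python
def mean_normalize(log_feats):
    """Subtract each feature dimension's mean over time: removes the
    log|H_c[k]| term, i.e. the fixed channel frequency response."""
    return log_feats - log_feats.mean(axis=0, keepdims=True)
```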

Delta features

We want each segment to have static feature values, but some segments are intrinsically dynamic! So calculate their derivatives; maybe those are steadier?

Append dX/dt (and d²X/dt²) to the feature vectors.

[Male example: mean/variance-normalized PLP features, deltas, and double-deltas]

Relates to onset sensitivity in humans?
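A common way to estimate the derivatives is linear regression over a few surrounding frames; this sketch (window of ±2 frames, an assumed choice) appends both deltas and double-deltas:

```python
import numpy as np

def add_deltas(feats, w=2):
    """Append dX/dt and d2X/dt2, estimated by regression over +/- w frames."""
    def regress(x):
        pad = np.pad(x, ((w, w), (0, 0)), mode="edge")   # repeat edge frames
        num = sum(k * (pad[w + k : len(x) + w + k] - pad[w - k : len(x) + w - k])
                  for k in range(1, w + 1))
        return num / (2 * sum(k * k for k in range(1, w + 1)))

    delta = regress(feats)
    ddelta = regress(delta)          # second derivative: regress the deltas
    return np.concatenate([feats, delta, ddelta], axis=1)
```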

Overall feature calculation

MFCCs and/or RASTA-PLP:

MFCC:      sound -> FFT -> |X[k]| -> mel-scale frequency warp -> log -> IFFT -> truncate -> subtract mean (CMN) -> MFCC features
RASTA-PLP: sound -> FFT -> |X[k]| -> Bark-scale frequency warp -> log -> RASTA band-pass -> LPC smoothing -> cepstral recursion -> RASTA-PLP cepstral features

Key attributes:
- spectral, auditory scale
- decorrelation
- smoothed (spectral) detail
- normalization of levels
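For orientation only, the earlier sketches compose into a rough MFCC-style chain matching this diagram; a hypothetical composition, not a reference MFCC implementation (which would use triangular filters and a DCT):

```python
import numpy as np

def mfcc_like(signal, sr=16000, n_ceps=13):
    """Chain the sketches above: STFT -> mel bands -> log -> decorrelate
    -> truncate -> mean-normalize -> deltas."""
    bands = mel_bands(stft_features(signal), sr=sr)      # auditory frequency scale
    log_bands = np.log(np.maximum(bands, 1e-10))         # compress level differences
    ceps = np.fft.irfft(log_bands, axis=1)[:, :n_ceps]   # decorrelate, keep smooth detail
    return add_deltas(mean_normalize(ceps))              # CMN, then dynamics
```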

Features summary

[Male and female examples at each stage: spectrum, audspec, RASTA, deltas]

The aim is to normalize away variation within the same phones while keeping the contrast between different phones.

Outline

1. Recognizing speech
2. Feature calculation
3. Sequence recognition
4. Large vocabulary, continuous speech recognition (LVCSR)

Sequence recognition: Dynamic Time Warp (DTW)

Framewise comparison with stored templates:
[Distance matrix between the input frames of a test word and reference templates for ONE, TWO, THREE, FOUR, FIVE]

- distance metric?
- comparison across templates?

Dynamic Time Warp (2)

Find the lowest-cost constrained path:
- matrix d(i, j) of distances between input frame f_i and reference frame r_j
- allowable predecessors and transition costs T_xy

Lowest cost to (i, j):

D(i, j) = d(i, j) + min{ D(i−1, j) + T_10,
                         D(i, j−1) + T_01,
                         D(i−1, j−1) + T_11 }

i.e. the local match cost plus the best predecessor (including its transition cost).

Best path via traceback from the final state: store the best predecessor for each (i, j).
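The recursion translates directly into code; a sketch with the three predecessors and traceback, where the transition costs T default to an assumed (1, 1, 0) penalty pattern:

```python
import numpy as np

def dtw(dist, T=(1.0, 1.0, 0.0)):
    """Lowest-cost path through a local-distance matrix d(i, j) using
    D(i,j) = d(i,j) + min(D(i-1,j)+T10, D(i,j-1)+T01, D(i-1,j-1)+T11)."""
    T10, T01, T11 = T
    I, J = dist.shape
    D = np.full((I, J), np.inf)
    back = np.zeros((I, J, 2), dtype=int)     # best predecessor of each cell
    D[0, 0] = dist[0, 0]
    for i in range(I):
        for j in range(J):
            if i == j == 0:
                continue
            steps = [(D[i-1, j] + T10, (i-1, j)) if i > 0 else (np.inf, (0, 0)),
                     (D[i, j-1] + T01, (i, j-1)) if j > 0 else (np.inf, (0, 0)),
                     (D[i-1, j-1] + T11, (i-1, j-1)) if i > 0 and j > 0 else (np.inf, (0, 0))]
            cost, prev = min(steps, key=lambda s: s[0])
            D[i, j] = dist[i, j] + cost
            back[i, j] = prev
    # trace back from the final state to recover the alignment
    path, ij = [], (I - 1, J - 1)
    while ij != (0, 0):
        path.append(ij)
        ij = tuple(back[ij])
    return D[-1, -1], [(0, 0)] + path[::-1]
```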

DTW-based recognition

Reference templates for each possible word.

For isolated words:
- mark the endpoints of the input word
- calculate scores through each template (+ prune)

For continuous speech: link together the word ends.

Successfully handles timing variation; recognizes speech at reasonable cost.

Statistical sequence recognition

DTW is limited because it's hard to optimize:
- learning from multiple observations
- interpretation of the distance and transition costs?

We need a theoretical foundation: probability.

Formulate recognition as the MAP choice among word sequences:

Q* = argmax_Q p(Q | X, Θ)

- X = observed features
- Q = word sequences
- Θ = all current parameters

State-based modeling

Assume a discrete-state model for the speech:
- observations are divided up into time frames
- model states <-> observations:

Model M_j, states Q_k: q_1 q_2 q_3 q_4 q_5 q_6 ... (states, over time)
Observations X_1^N: x_1 x_2 x_3 x_4 x_5 x_6 ... (observed feature vectors)

Probability of the observations given the model:

p(X | Θ) = Σ_{all Q} p(X_1^N | Q, Θ) p(Q | Θ)

i.e. a sum over all possible state sequences Q.

- How do the observations X_1^N depend on the states Q?
- How do the state sequences Q depend on the model Θ?

HMM review

An HMM is specified by parameters Θ:

- states q^i (e.g. k, a, t)
- transition probabilities a_ij ≡ p(q_n = j | q_{n-1} = i), e.g. the left-to-right matrix

      1.0  0.0  0.0  0.0
      0.9  0.1  0.0  0.0
      0.0  0.9  0.1  0.0
      0.0  0.0  0.9  0.1

- emission distributions b_i(x) ≡ p(x | q^i)
- (+ initial state probabilities π_i ≡ p(q_1 = i))

HMM summary (1)

HMMs are a generative model: recognition is inference of p(Q | X).

During generation, the behavior of the model depends only on the current state q_n:
- transition probabilities p(q_{n+1} | q_n) = a_ij
- observation distributions p(x_n | q_n) = b_i(x)

Given states Q = {q_1, q_2, ..., q_N} and observations X = X_1^N = {x_1, x_2, ..., x_N}, the Markov assumption makes

p(X, Q | Θ) = Π_n p(x_n | q_n) p(q_n | q_{n-1})

HMM summary (2)

Calculate p(X | Θ) via the forward recursion:

p(X_1^n, q_n = j) = α_n(j) = [ Σ_{i=1}^S α_{n-1}(i) a_ij ] b_j(x_n)

Viterbi (best path) approximation:

α*_n(j) = [ max_i { α*_{n-1}(i) a_ij } ] b_j(x_n)

then backtrace to get

Q* = argmax_Q p(X, Q | Θ)

Pictorially: the model M is assumed, the state sequence Q = {q_1, q_2, ..., q_n} is hidden, the observations X are observed, and M*, Q* are inferred.
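Both recursions, sketched in the log domain to avoid underflow over long utterances; log_A[i, j] = log a_ij and log_B[n, j] = log b_j(x_n) are assumed precomputed:

```python
import numpy as np
from scipy.special import logsumexp

def forward(log_A, log_B, log_pi):
    """Forward recursion: alpha_n(j) = [sum_i alpha_{n-1}(i) a_ij] b_j(x_n).
    Returns log p(X | Theta)."""
    N, S = log_B.shape
    alpha = log_pi + log_B[0]
    for n in range(1, N):
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[n]
    return logsumexp(alpha)

def viterbi(log_A, log_B, log_pi):
    """Best-path approximation: max instead of sum, plus traceback."""
    N, S = log_B.shape
    alpha = log_pi + log_B[0]
    back = np.zeros((N, S), dtype=int)
    for n in range(1, N):
        scores = alpha[:, None] + log_A          # scores[i, j] for i -> j
        back[n] = scores.argmax(axis=0)          # best predecessor of each j
        alpha = scores.max(axis=0) + log_B[n]
    q = [int(alpha.argmax())]
    for n in range(N - 1, 0, -1):                # trace back best predecessors
        q.append(int(back[n, q[-1]]))
    return q[::-1], float(alpha.max())
```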

Outline

1. Recognizing speech
2. Feature calculation
3. Sequence recognition
4. Large vocabulary, continuous speech recognition (LVCSR)

Recognition with HMMs

Isolated word: choose the best p(M | X) ∝ p(X | M) p(M)
- Model M_1 (w ah n): p(X | M_1) p(M_1) = ...
- Model M_2 (t uw): p(X | M_2) p(M_2) = ...
- Model M_3 (th r iy): p(X | M_3) p(M_3) = ...

Continuous speech: Viterbi decoding of one large HMM gives the words; the word models for "one", "two", "three" are joined through silence (sil), with the priors p(M_1), p(M_2), p(M_3) on the entering arcs.

Training HMMs

The probabilistic foundation allows us to train HMMs to fit training data, i.e. estimate a_ij, b_i(x) given data; better than DTW...

Algorithms that improve p(X | Θ) are key to the success of HMMs: maximum likelihood of models...

The state alignments Q for the training examples are generally unknown (else estimating the parameters would be easy):
- Viterbi training ("forced alignment"): choose the best labels (heuristic)
- EM training: "fuzzy" labels (guaranteed local convergence)

Overall training procedure

- Labelled training data: utterances with word transcriptions (e.g. "two one five", "four three")
- Word models built from phone sequences: one = w ah n, two = t uw, three = th r iy, ...
- Fit the models to the data
- Re-estimate the model parameters
- Repeat until convergence

Language models

Recall the fundamental equation of speech recognition:

Q* = argmax_Q p(Q | X, Θ) = argmax_Q p(X | Q, Θ_A) p(Q | Θ_L)

So far we have looked at p(X | Q, Θ_A). What about p(Q | Θ_L)?
- Q is a particular word sequence
- Θ_L are the parameters related to the language

Two components:
- link state sequences to words: p(Q | w_i)
- priors on word sequences: p(w_i | M_j)

HMM Hierarchy

HMMs support composition: time dilation, pronunciation, and grammar can all be handled within the same framework.

[Hierarchy diagram: word sequences (THE CAT/DOG ATE/SAT) -> phone sequences (k ae t, ...) -> subphone states (ae_1 ae_2 ae_3)]

p(Q | M) = p(Q, φ, w | M) = p(Q | φ) p(φ | w) p(w_n | w_1^{n−1}, M)

Pronunciation models

Define the states within each word: p(Q | w_i).

Can have unique states for each word ("whole-word" modeling), or share ("tie") subword units between words to reflect the underlying phonology:
- more training examples for each unit
- generalizes to unseen words
- (or can do it automatically...)

Start e.g. from a pronunciation dictionary:

ZERO(0.5)  z iy r ow
ZERO(0.5)  z ih r ow
ONE(1.0)   w ah n
TWO(1.0)   tcl t uw

Learning pronunciations

A phone recognizer transcribes the training data as phones; align these to the canonical pronunciations:

Baseform phoneme string:  f ay v  y iy r  ow l d
Surface phone string:     f ah ay v  y uh r  ow l

Infer modification rules to predict other pronunciation variants, e.g. d-deletion:

d -> ∅ / l _ stop   (p = 0.9)

Generate pronunciation variants; use forced alignment to find their weights.

Grammar

Account for the different likelihoods of different words and word sequences: p(w_i | M_j).

The true probabilities are very complex for LVCSR: they would need parses, but speech is often agrammatic.

Use n-grams:

p(w_n | w_1^{n−1}) = p(w_n | w_{n−K}, ..., w_{n−1})

e.g. n-gram models of Shakespeare:
- n=1: To him swallowed confess hear both. Which. Of save on...
- n=2: What means, sir. I confess she? then all sorts, he is trim,...
- n=3: Sweet prince, Falstaff shall die. Harry of Monmouth's grave...
- n=4: King Henry. What! I will go seek the traitor Gloucester....

Big win in recognizer WER: raw recognition results are often highly ambiguous, and the grammar guides the search to reasonable solutions.

Smoothing LVCSR grammars

n-grams (n = 3 or 4) are estimated from large text corpora (100M+ words), but:
- text is not like spoken language
- a 100,000-word vocabulary has 10^15 possible trigrams!
- we never see enough examples
- unobserved trigrams should NOT have Pr = 0!

Back off to bigrams, unigrams:
- p(w_n) as an approximation to p(w_n | w_{n−1}), etc.
- interpolate 1-gram, 2-gram, 3-gram with learned weights?

Lots of other ideas, e.g. category grammars:

p(PLACE | "went, to") · p(w_n | PLACE)

- how to define the categories?
- how to tag the words in the training corpus?
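As one concrete instance of the "interpolate with learned weights" idea, a sketch of a fixed-weight interpolated trigram; the class name and the weights are illustrative, and in practice the weights would be tuned on held-out data (e.g. by EM):

```python
from collections import Counter

class InterpolatedTrigram:
    """p(w | u, v) = l3 * f(w | u, v) + l2 * f(w | v) + l1 * f(w),
    so unseen trigrams still get nonzero probability."""
    def __init__(self, corpus, weights=(0.6, 0.3, 0.1)):
        self.l3, self.l2, self.l1 = weights
        words = corpus.split()
        self.uni = Counter(words)
        self.bi = Counter(zip(words, words[1:]))
        self.tri = Counter(zip(words, words[1:], words[2:]))
        self.total = len(words)

    def prob(self, u, v, w):
        f3 = self.tri[(u, v, w)] / self.bi[(u, v)] if self.bi[(u, v)] else 0.0
        f2 = self.bi[(v, w)] / self.uni[v] if self.uni[v] else 0.0
        f1 = self.uni[w] / self.total
        return self.l3 * f3 + self.l2 * f2 + self.l1 * f1
```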

Decoding

How do we find the MAP word sequence?

States, pronunciations, and words define one big HMM, with 100,000+ individual states for LVCSR!

Exploit the hierarchic structure:
- phone states are independent of the word
- the next word is (semi-)independent of the word history

[Lexical tree sharing phone prefixes: root -> d -> uw (DO); d -> iy k ow d (DECODE), + z (DECODES), + axr (DECODER); d -> iy k oy (DECOY); root -> b -> ...]

Decoder pruning

Searching all possible word sequences? We need to restrict the search to the most promising ones: beam search.
- sort hypotheses by an estimate of total probability = Pr(so far) + a lower-bound estimate of the remainder
- trade search errors for speed

Start-synchronous algorithm:
- extract the top hypothesis from the queue: [P_n, {w_1, ..., w_k}, n] (probability so far, word history, next time frame)
- find plausible words {w_i} starting at time n
- create new hypotheses: [P_n · p(x_n^{n+N−1} | w_i) p(w_i | w_k, ...), {w_1, ..., w_k, w_i}, n + N]
- discard a hypothesis if it is too unlikely, or if the queue is too long
- else re-insert it into the queue and repeat
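A toy rendering of this start-synchronous loop, where word_scores is an assumed callback returning (word, combined acoustic + LM log-probability, duration in frames) for words plausibly starting at frame n:

```python
import heapq

def beam_search(word_scores, T, beam=50):
    """Start-synchronous decoder sketch. Hypotheses are tuples
    (neg log prob so far, word history, next frame); T is the utterance length."""
    queue = [(0.0, (), 0)]
    best = (float("inf"), ())
    while queue:
        neg_logp, words, n = heapq.heappop(queue)
        if n >= T:                      # hypothesis covers the whole input
            best = min(best, (neg_logp, words))
            continue
        for w, logp, dur in word_scores(n):
            heapq.heappush(queue, (neg_logp - logp, words + (w,), n + dur))
        # prune: keep only the most promising hypotheses
        queue = heapq.nsmallest(beam, queue)
        heapq.heapify(queue)
    return best[1], -best[0]
```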

Summary

The speech signal is highly variable:
- need models that absorb variability
- hide what we can with robust features

Speech is modeled as a sequence of features:
- recognition needs a temporal aspect
- best time-alignment of templates = DTW

Hidden Markov models are a rigorous solution:
- self-loops allow temporal dilation
- exact, efficient likelihood calculations

Language modeling captures larger structure:
- pronunciation, word sequences
- fits directly into the HMM state structure
- need to prune the search space in decoding

Parting thought: forward-backward trains models to generate; can we train them to discriminate?

References

Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.

Mehryar Mohri, Fernando Pereira, and Michael Riley. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69-88, 2002.

Wendy Holmes. Speech Synthesis and Recognition. CRC, December 2001. ISBN 0748408576.

Lawrence Rabiner and Biing-Hwang Juang. Fundamentals of Speech Recognition. Prentice Hall PTR, April 1993. ISBN 0130151572.

Daniel Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Prentice Hall, January 2000. ISBN 0130950696.

Frederick Jelinek. Statistical Methods for Speech Recognition (Language, Speech, and Communication). The MIT Press, January 1998. ISBN 0262100665.

Xuedong Huang, Alex Acero, and Hsiao-Wuen Hon. Spoken Language Processing: A Guide to Theory, Algorithm and System Development. Prentice Hall PTR, April 2001. ISBN 0130226165.