This lecture: Automatic speech recognition (ASR). Applying HMMs to ASR, practical aspects of ASR, and Levenshtein distance. CSC401/2511, Spring 2017.



Consider what we want speech to do. Dictation: "My hands are in the air." Telephony: "Buy ticket... AC490... yes." Multimodal interaction: "Put this there." Can we just use GMMs?

Speech is dynamic. Speech changes over time. GMMs are good for high-level clustering, but they encode no notion of order, sequence, or time. Speech is an expression of language: we want to incorporate knowledge of how phonemes and words are ordered with language models.

Speech is sequences of phonemes(*). E.g., /ow p ah n dh ah p aa d b ey d ao r z/ → "open the pod bay doors" → open(podbay.doors); We want to convert a series of MFCC vectors into a sequence of phonemes. ((*) not really.)

Phoneme dictionaries. There are many phonemic dictionaries that map words to pronunciations (i.e., lists of phoneme sequences). The CMU dictionary (http://www.speech.cs.cmu.edu/cgi-bin/cmudict) is popular: 127K words transcribed with the ARPAbet, including some rudimentary prosody (stress) markers. For example:

  EVOLUTION     EH2 V AH0 L UW1 SH AH0 N
  EVOLUTION(2)  IY2 V AH0 L UW1 SH AH0 N
  EVOLUTION(3)  EH2 V OW0 L UW1 SH AH0 N
  EVOLUTION(4)  IY2 V OW0 L UW1 SH AH0 N
  EVOLUTIONARY  EH2 V AH0 L UW1 SH AH0 N EH2 R IY0
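
For concreteness, here is a minimal Python sketch of reading entries in this format. The file path is hypothetical, and real cmudict releases also contain comment lines and punctuation entries that need more careful handling.

```python
def parse_cmudict(path="cmudict.dict"):
    """Map each word to a list of pronunciations (lists of ARPAbet phonemes)."""
    pron = {}
    with open(path, encoding="latin-1") as f:
        for line in f:
            if line.startswith(";;;"):        # comment line
                continue
            word, *phones = line.split()
            word = word.split("(")[0]         # strip variant markers: WORD(2)
            pron.setdefault(word, []).append(phones)
    return pron

# e.g., pron["EVOLUTION"] would then hold several variants such as
# ['EH2', 'V', 'AH0', 'L', 'UW1', 'SH', 'AH0', 'N']
```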

Annotation/transcription. Speech data must be segmented and annotated in order to be useful to an ASR learning component. Programs like Wavesurfer or Praat allow you to demarcate where a phoneme begins and ends in time.

Putting it together? [Diagram: the utterance "open the pod bay doors" produced jointly by a language model and an acoustic model.]

The noisy channel model for ASR. A language model (the source) generates a word sequence W with probability P(W); the acoustic model (the channel) transforms it into the observed acoustic sequence O with probability P(O|W). The decoder recovers the most likely word sequence:

  W* = argmax_W P(O|W) P(W)

How do we encode P(O|W)?
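
As a toy illustration of this argmax (all scores below are invented purely for illustration, not real model outputs), decoding just sums log-probabilities from the two models:

```python
import math

acoustic = {  # log P(O|W): how well each hypothesis explains the audio
    "recognize speech": math.log(0.30),
    "wreck a nice beach": math.log(0.45),
}
lm = {        # log P(W): prior plausibility under the language model
    "recognize speech": math.log(0.020),
    "wreck a nice beach": math.log(0.001),
}
best = max(acoustic, key=lambda W: acoustic[W] + lm[W])
print(best)   # "recognize speech": the LM outweighs the acoustic edge
```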

Reminder: discrete HMMs. Previously we saw discrete HMMs: at each state we observed a discrete symbol from a finite set of discrete symbols. E.g., three states, each with its own emission distribution over words:

  word     state 0   state 1   state 2
  ship      0.1       0.25      0.3
  pass      0.05      0.25      0
  camp      0.05      0.05      0
  frock     0.6       0.3       0.2
  soccer    0.05      0.05      0.05
  mother    0.1       0.09      0.05
  tops      0.05      0.01      0.4

Continuous HMMs (CHMMs). A continuous HMM has observations that are distributed over continuous variables. Observation probabilities, b_i, are also continuous. E.g., here b_0(x) tells us the probability of seeing the (multivariate) continuous observation x while in state 0, e.g., x = [4.32957, 2.48562, 1.08139, 0.45628].

Defining CHMMs. Continuous HMMs are very similar to discrete HMMs:

  S = {s_1, ..., s_N}: set of states (e.g., subphones)
  O = R^d: continuous observation space
  Π = {π_1, ..., π_N}: initial state probabilities
  A = {a_ij}, i, j ∈ S: state transition probabilities
  B = {b_i(x)}, i ∈ S, x ∈ O: state output probabilities (i.e., Gaussian mixtures)

yielding

  Q = (q_1, ..., q_T), q_i ∈ S: state sequence
  X = (x_1, ..., x_T), x_i ∈ O: observation sequence
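
A minimal sketch of one such output density, assuming diagonal-covariance Gaussian mixtures; the parameter shapes below are assumptions for illustration, not a fixed API:

```python
import numpy as np

def gmm_log_density(x, weights, means, variances):
    """log b(x) for one state: log sum_m w_m N(x; mu_m, diag(var_m))."""
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    log_comp = (np.log(weights)
                - 0.5 * (d * np.log(2 * np.pi)
                         + np.sum(np.log(variances), axis=1)
                         + np.sum((x - means) ** 2 / variances, axis=1)))
    return np.logaddexp.reduce(log_comp)   # log-sum-exp over components

# e.g., a 2-component mixture over 4-dimensional observations:
w = np.array([0.6, 0.4])
mu = np.zeros((2, 4))
var = np.ones((2, 4))
print(gmm_log_density([4.32957, 2.48562, 1.08139, 0.45628], w, mu, var))
```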

Word-level HMMs? Imagine that we want to learn an HMM for each word in our lexicon (e.g., 60K words → 60K HMMs). No, thank you! Zipf's law tells us that many words occur very infrequently, and one (or a few) training examples of a word is not enough to train a model as highly parameterized as a CHMM. In a word-level HMM, each state might be a phoneme.

Phoneme HMMs. Phonemes change over time; we model these dynamics by building one HMM for each phoneme. Tristate phoneme models are popular; the centre state is often the steady part. [Figure: a tristate phoneme model with output distributions b0, b1, b2 (e.g., for /oi/).]

Phoneme HMMs. We train each phoneme HMM using all sequences of that phoneme, even from different words. [Figure: phoneme HMMs (/iy/, /ih/, ..., /eh/, /s/, /sh/) trained from a time-aligned annotation (e.g., frames 64-85 "ae", 85-96 "sh", 96-102 "epi", 102-106 "m") paired with the MFCC observation vectors over those frames.]
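
A sketch of that data-gathering step, under an assumed annotation format of (start frame, end frame, label) triples like the ones in the figure:

```python
import numpy as np

def segments_for_phoneme(mfcc, annotation, phoneme):
    """mfcc: (T, d) array; annotation: list of (start, end, label) triples.
    Returns every MFCC segment labelled with the given phoneme, to be
    pooled as training data for that phoneme's HMM."""
    return [mfcc[start:end] for start, end, label in annotation
            if label == phoneme]

mfcc = np.random.randn(120, 42)                       # fake utterance
ann = [(64, 85, "ae"), (85, 96, "sh"), (96, 102, "epi"), (102, 106, "m")]
sh_segments = segments_for_phoneme(mfcc, ann, "sh")   # one (11, 42) segment
```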

Combining models. We can learn an N-gram language model from word-level transcriptions of speech data; these models are discrete and are trained using MLE. Our phoneme HMMs together constitute our acoustic model: each phoneme HMM tells us how a phoneme sounds. We can combine these models by concatenating phoneme HMMs together according to a known lexicon, using a word-to-phoneme dictionary.

Combining models. If we know how phonemes combine to make words, we can simply concatenate our phoneme models by inserting and adjusting transition weights. E.g., "Zipf" is pronounced /z ih f/, so we chain the /z/, /ih/, and /f/ HMMs in sequence. (It's a tiny bit more complicated than this: normally phoneme HMMs have special "handle" states at either end that connect to other HMMs.)
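
A sketch of the concatenation under the simplifying assumption of a flat layout (no handle states); each phoneme's final state gets an assumed exit probability bridging into the next phoneme's first state, and its row is renormalized to stay a valid distribution:

```python
import numpy as np

def concatenate(transitions, exit_probs):
    """transitions: list of (k, k) matrices; exit_probs[i]: P(leave model i)."""
    sizes = [A.shape[0] for A in transitions]
    n = sum(sizes)
    big = np.zeros((n, n))
    offset = 0
    for A, k, p_exit in zip(transitions, sizes, exit_probs):
        big[offset:offset + k, offset:offset + k] = A   # copy phoneme block
        nxt = offset + k
        if nxt < n:                        # bridge: last state of this
            big[nxt - 1, nxt] = p_exit     # phoneme -> first of the next
            big[nxt - 1] /= big[nxt - 1].sum()   # renormalize that row
        offset = nxt
    return big
```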

Co-articulation and triphones. Co-articulation: n. When a phoneme is influenced by adjacent phonemes. A triphone HMM captures co-articulation: triphone model /a-b+c/ is phoneme b when preceded by a and followed by c. Two (of many) triphone HMMs for /t/: /s-t+iy/ and /iy-t+eh/.

Combining triphone HMMs. Triphone models can only connect to other triphone models that match, e.g., /z+ih/ → /z-ih+f/ → /ih-f/.
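
A small sketch of expanding a word's phoneme string into matching triphone labels in the /left-centre+right/ notation above (the function name is ours):

```python
def triphones(phones):
    """['z', 'ih', 'f'] -> ['/z+ih/', '/z-ih+f/', '/ih-f/']"""
    labels = []
    for i, p in enumerate(phones):
        left = phones[i - 1] + "-" if i > 0 else ""
        right = "+" + phones[i + 1] if i < len(phones) - 1 else ""
        labels.append(f"/{left}{p}{right}/")
    return labels

print(triphones(["z", "ih", "f"]))   # ['/z+ih/', '/z-ih+f/', '/ih-f/']
```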

Concatenating phoneme models. We can easily incorporate unigram probabilities through transitions, too. [Figure from the Jurafsky & Martin text.]

Bigram models. [Figure from the Jurafsky & Martin text.]

Using CHMMs. As before, these HMMs are generative models that encode statistical knowledge of how output is generated. We train CHMMs with Baum-Welch (a type of Expectation-Maximization), as we did before with discrete HMMs; here, the observation parameters, b_i(x), are adjusted using the GMM training recipe from last lecture. We find the best state sequences using Viterbi, as before; here, the best state sequence gives us a sequence of phonemes and words.
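
A minimal log-space Viterbi sketch; the emission log-densities log b_i(x_t) are assumed precomputed (e.g., with a GMM density like the earlier sketch), and the parameter shapes are assumptions:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (N,); log_A: (N, N); log_B: (T, N) with log b_i(x_t)."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]            # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A  # (from-state, to-state)
        back[t] = scores.argmax(axis=0)  # remember the best predecessor
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):        # follow backpointers home
        path.append(int(back[t][path[-1]]))
    return path[::-1]                    # best state sequence
```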

Speech recognition architecture. [Pipeline: speech input → cepstral feature extraction → MFCC features O → Gaussian mixture models → phoneme likelihoods P(O|W) → HMM lexicon → Viterbi decoder, with an N-gram language model supplying P(W) → output word sequence W*, e.g., "a real poncho".]

Speech databases. Large-vocabulary continuous ASR is meant to encode full conversational speech, with a vocabulary of >64K words; this requires lots of data to train our models. The Switchboard corpus contains 2430 conversations spread over about 240 hours of data (~14 GB). The TIMIT database contains 6300 sentences from 630 speakers; relatively small (~750 MB), but very popular. Speech data from conferences (e.g., TED) or from broadcast news tends to be between 3 GB and 30 GB.

Aspects of ASR systems in the world.
  Speaking mode: isolated word (e.g., "yes") vs. continuous (e.g., "Siri, ask Cortana for the weather").
  Speaking style: read speech vs. spontaneous speech; the latter contains many dysfluencies (e.g., stuttering, "uh", "like", ...).
  Enrolment: speaker-dependent (all training data from one speaker) vs. speaker-independent (training data from many speakers).
  Vocabulary: small (<20 words) or large (>50,000 words).
  Transducer: cell phone? Noise-cancelling microphone? Teleconference microphone?

Signal-to-noise ratio. We are often concerned with the signal-to-noise ratio (SNR), which measures the ratio between the power of a desired signal within a recording (P_signal, e.g., the human speech) and additive noise (P_noise). Noise typically includes background noise (e.g., people talking, wind) and signal degradation, which is normally white noise produced by the medium of transmission.

  SNR_dB = 10 log10(P_signal / P_noise)

High SNR_dB is >30 dB; low SNR_dB is <10 dB. (You don't have to memorize this formula.)
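
A sketch of the computation, assuming we have separate sample arrays for the clean signal and the noise; in practice the noise power must be estimated, e.g., from speech-free frames:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels from raw sample arrays."""
    p_signal = np.mean(np.asarray(signal, dtype=float) ** 2)   # average power
    p_noise = np.mean(np.asarray(noise, dtype=float) ** 2)
    return 10 * np.log10(p_signal / p_noise)
```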

Audio-visual speech methods. Observing the vocal tract directly, rather than through inference, can be very helpful in automatic speech recognition. The shape and aperture of the mouth give some clues as to the phoneme being uttered. Depending on the level of invasiveness, we can even measure the glottis and tongue directly.

Example of articulatory data. TORGO was built to train augmented ASR systems: 9 subjects with cerebral palsy (1 with ALS) and 9 matched controls. Each reads 500-1000 prompts over 3 hours that cover phonemes and articulatory contrasts (e.g., "meat" vs. "beat"). Electromagnetic articulography (and video) tracks points to <1 mm.

Example: lip aperture and nasals. [Figure: lip apertures over time and acoustic spectrograms for /m/, /n/, and /ng/.]

Evaluating ASR accuracy. How can you tell how good an ASR system is at recognizing speech? E.g., if somebody said (Reference) "how to recognize speech" but an ASR system heard (Hypothesis) "how to wreck a nice beach", how do we quantify the error? One measure is word accuracy: #CorrectWords / #ReferenceWords (e.g., 2/4 above). This runs into problems similar to those we saw with SMT: e.g., the hypothesis "how to recognize speech boing boing boing boing boing" has 100% accuracy by this measure. Normalizing by #HypothesisWords also has problems.
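
A sketch of this naive measure and its failure mode, using a deliberately crude "correct word" rule (bag-of-words membership, ignoring order) for illustration:

```python
def word_accuracy(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    correct = sum(w in hyp for w in ref)   # crude: ignores order and extras
    return correct / len(ref)

print(word_accuracy("how to recognize speech",
                    "how to wreck a nice beach"))            # 0.5
print(word_accuracy("how to recognize speech",
                    "how to recognize speech boing boing"))  # 1.0 (!)
```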

Word-error rates (WER). ASR enthusiasts are often concerned with word-error rate (WER), which counts the different kinds of errors that can be made by ASR at the word level.
  Substitution error: one word being mistaken for another, e.g., "shift" given "ship".
  Deletion error: an input word that is skipped, e.g., "I Torgo" given "I am Torgo".
  Insertion error: a hallucinated word that was not in the input, e.g., "This Norwegian parrot is no more" given "This parrot is no more".

Evaluating ASR accuracy. But how do we decide which errors are of each type? E.g., Reference: "how to recognize speech"; Hypothesis: "how to wreck a nice beach". It's not so simple: "speech" seems to be mistaken for "beach", except the /s/ phoneme is incorporated into the preceding hypothesis word, "nice" (/n ay s/). Here, "recognize" seems to be mistaken for "wreck a nice". Are each of "wreck", "a", and "nice" substitutions of "recognize"? Is "wreck" a substitution for "recognize"? If so, the words "a" and "nice" must be insertions. Is "nice" a substitution for "recognize"? If so, the words "wreck" and "a" must be insertions.

Levenshtein distance. In practice, ASR people are often more concerned with overall WER, and don't care about how those errors are partitioned; e.g., 3 substitution errors are equivalent to 1 substitution plus 2 insertions. The Levenshtein distance is a straightforward algorithm based on dynamic programming that allows us to compute overall WER.

Levenshtein distance.

  Allocate matrix R[n+1, m+1]   // where n is the number of reference words
                                // and m is the number of hypothesis words
  Initialize R[0,0] = 0, and R[i,j] = ∞ for all other i = 0 or j = 0
  for i in 1..n:                         // #ReferenceWords
    for j in 1..m:                       // #HypothesisWords
      R[i,j] = min( R[i-1,j] + 1,        // deletion
                    R[i-1,j-1],          // if the i-th reference word and
                                         // the j-th hypothesis word match
                    R[i-1,j-1] + 1,      // if they differ, i.e., substitution
                    R[i,j-1] + 1 )       // insertion
  Return 100 * R[n,m] / n
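
A runnable Python version of this pseudocode, keeping the slide's initialization (∞ everywhere else in row 0 and column 0):

```python
def wer(reference, hypothesis):
    """Word-error rate (%) via the Levenshtein grid above."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    INF = float("inf")
    R = [[INF] * (m + 1) for _ in range(n + 1)]
    R[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match_or_sub = R[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            R[i][j] = min(R[i - 1][j] + 1,   # deletion
                          match_or_sub,      # match (+0) or substitution (+1)
                          R[i][j - 1] + 1)   # insertion
    return 100.0 * R[n][m] / n
```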

Levenshtein distance: initialization.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞
  to           ∞
  recognize    ∞
  speech       ∞

The value at cell (i, j) is the minimum number of errors necessary to align the first i reference words with the first j hypothesis words.

Levenshtein distance.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞    0
  to           ∞
  recognize    ∞
  speech       ∞

R[1,1] = min(∞ + 1, 0, ∞ + 1) = 0 (match). We put a little arrow in place to indicate the choice; arrows are normally stored in a backtrace matrix.

Levenshtein distance.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞    0    1    2    3   4     5
  to           ∞
  recognize    ∞
  speech       ∞

We continue along for the first reference word; these are all insertion errors.

Levenshtein distance.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞    0    1    2    3   4     5
  to           ∞    1    0    1    2   3     4
  recognize    ∞
  speech       ∞

And on to the second reference word.

Levenshtein distance.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞    0    1    2    3   4     5
  to           ∞    1    0    1    2   3     4
  recognize    ∞    2    1    1    2   3     4
  speech       ∞

Since "recognize" ≠ "wreck", we have a substitution error. At some points, you have >1 possible path, as indicated; we can prioritize types of errors arbitrarily.

Levenshtein distance.

               -   how  to  wreck  a  nice  beach
  -            0    ∞    ∞    ∞    ∞   ∞     ∞
  how          ∞    0    1    2    3   4     5
  to           ∞    1    0    1    2   3     4
  recognize    ∞    2    1    1    2   3     4
  speech       ∞    3    2    2    2   3     4

And we finish the grid. There are R[n,m] = 4 word errors and a WER of 4/4 = 100%. WER can be greater than 100% (relative to the reference).

Levenshtein distance. If we want, we can backtrack through the completed grid using our arrows to find the proportion of substitution, deletion, and insertion errors.

Levenshtein distance. Here, we estimate 2 substitution errors and 2 insertion errors. Arrows can be encoded within a special backtrace matrix.
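
Running the wer() sketch from earlier on this example reproduces the grid's bottom-right cell:

```python
print(wer("how to recognize speech",
          "how to wreck a nice beach"))   # 4 errors / 4 words = 100.0
```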

Recent performance.

  Corpus                    Speech type    Lexicon size   ASR WER (%)   Human WER (%)
  Digits                    Spontaneous    10             0.3           0.009
  Phone directory           Read           1,000          3.6           0.1
  Wall Street Journal       Read           64,000         6.6           1
  Radio news                Mixed          64,000         13.5          -
  Switchboard (telephone)   Conversation   10,000         19.3          4