
IDIAP Research Report IDIAP-RR 03-52

Joint Decoding for Phoneme-Grapheme Continuous Speech Recognition

Mathew Magimai.-Doss (a, b), Samy Bengio (a), Hervé Bourlard (a, b)

October 2003, submitted for publication

(a) Dalle Molle Institute for Perceptual Artificial Intelligence, P.O. Box 592, CH-1920 Martigny, Valais, Switzerland
(b) Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland

phone: +41 27 721 77 11, fax: +41 27 721 77 12, e-mail: secretariat@idiap.ch, internet: http://www.idiap.ch

Abstract. Standard ASR systems typically use phonemes as subword units. Preliminary studies have shown that the performance of an ASR system can be improved by using graphemes as additional subword units. In this paper, we investigate such a system, where the word models are defined in terms of two different subword units, phonemes and graphemes. During training, models for both subword units are trained; during recognition, either both or only one subword unit is used. We study this system on a continuous speech recognition task in American English. Our studies show that grapheme information used along with phoneme information improves the performance of ASR.

1 Introduction

State-of-the-art HMM-based Automatic Speech Recognition (ASR) systems model $p(Q, X)$, the evolution over time frames $1, \dots, N$ of the hidden space $Q = \{q_1, \dots, q_n, \dots, q_N\}$ and the observed feature space $X = \{x_1, \dots, x_n, \dots, x_N\}$, also denoted $X_1^N$ [RJ93]. The states represent the subword units which describe the word models. Standard ASR systems typically use phonemes as subword units. In recent studies, good results have been reported using graphemes as subword units. Graphemes have certain advantages as subword units: the word models can be derived directly from the orthographic transcription of the word, and they are relatively noise free compared to word models based on phoneme units. For example, the word ZERO can be pronounced as /z/ /ih/ /r/ /ow/ or /z/ /iy/ /r/ /ow/, but its grapheme-based representation remains [Z][E][R][O]. At the same time there are certain drawbacks: acoustic feature vectors derived from the smoothed spectral envelope of the speech signal typically capture the characteristics of phonemes, and in languages such as English there is a weak correspondence between graphemes and phonemes, e.g., the grapheme [E] in the word ZERO is associated with the phoneme /ih/, whereas in the word EIGHT it is associated with the phoneme /ey/.

In [KN02], the orthographic transcriptions of the words are mapped onto acoustic HMM state models using phonetically motivated decision tree questions; for instance, a grapheme is assigned to a phonetic question if the grapheme is part of the phoneme. This, however, makes the modelling process complex. In [MDSBB03], the approach to modelling graphemes is similar to modelling auxiliary information [SMDB03, MDSB03]. The grapheme is treated as auxiliary information $L = \{l_1, \dots, l_n, \dots, l_N\}$, and the evolution of the hidden spaces $Q$ and $L$ over $X$ is modelled (i.e., $p(Q, X, L)$ instead of $p(Q, X)$). This can be seen as a system where the word models are described by two different subword units, phonemes and graphemes. During training, models are trained for both subword units by maximizing the likelihood of the training data. During recognition, decoding is performed using either one or both subword units. This system is similar to factorial HMMs [GJ97], where there are several chains of states, as opposed to the single chain in standard HMMs; each chain has its own states and dynamics, but the observation at any time depends upon the current state of all the chains.

In [MDSBB03], preliminary studies conducted on an isolated word recognition task showed that the performance of ASR could be improved by using phoneme and grapheme subword units together. In this paper, we further investigate the system proposed in [MDSBB03] in the context of a continuous speech recognition task. In Section 2, we briefly describe the system under investigation. Section 3 presents the experimental studies. Finally, in Section 4 we summarize and conclude with future work.

2 Modelling Phonemes and Graphemes

Standard ASR models $p(Q, X)$ as

$$p(Q, X) \approx \prod_{n=1}^{N} p(x_n \mid q_n)\, P(q_n \mid q_{n-1}) \qquad (1)$$

where $q_n \in Q = \{1, \dots, k, \dots, K\}$, the phoneme set. Similarly, for a system with $L$ as the hidden space we model

$$p(L, X) \approx \prod_{n=1}^{N} p(x_n \mid l_n)\, P(l_n \mid l_{n-1}) \qquad (2)$$

where $l_n \in L = \{1, \dots, r, \dots, R\}$, the grapheme set.
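As a minimal illustration of the factorization in (1), the following sketch scores a single phoneme state path in the log domain. The names are ours, and the initial-state term is folded into a start distribution, which the report does not spell out:

```python
import numpy as np

def path_log_likelihood(log_emis, log_trans, log_init, path):
    """Score one state path q_1..q_N under the factorization of Eq. (1).
    log_emis: (N, K) frame log-likelihoods, log_trans: (K, K) transitions,
    log_init: (K,) start distribution, path: length-N state index list."""
    ll = log_init[path[0]] + log_emis[0, path[0]]
    for n in range(1, len(path)):
        # one emission and one transition term per frame, as in Eq. (1)
        ll += log_trans[path[n - 1], path[n]] + log_emis[n, path[n]]
    return ll
```

The grapheme-only model (2) has exactly the same form, with the grapheme set of size $R$ in place of the phoneme set.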

We are interested in an ASR system where the word models are described by two different subword units, and hence in modelling the evolution of the two hidden spaces $Q$ and $L$ (instead of just one) and the observed space $X$ over time, i.e., $p(Q, L, X)$:

$$p(Q, L, X) \approx \prod_{n=1}^{N} p(x_n \mid q_n, l_n)\, P(q_n \mid q_{n-1})\, P(l_n \mid l_{n-1}) \qquad (3)$$

For such a system, the forward recurrence can be written as

$$\alpha(n, k, r) = p(q_n = k, l_n = r, X_1^n) = p(x_n \mid q_n = k, l_n = r) \sum_{i=1}^{K} P(q_n = k \mid q_{n-1} = i) \sum_{j=1}^{R} P(l_n = r \mid l_{n-1} = j)\, \alpha(n-1, i, j) \qquad (4)$$

During recognition, we decode in the joint phoneme-grapheme space. The Viterbi recursion that yields the best sequence in the $Q$ and $L$ spaces can be written as

$$V(n, k, r) = p(x_n \mid q_n = k, l_n = r)\, \max_i P(q_n = k \mid q_{n-1} = i)\, \max_j P(l_n = r \mid l_{n-1} = j)\, V(n-1, i, j) \qquad (5)$$

For a task such as the isolated word recognition studied in [MDSBB03], this decoding step can be reduced to two independent decoding steps in the $Q$ and $L$ spaces, respectively; for continuous speech recognition, however, we need to perform the 2D decoding described above.

We investigate the proposed system in the framework of hybrid HMM/ANN ASR [BM94]. In hybrid HMM/ANN ASR, a Multilayer Perceptron (MLP) is trained, say with $K$ output units for the system in (1). The likelihood estimate is replaced by a scaled-likelihood estimate computed from the output of the MLP (posterior estimates) and the priors of the output units (estimated by counting over the training data). For instance, $p(x_n \mid q_n)$ in (1) is replaced by its scaled-likelihood estimate $p_{sl}(x_n \mid q_n)$, estimated as [BM94]

$$p_{sl}(x_n \mid q_n) = \frac{p(x_n \mid q_n)}{p(x_n)} = \frac{P(q_n \mid x_n)}{P(q_n)} \qquad (6)$$

The emission distribution $p(x_n \mid q_n = k, l_n = r)$ of the phoneme-grapheme system can be estimated in different ways. For example, we can train an MLP with $K \cdot R$ output units and estimate the scaled likelihood as

$$\frac{p(x_n \mid q_n = k, l_n = r)}{p(x_n)} = \frac{P(q_n = k, l_n = r \mid x_n)}{P(q_n = k, l_n = r)} \qquad (7)$$

During training, such a system automatically models the association between the subword units in $Q$ and $L$. It has the added advantage that it can be reduced to a single-hidden-variable system by marginalizing out either hidden variable, yielding

$$\frac{p(x_n \mid q_n = k)}{p(x_n)} = \frac{\sum_{j=1}^{R} P(q_n = k, l_n = j \mid x_n)}{P(q_n = k)} \qquad (8)$$

$$\frac{p(x_n \mid l_n = r)}{p(x_n)} = \frac{\sum_{i=1}^{K} P(q_n = i, l_n = r \mid x_n)}{P(l_n = r)} \qquad (9)$$

and using the resulting scaled-likelihood estimates to decode according to (1) or (2), respectively. Yet another approach is to assume independence between the two hidden variables $Q$ and $L$, train two separate systems, one for phonemes and one for graphemes, and estimate the scaled likelihood as

$$\frac{p(x_n \mid q_n = k, l_n = r)}{p(x_n)} \approx p_{sl}(x_n \mid q_n = k)\, p_{sl}(x_n \mid l_n = r) \qquad (10)$$

In [MDSBB03], the phoneme-grapheme studies were conducted along the lines of (10). In this paper, we investigate phoneme-grapheme systems along the lines of both (7) and (10).
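A minimal sketch of the 2D Viterbi recursion (5) follows, assuming log-domain inputs and a flat initial distribution (the report does not specify initialization); variable names are ours. For clarity it performs the naive maximization over all predecessor pairs, costing O(N K^2 R^2), without any pruning:

```python
import numpy as np

def joint_viterbi(log_b, log_Aq, log_Al):
    """2D Viterbi decoding in the joint phoneme-grapheme space, Eq. (5).
    log_b[n, k, r] = log p(x_n | q_n=k, l_n=r); log_Aq[i, k] and
    log_Al[j, r] are the phoneme and grapheme log transition matrices."""
    N, K, R = log_b.shape
    V = np.full((N, K, R), -np.inf)
    back = np.zeros((N, K, R, 2), dtype=int)  # best predecessor (i, j)

    V[0] = log_b[0]                           # flat initial distribution assumed
    for n in range(1, N):
        for k in range(K):
            for r in range(R):
                # both maxes of Eq. (5) taken jointly over the (i, j) grid
                scores = (log_Aq[:, k][:, None] + log_Al[:, r][None, :]
                          + V[n - 1])
                i, j = np.unravel_index(np.argmax(scores), scores.shape)
                V[n, k, r] = log_b[n, k, r] + scores[i, j]
                back[n, k, r] = (i, j)

    # trace back the best joint (phoneme, grapheme) state path
    k, r = np.unravel_index(np.argmax(V[-1]), V[-1].shape)
    path = [(int(k), int(r))]
    for n in range(N - 1, 0, -1):
        k, r = back[n, k, r]
        path.append((int(k), int(r)))
    return path[::-1]
```

With (7), `log_b` would come from the joint MLP's posteriors and joint priors; with (10), it is simply the sum of the two per-stream scaled log-likelihoods.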

3 Experimental Studies

Standard ASR systems typically use phonemes as subword units. The lexicon of an ASR system contains the orthographic transcription of each word and its phonetic transcription. During decoding, standard ASR uses only the phonetic transcription, ignoring the orthographic transcription. In this paper, we are mainly interested in investigating the use of this orthographic information for automatic speech recognition.

We use the OGI Numbers database for a connected word recognition task [CFL94]. The training set contains 3233 utterances spoken by different speakers, the validation set contains 357 utterances, and the test set contains 1206 utterances. The vocabulary consists of 31 words, with a single pronunciation for each word.

The acoustic vector $x_n$ consists of PLP cepstral coefficients [Her90] extracted from the speech signal using a window of 25 ms with a shift of 12.5 ms, followed by cepstral mean subtraction. At each time frame, 13 PLP cepstral coefficients $c_0, \dots, c_{12}$ and their first-order and second-order derivatives are extracted, resulting in a 39-dimensional acoustic vector. All the MLPs trained in our studies take nine frames of input features (four frames each of left and right context) and have the same number of parameters.

There are 24 context-independent phonemes (including silence) associated with $Q$, each modelled by a single emitting state. We trained a phoneme baseline system (System P) via embedded Viterbi training [BM94] and performed recognition using the single pronunciation of each word. There are 19 context-independent grapheme subword units (including silence) associated with $L$, representing the characters in the orthographic transcriptions of the words; like the phonemes, each grapheme unit is modelled by a single emitting state. We trained a grapheme baseline system (System G) via embedded Viterbi training and performed recognition experiments using the orthographic transcriptions of the words. The performance of both baseline systems is given in Table 1. The phoneme baseline system performs significantly better than the grapheme baseline system.

Table 1: Performance of the phoneme and grapheme baseline systems, expressed in terms of Word Error Rate (WER).

  System    # of output units   WER
  System P  24                  9.6%
  System G  19                  17.8%

As suggested in Section 2, we study two approaches to modelling phoneme and grapheme subword units: (a) modelling the phoneme and grapheme subword units with a single MLP, and (b) modelling the phoneme and grapheme subword units with separate MLPs.

For (a), we trained an MLP with 24 × 19 = 456 output units (System PG). During training, at each iteration, we marginalize out the phoneme information as per (9) and perform Viterbi decoding according to (2) to obtain the segmentation in terms of graphemes. We performed recognition experiments by marginalizing out the grapheme subword units according to (8) and decoding according to (1), and similarly by marginalizing out the phoneme subword units according to (9) and decoding according to (2). The performances are given in Table 2. There is no improvement in the performance of the phoneme system, but there is a significant improvement in the performance of the grapheme system.
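As a concrete illustration of this marginalization, the following sketch (names ours) assumes the 456 joint posteriors have been reshaped to an (N, 24, 19) array and that the priors were estimated by counting state labels in the training alignment:

```python
import numpy as np

def marginal_scaled_likelihoods(post_joint, prior_q, prior_l):
    """Collapse joint MLP posteriors P(q_n=k, l_n=r | x_n) into single-stream
    scaled likelihoods.  post_joint: (N, K, R); prior_q: (K,); prior_l: (R,)."""
    psl_q = post_joint.sum(axis=2) / prior_q[None, :]  # Eq. (8): sum out graphemes
    psl_l = post_joint.sum(axis=1) / prior_l[None, :]  # Eq. (9): sum out phonemes
    return psl_q, psl_l  # row n gives p(x_n | unit) / p(x_n) for each unit
```

The resulting `psl_q` and `psl_l` arrays can then be decoded with the standard single-chain recursions (1) and (2), respectively.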
We also studied systems where the phoneme is reduced to its broad-phonetic-class representation. By broad-phonetic-class, we refer to phonetic features such as manner, place, and height. In our studies, we use phonetic feature values similar to those used in [Hos00, Chapter 7] and [MDSBB03]. The mapping between phonemes and the values of the broad-phonetic-class can be obtained from the International Phonetic Alphabet (IPA) chart.

Table 2: Performance of the phoneme-only and grapheme-only systems obtained by marginalizing (hiding) the grapheme and phoneme units, respectively, at the output of the phoneme-grapheme MLP (System PG). The performance is expressed in terms of Word Error Rate (WER).

  Subword Unit   Subword Unit Hidden   WER
  Phoneme        Grapheme              9.6%
  Grapheme       Phoneme               14.5%

We studied three different grapheme-broad-phonetic-class systems corresponding to the different broad-phonetic-classes: (1) manner (System GBM), (2) place (System GBP), and (3) height (System GBH). We train acoustic models for both the grapheme units and the values of the broad-phonetic-class by training a single MLP via embedded Viterbi training, similar to the phoneme-grapheme MLP. We performed two recognition studies:

1. Marginalizing out the broad-phonetic-class according to (9) and decoding using the grapheme transcription alone.
2. Performing 2D decoding in the joint grapheme and broad-phonetic-class space according to (5).

Table 3 presents the results of this study. All the grapheme systems perform significantly better than the grapheme baseline system, and the grapheme-broad-phonetic-class systems perform significantly better than all the grapheme-only systems.

Table 3: Performance of the grapheme-broad-phonetic-class based systems, expressed in terms of Word Error Rate (WER). Column 3 gives the results of the grapheme-only systems (Graph); column 4 gives those of the grapheme-broad-phonetic-class systems (GB).

  System      Broad-phonetic class   WER (Graph)   WER (GB)
  System G    -                      17.8%         -
  System GBM  Manner                 15.3%         13.1%
  System GBP  Place                  14.4%         11.9%
  System GBH  Height                 15.0%         11.7%

Next, we study the performance of the phoneme-grapheme system. As mentioned earlier, we study two different kinds of system:

(a) Modelling the phoneme and grapheme subword units with a single MLP. For such a system, the scaled likelihood is estimated as per (7) from the posterior outputs of the MLP, and decoding is performed according to (5) (System PG).

(b) Assuming independence between the phoneme units and the grapheme units, i.e., modelling them with separate MLPs. The scaled likelihood $p(x_n \mid q_n, l_n)$ is then obtained from the scaled-likelihood estimates of the phoneme and grapheme units according to (10), and decoding is performed according to (5). In [MDSBB03], the best results were obtained by weighting the phoneme and grapheme log-probability streams differently; in this paper, however, we estimate $p(x_n \mid q_n, l_n)$ exactly according to (10), as sketched below.
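To make the combination in (10) concrete, the following self-contained sketch builds the joint emission scores from two per-stream outputs and hands them to the 2D decoder. The random draws are stand-ins for the real per-stream scaled log-likelihoods, and `joint_viterbi` refers to the sketch given after Section 2; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, R = 50, 24, 19                                    # frames, phonemes, graphemes

# Stand-ins for the per-stream scaled log-likelihoods of Eq. (6),
# one from the phoneme MLP and one from the grapheme MLP.
log_psl_q = np.log(rng.dirichlet(np.ones(K), size=N))   # (N, K)
log_psl_l = np.log(rng.dirichlet(np.ones(R), size=N))   # (N, R)

# Eq. (10): under the independence assumption, the joint scaled
# log-likelihood is simply the sum of the two streams.
log_b = log_psl_q[:, :, None] + log_psl_l[:, None, :]   # (N, K, R)

# log_b can now be decoded with the 2D Viterbi of Eq. (5),
# e.g. best_path = joint_viterbi(log_b, log_Aq, log_Al)
```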

The results of this study are given in Table 4. The first row gives the performance of the phoneme baseline system. The second row gives the performance of System PG; this system performs slightly worse than the baseline system. The remaining rows give the results for phoneme-grapheme systems where the phoneme units and grapheme units are modelled by separate MLPs; these systems perform better than the baseline system.

Table 4: Performance of the phoneme-grapheme systems, expressed in terms of Word Error Rate (WER). Columns 1 and 2 indicate the MLPs from which the phoneme and grapheme scaled-likelihood estimates, respectively, were obtained for the systems where independence between phoneme units and grapheme units is assumed.

  Phoneme    Grapheme    WER
  System P   -           9.6%
  System PG  System PG   9.9%
  System P   System PG   9.0%
  System P   System GBM  9.0%
  System P   System GBP  8.9%
  System P   System GBH  9.2%

4 Conclusion and Future Work

In this paper, we investigated a continuous speech recognizer that uses both phonemes and graphemes as subword units. ASR using just graphemes as subword units yields acceptable performance, which can be further improved by introducing phonetic knowledge into it. We studied two different phoneme-grapheme systems. The results obtained from the phoneme-grapheme system studies suggest that modelling phoneme and grapheme subword units, and using them together during recognition, can help improve the performance of ASR. This remains to be studied further on large vocabulary continuous speech recognition tasks.

In this paper, our primary focus was on using the grapheme information at the model level. In future work, we intend to investigate combining hypotheses generated by separate phoneme and grapheme recognizers. In languages such as English, there is a weak correspondence between graphemes and phonemes, so it would also be worth investigating this approach for languages such as German, which have a strong correspondence between graphemes and phonemes.

5 Acknowledgment

This work was supported by the Swiss National Science Foundation (NSF) under grant MULTI (2000-068231.02/1) and by the Swiss National Center of Competence in Research (NCCR) on Interactive Multimodal Information Management (IM)2. The NCCR is managed by the Swiss NSF on behalf of the federal authorities. The authors would like to thank Mark Barnard, IDIAP, for his help with phonetic knowledge.

References

[BM94] H. Bourlard and N. Morgan. Connectionist Speech Recognition: A Hybrid Approach. Kluwer Academic Publishers, 1994.

[CFL94] R. A. Cole, M. Fanty, and T. Lander. Telephone speech corpus development at CSLU. In ICSLP, September 1994.

[GJ97] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245-273, 1997.

[Her90] H. Hermansky. Perceptual linear predictive (PLP) analysis of speech. Journal of the Acoustical Society of America, 87(4):1738-1752, 1990.

[Hos00] J.-P. Hosom. Automatic Time Alignment of Phonemes Using Acoustic-Phonetic Information. PhD dissertation, CSLU, OGI, USA, 2000.

[KN02] S. Kanthak and H. Ney. Context-dependent acoustic modeling using graphemes for large vocabulary speech recognition. In ICASSP, pages 845-848, 2002.

[MDSB03] M. Magimai.-Doss, T. A. Stephenson, and H. Bourlard. Using pitch frequency information in speech recognition. In Eurospeech, pages 2525-2528, September 2003.

[MDSBB03] M. Magimai.-Doss, T. A. Stephenson, H. Bourlard, and S. Bengio. Phoneme-grapheme based automatic speech recognition system. In ASRU, December 2003.

[RJ93] L. R. Rabiner and B.-H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, New Jersey, 1993.

[SMDB03] T. A. Stephenson, M. Magimai.-Doss, and H. Bourlard. Speech recognition with auxiliary information. To appear in IEEE Transactions on Speech and Audio Processing, 2003.