Domain Adaptation of Language Model for Speech Recognition


Domain Adaptation of Language Model for Speech Recognition

A Confirmation Report Submitted to the School of Computer Science and Engineering of the Nanyang Technological University by Yerbolat Khassanov for the Confirmation for Admission to the Degree of Doctor of Philosophy

January 7, 2017

Abstract

Acknowledgments

I would like to express my sincere thanks and appreciation to my supervisor Dr. Chng Eng Siong for his invaluable guidance, support and suggestions. His knowledge, suggestions, and discussions helped me become a capable researcher. His encouragement also helped me overcome the difficulties encountered in my research. I also want to thank my colleagues in the Rolls-Royce@NTU Corporate Lab for their generous help. I want to thank Chong Tze Yuang for his generous help in writing my first paper and preparing presentation slides. I also want to thank Benjamin Bigot for introducing me to speech recognition systems. I am very grateful to the members of our RT1.1 team. It is a pleasure to collaborate with my teammates, Kyaw Zin Tun and San Linn. Last but not least, I want to thank my family in Kazakhstan for their constant love and encouragement.

Contents

Abstract
Acknowledgments
List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Report Organization

2 Introduction to Language Model Adaptation for ASR
  2.1 Background
    2.1.1 Automatic Speech Recognition
    2.1.2 Statistical Language Models
    2.1.3 Domain Mismatch Problem
  2.2 General LM Adaptation Framework
    2.2.1 Supervised vs. Unsupervised
    2.2.2 Cross-domain vs. Within-domain
    2.2.3 Re-decoding vs. N-best and Lattice Re-scoring
  2.3 Review of Unsupervised LM Domain Adaptation Techniques
    2.3.1 Cache-based
    2.3.2 Topic-mixture
    2.3.3 Query-based
  2.4 Summary

3 Review of Data Selection
  3.1 Overview
    3.1.1 Data availability
    3.1.2 Application scenarios
    3.1.3 Domain adaptation by data selection
  3.2 Data Selection Techniques
  3.3 Applications
  3.4 Summary

4 LM Adaptation by Data Selection for ASR
  4.1 Proposed Framework
    4.1.1 Overview
    4.1.2 Data Selection
  4.2 Experiment and Discussion
    4.2.1 Data
    4.2.2 The ASR System
    4.2.3 Experiment Setup and Results
  4.3 Summary

5 Conclusions and Future Work
  5.1 Contributions
  5.2 Future Directions
    5.2.1 Extracting Richer Linguistic Information
    5.2.2 Domain Tracking

Publication
References

List of Figures

2.1 Architecture of automatic speech recognition system
2.2 General LM adaptation framework
2.3 Architecture of cache-based adaptation techniques for ASR
2.4 Architecture of topic-mixture based adaptation techniques for ASR
2.5 Architecture of query-based adaptation techniques for ASR
3.1 Data selection framework
4.1 Proposed LM adaptation framework based on data selection
4.2 WER results obtained by the proposed LM adaptation framework
4.3 Perplexity results of target domain LMs computed on reference data
4.4 WER results for 2-gram feature
4.5 WER results for BOW feature

List of Tables

4.1 TED-LIUM corpus characteristics
4.2 TED-LIUM corpus test set details

List of Abbreviations

AM     Acoustic Model
ASR    Automatic Speech Recognition
BOW    Bag-of-Words
CE     Cross Entropy
CED    Cross Entropy Difference
DNN    Deep Neural Network
fMLLR  Feature-space Maximum Likelihood Linear Regression
GMM    Gaussian Mixture Model
HMM    Hidden Markov Model
IDF    Inverse Document Frequency
KN     Kneser-Ney
LDA    Latent Dirichlet Allocation
LM     Language Model
LSA    Latent Semantic Analysis
LVCSR  Large Vocabulary Continuous Speech Recognition
MFCC   Mel-Frequency Cepstral Coefficient
ML     Maximum Likelihood
MLLT   Maximum Likelihood Linear Transform
MT     Machine Translation
NER    Named-Entity Recognition
NLP    Natural Language Processing
POS    Part-of-Speech
PPL    Perplexity
RBM    Restricted Boltzmann Machine
RNN    Recurrent Neural Network
SLM    Statistical Language Model
SMT    Statistical Machine Translation
sMBR   State-level Minimum Bayes Risk
TF     Term Frequency
TM     Translation Model
WER    Word Error Rate
WFST   Weighted Finite State Transducer
WWW    World Wide Web

Chapter 1

Introduction

1.1 Motivation

A brief history of speech recognition systems. Designing a machine that can mimic complex human behaviors, such as understanding spoken language and responding accordingly, has been envisioned since long before the advent of computers. A major step towards fulfilling this vision is the development of automatic speech recognition (ASR) systems, which have attracted a substantial amount of effort over the last few decades [1]. Given the complexity of human language, speech recognition technology evolved gradually. The first speech recognition systems focused on simple tasks such as recognizing numbers. For example, in 1952, Bell Laboratories designed Audrey [2], the first known and documented speech recognizer. Audrey could recognize ten digits spoken in isolation by a single speaker with an accuracy of 97-99%. In 1962, IBM demonstrated Shoebox (www-03.ibm.com/ibm/history/exhibits/specialprod1/specialprod1_7.html), a system which could recognize sixteen words, including ten digits and six arithmetic operators. Over the next decade, speech recognition technology advanced progressively from simple machines that could recognize a few words to sophisticated systems that could recognize speech with a large vocabulary. Notably, in 1971, DARPA initiated the Speech Understanding Research program, which was responsible for Carnegie Mellon's Harpy [3] system. Harpy could recognize speech using a vocabulary of 1,011 words, approximately the vocabulary of an average three-year-old.

In these large vocabulary systems, however, the complexity of the task had considerably increased, particularly the confusion attributed to homophones. For example, the

words buy, bye and by comprise the same phoneme sequence B AY (based on the ARPAbet phoneme set, http://www.speech.cs.cmu.edu/cgi-bin/cmudict). Distinguishing such words was an infeasible task for the early speech recognition systems, which relied mainly on acoustic information. Thus, the recognition capability of large vocabulary systems was limited.

Introduction of language models in ASR. The use of acoustic information alone proved insufficient to achieve human-like performance; other sources of knowledge were required. Therefore, in 1975, Jelinek et al. [4] proposed to incorporate the grammatical structure of natural language into the speech recognizer. The grammatical structure was encoded into a language model (LM) based on statistical principles. The function of the statistical LM was to encapsulate the syntactic, semantic, and pragmatic properties of the language considered. In the speech recognition system, the encapsulated knowledge was used to constrain the search in the decoder by limiting the number of possible words that can follow at any one point. The consequence was faster search and higher recognition accuracy. Since then, statistical LMs have become an indispensable part of large vocabulary speech recognition systems. We will provide a thorough explanation of the state-of-the-art statistical LMs in chapter 2.

The domain mismatch problem. Statistical LMs retain encapsulated knowledge in the form of probability distributions over linguistic units (e.g. words, sentences) learned from textual training data [5]. It is desirable for this training data to possess characteristics similar to the input utterances submitted to the ASR system, for example, covering similar topics, speaking styles or both. Otherwise, the distribution learned by the LM might mismatch the target domain distribution of the input utterances. As a result, the ASR output will be corrupted [5].
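The homophone example above (buy, bye and by, all pronounced B AY) can be checked mechanically with a phoneme-level edit distance. The following is a minimal sketch; the cat/bat entries are illustrative ARPAbet lookups added here for demonstration and are not drawn from this report's data.

```python
# Minimal Levenshtein (edit) distance over phoneme sequences.
def edit_distance(a, b):
    # Classic dynamic-programming table: d[i][j] holds the distance
    # between the first i phonemes of a and the first j phonemes of b.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(a)][len(b)]

# "buy", "bye" and "by" all map to the same sequence B AY:
print(edit_distance(["B", "AY"], ["B", "AY"]))         # 0
print(edit_distance(["K", "AE", "T"], ["B", "AE", "T"]))  # 1 ("cat" vs "bat")
```

A distance of zero means the words are acoustically indistinguishable, which is exactly why acoustic information alone cannot resolve them.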
For example, a LM trained on industry domain data but applied to input utterances from the math domain might cause the ASR to recognize COFACTOR IS as COW FACTORIES (the Hamming distance between the phoneme sequences of these phrases is 1). Therefore, for reliable performance of ASR systems, the distribution learned by the LM should fit the target domain.

In ASR systems, however, maintaining a LM that fits the distribution of the input test data is a challenging task, specifically in cases where the input utterances cover several

domains changing over time, such as broadcast news, talk shows and documentary programs. A trivial solution for dealing with such heterogeneous inputs is to assemble training data from various domains in order to construct a generic LM. The generic LM enables the ASR system to handle input utterances from any domain. While the generic LM offers good coverage, the recognition performance of the ASR system will still be sub-optimal due to the distribution mismatch between generic and specific domain data. In particular, in generic ASR systems, commonly used terms will push aside domain-specific terms (e.g. legal, technical and medical domain terms). For example, the technical domain term ipad might be misrecognized as a combination of two commonly used terms, eye and pad. Domain-specific terms constitute an essential part of utterances and contribute to their context and meaning. Therefore, the correct recognition of such terms is crucial. In this thesis, we will focus on adapting a generic LM to better fit specific domain data. A comprehensive explanation of the domain mismatch problem will be provided in chapter 2.

Extracting target domain information. To perform LM adaptation, information about the target domain is required, such as a list of keywords, a topic of discourse or a collection of in-domain documents. This target domain information can be obtained in a supervised or an unsupervised manner. In the supervised manner, the target domain information is manually generated by domain experts, for example, by analyzing the initial ASR output (word lattice or 1-best) produced with the generic LM. In the unsupervised manner, the domain information is generated automatically, for example, derived from the initial ASR output by employing information retrieval techniques. While the supervised approach provides reliable and adequate information, it is time-consuming and costly.
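As a rough illustration of the unsupervised route, candidate domain keywords can be extracted from a 1-best transcript with a TF-IDF score against a background collection. The transcript and background sentences below are invented for illustration, and this sketch is not the specific procedure used in this report.

```python
import math
from collections import Counter

# Toy TF-IDF keyword scoring of a 1-best ASR transcript against a
# small background collection (all text is made up).
background = [
    "the weather today is sunny with light winds".split(),
    "the market rallied as investors bought shares".split(),
    "the team won the match in extra time".split(),
]
transcript = "the cofactor of the matrix determines the determinant".split()

def tfidf(word, doc, docs):
    tf = doc.count(word) / len(doc)
    df = sum(1 for d in docs if word in d)
    # Words appearing in every background document get idf 0,
    # so ubiquitous function words are suppressed.
    idf = math.log((1 + len(docs)) / (1 + df))
    return tf * idf

scores = {w: tfidf(w, transcript, background) for w in set(transcript)}
# Content words absent from the background (e.g. "cofactor", "matrix")
# outrank words shared with every background document (e.g. "the").
print(sorted(scores, key=scores.get, reverse=True)[:3])
```

The highest-scoring words form a crude domain-keyword list that could then seed an adaptation technique such as the query-based methods reviewed in chapter 2.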
In this work, we will extract domain information in an unsupervised manner from the ASR output. Although the ASR output is a valuable source of target domain information, it is prone to errors caused by the recognition process. These recognition errors might corrupt the domain information present in the ASR output. Nevertheless, by simulating different levels of word error rate (WER) in the ASR output, Clarkson and Robinson [6] showed that transcripts with high WER can still benefit the adaptation process. In another work,

Lecorvé et al. [7] used only the incorrectly recognized parts of the ASR output to perform adaptation. Surprisingly, they obtained more than 10% relative perplexity improvement. They concluded that some misrecognized words are still in-domain words that help capture appropriate domain information, while the others are harmless. Thus, despite the presence of errors, the ASR output contains valuable information which can be effectively utilized by adaptation techniques.

A brief review of existing adaptation techniques. Since the introduction of statistical LMs, several LM domain adaptation techniques have been proposed to alleviate the effect of distribution mismatch [8]. In practice, LMs can be adapted at two different stages of the recognition process: online and offline. In online adaptation, the LM is adapted during the decoding process. However, the decoding process itself is a highly complex mechanism involving intensive computation, which makes online LM adaptation impractical. In offline adaptation, on the other hand, the generic LM is first applied to produce an initial ASR output (word lattice). The produced ASR output is then used to generate target domain information, in a supervised or unsupervised manner, which is employed to adapt the generic LM offline. Lastly, the adapted LM is applied to re-decode the input utterances, or to re-score the word lattice (or N-best list). Given the complexity of the decoding process, re-decoding the input utterances is a tedious task. Hence, only a few LM types for which fast decoding algorithms are available are eligible for this task, such as backoff n-gram models [10, 11]. The backoff n-gram model is the predominant choice for decoding in state-of-the-art ASR systems due to its effectiveness and simplicity [12] (our generic LM is a backoff n-gram model). More complex models, such as neural network based LMs [13], are usually employed to re-score the word lattices [9].
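The offline re-scoring step described above can be sketched in a few lines: each first-pass hypothesis keeps its acoustic score, an adapted LM supplies new LM scores, and the two are combined with an LM scale factor. All numbers and the LM stand-in below are invented for illustration; the hypothesis pair echoes the COFACTOR IS / COW FACTORIES example from earlier in this chapter.

```python
# Toy N-best re-scoring with an adapted LM.
nbest = [
    ("COW FACTORIES", -120.0),  # (hypothesis, acoustic log-score)
    ("COFACTOR IS",   -121.0),
]

def adapted_lm_logscore(hyp):
    # Stand-in for a real LM query; here a hypothetical math-domain
    # adapted LM strongly prefers "COFACTOR IS".
    return {"COW FACTORIES": -25.0, "COFACTOR IS": -10.0}[hyp]

LM_SCALE = 1.0  # relative weight of the LM score
rescored = sorted(nbest,
                  key=lambda h: h[1] + LM_SCALE * adapted_lm_logscore(h[0]),
                  reverse=True)
print(rescored[0][0])  # prints "COFACTOR IS": the adapted LM flips the ranking
```

This is exactly why re-scoring is cheap relative to re-decoding: only the hypotheses already in the list are re-ranked, which is also its weakness when the correct hypothesis never made it into the lattice.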
While the complex models are expected to have greater predictive power, the efficacy of re-scoring is constrained by the quality of the generated word lattice, which contains only a subset of all possible hypotheses. For example, an inadequate LM used during the decoding stage might discard correct hypotheses, and as a result a deficient word lattice will be produced [14]. Hence, in this work, we will focus on adapting backoff n-gram models, which can be effectively applied to re-decode the input utterances.
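The backoff mechanism of the n-gram models chosen above can be illustrated in miniature with a "stupid backoff"-style bigram scorer: use the bigram relative frequency when the bigram was seen, otherwise fall back to a penalized unigram estimate. Production systems use properly smoothed models such as Kneser-Ney [10, 11]; the toy corpus and constant below are invented purely to show the mechanism.

```python
from collections import Counter

# Toy backoff bigram scorer over a made-up corpus.
corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())
ALPHA = 0.4  # backoff penalty (a fixed constant in "stupid backoff")

def score(prev, word):
    if (prev, word) in bigrams:
        # Seen bigram: relative frequency given the previous word.
        return bigrams[(prev, word)] / unigrams[prev]
    # Unseen bigram: back off to a scaled unigram probability.
    return ALPHA * unigrams[word] / total

print(score("the", "cat"))  # seen bigram: count-based estimate
print(score("mat", "ran"))  # unseen bigram: penalized unigram estimate
```

The key property is that no word sequence receives zero probability, which keeps decoding well-defined even for word pairs never observed in training.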

The three popular backoff n-gram model adaptation techniques applied to ASR systems are cache-based, topic-mixture and query-based. These techniques employ domain-specific information to tune the distribution of the generic LM so that it better matches the target domain. For example, the cache-based techniques [6, 15-18] are based on the hypothesis that a word used in the recent past is more likely to be used again. Hence, the probabilities of recently recognized words are increased within the generic LM. In the topic-mixture techniques [6, 18-21], the generic LM is decomposed into several sub-domain (or sub-topic) LMs interpolated together. Here, the domain of the final interpolated LM can be controlled by tuning the interpolation weights of the sub-topic LMs, so the ASR output is used to find the closest sub-topic LMs and increase their weights. The query-based techniques [7, 22-24], on the other hand, use the ASR output to generate queries which are submitted to external sources, such as the world wide web (WWW), to retrieve similar data. The retrieved data is then used to update the parameters of the generic LM, for example, by training a new pseudo in-domain LM from the retrieved data and interpolating it with the generic LM. These LM adaptation techniques have been shown to be effective in improving the recognition performance of ASR systems. A complete review of these techniques will be given in chapter 2.

The proposed adaptation approach based on data selection. The existing adaptation techniques typically adjust the distribution learned by the generic LM to match the target domain distribution. This adjustment is performed by directly changing the parameters of the generic LM, for instance, by increasing or decreasing the probabilities of individual words (or n-grams). Changing the parameters of a LM might help to achieve the desired distribution; however, the adapted LM most probably won't represent a distribution corresponding to natural text produced by humans.
Consequently, the encapsulated knowledge might be corrupted. Thus, in this work, rather than directly updating the parameters of the generic LM, we will examine other adaptation methods that preserve the natural distribution of linguistic units. In particular, we propose to manipulate the training data used to build the generic LM. As was mentioned previously, the training data consist of text assembled from various domains. Hence, we will employ data selection techniques [25] to select a subset of

training data similar to the ASR output (a broad overview of data selection techniques will be presented in chapter 3). As a result, out-of-domain sentences will be discarded, leaving only in-domain sentences. The in-domain sentences are then used to train a new LM which is expected to better match the target domain distribution. In addition, the new LM represents an adapted version of the generic LM, since it is built from the same, but pruned, data. More importantly, the adapted LM produced in this way will encapsulate appropriate linguistic knowledge which complies with the regularities of natural language. To evaluate the effectiveness of the proposed approach, we conducted several experiments on the TED-LIUM speech corpus, which will be described in chapter 4.

1.2 Contributions

In this thesis, we proposed an unsupervised LM adaptation framework to address the domain mismatch problem inherent in generic ASR systems. The proposed framework is based on a data selection technique which customizes a generic background corpus to produce a domain-specific LM. The novelties of the proposed framework are listed below:

1) Existing LM adaptation techniques aim to tune the parameters of the generic model to shift its focus towards the target domain. Different from them, the proposed approach employs the ASR output and data selection techniques to perform adaptation at the data level. This work shows that a LM adapted in this way possesses a strong discriminative ability that results in substantial WER reduction.

2) Although the generic background corpus is sufficiently large and contains data from various domains, several adaptation techniques (e.g. query-based) still require in-domain data retrieved from external sources such as the WWW. Unlike these approaches, our method efficiently utilizes the available background corpus by intelligently selecting in-domain sentences.
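One widely used criterion for this kind of similarity-based sentence selection is the cross-entropy difference of Moore and Lewis; it is sketched below as an illustration only, without claiming it is the exact criterion used in chapter 4. Unigram LMs with add-one smoothing keep the sketch tiny, and all text is invented.

```python
import math
from collections import Counter

# Moore-Lewis-style selection sketch: rank background sentences by the
# difference between their per-word cross-entropy under an in-domain LM
# (here built from the ASR output) and under a generic background LM.
def unigram_lm(sentences):
    counts = Counter(w for s in sentences for w in s)
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)  # add-one smoothing

asr_output = ["the cofactor of the matrix".split()]
background = [
    "the determinant of the matrix is zero".split(),
    "the striker scored a late goal".split(),
]
p_in = unigram_lm(asr_output)
p_gen = unigram_lm(background)

def ced(sentence):
    # Lower cross-entropy difference = closer to the target domain.
    h_in = -sum(math.log(p_in(w)) for w in sentence) / len(sentence)
    h_gen = -sum(math.log(p_gen(w)) for w in sentence) / len(sentence)
    return h_in - h_gen

selected = min(background, key=ced)
print(" ".join(selected))  # the matrix sentence ranks as more in-domain
```

In practice a threshold on the score, rather than a single minimum, determines how much of the background corpus is kept for training the adapted LM.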
Hence, the proposed method doesn't rely on any external source, which might be unavailable for some tasks involving private corporate or military domains.

Experiments performed on the TED-LIUM speech corpus show that the proposed adaptation framework can produce a domain-specific LM that achieves up to 10% relative WER reduction. When we adapted the LM to a more specific domain, a WER reduction of up to 12% was observed. Moreover, we compared our approach against a standard adaptation

method based on linear interpolation, which directly updates the parameters of a LM, and observed better WER. The work on unsupervised LM adaptation by data selection was accepted by the ACIIDS conference [26].

1.3 Report Organization

The report is organized as follows. In Chapter 2, we provide background information on ASR systems, statistical LMs, and the domain mismatch problem. We describe the general LM adaptation framework, followed by a review of popular LM adaptation techniques applied to ASR systems. In Chapter 3, we provide an overview of the current state-of-the-art data selection techniques, including the linguistic features used to represent data and the similarity metrics. We also briefly review other natural language processing (NLP) applications where data selection has been employed. In Chapter 4, we propose the data selection based unsupervised LM adaptation framework for ASR systems. We explain the experiment setup and data, and discuss the obtained results. Chapter 5 concludes the report and lists future research directions.