The USTC System for Blizzard Challenge 2012

Zhen-Hua Ling, Xian-Jun Xia, Yang Song, Chen-Yu Yang, Ling-Hui Chen, Li-Rong Dai
iFlytek Speech Lab, University of Science and Technology of China, Hefei, P.R. China
zhling@ustc.edu

Abstract

This paper introduces the speech synthesis system developed by USTC for Blizzard Challenge 2012. An audiobook speech corpus is adopted as the training data for system construction this year. Similar to our previous systems, the hidden Markov model (HMM) based unit selection and waveform concatenation approach is followed to develop our speech synthesis system using this corpus. Considering the inconsistent recording conditions and the narrator's expressiveness within the corpus, we add channel- and expressiveness-related labels to each sentence, besides the conventional segmental and prosodic labels, for system construction. The evaluation results of Blizzard Challenge 2012 show that our system performs well in all evaluation tests, which proves the effectiveness of the HMM-based unit selection approach in coping with a non-standard speech synthesis corpus.

Index Terms: speech synthesis, unit selection, hidden Markov model

1. Introduction

USTC has been attending the Blizzard Challenge since 2006. In 2006, we submitted an HMM-based statistical parametric speech synthesis system [1]. Since Blizzard Challenge 2007 [2], we have adopted the HMM-based unit selection and waveform concatenation approach [3] to build our systems, in order to achieve better similarity and naturalness of synthetic speech. In this method, the optimal candidate phone sequence is searched out from the speech database by optimizing a statistical criterion derived from a group of acoustic models. The criterion is a combination of maximum likelihood and minimum Kullback-Leibler divergence (KLD). The acoustic models are trained using different acoustic features, such as frame-level spectral and F0 features, phone durations, and so on.

Furthermore, some new techniques have been developed and evaluated during the system construction of the following years. In Blizzard Challenge 2009, cross-validation (CV) and the minimum generation error (MGE) criterion [4] were introduced to optimize the scale of the decision tree for model clustering automatically. State-sized concatenation units and multi-Gaussian state probability density functions (PDFs) were also employed during system construction [5]. In Blizzard Challenge 2010, a covariance tying technique was applied to improve the efficiency and reduce the footprint of the acoustic models [6], and a syllable-level F0 model was introduced to evaluate the pitch combination of two adjacent syllables [7]. In Blizzard Challenge 2011, a maximum log likelihood ratio (LLR) criterion was developed to replace the conventional maximum likelihood criterion for unit selection [8].

The same HMM-based unit selection and waveform concatenation approach is followed to build our system for Blizzard Challenge 2012. Due to the limited preparation time, we construct our system using a framework similar to that of Blizzard Challenge 2007. The difference is that an extra syllable-level F0 model [7] is added. Because the speech corpus for system construction is composed of audiobook recordings with automatic transcriptions, we perform sentence selection based on the confidence values from speech recognition and add channel- and expressiveness-related labels to each sentence for better context-dependent model training. The evaluation results of Blizzard Challenge 2012 prove the effectiveness of our HMM-based unit selection approach in dealing with such a non-standard speech synthesis corpus.

This paper is organized as follows. Section 2 introduces the methods used for system construction.
In Section 3, the evaluation results of our system in Blizzard Challenge 2012 are shown and discussed. Conclusions are drawn in Section 4.

2. Methods

2.1. HMM-based unit selection method

2.1.1. Model training

At the training stage, we first choose a group of acoustic features that can be used to evaluate the naturalness of synthetic speech. Let M denote the number of chosen features. The task of model training is to estimate a set of context-dependent statistical models {λ_1, ..., λ_M} for these features. In our system for Blizzard Challenge 2012, the phone is adopted as the basic segment for unit selection, and six models are trained: a spectrum model, an F0 model, a phone duration model, a concatenating spectrum model, a concatenating F0 model, and a syllable-level F0 model (summarized in the sketch below).
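For readability, the model inventory just listed can be summarized as a small data structure. The following sketch is purely illustrative; the class and field names are assumptions for presentation and are not part of the USTC system:

```python
from dataclasses import dataclass
from typing import List

# Illustrative summary of the six context-dependent models described in this section.
@dataclass
class AcousticModel:
    name: str
    describes: str  # which acoustic features the model covers
    level: str      # the unit the model is attached to

MODEL_SET: List[AcousticModel] = [
    AcousticModel("spectrum", "frame-level spectral parameters (continuous HMM)", "frame"),
    AcousticModel("f0", "frame-level F0 (multi-space probability distribution)", "frame"),
    AcousticModel("phone_duration", "number of frames within a phone", "phone"),
    AcousticModel("concat_spectrum", "spectral transitions at phone boundaries", "phone boundary"),
    AcousticModel("concat_f0", "F0 transitions at phone boundaries", "phone boundary"),
    AcousticModel("syllable_f0", "F0 of the vowels of two adjacent syllables", "syllable pair"),
]

M = len(MODEL_SET)  # the M models that enter the selection criterion of Section 2.1.2
```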

The spectrum model and the F0 model are used to model the frame-level spectral and F0 features. The phone duration model represents the distribution of the number of frames within a phone. The concatenating spectrum and F0 models describe the distributions of the spectral and F0 transitions at phone boundaries [2]. The syllable-level F0 model is trained using the F0 features extracted from the vowels of two adjacent syllables [7].

Based on the spectral and F0 parameters extracted for each frame of the training database, HMMs are estimated under the maximum likelihood criterion to obtain the spectrum model and the F0 model, where the spectrum is modelled by a continuous probability distribution and the F0 is modelled by a multi-space probability distribution (MSD) [9]. Then we use the state alignment results given by the trained HMMs to train the phone duration model, the concatenating spectrum model, the concatenating F0 model, and the syllable-level F0 model respectively [2, 7]. In order to represent the effects of context features on the distributions of the acoustic features, all the models are trained context-dependently. The decision-tree-based model clustering technique [10] is applied to deal with data-sparsity problems and to predict, at the synthesis stage, the model parameters for context features that do not exist in the training set.

2.1.2. Unit selection

Assume the utterance to be synthesized consists of N phones and has context features C, which are given by text analysis on the input sentence. A candidate sequence of phone-sized units to synthesize this utterance is written as U = {u_1, u_2, ..., u_N}. Then, the optimal sequence U* is searched out from the database under the statistical criterion

U^* = \arg\max_{U} \sum_{m=1}^{M} w_m \left[ \log P_{\lambda_m}(X(U,m) \mid C) - w_{\mathrm{KLD}} \, D_{\lambda_m}(C(U), C) \right]    (1)

where X(U, m) extracts the acoustic features corresponding to the m-th model from the unit sequence U; C(U) denotes the context features of the unit sequence U; P_{\lambda_m}(\cdot) and D_{\lambda_m}(\cdot) represent the likelihood and KLD calculation functions respectively; and w_m and w_KLD denote the weights for the m-th model and for the KLD component in the criterion (only the KLDs of the spectrum model, the F0 model, and the phone duration model are considered in our implementation). Furthermore, we can rewrite (1) into the conventional form of a sum of target cost and concatenation cost, as described in [7]. Then a dynamic programming (DP) search is applied to find the optimal candidate sequence. In order to reduce the computational complexity of the DP search, a KLD-based unit pre-selection algorithm [2] is applied beforehand. Finally, the waveforms of every two consecutive candidate units in the optimal sequence are concatenated to produce the synthesized speech. The cross-fade technique [11] is used to smooth the phase discontinuity at the concatenation points at phone boundaries.
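To make the search procedure concrete, the following is a minimal, illustrative sketch of a Viterbi-style DP over phone-sized candidate units with a simple pre-selection step. The target_score and concat_score callbacks, the pruning size k, and all names are assumptions standing in for the trained models and weights of Eq. (1); this is not the actual USTC implementation.

```python
from typing import Callable, List, Sequence

# Hypothetical scoring callbacks standing in for the trained models of Section 2.1.1:
#   target_score(unit, context)   -> sum over unit-level models of w_m * [log P - w_KLD * D]
#   concat_score(prev_unit, unit) -> score from the concatenating spectrum/F0 models
# Both return larger-is-better values, matching the maximization in Eq. (1).

def preselect(candidates: Sequence, context, target_score: Callable, k: int) -> List:
    """Keep only the k best-scoring candidates per target phone before the DP search
    (a simplified stand-in for the KLD-based unit pre-selection of [2])."""
    return sorted(candidates, key=lambda u: target_score(u, context), reverse=True)[:k]

def unit_selection(candidate_lists, contexts, target_score, concat_score, k=50):
    """Viterbi-style DP over phone-sized candidate units; returns the best sequence U*."""
    pruned = [preselect(c, ctx, target_score, k) for c, ctx in zip(candidate_lists, contexts)]
    # best[i][j] = (best cumulative score ending with unit j at position i, back-pointer)
    best = [[(target_score(u, contexts[0]), None) for u in pruned[0]]]
    for i in range(1, len(pruned)):
        row = []
        for u in pruned[i]:
            t = target_score(u, contexts[i])
            score, back = max(
                ((best[i - 1][p][0] + concat_score(prev, u) + t, p)
                 for p, prev in enumerate(pruned[i - 1])),
                key=lambda x: x[0],
            )
            row.append((score, back))
        best.append(row)
    # Trace back the optimal candidate sequence.
    j = max(range(len(best[-1])), key=lambda idx: best[-1][idx][0])
    path = []
    for i in range(len(pruned) - 1, -1, -1):
        path.append(pruned[i][j])
        if i > 0:
            j = best[i][j][1]
    return list(reversed(path))
```

The rewrite of Eq. (1) as target plus concatenation cost corresponds to splitting the score into the per-unit term (target_score) and the boundary term (concat_score) used in the recursion above.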
2.2. Database and annotation

This year, an audiobook database [12] is released as the speech corpus for system construction. This database consists of the recordings of four books written by Mark Twain, read by an American English narrator. The texts of this database were generated by a lightly supervised speech recognition technique [12], with a confidence value for each sentence. We processed the database by the following steps; a sketch of the resulting labelling pipeline is given after the list.

1) Sentence selection. The sentences with confidence values lower than 0 were discarded. The number of remaining sentences is 26,001 and the total duration is about 50 hours.

2) Segmental and prosodic labelling. We adopted an English text analysis tool provided by iFlytek to obtain the phoneme transcription and ToBI information of each sentence, based on the orthographic texts provided with the speech data. The phone boundary segmentation was conducted by HMM alignment using an adapted acoustic model.

3) Channel labelling. The database consists of four stories, and we found significant channel inconsistency among the recordings of the different stories. Thus, we added a channel label to each sentence according to the story it belongs to. We checked several samples of each story and assigned the channel labels empirically: the first and third stories were labelled as Channel 1, and the second and fourth stories were labelled as Channel 2 and Channel 3 respectively. This channel label is added to the question set for decision-tree-based model clustering. At synthesis time, the label of Channel 1 is used for input sentences.

4) Expressiveness labelling. Compared to conventional speech synthesis databases with a newsreading style, this audiobook database is far more expressive. In order to obtain relatively neutral speech for system construction, we made a simple two-value expressiveness labelling according to the average F0 of each sentence in an unsupervised way. The idea is similar to [13]. Firstly, the average F0 values of all sentences in the database are calculated and a threshold of 175 Hz is applied empirically. The sentences with average F0 lower than this threshold were labelled as neutral; otherwise, they were labelled as expressive. This label is also added to the question set for decision-tree-based model clustering. At synthesis time, the neutral label is assigned to each input sentence.
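The sketch below illustrates the labelling pipeline of steps 1)-4), using the thresholds stated above (confidence >= 0 for sentence selection and 175 Hz average F0 for the neutral/expressive split). The Sentence record, its field names, and the story-to-channel mapping helper are illustrative assumptions rather than the actual USTC tools.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative record for one audiobook sentence; field names are assumptions.
@dataclass
class Sentence:
    text: str
    story_id: int           # 1..4, which of the four stories it comes from
    asr_confidence: float   # confidence from lightly supervised recognition [12]
    f0_frames: List[float]  # per-frame F0 values in Hz (0 for unvoiced frames)

# Story -> channel mapping chosen empirically in the paper:
# stories 1 and 3 -> Channel 1, story 2 -> Channel 2, story 4 -> Channel 3.
STORY_TO_CHANNEL = {1: 1, 2: 2, 3: 1, 4: 3}

def label_sentence(s: Sentence,
                   conf_threshold: float = 0.0,
                   f0_threshold_hz: float = 175.0) -> Optional[dict]:
    """Return context labels for one sentence, or None if it is discarded."""
    # 1) Sentence selection: drop sentences below the confidence threshold.
    if s.asr_confidence < conf_threshold:
        return None
    # 3) Channel label according to the story the sentence belongs to.
    channel = STORY_TO_CHANNEL[s.story_id]
    # 4) Two-value expressiveness label from the average F0 of voiced frames.
    voiced = [f for f in s.f0_frames if f > 0]
    mean_f0 = sum(voiced) / len(voiced) if voiced else 0.0
    expressiveness = "neutral" if mean_f0 < f0_threshold_hz else "expressive"
    # These labels are appended to the conventional segmental/prosodic context
    # labels of step 2) and exposed to the decision-tree question set.
    return {"channel": channel, "expressiveness": expressiveness}

corpus: List[Sentence] = []  # ...loaded from the audiobook database
labelled = []
for s in corpus:
    labels = label_sentence(s)
    if labels is not None:
        labelled.append((s, labels))
```

Raising conf_threshold from 0 to 100 in this sketch corresponds to the 32-hour subset used in the internal experiment of Section 2.3.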

Figure 1: The mean opinion scores (MOS) with 95% confidence intervals for the three systems (SYS_32H, SYS_32H-EL, SYS_50H) in the internal experiment.

2.3. Internal experiment

In order to evaluate the effectiveness of the sentence selection and expressiveness labelling methods introduced above, an internal experiment was conducted during system preparation. Three systems were built and compared.

SYS_32H: The threshold of the confidence value for sentence selection was set to 100, which leads to 32 hours of recordings for system construction. The channel labels were used and the expressiveness labels were neglected.

SYS_32H-EL: The same as SYS_32H, except that the expressiveness labels were used.

SYS_50H: The same as SYS_32H, except that the threshold of the confidence value for sentence selection was set to 0.

Thirty-five sentences were synthesized using the three systems and were evaluated by five listeners. The listeners were required to give a score from 1 (very unnatural) to 5 (very natural) for each synthesized sentence. The mean opinion scores (MOS) with 95% confidence intervals for the three systems are shown in Fig. 1. From this figure, we can see that the difference between setting the confidence threshold for sentence selection to 0 and to 100 is very small. Introducing the expressiveness labels improves the naturalness of the synthetic speech slightly; however, this difference is also insignificant. Finally, we set the threshold of the confidence value to 0 and adopted the two-value expressiveness labels in the submitted system.

3. Evaluation

This section introduces and discusses the evaluation results of our system in Blizzard Challenge 2012. This year, the identifier letter of our system is C; A is the natural speech and system B is a Festival benchmark system.

3.1. Similarity test

The boxplots of the MOS on similarity of all the systems are shown in Figure 2. As we can see, our system achieves the best similarity to the original speaker. The results of Wilcoxon's signed rank tests further show that the difference between system C and any other system on similarity is significant at the 1% level. The high similarity score of our system can be attributed to the unit selection and waveform concatenation synthesis approach, in which no signal processing is applied besides the simple waveform smoothing at phone boundaries.

Figure 2: Boxplot of MOS on similarity (all listeners).

3.2. Naturalness test

The boxplots of the MOS on naturalness of all systems are shown in Fig. 3. The results show that our system achieved the best performance on naturalness among all the participant systems (not including the natural speech, system A). The Wilcoxon's signed rank tests also show that the difference between C and any other participant system on naturalness is significant.
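As an illustration of the kind of listening-test analysis reported in Sections 2.3-3.2 (MOS with 95% confidence intervals and pairwise Wilcoxon signed-rank tests), a minimal SciPy-based sketch is given below. The listener scores in the example are made up, and this is not the official Blizzard Challenge evaluation script.

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """Mean opinion score with a t-based confidence interval half-width."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half_width = stats.sem(scores) * stats.t.ppf((1 + confidence) / 2, len(scores) - 1)
    return mean, half_width

def compare_systems(scores_a, scores_b, alpha=0.01):
    """Paired Wilcoxon signed-rank test on listener scores for two systems."""
    stat, p = stats.wilcoxon(scores_a, scores_b)
    return p, p < alpha

# Example with made-up 1-5 naturalness scores from the same listeners:
sys_c = [4, 5, 4, 4, 3, 5, 4, 4]
sys_d = [3, 4, 3, 3, 2, 4, 3, 3]
print(mos_with_ci(sys_c))        # MOS of system C with 95% CI half-width
print(compare_systems(sys_c, sys_d))  # p-value and significance at the 1% level
```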

Figure 3: Boxplot of MOS on naturalness (all data, all listeners).

Figure 4: Word error rates (%) of all participant systems (all listeners).

3.3. Intelligibility test

Fig. 4 shows the results of the overall word error rate (WER) test for all systems. Our system achieves the lowest WER among all the systems: 19% for all listeners and 7.7% for the paid native English speakers. The Wilcoxon's signed rank tests show that the differences between C and systems D and H are insignificant.

3.4. Paragraph test

In this test, each listener listened to one whole paragraph from a novel and chose a score on a scale of 1 to 60 for each of the following seven aspects: overall impression, pleasantness, speech pauses, stress, intonation, emotion, and listening effort. Then, a mean opinion score could be calculated for each aspect. The evaluation results show that our system achieves the best performance in all seven aspects. The mean opinion scores of our system and of the natural speech are listed in Table 1. From this table, we see that emotion and intonation are the weakest aspects of our system. This is due to the lack of emotion- and intonation-related context features in the current system.

Table 1: MOS of our system (C) and the natural speech (A) in the paragraph test.

Aspect              A    C
Overall            48   37
Pleasantness       45   36
Speech pauses      47   35
Stress             47   34
Intonation         47   33
Emotion            46   32
Listening effort   47   35

4. Conclusions

This paper introduced the USTC speech synthesis system built for the Blizzard Challenge 2012. The HMM-based unit selection approach was adopted for system construction. The evaluation results of Blizzard Challenge 2012 have proved the effectiveness of this approach in synthesizing texts in the novel domain using a non-standard speech synthesis database. Audiobook synthesis is still a challenging task, and several problems remain to be solved in future work, such as channel equalization, automatic labelling of expressiveness and emotion factors, intonation modelling, and so on.

5. Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (Grant No. 60905010) and the Fundamental Research Funds for the Central Universities (Grant No. WK2100060005). The authors also thank the research division of iFlytek Co. Ltd., Hefei, China, for providing the English text analysis tools.

6. References

[1] Z. Ling, Y. Wu, Y. Wang, L. Qin, and R. Wang, "USTC system for Blizzard Challenge 2006: an improved HMM-based speech synthesis method," in Blizzard Challenge Workshop, 2006.

[2] Z. Ling, L. Qin, H. Lu, Y. Gao, L. Dai, R. Wang, Y. Jiang, Z. Zhao, J. Yang, J. Chen, and G. Hu, "The USTC and iFlytek speech synthesis systems for Blizzard Challenge 2007," in Blizzard Challenge Workshop, 2007.
[3] Z. Ling and R. Wang, "HMM-based hierarchical unit selection combining Kullback-Leibler divergence with likelihood criterion," in Proc. ICASSP, vol. 4, Apr. 2007, pp. 1245-1248.
[4] Y.-J. Wu and R.-H. Wang, "Minimum generation error training for HMM-based speech synthesis," in Proc. ICASSP, vol. 1, May 2006, pp. 89-92.
[5] H. Lu, Z. Ling, M. Lei, C. Wang, H. Zhao, L. Chen, Y. Hu, L. Dai, and R. Wang, "The USTC system for Blizzard Challenge 2009," in Blizzard Challenge Workshop, 2009.
[6] Y. Jiang, Z. Ling, M. Lei, C. Wang, H. Lu, Y. Hu, L. Dai, and R. Wang, "The USTC system for Blizzard Challenge 2010," in Blizzard Challenge Workshop, 2010.
[7] Z. Ling, Z. Wang, and L. Dai, "Statistical modeling of syllable-level F0 features for HMM-based unit selection speech synthesis," in Proc. ISCSLP, 2010.
[8] L. Chen, C. Yang, Z. Ling, Y. Jiang, L. Dai, Y. Hu, and R. Wang, "The USTC system for Blizzard Challenge 2011," in Blizzard Challenge Workshop, 2011.
[9] K. Tokuda, T. Masuko, N. Miyazaki, and T. Kobayashi, "Hidden Markov models based on multi-space probability distribution for pitch pattern modeling," in Proc. ICASSP, 1999, pp. 229-232.
[10] K. Shinoda and T. Watanabe, "MDL-based context-dependent subword modeling for speech recognition," J. Acoust. Soc. Japan (E), vol. 21, no. 2, 2000.
[11] T. Hirai and S. Tenpaku, "Using 5 ms segments in concatenative speech synthesis," in Proc. 5th ISCA Speech Synthesis Workshop, 2004, pp. 37-42.
[12] N. Braunschweiler, M. Gales, and S. Buchholz, "Lightly supervised recognition for automatic alignment of large coherent speech recordings," in Proc. Interspeech, 2010, pp. 2222-2225.
[13] N. Braunschweiler and S. Buchholz, "Automatic sentence selection from speech corpora including diverse speech for improved HMM-TTS synthesis quality," in Proc. Interspeech, 2011, pp. 1821-1824.