Deep Learning in Speech Synthesis
Heiga Zen, Google
August 31st, 2013

Outline
- Background
- Deep Learning
- Deep Learning in Speech Synthesis
  - Motivation
  - Deep learning-based approaches
  - DNN-based statistical parametric speech synthesis
  - Experiments
- Conclusion

Text-to-speech as sequence-to-sequence mapping
- Automatic speech recognition (ASR): speech (continuous time series) -> text (discrete symbol sequence)
- Machine translation (MT): text (discrete symbol sequence) -> text (discrete symbol sequence)
- Text-to-speech synthesis (TTS): text (discrete symbol sequence) -> speech (continuous time series)

Speech production process
[Figure: source-filter view of speech production. Text (a concept) modulates a carrier wave with speech information: a sound source (voiced: pulse train at the fundamental frequency; unvoiced: noise) is shaped by the frequency transfer characteristics of the vocal tract, producing the speech air flow.]

Typical flow of TTS system
TEXT -> text analysis (NLP frontend, discrete -> discrete): sentence segmentation, word segmentation, text normalization, part-of-speech tagging, pronunciation
-> speech synthesis (backend, discrete -> continuous): prosody prediction, waveform generation -> SYNTHESIZED SPEECH
This talk focuses on the backend.

Statistical parametric speech synthesis (SPSS) [2]
Training: speech + text -> feature extraction -> model training. Synthesis: text -> parameter generation -> waveform synthesis -> synthesized speech.
- Large data + automatic training -> automatic voice building
- Parametric representation of speech -> flexible to change its voice characteristics
- With the hidden Markov model (HMM) as its acoustic model -> the HMM-based speech synthesis system (HTS) [1]

Characteristics of SPSS
Advantages: flexibility to change voice characteristics, small footprint, robustness.
Drawback: quality.
Major factors for quality degradation [2]: vocoder, acoustic model (the component deep learning targets), oversmoothing.

Deep learning [3]
- Machine learning methodology using multiple-layered models
- Motivated by the brain, which organizes ideas and concepts hierarchically
- Typically an artificial neural network (NN) with 3 or more layers of non-linear operations: a deep neural network (DNN), as opposed to a shallow neural network

Basic components in NN
Non-linear unit in a network of units: z_j = sum_i x_i w_ij, h_j = f(z_j), where w_ij is the weight from unit i to unit j.
Examples of activation functions:
- Logistic sigmoid: f(z_j) = 1 / (1 + e^{-z_j})
- Hyperbolic tangent: f(z_j) = tanh(z_j)
- Rectified linear: f(z_j) = max(z_j, 0)
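As a concrete illustration, here is a minimal sketch of one layer of such units in Python/NumPy (the function names and toy sizes are mine, not from the talk):

```python
import numpy as np

def logistic_sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rectified_linear(z):
    return np.maximum(z, 0.0)

def unit_layer(x, W, f=np.tanh):
    """A layer of non-linear units: z_j = sum_i x_i * w_ij, then h_j = f(z_j)."""
    return f(x @ W)

x = np.random.randn(4)        # inputs x_i
W = np.random.randn(4, 3)     # weights w_ij
h = unit_layer(x, W, f=logistic_sigmoid)
```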

Deep architecture
- Logistic regression: depth = 1
- Kernel machines, decision trees: depth = 2
- Ensemble learning (e.g., boosting [4], tree intersection [5]): depth++
- N-layer neural network: depth = N + 1
[Figure: a network with an input vector x at the input units, hidden units in between, and an output vector y at the output units.]

Difficulties in training DNNs
An NN with many layers used to give worse performance than an NN with few layers:
- Slow to train
- Vanishing gradients [6]
- Local minima
Since 2006, DNN training has improved significantly thanks to:
- GPUs [7]
- More data
- Unsupervised pretraining (RBM [8], auto-encoder [9])

Restricted Boltzmann Machine (RBM) [11]
Undirected graphical model over visible units v (v_i in {0,1}) and hidden units h (h_j in {0,1}), with no connections within the visible layer or within the hidden layer:
p(v, h | W) = (1 / Z(W)) exp{ -E(v, h; W) }
E(v, h; W) = -sum_i b_i v_i - sum_j c_j h_j - sum_{i,j} v_i w_ij h_j
where the w_ij are weights and b_i, c_j are biases. Parameters can be estimated by contrastive divergence learning [10].
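A minimal sketch of one step of CD-1 contrastive divergence for a binary RBM (NumPy; the function name, learning rate, and batch layout are illustrative assumptions, not details from the talk):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b, c, lr=0.01, rng=None):
    """One CD-1 step on a batch of binary visible vectors v0, shape (batch, n_v)."""
    rng = rng or np.random.default_rng()
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visibles, then hiddens again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate gradient of the log-likelihood.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```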

Deep Belief Network (DBN) [8]
- RBMs are stacked to form a DBN
- Layer-wise training of an RBM is repeated over multiple layers (pretraining)
- Then either joint optimization as a DBN, or supervised learning as a DNN with an additional final layer (fine-tuning)
[Figure: RBM1 -> stacking RBM2 -> DBN; copying the pretrained weights initializes a DNN for supervised learning.]
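A sketch of the greedy layer-wise pretraining loop, reusing sigmoid() and cd1_update() from the RBM sketch above (the loop structure and hyperparameters are my assumptions for illustration):

```python
import numpy as np

def pretrain_stack(data, layer_sizes, epochs=10, rng=None):
    """Greedily train a stack of RBMs: each RBM models the previous layer's
    hidden activations. Returns (W, c) pairs used to initialize a DNN."""
    rng = rng or np.random.default_rng()
    layers, x = [], data
    n_in = x.shape[1]
    for n_out in layer_sizes:
        W = 0.01 * rng.standard_normal((n_in, n_out))
        b, c = np.zeros(n_in), np.zeros(n_out)
        for _ in range(epochs):
            W, b, c = cd1_update(x, W, b, c, rng=rng)  # full-batch CD-1
        layers.append((W, c))
        x = sigmoid(x @ W + c)   # propagate data up to train the next RBM
        n_in = n_out
    return layers
```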

Representation learning
- DBN: feature extractor
- DBN + classification layer: feature classifier
- DNN: feature extractor + classifier
Recipe: unsupervised layer-wise pretraining -> add an output layer (e.g., softmax) -> supervised fine-tuning (backpropagation).

Success of DNNs in various machine learning tasks
Tasks: vision [12], language, speech [13].

Word error rates (%):
Task         Hours of data   HMM-DNN   HMM-GMM (same data)   HMM-GMM (more data)
Voice Input  5,870           12.3      N/A                   16.0
YouTube      1,400           47.6      52.3                  N/A

Products: personalized photo search [14, 15], voice search [16, 17].

Conventional HMM-GMM [1]
Decision tree-clustered HMM with GMM state-output distributions: linguistic features x are routed through binary (yes/no) tree questions to leaf nodes, each of which holds a GMM over acoustic features y.

Limitation of the HMM-GMM approach (1): hard to integrate feature extraction & modeling
- Typically uses a lower-dimensional approximation of the speech spectrum as the acoustic feature (e.g., cepstra c_1 ... c_T obtained from spectra s_1 ... s_T by dimensionality reduction; cepstrum, line spectral pairs)
- Hard to model the spectrum directly with an HMM-GMM due to its high dimensionality & strong correlation
- Attempts: waveform-level model [18], mel-cepstral analysis-integrated model [19], STAVOCO [20], MGE-LSD [21]

Limitation of the HMM-GMM approach (2): data fragmentation
- Linguistic-to-acoustic mapping by decision trees: each yes/no split fragments the acoustic space into sub-clusters
- Inefficient at representing complex dependencies between linguistic & acoustic features
- Attempts: boosting [4], tree intersection [5], product of experts [22]

Motivation to use deep learning in speech synthesis
- Integrating feature extraction: can model high-dimensional, highly correlated features efficiently; the layered architecture with non-linear operations lets feature extraction be integrated with acoustic modeling
- Distributed representation: can be exponentially more efficient than a fragmented representation; better representational ability with fewer parameters
- Layered hierarchical structure in speech production: concept -> linguistic -> articulatory -> waveform

Deep learning-based approaches
Recent applications of deep learning to speech synthesis:
- HMM-DBN (USTC/MSR [23, 24])
- DBN (CUHK [25])
- DNN (Google [26])
- DNN-GP (IBM [27])

HMM-DBN [23, 24]
Decision tree-clustered HMM with DBN state-output distributions: linguistic features x are routed through tree questions to leaf nodes, each holding a DBN over acoustic features y. The DBNs replace the GMMs.

DBN [25]
A DBN with hidden layers h_1, h_2, h_3 represents the joint distribution of linguistic features x & acoustic features y. The DBN replaces both the decision trees and the GMMs.

DNN [26]
A DNN with hidden layers h_1, h_2, h_3 represents the conditional distribution of acoustic features y given linguistic features x. The DNN replaces both the decision trees and the GMMs.

DNN-GP [27]
Uses the last hidden layer's output as the input to Gaussian Process (GP) regression over acoustic features y; i.e., the last layer of the DNN is replaced by GP regression.

Comparison
Notation: cep = mel-cepstrum, ap = band aperiodicities; x = linguistic features, y = acoustic features, c = cluster index; y|x = conditional distribution of y given x; (y, x) = joint distribution of x and y.

              HMM-GMM       HMM-DBN       DBN           DNN           DNN-GP
Features      cep, ap, F0   spectra       cep, ap, F0   cep, ap, F0   F0
Model type    parametric    parametric    parametric    parametric    non-parametric
Distribution  y|c, c|x      y|c, c|x      (y, x)        y|x           y|h, h|x

HMM-GMM is more computationally efficient than the others.

Framework
TEXT -> text analysis -> input feature extraction: input features including binary & numeric features at each frame 1 ... T (binary features, numeric features, duration feature from a duration prediction module, frame position feature)
-> DNN (input layer, hidden layers, output layer)
-> statistics (mean & variance) of the speech parameter vector sequence (spectral features, excitation features, V/UV feature)
-> parameter generation -> waveform synthesis -> SPEECH
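A minimal sketch of the frame-level mapping, assuming a plain feedforward regressor with sigmoid hidden layers and a linear output (the feature sizes, output dimension, and random initialization are my illustrative assumptions, not details from the talk):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnn_forward(x, weights):
    """Map one frame's linguistic input vector to acoustic outputs.
    Hidden layers use sigmoid units; the output layer is linear (regression)."""
    *hidden, (W_out, b_out) = weights
    h = x
    for W, b in hidden:
        h = sigmoid(h @ W + b)
    return h @ W_out + b_out

rng = np.random.default_rng(0)
# Toy per-frame input: binary answers to linguistic questions, numeric
# linguistic features, and duration / frame-position features (sizes invented).
x = np.concatenate([rng.integers(0, 2, 300).astype(float),
                    rng.standard_normal(25),
                    [0.12, 0.5]])
sizes = [x.size, 512, 512, 512, 512, 127]  # 4x512 hidden; output dim invented
weights = [(0.01 * rng.standard_normal((m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]
y = dnn_forward(x, weights)  # predicted acoustic feature statistics for the frame
```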

Framework: is this new?
No: NN [28], RNN [29].
What's the difference now? More layers, more data, and more computational resources; better learning algorithms; statistical parametric speech synthesis techniques.

Experimental setup
Database              US English female speaker
Training / test data  33,000 / 173 sentences
Sampling rate         16 kHz
Analysis window       25-ms width / 5-ms shift
Linguistic features   11 categorical features, 25 numeric features
Acoustic features     0th-39th mel-cepstrum, log F0, 5-band aperiodicity, delta & delta-delta
HMM topology          5-state, left-to-right HSMM [30], MSD F0 [31], MDL [32]
DNN architecture      1-5 layers, 256/512/1024/2048 units/layer, sigmoid, continuous F0 [33]
Postprocessing        Postfiltering in the cepstrum domain [34]

Preliminary experiments
- With vs. without grouping questions (e.g., vowel, fricative): grouping (an OR operation) can be represented by the NN itself; working without grouping questions was more efficient.
- How to encode numeric input features: decision tree clustering uses binary questions, but a neural network can take numerical values as inputs; feeding numerical values directly was more efficient (see the sketch after this list).
- Removing silences: a decision tree splits silence & speech at the top of the tree, whereas a single neural network handles both and wastes effort reducing error on silence; better to remove silence frames as preprocessing.
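A sketch contrasting the two encodings of a numeric context (the thresholds, ranges, and function names are illustrative assumptions):

```python
import numpy as np

def encode_binary_questions(value, thresholds):
    """Decision-tree style: answer a binary question 'value <= t?' per threshold."""
    return np.array([float(value <= t) for t in thresholds])

def encode_numeric(value, lo, hi):
    """NN style: feed the (normalized) numeric value directly as one input."""
    return np.array([(value - lo) / (hi - lo)])

pos_in_phrase = 7  # e.g., word position within the phrase
print(encode_binary_questions(pos_in_phrase, thresholds=[1, 2, 4, 8, 16]))
print(encode_numeric(pos_in_phrase, lo=0, hi=20))
```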

Example of speech parameter trajectories
Without grouping questions, with numeric contexts, and with silence frames removed.
[Figure: 5th mel-cepstral coefficient over frames 0-500 for natural speech, HMM (alpha = 1), and DNN (4x512).]

Objective evaluations
Objective measures (a sketch of the mel-cepstral distortion computation follows this list):
- Aperiodicity distortion (dB)
- Voiced/unvoiced error rate (%)
- Mel-cepstral distortion (dB)
- RMSE in log F0
Sizes of the decision trees in the HMM systems were tuned by scaling (alpha) the penalty term in the MDL criterion:
- alpha < 1: larger trees (more parameters)
- alpha = 1: standard setup
- alpha > 1: smaller trees (fewer parameters)
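Mel-cepstral distortion between predicted and natural cepstra is commonly computed as below; this is the standard definition, and the talk does not spell out its exact variant (e.g., whether c_0 is excluded), so treat the details as assumptions:

```python
import numpy as np

def mel_cepstral_distortion(c_ref, c_pred):
    """MCD in dB between two time-aligned cepstral sequences, shape (frames, dims):
    MCD = (10 / ln 10) * sqrt(2 * sum_d (c_ref_d - c_pred_d)^2), averaged over frames."""
    diff = c_ref - c_pred
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff**2, axis=1))
    return per_frame.mean()

rng = np.random.default_rng(0)
c_nat, c_dnn = rng.standard_normal((2, 500, 39))  # toy aligned sequences
print(mel_cepstral_distortion(c_nat, c_dnn))
```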

Aperiodicity distortion
[Figure: aperiodicity distortion (dB, roughly 1.20-1.32) vs. total number of parameters (1e5-1e7) for HMM systems (alpha = 0.375 to 16) and DNNs with 1-5 layers and 256/512/1024/2048 units per layer.]

V/UV errors
[Figure: voiced/unvoiced error rate (%, roughly 3.2-4.6) vs. total number of parameters (1e5-1e7) for the same HMM and DNN systems.]

Mel-cepstral distortion
[Figure: mel-cepstral distortion (dB, roughly 4.6-5.4) vs. total number of parameters (1e5-1e7) for the same HMM and DNN systems.]

RMSE in log F0
[Figure: RMSE in log F0 (roughly 0.12-0.13) vs. total number of parameters (1e5-1e7) for the same HMM and DNN systems.]

Subjective evaluations
Compared HMM-based systems with DNN-based ones having similar numbers of parameters. Paired comparison test: 173 test sentences, 5 subjects per pair, up to 30 pairs per subject, crowd-sourced.

Preference (%):
HMM (alpha)   DNN (#layers x #units)   Neutral   p value    z value
15.8 (16)     38.5 (4 x 256)           45.7      < 10^-6    -9.9
16.1 (4)      27.2 (4 x 512)           56.8      < 10^-6    -5.1
12.7 (1)      36.6 (4 x 1,024)         50.7      < 10^-6    -11.5
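One common way to obtain p/z values for such a preference test is a two-sided sign test over the non-neutral responses; whether the talk used exactly this test is my assumption, and the counts below are toy values with the same proportions as the first row:

```python
from math import erf, sqrt

def sign_test(n_a, n_b):
    """Two-sided sign test on preference counts, ignoring neutral responses:
    z compares n_a wins against a 50/50 null over n = n_a + n_b trials."""
    n = n_a + n_b
    z = (n_a - n / 2.0) / sqrt(n / 4.0)
    p = 1.0 - erf(abs(z) / sqrt(2.0))  # two-sided normal tail probability
    return z, p

z, p = sign_test(158, 385)  # toy counts mirroring 15.8% vs 38.5%
print(z, p)
```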

Conclusion
Deep learning in speech synthesis:
- Aims to replace the HMM with an acoustic model based on deep architectures
- Different groups presented different architectures at ICASSP 2013: HMM-DBN, DBN, DNN, DNN-GP
- The DNN-based approach achieved reasonable performance
- Many possible future research topics

References
[1] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Simultaneous modeling of spectrum, pitch and duration in HMM-based speech synthesis. In Proc. Eurospeech, pages 2347-2350, 1999.
[2] H. Zen, K. Tokuda, and A. Black. Statistical parametric speech synthesis. Speech Commun., 51(11):1039-1064, 2009.
[3] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[4] Y. Qian, H. Liang, and F. Soong. Generating natural F0 trajectory with additive trees. In Proc. Interspeech, pages 2126-2129, 2008.
[5] K. Yu, H. Zen, F. Mairesse, and S. Young. Context adaptive training with factorized decision trees for HMM-based statistical parametric speech synthesis. Speech Commun., 53(6):914-923, 2011.
[6] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. Kremer and J. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
[7] R. Raina, A. Madhavan, and A. Ng. Large-scale deep unsupervised learning using graphics processors. In Proc. ICML, volume 9, pages 873-880, 2009.
[8] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[9] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371-3408, 2010.
[10] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[11] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. Rumelhart and J. McClelland, editors, Parallel Distributed Processing, volume 1, chapter 6, pages 194-281. MIT Press, 1986.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. NIPS, pages 1106-1114, 2012.
[13] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
[14] C. Rosenberg. Improving photo search: a step across the semantic gap. http://googleresearch.blogspot.co.uk/2013/06/improving-photo-search-step-across.html
[15] K. Yu. https://plus.sandbox.google.com/103688557111379853702/posts/fdw7eqx87eq
[16] V. Vanhoucke. Speech recognition and deep learning. http://googleresearch.blogspot.co.uk/2012/08/speech-recognition-and-deep-learning.html
[17] Bing makes voice recognition on Windows Phone more accurate and twice as fast. http://www.bing.com/blogs/site_blogs/b/search/archive/2013/06/17/dnn.aspx
[18] R. Maia, H. Zen, and M. Gales. Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters. In Proc. ISCA SSW7, pages 88-93, 2010.
[19] K. Nakamura, K. Hashimoto, Y. Nankaku, and K. Tokuda. Integration of acoustic modeling and mel-cepstral analysis for HMM-based speech synthesis. In Proc. ICASSP, pages 7883-7887, 2013.
[20] T. Toda and K. Tokuda. Statistical approach to vocal tract transfer function estimation based on factor analyzed trajectory HMM. In Proc. ICASSP, pages 3925-3928, 2008.
[21] Y.-J. Wu and K. Tokuda. Minimum generation error training with direct log spectral distortion on LSPs for HMM-based speech synthesis. In Proc. Interspeech, pages 577-580, 2008.
[22] H. Zen, M. Gales, Y. Nankaku, and K. Tokuda. Product of experts for statistical parametric speech synthesis. IEEE Trans. Audio Speech Lang. Process., 20(3):794-805, 2012.
[23] Z.-H. Ling, L. Deng, and D. Yu. Modeling spectral envelopes using restricted Boltzmann machines for statistical parametric speech synthesis. In Proc. ICASSP, pages 7825-7829, 2013.
[24] Z.-H. Ling, L. Deng, and D. Yu. Modeling spectral envelopes using restricted Boltzmann machines and deep belief networks for statistical parametric speech synthesis. IEEE Trans. Audio Speech Lang. Process., 21(10):2129-2139, 2013.
[25] S. Kang, X. Qian, and H. Meng. Multi-distribution deep belief network for speech synthesis. In Proc. ICASSP, pages 8012-8016, 2013.
[26] H. Zen, A. Senior, and M. Schuster. Statistical parametric speech synthesis using deep neural networks. In Proc. ICASSP, pages 7962-7966, 2013.
[27] R. Fernandez, A. Rendel, B. Ramabhadran, and R. Hoory. F0 contour prediction with a deep belief network-Gaussian process hybrid model. In Proc. ICASSP, pages 6885-6889, 2013.
[28] O. Karaali, G. Corrigan, and I. Gerson. Speech synthesis with neural networks. In Proc. World Congress on Neural Networks, pages 45-50, 1996.
[29] C. Tuerk and T. Robinson. Speech synthesis using artificial neural networks trained on cepstral coefficients. In Proc. Eurospeech, pages 1713-1716, 1993.
[30] H. Zen, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. A hidden semi-Markov model-based speech synthesis system. IEICE Trans. Inf. Syst., E90-D(5):825-834, 2007.
[31] K. Tokuda, T. Masuko, N. Miyazaki, and T. Kobayashi. Multi-space probability distribution HMM. IEICE Trans. Inf. Syst., E85-D(3):455-464, 2002.
[32] K. Shinoda and T. Watanabe. Acoustic modeling based on the MDL criterion for speech recognition. In Proc. Eurospeech, pages 99-102, 1997.
[33] K. Yu and S. Young. Continuous F0 modelling for HMM based statistical parametric speech synthesis. IEEE Trans. Audio Speech Lang. Process., 19(5):1071-1079, 2011.
[34] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Incorporation of mixed excitation model and postfilter into HMM-based text-to-speech synthesis. IEICE Trans. Inf. Syst., J87-D-II(8):1563-1571, 2004.