Neural Network Based Pitch Control for Various Sentence Types
Volker Jantzen, Speech Processing Group TIK, ETH Zürich, Switzerland


Overview
- Introduction
- Preparation steps:
  - Prosody corpus
  - Prosodic transcription
  - Phonetic segmentation of speech data
  - Extraction of pitch contour
- Neural network:
  - Input / output coding
  - Architecture
  - Training algorithm
  - Training parameters
- Results
- Conclusions
- Future work

Introduction
- Up to now, the SVOX TTS system can only handle the prosody of declarative sentences
- Goal was to include the prosody of:
  - Different kinds of questions
  - Exclamations
  - Enumerations
  - Emphasis on words
- A prosody corpus was recorded in cooperation with LATL (University of Geneva) and Swisscom
- Pitch control in SVOX is done with a neural network
- A recurrent neural network was trained with data from the prosody corpus

Question Types
- Yes/No questions: "Braucht die Schweiz eine Kulturpolitik?" (Does Switzerland need a cultural policy?)
- Wh questions: "Doch was ist hier mit diesem Wirken in den Dingen gemeint?" (But what is meant here by this working within things?)
- Alternative questions: "Hast du das Auto genommen oder bist du mit der Bahn gefahren?" (Did you take the car or did you go by train?)

Prosody Corpus
- Over 1600 German sentences spoken by the same female speaker who recorded the diphone corpus
- The corpus consists of:
  - 858 declarative sentences
  - 585 questions
    - 175 wh questions
    - 335 yes/no questions
    - 75 alternative questions
  - 227 exclamations
  - 71 enumerations

Prosodic Transcription I
- The first 1000 sentences of the corpus were manually transcribed with regard to accents and phrase boundaries
- Accents:
  - 1   Main accent of the phrase
  - 2   Pitch accent
  - 3   Non-pitch accent
  - 4   Secondary word accent
  - E   Emphatic accent
- Phrase boundaries:
  - /    Short break
  - //   Long break
  - ///  Sentence boundary

Prosodic Transcription II
- Phrase types:
  - P    Progredient phrase
  - S    Semi-terminal phrase
  - T    Terminal phrase
  - Y    Question with rising pitch at the end
  - W    Question with falling pitch at the end
  - AI   Alternative question, initial phrase
  - AM   Alternative question, middle phrase
  - AF   Alternative question, final phrase
  - LI   Enumeration, initial phrase
  - LM   Enumeration, middle phrase
  - LF   Enumeration, final phrase
  - XM   Parenthetical phrase / extraposition on a medium pitch level
  - XL   Parenthetical phrase / extraposition on a low pitch level

Prosodic Transcription III
- Examples (syllable-by-syllable transcription with accent levels, phrase-type labels and boundary marks):
  - hast 0. - du: 0. - vir 2. - klic 0. - g@ 0. - gla_upt 1 P / di: 0. - z@s 0. - StYk 3. - pa 0. - pi:r 1 P / za_i 0. - ?a_in 0. - gyl 2. - ti 0. - g@r 0. - fer 0. - tra:k. E Y ///
    (Hast du wirklich geglaubt, dieses Stück Papier sei ein gültiger Vertrag? "Did you really believe this piece of paper was a valid contract?")
  - gla_upst 1. - du: 0 P // ?e:r 0. - hat 0. - rect 1 AI // ?o: 0 - d@r 0. - ?Irt 1. - ?e:r 0. - zic 0 AF ///
    (Glaubst du, er hat recht, oder irrt er sich? "Do you think he is right, or is he mistaken?")

Phonetic Segmentation of Speech Data
- Phonetic segmentation is needed to find the syllable nuclei in the speech signal on which the pitch contour is computed
- Forced alignment using Entropic's HTK
- Hidden Markov models that were trained on the phonetic corpus could be used on the prosody corpus without retraining
- Speech coding:
  - 16 kHz ESPS files
  - 25 ms Hamming windows
  - For each window: 12 MFCCs and energy, plus their delta coefficients (12 ΔMFCCs and Δenergy)
- Architecture of the HMMs:
  - One continuous HMM (CHMM) per phone, including the glottal stop
  - Left-to-right architecture
  - 3 emitting states
- No manual corrections of the segmentation were necessary
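A minimal sketch of a front-end of the kind described above (16 kHz audio, 25 ms Hamming windows, 12 MFCCs plus energy and their deltas). The original system used HTK/ESPS tooling; librosa is used here instead as an assumption, and the 10 ms frame step is also an assumption since the slide only specifies the window length.

```python
# Sketch of an HTK-like MFCC front-end along the lines of this slide.
# Assumptions: librosa stands in for HTK/ESPS, and the frame step is
# taken to be 10 ms (only the 25 ms Hamming window is stated above).
import librosa
import numpy as np

def mfcc_energy_features(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    n_fft = int(0.025 * sr)        # 25 ms analysis window
    hop = int(0.010 * sr)          # assumed 10 ms frame step
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=n_fft, hop_length=hop,
                                window="hamming")
    # Treat coefficient 0 as the energy term, keep 12 MFCCs (1..12).
    energy = mfcc[0:1, :]
    cep = mfcc[1:13, :]
    # Delta coefficients of the 12 MFCCs and the energy.
    d_cep = librosa.feature.delta(cep)
    d_energy = librosa.feature.delta(energy)
    # 26-dimensional observation vector per frame.
    return np.vstack([cep, energy, d_cep, d_energy]).T
```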

Extraction of Pitch Contour
- F0 computation done with the ESPS procedure get_f0
- get_f0 uses autocorrelation
- Frame step: 10 ms
- Correlation window size: 7.5 ms
- Minimum F0 set to 120 Hz, maximum F0 set to 500 Hz
- Good results, virtually no octave jumps
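To illustrate the idea behind an autocorrelation-based F0 tracker, here is a toy sketch using the 10 ms frame step and the 120..500 Hz search range from the slide. The 40 ms analysis window is my assumption (long enough to cover one 120 Hz period); the real ESPS get_f0 implements a far more robust RAPT-style algorithm with normalized cross-correlation and dynamic programming.

```python
# Toy autocorrelation F0 tracker, not a reimplementation of get_f0.
# Assumptions: 40 ms analysis window, simple peak picking, crude voicing test.
import numpy as np

def track_f0(y, sr=16000, fmin=120.0, fmax=500.0,
             frame_step=0.010, win_len=0.040):
    hop = int(frame_step * sr)
    win = int(win_len * sr)
    lag_min = int(sr / fmax)            # shortest period to consider
    lag_max = int(sr / fmin)            # longest period to consider
    f0 = []
    for start in range(0, len(y) - win, hop):
        frame = y[start:start + win] * np.hamming(win)
        ac = np.correlate(frame, frame, mode="full")[win - 1:]
        ac /= ac[0] + 1e-12             # normalize by frame energy
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        # Very crude voicing decision: low correlation => unvoiced (0 Hz).
        f0.append(sr / lag if ac[lag] > 0.3 else 0.0)
    return np.array(f0)
```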

Architecture of Neural Net
- Recurrent neural network with 2 hidden layers:
  - Input layer: 56 + 10 nodes
  - 1st hidden layer: 20 nodes
  - 2nd hidden layer: 10 nodes
  - 10 recurrent links from the 2nd hidden layer
  - Output layer: 3 nodes
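A minimal numpy sketch of a forward pass through a net of this shape. The wiring assumption is mine: the slide only states "56 + 10" input nodes and "10 recurrent links from the 2nd hidden layer", which is read here as the previous 2nd-hidden-layer activations being fed into the 10 extra input nodes of the next syllable.

```python
# Forward pass through a net of the shape on this slide:
# (56 + 10) inputs -> 20 -> 10 -> 3, with the previous 2nd-hidden-layer
# activations fed back into the 10 extra input nodes (assumed wiring).
# Biases are omitted for brevity.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(20, 66))   # input (56 + 10) -> hidden 1
W2 = rng.normal(scale=0.1, size=(10, 20))   # hidden 1 -> hidden 2
W3 = rng.normal(scale=0.1, size=(3, 10))    # hidden 2 -> output

def run_sentence(syllable_vectors):
    """syllable_vectors: list of 56-dim binary feature vectors, one per syllable."""
    context = np.zeros(10)                  # recurrent state, initially zero
    outputs = []
    for x in syllable_vectors:
        h1 = sigmoid(W1 @ np.concatenate([x, context]))
        h2 = sigmoid(W2 @ h1)
        context = h2                        # fed back at the next syllable
        outputs.append(sigmoid(W3 @ h2))    # 3 pitch values in [0, 1]
    return np.array(outputs)
```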

Input / Output Coding
- For each syllable:
  - Input vectors: 56 binary elements
    - Left context: 3 * 3 = 9 elements
    - Syllable in focus: 29 elements
    - Right context: 6 * 3 = 18 elements
  - Output vectors: 3 pitch values
    - Pitch at the beginning, center and end of the nucleus
    - Output range [0.2 .. 0.8] corresponding to [180 Hz .. 360 Hz]
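The slide states that the network output range [0.2, 0.8] corresponds to [180 Hz, 360 Hz]; a small sketch of that mapping is given below, assuming a linear mapping in Hz (the slide does not say whether the scale is linear or logarithmic).

```python
# Linear mapping between network outputs in [0.2, 0.8] and pitch in [180, 360] Hz,
# assuming a linear-in-Hz scale (not stated explicitly on the slide).
def output_to_hz(o, o_lo=0.2, o_hi=0.8, f_lo=180.0, f_hi=360.0):
    return f_lo + (o - o_lo) * (f_hi - f_lo) / (o_hi - o_lo)

def hz_to_output(f, o_lo=0.2, o_hi=0.8, f_lo=180.0, f_hi=360.0):
    return o_lo + (f - f_lo) * (o_hi - o_lo) / (f_hi - f_lo)

assert abs(output_to_hz(0.5) - 270.0) < 1e-9   # midpoint maps to midpoint
```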

Syllable / Context Coding
- Coding of the syllable in focus (29 elements):
  - Short / long vowel
  - High / low intrinsic pitch
  - Plosive before syllable nucleus
  - Plosive after syllable nucleus
  - Accent type (5)
  - Phrase type (10)
  - Previous phrase boundary (3)
  - Following phrase boundary (3)
  - Previous phrase progredient (ends with high pitch)
  - Previous phrase semi-terminal (ends with medium pitch)
  - Word boundary before syllable
  - Word boundary after syllable
- Coding of the context (3 elements per context syllable):
  - Pitch accent
  - Non-pitch accent
  - Break before / after syllable
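As a concrete illustration of this coding, here is a hypothetical sketch of how the 29-element focus vector and the 3-element context vectors could be assembled into the 56-element input (3 left and 6 right context syllables). The element counts follow the slide; the ordering within the vector and the field names of the syllable dictionary are assumptions.

```python
# Hypothetical assembly of the 56-element input vector: 29 focus elements plus
# 3 elements for each of 3 left and 6 right context syllables (9 + 29 + 18 = 56).
# Element ordering and dictionary keys are illustrative, not taken from the paper.
import numpy as np

def one_hot(index, size):
    v = np.zeros(size)
    if index is not None:
        v[index] = 1.0
    return v

def code_focus(syl):
    """syl: hypothetical dict describing the syllable in focus (29 elements)."""
    return np.concatenate([
        [syl["long_vowel"], syl["high_intrinsic_pitch"],
         syl["plosive_before_nucleus"], syl["plosive_after_nucleus"]],
        one_hot(syl["accent_code"], 5),          # accent type
        one_hot(syl["phrase_code"], 10),         # phrase type
        one_hot(syl["prev_boundary"], 3),        # previous phrase boundary
        one_hot(syl["next_boundary"], 3),        # following phrase boundary
        [syl["prev_phrase_progredient"], syl["prev_phrase_semiterminal"],
         syl["word_boundary_before"], syl["word_boundary_after"]],
    ])

def code_context(syl):
    """3 elements per context syllable: pitch accent, non-pitch accent, adjacent break."""
    return np.array([syl["pitch_accent"], syl["non_pitch_accent"], syl["break"]], dtype=float)

def input_vector(left3, focus, right6):
    parts = [code_context(s) for s in left3] + [code_focus(focus)] \
            + [code_context(s) for s in right6]
    return np.concatenate(parts)                 # 56 elements in total
```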

Training Algorithm
- Output of each neuron: O_j = f(Σ_i W_ji O_i), with the sigmoid f(x) = 1 / (1 + e^(-x))
- Training with backpropagation through time
- Backpropagation weight update: ΔW_ji = η δ_j O_i, where
  - δ_j = f'(Σ_i W_ji O_i) (D_j - O_j) for output neurons
  - δ_j = f'(Σ_i W_ji O_i) Σ_k W_kj δ_k otherwise
- The net is unfolded in time to take the additional error from the recurrent links into account
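The delta rules above can be written out in numpy as follows for a single (unfolded) time step, using the sigmoid derivative f'(a) = f(a)(1 - f(a)) and the layer shapes from the architecture sketch. The additional bookkeeping of full backpropagation through time, summing gradients flowing back along the recurrent links of earlier steps, is omitted here.

```python
# Numpy rendering of the delta rules above for one unfolded time step.
# Full BPTT additionally accumulates gradients from the recurrent links
# across earlier time steps; that bookkeeping is left out of this sketch.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(W1, W2, W3, x, target, eta=0.1):
    # Forward pass (same shapes as in the architecture sketch, biases omitted).
    h1 = sigmoid(W1 @ x)
    h2 = sigmoid(W2 @ h1)
    o = sigmoid(W3 @ h2)

    # delta_j = f'(net_j) (D_j - O_j) for output neurons ...
    d_out = o * (1.0 - o) * (target - o)
    # ... and delta_j = f'(net_j) * sum_k W_kj delta_k otherwise.
    d_h2 = h2 * (1.0 - h2) * (W3.T @ d_out)
    d_h1 = h1 * (1.0 - h1) * (W2.T @ d_h2)

    # Weight updates: Delta W_ji = eta * delta_j * O_i.
    W3 += eta * np.outer(d_out, h2)
    W2 += eta * np.outer(d_h2, h1)
    W1 += eta * np.outer(d_h1, x)
    return o
```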

Training
- Implementation in Matlab
- Utterances:
  - Training set: 590 sentences
  - Test set: 200 sentences
  - Training set and test set have the same distribution of sentence types
- Training parameters:
  - Learning rate: 0.1
  - Epochs: ca. 1000
- Control of the training process:
  - Predicted pitch contours were plotted against the original pitch contours
  - Resulting pitch contours were imposed on the original speech signals with PSOLA and listened to
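A bare skeleton of the training loop implied by these parameters (learning rate 0.1, roughly 1000 epochs, 590/200 sentence split). The original code was written in Matlab; load_corpus, bptt_update and evaluate are hypothetical helpers standing in for that code.

```python
# Skeleton of the training loop implied by the parameters above.
# load_corpus, bptt_update and evaluate are hypothetical placeholders.
def train(learn_rate=0.1, n_epochs=1000):
    train_set, test_set = load_corpus()          # 590 / 200 sentences (hypothetical loader)
    for epoch in range(n_epochs):
        for sentence in train_set:               # each sentence = sequence of syllable vectors
            bptt_update(sentence, eta=learn_rate)
        if epoch % 50 == 0:
            evaluate(test_set)                   # e.g. plot predicted vs. original contours
```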

Results
- The pitch contour is a linear interpolation of the network outputs
- The computed pitch contour is imposed on the original speech signal with a PSOLA algorithm
- Natural durations and energy are kept
- Examples: declarative sentences, exclamations, yes/no questions, wh questions, alternative questions
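A small sketch of the interpolation step mentioned above, assuming the three output values per syllable are anchored at the beginning, center and end of the syllable nucleus (as given on the input/output coding slide) and that nucleus times come from the forced alignment. The PSOLA resynthesis itself is not sketched.

```python
# Turn per-nucleus outputs (pitch at the beginning, center and end of each
# syllable nucleus) into a frame-level contour by linear interpolation.
# Assumes nuclei are ordered and non-overlapping; times are in seconds.
import numpy as np

def build_contour(nuclei, pitch_triples, frame_step=0.010):
    """nuclei: list of (t_start, t_end); pitch_triples: matching (f_begin, f_mid, f_end) in Hz."""
    anchor_t, anchor_f = [], []
    for (t0, t1), (f0, f1, f2) in zip(nuclei, pitch_triples):
        anchor_t += [t0, 0.5 * (t0 + t1), t1]
        anchor_f += [f0, f1, f2]
    frames = np.arange(0.0, anchor_t[-1], frame_step)
    return frames, np.interp(frames, anchor_t, anchor_f)
```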

Conclusions
- Typical pitch contours of the different question types were learned by the network
- Computed pitch contours are close to natural pitch contours
- Difficulties with sentences where the main accent is on the last syllable
- Enumerations have worse results than the other sentence types (fewest training data)
- Mean square error is not a good measure of naturalness

Future Work
- Further experiments to gather more experience with the behaviour of neural networks
- Find formal criteria to estimate the quality of a neural network
- Embed the neural network into the SVOX system
- Adapt the syntax analysis of SVOX so that the different question types can be analysed properly
- Use the prosody corpus to retrain the models used for duration control