Learning phonetic features from waveforms

Ying Lin (yinglin@ucla.edu)

Abstract

Unsupervised learning of broad phonetic classes by infants was simulated using a statistical mixture model. With the phonetic labels removed, hand-transcribed segments from the TIMIT database were used in model-based clustering to obtain data-driven classes. Simple Hidden Markov Models were chosen to be the components of the mixture, with Mel-cepstral coefficients as the front-end. The sound classes were found by iteratively partitioning the clusters. The results of running this algorithm on the TIMIT segments suggest that the partitions may be interpreted as gradient acoustic features, and that to some degree the resulting clusters correspond to knowledge-based phonetic classes. Thus, the clusters may reflect the preliminary phonological categories formed during language learning in early childhood.

1 Introduction

An important change that occurs during early phonological development is that an infant changes from a universal perceiver to a language-specific one [7]. It is widely believed that one of the underlying mechanisms is the ability to learn sound prototypes from distributions of sounds [2]. Although unsupervised learning of sound prototypes has been simulated using connectionist models with artificial data, no model is yet available that takes real speech signals as input. The current study is a first step in building a computational model of human-like learning of sub-lexical units from acoustic signals, using tools from Automatic Speech Recognition (ASR) and statistical learning. Assuming for this first step that a phone-level segmentation is given, we study the technical problem of using a statistical mixture model to cluster a set of unlabelled acoustic segments. To handle the challenge that acoustic segments are non-stationary and have variable durations, simple Hidden Markov Models (HMMs) were chosen as the components of the mixture, using Mel-cepstral coefficients as the parameterized representation of speech.

2 Method

2.1 Mixture model

The basic intuition behind mixture models is that the observed data may be generated by different sources, each captured by a separate component model. The way to decide which component best accounts for the data is to compare the data's likelihood given each component. Under the mixture model assumption, the likelihood has the general form:

    p(d \mid M) = \sum_{i=1}^{N} p(m_i)\, p(d \mid m_i)    (1)

where each m_i is often a parametric model that serves as a mixture component of M, and p(m_i) is the prior probability over the components {m_1, ..., m_N} of M. The intuitive interpretation of p(m_i) is the relative size of the subset of the data that is attributed to m_i, and p(d \mid m_i) is the likelihood of the data given m_i. Another important notion is the posterior probability of each mixture component, defined as:

    p(m_i \mid d) = \frac{p(m_i)\, p(d \mid m_i)}{\sum_{j=1}^{N} p(m_j)\, p(d \mid m_j)}    (2)
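As a concrete numerical illustration of (1) and (2), here is a minimal Python sketch with two toy one-dimensional Gaussians standing in for the HMM components introduced later (all names and values are illustrative, not from the implementation described below):

```python
import numpy as np
from scipy.stats import norm

# Toy mixture: two 1-D Gaussians standing in for the HMM components.
priors = np.array([0.6, 0.4])                    # p(m_i)
components = [norm(0.0, 1.0), norm(3.0, 0.5)]    # m_1, m_2

d = 2.5                                          # a single observed datum

# p(d | m_i): likelihood of the datum under each component
comp_lik = np.array([c.pdf(d) for c in components])

# Equation (1): p(d | M) = sum_i p(m_i) p(d | m_i)
mix_lik = priors @ comp_lik

# Equation (2): p(m_i | d) = p(m_i) p(d | m_i) / p(d | M)
posteriors = priors * comp_lik / mix_lik
print(mix_lik, posteriors)                       # posteriors sum to 1
```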

Intuitively speaking, the posterior probability represents to what extent the datum d is explained by component m_i. When this value approaches 1, it means that almost certainly, m_i is responsible for d.

2.2 Learning algorithm

The learning problem of the mixture model is addressed by the well-known Expectation-Maximization (EM) algorithm [9]. The EM algorithm iterates over the following two steps until the data likelihood stops increasing:

1. E-step: given current estimates of {p(m_i)} and m_i, compute the posterior probabilities using (2);
2. M-step: weight each datum with the posterior p(m_i | d), and update m_i and p(m_i) with the discounted data.

One perspective on this algorithm is provided by some proposals on exemplar-based category learning [4]. The E-step can be viewed as determining the membership of a new exemplar with respect to each class using the pre-stored exemplars, while the M-step can be viewed as shifting the centers of the exemplar clouds by updating the contribution of each exemplar. The main difference between our model and exemplar-based models is in essence similar to the one between template-based and statistical speech recognition: rather than storing all exemplars and using some template-matching technique to determine similarity, we assume that exemplars are generated by a mixture of models and use likelihood to measure similarity.

2.3 Mixture of HMMs

In principle, any probabilistic model that can be used to approximate (2) for time-series data can serve as a component of the mixture; the choice of models was therefore not limited to HMMs. We chose HMMs because they are relatively easy to implement, not because we considered them the best model for acoustic segments. The main challenge in clustering speech segments is that segments may have different lengths and are not stationary. Rather than mapping all segments to a fixed dimension [5], we used a mixture of HMMs to model the whole segments. The use of HMMs in clustering speech was considered in [3], but the mixture of HMMs was first applied to the clustering of motion data [1].

The algorithm for training a mixture of HMMs involves some minor modifications to the regular Baum-Welch algorithm [6]. Assuming the output probability of each state is computed from a Gaussian mixture, the E-step includes the following formulae (the use of symbols also follows [6]):

    p(O^{(s)} \mid \lambda_m) = \sum_i \alpha_t(i)\, \beta_t(i)    (3)

    \xi_t^{(s,m)}(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1}^{(s)})\, \beta_{t+1}(j)}{p(O^{(s)} \mid \lambda_m)}    (4)

    \gamma_t^{(s,m)}(i,k) = \gamma_t^{(s,m)}(i) \cdot \frac{c_{ik}^{(m)}\, N(o_t^{(s)}; \mu_{ik}^{(m)}, \Sigma_{ik}^{(m)})}{\sum_j c_{ij}^{(m)}\, N(o_t^{(s)}; \mu_{ij}^{(m)}, \Sigma_{ij}^{(m)})}    (5)

    \gamma_t^{(s,m)}(i) = \frac{\alpha_t(i)\, \beta_t(i)}{p(O^{(s)} \mid \lambda_m)}    (6)

    p(\lambda_m \mid O^{(s)}) = \frac{p(\lambda_m)\, p(O^{(s)} \mid \lambda_m)}{\sum_j p(\lambda_j)\, p(O^{(s)} \mid \lambda_j)}    (7)

Here \alpha_t(i) and \beta_t(i) are the regular forward and backward probabilities computed from the model parameters, a_{ij} are the transition probabilities, b_j(o_t^{(s)}) are the output probabilities, and N(o_t^{(s)}; \mu, \Sigma) are the Gaussian components of the output probabilities. In (4)-(6), the extra superscripts m and s indicate that there is a separate counter for each pair of HMM and observation sequence. (7) calculates the posterior probability of each component HMM given an observation sequence.
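In practice p(O^{(s)} \mid \lambda_m) comes out of the forward pass as a log-likelihood, so (7) is best evaluated in the log domain. A small sketch under that assumption (the numeric values are toy inputs, not results from the paper):

```python
import numpy as np
from scipy.special import logsumexp

def component_posteriors(log_liks, log_priors):
    """Equation (7) in the log domain.

    log_liks[m]   = log p(O^(s) | lambda_m), e.g. from a forward pass
    log_priors[m] = log p(lambda_m)
    """
    joint = log_priors + log_liks            # log [p(lambda_m) p(O^(s) | lambda_m)]
    return np.exp(joint - logsumexp(joint))  # normalize over the components

# Toy example: three HMM components scoring one segment.
log_liks = np.array([-102.3, -98.7, -110.5])
log_priors = np.log([1 / 3, 1 / 3, 1 / 3])
weights = component_posteriors(log_liks, log_priors)
# These weights are exactly the per-sequence factors used in the M-step below.
```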

As mentioned in 2.2, the M-step uses the posterior probability to weight each sufficient-statistics counter in (4)-(6), and updates the parameters of a given model using the weighted sum of all counters associated with that model. The formulae are:

    a_{ij}^{(m)} = \frac{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \xi_t^{(s,m)}(i,j)}{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i)}    (8)

    \mu_{ik}^{(m)} = \frac{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i,k)\, o_t^{(s)}}{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i,k)}    (9)

    \Sigma_{ik}^{(m)} = \frac{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i,k)\, (o_t^{(s)} - \mu_{ik}^{(m)})(o_t^{(s)} - \mu_{ik}^{(m)})^T}{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i,k)}    (10)

    c_{ik}^{(m)} = \frac{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \gamma_t^{(s,m)}(i,k)}{\sum_s p(\lambda_m \mid O^{(s)}) \sum_t \sum_j \gamma_t^{(s,m)}(i,j)}    (11)

    p(\lambda_m) = \frac{\sum_s p(\lambda_m \mid O^{(s)})}{\sum_s \sum_j p(\lambda_j \mid O^{(s)})}    (12)

(8)-(11) update the corresponding parameters (the transition probabilities a_{ij}^{(m)} between states, and the means \mu_{ik}^{(m)}, covariances \Sigma_{ik}^{(m)}, and weights c_{ik}^{(m)} of the Gaussian mixtures) for each HMM component in the mixture, and (12) updates the prior probability over the mixture components. Note that running the algorithm for the first time requires an initial estimate of the HMM parameters and of the prior probability. The K-Means algorithm based on the Itakura-Saito distortion [6] was used for this purpose: every acoustic segment was mapped to the LPC vector of its centroid spectrum, and the initial clustering was done on all the LPC vectors.

Table 1 shows two comparisons of the clustering methods, using 3 diphones and 3 words respectively; columns 1, 2, and 3 are the cluster indices.

K-Means:
            1      2      3
  [ɪn]    757    335    355
  [ɹi]    243    421    303
  [ɑɹ]     86    140    497
  water     2     58    276
  she     230    170      8
  ask     137    179     11

HMM mixture:
            1      2      3
  [ɪn]   1437      5      5
  [ɹi]     27    900     40
  [ɑɹ]     13     41    669
  water     0      3    333
  she     405      3      0
  ask      11    316      0

Table 1: Comparison of the clustering methods

We can see that when the units contain significant dynamics, the HMM mixture achieves a much better separation of the different units than the K-Means algorithm.

2.4 Iterative refinement of the mixture model

Due to the complex form of the likelihood function, finding the global maximum in the likelihood space can be very difficult. The heuristic that we used to approximate the global maximum is to start with a small number of clusters and then split them successively to obtain the desired number of clusters. The criterion for choosing which cluster to split is again based on likelihood. The intuition behind Algorithm 1 is that new categories first emerge from the largest or the most heterogeneous subset of the data; thus it may be viewed as a strategy for inductively learning sound categories from unlabelled data.

Algorithm 1: Successive cluster splitting
  1: Train a mixture of k HMMs
  2: repeat
  3:   for each cluster C_i do
  4:     Split C_i into n clusters to obtain a new mixture model; record the gain in likelihood
  5:   end for
  6:   Choose the split that maximally increases the likelihood
  7:   Retrain the new mixture model on all data
  8: until the stopping condition is satisfied
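A runnable sketch of Algorithm 1's greedy loop, using scikit-learn's GaussianMixture as a stand-in for the mixture of HMMs (the splitting-by-mean-perturbation heuristic is our illustrative choice, not the paper's HTK/Matlab implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def successive_splitting(X, target=6, seed=0):
    """Greedy likelihood-driven splitting in the spirit of Algorithm 1."""
    rng = np.random.default_rng(seed)
    model = GaussianMixture(n_components=2, random_state=seed).fit(X)
    while model.n_components < target:
        best_ll, best_model = -np.inf, None
        for c in range(model.n_components):
            # Split cluster c by perturbing its mean into two initial means.
            eps = 0.1 * rng.standard_normal(X.shape[1])
            means = np.vstack([np.delete(model.means_, c, axis=0),
                               model.means_[c] + eps,
                               model.means_[c] - eps])
            cand = GaussianMixture(n_components=model.n_components + 1,
                                   means_init=means,
                                   random_state=seed).fit(X)  # retrain on all data
            ll = cand.score(X)          # mean log-likelihood of the candidate
            if ll > best_ll:
                best_ll, best_model = ll, cand
        model = best_model              # keep the split with the largest gain
    return model
```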

Our clustering experiment was conducted on the manually transcribed TIMIT database. The training of the HMM mixture was implemented by modifying the HTK source code, and the successive splitting algorithm was implemented in Matlab. All HMMs are 3-state, left-to-right, with a 2-Gaussian mixture modelling the output distribution of each state. Thirteen Mel-cepstral coefficients [8] together with their thirteen delta features [6] were used as the parameterized representation of speech signals. This representation allowed us to focus on the spectral envelope rather than on speaker information. With the phonetic labels removed, 7166 acoustic segments from 22 speakers in TIMIT were clustered. Starting with 2 clusters, 5 partitions were found. Each partition replaced one old cluster with 2 new clusters, thereby resulting in a total of 6 clusters. The distribution of phonetic labels over the clusters was calculated after each partition and retraining.
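The paper's front-end was computed within HTK; a roughly equivalent extraction using librosa (a substitute for, not the original, tool chain, and the file path is illustrative) might look like:

```python
import numpy as np
import librosa

# Load one utterance (path is illustrative; TIMIT audio is 16 kHz).
y, sr = librosa.load("utterance.wav", sr=16000)

# 13 Mel-cepstral coefficients per frame...
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# ...plus their 13 delta features, giving 26-dimensional observation vectors.
delta = librosa.feature.delta(mfcc)
obs = np.vstack([mfcc, delta]).T   # shape: (num_frames, 26)
```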

3 Results

Figures 1-5 illustrate how the phonetic segments are divided into two new clusters at each partitioning step. The phonetic labels use symbols from the TIMIT phonetic alphabet. For each phonetic label, the position of the vertical bar indicates the percentages of the acoustic segments that were assigned to the left and right cluster. For example, in Figure 1 the bars corresponding to the voiced interdental fricative dh represent the result that 95% of the acoustic segments labelled dh were assigned to cluster 1 ("obstruent") and 5% were assigned to cluster 2 ("sonorant"). The clusters were named using prefix coding: for example, a parent cluster named 12 was split into daughter clusters 121 and 122. To save space, each figure displays the subset of labels with more than half of the segments falling in the parent cluster. For example, the labels included in Figure 3 (clusters 21 and 22) are those that were mostly assigned to cluster 2 ("sonorant") in Figure 1.¹

¹ Some phonetic labels are consolidated for better display.

[Figure 1. The first partition: [sonorant]. Cluster 1 ("obstruent") vs. cluster 2 ("sonorant").]

[Figure 2. The second partition of obstruents: [fricative]. Cluster 11 ("fricative") vs. cluster 12 ("stop").]

[Figure 3. The third partition of sonorants: [back]. Cluster 21 ("back") vs. cluster 22 ("front").]

[Figure 4. The fourth partition of front sonorants: [high]. Cluster 221 ("high") vs. cluster 222 ("low").]

[Figure 5. The fifth partition of stops: [nasal]. Cluster 121 ("oral") vs. cluster 122 ("nasal").]

The division of phonetic segments at each split suggests that the splits may be interpreted as gradient, distinctive acoustic features that distinguish two classes of sounds by the general shapes of their spectral envelopes. For convenience, these features were named using linguistic terms. The percentages may depend on the distribution of sounds in the training data set, but they reflect some general patterns of contextual variation in phonetic segments. Take the voiced labiodental fricative [v] as an example. The fact that in continuous speech [v] is often produced as an approximant without significant frication noise is reflected by the ambiguous status of [v] in Figures 1 and 2. Another example is the distribution of [w], [ɹ], [l], and [l̩]. They all fall into the category of sonorants that have a low F2, which may coincide with a primitive phonetic category in early child language.

To further investigate the nature of these classes, an evaluation was also conducted by creating 6 reference labels for the 6 broad phonetic classes obtained above: fricative/affricate, plosive, nasal, back sonorant, high front sonorant, and central sonorant. These reference labels were based entirely on linguistic knowledge. The percentage of the data-driven labels that match the knowledge-based labels was calculated. Moreover, a test set was constructed from 7 speakers from the same TIMIT dialect area. The results are reported in Table 2. Considering that the mixture model was learned in a completely unsupervised manner, its performance on the phone classification task was, as expected, reasonable. The similarity between the training and test sets suggests that our results reflect general patterns rather than those specific to the training set.

  Data set   Speakers   Phones   Percentage
  Train        22        7166      69.17
  Test          7        2084      67.61

Table 2: Percentage of phones that match the knowledge-based reference labels

4 Discussion and future work

The current study demonstrates the possibility of using statistical ASR tools to model the acquisition of phonetic categories. This work will be extended in two directions. First, instead of using manually transcribed segments, we would like to segment the word signals and learn sound categories at the same time. Second, we would also like to replace phone-level optimization with lexicon-level optimization, and highlight the connection between lexical growth and sub-lexical units.

5 Acknowledgements

The author would like to thank Pat Keating, Abeer Alwan, and Yingnian Wu for their comments.

References

[1] J. Alon et al., "Discovering clusters in motion time-series data," in Proc. CVPR, 2003.
[2] J. Maye, J. F. Werker, and L. Gerken, "Infant sensitivity to distributional information can affect phonetic discrimination," Cognition, vol. 82, no. 3, pp. B101-B111, 2002.
[3] B. Raj, R. Singh, and R. Stern, "Automatic generation of subword units for speech recognition systems," IEEE Trans. Speech and Audio Processing, vol. 10, no. 2, pp. 89-99, Feb. 2002.
[4] K. Johnson, "Speech perception without speaker normalization: An exemplar model," in Talker Variability in Speech Processing, K. Johnson and J. W. Mullennix, eds., 1997.
[5] F. Korkmazskiy, B. H. Juang, and F. Soong, "Generalized mixture of HMMs for continuous speech recognition," in Proc. ICASSP, pp. 1443-1446, 1997.
[6] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Prentice Hall, 1993.
[7] J. F. Werker and R. C. Tees, "Cross-language speech perception: Evidence for perceptual reorganization during the first year of life," Infant Behavior and Development, vol. 7, pp. 49-63, 1984.
[8] S. Davis and P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 28, pp. 357-366, 1980.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Royal Statistical Society, Series B, vol. 39, pp. 1-38, 1977.