
Proceedings of Meetings on Acoustics
Volume 19, 2013 http://acousticalsociety.org/

ICA 2013 Montreal
Montreal, Canada, 2-7 June 2013

Speech Communication
Session 5aSCb: Production and Perception II: The Speech Segment (Poster Session)

5aSCb49. Simulation of neural mechanism for Chinese vowel perception with neural network model

Chao-Min Wu*, Ming-Hung Li and Tao-Wei Wang

*Corresponding author's address: Electrical Engineering, National Central University, Chung-Li, 32001, Taiwan, wucm@ee.ncu.edu.tw

Based on the results of psycholinguistic experiments, the perceptual magnet effect is an important factor in speech development: it produces a warped auditory space around the corresponding phonemes. The purpose of this study was to develop a neural network model to simulate speech perception. A neural network model with unsupervised learning was used to determine the phonetic categories of phonemes according to the formant frequencies of the vowels. A modified Self-Organizing Map (SOM) algorithm was proposed to produce the auditory perceptual space of English vowels. Simulated results were compared with findings from psycholinguistic experiments, such as the categorization of English /r/ and /l/ and of prototype and non-prototype vowels, to demonstrate the model's ability to produce an auditory perception space. In addition, this speech perception model was combined with the DIVA (Directions into Velocities of Articulators) neural network model to simulate the categorization of ten English vowels and their production, showing the learning capability of speech perception and production. We further extended this modified DIVA model to show its capability to categorize six Chinese vowels (/a/, /i/, /u/, /e/, /o/, /y/) and their production. Finally, this study proposes further developments and related discussion of this speech perception model and its clinical application.
Published by the Acoustical Society of America through the American Institute of Physics. © 2013 Acoustical Society of America [DOI: 10.1121/1.4799006]
Received 22 Jan 2013; published 2 Jun 2013
Proceedings of Meetings on Acoustics, Vol. 19, 060293 (2013)

INTRODUCTION

Psycholinguistic experiments on human subjects were often used to study speech perception in the past. For example, Peterson and Barney (1952) indicated that the boundaries of different vowels may initially be inherent in the auditory processing of speech. Similar experiments on human subjects were also conducted to investigate vowel categories and category boundaries (Eimas, 1975; Streeter, 1976). Based on the results of such psycholinguistic experiments, the perceptual magnet effect is an important factor in speech development: it produces a warped auditory space around the corresponding phonemes. Additionally, several phonetic studies showed that the perceptual magnet effect influences phonetic categories (Kuhl, 1991; Iverson and Kuhl, 1995; Sussman and Lauckner-Morano, 1995). Previous studies often utilized psychological experiments on human subjects to interpret the theoretical framework of phonetic perception; the purpose of this study, in contrast, was to develop a neural network model to simulate speech perception. Early speech perception models were mainly acoustic-analysis-based speech recognition systems (Juang et al., 1986; Rabiner et al., 1989). The most popular and promising of these, the Hidden Markov Model (HMM), utilizes extracted phonetic features and statistical analysis to implement speech recognition. This type of speech perception model often needs a large database for training in order to reach the expected recognition rates and is considered less flexible than human perception (Benzeghiba et al., 2007). To narrow this gap, neural network models have been developed in computational neuroscience to model the nervous system (i.e., the brain) and neural signal processing. Many neural network models are useful for describing biological behaviors (Reiss, 1964; Hoshino et al., 2002) and representing their properties.
In general, it is difficult for a mathematical model to represent sensory mapping and neural properties. Kohonen (1982) showed that a self-organizing feature map (SOFM) can allocate the afferent weight vectors of map cells according to the distribution of the input patterns used to train the network. Furthermore, Kohonen (1990; 1993) proposed the self-organizing map (SOM) network model, which provides parallel signal processing to implement the self-organizing mechanism of the brain and describe the sensory mechanism. Physiologically, the self-organizing mechanism is defined as the brain categorizing unknown external stimuli based on their captured features. Kohonen (1998) used the SOM model to project Finnish symbol strings onto neurons and simulated a phonemic categorization mechanism to show the model's ability. Guenther and Gjaja (1996) simulated the perceptual magnet effect (Kuhl, 1991) with the SOM model in the DIVA (Directions into Velocities of Articulators) model and showed that this effect affects the recognition rate more in identifying prototype vowels than non-prototype vowels. However, the auditory function of the original DIVA model (Guenther et al., 1998; 2006) provided no perceptual function: it was simply given three pairs of input ranges representing the first three formant frequencies of the speech sounds, which controlled the simulated articulatory movements of the speech organs. In contrast to Guenther et al.'s studies, this study provides a neural network approach to simulating the phonetic experiments related to acoustic characterization and phoneme categorization. We modified the original SOM algorithm to simulate the perceptual magnet effect and produced the auditory perceptual space for English vowels. This speech perception model was combined with the DIVA model to simulate the categorization of ten English vowels and their production, demonstrating the learning capability of speech perception and production.
We further extended this modified DIVA model to show its capability to categorize six Chinese vowels (/a/, /i/, /u/, /e/, /o/, /y/) and their production.

METHODS

The original SOM neural network model is a feedforward neural model that includes only input and output layers. Each neuron of the output layer has forward and lateral connections. The network uses Kohonen's learning rule (winner-take-all) and repeatedly updates the synaptic weights until the topological structure is formed. In this study, a modified SOM algorithm (Wu et al., submitted) was proposed to produce the auditory perceptual space of the vowels. The modified SOM network model with unsupervised learning was used to determine the phonetic categories of phonemes according to the formant frequencies of the vowels. In the modified SOM model, the formant frequencies of the phonemes were used as the input vectors, and the outputs of the model were represented as responses in the auditory map. The synaptic weights of the forward connections between the input and output layers were adaptive, but the weights of the lateral connections among neurons of the output layer

were fixed. The similarity of a phonemic sound ($S$) is determined by the Euclidean distance between the input ($X$) and weight ($W$) vectors (see Eq. 1); the smaller the Euclidean distance, the higher the similarity. The neuron with the highest similarity is chosen as the winner neuron.

$$ S_i = \lVert X - W_i(t) \rVert \qquad (1) $$

To improve the neural activity representations, Eq. 2 is used to derive a graded neural activity value from the similarity:

$$ y_i(t) = e^{-S_i^2/\sigma^2} \qquad (2) $$

where $\sigma$ is the similarity effective range. The update rule is modified with the activity level so that winner neurons obtain stronger responses (see Eq. 3):

$$ W_i(t+1) = W_i(t) + \alpha(t)\,\Lambda_c(t)\,[X - W_i(t)]\,y_i(t) \qquad (3) $$

where $\alpha(t) = \alpha_0 e^{-t/\tau}$ is the learning rate ($\tau$: learning time) and $\Lambda_c(t) = \exp(-r_c^2/R_c^2)$ is the neighborhood function, with $r_c$ the distance between the neighboring and winner neuron and $R_c$ the effective neighborhood range.

Simulation I: Categorization of Ten English Vowels

The modified SOM model with the aforementioned learning rules was used to develop an auditory perception model for the perceptual simulations described in Wu et al. (submitted). This speech perception model was combined with the DIVA model to simulate the categorization of ten English vowels and their production, demonstrating the learning capability of speech perception and production. In this simulation, one hundred speech sounds for each English vowel were randomly generated and used as the input data for the learning process, which is analogous to infant babbling. These ten English vowels were based on the vowels and their first three formant frequencies from the study of Peterson and Barney (as shown in TABLE 1 and FIGURE 1). One thousand neurons were used in this simulation. After the learning process, the speech sounds shown in FIGURE 1 were used as test sounds for the speech perception model to demonstrate its learning capability.
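As a concrete illustration, one update step of Eqs. 1-3 can be sketched in Python. The one-dimensional neuron grid and all hyperparameter values (α₀, τ, σ, R_c) are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

def som_step(X, W, t, alpha0=0.5, tau=500.0, sigma=200.0, Rc=2.0, grid=None):
    """One update step of the modified SOM (Eqs. 1-3).

    X : input formant vector, shape (d,)
    W : synaptic weight matrix, shape (n_neurons, d)
    t : current learning step
    Returns the updated weights and the winner neuron index.
    """
    if grid is None:
        # Assume neurons laid out on a 1-D grid for the neighborhood distance.
        grid = np.arange(len(W), dtype=float)

    # Eq. 1: similarity = Euclidean distance; the smallest distance wins.
    S = np.linalg.norm(X - W, axis=1)
    c = int(np.argmin(S))

    # Eq. 2: graded activity level derived from the similarity.
    y = np.exp(-S**2 / sigma**2)

    # Eq. 3: activity-weighted update around the winner neuron.
    alpha = alpha0 * np.exp(-t / tau)       # decaying learning rate alpha(t)
    r = np.abs(grid - grid[c])              # distance r_c to the winner
    Lam = np.exp(-r**2 / Rc**2)             # neighborhood function Lambda_c(t)
    W = W + alpha * (Lam * y)[:, None] * (X - W)
    return W, c
```

Each step pulls the winner (and, more weakly, its neighbors) toward the input; the activity factor y makes inputs close to an existing prototype move it more strongly, which is how the warping of the perceptual space arises.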
FIGURE 2 displays the implied ten English vowel categories intended to be perceived with the test sounds. The modified DIVA model could then be used to produce the ten English vowels.

Simulation II: Categorization of Six Chinese Vowels

We further extended this modified DIVA model to show its capability to categorize six Chinese vowels (/a/, /i/, /u/, /e/, /o/, /y/) and their production. In the second simulation, one hundred speech sounds for each Chinese vowel were randomly generated and used as the input data for the learning process, again analogous to infant babbling. These six Chinese vowels were based on the vowels and their first three average formant frequencies from a study of 24 male college students (as shown in TABLE 2 and FIGURE 3). One thousand neurons were used in this simulation. The acoustic data were recorded with a CSL Model 4100 (Kay Elemetrics Corp., Lincoln Park, NJ, USA). After the learning process, speech sounds generated in the same way as in the first simulation were used as the test sounds for the speech perception model to demonstrate its learning capability in Chinese. FIGURE 3 displays the recorded six Chinese vowel categories intended to be perceived with the test sounds. The modified DIVA model could then be used to produce the six Chinese vowels.
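The babbling-style input generation described above (one hundred randomly generated sounds per vowel around published formant values) might be sketched as follows. The uniform 50 Hz jitter is an assumption inspired by the 50 Hz interval mentioned for FIGURE 1, and only two prototype rows from TABLE 1 are shown; the study uses all ten Peterson-Barney vowels.

```python
import numpy as np

# Two illustrative prototype rows from TABLE 1 (first three formants, Hz);
# the remaining vowels would be added the same way.
PROTOTYPES = {
    "vowel_1": (270.0, 2290.0, 3010.0),
    "vowel_2": (660.0, 1720.0, 2410.0),
}

def babble(prototypes, n_per_vowel=100, jitter_hz=50.0, seed=0):
    """Generate random formant triples around each prototype vowel.

    Returns an array of shape (n_vowels * n_per_vowel, 3), analogous
    to the babbled training sounds used for the SOM learning process.
    """
    rng = np.random.default_rng(seed)
    sounds = []
    for f1, f2, f3 in prototypes.values():
        base = np.array([f1, f2, f3])
        sounds.append(base + rng.uniform(-jitter_hz, jitter_hz,
                                         size=(n_per_vowel, 3)))
    return np.vstack(sounds)
```

With all ten English prototypes this yields the one thousand training sounds of Simulation I; with the six Chinese prototypes it yields the six hundred sounds of Simulation II.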

TABLE 1. The first three formant frequencies of the ten English vowels.

F1 (Hz)  F2 (Hz)  F3 (Hz)
270      2290     3010
390      1990     2550
660      1720     2410
519      1619     2411
392      964      2233
662      1251     2278
541      1097     2239
526      910      2108
490      1350     1690
677      1419     2241

FIGURE 1. One thousand speech sounds for the learning process, with a 50 Hz interval.

FIGURE 2. Implied ten English vowels for categorization.

TABLE 2. The first three average formant frequencies of the six Chinese vowels from 24 male college students.

     F1 (Hz)  F2 (Hz)  F3 (Hz)
/a/  793      1258     2649
/i/  285      2224     3026
/u/  318      772      2559
/e/  485      1968     2676
/o/  500      892      2727
/y/  284      1928     2379
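As a small illustration of how the tabulated prototypes relate to the Euclidean-distance criterion of Eq. 1, a direct nearest-prototype lookup over two rows of TABLE 2 (the /a/ and /i/ entries) might look like this; the remaining four vowels would be added the same way.

```python
import math

# Two rows of TABLE 2: average formant frequencies (Hz) of the prototypes.
CHINESE_PROTOTYPES = {
    "/a/": (793.0, 1258.0, 2649.0),
    "/i/": (285.0, 2224.0, 3026.0),
}

def nearest_vowel(f1, f2, f3, prototypes=CHINESE_PROTOTYPES):
    """Return the vowel whose prototype is closest in formant space.

    This applies the same Euclidean-distance criterion as the SOM's
    winner-take-all rule (Eq. 1) directly to the table entries.
    """
    return min(prototypes,
               key=lambda v: math.dist((f1, f2, f3), prototypes[v]))

print(nearest_vowel(800, 1300, 2600))  # a sound near the /a/ prototype -> "/a/"
```

The trained SOM effectively learns such a partition from the babbled input alone, without being given the prototype table.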

FIGURE 3. Six Chinese vowels (/a/, /i/, /u/, /e/, /o/, /y/) from 24 male college students for categorization.

RESULTS AND DISCUSSION

Simulation I: Categorization and Production of Ten English Vowels

In this simulation, one thousand speech sounds (shown in FIGURE 1) were used as the input data for the learning process. After the learning process, the perceived speech sounds, shown in FIGURE 4(a) with F1 and F2 as the x and y axes respectively, display 10 categories. As shown in FIGURE 4(a), the speech perception model demonstrates its learning capability. To show the speech perception space more clearly, the perceived formant frequencies were further filtered and presented in FIGURE 4(b), where the speech perception space is easily recognized. The modified DIVA model could then be used to produce the ten English vowels. For example, FIGURE 5 shows the chosen vowel /a/ with its first three formant frequencies generated by the modified DIVA model.

FIGURE 4. The speech perception space after the learning process (a); and the speech perception space after the filtering process (b).

FIGURE 5. First three formant frequencies of the English vowel /a/ generated by the modified DIVA model.

The first three formant frequencies displayed in FIGURE 5 are F1 = 679 Hz, F2 = 1220 Hz, and F3 = 2281 Hz. The first three formant frequencies generated by the original DIVA model are F1 = 677 Hz, F2 = 1238 Hz, and F3 = 2275 Hz. These two sets of formant frequencies indicate that the modified DIVA model, with its additional speech perception function, maintains the original model's functions.

Simulation II: Categorization and Production of Six Chinese Vowels

In this simulation, six hundred speech sounds were used as the input data for the learning process. After the learning process, the speech sounds shown in FIGURE 6(a) display 6 categories after the filtering process on the F1-F2 plane, where the speech perception space is easily recognized. As shown in FIGURE 6(a), the speech perception model demonstrates its learning capability. The modified DIVA model could then be used to produce the six Chinese vowels. For example, FIGURE 6(b) presents the chosen vowel /y/ (marked by the red cursor lines in FIGURE 6(a)) with its first three formant frequencies generated by the modified DIVA model. However, we could not train the modified DIVA model to produce the correct Chinese vowels /u/ and /o/. One possible reason is that the original DIVA model focuses on the first two formant frequencies when adjusting its articulators to produce the vocal tract shape needed to generate the right Chinese sounds. In addition to this problem, we also found that the original DIVA model could not produce the Chinese tones; we have modified the original DIVA model to generate the four Chinese tones and published that work elsewhere (Wu and Wang, 2012).

FIGURE 6. The speech perception space after the learning process (a); and the first three formant frequencies of the Chinese vowel /y/, marked by the red cursor lines in (a), generated by the modified DIVA model (b).

CONCLUSION

This study investigated the neural mechanism for Chinese vowel perception using a neural network model.
The neural network model with unsupervised learning was used to determine the phonetic categories of phonemes according to the formant frequencies of the vowels. The modified SOM algorithm was proposed to produce the auditory perceptual space of English and Chinese vowels. This speech perception model was combined with the DIVA model to simulate the categorization of ten English vowels and their production. We further extended this modified DIVA model to categorize six Chinese vowels and their production, demonstrating its learning capability for speech perception and production.

ACKNOWLEDGMENTS

This research was supported by the National Science Council of Taiwan under grant number NSC 101-2221-E-008-005.

REFERENCES

Peterson, G. E., and Barney, H. L. (1952). "Control methods used in a study of the vowels," J. Acoust. Soc. Am. 24, 175-184.
Eimas, P. D. (1975). "Auditory and phonetic coding of the cues for speech: Discrimination of the /r-l/ distinction by young infants," Percept. Psychophys. 18, 341-347.

Streeter, L. A. (1976). "Language perception of 2-month-old infants shows effects of both innate mechanisms and experience," Nature 259, 39-41.
Kuhl, P. K. (1991). "Human adults and human infants show a 'perceptual magnet effect' for the prototypes of speech categories, monkeys do not," Percept. Psychophys. 50, 93-107.
Iverson, P., and Kuhl, P. K. (1995). "Mapping the perceptual magnet effect for speech using signal detection theory and multidimensional scaling," J. Acoust. Soc. Am. 97, 553-562.
Sussman, J. E., and Lauckner-Morano, V. J. (1995). "Further tests of the 'perceptual magnet effect' in the perception of [i]: Identification and change/no-change discrimination," J. Acoust. Soc. Am. 97, 539-552.
Reiss, R. F. (1964). Neural Theory and Modeling (Stanford Univ. Press, Stanford, CA).
Rosenblatt, F. (1960). "Perceptron simulation experiments," Proceedings of the IRE 48, 301-309.
Kohonen, T., and Somervuo, P. (1998). "Self-organizing maps of symbol strings," Neurocomputing 21, 19-30.
Kohonen, T. (2003). "Self-organized maps of sensory events," Philosophical Transactions: Mathematical, Physical and Engineering Sciences 361, 1177-1186.
Guenther, F. H., and Gjaja, M. N. (1996). "The perceptual magnet effect as an emergent property of neural map formation," J. Acoust. Soc. Am. 100, 1111-1121.
Guenther, F. H., Hampson, M., and Johnson, D. (1998). "A theoretical investigation of reference frames for the planning of speech movements," Psychol. Rev. 105, 611-633.
Guenther, F. H., Ghosh, S. S., and Tourville, J. A. (2006). "Neural modeling and imaging of the cortical interactions underlying syllable production," Brain Lang. 96, 280-301.
Juang, B.-H., Rabiner, L. R., and Wilpon, J. G. (1986). "On the use of bandpass liftering in speech recognition," ICASSP-86 Proceedings, Tokyo, April, pp. 765-768.
Rabiner, L. R., Lee, C. H., Juang, B. H., and Wilpon, J. G. (1989). "HMM clustering for connected word recognition,"
Acoustics, Speech, and Signal Processing, IEEE, pp. 405-408.
Benzeghiba, M., De Mori, R., Deroo, O., Dupont, S., Erbes, T., Jouvet, D., et al. (2007). "Automatic speech recognition and speech variability: A review," Speech Communication 47, 763-786.
Wu, C.-M., Wang, T.-W., and Li, M.-H. (submitted). "Development of a neural-network-based auditory model for the study of proficiency on perception."
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory (Wiley, New York).
Wu, C.-M., and Wang, T.-W. (2012). "Study of neural correlates of Mandarin tonal production with neural network model," Journal of Medical and Biological Engineering 32(3), 169-174.