SUBJECTIVE EVALUATION FOR HMM-BASED SPEECH-TO-LIP MOVEMENT SYNTHESIS

Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano
Graduate School of Information Science, Nara Institute of Science & Technology
8916-5 Takayama, Ikoma, Nara 630-01, JAPAN
Tel: +81-743-72-5287, Fax: +81-743-72-5289
e-mail: {eli-y,nakamura,shikano}@is.aist-nara.ac.jp

ABSTRACT

An audio-visual intelligibility score is generally used as an evaluation measure in visual speech synthesis. In particular, the intelligibility score of a talking head reflects the accuracy of its facial model [1][2]. Facial modeling has two stages: construction of realistic faces and realization of dynamic, human-like motions. We focus on lip movement synthesis from input acoustic speech to realize dynamic motions. The goal of our research is to synthesize lip movements natural enough to be lip-read. In previous research, we proposed a lip movement synthesis method using HMMs that incorporates a forward coarticulation effect and confirmed its effectiveness through objective evaluation tests. In this paper, subjective evaluation tests are performed: an intelligibility test and an acceptability test.

1. INTRODUCTION

In research on human perception, the integration of the auditory and visual modalities has been investigated through tests of audio, visual, and audio-visual speech intelligibility. These intelligibility scores are evaluated not only with a natural human face but also with a synthetic talking face. In particular, the intelligibility score of a talking head reflects the accuracy of the facial model. The accurate synthesis of talking heads requires elaborate re-synchronization with the input acoustic speech signal, i.e., lip synchronization. Lip synchronization of talking heads is normally realized in text-to-(audio-visual)-speech systems, where the visual parameters for lip movement synthesis are prepared in advance. However, the visual parameters can also be synthesized from the input acoustic speech signal using the correlation between auditory speech and visual facial parameters. This speech-to-lip movement synthesis can save transmission bit rate for facial images in multimedia communication. Thus we focus on lip movement synthesis that realizes dynamic motions from input acoustic speech. The goal of our research is to synthesize lip movements natural enough to be lip-read. If such elaborate lip motions can be synthesized, hearing-impaired people may be able to recover auditory information by reading the visualized lip motion.

Mapping algorithms from acoustic speech signals to lip movements have been reported based on Vector Quantization (VQ) [3][4] and Gaussian mixtures [5]. These methods require extensive training sets to account for context information; the required audio-visual data grows in proportion to the time span covered over the preceding or succeeding frames. A different approach uses speech recognition techniques, such as phonetic segmentation [6] and Hidden Markov Models (HMMs) [7][8][9][10]. These methods convert the acoustic signal into lip parameters based on information such as a phonetic segment, a word, a phoneme, or an acoustic event. The HMM-based method has the advantage that explicit phonetic information is available to handle coarticulation effects caused by surrounding phoneme contexts. In speech recognition, forward or backward coarticulation effects are generally dealt with by using biphone or triphone HMMs.
However, these biphone and triphone models require a large amount of speech data and extensive training. In previous work, we proposed an HMM-based lip movement synthesis method that is driven by a speech recognition process and that incorporates lip movement coarticulation effects.

Moreover, it was verified through objective tests that the proposed method is more effective than the conventional VQ method. However, it is important to confirm this performance not only by objective evaluation but also by subjective evaluation. Therefore, subjective evaluation tests are performed in this paper. In addition to the intelligibility test, we adopt an acceptability test, in line with the subjective evaluation of audio speech synthesis.

2. SPEECH-TO-LIP MOVEMENT SYNTHESIS METHODS

We introduce the speech-to-lip movement synthesis methods before describing the subjective evaluation tests. We describe the conventional VQ-based and HMM-based methods, then the proposed context-dependent HMM-based method.

2.1 VQ Method

The VQ-based method maps a VQ codeword vector of the input acoustic speech signal to visual parameters frame by frame. The method generates both audio and visual speech VQ codebooks from training data, and each audio VQ code is assigned to an output visual-parameter VQ code through training. In the synthesis process, an acoustic codeword vector for the input auditory speech is selected to minimize the distance between the acoustic speech parameters and the codeword vectors of the acoustic speech VQ codebook. The output visual parameters are then retrieved as the visual speech VQ code associated with the selected acoustic codeword vector, and the visualized lip images are reconstructed from the visual parameters.

2.2 HMM Method

The HMM-based method maps the input acoustic speech signal to lip parameters through HMM states determined by the Viterbi alignment. The Viterbi alignment assigns each input frame to the optimal HMM state by maximizing the likelihood of the input acoustic speech. In this paper, phoneme HMMs are used so that combinations of phoneme HMMs can produce any utterance sequence. The HMM-based method is composed of two processes: a decoding process that converts an input acoustic speech signal into the most likely HMM state sequence by the Viterbi alignment, and a table look-up process that converts each HMM state into corresponding lip parameters. The lip parameters for each HMM state in the look-up table are also trained using the Viterbi alignment.
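To make this two-stage structure concrete, the sketch below decodes a Viterbi state path and replaces each state with its stored lip parameters. It is an illustrative reconstruction, not the authors' implementation; the transition matrix, frame likelihoods, and the state-to-lip table are random placeholders.

```python
import numpy as np

# Minimal sketch of the HMM method: (1) Viterbi-decode an acoustic feature
# sequence into an HMM state sequence, (2) replace each state with the lip
# parameters stored for it in a look-up table. All model quantities below are
# hypothetical placeholders, not values from the paper.

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial log-probs, log_A: (S, S) transition log-probs,
    log_B: (T, S) per-frame state log-likelihoods. Returns the best state path."""
    T, S = log_B.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (S, S): prev -> current
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

def synthesize_lips(log_pi, log_A, log_B, state_to_lip):
    """state_to_lip: (S, D) mean lip parameters per HMM state (the look-up table)."""
    states = viterbi(log_pi, log_A, log_B)
    return state_to_lip[states]                         # (T, D) lip trajectory

# Toy usage with random numbers standing in for trained models and features.
rng = np.random.default_rng(0)
S, T, D = 5, 20, 3                                      # states, frames, lip dims
log_pi = np.log(np.full(S, 1.0 / S))
log_A = np.log(rng.dirichlet(np.ones(S), size=S))
log_B = np.log(rng.dirichlet(np.ones(S), size=T))       # fake frame likelihoods
state_to_lip = rng.normal(size=(S, D))                  # fake look-up table
print(synthesize_lips(log_pi, log_A, log_B, state_to_lip).shape)   # (20, 3)
```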
2.3 SV-HMM Method

We have proposed a new HMM-based method that takes the succeeding viseme context into account, called the Succeeding-Viseme-HMM-based (SV-HMM-based) method. This method produces visual parameters by incorporating coarticulation information. In speech recognition, coarticulation effects are generally handled with biphone or triphone HMMs, which are modeled depending on the preceding and succeeding phoneme contexts. However, the number of biphone or triphone HMMs is the square or cube of the number of monophone HMMs, so training all models requires large amounts of acoustic speech data. Moreover, the collection and preparation of such large corpora of synchronized audio-visual data is quite expensive. The proposed method, on the other hand, continues to use monophone (context-independent) HMMs but synthesizes visual parameters with context dependency: it generates context-dependent visual parameters by looking ahead along the context-independent HMM state sequence.

Although coarticulation effects are bidirectional, the focus in this paper is on forward coarticulation, because estimation errors in the HMM-based method are affected more by forward than by backward coarticulation. Moreover, the use of visemes reduces the number of context-dependent lip parameters. Visemes are defined as the distinguishable postures of the visible mouth articulation associated with speech production; mouth postures sharing the same place of articulation cannot be distinguished.

Visemes of the succeeding phonemes have a strong coarticulation effect on the current visual parameters. The training algorithm of the SV-HMM-based method differs from the HMM-based method with context-independent lip parameters only in its use of the viseme classes of the succeeding phonemes. The training and synthesis steps are as follows (a sketch of the resulting look-up table appears below).

Training Algorithm
1. Prepare and parameterize data of synchronized acoustic speech signals and 3D lip position data.
2. Train the acoustic phoneme HMMs using the training acoustic speech parameters.
3. Align the acoustic speech parameters to HMM state sequences using forced Viterbi alignment.
4. Classify all frames into viseme classes by looking ahead to the succeeding phoneme contexts.
5. Average the synchronous lip parameters over the frames associated with the same HMM state and the same viseme class of the succeeding phoneme context.

Synthesis Algorithm
1. Align the input acoustic speech parameters to an HMM state sequence using the Viterbi alignment.
2. Determine the viseme class of the succeeding phoneme context at each frame.
3. Retrieve the output lip parameters associated with the HMM state and the viseme class.
4. Concatenate the retrieved lip parameters into a lip movement sequence to synthesize the visualized lips.
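The sketch below shows one plausible realization of training steps 4-5 and synthesis step 3: lip parameters are averaged per (HMM state, succeeding-viseme class) pair and retrieved by the same key at synthesis time. The data layout and the fallback to a context-independent entry are assumptions made for illustration, not the paper's code.

```python
import numpy as np
from collections import defaultdict

def train_table(states, visemes, lips):
    """states, visemes: length-T integer sequences from the forced alignment and
    the look-ahead viseme classification; lips: (T, D) synchronous lip parameters."""
    buckets = defaultdict(list)
    for s, v, y in zip(states, visemes, lips):
        buckets[(s, v)].append(y)
        buckets[(s, None)].append(y)      # context-independent fallback entry
    return {key: np.mean(vals, axis=0) for key, vals in buckets.items()}

def synthesize(states, visemes, table):
    out = []
    for s, v in zip(states, visemes):
        # Use the context-dependent entry when it was seen in training,
        # otherwise fall back to the per-state average.
        out.append(table.get((s, v), table[(s, None)]))
    return np.stack(out)                  # (T, D) lip parameter trajectory

# Toy usage with fabricated alignments in place of real Viterbi output.
rng = np.random.default_rng(1)
T, D = 200, 3
states = rng.integers(0, 10, size=T)
visemes = rng.integers(0, 4, size=T)
lips = rng.normal(size=(T, D))
table = train_table(states, visemes, lips)
print(synthesize(states, visemes, table).shape)   # (200, 3)
```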
3. SUBJECTIVE EVALUATION EXPERIMENT

In our previous objective evaluation, the Euclidean error distance gave an objective measure of the distance between the synthesized and the original lip parameters. However, it is not certain whether these values reflect the error perceived by a viewer. Therefore we performed subjective tests to evaluate the quality of the synthesized lip motion. In the field of speech processing, standard subjective testing consists of two procedures: the intelligibility test and the acceptability test. The intelligibility test quantifies the amount of transmitted information perceived by the subjects; the acceptability test deals with the perceivers' judgements of naturalness. Both of these preliminary tests were used to assess the quality of the lip movement synthesis.

3.1 Intelligibility Test

In this paper, the intelligibility score is defined as the percentage of all syllables that were correctly identified. The test was performed by four Japanese subjects with normal hearing who were unfamiliar with lip-reading.

3.1.1 Method

Subjects: Ten young adults participated in the experiment. The subjects were all native speakers of Japanese with normal hearing and normal or corrected vision. They were volunteer students at NAIST.

Stimuli: The stimuli were CVCV nonsense Japanese words, selected from the 102-word disyllabic nonsense word list reported by the project for standardizing the evaluation of the intelligibility of Japanese synthetic speech [12]. The second syllable in the list is drawn from all Japanese syllables whose vowels are /a/, /i/, or /u/. The vowel of the first syllable is the same as that of the second syllable, except for eleven C/a/C/i/ items. The lip movement synthesis methods are evaluated on the second consonant of the disyllable, which has been verified to reflect well the intelligibility of normal sentences [13]. The 102 nonsense words were recorded with a video camera for this subjective evaluation. Another 326-word set of audio-visual data, used to construct the HMMs and VQ codebooks, was recorded with the 3D position sensor system; these data consist of the 3D positions of the markers. The test audio data were digitized at 12 kHz with 16-bit resolution. The length of each word is almost 2 seconds. The trial words were chosen from the nonsense word list with attention to the distinctiveness of the visual lip shapes.

Audio-visual perception studies of Japanese show that Japanese speakers rely less on visual speech information than English speakers do [14][15]. However, another report describes that the intelligibility of the bilabial consonants alone reaches 90% even for Japanese listeners. Moreover, Sekiyama et al. [16] have investigated a ranking of consonants by ease of lip-reading; the result shows that the consonants /w/, /p/, /h/, /py/, and /m/ are easy to recognize. We selected 9 stimuli from the 102-word list for the subjects. The consonants were selected from three categories: /w/, the bilabial consonants, and other, non-bilabial consonants that are unfamiliar for lip-reading. The synthesized visual parameters are visualized with the ICP 3D lip model [11]. The orientation of the lips was fixed frontally. The lips were synthesized at 125 Hz; the actual 25 Hz output frames were constructed by averaging the lip parameters over every five frames.

Experimental Design: The presentations are classified into five conditions: auditory speech only, and audio-visual presentations with lip image sequences from the natural human visual parameters and from the synthetic VQ, HMM, and SV-HMM visual parameters. In the intelligibility test, the stimuli for the HMM and SV-HMM methods are prepared without the forced Viterbi alignment. Noise-contaminated speech is presented in order to degrade the estimation from the audio speech: white Gaussian noise was added to the acoustic speech signal. The signal-to-noise ratios were chosen to match the conditions of the ICP intelligibility tests [11]: -18, -12, -6, 0, and 6 dB. An additional condition with an SNR of negative infinity (no audible speech) was added to evaluate visual-only speech synthesis. For presentation, the trial words are repeated over the synthesis methods and signal-to-noise ratios, and the words are arranged in random order.
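As a concrete illustration of this degradation step (a sketch under assumed signal conventions, not the authors' exact setup), white Gaussian noise can be added at a target SNR as follows; the placeholder waveform stands in for a recorded stimulus.

```python
import numpy as np

def add_white_noise(speech, snr_db, rng=None):
    """Return speech degraded by additive white Gaussian noise at snr_db (dB).
    The visual-only condition (SNR of -inf) corresponds to presenting no audio
    at all and is not produced by this function."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(speech ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=speech.shape)
    return speech + noise

# Toy usage: a synthetic 12 kHz waveform standing in for a 2-second stimulus.
fs = 12000
t = np.arange(2 * fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)          # placeholder, not real speech
for snr in (-18, -12, -6, 0, 6):
    noisy = add_white_noise(speech, snr, np.random.default_rng(0))
    measured = 10 * np.log10(np.mean(speech ** 2) / np.mean((noisy - speech) ** 2))
    print(f"target {snr:+d} dB, measured {measured:+.1f} dB")
```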

Procedure: Subjects viewed the stimuli on a color monitor. Stimulus presentation and response collection were controlled by a computer. Subjects started each audio-visual stimulus themselves by pressing the Enter or Space key. They were asked to look at the visual speech and to listen to the audio utterances without concentrating on them.

[Figure 1: Result of the intelligibility tests. Intelligibility score (%) versus signal-to-noise ratio (dB) for the A speech, AV original, AV VQ, AV HMM, and AV SV-HMM conditions.]

3.1.2 Results

Fig. 1 shows the /C-V/ intelligibility scores for the original and the SV-HMM-based method at different levels of acoustic degradation. The auditory-visual stimuli synthesized from the original lip positions and by the SV-HMM-based method gave higher intelligibility scores than the purely auditory stimulus. This suggests that both natural and model-based lip synthesis enhanced intelligibility.

3.2 Acceptability Test

3.2.1 Method

In the acceptability test, we investigate the differences among the synthesis methods by focusing on the naturalness of the synthesized lip movement. Indicators of naturalness may be the smoothness of the synthesized lip parameters or the synchronization between the lip shapes and the audio speech. The acceptability score is evaluated by the criterion of whether the synthesized lip movement is as natural as human lip motion.

Subjects: The subjects for the acceptability tests were the same group as in the intelligibility tests.

Stimuli: The test utterances were composed of 3 Japanese words from the 3D position data, randomly selected out of 100 test words. The 3 words were each presented twice, in random order. The synthetic visualized lips were created using the same ICP 3D lip model software.

Experimental Design: The presentations are classified by synthesis method: lip image sequences from the natural human visual parameters, synthetic VQ, synthetic HMM with forced Viterbi alignment, synthetic HMM, synthetic SV-HMM with forced Viterbi alignment, and synthetic SV-HMM visual parameters. In the same manner as the intelligibility test, the presentation order of the words was chosen at random. So that subjects could picture the corresponding human lip motion, the clean acoustic signal was provided in addition to the visualized lips. The Mean Opinion Score (MOS) (IEEE Recommendation, 1969) was used as the measure of the naturalness of the synthesized lip movement. The MOS is the most widely used subjective quality measure for evaluating telephone systems and speech coding algorithms. The subjects assigned scores on a five-point scale defined by Japanese category labels analogous to excellent, good, fair, poor, and bad.

Procedure: Subjects viewed the stimuli on a color monitor. Stimulus presentation and response collection were controlled by a computer. Subjects started each audio-visual stimulus themselves by pressing the Enter or Space key. In evaluating the lip movement synthesis, subjects were instructed to mark the displayed lip movement as excellent when it was natural enough to be human lip movement.

3.2.2 Results

Fig. 2 shows the mean opinion scores. The scores for the VQ-based method, the HMM-based method, and the proposed SV-HMM-based method were 2.97, 3.42, and 3.17, respectively. However, an F-test shows no significant difference in mean MOS across the synthesis methods. This result might be caused by the very small number of test utterances and an insufficient number of subjects. Moreover, the visualized lip model is somewhat limited, as it uses a cartoon-like animation whose parameters are fine-tuned to a specific speaker. In the future, the acceptability test will be further investigated while improving the data and the visualization tool.

[Figure 2: Result of the acceptability tests. Mean opinion scores for the Original, VQ, HMM, forced-alignment HMM, SV-HMM, and forced-alignment SV-HMM conditions, ranging from 2.97 to 3.60.]
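For reference, an F-test of this kind (a one-way analysis of variance across methods) could be run as in the sketch below; the per-subject ratings are fabricated placeholders whose means merely echo the reported scores, not the experimental data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-subject MOS ratings for three synthesis methods; the means
# roughly echo the reported scores but the individual values are invented.
rng = np.random.default_rng(2)
mos_vq    = np.clip(rng.normal(2.97, 0.8, size=10), 1, 5)
mos_hmm   = np.clip(rng.normal(3.42, 0.8, size=10), 1, 5)
mos_svhmm = np.clip(rng.normal(3.17, 0.8, size=10), 1, 5)

f_stat, p_value = f_oneway(mos_vq, mos_hmm, mos_svhmm)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g., 0.05) would match the
# finding that the differences in mean MOS are not significant.
```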
4. CONCLUSION

In this paper, subjective evaluation tests were performed for speech-to-lip movement synthesis: an intelligibility test and an acceptability test. The subjective evaluation did not show the effect of the proposed method clearly; this will be investigated under better experimental conditions in the future. In synthesis, the HMM-based method has the intrinsic difficulty that the precision of lip movement synthesis depends on the accuracy of the Viterbi alignment. The Viterbi alignment deterministically assigns a single HMM state to each input frame, so incorrectly decoded frames in the HMM state sequence may give rise to wrong lip shapes. This problem could be addressed by extending the Viterbi algorithm to the Forward-Backward algorithm, which can take all HMM state sequence probabilities into account.
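This Forward-Backward extension is only suggested here, not implemented; as a hedged sketch of the idea, the per-state lip parameters could be blended with the frame-level state posteriors instead of being selected by a single Viterbi path. The model quantities reuse the placeholder conventions of the earlier sketches.

```python
import numpy as np
from scipy.special import logsumexp

def state_posteriors(log_pi, log_A, log_B):
    """Forward-backward state posteriors gamma[t, s].
    log_pi: (S,) initial log-probs, log_A: (S, S) transition log-probs,
    log_B: (T, S) per-frame state log-likelihoods."""
    T, S = log_B.shape
    log_alpha = np.zeros((T, S))
    log_beta = np.zeros((T, S))
    log_alpha[0] = log_pi + log_B[0]
    for t in range(1, T):
        log_alpha[t] = log_B[t] + logsumexp(log_alpha[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        log_beta[t] = logsumexp(log_A + (log_B[t + 1] + log_beta[t + 1])[None, :], axis=1)
    log_gamma = log_alpha + log_beta
    log_gamma -= logsumexp(log_gamma, axis=1, keepdims=True)
    return np.exp(log_gamma)

def soft_lip_synthesis(log_pi, log_A, log_B, state_to_lip):
    """Blend the per-state lip parameters with their posteriors instead of
    committing to a single Viterbi state per frame."""
    gamma = state_posteriors(log_pi, log_A, log_B)   # (T, S)
    return gamma @ state_to_lip                      # (T, D) lip trajectory
```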

5. REFERENCES

1. Massaro, D.W., "Perceiving Talking Faces", MIT Press (1997).
2. Le Goff, B., Guiard-Marigny, T. and Benoit, C., "Analysis-Synthesis and Intelligibility of a Talking Face", in "Progress in Speech Synthesis", J. van Santen et al., eds., Springer-Verlag (1996).
3. Morishima, S. and Harashima, H.: A Media Conversion from Speech to Facial Image for Intelligent Man-Machine Interface, IEEE Journal on Selected Areas in Communications, Vol. 9, No. 4, pp. 594-600 (1991).
4. Lavagetto, F.: Converting Speech into Lip Movements: A Multimedia Telephone for Hard of Hearing People, IEEE Trans. on Rehabilitation Engineering, Vol. 3, No. 1, pp. 90-102 (1995).
5. Rao, R.R. and Chen, T., "Cross-Modal Prediction in Audio-Visual Communication", Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 4, pp. 2056-2059 (1996).
6. Goldenthal, W., Waters, K., Van Thong, J.M. and Glickman, O., "Driving Synthetic Mouth Gestures: Phonetic Recognition for FaceMe!", Eurospeech '97 Proceedings, Vol. 4, pp. 1995-1998 (1997).
7. Simons, A. and Cox, S.: Generation of Mouthshape for a Synthetic Talking Head, Proc. of the Institute of Acoustics, Vol. 12, No. 10 (1990).
8. Chou, W. and Chen, H.: Speech Recognition for Image Animation and Coding, ICASSP 95, pp. 2253-2256 (1995).
9. Chen, T. and Rao, R.: Audio-Visual Interaction in Multimedia Communication, ICASSP 97, pp. 179-182 (1997).
10. Yamamoto, E., Nakamura, S. and Shikano, K.: Speech-to-Lip Movement Synthesis by HMM, ESCA Workshop on Audio-Visual Speech Processing, pp. 137-140 (1997).
11. Guiard-Marigny, T., Adjoudani, A. and Benoit, C.: 3D Models of the Lips and Jaw for Visual Speech Synthesis, in "Progress in Speech Synthesis", J. van Santen et al., eds., Springer-Verlag (1996).
12. Speech Input/Output Systems Expert Committee, "Commentary on the Guideline of Speech Synthesizer Evaluation", Committee on Standardization of Human-Media Information Processing, Japan Electronic Industry Development Association (1997).
13. Kasuya, H. and Kasuya, S. (1992), "Relationships between syllable, word and sentence intelligibility of synthetic speech", Proc. Int'l Conf. Spoken Language Processing, Vol. 2, pp. 1215-1218.
14. Sekiyama, K. (1994), "Differences in auditory-visual speech perception between Japanese and Americans: McGurk effect as a function of incompatibility", J. of the Acoustical Society of Japan, Vol. 15, pp. 143-158.
15. Sekiyama, K. and Tohkura, Y. (1991), "McGurk effect in non-English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility", J. of the Acoustical Society of America, Vol. 90, pp. 1797-1805.
16. Sekiyama, K., Joe, K. and Umeda, M. (1987), "Perceptual components of Japanese syllables in lipreading: a multidimensional study (English abstract)", IEICE Technical Report IE, pp. 29-36 (1991).