Contents for Subpart 6

6.1 Scope
6.2 Definitions
6.3 Symbols and abbreviations
6.4 MPEG-4 audio text-to-speech bitstream syntax
6.4.1 MPEG-4 audio TTSSpecificConfig
6.4.2 MPEG-4 audio text-to-speech payload
6.5 MPEG-4 audio text-to-speech bitstream semantics
6.5.1 MPEG-4 audio TTSSpecificConfig
6.5.2 MPEG-4 audio text-to-speech payload
6.6 MPEG-4 audio text-to-speech decoding process
6.6.1 Interface between DEMUX and syntactic decoder
6.6.2 Interface between syntactic decoder and speech synthesizer
6.6.3 Interface from speech synthesizer to compositor
6.6.4 Interface from compositor to speech synthesizer
6.6.5 Interface between speech synthesizer and phoneme/bookmark-to-fap converter
Annex 6.A (informative) Applications of MPEG-4 audio text-to-speech decoder
6.A.1 General
6.A.2 Application scenario: MPEG-4 Story Teller on Demand (STOD)
6.A.3 Application scenario: MPEG-4 audio text-to-speech with moving picture
6.A.4 MPEG-4 audio TTS and face animation using bookmarks appropriate for trick mode
6.A.5 Random access unit

ISO/IEC 2001 All rights reserved

Subpart 6: Text to Speech Interface (TTSI)

6.1 Scope

This subpart of ISO/IEC 14496-3 specifies the coded representation of MPEG-4 Audio Text-to-Speech (M-TTS) and its decoder for high-quality synthesized speech and for enabling various applications. The exact synthesis method is not a standardization issue, partly because various speech synthesis techniques already exist. This subpart of ISO/IEC 14496-3 is intended for application to M-TTS functionalities such as facial animation (FA) and moving picture (MP) interoperability with a coded bitstream. The M-TTS functionalities include the capability of utilizing prosodic information extracted from natural speech. They also include applications as a speaking device for FA tools and as a dubbing device for moving pictures, utilizing lip shape and input text information.

Text-to-speech (TTS) synthesis technology has recently become a common interface tool and is beginning to play an important role in various multimedia application areas. For instance, by using TTS synthesis functionality, multimedia content with narration can easily be composed without recording natural speech. Moreover, TTS synthesis with facial animation (FA) / moving picture (MP) functionalities can make the content much richer. In other words, TTS technology can be used as a speech output device for FA tools and for MP dubbing with lip shape information. In MPEG-4, only the common interfaces for the TTS synthesizer and for FA/MP interoperability are defined. The M-TTS functionalities can be considered a superset of the conventional TTS framework. This TTS synthesizer can also utilize prosodic information of natural speech in addition to input text, and can generate much higher quality synthetic speech. The interface bitstream format is robust from the user's point of view: if some parameters of the prosodic information are not available, the missing parameters are generated by pre-established rules.
The functionalities of the M-TTS thus range from a conventional TTS synthesis function to natural speech coding and its application areas, i.e., from a simple TTS synthesis function to those for FA and MP.

6.2 Definitions

6.2.1 International Phonetic Alphabet; IPA : The worldwide agreed symbol set used to represent the various phonemes appearing in human speech.

6.2.2 lip shape pattern : A number that specifies a particular pattern of the preclassified lip shape.

6.2.3 lip synchronization : A functionality that synchronizes speech with the corresponding lip shapes.

6.2.4 MPEG-4 Audio Text-to-Speech Decoder : A device that produces synthesized speech by utilizing the M-TTS bitstream while supporting all the M-TTS functionalities, such as speech synthesis for FA and MP dubbing.

6.2.5 moving picture dubbing : A functionality that assigns synthetic speech to the corresponding moving picture while utilizing lip shape pattern information for synchronization.

6.2.6 M-TTS sentence : This defines the information such as prosody, gender, and age for only the corresponding sentence to be synthesized.

6.2.7 M-TTS sequence : This defines the control information which affects all M-TTS sentences that follow this M-TTS sequence.

6.2.8 phoneme/bookmark-to-fap converter : A device that converts phoneme and bookmark information to FAPs.

6.2.9 text-to-speech synthesizer : A device producing synthesized speech according to the input sentence character strings.

6.2.10 trick mode : A set of functions that enables stop, play, forward, and backward operations for users.

6.3 Symbols and abbreviations

F0 fundamental frequency (pitch frequency)

DEMUX demultiplexer
FA facial animation
FAP facial animation parameter
ID identifier
IPA International Phonetic Alphabet
MP moving picture
M-TTS MPEG-4 Audio TTS
STOD story teller on demand
TTS text-to-speech

6.4 MPEG-4 audio text-to-speech bitstream syntax

6.4.1 MPEG-4 audio TTSSpecificConfig

TTSSpecificConfig() {
    TTS_Sequence()
}

Table 6.1 - Syntax of TTS_Sequence()

Syntax                       No. of bits  Mnemonic
TTS_Sequence() {
    TTS_Sequence_ID          5            uimsbf
    Language_Code            18           uimsbf
    Gender_Enable            1            bslbf
    Age_Enable               1            bslbf
    Speech_Rate_Enable       1            bslbf
    Prosody_Enable           1            bslbf
    Video_Enable             1            bslbf
    Lip_Shape_Enable         1            bslbf
    Trick_Mode_Enable        1            bslbf
}

6.4.2 MPEG-4 audio text-to-speech payload

AlPduPayload {
    TTS_Sentence()
}

Table 6.2 - Syntax of TTS_Sentence()

Syntax                       No. of bits  Mnemonic
TTS_Sentence() {
    TTS_Sentence_ID          10           uimsbf
    Silence                  1            bslbf
    if (Silence) {
        Silence_Duration     12           uimsbf
    }

    else {
        if (Gender_Enable) {
            Gender                               1        bslbf
        }
        if (Age_Enable) {
            Age                                  3        uimsbf
        }
        if (!Video_Enable && Speech_Rate_Enable) {
            Speech_Rate                          4        uimsbf
        }
        Length_of_Text                           12       uimsbf
        for (j=0; j<Length_of_Text; j++) {
            TTS_Text                             8        bslbf
        }
        if (Prosody_Enable) {
            Dur_Enable                           1        bslbf
            F0_Contour_Enable                    1        bslbf
            Energy_Contour_Enable                1        bslbf
            Number_of_Phonemes                   10       uimsbf
            Phoneme_Symbols_Length               13       uimsbf
            for (j=0; j<Phoneme_Symbols_Length; j++) {
                Phoneme_Symbols                  8        bslbf
            }
            for (j=0; j<Number_of_Phonemes; j++) {
                if (Dur_Enable) {
                    Dur_each_Phoneme             12       uimsbf
                }
                if (F0_Contour_Enable) {
                    Num_F0                       5        uimsbf
                    for (k=0; k<Num_F0; k++) {
                        F0_Contour_each_Phoneme       8   uimsbf
                        F0_Contour_each_Phoneme_Time  12  uimsbf
                    }
                }
                if (Energy_Contour_Enable) {
                    Energy_Contour_each_Phoneme  8*3=24   uimsbf
                }
            }
        }
        if (Video_Enable) {
            Sentence_Duration                    16       uimsbf
            Position_in_Sentence                 16       uimsbf
            Offset                               10       uimsbf
        }

        if (Lip_Shape_Enable) {
            Number_of_Lip_Shape                  10       uimsbf
            for (j=0; j<Number_of_Lip_Shape; j++) {
                Lip_Shape_in_Sentence            16       uimsbf
                Lip_Shape                        8        uimsbf
            }
        }
    }
}

6.5 MPEG-4 audio text-to-speech bitstream semantics

6.5.1 MPEG-4 audio TTSSpecificConfig

TTS_Sequence_ID This is a five-bit ID that uniquely identifies each TTS object appearing in one scene. Each speaker in a scene has a distinct TTS_Sequence_ID.

Language_Code When this is "00" (00110000 00110000 in binary), the IPA is to be sent. For all other languages, this is the ISO 639 Language Code. In addition to these 16 bits, two bits that represent dialects of each language are added at the end (user defined).

Gender_Enable This is a one-bit flag which is set to 1 when the gender information exists.

Age_Enable This is a one-bit flag which is set to 1 when the age information exists.

Speech_Rate_Enable This is a one-bit flag which is set to 1 when the speech rate information exists.

Prosody_Enable This is a one-bit flag which is set to 1 when the prosody information exists.

Video_Enable This is a one-bit flag which is set to 1 when the M-TTS decoder works with MP. In this case, M-TTS should synchronize synthetic speech to MP and accommodate the ttsforward and ttsbackward functionality. When the Video_Enable flag is set, the M-TTS decoder uses the system clock to select the adequate TTS_Sentence frame and fetches the Sentence_Duration, Position_in_Sentence, and Offset data. The TTS synthesizer assigns an appropriate duration to each phoneme to meet Sentence_Duration. The starting point of speech in a sentence is decided by Position_in_Sentence. If Position_in_Sentence equals 0 (the starting point is the beginning of the sentence), TTS uses Offset as a delay time to synchronize synthetic speech to MP.

Lip_Shape_Enable This is a one-bit flag which is set to 1 when the coded input bitstream has lip shape information.
With lip shape information, the M-TTS decoder requests the FA tool to change the lip shape according to the timing information (Lip_Shape_in_Sentence) and the predefined lip shape pattern.

Trick_Mode_Enable This is a one-bit flag which is set to 1 when the coded input bitstream permits trick mode functions such as stop, play, forward, and backward.

6.5.2 MPEG-4 audio text-to-speech payload

TTS_Sentence_ID This is a ten-bit ID that uniquely identifies a sentence in the M-TTS text data sequence for indexing purposes. The first five bits equal the TTS_Sequence_ID of the speaker defined in subclause 6.5.1, and the remaining five bits are the sequential sentence number of each TTS object.

Silence This is a one-bit flag which is set to 1 when the current position is silence.
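The split of the ten-bit TTS_Sentence_ID described above (upper five bits carrying the speaker's TTS_Sequence_ID, lower five bits carrying the sequential sentence number) can be sketched with a pair of helper functions. The function names and error handling are illustrative, not part of the standard:

```python
def pack_sentence_id(sequence_id: int, sentence_number: int) -> int:
    """Build a 10-bit TTS_Sentence_ID: the upper five bits are the
    speaker's TTS_Sequence_ID, the lower five bits are the sequential
    sentence number of that TTS object."""
    if not (0 <= sequence_id < 32 and 0 <= sentence_number < 32):
        raise ValueError("both fields are five bits wide (0..31)")
    return (sequence_id << 5) | sentence_number


def split_sentence_id(sentence_id: int) -> tuple:
    """Recover (TTS_Sequence_ID, sentence number) from a 10-bit ID."""
    return (sentence_id >> 5) & 0x1F, sentence_id & 0x1F
```

For example, sentence 7 of the speaker with TTS_Sequence_ID 3 packs to the ID 103, and splitting 103 recovers (3, 7).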

Silence_Duration This defines the time duration of the current silence segment in milliseconds. It has a value from 1 to 4095. The value 0 is prohibited.

Gender This is a one-bit flag which is set to 1 if the gender of the synthetic speech producer is male, and 0 if female.

Age This represents the age of the speaker for synthetic speech. The meaning of age is defined in Table 6.3.

Table 6.3 - Age mapping table

Age  Age of the speaker
000  below 6
001  6-12
010  13-18
011  19-25
100  26-34
101  35-45
110  45-60
111  over 60

Speech_Rate This defines the synthetic speech rate in 16 levels. Level 8 corresponds to the normal speed of the speaker defined in the current speech synthesizer, level 0 corresponds to the slowest speed of the speech synthesizer, and level 15 corresponds to the fastest speed of the speech synthesizer.

Length_of_Text This identifies the length of the TTS_Text data in bytes.

TTS_Text This is a character string containing the input text. Text bracketed by < and > contains bookmarks. If the text bracketed by < and > starts with FAP, the bookmark is handed to the face animation through the TtsFAPInterface as a string of characters. Otherwise, the text of the bookmark is ignored. The syntax of the bookmarks is defined in ISO/IEC 14496-2.

Dur_Enable This is a one-bit flag which is set to 1 when the duration information for each phoneme exists.

F0_Contour_Enable This is a one-bit flag which is set to 1 when the pitch contour information for each phoneme exists.

Energy_Contour_Enable This is a one-bit flag which is set to 1 when the energy contour information for each phoneme exists.

Number_of_Phonemes This defines the number of phonemes needed for speech synthesis of the input text.

Phoneme_Symbols_Length This identifies the length of the Phoneme_Symbols (IPA code) data in bytes, since the IPA code has optional modifiers and dialect codes.

Phoneme_Symbols This defines the indexing number for the current phoneme, using the Unicode 2.0 numbering system.
Each phoneme symbol is represented as a number for the corresponding IPA. Three two-byte numbers are used for each IPA representation: a two-byte integer for the character, an optional two-byte integer for the spacing modifier, and another optional two-byte integer for the diacritical mark.

Dur_each_Phoneme This defines the duration of each phoneme in msec.

Num_F0 This defines the number of F0 values specified for the current phoneme.

F0_Contour_each_Phoneme This defines half of the F0 value in Hz at time instant F0_Contour_each_Phoneme_Time.

F0_Contour_each_Phoneme_Time This defines the integer number of the time in ms for the position of the F0_Contour_each_Phoneme.
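Two details above lend themselves to a short sketch: each stored F0 contour byte is half the pitch in Hz, and the Phoneme_Symbols field is a stream of 8-bit values that together form two-byte Unicode numbers. The code below is illustrative only; in particular, the big-endian grouping of the byte stream is an assumption, since the standard transmits Phoneme_Symbols as individual 8-bit fields:

```python
def decode_f0_contour(pairs):
    """Each F0_Contour_each_Phoneme byte stores half of the F0 value in Hz,
    sampled at F0_Contour_each_Phoneme_Time (an integer time in ms).
    `pairs` is a list of (stored_byte, time_ms); returns (f0_hz, time_ms)."""
    contour = []
    for stored, time_ms in pairs:
        if not 0 <= stored <= 255:
            raise ValueError("F0_Contour_each_Phoneme is an 8-bit field")
        contour.append((2 * stored, time_ms))  # stored value is F0 / 2
    return contour


def phoneme_symbol_codes(raw: bytes):
    """Group the Phoneme_Symbols byte stream into two-byte Unicode numbers.
    Byte order is assumed big-endian here (an illustrative choice)."""
    if len(raw) % 2:
        raise ValueError("expected an even number of bytes")
    return [int.from_bytes(raw[i:i + 2], "big") for i in range(0, len(raw), 2)]
```

For example, a stored F0 byte of 100 at time 40 ms decodes to 200 Hz at 40 ms.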

Energy_Contour_each_Phoneme These three 8-bit data correspond to the energy values at the start, the middle, and the end positions of the phoneme. The energy value X is calculated as X = int(50 * log10(A_pp)), where A_pp is the peak-to-peak value of the speech waveform at the defined position.

Sentence_Duration This defines the duration of the sentence in msec.

Position_in_Sentence This defines the position of the current stop in a sentence as an elapsed time in msec.

Offset This defines the duration of a very short pause before the start of synthesized speech output in msec.

Number_of_Lip_Shape This defines the number of lip-shape patterns to be processed.

Lip_Shape_in_Sentence This defines the position of each lip shape from the beginning of the sentence in msec.

Lip_Shape This defines the indexing number for the current lip-shape pattern to be processed, as defined in ISO/IEC 14496-2.

6.6 MPEG-4 audio text-to-speech decoding process

The architecture of the M-TTS decoder is described below; only the interfaces relevant to the M-TTS decoder are subjects of standardization. The number above each arrow in the figure indicates the subclause describing each interface.

Figure 6.1 - MPEG-4 Audio TTS decoder architecture (DEMUX feeds the syntactic decoder, which drives the speech synthesizer; the synthesizer exchanges data with the compositor in both directions and feeds the phoneme/bookmark-to-FAP converter, which drives the face decoder connected to the compositor)

In this architecture the following types of interfaces are distinguished:
- Interface between DEMUX and the syntactic decoder
- Interface between the syntactic decoder and the speech synthesizer
- Interface from the speech synthesizer to the compositor
- Interface from the compositor to the speech synthesizer

- Interface between the speech synthesizer and the phoneme/bookmark-to-FAP converter

6.6.1 Interface between DEMUX and syntactic decoder

On receiving a bitstream, DEMUX passes the coded M-TTS bitstreams to the syntactic decoder.

6.6.2 Interface between syntactic decoder and speech synthesizer

On receiving a coded M-TTS bitstream, the syntactic decoder passes some of the following data to the speech synthesizer:
- Input type of the M-TTS data: specifies synchronized operation with FA or MP
- Control command stream: control command sequence
- Input text: character string(s) for the text to be synthesized
- Auxiliary information: prosodic parameters including phoneme symbols, lip shape patterns, and information for trick mode operation

The pseudo-C code representation of this interface is defined in subclause 6.4.

6.6.3 Interface from speech synthesizer to compositor

This interface is identical to the interface for digitized natural speech to the compositor. The dynamic range is from -32767 to +32768.

6.6.4 Interface from compositor to speech synthesizer

This interface is defined to allow local control of the synthesized speech by users. This user interface supports trick mode of the synthesized speech in synchronization with MP, and changes some prosodic properties of the synthesized speech by using ttscontrol, defined as follows:

Table 6.4 - Syntax of ttscontrol()

Syntax                              No. of bits  Mnemonic
ttscontrol() {
    ttsplay()
    ttsforward()
    ttsbackward()
    ttsstopsyllable()
    ttsstopword()
    ttsstopphrase()
    ttschangespeechrate()
    ttschangepitchdynamicrange()
    ttschangepitchheight()
    ttschangegender()
    ttschangeage()
}

The member function ttsplay allows a user to start speech synthesis in the forward direction, while ttsforward and ttsbackward enable the user to change the starting play position in the forward and backward direction, respectively.
The ttsstopsyllable, ttsstopword, and ttsstopphrase functions define the interface for users to stop speech synthesis at a specified boundary such as a syllable, word, or phrase. The member function ttschangespeechrate is an interface to change the synthesized speech rate; its argument, speed, takes values from 1 to 16. The member function ttschangepitchdynamicrange is an interface to change the dynamic range of the pitch of synthesized speech. By using the argument of this function, level, a user can change the dynamic range from 1 to 16. Also, a user can change the pitch height from 1 to 16 by using the argument height in the member function ttschangepitchheight. The member functions ttschangegender and ttschangeage allow a user to change the gender and the age of the synthetic speech producer by assigning numbers, as defined in subclause 6.5.2, to their arguments, gender and age, respectively.

6.6.5 Interface between speech synthesizer and phoneme/bookmark-to-fap converter

In the MPEG-4 framework, the speech synthesizer and the face animation are driven synchronously. The speech synthesizer generates synthetic speech. At the same time, TTS gives the phonemesymbol and phonemeduration as well as bookmarks to the Phoneme/Bookmark-to-FAP converter. The Phoneme/Bookmark-to-FAP converter generates the relevant facial animation according to the phonemesymbol, the phonemeduration, and the bookmarks. A further description of the Phoneme/Bookmark-to-FAP converter is provided in ISO/IEC 14496-2. The synthesized speech and facial animation are synchronized relative to each other, except for the absolute composition time; synchronization of the absolute composition time comes from the same composition time stamp of the TTS bitstream. If Lip_Shape_Enable is set, Lip_Shape_in_Sentence is used to generate the phonemeduration. Otherwise, the TTS provides the phoneme durations. The speech synthesizer generates the stress and/or wordbegin bits when the corresponding phoneme carries stress and/or starts a word, respectively. Within the TTS_Text, the beginning of a bookmark for using facial animation parameters is identified by '<FAP'. The bookmark lasts until the closing bracket '>'. A bookmark is handed to the TtsFAPInterface with the phoneme of the next word of the current sentence following the bookmark.
If there is no word after the bookmark, the bookmark is handed to the TtsFAPInterface with the last phoneme of the previous word in the current sentence. In order to allow animation of complex expressions and motion, a sequence of up to 40 bookmarks is allowed without words between them. The starttime defines the time in msec, relative to the beginning of the M-TTS sequence, at which the phoneme will start playing. The class TtsFAPInterface defines the data structure for the interface between the speech synthesizer and the phoneme-to-FAP converter.

Table 6.5 - Syntax of TtsFAPInterface()

Syntax                  No. of bits  Mnemonic
TtsFAPInterface() {
    PhonemeSymbol       8            uimsbf
    PhonemeDuration     12           uimsbf
    f0average           8            uimsbf
    Stress              1            bslbf
    WordBegin           1            bslbf
    Bookmark                         char *
    Starttime                        long int
}
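Table 6.5 can be mirrored as a plain record on the synthesizer side. The sketch below is purely illustrative (the class, its field names, and the validate helper are not part of the standard); the bit-width checks follow the widths declared in the table:

```python
from dataclasses import dataclass


@dataclass
class TtsFAPInterface:
    """Illustrative mirror of Table 6.5: the data handed from the speech
    synthesizer to the phoneme/bookmark-to-FAP converter."""
    phoneme_symbol: int    # 8-bit uimsbf
    phoneme_duration: int  # 12-bit uimsbf, in ms
    f0_average: int        # 8-bit uimsbf
    stress: bool           # 1-bit bslbf
    word_begin: bool       # 1-bit bslbf
    bookmark: str = ""     # '<FAP...>' string, empty when no bookmark
    start_time: int = 0    # ms relative to the start of the M-TTS sequence

    def validate(self) -> bool:
        """Check that the fixed-width fields fit their declared widths."""
        return (0 <= self.phoneme_symbol < 1 << 8
                and 0 <= self.phoneme_duration < 1 << 12
                and 0 <= self.f0_average < 1 << 8)
```

A record with, say, an out-of-range phoneme symbol would fail validate(), while values within the declared widths pass.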

Annex 6.A (informative) Applications of MPEG-4 audio text-to-speech decoder

6.A.1 General

This annex describes application scenarios for the M-TTS decoder.

6.A.2 Application scenario: MPEG-4 Story Teller on Demand (STOD)

In the STOD application, users can select a story from a huge database of story libraries stored on hard disks or compact discs. The STOD system reads the story aloud via the M-TTS decoder, with the MPEG-4 facial animation tool or with appropriately selected images. The user can stop and resume speaking at any moment through the user interfaces of the local machine (for example, mouse or keyboard). The user can also select the gender, age, and speech rate of the electronic story teller. Synchronization between the M-TTS decoder and the MPEG-4 facial animation tool is realized by using the same composition time of the M-TTS decoder for the MPEG-4 facial animation tool.

6.A.3 Application scenario: MPEG-4 audio text-to-speech with moving picture

In this application, synchronized playback of the M-TTS decoder and an encoded moving picture is the most important issue. The architecture of the M-TTS decoder can provide several granularities of synchronization. By aligning the composition time of each TTS_Sentence, coarse-granularity synchronization and trick mode functionality can easily be achieved. To get a finer granularity of synchronization, the Lip_Shape information would be utilized. The finest granularity of synchronization can be achieved by using the prosody information and the video-related information such as Sentence_Duration, Position_in_Sentence, and Offset. With this synchronization capability, the M-TTS decoder can be used for moving picture dubbing by utilizing Lip_Shape and Lip_Shape_in_Sentence.
6.A.4 MPEG-4 audio TTS and face animation using bookmarks appropriate for trick mode

Bookmarks allow a face to be animated using facial animation parameters (FAP) in addition to the animation of the mouth derived from phonemes. The FAP of a bookmark is applied to the face until another bookmark resets it. Designing content that replays each sentence independently of trick mode requires that the bookmarks of the text to be spoken are repeated at the beginning of each sentence, to initialize the face to the state defined by the previous sentence. In this case, some mismatch of synchronization can occur at the beginning of a sentence. However, the system recovers when the new bookmark is processed.

6.A.5 Random access unit

Every TTS_Sentence is a random access unit.