Adding Japanese language synthesis support to the espeak system


Adding Japanese language synthesis support to the espeak system

Richard Pronk
Bachelor thesis
Credits: 18 EC

Bachelor Opleiding Kunstmatige Intelligentie
University of Amsterdam
Faculty of Science
Science Park XH Amsterdam

Supervisor: dr. D.J.M. (David) Weenink
Institute of Phonetic Sciences
Faculty of Humanities
University of Amsterdam
Spuistraat VT Amsterdam

June 28th, 2013

Abstract

In this paper we describe an addition to the espeak system which is capable of pronouncing the Japanese language. This implementation is used for the automatic segmentation of Japanese speech from Japanese text. The speech synthesiser that we use is part of the praat speech analysis program and is based on the espeak text-to-speech engine. Because the Japanese writing system is very complex, i.e. it mixes several alphabets with logograms (kanji) and it doesn't use explicit word boundaries, we place some restrictions on the input form. First, we force the user to make explicit where words end, and secondly, we do not yet support logograms (kanji), because we would need a pronunciation database to implement this feature. We supply hints on how these limitations can be overcome.

Contents

1 Introduction
  1.1 The Japanese writing systems
    1.1.1 Rules of the Hepburn romanization system
  1.2 The espeak system
  1.3 Praat
2 Literature review
3 Theoretical foundation
  3.1 Phonetic transcription
  3.2 Place of articulation
  3.3 Nasality
  3.4 Voicing
  3.5 Phonetic overview of the Japanese language
    3.5.1 Vowels
    3.5.2 Voiced and semi-voiced sounds
    3.5.3 Devoicing
    3.5.4 Particles
    3.5.5 Palatalised sounds
    3.5.6 Moraic nasal n
    3.5.7 Gemination
4 Implementation within espeak
  4.1 Word segmentation
  4.2 Pronunciation rules
    4.2.1 Normalisation to a single writing system
    4.2.2 Text to phoneme translation
    4.2.3 Phoneme definitions
  4.3 Input using Rōmaji
  4.4 Latin characters for abbreviations
  4.5 Kanji
5 Results and Evaluation
6 Conclusion
7 Future work
  7.1 espeak functionality
A How to use this initial implementation
B IPA for Japanese

1 Introduction

We describe an initial implementation of Japanese speech synthesis support for the espeak[3] system. This initial implementation will enable future research regarding Japanese phonetics to be carried out more easily. The implementation described in this paper is aimed at providing assistance during the segmentation process of Japanese speech within the speech analysis system praat[2]. The main focus of this paper is therefore on the correct pronunciation of Japanese characters given the rules of the language, rather than on perfectly natural-sounding Japanese. Furthermore, this paper provides an overview of Japanese phonetics and of the issues encountered regarding the implementation of the Japanese language within the espeak system.

1.1 The Japanese writing systems

The Japanese language uses three writing systems: hiragana ( ひらがな ), katakana ( カタカナ ) and kanji ( 漢字 ), and to complicate things even further, sometimes even Latin characters are used in Japanese text. The hiragana and katakana writing systems make up the alphabet, covering all the possible sounds in the language. These writing systems have corresponding character sets, where each character represents one mora (a mora being one sound unit in the Japanese language).

        a        i        u        e        o
     あ (a)   い (i)   う (u)   え (e)   お (o)
k    か (ka)  き (ki)  く (ku)  け (ke)  こ (ko)

        a        i        u        e        o
     ア (a)   イ (i)   ウ (u)   エ (e)   オ (o)
k    カ (ka)  キ (ki)  ク (ku)  ケ (ke)  コ (ko)

Table 1: Example of hiragana chart (top) and katakana chart (bottom)

As seen in Table 1, the hiragana and katakana writing systems both have characters for the same sounds. In the Japanese language these writing systems are used in combination with each other, where one sentence can consist of hiragana, katakana, kanji and even Latin characters. Kanji are Chinese characters which are widely used within Japanese texts; when a kanji character is not available for a word, hiragana is often used. Hiragana can also be combined with kanji characters for declensions and conjugations, and katakana is used to transcribe foreign words or to write loan words. Due to the fact that all possible sounds in the Japanese language are covered by the hiragana and katakana writing systems, the pronunciation of kanji (i.e. the Chinese characters) can be written in terms of those characters (e.g. 漢字 → かんじ ). The focus of the current implementation is therefore on being able to pronounce these Japanese characters, with the exception of kanji, due to its complexity and lexical dependency in pronunciation (section 4.5). Another supported input method, however, is rōmaji ( ローマ字 ), which allows for Japanese input using purely Latin characters.

In this paper the modified Hepburn romanization system will be used for the transcription from hiragana and katakana to rōmaji; it is also the romanization system supported by the provided implementation. There are more romanization systems available for the Japanese language, but in addition to being frequently used, the modified Hepburn romanization system is also the one most adjusted to English pronunciation, which makes it the most suitable for the espeak system. The full hiragana and katakana charts with rōmaji transcription using the modified Hepburn romanization system are available online.

1.1.1 Rules of the Hepburn romanization system

In order to properly convert Japanese characters to rōmaji, a number of rules must be adhered to. These rules have been compiled in the Hepburn romanization system, of which there are many versions; this section discusses the rules that are relevant to this paper.

The first rule is that long vowels need to be indicated with a macron or circumflex, since /oo/ has a different pronunciation than /ō/ (see section 3.5.1). The exception is the vowel i, since /ii/ is always pronounced as a long vowel; however, the system implemented for this paper also allows /ī/ as input, as this is frequently used for loan words. As seen in Table 2, the vowel combination /ou/ is a special case:

vowel combination    as single long vowel    as two separate vowels
aa                   ā                       aa
ii                   ī or ii                 (not possible)
uu                   ū                       uu
ee                   ē                       ee
oo                   ō                       oo
ou                   ō                       ou

Table 2: Double vowel representation in the modified Hepburn romanization system

ou can be pronounced as a single long /ō/ or as two separate vowels. For example, 東京 ( とうきょう ) would be transcribed from hiragana as /toukyou/; however, the ou combination here is not pronounced as two separate vowels but as a single long ō vowel. Therefore the transcription of 東京 ( とうきょう ) into the modified Hepburn romanization system should be /tōkyō/.

The second rule within the modified Hepburn romanization system is that particles are written as pronounced. For example, the subject marker は is written as /wa/ (as pronounced) instead of /ha/, which would be the standard reading when the character does not have a grammatical function. The same goes for the particle へ, which is pronounced as /e/ instead of /he/, and the particle を, which is pronounced as /o/ instead of /wo/.
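To make the first rule concrete, here is a minimal Python sketch (our own illustration, not the thesis implementation; the ':' marker standing in for the prolonged sound mark is an assumption of the example) that collapses an explicitly marked long vowel into its macron form:

import unicodedata  # only to make clear the macron forms are single characters

MACRON = {'a': 'ā', 'i': 'ī', 'u': 'ū', 'e': 'ē', 'o': 'ō'}

def lengthen(romaji):
    # Replace a vowel followed by ':' (our stand-in for the prolonged
    # sound mark) with its macron form, e.g. 'to:kyo:' -> 'tōkyō'.
    out = []
    for ch in romaji:
        if ch == ':' and out and out[-1] in MACRON:
            out[-1] = MACRON[out[-1]]
        else:
            out.append(ch)
    return ''.join(out)

assert lengthen('to:kyo:') == 'tōkyō'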

The third rule is that the moraic n is written as n, but as n' before vowels and y, in order to disambiguate it from the sounds of the n-row (the n-row being the sounds /na/, /ni/, /nu/, /ne/ and /no/). The syllable /no/ can namely be read in two different ways, as one mora or as two separate moras, meaning that /no/ can be read as の (no) or as ん (n) + お (o).

        a        i        u        e        o
n    な (na)  に (ni)  ぬ (nu)  ね (ne)  の (no)     ん (n)

Table 3: The ambiguity of the syllable n

A good example of when this distinction is required is the following pair of words: 蟹 ( かに ), meaning crab, and 簡易 ( かんい ), meaning simplicity. Without this distinction both words are written as /kani/; this would be the correct pronunciation for crab, but simplicity should be written as /kan'i/ in order to be pronounced correctly.

Double consonants (see section 3.5.7) can be written as expected; however, there is one exception, namely ch, which becomes tch. For example, 抹茶 ( まっちゃ ) becomes matcha instead of maccha. This is also the fourth and final relevant rule of the Hepburn romanization system. So, to summarize, the relevant rules of the Hepburn romanization system are:

1. Long vowels are represented with a macron or circumflex
2. Particles are written as pronounced
3. The syllable n is written as n' before vowels and the syllable y
4. Gemination with ch becomes tch

1.2 The espeak system

espeak is an open source text-to-speech (TTS) system, which mainly uses formant-based synthesis. Currently, espeak supports over 50 languages; at the start of this project, however, it did not support the Japanese language. The speech produced by espeak is highly configurable, but it is not as natural sounding as larger synthesisers which use unit-based synthesis and are based on human speech recordings. In formant-based synthesis, voiced speech (e.g. vowels and sonorant consonants) is created by using formants. Unvoiced consonants (e.g. /s/), on the other hand, are created by using pre-recorded sounds, and voiced consonants (e.g. /z/) are created as a mixture of formant-based voiced sound and a pre-recorded unvoiced sound. The espeak system uses modular language data files, which are easy-to-understand text files. This way, a language can be added or modified without the need to understand the underlying source code of espeak.

1.3 Praat

Praat is a speech analysis system whose speech synthesiser is based on the espeak text-to-speech engine and therefore uses the language files provided by the espeak system. The implementation provided for espeak regarding Japanese pronunciation can therefore also be used in the praat system. This speech analysis program will also be used during the evaluation process later on.

2 Literature review

There are two types of articles related to this implementation, namely articles about formant synthesis and articles about Japanese phonetics. The first article of interest is the paper by Klatt (1980)[6], which describes software for a cascade/parallel formant synthesiser. This paper gives an in-depth view of how a formant synthesis system is built, which is useful for this research since the espeak program is based on this type of system. The second paper of interest is the paper by Klatt & Klatt (1990)[7], which is a continuation of the previous paper. This paper provides a view on the analysis and synthesis of different types of voices, with the main focus on the differences between male and female voices, which is helpful for creating more natural sounding synthesis. Although, as previously stated, natural sounding synthesis is not the main focus of this paper, making the implementation sound as natural as possible will contribute to better results during the segmentation process. These articles about formant synthesis are relatively old; this is because current research focuses mainly on unit-based synthesis. Although unit-based synthesis provides a more natural sounding output, it lacks configurability and theoretical grounding in the sounds to be produced. Research on speech synthesis itself is therefore mainly done on formant synthesis, whereas commercial products tend to use unit-based speech synthesis due to the higher quality of the speech output.

Amongst the literature concerning Japanese phonetics is the book by Vance (2008)[9], which provides an in-depth view of Japanese phonological research as well as an insight into the basics of phonological research itself. Another useful book for this research is by Kawase et al. (1978)[5], which provides theory on the pronunciation of the Japanese language, with a main focus on the mouth movements used during pronunciation. More specific papers regarding Japanese phonetics include the paper by Kawahara (2012/forthcoming)[8], which focuses on the actual duration of double consonants, and the paper by Bion et al. (2013)[1], which focuses on differences in vowel durations in the Japanese language. The paper by Halpern (n.d.)[4] provides insight into how a phonetic database can help by providing the phonological representation of words, which is essential for natural sounding speech synthesis due to the presence of lexically-dependent pronunciation in the Japanese language. This paper demonstrates an implementation of a phonetic database for the Japanese language and the usage of this database. Although the idea of the phonetic database described in this paper can be used, the database itself is not available under the General Public License (GPL), which is a requirement for this project.

The idea of this project is to combine these two types of articles by adding Japanese phonetics to a formant synthesis system, namely by implementing Japanese speech synthesis support in espeak.

3 Theoretical foundation

First, a number of phonetic terms and concepts will be described which are used later on in this paper. Afterwards, the phonetic aspects of the Japanese language itself will be discussed.

3.1 Phonetic transcription

The International Phonetic Alphabet (IPA) provides a phonetic transcription of speech. This alphabet is used to describe the pronunciation of a language rather than to form words within the language. Due to this standardised alphabet, a language can be correctly pronounced without the need of knowing the rules of the language. The IPA notation for the Japanese language can be seen in appendix B.

3.2 Place of articulation

Articulation is the process of physically forming the sounds that result in the pronunciation of a word. This process uses various body parts, which are divided into active articulators and passive articulators. Active articulators are generally identified as the articulators that move during the formation of speech; examples include the tongue and the lower lip. In contrast, the passive articulators make little to no movement during this process; examples include the upper lip, the upper teeth and the roof of the mouth. The position of these articulators defines the resulting speech.

3.3 Nasality

Nasality refers to the effect of the velum in the articulation of consonants. An open velum (see Figure 1) allows air to escape through the nasal cavity (inner nose), whereas a closed velum causes the air to escape only through the oral cavity (inner mouth). A consonant being nasal therefore means that while articulating the consonant the velum is open, which allows air to escape through the nose.

3.4 Voicing

Voicing is dependent on the glottis, which refers to both the vocal cords and the open space between them. A small opening allows the vocal cords to vibrate, which results in a voiced sound. The opening in the glottis can also be wide, in which case air can pass through freely and the vocal folds have reduced vibration; this is the cause of the so-called voiceless sounds. As an example, take the voiceless consonant s: when the s is pronounced you can't feel any vibrations in the vocal folds, whereas the voiced consonant z produces vibrations which can be felt. Furthermore, the glottis can also be closed, in which case no air can pass. The sound produced by obstructing the airflow by closing the glottis is called a glottal stop.

Figure 1: closed/open velum (taken from Vance (2008))

3.5 Phonetic overview of the Japanese language

3.5.1 Vowels

Articulators alter the vocal resonances, which results in the formation of vowel sounds. Peaks in the spectra of vowel sounds are called vocal formants. These formants are extremely useful to, for example, distinguish between individual vowel sounds. Distinguishing vowel sounds can be done by comparing the formants, in which case the first two formants tend to be sufficient for the task. This is also used in the implementation within espeak, which will be discussed later on (see section 4.2.3). Another important aspect of vowels (especially in the Japanese language) is length, where the meaning of a word can depend on the length of a vowel. Take for instance the words 雪 ( ゆき ), which is read as /yuki/ and means snow, and 勇気 ( ゆうき ), which is read as /yu:ki/ and means courage. Here the u (transcribed as u:) is a long vowel, and the meaning of the word changes due to the length of the vowel. Furthermore, there is another thing to consider when talking about long vowels, namely the question how the double vowels (e.g. aa, ii, uu, ee, oo) should be pronounced. This is due to the fact that these double vowels can be read in two different ways: as two separate vowels or as a single long vowel. The difference between the pronunciation of a single long vowel and two short vowels can be clearly seen in words like /satooya/ and /sato:ya/. In Figure 2 the pronunciation difference is clearly visible: /satooya/ is clearly pronounced with two separate vowels, which can be seen by the drop between the vowels (i.e. where the arrows point). Although a good estimation of the pronunciation of double vowels can be given (i.e. by checking the word boundaries of the kanji within a word), this would require a lexical analysis system. Unfortunately such a system is not yet available for this project, and therefore automatically checking vowel combinations is out of the scope of this paper. Instead we force the user to make the input unambiguous with regard to double vowels by using the so-called prolonged sound mark ( ー ), which is already the standard way to explicitly indicate long vowels in Japanese text (e.g. おー ).

Figure 2: Long and short vowel distinction (taken from Vance (2008))

3.5.2 Voiced and semi-voiced sounds

As previously described (section 3.4), a consonant is voiced when the vocal cords are vibrating during the pronunciation process, whereas if a consonant is voiceless the vocal cords are not vibrating during pronunciation. In the Japanese writing system the indication of whether or not a character is voiced is marked in the top right corner of the character with a so-called dakuten ( ゛). For example, in the character さ (sa) the initial consonant is voiceless; however, if we were to add the dakuten to this character, making it ざ (za), the initial consonant would be voiced. Adding a dakuten is possible for the characters from the k-row (which becomes the g-row), the s-row (becomes the z-row), the t-row (becomes the d-row) and the h-row (becomes the b-row). It is also possible to make characters semi-voiced by adding a so-called handakuten ( ゜) to the character. If we for example take は (ha) and add a handakuten, it becomes ぱ (pa), whereas if we were to add the dakuten to this character it would have become ば (ba). The addition of the handakuten to make a character semi-voiced is only possible for characters from the h-row, /p/ being the semi-voiced counterpart of /h/.

3.5.3 Devoicing

We have already seen the difference between voiced and voiceless consonants (e.g. z and s); however, it is also possible to have devoiced vowels. Although no actual sound is produced, the mouth moves in the direction of the vowel. In the Japanese language there are a couple of rules for when devoicing takes place; for example, when the vowel i or u is between unvoiced phonemes, the vowel is devoiced. Take for example the word /sukiyaki/: here the vowel u is between the unvoiced consonant s and the unvoiced consonant k, which causes the vowel u to devoice.

Therefore the word will be phonetically transcribed as /su0kiyaki/, where the u is devoiced. There is another rule stating that there is a high probability of devoicing when the vowel u or i is preceded by an unvoiced consonant and is immediately followed by a pause. However, since this is not always the case, a lexical analysis system would be needed in order to verify in which cases it is true. Vance (1990) also states the following about the devoicing of vowels:

"When the /su/ is the last syllable of a polite nonpast verb form or the polite nonpast copula /desu/ です and immediately followed by a pause, devoicing is quite consistent for most Tokyo speakers." Vance (1990)

This sounds as if a lexical analysis system is needed in order to find the polite nonpast verbs; however, this is not the case, as will be shown in the implementation section, because there is a small rule that can be implemented which finds the polite nonpast verbs without the need of a lexical analysis system. Vance (1990) also gives an additional exception:

"Vowel devoicing also interacts with intonation in an obvious way. If the last syllable in a sentence contains a short high vowel preceded by a voiceless consonant but has to carry the intonation for a question, the vowel doesn't devoice." Vance (1990)

What this means is that when there is a question, indicated with a question mark (i.e. there is a rising pitch), the last syllable is no longer devoiced; the need for an audible rise overrides the devoicing rule. Furthermore, when the vowel u is devoiced and the preceding consonant is an s, the pronunciation of the devoiced u is taken over by the pronunciation of the s, which then has a duration of two moras. Therefore /desu/ will be pronounced as /dess/ and phonetically transcribed as /desu0/ (u0 standing for a devoiced vowel u).
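As an illustration of the first devoicing rule, the following small Python sketch (our own, not the espeak rule file; the mora-list representation and the '0' devoicing marker are assumptions of the example) devoices i and u between voiceless onsets:

VOICELESS = {'k', 's', 'sh', 't', 'ch', 'ts', 'h', 'f', 'p'}

def devoice(morae):
    # Mark the vowel of a mora as devoiced (appending '0') when the vowel
    # is i or u, its own onset is voiceless, and the next mora also starts
    # with a voiceless onset, e.g. su ki ya ki -> su0 ki ya ki.
    out = []
    for i, mora in enumerate(morae):
        nxt = morae[i + 1] if i + 1 < len(morae) else None
        devoiced = (mora[-1] in 'iu' and mora[:-1] in VOICELESS
                    and nxt is not None
                    and any(nxt.startswith(c) for c in VOICELESS))
        out.append(mora + '0' if devoiced else mora)
    return out

assert devoice(['su', 'ki', 'ya', 'ki']) == ['su0', 'ki', 'ya', 'ki']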

3.5.4 Particles

When a character has a grammatical function (a so-called particle), the pronunciation of the character can change from its original reading. Take for instance the Japanese sentence これは日本語です (which translates to "this is Japanese"): the character は is normally pronounced as /ha/, but since this character has the grammatical function of being a particle (i.e. it marks the subject of the sentence), it is pronounced as /wa/ (otherwise written as わ ). The same goes for the particle へ, which is pronounced as /e/ instead of /he/, and the particle を, which is pronounced as /o/ instead of /wo/.

3.5.5 Palatalised sounds

The palatalised sounds (as shown in the full hiragana and katakana charts) use a consonant-semivowel-vowel syllable structure, where the semivowel is a palatal approximant (written as y but phonetically transcribed as /j/). The semivowel-vowel part can consist of three different characters, ゃ (ya), ゅ (yu) and ょ (yo), and during the articulation of the consonant the tongue is raised toward the hard palate and the alveolar ridge. The pronunciation of palatalised sounds goes as follows: き (ki) + ゃ (ya) → きゃ (kya); note that this is not pronounced as /kiya/ but as /kya/. Furthermore, notice that in palatalised sounds like these the ゃ (ya) character is written smaller than the normal や (ya) character (and the same goes for the ゅ (yu) and ょ (yo) characters).

3.5.6 Moraic nasal n

The moraic nasal n has an articulation which depends on the following syllable: the place of articulation is altered depending on what follows. This gives us the following articulation rules for the syllable n (as taken from Wikipedia, retrieved 28/06/13):

1. uvular [ɴ] at the end of utterances and in isolation
2. bilabial [m] before [p], [b] and [m]
3. dental [n] before the coronals /d/, /t/ and /n/
4. velar [ŋ] before [k] and [g]

3.5.7 Gemination

In the Japanese language a double consonant is indicated by a so-called sokuon, which is written as a small tsu character and is available in both the hiragana ( っ ) and katakana ( ッ ) writing systems; the sokuon copies the first consonant of the following syllable. See for example the word けっか (kekka), meaning result: the sokuon in this word precedes the syllable か (ka), which starts with the consonant k. The sokuon therefore also becomes the letter k, making the double consonant. There is, however, one exception in which the sokuon does not become the first letter of the following syllable, namely ch, which becomes tch: 抹茶 ( まっちゃ ) for example becomes matcha instead of maccha. The sokuon can also be used at the end of a sentence, in which case it indicates a glottal stop.
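The sokuon behaviour can be summarised in a few lines. The following Python sketch (our illustration only, with 'Q' as a stand-in for っ on a pre-segmented mora list) shows the copying rule, the tch exception and the sentence-final glottal stop:

def expand_sokuon(morae):
    # 'Q' stands in for the sokuon: copy the first consonant of the next
    # mora, write 't' before 'ch' (matcha, not maccha), and produce a
    # glottal stop ('?') when no mora follows.
    out = []
    for i, mora in enumerate(morae):
        if mora != 'Q':
            out.append(mora)
        elif i + 1 == len(morae):
            out.append('?')                  # sentence-final glottal stop
        elif morae[i + 1].startswith('ch'):
            out.append('t')                  # the ch -> tch exception
        else:
            out.append(morae[i + 1][0])      # plain copying, e.g. kekka
    return ''.join(out)

assert expand_sokuon(['ke', 'Q', 'ka']) == 'kekka'
assert expand_sokuon(['ma', 'Q', 'cha']) == 'matcha'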

4 Implementation within espeak

4.1 Word segmentation

The current implementation within espeak requires the user to segment the input sentence themselves, due to the absence of a lexical analysis system (see section 7). For this, all words, including particles, need to be separated by a space. A sentence like これはにほんごですか should therefore be inserted into the system as これ は にほんご です か.

4.2 Pronunciation rules

The espeak system uses two kinds of text files to implement the pronunciation rules: the first is the *_rules file (in this case ja_rules, since we are working with the Japanese language), which contains the actual pronunciation rules, and the second is the *_list file (ja_list in our case), which contains a lookup dictionary. The following rules are implemented in the ja_rules file unless otherwise noted. In order to correctly pronounce Japanese text, three main steps need to be taken (the first step does not apply to rōmaji input):

1. Normalising to a single writing system
2. Text to phoneme translation
3. Describing Japanese phonemes

4.2.1 Normalisation to a single writing system

Every sound in the Japanese language can be written in terms of hiragana characters. The first step is therefore to normalise the input sentence to a single writing system (in this case the hiragana writing system); for katakana and half-width katakana this can be done with a straightforward replacement rule. This replacement rule is possible because there is a one-to-one conversion between the hiragana and katakana writing systems. The replace function within espeak works as follows:

.replace
 a b

where a will be replaced with b. Each line specifies either one or two alphabetic characters to be replaced by another one or two alphabetic characters. This substitution is done before the text to phoneme translation. The katakana characters are therefore placed on the a side, and the hiragana characters on the b side:

.replace
 ア あ
 イ い
 ウ う

Palatalised sounds (e.g. きゃ (kya)) can be divided into two separate characters, the き (ki) character and the small ゃ (ya) character ( キ (ki) → き (ki) and ャ (ya) → ゃ (ya), therefore キャ (kya) → きゃ (kya)), and the palatalisation rules of the text to phoneme conversion still apply (as these rules are the same for hiragana and katakana). Half-width katakana (katakana, but written smaller) requires one extra consideration: the half-width ガ (ga) character consists of two parts, the half-width カ (ka) character followed by a separate voicing mark. The replacement function converts the first thing it matches; if the rule for カ (ka) were placed before the rule for the two-character ガ (ga), the replace function would convert the half-width ka and leave the voicing mark unparsed. The two-character sequences must therefore be listed first.
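The normalisation step can also be illustrated outside espeak. In this hedged Python sketch (our own; the thesis uses espeak's .replace lists instead), Unicode NFKC normalisation turns half-width katakana into full-width and composes the separate voicing mark, after which katakana maps onto hiragana by a fixed code-point offset:

import unicodedata

def normalise_to_hiragana(text):
    # NFKC converts half-width katakana to full-width and merges a separate
    # voicing mark into the preceding character; full-width katakana in
    # U+30A1..U+30F6 then maps onto hiragana by subtracting 0x60
    # (e.g. ア U+30A2 -> あ U+3042).
    text = unicodedata.normalize('NFKC', text)
    return ''.join(chr(ord(c) - 0x60) if 0x30A1 <= ord(c) <= 0x30F6 else c
                   for c in text)

assert normalise_to_hiragana('カタカナ') == 'かたかな'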

4.2.2 Text to phoneme translation

The text to phoneme translation contains the actual parsing of the pronunciation rules. These rules are given in groups, one for each letter or character (see the espeak documentation); the espeak system uses these rules to find the best fit for each character of the input sentence. Every rule is given on a separate line and has the following syntax:

[<pre>)] <match> [(<post>] <phoneme string>

This is best explained with a small example:

.group あ
 あ a
 あー a:

The first line creates the group for the character あ (a); if the espeak system now encounters the character あ (a), it will be handled by this group. Now that the group is known, pronunciation rules are needed that state what to do when this group is encountered; in this case we want to turn あ into a and あー into a:. The <match> section of the first rule is therefore the hiragana character あ, and this is converted to the phoneme a (i.e. the <phoneme string> in this case is simply a). However, if a long vowel ( あー ) is inserted, this should be translated into a:. In order to do this, a second rule is added to the group あ: when あー is found in the input sentence, this is a better match, therefore translating あー (<match>) into a: (<phoneme string>). The support for palatalised sounds is done in a similar way; in this case the group of the first character is used (here き ), and to the rules of this group we added a match for the palatalised sounds with the corresponding phoneme string (in this case きゃ (<match>) with kya (<phoneme string>)):

.group き
 き ki
 きー ki:
 きゃ kya
 きゃー kya:

Gemination (see section 3.5.7) is indicated by the small tsu character (in hiragana っ ) called the sokuon, which on its own does not have a reading. Therefore a check is needed on which syllable follows; this can be done by using the (<post> section of the rule. This section can express rules such as: if the next character starts with the consonant k, change the sokuon character to that consonant. This is exactly what has been done in this implementation; however, the characters starting with the same consonant have been grouped together in order to make the rules more efficient and more readable. This grouping has been done as follows:

.L01 かきくけこ   // starts with k
.L02 がぎぐげご   // starts with g
.L03 さしすせそ   // starts with s
...

where L01, L02, etc. define a group of letter sequences (in this case the hiragana characters starting with the same consonant).
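The "best fit" behaviour of the grouped rules amounts to longest-match selection. The following toy Python driver (our simplification; espeak's real matcher also weighs the <pre> and <post> conditions) shows why あー beats あ:

def to_phonemes(text, rules):
    # Greedy longest-match translation: at each position the longest chunk
    # with a rule wins, so 'あー' -> 'a:' is preferred over 'あ' -> 'a'.
    phonemes, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            chunk = text[i:i + length]
            if chunk in rules:
                phonemes.append(rules[chunk])
                i += length
                break
        else:
            i += 1   # no rule matched: skip this character
    return phonemes

rules = {'あ': 'a', 'あー': 'a:', 'き': 'ki', 'きゃ': 'kya'}
assert to_phonemes('きゃあー', rules) == ['kya', 'a:']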

The text to phoneme translation for the sokuon therefore becomes:

.group っ
 っ (L01 k
 っ (L02 g
 っ (L03 s
 ...
 っ (_ ?

Notice that the rules now have a (<post> section, meaning that one of the characters in the specified group must follow the sokuon in order for the rule to apply. This makes the phoneme translation from the sokuon to the consonant specified in the <phoneme string>. Another rule in the sokuon pronunciation is that a sokuon at the end of a word is pronounced as a glottal stop (e.g. あっ ), indicated with a question mark. The (<post> section of that rule, (_ , means that the next character is a pause or a hyphen, which in this case means that the sokuon must be at the end of a word.

The rule that has been implemented to devoice the last syllable of polite nonpast verbs is based on the fact that those verbs always end in masu (with the exception of the word desu). Our rule therefore simply puts the whole word masu in the <match> section, while the last syllable (u) must be at the end of the word (the _ rule in the (<post> section). In doing so, all polite nonpast verbs are found and correctly devoiced; the exception case desu is handled in the same manner, with the addition of the _ rule in the <pre>) section.
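The polite nonpast rule is easy to mirror in Python. This hedged sketch (ours, not the espeak rule; u0 again marks the devoiced vowel) shows the word-final masu/desu check that makes a lexical analysis system unnecessary:

def devoice_polite_final(word):
    # Devoice the final u of a word ending in 'masu', or of the word
    # 'desu' itself; 'masu' elsewhere in a word is left untouched.
    if word == 'desu' or word.endswith('masu'):
        return word[:-1] + 'u0'
    return word

assert devoice_polite_final('tabemasu') == 'tabemasu0'
assert devoice_polite_final('desu') == 'desu0'
assert devoice_polite_final('masuku') == 'masuku'   # not word-final: unchanged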

4.2.3 Phoneme definitions

Defining the correct Japanese phonemes must be done in the phoneme file dedicated to the Japanese language (i.e. the ph_japanese file). In this file a phoneme table for the Japanese language must be given; if a phoneme is not defined in the phoneme table but is used in the rules file (i.e. the ja_rules file), the espeak system will use its base phoneme. In these phoneme definitions, attributes like the correct IPA notation, the place of articulation and the reference to the corresponding sound files for that specific phoneme can be given. For a detailed explanation of the rules and possibilities within these definitions, please refer to the documentation on the espeak page. It is for example also possible to import phonemes from other files by using the import_phoneme statement, which copies a previously defined phoneme from a specified phoneme table. The phoneme table contains a list of phoneme definitions, which have the following structure:

phoneme u
  ipa ɯ
  IF NOT nextphw(isvoiced) AND NOT prevphw(isvoiced) AND
     NOT thisph(iswordend) AND thisph(notwordstart) THEN
    ChangePhoneme(u0)
  ENDIF
  vowel starttype #u endtype #u
  length 83
  FMT(vowel/uu_bck)
endphoneme

In this case the phoneme u is defined for the Japanese language; in this phoneme description the correct IPA notation, as well as the correct length and the sound for that phoneme, is given. The referenced sound file has been chosen based on the formants of the Japanese vowel u and is available in the espeak sound database. The starttype and endtype statements allocate the phoneme to groups, so that functions can be tested on groups of phonemes. As can be seen, it is also possible to add IF statements to the phoneme descriptions; in this case, if the phoneme u is between voiceless phonemes and is neither at the beginning nor at the end of a word, it is changed to the u0 phoneme (which devoices the vowel). The ChangePhoneme(u0) function changes the current phoneme to the phoneme u0, which is a devoiced vowel u:

phoneme u0
  ipa ɯ̥
  IF prevphw(s) THEN
    WAV(ufric/s_)
  ENDIF
  vowel starttype #u endtype #u
  length 83
endphoneme

Here it can be seen that this phoneme has no sound defined, which is what makes the phoneme devoiced. However, if the preceding phoneme is the phoneme s, the s sound is given for the phoneme u0, due to the rule that the s takes over the unvoiced u when the phoneme u is devoiced and the preceding phoneme is s. The ChangePhoneme function is also used to solve the problem of the moraic nasal n, as described earlier in this paper. For this, the different phoneme descriptions for the articulations of the syllable n (where the articulation point differs) have been imported from the ph_consonant file. Rules have then been added in the form of:

IF nextphw(p) OR nextphw(b) OR nextphw(m) THEN
  ChangePhoneme(m)
ENDIF

so that the correct phoneme definitions are used for the different types of articulation of the syllable ん (n).
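The same phoneme-changing logic for ん can be written out as a small Python table (a sketch of the idea only; the real implementation uses espeak's imported n phonemes and IF rules), following the four articulation rules of section 3.5.6:

def moraic_n(next_phoneme):
    # Pick the place of articulation of ん from the following phoneme.
    if next_phoneme is None:
        return 'ɴ'    # uvular, utterance-finally and in isolation
    if next_phoneme in ('p', 'b', 'm'):
        return 'm'    # bilabial
    if next_phoneme in ('t', 'd', 'n'):
        return 'n'    # dental
    if next_phoneme in ('k', 'g'):
        return 'ŋ'    # velar
    return 'ɴ'        # default

assert moraic_n('k') == 'ŋ'
assert moraic_n('b') == 'm'
assert moraic_n(None) == 'ɴ'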

An easier phoneme description is the one describing the long vowel, which (with respect to the previously shown phoneme u) only changes in length:

phoneme u:
  ipa ɯː
  vowel starttype #u endtype #u
  length 153
  FMT(vowel/uu_bck)
endphoneme

It is also possible to simply call other phonemes; this is for example the case with the y phoneme, which is phonetically transcribed as /j/:

phoneme y
  ipa j
  CALL base/j
endphoneme

4.3 Input using Rōmaji

Due to the fact that the modified Hepburn rōmaji writing system is already optimised for English pronunciation, only straightforward rules have to be implemented. For example, long vowels, which are input as ā, ī, ū, ē and ō, have groups like the following:

.group a
 a a

.group ā
 ā a:

One thing to take into consideration, however, is that phonemes written with two (or more) letters need to be explicitly denoted in the rules. Take for instance the phoneme ch: when we only have rules for c and h, the espeak system will use two separate phonemes c and h instead of the desired phoneme ch. The fix for this is to add the ch combination to the c group:

.group c
 c ch
 ch ch

The moraic nasal n is another example which needs some extra care. As explained previously in this paper (see section 1.1.1), the character n needs to be disambiguated from the characters in the n-row. The rules of the modified Hepburn romanization state that the character n is written as n' only when it is followed by a vowel or y. This is because when the syllable n is followed by a consonant, it must be the character ん, since characters in the n-row are always followed by a vowel; the same holds when the syllable n is at the end of a word. This gives us the following rules to disambiguate the character ん from the n-row:

.group n
 n n
 n (C N
 n (_ N
 n' N

where C stands for any arbitrary consonant, N is the phoneme for the character ん and (_ means that the n is at the end of a word.

4.4 Latin characters for abbreviations

In Japanese, capitalised Latin characters are used for abbreviations. Some frequently used ones are JR and NHK; these abbreviations in Latin characters are used alongside the Japanese characters. The capitalised Latin characters are supposed to be pronounced as isolated English characters; however, the pronunciation is changed so that it can be produced with the available Japanese sounds. For example, JR is pronounced as / ジェイアール /, which is /jeia:ru/. Therefore rules of the following form are required:

.group J
 J jei

.group R
 R a:ru

However, the current version of espeak automatically decapitalises the input, and because we also allow uncapitalised Latin characters (for the modified Hepburn romanization system), there are already groups of the following form:

.group j
 j j

.group r
 r r

Due to this (and the standardisation to uncapitalised characters), the current implementation pronounces JR as /jr/ instead of /jeia:ru/. If, however, at a later point it becomes possible to turn off automatic decapitalisation for the Japanese language, this implementation will be sufficient.

4.5 Kanji

As previously stated, the current implementation does not support kanji. However, espeak does provide a way to normalise kanji into hiragana, namely by using the lookup dictionary file (ja_list). The problem with this, however, is that all possible kanji combinations and readings of the kanji would need to be inserted into this file, which does not seem like an optimal solution. To illustrate how this would work, I added a single kanji combination ( 漢字 ) to the lookup dictionary file:

$textmode
漢字 かんじ

The $textmode indicates that a text-to-text conversion is wanted instead of the also possible text-to-phoneme conversion. When this kanji ( 漢字 ) is given as input, it will be normalised to hiragana and then parsed by the system like any other hiragana input.
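Conceptually, the ja_list mechanism is nothing more than a dictionary lookup followed by the normal hiragana pipeline. This Python sketch (our in-memory stand-in for the ja_list file, with a single hypothetical entry) makes the scaling problem obvious, since every reading would have to be listed by hand:

ja_list = {'漢字': 'かんじ'}   # in-memory stand-in for the ja_list file

def lookup(word):
    # Replace a known kanji word by its hiragana reading (the $textmode
    # idea); unknown words pass through unchanged.
    return ja_list.get(word, word)

assert lookup('漢字') == 'かんじ'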

5 Results and Evaluation

The best way to get an idea of the actual results of this implementation is simply to listen to the output of the speech synthesiser, which was also the main evaluation method during each new implementation step within the system. However, the parsing of Japanese text was also a big part of this project. Here our main focus was that all syllables should be pronounced correctly (i.e. have the correct translation from Japanese text to phoneme transcription). We tested with a focus on the gemination of syllables, the moraic nasal n and the correct IPA notations for a given input sentence (which indicates that the right phonemes are used), as well as on the normalisation from katakana and half-width katakana to hiragana, for which all conversions to hiragana are now done correctly. For this, random words focusing on the previously stated issues were inserted; given the knowledge the system possesses of the rules of the pronunciation of Japanese characters, the system transcribed the words correctly. However, at this point in time not all pronunciation rules of the language could be implemented, mainly because the system does not yet have a lexical analysis system. The actual use of this implementation will be during the automatic segmentation process within praat. Although this Japanese language support was not yet available in praat during the evaluation, this feature will be available by the time this implementation becomes available for espeak. Given a manually segmented output of the system, and comparing this analysis with my own pronunciation of the Japanese text, many specific similarities can be found, which gives high hopes for the dynamic time warping algorithm used during the segmentation process.

6 Conclusion

The provided implementation is capable of pronouncing hiragana, katakana and rōmaji using the Hepburn romanization system. However, the implementation still imposes restrictions on the user which are unnatural, for example the need to write tōkyō as / とーきょー / instead of / とうきょう / or even as / 東京 /. The main goal of this paper, however, was to provide assistance during the segmentation process of praat, where such restricted input is in all likelihood acceptable. Moreover, this implementation provides a good starting point for further research regarding the implementation of Japanese speech synthesis support within espeak.

7 Future work

The current implementation has its main focus on correctly pronouncing Japanese characters and on using this during the segmentation process in praat. However, this implementation comes with various restrictions, by which the user has to alter the input in order to get correct pronunciation. This is of course not an optimal solution, and in order to overcome these restrictions some additional processing of the input is needed. There are two main components that could help overcome these restrictions and provide more natural input, so that any Japanese sentence (with kanji etc.) can be processed by the system. The first component is a lexical analysis system which can parse kanji and is capable of providing a grammatical analysis (i.e. showing which characters are particles). This type of system should also be able to normalise the kanji to the hiragana writing system (or even rōmaji), which can then be parsed by the already present implementation for espeak. Various lexical analysis systems are already available for the Japanese language; however, what is needed is a system which works offline and is compatible with the General Public Licence (GPL). Furthermore, the Japanese language uses a pitch accent, which is not yet implemented in the current implementation. For this a phonetic database could be used, like the database described by Halpern (n.d.), where a so-called binary pitch is used and the pitch is represented with a two-pitch-level model. In this model a pitch can be either high (H) or low (L), and each mora (for a specific word) has its own pitch level. This is needed due to the presence of words that differ only in accent, but also to make the system sound more natural. With these additional systems the input sentence can be normalised to hiragana characters, which the current implementation is already capable of pronouncing. With this, the current restrictions (e.g. particles) can be solved, making the input more natural.

7.1 espeak functionality

The following additional functionality needs to be implemented within the espeak system in order to be able to correctly pronounce Japanese text. In the rules (i.e. ja_rules) it must be possible to check for a question mark, in order to implement the exception where the rise in pitch overrides the devoicing process.

Also, due to the automatic decapitalisation of the input text, it is not yet possible to support both rōmaji and capitalised Latin characters for abbreviations. This is because the capitalised Latin characters will be automatically decapitalised and parsed as rōmaji syllables, instead of the abbreviations being pronounced correctly.

References

[1] Ricardo A. H. Bion, Kouki Miyazawa, Hideaki Kikuchi, and Reiko Mazuka. Learning phonemic vowel length from naturalistic recordings of Japanese infant-directed speech. PLoS ONE, 8(2):e51594, 2013.

[2] Paul Boersma and David Weenink. Praat: doing phonetics by computer.

[3] Jonathan Duddington. espeak text to speech.

[4] Jack Halpern. The role of phonetics and phonetic databases in Japanese speech technology.

[5] I. Kawase, M. Sugihara, and Kokusai Kōryū Kikin. Nihongo: the pronunciation of Japanese. Japan Foundation, 1978.

[6] Dennis H. Klatt. Software for a cascade/parallel formant synthesizer. Journal of the Acoustical Society of America, 67:971-995, 1980.

[7] Dennis H. Klatt and Laura C. Klatt. Analysis, synthesis, and perception of voice quality variations among female and male talkers. Journal of the Acoustical Society of America, 87:820-857, 1990.

[8] Kawahara Shigeto. The phonetics of obstruent geminates, sokuon. 2012/forthcoming.

[9] T. J. Vance. The Sounds of Japanese with Audio CD. Cambridge University Press, 2008.

A How to use this initial implementation

Currently, the implementation allows for hiragana and katakana characters as well as input using the modified Hepburn romanization system. However, because no lexical analysis is available at this point, some ambiguity in pronunciation occurs. In order to overcome these ambiguities we force the user to make the input unambiguous with regard to pronunciation:

1. The system requires the user to add word segmentation (section 4.1)
2. Particles need to be written as pronounced (section 3.5.4)
3. Long vowels need to be written with the prolonged sound mark (section 3.5.1)

Therefore a sentence like / これは東京です / (this is tōkyō) could be inserted as / これ わ とーきょー です /, where there is no kanji, the user provides word segmentation and long vowels are explicitly written with the prolonged sound mark. It is, however, also possible to input the sentence using the modified Hepburn romanization system: /kore wa tōkyō desu/, which, apart from the rules of the modified Hepburn romanization system, has no additional restrictions.

B IPA for Japanese

IPA for Japanese as found on Wikipedia.

IPA     Japanese example        English approximation
b       basho                   bog
ç       hito                    hue
ɕ       shita, shugo            sheep
d       dōmo                    dome
dz, z   zutto                   rods, zen
dʑ, ʑ   jibun, gojū             jeep, garagist
ɸ       fugu                    who
g       gakusei                 gape
h       hon                     hone
j       yakusha                 yak
k       kuru                    skate
m       mikan                   much
n       nattō                   not
ɴ       nihon                   long
ŋ       ringo, rinku            finger, pink
p       pan                     span
ɾ       roku                    close to /t/ in auto in American English
s       suru                    sue
t       taberu                  stop
ts      tsunami                 cats
tɕ      chikai, kinchō          itchy
w       wasabi                  was
ʔ       (in Ryukyu languages)   uh-oh!
a       aru                     roughly like father
e       eki                     roughly like met
i       iru                     need
i̥       yoshi, shita            (almost silent)
o       oniisan                 roughly like sore
ɯ       unagi                   roughly like foot
ɯ̥       desu, sukiyaki          (almost silent)


More information

Bahasa Inggris 3 LANGUAGE AND VOCABULARY RELATED TO FUTURE CAREER

Bahasa Inggris 3 LANGUAGE AND VOCABULARY RELATED TO FUTURE CAREER Modul ke: 6 Eko Fakultas EKONOMI & BISNIS Bahasa Inggris 3 LANGUAGE AND VOCABULARY RELATED TO FUTURE CAREER Putra Boediman Program Studi MANAJEMEN Intonation Many people from different countries have improper

More information

Classification of Speech Sounds. Vowels; Consonants.

Classification of Speech Sounds. Vowels; Consonants. Classification of Speech Sounds Vowels; Consonants. Since the writing system of English does not provide us with a one-to-one correspondence between oral sound and written symbol, we need a tool for representing

More information

Pronunciation May 3, Vowels and consonants

Pronunciation May 3, Vowels and consonants Pronunciation May 3, 2016 Vowels and consonants Consonants 1. How many consonants are there is English? 2. What is a written letter that doesn t have its own sound? 3. What is a sound that is not represented

More information

AM Anthony. University of Michigan

AM Anthony. University of Michigan TOOLS FOR TEACHING PRONUNCIATION AM Anthony University of Michigan Learning a foreign language has often erroneously meant undue emphasis on grammar in the form of paradigms, and vocabulary in the form

More information

Influence by EAS of 2007 Summer: Deakin University. Mika Iwamoto. A graduation thesis submitted to the International Communication Course,

Influence by EAS of 2007 Summer: Deakin University. Mika Iwamoto. A graduation thesis submitted to the International Communication Course, Influence by EAS of 2007 Summer 1 Running Head: INFLUENCE BY EAS OF 2007 SUMMER Influence by EAS of 2007 Summer: Deakin University Mika Iwamoto A graduation thesis submitted to the International Communication

More information

日本語 3030 Japanese 3030: Advanced Japanese II University of North Texas Spring 2017

日本語 3030 Japanese 3030: Advanced Japanese II University of North Texas Spring 2017 日本語 3030 Japanese 3030: Advanced Japanese II University of North Texas ------ Spring 2017 Instructor & Office Hours: Instructor Name Yayoi Takeuchi Email yayoi.takeuchi@unt.edu Office LANG 405B Phone 940--565--2404

More information

effects observed in consonant and v

effects observed in consonant and v Title Author(s) Citation Cross cultural studies on audiovisu effects observed in consonant and v Rahmawati, Sabrina; Ohgishi, Michit Proceedings of 2011 6th Internation Systems, Services, and Applications

More information

Specialization Module. Speech Technology. Timo Baumann

Specialization Module. Speech Technology. Timo Baumann Specialization Module Speech Technology Timo Baumann baumann@informatik.uni-hamburg.de Universität Hamburg, Department of Informatics Natural Language Systems Group A bit of Phonetics Speech Production:

More information

Speech Synthesis. Tokyo Institute of Technology Department of fcomputer Science

Speech Synthesis. Tokyo Institute of Technology Department of fcomputer Science Speech Synthesis Sadaoki Furui Tokyo Institute of Technology Department of fcomputer Science furui@cs.titech.ac.jp 0107-14 Pronouncing Acoustic dictionary segments and rules dictionary Text input Pronounce

More information

TOKIWA UNIVERSITY STUDENT EXCHANGE PROGRAM - FALL 2013

TOKIWA UNIVERSITY STUDENT EXCHANGE PROGRAM - FALL 2013 TOKIWA UNIVERSITY STUDENT EXCHANGE PROGRAM - FALL 2013 This is the guidance for the Tokiwa University Student Exchange Program 2013, which is held during the Fall Semester, from mid-september 2013 to mid-january

More information

What Is Phonetics? Phonetic Transcription Articulation of Sounds. Phonetics. Darrell Larsen. Linguistics 101

What Is Phonetics? Phonetic Transcription Articulation of Sounds. Phonetics. Darrell Larsen. Linguistics 101 What Is? Linguistics 101 Outline What Is? 1 What Is? 2 Phonetic Alphabet Transcription 3 Articulation of Consonants Articulation of Vowels Other Languages What Is? What Is? Definition the study of speech

More information

Research Article Effectiveness of Context-Aware Character Input Method for Mobile Phone Based on Artificial Neural Network

Research Article Effectiveness of Context-Aware Character Input Method for Mobile Phone Based on Artificial Neural Network Applied Computational Intelligence and Soft Computing Volume, Article ID 8648, 6 pages doi.55//8648 Research Article Effectiveness of Context-Aware Character Input Method for Mobile Phone Based on Artificial

More information

UNC Charlotte Japanese Folktales - Spring 16

UNC Charlotte Japanese Folktales - Spring 16 UNC Charlotte Japanese 4050-002 - Folktales - Spring 16 Instructor: Jordan Bledsoe Office: COED 459 Contact: jbleds11@uncc.edu Office Hours Mondays: 8:30-9:15 AM, 11:00-12:15, Wednesdays: 11:00~12:15,

More information

The Phone(c Alphabet. Spelling, or orthography, does not consistently represent the sounds of language. Some problems with ordinary spelling:

The Phone(c Alphabet. Spelling, or orthography, does not consistently represent the sounds of language. Some problems with ordinary spelling: The Phone(c Alphabet Spelling, or orthography, does not consistently represent the sounds of language Some problems with ordinary spelling: 1. The same sound may be represented by many le?ers or combina(on

More information

A Transformation-Based Learning Method on Generating Korean Standard Pronunciation *

A Transformation-Based Learning Method on Generating Korean Standard Pronunciation * A Transformation-Based Learning Method on Generating Korean Standard Pronunciation * Kim Dong-Sung a and Chang-Hwa Roh a a Department of Linguistics and Cognitive Science Hankuk University of Foreign Studies

More information

Simplification of Example Sentences for Learners of Japanese Functional Expressions

Simplification of Example Sentences for Learners of Japanese Functional Expressions Simplification of Example Sentences for Learners of Japanese Functional Expressions Jun Liu Nara Institute of Science and Technology 8916-5 Takayama, Ikoma, Nara, Japan liu.jun.lc3@is.naist.jp Yuji Matsumoto

More information

Lecture (2) PHONETICS

Lecture (2) PHONETICS Advanced Phonetics and Phonology 1302741 Lecture (2) PHONETICS Phonetics Scientific study of spoken language Basic conditions and constraints of human speech production and perception How are speech sounds

More information

Lesson 98: Education (20-25 minutes)

Lesson 98: Education (20-25 minutes) Main Topic 17: Industries Lesson 98: Education (20-25 minutes) Today, you will: 1. Learn useful vocabulary related to EDUCATION. 2. Review MODALS PART2. I. VOCABULARY Exercise 1: What s the meaning? (5-6

More information

ISO INTERNATIONAL STANDARD. Documentation - Romankation of Japanese (kana script) First edition Reference number ISO 3602 : 1989 (El

ISO INTERNATIONAL STANDARD. Documentation - Romankation of Japanese (kana script) First edition Reference number ISO 3602 : 1989 (El INTERNATIONAL STANDARD ISO 3602 First edition 1989-09-01 Documentation - Romankation of Japanese (kana script) Documen ta tion - Romankation du japonais (ecriture en kana) I Reference number ISO 3602 :

More information

Insertion of Glottal Stops before Word-Final Voiceless Stops in English. Alyssa Flower. Kent State University

Insertion of Glottal Stops before Word-Final Voiceless Stops in English. Alyssa Flower. Kent State University Insertion of glottal stops before word-final voiceless stops in English 1 Insertion of Glottal Stops before Word-Final Voiceless Stops in English Alyssa Flower Kent State University Insertion of glottal

More information

Analyzing the Revision Logs of a Japanese Newspaper for Article Quality Assessment

Analyzing the Revision Logs of a Japanese Newspaper for Article Quality Assessment Analyzing the Revision Logs of a Japanese Newspaper for Article Quality Assessment Hideaki Tamori 1 Yuta Hitomi 1 Naoaki Okazaki 2 Kentaro Inui 3 1 Media Lab, The Asahi Shimbun Company 2 Tokyo Institute

More information

Lecture (5) FEATURES

Lecture (5) FEATURES Advanced Phonetics and Phonology 1302741 Lecture (5) FEATURES Segmental Composition Speech sounds can be decomposed into a number of articulatory components. Combining these properties in different ways

More information

Development of speech synthesis simulation system and study of timing between articulation and vocal fold vibration for consonants /p/, /t/ and /k/

Development of speech synthesis simulation system and study of timing between articulation and vocal fold vibration for consonants /p/, /t/ and /k/ Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Development of speech synthesis simulation system and study of timing between articulation and vocal

More information

Speech Corpora. When you conduct research on speech you can either (1) record your own data or (2) use a ready-made speech corpus.

Speech Corpora. When you conduct research on speech you can either (1) record your own data or (2) use a ready-made speech corpus. Speech Corpora Speech corpus a large collection of audio recordings of spoken language. Most speech corpora also have additional text files containing transcriptions of the words spoken and the time each

More information

Mastery in Japanese Conjunctions among Indonesian Learners of Japanese

Mastery in Japanese Conjunctions among Indonesian Learners of Japanese Quest Journals Journal of Research in Humanities and Social Science Volume 5 ~ Issue 4 (2017) pp: 36-42 ISSN(Online) : 2321-9467 www.questjournals.org Research Paper Mastery in Japanese Conjunctions among

More information

UNESCAP LANGUAGE PROGRAMME

UNESCAP LANGUAGE PROGRAMME 1 UNESCAP LANGUAGE PROGRAMME PRONUNCIATION SKILLS Duration: This course is held once a week, 2 hours a class, for 13 weeks. (Please check posted schedule for dates and time.) Description: This course is

More information

Access By Bullet Train 〇 Shizuoka campus: 1 hour from Tokyo 〇 Hamamatsu campus: 1.5 hour from Tokyo 30min from Nagoya

Access By Bullet Train 〇 Shizuoka campus: 1 hour from Tokyo 〇 Hamamatsu campus: 1.5 hour from Tokyo 30min from Nagoya 1 Access By Bullet Train 〇 Shizuoka campus: 1 hour from Tokyo 〇 Hamamatsu campus: 1.5 hour from Tokyo 30min from Nagoya 1.Where We Are We are located in the heart of Japan! Campus Mascot (Shizuppi) 2.

More information

SYNTHESIS OF ORAL AND NASAL VOWELS OF URDU

SYNTHESIS OF ORAL AND NASAL VOWELS OF URDU 94 Center for Research in Urdu Language Processing SYNTHESIS OF ORAL AND NASAL VOWELS OF URDU MUHAMMAD KHURRAM RIAZ ABSTRACT The following oral and nasal vowels of Urdu were synthesized (i, æ, u,, i, æ,

More information

The JF Standard for Japanese-Language Education

The JF Standard for Japanese-Language Education The JF Standard for Japanese-Language Education Minna no"can-do" website Marugoto: Japanese Language and Culture http://jfstandard.jp/ http://jfstandard.jp/cando/ http://marugoto.org/ The JF Standard for

More information

Research on the Acoustic Realization of Lateral in Sundanese

Research on the Acoustic Realization of Lateral in Sundanese International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 10 (October 2016), PP.44-48 Research on the Acoustic Realization of Lateral

More information

Phonemics Practice ANSWERS Sayers, LIN [m] [m ] [n] [n ] # u a # # e a # e je r # o d o # # d

Phonemics Practice ANSWERS Sayers, LIN [m] [m ] [n] [n ] # u a # # e a # e je r # o d o # # d Phonemics Practice ANSWERS Sayers, LIN 3201 The four nasal phones are [m], [m ], [n] and [n ]. When analyzing the data, we find the following distributions: [m] [m ] [n] [n ] # u a # # e a # e je r # o

More information

Phonetics. The Sound of Language

Phonetics. The Sound of Language Phonetics. The Sound of Language 1 The Description of Sounds Fromkin & Rodman: An Introduction to Language. Fort Worth etc., Harcourt Brace Jovanovich Read: Chapter 5, (p. 176ff.) (or the corresponding

More information

COPYRIGHT NOTICE Heisig/Remembering the Kana

COPYRIGHT NOTICE Heisig/Remembering the Kana COPYRIGHT NOTICE Heisig/Remembering the Kana is published by University of Hawai i Press and copyrighted, 2007, by James W. Heisig. All rights reserved. No part of this book may be reproduced in any form

More information

With the globalization of English, the acquisition of English for practical use has

With the globalization of English, the acquisition of English for practical use has JALT2010 Conference Proceedings 139 Effective instruction of shadowing using a movie Yukie Saito Kansaigaikokugo University Yuko Nagasawa Bunkyo University High School Shigeko Ishikawa Tokyo International

More information

BM3 Introduction to English Linguistics Part II

BM3 Introduction to English Linguistics Part II BM3 Introduction to English Linguistics Part II Session 2: Phonetics Contact options: Who am I? Rebecca Carroll, M.A. Stud.IP A 10 1-103 / phone 0441-798 3181 Email: rebecca.carroll@uni-oldenburg.de All

More information

Dynamically Scoring Rhymes with Phonetic Features and Sequence Alignment

Dynamically Scoring Rhymes with Phonetic Features and Sequence Alignment Dynamically Scoring Rhymes with Phonetic Features and Sequence Alignment Abstract We present a formalized rhyme function for machine approximation of human rhyme. Words are represented as sequences of

More information

L17: Speech synthesis (front-end)

L17: Speech synthesis (front-end) L17: Speech synthesis (front-end) Text-to-speech synthesis Text processing Phonetic analysis Prosodic analysis Prosodic modeling [This lecture is based on Schroeter, 2008, in Benesty et al., (Eds); Holmes,

More information

UNIT SELECTION VOICE FOR AMHARIC USING FESTVOX

UNIT SELECTION VOICE FOR AMHARIC USING FESTVOX UNIT SELECTION VOICE FOR AMHARIC USING FESTVOX Sebsibe H/Mariam, S P Kishore, Alan W Black, Rohit Kumar, and Rajeev Sangal Language Technologies Research Center International Institute of Information Technology,

More information

The Test of English for International Communication (TOEIC), a multiple-choice test composed

The Test of English for International Communication (TOEIC), a multiple-choice test composed TOEIC Survey: Speaking vs. Listening and Reading Masaya Kanzaki Kanda University of International Studies Reference Data: Kanzaki, M. (2015). TOEIC survey: Speaking vs. listening and reading. In P. Clements,

More information

Layered Speech-Act Annotation for Spoken Dialogue Corpus

Layered Speech-Act Annotation for Spoken Dialogue Corpus Layered Speech-Act Annotation for Spoken Dialogue Corpus Yuki Irie, Shigeki Matsubara, Nobuo Kawaguchi, Yukiko Yamaguchi, Yasuyoshi Inagaki Graduate School of Information Science, Nagoya University Furo-cho,

More information

Introduction to Phonetics

Introduction to Phonetics Introduction to Phonetics Dr. Christian DiCanio cdicanio@buffalo.edu University at Buffalo 9/1-3/14 DiCanio (UB) Introduction to Phonetics 9/1-3/14 1 / 31 Introduction Questions Why do languages sound

More information

Video https://youtu.be/yhwm9t48zi0

Video https://youtu.be/yhwm9t48zi0 http://ccmg.cc.toin.ac.jp/tech/bmed/ft28/univalphen.pdf Video https://youtu.be/yhwm9t48zi0 Most rational writing system in the world: Universal Literacy Alphabet Against Poverty, Spread of AIDS (HIV),

More information

LING 520 Introduction to Phonetics I Fall 2008

LING 520 Introduction to Phonetics I Fall 2008 LING 520 Introduction to Phonetics I Fall 2008 Week 2 English consonants and vowels Articulatory phonology Sep. 15, 2008 2 1. Consonants are longer when at the end of a phrase (bib, did, don, nod). 2.

More information

Oral Communication A. Name:

Oral Communication A. Name: Oral Communication A Name: Semester 1, 2016. (10301-004/5 & 10306-004/5) Mon, Wed, Fri, Rm, L13 (LL4) Teacher: Andrew Blyth, PhD Candidate, MA ELT, CELTA, B.Sc Contact: ablyth@nanzan-u.ac.jp (Don t use

More information

Application of the Speech Recognition Technology in Language Education

Application of the Speech Recognition Technology in Language Education European Journal of Multidisiplinary Sciences EjMS Volume I (ISSN: 2421-8251) Application of the Speech Recognition Technology in Language Education Mehryar Nooriafshar a *, Darius Nooriafshar b * Corresponding

More information

Advanced Phonetics and Phonology

Advanced Phonetics and Phonology Advanced Phonetics and Phonology 1302741 Lecture (7) NATURALNESS AND STRENGTH Naturalness Natural : Reasonable or expected in a particular situation (Macmillan English Dictionary) to be expected, frequently

More information

10/20/11. Dr. Colleen Fitzgerald The University of Texas at Arlington

10/20/11. Dr. Colleen Fitzgerald The University of Texas at Arlington Oklahoma Native Language Association October 20, 2011 Ada, OK Dr. Colleen Fitzgerald The University of Texas at Arlington http://ling.uta.edu/~colleen cmfitz@uta.edu Or maybe, what is phonetics? Phonetics

More information

Word accent in Mongolian. Anastasia M Karlsson Lund University

Word accent in Mongolian. Anastasia M Karlsson Lund University Word accent in Mongolian Anastasia M Karlsson Lund University The subject of the investigation is standard Mongolian, the Halh dialect (see Svantesson et al 2005) spoken in Ulaanbaatar, the capital of

More information

Florida International University Department of Modern Languages Syllabus: JPN 1131 U01 (Japanese II)

Florida International University Department of Modern Languages Syllabus: JPN 1131 U01 (Japanese II) Semester: Spring, 2017 Class Time: M & W 2:00pm 3:50pm F 2:00pm 2:50pm Class Days: M, W & F Florida International University Department of Modern Languages Syllabus: JPN 1131 U01 (Japanese II) Instructor:

More information

INTD0112 Introduction to Linguistics

INTD0112 Introduction to Linguistics INTD0112 Introduction to Linguistics Lecture #8 Sept 30 th, 2009 Announcements Just a reminder: HW2 due on Friday in my mailbox or by e-mail by 12:30pm. Delay policy applies. First talk in the linguistics

More information

Sample assessment task. Task details. Content description. Year level 2 Learning area Subject Title of task

Sample assessment task. Task details. Content description. Year level 2 Learning area Subject Title of task Sample assessment task Year level 2 Learning area Subject Title of task Task details Description of task Type of assessment Purpose of assessment Assessment strategy Evidence to be collected Languages

More information

Monitoring Soft Palate Movements in Speech

Monitoring Soft Palate Movements in Speech Paper Delivered at the 8lst Meeting of The Acoustical Society of America Washington, D.C. April 23, 1971 Monitoring Soft Palate Movements in Speech JOHN J. OHALA, Department of Linguistics, University

More information

Distinctive Features. Phonology Below the Level of the Phoneme. Robert Mannell

Distinctive Features. Phonology Below the Level of the Phoneme. Robert Mannell Distinctive Features Phonology Below the Level of the Phoneme Robert Mannell Introduction So far we have mainly examined phonological units at or above the level of the phoneme. Some phonological properties

More information

日本語 2040/ 2050 Intermediate Japanese: Step Forward in Japan

日本語 2040/ 2050 Intermediate Japanese: Step Forward in Japan 日本語 2040/ 2050 Intermediate Japanese: Step Frward in Japan University f Nrth Texas Summer 20 Instructr: 2040 & 2050 Sectin 001 Name Yayi Takeuchi 竹内弥生 ( たけうちやよい ) Email yayi.takeuchi@unt.edu Office Language

More information

Building Intercultural Communicative Competence: Analyzing the Language Exchange Program

Building Intercultural Communicative Competence: Analyzing the Language Exchange Program 立教日本語教育実践学会 R-JLEP 研究ノート Research Notes 日本語教育実践研究第 3 号 pp.127-136 Building Intercultural Communicative Competence: Analyzing the Language Exchange Program Ayana INOGUCHI (Rikkyo University) Keywords :

More information

A Course In Phonetics

A Course In Phonetics THE NTERNATONAL PHONETC ALPHABET (revised to 1989) CONSONANTS A Course n Phonetics Third Edition Peter Ladefoged C University of California, ~ ohgeles s Wheresymbols appear in pain, the one to the right

More information