Lip reading: Japanese vowel recognition by tracking temporal changes of lip shape
Koshi Odagiri and Yoichi Muraoka
Graduate School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan

Abstract

In this paper, we propose a vision-based approach to recognizing Japanese vowels. Previous research has relied on lip size, lip width, and lip height; our method instead works with lip shape, focuses on its temporal changes, and defines a new feature value for vowel recognition. Many conventional studies use datasets captured under controlled conditions, such as well-lit rooms or speakers wearing lipstick. We instead use Active Shape Models to extract the lip area and compute feature values, so our technique is not tied to the capture environment, and we show that the feature values are robust. In our experiments, the approach achieved an average accuracy of about 80%, which matches the vowel-recognition accuracy of Japanese people who use lip reading. We conclude that our method can support speech recognition.

Keywords: lip reading, vowel recognition, lip extraction

1. Introduction

Audio-based speech recognition is now widely deployed in game hardware, car navigation systems, and cell phones, but such systems cannot be used in noisy environments. Speech communication by hearing-impaired people is based mainly on sign language, though some people use lip reading. Visual information can therefore improve the performance of audio speech recognition under adverse conditions. Recognizing the mouth area is essential for lip reading. We classify recognition methods into two types: color-based methods such as the snake algorithm [1], and model-based methods such as Active Shape Models [2]. Color-based methods are sensitive to the brightness of the environment; model-based methods are not influenced by lighting but require training datasets of faces.
Lip reading experiments fall into four types: letter recognition, word recognition, sentence recognition, and semantic recognition. Japanese is written with hiragana syllables and has relatively flexible grammar, so sentence and semantic recognition are not robust and require large training datasets. Japanese pronunciation is built from hiragana syllables, Japanese speakers produce visibly different mouth shapes for the vowels, and almost all sounds are based on the five vowels /a/, /i/, /u/, /e/, and /o/. Single-sound recognition of vowels is therefore important. It comes in two forms: recognition from static lip images, and tracking of temporal changes of the lip. In this paper, we propose a letter-recognition method for lip reading that focuses on temporal changes of lip shape obtained by model-based lip extraction.

2. Related works

In this section, we discuss previous related work and motivate the direction of our method. Uchimura et al.'s study [3] performs letter recognition from static images: it recognizes the lip area using histograms of gray-scale images, and classifies letters from mouth size and mouth width. Because it uses static lip images, segmenting the boundaries between letters is difficult, and the method is unsuitable for extension to word or sentence recognition. Saitoh and Konishi's study [4] uses a color-based method and classifies letters from temporal changes of lip size and lip aspect ratio. Their method averaged 93.8% accuracy, but, being color-based, it is not robust.

Fig. 1: Lip area extraction by color-based method

Figures 1 and 2 show the results of lip-area extraction with a color-based method; we extracted the lip using the RGB information of the image. Figure 1 shows that this method captures almost all of the lip area, but also non-lip regions. In figure 2, we changed the threshold of the color comparison.
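The color-based extraction discussed above can be sketched as a simple per-pixel RGB threshold. This is only an illustrative sketch of the general technique, not the authors' exact rule; the red-ratio criterion and the threshold value are assumptions.

```python
import numpy as np

def lip_mask_rgb(image, ratio_threshold=0.55):
    """Naive color-based lip segmentation (illustrative sketch).

    image: H x W x 3 uint8 RGB array.
    A pixel is marked as "lip" when its red channel accounts for more
    than `ratio_threshold` of the RGB sum -- lips are typically redder
    than the surrounding skin.  The threshold value is a hypothetical
    choice; as the paper notes, such thresholds must be tuned per scene.
    """
    rgb = image.astype(np.float64) + 1e-9  # avoid division by zero
    red_ratio = rgb[..., 0] / rgb.sum(axis=-1)
    return red_ratio > ratio_threshold

# Example: a synthetic 2x2 image with two "lip-like" red pixels.
img = np.array([[[200, 60, 60], [120, 110, 100]],
                [[210, 50, 70], [90, 90, 90]]], dtype=np.uint8)
mask = lip_mask_rgb(img)
```

The sensitivity to the threshold and to background colors is exactly the weakness the paper points out for color-based methods.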
Fig. 2: Lip area extraction by different threshold

The figures show that this color-based algorithm is clearly influenced by the background and by the tuning of the thresholds. We propose instead a model-based lip-extraction method. Figure 3 shows lip extraction by Active Shape Models on the same face image as above. Clearly, the model-based method extracts the lip area correctly and in detail, and our method exploits the lip shape all the more finely.

Fig. 3: Lip area extraction by active shape models

As mentioned in the section above, spoken Japanese consists of hiragana syllables, and the visual differences between consonants are very small. Uchimura et al.'s method, based on mouth size and width, and Saitoh et al.'s method, based on mouth size and aspect ratio, are therefore unsuitable for extension to consonant recognition. We propose a robust method that recognizes vowels by model-based lip extraction and by tracking temporal changes of feature points on the lip shape; our method solves the problems above.

3. Method

We use a model-based method for lip-area extraction in these experiments. In this section, we propose a method for recognizing utterances from visual information using lip features.

3.1 Initialization

First, we train the Active Shape Models on faces using 68 points, of which 19 are used as lip features; figure 3 shows the 68 learned points. In this experiment, we define one utterance segment as the interval from one mouth closure to the next. Empirically, a segment spans about 30 to 70 frames, so we resample every segment to 50 frames. To normalize the mouth movement, we also normalize mouth size and inclination using the width between the feature points at the two corners of the closed mouth contour in the first frame.

3.2 Feature value

To track temporal changes, we compute feature values from the feature points of the lip contours, including the inside of the mouth. Figure 4 shows our definition of the feature value in these experiments: each feature value is the distance between the center point of the contour and one of the contour points, so the feature values encode where the feature points lie.

Fig. 4: Features of lip area and feature value

The feature values are therefore formulated as

    V = sqrt( (α_x − C_x)^2 + (α_y − C_y)^2 )    (1)

where V is the feature value, α is a feature point, and C is the center feature point of the mouth.

3.3 Relation between feature values

In this subsection, we examine the relation between feature values. Figure 5 compares the feature values of five different people, computed as in the previous subsection, for the top feature point of the mouth during /a/; those features change substantially during the vowel. In addition, figure 6 shows the temporal changes for the different vowels; the differences between vowels are visible in the figure.
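The per-point feature value of formula (1), the distance between a lip-contour point and the mouth-center point, can be sketched as follows; the coordinates in the example are hypothetical, and the contour here holds only 3 of the 19 points for brevity.

```python
import math

def feature_value(point, center):
    """Feature value V from formula (1): the Euclidean distance between
    a lip-contour feature point (alpha) and the mouth-center feature (C)."""
    ax, ay = point
    cx, cy = center
    return math.sqrt((ax - cx) ** 2 + (ay - cy) ** 2)

# 19 contour points per frame would give a 19-dimensional feature vector.
# Hypothetical pixel coordinates for illustration only:
center = (50.0, 40.0)
contour = [(50.0, 25.0), (62.0, 31.0), (68.0, 40.0)]
values = [feature_value(p, center) for p in contour]
```

Because each value is a distance from the mouth center, the resulting vector describes where the contour points sit relative to the center, which is what lets the method track shape rather than just size.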
Considering the two graphs above, vowels can be recognized from the feature values we proposed, computed by formula (1).

3.4 Learning values

For each vowel, we average the feature values obtained by formula (1) over the training data, and use those averages to recognize an input vowel. The learned data are therefore given by

    D_tvp = ( Σ_{n=1}^{N} V_np ) / N    (2)

where D_tvp is the learned feature value at time t of vowel v, N is the number of training samples, p indexes the lip feature points, and V is the value obtained by formula (1).

3.5 Matching method

For vowel recognition, we use the following formula to evaluate which vowel the input is most likely to be:

    S_v = Σ_{t=0}^{T} Σ_{n=1}^{19} | X_tvn − D_tvn |    (3)

where S_v is the evaluated value for vowel v, T is the number of frames, X_tvn is the input vowel's feature value, and D is computed by formula (2). We evaluate every vowel with formula (3), and the vowel with the smallest evaluated value is taken as the match for the input.

4. Experiments

In this section, we describe the implementation of our method and the experiments, and discuss the results of our system.

Fig. 5: Feature values of /a/ by 5 people

Fig. 6: Feature values of vowels

4.1 Setup

We implemented a system based on the proposed method, divided into the following two parts.

Fig. 7: Chart of learning part of system (INPUT → LIP AREA EXTRACTION → CALCULATING FEATURE → LEARNING → DISK)

Figure 7 is a chart of the learning part of our system: we input a vowel, calculate its feature values with our method, and store the learned values in a database.

Fig. 8: Chart of estimating part of system (INPUT → LIP AREA EXTRACTION → CALCULATING FEATURE → COMPARING WITH LEARNED DATA → OUTPUT ESTIMATED)

Figure 8 is a chart of the estimating part of our system. It shares the feature-calculation steps with the learning part, but the next step is a comparing process, performed by the matching method of section 3 against the learned database. Finally, the system outputs the estimated vowel.
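The learning and matching steps above amount to averaging the training feature values per vowel and scoring an input against each averaged template; a minimal sketch follows. The array shapes (frames x feature points) and the use of an absolute difference in the score are assumptions; the paper's formulas (2) and (3) are garbled in this transcription, but "smallest evaluated value wins" implies a distance of this kind.

```python
import numpy as np

def learn_templates(samples_by_vowel):
    """Formula (2) sketch: per-vowel template D[t, p] = mean over the N
    training samples of the feature values V[n, t, p]."""
    return {vowel: np.mean(np.asarray(samples), axis=0)
            for vowel, samples in samples_by_vowel.items()}

def match(input_features, templates):
    """Formula (3) sketch: score each vowel by the summed absolute
    difference between input X[t, p] and template D[t, p] over all
    frames t and feature points p; the smallest score is the match."""
    x = np.asarray(input_features)
    scores = {vowel: float(np.abs(x - d).sum())
              for vowel, d in templates.items()}
    return min(scores, key=scores.get)

# Toy example: 2 frames x 3 feature points, 2 training samples per vowel.
train = {
    "a": [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
          [[1.2, 2.2, 3.2], [4.2, 5.2, 6.2]]],
    "i": [[[9.0, 9.0, 9.0], [9.0, 9.0, 9.0]],
          [[8.0, 8.0, 8.0], [8.0, 8.0, 8.0]]],
}
templates = learn_templates(train)
predicted = match([[1.1, 2.1, 3.1], [4.1, 5.1, 6.1]], templates)
```

In the paper's setting the arrays would be 50 frames x 19 feature points, with one template per vowel /a/, /i/, /u/, /e/, /o/.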
Table 1: Environment of experiments

    OS                    Windows 7 Professional 64-bit edition
    CPU                   Intel Core 2 Extreme X9650
    Memory                4 GByte
    Camera                Logicool 2-MP Webcam C600h
    Resolution of camera  640px x 480px
    FPS during capturing  30fps

Our system ran in the environment of table 1. We used a web camera, which means the system ran with a camera poorer than that of an iPhone 4. We captured 20 people speaking the 5 vowels in front of the camera, 3 times each, and used the data of 15 of those people as the valid dataset, defined as recordings that are not blurred and whose feature points the Active Shape Models can recognize. The datasets were captured against various backgrounds, such as laboratories, houses, and meeting rooms. In our experiment, we used leave-one-out cross-validation for the evaluation, in the following two settings:

1) training on all captured vowel samples other than the test sample
2) training on the captured vowels of all speakers other than the test speaker

4.2 Results

Table 2 shows the results of the above experiments.

Table 2: Results of our experiments (the accuracy-rate columns correspond to the evaluations above)

    Vowel    Accuracy rate of (1)   Accuracy rate of (2)
    /a/      76%                    75%
    /i/      92%                    90%
    /u/      67%                    69%
    /e/      84%                    82%
    /o/      72%                    76%
    Average  78.2%                  78.4%

The average accuracy rates approach 80%. In Sekiyama's research [5], the average vowel-recognition accuracy of Japanese people who use lip reading is about 80%, so our study reaches comparable accuracy. Most wrong estimations were between /a/ and /o/ or between /u/ and /o/; these confusions are also often reported in other papers.

4.3 Discussion

Figure 9 compares the largest differences in the temporal changes of the feature points between two training datasets as defined in section 4.1. Clearly, there is no difference between the two datasets, so our method produces robust feature values and can handle the vowels of unknown speakers. Figure 10 shows some of the largest differences in feature values between /u/ and /o/.
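The leave-one-out cross-validation used in section 4.1 can be sketched as follows. The training and prediction callables are placeholders standing in for the learning and matching steps of section 3, not the authors' implementation; the nearest-mean toy classifier and the data are hypothetical.

```python
def leave_one_out_accuracy(dataset, train_fn, predict_fn):
    """Leave-one-out cross-validation: for each labeled sample, train on
    all the other samples, test on the held-out one, and report the
    fraction of correct predictions."""
    correct = 0
    for i, (features, label) in enumerate(dataset):
        held_out_train = dataset[:i] + dataset[i + 1:]
        model = train_fn(held_out_train)
        if predict_fn(model, features) == label:
            correct += 1
    return correct / len(dataset)

# Toy stand-in classifier: per-label mean of a 1-D feature,
# prediction by nearest mean (hypothetical, for illustration only).
def train_fn(train_set):
    sums, counts = {}, {}
    for x, y in train_set:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_fn(model, x):
    return min(model, key=lambda y: abs(model[y] - x))

data = [(1.0, "u"), (1.2, "u"), (5.0, "o"), (5.3, "o")]
acc = leave_one_out_accuracy(data, train_fn, predict_fn)
```

Setting (1) in the paper holds out one vowel sample at a time as above, while setting (2) would hold out all three recordings of one speaker at a time instead of a single sample.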
Here we examine the wrongly estimated cases between /u/ and /o/. Clearly, the figure shows the /o/ template closer than the /u/ template to an input of /u/. We see two reasons for this.

Fig. 9: Comparison between two trained datasets

Fig. 10: Feature values of vowels

The first is the precision of the Active Shape Models: when extracting lip feature points, the method occasionally tracks the wrong face model. This happens because the face training dataset is not large enough, and it also blurs the feature points. The second lies with the speakers: in our experiments, participants tended not to open their mouths widely when speaking, which makes the differences between vowels too small. Blurred feature points therefore cause our system to output wrong recognitions.

We compared our results with the two studies [3][4] mentioned in section 2; table 3 shows their results. Our average accuracy rate is lower than that of the related work, but for some vowels our method is superior.

Table 3: The results of related works

    Vowel    Uchimura's study   Saitoh's study
    /a/      90%                95.8%
    /i/      70%                91.8%
    /u/      100%               96.9%
    /e/      100%               88.3%
    /o/      70%                96.2%
    Average  86%                93.8%
5. Conclusion

We have described a vowel-recognition method based on tracking temporal changes of lip feature points. The results show that our method produces robust feature values for Japanese vowel recognition, and we conclude that it is widely applicable to lip-reading systems. As noted in the section above, lip tracking by Active Shape Models still suffers from blur, so there is room to improve the tracking. This method was evaluated only on vowels; we are therefore extending it to consonants as the next step, and to word and sentence recognition in the future.

References

[1] M. Kass, A. Witkin and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4), pp. 321-331, 1988.
[2] T.F. Cootes, C.J. Taylor, D.H. Cooper and J. Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 61(1), pp. 38-59, 1995.
[3] Keiichi Uchimura, Junji Michida, Masami Tokou, Teizo Aida. Discrimination of Japanese vowels by image analysis. The Transactions of the Institute of Electronics, Information and Communication Engineers.
[4] Takeshi Saitoh, Mitsugu Hisaki, Ryosuke Konishi. Japanese phone classification based on mouth cavity region. IEICE Technical Report.
[5] Kaoru Sekiyama, Kazuki Joe, Michio Umeda. Lipreading Japanese syllables. ITEJ Technical Report, 12(1), pp. 33-40, 1988.
More informationSpecification and Evaluation of Machine Translation Toy Systems - Criteria for laboratory assignments
Specification and Evaluation of Machine Translation Toy Systems - Criteria for laboratory assignments Cristina Vertan, Walther v. Hahn University of Hamburg, Natural Language Systems Division Hamburg,
More informationSpeaker Identification by Comparison of Smart Methods. Abstract
Journal of mathematics and computer science 10 (2014), 61-71 Speaker Identification by Comparison of Smart Methods Ali Mahdavi Meimand Amin Asadi Majid Mohamadi Department of Electrical Department of Computer
More informationGRAMMAR IN CONTEXT 2 PDF
GRAMMAR IN CONTEXT 2 PDF ==> Download: GRAMMAR IN CONTEXT 2 PDF GRAMMAR IN CONTEXT 2 PDF - Are you searching for Grammar In Context 2 Books? Now, you will be happy that at this time Grammar In Context
More informationINPE São José dos Campos
INPE-5479 PRE/1778 MONLINEAR ASPECTS OF DATA INTEGRATION FOR LAND COVER CLASSIFICATION IN A NEDRAL NETWORK ENVIRONNENT Maria Suelena S. Barros Valter Rodrigues INPE São José dos Campos 1993 SECRETARIA
More informationRESPONSE TO LITERATURE
RESPONSE TO LITERATURE TEACHER PACKET CENTRAL VALLEY SCHOOL DISTRICT WRITING PROGRAM Teacher Name RESPONSE TO LITERATURE WRITING DEFINITION AND SCORING GUIDE/RUBRIC DE INITION A Response to Literature
More informationADVANCES IN DEEP NEURAL NETWORK APPROACHES TO SPEAKER RECOGNITION
ADVANCES IN DEEP NEURAL NETWORK APPROACHES TO SPEAKER RECOGNITION Mitchell McLaren 1, Yun Lei 1, Luciana Ferrer 2 1 Speech Technology and Research Laboratory, SRI International, California, USA 2 Departamento
More informationLearning Disability Functional Capacity Evaluation. Dear Doctor,
Dear Doctor, I have been asked to formulate a vocational opinion regarding NAME s employability in light of his/her learning disability. To assist me with this evaluation I would appreciate if you can
More informationMath 96: Intermediate Algebra in Context
: Intermediate Algebra in Context Syllabus Spring Quarter 2016 Daily, 9:20 10:30am Instructor: Lauri Lindberg Office Hours@ tutoring: Tutoring Center (CAS-504) 8 9am & 1 2pm daily STEM (Math) Center (RAI-338)
More informationStudent Name: OSIS#: DOB: / / School: Grade:
Grade 6 ELA CCLS: Reading Standards for Literature Column : In preparation for the IEP meeting, check the standards the student has already met. Column : In preparation for the IEP meeting, check the standards
More informationWE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON T BELIEVE WHAT HAPPENED NEXT
WE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON T BELIEVE WHAT HAPPENED NEXT PRACTICAL APPLICATIONS OF RANDOM SAMPLING IN ediscovery By Matthew Verga, J.D. INTRODUCTION Anyone who spends ample time working
More informationTHE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING
SISOM & ACOUSTICS 2015, Bucharest 21-22 May THE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING MarilenaăLAZ R 1, Diana MILITARU 2 1 Military Equipment and Technologies Research Agency, Bucharest,
More informationCENTRAL MAINE COMMUNITY COLLEGE Introduction to Computer Applications BCA ; FALL 2011
CENTRAL MAINE COMMUNITY COLLEGE Introduction to Computer Applications BCA 120-03; FALL 2011 Instructor: Mrs. Linda Cameron Cell Phone: 207-446-5232 E-Mail: LCAMERON@CMCC.EDU Course Description This is
More informationPre-AP Geometry Course Syllabus Page 1
Pre-AP Geometry Course Syllabus 2015-2016 Welcome to my Pre-AP Geometry class. I hope you find this course to be a positive experience and I am certain that you will learn a great deal during the next
More informationGenerative models and adversarial training
Day 4 Lecture 1 Generative models and adversarial training Kevin McGuinness kevin.mcguinness@dcu.ie Research Fellow Insight Centre for Data Analytics Dublin City University What is a generative model?
More informationIN THIS UNIT YOU LEARN HOW TO: SPEAKING 1 Work in pairs. Discuss the questions. 2 Work with a new partner. Discuss the questions.
6 1 IN THIS UNIT YOU LEARN HOW TO: ask and answer common questions about jobs talk about what you re doing at work at the moment talk about arrangements and appointments recognise and use collocations
More informationDisambiguation of Thai Personal Name from Online News Articles
Disambiguation of Thai Personal Name from Online News Articles Phaisarn Sutheebanjard Graduate School of Information Technology Siam University Bangkok, Thailand mr.phaisarn@gmail.com Abstract Since online
More informationA Case Study: News Classification Based on Term Frequency
A Case Study: News Classification Based on Term Frequency Petr Kroha Faculty of Computer Science University of Technology 09107 Chemnitz Germany kroha@informatik.tu-chemnitz.de Ricardo Baeza-Yates Center
More informationBeginning to Flip/Enhance Your Classroom with Screencasting. Check out screencasting tools from (21 Things project)
Beginning to Flip/Enhance Your Classroom with Screencasting Check out screencasting tools from http://21things4teachers.net (21 Things project) This session Flipping out A beginning exploration of flipping
More informationGOLD Objectives for Development & Learning: Birth Through Third Grade
Assessment Alignment of GOLD Objectives for Development & Learning: Birth Through Third Grade WITH , Birth Through Third Grade aligned to Arizona Early Learning Standards Grade: Ages 3-5 - Adopted: 2013
More informationResearch Design & Analysis Made Easy! Brainstorming Worksheet
Brainstorming Worksheet 1) Choose a Topic a) What are you passionate about? b) What are your library s strengths? c) What are your library s weaknesses? d) What is a hot topic in the field right now that
More informationuser s utterance speech recognizer content word N-best candidates CMw (content (semantic attribute) accept confirm reject fill semantic slots
Flexible Mixed-Initiative Dialogue Management using Concept-Level Condence Measures of Speech Recognizer Output Kazunori Komatani and Tatsuya Kawahara Graduate School of Informatics, Kyoto University Kyoto
More informationWho s Reading Your Writing: How Difficult Is Your Text?
Who s Reading Your Writing: How Difficult Is Your Text? When I got my prescription filled at the pharmacy, I thought I was just going to be taking some pills like last time. So when the pharmacist asked
More informationGrade 3: Module 2B: Unit 3: Lesson 10 Reviewing Conventions and Editing Peers Work
Grade 3: Module 2B: Unit 3: Lesson 10 This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Exempt third-party content is indicated by the footer: (name
More informationMADERA SCIENCE FAIR 2013 Grades 4 th 6 th Project due date: Tuesday, April 9, 8:15 am Parent Night: Tuesday, April 16, 6:00 8:00 pm
MADERA SCIENCE FAIR 2013 Grades 4 th 6 th Project due date: Tuesday, April 9, 8:15 am Parent Night: Tuesday, April 16, 6:00 8:00 pm Why participate in the Science Fair? Science fair projects give students
More informationMyths, Legends, Fairytales and Novels (Writing a Letter)
Assessment Focus This task focuses on Communication through the mode of Writing at Levels 3, 4 and 5. Two linked tasks (Hot Seating and Character Study) that use the same context are available to assess
More informationSpring 2015 Online Testing. Program Information and Registration and Technology Survey (RTS) Training Session
Spring 2015 Online Testing Program Information and Registration and Technology Survey (RTS) Training Session Webinar Training Sessions: Calls will be operator assisted. Submit questions through the chat
More informationOrganizational Knowledge Distribution: An Experimental Evaluation
Association for Information Systems AIS Electronic Library (AISeL) AMCIS 24 Proceedings Americas Conference on Information Systems (AMCIS) 12-31-24 : An Experimental Evaluation Surendra Sarnikar University
More informationLet s think about how to multiply and divide fractions by fractions!
Let s think about how to multiply and divide fractions by fractions! June 25, 2007 (Monday) Takehaya Attached Elementary School, Tokyo Gakugei University Grade 6, Class # 1 (21 boys, 20 girls) Instructor:
More information