Toolkits for ASR; Sphinx
Samudravijaya K, samudravijaya@gmail.com
Workshop on Fundamentals of Automatic Speech Recognition, CDAC Noida, 08-MAR-2011
A Block Diagram of an ASR System
[Figure: Signal → Feature Extraction → Matching (acoustic domain, using the Acoustic Model built during Training and applied during Testing) → Symbol sequence → Matching (symbolic domain, using the Language Model) → Sentence Hypothesis]
Hierarchy of Units in an Utterance
source: state-of-the-art ASR by Steve Young, 2000
Sentence HMM is composed of Phone HMMs
Toolkits for Automatic Speech Recognition
(1) Training, (2) Testing, (3) Performance Evaluation
There are several public-domain toolkits that help to build an ASR system:
* HTK: Hidden Markov Model ToolKit [1]. Public domain, but the decoder cannot be redistributed (C).
* Sphinxes [2]: open source (C, C++, Java).
* ISIP Production System [3]: public domain, without any restrictions (C++).
* Julius, Open-Source Large Vocabulary CSR Engine [4]: uses acoustic models in HTK format and grammar files in its own format; open license, no limitations on distribution (C++).
* HMM toolbox for Matlab [5]: useful for isolated word recognition.
What is CMU Sphinx?
According to Arthur Chan (the editor of Hieroglyphs [6], the Sphinx manual in book form), there are two definitions of Sphinx:
* A large-vocabulary speech recognizer with high accuracy and speed performance.
* A collection of tools and resources that enables developers and researchers to build successful speech recognizers.
Pocketsphinx
source: SphinxLunch ppt by Arthur Chan, 2004
A Block Diagram of an ASR System
[Same block diagram as shown earlier.]
Language model training
source: state-of-the-art ASR by Steve Young, 2000
CMU-Cambridge SLM toolkit
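The toolkit builds statistical n-gram language models from text corpora. As a rough illustration of the underlying counting-and-smoothing idea (not the toolkit's interface), here is a minimal add-k-smoothed bigram estimate in Python; the toy sentences and the smoothing constant are invented for the example:

```python
from collections import Counter

# Toy corpus; in practice the CMU-Cambridge SLM toolkit builds
# n-gram models from large text collections.
sentences = [
    "mera bhaarat mahaan",
    "bhaarat mahaan hai",
]

unigrams, bigrams = Counter(), Counter()
for s in sentences:
    words = ["<s>"] + s.split() + ["</s>"]
    unigrams.update(words)
    bigrams.update(zip(words[:-1], words[1:]))

def bigram_prob(w_prev, w, k=0.5):
    """Add-k smoothed bigram probability P(w | w_prev)."""
    vocab = len(unigrams)
    return (bigrams[(w_prev, w)] + k) / (unigrams[w_prev] + k * vocab)

print(bigram_prob("bhaarat", "mahaan"))
```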
Lexicon (Pronunciation Dictionary)
source: Ph.D. thesis of Ravi Shankar M., CMU [7]
[Figure; source: erwin/sr2003/sphinx.ppt]
Feature Extraction (Front-end processing)
* The wave2feat program computes 13 MFCCs from speech files stored in wav, nist, or raw format.
* Caution: use the -dither yes option. Excise long silences.
* cepview s0001.cep prints the cepstral coefficients.
source: Ph.D. thesis of Ravi Shankar M., CMU [7]
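For orientation, the sketch below shows how the 98-frames figure used later in these slides follows from a 25 ms frame size and a 10 ms frame shift; the function and its defaults are only illustrative, not part of wave2feat:

```python
import math

def num_frames(duration_s, frame_size_s=0.025, frame_shift_s=0.010):
    """Number of analysis frames for a signal of the given duration,
    counting every window whose full extent fits inside the signal."""
    if duration_s < frame_size_s:
        return 0
    return 1 + int(math.floor((duration_s - frame_size_s) / frame_shift_s))

# A 1.0 s utterance yields 98 frames of 13 MFCCs each,
# as used in the flat-start illustration later in these slides.
print(num_frames(1.0))   # -> 98
```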
SphinxTrain: Training sub-word HMMs
Stages of training:
1. Training context-independent phone HMMs
2. Training context-dependent phone HMMs
3. Decision tree building
4. Training context-dependent tied phone HMMs
5. Recursive Gaussian splitting
Training Context-Independent phone HMMs
Two steps: initialization and embedded re-estimation.
Inputs:
* Feature vector sequences
* Word-level transcriptions
* Pronunciation dictionary
(I) Initialization:
1. Make a prototype HMM (5-state, left-to-right, skipping one state permitted); copy it to all phone HMMs.
2. Compute the mean and variance of all training feature vectors.
3. Initialise the Gaussians of all states of all phone HMMs with the global mean and variance.
4. For each and every utterance, generate phone-level transcriptions from the word-level transcriptions using the pronunciation dictionary.
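As a small illustration of the dictionary lookup in step 4 above, the sketch below expands a word-level transcription into a phone-level one; the dictionary entries are adapted from the worked example a few slides further on, and the silence padding at the utterance boundaries is an assumption made to match that example:

```python
# Toy pronunciation dictionary (entries adapted from the example slide;
# keys normalised to match the words in the transcription).
lexicon = {
    "mera":    ["m", "e", "r", "aa"],
    "bhaarat": ["bh", "aa", "r", "a", "t"],
    "mahaan":  ["m", "a", "h", "aa", "n"],
}

def words_to_phones(transcription, lexicon, sil="sil"):
    """Expand a word-level transcription into a phone-level one,
    padding with silence at the utterance boundaries."""
    phones = [sil]
    for word in transcription.split():
        phones.extend(lexicon[word])
    phones.append(sil)
    return phones

print(words_to_phones("mera bhaarat mahaan", lexicon))
# ['sil', 'm', 'e', 'r', 'aa', 'bh', 'aa', 'r', 'a', 't',
#  'm', 'a', 'h', 'aa', 'n', 'sil']   (length 16)
```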
Training subword HMMs
An iterative algorithm (Baum-Welch, also known as Forward-Backward) is used. The Maximum-Likelihood approach guarantees that the likelihood of the training data under the trained model increases with each iteration. To begin with, an initial estimate of the HMM parameters (A, B, π) is required.
Q: How do we get an initial estimate of λ = {A, B, π}?
A: We can estimate the parameters if we know the boundaries of every subword HMM in the training utterances.
Practical solution: Assume that the durations of all units (phones) are equal. If there are N phones in a training utterance, divide the feature vector sequence into N equal parts. Assign each part to a phoneme in the phoneme sequence corresponding to the transcription of the utterance. Repeat for all training utterances.
Initial estimation of HMM parameters: an illustration
Let the transcription of the first wave file be the following sequence of words:
mera bhaarat mahaan
Let the relevant lines in the dictionary be as follows:
bhaarata  bh aa r a t
mahaana   m a h aa n
mera      m e r aa
The phoneme-HMM sequence (of length 16) corresponding to this sentence is
sil m e r aa bh aa r a t m a h aa n sil
If the duration of the wave file is 1.0 sec, there will be 98 feature vectors (frame shift = 10 msec and frame size = 25 msec). Assign the first 6 feature vectors to the sil HMM; the next 6 (7 through 12) to m; the next 6 (13 through 18) to e; ...; the last 8 feature vectors to sil. If an HMM has 3 states, assign 2 feature vectors to each state; compute the mean and standard deviation. Assume a_{i,j} = 0.5 if j = i or j = i+1; otherwise assign 0.
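A minimal Python sketch of this flat-start initialization, under the assumptions stated on the slide (uniform phone durations, 3-state HMMs, per-state mean and variance); the feature matrix here is random stand-in data, not real MFCCs:

```python
import numpy as np

def flat_start(features, phone_seq, states_per_phone=3):
    """Uniformly segment the feature sequence across the phone sequence and
    compute per-state mean/variance as initial Gaussian parameters.
    (Transitions would be set to a_ii = a_i,i+1 = 0.5, as on the slide.)"""
    n_frames = len(features)
    n_phones = len(phone_seq)
    # Equal-duration assumption: same number of frames per phone,
    # with the remainder absorbed by the last phone (as in the slide).
    per_phone = n_frames // n_phones
    stats = []
    for i, phone in enumerate(phone_seq):
        start = i * per_phone
        end = n_frames if i == n_phones - 1 else (i + 1) * per_phone
        segment = features[start:end]
        # Split the phone's frames evenly across its HMM states.
        for state_frames in np.array_split(segment, states_per_phone):
            stats.append((phone,
                          state_frames.mean(axis=0),
                          state_frames.var(axis=0)))
    return stats

# 98 frames of 13-dim features for "sil m e r aa bh aa r a t m a h aa n sil".
phones = "sil m e r aa bh aa r a t m a h aa n sil".split()
feats = np.random.randn(98, 13)           # stand-in for real MFCCs
init = flat_start(feats, phones)
print(len(init))                          # 16 phones x 3 states = 48 state Gaussians
```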
Embedded Re-estimation
(II) Embedded re-estimation:
1. For each utterance, do the following:
   * Using the phone-level transcription, compose a sentence HMM out of phone HMMs.
   * Forward-Backward algorithm: compute the likelihood of each feature vector being generated by each state of each phone HMM in the sentence HMM.
   * Accumulate the likelihoods of feature vectors being generated by each state.
2. For each state: re-estimate the HMM parameters using the accumulated likelihoods.
Repeat the embedded re-estimation a few times.
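The forward pass that underlies these likelihood computations can be sketched as follows for a left-to-right HMM with diagonal-covariance Gaussian states; this is only the forward half of Forward-Backward, and all parameters and data here are stand-ins:

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(features, means, variances, log_trans, log_init):
    """Log-likelihood of a feature sequence under an HMM (forward algorithm).
    means/variances: (S, D) per-state Gaussian parameters;
    log_trans: (S, S) log transition matrix; log_init: (S,) log initial probs."""
    S = len(means)
    log_alpha = log_init + np.array(
        [log_gauss(features[0], means[s], variances[s]) for s in range(S)])
    for x in features[1:]:
        emit = np.array([log_gauss(x, means[s], variances[s]) for s in range(S)])
        # alpha_t(j) = [sum_i alpha_{t-1}(i) a_ij] * b_j(x_t), in the log domain
        log_alpha = emit + np.array(
            [np.logaddexp.reduce(log_alpha + log_trans[:, j]) for j in range(S)])
    return np.logaddexp.reduce(log_alpha)

# Tiny 3-state left-to-right example with made-up parameters.
S, D = 3, 13
means, variances = np.zeros((S, D)), np.ones((S, D))
trans = np.array([[0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0]])
with np.errstate(divide="ignore"):
    log_trans = np.log(trans)
log_init = np.log(np.array([1.0, 0.0, 0.0]) + 1e-300)
feats = np.random.randn(20, D)
print(forward_loglik(feats, means, variances, log_trans, log_init))
```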
Training Context-Dependent phone HMMs
1. Initialise N^3 triphone models, where N is the number of phones.
2. Compose the sentence HMM out of triphone (CD) models instead of monophone (CI) models.
3. Carry out the embedded re-estimation for a few iterations.
The sequence of CI HMMs was
sil m e r aa bh aa r a t m a h aa n sil
The sequence of CD HMMs (triphones) is
sil sil-m+e m-e+r e-r+aa r-aa+bh ...
If N = 50 and each HMM has 3 states, there may be up to 375,000 states. Each state is associated with one Gaussian. A huge amount of speech data is needed for robust estimation of the parameters (µ, σ) of 375,000 Gaussians!
Reduce the number of states by state-tying; use Decision Trees.
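The 375,000 figure follows directly from the counts on the slide; a one-line check:

```python
n_phones = 50
states_per_hmm = 3
n_triphones = n_phones ** 3                 # 125,000 possible triphones
print(n_triphones * states_per_hmm)         # 375,000 context-dependent states
```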
Training Context-Dependent tied phone HMMs
* Build Decision Trees for parameter sharing.
* One decision tree is built for each state position (5 decision trees if there are 5 emitting states per HMM).
The first step is to generate Linguistic Questions. Two methods:
1. Manually create linguistic questions using phonetic knowledge.
2. Run the make quests program to automatically form phone groups.
The first few lines of a linguistic-questions file may look like this:
SIL      sil h s sh
VOWELS   a aa i ii u uu e ee o oo
NASAL    m n ng
LABPLO   p ph b bh
Decision trees are used to decide which HMM states of all the triphones (seen and unseen) are similar to each other, so that data from all these states can be pooled and used to train one shared state, which is called a senone (also called a tied state).
Example: the left states of the 1st and 3rd triphones above would be similar.
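A toy sketch of how such a tree maps a triphone state to a senone by asking yes/no questions about the contexts; the question set and tree layout here are invented for illustration, not the trees SphinxTrain actually builds:

```python
# Phone classes like those in a linguistic-questions file.
VOWELS = set("a aa i ii u uu e ee o oo".split())
NASALS = set("m n ng".split())

def senone_for(triphone):
    """Route a triphone such as 'm-e+r' (left-base+right) through a tiny
    hand-made decision tree and return a senone id for its centre state."""
    left, rest = triphone.split("-")
    base, right = rest.split("+")
    # Question 1: is the left context a vowel?
    if left in VOWELS:
        # Question 2: is the right context a nasal?
        return f"{base}_senone_1" if right in NASALS else f"{base}_senone_2"
    return f"{base}_senone_3"

for t in ["m-e+r", "aa-r+a", "r-aa+bh", "sil-m+e"]:
    print(t, "->", senone_for(t))
```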
Training Context-Dependent tied phone HMMs
1. Prune the decision trees so that the number of senones (tied states) is commensurate with the amount of training data.
2. Create a CD tied-model definition file that (a) lists all triphones seen during training, and (b) identifies the states of these triphones with senones from the pruned trees (state-senone mapping).
3. Carry out the embedded re-estimation (tied CD models) for a few iterations.
4. Generate Gaussian mixtures for each senone (tied state) and re-train. Repeat this step until the desired number of mixtures (say 8) is created for each GMM (senone).
5. Optionally, carry out discriminative training following the Maximum Mutual Information Estimation scheme, which maximises the posterior probability of the correct word sequence given all possible word sequences [9].
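Step 4 grows each senone's GMM by splitting Gaussians. One common recipe, sketched below as an assumption rather than SphinxTrain's exact procedure, is to duplicate a component and perturb the two copies' means by a fraction of the standard deviation, re-estimating after each split:

```python
import numpy as np

def split_heaviest(weights, means, variances, perturb=0.2):
    """Split the mixture component with the largest weight into two,
    nudging the two copies' means apart by +/- perturb * std-dev.
    (After every split the GMM would normally be re-estimated on data.)"""
    k = int(np.argmax(weights))
    offset = perturb * np.sqrt(variances[k])
    new_means = np.vstack([means, means[k] + offset])
    new_means[k] = means[k] - offset
    new_vars = np.vstack([variances, variances[k]])
    new_weights = np.append(weights, weights[k] / 2.0)
    new_weights[k] /= 2.0
    return new_weights, new_means, new_vars

# Start from a single Gaussian per senone and grow to 8 components.
w, mu, var = np.ones(1), np.zeros((1, 13)), np.ones((1, 13))
while len(w) < 8:
    w, mu, var = split_heaviest(w, mu, var)
print(len(w), w.sum())   # 8 components, weights still sum to 1
```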
[Figure; source: erwin/sr2003/sphinx.ppt]
Inputs to sphinx3 decoder
[Figure; source: erwin/sr2003/sphinx.ppt]
Sphinx3 decoders
[Figure; source: erwin/sr2003/sphinx.ppt]
Output of recogniser
[Figure; source: erwin/sr2003/sphinx.ppt]
[Figure; source: SphinxLunch ppt by Arthur Chan]
Sphinx4
Sphinx-4 is a state-of-the-art speech recognition system written entirely in the Java programming language [10].
* Generalized pluggable front-end architecture: MFCC, CMN.
* Generalized pluggable language model architecture: trigram, JSGF and ARPA-format FST grammars.
* Generalized acoustic model architecture: Sphinx-3 acoustic models.
* Generalized search management: breadth-first and word pruning.
* Post-processing of recognition results: obtaining confidence scores, generating lattices.
* Standalone tools: displaying waveforms and spectrograms; generating features from audio.
Comparison of Performance of Sphinxes
[Figure; source: [10]]
PocketSphinx [11]: a small-footprint continuous speech recognition system, suitable for handheld and desktop applications.
Sphinx, the eternal mystery
[Figure; source: [10]]
Bibliography
[1] Cambridge University, UK; Entropic; Microsoft. HTK, Hidden Markov Model ToolKit.
[2] Carnegie Mellon University. The CMU Sphinx group open source speech recognition engines.
[3] Joe Picone et al. ISIP Production system (r02 n02), 23-JUL-2009.
[4] Japanese universities and laboratories. Open-Source Large Vocabulary CSR Engine: Julius.
[5] Kevin Murphy. HMM toolbox for Matlab. murphyk/software/hmm/hmm.html
[6] Arthur Chan. Hieroglyphs: Building Speech Application Using Sphinx and Related Resources (3rd Draft), 11-MAR.
[7] Ravishankar M. Efficient Algorithms for Speech Recognition. Ph.D. thesis, Carnegie Mellon University, May 1996. Tech Report CMU-CS. rkm/th/th.pdf
[8] Cambridge University, UK; Entropic; Microsoft. HTK Book, documentation of HTK.
[9] L. Qin and A. Rudnicky. Implementing and Improving MMIE Training in SphinxTrain. CMU Sphinx Workshop 2010, 13 March 2010, Dallas, USA. sphinx/sphinx2010/papers/107.unblinded.p
[10] Bhiksha Raj et al. Sphinx-4: a speech recognizer written entirely in the Java programming language.
[11] PocketSphinx: a small-footprint continuous speech recognition system. release/