NEW TRENDS IN MACHINE LEARNING FOR SPEECH RECOGNITION
SISOM & ACOUSTICS 2015, Bucharest, May

Inge GAVAT, Diana MILITARU, University POLITEHNICA Bucharest

In this paper, the authors present the evolution of automatic speech recognition (ASR) from the classical, long-lived solution based on hidden Markov models (HMMs) trained with mel-frequency cepstral coefficients (MFCCs) or perceptual linear prediction (PLP) coefficients to today's evolving ASR systems based on so-called deep learning (DL). In DL not only the model is trained: the features themselves are extracted by hierarchical learning from the entire speech spectrum. Some examples of DL structures are given, such as autoencoders (AEs), convolutional neural networks (CNNs), and deep belief networks (DBNs) based on restricted Boltzmann machines (RBMs).

Keywords: speech recognition, ASR, ASRU, MFCC, PLP, hidden Markov models, deep learning, autoencoders, convolutional neural networks, deep belief networks, restricted Boltzmann machines

1. INTRODUCTION

Automatic speech recognition (ASR) is one of the great challenges in Artificial Intelligence (AI). Despite the huge progress made in the last fifty years, there is still much to do before the machine reaches human performance. Nowadays ASR has left the laboratory and penetrated our daily life through products that enrich our existence, like the Microsoft dictation program, but difficulties arise when the speaker changes and, even more, when the language changes. In the last 50 years the best technology for acoustic modeling relied on hidden Markov models, and the best results were obtained using mel-frequency cepstral coefficients as features, to which we can add today the coefficients based on perceptual linear prediction. Beginning with 2006, however, a new possibility for acoustic modeling emerged in the form of a method called deep learning, which improved recognition accuracy in automatic speech recognition tasks by more than 10%.
This paper first presents the classical method used to build ASR systems, followed by the new method that gains more interest in the speech community day by day. Deep learning (DL) has been the hottest topic in speech recognition in the last two years. A few long-standing performance records were broken with deep learning methods, which were rapidly adopted by producers: Microsoft and Google have both deployed DL-based speech recognition systems in their products, and Microsoft, Google, IBM, Nuance, AT&T, and all the major academic and industrial players in speech recognition have projects on deep learning. It can now be said that the history of speech recognition comprises two distinct periods: speech recognition I (from the late 1980s), with the classical solution based on HMMs, and speech recognition II (from around 2011), using DL for acoustic modeling [1]. In the first period the features were chosen to be as decorrelated as possible, in order to minimize the number of features describing a speech frame; with these features the acoustic model used for recognition was trained, so that only the model was trained. In DL both the features and the model are trained, the features being obtained from the entire spectrum of the speech frame by hierarchical training. In this way hidden correlations in the signal are preserved and, at the same time, the number of features can be reduced, becoming tractable for the recognition process [2]. From a functional view, ASR is the process of converting the acoustic data sequence of speech into a word sequence. From the technical view of machine learning (ML), this conversion requires a number of sub-processes, including the use of discrete time stamps, called frames, to characterize the speech waveform data or acoustic features, and the use of categorical labels like words, phones, and triphones to index the acoustic data sequence.
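The framing sub-process mentioned above can be sketched as follows (a minimal illustration, not part of the original paper; the 25 ms window, 10 ms hop and 16 kHz sample rate are typical values assumed here, not taken from the text):

```python
import numpy as np

def frame_signal(signal, sample_rate=16000, win_ms=25.0, hop_ms=10.0):
    """Slice a waveform into overlapping fixed-length frames.

    Each frame is the basic unit that feature extraction (MFCC, PLP,
    or a raw spectrum for deep learning) later operates on.
    """
    win = int(sample_rate * win_ms / 1000)   # samples per frame
    hop = int(sample_rate * hop_ms / 1000)   # samples between frame starts
    n_frames = 1 + max(0, (len(signal) - win) // hop)
    frames = np.stack([signal[i * hop : i * hop + win]
                       for i in range(n_frames)])
    return frames

# One second of a synthetic 440 Hz tone at 16 kHz:
t = np.arange(16000) / 16000.0
frames = frame_signal(np.sin(2 * np.pi * 440 * t))
print(frames.shape)  # (frame count, samples per frame)
```

Each of the resulting frames then receives a categorical label (phone, triphone) during training, which is exactly the indexing of the acoustic sequence described above.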
The more interesting and unique problem in ASR, however, is on the input side, caused by the variable-length acoustic-feature sequence. As a consequence, even if two output word sequences are identical, the input speech data typically have distinct lengths and so different input samples from the same sentence usually contain different data dimensionality depending on how the speech sounds are
produced. As distinguished from other classification problems commonly studied in ML, the ASR problem is a special class of structured pattern recognition, where the recognized patterns such as phones or words are embedded in an overall temporal sequence pattern, the sentence. ASR is also a very difficult task due to the huge number of variabilities with complicated and nonlinear interactions, caused by the speaker (accent, dialect, style, emotion, coarticulation, pronunciation reduction, hesitation), the environment (noise, side talk, reverberation), or the device (headphone, speakerphone, cell phone).

2. THE CLASSIC SPEECH RECOGNITION

A classic system that recognizes continuous speech is represented in Figure 1. Because the speech is continuous, it is not sufficient for the system to recognize only words; it must also recognize sentences, so that the system can be considered to understand speech.

Figure 1. Block diagram of the classic automatic speech recognition and understanding system

From the input speech sequence, features are extracted; for each acoustical unit (phone or triphone) for which acoustical models were built in the training stage, a comparison is made and the most likely model is chosen. By concatenating the chosen acoustical units, words can be obtained and confirmed by searching a lexicon; a simple grammar gives a word sequence, finalizing the speech recognition. For the recognition process we need as knowledge resources a lexicon, which contains the phonetic transcription of all words that can be recognized by the system, and a collection of acoustical models for all the phonetic units to be recognized. To obtain a correct word sequence, in other words to realize speech understanding, constraints are introduced in the form of a restrictive grammar or a language model, as a third knowledge resource.
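The model-comparison step described above, choosing the most likely HMM for an observed unit, can be sketched with a toy Viterbi scorer (a minimal illustration with discrete observations; the two models, their transition and emission probabilities are invented for the example and are not the paper's models):

```python
import numpy as np

def viterbi_log_likelihood(obs, log_pi, log_A, log_B):
    """Best-path log-likelihood of a discrete-observation HMM.

    obs    : sequence of observation symbol indices
    log_pi : (S,) log initial state probabilities
    log_A  : (S, S) log transition probabilities
    log_B  : (S, V) log emission probabilities
    """
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return delta.max()

# Two toy 2-state models for an acoustical unit; the recognizer
# picks the model that scores the observation sequence higher.
log = np.log
pi = log(np.array([0.9, 0.1]))
A  = log(np.array([[0.7, 0.3], [0.2, 0.8]]))
B1 = log(np.array([[0.8, 0.2], [0.3, 0.7]]))   # model 1 emissions
B2 = log(np.array([[0.2, 0.8], [0.7, 0.3]]))   # model 2 emissions
obs = [0, 0, 1, 1]
scores = [viterbi_log_likelihood(obs, pi, A, B) for B in (B1, B2)]
best = int(np.argmax(scores))
print("most likely model:", best + 1)
```

In a real system the same scoring runs over Gaussian-mixture emissions and continuous MFCC or PLP vectors, and the lexicon and grammar constrain which unit sequences may be concatenated.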
The features are mel-frequency cepstral coefficients (MFCC) or perceptual linear prediction coefficients (PLP); they are obtained by digital signal processing methods, treating speech as an acoustic signal. The acoustic models are HMMs with Gaussian mixtures for each state, trained to represent, usually, monophones, triphones or tied triphones called senones. The resulting word sequence, constrained by a grammar or a language model, is then turned into a sentence. With such a system, for continuous speech recognition in the Romanian language with a small vocabulary [3], a word recognition rate reaching 99% and a phrase recognition rate near 90% were obtained with a bigram language model, results that could be improved with a higher-order statistical language model. The performance is comparable with the results obtained in recognition tasks for other languages. Trying different methods to enhance the results, and looking at what is happening in the speech community, the way to go further today is represented by DL.
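The MFCC pipeline mentioned above (power spectrum, mel-scaled triangular filter bank, log energies, DCT) can be sketched for a single frame as follows; this is a minimal numpy illustration, and the filter count, FFT size and number of coefficients are common values assumed here, not the paper's configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sample_rate=16000, n_filters=26, n_ceps=13, n_fft=512):
    """MFCCs for one speech frame: power spectrum -> mel filter bank
    -> log energies -> DCT-II, keeping the lowest cepstral coefficients."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2

    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_energies = np.log(fbank @ spectrum + 1e-10)

    # DCT-II decorrelates the filter-bank energies.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return dct @ log_energies

frame = np.sin(2 * np.pi * 440 * np.arange(400) / 16000.0)
print(mfcc_frame(frame).shape)  # (13,)
```

The final DCT step is exactly the decorrelation referred to in the introduction: it compresses each frame to a few weakly correlated coefficients, at the cost of discarding spectral detail that DL systems keep.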
3. DEEP LEARNING FOR ACOUSTIC MODELING

Deep learning, or hierarchical learning, has emerged since 2006, ignited by the publications of Hinton [4]. Within the past few years, deep learning research has already had a strong impact on the wide domain of signal processing, including notably ASR. Deep learning refers to a class of ML techniques in which many layers of information processing stages, arranged in hierarchical architectures, are exploited for unsupervised feature learning and for pattern classification. This means that in DL not only the classifiers but also the features are trained, in several stages or hidden layers. It is, of course, a computationally more intensive approach, but the progress of computer technology now allows it. The first question that arises is whether the deep architecture enhances performance in comparison with the classical procedure. An experiment done by Zeiler et al. in a very large vocabulary speech recognition task shows that features trained deeply, in 2 to 12 layers, are more effective: for large training times, the average accuracy rises from around 47% for features obtained in a single layer to around 57%, as can be seen in Figure 2. A considerable enhancing effect can also be observed from the first 4 layers; adding supplementary layers, up to 8, 10 or 12, does not change the accuracy much. The word error rate for the same task is also diminished, from 16% for one hidden layer to 11.1% for 12 hidden layers, and 10.8% for 4 hidden layers.

Figure 2. Average accuracy dependency on the number of hidden layers (after Zeiler et al.)

Historically, the deep learning structures first evolved in the domain of neural networks, by adding supplementary hidden layers to the existing one. This is the case of the perceptron with one hidden layer, which is not a deep structure but can become one by introducing new hidden layers.
A deep perceptron is presented in Figure 3. The signal enters at the input, the visible layer v, connected by the weights W0 to the first hidden layer h1. The weights W1 and W2 interconnect the three hidden layers h1, h2 and h3, and the weights W3 lead to the output layer, which gives the label for the input. The training of this structure is not easy, but today's computers can handle such a task. The great advantage is that the entire spectrum of a speech unit (monophone, triphone or senone) can be taken as input. The spectrum conveys much more information and thereby preserves the hidden inter-correlations between speech frames that are not captured by calculating only the MFCC or PLP coefficients, even when their first- and second-order variations are considered. That explains why deep structures, with more hidden layers, yield better recognition results than shallow ones, and why they are on the way to being frequently used in difficult speech recognition problems.

Figure 3. The deep structure of a perceptron
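The forward pass through the deep perceptron of Figure 3 (visible layer v, hidden layers h1 to h3 via weights W0 to W3) can be sketched as follows; the layer sizes, tanh nonlinearity and random weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def deep_perceptron_forward(v, weights):
    """Propagate an input spectrum v through the hidden layers h1..h3
    to a softmax output layer that yields a label distribution."""
    h = v
    for W in weights[:-1]:
        h = np.tanh(W @ h)           # nonlinear hidden layers h1, h2, h3
    return softmax(weights[-1] @ h)  # output layer: class probabilities

rng = np.random.default_rng(0)
sizes = [257, 128, 128, 128, 40]     # spectrum bins -> h1 -> h2 -> h3 -> labels
weights = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(4)]

v = rng.normal(size=257)             # one (synthetic) spectral frame
p = deep_perceptron_forward(v, weights)
print(p.shape, round(float(p.sum()), 6))  # (40,) 1.0
```

Note that the input here is a whole spectral frame, not a compressed MFCC vector, which is the point made above: the hidden layers learn their own feature representation from the full spectrum.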
Now other structures have become available, such as autoencoders (AE), convolutional neural networks (CNN), and deep belief networks (DBN) based on restricted Boltzmann machines (RBM).

4. DEEP LEARNING STRUCTURES

The autoencoder (AE) is a paradigm for deep learning architectures. Autoencoders are simple learning circuits consisting of an encoder and a decoder, which transform inputs into outputs with a minimum of distortion. The encoder uses raw data (like the Fourier spectrum) as input and produces features as output, and the decoder uses the output of the encoder (the extracted features) as input and reconstructs the encoder's original raw input data as output. The purpose of an autoencoder is to reduce the feature dimension by learning a compressed, distributed representation for a set of data. The simplest form of the autoencoder's architecture is a feedforward, non-recurrent neural net with an input layer, an output layer and one or more hidden layers between them, an architecture similar to the multilayer perceptron (MLP). The difference between the autoencoder and the MLP is that an autoencoder is trained to reconstruct its own inputs, instead of predicting some target value given the inputs [6]. While conceptually simple, autoencoders play an important role in machine learning. They were first introduced in the 1980s by Hinton and the PDP group (Rumelhart et al., 1986) to address the problem of backpropagation without a teacher, by using the input data itself as the teacher. Together with Hebbian learning rules (Hebb, 1949; Oja, 1982), autoencoders provide one of the fundamental paradigms for unsupervised learning, and a first step toward understanding how synaptic changes induced by local biochemical events can be coordinated in a self-organized manner to produce global learning and intelligent behavior.
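The reconstruct-your-own-input training described above can be sketched with a single-hidden-layer autoencoder (a minimal numpy illustration on synthetic data; the layer sizes, learning rate and sigmoid encoder are assumptions for the example, and the gradients are exact up to a constant factor):

```python
import numpy as np

rng = np.random.default_rng(1)

# A single-hidden-layer autoencoder: encoder W1, decoder W2,
# trained to reconstruct its own input (no labels needed).
n_in, n_hid = 32, 8
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_in, n_hid))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.random((200, n_in))          # synthetic "spectra" in [0, 1)
lr = 0.1
errors = []
for epoch in range(50):
    H = sigmoid(X @ W1.T)            # encode: compressed features
    R = H @ W2.T                     # decode: reconstruction
    E = R - X
    errors.append(float((E ** 2).mean()))
    # Gradients of the squared reconstruction error (up to a constant):
    gW2 = E.T @ H / len(X)
    gH = E @ W2 * H * (1 - H)
    gW1 = gH.T @ X / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

print(errors[0] > errors[-1])  # reconstruction error decreases
```

The 8-unit hidden layer is the compressed, distributed representation: once trained, the encoder alone serves as a learned feature extractor for the recognizer.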
The convolutional neural network (CNN), introduced by LeCun, is another type of discriminative deep architecture, in which each module consists of a convolutional layer and a pooling layer. These modules are often stacked one on top of another to form a deep model. The convolutional layer shares many weights, and the pooling layer subsamples the output of the convolutional layer, reducing the data rate from the layer below. The weight sharing in the convolutional layer, together with appropriately chosen pooling schemes, makes the CNN highly effective in speech recognition as well, not only in computer vision. Compared with MFCCs, raw spectral features not only retain more information, but also enable the use of convolution and pooling operations to represent and handle some typical speech variabilities, such as vocal tract length differences across speakers, or distinct speaking styles causing formant undershoot or overshoot, which are expressed explicitly in the frequency domain. Indeed, the CNN can only be meaningfully and effectively applied to speech recognition when spectral features, instead of MFCC features, are used. More recently, Sainath et al. [7] went one step further toward raw features by learning the parameters that define the filter banks on power spectra, as can be seen in Figure 4.

Figure 4. Illustration of the joint learning of filter parameters and the rest of the deep network
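The convolution-plus-pooling module described above can be sketched for a one-dimensional spectral input (a minimal illustration; the filter width, pool size and random filter values are assumptions, not a trained model):

```python
import numpy as np

def conv1d_valid(x, kernels):
    """Convolutional layer: each kernel's weights are shared (slid)
    across all positions of the input feature vector."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return np.maximum(windows @ kernels.T, 0.0)   # ReLU feature maps

def max_pool(maps, size):
    """Pooling layer: keep the maximum in each non-overlapping band,
    subsampling the output of the convolutional layer."""
    n = (maps.shape[0] // size) * size
    return maps[:n].reshape(-1, size, maps.shape[1]).max(axis=1)

rng = np.random.default_rng(2)
spectrum = rng.random(40)                # one 40-bin spectral frame
kernels = rng.normal(0, 0.5, (3, 5))     # 3 shared filters of width 5
fmaps = conv1d_valid(spectrum, kernels)  # shape (36, 3)
pooled = max_pool(fmaps, 4)              # shape (9, 3)
print(fmaps.shape, pooled.shape)
```

Pooling along the frequency axis is what gives the tolerance to formant shifts mentioned above: a formant moved by a few bins still falls inside the same pooling band.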
That is, rather than using mel-warped filter-bank features as the input, the weights corresponding to the mel-scale filters are only used to initialize the parameters, which are subsequently learned together with the rest of the deep network as the classifier. A very successful method of training CNNs with a special pooling strategy is presented in [8].

A restricted Boltzmann machine (RBM) is a stochastic neural network. An RBM consists of one layer of stochastic visible units (binary input data) connected to a layer of stochastic hidden units (binary learning data), in order to model the non-independencies between the visible units. In an RBM there are undirected connections only between visible and hidden units, as can be seen in Figure 5a. The RBM is the building block of a deep structure with large-scale applications, called the deep belief network.

The deep belief network (DBN), introduced by Hinton, is built by stacking RBMs (Figure 5b). A DBN is composed of multiple layers of stochastic hidden units (latent variables), with connections between the layers but not between units within each layer. After training an RBM, the inferred states of its hidden units can be used as input data for training the next RBM, which learns to model the dependencies between the hidden units of the first RBM. These steps can be repeated over and over to produce the layers of nonlinear feature detectors. Deep belief nets are thus learned one layer at a time, by treating the values of the latent variables in one layer, when they are being inferred from data, as the data for training the next layer. This efficient, greedy learning can be followed by, or combined with, other learning procedures that fine-tune all of the weights to improve the generative or discriminative performance of the whole network.

Figure 5. The restricted Boltzmann machine (a) and the deep belief network (b)

DBNs are widely applied for acoustical modeling in speech recognition because they have a higher modeling capacity per parameter than GMMs. Another advantage of DBNs is the efficient training method that combines unsupervised generative learning, used for feature discovery, with a subsequent stage of supervised learning that fine-tunes the features in order to optimize discrimination. On the strength of the good performance of DBNs on the TIMIT corpus, DBN acoustic models have been used for a variety of large vocabulary speech recognition tasks, achieving very competitive performance [9], [10]. We can say that 2013 was a special year for the advances in DL, when a special session on this subject was organized at ICASSP, presenting the newest achievements in the field [11], [12], [13], [14], [15].

5. CONCLUSIONS

Because of their ability to model hidden internal dependencies in the speech signal, beginning with the year 2006 deep neural networks have overtaken the classic HMM-based modeling in recognition performance in ASR systems, and since 2011 they tend to become a standard approach in acoustic modeling.
REFERENCES

1. Le Cun, Y., Deep Learning Tutorial, ICML, Atlanta.
2. Bengio, Y., Learning deep architectures for AI, Foundations and Trends in Machine Learning, vol. 2, issue 1.
3. Militaru, D., Gavat, I., Dumitru, O., Zaharia, T., Segarceanu, S., ProtoLOGOS, system for Romanian language automatic speech recognition and understanding (ASRU), in IEEE International Conference on Speech Technology and Human-Computer Dialogue (SPED) 2009, Bucharest, Romania.
4. Hinton, G. E., Osindero, S. and Teh, Y. W., A Fast Learning Algorithm for Deep Belief Nets, in Neural Computation, vol. 18.
5. Zeiler, M. D., et al., On Rectified Linear Units for Speech Recognition, in Proceedings ICASSP.
6. Deng, L., Seltzer, M., Yu, D., Acero, A., Mohamed, A. and Hinton, G., Binary coding of speech spectrograms using a deep auto-encoder, Interspeech.
7. Sainath, T., Kingsbury, B., Mohamed, A., Dahl, G., Saon, G., Soltau, H., Beran, T., Aravkin, A., Ramabhadran, B., Improvements to Deep Convolutional Neural Networks for LVCSR, in Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU) 2011, Waikoloa, HI, USA.
8. Deng, L., Abdel-Hamid, O., Yu, D., A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC.
9. Sainath, T., Kingsbury, B., Ramabhadran, B., Novak, P., Mohamed, A., Making Deep Belief Networks Effective for Large Vocabulary Continuous Speech Recognition, in Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU) 2011, Waikoloa, HI, USA.
10. Mohamed, A., Dahl, G., Hinton, G., Acoustic Modeling Using Deep Belief Networks, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, Sept. 2013.
11. Deng, L., Li, X., Machine Learning Paradigms in Speech Recognition: An Overview, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, May 2013.
12. Dahl, G., Yu, D., Deng, L., Acero, A., Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, January 2012.
13. Deng, L., Hinton, G., Kingsbury, B., New Types of Deep Neural Network Learning for Speech Recognition and Related Applications: An Overview, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC.
14. Deng, L., Li, L., Huang, J. T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y., Acero, A., Recent Advances in Deep Learning for Speech Research at Microsoft, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC.
15. Deng, L., Li, X., Machine Learning Paradigms in Speech Recognition: An Overview, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, May 2013.
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) DIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE Shaofei Xue 1
More informationUnsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model
Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.
More informationInternational Journal of Advanced Networking Applications (IJANA) ISSN No. :
International Journal of Advanced Networking Applications (IJANA) ISSN No. : 0975-0290 34 A Review on Dysarthric Speech Recognition Megha Rughani Department of Electronics and Communication, Marwadi Educational
More informationLikelihood-Maximizing Beamforming for Robust Hands-Free Speech Recognition
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Likelihood-Maximizing Beamforming for Robust Hands-Free Speech Recognition Seltzer, M.L.; Raj, B.; Stern, R.M. TR2004-088 December 2004 Abstract
More informationA Neural Network GUI Tested on Text-To-Phoneme Mapping
A Neural Network GUI Tested on Text-To-Phoneme Mapping MAARTEN TROMPPER Universiteit Utrecht m.f.a.trompper@students.uu.nl Abstract Text-to-phoneme (T2P) mapping is a necessary step in any speech synthesis
More informationLecture 1: Machine Learning Basics
1/69 Lecture 1: Machine Learning Basics Ali Harakeh University of Waterloo WAVE Lab ali.harakeh@uwaterloo.ca May 1, 2017 2/69 Overview 1 Learning Algorithms 2 Capacity, Overfitting, and Underfitting 3
More informationGenerative models and adversarial training
Day 4 Lecture 1 Generative models and adversarial training Kevin McGuinness kevin.mcguinness@dcu.ie Research Fellow Insight Centre for Data Analytics Dublin City University What is a generative model?
More informationEvolutive Neural Net Fuzzy Filtering: Basic Description
Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:
More informationLearning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models
Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Stephan Gouws and GJ van Rooyen MIH Medialab, Stellenbosch University SOUTH AFRICA {stephan,gvrooyen}@ml.sun.ac.za
More informationarxiv: v1 [cs.cl] 27 Apr 2016
The IBM 2016 English Conversational Telephone Speech Recognition System George Saon, Tom Sercu, Steven Rennie and Hong-Kwang J. Kuo IBM T. J. Watson Research Center, Yorktown Heights, NY, 10598 gsaon@us.ibm.com
More informationThe NICT/ATR speech synthesis system for the Blizzard Challenge 2008
The NICT/ATR speech synthesis system for the Blizzard Challenge 2008 Ranniery Maia 1,2, Jinfu Ni 1,2, Shinsuke Sakai 1,2, Tomoki Toda 1,3, Keiichi Tokuda 1,4 Tohru Shimizu 1,2, Satoshi Nakamura 1,2 1 National
More informationTRANSFER LEARNING OF WEAKLY LABELLED AUDIO. Aleksandr Diment, Tuomas Virtanen
TRANSFER LEARNING OF WEAKLY LABELLED AUDIO Aleksandr Diment, Tuomas Virtanen Tampere University of Technology Laboratory of Signal Processing Korkeakoulunkatu 1, 33720, Tampere, Finland firstname.lastname@tut.fi
More informationKnowledge Transfer in Deep Convolutional Neural Nets
Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract
More informationSpeech Synthesis in Noisy Environment by Enhancing Strength of Excitation and Formant Prominence
INTERSPEECH September,, San Francisco, USA Speech Synthesis in Noisy Environment by Enhancing Strength of Excitation and Formant Prominence Bidisha Sharma and S. R. Mahadeva Prasanna Department of Electronics
More informationCSL465/603 - Machine Learning
CSL465/603 - Machine Learning Fall 2016 Narayanan C Krishnan ckn@iitrpr.ac.in Introduction CSL465/603 - Machine Learning 1 Administrative Trivia Course Structure 3-0-2 Lecture Timings Monday 9.55-10.45am
More informationBAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION. Han Shu, I. Lee Hetherington, and James Glass
BAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION Han Shu, I. Lee Hetherington, and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge,
More informationLOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS
LOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS Pranay Dighe Afsaneh Asaei Hervé Bourlard Idiap Research Institute, Martigny, Switzerland École Polytechnique Fédérale de Lausanne (EPFL),
More informationSTUDIES WITH FABRICATED SWITCHBOARD DATA: EXPLORING SOURCES OF MODEL-DATA MISMATCH
STUDIES WITH FABRICATED SWITCHBOARD DATA: EXPLORING SOURCES OF MODEL-DATA MISMATCH Don McAllaster, Larry Gillick, Francesco Scattone, Mike Newman Dragon Systems, Inc. 320 Nevada Street Newton, MA 02160
More informationADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF
Read Online and Download Ebook ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF Click link bellow and free register to download
More informationDigital Signal Processing: Speaker Recognition Final Report (Complete Version)
Digital Signal Processing: Speaker Recognition Final Report (Complete Version) Xinyu Zhou, Yuxin Wu, and Tiezheng Li Tsinghua University Contents 1 Introduction 1 2 Algorithms 2 2.1 VAD..................................................
More informationNoise-Adaptive Perceptual Weighting in the AMR-WB Encoder for Increased Speech Loudness in Adverse Far-End Noise Conditions
26 24th European Signal Processing Conference (EUSIPCO) Noise-Adaptive Perceptual Weighting in the AMR-WB Encoder for Increased Speech Loudness in Adverse Far-End Noise Conditions Emma Jokinen Department
More informationLearning Methods for Fuzzy Systems
Learning Methods for Fuzzy Systems Rudolf Kruse and Andreas Nürnberger Department of Computer Science, University of Magdeburg Universitätsplatz, D-396 Magdeburg, Germany Phone : +49.39.67.876, Fax : +49.39.67.8
More informationDeep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach
#BaselOne7 Deep search Enhancing a search bar using machine learning Ilgün Ilgün & Cedric Reichenbach We are not researchers Outline I. Periscope: A search tool II. Goals III. Deep learning IV. Applying
More informationSpeaker recognition using universal background model on YOHO database
Aalborg University Master Thesis project Speaker recognition using universal background model on YOHO database Author: Alexandre Majetniak Supervisor: Zheng-Hua Tan May 31, 2011 The Faculties of Engineering,
More informationAutomatic Speaker Recognition: Modelling, Feature Extraction and Effects of Clinical Environment
Automatic Speaker Recognition: Modelling, Feature Extraction and Effects of Clinical Environment A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy Sheeraz Memon
More informationarxiv: v2 [cs.cv] 30 Mar 2017
Domain Adaptation for Visual Applications: A Comprehensive Survey Gabriela Csurka arxiv:1702.05374v2 [cs.cv] 30 Mar 2017 Abstract The aim of this paper 1 is to give an overview of domain adaptation and
More informationIEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 17, NO. 3, MARCH
IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 17, NO. 3, MARCH 2009 423 Adaptive Multimodal Fusion by Uncertainty Compensation With Application to Audiovisual Speech Recognition George
More informationSecond Exam: Natural Language Parsing with Neural Networks
Second Exam: Natural Language Parsing with Neural Networks James Cross May 21, 2015 Abstract With the advent of deep learning, there has been a recent resurgence of interest in the use of artificial neural
More informationVowel mispronunciation detection using DNN acoustic models with cross-lingual training
INTERSPEECH 2015 Vowel mispronunciation detection using DNN acoustic models with cross-lingual training Shrikant Joshi, Nachiket Deo, Preeti Rao Department of Electrical Engineering, Indian Institute of
More informationSEMI-SUPERVISED ENSEMBLE DNN ACOUSTIC MODEL TRAINING
SEMI-SUPERVISED ENSEMBLE DNN ACOUSTIC MODEL TRAINING Sheng Li 1, Xugang Lu 2, Shinsuke Sakai 1, Masato Mimura 1 and Tatsuya Kawahara 1 1 School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501,
More informationINPE São José dos Campos
INPE-5479 PRE/1778 MONLINEAR ASPECTS OF DATA INTEGRATION FOR LAND COVER CLASSIFICATION IN A NEDRAL NETWORK ENVIRONNENT Maria Suelena S. Barros Valter Rodrigues INPE São José dos Campos 1993 SECRETARIA
More informationarxiv: v1 [cs.cv] 10 May 2017
Inferring and Executing Programs for Visual Reasoning Justin Johnson 1 Bharath Hariharan 2 Laurens van der Maaten 2 Judy Hoffman 1 Li Fei-Fei 1 C. Lawrence Zitnick 2 Ross Girshick 2 1 Stanford University
More informationMachine Learning from Garden Path Sentences: The Application of Computational Linguistics
Machine Learning from Garden Path Sentences: The Application of Computational Linguistics http://dx.doi.org/10.3991/ijet.v9i6.4109 J.L. Du 1, P.F. Yu 1 and M.L. Li 2 1 Guangdong University of Foreign Studies,
More informationarxiv: v1 [cs.lg] 15 Jun 2015
Dual Memory Architectures for Fast Deep Learning of Stream Data via an Online-Incremental-Transfer Strategy arxiv:1506.04477v1 [cs.lg] 15 Jun 2015 Sang-Woo Lee Min-Oh Heo School of Computer Science and
More informationSegregation of Unvoiced Speech from Nonspeech Interference
Technical Report OSU-CISRC-8/7-TR63 Department of Computer Science and Engineering The Ohio State University Columbus, OH 4321-1277 FTP site: ftp.cse.ohio-state.edu Login: anonymous Directory: pub/tech-report/27
More informationRule Learning With Negation: Issues Regarding Effectiveness
Rule Learning With Negation: Issues Regarding Effectiveness S. Chua, F. Coenen, G. Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX Liverpool, United
More informationDropout improves Recurrent Neural Networks for Handwriting Recognition
2014 14th International Conference on Frontiers in Handwriting Recognition Dropout improves Recurrent Neural Networks for Handwriting Recognition Vu Pham,Théodore Bluche, Christopher Kermorvant, and Jérôme
More information(Sub)Gradient Descent
(Sub)Gradient Descent CMSC 422 MARINE CARPUAT marine@cs.umd.edu Figures credit: Piyush Rai Logistics Midterm is on Thursday 3/24 during class time closed book/internet/etc, one page of notes. will include
More informationSoftprop: Softmax Neural Network Backpropagation Learning
Softprop: Softmax Neural Networ Bacpropagation Learning Michael Rimer Computer Science Department Brigham Young University Provo, UT 84602, USA E-mail: mrimer@axon.cs.byu.edu Tony Martinez Computer Science
More informationUsing Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing
Using Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing Pallavi Baljekar, Sunayana Sitaram, Prasanna Kumar Muthukumar, and Alan W Black Carnegie Mellon University,
More informationArtificial Neural Networks written examination
1 (8) Institutionen för informationsteknologi Olle Gällmo Universitetsadjunkt Adress: Lägerhyddsvägen 2 Box 337 751 05 Uppsala Artificial Neural Networks written examination Monday, May 15, 2006 9 00-14
More informationAssignment 1: Predicting Amazon Review Ratings
Assignment 1: Predicting Amazon Review Ratings 1 Dataset Analysis Richard Park r2park@acsmail.ucsd.edu February 23, 2015 The dataset selected for this assignment comes from the set of Amazon reviews for
More informationWord Segmentation of Off-line Handwritten Documents
Word Segmentation of Off-line Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department
More informationAxiom 2013 Team Description Paper
Axiom 2013 Team Description Paper Mohammad Ghazanfari, S Omid Shirkhorshidi, Farbod Samsamipour, Hossein Rahmatizadeh Zagheli, Mohammad Mahdavi, Payam Mohajeri, S Abbas Alamolhoda Robotics Scientific Association
More informationDevice Independence and Extensibility in Gesture Recognition
Device Independence and Extensibility in Gesture Recognition Jacob Eisenstein, Shahram Ghandeharizadeh, Leana Golubchik, Cyrus Shahabi, Donghui Yan, Roger Zimmermann Department of Computer Science University
More informationEli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano. Graduate School of Information Science, Nara Institute of Science & Technology
ISCA Archive SUBJECTIVE EVALUATION FOR HMM-BASED SPEECH-TO-LIP MOVEMENT SYNTHESIS Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano Graduate School of Information Science, Nara Institute of Science & Technology
More informationLecture 9: Speech Recognition
EE E6820: Speech & Audio Processing & Recognition Lecture 9: Speech Recognition 1 Recognizing speech 2 Feature calculation Dan Ellis Michael Mandel 3 Sequence
More informationSpeech Translation for Triage of Emergency Phonecalls in Minority Languages
Speech Translation for Triage of Emergency Phonecalls in Minority Languages Udhyakumar Nallasamy, Alan W Black, Tanja Schultz, Robert Frederking Language Technologies Institute Carnegie Mellon University
More informationEvolution of Symbolisation in Chimpanzees and Neural Nets
Evolution of Symbolisation in Chimpanzees and Neural Nets Angelo Cangelosi Centre for Neural and Adaptive Systems University of Plymouth (UK) a.cangelosi@plymouth.ac.uk Introduction Animal communication
More informationFUZZY EXPERT. Dr. Kasim M. Al-Aubidy. Philadelphia University. Computer Eng. Dept February 2002 University of Damascus-Syria
FUZZY EXPERT SYSTEMS 16-18 18 February 2002 University of Damascus-Syria Dr. Kasim M. Al-Aubidy Computer Eng. Dept. Philadelphia University What is Expert Systems? ES are computer programs that emulate
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Speech Communication Session 2aSC: Linking Perception and Production
More informationExploration. CS : Deep Reinforcement Learning Sergey Levine
Exploration CS 294-112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?
More information