SISOM & ACOUSTICS 2015, Bucharest 21-22 May

NEW TRENDS IN MACHINE LEARNING FOR SPEECH RECOGNITION

Inge GAVAT, Diana MILITARU

University POLITEHNICA Bucharest, email: i_gavat@yahoo.com

In this paper the authors present the evolution of automatic speech recognition (ASR) from the classical, long-lived solution based on hidden Markov models (HMMs) trained with mel-frequency cepstral coefficients (MFCCs) or perceptual linear prediction (PLP) coefficients to today's evolving ASR systems based on so-called deep learning (DL). In DL not only the model is trained, but also the features are extracted, by hierarchical learning from the entire speech spectrum. Some examples of DL structures are given: autoencoders (AEs), convolutional neural networks (CNNs), and deep belief networks (DBNs) based on restricted Boltzmann machines (RBMs).

Keywords: speech recognition, ASR, ASRU, MFCC, PLP, hidden Markov models, deep learning, autoencoders, convolutional neural networks, deep belief networks, restricted Boltzmann machines

1. INTRODUCTION

Automatic speech recognition (ASR) is one of the great challenges in Artificial Intelligence (AI). Despite the huge progress made in the last fifty years, there is still much to do before the machine reaches human performance on this task. Nowadays ASR has left the laboratory and entered our daily life through products that enrich our existence, like the Microsoft dictation program, but difficulties arise when the speaker changes and, even more, when the language changes. For the last 50 years the best technology for acoustic modeling has been that relying on hidden Markov models, and the best results were obtained using as features the mel-frequency cepstral coefficients, to which we can add today the coefficients based on perceptual linear prediction. But beginning with 2006, a new possibility for acoustic modeling appeared, a method called deep learning, which improved recognition accuracy in automatic speech recognition tasks by more than 10%.
In this paper the classical method applied to build ASR systems will be presented first, followed by the new method that gains more interest in the speech community day by day. Deep learning (DL) has been the hottest topic in speech recognition in the last two years. A few long-standing performance records were broken with deep learning methods, which were rapidly adopted by producers: Microsoft and Google have both deployed DL-based speech recognition systems in their products. Moreover, Microsoft, Google, IBM, Nuance, AT&T, and all the major academic and industrial players in speech recognition have projects on deep learning. It can now be said that the history of speech recognition has faced two distinctive periods: speech recognition I (from the late 1980s), with the classical solution based on HMMs, and speech recognition II (from around 2011), using DL for acoustic modeling [1]. In the first period the features were chosen to be as decorrelated as possible, in order to minimize the number of features describing a speech frame; with these features the acoustic model used for recognition was trained, so that only the model was trained. In DL both the features and the model are trained, the features being obtained from the entire spectrum of the speech frame by hierarchical training. In this way hidden correlations in the signal are conserved and, at the same time, the number of features can be reduced, becoming tractable for the recognition process [2]. From a functional view, ASR is the conversion process from the acoustic data sequence of speech into a word sequence. From the technical view of machine learning (ML), this conversion process requires a number of sub-processes, including the use of discrete time windows, called frames, to characterize the speech waveform data or acoustic features, and the use of categorical labels like words, phones, and triphones to index the acoustic data sequence.
The more interesting and unique problem in ASR, however, is on the input side, caused by the variable-length acoustic-feature sequence. As a consequence, even if two output word sequences are identical, the input speech data typically have distinct lengths, and so different input samples of the same sentence usually have different data dimensionality, depending on how the speech sounds are
produced. Distinguished from other classification problems commonly studied in ML, the ASR problem is a special class of structured pattern recognition, where the recognized patterns, such as phones or words, are embedded in the overall temporal sequence pattern of a sentence. ASR is also a very difficult task due to the huge number of variabilities, with complicated and nonlinear interactions, caused by the speaker (accents, dialect, style, emotion, coarticulation, reduced pronunciation, hesitation), the environment (noise, side talk, reverberation), or the device (headphone, speaker phone, cell phone).

2. THE CLASSIC SPEECH RECOGNITION

A classic system that recognizes continuous speech is represented in Figure 1. Because the speech is continuous, it is not sufficient for the system to recognize only words; it must also recognize sentences, and so it can be considered that the system understands speech.

Figure 1. Block diagram of the classic automatic speech recognition and understanding system

From the input speech sequence, features are extracted; for each acoustical unit (phone or triphone), for which the acoustical models were built in the training stage, a comparison is made and the most likely model is chosen. By concatenating the chosen acoustical units, words can be obtained and confirmed by searching a lexicon; a simple grammar gives a word sequence, finalizing the speech recognition. For the recognition process we need as knowledge resources a lexicon, which contains the phonetic transcription of all words that can be recognized by the system, and a collection of acoustical models for all the phonetic units to be recognized. To obtain a correct word sequence, in other words to realize speech understanding, constraints are introduced in the form of a restrictive grammar or a language model, as a third knowledge resource.
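The comparison step described above can be illustrated with a minimal sketch. For simplicity it assumes a single diagonal-Gaussian score per acoustical unit instead of a full HMM with Gaussian mixtures; the unit names and model parameters below are toy values, not taken from the paper.

```python
import numpy as np

# Toy acoustic "models": one diagonal Gaussian per phonetic unit,
# a stand-in for the HMM state models built in the training stage.
models = {
    "a": {"mean": np.array([1.0, 1.0]), "var": np.array([0.5, 0.5])},
    "s": {"mean": np.array([-1.0, 0.0]), "var": np.array([0.5, 0.5])},
}

def log_likelihood(frame, model):
    """Diagonal-Gaussian log-likelihood of one feature frame."""
    diff = frame - model["mean"]
    return float(-0.5 * np.sum(np.log(2 * np.pi * model["var"])
                               + diff ** 2 / model["var"]))

def recognize_frame(frame):
    """The comparison step: pick the most likely unit for a frame."""
    return max(models, key=lambda u: log_likelihood(frame, models[u]))

frame = np.array([0.9, 1.1])   # a feature vector close to the model of "a"
print(recognize_frame(frame))  # -> a
```

In a full recognizer this frame-level score is accumulated along HMM state paths, and the lexicon and grammar constrain which unit sequences are allowed.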
The features are mel-frequency cepstral coefficients (MFCC) or perceptual linear prediction coefficients (PLP); they are obtained by digital signal processing methods, treating speech like an acoustic signal. The acoustic models are HMMs with Gaussian mixtures for each state, trained to represent usually monophones, triphones or tied triphones called senones. The resulting word sequence, constrained by a grammar or a language model, is then turned into a sentence. With such a system, for continuous speech recognition in the Romanian language with a small vocabulary [3], a word recognition rate that reaches 99% and a phrase recognition rate near 90% were obtained with a bigram language model, suitable to be improved with a higher-order statistical language model. The performance is comparable with the results obtained in recognition tasks for other languages. Trying different methods to enhance the results and looking at what is happening in the speech community, the way to go further today is represented by DL.
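The MFCC extraction mentioned above can be sketched as follows: power spectrum of a windowed frame, triangular mel filterbank, logarithm, and DCT-II. This is a simplified illustration (no pre-emphasis or liftering, toy filter count), not the exact front-end of the cited system.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters equally spaced on the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising edge of the triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling edge
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(frame, sr=16000, n_filters=20, n_ceps=13):
    """MFCCs of a single speech frame (Hamming window assumed)."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    fb = mel_filterbank(n_filters, len(frame), sr)
    log_energies = np.log(fb @ spec + 1e-10)
    # DCT-II of the log filterbank energies gives the cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return dct @ log_energies

frame = np.sin(2 * np.pi * 440 * np.arange(512) / 16000)  # a 440 Hz tone
coeffs = mfcc(frame)
print(coeffs.shape)  # (13,)
```

The DCT decorrelates the log filterbank energies, which is exactly the property that makes the resulting 13 coefficients per frame a compact fit for diagonal-covariance Gaussian mixtures.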
3. DEEP LEARNING FOR ACOUSTIC MODELING

Deep learning, or hierarchical learning, has emerged since 2006, ignited by the publications of Hinton [4]. Within the past few years, deep learning research has already had a strong impact on the wide domain of signal processing, including notably ASR. Deep learning refers to a class of ML techniques in which many layers of information processing stages, in hierarchical architectures, are exploited for unsupervised feature learning and for pattern classification. That means that in DL not only the classifiers, but also the features are trained, in several stages or hidden layers. Of course it is a computationally more intensive approach, but the progress of computer technology now allows it. The first question that arises is whether the deep architecture enhances performance in comparison with the classical procedure. An experiment done by Zeiler et al. [5] on a very large vocabulary speech recognition task shows that deeply trained features, in 2 to 12 layers, are more effective and enhance the average accuracy, compared to features obtained in a single layer, from around 47% to around 57% for large training times, as can be seen in Figure 2. A considerable enhancing effect can also be observed from the first 4 layers; adding supplementary layers, arriving at 8, 10 or 12 layers, brings not much accuracy modification. The word error rate for the same task is also diminished, from 16% for one hidden layer to 11.1% for 12 hidden layers, being 10.8% for 4 hidden layers.

Figure 2. Average accuracy dependency on the number of hidden layers (after Zeiler et al.)

First, the deep learning structures evolved in the domain of neural networks, by adding supplementary hidden layers to the existing one. It is the case of the perceptron with one hidden layer, which is not a deep structure, but can become one by introducing new hidden layers.
A deep perceptron is presented in Figure 3. The signal enters the input (visible) layer v, connected by the weights W0 to the first hidden layer h1. The weights W1 and W2 interconnect the three hidden layers h1, h2 and h3, and the weights W3 lead to the output layer, giving the label for the input. The training of this structure is not easy, but nowadays computers can handle such a task. The great advantage is that the entire spectrum of a speech unit (monophone, triphone or senone) can be taken as the input signal. The spectrum, conveying much more information, conserves in this way the hidden inter-correlations between speech frames that are not considered by calculating only the MFCC or PLP coefficients, even if their first and second order variations are included. That explains why deep structures, with more hidden layers, yield better recognition results than the shallow ones, and why they are on the way to being frequently used in difficult speech recognition problems.

Figure 3. The deep structure of a perceptron
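The forward pass of the deep perceptron in Figure 3 can be sketched as below. The layer sizes, the random weights and the ReLU hidden activations are illustrative choices, not values from the paper; the structure v -> h1 -> h2 -> h3 -> output with weights W0..W3 matches the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# v -> h1 -> h2 -> h3 -> output, matching W0..W3 in Figure 3 (toy sizes:
# a 257-bin spectrum in, 40 unit labels out).
sizes = [257, 128, 128, 128, 40]
weights = [layer(m, n) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(v):
    h = v
    for i, (W, b) in enumerate(weights):
        z = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(z, 0.0)        # hidden layers: ReLU
        else:
            e = np.exp(z - z.max())       # output layer: softmax over labels
            h = e / e.sum()
    return h

spectrum = rng.random(257)                # a toy spectral input frame
probs = forward(spectrum)
print(probs.shape)                        # (40,)
```

The softmax output is a distribution over the phonetic units, which is what replaces the per-state Gaussian mixture scores in DL-based acoustic models.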
Now other structures have become available, such as autoencoders (AE), convolutional neural networks (CNN), or deep belief networks (DBN), based on restricted Boltzmann machines (RBM).

4. DEEP LEARNING STRUCTURES

The autoencoder (AE) is a paradigm for deep learning architectures. Autoencoders are simple learning circuits consisting of an encoder and a decoder, meant to transform inputs into outputs with a minimum of distortion. The encoder uses raw data (like the Fourier spectrum) as input and produces features as output, and the decoder uses the output of the encoder (the extracted features) as input and reconstructs the encoder's original raw input data as output. The purpose of an autoencoder is to reduce the feature dimension by learning a compressed, distributed representation for a set of data. The simplest form of the autoencoder's architecture is a feedforward, non-recurrent neural net, with an input layer, an output layer and one or more hidden layers between them, an architecture similar to the multilayer perceptron (MLP). The difference between the autoencoder and the MLP is that an autoencoder is trained to reconstruct its own inputs, instead of predicting some target value given the inputs [6]. While conceptually simple, autoencoders play an important role in machine learning. They were first introduced in the 1980s by Hinton and the PDP group (Rumelhart et al., 1986) to address the problem of backpropagation without a teacher, by using the input data as the teacher. Together with Hebbian learning rules (Hebb, 1949; Oja, 1982), autoencoders provide one of the fundamental paradigms for unsupervised learning and for beginning to address the mystery of how synaptic changes induced by local biochemical events can be coordinated in a self-organized manner to produce global learning and intelligent behavior.
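A minimal autoencoder of the kind described above can be sketched with plain gradient descent. The data, sizes and learning rate are toy assumptions; the point is only the defining property that the target of the training is the input itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectra": 50 samples in 8 dimensions, compressed to 3 features.
X = rng.random((50, 8))
n_in, n_hidden = 8, 3
W_enc = rng.standard_normal((n_in, n_hidden)) * 0.1   # encoder weights
W_dec = rng.standard_normal((n_hidden, n_in)) * 0.1   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruction_mse():
    return float(np.mean((sigmoid(X @ W_enc) @ W_dec - X) ** 2))

init_mse = reconstruction_mse()
lr = 0.2
for _ in range(2000):
    H = sigmoid(X @ W_enc)      # encode: raw input -> compressed features
    X_hat = H @ W_dec           # decode: features -> reconstruction
    err = X_hat - X             # the teacher is the input itself
    # Gradients of the mean squared reconstruction error (backpropagation)
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ ((err @ W_dec.T) * H * (1.0 - H)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_mse = reconstruction_mse()
print(final_mse < init_mse)  # True: training reduced the reconstruction error
```

After training, the 3-dimensional hidden activations H are the learned compressed features, which is the dimension-reduction role the text assigns to the encoder.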
The convolutional neural network (CNN), introduced by LeCun, is another type of discriminative deep architecture, in which each module consists of a convolutional layer and a pooling layer. These modules are often stacked one on top of another to form a deep model. The convolutional layer shares many weights, and the pooling layer subsamples the output of the convolutional layer, reducing the data rate from the layer below. The weight sharing in the convolutional layer, together with appropriately chosen pooling schemes, makes the CNN highly effective in speech recognition too, not only in computer vision. Compared with MFCCs, raw spectral features not only retain more information, but also enable the use of convolution and pooling operations to represent and handle typical kinds of speech variability, like vocal tract length differences across speakers or distinct speaking styles causing formant undershoot or overshoot, which are expressed explicitly in the frequency domain. For this reason, the CNN can only be meaningfully and effectively applied to speech recognition when spectral features, instead of MFCC features, are used. More recently, Sainath et al. [7] went one step further toward raw features, by learning the parameters that define the filter-banks on power spectra, as can be seen in Figure 4.

Figure 4. Illustration of the joint learning of filter parameters and the rest of the deep network
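The convolution and pooling operations described above can be sketched in one dimension, along the frequency axis of a spectral frame. The sizes below are toy assumptions; the key points are the shared kernel weights and the shift tolerance introduced by max pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Valid' 1-D convolution along the frequency axis (shared weights)."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def max_pool(x, size):
    """Max pooling: subsample and gain tolerance to small frequency shifts,
    e.g. formant movements between speakers."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

spectrum = rng.random(40)        # one frame of filterbank energies (toy)
kernel = rng.standard_normal(5)  # one convolutional filter, 5 bins wide

feature_map = conv1d(spectrum, kernel)   # 36 shared-weight responses
pooled = max_pool(feature_map, 3)        # subsampled to 12 values
print(len(feature_map), len(pooled))     # 36 12
```

Because the max over each pooling window is unchanged when a spectral peak moves by a bin or two, pooling absorbs exactly the formant undershoot/overshoot variability mentioned in the text.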
That is, rather than using Mel-warped filter-bank features as input features, the weights corresponding to the Mel-scale filters are only used to initialize the parameters, which are subsequently learned together with the rest of the deep network acting as the classifier. A very successful method of training CNNs with a special pooling strategy is presented in [8]. The restricted Boltzmann machine (RBM) is a stochastic neural network. An RBM consists of one layer of stochastic visible units (binary input data) connected to a layer of stochastic hidden units (binary learning data), in order to model non-independencies between the visible units. In an RBM there are undirected connections only between visible-hidden and hidden-visible units, as shown in Figure 5a. The RBM is the building block of a deep structure with large-scale applications, called the deep belief network. The deep belief network (DBN), introduced by Hinton, is built by stacking RBMs (Figure 5b). A DBN is composed of multiple layers of stochastic hidden units (latent variables), with connections between the layers, but not between units within each layer. After training an RBM, the inferred states of its hidden units can be used as input data for training the next RBM, which learns to model the dependencies between the hidden units of the first RBM. These steps can be repeated over and over to produce the layers of nonlinear feature detectors. Deep belief nets are thus learned one layer at a time, by treating the values of the latent variables in one layer, when they are being inferred from data, as the data for training the next layer. This efficient, greedy learning can be followed by, or combined with, other learning procedures that fine-tune all of the weights to improve the generative or discriminative performance of the whole network.
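The RBM training at the heart of this greedy, layer-by-layer procedure is usually done with contrastive divergence; a minimal CD-1 sketch, on toy binary data and with illustrative sizes and learning rate, could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4
W = rng.standard_normal((n_visible, n_hidden)) * 0.1  # undirected weights
a = np.zeros(n_visible)                               # visible biases
b = np.zeros(n_hidden)                                # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update on a batch of binary data."""
    global W, a, b
    ph0 = sigmoid(v0 @ W + b)                             # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)      # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + a)                           # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

# Toy binary data with a repeated pattern for the RBM to model
data = np.tile([1, 1, 0, 0, 1, 0], (20, 1)).astype(float)
for _ in range(500):
    cd1_step(data)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.round(recon[0]))  # should resemble the pattern [1, 1, 0, 0, 1, 0]
```

In a DBN, the inferred hidden probabilities of this trained RBM would then serve as the "data" for training the next RBM in the stack, exactly the greedy scheme described above.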
Figure 5. The restricted Boltzmann machine (a) and the deep belief network (b)

DBNs are largely applied for acoustic modeling in speech recognition because they have a higher modeling capacity per parameter than GMMs. Another advantage of DBNs is the efficient training method that combines unsupervised generative learning, used for feature discovery, with a subsequent stage of supervised learning that fine-tunes the features in order to optimize discrimination. On the strength of the good performance of DBNs on the TIMIT corpus, DBN acoustic models have been used for a variety of large vocabulary speech recognition tasks, achieving very competitive performance [9], [10]. We can say that a special year for the advances in DL was 2013, when a special session on this subject was organized at ICASSP, presenting the newest achievements in the field [11], [12], [13], [14], [15].

5. CONCLUSIONS

Because of their ability to model hidden internal dependencies in the speech signal, beginning with the year 2006 deep neural networks have gained over the classic HMM-based modeling in recognition performance in ASR systems, and tend, from 2011, to become a standard approach in acoustic modeling.
REFERENCES

1. Le Cun, Y., Deep Learning Tutorial, ICML, Atlanta, 2013.
2. Bengio, Y., Learning deep architectures for AI, Foundations and Trends in Machine Learning, vol. 2, issue 1, pp. 1-127, 2009.
3. Militaru, D., Gavat, I., Dumitru, O., Zaharia, T., Segarceanu, S., ProtoLOGOS, system for Romanian language automatic speech recognition and understanding (ASRU), in IEEE International Conference on Speech Technology and Human-Computer Dialogue (SPED) 2009, Bucharest, Romania, pp. 1-9.
4. Hinton, G. E., Osindero, S. and Teh, Y. W., A Fast Learning Algorithm for Deep Belief Nets, in Neural Computation, vol. 18, pp. 1527-1554, 2006.
5. Zeiler, M. D., et al., On Rectified Linear Units for Speech Recognition, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013.
6. Deng, L., Seltzer, M., Yu, D., Acero, A., Mohamed, A. and Hinton, G., Binary coding of speech spectrograms using a deep auto-encoder, Interspeech, 2010.
7. Sainath, T., Kingsbury, B., Mohamed, A., Dahl, G., Saon, G., Soltau, H., Beran, T., Aravkin, A., Ramabhadran, B., Improvements to Deep Convolutional Neural Networks for LVCSR, in Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU) 2011, Waikoloa, HI, USA, pp. 315-320.
8. Deng, L., Abdel-Hamid, O., Yu, D., A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC, pp. 6669-6673.
9. Sainath, T., Kingsbury, B., Ramabhadran, B., Novak, P., Mohamed, A., Making Deep Belief Networks Effective for Large Vocabulary Continuous Speech Recognition, in Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU) 2011, Waikoloa, HI, USA, pp. 30-35.
10. Mohamed, A., Dahl, G., Hinton, G., Acoustic Modeling Using Deep Belief Networks, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, Sept. 2013, pp. 1112-1129.
11. Deng, L., Li, X., Machine Learning Paradigms in Speech Recognition: An Overview, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, May 2013, pp. 1060-1089.
12. Dahl, G., Yu, D., Deng, L., Acero, A., Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, January 2012, pp. 30-42.
13. Deng, L., Hinton, G., Kingsbury, B., New Types of Deep Neural Network Learning for Speech Recognition and Related Applications: An Overview, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC, pp. 8599-8603.
14. Deng, L., Li, L., Huang, J. T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y., Acero, A., Recent Advances in Deep Learning for Speech Research at Microsoft, in Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013, Vancouver, BC, pp. 8604-8608.
15. Deng, L., Li, X., Machine Learning Paradigms in Speech Recognition: An Overview, in IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, May 2013, pp. 1060-1089.