Segmental Conditional Random Fields with Deep Neural Networks as Acoustic Models for First-Pass Word Recognition

Yanzhang He, Eric Fosler-Lussier
Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA

Abstract

Discriminative segmental models, such as segmental conditional random fields (SCRFs), have recently been applied successfully to speech recognition in lattice rescoring, where they integrate detectors across different levels of units, such as phones and words. However, lattice generation is constrained by a baseline decoder, typically a frame-based hybrid HMM-DNN system, which still suffers from the well-known frame-independence assumption. In this paper, we propose to use SCRFs with DNNs directly as the acoustic model: a one-pass unified framework that can exploit local phone classifiers, phone transitions, and long-span features in direct word decoding, modeling phones or sub-phonetic segments of variable length. We describe a WFST-based approach that combines the proposed acoustic model efficiently with the language model in first-pass word recognition. Our evaluation on the WSJ corpus shows that our SCRF-DNN system outperforms both a hybrid HMM-DNN system and a frame-level CRF-DNN system using the same label space.

Index Terms: word recognition, segmental conditional random fields, first-pass decoder

1. Introduction

Conventional Hidden Markov Models (HMMs) model acoustic observations frame by frame for speech recognition. They have long been known to suffer from the conditional independence assumption between frames and from the inability to incorporate long-span features (e.g., phone duration and formant trajectories). A number of studies have attempted to address these limitations by extending frame-level HMMs to segmental models [1, 2, 3].

Discriminative segmental models have recently become a promising direction of speech recognition research [4, 5, 6]. In particular, segmental conditional random fields (SCRFs) have achieved recent success in the lattice rescoring framework for improving state-of-the-art word recognition performance [7], owing to their expressive log-linear feature integration and their discriminative nature at the sequence level. However, they still rely on lattices generated from a baseline HMM system to constrain the candidate segmentations and label sequences for rescoring. Only recently have direct segment-based CRF approaches been explored for first-pass decoding, but only for phone recognition [8, 9, 10]. In this work, we propose a WFST-based decoding framework that uses SCRFs as acoustic models directly, along with language models, for first-pass word decoding. As far as we know, this is the first such attempt in the literature.

In recent years, deep neural networks (DNNs) have achieved remarkable success in acoustic modeling [11, 12]. In our previous work [9], we showed that a sequence-based SCRF can be combined very effectively with a frame-based shallow neural network in a one-pass phone recognizer. DNNs, as sophisticated non-linear feature learners, can further remove the traditional restriction of CRFs to linear feature spaces. Therefore, in this work we investigate the effectiveness of combining SCRFs with DNNs as acoustic models in an end-to-end system for word recognition.
We evaluate the proposed SCRF-DNN system on the WSJ0 corpus and find that, even in the presence of a trigram word-level language model and strong DNN acoustic models, a monophone-based SCRF-DNN system still outperforms a frame-level CRF-DNN system and a hybrid HMM-DNN system using the same monophone label space, while approaching the performance of a senone-based hybrid system. As in the phone recognition case [9], the phone duration features and the acoustics-based phone transition features are useful for word recognition.

2. SCRF-based Acoustic Models

2.1. Frame-level Conditional Random Fields

Conditional random fields (CRFs) [13], shown in Figure 1(a), model sequence posteriors in structured prediction tasks and are widely used in speech and language processing [14]. Let $O = \{o_1, o_2, \ldots, o_T\}$ be the observation sequence, where $T$ is the total number of frames in an utterance, and let $Q = \{q_1, q_2, \ldots, q_T\}$ be the corresponding phone label sequence. Assuming a standard frame-level (linear-chain) CRF, the probability of a phone label sequence conditioned on the observations is given by:

$$P(Q \mid O) = \frac{\exp\left(\sum_{t=1}^{T} \sum_i \lambda_i f_i(q_{t-1}, q_t, o_t)\right)}{Z(O)} \qquad (1)$$

where

$$Z(O) = \sum_{Q'} \exp\left(\sum_{t=1}^{T} \sum_i \lambda_i f_i(q'_{t-1}, q'_t, o_t)\right) \qquad (2)$$

Training is done under the conditional maximum likelihood (CML) criterion, i.e., by maximizing the posterior probabilities of the correct label sequences conditioned on the training observations. Decoding is done by choosing the label sequence with the highest posterior probability at test time [14].
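For concreteness, the following is a minimal sketch, not the paper's implementation, of Eqs. (1)-(2): given precomputed weighted feature sums per frame and per label pair, it scores a label sequence and computes $\log Z(O)$ with the standard forward recursion. The function name and the dense score matrices are illustrative assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def crf_log_posterior(state_scores, trans_scores, labels):
    """Log P(Q|O) for a linear-chain CRF (Eqs. 1-2).

    state_scores: (T, K) weighted state-feature sums per frame and label.
    trans_scores: (K, K) weighted transition-feature sums per label pair.
    labels:       length-T label sequence (ints in [0, K)).
    """
    T, K = state_scores.shape
    # Numerator: log-score of the given path.
    path = state_scores[0, labels[0]]
    for t in range(1, T):
        path += trans_scores[labels[t - 1], labels[t]] + state_scores[t, labels[t]]
    # Denominator: forward recursion for log Z(O).
    alpha = state_scores[0].copy()
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + trans_scores, axis=0) + state_scores[t]
    return path - logsumexp(alpha)
```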

Figure 1: Frame-level CRFs and segmental CRFs. (a) A frame-level CRF labels each frame (e.g., "cat" /k ae t/ as k k ae ae ae t over frames $o_1, \ldots, o_6$). (b) A segmental CRF labels variable-length segments (k, ae, t) delimited by boundaries $e_0, e_1, e_2, e_3$.

2.2. Segmental Conditional Random Fields

Segmental conditional random fields, also known as SCRFs [5] or semi-Markov CRFs [15], generalize frame-level CRFs by allowing each label state to span several observations of variable length. As shown in Figure 1(b), since each label state in the sequence $Q$ of an SCRF can correspond to a chunk of observations in $O$, the feature functions for a segment, or between two segments, can represent long-span dependencies such as duration. The Markov assumption holds only on the transitions between segment states, while the dependencies within segments can be non-Markovian.

Given an observation sequence $O$, suppose that for a specific hypothesis $O$ is segmented into $J$ chunks. Let the segment-level phone label sequence be denoted $Q = \{q_1, q_2, \ldots, q_J\}$, where $1 \le J \le T$. Let $e_j$ be the frame index of the ending frame of the $j$-th segment. Then $E = \{e_1, e_2, \ldots, e_J\}$ defines one possible segmentation of $O$. Each label $q_j$ corresponds to a chunk of frames from $e_{j-1}$ (exclusive) to $e_j$ (inclusive), with the observations denoted $o_{e_{j-1}+1}^{e_j}$. The joint probability of the label sequence and its associated segmentation, conditioned on the observations, is modeled as:

$$P(Q, E \mid O) = \frac{\exp\left(\sum_{j=1}^{|E|} \sum_i \lambda_i f_i(q_{j-1}, q_j, o_{e_{j-1}+1}^{e_j})\right)}{Z(O)} \qquad (3)$$

where

$$Z(O) = \sum_{Q', E' \,\text{s.t.}\, |Q'| = |E'|} \exp\left(\sum_{j=1}^{|E'|} \sum_i \lambda_i f_i(q'_{j-1}, q'_j, o_{e'_{j-1}+1}^{e'_j})\right) \qquad (4)$$

Training and decoding need to consider all possible label sequences and segmentations for inference, which can be done efficiently by extending the standard forward and backward recursions of frame-level CRFs to the segment level [9].
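Under the same illustrative assumptions as the sketch above, the segment-level forward recursion referenced here can be written as follows; it computes $\log Z(O)$ of Eq. (4) for segments of duration up to $D$ frames.

```python
import numpy as np
from scipy.special import logsumexp

def scrf_log_partition(seg_scores, trans_scores):
    """log Z(O) of Eq. (4) via a segment-level forward recursion.

    seg_scores[t, d, v]: weighted state-feature sum of a segment with
        label v that ends at frame t and spans d+1 frames (d < D).
    trans_scores: (K, K) label-pair transition scores.
    Runtime is O(T * D * K^2), versus O(T * K^2) for Eq. (2).
    """
    T, D, K = seg_scores.shape
    alpha = np.full((T, K), -np.inf)  # alpha[t, v]: all paths ending at frame t in label v
    for t in range(T):
        for d in range(min(D, t + 1)):  # segment covers frames t-d .. t
            score = seg_scores[t, d]    # (K,) per-label segment scores
            if t - d == 0:              # segment starts the utterance
                alpha[t] = np.logaddexp(alpha[t], score)
            else:                       # previous segment ended at frame t-d-1
                prev = logsumexp(alpha[t - d - 1][:, None] + trans_scores, axis=0)
                alpha[t] = np.logaddexp(alpha[t], prev + score)
    return logsumexp(alpha[T - 1])
```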
2.3. DNNs as Features to CRFs / SCRFs

We use the linear output of a DNN before the softmax layer as the input to the CRF or SCRF acoustic models. Let $K$ be the dimension of the DNN output layer, and let the $k$-th output for the frame at time $t$ be denoted $\mathrm{DNN}_k(o_t)$.

2.3.1. Frame-level CRF features

We combine each dimension of the DNN output with each phone state label $u$ to construct a state feature:

$$f_{u,k}(q_t, o_t) = \mathrm{DNN}_k(o_t)\,\delta(q_t = u) \qquad (5)$$

We use a transition bias as the typical transition feature for CRFs, associated with a pair of phone states $(u, v)$:

$$f_{u,v}(q_{t-1}, q_t) = \delta(q_{t-1} = u)\,\delta(q_t = v) \qquad (6)$$

In addition, we use observation-dependent transition features represented by the DNN output, which were already shown to be important in our previous work [9]:

$$f_{u,v,k}(q_{t-1}, q_t, o_t) = \mathrm{DNN}_k(o_t)\,\delta(q_{t-1} = u)\,\delta(q_t = v) \qquad (7)$$

In a CRF-DNN acoustic model, the CRF can be viewed as an extension of the DNN with transition features and sequence-level softmax normalization, while the DNN can be viewed as an extension of the CRF feature functions from linear to non-linear. Note that while the DNN models frame posteriors in an HMM hybrid system, a CRF-DNN system models sequence posteriors directly. So when we combine the CRF-DNN acoustic model with the lexicon and the language model in the ASR cascade for word decoding, converting the phoneme sequence posteriors into likelihoods requires dividing them by sequence priors instead of frame-level priors. We explain how this is done in the WFST decoding framework in Section 3.

2.3.2. SCRF features

For SCRFs, the transition bias is still used. In addition, we use segmental state features and boundary transition features, which are explained in detail below and illustrated in Figures 2(a) and 2(b).

A segmental state feature associates a hypothesized segment label with the corresponding chunk of frames. It can be constructed by simple transformations of the frame-level features across the hypothesized segment. Let $l = e_j - e_{j-1}$ denote the segment length of $o_{e_{j-1}+1}^{e_j}$. We construct the DNN-related segmental state features in the form:

$$f_{u,k}(q_j, o_{e_{j-1}+1}^{e_j}) = \phi_k(o_{e_{j-1}+1}^{e_j})\,\delta(q_j = u) \qquad (8)$$

where $\phi$ can be any of the following:

Sub-sample feature: $\phi_{k,\Delta t}(o_{e_{j-1}+1}^{e_j}) = \mathrm{DNN}_k(o_{e_{j-1}+\Delta t})$, $\Delta t \in \{0.1l, 0.3l, 0.5l, 0.7l, 0.9l\}$.

Avg feature: $\phi_k(o_{e_{j-1}+1}^{e_j}) = \frac{1}{l}\sum_{t \in (e_{j-1}, e_j]} \mathrm{DNN}_k(o_t)$.

Max feature: $\phi_k(o_{e_{j-1}+1}^{e_j}) = \max_{t \in (e_{j-1}, e_j]} \mathrm{DNN}_k(o_t)$.

Min feature: $\phi_k(o_{e_{j-1}+1}^{e_j}) = \min_{t \in (e_{j-1}, e_j]} \mathrm{DNN}_k(o_t)$.

In addition, we model duration as one-hot features associated with the hypothesized segment label:

$$f_{u,d}(q_j, o_{e_{j-1}+1}^{e_j}) = \delta(l = d)\,\delta(q_j = u), \quad 1 \le d \le D \qquad (9)$$

where $D$ is the maximum allowable segment duration. For efficiency and better generalizability, these segmental features are used only for states, not for transitions.
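A minimal sketch of the segmental state features of Eqs. (8)-(9), assuming a precomputed (T, K) matrix of per-frame DNN linear outputs; the function name and the exact rounding of the sub-sample positions are illustrative.

```python
import numpy as np

def segmental_state_features(dnn_out, start, end, max_dur):
    """Segment-level features phi (Eqs. 8-9) from frame-level DNN outputs.

    dnn_out: (T, K) linear DNN outputs; the segment covers frames
    start..end inclusive, mirroring o_{e_{j-1}+1}^{e_j}.
    Assumes segment length l <= max_dur (longer phones are split; Sec. 4).
    """
    seg = dnn_out[start:end + 1]          # (l, K)
    l = seg.shape[0]
    # Five equally spaced sub-samples at 10%, 30%, 50%, 70%, 90% of the segment.
    idx = [int(r * l) for r in (0.1, 0.3, 0.5, 0.7, 0.9)]
    sub = seg[idx].ravel()                # 5*K sub-sample features
    stats = np.concatenate([seg.mean(0), seg.max(0), seg.min(0)])  # avg/max/min
    duration = np.zeros(max_dur)          # one-hot duration (Eq. 9)
    duration[l - 1] = 1.0
    return np.concatenate([sub, stats, duration])
```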

For transitions, we use boundary transition features that associate two consecutive hypothesized phone segment labels with a fixed-size context window of observations around their boundary:

$$f_{u,v,k,\Delta t}(q_{j-1}, q_j, o_{e_{j-1}-c}^{e_{j-1}+c}) = \mathrm{DNN}_k(o_{e_{j-1}+\Delta t})\,\delta(q_{j-1} = u)\,\delta(q_j = v), \quad -c \le \Delta t \le c \qquad (10)$$

where $c$ is the predefined half-window size. Since the boundary transition features are not segmental, i.e., they are independent of the segment length, we can apply the Boundary-Factored SCRF implementation proposed in our previous work [9] for efficient inference in training and decoding.

Figure 2: Segmental state features and boundary transition features for SCRFs. (a) Segmental state features, including sub-sample, average, maximum, minimum, and duration features, associated with a hypothesized phone segment $q_m$ that is 10 frames long; the DNN output dimension $K$ is simplified to 3 for illustration. Duration features are encoded as one-hot features: only the bit corresponding to the hypothesized segment length is set to 1 (assuming the maximum duration $D$ is 10). (b) Boundary transition features with a +/- 1 frame context window ($c = 1$), associated with a pair of hypothesized phone segment labels $(q_n, q_{n+1})$.

3. WFST-based Word Decoding

3.1. HMM-based Decoding

The conventional statistical model for ASR generates the most likely word sequence $W$ given the observations $O$ by maximizing the conditional probability $P(W \mid O)$, based on the Bayes decision rule:

$$W^* = \arg\max_W P(W \mid O) \qquad (11)$$
$$\approx \arg\max_W \max_Q P(O \mid Q)\,P(Q \mid W)\,P(W) \qquad (12)$$

where $Q$ denotes a sequence of sub-word units (typically phones or context-dependent phones) corresponding to $W$. $P(O \mid Q)$ is the acoustic model, $P(Q \mid W)$ is the pronunciation model (dictionary), and $P(W)$ is the language model. In HMM-based ASR systems, $P(O \mid Q)$ is modeled by a generative HMM-GMM model, or by a hybrid HMM-DNN model, which converts DNN state posteriors into likelihoods by dividing them by state priors [16]. A traditional HMM-based decoder can be encoded as a static WFST decoding graph by composing transducers at different levels of probabilistic transduction [17]:

$$H \circ C \circ L \circ G \qquad (13)$$

$H$ is a transducer with context-dependent phones on its output and symbols representing acoustic states on its input; the HMM state topology is represented in $H$, with weights given by the probabilities of state self-loops and transitions. $C$ transduces context-dependent phones into context-independent ones. $L$ is the lexicon WFST that maps phones into words with pronunciation probabilities. $G$ maps one word sequence to another according to language model probabilities. The four components are composed into a static graph, which is expanded at test time to accept state-level acoustic model scores at each frame from a GMM or DNN component. All probabilities are encoded as negative log likelihoods on the arc weights, so that the most likely word sequence corresponds to the shortest path of the expanded graph with respect to the tropical semiring.
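To make the semiring convention concrete, here is a toy sketch, not the paper's decoder: arc weights are negative log probabilities, weights add along a path, and the best word sequence is the minimum-cost (shortest) path. The hand-built graph stands in for an expanded H∘C∘L∘G.

```python
import math

# Toy decoding graph: arcs are (src, dst, input_phone, output_word, weight),
# with weight = -log(prob). A real graph would come from composing H, C, L, G.
arcs = [
    (0, 1, "k",  "",    -math.log(0.9)),
    (1, 2, "ae", "",    -math.log(0.8)),
    (2, 3, "t",  "cat", -math.log(0.7)),
    (0, 3, "k",  "cut", -math.log(0.1)),
]
final_state = 3

# Tropical semiring: (+) is min over paths, (x) is addition of arc weights.
best = {0: (0.0, [])}
for src, dst, _, word, w in sorted(arcs):     # acyclic; states in topological order
    if src in best:
        cost = best[src][0] + w               # extend the path: add weights
        words = best[src][1] + ([word] if word else [])
        if dst not in best or cost < best[dst][0]:
            best[dst] = (cost, words)

print(best[final_state])                      # lowest-cost word sequence: ['cat']
```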
3.2. CRF-based Decoding

In a CRF-based system as in Section 2, we model the posterior probability $P(Q \mid O)$ directly instead of $P(O \mid Q)$. In the spirit of hybrid HMM-DNN systems, we follow Morris [18] and incorporate a CRF acoustic model into word decoding by transforming the posterior probability through Bayes' rule again:

$$P(O \mid Q) = \frac{P(Q \mid O)\,P(O)}{P(Q)} \qquad (14)$$

Applying Eq. (14) to Eq. (12), we can formulate CRF-based ASR decoding as Eq. (16) for finding the most likely word sequence:

$$W^* \approx \arg\max_W \max_Q \frac{P(Q \mid O)\,P(O)}{P(Q)}\,P(Q \mid W)\,P(W) \qquad (15)$$
$$= \arg\max_W \max_Q \frac{P(Q \mid O)}{P(Q)}\,P(Q \mid W)\,P(W) \qquad (16)$$

The phoneme sequence posterior $P(Q \mid O)$ can be modeled by frame-level CRFs as in Eq. (1) or by segmental CRFs as in Eq. (3). $P(Q)$ is the phoneme sequence prior, modeled by an n-gram language model at the phone or state level. $P(Q \mid W)$ and $P(W)$ are the same as in an HMM-based system.

To decode with a CRF-based acoustic model, we can apply Eq. (16) to the WFST decoding framework with an extension:

$$H \circ C \circ P \circ L \circ G \qquad (17)$$

The transducers $C$, $L$, and $G$ are unchanged. However, $H$ in this case represents the state transition and cross-phone transition topology of the CRFs or SCRFs, and its arcs are unweighted, since the transition scores may depend on the acoustic observations and are provided by the CRF or SCRF acoustic model $P(Q \mid O)$ at test time, along with the acoustic state scores. The phone sequence prior is represented by a transducer $P$ whose arc weights are positive log likelihoods of the phone n-gram probabilities, since $P(Q)$ appears in the denominator of Eq. (16). $P$ is composed to the right of $C$ in the case of monophone n-gram LM priors, or to the left of $C$ in the case of context-dependent phone priors.
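As a hedged sketch of how Eq. (16) turns into path costs during search (names and scale factors are illustrative, mirroring the scaling factors described in Section 4): the CRF/SCRF numerator accumulates along the path, the phone-prior term enters with the opposite sign because $P(Q)$ divides the posterior, and $Z(O)$ is omitted since, as noted below, it is shared by all competing paths.

```python
def partial_path_cost(numerator_logscore, phone_prior_logprob, lex_lm_logprob,
                      prior_scale=1.0, lm_scale=1.0):
    """Tropical-semiring cost of a partial hypothesis under Eq. (16).

    numerator_logscore: accumulated weighted feature sum from Eq. (1)/(3)
        up to the current frame (the log-numerator of P(Q|O)); log Z(O)
        is dropped because it is identical for every competing path.
    phone_prior_logprob: accumulated log P(Q) from the phone n-gram in P,
        added with a positive sign since P(Q) is in the denominator.
    lex_lm_logprob: accumulated log [P(Q|W) P(W)] from L and G.
    """
    return (-numerator_logscore
            + prior_scale * phone_prior_logprob
            - lm_scale * lex_lm_logprob)
```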

$P(Q \mid O)$ for CRFs or SCRFs cannot be fully computed until the end of the utterance. However, in time-synchronous Viterbi decoding [17], we only need to provide the accumulated negative log of the numerator (i.e., the weighted feature sum) in Eqs. (1) and (3) up to the current time step along each path, since the log of the denominator $Z(O)$ is eventually the same for all paths. This enables beam pruning and online decoding.

4. Evaluation

We evaluate the proposed SCRF-DNN system on the WSJ0 corpus for continuous speech recognition, a corpus of read speech from the Wall Street Journal by native English speakers. A training set of 7138 utterances from 83 speakers (about 15 hours) is used to build the recognition models. A development set of 368 utterances from 10 speakers is used to tune the models prior to evaluation. Specifically, this work evaluates system performance on the Eval-92 5K-vocabulary test set, which includes 330 utterances from 8 speakers. All systems are evaluated with the same standard 5K closed-vocabulary trigram language model provided by the task.

As initial experiments, we build our SCRF-DNN acoustic models with monophone labels. For direct comparison, a hybrid HMM-DNN system and a frame-level CRF-DNN system are built with 3-state monophone state labels. We use the Kaldi toolkit [19] to build the hybrid HMM-DNN system with 40-dimensional log Mel filterbank features plus their deltas and double-deltas over an 11-frame context window. We first pretrain the DNN generatively with stacked RBMs, which are then used to initialize a DNN with 7 hidden layers of 2048 sigmoid units. We then train the DNN on the monophone state targets using the alignment obtained from an HMM-GMM system trained on MFCC features. Since the initial monophone alignment is not very accurate, we realign the training data with the trained DNN and retrain the DNN on the new alignment, repeating this process three times until performance saturates. We further improve the system by applying sMBR-based sequence training to the DNN [20]. For faster convergence of the sMBR training, we regenerate the lattices after the first iteration and train for 4 more iterations.

Both the frame-level CRF-DNN system and the SCRF-DNN system use the same DNN trained in the hybrid system. The linear output of the DNN, normalized at the utterance level, is used as the feature input to the CRFs and SCRFs, since it works slightly better than the DNN posteriors in our preliminary experiments. We use stochastic gradient descent to optimize both the CRFs and the SCRFs. For learning rate scheduling, CRF training uses adaptive gradient descent [21], which converges within about 20 iterations; SCRF training converges very quickly with a small fixed learning rate (0.0003) within about 5 iterations, with no further improvement from switching to adaptive gradient descent or learning-rate halving. We use a bigram phone LM for the phone sequence prior component in the SCRF decoding graph, with a scaling factor analogous to the standard language model scaling factor; both factors are tuned on the development set.
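The paper does not spell out the utterance-level normalization formula; a plausible minimal sketch is per-utterance mean/variance normalization of each DNN output dimension, with all names illustrative.

```python
import numpy as np

def utterance_normalize(dnn_linear_out, eps=1e-8):
    """Mean/variance-normalize DNN linear outputs per utterance.

    dnn_linear_out: (T, K) pre-softmax activations for one utterance.
    Returns features with zero mean and unit variance per dimension
    across the utterance, used as CRF/SCRF inputs instead of posteriors.
    """
    mu = dnn_linear_out.mean(axis=0, keepdims=True)
    sigma = dnn_linear_out.std(axis=0, keepdims=True)
    return (dnn_linear_out - mu) / (sigma + eps)
```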
Following our previous work [9], we use +/- 6 frames around a boundary as the context window for the SCRF boundary transition features, and 10 frames as the maximum duration of each segment (phones longer than 10 frames are split evenly into smaller segments).

The comparison of results is shown in Table 1. The baseline monophone hybrid HMM-DNN system achieves 3.9% WER on the test set, showing that it is already a strong monophone system. The frame-level CRF-DNN system achieves 4.2% WER, insignificantly worse than the hybrid system. The SCRF-DNN system obtains 3.3% WER, significantly outperforming both frame-level baseline systems with the same monophone label space. Note that both frame-level systems use 3-state monophones as targets, while the SCRF uses only 1-state monophone targets because it models a phoneme as a whole unit; the sub-sample features and the 3-state DNN output help the SCRF model internal phone states implicitly.

System               | Phone label space  | Test WER
hybrid HMM-DNN       | 3-state monophones | 3.9%
frame-level CRF-DNN  | 3-state monophones | 4.2%
SCRF-DNN             | 1-state monophones | 3.3%
hybrid HMM-DNN       | senones            | 2.5%

Table 1: WER on the Eval-92 5K closed-vocabulary task.

We also trained a hybrid HMM-DNN system with tied triphone states (senones) as targets, which achieves 2.5% WER. That is, with only 1-state monophone targets, our SCRF-DNN system already closes half of the gap between a hybrid monophone system and a senone system. In future work, we would like to incorporate context dependency into SCRFs through either the feature space or the label space. For example, we could use DNN senone posteriors or linear outputs as transition features associated with a pair of phones, in which case we can constrain the parameter space by considering only the top few senones for each transition pair.

The CML training criterion for CRFs and SCRFs is effectively equivalent to the maximum mutual information (MMI) criterion used for hybrid HMM-DNN sequence training [14, 22], except that we do not train the CRFs or SCRFs together with the language models as in [23, 20]. We also do not jointly train the sequence models with the DNNs as in [24, 10]. Applying both techniques might further improve the results of our SCRF-DNN system. In addition, alternative sequence training criteria might be useful for SCRFs, such as sMBR [23, 20], large margin [25], or the other cost functions investigated in [26].

5. Conclusion

Segmental conditional random fields have mainly been used in the literature for lattice rescoring in word recognition, or for one-pass decoding in phone recognition. We apply SCRFs to one-pass word recognition for the first time by introducing a WFST-based decoding framework that can incorporate SCRFs and DNNs as acoustic models efficiently together with language models. Experimental results on a 5K-vocabulary read speech corpus (WSJ0) show that the proposed SCRF-DNN system significantly outperforms both a hybrid HMM-DNN system and a frame-level CRF-DNN system with the same label space. We show that SCRFs can model variable-length phone segments directly through duration features and the aggregation of local DNN outputs within a segment. In future work, we would like to explore the integration of DNN senone posteriors as well as sequence training with language models.

6. Acknowledgements

This work was supported by NSF Grant IIS. We would like to thank Jeremy Morris for valuable discussions and for the use of his frame-level ASR-CRaFT toolkit.

7. References

[1] M. Ostendorf, V. V. Digalakis, and O. A. Kimball, "From HMMs to segment models: A unified view of stochastic modeling for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 4, no. 5, 1996.

[2] J. R. Glass, "A probabilistic framework for segment-based speech recognition," Computer Speech & Language, vol. 17, no. 2, 2003.

[3] M. De Wachter, M. Matton, K. Demuynck, P. Wambacq, R. Cools, and D. Van Compernolle, "Template-based continuous speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, 2007.

[4] M. Layton and M. Gales, "Augmented statistical models for speech recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), vol. 1, 2006.

[5] G. Zweig and P. Nguyen, "A segmental CRF approach to large vocabulary continuous speech recognition," in Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU '09), Merano, Italy, Dec. 2009.

[6] S. Zhang, A. Ragni, and M. Gales, "Structured log linear models for noise robust speech recognition," IEEE Signal Processing Letters, vol. 17, no. 11, 2010.

[7] G. Zweig, P. Nguyen, D. Van Compernolle, K. Demuynck, L. Atlas, P. Clark, G. Sell, M. Wang, F. Sha, H. Hermansky, D. Karakos, A. Jansen, S. Thomas, G. S. V. S. Sivaram, S. Bowman, and J. Kao, "Speech recognition with segmental conditional random fields: A summary of the JHU CLSP 2010 summer workshop," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '11), Prague, Czech Republic, May 2011.

[8] G. Zweig, "Classification and recognition with direct segment models," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), Kyoto, Japan, Mar. 2012.

[9] Y. He and E. Fosler-Lussier, "Efficient segmental conditional random fields for phone recognition," in Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech '12), Portland, OR, USA, Sep. 2012.

[10] O. Abdel-Hamid, L. Deng, D. Yu, and H. Jiang, "Deep segmental neural networks for speech recognition," in Proceedings of Interspeech, 2013.

[11] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, 2012.

[12] L. Deng, J. Li, J.-T. Huang, K. Yao, D. Yu, F. Seide, M. Seltzer, G. Zweig, X. He, J. Williams et al., "Recent advances in deep learning for speech research at Microsoft," in Proceedings of ICASSP, 2013.

[13] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in Proceedings of the International Conference on Machine Learning (ICML), 2001.

[14] E. Fosler-Lussier, Y. He, P. Jyothi, and R. Prabhavalkar, "Conditional random fields in speech, audio, and language processing," Proceedings of the IEEE, vol. 101, no. 5, 2013.

[15] S. Sarawagi and W. W. Cohen, "Semi-Markov conditional random fields for information extraction," in Advances in Neural Information Processing Systems (NIPS '04), Vancouver, British Columbia, Canada, Dec. 2004.

[16] H. Bourlard and N. Morgan, Connectionist Speech Recognition: A Hybrid Approach. Kluwer Academic Publishers, 1994.

[17] M. Mohri, F. Pereira, and M. Riley, "Speech recognition with weighted finite-state transducers," in Springer Handbook of Speech Processing. Springer, 2008.

[18] J. J. Morris, "A study on the use of conditional random fields for automatic speech recognition," Ph.D. dissertation, The Ohio State University, 2010.

[19] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., "The Kaldi speech recognition toolkit," in Proceedings of ASRU, 2011.

[20] B. Kingsbury, "Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling," in Proceedings of ICASSP, 2009.

[21] J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," Journal of Machine Learning Research, vol. 12, 2011.

[22] X. He, L. Deng, and W. Chou, "Discriminative learning in sequential pattern recognition," IEEE Signal Processing Magazine, vol. 25, no. 5, Sep. 2008.

[23] D. Povey, "Discriminative training for large vocabulary speech recognition," Ph.D. dissertation, Cambridge University, Cambridge, UK, 2003.

[24] R. Prabhavalkar and E. Fosler-Lussier, "Backpropagation training for multilayer conditional random field based phone recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), 2010.

[25] S.-X. Zhang and M. J. Gales, "Structured SVMs for automatic speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 3, 2013.

[26] H. Tang, K. Gimpel, and K. Livescu, "A comparison of training approaches for discriminative segmental models," in Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech), 2014.
