REDUNDANT CODING AND DECODING OF MESSAGES IN HUMAN SPEECH COMMUNICATION


Hynek Hermansky
The Johns Hopkins University, Baltimore, MD, USA
KAVLI Institute for Theoretical Physics, Santa Barbara, CA

ABSTRACT

We hypothesize that the linguistic message in speech, as represented by a string of speech sounds, is coded redundantly in both the time and the frequency domains. Such redundant coding of the message in the signal evolved so that relevant spectral and temporal properties of human hearing can be used in extracting the message from the noisy speech signal. This hypothesis is supported by results of recognition experiments using a particular architecture of an automatic recognition system, in which longer segments of the temporal trajectories of spectral energies in individual frequency bands of speech are used for deriving estimates of posterior probabilities of speech sounds. Estimates from the reliable frequency bands are then adaptively fused to yield the final probability vectors that best satisfy the adopted performance monitoring criteria.

Keywords: human speech communication, redundant coding of message, robustness in automatic recognition of speech, parallel multistream processing

Corresponding author: Hynek Hermansky, Center for Language and Speech Processing, The Johns Hopkins University, Hackerman Hall 324F, 3400 N. Charles Street, Baltimore, MD 21218, hynek@jhu.edu

1. MOTIVATIONS

1.1. Coding and decoding linguistic messages in speech

Speech evolved for human communication in realistic noisy environments. Speech signals carry linguistic messages, which contain the information that is being communicated. They also always carry information about the speaker's identity, health, emotions, and moods, other speaker idiosyncrasies, and acoustic noise. Comparing the amount of information transmitted in the speech signal with the amount of linguistic information in the message, represented as an ordered string of phonemes [Miller 1951], suggests that a massive information rate expansion of several orders of magnitude occurs during speech production [Flanagan 1965]. Subsequently, a similar information rate reduction happens during message decoding.
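To make the scale of this expansion concrete, here is a back-of-envelope comparison using round illustrative figures (the specific numbers are assumptions for this sketch, not values from the paper): a phoneme string of roughly a dozen phonemes per second drawn from an inventory of about forty, against a telephone-quality digital waveform.

```python
import math

# Illustrative round numbers (assumptions, not figures from the paper):
# ~40 phonemes in the inventory, spoken conversationally at ~12 per second.
phoneme_inventory = 40
phonemes_per_second = 12
message_bits_per_second = phonemes_per_second * math.log2(phoneme_inventory)  # ~64 bit/s

# Telephone-quality waveform: 8000 samples/s at 8 bits per sample.
signal_bits_per_second = 8000 * 8  # 64,000 bit/s

print(f"message: ~{message_bits_per_second:.0f} bit/s")
print(f"signal:  {signal_bits_per_second} bit/s")
print(f"expansion: ~{signal_bits_per_second / message_bits_per_second:.0f}x")  # ~3 orders of magnitude
```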

We hypothesize that this information rate expansion is largely due to an inherent redundant coding of the message in the signal, which evolved so that relevant spectral and temporal properties of human hearing can be used in extracting the message from the noisy speech signal.

Figure 1: Schematic illustration of the origins of temporal and spectral redundancies in coding the message in speech, and of a possible use of these redundancies in decoding the speech message.

In creating a linguistic message, the speaker's brain issues motor commands, which specify the string of speech sounds that constitutes the message. These commands control the critical elements of the speech production system. Movements of the critical elements elicit synchronized movements of the whole vocal tract and introduce redundant information about the message into different parts of the speech spectrum. In addition, the vocal organs move with their own inertias, which introduces redundancy of the message coding in time. As a result, the series of sound-specific speech commands is coded redundantly both in time and in frequency.

Noise corrupts the speech signal before it reaches a listener. Human hearing extracts information about the speech sound sequences, which represent the message, using spectro-temporal properties of the auditory cortex. In this process, the listener alleviates the noise and massively reduces the information rate. Such message decoding is done, effortlessly and with tolerable delays, by all listeners with normal hearing. How much of this ability is innate and how much is acquired during life is still being discussed [Cowie 2010]; however, it is certain that hearing is among the innate abilities that are used in message decoding by all normal listeners.

Thus, over the millennia of human evolution, the forces of nature fostered strategies for encoding and decoding messages in speech that are consistent with one of the most important fundamental principles of the mathematical theory of communication in noise [Shannon 1949]: encode the message redundantly during speech production, in such a way that these redundancies can be exploited by human hearing during the decoding of the message.
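The following minimal NumPy sketch illustrates the two mechanisms just described under toy assumptions (all quantities are invented for illustration): a command sequence is smeared in time by articulator inertia, copied into several frequency bands, corrupted in one band, and then decoded by favoring the bands that agree with one another.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical sequence of "motor commands": one of 4 targets per speech
# sound, each sound lasting ~70 ms (7 frames of 10 ms).
commands = np.repeat(rng.integers(0, 4, size=8), 7).astype(float)

# Temporal redundancy: articulator inertia smears each command over time
# (modeled crudely as a ~150 ms moving average of the command trajectory).
trajectory = np.convolve(commands, np.ones(15) / 15.0, mode="same")

# Spectral redundancy: the same trajectory modulates several frequency
# bands, each with its own vocal-tract-dependent gain.
n_bands = 5
bands = np.outer(rng.uniform(0.5, 1.5, size=n_bands), trajectory)

# Noise corrupts one band on the way to the listener.
bands[2] += rng.normal(0.0, 2.0, size=bands.shape[1])

# A decoder that does not know which band is clean can still favor the
# bands whose temporal trajectories agree with one another.
agreement = np.corrcoef(bands)
reliability = (agreement.sum(axis=1) - 1.0) / (n_bands - 1)
best = np.argsort(reliability)[-3:]
decoded = bands[best].mean(axis=0)
print("most reliable bands:", sorted(best.tolist()))  # band 2 is excluded
```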

1.2. Redundancies in time

Speech messages are coded into sequences of phonemes. In producing such a sequence, the vocal tract cannot change its shape instantly. The vocal tract inertia redundantly spreads information about the underlying sounds over longer time intervals, resulting in phoneme coarticulation. While the spacing of phonemes in conversational speech is, on average, roughly 70 ms, the information spread is considerably longer, at least several hundreds of ms [Yang et al 2000, Bush and Kain 2013]. Due to coarticulation, information about phonemes, carried in the temporal dynamics of the speech spectrum, spreads over several phonemes, and the coarticulation patterns of neighboring phonemes overlap in time.

One of the well-accepted time constants of human hearing (see, e.g., [Cowan 1984]), which can be interpreted as a temporal buffer for processing the acoustic signal, spans roughly a syllable-length temporal interval of speech. We hypothesize that the listener's perceptual system uses this temporal buffer for gathering and processing information about the identity and position of each phoneme in the coarticulated speech. When sufficiently long segments of speech, containing most of the dynamics caused by the underlying speech sound of interest mixed with unwanted information from coarticulated neighboring sounds, are available in the hearing buffer, human hearing can focus on the desired information and ignore short temporal distortions [Miller 1947, Warren 1970]. This forms in the listener's mind the phoneme sequence that carries the message.

This concept implies that individual phonemes are perceived independently, despite the obvious interactions among neighboring phonemes in the speech signal. It is strongly supported by results of perceptual experiments in which the probability of correct human recognition of a consonant-vowel combination in a syllable is given by the product of the probabilities of correct recognition of the individual phonemes, e.g., P(CVC) = P(C) P(V) P(C), thus implying their conditional independence [Fletcher 1953, Boothroyd and Nittrouer 1988].

1.3. Redundancies in frequency

A speaker forms a message in speech by controlling the critical message-carrying parts of the vocal tract (tongue constriction, lips, and velum). Movements of these critical parts cause the entire speaker-specific vocal tract to change its shape in synchrony with them. Changes to the vocal tract induce speaker- and message-specific redundant changes throughout the speech spectrum. Therefore, the information is carried redundantly in different parts of the speech spectrum.

This spread of information into different frequency bands is advantageous for human decoding of the message from noisy speech. During decoding, the speaker-specific information is alleviated by relatively broad spectral integration of the speech spectrum over several critical bands, yielding articulatory bands. In low-noise acoustic environments, each articulatory band carries roughly an equal amount of information about the message, and the human cognitive system extracts message-specific information from all articulatory bands. When some articulatory bands are corrupted by noise, these bands are suppressed in decoding the message [French and Steinberg 1947].
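A minimal sketch of the two operations just described, under invented numbers (15 critical-band energies pooled by threes into 5 broad bands; the per-band SNR estimates are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical critical-band spectrogram: 100 frames x 15 critical-band
# energies (roughly Bark-spaced).
critical = rng.random((100, 15)) + 1.0

# Spectral smoothing: pool each group of 3 adjacent critical bands into one
# broader "articulatory" band, in the spirit of the ~3.5 Bark integration.
articulatory = critical.reshape(100, 5, 3).mean(axis=2)   # 100 frames x 5 bands

# Band suppression: down-weight bands judged unreliable, here via a made-up
# per-band SNR estimate in which band 3 is heavily corrupted.
snr_db = np.array([20.0, 18.0, 15.0, -5.0, 12.0])
weights = (snr_db > 0.0).astype(float)     # hard selection; soft weights also work
evidence = articulatory * weights          # band 3 no longer contributes
print(evidence.mean(axis=0))               # column 3 is zeroed out
```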
Spectral smoothing over several critical bands is supported by the articulatory bands being broader than critical bands [French and Steinberg 1947], by the F2' concept and by the 3.5 Bark spectral integration theory [Fant and Risberg 1962, Chistovich 1978], by the optimality of the low-order PLP model in alleviating speaker-dependent information in speech [Hermansky and Broad 1988, Hermansky 1990], by speech resynthesis from low-resolution spectra [Shannon et al 1995], by data-derived spectral features [Hermansky and Malayath 1998], and by observed spectral properties of the mammalian auditory cortex [Chi et al 2005, Mahajan et al 2014]. The existence of parallel articulatory frequency bands is supported by the results of a series of psychophysical experiments with high-pass and low-pass filtered speech at varying signal-to-noise ratios (SNRs) [Fletcher 1953], and by related works on predicting the intelligibility of corrupted speech [French and Steinberg 1947].
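The band-independence account that underlies these intelligibility predictions is often summarized as a product rule over per-band error probabilities, paralleling the product rule over phonemes above; a short illustration with invented error rates:

```python
import numpy as np

# Illustrative per-band error rates for five independent articulatory bands;
# the last band is nearly useless (error close to 1), as if masked by noise.
band_error = np.array([0.4, 0.35, 0.5, 0.45, 0.9])

# Band independence: the listener errs only when every band fails, so the
# overall error is the product of the per-band errors.
overall_error = float(np.prod(band_error))
print(f"overall error ~ {overall_error:.3f}")   # ~0.03, far below any single band
```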

Evidence from the physiology of hearing further supports the articulatory band concept. Human hearing separates the speech signal into a number of parallel spectral streams, which persist all the way to the auditory cortex [Chow 1951]. With ascending processing levels, the firing rates gradually decrease, but the number of processing neurons gradually increases [Pickles 1988, Hromadka et al 2008, Hermansky 2013]. Thus, the representation of speech grows sparser but richer in detail, offering a number of possible ways of describing the signal and thus forming multiple processing channels. This yields a significant advantage: when a single-stream system is corrupted by noise, its results are corrupted, but assuming that the parallel processing streams of a multistream system are formed in such a way that typical noises do not affect all streams simultaneously, the system can rely on the uncorrupted streams for information extraction.

The whole hypothetical concept of the redundant coding of messages in human speech communication is schematically depicted in Fig. 1.

2. APPLICATIONS OF THE PROPOSED HYPOTHESIS IN ASR

2.1. Estimating posterior probabilities of speech sounds

The first stage of most current ASR systems estimates posterior probabilities of sub-word acoustic units, which are typically related to the phonemes of speech. This probability-estimating module is today most often a deep neural network: an ANN with a number of hidden layers, trained on labeled speech data. While for smaller databases the labeling can be done manually, labels for large training databases are typically derived by a forced alignment of transcribed speech with sequences of sub-word models representing the messages in the speech. The properties of the trained ANN depend primarily on the data on which the ANN is trained, on the character of the sub-word acoustic units being targeted, on the training strategy, and on the architecture of the ANN. Posterior probabilities of sub-word units derived from a signal that carries an unknown message can be either appropriately transformed and used as features for a Gaussian-mixture-model-based recognizer [Hermansky et al 2000], or converted to scaled likelihoods and used directly in the search for the best sequence of models representing the message being recognized [Bourlard and Wellekens 1990].

The probability-estimating front-end module forms an information bottleneck of the ASR system. Relevant information that is lost in the front-end is lost forever. On the other hand, irrelevant information that is not alleviated must be dealt with, often at considerable computational and training-data expense, in the subsequent modules of the ASR system. Thus, an accurate probability-estimating module is critical to the success of the whole speech recognition process.

2.2. Use of temporal redundancies in ASR

In earlier ASR systems, acoustic processing was done on short chunks of the speech signal, on the order of tens of ms. With the introduction of ANNs into ASR acoustic processing, the time spans used for the processing have been gradually increasing [Morgan et al 1991, Fanty et al 1992]. The need for longer temporal intervals in the extraction of phoneme identities and timings is supported by results from optimizing discrimination among speech sounds by designing linear discriminants applied to the temporal trajectories of spectral energies. Such optimization yields filters with impulse responses that span several phonemes [Van Vuuren and Hermansky 1997, Valente and Hermansky 2006].
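A small sketch of this kind of optimization, using synthetic trajectories and scikit-learn's LDA (the data, class shapes, and dimensions are all invented for illustration); the learned weight vector can be read as the impulse response of an FIR filter over the ~1 s trajectory:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Synthetic 1-second trajectories of log energy in one band (101 frames at
# 10 ms), labeled by the phoneme at the trajectory's center (two toy classes).
n_per_class, n_frames = 500, 101
t = np.arange(n_frames) - n_frames // 2
proto_a = np.exp(-(t / 10.0) ** 2)            # short energy burst
proto_b = np.exp(-(t / 30.0) ** 2)            # slow energy swell
X = np.vstack([proto_a + 0.5 * rng.standard_normal((n_per_class, n_frames)),
               proto_b + 0.5 * rng.standard_normal((n_per_class, n_frames))])
y = np.array([0] * n_per_class + [1] * n_per_class)

# The LDA weight vector over the 101 frames is an FIR filter: its impulse
# response shows which parts of the ~1 s context discriminate the classes.
lda = LinearDiscriminantAnalysis().fit(X, y)
impulse_response = lda.coef_[0]                # spans far beyond one phoneme
print(impulse_response.shape)                  # (101,)
```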
Similar filters are obtained in the first convolutional layer of a convolutional ANN [Pesan et al 2015]. Thus, the speech data suggest that, in order to discriminate among phonemes, evidence for the discrimination must also come from outside the phoneme. These results suggest a way of dealing with the coarticulation that has puzzled the speech community for many decades (see, e.g., [Cooper et al 1952]) and that still plagues the field of ASR.
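For comparison, a toy PyTorch model in which the first convolutional layer plays this role; after training on labeled trajectories (training not shown), its kernels are learned temporal filters. The layer sizes and the 40-phoneme output are illustrative assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

# A toy band-trajectory classifier: the first Conv1d layer learns FIR filters
# over a ~1 s log-energy trajectory, analogous to the designed discriminants.
model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=8, kernel_size=51),  # ~0.5 s kernels
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 40),              # posteriors over a hypothetical 40-phoneme set
)

x = torch.randn(32, 1, 101)        # batch of 101-frame trajectories
logits = model(x)                  # shape (32, 40)
# After training, model[0].weight[:, 0, :] holds the learned impulse responses.
print(model[0].weight.shape)       # torch.Size([8, 1, 51])
```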

In ASR, this coarticulation is conventionally dealt with by expanding the number of classes of speech sounds into thousands (thus negatively affecting ASR system complexity), but it can instead be seen as a helpful way of introducing temporal redundancies into the message encoding. When a powerful classifier is provided with sufficiently long segments of the temporal trajectories of spectral energies, the classifier can focus on the information about the identity of the desired phoneme in the part of the trajectory that carries the phoneme's coarticulation pattern, thus alleviating the unwanted information about coarticulation from the neighboring phonemes. This speculation motivated the TempoRAl Pattern (TRAP) feature extraction technique [Hermansky and Sharma 1998], which applies individual ANN classifiers to the temporal trajectories of spectral speech energies and subsequently fuses the estimates from these individual classifiers to yield the probabilities of phonemes for the next stages of the ASR process.

An efficient technique for estimating the temporal profiles of spectral energies in individual frequency bands is Frequency Domain Linear Prediction (FDLP) [Athineos and Ellis 2003], which is suitable for TRAP-based feature extraction [Athineos et al 2004] and yields the additional advantage of reduced fragility in the presence of linear distortions and reverberation [Ganapathy et al 2014]. The use of longer temporal spans of the speech signal for deriving speech features is increasingly becoming a focal interest of the research community (see, e.g., [Hochreiter and Schmidhuber 1997, Vinyals et al 2012, Peddinti et al 2015, Peddinti et al 2017]).
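A rough sketch of the FDLP idea, assuming SciPy (the test signal and model order are invented; this is a minimal illustration, not the cited implementation): linear prediction is applied to the cosine transform of the signal rather than to the signal itself, so the resulting all-pole model traces the temporal envelope.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz
from scipy.signal import freqz

def fdlp_envelope(x, order=20, n_points=512):
    """All-pole estimate of the temporal envelope of x: the autocorrelation
    method of linear prediction applied to the DCT of the signal, exploiting
    the time-frequency duality behind FDLP."""
    c = dct(x, type=2, norm="ortho")
    r = np.correlate(c, c, mode="full")[len(c) - 1:len(c) + order]
    a = solve_toeplitz(r[:-1], -r[1:])          # AR coefficients a_1..a_p
    w, h = freqz(1.0, np.concatenate(([1.0], a)), worN=n_points)
    return np.abs(h) ** 2                       # proportional to squared envelope

# Example: a burst-like signal; the envelope estimate peaks where the energy is.
t = np.linspace(0, 1, 4000)
x = np.sin(2 * np.pi * 300 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
env = fdlp_envelope(x)
print(env.argmax() / len(env))   # ~0.3, matching the burst location
```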

2.3. Using spectral redundancies in ASR

Motivated by the spectral redundancies discussed above, we and others have for a number of years been pursuing multiband recognition of speech as a highly promising route towards robust speech processing (see [Hermansky 2013] for an overview). When it was introduced two decades ago [Bourlard et al 1996, Hermansky et al 1996], parallel multiband processing captured the attention of several research groups, and efforts to exploit its possibilities continue to this day [Bourlard and Dupont 1996, Tibrewala and Hermansky 1997, Mirghafori and Morgan 1998, Mirghafori 1998, Okawa et al 1998, Sharma 1999, Morris et al 2001, Misra et al 2004, Ketabdar et al 2007, Valente and Hermansky 2007, Burget et al 2008, Ketabdar and Bourlard 2008, Valente and Hermansky 2008, Zhao and Morgan 2008, Mesgarani et al 2010, Badiezadegan and Rose 2011, Mesgarani et al 2011, Variani et al 2013, Hermansky et al 2015, Mallidi et al 2015, Ogawa et al 2015, Mallidi and Hermansky 2016]. These engineering works indicate the potential of the multiband technique in ASR, but they also illustrate the amount of work required to unlock its full benefits.

3. APPLICATIONS IN MACHINE RECOGNITION OF SPEECH

Pursuing knowledge of the way the linguistic message is coded and decoded in human speech communication is of intellectual interest, but it also has practical implications for ASR. One direction to be pursued is multiband speech recognition [Hermansky 2013]. Its principle is depicted in Fig. 2.

Figure 2: Multiband deep neural network for estimating the posterior probabilities of speech sounds, which exploits the spectral and temporal redundancies in the coding of speech messages.

To build a multiband system, one must 1) find ways of forming the appropriate processing streams, and 2) find ways of optimally fusing the information from these streams. Each stream should provide cues for a partial recognition of the message, but typical noises should not affect all streams simultaneously. We have the most experience with streams formed by band-pass filtering of the auditory signal (e.g., [Hermansky et al 1996, Variani et al 2013, Hermansky et al 2015, Mallidi and Hermansky 2016]). Obvious extensions of these elementary streams are streams that employ longer segments of the signal to extract different information from the spectral dynamics, as explored in the works of Tibrewala and Hermansky [1997], Hermansky and Sharma [1998], Hermansky [2011], and Valente and Hermansky [2011], and streams formed by emulated cortical receptive fields [Hermansky and Fousek 2005, Mesgarani et al 2010]. Recent nonlinear techniques [Golik et al 2015], which sequentially optimize the input spectral analysis and the subsequent time-frequency processing observed in the auditory cortex, directly support the multistream techniques and are of prime interest in this project.

An important part of this system is the performance monitoring module, which should fuse information only from the reliable streams. Performance monitoring allows for the ongoing improvement of systems on new, previously unseen data; it differentiates our multiband approach from techniques such as the mixture-of-experts approach [Jacobs et al 1991]. When the multiband system encounters new data, the fusion module should fuse the information from the most reliable streams, leaving out the unreliable ones. It therefore needs to estimate the accuracy of the result without knowing the correct result. Machines are typically weak in this ability, and evaluations of their performance require labeled test data. Current performance evaluation techniques evaluate various statistical properties of the ANN classifier output on the new data. To varying degrees, they use the knowledge that the classifier works with a message-carrying signal, which has a certain statistical structure: phonemes come in certain rhythms and with certain similarities among them. Existing performance monitoring approaches for ANN-based classifiers include comparing the highest-probability estimate to the next several lower ones [Tibrewala and Hermansky 1997], a related technique based on the entropy of the classifier output [Misra et al 2004], the evaluation of the autocorrelation matrix of transformed probability estimates [Mesgarani et al 2011], the M-measure and delta-M measures, which evaluate averaged divergences between probability estimates spaced across different time spans [Hermansky et al 2013, Variani et al 2013, Mallidi et al 2015], and the DNN-based autoencoder technique, which models the ANN classifier output [Ogawa et al 2013, Mallidi et al 2015]. One effective fusion technique is a nonlinear ANN classifier that takes concatenated vectors of the posterior estimates from the individual streams as its input and delivers the final posterior estimates at its output [Hermansky et al 1996].
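As a concrete illustration of entropy-based performance monitoring in the spirit of the cited work, the sketch below scores each stream by the average entropy of its posterior estimates and uses the scores as fusion weights. A simple weighted average stands in here for the ANN merger described above, and all stream statistics are simulated.

```python
import numpy as np

def stream_confidence(posteriors, eps=1e-12):
    """Average per-frame entropy of a stream's posterior estimates, mapped to
    a confidence score (low entropy -> confident stream). Follows the entropy
    idea of Misra et al (2004) in spirit only."""
    p = np.clip(posteriors, eps, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1).mean()
    return np.exp(-entropy)

rng = np.random.default_rng(3)
n_frames, n_phones, n_streams = 50, 40, 5

# Hypothetical per-stream posteriors: streams 0-3 are peaky (confident),
# stream 4 is near-uniform, as if its band were masked by noise.
streams = []
for s in range(n_streams):
    sharpness = 8.0 if s < 4 else 0.5
    logits = sharpness * rng.standard_normal((n_frames, n_phones))
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    streams.append(p)

conf = np.array([stream_confidence(p) for p in streams])
weights = conf / conf.sum()
fused = sum(w * p for w, p in zip(weights, streams))   # weighted posterior fusion
print(np.round(weights, 2))   # the noisy stream gets the smallest weight
```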
The number of reliable streams can change, from using all streams for high-quality signals down to using only a few streams for heavily corrupted signals, and the fusion module needs to accommodate such changes. Some earlier works addressed this issue by training as many fusion classifiers as there are possible non-empty stream combinations [Tibrewala and Hermansky 1997, Variani et al 2013], then running all stream combinations in parallel and selecting the best one. While theoretically flawless, this technique is computationally demanding. Recent efforts (e.g., [Mallidi and Hermansky 2016]) experiment with an alternative and novel technique: training an ANN-based fusion module while randomly assigning zero values to some of its input streams. Such stream-dropping during the training of the fusion module allows it to better accommodate corrupted or empty inputs from the unreliable streams.
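A minimal sketch of such stream-dropping (our illustration of the idea, not the cited implementation): whole streams are zeroed at random before the concatenated vector is passed to the fusion network during training.

```python
import numpy as np

rng = np.random.default_rng(4)

def drop_streams(stacked, p_drop=0.3):
    """Randomly zero whole streams in a (streams, dims) input during training,
    so the fusion network learns to cope with missing or unreliable streams."""
    keep = rng.random(stacked.shape[0]) >= p_drop
    if not keep.any():                      # always keep at least one stream
        keep[rng.integers(stacked.shape[0])] = True
    return stacked * keep[:, None]

# Example: 5 streams, each delivering a 40-dimensional posterior vector.
stacked = rng.random((5, 40))
x = drop_streams(stacked).ravel()           # concatenated fusion-network input
print((x.reshape(5, 40).sum(axis=1) == 0).sum(), "streams dropped this step")
```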

5. CONCLUSIONS

In the conventional view of human speech communication, the vocal tract spectral envelope carries the information about the underlying speech sound and is imprinted on the short-term spectrum of the speech signal. The spectral envelope is easily corrupted by relatively minor signal distortions, and the vocal tract inertia interleaves neighboring speech sounds, making the decoding of the information more difficult. This paper offers an alternative view, in which the vocal tract redundantly spreads the information about speech sounds over larger temporal spans of the signal, so that it is less vulnerable to time-localized signal distortions, and redundantly modulates spectral carriers in different frequency bands, so that it is less vulnerable to frequency-localized signal distortions. The temporal redundancies are used in extracting the information within the frequency bands, and the spectral redundancies are used in the selective choice of the least corrupted bands for the subsequent decoding of the information about the speech sounds. This alternative view of human speech communication could help improve the reliability of future engineering systems for machine recognition of realistic noisy speech.

Acknowledgements

The work has been supported by the DARPA RATS program, by the IARPA BABEL program, by the JHU Human Language Technology Center of Excellence, and by a Google faculty award to Hynek Hermansky. The paper was prepared and its content presented by the author during his residence in the program on Physics of Hearing: From Neurobiology to Information Theory and Back at the KAVLI Institute for Theoretical Physics at the University of California, Santa Barbara.

REFERENCES

Athineos, M., et al. "LP-TRAP: Linear predictive temporal patterns." Proc. ICSLP.
Athineos, M., and D. P. W. Ellis. "Frequency-domain linear prediction for temporal features." Proc. ASRU.
Badiezadegan, S., and R. Rose. "A performance monitoring approach to fusing enhanced spectrogram channels in robust speech recognition." Proc. INTERSPEECH.
Biem, A., and S. Katagiri. "Filter bank design based on discriminative feature extraction." Proc. ICASSP.
Boothroyd, A., and S. Nittrouer. "Mathematical treatment of context effects in phoneme and word recognition." JASA 84, no. 1.
Bourlard, H., and S. Dupont. "A new ASR approach based on independent processing and recombination of partial frequency bands." Proc. ICSLP.
Bourlard, H. "Non-stationary multi-channel (multi-stream) processing towards robust and adaptive ASR." Proc. ESCA Workshop on Robust Methods in Speech Recognition.
Bourlard, H., et al. "Towards subband-based speech recognition." Proc. EUSIPCO.
Bourlard, H., and C. J. Wellekens. "Links between Markov models and multilayer perceptrons." IEEE Trans. Pattern Analysis and Machine Intelligence.
Burget, L., et al. "Combination of strongly and weakly constrained recognizers for reliable detection of out-of-vocabulary words (OOVs)." Proc. ICASSP.
Bush, B. O., and A. Kain. "Estimating phoneme formant targets and coarticulation parameters of conversational and clear speech." Proc. ICASSP.
Chi, T., et al. "Multiresolution spectrotemporal analysis of complex sounds." JASA.
Chistovich, L. A., and V. V. Lublinskaya. "The center of gravity effect in vowel spectra and critical distance between the formants: psychoacoustical study of the perception of vowel-like stimuli." Hearing Research 1.3 (1979).
Chow, K. "Numerical estimates of the auditory central nervous system of the rhesus monkey." J. Compar. Neurol., vol. 95, no. 1.
Cooper, F. S., Delattre, P. C., Liberman, A. M., Borst, J. M., and Gerstman, L. J. "Some experiments on the perception of synthetic speech sounds." J. Acoust. Soc. Amer. 24(6), 1952.
Cowie, F. "Innateness and Language." The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.).
Cowan, N. "On short and long auditory stores." Psychol. Bull., vol. 96, no. 2.
Fant, G., and A. Risberg. "Auditory matching of vowels with two formant synthetic sounds." STL-QPSR 4:7-11, 1962.
Fanty, M., et al. "English alphabet recognition with telephone speech." Advances in Neural Information Processing Systems 4, San Mateo, CA, USA: Morgan Kaufmann, 1992.

Flanagan, J. L. Speech Analysis, Synthesis and Perception. Springer, Berlin/Heidelberg.
Fletcher, H. Speech and Hearing in Communication.
Fousek, P., and H. Hermansky. "Towards ASR based on hierarchical posterior-based keyword recognition." Proc. ICASSP.
French, N. R., and J. C. Steinberg. "Factors governing the intelligibility of speech sounds." JASA 19.1 (1947).
Ganapathy, S., S. H. Mallidi, and H. Hermansky. "Robust feature extraction using modulation filtering of autoregressive models." IEEE/ACM Transactions ASSP 22.8.
Golik, P., Z. Tüske, R. Schlüter, and H. Ney. "Convolutional neural networks for acoustic modeling of raw time signal in LVCSR." Proc. INTERSPEECH 2015.
Green, J., et al. "SMASH: a tool for articulatory data processing and analysis." Proc. INTERSPEECH.
Hermansky, H. "Perceptual linear predictive (PLP) analysis of speech." JASA.
Hermansky, H. "Multiband recognition of speech: Dealing with unknown unknowns" (invited paper). Proceedings of the IEEE 101(5), May.
Hermansky, H. "Speech recognition from spectral dynamics" (invited paper). Sadhana 36, no. 5.
Hermansky, H., and P. Fousek. "Multi-resolution RASTA filtering for TANDEM-based ASR." Proc. ICSLP.
Hermansky, H., and S. Sharma. "TRAPS, classifiers of temporal patterns." Proc. ICSLP, 1998.
Hermansky, H., S. Tibrewala, and M. Pavel. "Towards ASR on partially corrupted speech." Proc. Int. Conf. Spoken Lang. Process.
Hermansky, H., and N. Malayath. "Spectral basis functions from discriminant analysis." Proc. ICSLP.
Hermansky, H., and D. J. Broad. "The effective second formant F2' and the vocal tract front-cavity." Proc. ICASSP.
Hermansky, H., et al. "Towards machines that know when they do not know." Proc. ICASSP.
Hermansky, H., et al. "Mean temporal distance: Predicting ASR error from temporal properties of speech signal." Proc. ICASSP.
Hermansky, H., et al. "Perceptual properties of current speech recognition technology" (invited paper). Proc. IEEE 101(9).
Hochreiter, S., and J. Schmidhuber. "Long short-term memory." Neural Computation 9(8), 1997.
Hromadka, T., et al. "Sparse representation of sounds in the unanesthetized auditory cortex." PLoS Biol., vol. 6, no. 1.
Jacobs, R. A., M. I. Jordan, S. J. Nowlan, and G. E. Hinton. "Adaptive mixtures of local experts." Neural Computation 3(1), 1991.
Mahajan, N., et al. "Principal components of auditory spectro-temporal receptive fields." Proc. INTERSPEECH 2014.
Mallidi, S. H., et al. "Autoencoder based multi-stream combination for noise robust speech recognition." Proc. INTERSPEECH.
Mallidi, S. H., and H. Hermansky. "A framework for practical multiband ASR." Proc. INTERSPEECH 2016.
Mallidi, S. H., et al. "Uncertainty estimation of ANN classifiers." Proc. ASRU.
Mallidi, S. H. JHU PhD thesis (in preparation), 2016.
Mesgarani, N., et al. "Towards optimizing stream fusion." JASA 139, no. 1.
Mesgarani, N., et al. "Phoneme representation and classification in primary auditory cortex." JASA 123, 2008.
Mesgarani, N., et al. "A multiband multiresolution framework for phoneme recognition." Proc. INTERSPEECH, 2010.
Meyer, B. T., et al. "Performance monitoring for automatic speech recognition in noisy multi-channel environments." Proc. ASRU.
Miller, G. A. "The masking of speech." Psychological Bulletin 44.2 (1947).
Miller, G. A. Language and Communication. 1951.

Mirghafori, N., and N. Morgan. "Combining connectionist multi-band and full-band probability streams for speech recognition of natural numbers." Proc. ICSLP.
Mirghafori, N. A Multiband Approach to Automatic Speech Recognition. PhD thesis, UC Berkeley, 1998.
Misra, H., et al. "Spectral entropy based feature for robust ASR." Proc. ICASSP.
Morgan, N., et al. "Continuous speech recognition using PLP analysis with multilayer perceptrons." Proc. ICASSP-91.
Morris, A., et al. "Multi-stream adaptive evidence combination for noise robust ASR." Speech Communication, vol. 34, no. 1-2.
Ogawa, T., et al. "Autoencoder based multi-stream combination for noise robust speech recognition." Proc. ICASSP.
Ogawa, T., et al. "Stream selection and integration in multiband ASR using GMM-based performance monitoring." Proc. INTERSPEECH.
Okawa, S., et al. "Multi-band speech recognition in noisy environments." Proc. ICASSP.
Jain, P., and H. Hermansky. "Beyond a single critical-band in TRAP based ASR." Proc. EUROSPEECH.
Peddinti, V., et al. "A time delay neural network architecture for efficient modeling of long temporal contexts." Proc. INTERSPEECH.
Peddinti, V., et al. "Low latency modeling of temporal contexts." IEEE Signal Processing Letters, 2017.
Pešán, J., et al. "ANN derived filters for processing of modulation spectrum of speech." Proc. INTERSPEECH.
Povey, D., et al. "The Kaldi speech recognition toolkit." Proc. ASRU 2011 Workshop.
Pickles, J. An Introduction to the Physiology of Hearing. Bingley, U.K.: Emerald.
Shannon, C. E., and W. Weaver. The Mathematical Theory of Communication. Urbana.
Shannon, R. V., et al. "Speech recognition with primarily temporal cues." Science (1995): 303.
Sharma, S. Multi-stream Approach to Robust Speech Recognition. Ph.D. dissertation, Dept. Electr. Comput. Eng., Oregon Grad. Inst. Sci. Technol., Portland, OR, 1999.
Tibrewala, S., and H. Hermansky. "Sub-band based recognition of noisy speech." Proc. ICASSP.
Tibrewala, S., and H. Hermansky. "Multi-stream approach in acoustic modeling." Proc. DARPA Hub 5 Workshop.
Valente, F., and H. Hermansky. "Data-driven extraction of spectral-dynamics based posterior features." In Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. New York: Springer-Verlag.
Valente, F., and H. Hermansky. "Discriminant linear processing of time-frequency plane." Proc. INTERSPEECH.
Valente, F., and H. Hermansky. "Combination of acoustic classifiers based on Dempster-Shafer theory of evidence." Proc. ICASSP 2007.
Valente, F., and H. Hermansky. "On the combination of auditory and modulation frequency channels for ASR applications." LIDIAP report.
Van Vuuren, S., and H. Hermansky. "Data-driven design of RASTA-like filters." Proc. EUROSPEECH.
Variani, E., et al. "Multi-stream recognition of noisy speech with performance monitoring." Proc. INTERSPEECH.
Vinyals, O., et al. "Revisiting recurrent neural networks for robust ASR." Proc. ICASSP 2012.
Warren, R. "Perceptual restoration of missing speech sounds." Science, vol. 167, 1970.
Yang, H., et al. "Relevance of time-frequency features for phonetic and speaker-channel classification." Speech Communication 31(1).
Zhao, S., and N. Morgan. "Multi-stream spectro-temporal features for robust speech recognition." Proc. INTERSPEECH 2008.


More information

Analysis of Speech Recognition Models for Real Time Captioning and Post Lecture Transcription

Analysis of Speech Recognition Models for Real Time Captioning and Post Lecture Transcription Analysis of Speech Recognition Models for Real Time Captioning and Post Lecture Transcription Wilny Wilson.P M.Tech Computer Science Student Thejus Engineering College Thrissur, India. Sindhu.S Computer

More information

Notes on The Sciences of the Artificial Adapted from a shorter document written for course (Deciding What to Design) 1

Notes on The Sciences of the Artificial Adapted from a shorter document written for course (Deciding What to Design) 1 Notes on The Sciences of the Artificial Adapted from a shorter document written for course 17-652 (Deciding What to Design) 1 Ali Almossawi December 29, 2005 1 Introduction The Sciences of the Artificial

More information

Investigation on Mandarin Broadcast News Speech Recognition

Investigation on Mandarin Broadcast News Speech Recognition Investigation on Mandarin Broadcast News Speech Recognition Mei-Yuh Hwang 1, Xin Lei 1, Wen Wang 2, Takahiro Shinozaki 1 1 Univ. of Washington, Dept. of Electrical Engineering, Seattle, WA 98195 USA 2

More information

TRANSFER LEARNING OF WEAKLY LABELLED AUDIO. Aleksandr Diment, Tuomas Virtanen

TRANSFER LEARNING OF WEAKLY LABELLED AUDIO. Aleksandr Diment, Tuomas Virtanen TRANSFER LEARNING OF WEAKLY LABELLED AUDIO Aleksandr Diment, Tuomas Virtanen Tampere University of Technology Laboratory of Signal Processing Korkeakoulunkatu 1, 33720, Tampere, Finland firstname.lastname@tut.fi

More information

Dyslexia/dyslexic, 3, 9, 24, 97, 187, 189, 206, 217, , , 367, , , 397,

Dyslexia/dyslexic, 3, 9, 24, 97, 187, 189, 206, 217, , , 367, , , 397, Adoption studies, 274 275 Alliteration skill, 113, 115, 117 118, 122 123, 128, 136, 138 Alphabetic writing system, 5, 40, 127, 136, 410, 415 Alphabets (types of ) artificial transparent alphabet, 5 German

More information

Rule Learning with Negation: Issues Regarding Effectiveness

Rule Learning with Negation: Issues Regarding Effectiveness Rule Learning with Negation: Issues Regarding Effectiveness Stephanie Chua, Frans Coenen, and Grant Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX

More information

Reducing Features to Improve Bug Prediction

Reducing Features to Improve Bug Prediction Reducing Features to Improve Bug Prediction Shivkumar Shivaji, E. James Whitehead, Jr., Ram Akella University of California Santa Cruz {shiv,ejw,ram}@soe.ucsc.edu Sunghun Kim Hong Kong University of Science

More information

Speaker Identification by Comparison of Smart Methods. Abstract

Speaker Identification by Comparison of Smart Methods. Abstract Journal of mathematics and computer science 10 (2014), 61-71 Speaker Identification by Comparison of Smart Methods Ali Mahdavi Meimand Amin Asadi Majid Mohamadi Department of Electrical Department of Computer

More information

INPE São José dos Campos

INPE São José dos Campos INPE-5479 PRE/1778 MONLINEAR ASPECTS OF DATA INTEGRATION FOR LAND COVER CLASSIFICATION IN A NEDRAL NETWORK ENVIRONNENT Maria Suelena S. Barros Valter Rodrigues INPE São José dos Campos 1993 SECRETARIA

More information

Speaker recognition using universal background model on YOHO database

Speaker recognition using universal background model on YOHO database Aalborg University Master Thesis project Speaker recognition using universal background model on YOHO database Author: Alexandre Majetniak Supervisor: Zheng-Hua Tan May 31, 2011 The Faculties of Engineering,

More information

arxiv: v1 [cs.cl] 27 Apr 2016

arxiv: v1 [cs.cl] 27 Apr 2016 The IBM 2016 English Conversational Telephone Speech Recognition System George Saon, Tom Sercu, Steven Rennie and Hong-Kwang J. Kuo IBM T. J. Watson Research Center, Yorktown Heights, NY, 10598 gsaon@us.ibm.com

More information

Using EEG to Improve Massive Open Online Courses Feedback Interaction

Using EEG to Improve Massive Open Online Courses Feedback Interaction Using EEG to Improve Massive Open Online Courses Feedback Interaction Haohan Wang, Yiwei Li, Xiaobo Hu, Yucong Yang, Zhu Meng, Kai-min Chang Language Technologies Institute School of Computer Science Carnegie

More information

LOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS

LOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS LOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS Pranay Dighe Afsaneh Asaei Hervé Bourlard Idiap Research Institute, Martigny, Switzerland École Polytechnique Fédérale de Lausanne (EPFL),

More information

DIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE

DIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE 2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) DIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE Shaofei Xue 1

More information

CLASSIFICATION OF PROGRAM Critical Elements Analysis 1. High Priority Items Phonemic Awareness Instruction

CLASSIFICATION OF PROGRAM Critical Elements Analysis 1. High Priority Items Phonemic Awareness Instruction CLASSIFICATION OF PROGRAM Critical Elements Analysis 1 Program Name: Macmillan/McGraw Hill Reading 2003 Date of Publication: 2003 Publisher: Macmillan/McGraw Hill Reviewer Code: 1. X The program meets

More information

Using dialogue context to improve parsing performance in dialogue systems

Using dialogue context to improve parsing performance in dialogue systems Using dialogue context to improve parsing performance in dialogue systems Ivan Meza-Ruiz and Oliver Lemon School of Informatics, Edinburgh University 2 Buccleuch Place, Edinburgh I.V.Meza-Ruiz@sms.ed.ac.uk,

More information

A student diagnosing and evaluation system for laboratory-based academic exercises

A student diagnosing and evaluation system for laboratory-based academic exercises A student diagnosing and evaluation system for laboratory-based academic exercises Maria Samarakou, Emmanouil Fylladitakis and Pantelis Prentakis Technological Educational Institute (T.E.I.) of Athens

More information

DEVELOPMENT OF LINGUAL MOTOR CONTROL IN CHILDREN AND ADOLESCENTS

DEVELOPMENT OF LINGUAL MOTOR CONTROL IN CHILDREN AND ADOLESCENTS DEVELOPMENT OF LINGUAL MOTOR CONTROL IN CHILDREN AND ADOLESCENTS Natalia Zharkova 1, William J. Hardcastle 1, Fiona E. Gibbon 2 & Robin J. Lickley 1 1 CASL Research Centre, Queen Margaret University, Edinburgh

More information

Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation

Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Alistair Conkie AT&T abs - Research 180 Park Avenue, Florham Park,

More information

The NICT/ATR speech synthesis system for the Blizzard Challenge 2008

The NICT/ATR speech synthesis system for the Blizzard Challenge 2008 The NICT/ATR speech synthesis system for the Blizzard Challenge 2008 Ranniery Maia 1,2, Jinfu Ni 1,2, Shinsuke Sakai 1,2, Tomoki Toda 1,3, Keiichi Tokuda 1,4 Tohru Shimizu 1,2, Satoshi Nakamura 1,2 1 National

More information

Evolutive Neural Net Fuzzy Filtering: Basic Description

Evolutive Neural Net Fuzzy Filtering: Basic Description Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:

More information

Abstractions and the Brain

Abstractions and the Brain Abstractions and the Brain Brian D. Josephson Department of Physics, University of Cambridge Cavendish Lab. Madingley Road Cambridge, UK. CB3 OHE bdj10@cam.ac.uk http://www.tcm.phy.cam.ac.uk/~bdj10 ABSTRACT

More information