REDUNDANT CODING AND DECODING OF MESSAGES IN HUMAN SPEECH COMMUNICATION

Hynek Hermansky
The Johns Hopkins University, Baltimore, MD, USA
KAVLI Institute for Theoretical Physics, Santa Barbara, CA

ABSTRACT

We hypothesize that the linguistic message in speech, as represented by a string of speech sounds, is coded redundantly in both the time and the frequency domains. Such redundant coding of the message in the signal evolved so that the relevant spectral and temporal properties of human hearing can be used in extracting the message from the noisy speech signal. This hypothesis is supported by results of recognition experiments using a particular architecture of an automatic recognition system in which longer segments of the temporal trajectories of spectral energies in individual frequency bands of speech are used to derive estimates of posterior probabilities of speech sounds. Estimates from the reliable frequency bands are then adaptively fused to yield the final probability vectors that best satisfy the adopted performance monitoring criteria.

Keywords: human speech communication, redundant coding of the message, robustness in automatic recognition of speech, parallel multistream processing

Corresponding author: Hynek Hermansky, Center for Language and Speech Processing, The Johns Hopkins University, Hackerman Hall 324F, 3400 N. Charles Street, Baltimore, MD 20219, hynek@jhu.edu

1. MOTIVATIONS

1.1. Coding and decoding linguistic messages in speech

Speech evolved for human communication in realistic noisy environments. Speech signals carry linguistic messages, which contain the information that is being communicated. They also always carry information about the speaker's identity, health, emotions, moods and other speaker idiosyncrasies, as well as acoustic noise. Comparing the amount of information transmitted in the speech signal with the amount of linguistic information in the message, as represented by an ordered string of phonemes [Miller 1951], suggests that a massive information rate expansion of several orders of magnitude occurs during speech production [Flanagan 1965]. Subsequently, a similar information rate reduction happens during message decoding.

We hypothesize that the information rate expansion is largely due to an inherent redundant coding of the message in the signal, which evolved so that the relevant spectral and temporal properties of human hearing can be used in extracting the message from the noisy speech signal.

Figure 1. Schematic illustration of the origins of temporal and spectral redundancies in coding the message in speech, and of a possible use of these redundancies in decoding the speech message.

In creating a linguistic message, the speaker's brain issues motor commands that specify the string of speech sounds constituting the message. These commands control the critical elements of the speech production system. Movements of the critical elements elicit synchronized movements of the whole vocal tract and introduce redundant information about the message into different parts of the speech spectrum. In addition, the vocal organs move with their own inertias, which introduces redundancy of the message coding in time. As a result, the series of sound-specific speech commands is coded redundantly both in time and in frequency.

Noise corrupts the speech signal before it reaches a listener. Human hearing extracts information about the speech sound sequences, which represent the message, using the spectro-temporal properties of the auditory cortex. In this process, the listener alleviates the noise and massively reduces the information rate. Such message decoding is done, effortlessly and with tolerable delays, by all listeners with normal hearing. How much of this ability is innate and how much is acquired in life is still being discussed [Cowie 2010]; however, it is certain that hearing is among the innate abilities used in message decoding by all normal listeners.

Thus, over the millennia of human evolution, the forces of nature fostered strategies for encoding and decoding messages in speech that are consistent with one of the most important principles of the mathematical theory of communication in noise [Shannon 1949]: messages are encoded redundantly in the speech signal during speech production, in such a way that these redundancies can be exploited by human hearing during the decoding of the messages.

1.2. Redundancies in time

Speech messages are coded into sequences of phonemes. In producing such a sequence, the vocal tract cannot change its shape instantly. The vocal tract inertia redundantly spreads information about the underlying sounds over longer time intervals, resulting in phoneme coarticulation. While the spacing of phonemes in conversational speech is, on average, roughly 70 ms, the information spread is considerably longer, at least several hundreds of ms [Yang et al 2000, Bush and Kain 2013]. Due to coarticulation, information about phonemes, carried in the temporal dynamics of the speech spectrum, spreads over several phonemes, and the coarticulation patterns of neighboring phonemes overlap in time.

One of the well-accepted time constants of human hearing (see, e.g., [Cowan 1984]), which can be interpreted as a temporal buffer for processing the acoustic signal, spans roughly a syllable-length temporal interval in speech. We hypothesize that the listener's perceptual system uses this temporal buffer for gathering and processing information about the identity and position of each phoneme from the coarticulated speech. When sufficiently long segments of speech, containing most of the dynamics caused by the underlying speech sound of interest mixed with unwanted information from coarticulated neighboring sounds, are available in the hearing buffer, human hearing can focus on the desired information and ignore short temporal distortions [Miller 1947, Warren 1970]. This forms in the listener's mind the phoneme sequence that carries the message. The concept implies that the individual phonemes are perceived independently, despite the obvious interactions among neighboring phonemes in the speech signal. It is strongly supported by results of perceptual experiments in which the probability of correct human recognition of consonant-vowel combinations in syllables is given by the product of the probabilities of correct recognition of the individual phonemes [Fletcher 1953, Boothroyd and Nittrouer 1988], thus implying their conditional independence (for example, if each phoneme of a consonant-vowel-consonant syllable is recognized correctly with probability 0.9, the syllable is recognized correctly with probability of about 0.9 × 0.9 × 0.9 ≈ 0.73).

1.3. Redundancies in frequency

A speaker forms a message in speech by controlling the critical message-carrying parts of the vocal tract (tongue constriction, lips, and velum). Movements of these critical parts cause the entire speaker-specific vocal tract to change its shape in synchrony with them. Changes to the vocal tract induce speaker- and message-specific redundant changes throughout the speech spectrum. Therefore, the information is carried redundantly in different parts of the speech spectrum. This spread of information into different frequency bands is advantageous for human decoding of the message from noisy speech. During decoding, the speaker-specific information is alleviated by relatively broad spectral integration of the speech spectrum over several critical bands, yielding articulatory bands. In low-noise acoustic environments, each articulatory band carries roughly an equal amount of information about the message, and the human cognitive system extracts message-specific information from all articulatory bands. When some articulatory bands are corrupted by noise, these bands are suppressed in decoding the message [French and Steinberg 1947].
Spectral smoothing within several critical bands is supported by the articulatory bands being broader than critical bands [French and Steinberg 1947], by the F2′ (effective second formant) concept and by the 3.5 Bark spectral integration theory [Fant and Risberg 1962, Chistovich 1978], by the optimality of the low-order PLP model in alleviating speaker-dependent information in speech [Hermansky and Broad 1988, Hermansky 1990], by speech resynthesis from low-resolution spectra [Shannon et al 1995], by data-derived spectral features [Hermansky and Malayath 1998], and by the observed spectral properties of the mammalian auditory cortex [Chi et al 2005, Mahajan et al 2014]. The existence of parallel articulatory frequency bands is supported by results of a series of psychophysical experiments with high-pass and low-pass filtered speech at varying signal-to-noise ratios (SNRs) [Fletcher 1951], and by related work on predicting the intelligibility of corrupted speech [French and Steinberg 1947].
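To make the idea of spectral integration into articulatory bands concrete, the sketch below (an illustration under assumed values, not a procedure from the cited works) averages the log energies of adjacent critical-band channels into a small number of broader bands; the number of broad bands and the contiguous grouping are arbitrary choices made only for the example.

```python
# Illustrative sketch only: integrate fine critical-band energies into a few
# broad "articulatory" bands by averaging adjacent channels. The number of
# broad bands (4) and the contiguous grouping are assumptions for this
# example, not values taken from the cited studies.
import numpy as np

def articulatory_bands(critical_band_energies, n_broad=4):
    """critical_band_energies: (n_frames, n_critical_bands) log energies
    from an auditory-like filterbank. Returns (n_frames, n_broad) log
    energies of the broader bands."""
    n_frames, n_bands = critical_band_energies.shape
    groups = np.array_split(np.arange(n_bands), n_broad)
    return np.stack(
        [critical_band_energies[:, g].mean(axis=1) for g in groups], axis=1)

# Example: 100 frames of 19 critical-band log energies -> 4 broad bands.
fine = np.random.randn(100, 19)
print(articulatory_bands(fine).shape)  # (100, 4)
```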

Evidence from the physiology of hearing further supports the articulatory band concept. Human hearing separates the speech signal into a number of parallel spectral streams, which persist all the way to the auditory cortex [Chow 1951]. With ascending processing levels, firing rates gradually decrease, but the number of processing neurons gradually increases [Pickles 1988, Hromadka et al 2008, Hermansky 2013]. Thus, the representation of speech grows sparser but richer in detail. There is then a number of possible ways of describing this representation, which naturally forms multiple processing channels. This yields a significant advantage: when the input of a single-stream system is corrupted by noise, its output is corrupted; but assuming that the parallel processing streams of a multiband system are formed in such a way that typical noises do not affect all streams simultaneously, the system can rely on the uncorrupted streams for information extraction. The whole hypothetical concept of the redundant coding of messages in human speech communication is schematically depicted in Fig. 1.

2. APPLICATIONS OF THE PROPOSED HYPOTHESIS IN ASR

2.1. Estimating posterior probabilities of speech sounds

The first stage of most current ASR systems estimates posterior probabilities of sub-word acoustic units, which are typically related to the phonemes of speech. This probability-estimating module is today most often a deep neural network, i.e., an ANN with a number of hidden layers, trained on labeled speech data. While for smaller databases the labeling can be done manually, labels for large training databases are typically derived by forced alignment of transcribed speech with sequences of sub-word models representing the messages in the speech. The properties of the trained ANN depend primarily on the data on which it is trained, on the character of the sub-word acoustic units being targeted, on the training strategies, and on the ANN architecture. Posterior probabilities of sub-word units derived from a signal carrying an unknown message can be either appropriately transformed and used as features for a Gaussian mixture model based recognizer [Hermansky et al 2000], or converted to scaled likelihoods and used directly in the search for the best sequence of models representing the message being recognized [Bourlard and Wellekens 1990]. The probability-estimating front-end module forms an information bottleneck of the ASR system. Relevant information that is lost in the front end is lost forever. On the other hand, irrelevant information that is not alleviated needs to be dealt with, often at considerable computational and training-data expense, in the subsequent modules of the ASR system. Thus, an accurate probability-estimating module is critical to the success of the whole speech recognition process.

2.2. Use of temporal redundancies in ASR

In earlier ASR attempts, acoustic processing was done using short chunks of the speech signal of roughly 10-20 ms. With the introduction of ANNs into ASR acoustic processing, the time spans used for processing have gradually increased [Morgan et al 1991, Fanty et al 1992]. The need for longer temporal intervals in the extraction of phoneme identities and timings is supported by results from optimizing discrimination among speech sounds by designing linear discriminants applied to the temporal trajectories of spectral energies. Such optimization yields filters with impulse responses that span several phonemes [Van Vuuren and Hermansky 1997, Valente and Hermansky 2006].
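As a rough illustration of what such long temporal spans look like as classifier input (a hypothetical sketch with an assumed 10 ms frame shift and 51-frame context, not the discriminant design of the cited works), one can take a roughly half-second trajectory of log energy in a single frequency band, centered on the frame of interest, and normalize it before feeding it to a per-band classifier:

```python
# Sketch: build a long (~0.5 s) temporal-trajectory input for one frequency
# band. The 10 ms frame shift and 51-frame context are assumed values used
# only for illustration.
import numpy as np

def band_trajectory(band_log_energy, center, context=25):
    """band_log_energy: 1-D array of per-frame log energies in one band.
    Returns a (2*context+1)-point trajectory centered on frame `center`,
    mean- and variance-normalized, with edge padding."""
    padded = np.pad(band_log_energy, context, mode='edge')
    traj = padded[center:center + 2 * context + 1]
    return (traj - traj.mean()) / (traj.std() + 1e-8)

# 300 frames (~3 s at a 10 ms frame shift) of one band's log energy.
energy = np.random.randn(300)
x = band_trajectory(energy, center=150)  # 51-point (~0.5 s) trajectory
print(x.shape)  # (51,)
```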
Filters with similarly long impulse responses are also obtained in the first convolutional layer of a convolutional ANN [Pešán et al 2015]. Thus, the speech data suggest that, in order to discriminate among phonemes, evidence for discrimination must also come from outside the phoneme. These results suggest a way of dealing with the coarticulation that has puzzled the speech community for many decades (see, e.g., [Cooper et al 1952]) and that still plagues the field of ASR.

In ASR, this coarticulation is dealt with by expanding the number of classes of speech sounds into thousands (thus negatively affecting ASR system complexity); yet it can instead be seen as a helpful way of introducing temporal redundancies into the message encoding. When a powerful classifier is provided with sufficiently long segments of the temporal trajectories of spectral energies, the classifier can focus on the information about the identity of the desired phoneme in the part of the trajectory that carries that phoneme's coarticulation pattern, thus alleviating the unwanted information about coarticulation from the neighboring phonemes. This speculation motivated the TempoRAl Pattern (TRAP) feature extraction technique [Hermansky and Sharma 1998], which applies individual ANN classifiers to temporal trajectories of spectral speech energies and subsequently fuses the estimates from these individual classifiers to yield probabilities of phonemes for the next stages of the ASR process. An efficient technique for estimating the temporal profiles of spectral energies in individual frequency bands is Frequency Domain Linear Prediction (FDLP) [Athineos and Ellis 2003], which is suitable for TRAP-based feature extraction [Athineos et al 2004] and can yield additional advantages through its reduced fragility in the presence of linear distortions and reverberation [Ganapathy et al 2014]. The use of longer temporal spans of the speech signal to derive speech features is increasingly becoming a focal interest of the research community (see, e.g., [Hochreiter and Schmidhuber 1997, Vinyals et al 2012, Peddinti et al 2015, Peddinti et al 2017]).

2.3. Using spectral redundancies in ASR

Motivated by the existence of the spectral redundancies discussed above, we and others have been pursuing multiband recognition of speech as a highly promising route towards robust speech processing for a number of years (see [Hermansky 2013] for an overview). When it was introduced two decades ago [Bourlard et al 1996, Hermansky et al 1996], parallel multiband processing captured the attention of several research groups, and efforts to exploit its possibilities continue to this day [Bourlard and Dupont 1996, Tibrewala and Hermansky 1997, Mirghafori and Morgan 1998, Mirghafori 1998, Okawa et al 1998, Sharma 1999, Morris et al 2001, Misra et al 2004, Ketabdar et al 2007, Valente and Hermansky 2007, Burget et al 2008, Ketabdar and Bourlard 2008, Valente and Hermansky 2008, Zhao and Morgan 2008, Mesgarani et al 2010, Badiezadegan and Rose 2011, Mesgarani et al 2011, Variani et al 2013, Hermansky et al 2015, Mallidi et al 2015, Ogawa et al 2015, Mallidi and Hermansky 2016]. These engineering works indicate the potential of the multiband technique in ASR, but they also illustrate the amount of work required to unlock its full benefits.

3. APPLICATIONS IN MACHINE RECOGNITION OF SPEECH

Pursuing knowledge of the way the linguistic message is coded and decoded in human speech communication is of intellectual interest, but it also has practical implications for ASR. One direction to be pursued is multiband speech recognition [Hermansky 2013]. Its principle is depicted in Fig. 2.

Figure 2. Multiband deep neural network for estimating posterior probabilities of speech sounds, which exploits spectral and temporal redundancies in the coding of speech messages.

To build a multiband system, one must address 1) ways of forming the appropriate processing streams and 2) ways of optimally fusing the information from these streams. Each stream should provide cues for partial recognition of the message, but the streams should not all be affected simultaneously by noise. We have the most experience with streams formed through band-pass filtering of the signal (e.g., [Hermansky et al 1996, Variani et al 2013, Hermansky et al 2015, Mallidi and Hermansky 2016]). Obvious extensions of these elementary streams are streams that employ longer segments of the signal to extract different information from spectral dynamics, as explored in the work of Tibrewala and Hermansky [1997], Hermansky and Sharma [1998], Hermansky [2011] and Valente and Hermansky [2011], and streams formed by emulated cortical receptive fields [Hermansky and Fousek 2005, Mesgarani et al 2010]. Recent nonlinear techniques [Golik et al 2015], which sequentially optimize the input spectral analysis and the subsequent time-frequency processing observed in the auditory cortex, directly support the multi-stream techniques and are of prime interest in this work.

An important part of such a system is the performance monitoring module, which should ensure that information is fused only from the reliable streams. Performance monitoring allows for ongoing improvement of the system on new, previously unseen data; it differentiates our multiband approach from techniques such as the mixture-of-experts approach [Jacobs et al 1991]. When encountering new data, the fusion module of the multiband system should fuse information only from the most reliable streams, leaving out the unreliable ones. It therefore needs to estimate the accuracy of the result without knowledge of the correct result. Machines are typically weak in this ability, and evaluations of their performance require labeled test data. Current performance evaluation techniques evaluate different statistical properties of the ANN classifier output on the new data. To various degrees, they use the knowledge that the classifier works with a message-carrying signal that has a certain statistical structure: phonemes come in certain rhythms and with certain similarities among them. Existing performance monitoring efforts for ANN-based classifiers include comparisons of the highest-probability estimate with the several next lower ones [Tibrewala and Hermansky 1997] and a related technique based on the entropy of the classifier output [Misra et al 2004], the evaluation of the autocorrelation matrix of transformed probability estimates [Mesgarani et al 2011], the M-measure and delta-M measures, which are based on averaged divergences between probability estimates spaced across different time spans [Hermansky et al 2013, Variani et al 2013, Mallidi et al 2015], and the DNN-based autoencoder technique, which models the ANN classifier output [Ogawa et al 2013, Mallidi et al 2015]. One effective fusion technique is the use of a nonlinear ANN classifier that takes concatenated vectors of posterior estimates from the individual streams as its input and delivers the final posterior estimates at its output [Hermansky et al 1996].
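As a rough numerical sketch of the monitoring idea (an illustration under assumed shapes and choices, not a reimplementation of any of the cited monitors), one can score each stream by the average entropy of its posterior vectors, low entropy suggesting confident and likely reliable outputs, and keep only a fixed number of the best-scoring streams before a simple fusion:

```python
# Sketch: entropy-based performance monitoring over parallel streams.
# The array shapes, the number of streams kept, and the plain averaging
# used as a stand-in for the trained fusion ANN are assumptions made for
# this illustration.
import numpy as np

def avg_entropy(posteriors):
    """posteriors: (n_frames, n_classes) per-frame posterior vectors."""
    p = np.clip(posteriors, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def select_and_fuse(stream_posteriors, keep=2):
    """stream_posteriors: list of (n_frames, n_classes) arrays, one per
    stream. Keeps the `keep` streams with the lowest average output entropy
    and averages their posteriors."""
    scores = [avg_entropy(p) for p in stream_posteriors]
    best = np.argsort(scores)[:keep]
    fused = np.mean([stream_posteriors[i] for i in best], axis=0)
    return fused, best

# Example: three streams of random posteriors over 40 phoneme classes.
rng = np.random.default_rng(0)
streams = [rng.dirichlet(np.ones(40), size=100) for _ in range(3)]
fused, used = select_and_fuse(streams)
print(fused.shape, used)  # (100, 40) and indices of the retained streams
```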
The number of reliable streams can change, from using all streams for high-quality signals down to using only a few streams for heavily corrupted signals, and the fusion module needs to accommodate such changes. Some earlier works addressed this issue by training as many fusion classifiers as there are possible non-empty stream combinations [Tibrewala and Hermansky 1997, Variani et al 2013], then running all stream combinations in parallel and selecting the best one. While theoretically flawless, this technique is computationally demanding. Recent efforts (e.g., [Mallidi and Hermansky 2016]) experiment with an alternative technique of training an ANN-based fusion module while randomly assigning zero values to some of the input streams. Such stream dropping during the training of the fusion module helps it accommodate corrupted or empty inputs from unreliable streams.
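A minimal sketch of the stream-dropping idea follows (an illustration with assumed sizes and drop probability, not the cited implementation): during each training step, the posterior inputs of randomly chosen streams are zeroed before they are concatenated for the fusion network.

```python
# Sketch: stream dropping when preparing training inputs for a fusion
# network. Each stream's posteriors are zeroed with some probability, so the
# fusion module learns to cope with missing or unreliable streams. The drop
# probability of 0.3 is an assumed value, not taken from the cited work.
import numpy as np

def drop_streams(stream_posteriors, p_drop=0.3, rng=None):
    """stream_posteriors: list of (n_frames, n_classes) arrays, one per
    stream. Returns the concatenated (n_frames, n_streams*n_classes) fusion
    input with each stream independently zeroed with probability p_drop."""
    rng = rng or np.random.default_rng()
    kept = [(0.0 if rng.random() < p_drop else 1.0) * p
            for p in stream_posteriors]
    return np.concatenate(kept, axis=1)

# Example: three streams of posteriors over 40 classes, 100 frames each.
rng = np.random.default_rng(1)
streams = [rng.dirichlet(np.ones(40), size=100) for _ in range(3)]
print(drop_streams(streams, rng=rng).shape)  # (100, 120)
```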

5. CONCLUSIONS

In the conventional view of human speech communication, the vocal tract spectral envelope carries the information about the underlying speech sound and is imprinted on the short-term spectrum of the speech signal. The spectral envelope is easily corrupted by relatively minor signal distortions, and vocal tract inertia interleaves neighboring speech sounds, making the decoding of the information more difficult. This paper offers an alternative view, in which the vocal tract redundantly spreads the information about speech sounds over larger temporal spans of the signal, so that it is less vulnerable to time-localized distortions, and redundantly modulates spectral carriers in different frequency bands, so that it is less vulnerable to frequency-localized distortions. The temporal redundancies are used in extracting the information within individual frequency bands, and the spectral redundancies are used in selectively choosing the least corrupted bands for the subsequent decoding of the information about speech sounds. This alternative view of human speech communication could help improve the reliability of future engineering systems for machine recognition of realistic noisy speech.

Acknowledgements

The work has been supported by the DARPA RATS program, the IARPA BABEL program, the JHU Human Language Technology Center of Excellence, and a Google faculty award to Hynek Hermansky. The paper was prepared and its content presented by the author during his residence in the program on Physics of Hearing: From Neurobiology to Information Theory and Back at the KAVLI Institute for Theoretical Physics, University of California Santa Barbara.

REFERENCES

Athineos, M., et al. "LP-TRAP: Linear predictive temporal patterns." Proc. ICSLP 2004.
Athineos, M., and D. P. W. Ellis. "Frequency-domain linear prediction for temporal features." Proc. ASRU 2003.
Badiezadegan, S., and R. Rose. "A performance monitoring approach to fusing enhanced spectrogram channels in robust speech recognition." Proc. INTERSPEECH 2011.
Biem, A., and S. Katagiri. "Filter bank design based on discriminative feature extraction." Proc. ICASSP 1994.
Boothroyd, J., and S. Nittrouer. "Mathematical treatment of context effects in phoneme and word recognition." JASA 84(1), 1988.
Bourlard, H., and S. Dupont. "A new ASR approach based on independent processing and recombination of partial frequency bands." Proc. ICSLP 1996.
Bourlard, H. "Non-stationary multi-channel (multi-stream) processing towards robust and adaptive ASR." Proc. ESCA Workshop on Robust Methods in Speech Recognition, 1999.
Bourlard, H., et al. "Towards subband-based speech recognition." Proc. EUSIPCO 1996.
Bourlard, H., and C. J. Wellekens. "Links between Markov models and multilayer perceptrons." IEEE Trans. Pattern Analysis and Machine Intelligence, 1990.
Burget, L., et al. "Combination of strongly and weakly constrained recognizers for reliable detection of out-of-vocabulary words (OOVs)." Proc. ICASSP 2008.
Bush, B. O., and A. Kain. "Estimating phoneme formant targets and coarticulation parameters of conversational and clear speech." Proc. ICASSP 2013.
Chi, T., et al. "Multiresolution spectrotemporal analysis of complex sounds." JASA, 2005.
Chistovich, L. A., and V. V. Lublinskaya. "The center of gravity effect in vowel spectra and critical distance between the formants: psychoacoustical study of the perception of vowel-like stimuli." Hearing Research 1(3), 1979, 185-195.
Chow, K. "Numerical estimates of the auditory central nervous system of the rhesus monkey." J. Compar. Neurol. 95(1), 1951.
Cooper, F. S., P. C. Delattre, A. M. Liberman, J. M. Borst, and L. J. Gerstman. "Some experiments on the perception of synthetic speech sounds." JASA 24(6), 1952.
Cowan, N. "On short and long auditory stores." Psychol. Bull. 96(2), 1984.
Cowie, F. "Innateness and Language." The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), E. N. Zalta (ed.).
Fant, G., and A. Risberg. "Auditory matching of vowels with two formant synthetic sounds." STL-QPSR 4, 1962, 7-11.
Fanty, M., et al. "English alphabet recognition with telephone speech." Advances in Neural Information Processing Systems 4, San Mateo, CA: Morgan Kaufmann, 1992, 199-206.

Flanagan, J. L. Speech Analysis Synthesis and Perception. Springer Berlin Heidelberg, 1965, 166-209.
Fletcher, H. Speech and Hearing in Communication. 1953.
Fousek, P., and H. Hermansky. "Towards ASR based on hierarchical posterior-based keyword recognition." Proc. ICASSP 2006.
French, N. R., and J. C. Steinberg. "Factors governing the intelligibility of speech sounds." JASA 19(1), 1947, 90-119.
Ganapathy, S., S. H. Mallidi, and H. Hermansky. "Robust feature extraction using modulation filtering of autoregressive models." IEEE/ACM Transactions on Audio, Speech, and Language Processing 22(8), 2014.
Golik, P., Z. Tüske, R. Schlüter, and H. Ney. "Convolutional neural networks for acoustic modeling of raw time signal in LVCSR." Proc. INTERSPEECH 2015.
Green, J., et al. "SMASH: a tool for articulatory data processing and analysis." Proc. INTERSPEECH 2013.
Hermansky, H. "Perceptual linear predictive (PLP) analysis of speech." JASA, 1990.
Hermansky, H. "Multiband recognition of speech: Dealing with unknown unknowns" (invited paper). Proceedings of the IEEE 101(5), May 2013.
Hermansky, H. "Speech recognition from spectral dynamics" (invited paper). Sadhana 36(5), 2011.
Hermansky, H., and P. Fousek. "Multi-resolution RASTA filtering for TANDEM-based ASR." Proc. ICSLP 2005.
Hermansky, H., and S. Sharma. "TRAPS, classifiers of temporal patterns." Proc. ICSLP 1998.
Hermansky, H., S. Tibrewala, and M. Pavel. "Towards ASR on partially corrupted speech." Proc. ICSLP 1996.
Hermansky, H., and N. Malayath. "Spectral basis functions from discriminant analysis." Proc. ICSLP 1998.
Hermansky, H., and D. J. Broad. "The effective second formant F2' and the vocal tract front-cavity." Proc. ICASSP 1989.
Hermansky, H., et al. "Towards machines that know when they do not know." Proc. ICASSP 2015.
Hermansky, H., et al. "Mean temporal distance: Predicting ASR error from temporal properties of speech signal." Proc. ICASSP 2013.
Hermansky, H., et al. "Perceptual properties of current speech recognition technology" (invited paper). Proceedings of the IEEE 101(9), 2013.
Hochreiter, S., and J. Schmidhuber. "Long short-term memory." Neural Computation 9(8), 1997, 1735-1780.
Hromadka, T., et al. "Sparse representation of sounds in the unanesthetized auditory cortex." PLoS Biol. 6(1), 2008.
Jacobs, R. A., M. I. Jordan, S. J. Nowlan, and G. E. Hinton. "Adaptive mixtures of local experts." Neural Computation 3(1), 1991.
Mahajan, N., et al. "Principal components of auditory spectro-temporal receptive fields." Proc. INTERSPEECH 2014.
Mallidi, S. H., et al. "Autoencoder based multi-stream combination for noise robust speech recognition." Proc. INTERSPEECH 2015.
Mallidi, S. H., and H. Hermansky. "A framework for practical multiband ASR." Proc. INTERSPEECH 2016.
Mallidi, S. H., et al. "Uncertainty estimation of ANN classifiers." Proc. ASRU 2015.
Mallidi, S. H. JHU PhD thesis (in preparation), 2016.
Mesgarani, N., et al. "Towards optimizing stream fusion." JASA 139(1), 2011, 14-18.
Mesgarani, N., et al. "Phoneme representation and classification in primary auditory cortex." JASA 123, 2008, 899-909.
Mesgarani, N., et al. "A multiband multiresolution framework for phoneme recognition." Proc. INTERSPEECH 2010, 318-321.
Meyer, B. T., et al. "Performance monitoring for automatic speech recognition in noisy multi-channel environments." Proc. ASRU 2016.
Miller, G. A. "The masking of speech." Psychological Bulletin 44(2), 1947, 105.
Miller, G. A. Language and Communication. 1951.

Mirghafori, N., and N. Morgan. "Combining connectionist multi-band and full-band probability streams for speech recognition of natural numbers." Proc. ICSLP 1998.
Mirghafori, N. A Multiband Approach to Automatic Speech Recognition. PhD thesis, UC Berkeley, 1998.
Misra, H., et al. "Spectral entropy based feature for robust ASR." Proc. ICASSP 2004.
Morgan, N., et al. "Continuous speech recognition using PLP analysis with multilayer perceptrons." Proc. ICASSP 1991.
Morris, A., et al. "Multi-stream adaptive evidence combination for noise robust ASR." Speech Communication 34(1-2), 2001.
Ogawa, T., et al. "Autoencoder based multi-stream combination for noise robust speech recognition." Proc. ICASSP 2015.
Ogawa, T., et al. "Stream selection and integration in multiband ASR using GMM-based performance monitoring." Proc. INTERSPEECH 2013.
Okawa, S., et al. "Multi-band speech recognition in noisy environments." Proc. ICASSP 1998.
Jain, P., and H. Hermansky. "Beyond a single critical-band in TRAP based ASR." Proc. EUROSPEECH 2003.
Peddinti, V., et al. "A time delay neural network architecture for efficient modeling of long temporal contexts." Proc. INTERSPEECH 2015.
Peddinti, V., et al. "Low latency modeling of temporal contexts." IEEE Signal Processing Letters, 2017.
Pešán, J., et al. "ANN derived filters for processing of modulation spectrum of speech." Proc. INTERSPEECH 2015.
Povey, D., et al. "The Kaldi speech recognition toolkit." Proc. ASRU 2011.
Pickles, J. An Introduction to the Physiology of Hearing. Bingley, U.K.: Emerald, 1988.
Shannon, C. E., and W. Weaver. The Mathematical Theory of Communication. Urbana, 1949.
Shannon, R. V., et al. "Speech recognition with primarily temporal cues." Science 270(5234), 1995, 303.
Sharma, S. Multi-stream Approach to Robust Speech Recognition. Ph.D. dissertation, Dept. of Electrical and Computer Engineering, Oregon Graduate Institute of Science and Technology, Portland, OR, 1999.
Tibrewala, S., and H. Hermansky. "Sub-band based recognition of noisy speech." Proc. ICASSP 1997.
Tibrewala, S., and H. Hermansky. "Multi-stream approach in acoustic modeling." Proc. DARPA Hub 5 Workshop, 1997.
Valente, F., and H. Hermansky. "Data-driven extraction of spectral-dynamics based posterior features." In Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. New York: Springer-Verlag, 2011.
Valente, F., and H. Hermansky. "Discriminant linear processing of time-frequency plane." Proc. INTERSPEECH 2006.
Valente, F., and H. Hermansky. "Combination of acoustic classifiers based on Dempster-Shafer theory of evidence." Proc. ICASSP 2007.
Valente, F., and H. Hermansky. "On the combination of auditory and modulation frequency channels for ASR applications." LIDIAP-REPORT-2008.
Van Vuuren, S., and H. Hermansky. "Data-driven design of RASTA-like filters." Proc. EUROSPEECH 1997.
Variani, E., et al. "Multi-stream recognition of noisy speech with performance monitoring." Proc. INTERSPEECH 2013.
Vinyals, O., et al. "Revisiting recurrent neural networks for robust ASR." Proc. ICASSP 2012.
Warren, R. "Perceptual restoration of missing speech sounds." Science 167, 1970, 392-393.
Yang, H., et al. "Relevance of time-frequency features for phonetic and speaker-channel classification." Speech Communication 31(1), 2000, 35-50.
Zhao, S., and N. Morgan. "Multi-stream spectro-temporal features for robust speech recognition." Proc. INTERSPEECH 2008.