Review Article A REVIEW ON LANDMARK DETECTION METHODOLOGIES OF STOP CONSONANTS


NIRMALA S. R.* AND GOSWAMI UPASHANA
Department of Electronics and Communication Engineering, Gauhati University Institute of Science and Technology, Guwahati, India.
*Corresponding Author: nirmalasr3@gmail.com, upashanagoswami15@gmail.com
Received: May 28, 2017; Revised: October 10, 2017; Accepted: December 21, 2017; Published: December 30, 2017

Abstract- Humans can produce different classes of sounds such as vowels, semi-vowels, nasals, fricatives, stops and murmurs. Some sounds are produced by vocal fold vibration and others by making a constriction in the vocal tract; stop consonants fall under the second category. They are associated with low energy and high variability, and are highly random in nature, so extracting useful features from the speech signal is a challenging task. Such features can be extracted at certain locations where sudden and significant articulatory changes occur, known as landmarks. The landmarks associated with stops are the voicing onset time and the burst release. Analysis of the burst and the voicing onset time can reveal the place of articulation during sound production, and the detection of these landmarks has therefore been studied by a number of researchers. Landmark-based processing is required for the analysis of the events associated with stops. In this paper, we review some of the existing methodologies for landmark detection of stop consonants.

Keywords- Landmark, Burst, Voicing Onset Time, Stop, Detection.

Citation: Nirmala S. R. and Goswami Upashana (2017) A Review on Landmark Detection Methodologies of Stop Consonants, Volume 8, Issue 1, pp. 316-320.

Copyright: Copyright 2017 Nirmala S. R. and Goswami Upashana. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

Introduction
Speech is produced by the convolution of the excitation signal from the vocal folds (source) with the impulse response of the vocal tract filter [1]. One of the most important classes of sounds is the stop consonants. They are produced by making a constriction at some point in the vocal tract and then releasing the air. In English, the phonemes /b/, /p/, /t/, /d/, /k/ and /g/ are called stops or plosives [2]. Over the past few decades, stops have been studied by a number of researchers, who particularly examined salient regions known as 'landmarks'. Acoustic landmarks or events contain important cues for speech perception. There are two main landmarks associated with any stop: the burst onset and the voicing onset time (VOT), which together completely characterize the nature of the stop. Detection of these events plays a major role in applications such as automatic speech recognition and phoneme recognition [3, 4, 5, 12], and their detection in various other languages is also a wide area of research [22, 23]. Stops are classified based on the presence of vocal fold vibration, the manner of articulation and the place of articulation. During the production of a stop consonant, the moving articulators produce multiple events or landmarks. In this paper, we discuss the landmark detection methodologies for stop consonants.

Classification of Stops
Phonemes are the smallest units that distinguish one word from another in a particular language. One of the main sub-classes of phonemes is the stop consonants.
They are mainly classified into two types, voiced and unvoiced, depending on the state of the vocal folds.

A. Voiced Stops
If vocal fold vibration is present during the production of a stop, it is called a voiced stop; for example, the phonemes /b/, /d/ and /g/ are voiced.

B. Unvoiced Stops
In unvoiced stops this vibration is absent; the phonemes /p/, /t/ and /k/ fall under this category.

Manner of Articulation
The manner of articulation describes how the airstream is affected as it flows from the lungs and out of the mouth. From this viewpoint, stop production gives rise to multiple articulatory events, which depend on the type of stop consonant, speaker variability and the context in which the stop appears. The events are divided into five components, namely closure interval, transient, frication, aspiration and transition [2, 3], described below. [Fig-1] shows the five components of a stop.

Fig-1 Events occurring during stop production [6].

Closure interval: In stop production, a closure is first formed within the oral cavity of the vocal tract. In unvoiced stops, a silence region is present during the closure interval. In voiced stops, low-frequency dominant periodic energy is present in this phase, which can be seen as a 'voice bar' in the spectrogram.

Transient: The release of the closure is termed the transient. It is characterized by a brief pulse of intense energy in the spectrogram.

Frication: The noise component resulting from the high intra-oral pressure being released through a narrow opening at the point of release.

Aspiration: The component resulting from the opening vocal tract creating turbulence at the glottis rather than at the oral constriction. Formants can often be present during this phase.

Transition: The component where formants are present and the oral tract is moving towards the position for the following vowel target.

The main landmarks or events of stops are the burst and the VOT, shown in [Fig-2] and [Fig-3].

Burst onset
The burst onset, also called the burst or closure-burst transition (CBT), is a brief pulse of acoustic energy produced by the initial release of the constriction in stop production. Automatic detection of the burst onset is one of the major problems considered in several studies [4, 5, 8]. Its typical duration ranges from 5-10 ms. The burst shows a high degree of variability: the release may be weak (voiced stops) or sharp (unvoiced stops), and multiple bursts may be present in a single stop (velar stops).

Voicing onset time (VOT)
The interval between the transient or burst onset and the onset of voicing is known as the VOT [2]. It is an important temporal cue for distinguishing voiced from voiceless stops, especially when stops are in word-initial position [16]. This parameter can range from 0-30 ms for voiced stops and is longer for unvoiced stops [6]. On average, the VOT ranges are as follows [14]:
VOT (/b d g/) < VOT (/p t k/)
VOT (/b/) < VOT (/d/) < VOT (/g/)
VOT (/p/) < VOT (/t/) < VOT (/k/)

Fig-2 Illustration of the events/landmarks of the voiceless (unvoiced) stop consonant /k/ producing the sound ka.

Fig-3 Illustration of the events/landmarks of the voiced stop consonant /b/ producing the sound ba.

Place of Articulation
In the production of stops, the point of contact at which the constriction occurs in the vocal tract is known as the place of articulation. Together with the manner of articulation, the place of articulation gives consonants their distinctive sounds. According to the place of articulation, stops are classified into the following types [7]:

Bilabial: Bilabial stops are produced by making the maximum constriction with both lips; the phonemes /b/ (voiced) and /p/ (unvoiced) fall under this category. The point of constriction of bilabial stops is shown by the red line in [Fig-4].

Fig-4 Point of constriction (red line) of a bilabial stop [7].

Alveolar: Alveolar stops are produced when the tongue tip articulates with the frontal part of the alveolar ridge, as in /d/ (voiced) and /t/ (unvoiced). The place of constriction of alveolar stops is shown by the red line in [Fig-5].

Fig-5 Point of constriction (red line) of an alveolar stop [7].

Velar: Making a constriction at the velum produces velar stops; examples are /g/ (voiced) and /k/ (unvoiced). The red line in [Fig-6] shows the constriction of velar stops.

Fig-6 Point of constriction (red line) of a velar stop [7].

Burst Detection Methodologies
This section provides a systematic study of various existing methods for burst onset detection.

Liu [3] proposed a landmark detection method involving six energy bands, a rate-of-rise (ROR) measure and threshold logic. The broad-band spectrogram of the input speech signal was computed using a 512-point DFT and a 6-ms Hamming window, and the resulting spectrogram was divided into six frequency bands used to monitor different events. The energy changes in the six bands were computed with a two-pass strategy: the first pass used coarse information to find the general vicinity of a spectral change, and the second pass modified some parameter values in order to localize the energy changes in time. The ROR was measured by computing an overlapping first difference of the energy (in dB) in each of the six bands. Abrupt spectral changes in the six bands appear as positive and negative peaks in the ROR waveform, and burst onset landmarks were selected from these peaks by thresholding. Detection rates were 41%, 68%, 85% and 88% for temporal accuracies of 5, 10, 20 and 30 ms respectively on sentences from the TIMIT database.

Salomon et al. [4] proposed a landmark detection method based on temporal parameters, taking band-pass components of the speech signal as the temporal parameters. The signal was first passed through an auditory filter bank, which divided it into 60 band-pass frequency channels. Envelope analysis and feature extraction were then performed in each channel; the features included energy onset and offset measures and periodicity and aperiodicity content. By combining the features across the individual channels, a summary feature was produced to detect abrupt changes in energy. The detection rate was 96% with 50-ms temporal accuracy on sentences from the TIMIT database. A gross comparison with the method of [3] was also reported in [4]: on the TIMIT database, Liu's method [3] had error rates of 15% for deletions, 6% for substitutions and 25% for insertions, whereas Salomon's method [4] had an overall detection rate of 80.2% with 15% error for deletions, 4.8% for substitutions and 8.7% for insertions. The study suggests that summary measures across all frequency channels may be better for stop landmark detection than the broader frequency bands used by Liu.
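To make the band-energy approach concrete, the following minimal Python sketch computes per-band energies from a short-time spectrogram and a Liu-style ROR by differencing the dB energies across a fixed frame offset. This is an illustration in the spirit of [3], not the published implementation; the band edges, hop size, ROR offset and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import stft

def band_ror(x, fs, bands=((0, 400), (800, 1500), (1200, 2000),
                           (2000, 3500), (3500, 5000), (5000, 8000)),
             win_ms=6.0, hop_ms=1.0, delta_frames=10):
    """Six-band energy and rate-of-rise (ROR), loosely after Liu [3].
    Band edges, hop and ROR offset are illustrative assumptions."""
    nper = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    f, _, Z = stft(x, fs=fs, window='hamming', nperseg=nper,
                   noverlap=nper - hop, nfft=512)
    power = np.abs(Z) ** 2
    e_db = np.empty((len(bands), power.shape[1]))
    for i, (lo, hi) in enumerate(bands):
        sel = (f >= lo) & (f < hi)
        e_db[i] = 10 * np.log10(power[sel].sum(axis=0) + 1e-12)
    # ROR: overlapping first difference of the dB energy in each band
    ror = e_db[:, delta_frames:] - e_db[:, :-delta_frames]
    return e_db, ror

def burst_candidates(ror, thresh_db=9.0, min_bands=4):
    """Frames where several bands rise sharply at once (threshold logic)."""
    return np.where((ror > thresh_db).sum(axis=0) >= min_bands)[0]
```

Peak picking on the ROR tracks would then yield landmark candidates, analogous to the two-pass strategy described above.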
A stop burst release exists for only 5-10 ms, and a temporal accuracy coarser than this affects systems that require this landmark. Temporal accuracy can be improved by a parameter characterization approach, such as parameters from a Gaussian mixture model (GMM). Jayan and Pandey [5], [8] used a rate-of-change (ROC) measure defined on a GMM of the log magnitude spectrum with four Gaussian components, together with an onset-offset detector and spectral flatness measures, to detect stop landmarks. The approximation error in modeling the log magnitude spectrum is smaller than in modeling the magnitude or squared magnitude spectrum, and gain normalization is not necessary for this type of spectrum. They computed the log magnitude spectra using a 512-point DFT and a 6-ms Hanning window; the high frame rate is useful for capturing fast spectral variations. A 50-point median filter was used to smooth the magnitude spectrum, and the resulting spectrum was approximated by a weighted sum of Gaussian functions whose parameters were estimated with the expectation-maximization algorithm. The ROC was calculated on the smoothed parameters to capture abrupt changes. Detection rates were 98%, 97%, 95%, 90% and 73% at temporal accuracies of 30, 20, 15, 10 and 5 ms respectively on sentences from the TIMIT database. The iterative estimation of the Gaussian parameters is computation-intensive, so the method is not suited to real-time implementation.

Lin and Wang [9] used spectral moments in addition to energy band parameters for burst onset detection, employing the spectral moments as parameters for the classification of Mandarin stops. However, assigning a fixed weight to a parameter affects the detection rate because it desensitizes the parameter variation. Jayan et al. [10] proposed a method combining peak energies from fixed frequency bands with the first four spectral moments of the speech spectrum for burst onset detection. To calculate the energy band parameters, the speech was sampled at 10 kHz and the spectrogram was computed using a 512-point DFT and a 6-ms Hanning window. A 20-point moving average was taken along the time index to smooth the spectrum of each frame, and peaks in three different frequency bands were taken from the smoothed spectrum. To compute the spectral moments, the normalized speech spectrum was treated as a probability density function, and the first four moments (centroid, variance, skewness and kurtosis) were computed from it. A combined rate-of-change measure based on the Mahalanobis distance, referred to as ROC-MD, was then derived. Their results showed that the energy parameters were highly reliable and contributed most to the detection rate; the spectral moments were useful as additional parameters for improving the detection rate of the burst onset landmark, but reliable and accurate detection required the combined parameters. The ROC-MD, obtained by a Mahalanobis-distance-based first difference, was effective in combining the parameters into a single measure indicative of the overall variation; it was less sensitive to variations in the time step and effective for time-localizing burst onsets. The detection rates of the combined system were 90% and 96% for temporal deviations of 5 ms and 10 ms respectively (time step 3 ms).
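The moment parameters of [10] follow directly from treating each frame's normalized magnitude spectrum as a probability density. A short sketch of that computation (our illustration, not the authors' code):

```python
import numpy as np

def spectral_moments(spectrum, freqs):
    """First four spectral moments of one frame, treating the
    normalized magnitude spectrum as a pdf (cf. [10])."""
    p = spectrum / (spectrum.sum() + 1e-12)
    centroid = np.sum(freqs * p)                 # 1st moment
    var = np.sum((freqs - centroid) ** 2 * p)    # 2nd central moment
    std = np.sqrt(var) + 1e-12
    skew = np.sum((freqs - centroid) ** 3 * p) / std ** 3
    kurt = np.sum((freqs - centroid) ** 4 * p) / std ** 4
    return centroid, var, skew, kurt
```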
Niyogi et al. [11] proposed a method for stop detection using three energy measures, namely total log energy, log energy above 3 kHz and a spectral flatness measure based on the Wiener entropy, as the feature vector. These were used as input to a support vector machine (SVM) to detect stop consonants.

Lin and Wang [12] used the two-dimensional cepstrum (TDC) as the feature and a random forest (RF) detector to detect burst onsets. An RF builds many decision trees and uses them for classification: each tree in the forest judges the input test sample to make a local decision, and a plurality vote determines the final decision on that sample. Stop sounds from the TIMIT database were used to train the classifier. To increase detection accuracy, the stops were divided into voiceless stop bursts, voiced stop bursts and stop aspiration, and asymmetric bootstrapping was implemented to avoid imbalance in the training set. The TDC was derived by a two-dimensional discrete cosine transform (2D-DCT). They compared their technique with other machine learning techniques such as SVM and GMM and showed that detection was better with their method. The temporal accuracies were 64%, 86% and 99% at tolerances of 5 ms, 20 ms and 30 ms respectively.

Prathosh et al. [13] proposed two new temporal features, the plosion index (PI) and the maximum normalized cross-correlation (MNCC), and a rule-based classifier to detect the closure-burst transition (CBT). In this algorithm, the Hilbert envelope and the zero-crossing rate were first computed on the high-pass-filtered speech signal. The PI was then computed at local maxima of the Hilbert envelope between successive zero crossings, and a maximum exceeding a particular threshold was considered a potential CBT. Assuming that two burst releases cannot occur within a 20-ms interval, the very first potential CBT within 20 ms was considered the representative burst candidate (RBC). The MNCC was computed over three successive epochs, and based on a threshold criterion a decision was taken on whether the RBC is a CBT or not. The detection rates for the CBT on TIMIT database sentences were 64%, 84%, 97% and 100% for deviations of 5, 10, 15 and 20 ms respectively.
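A rough sketch of a plosion-index-style measure is shown below; the exact constants are defined in [13], so the high-pass cutoff and the averaging windows here are assumptions. The idea is that an isolated transient produces a Hilbert-envelope peak that is large relative to the average envelope shortly before it, while voiced or fricated regions do not.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def plosion_index(x, fs, n, m1_ms=6.0, m2_ms=16.0, cutoff_hz=400):
    """Plosion-index-style measure at sample n, loosely after [13]:
    Hilbert-envelope value at n divided by the average envelope over
    a preceding interval [n - m2, n - m1]. Window lengths and the
    high-pass cutoff are illustrative assumptions."""
    sos = butter(4, cutoff_hz, btype='highpass', fs=fs, output='sos')
    env = np.abs(hilbert(sosfilt(sos, x)))   # Hilbert envelope
    m1 = int(fs * m1_ms / 1000)
    m2 = int(fs * m2_ms / 1000)
    ref = env[max(0, n - m2):max(1, n - m1)].mean() + 1e-12
    return env[n] / ref
```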

VOT Detection Methodologies
Various existing methods for estimating the VOT fall under two categories: (i) knowledge-based methods, which identify the locations of the burst and voicing onsets through a set of customized acoustic-phonetic rules, and (ii) learning-based methods, which train a learning machine (such as a random forest or a support vector machine) to estimate the VOT from acoustic features of the stop-to-voiced-phone transition event.

Stouten and Van hamme [14] used a reassigned time-frequency representation (RTFR) to automatically estimate the VOT of stops. The RTFR is a high-resolution signal analysis method that preserves time information better than Mel frequency cepstral coefficients (MFCC). The method involves three steps. In the first step, an HMM-based speech recognizer was used to select the plosive segment; the search for the burst onset was started 2.5 ms or 4 frames prior to the burst segment start found by the recognizer, and the segment end was extended by 10 ms or 16 frames, in order to minimize the error. The second step was burst onset detection, for which only the frequency range from 3.2 to 8 kHz was retained. The burst power p(n) for frame n was estimated by summing all frequency bins of the RTFR power, and the first local maximum of p(n) with a sufficiently strong and sharp change was identified as the burst onset. A missing or weak burst could lead to the absence of a local maximum; in those cases, the start of the segment was identified as the burst onset. The third step was to find the start of periodicity or voicing. For voicing detection, only the frequency components from 0 to 4 kHz were retained. The short-time autocorrelation of each RTFR frame was computed by multiplying it with an asymmetrically weighted version of the frame at lags 0 to 40, and these values were summed over the lag index and over the retained frequency band. The algorithm was validated against manually extracted VOT estimates, showing detection rates of 76.10% for a temporal accuracy of 10 ms and 91.40% for a temporal accuracy of 20 ms on a subset of the TIMIT database.

Sonderegger and Keshet [15] proposed a supervised learning algorithm for VOT detection. The algorithm was first trained on a set of manually marked stop consonants; the training segments were of arbitrary length, with marks indicating the burst onset and the vowel onset. Seven acoustic features were extracted in order to achieve high accuracy, as they are highly informative about the exact location of the onset pair. The first four features were taken from the short-time Fourier transform (STFT): the log of the total spectral energy, the log of the energy between 50 Hz and 1000 Hz, the log of the energy above 3000 Hz, and the Wiener entropy (a measure of spectral flatness). The fifth feature was the maximum of the power spectrum computed between 6 ms before and 18 ms after the frame centre. The output of a RAPT-based pitch tracker, used for voicing detection, was the sixth feature, and the number of zero crossings around the frame centre was the seventh. At the testing phase, each segment of the input signal was mapped to the same vector space, the most probable onset pair was found and hence the VOT was detected. A kernel machine was used to map the input signal and the target onset pair to a vector space that included all possible onset pairs. Four databases were used in their experiments: the TIMIT database, the Big Brother database containing spontaneous speech samples, the Switchboard database containing spontaneous telephone conversations, and the Paterson/Goldrick database containing data from a laboratory study. Performance was evaluated on the difference between the automatic and manual VOT estimates; VOT was detected with 99% accuracy at a 50-ms temporal accuracy.
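The four STFT-based features in [15] can be sketched per frame as follows; this is a simplified outline of the feature computation rather than the authors' implementation, and the window and FFT size are assumptions.

```python
import numpy as np

def stft_frame_features(frame, fs, nfft=512):
    """Log total energy, log energy in 50-1000 Hz, log energy above
    3000 Hz, and Wiener entropy (spectral flatness), after the
    feature list in [15]."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    log_e = lambda sel: np.log(spec[sel].sum() + 1e-12)
    total = log_e(slice(None))
    low = log_e((freqs >= 50) & (freqs <= 1000))
    high = log_e(freqs >= 3000)
    # Wiener entropy: geometric mean / arithmetic mean of the spectrum
    wiener = np.exp(np.mean(np.log(spec + 1e-12))) / (spec.mean() + 1e-12)
    return total, low, high, wiener
```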
Lin and Wang [16] proposed a random forest (RF) classifier for onset detection. They used HMM-based forced alignment to align a speech signal with its accompanying text transcription, and two-dimensional cepstral coefficients (TDCC), obtained by applying a Fourier transform to the log magnitude spectra computed over successive groups of frames, to capture voicing onsets. Using these features, the RF classifier was applied to the segments obtained by forced alignment to detect the voicing and burst onsets. The VOT detection rates were 57.20%, 83.40%, 93.40% and 96.50% at temporal accuracies of 5 ms, 10 ms, 15 ms and 20 ms respectively on the TIMIT database.

Prathosh et al. [17] used temporal measures to estimate the VOT in continuous speech. The burst onset was detected as described in [13] above. For voicing detection, the maximum weighted inner product (MWIP) and the zero crossing difference (ZCD) were used. The MWIP measures the similarity between two vectors, so for a periodic segment two successive epochs have a high degree of similarity. Starting from the epoch closest to the burst onset, the MWIP was computed between two successive epochs and checked against a threshold t1. If it exceeded t1, the ZCD over both of the two successive inter-epoch intervals starting from that epoch was checked against another threshold, and if this condition was also satisfied, the first epoch was labeled the voicing onset.
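The voicing-onset decision of [17] can be outlined as below. This is a loose sketch: epoch locations are assumed to come from an external epoch extractor, a plain normalized inner product stands in for the MWIP, the ZCD is interpreted here as a difference of zero-crossing counts, and both thresholds are placeholders rather than the published values.

```python
import numpy as np

def voicing_onset(x, epochs, t1=0.7, t2=4):
    """Scan successive epochs after the burst onset; declare voicing
    onset when two successive inter-epoch segments are similar
    (normalized inner product > t1) and their zero-crossing counts
    differ by less than t2. Epochs and thresholds are assumptions."""
    def zc(seg):
        s = np.signbit(seg).astype(int)
        return int(np.abs(np.diff(s)).sum())
    for e0, e1, e2 in zip(epochs, epochs[1:], epochs[2:]):
        a, b = x[e0:e1], x[e1:e2]
        n = min(len(a), len(b))
        if n == 0:
            continue
        a, b = a[:n], b[:n]
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim > t1 and abs(zc(a) - zc(b)) < t2:
            return e0  # first epoch of the periodic pair
    return None
```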
Conclusion
The various existing methodologies for landmark detection rely on spectral and temporal features, and the methods for burst and VOT detection are generally validated against a manually marked database. The automatic detection of stop events is used in many applications: the analysis of features extracted around the useful landmarks can represent the speech information for applications such as speech recognition and phoneme recognition [13, 24], and accurate detection of landmarks can be used to determine the place of articulation of stops [21]. The VOT is also significant in discriminating voiced from unvoiced stops. The parameters around these two landmarks are further useful in the diagnosis of pathological conditions such as dysarthria, which disrupts the speech production system [18]. Because of the various difficulties associated with sound production in dysarthric speakers, landmark detection plays an important role in the acoustic analysis of dysarthric speech.

Application of research: This review gives an idea of the various existing methodologies for landmark detection of stop consonants. These landmarks can be used for applications such as speech recognition and phoneme recognition.

Research Category: Speech Processing

*Abbreviations:
VOT - Voicing onset time
GMM - Gaussian mixture model
ROR - Rate of rise
ROC - Rate of change
DFT - Discrete Fourier transform
TDC - Two-dimensional cepstrum
RF - Random forest
DCT - Discrete cosine transform
SVM - Support vector machine
MNCC - Maximum normalized cross-correlation
CBT - Closure-burst transition
RBC - Representative burst candidate
PI - Plosion index
RTFR - Reassigned time-frequency representation
HMM - Hidden Markov model
MFCC - Mel frequency cepstral coefficients
STFT - Short-time Fourier transform
ZCD - Zero crossing difference
MWIP - Maximum weighted inner product

Author Contributions: All authors contributed equally.

Author statement: All authors read, reviewed, agreed and approved the final manuscript.

Conflict of Interest: None declared.

Acknowledgement / Funding: The authors are thankful to Gauhati University Institute of Science and Technology, Guwahati, India.

References
[1] Rabiner L. and Juang B. (1993) Fundamentals of Speech Recognition, Prentice-Hall, Englewood Cliffs.
[2] Fry D. B. (2009) Acoustic Phonetics, Cambridge University Press.
[3] Liu S. A. (1996) Journal of the Acoustical Society of America, 100(5).
[4] Salomon A., Wilson C. V. and Deshmukh O. (2004) Journal of the Acoustical Society of America, 115(3).
[5] Jayan A. R. and Pandey P. C. (2009) IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[6] Prathosh A. P. (2015) PhD Thesis, Indian Institute of Science, Bangalore.
[7] "Places of Articulation - The Complete List (with Examples)" [Online]. (Accessed on 3rd May, 2017.)
[8] Jayan A. R. and Pandey P. C. (2008) Proceedings of the International Symposium on Frontiers of Research on Speech and Music.
[9] Lin C. Y. and Wang H. C. (2008) 6th International Symposium on Chinese Spoken Language Processing, 1-4.
[10] Jayan A. R., RajathBhat P. S. and Pandey P. C. (2011) 17th National Conference on Communications, 1-5.
[11] Niyogi P., Burges C. and Ramesh P. (1999) IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1.
[12] Lin C. and Wang H. (2010) IEEE International Conference on Acoustics, Speech and Signal Processing.
[13] Prathosh A. P., Ramakrishnan A. G. and Ananthapadmanabha T. V. (2014) Journal of the Acoustical Society of America, 135(1).
[14] Stouten V. and Van hamme H. (2011) Speech Communication, 51(12).
[15] Sonderegger M. and Keshet J. (2012) Journal of the Acoustical Society of America, 132(6).
[16] Lin C. Y. and Wang H. C. (2011) Journal of the Acoustical Society of America, 130(1).
[17] Prathosh A. P., Ramakrishnan A. G. and Ananthapadmanabha T. V. (2014) Journal of the Acoustical Society of America, 136(2).
[18] Kent R. D., Weismer G., Kent J. F., Vorperian H. K. and Duffy J. R. (1999) Journal of Communication Disorders, 32.
[19] Mengistu K. and Rudzicz F. (2011) IEEE International Conference on Acoustics, Speech and Signal Processing.
[20] Kim H., Martin K., Hasegawa-Johnson M. and Perlman A. (2015) Clinical Linguistics and Phonetics, 24.
[21] Prathosh A. P., Ramakrishnan A. G. and Ananthapadmanabha T. V. (2015) Interspeech.
[22] Potisuk S. (2016) International Journal of Signal Processing Systems, 4(3).
[23] Singh K. and Tiwari N. (2016) Journal of the Acoustical Society of America, 140(5).
[24] Arjun P. and Jayan A. R. (2017) International Conference on Inventive Systems and Control, 1-6.
