CHAPTER-4 SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL FEATURES FOR SPEAKER RECOGNITION USING GAUSSIAN MIXTURE MODEL


Speaker recognition is a pattern recognition task that involves three phases: feature extraction, training and testing. In the feature extraction stage, features representing speaker information are extracted from the speech signal. In the present study, the LP residual derived from the speech data is used for training and testing, and the LP residual is processed in the time domain at the subsegmental, segmental and suprasegmental levels. In the training phase, one GMM is built for each speaker using that speaker's training data. During the testing phase, the models are scored against the test data, and a decision about the identity of the speaker is made from these scores.

4.1 THE SPEECH FEATURE EXTRACTION

The selection of the best parametric representation of acoustic data is an important task in the design of any text-independent speaker recognition system. The acoustic features should fulfill the following requirements:
- Be of low dimensionality, to allow reliable estimation of the parameters of the automatic speaker recognition system.
- Be independent of the speech and the recording environment.

PRE-PROCESSING

The task begins with the pre-processing of the speech signal collected from each speaker. The speech signal, sampled at 16,000 samples/sec, is resampled to 8000 samples/sec. In the pre-processing stage, the given speech utterance is pre-emphasized, blocked into a number of frames and windowed. For the subsegmental processing of the LP residual, the frame size is 5 msec (40 samples) with a frame shift of 2.5 msec (20 samples). For the segmental processing, the LP residual is decimated by a factor of 4, so a frame size of 20 msec again corresponds to 40 samples, with a frame shift of 2.5 msec (5 samples). For the suprasegmental processing, the LP residual is decimated by a factor of 50, so a frame size of 250 msec corresponds to 40 samples, with a frame shift of 6.25 msec (1 sample). The pre-processing task is carried out in a sequence of steps as explained below.

Pre-Emphasis

The speech samples in each frame are passed through a first-order filter to spectrally flatten the signal and make it less susceptible to finite-precision effects later in the signal processing task. The pre-emphasis filter used has the form $H(z) = 1 - a z^{-1}$, with $0.9 \le a \le 1.0$. In fact, it is sometimes better to difference the entire speech utterance before frame blocking and windowing.
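As a concrete illustration of this pre-processing chain, the following is a minimal Python sketch; the chapter's experiments were implemented in Matlab7, so this translation, the helper names, and the coefficient value a = 0.95 are illustrative assumptions rather than the thesis implementation.

    import numpy as np

    def pre_emphasize(x, a=0.95):
        # First-order pre-emphasis H(z) = 1 - a*z^-1, with 0.9 <= a <= 1.0;
        # a = 0.95 is an assumed value inside the stated range.
        return np.append(x[0], x[1:] - a * x[:-1])

    def frame_signal(x, frame_len, frame_shift):
        # Block a 1-D signal into overlapping frames, one frame per row.
        n_frames = 1 + (len(x) - frame_len) // frame_shift
        idx = np.arange(frame_len)[None, :] + frame_shift * np.arange(n_frames)[:, None]
        return x[idx]

    fs = 8000
    x = np.random.randn(fs)   # stand-in for 1 s of speech at 8 kHz
    # Subsegmental framing: 5 ms frames (40 samples) with a 2.5 ms shift (20 samples).
    frames = frame_signal(pre_emphasize(x), frame_len=40, frame_shift=20)
    print(frames.shape)       # (399, 40)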

Windowing

After pre-emphasis, each frame is windowed using a window function. Windowing ensures that the signal discontinuities at the beginning and end of each frame are minimized. The window function used is the Hamming window, given by

$w(n) = 0.54 - 0.46 \cos\!\left(\dfrac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1$  (4.1)

where $N$ is the number of samples in the frame.

Approach to Speech Feature Extraction

One of the early problems in speaker recognition systems was to choose the right speaker-specific excitation source features from the speech. The excitation source models were chosen to be GMMs or HMMs, as they are assumed to offer a good fit to the statistical nature of speech. Moreover, the excitation source models are often assumed to have diagonal covariance matrices, which raises the need for speech features that are by nature uncorrelated. The speaker recognition system uses subsegmental, segmental and suprasegmental features from the LP residual to represent different speaker-specific excitation source features. These features are robust to channel and environmental noise. We present a brief overview of the subsegmental, segmental and suprasegmental features of the LP residual.

Subsegmental, Segmental and Suprasegmental Features of the LP Residual

The 12th-order LP residual signal is blocked into frames using the specified frame size of 20 msec and frame shift of 10 msec.
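Before examining the residual further, here is a sketch of how the 12th-order LP residual and its decimated segmental and suprasegmental versions might be derived. The autocorrelation-method LP solve via scipy.linalg.solve_toeplitz, the use of scipy.signal.decimate, and the synthetic input are assumptions made for illustration; only the LP order of 12 and the decimation factors of 4 and 50 come from the text.

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import decimate

    def lp_residual(x, order=12):
        # Hamming-window the analysis signal (Eq. 4.1), solve the
        # autocorrelation normal equations R a = r for the predictor
        # coefficients, then inverse-filter the original signal.
        xw = x * np.hamming(len(x))
        r = np.correlate(xw, xw, mode='full')[len(xw) - 1:]
        a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
        pred = np.convolve(np.r_[0.0, a], x)[:len(x)]  # sum_k a_k * x[n-k]
        return x - pred                                # residual e[n]

    fs = 8000
    x = np.random.randn(4 * fs)                   # stand-in for 4 s of speech at 8 kHz
    res = lp_residual(x, order=12)

    sub_level = res                               # 5 ms frame = 40 samples at 8 kHz
    seg_level = decimate(res, 4)                  # 20 ms frame = 40 samples at 2 kHz
    supra_level = decimate(decimate(res, 10), 5)  # decimate by 50: 250 ms = 40 samples at 160 Hz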

The LP residual contains more speaker-specific information. It has been shown that humans can recognize people by listening to the LP residual signal [57]. This may be attributed to the speaker-specific excitation source information present at different levels, and this work views the speaker-specific excitation source information at these levels. In subsegmental analysis, the LP residual is blocked into frames of size 5 msec taken at shifts of 2.5 msec (40 samples with a shift of 20 samples) to extract the dominant speaker information in each frame. In segmental analysis, the LP residual is blocked into frames of size 20 msec taken at shifts of 2.5 msec to extract the pitch and energy of the speaker. In suprasegmental analysis, the LP residual is blocked into frames of size 250 msec taken at shifts of 6.25 msec to extract long-term information, which carries very low-frequency information about the speaker. At each level, the source-based speaker characteristics are modeled independently using a GMM, and the models are then combined to improve the speaker recognition system.

4.2 GAUSSIAN MIXTURE MODEL FOR SPEAKER RECOGNITION

The GMM is a classic parametric method well suited to modeling speaker identities, since Gaussian components are capable of representing general speaker-dependent spectral shapes. The Gaussian classifier has been successfully employed in several text-independent speaker identification applications, since the approach used by this classifier is similar to using the long-term average of spectral features to represent a speaker's average vocal tract shape [101].

In a GMM, the probability distribution of the observed data takes the form given by the following equation [102]:

$p(x \mid \lambda) = \sum_{i=1}^{M} p_i\, b_i(x)$  (4.2)

where $M$ is the number of component densities, $x$ is a $D$-dimensional observed data vector, $b_i(x)$ is the $i$th component density and $p_i$ is the mixture weight, for $i = 1, \ldots, M$, as shown in Fig. 4.1. Each component density is a $D$-dimensional normal distribution with mean vector $\mu_i$ and covariance matrix $\Sigma_i$:

$b_i(x) = \dfrac{1}{(2\pi)^{D/2}\,|\Sigma_i|^{1/2}} \exp\!\left(-\dfrac{1}{2}(x-\mu_i)^{T}\,\Sigma_i^{-1}\,(x-\mu_i)\right)$  (4.3)

The mixture weights are positive scalar values satisfying the condition $\sum_{i=1}^{M} p_i = 1$. These parameters can therefore be collectively represented as $\lambda = \{p_i, \mu_i, \Sigma_i\}$, for $i = 1, \ldots, M$. Each speaker in a speaker identification system can be represented by one distinct GMM, referred to as the speaker's model $\lambda_i$, for $i = 1, 2, 3, \ldots, N$, where $N$ is the number of speakers.
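Equations (4.2) and (4.3) translate directly into code. The following sketch assumes diagonal covariance matrices (the common choice noted earlier) and evaluates the mixture density in the log domain for numerical stability; the function name and array layout are illustrative.

    import numpy as np

    def gmm_logpdf(X, weights, means, variances):
        # log p(x|lambda) of Eqs. (4.2)-(4.3) for a diagonal-covariance GMM.
        # X: (T, D) frames; weights: (M,); means, variances: (M, D).
        D = X.shape[1]
        log_comp = -0.5 * (D * np.log(2 * np.pi)
                           + np.sum(np.log(variances), axis=1)
                           + np.sum((X[:, None, :] - means[None]) ** 2 / variances[None],
                                    axis=2))                     # (T, M) component logs
        return np.logaddexp.reduce(np.log(weights)[None, :] + log_comp, axis=1)

    # toy model: M = 2 components in D = 3 dimensions
    w = np.array([0.4, 0.6])
    mu = np.vstack([np.zeros(3), 2.0 * np.ones(3)])
    var = np.ones((2, 3))
    print(gmm_logpdf(np.random.randn(5, 3), w, mu, var))  # one log-likelihood per frame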

Training the Model

The training procedure is similar to the procedure followed in vector quantization. Clusters are formed within the training data, each cluster is represented with a Gaussian probability density function (pdf), and the union of many such Gaussian pdfs is a GMM. The most common approach to estimating the GMM parameters is maximum likelihood estimation [103], where $P(X \mid \lambda)$ is maximized with respect to $\lambda$; here $P(X \mid \lambda)$ is the conditional probability and $X = \{x_1, x_2, \ldots, x_T\}$ is the set of all feature vectors belonging to a particular acoustic class. Since there is no closed-form solution to the maximum likelihood estimation, and convergence is guaranteed only when enough data is available, an iterative approach for computing the GMM parameters using the Expectation-Maximization (EM) algorithm [104] is followed.

Fig. 4.1: Diagram of Gaussian Mixture Model.

E-Step: Posterior probabilities are calculated for all the training feature vectors. The posterior probability of component $i$ given the feature vector $x_n$ of the $n$th frame of the given speaker is

$P(i \mid x_n, \lambda) = \dfrac{p_i\, b_i(x_n)}{\sum_{k=1}^{M} p_k\, b_k(x_n)}$  (4.4)
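A sketch of the E-step of Eq. (4.4) in the same vectorized style, again computed via logarithms so that small component likelihoods do not underflow; the diagonal-covariance assumption carries over from the previous sketch.

    import numpy as np

    def e_step(X, weights, means, variances):
        # Posterior P(i | x_n, lambda) of Eq. (4.4) for every frame and component.
        T, D = X.shape
        log_comp = -0.5 * (D * np.log(2 * np.pi)
                           + np.sum(np.log(variances), axis=1)
                           + np.sum((X[:, None, :] - means[None]) ** 2 / variances[None],
                                    axis=2))
        log_joint = np.log(weights)[None, :] + log_comp         # log p_i b_i(x_n)
        log_norm = np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
        return np.exp(log_joint - log_norm)                     # (T, M), rows sum to 1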

M-Step: The M-step uses the posterior probabilities from the E-step to re-estimate the model parameters as follows:

$p_i = \dfrac{1}{T} \sum_{n=1}^{T} P(i \mid x_n, \lambda)$  (4.5)

$\mu_i = \dfrac{\sum_{n=1}^{T} P(i \mid x_n, \lambda)\, x_n}{\sum_{n=1}^{T} P(i \mid x_n, \lambda)}$  (4.6)

$\sigma_i^2 = \dfrac{\sum_{n=1}^{T} P(i \mid x_n, \lambda)\, x_n^2}{\sum_{n=1}^{T} P(i \mid x_n, \lambda)} - \mu_i^2$  (4.7)

The sequence of E-step and M-step is iterated a few times; the EM algorithm improves on the GMM parameter estimates at each iteration, which is verified by checking the condition

$P(X \mid \lambda^{(z+1)}) > P(X \mid \lambda^{(z)})$  (4.8)
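The M-step of Eqs. (4.5)-(4.7) then reduces to weighted averages over the posterior matrix gamma returned by the E-step above; the variance floor at the end is an added numerical safeguard, not part of the equations.

    import numpy as np

    def m_step(X, gamma):
        # Re-estimate weights, means and diagonal variances, Eqs. (4.5)-(4.7).
        # X: (T, D) frames; gamma: (T, M) posteriors P(i | x_t, lambda).
        Nk = gamma.sum(axis=0)                                     # soft frame count per component
        weights = Nk / len(X)                                      # Eq. (4.5)
        means = (gamma.T @ X) / Nk[:, None]                        # Eq. (4.6)
        variances = (gamma.T @ X ** 2) / Nk[:, None] - means ** 2  # Eq. (4.7)
        return weights, means, np.maximum(variances, 1e-6)        # floor for stability

    # toy check: 200 2-D frames, 4 components, random row-stochastic posteriors
    X = np.random.randn(200, 2)
    gamma = np.random.dirichlet(np.ones(4), size=200)
    w, mu, var = m_step(X, gamma)
    print(w.sum())   # 1.0: the mixture weights sum to one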

Testing the Model

Let the number of models representing different acoustic classes be $N$; hence $\lambda_j$, where $j = 1, 2, \ldots, N$, indexes the set of GMMs under consideration. For each test utterance, feature vectors $x_n$ at time $n$ are extracted. The probability of each model given the feature vector $x_n$ is

$P(\lambda_j \mid x_n) = \dfrac{P(x_n \mid \lambda_j)\, P(\lambda_j)}{P(x_n)}$  (4.9)

Since $P(x_n)$ is a constant and the a priori probabilities $P(\lambda_j)$ are assumed to be equal, the problem reduces to finding the $\lambda_j$ that maximizes

$P(\{x_1, x_2, \ldots, x_I\} \mid \lambda_j)$  (4.10)

where $I$ is the number of feature vectors of the speech signal belonging to a particular acoustic class. Assuming that the frames are statistically independent, Equation 4.10 can be written as

$P(\{x_1, x_2, \ldots, x_I\} \mid \lambda_j) = \prod_{n=1}^{I} P(x_n \mid \lambda_j)$  (4.11)

Applying the logarithm and maximizing over the $N$ models, we have

$N_r = \arg\max_{1 \le j \le N} \sum_{n=1}^{I} \log P(x_n \mid \lambda_j)$  (4.12)

where $N_r$ is declared as the class to which the feature vectors belong. Note that $\{N_r,\ r = 1, 2, \ldots, N\}$ is the set of all acoustic classes.

4.3 EXPERIMENTAL RESULTS

Database Used for the Study

In this study we consider the identification task on the TIMIT speaker database [4]. The TIMIT corpus of read speech has been designed to provide speaker data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speaker recognition systems. TIMIT contains a total of 6300 sentences: 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. We consider 380 utterances spoken by 38 of the 630 speakers for speaker recognition. For each speaker a maximum of 10 speech utterances is available, of which 8, 7 or 6 are used for training, with the remaining 2, 3 or 4 used for testing, respectively.
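Putting Eqs. (4.9)-(4.12) together with such a train/test split, a toy closed-set identification loop might look as follows. scikit-learn's GaussianMixture stands in for the chapter's MATLAB implementation, and the synthetic per-speaker features (5 speakers instead of 38) are purely illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    train = [rng.normal(loc=s, size=(500, 12)) for s in range(5)]  # hypothetical features
    test = [rng.normal(loc=s, size=(200, 12)) for s in range(5)]

    # One diagonal-covariance GMM per speaker (M = 4, 8, 16 or 32 in the chapter).
    models = [GaussianMixture(n_components=8, covariance_type='diag',
                              random_state=0).fit(Xs) for Xs in train]

    # Closed-set identification, Eq. (4.12): pick the model with the
    # highest total log-likelihood over the test frames.
    hits = 0
    for true_id, Xt in enumerate(test):
        scores = [gmm.score(Xt) for gmm in models]  # mean log-likelihood per frame
        hits += int(np.argmax(scores) == true_id)
    print('recognition rate: %.1f%%' % (100.0 * hits / len(test)))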

Experimental Setup

In general, speaker recognition refers to both speaker identification and speaker verification. Speaker identification is the task of identifying a given speaker from a set of speakers; in closed-set speaker identification, no speaker outside the given set is used for testing. Speaker verification is the task of either accepting or rejecting the identity claim of a speaker. In this work, experiments have been carried out on closed-set speaker identification. The system has been implemented in Matlab7 on the Windows XP platform. We have used an LP order of 12 for all experiments. We have trained the models using 4, 8, 16 and 32 Gaussian mixture components for the different training and testing speech utterances spoken by the 38 speakers. Here, the recognition rate is defined as the ratio of the number of speakers identified correctly to the total number of speakers tested.

Extraction of Complete Source Information of LP Residual, HE of LP Residual and RP of LP Residual at Different Levels

To determine the extent of the vocal tract information present in the LP residual, one can observe the short-time spectrum of the LP residual for different LP orders together with the corresponding LP spectra of the signal. As the order of the LP analysis is increased, the LP spectrum approximates the short-time spectral envelope better.

The envelope of the short-time spectrum corresponds to the frequency response of the vocal tract shape, thus reflecting the vocal tract system characteristics. Typically the vocal tract system is characterized by a maximum of five resonances in the 0-4 kHz range; therefore an LP order of about 8-14 seems most appropriate for a speech signal resampled at 8 kHz. For a low order, say 3 as shown in Fig. 4.2(a), the LP spectrum may pick up only the prominent resonances, and hence the residual will still have a large amount of information about the vocal tract system. Thus the spectrum of the residual in Fig. 4.2(b) contains most of the information of the spectral envelope, except for the prominent resonances. On the other hand, if a large order, say 30, is used, then the LP spectrum may pick up spurious peaks, as shown in Fig. 4.2(e). These spurious peaks become spurious nulls in the spectrum of the inverse filter, so the corresponding LP residual, obtained by passing the speech signal through the inverse filter, may be affected by them, as shown in Fig. 4.2(f).

Fig. 4.2: (a) LP Spectrum and (b) LP Residual Spectrum for LP Order 3; (c) LP Spectrum and (d) Residual Spectrum for LP Order 9; (e) LP Spectrum and (f) Residual Spectrum for LP Order 30.

From the above discussion, it is evident that the LP residual does not contain any significant features of the vocal tract shape for LP orders in the range 8-14; the LP residual contains mostly the source information at the subsegmental, segmental and suprasegmental levels. The features derived from the LP residual at these levels are called residual features. We verified that the speaker-specific information present in the LP residual resides more in the amplitude information than in the phase information, due to the inverse filtering. Hence we separate the amplitude information and the phase information of the LP residual using the Hilbert transform: the amplitude information is contained in the HE of the LP residual and the phase information in the RP of the LP residual, at the subsegmental, segmental and suprasegmental levels, as shown in Figs 4.3 and 4.4.

Fig. 4.3: Analytical Signal Representation of a) Subsegmental, b) Segmental and c) Suprasegmental Feature Vectors using HE of LP Residual.
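The HE and RP shown in Figs 4.3 and 4.4 can be computed from the analytic signal of the residual. A minimal sketch using scipy.signal.hilbert; the small epsilon guarding the division is an added safeguard, and the random input stands in for a real LP residual.

    import numpy as np
    from scipy.signal import hilbert

    residual = np.random.randn(8000)  # stand-in for 1 s of LP residual at 8 kHz

    analytic = hilbert(residual)      # r[n] + j*r_h[n], r_h = Hilbert transform of r
    he = np.abs(analytic)             # Hilbert envelope: amplitude information
    rp = residual / (he + 1e-12)      # residual phase: cosine of the analytic phase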

Fig. 4.4: Subsegmental, Segmental and Suprasegmental Feature Vectors for RP of LP Residual.

Effect of the Model at Subsegmental, Segmental and Suprasegmental Levels and the Amount of Training and Test Data

This section presents the performance of the speaker recognition systems based on the subsegmental, segmental and suprasegmental levels of the LP residual (complete source) with respect to the number of mixture components per model (at each level) and the amount of training and testing data. The recognition performance of the subsegmental, segmental and suprasegmental levels of the LP residual for different amounts of train and test data is shown in Tables 4.1 and 4.2.

4.4 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE

The speaker recognition performance with respect to the amounts of training and testing data is presented in this section with a detailed explanation.

With Respect to Varying Amount of Training and Testing Data

We impose the condition that the number of testing utterances is less than the number of training utterances, for a better understanding of the performance and for good authentication. Following this condition, the training utterances are decreased and the testing utterances are increased until they reach 6 and 4 utterances, respectively, among the 10 utterances per speaker. In this experiment, one model is developed for each speaker with 2, 4, 8 and 32 mixture components using GMM on the training utterances. The various amounts of training data were taken sequentially and tested with the corresponding testing utterances under the above condition. We observed the best results for the 6-4 utterance model compared to the 8-2 and 7-3 utterance models, as discussed in the following sections. It is also evident that when there is not enough training data, the selection of the model order becomes more important. For all amounts of training data, performance increases from 2 to 32 Gaussian components.
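The effect of model order can be probed with a sweep over mixture counts. A sketch using scikit-learn with synthetic data in place of the residual features; the specific mixture counts follow the chapter, everything else is illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    train = rng.normal(size=(2000, 12))  # stand-in for one speaker's training frames
    test = rng.normal(size=(500, 12))    # stand-in for held-out frames

    for m in (2, 4, 8, 16, 32):          # mixture counts explored in this chapter
        gmm = GaussianMixture(n_components=m, covariance_type='diag',
                              random_state=0).fit(train)
        print(m, gmm.score(test))        # mean held-out log-likelihood per frame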

4.5 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL (Sub), SEGMENTAL (Seg) AND SUPRASEGMENTAL (Supra) LEVELS OF LP RESIDUAL

At the subsegmental level, speaker-specific excitation source information within one pitch cycle is modeled. At this level, a GMM is used to capture the variety of speaker characteristics: blocks of 40 samples from the voiced regions of the LP residual are used as input, successive blocks are formed with a shift of 20 samples, and one GMM is trained on the LP residual at the subsegmental level. Since the block size is less than a pitch period, the varying characteristics of the excitation source (LP residual) within one glottal pulse are captured. The performance of speaker identification at the subsegmental level of the LP residual is shown in Figs 4.5(a) and 4.7(a) and in the 2nd column of Tables 4.1 and 4.2. At the segmental level, two to three glottal cycles of speaker-specific information are modeled, and this information may be attributed to pitch and energy. Again, a GMM is used: blocks of 40 samples from the voiced regions of the LP residual are used as input, successive blocks are formed with a shift of 5 samples, and one GMM is trained on the LP residual at the segmental level. Since the block size spans 2-3 pitch periods, the varying characteristics of the excitation source within 2-3 glottal pulses are captured. The performance of the speaker recognition system at the segmental level is shown in Figs 4.5(a) and 4.7(a) and in the 3rd column of Tables 4.1 and 4.2. At the suprasegmental level, 25 to 50 glottal cycles of speaker-specific information are modeled, and this information may be attributed to long-term variations.

These long-term variations mean that, using this feature, a speaker can be recognized even after the speaker has aged; this is the motivation of this work. The performance of the speaker recognition system at the suprasegmental level is shown in Figs 4.5(a) and 4.7(a) and in the 4th column of Tables 4.1 and 4.2. These results are compared with the baseline speaker recognition system using MFCCs in the 6th column of Tables 4.1 and 4.2. For comparison, the baseline speaker recognition system, which uses speaker information from the vocal tract, and the segmental source feature are developed on the same database. The speech signal is processed in blocks of 20 msec with a shift of 10 msec, and for every frame 39-dimensional MFCCs are computed. The performance of this system is shown in Figs 4.5(b) and 4.7(b) and in Tables 4.1 and 4.2, and it is compared with the speaker recognition performance of the Sub, Seg and Supra levels of the LP residual. The performance of the speaker recognition systems is given as percentages in all the tables of this chapter.
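For reference, a 39-dimensional MFCC front end over 20 msec frames with a 10 msec shift might be computed as below; the composition of 13 static coefficients plus deltas and delta-deltas, and the librosa-based implementation, are assumptions, since the chapter does not detail its MFCC extraction.

    import numpy as np
    import librosa

    y = np.random.randn(2 * 8000).astype(np.float32)  # stand-in for 2 s of speech at 8 kHz
    # 20 ms frames with a 10 ms shift at 8 kHz -> n_fft = 160, hop_length = 80
    mfcc = librosa.feature.mfcc(y=y, sr=8000, n_mfcc=13,
                                n_fft=160, hop_length=80, n_mels=26)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])  # (39, n_frames)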

Table 4.1: Speaker recognition performance of Subsegmental (Sub), Segmental (Seg) and Suprasegmental (Supra) information of 38 speakers from the TIMIT database. Each speaker has spoken 10 sentences; 7 are used for training and 3 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, Sub+Seg+Supra+MFCCs.)

Fig. 4.5: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.5.1 Combining Evidences from Subsegmental, Segmental and Suprasegmental Levels of LP Residual

By the way each feature is derived, the information present at the subsegmental, segmental and suprasegmental levels is different and hence may reflect different aspects of the speaker-specific source information. Comparing their recognition performance, it can be observed that the subsegmental features provide the best performance; thus the subsegmental features may carry more speaker-specific evidence than the features of the other levels. The different performances of the recognition experiments indicate the different nature of the speaker information present. In the case of identification, the confusion pattern of the features is taken as an indication of the different nature of the information present: in a confusion pattern, the principal diagonal represents correct identifications and the remaining entries represent misclassifications. Figure 4.6 shows the confusion patterns of the identification results for all the proposed features on the TIMIT database. In each case the confusion pattern is entirely different, and the decisions for both true and false identifications differ. This indicates that the features reflect different aspects of the source information, which may help in combining the evidences for further improvement of the recognition performance from the complete source perspective.
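The confusion pattern itself is a simple tally over identification decisions; a sketch with hypothetical speaker indices:

    import numpy as np

    def confusion(true_ids, predicted_ids, n_speakers):
        # Rows = true speaker, columns = identified speaker; the principal
        # diagonal counts correct identifications, the rest misclassifications.
        C = np.zeros((n_speakers, n_speakers), dtype=int)
        for t, p in zip(true_ids, predicted_ids):
            C[t, p] += 1
        return C

    print(confusion([0, 1, 2, 3, 1], [0, 1, 2, 1, 1], n_speakers=4))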

Fig. 4.6: Confusion Pattern of a) Sub, b) Seg, c) Supra of LP Residual, d) SRC = Sub+Seg+Supra, e) MFCCs and f) SRC+MFCCs Information for Identification of 38 Speakers from TIMIT Database.

Table 4.2: Speaker recognition performance of Sub, Seg and Supra information of 38 speakers from the TIMIT database. Each speaker has spoken 10 sentences; 6 are used for training and 4 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.7: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.6 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE OF LP RESIDUAL

The amplitude information is obtained from the LP residual using the Hilbert transform, which yields a 90° phase-shifted version of the LP residual; the HE represents the magnitude information of the LP residual. The HE of the LP residual is processed at the subsegmental, segmental and suprasegmental levels, and the subsegmental, segmental and suprasegmental sequences derived from the HE of the LP residual are called HE features. The speaker recognition performances for the subsegmental, segmental and suprasegmental levels are shown in Figs 4.8(a)-4.10(a), and the improvement from combining the amplitude information at all levels is shown in Figs 4.8(b)-4.10(b). The experimental results for 38 speakers of the TIMIT database are shown in Tables 4.3, 4.4 and 4.5.

Table 4.3: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker has spoken 10 sentences; 8 are used for training and 2 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.8: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.4: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker has spoken 10 sentences; 7 are used for training and 3 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.9: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.5: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers. Each speaker has spoken 10 sentences; 6 are used for training and 4 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.10: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.7 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF RP OF LP RESIDUAL

The phase information is also obtained from the LP residual using the Hilbert transform, which yields a 90° phase-shifted version of the LP residual. Since the HE represents the magnitude information of the LP residual, we can obtain the cosine of the phase of the LP residual by dividing the residual by its HE; the phase information obtained in this way is known as the RP of the LP residual. The RP of the LP residual is processed at the subsegmental, segmental and suprasegmental levels, and the sequences derived from the RP of the LP residual at these levels are called RP features. The speaker recognition performances for the subsegmental, segmental and suprasegmental levels of the RP of the LP residual are shown in Figs 4.11(a)-4.13(a), and the improvement from combining the phase information at all levels is shown in Figs 4.11(b)-4.13(b). The experimental results for 38 speakers are shown in Tables 4.6-4.8.

Table 4.6: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker has spoken 10 sentences; 8 are used for training and 2 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.11: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.7: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker has spoken 10 sentences; 7 are used for training and 3 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.12: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.8: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers from the TIMIT database. Each speaker has spoken 10 sentences; 6 are used for training and 4 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.13: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.8 COMBINING EVIDENCES FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE AND RP OF LP RESIDUAL

The procedure to compute the subsegmental, segmental and suprasegmental feature vectors from the HE and RP of the LP residual is the same as described earlier, except for the input sequence: in one case the input is the HE, and in the other it is the RP. The unipolar nature of the HE helps in suppressing the bipolar variations representing the sequence information and emphasizes only the amplitude values. The resulting amplitude information in the subsegmental, segmental and suprasegmental sequences of the LP residual is shown in Figs 4.3(a), (b) and (c). On the other hand, the residual phase represents the sequence information of the residual samples; Figs 4.4(a), (b) and (c) show the residual phase for the subsegmental, segmental and suprasegmental processing, respectively, and in all these cases the amplitude information is absent. Hence the analytic signal representation provides the amplitude and sequence information of the LP residual samples independently. In [113] it was shown that the information present in the residual phase contributes significantly to speaker recognition. We propose that the information present in the HE may also contribute well to speaker recognition. Further, as they reflect different aspects of the source information, the combined representation of both evidences may be more effective for speaker recognition. We conduct different experiments on the TIMIT database for 38 speakers.
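The chapter does not spell out the combination rule, so as one plausible sketch we assume score-level fusion: a weighted sum of the per-model log-likelihood scores produced by the HE and RP systems, with equal weights by default.

    import numpy as np

    def fuse_scores(score_lists, weights=None):
        # score_lists: one array of per-speaker log-likelihood scores per
        # evidence (e.g. HE, RP). Returns the fused score per speaker model.
        S = np.vstack(score_lists)
        w = (np.ones(len(score_lists)) / len(score_lists)
             if weights is None else np.asarray(weights))
        return w @ S

    he_scores = np.array([-51.2, -49.8, -50.6])  # hypothetical HE log-likelihoods
    rp_scores = np.array([-48.9, -49.5, -50.1])  # hypothetical RP log-likelihoods
    print(int(np.argmax(fuse_scores([he_scores, rp_scores]))))  # identified speaker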

Table 4.9: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker has spoken 10 sentences; 8 are used for training and 2 for testing. (Columns: No. of Mixtures, HE+RP of Sub, HE+RP of Seg, HE+RP of Supra, SRC = HE+RP of Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.14: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.10: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker has spoken 10 sentences; 7 are used for training and 3 for testing. (Columns: No. of Mixtures, HE+RP of Sub, HE+RP of Seg, HE+RP of Supra, SRC = HE+RP of Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.15: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.11: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual for 38 speakers. Each speaker has spoken 10 sentences; 6 are used for training and 4 for testing. (Columns: No. of Mixtures, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs.)

Fig. 4.16: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.9 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO VARYING AMOUNT OF TRAINING AND TESTING DATA

In this experiment, speaker models with 2, 4, 8, 16 and 32 component densities were trained using 8 and 6 speech utterances and tested with 2 and 4 speech utterances per speaker, respectively. The recognition performance of the residual features and of the HE and RP features for these train/test splits is shown in the corresponding figures and tables. The results show that the recognition performance increases as the number of test speech utterances per speaker increases. The largest increase in recognition percentage occurs when the amount of test speech utterances is 4, both for the residual, HE and RP features individually and for the fusion of the HE and RP features, since the fusion of both provides the complete source information.

4.10 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO DIFFERENT TYPES OF FEATURES

This section investigates the speaker recognition performance using the LP residual at the subsegmental, segmental and suprasegmental levels with respect to the number of component densities per model, where each speaker is modeled with the subsegmental, segmental and suprasegmental information of the LP residual. The performance at the subsegmental level is higher than at the other two levels.

Similarly, each speaker is modeled at the subsegmental, segmental and suprasegmental levels of the HE and RP of the LP residual. Individually, the performance of the HE and RP features is lower than that of the residual features, but the fusion of HE and RP improves the performance of the speaker recognition system [Tables 4.9-4.11 and Figs 4.14-4.16]. Therefore, the fusion of the HE and RP features provides better performance than the residual features alone. This shows the robustness of the combined HE and RP representation of the complete source, which also provides additional information to the MFCC features. From these observations we conclude that the combined representation of the HE and RP features is better than the residual features alone, indicating that the complete information present in the source can be represented by the combined HE and RP features.

4.11 COMPARATIVE STUDY OF HE FEATURES AND RP FEATURES OVER RESIDUAL FEATURES FOR THE RECOGNITION SYSTEM

We have compared the results obtained by the proposed approach with some recent works, which were discussed in detail in Section 2.6. The features and databases used in these works are different. Tables 4.12 and 4.13 show a comparative analysis of different features for speaker recognition performance.

Table 4.12: Comparison of speaker recognition performance on different databases for the LP residual at Sub, Seg and Supra levels. (Columns: Database, Sub, Seg, Supra, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs; rows: NIST-99, NIST-03, TIMIT.)

Table 4.13: Comparison of speaker recognition performance on different databases for the HE and RP of the LP residual at subsegmental, segmental and suprasegmental levels. (Columns: type of signal (HE, RP, HE+RP) at each of the Sub, Seg and Supra levels, SRC = Sub+Seg+Supra, MFCCs, SRC+MFCCs; rows: NIST-03, NIST-99, and the observed model, a GMM using the TIMIT database.)

4.12 SUMMARY

In this chapter, the speaker-specific source information in the LP residual is modeled at the subsegmental, segmental and suprasegmental levels using GMMs. For the segmental and suprasegmental levels, the LP residual is decimated by factors of 4 and 50, respectively. Experimental results show that the subsegmental, segmental and suprasegmental levels each contain speaker information. Further, combining the evidences from each level improves performance, which indicates the different nature of the speaker information at each level. Finally, the subsegmental, segmental and suprasegmental features of the LP residual, the HE of the LP residual and the RP of the LP residual are proposed for a speaker recognition system using GMMs.


BODY LANGUAGE ANIMATION SYNTHESIS FROM PROSODY AN HONORS THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE OF STANFORD UNIVERSITY BODY LANGUAGE ANIMATION SYNTHESIS FROM PROSODY AN HONORS THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE OF STANFORD UNIVERSITY Sergey Levine Principal Adviser: Vladlen Koltun Secondary Adviser:

More information

Using Web Searches on Important Words to Create Background Sets for LSI Classification

Using Web Searches on Important Words to Create Background Sets for LSI Classification Using Web Searches on Important Words to Create Background Sets for LSI Classification Sarah Zelikovitz and Marina Kogan College of Staten Island of CUNY 2800 Victory Blvd Staten Island, NY 11314 Abstract

More information

Body-Conducted Speech Recognition and its Application to Speech Support System

Body-Conducted Speech Recognition and its Application to Speech Support System Body-Conducted Speech Recognition and its Application to Speech Support System 4 Shunsuke Ishimitsu Hiroshima City University Japan 1. Introduction In recent years, speech recognition systems have been

More information

Using dialogue context to improve parsing performance in dialogue systems

Using dialogue context to improve parsing performance in dialogue systems Using dialogue context to improve parsing performance in dialogue systems Ivan Meza-Ruiz and Oliver Lemon School of Informatics, Edinburgh University 2 Buccleuch Place, Edinburgh I.V.Meza-Ruiz@sms.ed.ac.uk,

More information

Using Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing

Using Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing Using Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing Pallavi Baljekar, Sunayana Sitaram, Prasanna Kumar Muthukumar, and Alan W Black Carnegie Mellon University,

More information

Speaker Recognition For Speech Under Face Cover

Speaker Recognition For Speech Under Face Cover INTERSPEECH 2015 Speaker Recognition For Speech Under Face Cover Rahim Saeidi, Tuija Niemi, Hanna Karppelin, Jouni Pohjalainen, Tomi Kinnunen, Paavo Alku Department of Signal Processing and Acoustics,

More information

Multi-Dimensional, Multi-Level, and Multi-Timepoint Item Response Modeling.

Multi-Dimensional, Multi-Level, and Multi-Timepoint Item Response Modeling. Multi-Dimensional, Multi-Level, and Multi-Timepoint Item Response Modeling. Bengt Muthén & Tihomir Asparouhov In van der Linden, W. J., Handbook of Item Response Theory. Volume One. Models, pp. 527-539.

More information

Automatic Pronunciation Checker

Automatic Pronunciation Checker Institut für Technische Informatik und Kommunikationsnetze Eidgenössische Technische Hochschule Zürich Swiss Federal Institute of Technology Zurich Ecole polytechnique fédérale de Zurich Politecnico federale

More information

Spoofing and countermeasures for automatic speaker verification

Spoofing and countermeasures for automatic speaker verification INTERSPEECH 2013 Spoofing and countermeasures for automatic speaker verification Nicholas Evans 1, Tomi Kinnunen 2 and Junichi Yamagishi 3,4 1 EURECOM, Sophia Antipolis, France 2 University of Eastern

More information

Mathematics. Mathematics

Mathematics. Mathematics Mathematics Program Description Successful completion of this major will assure competence in mathematics through differential and integral calculus, providing an adequate background for employment in

More information

Statistical Parametric Speech Synthesis

Statistical Parametric Speech Synthesis Statistical Parametric Speech Synthesis Heiga Zen a,b,, Keiichi Tokuda a, Alan W. Black c a Department of Computer Science and Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya,

More information

Author's personal copy

Author's personal copy Speech Communication 49 (2007) 588 601 www.elsevier.com/locate/specom Abstract Subjective comparison and evaluation of speech enhancement Yi Hu, Philipos C. Loizou * Department of Electrical Engineering,

More information

CROSS-LANGUAGE INFORMATION RETRIEVAL USING PARAFAC2

CROSS-LANGUAGE INFORMATION RETRIEVAL USING PARAFAC2 1 CROSS-LANGUAGE INFORMATION RETRIEVAL USING PARAFAC2 Peter A. Chew, Brett W. Bader, Ahmed Abdelali Proceedings of the 13 th SIGKDD, 2007 Tiago Luís Outline 2 Cross-Language IR (CLIR) Latent Semantic Analysis

More information

Universiteit Leiden ICT in Business

Universiteit Leiden ICT in Business Universiteit Leiden ICT in Business Ranking of Multi-Word Terms Name: Ricardo R.M. Blikman Student-no: s1184164 Internal report number: 2012-11 Date: 07/03/2013 1st supervisor: Prof. Dr. J.N. Kok 2nd supervisor:

More information

QuickStroke: An Incremental On-line Chinese Handwriting Recognition System

QuickStroke: An Incremental On-line Chinese Handwriting Recognition System QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents

More information

Perceptual scaling of voice identity: common dimensions for different vowels and speakers

Perceptual scaling of voice identity: common dimensions for different vowels and speakers DOI 10.1007/s00426-008-0185-z ORIGINAL ARTICLE Perceptual scaling of voice identity: common dimensions for different vowels and speakers Oliver Baumann Æ Pascal Belin Received: 15 February 2008 / Accepted:

More information

Artificial Neural Networks written examination

Artificial Neural Networks written examination 1 (8) Institutionen för informationsteknologi Olle Gällmo Universitetsadjunkt Adress: Lägerhyddsvägen 2 Box 337 751 05 Uppsala Artificial Neural Networks written examination Monday, May 15, 2006 9 00-14

More information

Math 96: Intermediate Algebra in Context

Math 96: Intermediate Algebra in Context : Intermediate Algebra in Context Syllabus Spring Quarter 2016 Daily, 9:20 10:30am Instructor: Lauri Lindberg Office Hours@ tutoring: Tutoring Center (CAS-504) 8 9am & 1 2pm daily STEM (Math) Center (RAI-338)

More information

Affective Classification of Generic Audio Clips using Regression Models

Affective Classification of Generic Audio Clips using Regression Models Affective Classification of Generic Audio Clips using Regression Models Nikolaos Malandrakis 1, Shiva Sundaram, Alexandros Potamianos 3 1 Signal Analysis and Interpretation Laboratory (SAIL), USC, Los

More information

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING Yong Sun, a * Colin Fidge b and Lin Ma a a CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queensland

More information

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Notebook for PAN at CLEF 2013 Andrés Alfonso Caurcel Díaz 1 and José María Gómez Hidalgo 2 1 Universidad

More information

Perceived speech rate: the effects of. articulation rate and speaking style in spontaneous speech. Jacques Koreman. Saarland University

Perceived speech rate: the effects of. articulation rate and speaking style in spontaneous speech. Jacques Koreman. Saarland University 1 Perceived speech rate: the effects of articulation rate and speaking style in spontaneous speech Jacques Koreman Saarland University Institute of Phonetics P.O. Box 151150 D-66041 Saarbrücken Germany

More information

Truth Inference in Crowdsourcing: Is the Problem Solved?

Truth Inference in Crowdsourcing: Is the Problem Solved? Truth Inference in Crowdsourcing: Is the Problem Solved? Yudian Zheng, Guoliang Li #, Yuanbing Li #, Caihua Shan, Reynold Cheng # Department of Computer Science, Tsinghua University Department of Computer

More information