CHAPTER-4 SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL FEATURES FOR SPEAKER RECOGNITION USING GAUSSIAN MIXTURE MODEL

Speaker recognition is a pattern recognition task involving three phases: feature extraction, training and testing. In the feature extraction stage, features representing speaker information are extracted from the speech signal. In the present study, the LP residual derived from the speech data is processed in the time domain at subsegmental, segmental and suprasegmental levels and used for training and testing. In the training phase, one GMM is built for each speaker using that speaker's training data. During the testing phase, the models are scored against the test data, and from these scores a decision is made about the identity of the speaker.

4.1 THE SPEECH FEATURE EXTRACTION

Selecting the best parametric representation of the acoustic data is an important task in the design of any text-independent speaker recognition system. The acoustic features should fulfill the following requirements:

- be of low dimensionality, to allow reliable estimation of the parameters of the automatic speaker recognition system;
- be independent of the spoken text and of the recording environment.

4.1.1. PRE-PROCESSING

The task begins with pre-processing of the speech signal collected from each speaker. The speech signal, originally sampled at 16000 samples/sec, is resampled to 8000 samples/sec. In the pre-processing stage, the given speech utterance is pre-emphasized, blocked into a number of frames and windowed. For subsegmental processing of the LP residual, the frame size is 5 msec (40 samples) and the frame shift is 2.5 msec (20 samples). For segmental processing, the LP residual is decimated by a factor of 4, and the frame size is 20 msec, which again corresponds to 40 samples at the decimated rate, with a frame shift of 2.5 msec (5 samples). For suprasegmental processing, the LP residual is decimated by a factor of 50, and the frame size is 250 msec (40 samples) with a frame shift of 6.25 msec (1 sample). The pre-processing task is carried out in the sequence of steps explained below.

4.1.1.1. Pre-Emphasis

The speech samples in each frame are passed through a first-order filter to spectrally flatten the signal and make it less susceptible to finite-precision effects later in the signal processing chain. The pre-emphasis filter has the form H(z) = 1 - a z^{-1}, with 0.9 <= a <= 1.0. In fact, it is sometimes better to difference the entire speech utterance before frame blocking and windowing.
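The pre-processing chain just described (pre-emphasis, frame blocking and Hamming windowing, Sections 4.1.1.1-4.1.1.2) can be summarized in a minimal sketch. The coefficient a = 0.97 and the random stand-in signal are illustrative assumptions, not the thesis's exact settings.

```python
import numpy as np

def pre_emphasize(x, a=0.97):
    # First-order pre-emphasis filter H(z) = 1 - a*z^-1
    return np.append(x[0], x[1:] - a * x[:-1])

def frame_signal(x, frame_size, frame_shift):
    # Block a 1-D signal into overlapping frames (one frame per row)
    n_frames = 1 + (len(x) - frame_size) // frame_shift
    idx = np.arange(frame_size)[None, :] + frame_shift * np.arange(n_frames)[:, None]
    return x[idx]

speech = np.random.randn(8000)          # stand-in for 1 sec of speech at 8 kHz
emphasized = pre_emphasize(speech)
# Subsegmental framing: 5 msec frames (40 samples), 2.5 msec shift (20 samples)
frames = frame_signal(emphasized, frame_size=40, frame_shift=20)
frames = frames * np.hamming(40)        # Hamming window of Section 4.1.1.2
```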

4.1.1.2. Windowing

After pre-emphasis, each frame is windowed to minimize the signal discontinuities at the beginning and end of the frame. The window function used is the Hamming window,

W(n) = 0.54 - 0.46 cos(2πn / (N-1)), 0 <= n <= N-1,   (4.1)

where N is the number of samples in the frame.

4.1.2. Approach to Speech Feature Extraction

One of the early problems in speaker recognition systems was choosing the right speaker-specific excitation source features from the speech. GMMs or HMMs were chosen as excitation source models, as they are assumed to offer a good fit to the statistical nature of speech. Moreover, the excitation source models are often assumed to have diagonal covariance matrices, which creates the need for speech features that are by nature uncorrelated. The speaker recognition system uses subsegmental, segmental and suprasegmental features of the LP residual to represent different speaker-specific excitation source characteristics. These features are robust to channel and environmental noise. We present a brief overview of the subsegmental, segmental and suprasegmental features of the LP residual below.

4.1.2.1. Subsegmental, Segmental and Suprasegmental Features of the LP Residual

The 12th-order LP residual signal is obtained by blocking the speech into frames of 20 msec with a frame shift of 10 msec.

The LP residual contains significant speaker-specific information. It has been shown that humans can recognize people by listening to the LP residual signal [57]. This may be attributed to the speaker-specific excitation source information present at different levels, and this work views the excitation source information at those levels. In subsegmental analysis, the LP residual is blocked into frames of 5 msec (40 samples) with shifts of 2.5 msec (20 samples) to extract the dominant speaker information within each frame. In segmental analysis, the LP residual is blocked into frames of 20 msec with shifts of 2.5 msec to extract the pitch and energy of the speaker. In suprasegmental analysis, the LP residual is blocked into frames of 250 msec with shifts of 6.25 msec to extract long-term information, which carries very low-frequency characteristics of the speaker. At each level, the source-based speaker characteristics are modeled independently using a GMM, and the resulting evidences are combined to improve the speaker recognition system.

4.2 GAUSSIAN MIXTURE MODEL FOR SPEAKER RECOGNITION

The GMM is a classic parametric method well suited to modeling speaker identities, since Gaussian components have the capability of representing general speaker-dependent spectral shapes. The Gaussian classifier has been successfully employed in several text-independent speaker identification applications, since its approach is similar to using the long-term average of spectral features to represent a speaker's average vocal tract shape [101].

In a GMM, the probability distribution of the observed data takes the form [102]

p(x | λ) = Σ_{i=1}^{M} p_i b_i(x),   (4.2)

where M is the number of component densities, x is a D-dimensional observed data vector, b_i(x) is the i-th component density and p_i is the mixture weight, for i = 1, ..., M, as shown in Fig. 4.1. Each component density is a D-dimensional normal distribution,

b_i(x) = (2π)^{-D/2} |Σ_i|^{-1/2} exp{ -(1/2) (x - μ_i)' Σ_i^{-1} (x - μ_i) },   (4.3)

with mean vector μ_i and covariance matrix Σ_i. The mixture weights are positive scalars satisfying Σ_{i=1}^{M} p_i = 1, and the parameters can therefore be collectively represented as λ = {p_i, μ_i, Σ_i}, i = 1, ..., M. Each speaker in a speaker identification system is represented by one distinct GMM, referred to as the speaker model λ_i, for i = 1, 2, 3, ..., N, where N is the number of speakers.

4.2.1. Training the Model

The training procedure is similar to that followed in vector quantization: clusters are formed within the training data, each cluster is represented by a Gaussian probability density function (pdf), and the union of many such Gaussian pdfs is a GMM. The most common approach to estimating the GMM parameters is maximum likelihood estimation [103], in which the conditional probability P(X | λ) is maximized with respect to λ.

Here X = {x_1, x_2, ..., x_I} is the set of all feature vectors belonging to a particular acoustic class. Since there is no closed-form solution to the maximum likelihood estimation, the parameters are computed iteratively with the expectation-maximization (EM) algorithm [104]; reliable estimates are obtained only when enough training data is available.

Fig. 4.1: Diagram of Gaussian Mixture Model.

E-Step: Posterior probabilities are calculated for all the training feature vectors. The posterior probability of component i given the feature vector x_n of the n-th frame of the given speaker is

P(i | x_n, λ) = p_i b_i(x_n) / Σ_{k=1}^{M} p_k b_k(x_n).   (4.4)

M-Step: The M-step uses the posterior probabilities from the E-step to re-estimate the model parameters as follows:

p̂_i = (1/I) Σ_{n=1}^{I} P(i | x_n, λ),   (4.5)

μ̂_i = [ Σ_{n=1}^{I} P(i | x_n, λ) x_n ] / [ Σ_{n=1}^{I} P(i | x_n, λ) ],   (4.6)

σ̂_i² = [ Σ_{n=1}^{I} P(i | x_n, λ) x_n² ] / [ Σ_{n=1}^{I} P(i | x_n, λ) ] - μ̂_i².   (4.7)

The sequence of E-step and M-step is iterated a few times, and the EM algorithm improves on the GMM parameter estimates by checking at each iteration z for the condition

P(X | λ^{(z+1)}) > P(X | λ^{(z)}).   (4.8)

4.2.2. Testing the Model

Let the number of models representing different acoustic classes be N; hence λ_j, with j = 1, 2, ..., N, denotes the set of GMMs under consideration. For each test utterance, feature vectors x_n at time n are extracted. The probability of each model given the feature vector x_n is

P(λ_j | x_n) = P(x_n | λ_j) P(λ_j) / P(x_n).   (4.9)

Since P(x_n) is a constant and the a priori probabilities P(λ_j) are assumed to be equal, the problem reduces to finding the λ_j that maximizes

P({x_1, x_2, ..., x_I} | λ_j).   (4.10)

Here I is the number of feature vectors of the speech signal belonging to a particular acoustic class. Assuming that the frames are statistically independent, Equation 4.10 can be written as

P({x_1, x_2, ..., x_I} | λ_j) = Π_{n=1}^{I} P(x_n | λ_j).   (4.11)

Applying the logarithm to Equation 4.11 and maximizing over the N classes, we have

N_r = argmax_{1<=j<=N} Σ_{n=1}^{I} log P(x_n | λ_j),   (4.12)

where N_r is declared as the class to which the feature vectors belong. Note that {N_r, r = 1, 2, 3, ..., N} is the set of all acoustic classes.

4.3 EXPERIMENTAL RESULTS

4.3.1. Database Used for the Study

In this study we consider the identification task on the TIMIT speaker database [4]. The TIMIT corpus of read speech was designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems. TIMIT contains a total of 6300 sentences: 10 sentences spoken by each of 630 speakers from the 8 major dialect regions of the United States. We consider 380 utterances spoken by 38 of the 630 speakers for speaker recognition. Each speaker has a maximum of 10 utterances, of which 8, 7 or 6 are used for training and the remaining 2, 3 or 4 for testing.
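The training and testing procedure of Sections 4.2.1-4.2.2 can be sketched as follows. This minimal example uses scikit-learn's EM-based GaussianMixture with diagonal covariances in place of a hand-written EM loop; the synthetic feature matrices and all parameter values are illustrative assumptions only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in training data: one (num_frames x dimension) matrix per "speaker"
train_data = {s: rng.normal(loc=s, size=(500, 40)) for s in range(3)}

# Training (Section 4.2.1): one diagonal-covariance GMM per speaker, fitted by EM
models = {}
for speaker, feats in train_data.items():
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          max_iter=100, random_state=0)
    models[speaker] = gmm.fit(feats)

# Testing (Section 4.2.2): accumulate per-frame log-likelihoods and apply Eq. 4.12
test_feats = rng.normal(loc=1, size=(200, 40))
scores = {s: m.score_samples(test_feats).sum() for s, m in models.items()}
print("identified speaker:", max(scores, key=scores.get))
```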

4.3.2. Experimental Setup

In general, speaker recognition refers to both speaker identification and speaker verification. Speaker identification is the task of identifying a given speaker from a set of speakers; in closed-set speaker identification, no speaker outside the given set is used for testing. Speaker verification is the task of either accepting or rejecting the identity claim made by a speaker. In this work, experiments have been carried out on closed-set speaker identification. The system has been implemented in Matlab 7 on the Windows XP platform. We have used an LP order of 12 for all experiments. The models were trained with 2, 4, 8, 16 and 32 Gaussian mixture components for the different training and testing utterance splits of the 38 speakers. Here, the recognition rate is defined as the ratio of the number of speakers correctly identified to the total number of speakers tested.

4.3.3. Extraction of Complete Source Information of LP Residual, HE of LP Residual and RP of LP Residual at Different Levels

As the envelope of the short-time spectrum corresponds to the frequency response of the vocal tract shape, one can observe the short-time spectrum of the LP residual for different LP orders, together with the corresponding LP spectra of the signal, to determine the extent of the vocal tract information present in the LP residual. As the order of the LP analysis is increased, the LP spectrum approximates the short-time spectral envelope, and thus the vocal tract system characteristics, better.

Typically, the vocal tract system is characterized by a maximum of five resonances in the 0-4 kHz range; therefore an LP order of about 8-14 seems most appropriate for a speech signal resampled at 8 kHz. For a low order, say 3, as shown in Fig. 4.2(a), the LP spectrum picks up only the prominent resonances, and hence the residual still carries a large amount of information about the vocal tract system: the spectrum of the residual, Fig. 4.2(b), contains most of the information of the spectral envelope except for the prominent resonances. On the other hand, if a large order, say 30, is used, the LP spectrum may pick up spurious peaks, as shown in Fig. 4.2(e). These spurious peaks introduce corresponding spurious nulls in the spectrum of the inverse filter, which affect the LP residual obtained by passing the speech signal through that filter, as shown in Fig. 4.2(f).
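The effect of the LP order can be reproduced with a short experiment. The sketch below builds a synthetic "voiced" frame with two known resonances and computes the LP spectral envelope 1/|A(e^{jω})| for orders 3, 9 and 30; librosa.lpc (Burg's method) stands in for whatever LPC routine was actually used, and the resonance frequencies are arbitrary illustrative values.

```python
import numpy as np
import librosa
from scipy.signal import freqz, lfilter

sr = 8000
excitation = np.zeros(320)
excitation[::80] = 1.0                        # 100 Hz impulse train (pitch source)
poles = []
for f in (700, 1800):                         # two vocal-tract-like resonances
    z = 0.97 * np.exp(2j * np.pi * f / sr)
    poles += [z, np.conj(z)]
a_true = np.real(np.poly(poles))              # all-pole "vocal tract" filter
frame = lfilter([1.0], a_true, excitation) * np.hamming(320)

for order in (3, 9, 30):
    a = librosa.lpc(frame, order=order)       # A(z) = 1 + a1*z^-1 + ... + ap*z^-p
    w, h = freqz(1.0, a, worN=512, fs=sr)
    env_db = 20 * np.log10(np.abs(h) + 1e-12) # LP spectral envelope in dB
    # Order 3 resolves only the strongest resonance region, order 9 recovers
    # both resonances, and order 30 begins to introduce spurious peaks.
```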

Fig. 4.2: (a) LP Spectrum, (b) LP Residual Spectrum for LP Order 3, (c) LP Spectrum, (d) Residual Spectrum for LP Order 9, (e) LP Spectrum and (f) Residual Spectrum for LP Order 30.

From the above discussion it is evident that the LP residual does not contain significant features of the vocal tract shape for LP orders in the range 8-20; at these orders the LP residual contains mostly source information, present at subsegmental, segmental and suprasegmental levels. The features derived from the LP residual at these levels are called residual features. We verified that, owing to the inverse filtering, the speaker-specific information present in the LP residual is carried more by the amplitude information than by the phase information. Hence we separate the amplitude and phase information of the LP residual using the Hilbert transform: the amplitude information is contained in the HE of the LP residual and the phase information in the RP of the LP residual, at subsegmental, segmental and suprasegmental levels, as shown in Figs. 4.3 and 4.4.

Fig. 4.3: Analytic Signal Representation of a) Subsegmental, b) Segmental and c) Suprasegmental Feature Vectors using HE of LP Residual.
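As a concrete illustration of this decomposition, the following minimal sketch derives the 12th-order LP residual by inverse filtering, splits it into HE and RP via the analytic signal, and produces the decimated segmental and suprasegmental streams; the random stand-in signal replaces actual TIMIT speech.

```python
import numpy as np
import librosa
from scipy.signal import hilbert, lfilter, decimate

speech = np.random.randn(8000)               # stand-in for 1 sec of speech at 8 kHz
a = librosa.lpc(speech, order=12)            # 12th-order prediction-error filter A(z)
residual = lfilter(a, [1.0], speech)         # LP residual via inverse filtering

analytic = hilbert(residual)                 # analytic signal r(n) + j*r_h(n)
he = np.abs(analytic)                        # Hilbert envelope: amplitude information
rp = residual / (he + 1e-12)                 # residual phase: cosine of analytic phase

sub = residual                               # subsegmental level: full 8 kHz rate
seg = decimate(residual, 4)                  # segmental level: decimated to 2 kHz
supra = decimate(decimate(residual, 10), 5)  # suprasegmental level: 160 Hz, in stages
# The same HE/RP decomposition is applied to `seg` and `supra` for those levels.
```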

Fig. 4.4: Subsegmental, Segmental and Suprasegmental Feature Vectors for RP of LP Residual.

4.3.4. Effect of Model Order at Subsegmental, Segmental and Suprasegmental Levels and Amount of Training and Test Data

This section presents the performance of the speaker recognition systems based on the subsegmental, segmental and suprasegmental levels of the LP residual (complete source) with respect to the number of mixture components per model at each level and the amount of training and testing data. The recognition performance of the subsegmental, segmental and suprasegmental levels of the LP residual for varying amounts of training and test data is shown in Tables 4.1 and 4.2.

4.4 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE

The speaker recognition performance with respect to the amount of training and testing data is presented in detail in Section 4.4.1.

4.4.1. With Respect to Varying Amount of Training and Testing Data

We impose the condition that the number of test utterances is smaller than the number of training utterances, for a clearer view of performance and more reliable authentication. Under this condition, the number of training utterances is decreased and the number of test utterances increased until they reach 6 and 4 utterances respectively, out of the 10 utterances per speaker. In this experiment, one model per speaker is trained by GMM with 2, 4, 8, 16 and 32 mixture components for each training set. The various amounts of training data were taken sequentially and tested with the corresponding test utterances under the above condition. We observed the best results for the 6-4 utterance split compared with the 8-2 and 7-3 splits, as discussed in the following sections. It is also evident that when there is not enough training data, the selection of the model order becomes more important. For all amounts of training data, performance increases from 2 to 32 Gaussian components.

4.5 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL (Sub), SEGMENTAL (Seg) AND SUPRASEGMENTAL (Supra) LEVELS OF LP RESIDUAL

At the subsegmental level, speaker-specific excitation source information is extracted within one pitch cycle. At this level a GMM is used to capture the variety of speaker characteristics: blocks of 40 samples from the voiced regions of the LP residual, formed with a shift of 20 samples, are used as input, and one GMM is trained on the LP residual at the subsegmental level. Since the block size is less than a pitch period, the characteristics of the excitation source (LP residual) within one glottal pulse are captured. The performance of speaker identification at the subsegmental level of the LP residual is shown in Figs. 4.5(a) and 4.7(a) and in the 2nd column of Tables 4.1 and 4.2. At the segmental level, two to three glottal cycles of speaker-specific information are modeled; this information may be attributed to pitch and energy. Again a GMM is used: blocks of 40 samples from the voiced regions of the decimated LP residual, formed with a shift of 5 samples, are used as input, and one GMM is trained on the LP residual at the segmental level. Since the block size spans 2-3 pitch periods, the characteristics of the excitation source within 2-3 glottal pulses are captured. The performance of the speaker recognition system at the segmental level is shown in Figs. 4.5(a) and 4.7(a) and in the 3rd column of Tables 4.1 and 4.2. At the suprasegmental level, 25 to 50 glottal cycles of speaker-specific information are modeled; this information may be attributed to long-term variations.

Using this feature, a speaker can be recognized even though the speaker has aged, which is a motivation for this work. The performance of the speaker recognition system at the suprasegmental level is shown in Figs. 4.5(a) and 4.7(a) and in the 4th column of Tables 4.1 and 4.2. These results are compared with the baseline speaker recognition system using MFCCs in the 6th column of Tables 4.1 and 4.2. For comparison, the baseline speaker recognition system, using speaker information from the vocal tract together with the segmental source feature, was developed on the same database: the speech signal is processed in blocks of 20 msec with a shift of 10 msec, and for every frame 39-dimensional MFCCs are computed. The performance of this system is shown in Figs. 4.5(b) and 4.7(b) and in Tables 4.1 and 4.2, where it is compared with the speaker recognition performance of the sub, seg and supra levels of the LP residual. The performance of the speaker recognition systems is given as percentages in all the tables of this chapter.
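The 39-dimensional baseline features just described can be computed as in the following minimal sketch: 13 static MFCCs plus delta and delta-delta coefficients over 20 msec frames with a 10 msec shift at 8 kHz. The thesis does not specify its filterbank settings, so librosa defaults are assumed.

```python
import numpy as np
import librosa

speech = np.random.randn(16000)                # stand-in for 2 sec of speech at 8 kHz
mfcc = librosa.feature.mfcc(y=speech, sr=8000, n_mfcc=13,
                            n_fft=256, win_length=160, hop_length=80)
delta = librosa.feature.delta(mfcc)            # first-order time derivatives
delta2 = librosa.feature.delta(mfcc, order=2)  # second-order (acceleration)
features = np.vstack([mfcc, delta, delta2]).T  # (num_frames x 39) feature matrix
```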

Table 4.1: Speaker recognition performance (%) of Subsegmental (Sub), Segmental (Seg) and Suprasegmental (Supra) information of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 1 | 20 | 10 | 10 | 30 | 16.67
4 | 35 | 26.67 | 5 | 2 | 36.67 | 30
8 | 36.67 | 4 | 5 | 46.67 | 46.67 | 60
16 | 83 | 56.67 | 5 | 80 | 66.67 | 80
32 | 9 | 56.67 | 5 | 8 | 60 | 83.37

Fig. 4.5: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.5.1 Combining Evidences from Subsegmental, Segmental and Suprasegmental Levels of LP Residual

Owing to the way each feature is derived, the information present at the subsegmental, segmental and suprasegmental levels is different, and hence may reflect different aspects of the speaker-specific source information. Comparing their recognition performance, the subsegmental features provide the best performance; thus the subsegmental features may carry more speaker-specific evidence than the features of the other levels. The different performances of the recognition experiments indicate the different nature of the speaker information present. In the case of identification, the confusion pattern of the features is taken as an indication of the different nature of the information present: in a confusion pattern, the principal diagonal represents correct identification and the rest represents misclassification. Figure 4.6 shows the confusion patterns of the identification results for all the proposed features on the TIMIT database. In each case the confusion pattern is entirely different, and the decisions for both true and false identification differ. This indicates that the features reflect different aspects of the source information, which may help in combining the evidences for further improvement of the recognition performance from the complete-source perspective.
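The chapter combines the evidences from the individual systems; the combination rule is not spelled out at this point, so the sketch below uses one common choice, a weighted sum of per-speaker log-likelihood scores from each feature stream, with equal (illustrative, untuned) weights.

```python
import numpy as np

def fuse_and_identify(stream_scores, weights=None):
    # stream_scores: list of (num_speakers,) arrays of log-likelihood scores,
    # one per feature stream (e.g. sub, seg, supra). Returns best speaker index.
    if weights is None:
        weights = [1.0 / len(stream_scores)] * len(stream_scores)
    combined = sum(w * s for w, s in zip(weights, stream_scores))
    return int(np.argmax(combined))

# e.g. stand-in scores for 38 speakers from the sub, seg and supra systems
rng = np.random.default_rng(1)
sub, seg, supra = (rng.normal(size=38) for _ in range(3))
print(fuse_and_identify([sub, seg, supra]))
```

In practice the stream scores are usually normalized (for example to zero mean and unit variance per stream) before weighting, so that no single stream dominates the fused decision.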

Fig. 4.6: Confusion Pattern of a) Sub, b) Seg, c) Supra of LP Residual, d) SRC=Sub+Seg+Supra, e) MFCCs and f) SRC+MFCCs Information for Identification of 38 Speakers from TIMIT Database.

Table 4.2: Speaker recognition performance (%) of Sub, Seg and Supra information of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 20 | 1 | 10 | 26.66 | 50 | 36.66
4 | 4 | 20 | 1 | 46.66 | 50 | 46.66
8 | 7 | 46.66 | 10 | 76.66 | 50 | 76.66
16 | 90 | 56.66 | 6.66 | 86.66 | 66.67 | 90
32 | 96.66 | 70 | 80 | 70 | — | 8

Fig. 4.7: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.6 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE OF LP RESIDUAL

The amplitude information is obtained from the LP residual using the Hilbert transform, which yields a 90° phase-shifted version of the LP residual; the HE, the magnitude of the resulting analytic signal, represents the amplitude information of the LP residual. The HE of the LP residual is processed at subsegmental, segmental and suprasegmental levels, and the sequences derived from the HE at these levels are called HE features. The speaker recognition performances for the subsegmental, segmental and suprasegmental levels are shown in Figs. 4.8(a)-4.10(a) respectively. The improvement obtained by combining the amplitude information across levels is shown in Figs. 4.8(b)-4.10(b) respectively. The experimental results are shown in Tables 4.3, 4.4 and 4.5 for the 38 speakers of the TIMIT database.

Table 4.3: Speaker recognition performance (%) of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 26.67 | 46.67 | 40 | 3 | 3 | —
4 | 46.67 | 30 | 60 | 50 | 6 | —
8 | 3 | 30 | 0 | 50 | 5 | 56.33
16 | 50 | 40 | 50 | 5 | 56.33 | —
32 | 70 | 5 | 6 | 60 | 64 | —

Fig. 4.8: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.4: Speaker recognition performance (%) of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 1 | 6.67 | 20 | 1 | 3 | 2
4 | 36.67 | 26.67 | 40 | 50 | 40 | —
8 | 36.67 | 40 | 46.67 | 5 | 46.67 | —
16 | 60 | 4 | 70 | 5 | 70 | —
32 | 70 | 76.67 | 6.67 | 80 | 66.67 | 80

Fig. 4.9: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.5: Speaker recognition performance (%) of Sub, Seg and Supra information of HE of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 20 | 16.67 | 0 | 2 | 26.67 | 50
4 | 36.67 | 16.67 | 3 | 50 | 76.67 | —
8 | 46.67 | 50 | 46.67 | 56.67 | 60 | —
16 | 5 | 5 | 80 | 5 | 66.67 | —
32 | 80 | 4 | 66.67 | 6 | 36.67 | —

Fig. 4.10: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.7 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF RP OF LP RESIDUAL

The phase information is also obtained from the LP residual using the Hilbert transform. Since the HE represents the magnitude of the analytic signal of the LP residual, dividing the LP residual by its HE yields the cosine of the analytic phase; the phase information obtained from the LP residual in this way is known as the RP of the LP residual. The RP of the LP residual is processed at subsegmental, segmental and suprasegmental levels, and the sequences derived from the RP at these levels are called RP features. The speaker recognition performances for the subsegmental, segmental and suprasegmental levels of the RP of the LP residual are shown in Figs. 4.11(a)-4.13(a) respectively. The improvement obtained by combining the phase information across levels is shown in Figs. 4.11(b)-4.13(b). The experimental results are shown in Tables 4.6-4.8 for the 38 speakers.

Table 4.6: Speaker recognition performance (%) of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 10 | 20 | 6.67 | 4 | 3 | 1
4 | 36.67 | 23 | 26.67 | 50 | 6 | 56
8 | 30 | 66.67 | 5 | 50 | 50 | 56
16 | 76.67 | 5 | 76.67 | — | — | —
32 | 80 | 7 | — | 80 | 60 | 84.67

Fig. 4.11: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.7: Speaker recognition performance (%) of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 5 | 1 | 0 | 1 | 3 | 6
4 | 4 | 30 | 30 | 50 | 60 | —
8 | 56.67 | 40 | 40 | 5 | 70 | —
16 | 66 | 36.67 | 36.67 | 5 | 70 | —
32 | 76.67 | 30 | 30 | 66.67 | 83 | —

Fig. 4.12: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.8: Speaker recognition performance (%) of Sub, Seg and Supra information of RP of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 40 | 36.67 | 0 | 46.67 | 26.67 | 50
4 | 60 | 4 | 70 | 50 | 76.67 | 3
8 | 5 | 60 | 56.67 | 60 | 56.67 | 40
16 | 66.67 | 5 | 66.67 | 70 | 4 | 36.67
32 | 6 | 36.67 | — | — | — | —

Fig. 4.13: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.8 COMBINING EVIDENCES FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE AND RP OF LP RESIDUAL

The procedure for computing subsegmental, segmental and suprasegmental feature vectors from the HE and RP of the LP residual is the same as described earlier, except for the input sequence: in one case the input is the HE, and in the other it is the RP. The unipolar nature of the HE helps in suppressing the bipolar variations that represent sequence information, emphasizing only the amplitude values. The resulting amplitude information in the subsegmental, segmental and suprasegmental sequences of the LP residual is shown in Figs. 4.3(a), (b) and (c). The residual phase, on the other hand, represents the sequence information of the residual samples: Figs. 4.4(a), (b) and (c) show the residual phase for subsegmental, segmental and suprasegmental processing respectively, and in all these cases the amplitude information is absent. Hence the analytic signal representation provides the amplitude and sequence information of the LP residual samples independently. In [113] it was shown that the information present in the residual phase contributes significantly to speaker recognition. We propose that the information present in the HE may also contribute well to speaker recognition. Further, as the two reflect different aspects of the source information, a combined representation of both evidences may be more effective for speaker recognition. We conduct different experiments on the TIMIT database for 38 speakers.

Table 4.9: Speaker recognition performance (%) of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of mixtures | HE+RP of Sub | HE+RP of Seg | HE+RP of Supra | SRC = HE+RP of Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 1 | 26.67 | 6.67 | 20 | 3 | 2
4 | 60 | 4 | 6.67 | 56.67 | 50 | 56.67
8 | 3 | 7 | 3 | 56.67 | 5 | 60
16 | 6 | 66.67 | 46.67 | 8 | 5 | 86.67
32 | 66.67 | 5 | 30 | 76.67 | 60 | 76.67

Fig. 4.14: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.10: Speaker recognition performance (%) of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of mixtures | HE+RP of Sub | HE+RP of Seg | HE+RP of Supra | SRC = HE+RP of Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 20 | 1 | 20 | 20 | 3 | 2
4 | 46.67 | 30 | 6.67 | 46.67 | 50 | 46.67
8 | 5 | 60 | 66.67 | 5 | 70 | 7
16 | 7 | 6.67 | 70 | 5 | 76.67 | 90
32 | — | 75 | 6.67 | 86.67 | 66.67 | 93.37

Fig. 4.15: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

Table 4.11: Speaker recognition performance (%) of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of mixtures | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
2 | 26.67 | 36.67 | 6.67 | 26.67 | 26.67 | 50
4 | 50 | 4 | 60 | 50 | 76.67 | 60
8 | 3 | 6.67 | 6 | 56.67 | 60 | —
16 | 8 | — | 40 | 8 | 5 | 86.67
32 | 9 | — | 4 | 9 | 6 | 98.67

Fig. 4.16: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.

4.9 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO VARYING AMOUNT OF TRAINING AND TESTING DATA

In this experiment, speaker models with 2, 4, 8, 16 and 32 component densities were trained using 8, 7 and 6 speech utterances per speaker and tested with 2, 3 and 4 speech utterances respectively. The recognition performance of the residual features and of the HE and RP features for the different amounts of training and test data is shown in Figs. 4.5-4.16 and Tables 4.1-4.11. The recognition performance increases as the number of test speech utterances per speaker increases: the largest increase in recognition rate occurs when the number of test utterances is 4, both for the residual, HE and RP features individually (Tables 4.1-4.8 and Figs. 4.5-4.13) and for the fusion of HE and RP features, since fusing both provides the complete source information (Tables 4.9-4.11 and Figs. 4.14-4.16).

4.10 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO DIFFERENT TYPES OF FEATURES

We investigate the speaker recognition performance using the LP residual at subsegmental, segmental and suprasegmental levels with respect to the number of component densities per model, where each speaker is modeled on the subsegmental, segmental and suprasegmental information of the LP residual. The performance at the subsegmental level is higher than at the other two levels. Similarly, each speaker is modeled on the subsegmental, segmental and suprasegmental levels of the HE and RP of the LP residual.

Individually, the performance of the HE and RP features is lower than that of the residual features, but the fusion of HE and RP improves the performance of the speaker recognition system (Tables 4.9-4.11 and Figs. 4.14-4.16). Therefore the fusion of HE and RP features provides better performance than the residual features alone. This shows the robustness of the combined HE and RP representation of the complete source, which provides information additional to the MFCC features. From this observation we conclude that the combined representation of the HE and RP features is better than the residual features alone, indicating that the complete information present in the source can be represented by the combined HE and RP features.

4.11 COMPARATIVE STUDY OF HE FEATURES AND RP FEATURES OVER RESIDUAL FEATURES FOR RECOGNITION SYSTEM

We have compared the results obtained by the proposed approach with some recent works, which were discussed in detail in Section 2.6; these works use different features and databases. Tables 4.12 and 4.13 show a comparative analysis of different features for speaker recognition performance.

Table 4.12: Comparison of speaker recognition performance (%) on different databases for the LP residual at Sub, Seg and Supra levels.

Database | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
NIST-99 | 64 | 60 | 57 | 76 | 87 | 90
NIST-03 | 31 | 58 | 13 | 67 | 66 | 79
TIMIT | 96 | 56.67 | 1 | 86.67 | 66.67 | 90

Table 4.13: Comparison of speaker recognition performance (%) on different databases for HE and RP of LP residual at subsegmental, segmental and suprasegmental levels.

Database | Type of signal | Sub | Seg | Supra | SRC = Sub+Seg+Supra | MFCCs | SRC+MFCCs
NIST-99 | HE | 44 | 56 | 8 | 71 | 87 | 94
NIST-99 | RP | 49 | 69 | 17 | 73 | 87 | 93
NIST-99 | HE+RP | 64 | 78 | 22 | 88 | 87 | 98
NIST-03 | HE | 32 | 39 | 7 | 54 | 66 | 76
NIST-03 | RP | 23 | 51 | 14 | 56 | 66 | 77
NIST-03 | HE+RP | 48 | 59 | 17 | 72 | 66 | 83
Observed model (GMM, TIMIT) | HE | 80 | 76.67 | 20 | 80 | 66.67 | 80
Observed model (GMM, TIMIT) | RP | 80 | 76.67 | 6.67 | 80 | 6 | 84.67
Observed model (GMM, TIMIT) | HE+RP | 9 | 75 | 46.67 | 9 | 6 | 98.67

4.12 SUMMARY

In this chapter, the speaker-specific source information has been modeled from the LP residual at subsegmental, segmental and suprasegmental levels using GMMs, with the segmental- and suprasegmental-level information decimated by factors of 4 and 50 respectively. Experimental results show that the subsegmental, segmental and suprasegmental levels all contain speaker information. Further, the performance improvement obtained by combining the evidences from each level indicates the different nature of the speaker information at each level. Finally, subsegmental, segmental and suprasegmental features of the LP residual, of the HE of the LP residual and of the RP of the LP residual are proposed for speaker recognition using GMMs.