CHAPTER 4 IMPROVING THE PERFORMANCE OF A CLASSIFIER USING UNIQUE FEATURES

4.1 INTRODUCTION

In classification tasks, the error rate is proportional to the commonality among classes. Conventional GMM-based modeling techniques fail to capture the unique features of a class, so classification accuracy can be improved if the modeling technique is able to capture the unique features of each class. Given the training data of one class, its corresponding model, and the model of a possibly confusing class, the log-likelihoods of the training data produced by these two models can be assumed to follow two different Gaussian distributions. Under this assumption, the amount of overlap between the likelihood Gaussians can be attributed to the commonality (the number of common features) between the classes. In this chapter, the product of the likelihood Gaussians is used to identify the most probable confusing features between two classes. The training technique then de-emphasizes these common features by removing them from the training data. In addition, a separate model is built for the common features so that it can be used to identify confusing features during testing. By eliminating the confusing features during testing, evidence is derived only from the feature vectors that are unique to a class. The proposed approach was evaluated on a speaker identification task and a language identification task using the NTIMIT speech corpus and the OGI-MLTS telephone speech corpus, respectively. The results are compared with the performance of a conventional GMM-based classifier as presented by Zissman (1996). In this work, GMMs are used for both the speaker identification task and the language identification task.

4.2 GAUSSIAN MIXTURE MODEL

Gaussian Mixture Models (GMMs) (Reynolds & Rose 1995) are popular statistical models because of their ability to form good approximations of data distributions and the ease of the computations involved. A GMM is a linear combination of multiple Gaussian distributions: a Gaussian mixture density is a weighted sum of M component densities,

p(\mathbf{x} \mid \lambda) = \sum_{i=1}^{M} w_i \, b_i(\mathbf{x})

where \mathbf{x} is a D-dimensional feature vector, b_i(\mathbf{x}) is the i-th mixture component density, and w_i is the i-th mixture weight, i = 1, 2, ..., M. Each component density is a D-variate Gaussian of the form

b_i(\mathbf{x}) = \frac{1}{(2\pi)^{D/2} \lvert \Sigma_i \rvert^{1/2}} \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_i)^{T} \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i) \right)

with mean vector \boldsymbol{\mu}_i and covariance matrix \Sigma_i. The mixture weights must satisfy the constraint

\sum_{i=1}^{M} w_i = 1.
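To make the density above concrete, the following minimal sketch evaluates a Gaussian mixture density as the weighted sum of its component densities. The weights, means, and covariances shown are illustrative values only, not parameters from the experiments in this chapter.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_density(x, weights, means, covs):
    """Weighted sum of M Gaussian component densities: p(x) = sum_i w_i * b_i(x)."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Illustrative 2-component mixture over 3-dimensional features (D = 3, M = 2).
weights = np.array([0.4, 0.6])               # mixture weights, must sum to 1
means   = [np.zeros(3), np.ones(3)]          # component mean vectors
covs    = [np.eye(3), 0.5 * np.eye(3)]       # component covariance matrices

x = np.array([0.2, -0.1, 0.4])               # one D-dimensional feature vector
print(gmm_density(x, weights, means, covs))
```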

The complete Gaussian mixture density is parameterized by the mean vectors, covariance matrices, and mixture weights of all the component densities. These parameters are collectively represented as

\lambda_n = \{ w_i, \boldsymbol{\mu}_i, \Sigma_i \}, \quad i = 1, \ldots, M, \quad n = 1, \ldots, N

where N is the number of speakers. The training of a GMM, given a collection of training feature vectors, is generally carried out with the Expectation-Maximization (EM) algorithm (Dempster et al. 1977), which estimates the model parameters under the Maximum Likelihood (ML) criterion. EM is an iterative method that alternates between an expectation step, which computes the expectation of the log-likelihood with respect to the current estimate of the GMM parameters, and a maximization step, which computes the parameters that maximize the expected log-likelihood found in the expectation step. These parameters are then used as the current estimate in the next expectation step. An example of GMM training for speaker recognition can be found in Reynolds (1995b). The EM algorithm iteratively refines the GMM parameters so as to monotonically increase the likelihood of the estimated model for the observed feature vectors; generally, five to ten iterations are sufficient for parameter convergence.

One advantage of using the GMM as the likelihood function is that it is computationally inexpensive and is based on a well-understood statistical model. Another is that a large number of Gaussian components can model the diverse sound classes or clusters that make up the underlying distribution of acoustic observations from a speaker. For text-independent tasks the GMM is insensitive to the temporal aspects of the speech: only the underlying distribution of acoustic observations from a speaker is modeled. The latter is also a disadvantage, because higher-level information about the speaker conveyed in the temporal structure of the speech signal is not used by this approach.
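As a rough illustration of this EM-based training and likelihood scoring, the sketch below fits one diagonal-covariance GMM per class with scikit-learn (whose GaussianMixture runs EM internally) and scores a test utterance against each model. The data, class labels, mixture count, and iteration limit are placeholders, not the experimental configuration used in this chapter.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder training data: one matrix of D-dimensional feature vectors per class.
train_data = {"spk1": rng.normal(0.0, 1.0, size=(500, 39)),
              "spk2": rng.normal(0.5, 1.2, size=(500, 39))}

models = {}
for name, X in train_data.items():
    # EM training of a diagonal-covariance GMM; 5-10 iterations usually suffice, as noted above.
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          max_iter=10, random_state=0)
    models[name] = gmm.fit(X)

# Sum the per-frame log-likelihoods of a test utterance under each model and pick the best class.
test_utt = rng.normal(0.1, 1.0, size=(300, 39))
scores = {name: m.score_samples(test_utt).sum() for name, m in models.items()}
print(max(scores, key=scores.get))
```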

4.3 IDENTIFYING COMMON FEATURES

As discussed earlier, when two classes share common features, classification errors result. The common features of two classes occupy the same region of the feature space. During conventional GMM training, the model parameters of each class are estimated to maximize the likelihood without considering any information from the other class, so the Gaussians of the two models overlap. As a result, an unseen common feature vector may obtain a better likelihood from the confusing class. The overlap is proportional to the number of common features: as the overlap increases, the number of feature vectors likely to obtain a better likelihood from the confused class also increases. Since a common feature may give a better likelihood for the confused model, the proposed technique tries to identify the most probable common features in the likelihood space.

Consider the feature vectors X_i and X_j of two different classes C_i and C_j, and let λ_i and λ_j be the models of C_i and C_j, respectively. Let the likelihoods of the feature vectors of class C_i for the models λ_i and λ_j be p(X_i | λ_i) and p(X_i | λ_j), respectively. These likelihoods are assumed to be normally distributed in the likelihood space with suitable parameters; let the two Gaussians be N_ii(μ_ii, σ_ii²) and N_ji(μ_ji, σ_ji²). Similarly, for the feature vectors of class C_j, the likelihood Gaussians are N_jj(μ_jj, σ_jj²) and N_ij(μ_ij, σ_ij²). As discussed above, the common features may give a better likelihood for the confused class, and the overlap in the feature space is reflected in the likelihood space. The overlapped region between N_ji and N_ii indicates that a subset of X_i gives likelihoods in the same range for the models λ_i and λ_j.

As the overlap increases, the number of feature vectors of X_i that give likelihoods in the same range for both models increases. This in turn increases the probability of an unseen common feature vector belonging to class C_i obtaining a better likelihood from the confused class model λ_j. Therefore, the overlap can be used as a measure of the number of features that class C_i shares with class C_j.

A method to quantify the amount of overlap between two Gaussians was proposed in (Nagarajan & 2006) and was used in (Nagarajan & 2007) to calculate the amount of bias. The same method is used here to calculate the commonality between two classes, and the mean μ_k of the product of the Gaussians is used to identify the most probable confusing features. The details of the method presented in (Nagarajan & 2006) are given below for clarity; here it is used to identify the common feature vectors.

Let N_ii(μ_ii, σ_ii²) and N_ji(μ_ji, σ_ji²) be the likelihood Gaussians defined above, and let N_k(μ_k, σ_k²) be their product¹:

N_k(\mu_k, \sigma_k^2) = N_{ii}(\mu_{ii}, \sigma_{ii}^2) \cdot N_{ji}(\mu_{ji}, \sigma_{ji}^2)    (4.6)

For this product of Gaussians, the mean μ_k and variance σ_k² are given by

\mu_k = \frac{\mu_{ii}\,\sigma_{ji}^2 + \mu_{ji}\,\sigma_{ii}^2}{\sigma_{ii}^2 + \sigma_{ji}^2}, \qquad \sigma_k^2 = \frac{\sigma_{ii}^2\,\sigma_{ji}^2}{\sigma_{ii}^2 + \sigma_{ji}^2}

¹ In the present study, N_k is not normalized, as this does not affect its use in Equation (4.11).
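This construction can be sketched as follows: per-frame log-likelihoods of the training data X_i are computed under both models, one-dimensional Gaussians are fitted to them, and μ_k and σ_k² are obtained from the product formulas above. The helper assumes scikit-learn-style models with a score_samples method, as in the earlier sketch; it is an illustration under those assumptions, not the exact implementation used in this work.

```python
import numpy as np

def likelihood_gaussian(loglik):
    """Fit a 1-D Gaussian N(mu, sigma^2) to a set of per-frame log-likelihood values."""
    return float(np.mean(loglik)), float(np.var(loglik))

def product_gaussian(mu_a, var_a, mu_b, var_b):
    """Mean and variance of the (unnormalised) product of two 1-D Gaussians."""
    mu_k  = (mu_a * var_b + mu_b * var_a) / (var_a + var_b)
    var_k = (var_a * var_b) / (var_a + var_b)
    return mu_k, var_k

def product_params(model_i, model_j, X_i):
    """mu_k, sigma_k^2 for the pair (N_ii, N_ji) built from class C_i's training data X_i."""
    mu_ii, var_ii = likelihood_gaussian(model_i.score_samples(X_i))   # N_ii: own model
    mu_ji, var_ji = likelihood_gaussian(model_j.score_samples(X_i))   # N_ji: competing model
    return product_gaussian(mu_ii, var_ii, mu_ji, var_ji)
```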

In order to quantify the amount of overlap between two different Gaussians, a ratio O_ij is defined in Equation (4.9) in terms of the quantities introduced in Equations (4.10) and (4.11). Using (4.10) and (4.11), Equation (4.9) can be rewritten in a simplified form. If σ_ii = σ_ji, Equation (4.9) reduces to a simpler expression.

However, in this case the overlap O_ij is expected to be equal to 1. To achieve this, Equation (4.9) is further normalized, and the resulting ratio is used as the measure of the amount of overlap between two Gaussians.

The mean μ_k plays a major role in the identification of the confusing features. As can be inferred from Figure 4.1, the mean of the product of the Gaussians approximates the point at which the likelihood Gaussians intersect. The feature vectors of X_i can be classified into four types:

1. Feature vectors that give high likelihoods for both λ_i and λ_j
2. Feature vectors that give low likelihoods for both λ_i and λ_j
3. Feature vectors that give a lower likelihood for λ_j and a higher likelihood for λ_i
4. Feature vectors that give a higher likelihood for λ_j and a lower likelihood for λ_i

Figure 4.1 An illustration of the overlap between likelihood Gaussians. (a) The likelihood distributions (N_ii and N_ji) of the utterances of class C_i for the models λ_i and λ_j. (b) The likelihood distributions (N_jj and N_ij) of the utterances of class C_j for the models λ_j and λ_i.

The feature vectors of X_i belonging to the first case are distributed on the right side of N_ii and N_ji, and the feature vectors of the second case are distributed on the left side of both Gaussians. Feature vectors of the third case are distributed in the left tail of N_ji and the right tail of N_ii. The feature vectors of these three cases cause no problem for classification, but the fourth case causes a serious problem. The only region where the feature vectors of the fourth case can be present is the right side of N_ji and the left side of N_ij, where the Gaussians overlap. This region implies that there is a subset whose likelihoods, given λ_i, fall in the overlapped region, and another subset whose likelihoods, given λ_j, fall in the overlapped region. As the overlap increases, the probability that a feature vector falls in this region also increases. When a feature vector b gets likelihoods in this range, corresponding to the overlapped region, there is a probability that P(b | λ_j) is greater than P(b | λ_i). It therefore follows that the feature vectors in this region are likely to be the confusing feature vectors. The feature vectors corresponding to the overlapped region under N_ji that lie to the right of μ_k get a better likelihood than the feature vectors of X_i that are distributed in the overlapped region under N_ii to the left of μ_k. Therefore, the probability of finding a confusing feature vector in the distribution N_ji increases as the likelihood increases beyond μ_k, and, similarly, the probability of finding a confusing feature vector in the distribution N_ii increases as the likelihood decreases beyond μ_k. In accordance with this logic, the feature vectors b for which P(b | λ_i) is less than μ_k − ε or P(b | λ_j) is greater than μ_k + ε are classified as confusing feature vectors, where ε is a small margin discussed in the next section.
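A minimal sketch of this selection rule is given below, assuming per-frame log-likelihoods from scikit-learn-style models, the product-Gaussian mean μ_k from the previous sketch, and a margin eps standing in for ε; it illustrates the criterion rather than reproducing the exact implementation used in this work.

```python
import numpy as np

def split_confusing(model_i, model_j, X_i, mu_k, eps):
    """Split X_i into unique and confusing frames around the product-Gaussian mean mu_k.

    A frame is treated as confusing when its log-likelihood under its own model
    lambda_i falls below mu_k - eps, or its log-likelihood under the competing
    model lambda_j rises above mu_k + eps.
    """
    ll_own   = model_i.score_samples(X_i)      # log P(x | lambda_i)
    ll_other = model_j.score_samples(X_i)      # log P(x | lambda_j)
    confusing = (ll_own < mu_k - eps) | (ll_other > mu_k + eps)
    return X_i[~confusing], X_i[confusing]     # unique frames, confusing frames
```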

4.4 TRAINING USING DISCRIMINATIVE FEATURES

Discriminative training here means modeling a class with the feature vectors that are unique to it with respect to another class. A feature that is common between C_i and C_j may be exactly the feature that discriminates C_i from some other class C_k. Consider two classes C_i and C_j, their models λ_i and λ_j, and their training data X_i and X_j. Using the technique described in the previous section, the confusing feature vectors of both classes with respect to each other can be identified. If the models λ_ij and λ_ji are trained with the unique feature vectors of classes C_i and C_j respectively, then each model captures only the unique feature vectors. If a confusing feature vector a of class C_i is given to λ_ij, the value P(a | λ_ij) will be very low; since λ_ji was also modeled using only the unique feature vectors of C_j, the probability density P(a | λ_ji) will also be very low. Thus the common features are de-emphasized.

Classification can be improved further if, during testing, the classifier makes its decision based on the discriminating features alone. To identify a common feature vector during testing, two separate models λ'_ij and λ'_ji are generated using the feature vectors of C_i confused with C_j and the feature vectors of C_j confused with C_i, respectively. If a feature vector a in the test set gets a higher likelihood for either λ'_ij or λ'_ji, it is removed. If the confusing feature vectors are modeled properly, the technique ensures that evidence is derived only from the discriminative feature vectors.

The following step-by-step procedure is used to apply the technique. Consider N different classes C_1, C_2, ..., C_N, and let λ_i^(m) be the acoustic models of class C_i, where m is the number of mixtures, m = 1, 2, ..., M. For each class, M models with varying numbers of mixtures are pre-generated. For the N classes, the number of possible pairs is N(N−1)/2; this value can be very high for a speaker identification task and increases the computational time of the training algorithm significantly, so for each class C_i only the K most confusing speakers are considered. For each pair (C_i, C_j), the confusing feature vectors of both classes are removed and four models λ_ij, λ_ji, λ'_ij and λ'_ji are generated. During testing, the confusing feature vectors are identified and excluded from classification, i.e., only the unique features are considered when making a decision. A sketch of this training loop is given after the steps below.

1) For each pair (C_i, C_j), identify the confusing feature vectors of both classes.

2) Remove the confusing feature vectors from the training data using the technique described above. For class C_i, remove all feature vectors a for which P(a | λ_i) < μ_k − ε or P(a | λ_j) > μ_k + ε; similarly, for class C_j, remove all feature vectors a for which P(a | λ_j) < μ_k − ε or P(a | λ_i) > μ_k + ε. The value ε is added so that only the most probable confusing features (towards the tail ends of the likelihood Gaussians) are removed.

3) Generate the four models λ_ij, λ_ji, λ'_ij and λ'_ji.
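The sketch below shows one plausible shape of this pairwise training loop, reusing the helpers from the earlier sketches (product_params and split_confusing). The mixture count, the margin eps, and the model names are placeholders, and both the unique and confusing subsets are assumed to be non-empty.

```python
from sklearn.mixture import GaussianMixture

def train_gmm(X, n_components=8):
    """Diagonal-covariance GMM trained with EM, as in the earlier sketch."""
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           max_iter=10, random_state=0).fit(X)

def train_pair(X_i, X_j, eps=0.5, n_components=8):
    """Train the four pair models for classes C_i and C_j (steps 1-3 above)."""
    lam_i, lam_j = train_gmm(X_i, n_components), train_gmm(X_j, n_components)

    # Step 1: locate the most probable confusing frames of each class.
    mu_k_i, _ = product_params(lam_i, lam_j, X_i)
    mu_k_j, _ = product_params(lam_j, lam_i, X_j)
    uniq_i, conf_i = split_confusing(lam_i, lam_j, X_i, mu_k_i, eps)
    uniq_j, conf_j = split_confusing(lam_j, lam_i, X_j, mu_k_j, eps)

    # Steps 2-3: retrain on the unique frames, and model the confusing frames separately
    # so they can be recognised and discarded at test time.
    lam_ij, lam_ji = train_gmm(uniq_i, n_components), train_gmm(uniq_j, n_components)
    lam_ij_conf, lam_ji_conf = train_gmm(conf_i, n_components), train_gmm(conf_j, n_components)
    return lam_ij, lam_ji, lam_ij_conf, lam_ji_conf
```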

For each speaker, the neighbors (the K most confusing speakers) are found by using the training data of the class to test the models and assigning the K most frequently confused competing speakers as neighbors. In this procedure, the value ε added to μ_k prevents the risk of removing frequently occurring features of a class. This is especially a problem when the overlap between the Gaussians is very high; in that case the confusion error cannot be avoided altogether, but the performance can still be enhanced by removing the feature vectors that are most likely to be confused. In this chapter, the technique is applied to a speaker identification task and a language identification task, but the same technique can be used for any GMM-based classification task.

4.5 EXPERIMENTAL SETUP AND PERFORMANCE ANALYSIS - SPEAKER IDENTIFICATION TASK

The NTIMIT speech corpus, collected for speech recognition, is used for both training and testing. The NTIMIT database, developed by NYNEX, consists of the same speech as the TIMIT database recorded over local and long-distance telephone loops. Each sentence was played through an artificial mouth coupled to a carbon-button telephone handset via a telephone test frame designed to approximate the acoustic coupling between the human mouth and the telephone handset; the speech was then transmitted to a local or long-distance central office and looped back for recording. Both TIMIT and NTIMIT contain 6300 utterances; each speaker has 10 utterances, each about 3 seconds long. Forty female speakers were considered for this task because the performance on female speakers was found to be lower than that on male speakers. To maintain homogeneity in training and testing across the different speakers, the first 8 utterances of each speaker were used for training and the last two for testing, giving a total of 320 training utterances and 80 test utterances. MFCCs (13 static + 13 dynamic + 13 acceleration coefficients) are used as features for this task.
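For reference, a 39-dimensional front end of this kind (13 static MFCCs with their delta and acceleration coefficients) could be computed as below. librosa is one possible implementation, and the sampling rate and other parameters shown are assumptions rather than the exact settings used in these experiments.

```python
import librosa
import numpy as np

def mfcc_39(path, sr=8000, n_mfcc=13):
    """13 static MFCCs plus their delta and acceleration coefficients (39-D frames)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc   = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta  = librosa.feature.delta(mfcc)              # dynamic (delta) coefficients
    delta2 = librosa.feature.delta(mfcc, order=2)     # acceleration coefficients
    return np.vstack([mfcc, delta, delta2]).T         # one 39-D feature vector per frame
```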

For each of the pairs, the common features are identified and removed as explained above, and four models per pair are generated. As noted earlier, a feature that is common between classes C_i and C_j may be the very feature that discriminates C_i from another class, so a two-level testing procedure is adopted. The proposed technique, in its present form, can be used for pairwise testing only, but it is extended to 40 speakers by performing the testing in two levels. In the first level, conventional testing is performed for a given number of mixtures; the two best-scoring speakers from this level are then taken as the pair for the second-level test using the proposed technique.
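A sketch of this two-level decision rule is given below, assuming dictionaries holding the conventional per-class models, the pairwise unique-feature models λ_ij, and the confusing-feature models λ'_ij from the earlier training sketch. The frame-dropping test is one plausible reading of the removal rule described above, not the definitive implementation.

```python
import numpy as np

def identify(test_X, base_models, pair_models, conf_models):
    """Two-level testing: conventional GMM scoring, then a pairwise re-test on unique features."""
    # Level 1: conventional scoring; the two best-scoring classes form the confusable pair.
    scores = {c: m.score_samples(test_X).sum() for c, m in base_models.items()}
    c_i, c_j = sorted(scores, key=scores.get, reverse=True)[:2]

    # Level 2: per-frame log-likelihoods under the unique-feature and confusing-feature models.
    ll_uniq_i = pair_models[(c_i, c_j)].score_samples(test_X)   # lambda_ij
    ll_uniq_j = pair_models[(c_j, c_i)].score_samples(test_X)   # lambda_ji
    ll_conf_i = conf_models[(c_i, c_j)].score_samples(test_X)   # lambda'_ij
    ll_conf_j = conf_models[(c_j, c_i)].score_samples(test_X)   # lambda'_ji

    # Drop frames that the confusing-feature models explain better than the unique-feature models.
    keep = np.maximum(ll_uniq_i, ll_uniq_j) > np.maximum(ll_conf_i, ll_conf_j)

    # Decide between the two candidates using the remaining (unique) frames only.
    return c_i if ll_uniq_i[keep].sum() >= ll_uniq_j[keep].sum() else c_j
```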

When the performance of this system is compared with that of a conventional GMM-based classification system, in which the models capture the common features along with the unique features, the following can be inferred from Table 4.1. The overall performance increases for most of the model complexities. The improvement differs across topologies because the amount of overlap changes as the number of mixtures is varied. There is a small reduction in performance for 8 and 9 mixtures, caused by frequently occurring feature vectors being misclassified as confusing feature vectors and removed.

Table 4.1 Speaker identification performance (in %) using the conventional GMM and the proposed technique

No. of mixtures    Conventional GMM    Proposed technique
 5                 57.50               62.50
 6                 55.00               56.25
 7                 52.50               54.75
 8                 64.75               62.50
 9                 62.50               60.00
10                 65.00               68.50
11                 64.75               66.25
12                 62.50               62.50

Although recent developments in speaker identification report much higher performance, the intention of this chapter is to show that classification performance can be improved by training on the unique features of a class and by considering only the unique features when making a decision during testing.

4.6 EXPERIMENTAL SETUP AND PERFORMANCE ANALYSIS - LANGUAGE IDENTIFICATION TASK

The Oregon Graduate Institute Multi-Language Telephone Speech (OGI MLTS) corpus, containing spontaneous utterances in 11 languages, is used for this task. Of the 11 languages, English and French were considered. Thirty utterances of about 45 seconds each were used for training the language models; for testing, 99 English utterances and 49 French utterances were used. Cepstral-mean-subtracted MFCC (13 static + 13 dynamic + 13 acceleration) features were used for training and testing. The proposed technique was applied to this setup as explained in the previous sections. The number of mixtures was varied for both models, and the performance is compared with that of the conventional GMM-based modeling technique. The results of the comparison are tabulated in Table 4.2.

Table 4.2 Language identification performance (in %) using the conventional GMM and the proposed technique

No. of mixtures    Conventional GMM    Proposed technique
 16                82.22               84.24
 32                84.16               86.41
 60                84.86               88.20
 90                84.47               87.97
120                85.56               87.81
130                85.87               87.50

From Table 4.2, it can be seen that the proposed technique improves the performance in all cases.

4.7 SUMMARY

In this chapter, a discriminative GMM training technique was proposed that equips the classifier to capture the unique features of a class and to make decisions based on those unique features alone. It was assumed that the acoustic likelihoods are normally distributed, and the unique features of a class were identified using the mean of the product of the likelihood Gaussians. In the experiments, the increase in performance is more consistent for the language identification task, owing to the larger number of training examples. Overall, the experiments show that the proposed technique significantly improves the performance of the classifier.