Reducing Domain Mismatch by Maximum Mean Discrepancy Based Autoencoders

Reducing Domain Mismatch by Maximum Mean Discrepancy Based Autoencoders
Wei-wei Lin, Man-Wai Mak, Longxin Li (The Hong Kong Polytechnic University)
Jen-Tzung Chien (National Chiao Tung University)

Contributions
We show how maximum mean discrepancy (MMD) can be generalized to measure the discrepancies among multiple distributions.
We propose a new domain adaptation method based on MMD and demonstrate that it can greatly reduce multi-source variability.

Process of Speaker Verification
Utterance from registered speaker -> Spectral Analysis (60-dim acoustic vectors) -> Factor Analysis -> 500-dim i-vector x_s
Utterance from test speaker -> Spectral Analysis (60-dim acoustic vectors) -> Factor Analysis -> 500-dim i-vector x_t
The i-vector is a low-dimensional representation of the whole utterance.
x_s and x_t -> PLDA Scoring -> Decision Making (against a decision threshold)

I-Vectors
Total variability model: s = m + T w, where s is the speaker supervector, m is the UBM supervector, T is the total variability matrix, and w is the total variability factor.
The i-vector is the maximum-a-posteriori (MAP) estimate of w, which we denote as x.
Instead of using the high-dimensional supervector to represent a speaker, we use the more compact (low-dimensional) i-vector.
T represents the subspace in which the i-vectors vary.
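
For concreteness, below is a minimal NumPy sketch of the standard MAP i-vector estimate from Baum-Welch statistics. The formula is the usual total-variability posterior mean; the variable names (T, Sigma, N, f) are illustrative and not taken from the slides.

    import numpy as np

    def extract_ivector(T, Sigma, N, f):
        # T: (CF, R) total variability matrix; Sigma: (CF,) diagonal of the UBM covariance supervector
        # N: (CF,) zeroth-order stats replicated per feature dim; f: (CF,) centred first-order stats
        R = T.shape[1]
        TtSinv = T.T / Sigma                          # T^T Sigma^{-1} (Sigma is diagonal)
        L = np.eye(R) + TtSinv @ (N[:, None] * T)     # posterior precision of w
        return np.linalg.solve(L, TtSinv @ f)         # posterior mean = i-vector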

I-Vector/PLDA
Procedure of i-vector/PLDA: MFCC -> i-vector extractor -> Preprocessing -> PLDA Modeling
In Gaussian PLDA, a preprocessed i-vector from the j-th session of speaker i is assumed to be generated by a factor analysis model:

    x_ij = m + V z_i + e_ij

where x_ij is the pre-processed i-vector, m is the mean of the i-vectors in the training set, V is the speaker subspace, z_i is the speaker factor, and e_ij is the residual.
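
A small sketch of this generative view, assuming the notation above (m, V, z_i, e_ij); it only serves to make the model concrete.

    import numpy as np

    def sample_plda(m, V, Sigma_eps, n_speakers, n_sessions, rng=None):
        # Draw synthetic i-vectors from the Gaussian PLDA model x_ij = m + V z_i + e_ij.
        rng = np.random.default_rng() if rng is None else rng
        D, Q = V.shape
        X = []
        for _ in range(n_speakers):
            z = rng.standard_normal(Q)                               # speaker factor z_i ~ N(0, I)
            for _ in range(n_sessions):
                e = rng.multivariate_normal(np.zeros(D), Sigma_eps)  # residual e_ij ~ N(0, Sigma_eps)
                X.append(m + V @ z + e)
        return np.asarray(X)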

I-Vector/PLDA
Given a test i-vector x_t and the target speaker's i-vector x_s, the verification score is the log-likelihood ratio between two hypotheses:

    S_LR(x_s, x_t) = log [ p(x_s, x_t | same speaker) / p(x_s, x_t | different speakers) ]
                   = (1/2) x_s^T Q x_s + x_s^T P x_t + (1/2) x_t^T Q x_t + const

where the matrices P and Q are derived from the PLDA model parameters.
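
A minimal NumPy sketch of this scoring rule, assuming a Gaussian PLDA with speaker loading matrix V and residual covariance Sigma_eps; the P and Q expressions are the standard ones for this likelihood ratio, and the constant term is omitted.

    import numpy as np

    def plda_llr_score(x_s, x_t, V, Sigma_eps):
        Sigma_ac  = V @ V.T                    # across-speaker covariance
        Sigma_tot = Sigma_ac + Sigma_eps       # total covariance
        tot_inv = np.linalg.inv(Sigma_tot)
        aux = np.linalg.inv(Sigma_tot - Sigma_ac @ tot_inv @ Sigma_ac)
        Q = tot_inv - aux
        P = tot_inv @ Sigma_ac @ aux
        return 0.5 * x_s @ Q @ x_s + x_s @ P @ x_t + 0.5 * x_t @ Q @ x_t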

Domain Mismatch
NIST SRE16 is a multilingual dataset for speaker verification. The test data include Cantonese and Tagalog speakers, but the Cantonese and Tagalog speech in the development set is unlabeled and small in number (2,344 segments).

Domain Mismatch
[Figure: pairwise normalized distances between the means and covariance matrices of different languages and genders.]

Domain Mismatch
We have English corpora from previous SREs and Switchboard (SWB), which are large in size and have speaker labels, but the language mismatch in SRE16 makes these corpora less useful.
We aim to adapt the i-vectors of English speech so that they look more like the i-vectors of Cantonese and Tagalog speech. The adapted English i-vectors are then used to train a PLDA model for scoring Cantonese and Tagalog i-vectors.

Domain Adaptation
I-vector based domain adaptation:
Enrolment MFCCs / Test MFCCs -> I-vector extraction -> Projection into a common feature space (MMD-based autoencoder) -> Preprocessing -> PLDA scoring

IDVC
Inter-dataset variability compensation (IDVC) is a popular domain adaptation technique for speaker verification. IDVC removes the subspace that causes most of the inter-dataset variability:

    x <- (I - W W^T) x

where x is an i-vector and the columns of W comprise the eigenvectors of the covariance matrix of the domain means.
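
A minimal NumPy sketch of the IDVC estimation described above; the function name and the number of removed directions are illustrative.

    import numpy as np

    def idvc_projection(domain_ivectors, n_dims):
        # domain_ivectors: list of (N_d, dim) arrays, one per domain
        # n_dims: number of inter-dataset variability directions to remove
        means = np.stack([X.mean(axis=0) for X in domain_ivectors])   # domain means
        C = np.cov(means, rowvar=False)                               # covariance of the domain means
        eigval, eigvec = np.linalg.eigh(C)
        W = eigvec[:, np.argsort(eigval)[::-1][:n_dims]]              # top eigenvectors
        dim = means.shape[1]
        return np.eye(dim) - W @ W.T                                  # projection that removes the subspace

    # usage: P = idvc_projection([X_eng, X_can, X_tgl], n_dims=10); X_adapted = X @ P.T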

Motivations of Our Work
A drawback of IDVC is that the domain mismatch is characterized entirely by the domain means. From the perspective of reducing the divergence between probability distributions, this is not enough.

Motivations of Our Work
The mean is only the first moment of a probability distribution. Even if two distributions have exactly the same mean, they can still be very different because of differences in their higher-order statistics.

Maximum Mean Discrepancy
Theoretical work on domain adaptation suggests that it is important to have a good measure of the divergence between the data distributions of different domains.
Maximum mean discrepancy (MMD) is a distance measure on the space of probability distributions. Given two datasets X = {x_1, ..., x_N} and Y = {y_1, ..., y_M}, MMD computes the squared difference between their mean statistics in a feature space φ:

    MMD^2(X, Y) = || (1/N) Σ_i φ(x_i) - (1/M) Σ_j φ(y_j) ||^2
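
Expanding the squared norm with a kernel gives the usual empirical estimator. Below is a minimal NumPy sketch using an RBF kernel; the kernel choice and the gamma value are assumptions for illustration, not necessarily those used in the paper.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Gaussian RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)

    def mmd2(X, Y, gamma=1.0):
        # Biased estimate of the squared MMD between two samples.
        return (rbf_kernel(X, X, gamma).mean()
                - 2.0 * rbf_kernel(X, Y, gamma).mean()
                + rbf_kernel(Y, Y, gamma).mean())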

Maximum Mean Discrepancy
[Equation slide: MMD expressed in terms of a kernel function.]

Maximum Mean Discrepancy
Assume that we have D sets of data X_1, ..., X_D, one per domain. We can generalize MMD to measure the discrepancies among the multiple domains, again expressed through a kernel function.
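
One natural way to realize this generalization is sketched below, under the assumption that the multi-domain discrepancy is the sum of pairwise squared MMDs (the exact weighting in the paper may differ); it reuses mmd2() from the previous sketch.

    from itertools import combinations

    def multi_domain_mmd2(domains, gamma=1.0):
        # Sum the pairwise squared MMDs over all pairs of domains.
        return sum(mmd2(X, Y, gamma) for X, Y in combinations(domains, 2))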

<latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> Domain-Invariant Autoencoders The domain-invariant autoencoder (DAE) directly encodes the features that minimize the multi-source mismatch: Domain 1 Domain 2 Domain 3 D =3 17

<latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit 
sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> <latexit sha1_base64="izhrlqw7iixuukgi6e7jbrou+a8=">aaacunicfvfnb9qwepwgj5by1ckri8ukacthlvri9mchag5ceexqtpu2ueu4k6y1/gj2pn2vld/rk/ws/g3obg7sfjgs7aezn5rnn3kthcm4/j2i7t1/8hbn99he4ydpnz3fp3hx7kxjouy4kcze5sybfbomkfdczw2bqvzcrt7/1nuvrse6yfqzlmvifku0kavngfjznsphps1lumgpd6/2h/e4xgw9c5iedekfp1chg9u0mlxrojfl5tw0iwvmplmouir2l20c1izpwqxtadvt4dk/ut3snyft0nlycdtsvfbvds+uc0uvb6ziohpbts75r9q0wfi480lxdylm60fliyka2llac2gbo1wgwlgvqsvlm2yzx2duxprcht9ouofgkayln15do02y4jirrafnsj92avlsd5o23wk7mbkprbvqcaxpeeyy8dxwv9vggrr71qfmvootwt+//6mjvaafd3mq1q0pnyzqlf4s6airtlyxdxech42tejx8fzc8+dgvdze8iq/jictkptkhx8gpmrbofpbb8pp8ij5eessi+zoadfqel2qjivwdvlzewq==</latexit> Nuisance-Attribute Autoencoders The nuisance-attribute autoencoder (NAE) borrows the idea of IDVC in that it removes the domain specific features using: where g(f(x)) should contain all of the domain-specific info. Therefore, will become domain-indistinguishable. g(f(x)) is realized by an autoencoder called NAE, which encodes the features that cause most of the multi-source mismatch. 18

<latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> <latexit sha1_base64="3lawrkpcbyj7dol/fbup23wbqde=">aaacshicfvflb9naen6yvymvfo5clcikxcgycxjckcrogquiikanffvred1olu7d2h23jvb+cvzpb+pfse58iclipn35npon5puzopbcuzl8hks3bt+5e2/n/u6dh48ep9nbf3rqtgm5jrmrxk4kccifxjejkjiplyiqjj4v55+6/nkfwiempqfljbmcurav4eah9p3ow5vz3jazjsulb4k0b0pw2/fsf3cdlyy3cjvxcc5n06sm3imlwsw2u1njsaz+dnocbqhbocv9smsbvwyrmq6mdu9tvir+xefbobdurwaqoixbznxbf+wmdvxvcy903rbqvm5untime3edx6wwyekuawburdaa8wvy4btws9glugegjzfckaw69nkfttm091lhznlpm9jnnyci8so0bbfybmeukzfflytueyylwfws+f9rtedgvvyz2lmcq9b3/n80ode04dcbud368omvucqfbb3hlon24w6c04nrmozsb2+hhx/7o+6w5+wfe8vs9o4dss/smi0zz3p2k/1i19fbnilmeayp0acvecy2lprxb+yo2u4=</latexit> Nuisance-Attribute Autoencoders Domain 1 Domain 2 Domain 3 D =3 19

MMD-Based Autoencoders
[Figure: side-by-side comparison of the Domain-Invariant Autoencoder (DAE) and the Nuisance-Attribute Autoencoder (NAE) on Domains 1-3.]

t-SNE Visualizations of Learned Features
[Figure: t-SNE plots before and after the DAE transformation.]
The t-SNE plot of the hidden activations of the DAE shows less of a domain-clustering effect than that of the original i-vectors, which indicates that the DAE indeed learns a domain-invariant representation.
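
An illustrative sketch of how such a plot can be produced with scikit-learn's t-SNE, coloring points by domain; the feature inputs and domain names are placeholders.

    import numpy as np
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    def plot_tsne_by_domain(features_per_domain, domain_names):
        # Stack features from all domains and remember which domain each row came from.
        X = np.vstack(features_per_domain)
        dom = np.concatenate([np.full(len(F), i) for i, F in enumerate(features_per_domain)])
        # 2-D t-SNE embedding of the (DAE-transformed) vectors.
        emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
        for i, name in enumerate(domain_names):
            plt.scatter(emb[dom == i, 0], emb[dom == i, 1], s=5, label=name)
        plt.legend()
        plt.show()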

Experimental Setup
Parameterization: 19 MFCCs together with energy, plus their 1st and 2nd derivatives (60-dim)
UBM: gender-dependent, 512 mixtures, trained on SRE16-dev
Total variability matrix: gender-independent, 300 total factors, trained on SRE16-dev
DAE- and NAE-transformed vectors: 300-dim
I-vector preprocessing: PCA to 200 dimensions followed by length normalization
PLDA: 200 latent factors
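
A minimal sketch of the stated i-vector preprocessing (PCA to 200 dimensions followed by length normalization) using scikit-learn; the function name is illustrative.

    import numpy as np
    from sklearn.decomposition import PCA

    def preprocess_ivectors(X_train, X_test, n_components=200):
        # PCA projection estimated on the training i-vectors.
        pca = PCA(n_components=n_components).fit(X_train)
        def transform(X):
            Z = pca.transform(X)
            # Length normalization: scale every vector to unit norm.
            return Z / np.linalg.norm(Z, axis=1, keepdims=True)
        return transform(X_train), transform(X_test)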

Experimental Setup
We conducted two sets of experiments: (1) a domain adaptation experiment and (2) a domain robustness experiment.
In the domain adaptation experiment, i-vectors derived from SRE04-SRE10 and SRE16-dev were used for training the DAE, the NAE, and the projection matrices in IDVC. I-vectors derived from SRE16-eval were used for testing.

Domain Adaptation Experiment (genders and languages pooled)

Method    | EER (%) | min Cprim | act Cprim
No Adapt  | 15.84   | 0.89      | 0.93
IDVC      | 13.08   | 0.86      | 0.93
DAE       | 12.79   | 0.85      | 0.91
NAE       | 12.81   | 0.85      | 0.91

All of the domain adaptation methods improve system performance significantly. Both DAE and NAE outperform IDVC by a small margin.

Domain Robustness Experiment
In the domain robustness experiment, for each gender and language (TGL/CAN) of the test sessions, we excluded from the training data the speech of same-gender speakers of that language.

Test Data   | Male Training Data     | Female Training Data
            | ENG   TGL   CAN        | ENG   TGL   CAN
Male TGL    | yes   no    yes        | yes   yes   yes
Male CAN    | yes   yes   no         | yes   yes   yes
Female TGL  | yes   yes   yes        | yes   no    yes
Female CAN  | yes   yes   yes        | yes   yes   no

Domain Robustness Experiment
[Figure: results for IDVC and DAE in the domain robustness setting.]
The performance of the domain adaptation methods degrades when in-domain data are excluded from training.

Domain Robustness Experiment
[Figure: EER (%) of IDVC and DAE per test condition.]
DAE achieves a relative reduction of 5-6% in EER with respect to IDVC on Cantonese speech. No gain is found on Tagalog speech.

I-Vector Adaptation + PLDA Interpolation
I-vector adaptation can be combined with unsupervised PLDA model interpolation, i.e., interpolating the covariance matrices (Garcia-Romero, 2014).

Without PLDA interpolation:
Method    | EER (%) | min Cprim | act Cprim
No Adapt  | 15.84   | 0.89      | 0.93
IDVC      | 13.08   | 0.86      | 0.93
DAE       | 12.79   | 0.85      | 0.91
NAE       | 12.81   | 0.85      | 0.91

With PLDA interpolation (interpolation weight 0.3):
Method    | EER (%) | min Cprim | act Cprim
No Adapt  | 13.47   | 0.86      | 0.91
IDVC      | 12.88   | 0.85      | 0.93
DAE       | 12.43   | 0.84      | 0.90
NAE       | 12.51   | 0.84      | 0.91

Combining i-vector adaptation with PLDA covariance matrix adaptation can further improve performance.
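
A minimal sketch of covariance interpolation in the spirit of Garcia-Romero (2014); which PLDA covariances are interpolated, and whether the 0.3 weight applies to the in-domain or out-of-domain term, are assumptions here.

    def interpolate_plda_cov(Sigma_out, Sigma_in, alpha=0.3):
        # Blend the out-of-domain and in-domain covariance estimates.
        return alpha * Sigma_in + (1.0 - alpha) * Sigma_out

    # Hypothetical usage: interpolate both PLDA covariances before scoring.
    # Sigma_ac = interpolate_plda_cov(Sigma_ac_out, Sigma_ac_in)
    # Sigma_wc = interpolate_plda_cov(Sigma_wc_out, Sigma_wc_in)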

Conclusions
We proposed two MMD-based autoencoders.
We showed a relative improvement of 11.8% in EER on NIST SRE 2016 compared with PLDA without adaptation.
We also found that MMD-based autoencoders are more robust to unseen domains: in the domain robustness experiments, they show 5.2% and 6.8% improvements over IDVC for male and female Cantonese speakers, respectively.