Learning Discrete Representations via Information Maximizing Self-Augmented Training


Learning Discrete Representations via Information Maximizing Self-Augmented Training. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama (University of Tokyo, RIKEN, Preferred Networks, ATR). *Based on work performed at Preferred Networks. A longer version is to appear in ICML 2017. 1

Unsupervised Discrete Representation Learning Unlabeled data 2

Unsupervised Discrete Representation Learning Discrete representations Unlabeled data Learn to map 3

Unsupervised Discrete Representation Learning Unlabeled data Learn to map Discrete representations Clustering Map to cluster assignments 0, 1, 5, 8, 9, 1, 3, 2, 4, 3, 9, 3, 2, 0, 2, 1, 4, 3, 1, 3 4

Unsupervised Discrete Representation Learning Unlabeled data Learn to map Discrete representations Clustering Map to cluster assignments 0, 1, 5, 8, 9, 1, 3, 2, 4, 3, 9, 3, 2, 0, 2, 1, 4, 3, 1, 3 Hash Learning Map to binary codes 0001, 0101, 1110, 1111, 0000, 0111, 0000, 1011 5

Deep Neural Networks (DNN) are Promising Unlabeled data DNN Flexible Scalable Discrete representations Clustering Map to cluster assignments 0, 1, 5, 8, 9, 1, 3, 2, 4, 3, 9, 3, 2, 0, 2, 1, 4, 3, 1, 3 Hash Learning Map to binary codes 0001, 0101, 1110, 1111, 0000, 0111, 0000, 1011 6

Deep Neural Networks (DNN) are Promising Unlabeled data DNN Discrete representations Clustering Map to cluster assignments 0, 1, 5, 8, 9, 1, 3, 2, 4, 3, 9, 3, 2, 0, 2, 1, 4, 3, 1, 3 Sensitive to input perturbation Hash Learning Map to binary codes 0001, 0101, 1110, 1111, 0000, 0111, 0000, 1011 7

Illustration of Overfitting. X ∈ 𝒳 (input), Y ∈ {0, ..., K−1} (discrete representation), modeled by p_θ(y | x). 8

Illustration of Overfitting. X ∈ 𝒳, Y ∈ {0, ..., K−1}, p_θ(y | x). Maximize the mutual information I(X; Y): Information Maximization clustering [Bridle et al., 1991]. 9

Illustration of Overfitting. X ∈ 𝒳, Y ∈ {0, ..., K−1}, p_θ(y | x). Maximize the mutual information I(X; Y): Information Maximization clustering [Bridle et al., 1991]. [Figure: data points assigned to clusters 1, 2, 0, 3.] 10

Illustration of Overfitting. X ∈ 𝒳, Y ∈ {0, ..., K−1}, p_θ(y | x). Maximize the mutual information I(X; Y) with a weight-decay penalty: Regularized Information Maximization (RIM) [Gomes et al., 2010]. [Figure: data points assigned to clusters 1, 2, 0, 3.] 11
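
As a rough illustration of the RIM idea (not the authors' code; the function name and toy inputs below are hypothetical), the mutual information term can be estimated on a mini-batch from the softmax outputs as I(X; Y) ≈ H(Y) − H(Y|X), and a weight-decay penalty is subtracted:

```python
import numpy as np

def rim_objective(p, weights, weight_decay=1e-3, eps=1e-12):
    """Estimate the RIM objective on a mini-batch.

    p       : (N, K) array of softmax outputs p_theta(y | x) for N inputs.
    weights : list of classifier parameter arrays (for the weight-decay penalty).
    Returns the value to be maximized: I(X; Y) - weight_decay * ||theta||^2,
    where I(X; Y) is approximated as H(Y) - H(Y|X) over the batch.
    """
    marginal = p.mean(axis=0)                                    # p_theta(y) over the batch
    h_y = -np.sum(marginal * np.log(marginal + eps))             # H(Y)
    h_y_given_x = -np.mean(np.sum(p * np.log(p + eps), axis=1))  # H(Y|X)
    mutual_info = h_y - h_y_given_x
    penalty = sum(np.sum(w ** 2) for w in weights)
    return mutual_info - weight_decay * penalty

# Toy usage: 4 points, 3 clusters, a dummy weight matrix.
p = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6],
              [0.85, 0.1, 0.05]])
print(rim_objective(p, [np.ones((5, 3))]))
```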

Our Contributions. [Diagram: our method IMSAT improves on RIM [Gomes et al., 2010] (weight-decay regularization, clustering) along two axes: better regularization and more general InfoMax.] 12

Our Method: IMSAT (Information Maximizing Self-Augmented Training). X ∈ 𝒳, Y ∈ {0, ..., K−1} or Y = (Y_1, ..., Y_D) (binary codes 00, 01, 10, 11), p_θ(y | x). Maximize the mutual information I(X; Y) with SAT regularization. 13

State-of-the-art Performance. State-of-the-art in clustering and hash learning. Clustering accuracy on MNIST (%): our method 98.4%, [Xie et al. 2016] 84.3%, RIM [Gomes et al., 2010] with a deep classifier 58.5%, with a linear classifier 59.6%. 14

Outline 1. Introduction 2. Proposed Method: IMSAT = IM + SAT Information Maximization (IM) Self-Augmented Training (SAT) 3. Experiments 4. Conclusions & Future Work 15

Outline 1. Introduction 2. Proposed Method: IMSAT = IM + SAT Information Maximization (IM) Self-Augmented Training (SAT) 3. Experiments 4. Conclusions & Future Work 16

Information Maximization. [Diagram: IMSAT extends RIM [Gomes et al., 2010] towards a more general InfoMax.] 17

Information Maximization. Learn the cluster assignment probability p_θ(y | x), y ∈ {0, ..., K−1}: max_θ I(X; Y) [Bridle et al., 1991; Gomes et al., 2010]. How to learn the discrete representation probability p_θ(y_1, ..., y_D | x)? 18

Information Maximization. Learn the cluster assignment probability p_θ(y | x), y ∈ {0, ..., K−1}: max_θ I(X; Y) [Bridle et al., 1991; Gomes et al., 2010]. Learn the discrete representation probability p_θ(y_1, ..., y_D | x): max_θ I(X; Y_1, ..., Y_D). 19

Information Maximization. Learn the cluster assignment probability p_θ(y | x), y ∈ {0, ..., K−1}: max_θ I(X; Y) [Bridle et al., 1991; Gomes et al., 2010]. Learn the discrete representation probability p_θ(y_1, ..., y_D | x): max_θ I(X; Y_1, ..., Y_D). Challenge: the combinatorial summation Σ_{y_1} Σ_{y_2} ... Σ_{y_D} → we need an approximation! 20

Information Maximization. Approximate the mutual information of p_θ(y | x) up to second-order interactions [Brown, 2009]: I(X; Y_1, ..., Y_D) ≈ Σ_{d=1}^{D} I(X; Y_d) − Σ_{1 ≤ d ≠ d' ≤ D} I(Y_d; Y_{d'}). The first sum maximizes information; the second sum reduces redundancy between the bits of y = (y_1, y_2, ..., y_D). 21
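
A minimal numpy sketch of this second-order approximation, assuming the network outputs per-bit probabilities q[n, d] = p_θ(y_d = 1 | x_n) and that the bits are conditionally independent given x (the function name and toy input are hypothetical, not the authors' implementation):

```python
import numpy as np

def bernoulli_entropy(p, eps=1e-12):
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

def multi_dim_mi(q, eps=1e-12):
    """Second-order approximation of I(X; Y_1,...,Y_D) on a mini-batch.

    q : (N, D) array, q[n, d] = p_theta(y_d = 1 | x_n).
    Returns sum_d I(X; Y_d) - sum_{d != d'} I(Y_d; Y_d'),
    i.e. 'maximize information' minus 'reduce redundancy'.
    """
    N, D = q.shape
    marg = q.mean(axis=0)  # marginal p(y_d = 1) over the batch
    # sum_d I(X; Y_d) = sum_d [ H(Y_d) - H(Y_d | X) ]
    info = np.sum(bernoulli_entropy(marg)) - np.mean(np.sum(bernoulli_entropy(q), axis=1))
    redundancy = 0.0
    for d in range(D):
        for e in range(D):
            if d == e:
                continue
            # Pairwise joint, assuming y_d and y_e are conditionally independent given x.
            pd = np.stack([1 - q[:, d], q[:, d]], axis=1)   # (N, 2)
            pe = np.stack([1 - q[:, e], q[:, e]], axis=1)   # (N, 2)
            joint = np.einsum('na,nb->ab', pd, pe) / N      # (2, 2)
            md = np.array([1 - marg[d], marg[d]])
            me = np.array([1 - marg[e], marg[e]])
            redundancy += np.sum(joint * np.log(joint / (np.outer(md, me) + eps) + eps))
    return info - redundancy

q = np.random.default_rng(0).uniform(0.05, 0.95, size=(32, 4))
print(multi_dim_mi(q))
```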

Outline 1. Introduction 2. Proposed Method: IMSAT = IM + SAT Information Maximization (IM) Self-Augmented Training (SAT) 3. Experiments 4. Conclusions & Future Work 22

Self-Augmented Training (SAT). [Diagram: IMSAT extends RIM [Gomes et al., 2010] with better regularization.] 23

Self-Augmented Training (SAT). Augmentation function T(·) : 𝒳 → 𝒳. Ex.) data augmentation: x → T(x). 24

Self-Augmented Training (SAT). Similarize the predictions on a data point and on its augmentation: penalize KL[p_θ(y | x) ‖ p_θ(y | T(x))], where y = (y_1, y_2, ..., y_D) and T(x) is obtained by data augmentation [Bachman et al., 2014; Miyato et al., 2016]. 25

Self-Augmented Training (SAT). Related work: DNN regularization that imposes invariance.

Representation        | Supervised / Semi-supervised                                    | Unsupervised
Continuous            |                                                                 | Discriminative feature learning [Dosovitskiy et al. 2014]
Discrete (one-dim)    | Bachman et al., 2014; Miyato et al., 2016; Sajjadi et al., 2016 |
Discrete (multi-dim)  |                                                                 | Self-Augmented Training (this work)

26

Self-Augmented Training (SAT). Related work: DNN regularization that imposes invariance (same table as above). SAT borrows the invariance idea from the supervised/semi-supervised, one-dimensional discrete setting [Bachman et al., 2014; Miyato et al., 2016; Sajjadi et al., 2016] and applies it to unsupervised, multi-dimensional discrete representations. 27

Self-Augmented Training (SAT). Local perturbation: T(x) = x + r with ‖r‖₂ = ε. Random Perturbation Training (RPT) [Bachman et al., 2014]; Virtual Adversarial Training (VAT) [Miyato et al., 2016]. The prediction p_θ(y | x + r) is pushed towards p_θ(y | x) near the decision boundary. 28
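
A minimal sketch of the SAT penalty with random perturbation (RPT), assuming a generic `predict` function that returns class probabilities; VAT would instead choose r adversarially, and in practice the first argument of the KL is treated as a fixed target (all names here are hypothetical, not the released code):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    # Row-wise KL divergence between two (N, K) probability arrays.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)

def sat_rpt_penalty(predict, x, epsilon=0.5, rng=None):
    """SAT penalty with Random Perturbation Training (RPT).

    predict : function mapping (N, d) inputs to (N, K) class probabilities.
    x       : (N, d) mini-batch.
    T(x) = x + r with r drawn uniformly on the sphere ||r||_2 = epsilon;
    returns the mean KL[ p(y|x) || p(y|T(x)) ] over the batch.
    """
    rng = rng or np.random.default_rng()
    r = rng.normal(size=x.shape)
    r = epsilon * r / np.linalg.norm(r, axis=1, keepdims=True)
    return np.mean(kl(predict(x), predict(x + r)))

# Toy usage with a fixed linear-softmax predictor standing in for p_theta(y | x).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))
predict = lambda x: softmax(x @ W)
x = rng.normal(size=(16, 10))
print(sat_rpt_penalty(predict, x, epsilon=0.5, rng=rng))
```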

Outline 1. Introduction 2. Proposed Method: IMSAT = IM + SAT Information Maximization (IM) Self-Augmented Training (SAT) 3. Experiments 4. Conclusions & Future Work 29

IMSAT = Information Maximization + SAT. [Diagram: discrete representations (e.g., 3-bit codes such as 011) are learned by information maximization on the data, combined with similarization between each data point and its augmentation.] 30

Outline 1. Introduction 2. Proposed Method: IMSAT Information Maximization (IM) Self-Augmented Training (SAT) 3. Experiments 4. Conclusions & Future Work 31

Experiments (Clustering). We measure clustering accuracy. Network: input x (#input dimensions) → fully connected 1200 → 1200 → output p_θ(y | x) over #(cluster size) classes, with batch normalization, ReLU activations, and a softmax output. Implementation available online: https://github.com/weihua916/imsat 32
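
A hypothetical PyTorch equivalent of the network sketched on this slide (the released implementation linked above may differ; layer sizes follow the slide):

```python
import torch.nn as nn

def make_imsat_clustering_net(input_dim, n_clusters):
    """A minimal stand-in for the clustering network described on the slide:
    two fully connected hidden layers of 1200 units, batch normalization,
    ReLU activations, and a softmax output over cluster assignments."""
    return nn.Sequential(
        nn.Linear(input_dim, 1200),
        nn.BatchNorm1d(1200),
        nn.ReLU(),
        nn.Linear(1200, 1200),
        nn.BatchNorm1d(1200),
        nn.ReLU(),
        nn.Linear(1200, n_clusters),
        nn.Softmax(dim=1),   # p_theta(y | x)
    )

# Example: MNIST (784-dimensional inputs, 10 clusters).
net = make_imsat_clustering_net(784, 10)
```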

Experiments (Clustering). Clustering accuracy (%) is reported (standard deviations in parentheses); some results were excerpted from Xie et al. (2016).

Method                 | MNIST      | Omniglot   | STL        | CIFAR10    | CIFAR100   | SVHN       | Reuters    | 20news
K-means                | 53.2       | 12.0       | 85.6       | 34.4       | 21.5       | 17.9       | 54.1       | 15.5
dae+k-means            | 79.8       | 14.1       | 72.2       | 44.2       | 20.8       | 17.4       | 67.2       | 22.1
DEC [Xie et al., 2016] | 84.3       | 5.7 (0.3)  | 78.1 (0.1) | 46.9 (0.9) | 14.3 (0.6) | 11.9 (0.4) | 67.3 (0.2) | 30.8 (1.8)
Linear RIM             | 59.6 (2.3) | 11.1 (0.2) | 73.5 (6.5) | 40.3 (2.1) | 23.7 (0.8) | 20.2 (1.4) | 62.8 (7.8) | 50.9 (3.1)
Linear IMSAT (VAT)     | 61.1 (1.9) | 12.3 (0.2) | 91.7 (0.5) | 40.7 (0.6) | 23.9 (0.4) | 18.2 (1.9) | 42.9 (0.8) | 43.9 (3.3)
Deep RIM               | 58.5 (3.5) | 5.8 (2.2)  | 92.5 (2.2) | 40.3 (3.5) | 13.4 (1.2) | 26.8 (3.2) | 62.3 (3.9) | 25.1 (2.8)
IMSAT (RPT)            | 89.6 (5.4) | 16.4 (3.1) | 92.8 (2.5) | 45.5 (2.9) | 24.7 (0.5) | 35.9 (4.3) | 71.9 (6.5) | 24.4 (4.7)
IMSAT (VAT)            | 98.4 (0.4) | 24.0 (0.9) | 94.1 (0.4) | 45.6 (0.8) | 27.5 (0.4) | 57.3 (3.9) | 71.0 (4.9) | 31.1 (1.9)

Tested on 8 benchmark datasets. Hyper-parameters are fixed across all datasets. 33

Experiments (Clustering). (Same results table as the previous slide.) We used local perturbation as the augmentation function T(x) = x + r. IMSAT (VAT) achieved state-of-the-art performance. 34

Experiments (Clustering) Do domain-specific augmentation functions improve the clustering performance? Omniglot dataset [Lake et al. 2011] 35

Experiments (Clustering) Domain-specific augmentation function Small stochastic affine transformation 36

Experiments (Clustering) Domain-specific augmentation function Small stochastic affine transformation 37
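
A minimal sketch of such a domain-specific augmentation function T(x), applying a small random affine warp to a 2-D character image with scipy (the function name, parameter ranges, and toy image are assumptions, not the authors' exact transformation):

```python
import numpy as np
from scipy.ndimage import affine_transform

def random_affine(image, scale=0.1, shift=1.0, rng=None):
    """Hypothetical augmentation T(x): a small stochastic affine transformation
    of a 2-D character image (warp close to identity plus a small translation)."""
    rng = rng or np.random.default_rng()
    matrix = np.eye(2) + rng.uniform(-scale, scale, size=(2, 2))  # near-identity warp
    offset = rng.uniform(-shift, shift, size=2)                   # small translation
    # Pixels mapped from outside the frame are filled with 0 (background).
    return affine_transform(image, matrix, offset=offset, order=1, cval=0.0)

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # stand-in for a character image
augmented = random_affine(image, rng=rng)
```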

Experiments (Clustering). With domain-specific knowledge (affine augmentation), accuracy improved from 24.0% to 70.0%. [Figure: randomly sampled clusters of Omniglot discovered using (a) IMSAT (VAT) and (b) IMSAT (VAT + affine transformation); each row shows randomly sampled data points in the same cluster.] 38

Experiments (Hash Learning). 3 evaluation metrics: mean average precision, precision @ sample = 500, precision @ Hamming dist = 2. 16-bit hash codes (D = 16, the number of hash bits): the network maps x (#input dimensions) to p_θ(y_1, ..., y_D | x). 39
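
To illustrate how such codes are used at retrieval time (a hedged sketch; the function names and toy data are hypothetical), the per-bit probabilities can be binarized into D = 16-bit codes and neighbors retrieved within Hamming radius 2, which is what the precision @ Hamming dist = 2 metric evaluates:

```python
import numpy as np

def to_hash_codes(q):
    """Binarize per-bit probabilities p_theta(y_d = 1 | x) into D-bit hash codes."""
    return (q >= 0.5).astype(np.uint8)

def hamming_distances(query_code, codes):
    """Hamming distance between one code and a database of codes."""
    return np.sum(query_code != codes, axis=1)

# Toy usage: 16-bit codes for a query and a small database.
rng = np.random.default_rng(0)
db_probs = rng.uniform(size=(1000, 16))      # stand-in network outputs
query_probs = rng.uniform(size=(1, 16))

db_codes = to_hash_codes(db_probs)
query_code = to_hash_codes(query_probs)[0]

dist = hamming_distances(query_code, db_codes)
retrieved = np.flatnonzero(dist <= 2)        # items counted for precision @ Hamming dist = 2
print(len(retrieved))
```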

Experiments (Hash Learning). Experimental results of Deep Hash and the previous methods were excerpted from Erin Liong et al. (2015).

Method (hidden layer sizes)        | mAP: MNIST | mAP: CIFAR10 | prec@500: MNIST | prec@500: CIFAR10 | prec@r=2: MNIST | prec@r=2: CIFAR10
Spectral hash (Weiss et al., 2009) | 26.6       | 12.6         | 56.3            | 18.8              | 57.5            | 18.5
PCA-ITQ (Gong et al., 2013)        | 41.2       | 15.7         | 66.4            | 22.5              | 65.7            | 22.6
Deep Hash (60-30)                  | 43.1       | 16.2         | 67.9            | 23.8              | 66.1            | 23.3
Linear RIM                         | 35.9 (0.6) | 24.0 (3.5)   | 68.9 (1.1)      | 15.9 (0.5)        | 71.3 (0.9)      | 14.2 (0.3)
Deep RIM (60-30)                   | 42.7 (2.8) | 15.2 (0.5)   | 67.9 (2.7)      | 21.8 (0.9)        | 65.9 (2.7)      | 21.2 (0.9)
Deep RIM (200-200)                 | 43.7 (3.7) | 15.6 (0.6)   | 68.7 (4.9)      | 21.6 (1.2)        | 67.0 (4.9)      | 21.1 (1.1)
Deep RIM (400-400)                 | 43.9 (2.7) | 15.4 (0.2)   | 69.0 (3.2)      | 21.5 (0.4)        | 66.7 (3.2)      | 20.9 (0.3)
IMSAT (VAT) (60-30)                | 61.2 (2.5) | 19.8 (1.2)   | 78.6 (2.1)      | 21.0 (1.8)        | 76.5 (2.3)      | 19.3 (1.6)
IMSAT (VAT) (200-200)              | 80.7 (2.2) | 21.2 (0.8)   | 95.8 (1.0)      | 27.3 (1.3)        | 94.6 (1.4)      | 26.1 (1.3)
IMSAT (VAT) (400-400)              | 83.9 (2.3) | 21.4 (0.5)   | 97.0 (0.8)      | 27.3 (1.1)        | 96.2 (1.1)      | 26.4 (1.0)

mAP = Hamming ranking (mean average precision); prec@500 = precision @ sample = 500; prec@r=2 = precision @ Hamming radius 2. Tested on 2 benchmark datasets. Hyper-parameters are fixed across all datasets. 40

Experiments (Hash Learning). (Same results table as the previous slide.) IMSAT (VAT) outperformed the previous methods. 41

Outline 1. Introduction 2. Background (Regularized Information Maximization [Gomes et al., 2010]) 3. Proposed Method (Information Maximizing Self-Augmented Training) 4. Experiments 5. Conclusions & Future Work 42

Conclusions. [Diagram: IMSAT improves on RIM [Gomes et al., 2010] along two axes: better regularization (SAT regularization) and more general InfoMax (InfoMax for learning discrete representations).] 43

Conclusions. IMSAT achieves state-of-the-art performance in clustering and unsupervised hash learning. [Same diagram: better regularization and more general InfoMax than RIM [Gomes et al., 2010].] 44

Future Work. The augmentation function T(·) encodes the invariance of the representations. → What is an effective T(·) for different types of data? Ex.) images, text, sequences, graphs. Data augmentation: x → T(x). 45