
Deep Multi-Task Learning with evolving weights

Soufiane Belharbi(1), Romain Hérault(1), Clément Chatelain(1) and Sébastien Adam(2)
(1) INSA de Rouen, LITIS EA 4108, Saint-Étienne-du-Rouvray 76800, France
(2) Université de Rouen, UFR des Sciences, LITIS EA 4108, Saint-Étienne-du-Rouvray 76800, France

This work has been partly supported by the ANR-11-JS02-010 project LeMon.

Abstract. Pre-training of deep neural networks has been abandoned in the last few years. The main reason is the difficulty of controlling over-fitting and of tuning the consequently larger number of hyper-parameters. In this paper we use a multi-task learning framework that gathers weighted supervised and unsupervised tasks. We propose to evolve the weights along the learning epochs in order to avoid the break in the sequential transfer learning used in the pre-training scheme. This framework allows the use of unlabeled data. Extensive experiments on MNIST show interesting results.

1 Introduction

In many real-life applications, acquiring unlabeled data is easy and cheap, as opposed to labeled data, for which manual annotation costs money and time. Many approaches, such as semi-supervised learning, have been adopted to benefit from unlabeled data as an inductive bias that improves the generalization of the learned model [1, 2, 3, 4]. Most of these algorithms are based on: (i) sequential transfer learning, where unsupervised training is performed separately on the unlabeled data and followed by supervised training; (ii) shallow architectures, where only one or two transformations of the input are used to predict the output.

Deep neural networks (DNN) can also benefit from unlabeled data. Usually, only one task is performed at each hidden layer, but one can involve multiple tasks at each layer [5, 6]; sharing the hidden representations allows learning features that generalize better. [7, 8, 9, 10] proposed an unsupervised auxiliary task based on layer-wise training, which led to the concept of pre-training: a sequential transfer learning consisting of an unsupervised task followed by a supervised task. Two main drawbacks of this scheme are: (i) the difficulty of controlling the over-fitting that may happen in the unsupervised task, which damages the parameters used for the supervised task; (ii) the large number of hyper-parameters one needs to tune. One cause is that both tasks are optimized separately. A natural way to address this issue is to learn both tasks simultaneously.

The work presented in this paper is mainly inspired by [11], where the authors propose to code an auxiliary task in each layer based on similarity preservation and the manifold assumption. It is a regularization scheme based on parallel transfer in Multi-Task Learning (MTL), used in place of the traditional pre-training technique.

Here, a similar MTL framework learns an unsupervised task and a supervised task simultaneously, except that the auxiliary unsupervised task is achieved by a set of auto-encoder reconstruction functions. These auto-encoders follow the work of [12], which combines the idea of input corruption [13, 14] with layer-wise training, leading to denoising auto-encoders. Moreover, we propose to balance the two tasks using weights that evolve along the learning epochs; the traditional pre-training scheme thus becomes a special case of our framework, obtained for a particular setting of the weights. Our approach easily yields better generalization and fast training of DNN. It gives interesting results on the MNIST dataset.

2 Proposed model

The approach is formulated as a multi-task learning (MTL) framework [6] that gathers a main task and a secondary task. Let us consider a training set D = {(x_1, y_1), ..., (x_l, y_l), (x_{l+1}, ·), ..., (x_{l+u}, ·)} where the first l examples are labeled and the last u examples are unlabeled. Let M and R be the prediction/regression functions of the main and secondary task, respectively, and C_s and C_r their respective costs.

The main task is a supervised task with parameters w = {w_sh, w_s}, where w_s is a set of parameters specific to the main task, and with criterion J_s,

    J_s(D; w = \{w_{sh}, w_s\}) = \sum_{i=1}^{l} C_s(M(x_i; w), y_i).    (1)

The secondary task is a reconstruction task with parameters w' = {w_sh, w_r}, where w_r is a set of parameters specific to the reconstruction task, and with criterion J_r,

    J_r(D; w' = \{w_{sh}, w_r\}) = \sum_{i=1}^{l+u} C_r(R(x_i; w'), x_i).    (2)

Both tasks share the set of parameters w_sh. Our purpose in gathering both tasks in the same framework is the hope that the secondary task improves the main task. The importance of the two tasks is balanced using the importance weights λ_s and λ_r for the main and secondary task, respectively. The full objective of our model can be written as

    J(D; \{w_{sh}, w_s, w_r\}) = \lambda_s J_s(D; \{w_{sh}, w_s\}) + \lambda_r J_r(D; \{w_{sh}, w_r\}).    (3)

We propose to evolve the importance weights λ_s and λ_r along the optimization iterations. The intuition is to give more importance to the secondary task in the first learning epochs, while keeping the main task present to avoid large damage to w_sh. Since the main task is the final target, its importance is increased and the importance of the secondary task is decreased through the learning epochs (Fig. 1).
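The following is a minimal sketch of the weighted objective of Eq. 3 with shared parameters, written in PyTorch purely for illustration; the experiments in this paper actually used a custom version of Crino, a Theano-based library [17], and all module and variable names below (shared, sup_head, rec_head, ...) are hypothetical. The sketch also uses mean-reduced losses and a single reconstruction head, whereas Eqs. 1, 2 and 5 sum the costs over examples and over K layer-wise denoising auto-encoders.

```python
import torch
import torch.nn as nn

# Shared parameters w_sh, plus task-specific heads w_s (supervised) and w_r (reconstruction).
shared = nn.Sequential(nn.Linear(784, 50), nn.Sigmoid())   # w_sh, shared by both tasks
sup_head = nn.Linear(50, 10)                                # w_s, main task M (MNIST classes)
rec_head = nn.Linear(50, 784)                               # w_r, secondary task R

ce = nn.CrossEntropyLoss()   # C_s, supervised cost
mse = nn.MSELoss()           # C_r, reconstruction cost

def objective(x_labeled, y_labeled, x_all, lam_s, lam_r):
    """J = lam_s * J_s + lam_r * J_r (Eq. 3).
    x_labeled, y_labeled: the l labeled examples; x_all: all l + u examples."""
    j_s = ce(sup_head(shared(x_labeled)), y_labeled)   # J_s, Eq. 1 (labeled data only)
    j_r = mse(rec_head(shared(x_all)), x_all)          # J_r, Eq. 2 (labeled + unlabeled data)
    return lam_s * j_s + lam_r * j_r
```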

The new criterion becomes

    J(D; \{w_{sh}, w_s, w_r\}) = \lambda_s(t) J_s(D; \{w_{sh}, w_s\}) + \lambda_r(t) J_r(D; \{w_{sh}, w_r\}),    (4)

where t ≥ 0 indexes the learning epochs. For the sake of a fair comparison between the different models in the experiments, the weights are constrained such that, for all t ≥ 0: 0 ≤ λ_s(t) ≤ 1, 0 ≤ λ_r(t) ≤ 1 and λ_s(t) + λ_r(t) = 1. These conditions are not mandatory in an MTL setting [6, 15]. By enforcing them, we make sure that the observed benefit of our approach is not due to any boost of the learning rate in the optimization of Eq. 4, as the learning rate has a direct relation with the importance weights.

3 Implementation details

3.1 The reconstruction task

In practice, the main task is achieved by a neural network NN with K hidden layers. The secondary task is achieved by a set of K reconstruction functions, each represented by a denoising auto-encoder (DAE). We recall that a DAE is a 2-layer neural network (a coding layer followed by a decoding layer). The k-th DAE has a set of parameters w_{dae,k} = {w_{c,k}, w_{d,k}}, where w_{c,k} and w_{d,k} are respectively the parameters of the coding and decoding layers, and a noise function f (binomial, for instance) used to corrupt the input. All the coding parameters w_{c,k}, 1 ≤ k ≤ K, are shared with the supervised task, which allows a parallel transfer learning. We set w_r = {w_{dae,1}, ..., w_{dae,K}}. Each DAE is provided with its own cost function C_{dae,k}. The criterion of the secondary task can then be written as

    J_r(D; w_r) = \sum_{i=1}^{l+u} \sum_{k=1}^{K} C_{dae,k}(R(f(x_{i,k-1}); w_{dae,k}), x_{i,k-1}),    (5)

where x_{·,k} is the representation at the k-th layer of NN (x_{·,0} being the original input). For the sake of simplicity, we consider that all the reconstruction functions have the same importance weight λ_r(t); one could also associate a different importance weight with each of them.

3.2 Evolution of the importance weights along learning

Four different ways to evolve the importance weights through the learning epochs (Fig. 1) are studied in this work; a short code sketch of these schedules follows the list.

Stairs schedule: the traditional pre-training scheme, referenced as stairs_{t1}, where λ_s(t) = 0 and λ_r(t) = 1 before an iteration t1, and λ_s(t) = 1 and λ_r(t) = 0 after it.

Linear schedule: the weights progress linearly from epoch 0 to the last training epoch; this case is referenced as lin.

Abridged linear schedule: the linear trend is stopped at an iteration t1, after which λ_s(t) and λ_r(t) are saturated to 1 and 0, respectively; this case is referenced as lin_{t1}.

Exponential schedule: the weights evolve exponentially, proportionally to exp(t/σ), where t is the current epoch and σ is the slope; this case is referenced as exp_σ.
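The sketch below shows one possible implementation of these schedules under the constraint λ_s(t) + λ_r(t) = 1 from Sec. 2. The exact parameterisation of the exponential schedule is an assumption (written here so that λ_s(t) saturates towards 1 with slope σ); the function names and default values are illustrative only, not taken from the paper.

```python
import math

def lambda_s(t, total_epochs, schedule="lin", t1=100, sigma=50.0):
    """Importance weight of the main (supervised) task at epoch t; lambda_r(t) = 1 - lambda_s(t)."""
    if schedule == "stairs":      # stairs_t1: traditional pre-training, then pure supervision
        return 0.0 if t < t1 else 1.0
    if schedule == "lin":         # linear, from 0 at epoch 0 to 1 at the last epoch
        return t / float(total_epochs)
    if schedule == "lin_t1":      # abridged linear: saturates to 1 after epoch t1
        return min(t / float(t1), 1.0)
    if schedule == "exp":         # exponential schedule (assumed parameterisation)
        return 1.0 - math.exp(-t / sigma)
    raise ValueError("unknown schedule: %s" % schedule)

def lambda_r(t, total_epochs, **kwargs):
    """Importance weight of the secondary (reconstruction) task at epoch t."""
    return 1.0 - lambda_s(t, total_epochs, **kwargs)
```

For instance, lambda_s(t, 5000, schedule="stairs", t1=100) reproduces the stairs_100 setting used later as the traditional pre-training baseline.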

[Fig. 1: Evolution of the importance weights λ_s and λ_r along training epochs, for the four schedules: Stairs (pre-training), Linear, Linear until t1, Exponential.]

3.3 Optimization

Eq. 4 is minimized by Stochastic Gradient Descent (SGD). Usually, when multiple tasks are gathered in a single objective, alternating between the tasks works well [6, 11, 16, 15]. The optimization technique is illustrated in Alg. 1.

Algorithm 1: Training our model for one epoch
    D is the shuffled training set; B is a mini-batch.
    for B in D do
        B_s <- labeled examples of B
        make a gradient step toward J_r using B (update w')
        make a gradient step toward J_s using B_s (update w)
    end for

4 Experiments

We evaluate our approach on the MNIST dataset for the classification task, using a protocol similar to that of [11]. All the networks have the same input size (28 x 28 = 784) and output size (10). We refer to each network by NN_K, where K is the number of hidden layers, and use the following networks: NN_1 with hidden layer size 50; NN_2 with sizes 60, 50; NN_3 with sizes 70, 60, 50; NN_4 with sizes 80, 70, 60, 50.

Each network NN_K is first optimized with only the supervised criterion J_s; the resulting classification test error is denoted the baseline error. Starting from the very same random initial weights, each network NN_K is also optimized with the multi-task criterion following different weight schedules: stairs_100 (traditional pre-training), lin_100, lin and the exponential schedule. All trainings end after 5000 epochs, using a mini-batch size of 600 and a learning rate held constant at 0.01 and then decreased over the last 500 epochs. Only for the stairs_100 schedule, as in the traditional pre-training setup, do we use a learning rate optimized on a validation set to train the denoising auto-encoders.
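To make Alg. 1 concrete, here is a minimal sketch of one training epoch, reusing the hypothetical shared, sup_head, rec_head, ce and mse objects and the lambda_s schedule function from the earlier sketches (again in PyTorch rather than the authors' Theano/Crino code; the data-loader interface is an assumption). Each mini-batch triggers one gradient step on the weighted reconstruction criterion over the whole batch, then one step on the weighted supervised criterion over its labeled part.

```python
import torch

# One optimizer per criterion: J_r updates w' = {w_sh, w_r}, J_s updates w = {w_sh, w_s}.
opt_r = torch.optim.SGD(list(shared.parameters()) + list(rec_head.parameters()), lr=0.01)
opt_s = torch.optim.SGD(list(shared.parameters()) + list(sup_head.parameters()), lr=0.01)

def train_one_epoch(loader, t, total_epochs=5000):
    """loader yields shuffled mini-batches (x, y, labeled_mask), e.g. of size 600."""
    lam_s = lambda_s(t, total_epochs, schedule="exp")
    lam_r = 1.0 - lam_s
    for x, y, labeled_mask in loader:
        # Gradient step toward J_r using the whole mini-batch B (labeled + unlabeled).
        opt_r.zero_grad()
        (lam_r * mse(rec_head(shared(x)), x)).backward()
        opt_r.step()
        # Gradient step toward J_s using B_s, the labeled examples of B.
        if labeled_mask.any():
            opt_s.zero_grad()
            (lam_s * ce(sup_head(shared(x[labeled_mask])), y[labeled_mask])).backward()
            opt_s.step()
```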

We used a custom version of Crino [17] for all the experiments. The error difference with respect to the baseline error is displayed in Tab. 1 for different sizes l and u of the labeled and unlabeled sets, respectively (negative means better than the baseline). For the deeper networks, due to lack of space, we report only the results obtained with the exponential schedule.

In the case of the shallow network (Tab. 1a), one can see that our approach improves the results under all the schedules, with a better overall performance for the exponential schedule. One can also notice that the more labeled data we add, the smaller the improvement brought by our approach. In that case, we observe high values in the shared parameters w_sh; this could be overcome through an l1/l2 regularization, which is considered for future work. We also notice that the performance does not improve when only labeled data is used (u = 0). One explanation is that the improvement observed with our approach is mainly due to the information contained in the extra unlabeled data.

In the case of the deeper networks (Tab. 1b), one can observe an overall improvement of the performance using our approach. We observe the same pattern with larger labeled sets, where the improvement is smaller.

[Table 1: Classification error over the MNIST test set, reported relative to the corresponding baseline error (in percentage points; negative means better than the baseline). (a) Shallow network NN_1, with baseline errors 31% (l = 100), 12.85% (l = 10^3) and 9.08% (l = 10^4); rows correspond to the unlabeled-set sizes u in {0, 10^3, 2·10^3, 5·10^3, 10^4, 2·10^4, 4·10^4, 5·10^4} and columns to the schedules stairs_100, lin_100, lin and the exponential schedule. (b) Deeper networks NN_2, NN_3 and NN_4 with the exponential schedule, with baseline errors 31.48%, 33.58% and 30.53% (l = 100), 12.32%, 11.96% and 12.54% (l = 10^3), and 5.62%, 5.69% and 5.5% (l = 10^4).]

5 Conclusion

We presented in this paper a new learning scheme for deep neural networks, in which a multi-task learning framework gathers a supervised and an unsupervised task. We proposed to evolve the weights of the two tasks along the

learning epochs using different schedules. With fewer hyper-parameters, we easily improved the performance of deep neural networks. As future work, we consider using l1/l2 regularization of the shared parameters. With the aim of obtaining a fully automatic framework, we also consider using early stopping on the reconstruction task, based on the error on the training and validation sets.

References

[1] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. Adaptive Computation and Machine Learning. MIT Press, 2006.

[2] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 15, NIPS 2002, pages 585-592, 2002.

[3] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.

[4] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, 2006.

[5] S. C. Suddarth and Y. L. Kergosien. Rule-injection hints as a means of improving network performance and learning time. In Neural Networks, EURASIP Workshop 1990, pages 120-129, 1990.

[6] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.

[7] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

[8] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.

[9] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, NIPS 2006, pages 153-160, 2006.

[10] M. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2007, 2007.

[11] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. In Proceedings of the 25th International Conference on Machine Learning, ICML 2008, pages 1168-1175, 2008.

[12] P. Vincent, H. Larochelle, I. Lajoie, and Y. Bengio. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371-3408, 2010.

[13] L. Holmström and P. Koistinen. Using additive noise in back-propagation training. IEEE Transactions on Neural Networks, 3(1):24-38, 1992.

[14] J. Sietsma and R. J. F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67-79, 1991.

[15] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In Computer Vision, ECCV 2014, 13th European Conference, pages 94-108, 2014.

[16] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML 2008, pages 160-167, 2008.

[17] Crino, a neural-network library based on Theano. https://github.com/jlerouge/crino, 2014.