One-Shot Learning of Faces

Luke Johnston and William Chen
Department of Computer Science, Stanford University

Introduction

The ability to learn and generalize from one or a few examples is often cited as a weakness of machine learning methods. Despite tremendous breakthroughs in deep learning, traditional gradient-based networks require large amounts of training data and must relearn many parameters in order to classify a new class. Suppose, for example, that our task is to recognize individual faces. A convolutional neural network can be trained to do this for a group of people if its training set contains many images of each person's face. However, if we suddenly introduce a new face, we must retrain the network with many examples of that face before it can recognize the person reliably. Humans, on the other hand, can learn a new person's face from a single image. Building on current research, we experiment with siamese networks that learn to quickly differentiate between two faces given only a few reference images. The motivation for improving these so-called one-shot models is to improve the learning efficiency of the systems that use them, and to gain insight into how prior knowledge can best be used when learning about new examples, which could drive major progress in machine learning more generally.

Dataset and Task Definition

As data, we used 80,000 colored face images of celebrities from the FaceScrub dataset [2]. The data was first split into 80% training and 20% validation. The images were then downsized from 100x100 to 64x64 pixels to improve learning efficiency, and were randomly distorted with TensorFlow functions (contrast changes, mirroring, and image whitening) to increase the effective size of the dataset and reduce overfitting (a sketch of this pipeline appears below). Five examples of these images can be seen in the top row of Figure 1. The data itself is a list of tuples, where each tuple contains a 64x64x3 numpy array of pixel values as its first element and an integer ID uniquely identifying the person in the image as its second element.

We define the task of n-shot classification over c classes as follows: given a set of nc labeled images

    {(x_1, y_1), (x_2, y_2), ..., (x_nc, y_nc)},

where each of the c classes occurs exactly n times among the labels y_i, predict the class y_u of an unknown image x_u. When n = 1, this is one-shot learning. The success of the model is measured by its accuracy on this classification task over images from the validation set. We also used the validation set for testing because, in our experience, it is very difficult to overfit the hyperparameters of neural networks of this size to the validation set (although it is very easy to overfit them to the training set).
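For concreteness, here is a minimal sketch of this preprocessing, assuming the TensorFlow 1.x tf.image API; the resize call and the contrast range are illustrative, since the exact distortion parameters are not fixed above.

```python
import tensorflow as tf

def preprocess(image):
    """Downsize a 100x100x3 face crop to 64x64 and apply the random
    distortions described above: mirroring, contrast change, whitening.
    A sketch; the contrast bounds are assumed, not taken from the project."""
    image = tf.image.resize_images(image, [64, 64])
    image = tf.image.random_flip_left_right(image)                 # random mirroring
    image = tf.image.random_contrast(image, lower=0.7, upper=1.3)  # assumed range
    image = tf.image.per_image_standardization(image)              # image whitening
    return image
```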

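To make the task definition concrete, one way an n-shot, c-class episode could be drawn from our data is sketched below; `images_by_id` is a hypothetical mapping from person ID to that person's image arrays.

```python
import numpy as np

def sample_episode(images_by_id, c, n):
    """Build one n-shot c-classification episode as defined above: pick c
    people, draw n reference images of each, plus one held-out query image.
    A sketch for illustration, not the project's actual sampling code."""
    ids = np.random.choice(list(images_by_id), size=c, replace=False)
    references, query_class = [], np.random.randint(c)
    for i, pid in enumerate(ids):
        imgs = images_by_id[pid]
        picks = np.random.choice(len(imgs), size=n + 1, replace=False)
        references.append([imgs[j] for j in picks[:n]])   # n references for class i
        if i == query_class:
            x_u, y_u = imgs[picks[n]], i                  # unknown image and its class
    return references, x_u, y_u
```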
Baseline and Oracle Comparison

For the baseline, we implemented a basic 3NN algorithm using scipy, with the raw image pixels as features and the ID of the person in the image as the label. For each test example, the algorithm finds the three nearest points by Euclidean distance (out of twelve training points, since we worked with two training classes of 6 images each) and classifies the test example with the majority label. The baseline correctly classified 62.6% of the test examples in one run of 200 trials; however, this number varied widely across different training and testing sets, which we suspect is because the Euclidean distance metric used by kNN classification provides little useful information in high-dimensional spaces (see the discussion of the curse of dimensionality below).

For the oracle, we manually learned each person's face from 3 example images drawn from the same training set as the baseline, and later checked how many test images we could identify. Since the number of classes involved was not too high, we were able to correctly classify all of the test examples. There is a clear gap in accuracy between the baseline and the oracle: humans can identify and generalize new information quickly and effectively, while machine learning models have yet to perform nearly as well. Also worth noting is that the baseline used a relatively naive feature set, which may partly explain its poor classification accuracy. These results show that there is still much room for improvement, and we hope to make progress in this regard. For the purpose of comparing the baseline and oracle, we worked with two classes with 6 training examples each and tested on 200 examples, all from the FaceScrub dataset.
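A minimal sketch of this baseline, assuming flattened pixel arrays; this is illustrative rather than the exact code we ran.

```python
import numpy as np
from collections import Counter
from scipy.spatial.distance import cdist

def knn_predict(train_X, train_y, test_X, k=3):
    """Baseline 3NN on raw pixels: flatten each 64x64x3 image, find the k
    nearest training images by Euclidean distance, and take the majority
    label among them."""
    train_flat = train_X.reshape(len(train_X), -1)
    test_flat = test_X.reshape(len(test_X), -1)
    dists = cdist(test_flat, train_flat, metric="euclidean")
    preds = []
    for row in dists:
        nearest = np.argsort(row)[:k]                 # indices of the k closest points
        votes = Counter(train_y[i] for i in nearest)
        preds.append(votes.most_common(1)[0][0])      # majority label
    return np.array(preds)
```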
Challenges and Techniques

We think the main problem with the baseline 3NN classifier is that the feature space of raw pixels is too large, and each individual pixel feature carries too little information about the image. Because of the curse of dimensionality, the Euclidean distance metric varies very little between pairs of points in high-dimensional spaces. In an attempt to remedy this, we trained an autoencoder on the training data to compress the images into a more meaningful representation with fewer features. As described below, the autoencoder was not entirely successful in providing a useful feature representation, but it did give us some visual insight into the lower-dimensional feature space.

The task of n-shot learning poses two primary challenges. First, the network cannot be trained on many examples of each class, as is normal practice for image classification. Second, image classification is usually performed by a softmax layer, which estimates the probability that a given image belongs to each known class. If we want our model to classify an arbitrary number of classes, we cannot use a softmax layer; otherwise we would have to retrain the layer before the softmax with the addition of each new class. We can solve both problems by training the network to verify whether two images are from the same class, and then leveraging this ability for the n-shot classification task. This is described in the Siamese Network section below. We use Python and TensorFlow to train the neural networks and other algorithms, on a personal GTX 980 GPU with cuDNN libraries to speed up training.

Related Work

Prior research has trained models on datasets with few examples of each class. Santoro et al. [1] take advantage of Neural Turing Machines (NTMs), which can quickly encode and retrieve new information. This avoids having to inefficiently relearn parameters to incorporate new information without catastrophic interference, so the model can make accurate predictions after only a few samples. They tested it on the Omniglot dataset, which consists of over 1600 separate classes with only a few examples each. Their algorithm was compared against several baselines, such as feed-forward RNN, LSTM, and kNN classifiers fed features from an autoencoder; this was the inspiration for our attempt to extract features with an autoencoder.

Koch [5] leverages a convolutional siamese network for one-shot classification of the Omniglot dataset [3] by computing a difference metric on two reference images. The siamese network learns to estimate the probability that two handwritten characters belong to the same class, which can then be used for one-shot learning of handwritten characters from a number of alphabets. This technique is the main inspiration for our paper, and it will be interesting to observe how well one-shot learning performs on the potentially more complicated problem space of face images.

Models and Algorithms

Autoencoder

We trained an autoencoder on the FaceScrub dataset to extract features and reduce the dimensionality of the data [7]. The autoencoder consists of a convolutional encoder that maps the input image to a feature representation of 100 floating-point numbers, and a decoder that maps this representation back to the shape of the input image through a series of transpose convolutions. The encoder contains three convolutional layers followed by a fully connected layer. Each convolutional layer is followed by a max pooling step with window size 2 and stride 2, and a ReLU step that takes the positive part of the activations and introduces nonlinearity into the model. The filter sizes of the three convolutional layers are, in order, 3, 5, and 5, and the numbers of channels (starting from the input image) are 3, 25, 50, and 100. The decoder contains three deconvolutional [4] layers with the opposite structure of the encoder, each followed by a ReLU layer, so the final output has the same shape as the input image. The loss function is simply the mean squared error between the output image pixels and the input image pixels. The autoencoder is implemented in TensorFlow; during training we used an adaptive subgradient optimizer [6] with learning rate 0.005.
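A compact sketch of this architecture in the TensorFlow 1.x layers API; the kernel sizes, channel counts, code size, loss, and optimizer follow the description above, while the padding choices are our assumptions.

```python
import tensorflow as tf

def autoencoder(images):  # images: [batch, 64, 64, 3]
    """Sketch of the convolutional autoencoder described above. Each conv
    layer is followed by 2x2/stride-2 max pooling and ReLU; a dense layer
    produces the 100-float code; the decoder mirrors the encoder."""
    h = images
    for filters, k in [(25, 3), (50, 5), (100, 5)]:
        h = tf.layers.conv2d(h, filters, k, padding="same", activation=tf.nn.relu)
        h = tf.layers.max_pooling2d(h, pool_size=2, strides=2)    # 64 -> 32 -> 16 -> 8
    code = tf.layers.dense(tf.layers.flatten(h), 100)             # 100-dim representation

    h = tf.layers.dense(code, 8 * 8 * 100, activation=tf.nn.relu)
    h = tf.reshape(h, [-1, 8, 8, 100])
    for filters, k in [(50, 5), (25, 5), (3, 3)]:                 # mirror of the encoder
        h = tf.layers.conv2d_transpose(h, filters, k, strides=2,
                                       padding="same", activation=tf.nn.relu)
    recon = h                                                     # [batch, 64, 64, 3]

    loss = tf.losses.mean_squared_error(labels=images, predictions=recon)
    train_op = tf.train.AdagradOptimizer(0.005).minimize(loss)    # Adagrad [6], lr 0.005
    return code, recon, loss, train_op
```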
Results

Figure 1 visualizes how well the autoencoder learns to compress the faces. While it does seem to have learned useful features such as skin color and facial feature locations, in general it does not learn the distinctive features of individual faces that we are interested in: the reconstructions are very blurry and difficult for a human to identify (and so will probably be difficult for any one-shot recognition model to classify).

Figure 1: Five FaceScrub dataset images (top row) and their decoded representations after being passed through the autoencoder (bottom row).

Increasing the size of the encoded representation does not seem to help, and even if it did, 100 dimensions is probably still too large to use a kNN classifier on, as we had originally hoped. Possible fixes include adding a classification objective to the autoencoder to force it to learn features useful for classification, or simply taking features from the final layer of a classifier.

Siamese Network

For our next attempt at a more complex one-shot recognition model than the baseline, we built our own implementation of a siamese network, as described in [5], using TensorFlow. Our siamese network is trained to estimate the likelihood that two images belong to different classes: if the two images belong to the same class (in our case, if they are images of the same person), the network should output 0; otherwise, it should output 1. It does this by first taking two inputs (from the FaceScrub dataset) and mapping each to an encoded feature vector of length 200 using the convolutional structure depicted in Figure 2. The element-wise absolute value of the difference between the two feature vectors is then computed, and this symmetric distance metric is passed into a final fully connected layer with a single output.
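A minimal sketch of this twin computation in TensorFlow 1.x, including the sigmoid output and training objective described next. Only the 200-dimensional embedding is fixed above; the encoder's per-layer channel counts and the learning rate are our assumptions.

```python
import tensorflow as tf

def encode(image):
    # Shared twin encoder mapping an image to a 200-dim feature vector.
    # Layer sizes are placeholders; the real structure is that of Figure 2.
    with tf.variable_scope("siamese_encoder", reuse=tf.AUTO_REUSE):
        h = image
        for i, filters in enumerate([32, 64, 128]):        # assumed channel counts
            h = tf.layers.conv2d(h, filters, 3, padding="same",
                                 activation=tf.nn.relu, name="conv%d" % i)
            h = tf.layers.max_pooling2d(h, 2, 2)
        return tf.layers.dense(tf.layers.flatten(h), 200, name="embed")

def siamese(img_a, img_b, target):
    # P_S(a, b): estimated probability that the two images show
    # *different* people (target 0 = same person, 1 = different).
    diff = tf.abs(encode(img_a) - encode(img_b))           # symmetric distance metric
    p_diff = tf.layers.dense(diff, 1, activation=tf.nn.sigmoid, name="out")
    # Train by minimizing |output - target|; learning rate assumed.
    loss = tf.reduce_mean(tf.abs(p_diff - target))
    train_op = tf.train.AdagradOptimizer(0.005).minimize(loss)
    return p_diff, loss, train_op
```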

Figure 2: Four-layer siamese network architecture.

The sigmoid of this output maps it into the interval [0, 1]; the result represents the estimated probability that the two images show different people. We used TensorFlow's adaptive subgradient [6] optimization algorithm to minimize the absolute value of the difference between the output of the siamese network and the target.

Once the siamese network is trained to determine whether two images are from the same class, it can be used to perform n-shot c-classification as follows. Given c classes and a set R_i of n reference images for each class i,

    R_1 = {x_11, ..., x_1n}, ..., R_c = {x_c1, ..., x_cn},

where x_ij denotes the jth reference image for class i, we compute for each new unknown image x_u a distance metric D(x_u, i) to each class i with one of two methods:

1. The minimum probability method: D(x_u, i) = min_j P_S(x_u, x_ij)

2. The average probability method: D(x_u, i) = (1/n) * sum_{j=1}^{n} P_S(x_u, x_ij)

where P_S(x_1, x_2) denotes the probability that x_1 and x_2 belong to different classes, as estimated by the siamese network. We then classify each new image as

    C(x_u) = argmin_i D(x_u, i).

Hence, in the minimum probability method we assign an image to the class containing the reference image most similar to it, whereas in the average probability method we average, over the reference images of each class, the estimated probability that the image belongs to the same class, and maximize that probability (note that 1 - P_S(x_u, x_ij) is the estimated probability that x_u belongs to the same class as x_ij).
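A direct sketch of these two methods, given a pairwise probability function p_s as produced by the trained siamese network; this is illustrative rather than our exact evaluation code.

```python
import numpy as np

def classify(p_s, x_u, references, method="average"):
    """n-shot c-classification as defined above. references[i] is the list
    of n reference images for class i; p_s(a, b) is the siamese network's
    estimate that a and b show different people."""
    scores = []
    for refs in references:
        probs = [p_s(x_u, x_ref) for x_ref in refs]
        if method == "min":
            scores.append(min(probs))          # minimum probability method
        else:
            scores.append(np.mean(probs))      # average probability method
    return int(np.argmin(scores))              # class with the smallest D(x_u, i)
```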

Results

The training and validation loss curves for the siamese network are shown in Figure 3. The model exhibits slight overfitting, but not to the extent that the validation loss begins increasing. The curve was still sloping slightly downward when we terminated training, so it is possible that the model would continue to improve given more training time (here it was trained for approximately 8 hours on our setup).

Figure 3: The training and validation loss for the siamese network. The model exhibits slight overfitting.

With the siamese network successfully estimating the probability that two images belong to the same class, we first tested the n-shot 2-classification task described above; the results are shown in Figure 4. For the one-shot task, the siamese network achieves an accuracy of approximately 90%. Additionally, we built a demo that lets the user capture reference images for two new classes with a webcam and then classifies subsequent images using the siamese network. An example trial of this demo is shown in Figure 7, where it achieves perfect accuracy on the two authors (as it does most of the time). After running the demo a number of times, we found the model to perform extremely well on most pairs of people as long as the faces were well lit, framed correctly, and mostly facing the camera. When these conditions are met, the demo almost always achieves perfect results. Figure 7 shows that the model performs well under a variety of facial expressions and changes in face orientation.

Given these successful one-shot results, we expected the model to perform even better on the n-shot task (with n > 1). The bar plot in Figure 4 shows the effect of increasing the number of reference images on 2-classification accuracy, for both distance metric methods. Adding more reference images increases accuracy from 90% to around 94% for both metrics, with the average distance metric performing slightly better. For the rest of our analysis we used 4-shot classification, since most of the benefit of additional reference images is already obtained with only 4.

Figure 4: The model achieves 90% accuracy on the one-shot classification task of two classes, using both the average distance metric (blue) and the minimum distance metric (orange). Increasing the number of reference images increases the accuracy up to approximately 94% for 4 or more reference images, with the average distance metric performing slightly better than the minimum distance metric. The visible variance in the plot arises because we cannot test all possible image pairs during validation, so classes and reference images are randomly sampled.

So far we have only reported results on n-shot classification of 2 classes. We also investigated how well our model does as the number of classes increases; this is shown in Figure 5. As expected, accuracy decreases as the number of classes increases, down to approximately 68% for 10 classes. While this is significantly better than random guessing (10%), it is not practical for real-life facial recognition applications. To better understand how our model is performing, we computed a confusion matrix for one subset of 10 classes taken from the test set, shown in Figure 6. The model performs best when classifying images of Rupert Grint (class 4) and worst when classifying images of Omid Djalili (class 9). We manually inspected the images from each of these classes and found that most images of Rupert Grint are well framed, facing the camera, with a similar expression. Images of Omid Djalili, on the other hand, are taken from many different angles, with varying expressions, hair colors, hairstyles, and facial hair, wearing different hats, and with wildly different lighting and makeup (he is a comedian and a background actor who takes a number of different role types). So our model's mistakes are reasonable, although of course we would like perfect performance under all these conditions.

Figure 5: Siamese network accuracy on the 4-shot classification task as the number of classes increases. We chose the 4-shot task because, as evident from Figure 4, four reference images give as high an accuracy as any greater number of reference images.

Figure 6: Confusion matrix for the 10-class case.

Conclusion and Future Work

Emulating the efficiency with which humans learn has long been a challenge for artificial systems. In modeling the one-shot face classification task, we explored several ways to measure the similarity between face images, as well as ways to represent the images efficiently for this task. We found that autoencoders are an interesting way to compress images into a lower-dimensional feature space, but they failed to retain some potentially important aspects of the images. Siamese neural networks, on the other hand, turned out to be very good at learning descriptive features of the images while also providing a way to compute distances between them. Our siamese network performed much better than the baseline on one-shot face classification, achieving approximately 90% accuracy on the test set. We found that increasing the number of reference images increased accuracy, up to 94% at four reference images (4-shot classification). Additionally, our model continues to perform decently as the number of classes increases, but it is not yet at a level sufficient for commercial application.

We suspect that our model could perform much better given more training data. During our experiments, a mistake in the way we loaded data initially caused us to use only half of it. Trained on that half, the model could achieve no better than 80% accuracy on the one-shot two-classification task, and we had to use a reduced model with only 3 layers, smaller window sizes, and fewer filters in order to avoid extreme overfitting. As soon as we found the error and trained on all the data, we were easily able to increase the model to its final size and achieve 90% accuracy. Since only a two-fold increase in data caused such a large performance boost, and since we are still overfitting to the training data, we suspect that with even more training data we could use a larger model and achieve better results. We had hoped to try the MegaFace [8] dataset, but were not granted access until a couple of days ago, which was too late to update our model.

Another interesting direction would be for the model to learn to identify new classes as it encounters them. Currently, we must explicitly provide reference images to the model for each class. For fully autonomous classification of faces, our model would have to identify new classes by itself (for example, if a new image is unlikely to belong to any known class, assign it a new class). We explore one approach to this problem in our quarter project for ML, which uses Neural Turing Machines for the same task of one-shot face recognition.

Figure 7: Results of our binary classification demo, using images of the two authors taken with the webcam. The model achieves perfect accuracy in this case, and in most other trials with the two authors as well.

References

[1] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and Lillicrap, T. One-shot learning with memory-augmented neural networks. arXiv:1605.06065 (2016). Available online at: https://arxiv.org/abs/1605.06065

[2] Ng, Hong-Wei, and Stefan Winkler. A data-driven approach to cleaning large face datasets. 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014.

[3] Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science 350.6266 (2015): 1332-1338.

[4] Zeiler, Matthew D., et al. Deconvolutional networks. Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.

[5] Koch, Gregory. Siamese neural networks for one-shot image recognition. Diss. University of Toronto, 2015.

[6] Duchi, John, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12.Jul (2011): 2121-2159.

[7] Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science 313.5786 (2006): 504-507.

[8] Nech, Aaron, and Ira Kemelmacher-Shlizerman. MegaFace 2: 672,057 identities for face recognition. 2016.