Bird Species Identification from an Image


Aditya Bhandari,¹ Ameya Joshi,² Rohit Patki³
¹ Department of Computer Science, Stanford University
² Department of Electrical Engineering, Stanford University
³ Institute for Computational and Mathematical Engineering, Stanford University

This document is the final project report for the CS 229 Machine Learning course at Stanford University. The project aims to quantify the qualitative description of different bird species using machine learning techniques and to use it as an effective tool for identifying bird species from images.

1 Introduction

Identification of bird species is a challenging task that often results in ambiguous labels. Even professional bird watchers sometimes disagree on the species shown in an image. It is a difficult problem that pushes the limits of the visual abilities of both humans and computers. Although bird species share the same basic set of parts, they can vary dramatically in shape and appearance, and intraclass variance is high due to variation in lighting and background and extreme variation in pose (e.g., flying birds, swimming birds, and perched birds partially occluded by branches). Our project aims to employ the power of machine learning to help amateur bird watchers identify bird species from the images they capture.

2 Dataset

Caltech and UCSD have gathered data to produce the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset [3]. The dataset contains 11,788 images of 200 bird species. The list of species names was obtained using an online field guide. Images were harvested using Flickr image search and then filtered by showing each image to multiple users of Mechanical Turk.
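As an illustration of how such a dataset might be loaded, the sketch below assembles per-image class labels and binary attribute annotations into a feature matrix; the file names and column layout (image_class_labels.txt and attributes/image_attribute_labels.txt) are assumptions about the public CUB-200-2011 release, not details given in this report.

```python
import numpy as np

# Assumed layout of the public CUB-200-2011 release:
#   image_class_labels.txt                 -> "<image_id> <class_id>"
#   attributes/image_attribute_labels.txt  -> "<image_id> <attribute_id> <is_present> <certainty_id> <time>"
N_IMAGES, N_ATTRIBUTES = 11788, 312

def load_cub_attributes(root="CUB_200_2011"):
    """Build an (images x attributes) binary matrix X and a class-label vector y."""
    y = np.zeros(N_IMAGES, dtype=int)
    with open(f"{root}/image_class_labels.txt") as f:
        for line in f:
            image_id, class_id = line.split()[:2]
            y[int(image_id) - 1] = int(class_id)

    X = np.zeros((N_IMAGES, N_ATTRIBUTES), dtype=np.uint8)
    with open(f"{root}/attributes/image_attribute_labels.txt") as f:
        for line in f:
            image_id, attr_id, is_present = line.split()[:3]
            X[int(image_id) - 1, int(attr_id) - 1] = int(is_present)
    return X, y

if __name__ == "__main__":
    X, y = load_cub_attributes()
    print(X.shape, y.shape)  # expected: (11788, 312) (11788,)
```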

3 Features

A vocabulary of 28 attribute groupings and 312 binary attributes (e.g., the attribute group "belly color" contains 15 different color choices) was selected based on an online tool for bird species identification. All attributes are visual in nature, with most pertaining to the color, pattern, or shape of a particular part. Some examples of attributes are:

has back color::red
has bill shape::cone
has wing shape::pointed-wings

4 Algorithms

We realized that the essence of the project was to understand the intricacies of different machine learning algorithms and to learn which algorithm gives good results for which use case. With this philosophy, we wrote our own implementations of KNN and Naive Bayes in MATLAB. An added advantage of not using any library was that we could tweak whatever parameters we wanted to. The results of these two algorithms gave us a baseline for further techniques implemented with available libraries. Libraries such as scikit-learn let us tweak many aspects of an algorithm, though not to the extent of our own implementations, so we faced an inherent trade-off between tuning flexibility and the number of algorithms that could be implemented and tested within the time frame of the project. We chose to try out numerous algorithms using the scikit-learn library [2] in Python:

1. Naive Bayes
2. Support Vector Machines
3. K-nearest Neighbors
4. Linear Discriminant Analysis (LDA)
5. Decision Trees
6. Random Forests
7. One-versus-Rest classifiers with Logistic Regression

Based on the results obtained, we chose the best three techniques to improve upon. We used various feature selection and feature reduction techniques to see whether we could improve the accuracy further. We started by changing kernels for SVM (linear and radial basis function). Next, we performed feature reduction using PCA and applied SVM, Logistic Regression, and LDA on the reduced features.
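As a minimal sketch of the kind of scikit-learn comparison described above, the snippet below trains the listed classifiers on a stand-in binary feature matrix with a 70/30 split; the stand-in data, split, and hyperparameters are illustrative assumptions, not the exact settings used in the project.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Stand-in for the 312 binary attribute features: 200 classes, 20 examples each.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(200), 20)
X = rng.integers(0, 2, size=(y.size, 312)).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "Naive Bayes": BernoulliNB(),
    "SVM": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "OvR Logistic Regression": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:>25}: train {model.score(X_train, y_train):.3f}, "
          f"test {model.score(X_test, y_test):.3f}")
```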

We then used feature selection techniques such as an L1-based method, removal of low-variance features, univariate feature selection, and tree-based feature selection. A slight improvement encouraged us to explore further: we used PCA for feature reduction followed by feature selection to obtain a new feature set, and on this data we applied LDA, Logistic Regression, and SVM, which improved the accuracy further. Finally, we tried including the certainty values of the features in our model, converting the original binary feature data into eight discrete values between 0 and 1 based on the certainty. Running the algorithms on this data produced no significant change.

5 Results

We first trained and tested our algorithms on the complete data set. Later, we randomly separated the data into training and test sets so that each class was represented in both; 70% of the data was used for training and 30% for testing. The following figures and tables show the results of the algorithms described in the previous section. Figure 1 shows the training versus testing accuracy for the different learning methods we implemented. Figure 2 shows the testing accuracy of different techniques applied to three of the learning methods: LDA, SVM, and Logistic Regression.

Table 1: Results (accuracies in %; the last four columns are testing accuracies with the certainty metric, PCA, feature selection, and PCA + feature selection)

Method               Training Acc.  Testing Acc.  Certainty metric  PCA    Feature Selection  PCA + Feature Selection
Naive Bayes          33.07          19.22         -                 -      -                  -
KNN                  45.43          31.18         -                 -      -                  -
Decision Trees       99.83          24.35         -                 -      -                  -
Random Forests       99.39          33.58         -                 -      -                  -
LDA                  63.56          45.44         46.73             47.81  47.7               47.38
SVM                  50.67          43.91         48.15             48.74  49.11              46.93
Logistic Regression  84.42          51.61         52.42             53.31  51.02              53.65
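For concreteness, the sketch below shows one way the configuration behind the best entry in Table 1 (PCA followed by feature selection, then Logistic Regression) could be wired together in scikit-learn; the component choices and parameter values are illustrative assumptions rather than the exact settings used in the project.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for the 312 binary attribute features: 200 classes, 20 examples each.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(200), 20)
X = rng.integers(0, 2, size=(y.size, 312)).astype(float)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# PCA for feature reduction, then univariate feature selection, then the classifier.
pipeline = Pipeline([
    ("pca", PCA(n_components=150)),
    ("select", SelectKBest(f_classif, k=100)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print("train accuracy:", pipeline.score(X_train, y_train))
print("test accuracy:", pipeline.score(X_test, y_test))
```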

Figure 1: Training vs testing accuracy for the different learning methods
Figure 2: Testing accuracy with different techniques (LDA, SVM, Logistic Regression)

6 Discussion

We initially observed low accuracy with our basic implementations of Naive Bayes and KNN in MATLAB. We then observed improved accuracy with library implementations of SVM, LDA, and Logistic Regression, and feature selection and feature reduction improved the accuracy to about 53%. We believe such an accuracy for a 200-class classification problem is fairly decent.

Table 2: Comparison with related published work [1]

Feature Extraction Method  Learning Method      Percentage Accuracy
MTurks                     Logistic Regression  53.65
Computer Vision            SVM                  51.0
Computer Vision            Logistic Regression  65.0
Computer Vision            SVM+CNN              75.7

7 Future Work

1. We implemented neural networks, but running them on our machine with just 5 hidden neurons exhausted memory and could not complete, so we could try running neural networks on high-performance computing machines.
2. Computer vision algorithms can be used for automatic feature extraction.
3. We could develop an Android/iOS application that identifies a bird in real time from a photo taken of it.

References

[1] Steve Branson et al. "Bird Species Categorization Using Pose Normalized Deep Convolutional Nets". In: CoRR abs/1406.2952 (2014). URL: http://arxiv.org/abs/1406.2952.
[2] F. Pedregosa et al. "Scikit-learn: Machine Learning in Python". In: Journal of Machine Learning Research 12 (2011), pp. 2825-2830.
[3] C. Wah et al. "The Caltech-UCSD Birds-200-2011 Dataset". Tech. rep. CNS-TR-2011-001. California Institute of Technology, 2011.