
COMPUTATIONAL INTELLIGENCE Autoencoders for feature extraction Adrian Horzyk

Autoencoders

An autoencoder is a kind of artificial neural network that is trained in an unsupervised manner to reproduce its input data at its output while representing the data with a reduced dimensionality. The reduced-dimensional representation captures frequent combinations that constitute complex data features, which can then be used by various classifiers. An autoencoder consists of an encoder and a decoder.
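To make the encoder/decoder structure concrete, below is a minimal sketch in Python/NumPy. The layer sizes, the sigmoid activation, and all variable names are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

n_inputs, n_hidden = 8, 3                            # hidden layer smaller than input: reduced dimensionality
W_enc = rng.normal(0, 0.1, (n_hidden, n_inputs))     # encoder weights (assumed random initialization)
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_inputs, n_hidden))     # decoder weights
b_dec = np.zeros(n_inputs)

def encode(x):
    # map the input to the reduced (latent) representation
    return sigmoid(W_enc @ x + b_enc)

def decode(h):
    # map the latent representation back to the input space
    return sigmoid(W_dec @ h + b_dec)

x = rng.random(n_inputs)
x_reconstructed = decode(encode(x))                  # training pushes this towards x itself
```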

Types of Autoencoders

We can distinguish a few types of autoencoders:

Undercomplete Autoencoders represent data in an undercomplete way, i.e. the outputs do not reproduce the inputs precisely, which allows for generalization, feature extraction, modeling of the data distribution, and correction of outliers. Training such autoencoders aims to minimize a loss function that measures the differences between outputs and inputs. When these autoencoders are linear, they work similarly to PCA (Principal Component Analysis), so they can replace this kind of preprocessing algorithm (PCA or ICA).

Autoencoders with Regularization use the complexity of the modeled data distribution, rather than a restricted dimensionality, to select an adequate dimension and capacity of the encoder and decoder. Their loss function makes them resistant to noise and missing data and lets them learn the correct data distribution. These autoencoders can be non-linear and overcomplete as well.

Sparse Autoencoders are used for other computational tasks, e.g. classification, where we need to represent frequent features rather than find a perfect identity function. In this approach, a sparsity penalty discourages too many hidden units from being active at the same time, which leads to a sparse representation of the inputs and to useful feature extraction as a preparation phase for classification (see the sketch after this list).

Anomaly Detection Autoencoders are used to detect rare features that stand for various anomalies in the data and can identify outliers.

Denoising Autoencoders (DAE) try to find a function that returns the correct output for noisy, corrupted, or incomplete inputs; they have to recover the original, undistorted inputs at their outputs.
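One common way to obtain the sparse representation mentioned above, used for instance in the Stanford tutorial cited in the bibliography, is to add a penalty that keeps the average activation of each hidden unit close to a small target value. Below is a sketch of such a loss; the target sparsity rho and the penalty weight beta are arbitrary assumed values.

```python
import numpy as np

def sparse_autoencoder_loss(x, x_reconstructed, hidden_activations,
                            rho=0.05, beta=3.0):
    """Reconstruction error plus a sparsity penalty on the hidden layer.

    rho  -- desired average activation of each hidden unit (assumed value)
    beta -- weight of the sparsity penalty (assumed value)
    """
    # reconstruction term: how far the outputs are from the inputs
    reconstruction = 0.5 * np.mean(np.sum((x - x_reconstructed) ** 2, axis=1))

    # average activation of each hidden unit over the batch
    rho_hat = np.clip(hidden_activations.mean(axis=0), 1e-8, 1 - 1e-8)

    # KL divergence penalizing units that are active too often
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    return reconstruction + beta * kl
```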

Training of Autoencoders

Autoencoders are trained in an unsupervised way using algorithms typically used for supervised learning, e.g. backpropagation. This is possible because the expected outputs are simply the inputs themselves. Assume that we have a set of unlabeled training examples {x_1, x_2, ..., x_m}, where x_i ∈ R^n. An autoencoder then uses expected outputs defined as y_i = x_i, where y_i is the expected output value. Autoencoders can learn to extract features similarly to how Convolutional Neural Networks (CNNs) do.

The representational capabilities of autoencoders are associated with the number of encoding and decoding layers. When an autoencoder has more than a single encoding and decoding layer, we call it a deep autoencoder. Deep autoencoders usually achieve a better compression ratio than flat (shallow) autoencoders, and they can be constructed from flat autoencoders trained separately, one after another. Autoencoders are usually trained using the backpropagation algorithm; however, other algorithms can also be used, e.g. the recirculation algorithm.
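A minimal sketch of this training scheme for a single-hidden-layer autoencoder follows, using plain gradient descent with backpropagation and the targets y_i = x_i. The layer sizes, learning rate, number of epochs, and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, lr = 16, 4, 0.5
X = rng.random((200, n_in))               # unlabeled training examples x_i in R^n
W1 = rng.normal(0, 0.1, (n_in, n_hid))    # encoder weights
W2 = rng.normal(0, 0.1, (n_hid, n_in))    # decoder weights
b1, b2 = np.zeros(n_hid), np.zeros(n_in)

for epoch in range(500):
    H = sigmoid(X @ W1 + b1)              # hidden (encoded) representation
    Y = sigmoid(H @ W2 + b2)              # reconstruction; the target is y_i = x_i
    err = Y - X                           # supervised-style error signal with targets = inputs

    # backpropagation of the mean squared reconstruction error
    dY = err * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

print("final reconstruction MSE:", np.mean(err ** 2))
```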

Combining Autoencoders with MLPs

Sparse autoencoders are often trained in order to be combined with other types of artificial neural networks, e.g. MLPs, because they can preprocess raw input data and extract useful features for the other network. One of our goals during the laboratory classes will be to implement such a combination of an autoencoder and an MLP; a sketch of this idea follows below.
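The sketch below shows the general pattern: the encoder's output is used as the feature vector fed into an MLP classifier. The toy data, the labels, the randomly initialized stand-in encoder, and the use of scikit-learn's MLPClassifier (any MLP implementation would do) are all illustrative assumptions, not the exact laboratory setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier   # one possible MLP implementation

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data and labels for illustration only.
X = rng.random((200, 16))
y = (X.mean(axis=1) > 0.5).astype(int)

# Stand-in for an encoder; in practice W_enc, b_enc come from a trained (sparse) autoencoder.
W_enc = rng.normal(0, 0.1, (16, 4))
b_enc = np.zeros(4)
features = sigmoid(X @ W_enc + b_enc)              # extracted features

# The MLP is trained on the extracted features instead of the raw inputs.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(features, y)
print("training accuracy:", clf.score(features, y))
```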

BIBLIOGRAPHY AND REFERENCES

1. Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, 2016, ISBN 978-1-59327-741-3, or the Polish edition, PWN, 2018.
2. Stanford University tutorial on unsupervised learning, section on autoencoders: http://ufldl.stanford.edu/tutorial/unsupervised/autoencoders/