A Survey on Pattern Recognition Using Spiking Neural Networks with Temporal Encoding and Learning


A. Dhilipan, P.G. Scholar, Department of Computer Science, Anna University Regional Centre, Coimbatore, adhilipbe@gmail.com; J. Preethi, Assistant Professor, Department of Computer Science, Anna University Regional Centre, Coimbatore, preethi17j@yahoo.com; M. Sreeshakthy, P.G. Scholar, Department of Computer Science, Anna University Regional Centre, Coimbatore, m.sribtechit@gmail.com; V. Sangeetha, P.G. Scholar, Department of Computer Science, Anna University Regional Centre, Coimbatore, sangeethaiyer11@gmail.com

Abstract - This paper surveys pattern recognition using spiking neural networks with temporal encoding and learning. Neural networks play an important role in cognitive and decision-making processes, and processing different kinds of input makes it possible to discriminate between patterns. Leaky integrate-and-fire neurons are used to recognize the patterns, and a supervised learning method makes the decision during recognition. Temporal encoding and learning in spiking neural networks allow patterns to be classified effectively. Several spiking neural network learning algorithms are applied to pattern recognition, and the performance of each algorithm is analyzed.

Keywords - Pattern recognition; Spiking Neural Networks; Cognitive and Decision Making; Temporal Encoding; Supervised Learning.

1. INTRODUCTION

Pattern recognition is the process of classifying data based on prior knowledge or on statistical information extracted from the patterns. Usually a pattern is classified using a collection of data. Several methods are used to implement pattern recognition, such as the maximum entropy classifier, the Naïve Bayes classifier, decision trees, support vector machines and perceptrons.
These methods are less biologically inspired, so a spiking neural network is used instead to recognize patterns, since it has strong biological support, and biologically inspired models provide better computational power for pattern recognition. Pattern recognition follows several steps to identify a particular pattern. Before a pattern is processed, one must first identify how information is stored in it. Two basic schemes are used to process the pattern: rate coding and temporal coding. In the rate coding model the firing rate carries the communication: greater stimulus intensity increases the rate of action potentials. In rate coding every neuron may fire, but which neurons fire is chosen probabilistically, so a pattern is characterized by firing rates rather than by individual spike times; learning under rate coding is adaptive weight modification. The other scheme is temporal coding. Temporal patterns in a spike train carry information faster than rate coding. An example of temporal coding is latency coding, which carries information in the spike times within an encoding window. There are many methods for classifying a particular pattern, but their performance is lower; spiking neural networks exploit precise spike timing, which gives high performance and accuracy in classifying patterns. There are several spiking neuron models, such as the integrate-and-fire (IF) model, the Hodgkin-Huxley-type model and the Izhikevich model. The IF model performs well and is simple compared with the other models. Several algorithms are used to process the patterns, such as back propagation, spike-timing-dependent plasticity (STDP) and the spatiotemporal learning rule. In this paper patterns are identified using spiking neural networks, and the performance and accuracy of each algorithm are analyzed.
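As a concrete illustration of the latency coding just described, the following minimal sketch maps stronger inputs to earlier spikes within an encoding window (the function name, the 10 ms window and the [0, 1] input range are illustrative assumptions, not taken from the surveyed papers):

```python
def latency_encode(values, window_ms=10.0):
    """Map normalized input intensities in [0, 1] to single spike times.

    Latency coding: the stronger a component, the earlier its spike
    fires inside the encoding window.
    """
    spike_times = []
    for v in values:
        v = min(max(v, 0.0), 1.0)                  # clamp to the valid range
        spike_times.append((1.0 - v) * window_ms)  # intensity 1.0 fires at t = 0
    return spike_times

# A strong component fires first, a weak one last:
times = latency_encode([1.0, 0.5, 0.0])  # [0.0, 5.0, 10.0]
```

Note that, unlike rate coding, a single spike per component carries the information here; no spike counts are needed.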
Section 2 describes related work on pattern recognition, Section 3 describes the steps used to recognize patterns, Section 4 deals with the performance of the pattern recognition algorithms, and the final Section 5 presents the result and conclusion of this paper.

2. RELATED WORKS

Several works exist on pattern recognition. In [1] a bio-inspired spiking neural network is used to recognize patterns. Iris-related data sets are used to process the patterns, and leaky integrate-and-fire neuron models analyze the patterns based on a supervised learning mechanism. The whole system consists of encoding, learning and readout stages. Temporal encoding and learning are used in the spiking neural network to recognize patterns effectively and efficiently, and network training speeds up neural information processing. [2] describes how multiple views of information are processed using a spiking neural network. The visual patterns are recognized using integrate-and-fire neurons. The neurons are arranged hierarchically and information is processed layer by layer, frame by frame; each frame adapts the network structure through Hebbian-based training. The patterns are analyzed using the VidTIMIT data set to recognize facial information. [3] uses the spatiotemporal learning rule to analyze information processing in a spiking neural network. During training, neurons fire and learn using a Hebbian-based training rule. The method is applied to several inputs, and the performance of the information in the memory system is analyzed. [4] classifies patterns using spiking neural networks. The network fires using integrate-and-fire neurons, and the neurons are trained using a supervised learning process. The system proceeds through the following steps: encoding, learning and readout. The process is applied to digital images such as MNIST and to alphabetic letters, and the performance of the pattern recognition is analyzed; it also applies long-term modification such as spike-timing-dependent plasticity. [5] analyzes spatially related data processed with spiking neural networks; different encoding and learning processes are used to analyze the performance and accuracy of the classification. [6] deals with encoding data and processing it with the back propagation algorithm; temporal coding processes information as fast as the rate coding method. XOR-related data are processed using spiking networks with back propagation, and the performance is analyzed on benchmark data.

3. SPIKING NEURAL NETWORKS

Spiking neural networks (SNN) are used to analyze patterns; the basic SNN architecture is explained by the following structure.

Fig 1: Overall structure of the pattern recognition (stimuli/inputs -> encoding neurons -> learning neurons -> readout neurons -> patterns/output)
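The three-stage structure of Fig. 1 can be sketched as a toy pipeline; this is an illustrative wiring, not code from any of the surveyed papers, and the simple decision functions stand in for trained tempotrons:

```python
def recognize(stimulus, neurons, window_ms=10.0):
    """Toy encode -> learn/decide -> readout pipeline.

    stimulus: normalized intensities in [0, 1]
    neurons:  trained decision functions; each fires (True) or not (False)
    returns:  class index decoded from the binary firing pattern
    """
    # Encoding: each component becomes one spike latency (strong -> early).
    spike_times = [(1.0 - min(max(v, 0.0), 1.0)) * window_ms for v in stimulus]
    # Learning layer: each neuron reports fired (1) or unfired (0).
    fired = [1 if neuron(spike_times) else 0 for neuron in neurons]
    # Readout: N binary outputs can index up to 2**N pattern classes.
    return sum(bit << i for i, bit in enumerate(fired))

# Two toy "neurons" that fire if the earliest spike arrives soon enough:
n1 = lambda ts: min(ts) < 3.0
n2 = lambda ts: min(ts) < 8.0
label = recognize([0.9, 0.2], [n1, n2])  # both fire -> class 3
```

The readout step shows why N learning neurons can separate at most 2**N classes: each class is a distinct fired/unfired bit pattern.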
Fig 2: Architecture of Spiking Neural Networks

3.1 PATTERN RECOGNITION STEPS IN SPIKING NEURAL NETWORKS

The SNN used to recognize patterns has several parts: an encoding part, a learning part and a readout part. Each stimulus (input) has several components; these components are connected to the encoding part, and the spiking information is trained through the learning process. During learning the weights are updated, and the readout process extracts the particular patterns.

3.1.1 ENCODING PART

Encoding is the process of generating spiking patterns from the input stimuli. Temporal encoding is used to generate the spiking patterns rather than rate coding. Latency coding is an example of temporal coding; it encodes the information in the spike times within an encoding window. During encoding each component is selected and the input values are converted into latency information: the binary information is converted into a temporal pattern of discrete spikes.

Fig 3: Input encoding for pattern recognition

3.1.2 LEARNING PART

The learning neurons play an important role in classifying the patterns. The learning network consists of a layer of tempotrons, and each encoding neuron is connected to the learning neurons; the number of learning neurons equals the number of encoding neurons. The tempotrons can classify the task as long as the load is less than a critical value. A neuron generates an action spike when its internal potential crosses the firing threshold.

3.1.3 READOUT PART

The main goal of the readout process is to extract the particular pattern from the input stimuli via the learning neurons. Here the patterns are represented with binary values, because classification uses only 1 or 0: a pattern is classified either by fired neurons (1) or by unfired neurons (0). With N learning neurons the output can represent a maximum of 2^N classes of patterns.

Fig 4: Readout process for pattern recognition

3.2 TEMPORAL LEARNING RULES

The goal of a temporal learning rule is to deal with information encoded in spike times. The spike timing is governed by spike-timing-dependent plasticity (STDP): STDP depends on the intervals between pre- and postsynaptic spikes, and its basic mechanisms are long-term potentiation and long-term depression. The neuron model used here is the leaky integrate-and-fire neuron driven by synaptic currents. The potential of the neuron is a weighted sum of the postsynaptic potentials, defined in [1] as

    V(t) = sum_i w_i sum_{t_i} K(t - t_i) + V_rest    (1)

where w_i and t_i are the synaptic efficacy and the firing times of the i-th afferent, V_rest is the rest potential, and K is the normalized PSP kernel. A neuron fires when V(t) crosses the firing threshold; afterwards the potential smoothly decays back to V_rest. In the classification process each neuron is classified as either fired or not fired (P+ or P-). When a tempotron fails, its weights are updated at the instructor time (T). The learning process is therefore supervised, and it is driven by the neural activities.

3.2.1 LEARNING FROM THE NEURAL ACTIVITIES

Several activities increase the memory patterns; the synaptic weights take binary values or are drawn from a continuous distribution. In Hopfield networks the memory patterns are expressed by the activities of the neurons, which may be active (1) or inactive (0). The temporal pattern is separated from the spatiotemporal pattern by the learning rules, and the patterns are classified using binary and ternary patterns, as shown in Figs 5 and 6.

Fig 5: Binary pattern classification

Fig 6: Ternary pattern classification

4. PERFORMANCE ANALYSIS

Spiking neural networks are used to classify patterns for different kinds of inputs such as iris data, digital images, alphabetic letters and other data. The SNN has three steps: encoding the input data, learning with the neurons, and the readout process. During encoding the input data are converted into spiking information; the spiking data are then trained, and the readout extracts the particular patterns from the input data. Table 1 shows the performance details of the pattern recognition.
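The leaky integrate-and-fire potential of Eq. (1) in Section 3.2 can be sketched as below. The double-exponential kernel shape follows the usual tempotron formulation; the time constants, threshold and simulation window are illustrative assumptions, not values from the surveyed papers:

```python
import math

TAU_M, TAU_S = 10.0, 2.5   # membrane and synaptic time constants (ms), illustrative
V_REST, THETA = 0.0, 1.0   # rest potential and firing threshold, illustrative

# Peak time of the kernel, used so that max K(s) = 1 (normalized PSP kernel).
T_PEAK = TAU_M * TAU_S / (TAU_M - TAU_S) * math.log(TAU_M / TAU_S)
V0 = 1.0 / (math.exp(-T_PEAK / TAU_M) - math.exp(-T_PEAK / TAU_S))

def K(s):
    """Causal double-exponential postsynaptic potential kernel K(s)."""
    if s <= 0.0:
        return 0.0
    return V0 * (math.exp(-s / TAU_M) - math.exp(-s / TAU_S))

def V(t, weights, spike_trains):
    """Eq. (1): V(t) = sum_i w_i * sum over spikes t_i of K(t - t_i) + V_rest."""
    return V_REST + sum(w * sum(K(t - t_i) for t_i in train)
                        for w, train in zip(weights, spike_trains))

def fires(weights, spike_trains, t_end=50.0, dt=0.1):
    """Classify P+ (True) if V(t) ever crosses the threshold, else P- (False)."""
    return any(V(k * dt, weights, spike_trains) >= THETA
               for k in range(int(t_end / dt) + 1))
```

A single afferent whose efficacy exceeds the threshold makes the neuron fire, while a small efficacy leaves it silent; this fired/unfired outcome is exactly the binary P+/P- decision used by the readout.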

International Journal of Research in Advent Technology, Vol.2, No.11, November 2014

Table 1: Performance of pattern recognition with spiking neural networks

NO | PAPER | INPUT | METHOD | ACCURACY | REF
1 | Brain-Inspired Spiking Neural Network with Temporal Encoding and Learning | Iris data | STDP | 99.63% | [1]
2 | Pattern Recognition in a Spiking Neural Network with Temporal Encoding and Learning | MNIST and digital images | STDP | 99.1% | [2]
3 | The Spatiotemporal Learning Rule and Its Efficiency in Separating Spatiotemporal Patterns | Spatiotemporal data | Hebbian learning rule | 85% | [3]
4 | Fast and Adaptive Network of Neurons in Multi-View Visual Pattern Recognition | VidTIMIT data | Structural adaptation and frame-by-frame operation | 95.7% | [4]
5 | Error Back Propagation in Temporally Encoded Networks of Spiking Neurons | XOR problem and benchmark data | SpikeProp and error propagation | 97.1% | [5]

5. RESULT AND CONCLUSION

From the above analysis, spiking neural networks can identify patterns from different types of inputs such as iris data, cancer data, digital images and alphabetic letters. To process these data, each input is encoded and trained using different training rules, and the patterns are then recognized. A spiking neural network is based on the firing of neurons, and so spiking networks provide the best results for recognizing patterns.

REFERENCES

[1] Qiang Yu, Huajin Tang, Kay Chen Tan, Haoyong Yu, "A brain-inspired spiking neural network model with temporal encoding and learning", vol. 138, pp. 3-13, Springer.
[2] Qiang Yu, Huajin Tang, "Pattern Recognition Computation in a Spiking Neural Network with Temporal Encoding and Learning", pp. 10-15, IEEE.
[3] Minoru Tsukada, Xiaochuan Pan, "The spatiotemporal learning rule and its efficiency in separating spatiotemporal patterns", pp. 139-146, Springer.
[4] Simei Gomes Wysoski, Lubica Benuskova, Nikola Kasabov, "Fast and adaptive network of spiking neurons for multi-view visual pattern recognition", pp. 2563-2575, Springer.
[5] Sander M. Bohte, Joost N. Kok, Han La Poutre, "Error-backpropagation in temporally encoded networks of spiking neurons", pp. 17-37, Elsevier.
[6] R. C. Froemke, Y. Dan, "Spike-timing-dependent synaptic modification induced by natural spike trains", Nature 416(6879) (2002) 433-438.

[7] S. Ghosh-Dastidar, H. Adeli, "Improved spiking neural networks for EEG classification and epilepsy and seizure detection", Integr. Comput.-Aided Eng. 14(3) (2007) 187-212.
[8] S. Song, K. D. Miller, L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity", Nat. Neurosci. 3 (2000) 919-926.
[9] J. J. Wade, L. J. McDaid, J. A. Santos, H. M. Sayers, "SWAT: a spiking neural network training algorithm for classification problems", IEEE Trans. Neural Netw. 21(11) (2010) 1817-1830.
[10] S. Mitra, S. Fusi, G. Indiveri, "Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI", vol. 3, no. 1, pp. 32-42, Dec. 2008.
[11] R. Gütig, H. Sompolinsky, "The tempotron: a neuron that learns spike timing-based decisions", Nature Neuroscience, vol. 9, no. 3, pp. 420-428, Feb. 2006.
[12] S. M. Bohte, H. La Poutre, J. N. Kok, "Unsupervised clustering with spiking neurons by sparse temporal coding and multilayer RBF networks", IEEE Trans. Neural Netw., vol. 13, pp. 426-435, 2002.
[13] M. Tsukada, X. Pan, "The spatiotemporal learning rule and its efficiency in separating spatiotemporal patterns", Biol. Cybern., vol. 92, pp. 139-146, February 2005.
[14] R. Kempter, W. Gerstner, J. L. van Hemmen, "Spike-Based Compared to Rate-Based Hebbian Learning", in Advances in Neural Information Processing Systems 11, MIT Press, 1999, pp. 125-131.
[15] J. J. Hopfield, "Pattern recognition computation using action potential timing for stimulus representation", Nature, vol. 376, no. 6535, pp. 33-36, Jul. 1995.