CHAPTER 3

3.1 Introduction

The objective of this chapter is to address the Back Propagation Neural Network (BPNN). BPNN is a powerful Artificial Neural Network (ANN) based technique which is used for detection of intrusion activity. The basic component of BPNN is a neuron, which stores and processes information. The chapter starts with the biological model of a neuron, followed by the computational model of a neuron, which is derived from the biological model. Following this, the advantages and challenges of ANN are also discussed. In the middle portion of the chapter, supervised and unsupervised learning approaches, the feed forward neural network and the feed backward neural network (BPNN) are discussed in detail. The chapter ends with the advantages and challenges of BPNN.

3.2 Biological Model of Human Neuron

The basic element of the human neural network is a neuron. A neuron stores and processes information. The typical structure of a neuron is shown in Fig. 3.1. A neuron has Dendrites, Soma (Cell Body), Axon, Axon Terminals, Myelin, Schwann Cells, Nodes of Ranvier and Synapses as its basic elements.

FIGURE 3.1: Typical Structure of a Neuron [3]
A. Dendrites: As per [11], dendrites are the set of input units of a neuron which receive electrochemical signals sent by other neurons.

B. Soma (Neuron Cell Body): As per [12], the soma or cell body is the portion of the neuron where all dendrites end. The soma processes the information passed by the dendrites and gives the output.

C. Axon: As per [13], an axon transfers the output of a neuron to different units, which might be neurons, muscles or glands.

D. Axon Terminals: As per [14], axon terminals are the terminations of the branches of an axon, which is a long fiber used to carry the output of the neuron away from the soma to the other neurons.

E. Myelin: As per [4], myelin is a fatty white substance which forms an electrically insulating layer around the axon.

F. Schwann Cell: Schwann cells are special glial cells which supply nutrients and oxygen to the neurons [5] [15].

G. Node of Ranvier: As per [6], a Node of Ranvier is a gap between two myelin sheaths.

H. Synapses: As per [8], a synapse is a unit which passes information in the form of an electrical or chemical signal to another neuron or any other cell. It can be visualized as a small gap between the first neuron's axon and the second neuron's dendrites.

Working of the Neural Network: In the human body, thousands of neurons are connected with each other and form a network. The working of all neuron units is the same. A neuron receives inputs from many other neurons. These inputs are received by the dendrites with the help of synapses. The dendrites pass these inputs to the soma. The soma processes these inputs and gives an output in the form of an electrical or chemical signal. This output passes from the axon to one or more neurons, muscles or glands.

3.3 Artificial Neural Network (ANN)

As per [10], an Artificial Neural Network (ANN) is inspired by biological neural networks and is used as an approximate function to find the outputs for given inputs. An ANN has three layers: input layer, hidden layer and output layer.
Depending upon the complexity of the problem, the hidden layer may consist of one or more layers. Further, each layer, that is the input layer, hidden layer or output layer, contains one or more neurons. In general, an ANN is
visualized as interconnected neurons, like human neurons, that pass information between each other. The connections have numeric weights that can be set by learning from past experience as well as from the current situation.

3.4 Computational Model Derived from Biological Model of Neuron

FIGURE 3.2: Computational Model Derived From Biological Model of Neuron [17]

The computational model derived from the biological model of a neuron was addressed by McCulloch in 1943 [16]. Jihoon Yang in [17] has represented McCulloch's model, which is shown in Fig. 3.2. In the computational model, the inputs X1, X2, ..., Xn with weights W1, W2, ..., Wn are similar to the dendrites of the biological model. The weight W0 is the bias applied to the fixed input X0. The summation of Wi Xi for i = 0 to n is similar to the soma of the biological model. If this summation is greater than 0, then the output is 1; otherwise the output is -1. This output can be considered as the axon of the biological model.

3.5 Advantages of ANN

As per our previous literature review of [1], we found the following advantages of ANN:
1. It has self-learning capability.
2. It can perform tasks that a linear program cannot.
3. Due to the parallel nature of neurons, failure of one neuron does not affect the working of the others.
4. A learned neural network does not need to be relearned for the next usage.
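As an illustration of the computational model of Section 3.4, the thresholded weighted sum can be sketched in a few lines of Python. This sketch is not part of the original chapter; the function name, weights and input values are made-up examples chosen only to show the 1 / -1 threshold behaviour described above.

```python
# Minimal sketch of the McCulloch-style neuron model of Section 3.4.
# The inputs X1..Xn play the role of dendrites, the weighted sum plays
# the role of the soma, and the thresholded result plays the role of
# the axon output.

def neuron_output(inputs, weights, bias_weight):
    """Return 1 if W0*X0 + sum(Wi*Xi) > 0, else -1 (X0 is fixed at 1)."""
    x0 = 1  # fixed bias input X0
    net = bias_weight * x0 + sum(w * x for w, x in zip(weights, inputs))
    return 1 if net > 0 else -1

# Illustrative values only: two inputs, equal weights, negative bias.
print(neuron_output([1, 0], [0.6, 0.6], -0.5))  # net = 0.1  -> 1
print(neuron_output([0, 0], [0.6, 0.6], -0.5))  # net = -0.5 -> -1
```

With these example weights the unit fires (outputs 1) as soon as at least one input is active, since a single weighted input already outweighs the negative bias.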
3.6 Challenges of ANN

As per our previous literature review of [1] and our work [2], we found the following challenges of ANN:
1. ANN needs training to operate.
2. The architecture of ANN is different from the architecture of microprocessors and therefore needs to be emulated.
3. Processing time is high for large neural networks.

3.7 Supervised and Unsupervised Learning

Learning in an ANN can be done by either a supervised or an unsupervised approach. In the supervised approach, learning samples with expected outputs are used. On the other hand, in the unsupervised approach, samples without expected outputs are used for learning. The supervised approach is more suitable for classification problems, while the unsupervised approach is more suitable for clustering problems [18].

3.8 Perceptron: Feed Forward Learning Algorithm

The perceptron is a less complex, feed forward, supervised learning algorithm which supports fast learning. It is mainly used for classification of linearly separable inputs into various classes [19] [20].

3.9 Back Propagation Neural Network (BPNN)

To overcome the limitation of the perceptron, in 1986, Rumelhart et al. in [21] described a new supervised learning procedure known as back propagation, which is used for linear as well as non-linear classification. BPNN is a supervised algorithm in which the error difference between the desired output and the calculated output is back propagated. The procedure is repeated during learning to minimize the error by adjusting the weights through the back propagation of error. As a result of the weight adjustments, hidden units set their weights to represent important features of the task domain. BPNN consists of three layers: 1) Input Layer, 2) Hidden Layer and 3) Output Layer. The number of hidden layers and the number of hidden units in each hidden layer depend upon the complexity of the problem. Learning in BPNN is a two-step process [2] [22]:
Step 1 (Forward Propagation): In this step, depending upon the inputs and the current weights, outputs are calculated. For this calculation, each hidden unit and output unit calculates a net excitation which depends on:
- Values of the previous layer units that are connected to the unit in consideration.
- Weights between the previous layer units and the unit in consideration.
- Threshold value of the unit in consideration.

This net excitation is used by an activation function which returns the calculated output value for that unit. This activation function must be continuous and differentiable. There are various activation functions which can be used in BPNN; sigmoid is the most widely used activation function. It is defined as (3.1).

f(x) = 1 / (1 + e^(-x))    ... (3.1)

Step 2 (Backward Propagation of Error): During this step, the error is calculated as the difference between the targeted output and the actual output of each output unit. This error is back propagated to the previous layer, that is, the hidden layer. For each unit in the hidden layer N, the error at that node is calculated. In a similar way, the error at each node of the previous hidden layer, that is N-1, is calculated. These calculated errors are used to correct the weights so that the error at each output unit is minimized. The forward and backward steps are repeated until the error is reduced to the expected level.

3.10 Parameters of BPNN

Following is the list of parameters / criteria which affect the performance of BPNN [2]:
1. Learning Rate
2. Initial Weight
3. Number of Hidden Units
4. Overtraining and Early Stopping Criteria
5. Number of Learning Samples
6. Activation Function
7. Normalization of the Inputs
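The two-step learning process of Section 3.9 can be sketched in Python as follows. This is a minimal illustrative sketch, not the chapter's original implementation: the network size (one hidden layer of two units), the learning rate of 0.5 and the XOR training task are assumptions chosen only for the example.

```python
import math
import random

def sigmoid(x):
    """Activation function of Eq. (3.1); continuous and differentiable."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Step 1 (Forward Propagation): net excitation -> activation per unit.
    A fixed extra input of 1.0 per layer plays the role of the threshold."""
    h = [sigmoid(sum(w * a for w, a in zip(ws, x + [1.0]))) for ws in w_hidden]
    y = sigmoid(sum(w * a for w, a in zip(w_out, h + [1.0])))
    return h, y

def backward(x, target, w_hidden, w_out, lr=0.5):
    """Step 2 (Backward Propagation of Error): propagate the output error
    back to the hidden layer and correct the weights."""
    h, y = forward(x, w_hidden, w_out)
    delta_out = (target - y) * y * (1.0 - y)           # error term at output
    delta_h = [hi * (1.0 - hi) * w_out[i] * delta_out  # error terms at hidden
               for i, hi in enumerate(h)]
    w_out = [w + lr * delta_out * a for w, a in zip(w_out, h + [1.0])]
    w_hidden = [[w + lr * d * a for w, a in zip(ws, x + [1.0])]
                for ws, d in zip(w_hidden, delta_h)]
    return w_hidden, w_out

def total_error(samples, w_hidden, w_out):
    """Sum of squared differences between targeted and actual outputs."""
    return sum((t - forward(x, w_hidden, w_out)[1]) ** 2 for x, t in samples)

# XOR: a non-linearly separable task that a single perceptron cannot learn.
samples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
           ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

before = total_error(samples, w_hidden, w_out)
for _ in range(5000):            # repeat the forward and backward steps
    for x, t in samples:
        w_hidden, w_out = backward(x, t, w_hidden, w_out)
after = total_error(samples, w_hidden, w_out)
print(f"error before: {before:.3f}, after: {after:.3f}")
```

Note that, depending on the random initial weights, such a network may settle in a local minimum rather than the global one, which is exactly the challenge of BPNN noted in Section 3.12; the initial-weight and learning-rate parameters of Section 3.10 directly control this behaviour.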
3.11 Advantages of BPNN

As per our previous work of [1] and [2], following are the advantages of BPNN:
1. BPNN supports high speed classification.
2. BPNN can be used for linear as well as non-linear classification.
3. BPNN supports multi-class classification.

3.12 Challenges of BPNN

As per our previous work of [1] and [2], following are the current challenges of BPNN:
1. Training time for BPNN is high.
2. BPNN suffers from local minima.
3. The structure of BPNN is highly complex.

3.13 References

1. Bhavin Shah and Bhushan Trivedi, "Artificial Neural Network based Intrusion Detection System: A Survey", International Journal of Computer Applications 39, no. 6 (2012): 13-18.
2. Bhavin Shah and Bhushan Trivedi, "Optimizing Back Propagation Parameters For Anomaly Detection", IEEE International Conference on Research and Development Prospectus on Engineering and Technology (ICRDPET), 2013.
3. Quasar Jarosz at English Wikipedia, transferred from en.wikipedia to Commons by Faigl.ladislav using CommonsHelper, https://en.wikipedia.org/wiki/File:Neuron_Hand-tuned.svg, 11 August 2009, [Accessed 5 December 2015].
4. https://en.wikipedia.org/wiki/Myelin, [Accessed 5 December 2015]
5. https://en.wikipedia.org/wiki/Schwann_cell, [Accessed 5 December 2015]
6. https://en.wikipedia.org/wiki/Node_of_Ranvier, [Accessed 5 December 2015]
7. https://en.wikipedia.org/wiki/Biological_neural_network, [Accessed 5 December 2015]
8. https://en.wikipedia.org/wiki/Synapse, [Accessed 5 December 2015]
9. Foster, M.; Sherrington, C.S. (1897). Textbook of Physiology, volume 3 (7th ed.). London: Macmillan. p. 929.
10. https://en.wikipedia.org/wiki/Artificial_neural_network, [Accessed 5 December 2015]
11. https://en.wikipedia.org/wiki/Dendrite, [Accessed 5 December 2015]
12. https://en.wikipedia.org/wiki/Soma_(biology), [Accessed 5 December 2015]
13. https://en.wikipedia.org/wiki/Axon, [Accessed 5 December 2015]
14. https://en.wikipedia.org/wiki/Axon_terminal, [Accessed 5 December 2015]
15. https://en.wikipedia.org/wiki/Neuroglia, [Accessed 5 December 2015]
16. McCulloch, Warren; Walter Pitts (1943). "A Logical Calculus of the Ideas Immanent in Nervous Activity". Bulletin of Mathematical Biophysics 5 (4): 115-133.
17. Jihoon Yang, Lecture notes on Artificial Neural Networks, Data Mining Research Laboratory, Department of Computer Science, Sogang University, Available at: http://home.sogang.ac.kr/sites/gsinfotech/study/study007/lists/b6/attachments/35