CP8206 Soft Computing & Machine Intelligence

PRINCIPLES OF ARTIFICIAL NEURAL NETWORKS

Important properties of artificial neural networks will be discussed, namely: (i) the underlying principle of artificial neural networks, (ii) the general representation of neural networks, & (iii) the principles of the error-correction algorithm.
ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS

During the past twenty years, interest in applying the results of Artificial Intelligence (AI) research has been growing rapidly. AI relates to the development of the theories & techniques required for a computational engine to efficiently perceive, think & act with intelligence in complex environments. The AI discipline is concerned with intelligent computer systems that exhibit the characteristics associated with intelligence in human behavior, such as understanding language, learning, solving problems & reasoning.
BRANCHES OF AI

Developments in some branches of AI have already led to new technologies with significant effects on problem-solving approaches. These include new ways of defining problems, new methods of representing the existing knowledge about the problems & new problem-handling methods. There are several distinctive areas of research in Artificial Intelligence, most importantly artificial neural networks, fuzzy logic systems & expert systems, each with its own specific interests, research techniques, terminology & objectives (Fig. 1).
Fig. 1: Partial taxonomy of Artificial Intelligence, depicting a number of important AI branches & their relationships: neural networks, expert systems, genetic algorithms & fuzzy systems, together with hybrids such as neuro-fuzzy, neuro-genetic & fuzzy-expert systems.
NEURAL NETWORKS

Among the various branches of AI, the area of artificial neural networks in particular has received considerable attention during the past twenty years. An artificial neural network is a massively parallel & distributed processor that has a natural propensity for storing experiential knowledge & making it available for use. The underlying idea is to implement a processor that works in a fashion similar to the human brain.
NEURAL NETWORKS

A neural network resembles the brain in two respects: first, the knowledge is acquired through a learning process, & second, inter-neuron connection strengths known as weights are used to store the knowledge. The learning process involves modification of the connection weights to attain a desired objective. Major applications of neural networks can be categorized into five groups: pattern recognition, image processing, signal processing, system identification & control.
NEURAL NETWORKS

There is a variety of definitions for artificial neural networks, each of which highlights some aspect of this methodology, such as its similarity to its biological counterpart, its parallel computation capabilities, or its interaction with the outside world. A neural network is a non-programmable dynamic system with capabilities such as trainability & adaptivity that can be trained to store, process & retrieve information. It also possesses the ability to learn & to generalize based on past observations.
NEURAL NETWORKS

Neural networks owe their computing power to their parallel/distributed structure & the manner in which the activation functions are defined. This information-processing ability provides the possibility of solving complex problems. Function approximation (I/O mapping): the ability to approximate any nonlinear function to the desired degree of accuracy. Learning & generalization: the ability to learn I/O patterns, extract the hidden relationships among the presented data, & provide an acceptable response to new data that the network has not yet experienced. This enables neural networks to provide models based on imprecise information.
NEURAL NETWORKS

Adaptivity: capable of modifying their memory, & thus their functionality, over time. Fault tolerance: due to their highly parallel/distributed structure, failure of a number of neurons to generate the correct response does not lead to failure of the overall performance of the system.
NEURAL NETWORKS - DISADVANTAGES

Large dimension that leads to memory restrictions; selection of the optimum configuration; convergence difficulty, especially when the solution is trapped in local minima; choice of training methodology; black-box representation, i.e., lack of explanation capabilities & transparency.
NEURAL NETWORKS

A neural network can be characterized in terms of: Neurons: the basic processing units defining the manner in which computation is performed. Neuron activation functions: indicate the function of each neuron. Inter-neuron connection patterns: define the way neurons are connected to each other. Learning algorithms: define how the knowledge is stored in the network.
NEURON MODEL

The NN paradigm attempts to clone the physical structure & functionality of the biological neuron. Artificial neurons, like their biological counterparts, receive inputs, [x_1, x_2, ..., x_r], from the outside world or from other neurons through incoming connections. Each neuron then generates the product terms, w_i x_i, using the inputs & the connection weights ([w_1, w_2, ..., w_r] represents the connection memory). The product terms are then summed using an addition operator to produce the neuron's internal activity index, v(t).
NEURON MODEL

This index is passed to an activation function, ϕ(.), which produces an output, y(t):

v(t) = Σ_{i=1}^{r} w_i x_i    (1)

y(t) = ϕ(v(t))    (2)

A more general model of the neuron functionality can be provided by the introduction of a threshold measure, w_0, for the activation function.
NEURON MODEL

This signifies the scenario where a neuron generates an output only if its input is beyond the threshold (Fig. 2), i.e.,

y(t) = ϕ( Σ_{i=1}^{r} w_i x_i − w_0 )    (3)

This model is a simple yet useful approximation of the biological neuron & can be used to develop different neural structures, including feedforward & feedback networks (Fig. 3).
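As a concrete illustration, Eqs. (1)-(3) can be sketched in a few lines of Python. The particular weights, inputs & hard-threshold activation below are illustrative assumptions, not values from the slides:

```python
# Minimal sketch of the neuron model of Eqs. (1)-(3).
# Inputs, weights, threshold and activation choice are illustrative.

def neuron_output(x, w, w0, phi):
    """y(t) = phi(sum_i w_i * x_i - w0), Eq. (3)."""
    v = sum(wi * xi for wi, xi in zip(w, x))  # internal activity index, Eq. (1)
    return phi(v - w0)                        # thresholded activation, Eq. (3)

# A hard-threshold (indicator-style) activation: fires only above threshold.
step = lambda v: 1 if v >= 0 else 0

x = [0.5, -1.0, 2.0]      # inputs  [x_1, ..., x_r]
w = [1.0, 0.5, 0.25]      # weights [w_1, ..., w_r]
print(neuron_output(x, w, w0=0.4, phi=step))  # v = 0.5, above w0 = 0.4 -> 1
print(neuron_output(x, w, w0=0.6, phi=step))  # v = 0.5, below w0 = 0.6 -> 0
```

The same `neuron_output` helper works unchanged with a smooth activation such as a sigmoid in place of `step`.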
Fig. 2: Nonlinear model of a neuron: the inputs x_1, ..., x_r are multiplied by the connection weights (synaptic operation), summed into the activity index a_k (aggregation operation), & passed through the activation function ϕ(.) (somatic operation) to produce the output y_k.
TYPES OF ACTIVATION FUNCTIONS

Each neuron includes a nonlinear function, known as the activation function, that transforms several weighted input signals into a single numerical output signal. The neuron activation function, ϕ(.), expresses the functionality of the neuron. There are at least three main classes of activation functions: linear, sigmoid & Gaussian. Table 3.1 illustrates the different types of activation functions.
NEURAL NETWORK ARCHITECTURES

The manner in which neurons are connected together defines the architecture of a neural network. These architectures can be classified into two main groups (Fig. 3): Feedforward neural networks & Recurrent neural networks.
Fig. 3: Classification of neural network structures: feedforward (single-layer; multi-layer, e.g. the perceptron & radial basis function networks), lattice, & recurrent (single-layer, e.g. Hopfield; multi-layer, e.g. Elman).
FEEDFORWARD NEURAL NETWORK

The flow of information is from input to output. SINGLE-LAYER NETWORK (Fig. 4): The main body of the structure consists of only one layer (a one-dimensional vector) of neurons. It can be considered a linear association network that relates the output patterns to the input patterns.
Fig. 4: Single-layer feedforward neural network: the inputs x_1, ..., x_r feed a single layer of neurons ϕ(.), which produce the outputs y_1, ..., y_r.
MULTI-LAYER NETWORK (Fig. 5): The structure consists of two or more layers of neurons. The function of the additional layers is to extract higher-order statistics. The network acquires a global perspective, despite its local connectivity, by virtue of the extra set of connections & the extra dimension of neural interaction. It is specified by: the number of inputs & outputs, the number of layers, the number of neurons in each layer, the network connection pattern, & the activation function for each layer.
Fig. 5: Multi-layer feedforward neural network: the inputs x_1, ..., x_p feed a first layer of neurons ϕ_1(.), whose outputs feed a second layer of neurons ϕ_2(.) that produces the outputs y_1, ..., y_q.
RECURRENT NEURAL NETWORK

A recurrent structure represents a network in which there is at least one feedback connection. Fig. 6 depicts a multi-layer recurrent neural network, which is similar to the feedforward case except for the presence of the feedback loops & z⁻¹ (the unit delay operator), which introduces the delay involved in feeding the output back to the input.
Fig. 6: Multi-layer recurrent neural network: the structure of Fig. 5 augmented with feedback connections, in which each output y is passed through a unit delay z⁻¹ & fed back to the input layer.
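The role of the unit delay z⁻¹ can be sketched for a single recurrent neuron: the previous output is stored & fed back as an extra input at the next time step. The weights & the sigmoid activation below are illustrative assumptions:

```python
import math

# Minimal sketch of a single recurrent neuron: the unit delay z^-1 holds
# the previous output and feeds it back into the next step's summation.
# Weights and the sigmoid activation are illustrative assumptions.

def run_recurrent(inputs, w_in, w_fb):
    y_prev = 0.0                 # content of the z^-1 delay element
    outputs = []
    for x in inputs:
        v = w_in * x + w_fb * y_prev       # feedback enters the weighted sum
        y = 1.0 / (1.0 + math.exp(-v))     # sigmoid activation
        outputs.append(y)
        y_prev = y                         # z^-1: store output for next step
    return outputs

# Even for a constant input sequence, the output evolves over time,
# because each step also sees the delayed previous output.
ys = run_recurrent([1.0, 1.0, 1.0], w_in=0.8, w_fb=0.5)
print(ys)
```

A feedforward neuron fed the same constant input would produce the same output at every step; the changing outputs here are entirely due to the feedback loop.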
Table 3.1: Neural Network Activation Functions

Linear: f(x) = a·x
Piecewise linear: f(x) = −1 if x < −b; a·x if −b ≤ x ≤ b; 1 if x > b
Indicator: f(x) = sgn(x)
Sigmoid: f(x) = 1 / (1 + e^(−a·x))
Bipolar sigmoid: f(x) = tanh(a·x)
Gaussian: f(x) = e^(−x² / (2σ²))
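The activation functions of Table 3.1 translate directly into code. The parameter values (slope a, bound b, width σ, all set to 1 here) are illustrative defaults:

```python
import math

# The activation functions of Table 3.1 as plain Python functions.
# Default parameters a = b = sigma = 1 are illustrative choices.

def linear(x, a=1.0):
    return a * x

def piecewise_linear(x, a=1.0, b=1.0):
    return -1.0 if x < -b else (1.0 if x > b else a * x)

def indicator(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)   # sgn(x)

def sigmoid(x, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * x))

def bipolar_sigmoid(x, a=1.0):
    return math.tanh(a * x)

def gaussian(x, sigma=1.0):
    return math.exp(-x**2 / (2.0 * sigma**2))

print(sigmoid(0.0))          # 0.5: the sigmoid is centred on 0
print(bipolar_sigmoid(0.0))  # 0.0: the bipolar sigmoid ranges over (-1, 1)
print(gaussian(0.0))         # 1.0: the Gaussian peaks at x = 0
```

Note the difference in output ranges: sigmoid in (0, 1), bipolar sigmoid in (−1, 1), Gaussian in (0, 1] with its maximum at the origin.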
MULTI-LAYER PERCEPTRON (MLP)

A class of NNs that consists of one input layer together with one output layer, which represent the system inputs & outputs, respectively, & one or more hidden layers that provide the learning capability for the network (Fig. 7). The basic element of an MLP network is an artificial neuron whose activation function, for the hidden layer, is a smooth, differentiable function (usually a sigmoid). The neurons in the output layer have a linear activation function.
f(x_1, ..., x_n) = Σ_{i=1}^{m} ω_i g( Σ_{j=1}^{n} w_ij x_j − θ_i )

where w_ij & θ_i are the weights & thresholds of the hidden layer (j = 1, ..., n indexes the inputs; i = 1, ..., m indexes the hidden neurons), ω_i are the weights of the linear output layer, & g(x) = 1 / (1 + e^(−x)) is the sigmoid function.

Fig. 7: General structure of a Multi-Layer Perceptron network, illustrating the concept of input, hidden & output layers.
MLP

The output of an MLP network, therefore, can be represented as follows:

F(x_1, ..., x_p) = Σ_{i=1}^{M} ω_i g( Σ_{j=1}^{p} w_ij x_j − θ_i )    (4)

where the inner sum is the internal activation, g(.) yields the hidden-layer output, & the outer weighted sum is the output-layer output. F(.) is the network output, [x_1, ..., x_p] is the input vector having p inputs, M denotes the number of hidden neurons, w represents the hidden-layer connection weights, θ is the threshold value associated with the hidden neurons, & ω represents the output-layer connection weights, which in effect serve as coefficients of the linear output function.
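Eq. (4) can be transcribed almost symbol for symbol into code: a one-hidden-layer MLP with sigmoid hidden neurons & a linear output. All the numerical values below are illustrative assumptions:

```python
import math

# Direct transcription of Eq. (4): F = sum_i omega_i * g(sum_j w_ij x_j - theta_i)
# with sigmoid hidden units and a linear output layer.
# All weights, thresholds and inputs below are illustrative assumptions.

def g(x):
    """Hidden-layer sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def mlp_output(x, w, theta, omega):
    M = len(omega)                       # number of hidden neurons
    F = 0.0
    for i in range(M):
        # inner sum: internal activation of hidden neuron i
        v_i = sum(w[i][j] * x[j] for j in range(len(x))) - theta[i]
        F += omega[i] * g(v_i)           # linear output layer
    return F

x     = [1.0, 2.0]                 # p = 2 inputs
w     = [[0.5, -0.3], [0.2, 0.4]]  # hidden weights w_ij, M = 2 hidden neurons
theta = [0.1, -0.2]                # hidden thresholds theta_i
omega = [1.5, -0.5]                # output-layer weights omega_i
print(mlp_output(x, w, theta, omega))  # a single scalar network output
```

Because the output layer is linear, the output weights ω act purely as coefficients on the hidden-layer sigmoids, exactly as the text describes.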
UNIVERSAL APPROXIMATION

It has been proven mathematically that standard multi-layer perceptron networks using arbitrary squashing functions are capable of approximating any continuous function from one finite-dimensional space to another to any desired degree of accuracy, provided sufficient hidden neurons are available. A squashing function is a non-decreasing function defined as follows:

σ(t) → 1 as t → +∞,  σ(t) → 0 as t → −∞.    (6)
UNIVERSAL APPROXIMATION

It has further been shown that the approximation can be achieved using a multilayer perceptron with only one hidden layer & a sigmoid activation function. MLPs are thus a class of universal approximators & can be used successfully to solve difficult problems in diverse areas using the error back-propagation learning algorithm. Furthermore, failure in learning can be attributed to factors such as inadequate learning, an insufficient number of hidden neurons, or a non-deterministic relationship between the inputs & outputs.
THE STONE-WEIERSTRASS THEOREM

The Stone-Weierstrass theorem can be used to prove that NNs are capable of uniformly approximating any real continuous function on a compact set to an arbitrary degree of accuracy. The theorem states that for any given real continuous function, f, on a compact set U ⊂ R^p, there exists an NN, F, that is an approximate realization of the function f(.):

F(x_1, ..., x_p) = Σ_{i=1}^{M} ω_i ϕ( Σ_{j=1}^{p} w_ij x_j − θ_i )    (7)

|F(x_1, ..., x_p) − f(x_1, ..., x_p)| < ε    (8)

where X = (x_1, x_2, ..., x_p) ∈ U represents the input space, & ε, the approximation error for all {x_1, ..., x_p} ∈ U, is an arbitrarily small positive value.
LEARNING PROCESS

Learning is accomplished through the associations between different I/O patterns. Regularities & irregularities in the training data are extracted, & consequently validated using validation data. It is achieved by stimulating the network with data representing the function to be learned & attempting to optimize a related performance measure. It is assumed that the data represent a system that is deterministic in nature but has unknown probability distributions.
LEARNING PROCESS

The fashion in which the parameters are adjusted determines the type of learning. There are two general learning paradigms (Fig. 8): Unsupervised learning & Supervised learning. Unsupervised learning is not in the scope of this course & will not be discussed.
Fig. 8: A classification of learning algorithms: supervised learning (back-propagation, Widrow-Hoff rule, perceptron rule, associative learning) & unsupervised learning (self-organizing, e.g. Kohonen; Hebbian; competitive).
SUPERVISED LEARNING

The organization & training of a neural network by a combination of repeated presentation of input patterns & their associated output patterns; equivalent to adjusting the network weights. In supervised learning, a set of training data is used to help the network arrive at appropriate connection weights. This can be seen in the conventional delta rule, one of the early supervised algorithms, which grew out of the work of McCulloch & Pitts & of Rosenblatt. In this method, a training data set is always available that provides the system with the ideal output values for a set of known inputs, & the goal is to obtain the strength of each connection in the network.
BACK-PROPAGATION

The best-known supervised learning algorithm. This learning rule was first developed by Werbos & improved by Rumelhart et al. The learning is done on the basis of direct comparison of the output of the network with known correct answers. It is an efficient method of computing the change in each connection weight in a multi-layer network so as to reduce the error in the outputs, & it works by propagating errors backwards from the output layer to the input layer.
BACK-PROPAGATION

Assuming that w_ji denotes the connection weight from the i-th neuron to the j-th, x_j signifies the total input to the j-th neuron, y_j represents the corresponding output, & d_j is the desired output:

Total input to unit j: x_j = Σ_i y_i w_ji    (9)

Output from unit j: y_j = 1 / (1 + e^(−x_j))    (10)
The back-propagation algorithm attempts to minimize the global error which, for a given set of weights, is the squared difference between the actual & desired outputs, summed over all output units j & training cases c:

E = (1/2) Σ_c Σ_j (y_{j,c} − d_{j,c})²    (11)

where E denotes the global error. The error derivatives for all weights can be computed by working backwards from the output units after a case has been presented, & given the derivatives, the weights are updated to reduce the error.
∂E/∂y_j = y_j − d_j

∂E/∂x_j = (∂E/∂y_j)(∂y_j/∂x_j) = (∂E/∂y_j) y_j (1 − y_j)

∂E/∂w_ji = (∂E/∂x_j)(∂x_j/∂w_ji) = (∂E/∂x_j) y_i

∂E/∂y_i = Σ_j (∂E/∂x_j)(∂x_j/∂y_i) = Σ_j (∂E/∂x_j) w_ji

Fig. 9: Basic idea of the back-propagation learning algorithm: the error derivatives are computed layer by layer, working backwards from the output units.
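These chain-rule expressions can be verified numerically for a single sigmoid unit: the analytic gradient from Fig. 9 should agree with a finite-difference estimate of ∂E/∂w_ji. The inputs, weights & target below are illustrative assumptions:

```python
import math

# Check the Fig. 9 derivatives on a single sigmoid unit and one training case:
# E = 0.5*(y_j - d_j)^2,  y_j = 1/(1 + e^-x_j),  x_j = sum_i y_i * w_ji.
# Inputs, weights and the desired output are illustrative assumptions.

def forward(y_in, w, d):
    x_j = sum(yi * wi for yi, wi in zip(y_in, w))   # Eq. (9)
    y_j = 1.0 / (1.0 + math.exp(-x_j))              # Eq. (10)
    E = 0.5 * (y_j - d) ** 2                        # Eq. (11), one case
    return x_j, y_j, E

y_in, w, d = [0.3, 0.9], [0.7, -0.4], 1.0
x_j, y_j, E = forward(y_in, w, d)

# Analytic gradient, exactly as in Fig. 9:
dE_dy = y_j - d
dE_dx = dE_dy * y_j * (1.0 - y_j)
dE_dw = [dE_dx * yi for yi in y_in]    # dE/dw_ji = (dE/dx_j) * y_i

# Finite-difference gradient for comparison:
eps = 1e-6
for i in range(len(w)):
    w_plus = list(w)
    w_plus[i] += eps
    numeric = (forward(y_in, w_plus, d)[2] - E) / eps
    print(f"w[{i}]: analytic={dE_dw[i]:.6f} numeric={numeric:.6f}")
```

The two columns should match to several decimal places, which is exactly the consistency the chain-rule decomposition guarantees.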
BACK-PROPAGATION

Consists of two passes, forward & backward. Forward pass: a training case is presented to the network; the training case itself consists of an input vector & its associated (desired) output. Backward pass: starts when the output error, i.e., the difference between the desired & actual output, is propagated back through the network & changes are made to the connection weights in order to reduce the output error. Different training cases are then presented to the network. The process of presenting epochs of training cases to the network continues until the average error over the entire training set reaches a defined error goal.
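The two passes can be sketched end to end for a tiny one-hidden-layer network trained with Eqs. (9)-(11) & the Fig. 9 derivatives. The task (logical OR), network size, learning rate & epoch count are all illustrative assumptions; the constant 1 appended to each input plays the role of the threshold:

```python
import math, random

# A compact sketch of back-propagation's two passes on a tiny problem.
# Task (logical OR), network size, learning rate and epoch count are
# illustrative assumptions; the trailing 1 in each input acts as a bias.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training cases: augmented input vector -> desired output (logical OR).
cases = [([0, 0, 1], 0.0), ([0, 1, 1], 1.0), ([1, 0, 1], 1.0), ([1, 1, 1], 1.0)]

H = 2                                                   # hidden neurons
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]      # + bias weight
lr = 0.5

def net_forward(x):
    h = [sigmoid(sum(w1[i][j] * x[j] for j in range(3))) for i in range(H)]
    h.append(1.0)                                       # hidden-layer bias
    y = sigmoid(sum(w2[i] * h[i] for i in range(H + 1)))
    return h, y

def epoch_error():                                      # Eq. (11)
    return sum(0.5 * (net_forward(x)[1] - d) ** 2 for x, d in cases)

e0 = epoch_error()
for _ in range(2000):                       # repeated presentation of epochs
    for x, d in cases:
        h, y = net_forward(x)               # forward pass
        delta_out = (y - d) * y * (1 - y)   # backward pass: dE/dx at output
        for i in range(H):                  # hidden units via the chain rule
            delta_h = delta_out * w2[i] * h[i] * (1 - h[i])
            for j in range(3):
                w1[i][j] -= lr * delta_h * x[j]
        for i in range(H + 1):
            w2[i] -= lr * delta_out * h[i]
print(round(e0, 4), "->", round(epoch_error(), 4))
```

The printed pair shows the epoch error before & after training; the backward pass drives it down toward the error goal, as the slide describes.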
Fig. 10: Basic presentation of the back-propagation learning algorithm as a flow chart: define the network structure, connection pattern, activation functions & performance measure; prepare training & validation data; provide a stimulus from the training set to the network (feedforward flow of information generates the output & performance measure); back-propagate the error through the network, making changes to the synaptic weights proportional to the derivative of the error with respect to each weight; repeat until the performance measure is satisfactory; then provide stimuli from the validation set, & end training once the validation performance measure is also satisfactory.
More informationSystem Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks
System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks 1 Tzu-Hsuan Yang, 2 Tzu-Hsuan Tseng, and 3 Chia-Ping Chen Department of Computer Science and Engineering
More informationKamaldeep Kaur University School of Information Technology GGS Indraprastha University Delhi
Soft Computing Approaches for Prediction of Software Maintenance Effort Dr. Arvinder Kaur University School of Information Technology GGS Indraprastha University Delhi Kamaldeep Kaur University School
More informationMASTER OF SCIENCE (M.S.) MAJOR IN COMPUTER SCIENCE
Master of Science (M.S.) Major in Computer Science 1 MASTER OF SCIENCE (M.S.) MAJOR IN COMPUTER SCIENCE Major Program The programs in computer science are designed to prepare students for doctoral research,
More informationA Reinforcement Learning Variant for Control Scheduling
A Reinforcement Learning Variant for Control Scheduling Aloke Guha Honeywell Sensor and System Development Center 3660 Technology Drive Minneapolis MN 55417 Abstract We present an algorithm based on reinforcement
More informationUsing the Artificial Neural Networks for Identification Unknown Person
IOSR Journal of Dental and Medical Sciences (IOSR-JDMS) e-issn: 2279-0853, p-issn: 2279-0861.Volume 16, Issue 4 Ver. III (April. 2017), PP 107-113 www.iosrjournals.org Using the Artificial Neural Networks
More informationInternational Journal of Advanced Networking Applications (IJANA) ISSN No. :
International Journal of Advanced Networking Applications (IJANA) ISSN No. : 0975-0290 34 A Review on Dysarthric Speech Recognition Megha Rughani Department of Electronics and Communication, Marwadi Educational
More informationA study of speaker adaptation for DNN-based speech synthesis
A study of speaker adaptation for DNN-based speech synthesis Zhizheng Wu, Pawel Swietojanski, Christophe Veaux, Steve Renals, Simon King The Centre for Speech Technology Research (CSTR) University of Edinburgh,
More informationA New Perspective on Combining GMM and DNN Frameworks for Speaker Adaptation
A New Perspective on Combining GMM and DNN Frameworks for Speaker Adaptation SLSP-2016 October 11-12 Natalia Tomashenko 1,2,3 natalia.tomashenko@univ-lemans.fr Yuri Khokhlov 3 khokhlov@speechpro.com Yannick
More informationAUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION
JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 1642-6037 Marek WIŚNIEWSKI *, Wiesława KUNISZYK-JÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders
More informationUsing focal point learning to improve human machine tacit coordination
DOI 10.1007/s10458-010-9126-5 Using focal point learning to improve human machine tacit coordination InonZuckerman SaritKraus Jeffrey S. Rosenschein The Author(s) 2010 Abstract We consider an automated
More informationForget catastrophic forgetting: AI that learns after deployment
Forget catastrophic forgetting: AI that learns after deployment Anatoly Gorshechnikov CTO, Neurala 1 Neurala at a glance Programming neural networks on GPUs since circa 2 B.C. Founded in 2006 expecting
More informationSeminar - Organic Computing
Seminar - Organic Computing Self-Organisation of OC-Systems Markus Franke 25.01.2006 Typeset by FoilTEX Timetable 1. Overview 2. Characteristics of SO-Systems 3. Concern with Nature 4. Design-Concepts
More informationDesign Of An Automatic Speaker Recognition System Using MFCC, Vector Quantization And LBG Algorithm
Design Of An Automatic Speaker Recognition System Using MFCC, Vector Quantization And LBG Algorithm Prof. Ch.Srinivasa Kumar Prof. and Head of department. Electronics and communication Nalanda Institute
More informationDeep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach
#BaselOne7 Deep search Enhancing a search bar using machine learning Ilgün Ilgün & Cedric Reichenbach We are not researchers Outline I. Periscope: A search tool II. Goals III. Deep learning IV. Applying
More informationUsing the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees Cognitive Skills in Algebra on the SAT
The Journal of Technology, Learning, and Assessment Volume 6, Number 6 February 2008 Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees Cognitive Skills in Algebra on the
More informationA Genetic Irrational Belief System
A Genetic Irrational Belief System by Coen Stevens The thesis is submitted in partial fulfilment of the requirements for the degree of Master of Science in Computer Science Knowledge Based Systems Group
More informationA Comparison of Annealing Techniques for Academic Course Scheduling
A Comparison of Annealing Techniques for Academic Course Scheduling M. A. Saleh Elmohamed 1, Paul Coddington 2, and Geoffrey Fox 1 1 Northeast Parallel Architectures Center Syracuse University, Syracuse,
More informationAutoregressive product of multi-frame predictions can improve the accuracy of hybrid models
Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models Navdeep Jaitly 1, Vincent Vanhoucke 2, Geoffrey Hinton 1,2 1 University of Toronto 2 Google Inc. ndjaitly@cs.toronto.edu,
More informationTesting A Moving Target: How Do We Test Machine Learning Systems? Peter Varhol Technology Strategy Research, USA
Testing A Moving Target: How Do We Test Machine Learning Systems? Peter Varhol Technology Strategy Research, USA Testing a Moving Target How Do We Test Machine Learning Systems? Peter Varhol, Technology
More informationENME 605 Advanced Control Systems, Fall 2015 Department of Mechanical Engineering
ENME 605 Advanced Control Systems, Fall 2015 Department of Mechanical Engineering Lecture Details Instructor Course Objectives Tuesday and Thursday, 4:00 pm to 5:15 pm Information Technology and Engineering
More informationISFA2008U_120 A SCHEDULING REINFORCEMENT LEARNING ALGORITHM
Proceedings of 28 ISFA 28 International Symposium on Flexible Automation Atlanta, GA, USA June 23-26, 28 ISFA28U_12 A SCHEDULING REINFORCEMENT LEARNING ALGORITHM Amit Gil, Helman Stern, Yael Edan, and
More informationГлубокие рекуррентные нейронные сети для аспектно-ориентированного анализа тональности отзывов пользователей на различных языках
Глубокие рекуррентные нейронные сети для аспектно-ориентированного анализа тональности отзывов пользователей на различных языках Тарасов Д. С. (dtarasov3@gmail.com) Интернет-портал reviewdot.ru, Казань,
More informationBUSINESS INTELLIGENCE FROM WEB USAGE MINING
BUSINESS INTELLIGENCE FROM WEB USAGE MINING Ajith Abraham Department of Computer Science, Oklahoma State University, 700 N Greenwood Avenue, Tulsa,Oklahoma 74106-0700, USA, ajith.abraham@ieee.org Abstract.
More informationKnowledge Transfer in Deep Convolutional Neural Nets
Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract
More informationDinesh K. Sharma, Ph.D. Department of Management School of Business and Economics Fayetteville State University
Department of Management School of Business and Economics Fayetteville State University EDUCATION Doctor of Philosophy, Devi Ahilya University, Indore, India (2013) Area of Specialization: Management:
More informationAn Empirical and Computational Test of Linguistic Relativity
An Empirical and Computational Test of Linguistic Relativity Kathleen M. Eberhard* (eberhard.1@nd.edu) Matthias Scheutz** (mscheutz@cse.nd.edu) Michael Heilman** (mheilman@nd.edu) *Department of Psychology,
More informationAttributed Social Network Embedding
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, MAY 2017 1 Attributed Social Network Embedding arxiv:1705.04969v1 [cs.si] 14 May 2017 Lizi Liao, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua Abstract Embedding
More informationLaboratorio di Intelligenza Artificiale e Robotica
Laboratorio di Intelligenza Artificiale e Robotica A.A. 2008-2009 Outline 2 Machine Learning Unsupervised Learning Supervised Learning Reinforcement Learning Genetic Algorithms Genetics-Based Machine Learning
More informationProbabilistic Latent Semantic Analysis
Probabilistic Latent Semantic Analysis Thomas Hofmann Presentation by Ioannis Pavlopoulos & Andreas Damianou for the course of Data Mining & Exploration 1 Outline Latent Semantic Analysis o Need o Overview
More informationHenry Tirri* Petri Myllymgki
From: AAAI Technical Report SS-93-04. Compilation copyright 1993, AAAI (www.aaai.org). All rights reserved. Bayesian Case-Based Reasoning with Neural Networks Petri Myllymgki Henry Tirri* email: University
More informationCooperative evolutive concept learning: an empirical study
Cooperative evolutive concept learning: an empirical study Filippo Neri University of Piemonte Orientale Dipartimento di Scienze e Tecnologie Avanzate Piazza Ambrosoli 5, 15100 Alessandria AL, Italy Abstract
More informationWHEN THERE IS A mismatch between the acoustic
808 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 14, NO. 3, MAY 2006 Optimization of Temporal Filters for Constructing Robust Features in Speech Recognition Jeih-Weih Hung, Member,
More informationDiscriminative Learning of Beam-Search Heuristics for Planning
Discriminative Learning of Beam-Search Heuristics for Planning Yuehua Xu School of EECS Oregon State University Corvallis,OR 97331 xuyu@eecs.oregonstate.edu Alan Fern School of EECS Oregon State University
More informationReducing Features to Improve Bug Prediction
Reducing Features to Improve Bug Prediction Shivkumar Shivaji, E. James Whitehead, Jr., Ram Akella University of California Santa Cruz {shiv,ejw,ram}@soe.ucsc.edu Sunghun Kim Hong Kong University of Science
More informationOn the Combined Behavior of Autonomous Resource Management Agents
On the Combined Behavior of Autonomous Resource Management Agents Siri Fagernes 1 and Alva L. Couch 2 1 Faculty of Engineering Oslo University College Oslo, Norway siri.fagernes@iu.hio.no 2 Computer Science
More informationAn empirical study of learning speed in backpropagation
Carnegie Mellon University Research Showcase @ CMU Computer Science Department School of Computer Science 1988 An empirical study of learning speed in backpropagation networks Scott E. Fahlman Carnegie
More informationModeling function word errors in DNN-HMM based LVCSR systems
Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford
More informationAssignment 1: Predicting Amazon Review Ratings
Assignment 1: Predicting Amazon Review Ratings 1 Dataset Analysis Richard Park r2park@acsmail.ucsd.edu February 23, 2015 The dataset selected for this assignment comes from the set of Amazon reviews for
More informationSpeech Recognition at ICSI: Broadcast News and beyond
Speech Recognition at ICSI: Broadcast News and beyond Dan Ellis International Computer Science Institute, Berkeley CA Outline 1 2 3 The DARPA Broadcast News task Aspects of ICSI
More informationTransfer Learning Action Models by Measuring the Similarity of Different Domains
Transfer Learning Action Models by Measuring the Similarity of Different Domains Hankui Zhuo 1, Qiang Yang 2, and Lei Li 1 1 Software Research Institute, Sun Yat-sen University, Guangzhou, China. zhuohank@gmail.com,lnslilei@mail.sysu.edu.cn
More informationAGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS
AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS 1 CALIFORNIA CONTENT STANDARDS: Chapter 1 ALGEBRA AND WHOLE NUMBERS Algebra and Functions 1.4 Students use algebraic
More informationA SURVEY OF FUZZY COGNITIVE MAP LEARNING METHODS
A SURVEY OF FUZZY COGNITIVE MAP LEARNING METHODS Wociech Stach, Lukasz Kurgan, and Witold Pedrycz Department of Electrical and Computer Engineering University of Alberta Edmonton, Alberta T6G 2V4, Canada
More informationComment-based Multi-View Clustering of Web 2.0 Items
Comment-based Multi-View Clustering of Web 2.0 Items Xiangnan He 1 Min-Yen Kan 1 Peichu Xie 2 Xiao Chen 3 1 School of Computing, National University of Singapore 2 Department of Mathematics, National University
More informationAustralian Journal of Basic and Applied Sciences
AENSI Journals Australian Journal of Basic and Applied Sciences ISSN:1991-8178 Journal home page: www.ajbasweb.com Feature Selection Technique Using Principal Component Analysis For Improving Fuzzy C-Mean
More informationarxiv: v1 [math.at] 10 Jan 2016
THE ALGEBRAIC ATIYAH-HIRZEBRUCH SPECTRAL SEQUENCE OF REAL PROJECTIVE SPECTRA arxiv:1601.02185v1 [math.at] 10 Jan 2016 GUOZHEN WANG AND ZHOULI XU Abstract. In this note, we use Curtis s algorithm and the
More information