Neural Network Model of the Backpropagation Algorithm
Rudolf Jakša
Department of Cybernetics and Artificial Intelligence
Technical University of Košice
Letná 9, 042 00 Košice, Slovakia
jaksa@neuron.tuke.sk

Miroslav Karták
Department of Cybernetics and Artificial Intelligence
Technical University of Košice
Letná 9, 042 00 Košice, Slovakia
bracek@mizu.sk

Abstract

We apply a neural network to model the neural network learning algorithm itself. The process of weight updating in a neural network is observed and stored into a file. Later, this data is used to train another network, which will then be able to train neural networks by imitating the trained algorithm. We use the backpropagation algorithm both for training and for sampling the training process. We imitate the training of the network as a whole: all the weights and weight changes of the multilayer neural network are processed in parallel in order to model mutual dependencies between weights. Experimental results are provided.

Keywords: metalearning, learning to learn, error backpropagation.

1 Introduction

Adaptive or optimizing learning algorithms might be used in the neural network learning or machine learning domains. Instead of fixed learning algorithms, these algorithms improve their own learning performance over time, or they develop particular learning methods from scratch. This type of learning algorithm is known as the metalearning or learning-to-learn approach. Works by Jürgen Schmidhuber and Sepp Hochreiter [2] [3] are representative of recent research in this area, and a more comprehensive overview is given by Sebastian Thrun in [4]. Thrun defines learning to learn as the ability of an algorithm to improve its performance at each next task with experience from previous tasks [4]. Schmidhuber emphasizes the ability of the learner to evaluate and compare learning methods and the course of learning, and to use this evaluation to select a proper learning strategy [2]. We can recognize the following paradigms among metalearning approaches: similarity exploitation, adaptation of learning parameters, and discovery of the learning algorithm.
Particular methods might be focused on any of these paradigms, or on all of them. Similarity exploitation is the idea that a group of tasks shares some similarity which, once learned, might speed up the learning of further tasks. We can simply learn a sequence of tasks to exploit their similarity, but some mechanism for distinguishing task-specific knowledge from common cross-task knowledge should improve the performance. Adaptation of learning parameters might be done by some meta-learning algorithm above the learning algorithm. This paradigm might be based more on the learning-about-learning idea than on the learning-to-learn idea. However, knowledge about learning is a step toward learning to learn. Discovery of a learning algorithm is the design of a learning algorithm from scratch. This is more learning the learning than learning to learn; here we shift the focus from adaptation to learning.

Metalearning algorithms might be based on reinforcement learning or on supervised learning. In the reinforcement learning case, the learner through trial-and-error experience improves not only its performance on some particular task, but also its ability to learn. This can be achieved by treating the learning algorithm as a part of the solved task; that is, learning is one of the actions of the learner. In the supervised learning scenario, learning and metalearning are usually treated as independent processes.
2 Backpropagation Imitation

In this section we describe the modelling of the error backpropagation algorithm. We want to train a neural network to train another neural network. A neural network model of the error backpropagation algorithm should be able to train neural networks in a similar manner as the original backpropagation algorithm did. To obtain such a model we will sample the training process of backpropagation learning and then try to imitate it. This is a simple, yet general, approach to metalearning. It consists of the following sequence:

1. train an arbitrary neural network with the error backpropagation algorithm and sample the learning process,
2. train the learning network to imitate the original learning algorithm,
3. train an arbitrary neural network using the learning network.

Consider a multilayer neural network with neuron activations x_i, link weights w_ij, biases θ_i, and neuron activation functions f_i(in_i):

    x_i = f_i(in_i),    in_i = \sum_{j=1}^{M} w_{ij} x_j + \theta_i    (1)

The in_i is the input into the i-th neuron and M is the number of links connecting into the i-th neuron. The error J in the supervised learning mode is defined as:

    J^p = \frac{1}{2} \sum_{i=1}^{N} (ev_i^p - x_i^p)^2    (2)

The p is the index of the data pattern, N is the number of output neurons of the neural network, and ev_i^p is the expected output of the i-th neuron on the p-th pattern. For simplicity, we will omit the pattern index p later. The gradient-based error-minimizing adaptation of weights follows:

    \Delta w_{ij} = -\gamma \frac{\partial J}{\partial w_{ij}} = -\gamma \frac{\partial J}{\partial in_i} \frac{\partial in_i}{\partial w_{ij}} = \gamma \delta_i x_j    (3)

The weight w_ij links the j-th neuron into the i-th neuron, γ is the learning-rate constant, and δ_i is defined as:

    \delta_i = -\frac{\partial J}{\partial in_i} = -\frac{\partial J}{\partial x_i} \frac{\partial x_i}{\partial in_i} = -\frac{\partial J}{\partial x_i} f'(in_i)    (4)

The f'(in_i) is the derivative of the activation function f(in_i). For output neurons we get:

    \delta_i = -\frac{\partial J}{\partial x_i} f'(in_i) = (ev_i - x_i) f'(in_i)    (5)

For neurons in hidden layers we get:

    \delta_i = -f'(in_i) \sum_{h=1}^{N_h} \frac{\partial J}{\partial in_h} \frac{\partial in_h}{\partial x_i}
             = -f'(in_i) \sum_{h=1}^{N_h} \frac{\partial J}{\partial in_h} \frac{\partial}{\partial x_i} \sum_{l=1}^{N_l} w_{hl} x_l
             = -f'(in_i) \sum_{h=1}^{N_h} \frac{\partial J}{\partial in_h} w_{hi}
             = f'(in_i) \sum_{h=1}^{N_h} \delta_h w_{hi}    (6)

The N_h is the number of links going out of the i-th neuron and h is the index of these links and the corresponding neurons. The N_l is the number of neurons which have connections into the h-th neuron (see Fig.1).
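The three-step procedure above can be illustrated in code. Below is a minimal sketch of step 1 for a small 2-1-1 network of the kind used later in the experiments: plain backpropagation following rules (1)-(6), recording at every update the network state together with the weight changes it produced. The variable names (w1, w2, v, hb, yb), the initial weights, and the values of gamma and cycles are our own illustrative choices, not the paper's.

```python
import math

def f(a):             # logistic activation, eq. (1)
    return 1.0 / (1.0 + math.exp(-a))

def df(a):            # its derivative f'(in) = f(in) (1 - f(in))
    y = f(a)
    return y * (1.0 - y)

def sample_backprop(data, gamma=0.3, cycles=500):
    """Train a 2-1-1 network by backprop, sampling state/change pairs."""
    w1, w2, v, hb, yb = 0.1, -0.2, 0.3, 0.0, 0.0   # arbitrary init
    samples = []
    for _ in range(cycles):
        for x1, x2, ev in data:
            in_h = w1 * x1 + w2 * x2 + hb           # forward pass, eq. (1)
            h = f(in_h)
            in_y = v * h + yb
            y = f(in_y)
            d_y = (ev - y) * df(in_y)               # output delta, eq. (5)
            d_h = df(in_h) * d_y * v                # hidden delta, eq. (6)
            dw1, dw2 = gamma * d_h * x1, gamma * d_h * x2   # eq. (3)
            dv, dhb, dyb = gamma * d_y * h, gamma * d_h, gamma * d_y
            # 9 sampled inputs and 5 sampled outputs for the learning network
            samples.append(((w1, w2, v, hb, yb, x1, x2, y, ev),
                            (dw1, dw2, dv, dhb, dyb)))
            w1 += dw1; w2 += dw2; v += dv; hb += dhb; yb += dyb
    return (w1, w2, v, hb, yb), samples

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
weights, samples = sample_backprop(AND)
```

Each element of `samples` is one (state, change) pair, i.e. one training example for the learning network of step 2.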
The rule (6) is the error backpropagation rule, defining the backward propagation of the error through the network. Rule (3) defines the weight changes minimizing this error, and rule (5) sets the base for the error minimization.

Figure 1: Neuron indices for rule (6).

The error backpropagation algorithm is defined by rules (1), (2), (3), (5), and (6). To model this algorithm we may sample the variables w, Δw, θ, δ, x, ev, J, in, and f'(in). Some of these variables can be derived from others, so the full set of them is not necessary. We can model either the rule (6) alone, or the full set of rules. When modelling the full set of rules, interactions in the whole network may be processed in the model. When modelling rule (6) only, only the neighborhood of a particular link is considered.
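One way to read the rule-(6)-only variant is as a per-link model: each link contributes one training sample built from its local neighborhood only, instead of feeding the whole network state to the learning network. The feature layout below is our own guess at such a local encoding, not taken from the paper.

```python
# Hypothetical local sample for a single link (j -> i): the per-link
# learning network would map a few neighborhood features to the change
# of w_ij, instead of seeing all weights of the network at once.

def local_sample(x_j, in_i, deltas_out, weights_out, gamma, f_prime):
    """Build one (features, target) pair for link j -> i.

    deltas_out / weights_out are the deltas d_h and weights w_hi of the
    links going out of neuron i, as in rule (6).
    """
    backprop_sum = sum(d * w for d, w in zip(deltas_out, weights_out))
    delta_i = f_prime(in_i) * backprop_sum     # rule (6)
    features = (x_j, in_i, backprop_sum)       # local inputs (our choice)
    target = gamma * delta_i * x_j             # rule (3): the change of w_ij
    return features, target
```

Because the sample depends only on the link's neighborhood, one trained per-link model could be applied to networks of any topology, which matches the analysis later in the paper favoring this mode.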
The number of inputs and outputs of the learning network is equal to the number of sampled variables. The outputs are the Δw changes, and possibly the Δθ changes. To use the learning network, rules (6), (5), and (3) of the original backpropagation algorithm have to be replaced with the outputs of this learning network.

3 Experiments

Consider the neural network on Fig.2 with two inputs, one hidden neuron, and one output. It has three weights and two biases, whose changes we will try to approximate with the learning network. Thus, we will sample these changes while learning with the backpropagation algorithm. Then, we will train the learning network to approximate them. Besides these changes, we will sample all three weights and two biases, the two inputs, the one output, and the one expected value on the output. That is: 9 inputs and 5 outputs for the learning network. Such a learning network with two hidden neurons is on Fig.3. The number of hidden neurons is arbitrary; it might depend on the tasks learned and on the complexity of the original training algorithm, in our case error backpropagation.

Figure 2: Simple network to be trained. The x1 and x2 are inputs; w1, w2, and v are weights; hb and yb are biases; h is the hidden neuron activation, and y is the output.

Figure 3: Learning network with two hidden neurons for training of the network from Fig.2. Inputs are the variables describing the state of the neural network on Fig.2, and outputs are the changes of them provided by the learning algorithm.

In the 1st experiment we will train the network from Fig.2 to approximate the boolean function AND (Tab.1). This is a simple task and networks learn it quickly. The parameters of training of the basic network are: γ = .3, number of training cycles is 5. The parameters of training of the learning network are: γ = .98, number of training cycles is , number of hidden neurons is 2. The network topology is the same as on Fig.3.

      AND              OR
    x1 x2 | y        x1 x2 | y
     0  0 | 0         0  0 | 0
     0  1 | 0         0  1 | 1
     1  0 | 0         1  0 | 1
     1  1 | 1         1  1 | 1

Table 1: Training data for the boolean AND and OR functions for the network on Fig.2.
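Steps 2 and 3 of the procedure, fitting the learning network on the recorded (state, change) pairs and then letting it drive the training of a fresh basic network, might be sketched as follows. The paper's 9-input/5-output MLP is replaced here by a nearest-neighbor lookup over the sampled pairs purely to keep the sketch self-contained; `predict_changes`, the initial weights, and the cycle count are our own assumptions.

```python
import math

def f(a):   # logistic activation of the basic 2-1-1 network
    return 1.0 / (1.0 + math.exp(-a))

def predict_changes(samples, state):
    """Stand-in for the trained learning network: given the 9 state
    variables, return the 5 recorded changes of the nearest sampled state."""
    nearest = min(samples, key=lambda s: sum((a - b) ** 2
                                             for a, b in zip(s[0], state)))
    return nearest[1]

def train_via_learning_network(samples, data, cycles=100):
    """Train a fresh basic network; rules (3), (5), (6) are replaced by
    the learning network's outputs, as described in the text."""
    w1, w2, v, hb, yb = 0.05, -0.1, 0.2, 0.0, 0.0
    for _ in range(cycles):
        for x1, x2, ev in data:
            h = f(w1 * x1 + w2 * x2 + hb)          # forward pass only
            y = f(v * h + yb)
            state = (w1, w2, v, hb, yb, x1, x2, y, ev)
            dw1, dw2, dv, dhb, dyb = predict_changes(samples, state)
            w1 += dw1; w2 += dw2; v += dv; hb += dhb; yb += dyb
    return w1, w2, v, hb, yb
```

Note that only the forward pass of the basic network survives; no error is computed and no gradient is propagated, which is what makes the learning network a model of the training algorithm itself.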
The training history of the learning network is on Fig.4. A comparison of training the basic network using the backpropagation algorithm and using the learning network is on Fig.5. The learning network achieved better convergence than the original backpropagation algorithm. However, the performance of the learning network also depends on its own training; it is prone to overfitting. Also note that implementations of training using the learning network and using the error backpropagation algorithm may differ in speed. In our case, in this experiment, backpropagation training took .48 seconds, while learning network training took 3.78 seconds.

In the 2nd experiment we will use a basic network without a hidden layer. This is sufficient for the AND-function approximation. We will have 2 inputs, 1 output, and 0 hidden neurons in the basic network; and 7 inputs, 3 outputs, and hidden neurons in the learning network. The training history of this learning network is on Fig.6. A comparison of training the basic network without hidden neurons using the backpropagation algorithm and using the learning network is on Fig.7. The performance of the learning network in this setup is comparable to the performance of the backpropagation algorithm, although it is slightly worse than in the 1st experiment with hidden neurons.

Figure 4: Error of the training of the learning network for the basic network from Fig.2.

Figure 5: Error of the training of the basic network from Fig.2 using the error backpropagation algorithm and using the learning network from Fig.3.

Figure 6: Error of the training of the learning network for the basic network without hidden neurons.

Figure 7: Error of the training of the basic network without hidden neurons using the error backpropagation algorithm and using the learning network.
In the 3rd experiment we will increase the number of hidden neurons. We will have 2 inputs, 1 output, and 4 hidden neurons in the basic network; and 21 inputs, 17 outputs, and 3 hidden neurons in the learning network. The task is the OR-function approximation. The training history of this learning network is on Fig.8. A comparison of training the basic network using the backpropagation algorithm and using the learning network is on Fig.9. The performance of the learning network in this setup is again better than the original error backpropagation algorithm.

In the 4th experiment we will try a more difficult classification task. Fig.10 depicts the training and testing data sets. The network topologies are: inputs x and y, outputs innersquare and outersquare, and 5 hidden neurons for the basic network; and 38 inputs, 27 outputs, and 3 hidden neurons for the learning network. The training history of this learning network is on Fig.11. A comparison of training the basic network using the backpropagation algorithm and using the learning network is on Fig.12. Training with the learning network on this task diverges, while training with the backpropagation algorithm stopped at some error level but did not diverge.

Figure 8: Error of the training of the learning network for the basic network with 4 hidden neurons.

Figure 9: Error of the training of the basic network with 4 hidden neurons using the error backpropagation algorithm and using the learning network.

Figure 11: Error of the training of the learning network for the basic network for the square classification task.

Figure 12: Error of the training of the basic network for the square classification task using the error backpropagation algorithm and using the learning network.
Figure 10: Training and testing sets for the classification task. The task is to classify points in space by their x and y coordinates, whether they fit into the inner square or not.

4 Analysis

The AND/OR-function approximation tasks with hidden units show good results when trained with the learning network. The ability of the learning network to outperform the backpropagation algorithm seems promising. In the future, knowledge from several training algorithms might be used to train the learning network in order to get even better performance by exploiting the best of all of these algorithms. A trial-and-error mode might further be used when training the learning network to further improve its performance, possibly beyond the reach of conventional learning algorithms.

The slightly worse performance in the experiments without hidden units points to the nonlinear character of either the learning rules of the backpropagation algorithm, or of the neural network which we train. The problem with the application of our approach to the more complex classification task might be of a similar character as the instability of an overtrained learning network. The chance of instability of learning with the learning network is an inherent property of this approach. The better performance with simpler networks favors modelling only the rule (6) of the backpropagation algorithm, instead of modelling all the rules, as we did in the experiments. To further investigate the neural network modelling of the backpropagation algorithm, rule (6) modelling and a deeper analysis of the variable set for algorithm sampling might help.

5 Conclusion

Using a neural network model of the backpropagation algorithm to train neural networks is a viable approach. Novel methods of performance tuning of the learning algorithm are possible when using this model. There is, however, a risk of learning instability with this approach, and the actual modelling of backpropagation can be done in several different modes.

References

[1] M. Karták, Metalearning methods for neural networks, (in Slovak), MS Thesis, Technical University Košice, (2005).
neuron.tuke.sk/jaksa/theses

[2] J. Schmidhuber, J. Zhao, and M. Wiering, Simple principles of metalearning, Technical Report IDSIA-69-96, IDSIA, (1996). citeseer.ist.psu.edu/schmidhuber96simple.html

[3] S. Hochreiter, A. S. Younger, and P. R. Conwell, Learning to Learn Using Gradient Descent, Lecture Notes in Computer Science, vol.2130, (2001). citeseer.ist.psu.edu/hochreiterlearning.html

[4] S. Thrun, Learning To Learn: Introduction. citeseer.ist.psu.edu/article/thrun96learning.html

[5] J. Schmidhuber, Evolutionary Principles in Self-Referential Learning, Diploma Thesis, Technische Universität München, (1987). juergen/diploma.html

[6] J. Schmidhuber, On Learning How to Learn Learning Strategies, Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, (1994). citeseer.ist.psu.edu/schmidhuber95learning.html

[7] J. Schmidhuber, A General Method for Incremental Self-Improvement and Multi-agent Learning in Unrestricted Environments, In X. Yao (Ed.), Evolutionary Computation: Theory and Applications, Scientific Publ. Co., Singapore, (1996). citeseer.ist.psu.edu/article/schmidhuber96general.html

[8] J. Schmidhuber, A Neural Network That Embeds Its Own Meta-Levels, In Proc. of the International Conference on Neural Networks '93, San Francisco, IEEE, (1993). citeseer.ist.psu.edu/schmidhuber93neural.html
EE-589 Introduction to Neural Assistant Prof. Dr. Turgay IBRIKCI Room # 305 (322) 338 6868 / 139 Wensdays 9:00-12:00 Course Outline The course is divided in two parts: theory and practice. 1. Theory covers
More informationPROCEEDINGS OF SPIE. Double degree master program: Optical Design
PROCEEDINGS OF SPIE SPIEDigitalLibrary.org/conference-proceedings-of-spie Double degree master program: Optical Design Alexey Bakholdin, Malgorzata Kujawinska, Irina Livshits, Adam Styk, Anna Voznesenskaya,
More information*** * * * COUNCIL * * CONSEIL OFEUROPE * * * DE L'EUROPE. Proceedings of the 9th Symposium on Legal Data Processing in Europe
*** * * * COUNCIL * * CONSEIL OFEUROPE * * * DE L'EUROPE Proceedings of the 9th Symposium on Legal Data Processing in Europe Bonn, 10-12 October 1989 Systems based on artificial intelligence in the legal
More informationarxiv: v1 [cs.lg] 7 Apr 2015
Transferring Knowledge from a RNN to a DNN William Chan 1, Nan Rosemary Ke 1, Ian Lane 1,2 Carnegie Mellon University 1 Electrical and Computer Engineering, 2 Language Technologies Institute Equal contribution
More informationBAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION. Han Shu, I. Lee Hetherington, and James Glass
BAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION Han Shu, I. Lee Hetherington, and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge,
More informationChamilo 2.0: A Second Generation Open Source E-learning and Collaboration Platform
Chamilo 2.0: A Second Generation Open Source E-learning and Collaboration Platform doi:10.3991/ijac.v3i3.1364 Jean-Marie Maes University College Ghent, Ghent, Belgium Abstract Dokeos used to be one of
More informationXinyu Tang. Education. Research Interests. Honors and Awards. Professional Experience
Xinyu Tang Parasol Laboratory Department of Computer Science Texas A&M University, TAMU 3112 College Station, TX 77843-3112 phone:(979)847-8835 fax: (979)458-0425 email: xinyut@tamu.edu url: http://parasol.tamu.edu/people/xinyut
More informationModeling function word errors in DNN-HMM based LVCSR systems
Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford
More informationInteraction Design Considerations for an Aircraft Carrier Deck Agent-based Simulation
Interaction Design Considerations for an Aircraft Carrier Deck Agent-based Simulation Miles Aubert (919) 619-5078 Miles.Aubert@duke. edu Weston Ross (505) 385-5867 Weston.Ross@duke. edu Steven Mazzari
More informationIEEE/ACM TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, VOL XXX, NO. XXX,
IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, VOL XXX, NO. XXX, 2017 1 Small-footprint Highway Deep Neural Networks for Speech Recognition Liang Lu Member, IEEE, Steve Renals Fellow,
More informationSURVIVING ON MARS WITH GEOGEBRA
SURVIVING ON MARS WITH GEOGEBRA Lindsey States and Jenna Odom Miami University, OH Abstract: In this paper, the authors describe an interdisciplinary lesson focused on determining how long an astronaut
More informationBENCHMARKING OF FREE AUTHORING TOOLS FOR MULTIMEDIA COURSES DEVELOPMENT
36 Acta Electrotechnica et Informatica, Vol. 11, No. 3, 2011, 36 41, DOI: 10.2478/v10198-011-0033-8 BENCHMARKING OF FREE AUTHORING TOOLS FOR MULTIMEDIA COURSES DEVELOPMENT Peter KOŠČ *, Mária GAMCOVÁ **,
More informationData Fusion Models in WSNs: Comparison and Analysis
Proceedings of 2014 Zone 1 Conference of the American Society for Engineering Education (ASEE Zone 1) Data Fusion s in WSNs: Comparison and Analysis Marwah M Almasri, and Khaled M Elleithy, Senior Member,
More informationHow People Learn Physics
How People Learn Physics Edward F. (Joe) Redish Dept. Of Physics University Of Maryland AAPM, Houston TX, Work supported in part by NSF grants DUE #04-4-0113 and #05-2-4987 Teaching complex subjects 2
More informationLearning Prospective Robot Behavior
Learning Prospective Robot Behavior Shichao Ou and Rod Grupen Laboratory for Perceptual Robotics Computer Science Department University of Massachusetts Amherst {chao,grupen}@cs.umass.edu Abstract This
More informationSession H1B Teaching Introductory Electrical Engineering: Project-Based Learning Experience
Teaching Introductory Electrical Engineering: Project-Based Learning Experience Chi-Un Lei, Hayden Kwok-Hay So, Edmund Y. Lam, Kenneth Kin-Yip Wong, Ricky Yu-Kwong Kwok Department of Electrical and Electronic
More informationMachine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler
Machine Learning and Data Mining Ensembles of Learners Prof. Alexander Ihler Ensemble methods Why learn one classifier when you can learn many? Ensemble: combine many predictors (Weighted) combina
More informationOn-Line Data Analytics
International Journal of Computer Applications in Engineering Sciences [VOL I, ISSUE III, SEPTEMBER 2011] [ISSN: 2231-4946] On-Line Data Analytics Yugandhar Vemulapalli #, Devarapalli Raghu *, Raja Jacob
More informationErkki Mäkinen State change languages as homomorphic images of Szilard languages
Erkki Mäkinen State change languages as homomorphic images of Szilard languages UNIVERSITY OF TAMPERE SCHOOL OF INFORMATION SCIENCES REPORTS IN INFORMATION SCIENCES 48 TAMPERE 2016 UNIVERSITY OF TAMPERE
More informationAlgebra 2- Semester 2 Review
Name Block Date Algebra 2- Semester 2 Review Non-Calculator 5.4 1. Consider the function f x 1 x 2. a) Describe the transformation of the graph of y 1 x. b) Identify the asymptotes. c) What is the domain
More informationImpact of Educational Reforms to International Cooperation CASE: Finland
Impact of Educational Reforms to International Cooperation CASE: Finland February 11, 2016 10 th Seminar on Cooperation between Russian and Finnish Institutions of Higher Education Tiina Vihma-Purovaara
More informationAutomating the E-learning Personalization
Automating the E-learning Personalization Fathi Essalmi 1, Leila Jemni Ben Ayed 1, Mohamed Jemni 1, Kinshuk 2, and Sabine Graf 2 1 The Research Laboratory of Technologies of Information and Communication
More informationA Comparison of Annealing Techniques for Academic Course Scheduling
A Comparison of Annealing Techniques for Academic Course Scheduling M. A. Saleh Elmohamed 1, Paul Coddington 2, and Geoffrey Fox 1 1 Northeast Parallel Architectures Center Syracuse University, Syracuse,
More informationDeep Facial Action Unit Recognition from Partially Labeled Data
Deep Facial Action Unit Recognition from Partially Labeled Data Shan Wu 1, Shangfei Wang,1, Bowen Pan 1, and Qiang Ji 2 1 University of Science and Technology of China, Hefei, Anhui, China 2 Rensselaer
More informationThe Learning Model S2P: a formal and a personal dimension
The Learning Model S2P: a formal and a personal dimension Salah Eddine BAHJI, Youssef LEFDAOUI, and Jamila EL ALAMI Abstract The S2P Learning Model was originally designed to try to understand the Game-based
More informationBMBF Project ROBUKOM: Robust Communication Networks
BMBF Project ROBUKOM: Robust Communication Networks Arie M.C.A. Koster Christoph Helmberg Andreas Bley Martin Grötschel Thomas Bauschert supported by BMBF grant 03MS616A: ROBUKOM Robust Communication Networks,
More informationEssentials of Ability Testing. Joni Lakin Assistant Professor Educational Foundations, Leadership, and Technology
Essentials of Ability Testing Joni Lakin Assistant Professor Educational Foundations, Leadership, and Technology Basic Topics Why do we administer ability tests? What do ability tests measure? How are
More informationSaeed Rajaeepour Associate Professor, Department of Educational Sciences. Seyed Ali Siadat Professor, Department of Educational Sciences
Investigating and Comparing Primary, Secondary, and High School Principals and Teachers Attitudes in the City of Isfahan towards In-Service Training Courses Masoud Foroutan (Corresponding Author) PhD Student
More informationHow to set up gradebook categories in Moodle 2.
How to set up gradebook categories in Moodle 2. It is possible to set up the gradebook to show divisions in time such as semesters and quarters by using categories. For example, Semester 1 = main category
More informationAn Online Handwriting Recognition System For Turkish
An Online Handwriting Recognition System For Turkish Esra Vural, Hakan Erdogan, Kemal Oflazer, Berrin Yanikoglu Sabanci University, Tuzla, Istanbul, Turkey 34956 ABSTRACT Despite recent developments in
More informationUsing focal point learning to improve human machine tacit coordination
DOI 10.1007/s10458-010-9126-5 Using focal point learning to improve human machine tacit coordination InonZuckerman SaritKraus Jeffrey S. Rosenschein The Author(s) 2010 Abstract We consider an automated
More informationACTIVITY: Comparing Combination Locks
5.4 Compound Events outcomes of one or more events? ow can you find the number of possible ACIVIY: Comparing Combination Locks Work with a partner. You are buying a combination lock. You have three choices.
More informationA Variation-Tolerant Multi-Level Memory Architecture Encoded in Two-state Memristors
A Variation-Tolerant Multi-Level Memory Architecture Encoded in Two-state Memristors Bin Wu and Matthew R. Guthaus Department of CE, University of California Santa Cruz Santa Cruz, CA 95064 {wubin6666,mrg}@soe.ucsc.edu
More information