OPTIMIZATION OF TRAINING SETS FOR HEBBIAN-LEARNING-BASED CLASSIFIERS


Václav Kocian, Eva Volná, Michal Janošek, Martin Kotyrba
University of Ostrava, Department of Informatics and Computers
Dvořákova 7, Ostrava 1, Czech Republic
vaclav.kocian@osu.cz

Abstract: The article deals with the possibilities of optimizing classifiers based on neural networks that use the Hebbian learning mechanism. An experimental study was conducted. The study shows that badly designed learning patterns can, under certain circumstances, prevent the network from learning. The new term of irrelevant items of input vectors is introduced in the article. We have also introduced an optimization method. This method helps to avoid the problems caused by the so-called irrelevant items of input vectors and thus makes the learning algorithm more robust. The method is independent of the classification algorithm itself, so it is very easy to equip any arbitrary algorithm with it.

Keywords: Neural networks, Hebbian learning, irrelevant items, patterns optimization, pattern preprocessing

1 Hebbian Networks

Hebbian learning theory can be summarized in the following rule: "Cells that fire together, wire together." [2] The rule seeks to explain "associative learning", in which simultaneous activation of cells leads to the strengthening of their links. The main advantage of the Hebbian algorithm is its simplicity and thus its speed. The basic variant of the algorithm only needs the operations of addition and multiplication of integers. In addition, we can consider the repeatability of the calculation as an advantage (calculations in the Hebbian algorithm are not burdened with randomness). This makes it relatively easy to study the behavior of the algorithm on specific training sets. In addition, there is a possibility that the discovered regularities will be applicable to some other types of networks.
For a description of the learning process, we consider a trivial model network with one input and one output neuron connected by a single connection (see Fig. 1). In complex networks, these rules apply to all such triplets (input, output, connection). Neural networks are taught in so-called cycles. During each such cycle, all the training patterns are presented to the network once. We derive formulas for calculating the value of the weight w after the submission of the n-th pattern. At the start, the weight w is initialized with the value I (I = 0 according to [3]):

w_0 = I.

After the presentation of each (the n-th) pattern, the current value of w is raised by the product of the appropriate input and output:

w_n = w_{n-1} + x_n * y_n,  n > 0.

Therefore we can express the weight value w at the end of the first cycle, i.e. after a presentation of the m patterns:

w_m = I + Σ_{i=1..m} x_i * y_i,

where the expression Σ_{i=1..m} x_i * y_i means the change of w after one cycle. Since the set of patterns presented to the network in each cycle is always the same, we can label the sum as C (the change of w after one learning cycle). The weight value at the end of the first cycle can then be written as w_1 = I + C. To calculate the value of w after the p-th cycle, we can use the expression:

w_p = I + p * C.
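The closed-form expression above can be checked against a direct pattern-by-pattern simulation. The sketch below is a minimal illustration with hypothetical data and function names, assuming bipolar (+1/−1) inputs and outputs:

```python
# Minimal sketch of the single-connection Hebbian update described above.
# `patterns` is a list of (x_i, y_i) pairs; names and data are hypothetical.

def hebbian_weight(patterns, I=0, cycles=1):
    """Weight after `cycles` learning cycles: w_p = I + p * C,
    where C = sum(x_i * y_i) is the change over one cycle."""
    C = sum(x * y for x, y in patterns)
    return I + cycles * C

patterns = [(1, 1), (-1, 1), (1, -1)]   # three (input, output) pairs

# Step-by-step simulation must agree with the closed form w_p = I + p*C:
w = 0
for _ in range(3):                       # three learning cycles
    for x, y in patterns:
        w += x * y                       # w_n = w_{n-1} + x_n * y_n
assert w == hebbian_weight(patterns, I=0, cycles=3)
```

Because the same pattern set is presented in every cycle, the incremental updates collapse into the single multiplication by the cycle count.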
Fig. 1: Trivial neural network considered in the description of the learning process

Fig. 2: General topology of the classifier. Weights of connections w11–wij are modified in accordance with the Hebbian learning rule.

2 The original experimental study – motivation

We noticed an unexpected behavior of the classifier during our experiments with adaptation aimed at pattern recognition in time series [1]. It inspired us to study the influence of the shape of the learning patterns on the ability of a neural network to adapt properly. The aim of our original experimental work was to test the ability of Hebbian networks to learn the fundamental trends from typical time series (rising, descent, resistance, support). We used two sets of artificially generated patterns. Both sets P1 (see Fig. 3, Table 1) and P2 (see Fig. 4, Table 2) contain patterns with the same meaning but derived from the original data using different methods of binarization. The pattern bitmaps (bit arrays of size 8x8) were converted into one-dimensional vectors with a length of 64 bits by concatenation of the successive rows of the bitmap matrix. Each of the output vectors T with 4 bits had only one of the bits active, which determined the number of the class assigned to the pattern, i.e. 1 – Rising; 2 – Descent; 3 – Resistance; 4 – Support.

Fig. 3: Patterns from set P1 – input pattern bitmaps (lower square) and required responses (upper rectangle)

Fig. 4: Patterns from set P2 – input pattern bitmaps (lower square) and required responses (upper rectangle)

Table 1: P1 – vectors T and S. Values of −1 and +1 are written using distinct characters.

Table 2: P2 – vectors T and S. Values of −1 and +1 are written using distinct characters.

The original experiment procedure

First, patterns from the set P1 (Fig. 3, Table 1) were presented to the network. The network was able to recognize only two of the four submitted patterns. Then, patterns from the set P2 (Fig. 4, Table 2) were presented to the network. The network was able to learn all patterns correctly.
Finally, patterns from the set P1 were presented (in active mode) to the network, which was adapted to P2. The network was then able to classify all patterns from the set P1 correctly. The original motivation for creating the set P2 was to verify the assumption that the presentation of "flat" patterns allows the network to obtain a more general "knowledge" about the nature of the patterns. Such a network is then better able to detect sequences with a lower amplitude or another slope of the curve. The experimental study seemed to confirm the correctness of this assumption. Moreover, if the "correct" patterns are presented to the network, it can learn to recognize patterns which it was unable to learn separately.

3 Projection of the problem into simpler patterns

When analyzing the behavior described above, we had to repeat our experimental study with simpler patterns. We created two sets R1 (Fig. 5, Table 3) and R2 (Fig. 6, Table 4). Each of them contains four patterns. The input pattern's length is 6 and the output pattern's length is 4. The behavior of the network working with these two sets was analogous to the original experiment. First, the network was not able to learn the set R1. When the network was adapted with R2, it was then able to correctly classify all patterns from R2 and R1.

Fig. 5: Set R1. The network is not able to learn it.

Table 3: Set R1, vectors T and S. Values of −1 and +1 are written using distinct characters.

Fig. 6: Set R2. Once the network learns the patterns from R2, it can also classify the patterns from R1.

Table 4: Set R2, vectors T and S. Values of −1 and +1 are written using distinct characters.

Patterns in both sets R1 and R2 differ only in the values of the 5th and 6th input items. While the values of these items are the same in all patterns of R1, they differ in the patterns of R2.
Looking more carefully at both sets R1 and R2, we can see that the outputs just "copy" the first four inputs, regardless of the values of the 5th and 6th input items. We can intuitively say that the 5th and the 6th items are both irrelevant.

3.1 Adaptation

The network topology which we used is shown in Fig. 7. For each of the sets R1 and R2, a separate instance of the classifier was created. Table 5 shows the network adaptation during the first learning cycle. Since the patterns in R1 and R2 differ only in the 5th and 6th input bits, the first six columns of Table 5 are the same for both sets. Columns 7 and 8 show the values for the 5th and 6th items from R1. Columns 9 and 10 show the values for the 5th and 6th items from R2. The closing rows of Table 5 show the weight values after the first learning cycle for both sets R1 and R2. We can state the following:
1. A total of twelve connections end the adaptation with zero weight values. Such connections can be considered redundant in terms of the network's capacity to remember or recognize patterns.
2. Every connection related to the 5th and 6th items has a nonzero weight value, i.e. these connections affect the work of the classifiers during the adaptive and active modes.
3. All connection weights w11, w22, w33 and w44 have the same value 4. All connection weights wb1–wb4 (bias) have the same value −2.
For better illustration, we present the structure of the neural network without the connections with zero weight value (marked as redundant) in Fig. 8. Between the adaptations to R1 and R2, the difference is only in the weight values on the connections related to the 5th and 6th inputs.

Table 5: Evolution of the weight values during the learning process on sets R1 and R2. Items of input and output vectors that take a positive value are highlighted in black. Initialization (identical for R1 and R2):
Y1: wb1=0, w11=0, w21=0, w31=0, w41=0, w51=0, w61=0
Y2: wb2=0, w12=0, w22=0, w32=0, w42=0, w52=0, w62=0
Y3: wb3=0, w13=0, w23=0, w33=0, w43=0, w53=0, w63=0
Y4: wb4=0, w14=0, w24=0, w34=0, w44=0, w54=0, w64=0

Fig. 7: Topology of the neural network for processing patterns from the training sets R1 and R2 (B=1).

Fig. 8: Structure of the neural network (from Fig. 7) after adaptation on sets R1 and R2. Connections with zero weight values were omitted. In the case of R1, the values of the dotted and the dashed connections are identical (2); in the case of R2 they are opposite.
3.2 Analysis of adaptation results

Looking at the closing rows of Table 5 and at Fig. 8, it is possible to express the activation values of the output neurons after passing the training set as follows:

Y_j = X_j * w_jj + X_5 * w_5j + X_6 * w_6j + B * w_bj   (1)

Substituting the weight values after adaptation of R1 into equation (1), we obtain the following relation:

Y_j = X_j * 4 - 2 - 2 - 2   (2)

Now we can generalize equation (2); after the n-th pass we get the neuron activation expressed according to the formula:

Y_j = X_j * 4n - 2n - 2n - 1 * 2n   (3)

which can be reduced to:

Y_j = n * (X_j * 4 - 6)   (4)

From equation (4) it is clear that the value Y_j for the set R1 can never be positive: since X_j takes either the value -1 or +1, Y_j can only have the values -10n or -2n. As X_5 = X_6 = -1 for all patterns from the set R1, the sum of their contributions to the value of each output neuron for each pattern is equal to -4n and the network will never be able to successfully learn the patterns from the set R1.

Substituting the values related to the set R2 into equation (1) in the same way as we did with R1, we get the following:

Y_j = n * (X_j * 4 ± 4 - 2)   (5)

Formula (5) shows that the values of X_5 and X_6 help to deduce the correct class of the presented pattern (they restrict the choice to two possible classes). Their values in patterns 1 and 2 increase Y_1 and Y_2 by 4 while reducing Y_3 and Y_4 by 4. Their values in patterns 3 and 4 do the opposite. The weights of the connections related to the 5th and 6th inputs are exactly opposite. This implies that if the values of X_5 and X_6 are the same (the case of the set R1), their contribution to the activation value of each output in each pattern is zero. Therefore, a network adapted to R2 correctly identifies the patterns from R1 too.

4 Optimization of the classifier

As we have shown in the previous example, the difficulty with the training set R1 lies in the components X_5 and X_6, which have the same value in all patterns. Therefore, these components do not help us to assign the proper classes to the patterns.
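This effect can be reproduced numerically. The following sketch (hypothetical function names; the R1-style data is reconstructed from the description above, with the first four inputs copying the output vector and items 5 and 6 fixed at −1) trains the Hebbian weights and confirms that no output activation is ever positive, in line with equation (4):

```python
# Verification sketch for an R1-like set. Assumed, reconstructed data:
# outputs copy the first four inputs; input items 5 and 6 are always -1.

def train_hebb(patterns, cycles):
    """Hebbian training: w[i][j] += x_i * t_j per pattern, plus a bias B=1."""
    n_in = len(patterns[0][0]) + 1            # +1 for the bias input B = 1
    n_out = len(patterns[0][1])
    w = [[0] * n_out for _ in range(n_in)]
    for _ in range(cycles):
        for x, t in patterns:
            xb = x + [1]                      # append bias input
            for i in range(n_in):
                for j in range(n_out):
                    w[i][j] += xb[i] * t[j]
    return w

def activations(x, w):
    xb = x + [1]
    return [sum(xb[i] * w[i][j] for i in range(len(xb)))
            for j in range(len(w[0]))]

R1 = []
for p in range(4):
    out = [1 if j == p else -1 for j in range(4)]
    R1.append((out + [-1, -1], out))          # items 5 and 6 always -1

n = 3
w = train_hebb(R1, cycles=n)
for x, t in R1:
    Y = activations(x, w)
    # Equation (4): Y_j = n*(4*X_j - 6), i.e. -2n or -10n, never positive.
    assert Y == [n * (4 * xj - 6) for xj in x[:4]]
```

The trained weights match the closing rows of Table 5 scaled by the cycle count: the self-weights are 4n, the bias weights −2n, and the weights of the constant items 5 and 6 contribute −4n to every output.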
We can describe these components as excessive (irrelevant). Moreover, during the learning process, the connections related to these components acquire nonzero weight values, which leads to confusion, and the network loses its learning ability. Based on our experimental study, we proposed a method for evaluating the relevance of the input vector components. The principles of the method are simple:
1. Before adaptation, the algorithm walks through the training set and identifies as irrelevant all items whose value is the same in all patterns.
2. Weights of the connections related to the irrelevant items are ignored during the adaptation.
3. Thanks to that, such weights remain 0.

The algorithm that marks the irrelevant items can be written as follows:
1. Mark all items as irrelevant.
2. Load the input vector of the first pattern and remember the values of its items.
3. Repeat with all successive patterns:
a. Load the input vector.
b. Mark every irrelevant item as relevant in case its actual value differs from that in the first pattern.
4. End.
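The marking algorithm above can be sketched directly in code. The function name and sample data below are hypothetical illustrations, not part of the original implementation:

```python
# Sketch of the irrelevant-item marking algorithm from the text: an item
# is irrelevant if its value is identical in every input vector.

def mark_irrelevant(training_inputs):
    """Return a boolean list: True where the item is irrelevant."""
    first = training_inputs[0]
    irrelevant = [True] * len(first)          # step 1: mark all irrelevant
    for vector in training_inputs[1:]:        # step 3: successive patterns
        for i, value in enumerate(vector):
            if value != first[i]:             # differs from first pattern?
                irrelevant[i] = False         # step 3b: mark as relevant
    return irrelevant

# R1-style inputs: items 5 and 6 (indices 4, 5) are constant.
R1_inputs = [
    [ 1, -1, -1, -1, -1, -1],
    [-1,  1, -1, -1, -1, -1],
    [-1, -1,  1, -1, -1, -1],
    [-1, -1, -1,  1, -1, -1],
]
assert mark_irrelevant(R1_inputs) == [False, False, False, False, True, True]
```

A single pass over the training set suffices, which keeps the preprocessing cost linear in the total number of input items.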
The modified classifier is now able to adapt to both sets R1 and R2. With this preprocessing, the neural network becomes more specialized to the actual training set, i.e. it loses some of its generalization ability. Fig. 9 shows the topology of the network, which uses the proposed algorithm to identify the irrelevant items, highlighted in gray. The related connections (dashed) are then ignored during the adaptation process. Fig. 10 shows the structure of the neural network after adaptation to R1. Connections with zero weight values were excluded.

Fig. 9: Network topology for the set R1 after preprocessing. Items X5 and X6 are marked as irrelevant. The weight values of the related connections remain zero during the whole adaptation.

Fig. 10: The structure of the neural network after its adaptation to R1. Connections with zero weight values were excluded.

Finally, both original data sets P1 (see Fig. 3, Table 1) and P2 (see Fig. 4, Table 2) were presented to the adjusted classifier. Looking at Fig. 11 and Fig. 12, we can see the irrelevant items in both sets marked in gray. As expected, the classifier can now learn and correctly classify all training patterns of both sets P1 and P2. In this case, the adaptation to the set P2 does not lead to a correct classification of the patterns of P1, but the network behavior is in line with expectations: due to the elimination of the redundant items from the training sets, the network has lost some of its generalization ability.

Fig. 11: Patterns from the set P1 showing irrelevant components (gray)

Fig. 12: Patterns from the set P2 showing irrelevant components (gray)

In the final step of our experimental study, the training set P3 was designed, which includes all patterns from the sets P1 and P2. No irrelevant components were found in this united training set. Its adaptation process then went correctly in accordance with expectations: all patterns from the set P3 (i.e. P3 = P1 ∪ P2) were correctly adapted.
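The optimized adaptation (step 2 of the method: weights of connections related to irrelevant items are ignored, so they remain zero) can be sketched as follows. Names and data are hypothetical, and the bias input is omitted for brevity:

```python
# Sketch of the optimized adaptation: connections related to irrelevant
# items are skipped during learning, so their weights stay at zero.

def train_hebb_masked(patterns, irrelevant, cycles=1):
    n_in = len(patterns[0][0])
    n_out = len(patterns[0][1])
    w = [[0] * n_out for _ in range(n_in)]
    for _ in range(cycles):
        for x, t in patterns:
            for i in range(n_in):
                if irrelevant[i]:             # step 2: ignore these weights
                    continue
                for j in range(n_out):
                    w[i][j] += x[i] * t[j]
    return w

# R1-style data: items 5 and 6 (indices 4, 5) are constant, hence irrelevant.
R1 = [([1 if j == p else -1 for j in range(4)] + [-1, -1],
       [1 if j == p else -1 for j in range(4)]) for p in range(4)]
w = train_hebb_masked(R1, irrelevant=[False] * 4 + [True, True], cycles=1)
assert w[4] == [0, 0, 0, 0] and w[5] == [0, 0, 0, 0]
```

With the masked weights, the constant items 5 and 6 contribute nothing to any output activation, so the confusing −4n term of equation (4) disappears and the R1-style patterns become learnable.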
5 Conclusion

In this experimental study we have managed to explain the cause of the unexpected behavior of the neural network which we had seen in previous time-series-related experiments [1]. We have designed, theoretically justified and experimentally tested a new method for the preprocessing of a training set. This method enhances the ability of a neural network to learn and classify patterns.

References
[1] Janošek, M., Kocian, V., Kotyrba, M., Volná, E., Pattern recognition and system adaptation. In Kováčová, M. (ed.): Proceedings of the 10th International Conference on Applied Mathematics, Aplimat 2011, Bratislava, Slovakia, 2011.
[2] Doidge, N., The Brain That Changes Itself, Viking Press, 2007.
[3] Fausett, L. V., Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Prentice Hall, 1994.
[4] de Castro, L. N., Fundamentals of Natural Computing, Chapman & Hall, 2006.
[5] Bishop, C. M., Neural Networks for Pattern Recognition, Oxford University Press, 1997.
Linking Task: Identifying authors and book titles in verbose queries Anaïs Ollagnier, Sébastien Fournier, and Patrice Bellot AixMarseille University, CNRS, ENSAM, University of Toulon, LSIS UMR 7296,
More informationMathematics process categories
Mathematics process categories All of the UK curricula define multiple categories of mathematical proficiency that require students to be able to use and apply mathematics, beyond simple recall of facts
More informationA Pipelined Approach for Iterative Software Process Model
A Pipelined Approach for Iterative Software Process Model Ms.Prasanthi E R, Ms.Aparna Rathi, Ms.Vardhani J P, Mr.Vivek Krishna Electronics and Radar Development Establishment C V Raman Nagar, Bangalore560093,
More informationThis scope and sequence assumes 160 days for instruction, divided among 15 units.
In previous grades, students learned strategies for multiplication and division, developed understanding of structure of the place value system, and applied understanding of fractions to addition and subtraction
More informationStacks Teacher notes. Activity description. Suitability. Time. AMP resources. Equipment. Key mathematical language. Key processes
Stacks Teacher notes Activity description (Interactive not shown on this sheet.) Pupils start by exploring the patterns generated by moving counters between two stacks according to a fixed rule, doubling
More informationThe 9 th International Scientific Conference elearning and software for Education Bucharest, April 2526, / X
The 9 th International Scientific Conference elearning and software for Education Bucharest, April 2526, 2013 10.12753/2066026X13154 DATA MINING SOLUTIONS FOR DETERMINING STUDENT'S PROFILE Adela BÂRA,
More informationA Case Study: News Classification Based on Term Frequency
A Case Study: News Classification Based on Term Frequency Petr Kroha Faculty of Computer Science University of Technology 09107 Chemnitz Germany kroha@informatik.tuchemnitz.de Ricardo BaezaYates Center
More informationWord Segmentation of Offline Handwritten Documents
Word Segmentation of Offline Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department
More informationTest Effort Estimation Using Neural Network
J. Software Engineering & Applications, 2010, 3: 331340 doi:10.4236/jsea.2010.34038 Published Online April 2010 (http://www.scirp.org/journal/jsea) 331 Chintala Abhishek*, Veginati Pavan Kumar, Harish
More informationMultimedia Application Effective Support of Education
Multimedia Application Effective Support of Education Eva Milková Faculty of Science, University od Hradec Králové, Hradec Králové, Czech Republic eva.mikova@uhk.cz Abstract Multimedia applications have
More informationMontana Content Standards for Mathematics Grade 3. Montana Content Standards for Mathematical Practices and Mathematics Content Adopted November 2011
Montana Content Standards for Mathematics Grade 3 Montana Content Standards for Mathematical Practices and Mathematics Content Adopted November 2011 Contents Standards for Mathematical Practice: Grade
More informationTOPICS LEARNING OUTCOMES ACTIVITES ASSESSMENT Numbers and the number system
Curriculum Overview Mathematics 1 st term 5º grade  2010 TOPICS LEARNING OUTCOMES ACTIVITES ASSESSMENT Numbers and the number system Multiplies and divides decimals by 10 or 100. Multiplies and divide
More informationMiamiDade County Public Schools
ENGLISH LANGUAGE LEARNERS AND THEIR ACADEMIC PROGRESS: 20102011 Author: Aleksandr Shneyderman, Ed.D. January 2012 Research Services Office of Assessment, Research, and Data Analysis 1450 NE Second Avenue,
More informationAlgebra 1, Quarter 3, Unit 3.1. Line of Best Fit. Overview
Algebra 1, Quarter 3, Unit 3.1 Line of Best Fit Overview Number of instructional days 6 (1 day assessment) (1 day = 45 minutes) Content to be learned Analyze scatter plots and construct the line of best
More informationCOMPUTERASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS
COMPUTERASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS L. Descalço 1, Paula Carvalho 1, J.P. Cruz 1, Paula Oliveira 1, Dina Seabra 2 1 Departamento de Matemática, Universidade de Aveiro (PORTUGAL)
More informationSpeaker Identification by Comparison of Smart Methods. Abstract
Journal of mathematics and computer science 10 (2014), 6171 Speaker Identification by Comparison of Smart Methods Ali Mahdavi Meimand Amin Asadi Majid Mohamadi Department of Electrical Department of Computer
More informationWriting Research Articles
Marek J. Druzdzel with minor additions from Peter Brusilovsky University of Pittsburgh School of Information Sciences and Intelligent Systems Program marek@sis.pitt.edu http://www.pitt.edu/~druzdzel Overview
More informationNotes on The Sciences of the Artificial Adapted from a shorter document written for course (Deciding What to Design) 1
Notes on The Sciences of the Artificial Adapted from a shorter document written for course 17652 (Deciding What to Design) 1 Ali Almossawi December 29, 2005 1 Introduction The Sciences of the Artificial
More informationGrades. From Your Friends at The MAILBOX
From Your Friends at The MAILBOX Grades 5 6 TEC916 HighInterest Math Problems to Reinforce Your Curriculum Supports NCTM standards Strengthens problemsolving and basic math skills Reinforces key problemsolving
More informationSINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCYINVERSE DOCUMENT FREQUENCY (TFIDF)
SINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCYINVERSE DOCUMENT FREQUENCY (TFIDF) Hans Christian 1 ; Mikhael Pramodana Agus 2 ; Derwin Suhartono 3 1,2,3 Computer Science Department,
More informationAbstractions and the Brain
Abstractions and the Brain Brian D. Josephson Department of Physics, University of Cambridge Cavendish Lab. Madingley Road Cambridge, UK. CB3 OHE bdj10@cam.ac.uk http://www.tcm.phy.cam.ac.uk/~bdj10 ABSTRACT
More informationTransfer Learning Action Models by Measuring the Similarity of Different Domains
Transfer Learning Action Models by Measuring the Similarity of Different Domains Hankui Zhuo 1, Qiang Yang 2, and Lei Li 1 1 Software Research Institute, Sun Yatsen University, Guangzhou, China. zhuohank@gmail.com,lnslilei@mail.sysu.edu.cn
More informationBackwards Numbers: A Study of Place Value. Catherine Perez
Backwards Numbers: A Study of Place Value Catherine Perez Introduction I was reaching for my daily math sheet that my school has elected to use and in big bold letters in a box it said: TO ADD NUMBERS
More informationLecture 10: Reinforcement Learning
Lecture 1: Reinforcement Learning Cognitive Systems II  Machine Learning SS 25 Part III: Learning Programs and Strategies Q Learning, Dynamic Programming Lecture 1: Reinforcement Learning p. Motivation
More informationA GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISIONMAKING
A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISIONMAKING Yong Sun, a * Colin Fidge b and Lin Ma a a CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queensland
More informationArtificial Neural Networks
Artificial Neural Networks Andres Chavez Math 382/L T/Th 2:003:40 April 13, 2010 Chavez2 Abstract The main interest of this paper is Artificial Neural Networks (ANNs). A brief history of the development
More informationExploration. CS : Deep Reinforcement Learning Sergey Levine
Exploration CS 294112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?
More informationLearning From the Past with Experiment Databases
Learning From the Past with Experiment Databases Joaquin Vanschoren 1, Bernhard Pfahringer 2, and Geoff Holmes 2 1 Computer Science Dept., K.U.Leuven, Leuven, Belgium 2 Computer Science Dept., University
More informationFunctional Skills Mathematics Level 2 assessment
Functional Skills Mathematics Level 2 assessment www.cityandguilds.com September 2015 Version 1.0 Marking scheme ONLINE V2 Level 2 Sample Paper 4 Mark Represent Analyse Interpret Open Fixed S1Q1 3 3 0
More informationLip reading: Japanese vowel recognition by tracking temporal changes of lip shape
Lip reading: Japanese vowel recognition by tracking temporal changes of lip shape Koshi Odagiri 1, and Yoichi Muraoka 1 1 Graduate School of Fundamental/Computer Science and Engineering, Waseda University,
More informationConstructing a support system for selflearning playing the piano at the beginning stage
Alma Mater Studiorum University of Bologna, August 2226 2006 Constructing a support system for selflearning playing the piano at the beginning stage Tamaki Kitamura Dept. of Media Informatics, Ryukoku
More informationHow the Guppy Got its Spots:
This fall I reviewed the Evobeaker labs from Simbiotic Software and considered their potential use for future Evolution 4974 courses. Simbiotic had seven labs available for review. I chose to review the
More informationMTH 215: Introduction to Linear Algebra
MTH 215: Introduction to Linear Algebra Fall 2017 University of Rhode Island, Department of Mathematics INSTRUCTOR: Jonathan A. Chávez Casillas EMAIL: jchavezc@uri.edu LECTURE TIMES: Tuesday and Thursday,
More informationDeep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach
#BaselOne7 Deep search Enhancing a search bar using machine learning Ilgün Ilgün & Cedric Reichenbach We are not researchers Outline I. Periscope: A search tool II. Goals III. Deep learning IV. Applying
More information