Artificial Neural Networks, Mohamed M. El Wakil (mohamed@elwakil.net)


Transcription

1 Artificial Neural Networks Mohamed M. El Wakil mohamed@elwakil.net 1

2 Agenda Natural Neural Networks Artificial Neural Networks XOR Example Design Issues Applications Conclusion 2

3 Artificial Neural Networks? A technique for solving problems by constructing software that works like our brains! 3

4 Artificial Neural Networks? A technique for solving problems by constructing software that works like our brains! But, how do our brains work? 4

5 The Brain is.. A massively parallel information processing system 5

6 A HUGE network of processing elements A typical brain contains a network of 10 billion neurons 6

7 A processing element (a.k.a. a neuron) Cell body, Axon, Synapse, Dendrites 7

8 A processing element (a.k.a. a neuron) Cell body: Processor. Axon: Output. Synapse: Link. Dendrites: Input. A neuron is connected to other neurons through about 10,000 synapses 8

9 How it works? 1 A neuron receives input from other neurons (typically many thousands). Inputs are combined (i.e. summed). 9

10 How it works? 2 Once the input exceeds a critical level, the neuron discharges a spike, an electrical pulse that travels from the cell body, down the axon, to the next neuron(s). 10

11 How it works? 3 The axon endings almost touch the dendrites or cell body of the next neuron. 11

12 How it works? 3 Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters. 12

13 How it works? 3 Neurotransmitters are chemicals which are released from the first neuron and which bind to the second. 13

14 How it works? 4 This link is called a synapse. The strength of the signal that reaches the next neuron depends on factors such as the amount of neurotransmitter available. 14

15 Neuron Abstraction 15

16 So An artificial neural network is an imitation of a human neural network. 16

17 So An artificial neural network is an imitation of a human neural network. An artificial neuron is an imitation of a human neuron. 17

18 So An artificial neural network is an imitation of a human neural network. An artificial neuron is an imitation of a human neuron. Let's see an artificial neural network. 18

19 Cell Body 19

20 Dendrites 20

21 Axon/Spike 21

22 Input Processing Output 22

23 Not all inputs are equal! 23

24 Not all neurons are equal! 24

25 The signal is not passed down to the next neuron verbatim; a transfer function is applied. 25

26 Three neurons 26

27 Input of a neuron = output of other neurons 27

28 Layer: A set of neurons that receive the same input 28

29 Three types of layers: Input, Hidden, and Output 29

30 The output is a function of the input, affected by the weights and the transfer functions. 30
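
To make this concrete, here is a minimal sketch (in Python, not from the original slides) of a single artificial neuron: the inputs are multiplied by their weights, summed, and passed through a transfer function. The sigmoid transfer function and the example numbers are illustrative assumptions.

```python
import math

def neuron_output(inputs, weights, transfer=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Weighted sum of the inputs, passed through a transfer function (sigmoid by default)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return transfer(weighted_sum)

# Illustrative values: two inputs with different weights ("not all inputs are equal")
print(neuron_output([0.5, 1.0], [0.8, -0.3]))   # about 0.52
```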

31 A powerful tool An ANN can compute any computable function, by appropriate selection of the network topology and weight values. 31

32 A powerful tool An ANN can compute any computable function, by appropriate selection of the network topology and weight values. Also, ANNs can learn from experience! 32

33 A powerful tool An ANN can compute any computable function, by appropriate selection of the network topology and weight values. Also, ANNs can learn from experience! Specifically, by trial and error. 33

34 Learning by trial and error A continuous process of: Trial, Evaluate, Adjust.

35 Learning by trial and error A continuous process of: Processing an input to produce an output (i.e. trial). Evaluating this output (i.e. comparing it to the expected output). Adjusting the processing accordingly.

36 Learning by trial and error A continuous process of: Processing an input to produce an output (i.e. trial). Evaluating this output (i.e. comparing it to the expected output). Adjusting the processing accordingly. In terms of an ANN: Compute the output function of a given input.

37 Learning by trial and error A continuous process of: Processing an input to produce an output (i.e. trial). Evaluating this output (i.e. comparing it to the expected output). Adjusting the processing accordingly. In terms of an ANN: Compute the output function of a given input. Compare the actual output with the expected one.

38 Learning by trial and error A continuous process of: Processing an input to produce an output (i.e. trial). Evaluating this output (i.e. comparing it to the expected output). Adjusting the processing accordingly. In terms of an ANN: Compute the output function of a given input. Compare the actual output with the expected one. Adjust the weights.
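
A worked miniature of this trial/evaluate/adjust cycle (Python, not from the slides): a single weight is tuned so that output = w * x approximates the target y = 2x. The learning rate and sample values are illustrative assumptions.

```python
# Trial / evaluate / adjust on the simplest possible "network": one weight.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected output), target is y = 2x
w = 0.5                                           # arbitrary initial weight
rate = 0.05                                       # learning rate (assumed)

for _ in range(200):
    for x, expected in samples:
        actual = w * x                # trial: process the input to produce an output
        error = expected - actual     # evaluate: compare with the expected output
        w += rate * error * x         # adjust: nudge the weight to reduce the error

print(round(w, 3))                    # converges to 2.0
```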

39 An ANN that learns how to approximate XOR Three Layers 39

40 An ANN that learns how to approximate XOR Three Layers: Layer 1: Input Layer, with two neurons 40

41 An ANN that learns how to approximate XOR Three Layers: Layer 1: Input Layer, with two neurons. Layer 2: Hidden Layer, with three neurons 41

42 An ANN that learns how to approximate XOR Three Layers: Layer 1: Input Layer, with two neurons. Layer 2: Hidden Layer, with three neurons. Layer 3: Output Layer, with one neuron 42

43 How it works? Set initial values of the weights randomly. Input: truth table of the XOR. 43

44 How it works? Set initial values of the weights randomly. Input: truth table of the XOR. Do: Read input (e.g. 0 and 0) 44

45 How it works? Set initial values of the weights randomly. Input: truth table of the XOR. Do: Read input (e.g. 0 and 0). Compute an output (e.g. ...) 45

46 How it works? Set initial values of the weights randomly. Input: truth table of the XOR. Do: Read input (e.g. 0 and 0). Compute an output (e.g. ...). Compute the error (the distance between the desired output and the actual output; Error = ...). Adjust the weights accordingly. 46

47 How it works? Set initial values of the weights randomly. Input: truth table of the XOR. Do: Read input (e.g. 0 and 0). Compute an output (e.g. ...). Compare it to the expected (i.e. desired) output (Diff = ...). Modify the weights accordingly. Loop until a condition is met. Condition: a certain number of iterations. Condition: an error threshold. 47
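
As a concrete illustration of the loop just described, here is a small self-contained Python/NumPy sketch of a 2-3-1 network like the one above, with bias terms added (a detail not shown on the slides), trained on the XOR truth table with sigmoid units and back propagation. The learning rate, iteration limit, error threshold, and random seed are illustrative assumptions, and convergence can depend on the random initial weights.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR truth table: inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # XOR truth table: expected outputs

# Small random initial weights, as suggested on the design-issues slide; biases start at zero
W1 = rng.uniform(-1, 1, (2, 3))   # input layer -> hidden layer weights (2 -> 3)
b1 = np.zeros(3)                  # hidden layer biases
W2 = rng.uniform(-1, 1, (3, 1))   # hidden layer -> output layer weights (3 -> 1)
b2 = np.zeros(1)                  # output layer bias
rate = 0.5                        # learning rate (assumed)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for _ in range(20000):
    H = sigmoid(X @ W1 + b1)                      # read input, compute hidden outputs
    Y = sigmoid(H @ W2 + b2)                      # compute the network output
    error = T - Y                                 # compare with the expected output
    if np.sqrt(np.mean(error ** 2)) < 0.05:       # stopping condition: error threshold
        break
    delta_out = error * Y * (1 - Y)               # adjust the weights (back propagation)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)
    W2 += rate * H.T @ delta_out
    b2 += rate * delta_out.sum(axis=0)
    W1 += rate * X.T @ delta_hid
    b1 += rate * delta_hid.sum(axis=0)

print(np.round(Y, 2))   # close to [[0], [1], [1], [0]] when training succeeds
```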

48 JOONE (Java Object Oriented Neural Engine) 48

49 Design Issues Initial weights (small random values in [-1, 1]). Transfer function (how the inputs and the weights are combined to produce output). Error estimation. Weights adjusting (a.k.a. training, teaching). Number of neurons. Data representation. Size of training set. 49

50 Transfer Functions Linear: the output is proportional to the total weighted input, e.g. Y = B*x or Y = B + x. Threshold: the output is set at one of two values, depending on whether the total weighted input is greater than or less than some threshold value. Non-linear: the output varies continuously but not linearly as the input changes, e.g. the sigmoidal function Y = 1 / (1 + e^(-x)). 50
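
The three families above, written out as small Python functions (the slope B and the threshold value are illustrative parameters, not values from the slide).

```python
import math

def linear(x, B=0.5):
    """Linear: output proportional to the total weighted input."""
    return B * x

def threshold(x, theta=0.0):
    """Threshold: one of two output values, depending on the threshold theta."""
    return 1.0 if x > theta else 0.0

def sigmoidal(x):
    """Non-linear: output varies continuously but not linearly with the input."""
    return 1.0 / (1.0 + math.exp(-x))

print(linear(2.0), threshold(2.0), round(sigmoidal(2.0), 3))   # 1.0 1.0 0.881
```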

51 Error Estimation RMSE (Root Mean Square Error) is commonly used, or a variation of it: D(Y_desired, Y_actual) = sqrt((Y_desired - Y_actual)^2). 51
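
A short worked example of this error measure over a set of outputs (Python; the sample values are made up for illustration).

```python
import math

def rmse(desired, actual):
    """Root mean square error between desired and actual outputs."""
    return math.sqrt(sum((d - a) ** 2 for d, a in zip(desired, actual)) / len(desired))

# Illustrative outputs for the four XOR patterns
print(round(rmse([0, 1, 1, 0], [0.1, 0.8, 0.7, 0.2]), 3))   # 0.212
```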

52 Weights Adjusting After each iteration, weights should be adjusted to minimize the error. Approaches: trying all possible weights, Hebbian learning, artificial evolution, back propagation. 52

53 Back Propagation N is a neuron. N_w is one of N's input weights. N_out is N's output. N_w = N_w + ΔN_w, where ΔN_w = N_out * (1 - N_out) * N_ErrorFactor and N_ErrorFactor = N_ExpectedOutput - N_ActualOutput. This works only for the last layer, as we can know the actual output and the expected output there. What about the hidden layer? 53
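
A direct transcription of the rule above for a single output-layer neuron (Python sketch). Note that this is the slide's simplified form: it omits the learning rate and the input term that appear in textbook back propagation.

```python
def update_output_weight(n_w, n_out, expected_output, actual_output):
    """Apply the slide's rule: N_w = N_w + dN_w,
    where dN_w = N_out * (1 - N_out) * (ExpectedOutput - ActualOutput)."""
    error_factor = expected_output - actual_output
    delta = n_out * (1 - n_out) * error_factor
    return n_w + delta

# Illustrative numbers only
print(round(update_output_weight(n_w=0.4, n_out=0.7, expected_output=1.0, actual_output=0.7), 3))  # 0.463
```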

54 Weights Adjusting Error factor of X1 = (ΔY1_w * X1's weight in Y1) + (ΔY2_w * X1's weight in Y2) + (ΔY3_w * X1's weight in Y3) 54

55 Number of neurons Many neurons: higher accuracy, but slower, with a risk of over-fitting (memorizing rather than understanding; the network will be useless with new problems). Few neurons: lower accuracy, or inability to learn at all. Optimal number: trial and error, or adaptive techniques. 55

56 Data representation Usually input/output data needs pre-processing. Pictures: pixel intensity. Smells: molecule concentrations. Text: a pattern, e.g. ... for Chris, ... for Becky. Numbers: decimal or binary. 56
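
A small sketch of this kind of pre-processing (Python): scaling pixel intensities into [0, 1] and mapping a text label to a fixed binary pattern. The specific patterns for "Chris" and "Becky" below are illustrative assumptions; the slide's original patterns were lost in transcription.

```python
# Illustrative label encodings only (the slide's original patterns are not preserved)
NAME_PATTERNS = {"Chris": [0, 0, 1], "Becky": [0, 1, 0]}

def scale_pixels(pixels, max_value=255):
    """Map raw pixel intensities in 0..max_value to the range [0, 1]."""
    return [p / max_value for p in pixels]

print(scale_pixels([0, 128, 255]))   # [0.0, 0.50..., 1.0]
print(NAME_PATTERNS["Chris"])        # [0, 0, 1]
```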

57 Size of training set No one-fits-all formula. Some heuristics: five to ten times as many training samples as the number of weights; greater than |W| / (1 - a), where W is the number of weights and a is the desired accuracy; greater than (|W| / (1 - a)) * log(n / (1 - a)), where n is the number of nodes. 57
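
The two lower bounds above, written out as a small Python sketch so the quantities are explicit (the example numbers are illustrative).

```python
import math

def min_samples_simple(num_weights, accuracy):
    """Heuristic lower bound: |W| / (1 - a)."""
    return num_weights / (1 - accuracy)

def min_samples_with_nodes(num_weights, num_nodes, accuracy):
    """Heuristic lower bound: (|W| / (1 - a)) * log(n / (1 - a))."""
    return (num_weights / (1 - accuracy)) * math.log(num_nodes / (1 - accuracy))

# Illustrative: 50 weights, 10 nodes, 90% desired accuracy
print(round(min_samples_simple(50, 0.9)))           # 500
print(round(min_samples_with_nodes(50, 10, 0.9)))   # about 2303
```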

58 Application Areas Function approximation, Classification, Clustering, Pattern recognition (radar systems, face identification) 58

59 Applications Electronic Nose NETtalk 59

60 Electronic Nose Developed by Dr. Amy Ryan at JPL, NASA. Goal: detect low concentrations of dangerous gases. Ammonia is dangerous at a concentration of a few parts per million (ppm); humans can't sense it until it reaches about 50 ppm. An E-Nose can differentiate between Pepsi and Coca-Cola. Can you? 60

61 Electronic Nose An E-Nose uses a collection of 16 different polymer films. These films are specially designed to conduct electricity. When a substance is absorbed into these films, the films expand slightly, and that changes how much electricity they conduct. 61

62 Electronic Nose Because each film is made of a different polymer, each one reacts to each substance, or analyte, in a slightly different way. An artificial neural network combines the differences in conductivity to detect the substance that caused them. 62

63 NETtalk Experiment Carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. Goal: teach a computer how to pronounce words. The network is fed words, phonemes, and articulatory features. It really learns. Listen! 63

64 Advantages / Disadvantages Advantages: adapts to unknown situations; powerful (can model complex functions); easy to use (learns by example, very little user domain-specific expertise needed). Disadvantages: forgets; not exact; large complexity of the network structure.

65 Status of Neural Networks Most of the reported applications are still in the research stage. No formal proofs, but they seem to have useful applications that work.

66 Conclusion Artificial Neural Networks are an imitation of biological neural networks, but much simpler ones. ANNs can learn via trial and error. Many factors affect the performance of ANNs, such as the transfer functions, the size of the training sample, the network topology, and the weight-adjusting algorithm. No formal proofs, but they seem to have useful applications. 66

67 References BrainNet II: Creating A Neural Network Library. Electronic Nose. NETtalk, Wikipedia. A Primer on Artificial Neural Networks, NeuriCam. Artificial Neural Networks Application, Peter Andras. Introduction to Artificial Neural Networks, Nicolas Galoppo von Borries. Artificial Neural Networks, Torsten Reil. What is a "neural net"? Introduction to Artificial Neural Network, Jianna J. Zhang, Bellingham AI Robotics Society, Bellingham WA. Elements of Artificial Neural Networks, Kishan Mehrotra, Chilukuri K. Mohan and Sanjay Ranka, MIT Press, 1996. Thanks to all who share their material online. 67

68 68

69 Artificial Neural Networks Mohamed M. El Wakil mohamed@elwakil.net 69

70 Learning Paradigms Supervised learning Unsupervised learning Reinforcement learning 70

71 Supervised learning This is what we have seen so far! A network is fed with a set of training samples (inputs and corresponding outputs), and it uses these samples to learn the general relationship between the inputs and the outputs. This relationship is represented by the values of the weights of the trained network. 71

72 Unsupervised learning No desired output is associated with the training data! Learning by doing! Faster than supervised learning Used to find out structures within data: Clustering Compression 72

73 Reinforcement learning Like supervised learning, but: weights adjusting is not directly related to the error value. The error value is used to randomly shuffle weights! Slow, due to randomness. 73
