A Shallow Introduction to Deep Learning by Rafael Espericueta
Traditional AI vs Deep Learning

Deep learning is one form of machine learning, which is itself part of the field of artificial intelligence. Basically, deep learning refers to artificial neural networks, and what makes them deep is the presence of more than two layers: an input layer, one or more so-called hidden layers, and an output layer.

Traditional AI required a team of human experts as well as a team of expert programmers to create a program that has essentially been hard-wired to deal with every conceivable option. Such systems have had success in many realms, but they are limited to the logic programmed into them. They tend to be brittle and buggy, and they don't adjust well to minor changes in their inputs. Machine learning in traditional AI works thanks to the programmer's cleverness in selecting features in the data that can be used by various learning algorithms.

The newer deep learning approach requires a far smaller team. One creates a neural net architecture capable of learning to do the desired task, and then has it learn how to do so, given lots of labeled examples (supervised learning). A more subtle version of this, where the feedback isn't so immediate (reinforcement learning), is used in cases where one doesn't have sufficient labeled data. You tell the system what output you want, given the input, and it figures out how best to accomplish that task. The features needed for the learning to take place are selected automatically in the early layers of the neural network, rather than needing to be hand-crafted by a clever monkey as in traditional AI. Any solution to the problem of general intelligence will require such an ability.

One great accomplishment of traditional AI was IBM's grandmaster-defeating chess program of the '90s, Deep Blue.
Many years of effort on the part of programmers and chess masters alike were required to program in all the "if this is true, then that; else if this, then that", ad absurdum. This brute-force approach, requiring teams of experts and programmers working in tandem, has not been a successful strategy when applied to more difficult problems, like vision and the other unsolved problems in AI.

One of these more challenging problems has been to create a go-playing program that can defeat top human players. Go is an ancient game of strategy originating in China thousands of years ago. In Trevanian's best-selling novel Shibumi it was mentioned that Go is to chess as poetry is to double-entry accounting. In any case, despite the surprising simplicity of the game's rules, go has a game tree with a vastly higher branching factor than chess; the number of possible go games has been estimated to exceed the number of possible chess games by an astronomically large factor. For decades a strong go-playing program was the holy grail of AI, and most experts in the field didn't believe we would attain this goal for at least another decade. A legendary $1,000,000 prize was offered for the first computer program that could defeat a human professional go player, but that prize expired unclaimed. Nonetheless, a $1,000,000 prize was finally claimed by the Google DeepMind team, for the success of their program AlphaGo. In a match watched by millions around the world, AlphaGo defeated the 9-dan go master Lee Sedol in 4 out of 5 games in a televised and Internet-streamed match. Almost all had been expecting the machine to lose to this top-ranked go master, as no other go-playing program had come close to the level of a professional human player.
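The scale of this difference can be sketched with a back-of-the-envelope calculation. The figures below are commonly cited rough estimates, not values from this article: chess offers about 35 legal moves per position over a game of about 80 plies, while go offers about 250 moves over about 150 plies.

```python
import math

# Rough, commonly cited estimates (assumptions, not from the original article):
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

# Work in log10, since the counts themselves are astronomically large.
log_chess_games = chess_plies * math.log10(chess_branching)
log_go_games = go_plies * math.log10(go_branching)

print(f"chess: ~10^{log_chess_games:.0f} possible games")
print(f"go:    ~10^{log_go_games:.0f} possible games")
print(f"go exceeds chess by a factor of ~10^{log_go_games - log_chess_games:.0f}")
```

Under these assumptions the gap comes out to a factor of roughly 10^236, which is why brute-force search of the kind that worked for chess was hopeless for go.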
As great an accomplishment as this was for the DeepMind team, more significant than the accomplishment itself was the way it was accomplished: using deep learning. Rather than the tedious traditional AI method described above (which was so successful in the case of chess), the DeepMind team created a deep neural network, with random initial weights, that was capable of learning how to play a strong game of go. It was first trained on about 100,000 strong human amateur games, learning to predict where a human would play; then it played itself many millions of games to hone its skills. Interestingly, fairly early in its training it could easily defeat all those who had programmed it. This same neural network architecture is capable of learning many other tasks besides playing go. The AlphaGo project took far fewer man-hours than the Deep Blue (chess) project did, and AlphaGo attained its mastery far faster than any human has ever attained a comparable skill level. Interestingly, AlphaGo has improved significantly since its landmark victory.

After AlphaGo's historic victory, many in the AI world began to wonder how many other hitherto unsolvable problems would yield to the power of deep learning. Indeed, the deep learning principles used in AlphaGo are generally applicable, and have already helped cross many problems off AI's unsolved-problems list. The rapidity with which this is happening is notable. Google realized the importance of DeepMind's work back in 2014, purchasing the company for about half a billion dollars. Recently Google open-sourced its own internal deep learning development framework, TensorFlow, now (by far) the most popular deep learning platform. Putting this tool into the hands of thousands of researchers and knowledge engineers around the world seemed a better strategy than trying to do it all internally. There are so many conceivable applications that the more people exploring the possibilities, the better.
And those who come up with an interesting application of deep learning may find their resulting start-up facing a buy-out offer from Google, along with a lucrative job offer.

Another of deep learning's recent accomplishments was in the field of computational vision. A deep neural network attained slightly better than human performance on the recognition of 1,000 objects in the ImageNet dataset. The deep learning approach performed better than both humans and previous AI attempts using more traditional techniques. Advances in computer vision are leading to a plethora of advances in applications of AI in diverse areas, including autonomous cars, drones, and, more generally, robotics. In addition to the ImageNet example (and many other advances in computer vision), there have been comparable advances in other subfields of AI, including speech recognition and synthesis. Even automatic language translation has made huge advances using deep learning. Applications abound in the medical field, and promise to revolutionize the practice of medicine. Recently Google's DeepMind used deep learning to improve the power usage at its large data centers by 40%. They are now negotiating to apply this technology to the entire electrical grid of Great Britain. This one breakthrough alone holds great promise to significantly improve the efficiency of all the world's power grids. One begins to wonder if there's any area where deep learning techniques can't be fruitfully applied.

The History of Artificial Neural Networks

Artificial neural networks have been with us for about as long as digital computers. Many of the early pioneers of computer science were interested in this idea, since it is so suggestive of the way biological brains work. After all, our brains form a sort of existence proof that artificial neural networks might lead to a system capable of intelligent perception and cognition.
Despite researchers' early interest in neural networks, it is only recently that we've developed the techniques needed to make deep learning work. The main reason for this long delay concerns Moore's Law: roughly every decade, computers become about 1,000 times faster. We simply had to wait until sufficient computing power was available. Once that tipping point was reached, the engineering of such networks underwent a rapid evolution. Thanks to computer gamers, GPUs (graphics processing units) were created that each contain thousands of compute cores. These allow certain computations, such as those needed for graphics processing, to be performed in parallel, and thus thousands of times faster than is possible on conventional CPUs. It turns out that GPUs can also be used to implement neural networks, and their general availability and low cost helped provide the computing power needed for successful neural network implementations. In the 1990s, AI researchers believed that neural networks weren't practical (which was pretty much true, given what passed for computers back then), and as a result researchers in the neural network field had great difficulty publishing papers at all. The advances mentioned above, along with many others through the years, have now turned the tide. Now it's becoming difficult to obtain funding for AI research that doesn't involve deep learning.

GoogLeNet

The yearly ImageNet competition is a competition to automatically identify 1,000 objects in images. In earlier competitions, only tiny improvements over the previous year's winning entry were sufficient to win, but in 2014 Google's entry to the ImageNet competition used deep learning to defeat all its rivals by a healthy margin. Their winning entry was a neural network architecture called GoogLeNet.

Figure 1: Schematic of the GoogLeNet artificial neural network

The diagram in Figure 1 is actually a simplification of the actual neural net.
Many of the rectangles in the diagram represent large collections of parallel node layers. This network is capable of discerning a thousand different common objects that may appear in an image, for example flowers. There is one particular layer within the GoogLeNet network that is maximally excited whenever it sees flowers. When any image is input to the network, the flower-detecting layer tries to see flowers in the image. If one outputs that layer, one can see where the network was beginning to hallucinate flowers in the input image. I found that by feeding the image with the beginnings of flower hallucinations back into the input of the network, and outputting the flower-detector results, the hallucinations became more vivid. After about five such feedback loops, the hallucinations become quite vivid, and then there is little further change. In Figure 2 you can see a photo of my wife Julie (off the coast of New Zealand), along with 5 iterates of flower hallucinations.
Figure 2a: Original picture.
Figure 2b: The hallucinations begin!
Figure 2c: Hallucinations deepen...
Figure 2d: And deepen...
Figure 2e: The changes become less noticeable.
Figure 2f: Further iterations change very little.

I created animations of over a hundred sequences such as the above (the above, animated, is here), exploring the various inception layers of GoogLeNet. Not all these layers are as recognizable as the flower detector. Generally it's not individual layers that detect anything, but combinations of these layers. Using these layers as inputs, subsequent layers are able to accomplish their object-recognition tasks. To see more of these animations, click on a thumbnail below (excepting the first):
Notice how each of the above thumbnail images, though the result of 5 hallucination iterations, still resembles the original image when viewed as a thumbnail (if you squint!), which is a bit surprising. This shows that much information from the original image is preserved in each of the hallucinated versions of it.

The hallucinatory inception layers of GoogLeNet can be used for many other purposes besides the recognition of objects in images. If the last layers of the network are discarded, the earlier layers can serve as a starting point for other AI tasks. The vast amount of time that Google spent training GoogLeNet on millions of images can be leveraged to solve more specialized tasks, for example facial recognition of a particular person. One interesting application is termed style transfer. A neural network, grafted onto the end of GoogLeNet (minus its later layers), allows one to train the network to recognize the style of a particular artist. And what a network can recognize, it can also hallucinate. So with style transfer, one may input a photo and get an output that resembles a particular artist's rendition of that photo.

Deep learning has achieved comparable successes in the auditory realm as well as the visual, with the understanding of spoken speech using recurrent neural networks. It's now possible to automate the captioning of video with better than human-level performance. Music generation has also recently achieved surprising successes via deep learning.

Doing the Math

Consider the simple neural network depicted in Figure 3. We're going to walk slowly through this example to introduce the basic concepts.

Figure 3: A simple deep neural network.

To see what this neural network does, suppose the input values are x1 and x2. The blue paths connecting the circular nodes (the neurons) have numeric weights, which are applied to the input values as follows:

    a1 = w11·x1 + w12·x2
    a2 = w21·x1 + w22·x2

To get the values of the hidden layer nodes h1 and h2, we need to put the above results through a simple nonlinear filter. For this example, we'll use the so-called sigmoid function (there are other possible nonlinear functions we could use here as well):

    sigmoid(x) = 1 / (1 + e^(-x))

Then the hidden layer neurons take on the values:

    h1 = sigmoid(a1)
    h2 = sigmoid(a2)

We do this again, using the weights v1 and v2 on the paths from the hidden nodes, to obtain the output value. This output node often also has a nonlinear function applied, but for this example we'll just output the value directly:

    y = v1·h1 + v2·h2

The above process can be written more succinctly using matrix notation. If W is the matrix of input-to-hidden weights and x is the input vector, then the hidden-layer vector is h = sigmoid(W·x), with the sigmoid applied componentwise. Similarly, with v the vector of hidden-to-output weights, we have y = v·h. This process is called feed-forward, and it is how the trained network makes its predictions.

Next we examine the learning part of the process. What exactly constitutes learning for such a network? The behavior of the network on a given input is entirely determined by the weights along the paths, which can be gathered into weight matrices. These weights are ordinarily chosen initially as small random values; for our network they were picked arbitrarily. To obtain a network that's useful, these weights need to be learned rather than given a priori.

In supervised learning we learn the weights using labeled training data. The training data consists of pairs of input values (x1, x2), along with corresponding labels: the output values observed (or desired) for those inputs. In the above example, if the label for our input were in fact the value y we computed, then our network would have correctly computed this value, and the weights would be right for this input/label pair. If the label were something else, then the weights would need to be modified in such a way that the output would be closer to the label for that input. The actual learning takes place via a process called back-propagation, which allows us to propagate the observed error back through the network, adjusting all the weights in such a way that the network, given that input again, would compute a value closer to the label. In this way, given a number of labeled inputs, the network can iteratively modify its weights, learning the correct input/output function. Memorizing the input/output pairs isn't the point, though; we want the network to learn to make reasonable predictions for inputs it hasn't seen. We want our neural net to generalize from its training data, predicting outputs for data it hasn't encountered.
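The feed-forward computation described above can be sketched in plain Python. The specific weights and inputs below are made-up placeholders (the article's original numeric values were lost in transcription); only the structure of the computation matters:

```python
import math

def sigmoid(x):
    """The nonlinear filter: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights and inputs (placeholders, not the article's values).
W = [[0.1, 0.2],   # weights on the paths into hidden node h1
     [0.3, 0.4]]   # weights on the paths into hidden node h2
v = [0.5, 0.6]     # weights from the hidden nodes to the output node
x = [1.0, 2.0]     # input values x1, x2

# Hidden layer: weighted sums of the inputs, then the sigmoid nonlinearity.
h = [sigmoid(W[i][0] * x[0] + W[i][1] * x[1]) for i in range(2)]

# Output node: a plain weighted sum (no nonlinearity in this example).
y = v[0] * h[0] + v[1] * h[1]
print(round(y, 4))  # prints 0.7614
```

Changing any single weight changes the prediction, which is exactly why learning can proceed by adjusting the weights.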
The back-propagation process amounts to minimizing a multivariate error function, by moving the weights in small steps in the direction of the error function's negative gradient. A function's gradient points in the direction of maximum increase of the function; we seek the minimum of the function, and so need to move in the direction of the function's maximum decrease, the negative of its gradient.

Back-propagation starts with a training example. Suppose the initial training example were the input (x1, x2) with label 3.0, with all the weights as above (writing h1 and h2 for the hidden-node outputs, v1 and v2 for the hidden-to-output weights, and wij for the input-to-hidden weights). This means that we want our network to output 3.0 when the input is (x1, x2). We've already calculated our network's output for this input, with the given weights, to be y, so the error at the output node is computed as

    Error = 3.0 - y,

and we need to propagate this error backwards through the successive layers of the network, using the same weights as in the forward propagation process. A learning rate multiplier η (some small positive number) is used to take a small step in the right direction; without it, one might well overshoot the optimal solution. Often this learning rate is decayed as the learning process proceeds, to help the iterates converge, but for this example we'll keep it constant. We need to use the chain rule (from calculus) to correctly compute the gradient of the composition of matrix products and our nonlinear sigmoid function.

So let's propagate this error back through the network, updating the path weights as we go. The weight on the path from the top hidden node to the output node is modified as follows:

    v1 ← v1 + η·Error·h1

Similarly we perform an update on the weight on the path from the bottom hidden node to the output node:

    v2 ← v2 + η·Error·h2

Next consider the sigmoid function. The chain rule requires us to multiply our above values by the derivative of sigmoid. As it turns out (the proof of this is left as an exercise):

    sigmoid'(x) = sigmoid(x)·(1 - sigmoid(x))

The values sigmoid(x) are precisely the values we obtained as outputs during the forward propagation process; by saving those values, we can now compute this derivative easily. Recall that the output from the top hidden node, just after the sigmoid was applied, was h1; the corresponding derivative is h1·(1 - h1). Similarly, for the lower hidden node it is h2·(1 - h2). As we back-propagate the Error, it is first multiplied by the original path weight, and then by the derivative we just computed:

    δ1 = Error·v1·h1·(1 - h1)

Similarly:

    δ2 = Error·v2·h2·(1 - h2)

Now we adjust the weights from the input layer to the hidden layer using these back-propagated errors:

    w11 ← w11 + η·δ1·x1        w12 ← w12 + η·δ1·x2
    w21 ← w21 + η·δ2·x1        w22 ← w22 + η·δ2·x2

It can be shown that with these new weights, our neural network will yield an output that's closer to the target value than our first attempt, and by iterating this process many times we can get ever closer to the target. The power of a neural network is that once trained, it can generalize, accurately estimating outputs for inputs it has never seen before.
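A single back-propagation step for this 2-2-1 network can be sketched as follows. The weights, input, and learning rate are again made-up placeholders; the target value 3.0 is the one used in the text. One step doesn't reach the target, but it verifiably moves the output toward it:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(W, v, x):
    """Feed-forward pass: returns hidden activations and the output."""
    h = [sigmoid(W[i][0] * x[0] + W[i][1] * x[1]) for i in range(2)]
    return h, v[0] * h[0] + v[1] * h[1]

# Hypothetical starting weights and input (placeholders, not the article's values).
W = [[0.1, 0.2], [0.3, 0.4]]
v = [0.5, 0.6]
x = [1.0, 2.0]
target = 3.0   # the label from the text's training example
lr = 0.1       # learning rate: a small step down the error gradient

h, y_before = forward(W, v, x)
error = target - y_before

# Hidden-to-output weights: the update is proportional to error * h[i].
v_new = [v[i] + lr * error * h[i] for i in range(2)]

# Back-propagate through the sigmoid using its derivative h * (1 - h),
# reusing the activations saved during the forward pass (the chain rule).
delta = [error * v[i] * h[i] * (1.0 - h[i]) for i in range(2)]

# Input-to-hidden weights also move against the gradient.
W_new = [[W[i][j] + lr * delta[i] * x[j] for j in range(2)] for i in range(2)]

_, y_after = forward(W_new, v_new, x)
# After one step the output has moved closer to the target.
print(abs(target - y_before) > abs(target - y_after))  # prints True
```

Iterating this step (as the text describes) drives the output ever closer to the target; in practice one would also decay the learning rate and train over many labeled examples.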
Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Stephan Gouws and GJ van Rooyen MIH Medialab, Stellenbosch University SOUTH AFRICA {stephan,gvrooyen}@ml.sun.ac.za
More informationLearning to Schedule Straight-Line Code
Learning to Schedule Straight-Line Code Eliot Moss, Paul Utgoff, John Cavazos Doina Precup, Darko Stefanović Dept. of Comp. Sci., Univ. of Mass. Amherst, MA 01003 Carla Brodley, David Scheeff Sch. of Elec.
More informationSARDNET: A Self-Organizing Feature Map for Sequences
SARDNET: A Self-Organizing Feature Map for Sequences Daniel L. James and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 dljames,risto~cs.utexas.edu
More informationDiscriminative Learning of Beam-Search Heuristics for Planning
Discriminative Learning of Beam-Search Heuristics for Planning Yuehua Xu School of EECS Oregon State University Corvallis,OR 97331 xuyu@eecs.oregonstate.edu Alan Fern School of EECS Oregon State University
More informationHow to make an A in Physics 101/102. Submitted by students who earned an A in PHYS 101 and PHYS 102.
How to make an A in Physics 101/102. Submitted by students who earned an A in PHYS 101 and PHYS 102. PHYS 102 (Spring 2015) Don t just study the material the day before the test know the material well
More informationAGENDA LEARNING THEORIES LEARNING THEORIES. Advanced Learning Theories 2/22/2016
AGENDA Advanced Learning Theories Alejandra J. Magana, Ph.D. admagana@purdue.edu Introduction to Learning Theories Role of Learning Theories and Frameworks Learning Design Research Design Dual Coding Theory
More informationMajor Milestones, Team Activities, and Individual Deliverables
Major Milestones, Team Activities, and Individual Deliverables Milestone #1: Team Semester Proposal Your team should write a proposal that describes project objectives, existing relevant technology, engineering
More informationEECS 571 PRINCIPLES OF REAL-TIME COMPUTING Fall 10. Instructor: Kang G. Shin, 4605 CSE, ;
EECS 571 PRINCIPLES OF REAL-TIME COMPUTING Fall 10 Instructor: Kang G. Shin, 4605 CSE, 763-0391; kgshin@umich.edu Number of credit hours: 4 Class meeting time and room: Regular classes: MW 10:30am noon
More informationTD(λ) and Q-Learning Based Ludo Players
TD(λ) and Q-Learning Based Ludo Players Majed Alhajry, Faisal Alvi, Member, IEEE and Moataz Ahmed Abstract Reinforcement learning is a popular machine learning technique whose inherent self-learning ability
More information4.0 CAPACITY AND UTILIZATION
4.0 CAPACITY AND UTILIZATION The capacity of a school building is driven by four main factors: (1) the physical size of the instructional spaces, (2) the class size limits, (3) the schedule of uses, and
More informationTest Effort Estimation Using Neural Network
J. Software Engineering & Applications, 2010, 3: 331-340 doi:10.4236/jsea.2010.34038 Published Online April 2010 (http://www.scirp.org/journal/jsea) 331 Chintala Abhishek*, Veginati Pavan Kumar, Harish
More informationHuman Emotion Recognition From Speech
RESEARCH ARTICLE OPEN ACCESS Human Emotion Recognition From Speech Miss. Aparna P. Wanare*, Prof. Shankar N. Dandare *(Department of Electronics & Telecommunication Engineering, Sant Gadge Baba Amravati
More informationDesigning a Computer to Play Nim: A Mini-Capstone Project in Digital Design I
Session 1793 Designing a Computer to Play Nim: A Mini-Capstone Project in Digital Design I John Greco, Ph.D. Department of Electrical and Computer Engineering Lafayette College Easton, PA 18042 Abstract
More informationA CONVERSATION WITH GERALD HINES
Interview Date: December 1, 2004 Page 1 of 12 A CONVERSATION WITH GERALD HINES IN CONJUNCTION WITH THE CENTER FOR PUBLIC HISTORY. UNIVERSITY OF HOUSTON Interviewee: MR. GERALD HINES Date: December 1.2004
More informationExploration. CS : Deep Reinforcement Learning Sergey Levine
Exploration CS 294-112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?
More information5 Guidelines for Learning to Spell
5 Guidelines for Learning to Spell 1. Practice makes permanent Did somebody tell you practice made perfect? That's only if you're practicing it right. Each time you spell a word wrong, you're 'practicing'
More informationChapter 4 - Fractions
. Fractions Chapter - Fractions 0 Michelle Manes, University of Hawaii Department of Mathematics These materials are intended for use with the University of Hawaii Department of Mathematics Math course
More informationHIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION
HIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION Atul Laxman Katole 1, Krishna Prasad Yellapragada 1, Amish Kumar Bedi 1, Sehaj Singh Kalra 1 and Mynepalli Siva Chaitanya 1 1 Samsung
More informationUNDERSTANDING DECISION-MAKING IN RUGBY By. Dave Hadfield Sport Psychologist & Coaching Consultant Wellington and Hurricanes Rugby.
UNDERSTANDING DECISION-MAKING IN RUGBY By Dave Hadfield Sport Psychologist & Coaching Consultant Wellington and Hurricanes Rugby. Dave Hadfield is one of New Zealand s best known and most experienced sports
More information"Be who you are and say what you feel, because those who mind don't matter and
Halloween 2012 Me as Lenny from Of Mice and Men Denver Football Game December 2012 Me with Matthew Whitwell Teaching respect is not enough, you need to embody it. Gabriella Avallone "Be who you are and
More informationExtending Place Value with Whole Numbers to 1,000,000
Grade 4 Mathematics, Quarter 1, Unit 1.1 Extending Place Value with Whole Numbers to 1,000,000 Overview Number of Instructional Days: 10 (1 day = 45 minutes) Content to Be Learned Recognize that a digit
More informationLesson 12. Lesson 12. Suggested Lesson Structure. Round to Different Place Values (6 minutes) Fluency Practice (12 minutes)
Objective: Solve multi-step word problems using the standard addition reasonableness of answers using rounding. Suggested Lesson Structure Fluency Practice Application Problems Concept Development Student
More informationProbabilistic Latent Semantic Analysis
Probabilistic Latent Semantic Analysis Thomas Hofmann Presentation by Ioannis Pavlopoulos & Andreas Damianou for the course of Data Mining & Exploration 1 Outline Latent Semantic Analysis o Need o Overview
More informationMachine Learning from Garden Path Sentences: The Application of Computational Linguistics
Machine Learning from Garden Path Sentences: The Application of Computational Linguistics http://dx.doi.org/10.3991/ijet.v9i6.4109 J.L. Du 1, P.F. Yu 1 and M.L. Li 2 1 Guangdong University of Foreign Studies,
More informationKLI: Infer KCs from repeated assessment events. Do you know what you know? Ken Koedinger HCI & Psychology CMU Director of LearnLab
KLI: Infer KCs from repeated assessment events Ken Koedinger HCI & Psychology CMU Director of LearnLab Instructional events Explanation, practice, text, rule, example, teacher-student discussion Learning
More informationB. How to write a research paper
From: Nikolaus Correll. "Introduction to Autonomous Robots", ISBN 1493773070, CC-ND 3.0 B. How to write a research paper The final deliverable of a robotics class often is a write-up on a research project,
More informationCognitive Thinking Style Sample Report
Cognitive Thinking Style Sample Report Goldisc Limited Authorised Agent for IML, PeopleKeys & StudentKeys DISC Profiles Online Reports Training Courses Consultations sales@goldisc.co.uk Telephone: +44
More informationHow to learn writing english online free >>>CLICK HERE<<<
How to learn writing english online free >>>CLICK HERE
More informationPage 1 of 11. Curriculum Map: Grade 4 Math Course: Math 4 Sub-topic: General. Grade(s): None specified
Curriculum Map: Grade 4 Math Course: Math 4 Sub-topic: General Grade(s): None specified Unit: Creating a Community of Mathematical Thinkers Timeline: Week 1 The purpose of the Establishing a Community
More informationA Simple VQA Model with a Few Tricks and Image Features from Bottom-up Attention
A Simple VQA Model with a Few Tricks and Image Features from Bottom-up Attention Damien Teney 1, Peter Anderson 2*, David Golub 4*, Po-Sen Huang 3, Lei Zhang 3, Xiaodong He 3, Anton van den Hengel 1 1
More informationCourse Content Concepts
CS 1371 SYLLABUS, Fall, 2017 Revised 8/6/17 Computing for Engineers Course Content Concepts The students will be expected to be familiar with the following concepts, either by writing code to solve problems,
More information5. UPPER INTERMEDIATE
Triolearn General Programmes adapt the standards and the Qualifications of Common European Framework of Reference (CEFR) and Cambridge ESOL. It is designed to be compatible to the local and the regional
More informationAn OO Framework for building Intelligence and Learning properties in Software Agents
An OO Framework for building Intelligence and Learning properties in Software Agents José A. R. P. Sardinha, Ruy L. Milidiú, Carlos J. P. Lucena, Patrick Paranhos Abstract Software agents are defined as
More informationThe Effect of Discourse Markers on the Speaking Production of EFL Students. Iman Moradimanesh
The Effect of Discourse Markers on the Speaking Production of EFL Students Iman Moradimanesh Abstract The research aimed at investigating the relationship between discourse markers (DMs) and a special
More informationSoftware Maintenance
1 What is Software Maintenance? Software Maintenance is a very broad activity that includes error corrections, enhancements of capabilities, deletion of obsolete capabilities, and optimization. 2 Categories
More informationModel Ensemble for Click Prediction in Bing Search Ads
Model Ensemble for Click Prediction in Bing Search Ads Xiaoliang Ling Microsoft Bing xiaoling@microsoft.com Hucheng Zhou Microsoft Research huzho@microsoft.com Weiwei Deng Microsoft Bing dedeng@microsoft.com
More informationUnsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model
Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.
More informationLEARN TO PROGRAM, SECOND EDITION (THE FACETS OF RUBY SERIES) BY CHRIS PINE
Read Online and Download Ebook LEARN TO PROGRAM, SECOND EDITION (THE FACETS OF RUBY SERIES) BY CHRIS PINE DOWNLOAD EBOOK : LEARN TO PROGRAM, SECOND EDITION (THE FACETS OF RUBY SERIES) BY CHRIS PINE PDF
More informationA Pumpkin Grows. Written by Linda D. Bullock and illustrated by Debby Fisher
GUIDED READING REPORT A Pumpkin Grows Written by Linda D. Bullock and illustrated by Debby Fisher KEY IDEA This nonfiction text traces the stages a pumpkin goes through as it grows from a seed to become
More informationSpeech Recognition at ICSI: Broadcast News and beyond
Speech Recognition at ICSI: Broadcast News and beyond Dan Ellis International Computer Science Institute, Berkeley CA Outline 1 2 3 The DARPA Broadcast News task Aspects of ICSI
More informationGraduate Division Annual Report Key Findings
Graduate Division 2010 2011 Annual Report Key Findings Trends in Admissions and Enrollment 1 Size, selectivity, yield UCLA s graduate programs are increasingly attractive and selective. Between Fall 2001
More informationMYCIN. The MYCIN Task
MYCIN Developed at Stanford University in 1972 Regarded as the first true expert system Assists physicians in the treatment of blood infections Many revisions and extensions over the years The MYCIN Task
More informationQuickStroke: An Incremental On-line Chinese Handwriting Recognition System
QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents
More informationThe Strong Minimalist Thesis and Bounded Optimality
The Strong Minimalist Thesis and Bounded Optimality DRAFT-IN-PROGRESS; SEND COMMENTS TO RICKL@UMICH.EDU Richard L. Lewis Department of Psychology University of Michigan 27 March 2010 1 Purpose of this
More information