Report about: Machine Learning for Static Ranking
Christian Klar


Why is Static Ranking so important nowadays?

For many years the Web has grown exponentially in size. With this growth, the number of low-quality pages has increased as well: the Web is full of incorrect, malicious and spamming websites. So it is more important than ever to be able to grade the quality of a specific website. This is done with Static Ranking, also called Query-Independent Ranking. Basically, there is a set of websites, and the quality of each website is calculated by looking at some characterizing properties of that website. Through this quality each website gets a rank, and through this rank a ranking is built. Websites that are higher in the ranking should also have a higher quality.

Static Ranking has a lot of direct and indirect benefits. A direct one is of course that we can see how good a page is. An indirect one is that we can build up crawl priorities. Since the Web changes and grows each day, it is impossible for a search engine to crawl the whole Web in a short time. There have to be priorities that decide which websites should be revisited, and how frequently. Static Ranking obviously helps in creating these priorities: e.g. a website with high quality should be revisited often, because changes there have a higher impact.

The Google PageRank is widely regarded as the best method for Static Ranking. Although it has performed well historically, only little academic evidence exists to prove this point. The purpose of this report is to introduce a different Static Ranking function called RankNet. RankNet will be introduced step by step, so that the reader should be able to understand how a Static Ranking function is created. The emphasis of this report is therefore on machine learning itself. This report is about showing the concepts and bringing the information of several papers concerning this problem together, not about giving a mathematical proof for every step made.

1 Problem and Approach

First we have to clearly state the problem of finding a Static Ranking function. Each website has a feature vector x = (x_1, ..., x_n) which consists of real numbers. The feature vector describes specific points about the website and characterizes it. Each component of the vector describes one feature.

For example, if two websites have a different value in the first component of the vector, then they differ in that point. We are now looking for a function f that gives us a rank value for a specific feature vector. That means, if we put the feature vector into the function, we get a rank value as output, and we can compare it with the rank values of other websites.

Figure 1: Overview of the definitions: website i has feature vector x_i and rank f(x_i).

So f is the Static Ranking function. It gives each website a rank according to its feature vector. But we do not know in what way we have to put the features x_1, ..., x_n together so that the ranking value we get is accurate; e.g. it is possible that one feature is more important than another. The way to derive what f does with x_1, ..., x_n is to manually look at websites and give them rank values. Then it would be possible to create a function that does what was done manually before. But then again, it is impossible to look at all websites in the Web. So f has to be able to generalize the information it knows and to apply it to websites we have not seen before. Summarized, f has to do two things: it has to represent the information we have in an accurate way, and it has to generalize that information in an accurate way.

Our approach to create f will be machine learning. That means that we simply give the information we have to a learning algorithm; it takes it and forms f. Basically, the learning machine teaches an initially blank f what to do. This is also what is done in RankNet. To understand RankNet, first some basics about machine learning will be introduced and then applied to the problem of finding the Static Ranking function.

2 Learning of a function with gradient descent

To learn a function we first need some information about that function. For that we have some input and output pairs (x_1, y_1), ..., (x_m, y_m). With these there are several ways to teach a function to behave like the mapping x_i -> y_i. Regression would be one. Another one would be simple interpolation, which is not chosen here because it would be too inflexible for this purpose.
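
To make the setup concrete, here is a minimal sketch, not taken from the report, of how a website could be represented as a feature vector and how a first, purely illustrative guess for a Static Ranking function might look. The feature names, their values and the weighted-sum form of f are invented for illustration only; the actual features and the actual form of f are exactly what the rest of the report is about.

```python
import numpy as np

# Hypothetical feature vectors: [number of inbound links, page size in KB,
# number of words in the title]. Values are made up for illustration.
x1 = np.array([120.0, 35.0, 8.0])
x2 = np.array([3.0, 400.0, 25.0])

# Manually assigned target rank values (the y_i of the training pairs just introduced).
y1, y2 = 0.9, 0.2

def f(x, w):
    """A purely illustrative guess for a Static Ranking function:
    a weighted sum of the features. Learning means tuning the weights w."""
    return float(np.dot(w, x))

w = np.array([0.01, -0.001, 0.0])   # some initial weights
print(f(x1, w), f(x2, w))           # rank values that can be compared
```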

Our approach is categorized as gradient-based learning with back-propagation; why will become clear during this chapter.

Figure 2: The learning machine: the inputs x_i are fed into G(x, W), and its outputs are compared with the targets y_i by the cost function.

The function to be learned gets the form G(x, W), which means that it gets the feature vector x as input. It also gets W as input; W represents a collection of adjustable parameters or weights in the system of G. With these parameters it is possible to tune G so that it performs its job in a better way. So, when G gets the input x_i, it computes a value G(x_i, W). If G, determined by a given set of parameters, were already the function we are looking for, then G(x_i, W) would already be very close to y_i. But since we have to train G first, this usually is not the case. Therefore we need a measure for the error between G(x_i, W) and y_i. This is done by the cost function C. It takes the (possibly wrong) output G(x_i, W) and the correct y_i and calculates C_i = C(y_i, G(x_i, W)), the deviation between the output of the momentary version of G and the target output we want to reach with the learning machine.

With a particular set of parameters W and the set of pairs (x_1, y_1), ..., (x_m, y_m) we are now able to compute the deviation C_i for each i. The overall cost function, which summarizes all errors, then would be

E(W) = (1/m) * sum_{i=1..m} C(y_i, G(x_i, W)).

With this function we are able to see the overall performance of the current G: the smaller E is, the better are our parameters. Therefore it is obvious that E will be used to tune the weights in W so that G performs its task better. But how is it possible to adapt the parameters in W so that the value of E decreases? The gradient descent method will be used for that. Since E is a function of the form E(W), it is clear that the parameters in W are also a direct input into it.
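
The following sketch, which is not part of the report, illustrates the pieces just introduced for one very simple choice of G: G(x, W) is taken to be a weighted sum of the features, and the cost C is assumed to be the squared deviation (the report does not fix a particular C here). The overall cost E(W) and the gradient descent update follow the description above; the data and the learning rate are invented.

```python
import numpy as np

# Toy training pairs (x_i, y_i); the values are invented for illustration.
X = np.array([[120.0,  35.0,  8.0],
              [  3.0, 400.0, 25.0],
              [ 60.0,  80.0, 12.0]])
y = np.array([0.9, 0.2, 0.6])

def G(x, W):
    """The learning machine: here simply a weighted sum of the features."""
    return np.dot(x, W)

def C(y_true, y_pred):
    """Cost function: squared deviation between target and output (an assumption)."""
    return 0.5 * (y_pred - y_true) ** 2

def E(W):
    """Overall cost E(W) = (1/m) * sum_i C(y_i, G(x_i, W))."""
    return np.mean([C(yi, G(xi, W)) for xi, yi in zip(X, y)])

def grad_E(W):
    """Gradient of E with respect to W, derived analytically for this G and C."""
    return np.mean([(G(xi, W) - yi) * xi for xi, yi in zip(X, y)], axis=0)

W = np.zeros(3)
eta = 1e-5                        # step size of the gradient descent
for step in range(2000):          # W <- W - eta * grad E(W)
    W -= eta * grad_E(W)
print(E(W), W)
```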

And because the set of pairs (x_1, y_1), ..., (x_m, y_m) is fixed, it basically comes down to playing with the parameters until a constellation is found for which the value of E is small enough. But as playing is not a very efficient way, the gradient descent method will be applied to do that.

Figure 3: The gradient descent method.

This is an approximation method which at each point chooses the direction of steepest descent in order to find the minimum of a multidimensional function, here the function E of the variables w_1, ..., w_k contained in W. So, to quickly recapitulate: there is a function of the form G(x, W) whose output can be changed by tuning the weights in W. The goal is to tune them such that for certain known inputs we get outputs that are close approximations to the target outputs. The overall cost function shows how far away we are from this goal; the smaller its value, the closer we are to it.

To show how the gradient descent method is applied to the cost function, G has to be looked at in more detail. Here a very simple form of a multilayer machine is chosen, namely just a stack of modules, which is presented in figure 4 on the left side. The input is processed through N modules until the output is calculated. On each module i there is a function F_i which has W_i and x_{i-1} as input; x_{i-1} is the output of module i-1, and W_i is a vector of tunable parameters for the function. W_i is a subset of W, which is the set of all weights in this system:

(1) x_i = F_i(W_i, x_{i-1}), i = 1, ..., N.

With the input x_0 as start, we compute x_1, x_2, ... until we reach the output x_N. Equation (1) shows how an input is processed through the module functions. Each F_i is a vector function. To see how this works, figure 5 gives a little example of the computation up to module number 2. In the example there is an input vector of dimension 3. It is the input of the vector function F_1, which consists of the two component functions f_11 and f_12; in this layer x_1 = F_1(W_1, x_0) = (f_11(W_1, x_0), f_12(W_1, x_0)). Then this new vector is given to the next module, and so on. It is important to point out that the dimension can obviously change in each module, so the input vector and the output vector of a module do not have to be of the same dimension.
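
As an illustration of the stack of modules (again not from the report), the following sketch writes out two modules in the spirit of the figure 5 example: a 3-dimensional input is mapped by F_1, given here by two component functions, to a 2-dimensional vector, which F_2 then maps to a single value. The concrete formulas and parameter values are invented; the only point is the shape of the computation x_i = F_i(W_i, x_{i-1}).

```python
import numpy as np

def F1(W1, x):
    """Module 1: two component functions f_11 and f_12 turn the 3-dimensional
    input into a 2-dimensional output (formulas invented for illustration)."""
    f11 = np.tanh(W1[0] * x[0] + W1[1] * x[1] + W1[2] * x[2])
    f12 = np.tanh(W1[3] * x[0] + W1[4] * x[1] + W1[5] * x[2])
    return np.array([f11, f12])

def F2(W2, x):
    """Module 2: maps the 2-dimensional output of module 1 to a single value."""
    return np.array([np.tanh(W2[0] * x[0] + W2[1] * x[1])])

x0 = np.array([0.5, -1.0, 2.0])                    # the input of the stack
W1 = np.array([0.2, -0.4, 0.1, 0.7, 0.3, -0.5])    # tunable parameters of module 1
W2 = np.array([1.1, -0.9])                         # tunable parameters of module 2

x1 = F1(W1, x0)        # x_1 = F_1(W_1, x_0)
x2 = F2(W2, x1)        # x_2 = F_2(W_2, x_1), the output of the stack
print(x1, x2)
```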

Figure 4: The stack of modules. Left side: the forward pass x_i = F_i(W_i, x_{i-1}). Right side: the back-propagation pass, with the rules dE/dW_i = [dF_i/dW]^T * dE/dx_i and dE/dx_{i-1} = [dF_i/dx]^T * dE/dx_i. Remarks: dF_i/dW is the Jacobian of F_i with respect to W, evaluated at the point (W_i, x_{i-1}); dF_i/dx is the Jacobian of F_i with respect to x, evaluated at the point (W_i, x_{i-1}).

Figure 5: Example of the computation up to module number 2 (a 3-dimensional input is mapped by F_1 to a 2-dimensional vector, which is then passed on to F_2).

So, what do we need to be able to use the gradient descent method on E? We need the gradient of E with respect to W. But before we see how it is calculated, some notation:

(2) grad_W E = (dE/dW_1, ..., dE/dW_N),
(3) dE/dW_i = the vector of partial derivatives of E with respect to the components of W_i,
(4) dE/dx_i = the vector of partial derivatives of E with respect to the components of x_i.

So we need the partial derivatives of E with respect to the vectors W_1, ..., W_N, one for each module. It is possible to compute these sub-vectors of grad_W E by doing so-called back-propagation, which is presented in figure 4 on the right side. If dE/dx_N is known, dE/dW_N and dE/dx_{N-1} can be computed. Then again, with dE/dx_{N-1}, dE/dW_{N-1} and dE/dx_{N-2} can be computed. Going on like that we get dE/dW_i for all modules i = N, ..., 1. So here are the steps for computing grad_W E for one training pair (x, y):

1. Put x as input into the stack of modules and compute x_1, ..., x_N.
2. Compute dE/dx_N: first take the partial derivatives of the cost C with respect to the variables of the N-th layer (excluding the parameters in W_N), and then plug x_N into that formula to get the vector dE/dx_N.
3. Compute the vector dE/dW_N and the vector dE/dx_{N-1} with the rules described in figure 4 on the right side.
4. Do the same for the modules N-1, N-2, ..., 2, 1.
5. Put the vectors dE/dW_i together to get grad_W E = (dE/dW_1, ..., dE/dW_N).
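
The five steps can be followed directly in code. In the sketch below (not from the report) the modules are chosen to be purely linear, F_i(W_i, x) = W_i x, and the cost is assumed to be the squared deviation, because then the Jacobians needed in step 3 are easy to read off: dE/dW_i becomes an outer product and dE/dx_{i-1} a multiplication by W_i transposed.

```python
import numpy as np

rng = np.random.default_rng(1)
# A stack of two linear modules with dimensions 3 -> 2 -> 1 (chosen arbitrarily).
Ws = [rng.normal(size=(2, 3)), rng.normal(size=(1, 2))]
x0 = np.array([0.5, -1.0, 2.0])
y = np.array([1.0])                   # target output for this training pair

# Step 1: forward pass, keeping every intermediate vector x_i.
xs = [x0]
for W in Ws:
    xs.append(W @ xs[-1])

# Step 2: dE/dx_N from the assumed cost C = 0.5 * ||x_N - y||^2.
dE_dx = xs[-1] - y

# Steps 3 and 4: walk backwards through the modules.
dE_dWs = [None] * len(Ws)
for i in reversed(range(len(Ws))):
    dE_dWs[i] = np.outer(dE_dx, xs[i])    # dE/dW_i  (rule from figure 4)
    dE_dx = Ws[i].T @ dE_dx               # dE/dx_{i-1}, handed to the module below

# Step 5: the collection (dE/dW_1, ..., dE/dW_N) forms grad_W E for this pair.
for i, grad in enumerate(dE_dWs, start=1):
    print("dE/dW_" + str(i), grad.shape)
```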

Knowing grad_W E, the gradient descent algorithm for this problem can be built:

(5) W(t+1) = W(t) - eta * grad_W E(W(t)).

With this algorithm we can iteratively adjust W. Hence we have a learning algorithm for getting the right parameters for G. For RankNet a special case of this learning system will be used, namely artificial neural networks. Therefore the next chapter introduces neural networks, and in the chapter after that we will see how neural networks are applied here.

3 Artificial Neural Networks (ANN)

The idea behind ANNs is very close to the prior topic. An ANN is a computing paradigm that is modeled after the cortical structures of the brain. It consists of interconnected processing elements, called nodes or neurons, that work together to produce an output function. In most cases an ANN is an adaptive system that changes its structure based on the information that flows through the network.

Figure 6: Example of an Artificial Neural Network with input x and layers 1, 2 and 3.

To demonstrate how an ANN is built, an example is presented in figure 6. As already stated, an ANN consists of so-called neurons or nodes; in the example these are the nodes of the three layers, the last of them producing the output. These nodes are connected by arrows, hence the name of this system: neural network. Each node belongs to a specific layer and represents a function. So the ANN is characterized by the number of layers and the number of nodes in each layer. Inside a layer the nodes are independent of each other.
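
Putting update rule (5) on top of the back-propagation sketch above gives a complete, if minimal, learning loop. As before this is only an illustration with linear modules, a squared-error cost and invented data; the learning rate eta and the stopping threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
Ws = [rng.normal(scale=0.5, size=(2, 3)), rng.normal(scale=0.5, size=(1, 2))]
data = [(np.array([0.5, -1.0, 2.0]), np.array([1.0])),
        (np.array([1.5, 0.2, -0.3]), np.array([0.2]))]

def forward(x, Ws):
    xs = [x]
    for W in Ws:
        xs.append(W @ xs[-1])
    return xs

def backprop(xs, y, Ws):
    """Gradients dE/dW_i for one training pair, as in the five steps above."""
    dE_dx = xs[-1] - y
    grads = [None] * len(Ws)
    for i in reversed(range(len(Ws))):
        grads[i] = np.outer(dE_dx, xs[i])
        dE_dx = Ws[i].T @ dE_dx
    return grads

eta = 0.01
for t in range(5000):
    grads = [np.zeros_like(W) for W in Ws]
    E = 0.0
    for x, y in data:                     # accumulate cost and gradient over all pairs
        xs = forward(x, Ws)
        E += 0.5 * float(np.sum((xs[-1] - y) ** 2)) / len(data)
        for g, gi in zip(grads, backprop(xs, y, Ws)):
            g += gi / len(data)
    for W, g in zip(Ws, grads):           # update rule (5): W <- W - eta * grad_W E
        W -= eta * g
    if E < 1e-6:                          # stop once the overall cost is small enough
        break
print(t, E)
```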

Each node has an input vector and a single output value. The function in each node takes the input values of the vector and creates the output value by performing some calculation. The input values are the output values of the prior layer; such a system is also called feed-forward. x denotes the vector that is given to the system as input. The last node creates the output of the whole system, which is a single value. So basically the ANN defines a function.

Now that the structure of this ANN is clear, we have to take a look at the functions of the nodes. Each node is defined by the following mathematical function:

(1) a = g(w_1 * v_1 + ... + w_k * v_k),

where the v_i are the inputs of the node and the weights w_i denote how much each v_i should be weighted. So first each input gets a weight, and then everything is summed up. Finally this sum is put into a function g, which is also called the activation function.

Now it should be obvious why ANNs are so important for our purpose of creating a learning machine for a Static Ranking function: they take an input vector and create a single output value, and by changing the weights in each node it is possible to alter that output. ANNs have excellent training capabilities and are also very good at generalizing from a set of training data. An informal example could be an ANN which decides whether an animal A belongs to a group G. It takes some characteristics of A as input and then decides. The point is that we do not have to give the ANN an exact characterization of each animal that belongs to the group G; through some specific training characterizations of some animals belonging to G, the ANN will learn it by itself.

Figure 7: The sigmoid function 1.6 * tanh(x).

Back to the activation function. It weights how powerful the output should be, based on the weighted sum of the inputs. Usually this is a monotonically increasing sigmoid function. The reason is that only through this nonlinear activation does the neural network get its nonlinear capability for learning.
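
A single node of equation (1) can be written down in a few lines. The sketch below is not from the report; tanh is used as one common example of a sigmoid-shaped activation, and the weights and inputs are invented.

```python
import numpy as np

def activation(s):
    """A sigmoid-shaped activation function; tanh is one common choice."""
    return np.tanh(s)

def node(weights, inputs):
    """One node of the ANN: weight every input, sum up, then squash the sum
    with the activation function to obtain the node's single output value."""
    weighted_sum = np.dot(weights, inputs)
    return activation(weighted_sum)

# The inputs of a node are the output values of the previous layer.
prev_layer_outputs = np.array([0.3, -0.7, 0.9])
weights = np.array([0.5, -1.2, 0.8])          # hypothetical weights of this node
print(node(weights, prev_layer_outputs))
```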

It is easy to understand this concept: if we used a linear function, changes of the weights would cause abrupt changes of the output. With a sigmoid function everything goes much more smoothly, so that the ANN is able to learn with less noise. The example sigmoid function 1.6 * tanh(x) is presented in figure 7. Using a sigmoid function has other benefits as well, for example keeping the output in a specific range. Finding a good sigmoid function for one's own purpose is an art of itself and is the topic of many scientific papers.

Now we have seen two things: the gradient descent method, which allows us to teach a function to behave in a special way, and the notion of an ANN, which simulates a big function with a system of layers and nodes and which is nearly perfect at learning and generalizing. In the next chapter these two notions will be brought together to create a system we can use for creating a Static Ranking function, called RankNet.

4 Bringing gradient descent and Artificial Neural Networks together

ANNs are a special case of the stack-of-modules system described in figure 4. The modules become the layers. The function F_i(W_i, x_{i-1}) becomes a matrix multiplication z_i = W_i * x_{i-1} (z_i is called the vector of weighted sums) followed by a vector function g which is applied to z_i; in our case g applies a sigmoid function to each component of z_i. How our ANN is built up is presented in figure 8; it is pretty self-explanatory. On the right side of the figure there is an example showing how the calculation is done for one module. Now that we have the ANN, we need to adapt the back-propagation to it in order to be able to use the gradient descent method. The equations (for the i-th module) we had before were:

(1) x_i = F_i(W_i, x_{i-1}),
(2) dE/dW_i = [dF_i/dW]^T * dE/dx_i,
(3) dE/dx_{i-1} = [dF_i/dx]^T * dE/dx_i.

Some other information about the i-th layer: the weight matrix W_i holds one row of weights per node, the vector of weighted sums is z_i = W_i * x_{i-1}, the layer output is x_i = g(z_i), and g and its derivative g' are applied componentwise.
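
In this view one whole layer is just the module z_i = W_i x_{i-1} followed by the componentwise sigmoid g. The sketch below (not from the report) spells that out for a small two-layer network; the scaled tanh from figure 7 is assumed as g, and the weight values and layer sizes are arbitrary.

```python
import numpy as np

def g(z):
    """Componentwise sigmoid, here the scaled tanh of figure 7
    (assumed to be 1.6 * tanh(z))."""
    return 1.6 * np.tanh(z)

def layer(W_i, x_prev):
    """One layer as a module of the stack: z_i = W_i @ x_{i-1} is the vector of
    weighted sums, and x_i = g(z_i) is the layer output."""
    z_i = W_i @ x_prev
    return g(z_i)

rng = np.random.default_rng(2)
x0 = np.array([0.5, -1.0, 2.0])
W1 = rng.normal(size=(4, 3))     # layer 1: 3 inputs feed 4 nodes
W2 = rng.normal(size=(1, 4))     # layer 2: 4 node outputs feed the single output node
print(layer(W2, layer(W1, x0)))
```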

Figure 8: The stack of modules and the corresponding ANN, with an example of the calculation for one layer.

So, (1) gets the form:

(10) x_i = g(z_i) with z_i = W_i * x_{i-1}.

(2) gets the form:

(11) dE/dW_i = delta_i * x_{i-1}^T with delta_i = g'(z_i) .* dE/dx_i (componentwise product).

(3) gets the form:

(12) dE/dx_{i-1} = W_i^T * delta_i.

With these back-propagation equations we are able to do gradient descent for the ANN.
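
Equations (10) to (12) can be checked with a short sketch (not from the report). It runs one forward pass through a small two-layer network, assumes a squared-error cost to obtain dE/dx at the output, and then applies (11) and (12) layer by layer; tanh stands in for the sigmoid, and the weight values and sizes are arbitrary.

```python
import numpy as np

def g(z):             # componentwise sigmoid (tanh as a stand-in)
    return np.tanh(z)

def g_prime(z):       # its componentwise derivative
    return 1.0 - np.tanh(z) ** 2

rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 3))      # weights of layer 1
W2 = rng.normal(size=(1, 4))      # weights of layer 2
x0 = np.array([0.5, -1.0, 2.0])
y = np.array([0.8])               # target output for this training pair

# Forward pass (10), keeping the weighted sums z_i and the outputs x_i.
z1 = W1 @ x0; x1 = g(z1)
z2 = W2 @ x1; x2 = g(z2)

# dE/dx_2 from the assumed cost C = 0.5 * ||x_2 - y||^2.
dE_dx2 = x2 - y

# Equations (11) and (12) for layer 2.
delta2 = g_prime(z2) * dE_dx2     # componentwise product
dE_dW2 = np.outer(delta2, x1)     # (11): gradient with respect to W_2
dE_dx1 = W2.T @ delta2            # (12): propagated down to layer 1

# The same equations applied to layer 1.
delta1 = g_prime(z1) * dE_dx1
dE_dW1 = np.outer(delta1, x0)

print(dE_dW1.shape, dE_dW2.shape)  # these gradients feed the update rule (5)
```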

So, to summarize the last two chapters, what are we able to do now? If we have a set of pairs (x_1, y_1), ..., (x_m, y_m) and we want to create a function that does this mapping, we can do that by simply feeding this information into our learning machine. This is obviously the machine we need for Static Ranking: there we have such a mapping as well, except that the x_i are feature vectors and the y_i are rank values. RankNet uses a learning algorithm like this to do its work. But before we introduce RankNet, there are several points everyone should be aware of before creating and training a machine like this. Back-propagation can be a very slow process: at each iteration the whole training set has to be passed through the system in order to create the gradient. Furthermore, there is no formula that guarantees that the network will converge to a good solution, or that convergence occurs at all. One very important goal of the network is to generalize. But since the measurements in the training sets might be noisy, there might be errors in them. It is easy to imagine that if we collect multiple data sets, each such training set is a little bit different from the others and contains different errors. Each set would therefore lead to different parameters for G, because the minimum of its overall cost function is somewhere else. Like that we would also introduce the noise and the errors into our network; this is also called overtraining. There are a number of techniques and a huge amount of papers concerning these problems; they try to maximize generalization and to speed up the back-propagation.

5 RankNet

First we have to define what the training set and the cost function look like. Normally one would take the feature vectors with their target rank values and just train the function to do exactly this mapping. But that is not what we do. Instead we look at the ordering of the websites and train the function with that information. Through this we optimize the ordering of the websites (which is what ranking is actually about), rather than optimizing the rank values. It has to be made clear that we are still looking for a function f which gives us a rank for a feature vector; we are just training it with a different kind of information. So the training set is a collection of items of the form (x_i, x_j, P_ij), where x_i denotes the feature vector of website i, x_j denotes the feature vector of website j, and P_ij denotes the target probability that website i is ranked higher than website j. By convention we have only training items where website i should be ranked higher than website j. The notation i > j says that website i has to be ranked higher than website j. Hence the function f we are looking for should ideally meet the following invariant:

(1) f(x_i) > f(x_j) whenever i > j.

Figure 9: The functions that make up the RankNet cost: the map from rank difference to probability, the cost as a function of that probability, and the combined cost for three different target probabilities.

Now we have the definition of our training set, but how do we adapt our learning algorithm so that it teaches this information? This is where the new cost function comes into play. Let o_i = f(x_i), o_j = f(x_j) and o_ij = f(x_i) - f(x_j). Some points about o_ij: if o_ij > 0, then website i with its feature vector x_i has a higher rank than website j with x_j, and the bigger o_ij is, the bigger the difference between the ranks of i and j. The bigger P_ij is, the bigger o_ij should be: if the probability that one page is ranked higher than the other is big, then their rank values should show that as well. So the cost function becomes:

(2) C_ij = -P_ij * log(p_ij) - (1 - P_ij) * log(1 - p_ij), with

(3) p_ij = exp(o_ij) / (1 + exp(o_ij)),

which can be combined into

(4) C_ij = -P_ij * o_ij + log(1 + exp(o_ij)),

where C_ij is the cost or error value for the training item (x_i, x_j, P_ij).
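
A minimal sketch (not from the report) of equations (2) to (4): given the rank difference o_ij produced by the current network and the target probability P_ij of a training item, it computes the cost and also dC_ij/do_ij, the quantity that back-propagation later distributes onto the weights. The example numbers are invented.

```python
import numpy as np

def pair_cost(o_ij, P_ij):
    """C_ij = -P_ij * o_ij + log(1 + exp(o_ij)), the combined form (4) of the
    cross entropy between the target P_ij and the modeled p_ij of (3)."""
    return -P_ij * o_ij + np.log1p(np.exp(o_ij))

def pair_cost_grad(o_ij, P_ij):
    """dC_ij/do_ij = p_ij - P_ij, obtained by differentiating (4)."""
    p_ij = 1.0 / (1.0 + np.exp(-o_ij))     # same value as exp(o)/(1 + exp(o))
    return p_ij - P_ij

# Example: the network currently ranks website i only slightly above website j
# (o_ij = 0.3), although the target probability says i should clearly be higher.
o_ij, P_ij = 0.3, 0.95
print(pair_cost(o_ij, P_ij), pair_cost_grad(o_ij, P_ij))
```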

What (3) does is immediately obvious from its diagram in the upper left of figure 9: it maps the rank difference to a probability; the bigger the rank difference, the bigger the probability. (2) compares the probability computed in (3) with the target probability we want to reach and gives the deviation, the cost value. It is drawn on the right side of figure 9. The plot at the bottom of figure 9 should be a help in understanding how (2) and (3) work together: it shows how big the cost is (y-axis) for a specific o_ij (x-axis) and a specific target probability (yellow 0.95, green 0.5 and red 0.05). We can observe that, e.g., for a high ranking difference and a high target probability the cost is very low.

So we have a cost function as well. Now we can set up our ANN. A 2-layer network with a single output is chosen; it is presented in figure 10. The functions in layer 1 carry individual labels, but it is always the same function; the labeling is only done to show how many nodes are in that layer.

Figure 10: The RankNet ANN, with layer 1 (the hidden nodes) and layer 2 (the single output node).
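
The scoring network itself can be sketched in a few lines (again not from the report). A tanh hidden layer and a linear output node are assumed here, and the hidden-layer size and the feature values are invented; the only fixed point is the shape of figure 10: feature vector in, single rank value out.

```python
import numpy as np

def make_net(n_features, n_hidden, rng):
    """A 2-layer network with a single output, in the shape of figure 10."""
    return {"W1": rng.normal(scale=0.1, size=(n_hidden, n_features)),
            "W2": rng.normal(scale=0.1, size=(1, n_hidden))}

def f(x, net):
    """The ranking function realized by the network: sigmoid (tanh) hidden
    layer followed by a single linear output node."""
    hidden = np.tanh(net["W1"] @ x)
    return float(net["W2"] @ hidden)

rng = np.random.default_rng(4)
net = make_net(n_features=3, n_hidden=5, rng=rng)
# A hypothetical, already scaled feature vector of one website.
print(f(np.array([1.2, 0.35, 0.8]), net))
```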

We are now able to write down the algorithm that teaches f. It is shown in figure 11. First an equation which should help to understand the algorithm: by the chain rule,

(5) dC_ij/dW = (dC_ij/do_ij) * (df(x_i)/dW - df(x_j)/dW), with dC_ij/do_ij = p_ij - P_ij.

Figure 11: The RankNet algorithm.

1. Initialization:
   - Initialize the weight vector W of the network with some values.
   Note: steps 2-4 are done for each item (x_i, x_j, P_ij) in the training set.
2. - Calculate f(x_i) for x_i.
   - Do back-propagation and get df(x_i)/dW.
3. - Calculate f(x_j) for x_j.
   - Do back-propagation and get df(x_j)/dW.
4. - Calculate o_ij = f(x_i) - f(x_j).
   - Calculate dC_ij/dW according to (5).
5. - If steps 2-4 have been done for every item of the training set, update the weights: W <- W - eta * sum over all items of dC_ij/dW.
   - Calculate the overall cost E. If it is small enough, stop; if not, go back to step 2.
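
The sketch below strings the previous pieces together into the loop of figure 11. It is an illustration under the same assumptions as before (tanh hidden layer, linear output, invented feature values and target probabilities, arbitrary learning rate and stopping threshold), not the report's reference implementation.

```python
import numpy as np

def forward(x, net):
    """f(x) for one feature vector, keeping what the backward pass needs."""
    z1 = net["W1"] @ x
    h = np.tanh(z1)
    o = float(net["W2"] @ h)
    return o, (x, z1, h)

def backward(cache, net):
    """Back-propagation of f itself: returns df/dW1 and df/dW2."""
    x, z1, h = cache
    dW2 = h[np.newaxis, :]
    delta1 = (1.0 - np.tanh(z1) ** 2) * net["W2"].ravel()
    dW1 = np.outer(delta1, x)
    return dW1, dW2

def dC_do(o_ij, P_ij):
    """dC_ij/do_ij = p_ij - P_ij, as used in (5)."""
    return 1.0 / (1.0 + np.exp(-o_ij)) - P_ij

rng = np.random.default_rng(5)
# Step 1: initialize the weights of the 2-layer network with some values.
net = {"W1": rng.normal(scale=0.1, size=(5, 3)),
       "W2": rng.normal(scale=0.1, size=(1, 5))}

# Hypothetical training items (x_i, x_j, P_ij), with x_i to be ranked higher.
triples = [(np.array([1.2, 0.35, 0.8]), np.array([0.03, 4.0, 2.5]), 0.95),
           (np.array([0.6, 0.80, 1.2]), np.array([0.10, 0.9, 3.0]), 0.80)]

eta = 0.05
for epoch in range(2000):
    grad_W1 = np.zeros_like(net["W1"])
    grad_W2 = np.zeros_like(net["W2"])
    E = 0.0
    for x_i, x_j, P_ij in triples:
        o_i, cache_i = forward(x_i, net)          # step 2: f(x_i) and its gradients
        dW1_i, dW2_i = backward(cache_i, net)
        o_j, cache_j = forward(x_j, net)          # step 3: f(x_j) and its gradients
        dW1_j, dW2_j = backward(cache_j, net)
        o_ij = o_i - o_j                          # step 4: dC_ij/dW via (5)
        dcdo = dC_do(o_ij, P_ij)
        grad_W1 += dcdo * (dW1_i - dW1_j)
        grad_W2 += dcdo * (dW2_i - dW2_j)
        E += -P_ij * o_ij + np.log1p(np.exp(o_ij))
    net["W1"] -= eta * grad_W1                    # step 5: update the weights
    net["W2"] -= eta * grad_W2
    if E < 0.75:   # threshold suited to this toy data: E cannot drop below the
        break      # entropy of the target probabilities (about 0.70 here)
print(epoch, E)
```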

6 Benefits of machine learning for Static Ranking

There are many advantages besides ranking quality that speak for using machine learning methods for Static Ranking. Here are some of the points:

- Since the measure consists of many features, it is harder to manipulate the results. In Google's PageRank it is easy to get good rankings just by increasing the number of incoming links or by other web-spamming techniques. But as RankNet is able to learn, features that became unusable because of spammers can be removed from the final ranking. RankNet therefore has a very good reaction time to new spamming techniques.
- It is also easily possible to add new features of websites; the changes made to the algorithm are not big.
- Since the machine learning field has advanced a lot over the last couple of years, we are able to benefit from these advances.
- The effect that a few outlier websites have a huge impact on the whole ranking is also reduced; everything is smoother with machine learning and RankNet.

7 References

[1] Matthew Richardson, Amit Prakash and Eric Brill: Beyond PageRank: Machine Learning for Static Ranking, 2006.
[2] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton and Greg Hullender: Learning to Rank using Gradient Descent, pages 1-4, 2005.
[3] Yann LeCun, Leon Bottou, Genevieve B. Orr and Klaus-Robert Müller: Efficient BackProp. In: Neural Networks: Tricks of the Trade, Springer, pp. 9-50, 1998.
[4] Li-Tal Mashiach: Learning to Rank: A Machine Learning Approach to Static Ranking, 2006.
[5] Genevieve Orr: Neural Networks.
[6] Hsinchun Chen: Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning and Genetic Algorithms.
[7] SourceForge.net: Neural Network Theory.

[8] David E. Rumelhart, Bernard Widrow and Michael A. Lehr: The Basic Ideas in Neural Networks, 1994.
[9] Wikipedia: Artificial Neural Network.
