Decision Tree Learning


CMP 882 Machine Learning
Decision Tree Learning
Lecture Scribe for week 7, February 20th
By: Mona Vajihollahi (mvajihol@sfu.ca)

Overview:
Introduction
Decision Tree Hypothesis Space
  Parity Function
  Decision Graph
  Algebraic Decision Diagram
Inductive Bias
  Occam's razor
Avoiding Overfitting the Data
  Training and Validation
  Reduced Error Pruning
  Rule Post-Pruning
Other Issues in Decision Tree Learning
  k-ary Attribute Values
  Real Attribute Values
  Attribute Cost
  Missing Values
  Other Splitting Criteria
References

Introduction

Decision Tree Learning is a method for approximating discrete-valued functions using decision trees. Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances; this disjunction approximates the target function. Decision tree learning is robust to noisy data and is suitable for target functions with discrete output values. Most decision tree learning algorithms use a top-down greedy search through the space of possible decision trees to find the tree that best fits the training data. ID3, a basic decision tree learning algorithm, constructs the tree by answering the question "which attribute is the best classifier?" at each step. The answer is the attribute with the highest Information Gain, which is the expected reduction in entropy caused by partitioning the examples according to that attribute.

The concept of entropy plays an important role in information theory. Roughly speaking, entropy is a mathematical formulation of the uncertainty and/or the amount of information in a data set. The (Shannon) entropy of a variable X is defined as

H(X) = -Σ_x P(X = x) log₂ P(X = x)

[Figure 1: The entropy function of a Boolean classifier.]

The following example (Exercise 3.2, p. 77, Machine Learning, Tom Mitchell) illustrates the use of entropy and information gain. Consider the following set of training examples:

Instance  Classification  a1  a2
   1            +          T   T
   2            +          T   T
   3            -          T   F
   4            +          F   F
   5            -          F   T
   6            -          F   T
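The Shannon entropy above is straightforward to compute. A minimal Python sketch (the helper name is my own, not part of the lecture notes):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# Half positive, half negative (p = 1/2) gives the maximum entropy of 1 bit.
print(entropy(["+", "+", "+", "-", "-", "-"]))   # 1.0
```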

Figure 1 depicts the form of the entropy function relative to a Boolean classification, as the proportion p of positive examples varies between 0 and 1. It is clear that when the data set contains an equal number of positive and negative examples (i.e. p = 1/2), the entropy function takes its largest value, namely 1. In the above example there are 3 positive instances and 3 negative instances, so by the above discussion we can conclude that Entropy(S) = 1. Alternatively, we can calculate the entropy using the following equation:

Entropy(S) = -p⊕ log₂ p⊕ - p⊖ log₂ p⊖

So:

Entropy(S) = -(1/2) log₂(1/2) - (1/2) log₂(1/2) = 1

Now we can calculate the Information Gain of a2 relative to this training set, using the following equation:

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) Entropy(S_v)

So:

Gain(S, a2) = Entropy(S) - (|S_{a2=T}| / |S|) Entropy(S_{a2=T}) - (|S_{a2=F}| / |S|) Entropy(S_{a2=F})
            = 1 - (2/3)·1 - (1/3)·1 = 0

Decision Tree Hypothesis Space

The hypothesis space in decision tree learning is the set of all possible decision trees. Decision tree learning algorithms search through this space to find a decision tree that correctly classifies the training data. In the resulting decision tree, each internal node performs a test on some attribute x and branches according to the result; leaf nodes specify the class h(x). Thus classification is easy: follow the tree, splitting according to the tests. The space of all decision trees is a complete space of finite discrete-valued functions. Consequently, every Boolean function can be represented by a decision tree, and there are usually several different decision trees for a single Boolean function. Let us see the use of decision trees in representing Boolean functions in some examples (Ex. 3.1, p. 77, Machine Learning, Tom Mitchell). In the following examples we start by testing variable A. However, in actual decision tree building (for learning) this may not be a good idea, and we have to choose the variable with the highest information gain for the root.
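The hand calculation above, and the choice of root attribute, can be checked numerically. A sketch of the information gain computation, reusing the entropy helper defined earlier; the encoding of the exercise's six examples as (attribute-dict, label) pairs is my own:

```python
def information_gain(examples, attribute):
    """Expected reduction in entropy from partitioning `examples` on `attribute`.

    `examples` is a list of (attribute-dict, label) pairs.
    """
    labels = [label for _, label in examples]
    n = len(examples)
    gain = entropy(labels)
    for value in {attrs[attribute] for attrs, _ in examples}:
        subset = [label for attrs, label in examples if attrs[attribute] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# The six training examples of Exercise 3.2.
examples = [
    ({"a1": "T", "a2": "T"}, "+"),
    ({"a1": "T", "a2": "T"}, "+"),
    ({"a1": "T", "a2": "F"}, "-"),
    ({"a1": "F", "a2": "F"}, "+"),
    ({"a1": "F", "a2": "T"}, "-"),
    ({"a1": "F", "a2": "T"}, "-"),
]
print(information_gain(examples, "a2"))   # 0.0
print(information_gain(examples, "a1"))   # ~0.082
print(max(["a1", "a2"], key=lambda a: information_gain(examples, a)))   # 'a1'
```

In an actual learning run, the attribute returned by the last line would become the root test, and the procedure would recurse on each branch.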

a) A ∧ ¬B

[Decision tree for A ∧ ¬B: test A at the root; in the A = T branch test B; the remaining branches are leaves. Different symbols are used for attribute values and for output results, to distinguish between the two.]

b) A ∨ (B ∧ C)

[Decision tree for A ∨ (B ∧ C): test A at the root; in the A = F branch test B and then C.]

In the second branch we must test all the variables, which is not desirable. Entropy tries to address this kind of problem.

c) A XOR B

[Decision tree for A XOR B: test A at the root, then test B in both branches.]

For the XOR function, entropy does not tell us anything, so we have to test every variable; we also cannot find any nice dependencies between the variables. Another example of such a function is the parity function.

Parity Function

With the parity function we can never conclude whether the result is 1 or 0 until we have tested all the attributes and reached the end. When representing the parity function with decision trees we have to split on every variable and keep building the tree all the way down. Learners usually like to decompose the problem, and that is why such functions are difficult for decision trees and Bayesian classifiers to learn. Therefore, the parity function is often used to test different learners.

d) (A ∧ B) ∨ (C ∧ D)

Unlike parity and XOR, we have a break here: we do not need to test all the variables.

[Decision tree for (A ∧ B) ∨ (C ∧ D): test A and B near the root; the A = F branch and the A = T, B = F branch each contain an identical subtree that tests C and then D.]

The marked subtrees are identical. To address this problem, that is, to avoid replication, we could use a different representation such as a graph; we can obtain a more compact representation by using a Decision Graph.
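The difficulty with XOR and parity can be made concrete: at the root, every attribute of a parity function has zero information gain, so the greedy splitting criterion gives no guidance. A small check, reusing the hypothetical helpers above, on 3-bit parity:

```python
from itertools import product

# Full truth table of 3-bit parity: positive when an odd number of attributes are 1.
parity_examples = [
    ({"A": a, "B": b, "C": c}, "+" if (a + b + c) % 2 == 1 else "-")
    for a, b, c in product([0, 1], repeat=3)
]

for attr in ["A", "B", "C"]:
    print(attr, information_gain(parity_examples, attr))   # 0.0 for every attribute
```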

Decision Graph

Decision Graphs, such as the one depicted in Figure 2, are generalizations of decision trees, having decision nodes and leaves. The feature that distinguishes decision graphs from decision trees is that decision graphs may also contain joins. A join is represented by two nodes having a common child, and it specifies that two subsets have some common properties and hence can be considered as one subset. The manner in which objects are categorized by decision graphs is the same as for decision trees. Each decision tree and decision graph defines a categorization (i.e., some partitioning of the object space into disjoint categories). The set of decision functions representable by graphs is exactly the same as the set representable by trees. However, the sets of categorizations which can enter into the definition of a decision function are different. For example, the categorizations for (A ∧ B) ∨ (C ∧ D) given in Figure 2.a and Figure 2.b are different, since the decision tree partitions the object space into 7 categories while the decision graph partitions it into 2 categories [1].

[Figure 2: (a) A decision tree for (A ∧ B) ∨ (C ∧ D); (b) a decision graph for (A ∧ B) ∨ (C ∧ D). From [1].]

Algebraic Decision Diagram

Another variant of decision trees are Algebraic Decision Diagrams (ADDs), which are used to represent target functions. ADDs are a compact, efficiently manipulable data structure for representing real-valued functions over Boolean variables, B^n → R. They generalize a tree-structured representation by allowing nodes to have multiple parents, leading to the recombination of isomorphic subgraphs and hence to a possible reduction in the representation size [2].

Inductive Bias

It was mentioned before that the hypothesis space of ID3 is the power set of the instances X. This means that there is no restrictive bias for decision tree learning. A restrictive bias, also called a language bias, limits the set of available hypotheses, and we have no such bias in ID3. On the other hand, there is a preference bias, or search bias, in decision tree learning. The preference bias orders the available hypotheses, and in ID3 there is a preference for:

1) Shorter trees.
2) Trees with high information gain attributes near the root.

This bias often goes by the name of Occam's razor. (William of Occam was one of the first philosophers to discuss the question of preferring short hypotheses, around the year 1320.)

Occam's razor

Occam's razor says: prefer the simplest hypothesis that fits the data. In other words, the inductive bias of ID3 says that if there is a shorter tree, then it will be taken. So the preference is for shorter trees, but this is not always guaranteed; in fact, ID3 is a greedy algorithm, so it cannot always find the shortest tree. Another question that arises regarding Occam's razor is: what is a short tree? A tree with fewer nodes, or a shallow tree? The general idea behind Occam's razor is that we want to avoid splitting as much as possible; therefore, shallower trees are better for us. There are a number of arguments for and against Occam's razor. There has been much research on this topic, but we cannot clearly answer this question yet: what is bias, really?

Avoiding Overfitting the Data

As discussed before, in ID3 the tree is built just deeply enough to perfectly classify the training data. This strategy can produce trees that overfit the training examples. Consider the following concept C:

X1 → 1, X2 → 1, X3 → 0, X4 → 0, X5 → 0

Now suppose that the training set contains the instances X3, X4 and X5. Then obviously the learner would classify X1 and X2 as 0, i.e. h(X1) = 0 and h(X2) = 0, which is not correct. Here the learner has overfitted the data.

Given a hypothesis space H and a hypothesis h ∈ H, let error_train(h) be the error of h over the training examples and error_D(h) be the error of h over the entire distribution D of the data. Then h is said to overfit the training data if there exists some other hypothesis h' ∈ H such that error_train(h) < error_train(h') and error_D(h) > error_D(h').

Overfitting usually happens when there are random errors or noise in the training examples, or when there is only a small number of training examples. In the latter case we usually get coincidental regularities: some attribute happens to partition the training examples very well but is irrelevant to the real target function. To see the effect of an error in the training data on the resulting decision tree, consider the following example. Suppose that in the entire distribution of the data we have:

If SUNNY = T then PLAYTENNIS = T

But the training set contains a noisy instance, with some values for the other attributes X1 and X2, for which SUNNY = T while PLAYTENNIS = F. So in reality the tree should not split any further in the branch SUNNY = T (Figure 3.a), but with the above instance in the training set the learned tree has to split on every attribute after SUNNY in the branch SUNNY = T (Figure 3.b). While the resulting tree classifies the training examples well, it fails to classify the real data. Obviously, we have overfitting here.

[Figure 3: (a) the decision tree for the real target function; (b) the learned decision tree.]

In general, it is very hard to avoid overfitting because usually all we have is the training data. One way is to use statistics: we can do a statistical analysis and use the gathered information to answer the question of how representative the sample is. If the sample is not representative enough, we will have overfitting. For example, in this way statisticians may calculate the error for a given concept and then give a probability measure for all concepts. However, if we have some assumptions about the real target function, it is much easier to find out when overfitting occurs. It is worth mentioning that there is a close relation between inductive bias and overfitting, and having a good inductive bias can help us avoid overfitting.
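The effect of a single noisy training example can be reproduced with any off-the-shelf tree learner. The following sketch uses scikit-learn (not something the lecture relies on) and an analogous synthetic setup with continuous attributes, where the true concept depends only on one feature; one mislabeled example is enough to make the unpruned tree fit the training set perfectly while losing some accuracy on clean held-out data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.random(size=(n, 3))          # three attributes; only the third matters
    y = (X[:, 2] > 0.5).astype(int)      # true concept: a "SUNNY-like" rule on feature 2
    return X, y

X_train, y_train = make_data(30)
y_train[0] = 1 - y_train[0]              # one noisy (mislabeled) training instance

X_test, y_test = make_data(1000)         # clean sample from the true distribution

tree = DecisionTreeClassifier().fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))   # 1.0 - the noise is memorized
print("test accuracy:", tree.score(X_test, y_test))      # typically below 1.0
```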

Training and Validation

Among the different approaches that have been proposed to avoid overfitting, the most common one is the training and validation set approach. In this approach the available training data is separated into two sets: the training set and the validation set. The training set is used to learn the hypothesis, and the validation set is used to evaluate the accuracy of the resulting hypothesis. In this way we provide a check against overfitting, and we use the validation set to prune the tree whenever it overfits the data. The size of the validation set is also important, because it should be large enough to provide a good sample of the instances. Although there are statistical formulas for computing the required size of the validation set, one common heuristic is to use 1/3 of the available examples for the validation set and the remaining 2/3 for training. There are different approaches for pruning a tree using the validation set. Two of those approaches are:

1) Reduced Error Pruning
2) Rule Post-Pruning

Reduced Error Pruning

In this approach, the data is split into the training and validation sets and both are used in pruning. Pruning a decision node means removing its subtree and making it a leaf node, then assigning it the most common classification of the training examples affiliated with that node. In Reduced Error Pruning, each node is considered for pruning. The pruning algorithm is as follows (a code sketch follows this section):

Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus those below it).
2. Greedily remove the node whose removal most improves accuracy on the validation set.

In this way each pruning step is tested on the validation set, and the smallest accurate version of the tree is produced. To address the problem of limited data, other techniques have been proposed that follow the same idea; those techniques usually partition the available data several times in different ways and then average the results.

Rule Post-Pruning

Another approach for pruning the learned tree is Rule Post-Pruning. In this approach, we convert the decision tree into an equivalent set of logical rules and then prune each rule. Pruning in the logical framework is equivalent to generalizing the rules. The generalized rules are then sorted according to their accuracy and applied in that order. Here the validation set is used to estimate the accuracy of the resulting rules.
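Below is a rough sketch of the reduced-error pruning loop described above. The nested-dict tree format (internal node = {"attr", "branches", "majority"}, leaf = a class label) and all helper names are assumptions of this sketch, not the lecture's notation; "majority" is taken to be the most common training classification at that node.

```python
import copy

def classify(node, example):
    """Classify an example (a dict of attribute values) with a nested-dict tree."""
    while isinstance(node, dict):
        node = node["branches"].get(example[node["attr"]], node["majority"])
    return node

def accuracy(tree, examples):
    """Fraction of (example, label) pairs the tree classifies correctly."""
    return sum(classify(tree, x) == y for x, y in examples) / len(examples)

def internal_nodes(node, path=()):
    """Yield the path (sequence of branch values) to every internal node."""
    if isinstance(node, dict):
        yield path
        for value, child in node["branches"].items():
            yield from internal_nodes(child, path + (value,))

def pruned_copy(tree, path):
    """Copy of the tree with the node at `path` replaced by its majority-class leaf."""
    new_tree = copy.deepcopy(tree)
    if not path:                       # pruning the root collapses the whole tree
        return new_tree["majority"]
    parent = new_tree
    for value in path[:-1]:
        parent = parent["branches"][value]
    parent["branches"][path[-1]] = parent["branches"][path[-1]]["majority"]
    return new_tree

def reduced_error_prune(tree, validation):
    """Greedily prune nodes while accuracy on the validation set does not decrease."""
    best = accuracy(tree, validation)
    while isinstance(tree, dict):
        candidates = [pruned_copy(tree, p) for p in internal_nodes(tree)]
        champion = max(candidates, key=lambda t: accuracy(t, validation))
        if accuracy(champion, validation) < best:
            break                      # further pruning would be harmful
        tree, best = champion, accuracy(champion, validation)
    return tree
```

Each iteration evaluates every candidate prune on the validation set and keeps the best one, stopping as soon as the best available prune would reduce validation accuracy.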

Other Issues in Decision Tree Learning

There are still more topics that have been addressed in decision tree learning. Some of these topics are:

k-ary Attribute Values
Problem: When an attribute has many values, it tends to be selected because of its high information gain.
Solution: Use the gain ratio instead.

Real Attribute Values
Problem: The attributes tested at the decision nodes must be discrete-valued.
Solution: Define new discrete-valued attributes by partitioning the real attribute values into discrete intervals.

Attribute Cost
Problem: Sometimes different attributes have different costs. How can we find the tree with the lowest cost?
Solution: Include the cost of each attribute in the Gain function.

Missing Values
Problem: In some examples the value of an attribute is missing.
Solution: There are different strategies, using the training examples and sorting through the tree (e.g. if node n tests A, and A has a missing value, assign the most common value of A among the training examples at node n).

Other Splitting Criteria
We can use different measures for selecting attributes. For example, we can consider not only the entropy but also the size of a set when selecting an attribute.

References

[1] Jonathan J. Oliver; Decision Graphs: An Extension of Decision Trees, 1992.
[2] Robert St-Aubin, Jesse Hoey and Craig Boutilier; APRICODD: Approximate Policy Construction using Decision Diagrams.
[3] Tom M. Mitchell; Machine Learning, 1997.
[4] Entropy on the World Wide Web,
