# Scheduling Tasks under Constraints (CS229 Final Project)


the current schedule), and still have some ideas (leave one data point out, compute weights with the other points, find that data point's cost via the weights learned from the others, recurse down to a base case at which point we guess, and repeat for all points) which are computationally infeasible, we haven't been able to make meaningful headway on this mathematically tough problem.²

² That much data simply isn't kept by a single person, as far as we know.

### 2.2 Data – Schedules

Table 1 is a sample of 10 training data points (tasks and how they were scheduled under the ideal cost function); the full sets of 100 data points, training and test, are attached and are not listed here for space reasons.

Table 1: Training Data

| Task Triple | Schedule Under Ideal Cost |
|---|---|
| (4, 133, 100) | [103, 110, 120, 127] |
| (4, 161, 120) | [123, 129, 136, 146] |
| (4, 158, 120) | [123, 129, 136, 146] |
| (4, 81, 76) | [77, 78, 79, 80] |
| (4, 126, 78) | [81, 87, 95, 106] |
| (4, 59, 13) | [16, 23, 31, 42] |
| (4, 61, 35) | [37, 42, 47, 54] |
| (4, 109, 80) | [82, 86, 90, 95] |
| (4, 156, 139) | [141, 144, 148, 153] |
| (4, 59, 20) | [23, 29, 36, 46] |

## 3 Features

After experimenting with numerous sets of features, we selected the following five, chosen to give a fair representation of a task and its associated costs while preventing overfitting and improving the chances of convergence:

- φ1 = how late the bulk of the task is completed (productivity under pressure)
- φ2 = how long to wait before starting (procrastination)
- φ3 = sparseness of the task-completion set (chunking tendencies)
- φ4 = how early the task is completely finished (stress tolerance)
- φ5 = how many hours worked on Friday (blacking out a specific day of the week)

Each of these features provides an important piece of information about the cost, to the user, of an overall schedule s_i.
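As a concrete illustration, the five features could be computed from a task and its schedule roughly as follows. The report does not give exact feature formulas, so every definition in this sketch is an assumption, not the authors' actual code:

```python
# Sketch of the five features phi1..phi5. The exact formulas are not stated
# in the report; each definition below is an illustrative assumption.

def features(task, schedule):
    """task = (n, b, a): n hours of work, deadline hour b, start hour a.
    schedule = sorted list of the n hours actually worked."""
    n, b, a = task
    span = b - a
    mid = schedule[len(schedule) // 2]                 # hour by which the bulk is done
    phi1 = (mid - a) / span                            # how late the bulk is completed
    phi2 = (schedule[0] - a) / span                    # wait before starting
    gaps = [t2 - t1 for t1, t2 in zip(schedule, schedule[1:])]
    phi3 = sum(gaps) / len(gaps)                       # sparseness of the completion set
    phi4 = (b - schedule[-1]) / span                   # how early the task is finished
    phi5 = sum(1 for t in schedule if 96 <= t < 120)   # hours worked on Friday (assumed window)
    return [phi1, phi2, phi3, phi4, phi5]
```

For the training point (4, 59, 13) → [16, 23, 31, 42], for instance, φ5 comes out to 0, since no worked hour falls in the assumed Friday window of hours 96–119.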
While we experimented with other features (such as one for each day of the week, and a set for each time of day), the feature set we selected provides an accurate sample for the purposes of academic demonstration and, more importantly, converges with ease; adding the 12 or so extra features needed to cover times of day as well as days of the week caused overfitting problems, due to the sparsity of data within each group.

## 4 Model Selection and Implementation

### 4.1 Model Selection

In general, our goal is to create a function whose input is a 3-tuple task t = (n, b, a), where n is the size of the task (in hours) and a and b are the start and end (deadline) times of the task, respectively, measured in 0-indexed hours of the week. These three integers have domain [0, 167], denoting all available discrete hours in one week. Ultimately, the desired output will be a schedule array of the same size as the inputted task size n.
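Under this formulation, the cost of a schedule is a weighted sum of its features, and scheduling reduces to finding the feasible schedule of minimum cost. A minimal sketch, with a hypothetical two-feature extractor standing in for the real one and brute-force enumeration standing in for the project's uniform cost search:

```python
from itertools import combinations

def cost(w, phi):
    # Linear cost model: the cost of a schedule is the weighted sum of its features.
    return sum(wi * fi for wi, fi in zip(w, phi))

def best_schedule(task, w, feature_fn):
    # Brute-force stand-in for the project's uniform cost search: enumerate
    # every n-hour schedule strictly between start a and deadline b and keep
    # the cheapest. Only viable for tiny examples.
    n, b, a = task
    return min(combinations(range(a + 1, b), n),
               key=lambda s: cost(w, feature_fn(task, s)))

# Hypothetical toy extractor: wait before starting, and slack before the deadline.
toy_phi = lambda task, s: [s[0] - task[2], task[1] - s[-1]]
```

For example, `best_schedule((2, 10, 5), [1.0, 0.5], toy_phi)` returns `(6, 9)`: with these weights, waiting is penalized twice as heavily as finishing early, so the search starts as early as possible and stretches toward the deadline.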

Our experimental weight vector w, after training on the training data set, was determined to be, with rounding:

w ≈ [1.547, 0.157, 3.685, 1.339, 2.211]

### 5.2 Test Results

Table 2 is a sample of 10 test data points, showing how each was scheduled under the ideal cost function and under our learned cost function.

Table 2: Test Data Output

| Task Triple | Schedule Under Ideal Cost | Schedule Under Learned Cost |
|---|---|---|
| (4, 120, 106) | [108, 111, 114, 118] | [107, 110, 113, 117] |
| (4, 38, 5) | [8, 13, 19, 28] | [6, 11, 17, 25] |
| (4, 58, 32) | [34, 39, 44, 51] | [33, 37, 42, 49] |
| (4, 27, 21) | [22, 23, 24, 26] | [21, 22, 24, 26] |
| (4, 45, 27) | [29, 32, 36, 41] | [28, 31, 35, 40] |
| (4, 18, 5) | [6, 9, 12, 16] | [6, 9, 12, 15] |
| (4, 79, 54) | [56, 61, 66, 73] | [55, 59, 64, 71] |
| (4, 112, 80) | [82, 86, 90, 95] | [81, 86, 92, 100] |
| (4, 154, 139) | [141, 144, 147, 151] | [140, 143, 146, 150] |
| (4, 79, 62) | [64, 67, 71, 76] | [63, 66, 70, 75] |

### 5.3 Evaluation

All methods of evaluation were run on both training and test data. In the first two steps of the algorithm, we are essentially determining the compatibility of stochastic gradient descent and uniform cost search; in other words, we compare our experimental cost function f against the pre-determined, heuristic cost function f*. To make this comparison, we can simply use the old and new cost functions to evaluate the costs of a set of schedules S, and compute the average percent error between the two costs:

$$\text{Normalized Cost Error} = \frac{1}{|S|} \sum_{s \in S} \frac{|f^*(s) - f(s)|}{(f^*(s) + f(s))/2}$$

Using this data, we get:

Normalized Cost Error (training) =
Normalized Cost Error (test) =

This is closely tied to the basic error we are trying to minimize with our gradient descent, and thus should not be very high; yet over 100 data points it indicates an average percent error of about 33.2%, which suggests that our learned cost is not particularly accurate.
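The normalized cost error can be sketched directly from its definition (a minimal sketch, assuming the two cost functions are passed in as callables; this is not the authors' implementation):

```python
def normalized_cost_error(f_star, f, schedules):
    # Average symmetric percent error between the ideal cost f_star and the
    # learned cost f over a set of schedules S.
    total = sum(abs(f_star(s) - f(s)) / ((f_star(s) + f(s)) / 2)
                for s in schedules)
    return total / len(schedules)
```

Note the symmetric denominator: dividing by the mean of the two costs rather than by f*(s) alone keeps the metric bounded and treats the two functions evenhandedly.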
Fortunately, this only tells us about the accuracy of our arbitrary cost function, which is not the overall goal of the project (that goal is learning user preferences), and the error can largely be explained by our next method of evaluation, weight vector error. We can also compare the weight vectors directly: for our experimental weight vector w and the ideal weight vector w*, we simply find the difference between each pair of weights and square it. This error looks like:

$$\text{Weight Vector Error} = \sum_{i=1}^{n} (w_i^* - w_i)^2$$

There are five weights, so on average each experimental weight differs substantially from its ideal counterpart. This is a pretty large difference, considering the low values of our weights, and it can mostly be attributed to the fifth weight: in the ideal cost function, w5 = 10, while the learned w5 ≈ 2.2, which gives an error for w5 alone of about 7.8. This probably arises because not every schedule contains a Friday: hours 96 to 120 are not necessarily even within the bounds of a task, so this makes sense; a larger dataset might go some way towards mitigating it. In addition, because the outputted schedules are optimal and thus minimize cost, such a large weight is likely to be avoided, which means this feature will almost always take a value very close to 0, and so the learning of its weight is less likely to reach the true value.

It is important to note, however, that the closeness of these cost functions, although ideal, is not necessarily a pre-requisite to good schedules. It may be the case that, although the weights are somewhat different, the cost functions nonetheless generate similar schedules under the CSPs; this is especially pertinent when we consider that some of the features are correlated (such as the distance from the start to the first working hour, and the distance from the last working hour to the deadline). The goal of this algorithm is to produce schedules that users are happy with, not necessarily to derive their reward function.

This leads us to the second mechanism of evaluation, in which we run uniform cost search over a set of tasks S under each cost function and compare the resulting optimal schedules. The schedule produced by the experimentally learned function for task s_i is x_i, a vector of entries x_ji; the schedule produced by the pre-determined, ideal function for the same task is x*_i, a vector of entries x*_ji. We then measure schedule-based error as:

$$\text{Schedule Error} = \frac{1}{|S|} \sum_{s_i \in S} \sum_{j=1}^{n} |x_{ji}^* - x_{ji}|$$

Our data produces:

Schedule Error (training) = 8.11
Schedule Error (test) = 6.96

Considering that our data consists of tasks of size 4, this indicates that our average error per slot is around 2 hours.
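Both of these metrics are short computations; a sketch from their definitions (illustrative, not the authors' code):

```python
def weight_vector_error(w_star, w):
    # Sum of squared differences between ideal and learned weights.
    return sum((ws - wi) ** 2 for ws, wi in zip(w_star, w))

def schedule_error(ideal_schedules, learned_schedules):
    # Mean, over tasks, of the summed per-slot absolute hour difference
    # between the ideal schedule and the learned schedule.
    per_task = [sum(abs(xs - x) for xs, x in zip(si, li))
                for si, li in zip(ideal_schedules, learned_schedules)]
    return sum(per_task) / len(per_task)
```

On the first row of Table 2, for instance, every slot differs by exactly one hour, so that task contributes a schedule error of 4.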
That's pretty solid, and an indicator that our algorithm was able to fairly accurately predict schedules for a user. This was possible despite the difficulty with properly weighting Fridays (due to their sparseness in the training-set schedules, since they are so heavily avoided by our ideal "user"), probably because the relative weights of the other features were largely preserved. It's also worth noting that our test error is slightly lower than our training error; this indicates that our formula was not overfit to the training data, which makes sense, as overfitting was something we specifically tried to avoid by not over-choosing features (e.g., putting a weight on each possible hour).

## 6 Conclusion

While we were able to accurately schedule tasks when provided with enough data, we were ultimately unsatisfied by the need to include the ideal cost with the training data as a tool for learning a user's preferences. This leaves a lot to be desired, and a lot of space for future work, if this is ultimately to become a practical and usable algorithm. However, it was an excellent exercise in using various tools explored this quarter, including state-space search and gradient descent. We also feel that significant progress was made on the problem, even if we couldn't achieve the crucial breakthroughs needed to make the algorithm useful beyond an academic exercise.

## 7 Future Work

In most practical applications, data will come without cost parameters attached. For this algorithm to work in these cases, we need to model cost based on the scheduled times alone and learn the cost function in that manner.
We might pursue this with leave-one-out cross-validation (computationally expensive), or, given enough data, by directly linking each input tuple to an individual scheduled hour: that is, instead of trying to solve the cost function through some cost intermediary, we would write n uniform cost search problems, where n is the size of the task, and solve these n problems to find the n timeslots in which to perform the task. However, this approach doesn't allow us to take into account the sparseness of task completion, which we hypothesize to be an important feature.
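The leave-one-out idea is generic: hold out one data point, fit on the rest, score the held-out point, and average over all points. A sketch, with hypothetical `fit` and `score` callables supplied by the caller (not the authors' implementation):

```python
def leave_one_out(points, fit, score):
    # Generic leave-one-out cross-validation: for each data point, fit a
    # model on all the other points, score it on the held-out point, and
    # return the mean score. With n points this requires n full fits,
    # which is the computational expense noted above.
    errors = []
    for i in range(len(points)):
        rest = points[:i] + points[i + 1:]
        errors.append(score(fit(rest), points[i]))
    return sum(errors) / len(errors)
```

With a mean-value "model" and squared-error score on the points [1, 2, 3], the held-out errors are 2.25, 0, and 2.25, averaging 1.5.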


### CSE 258 Lecture 3. Web Mining and Recommender Systems. Supervised learning Classification

CSE 258 Lecture 3 Web Mining and Recommender Systems Supervised learning Classification Last week Last week we started looking at supervised learning problems Last week We studied linear regression, in

### CS 540: Introduction to Artificial Intelligence

CS 540: Introduction to Artificial Intelligence Midterm Exam: 4:00-5:15 pm, October 25, 2016 B130 Van Vleck CLOSED BOOK (one sheet of notes and a calculator allowed) Write your answers on these pages and

### IAI : Machine Learning

IAI : Machine Learning John A. Bullinaria, 2005 1. What is Machine Learning? 2. The Need for Learning 3. Learning in Neural and Evolutionary Systems 4. Problems Facing Expert Systems 5. Learning in Rule

### Assignment 1: Predicting Amazon Review Ratings

Assignment 1: Predicting Amazon Review Ratings 1 Dataset Analysis Richard Park r2park@acsmail.ucsd.edu February 23, 2015 The dataset selected for this assignment comes from the set of Amazon reviews for

### CSC-272 Exam #2 March 20, 2015

CSC-272 Exam #2 March 20, 2015 Name Questions are weighted as indicated. Show your work and state your assumptions for partial credit consideration. Unless explicitly stated, there are NO intended errors

### Homework III Using Logistic Regression for Spam Filtering

Homework III Using Logistic Regression for Spam Filtering Introduction to Machine Learning - CMPS 242 By Bruno Astuto Arouche Nunes February 14 th 2008 1. Introduction In this work we study batch learning

### Adaptive Behavior with Fixed Weights in RNN: An Overview

& Adaptive Behavior with Fixed Weights in RNN: An Overview Danil V. Prokhorov, Lee A. Feldkamp and Ivan Yu. Tyukin Ford Research Laboratory, Dearborn, MI 48121, U.S.A. Saint-Petersburg State Electrotechical

### ECE 5424: Introduction to Machine Learning

ECE 5424: Introduction to Machine Learning Topics: Classification: Naïve Bayes Readings: Barber 10.1-10.3 Stefan Lee Virginia Tech Administrativia HW2 Due: Friday 09/28, 10/3, 11:55pm Implement linear