Final Study Guide - CSE 327, Spring
Final Time and Place: Monday, May 14, 12-3pm, Chandler-Ullmann 248

Format: You can expect the following types of questions: true/false, short answer, and smaller versions of homework problems. Although you will have three hours to complete the final, it will only be about twice as long as the midterm. It will be closed book and closed notes. However, you may bring one 8 ½ x 11 cheat sheet with handwritten notes on both sides. All PDAs, portable audio players (e.g., iPods), and cell phones must be put away for the duration of the test, but you may use a basic four-function calculator. If you only have a programmable calculator, then you must clear its memory before the test and, at my request, be able to prove to me that you have done so.

Coverage: The test will be comprehensive; however, approximately two-thirds of the questions will be on subjects covered since the midterm. In general, anything from the assigned reading or lecture could be on the test. To help you focus, I have provided a partial list of topics that you should know below. In some cases, I have explicitly listed topics that you do not need to know. In addition, you do not need to memorize the pseudo-code for any algorithm, but you should be able to apply the principles of the major algorithms to a problem as we have done in class and on the homework.

Ch. 1 - Introduction
  o rationality
  o definitions of artificial intelligence
  o the Turing Test
  You do NOT need to know: dates and history.

Ch. 2 - Agents
  o PEAS descriptions
    - performance measure, environment, actuators, sensors
  o properties of task environments
    - fully observable vs. partially observable, deterministic vs. stochastic vs. strategic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, single agent vs. multiagent, known vs. unknown
  o agent architectures
    - simple reflex agents, goal-based agents, utility-based agents, learning agents
  o state representations
    - atomic, factored, structured

Ch. 3 - Search
  o problem description
    - initial state, actions, transition model, goal test, path cost/step cost
  o tree search
    - expanding nodes, fringe
    - branching factor
  o graph search
    - explored set
  o uninformed search strategies
    - breadth-first, depth-first, uniform cost
    - similarities and differences / benefits and tradeoffs between strategies
    - evaluation criteria: completeness, optimality, time complexity, space complexity
  o best-first search
    - evaluation function
  o informed search
    - heuristics
    - greedy best-first, A*
    - admissible heuristics
    - similarities and differences / benefits and tradeoffs between strategies
  You do NOT need to know: depth-limited, iterative deepening, or bidirectional search; the exact O() for any strategy's time/space complexity (but you should know relative complexity); details of the proof that A* is optimal if h(n) is admissible; memory-bounded heuristic search; learning heuristics from experience.
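As a compact reference for the informed-search items above (standard textbook definitions, using the usual g, h, and h* notation):

  f(n) = g(n) + h(n),   h is admissible when h(n) <= h*(n) for every node n

where g(n) is the cost of the path from the start node to n, h(n) is the heuristic estimate of the cheapest cost from n to a goal, and h*(n) is the true cheapest cost from n to a goal. Greedy best-first search ranks nodes by h(n) alone, and uniform-cost search by g(n) alone.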

Ch. 5 - Game playing (Sect. 5.1-5.2, 5.4, 5.7-5.9)
  o two-player zero-sum games
  o problem description
    - initial state, actions, transition model, terminal test, utility function
  o minimax algorithm
  o optimal decisions vs. imperfect real-time decisions
  o evaluation function, cutoff test
  You do NOT need to know: alpha-beta pruning; forward pruning; details of any state-of-the-art game-playing programs.
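A small worked minimax calculation of the kind you should be able to do by hand (the game tree and utilities are invented for illustration): suppose MAX moves at the root and has two moves, leading to MIN nodes B and C; B's terminal children have utilities 3, 12, and 8, and C's have utilities 2, 4, and 6. Backing values up:

  MINIMAX(B) = min(3, 12, 8) = 3
  MINIMAX(C) = min(2, 4, 6) = 2
  MINIMAX(root) = max(3, 2) = 3

so the optimal decision for MAX at the root is the move to B.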
Ch. 8 - First-Order Logic
  o syntax and semantics
    - be able to translate English sentences into logic sentences
  o quantification
    - existential, universal
  o domain, model, interpretation
  o entailment
  o equality/inequality
    - making statements about quantity (e.g., exactly two brothers)
  You do NOT need to know: specific axioms from the domains given in class or the book.
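Two examples of the kind of English-to-logic translation listed above (the sentences are chosen for illustration and are not from the assigned domains):

  "Every student who takes an AI course passes it":
    ∀x ∀c ((Student(x) ∧ AICourse(c) ∧ Takes(x, c)) ⇒ Passes(x, c))

  "Paul has exactly two brothers" (quantity stated with equality/inequality):
    ∃x ∃y (Brother(x, Paul) ∧ Brother(y, Paul) ∧ x ≠ y ∧ ∀z (Brother(z, Paul) ⇒ (z = x ∨ z = y)))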
Ch. 9 - Inference in First-Order Logic (Sect. 9.1-9.4)
  o entailment and correctness of inference (also see Sect. 7.3, pp. 240-243)
    - definition of entailment
    - sound, complete
  o substitution
    - apply substitutions, normal form
  o unification
    - most general unifier
  o forward-chaining
  o backward-chaining
    - pros / cons
    - diagramming the inference process
  o negation as failure
  You do NOT need to know: inference rules, skolemization; constraint logic programming.

Intro to Prolog Programming (Reading, Ch. 1)
  o syntax
    - be able to write rules and facts in Prolog
    - translating to FOL and vice versa
  o backward-chaining, depth-first search
    - be able to find the answers to a goal given a simple Prolog program
  o negation as failure / closed world assumption
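A minimal Prolog sketch of the sort of program you should be able to read and query (the family predicates are illustrative, not from a specific assignment):

  % Facts.
  parent(tom, bob).
  parent(bob, ann).
  parent(bob, pat).

  % Rule: X is a grandparent of Z if X is a parent of some Y who is a parent of Z.
  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

  % Negation as failure / closed world assumption:
  % X is childless if parent(X, _) cannot be proved from the program.
  childless(X) :- \+ parent(X, _).

The query ?- grandparent(tom, W). is answered by backward chaining with depth-first search: the goal unifies with the rule head (X = tom, Z = W), parent(tom, Y) succeeds with Y = bob, and parent(bob, W) succeeds with W = ann (and with W = pat on backtracking). The same unification underlies the most general unifier from Ch. 9; for example, knows(john, X) and knows(Y, mother(Y)) unify with MGU {Y = john, X = mother(john)}.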

Ch. 10 - Planning (Sect. 10.1-10.3)
  o problem description
    - initial state, goal state, actions
  o the PDDL language
    - preconditions and effects
  o forward state-space search
    - applicable actions, result states
  o backward state-space search
    - relevant actions, predecessor states
  o planning graphs
    - levels: fluents and actions
    - persistence actions
    - mutual exclusion (mutex) links
      between actions: inconsistent effects, interference, competing needs
      between fluents: negation, inconsistent support
    - used as heuristics: max level, level sum, set-level
  o GraphPlan
    - building the graph
    - extracting the solution
  You do NOT need to know: the actions for any specific planning problem given in the book; the proof of termination for GraphPlan.

Ch. 12 - Knowledge Representation (Sect. 12.1-12.2, 12.5, 12.7-12.8)
  o categories
    - unary predicate vs. object representation
  o semantic networks
    - inheritance
    - compared to FOL
  You do NOT need to know: axioms for representing composition, measurements, etc.; description logic; the Semantic Web; OWL.

Ch. 13 - Uncertainty
  o Boolean, discrete, and continuous random variables
  o prior probability and conditional probability
  o full joint distribution, atomic events
    - calculate the probability of an event from the full joint
  o independent variables
  o conditional independence
  o product rule, chain rule
  o Bayes' Rule
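A worked conditional-probability and Bayes' Rule calculation of the kind covered in Ch. 13 (the numbers are invented for illustration). Suppose P(d) = 0.01, P(pos | d) = 0.9, and P(pos | ¬d) = 0.2. Summing out d with the product rule gives P(pos), and Bayes' Rule then gives the posterior:

  P(pos) = P(pos | d) P(d) + P(pos | ¬d) P(¬d) = (0.9)(0.01) + (0.2)(0.99) = 0.207
  P(d | pos) = P(pos | d) P(d) / P(pos) = 0.009 / 0.207 ≈ 0.043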
Ch. 14 - Bayesian Networks (Sect. 14.1-14.2, 14.4)
  o understand network structure
  o compute the probability of an atomic event (worked example below)
  o compute P(X | e) by enumeration
  You do NOT need to know: the variable elimination algorithm; clustering algorithms.

Ch. 15 - Probabilistic Reasoning Over Time (Sect. 15.1-15.3)
  o Markov assumption
  o first-order Markov process
  o stationary process
  o transition model and sensor model
  o representing a set of variables for a specific time period (e.g., X_a:b)
  o types of inference
    - filtering, prediction, smoothing, most likely explanation
  You do NOT need to know: the algorithms for any of the types of inference; simplified matrix algorithms; details of the speech recognition or localization problems.
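For the Ch. 14 item "compute the probability of an atomic event": in a Bayesian network, the probability of any complete assignment is the product of the local conditional probabilities, one factor per variable given its parents (this is the standard network semantics; the burglary network is the textbook's example):

  P(x1, ..., xn) = Π_i P(xi | parents(Xi))

In the burglary network, for instance, P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e). Computing P(X | e) by enumeration then amounts to summing such products over the hidden variables and normalizing.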
Ch. 16 - Making Simple Decisions (Sect. 16.1-16.3)
  o utility function
  o maximum expected utility
    - constraints on rational preferences

Ch. 18 - Learning (Sect. 18.1-18.4, 18.6-18.9)
  o types of learning
    - supervised vs. reinforcement vs. unsupervised
  o supervised learning
    - hypothesis
    - goals: consistent, generalizes well
    - hypothesis space
    - training set vs. test set
    - positive vs. negative examples
  o decision trees
    - expressive power
    - learning
    - entropy, information gain (worked example below)
  o evaluating hypotheses
    - overfitting
    - learning curve
    - K-fold cross-validation
  o neural networks
    - activation functions: threshold, sigmoid
    - perceptron
      linearly-separable functions
      supervised learning method: learning rate, epoch, error
    - multi-layer feed-forward networks
      be able to calculate output (worked example below)
      what can be represented?
  o nearest neighbors
    - k-nearest neighbor algorithm: how it works; normalization of the dimensions
  o support vector machines
    - concepts of (but not formulas for): maximum margin separator, support vector, kernel trick
  You do NOT need to know: how to calculate the base-2 log (i.e., log2) -- if you need to compute this, I will provide a table; the back-propagation algorithm; linear regression, logistic regression; recurrent networks; k-d trees; locality-sensitive hashing; non-parametric regression.
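A small entropy and information gain calculation for the decision-tree item above (the counts are invented; a log2 table would be provided on the exam). For a node with 6 positive and 2 negative examples:

  H = -(6/8) log2(6/8) - (2/8) log2(2/8) ≈ 0.311 + 0.5 = 0.811 bits

If an attribute splits these 8 examples into one branch with 4 positive / 0 negative (entropy 0) and one with 2 positive / 2 negative (entropy 1), the expected entropy after the split is (4/8)(0) + (4/8)(1) = 0.5, so the information gain is 0.811 - 0.5 ≈ 0.311 bits.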

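Finally, a perceptron output calculation for the neural-network item "be able to calculate output" (the weights and inputs are invented for illustration). With inputs x1 = 1 and x2 = 0, weights w1 = 0.5 and w2 = -0.3, and a bias weight w0 = -0.2 on a fixed input of 1, the weighted sum is

  in = w0(1) + w1(x1) + w2(x2) = -0.2 + 0.5 + 0 = 0.3

A threshold unit (threshold at 0) outputs 1 because in ≥ 0; a sigmoid unit outputs 1 / (1 + e^(-0.3)) ≈ 0.574. In a multi-layer feed-forward network the same computation is repeated layer by layer, feeding each layer's outputs forward as the next layer's inputs.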