Final Study Guide
CSE 327, Spring
Final Time and Place: Thursday, Apr. 30, 8-11am, Packard 360

Format: You can expect the following types of questions: true/false, short answer, and smaller versions of homework problems. Although you will have three hours to complete the final, it will only be about twice as long as the midterm. It will be closed book and closed notes. However, you may bring one 8 ½ x 11 cheat sheet with handwritten notes on both sides. All PDAs, portable audio players (e.g., iPods), and cell phones must be put away for the duration of the test, but you may use a simple, non-programmable calculator.

Coverage: The test will be comprehensive; however, approximately two-thirds of the questions will be on subjects covered since the midterm. In general, anything from the assigned reading or lecture could be on the test. In order to help you focus, I have provided a partial list of topics that you should know below. In some cases, I have explicitly listed topics that you do not need to know. In addition, you do not need to memorize the pseudo-code for any algorithm, but you should be able to apply the principles of the major algorithms to a problem as we have done in class and on the homework.

Ch. 1 - Introduction
o rationality
o definitions of artificial intelligence
o The Turing Test
   dates and history

Ch. 2 - Agents
o PEAS descriptions
   performance measure, environment, actuators, sensors
o properties of task environments
   fully observable vs. partially observable, deterministic vs. stochastic vs. strategic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, single agent vs. multiagent
o agent architectures
   simple reflex agents, goal-based agents, utility-based agents, learning agents

Ch. 3 - Search
   initial state, actions (successor function), goal test, path cost, step cost
o tree search
   expanding nodes, fringe, branching factor
o uninformed search strategies
   breadth-first, depth-first, uniform cost
   similarities and differences / benefits and tradeoffs between strategies
   evaluation criteria: completeness, optimality, time complexity, space complexity
   depth-limited, iterative deepening, or bidirectional search
   the exact O() for any strategy's time/space complexity (but you should know relative complexity)
   sensorless planning
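As a worked illustration of the tree-search machinery above, here is a minimal sketch of generic tree search with a FIFO fringe (i.e., breadth-first order). The toy successor map and state names are invented for illustration only; they are not from the book or homework.

```python
from collections import deque

# Hypothetical toy state space; the names and edges are made up for illustration.
successors = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def breadth_first_tree_search(start, is_goal):
    """Generic tree search with a FIFO fringe (breadth-first order).
    Each fringe entry is a path from the start state to a frontier state."""
    fringe = deque([[start]])            # the fringe holds paths; start with the root
    while fringe:
        path = fringe.popleft()          # FIFO: expand the shallowest node first
        state = path[-1]
        if is_goal(state):
            return path                  # shallowest goal is found first
        for child in successors.get(state, []):   # expanding a node generates its successors
            fringe.append(path + [child])
    return None                          # failure: the fringe is exhausted

print(breadth_first_tree_search("A", lambda s: s == "E"))   # ['A', 'B', 'E']
```

Swapping the deque for a LIFO stack gives depth-first order, and a priority queue keyed on path cost gives uniform-cost search, which is one way to keep the tradeoffs between the uninformed strategies straight.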

Ch. 4 - Informed Search (Sect. 4.1-4.2)
o best-first search
o evaluation function, heuristics
o strategies
   greedy search, A*
   admissible heuristics
   similarities and differences / benefits and tradeoffs between strategies
   details of proof that A* is optimal if h(n) is admissible
   memory-bounded heuristic search
   learning heuristics from experience

Ch. 6 - Game playing (Sect. 6.1-6.2, 6.4, 6.6-6.8)
o two-player zero-sum game
   initial state, actions (successor function), terminal test, utility function
o minimax algorithm
o optimal decisions vs. imperfect real-time decisions
o evaluation function, cutoff test
o alpha-beta pruning (a minimax / alpha-beta sketch follows the Ch. 7 list below)

Ch. 7 - Logical Agents (Sect. 7.1-7.4, 7.7)
o knowledge-based agents
   TELL, ASK
o propositional logic
   syntax and semantics
o entailment, models, truth tables
o valid, satisfiable, unsatisfiable
o inference algorithms
   criteria: sound, complete
o model checking
   details of the Wumpus world
   circuit-based agents
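Since the Ch. 7 list asks about entailment, models, and model checking, here is a minimal truth-table enumeration sketch. The two-symbol knowledge base is an invented toy example, not one from the book or lecture.

```python
from itertools import product

# Invented toy example: KB = (P => Q) and P; query alpha = Q.
symbols = ["P", "Q"]

def kb(m):
    # (P => Q) ^ P, with P => Q written as (not P) or Q
    return ((not m["P"]) or m["Q"]) and m["P"]

def alpha(m):
    return m["Q"]

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha is true in every model (truth assignment) that satisfies KB."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False          # a model of KB in which alpha is false: no entailment
    return True

print(entails(kb, alpha, symbols))   # True: {P => Q, P} entails Q
```

This brute-force model checking is sound and complete but enumerates all 2^n truth assignments, which is why it is only practical for small vocabularies; being able to run a tiny version by hand is the skill to aim for.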

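For the Ch. 6 game-playing material (referenced above), here is a minimal minimax sketch with alpha-beta pruning over a hand-built two-ply game tree. The tree shape and the leaf utilities are invented for illustration.

```python
import math

# Invented toy game tree: internal nodes map to children, leaves map to utilities
# (utilities are stated from MAX's point of view).
tree = {"root": ["b", "c"], "b": ["b1", "b2"], "c": ["c1", "c2"]}
utility = {"b1": 3, "b2": 5, "c1": 2, "c2": 9}

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax value of `node` with alpha-beta pruning.
    alpha = best value MAX can guarantee so far; beta = best value MIN can guarantee."""
    if node in utility:                      # terminal test
        return utility[node]
    if maximizing:
        value = -math.inf
        for child in tree[node]:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # MIN will never let play reach this branch
                break
        return value
    else:
        value = math.inf
        for child in tree[node]:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # MAX already has a better option elsewhere
                break
        return value

print(alphabeta("root", True))   # 3: MAX moves to b, where MIN's best reply is worth 3
```

Note that with this move ordering the second leaf under c (utility 9) is never examined; tracing which branches get pruned, and why, is the core skill here.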
Ch. 8 - First-Order Logic
o syntax and semantics
   be able to translate English sentences into logic sentences
o quantification
   existential, universal
o domain, model, interpretation
   specific axioms from the Minesweeper or genealogy examples

Ch. 9 - Inference in First-Order Logic (Sect. 9.1-9.2, 9.4)
o substitution, unification (a unification sketch follows the planning example below)
   most general unifier
o backward chaining
   pros / cons
o negation as failure
   inference rules, skolemization
   constraint logic programming

Intro to Prolog Programming (Reading, Ch. 1)
o syntax
   be able to write rules and facts in Prolog
   translating to FOL and vice versa
o backward chaining, depth-first search
   be able to find the answers to a goal given a simple Prolog program
o closed world assumption

Ch. 10 - Knowledge Representation (Sect. 10.1-10.2, 10.5-10.6)
o categories
   unary predicate vs. object representation
o semantic networks
   inheritance
   compared to FOL
   description logic
   Semantic Web, OWL

Ch. 11 - Planning (Sect. 11.1-11.3)
   initial state, goal state, actions
o The STRIPS language
   preconditions and effects
o forward state-space search
   applicable actions, result states
o backward state-space search
   relevant and consistent actions, predecessor states
o partial-order planning
   least commitment
   causal links
   resolving conflicts in the propositional case
   linearizations
   ADL
   the actions for any specific planning problem given in the book
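To make the Ch. 11 notions of applicability and result states concrete, here is a minimal sketch of STRIPS-style progression (forward state-space search), with states as sets of ground atoms. The two ground actions and the goal are an invented toy problem, not one from the book.

```python
from collections import deque

# Each ground STRIPS action has preconditions, an add list, and a delete list (sets of atoms).
# The atoms, actions, and goal below are invented for illustration.
actions = {
    "go(home,store)":  {"pre": {"at(home)"},  "add": {"at(store)"},  "del": {"at(home)"}},
    "buy(milk,store)": {"pre": {"at(store)"}, "add": {"have(milk)"}, "del": set()},
}

def applicable(state, act):
    return act["pre"] <= state                   # all preconditions hold in the state

def result(state, act):
    return (state - act["del"]) | act["add"]     # remove the delete list, assert the add list

def forward_search(initial, goal):
    """Breadth-first progression planning: search forward from the initial state."""
    fringe = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while fringe:
        state, plan = fringe.popleft()
        if goal <= state:                        # goal test: every goal atom holds
            return plan
        for name, act in actions.items():
            if applicable(state, act):
                nxt = frozenset(result(state, act))
                if nxt not in seen:
                    seen.add(nxt)
                    fringe.append((nxt, plan + [name]))
    return None

print(forward_search({"at(home)"}, {"have(milk)"}))
# ['go(home,store)', 'buy(milk,store)']
```

Backward (regression) search runs the same loop from the goal over relevant, consistent actions; the applicability and result computations above are the pieces worth being able to do by hand.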

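Returning to the Ch. 9 item flagged above, here is a minimal most-general-unifier sketch. Representing terms as nested tuples and variables as capitalized strings is a convention chosen for this sketch only, and the knows/mother example is invented.

```python
def is_var(t):
    # Convention for this sketch: variables are strings that start with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, s):
    """Apply substitution s (a dict from variables to terms) to term t."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    """Return a most general unifier of x and y (extending s), or None on failure.
    Note: this sketch omits the occurs check."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None   # clash: different constants, functors, or arities

# Invented example: unify knows(john, X) with knows(Y, mother(Y)).
print(unify(("knows", "john", "X"), ("knows", "Y", ("mother", "Y"))))
# {'Y': 'john', 'X': ('mother', 'john')}
```

The same substitution machinery is what a Prolog interpreter applies as it backward-chains through rules depth-first, which is why tracing unifications by hand also helps with the Prolog questions.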
Ch. 12 - Planning and Acting in the Real World (Sect. 12.3, 12.6)
o bounded / unbounded indeterminacy
o continuous planning

Ch. 13 - Uncertainty
o Boolean, discrete, and continuous random variables
o prior probability and conditional probability
o full joint distribution, atomic events
   calculate the probability of an event from the full joint (see the sketch after the Ch. 18 list below)
o independent variables
o conditional independence
o Bayes' Rule

Ch. 14 - Bayesian Networks (Sect. 14.1-14.2, 14.4)
o understand network structure
o compute the probability of an atomic event
o compute P(X | e) by enumeration
   variable elimination algorithm
   clustering algorithms

Ch. 15 - Probabilistic Reasoning Over Time (Sect. 15.1-15.2, 15.6)
o Markov assumption
o first-order Markov process
o stationary process
o transition model and sensor model
o types of inference
   filtering, prediction, smoothing, most likely explanation (a filtering sketch also appears below)
   the algorithms for any of the types of inference
   details of speech recognition

Ch. 16 - Making Simple Decisions (Sect. 16.1-16.3)
o utility function
o maximum expected utility
   the axioms of utility theory

Ch. 18 - Learning (Sect. 18.1-18.2)
o types of learning
   supervised vs. reinforcement vs. unsupervised
o inductive learning
   hypothesis
   training set vs. test set
   positive vs. negative examples
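For the Ch. 13 items flagged above, here is a minimal sketch of reading marginal and conditional probabilities off a full joint distribution over two Boolean variables, and checking the answer with Bayes' Rule. The variable names and numbers are invented; they just need to sum to 1.

```python
# Invented full joint distribution over two Boolean variables (Cavity, Toothache).
# Each atomic event maps to its probability; the four entries sum to 1.
joint = {
    (True,  True):  0.12,
    (True,  False): 0.08,
    (False, True):  0.08,
    (False, False): 0.72,
}

def p(event):
    """Probability of an event = sum of the atomic events consistent with it.
    `event` is a predicate over (cavity, toothache) pairs."""
    return sum(prob for atom, prob in joint.items() if event(*atom))

p_cavity    = p(lambda c, t: c)          # marginal P(cavity)      ~ 0.20
p_toothache = p(lambda c, t: t)          # marginal P(toothache)   ~ 0.20
p_both      = p(lambda c, t: c and t)    # P(cavity ^ toothache)   = 0.12

# Conditional probability from the definition: P(a | b) = P(a ^ b) / P(b)
print(p_both / p_toothache)                                   # P(cavity | toothache) ~ 0.6

# The same number via Bayes' Rule: P(a | b) = P(b | a) P(a) / P(b)
p_toothache_given_cavity = p_both / p_cavity
print(p_toothache_given_cavity * p_cavity / p_toothache)      # ~ 0.6 again
```

For Ch. 14, inference by enumeration is the same computation, except that each atomic-event probability is reconstructed on the fly as a product of conditional probability table entries instead of being stored in one big table.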

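For the Ch. 15 filtering item flagged above, this is a minimal forward (filtering) step for a two-state hidden Markov model. The rain/umbrella setup is a generic toy model, and the transition and sensor numbers are invented rather than taken from the book or lecture.

```python
# Filtering in a two-state HMM: the belief over X_t given all evidence so far.
# Transition model P(X_t | X_{t-1}) and sensor model P(e_t | X_t); numbers are invented.
states = ["rain", "dry"]
transition = {"rain": {"rain": 0.6, "dry": 0.4},
              "dry":  {"rain": 0.2, "dry": 0.8}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "dry":  {"umbrella": 0.3, "no_umbrella": 0.7}}

def filter_step(belief, evidence):
    """One step of the forward algorithm: predict with the transition model,
    weight by the sensor model for the new evidence, then normalize."""
    predicted = {s: sum(belief[prev] * transition[prev][s] for prev in states)
                 for s in states}
    unnormalized = {s: sensor[s][evidence] * predicted[s] for s in states}
    z = sum(unnormalized.values())
    return {s: unnormalized[s] / z for s in states}

belief = {"rain": 0.5, "dry": 0.5}          # prior over the initial state
for e in ["umbrella", "umbrella"]:          # two observations in a row
    belief = filter_step(belief, e)
    print(belief)                           # belief shifts further toward "rain" each step
```

Prediction drops the sensor-weighting step, and smoothing combines this forward pass with a symmetric backward pass; knowing which pieces each kind of inference uses is the main thing to take away.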
Ch. 19 - Logical Formulation of Learning (Sect. 19.1)
o classification and description sentences
o candidate definition
o false positive, false negative
o generalize/specialize hypotheses
o types
   current-best hypothesis
   version space learning
   how to apply version space learning to a specific problem

Ch. 20 - Neural Networks (Sect. 20.5)
o activation functions
o perceptron (a training sketch follows the Ch. 24 list below)
   linearly separable functions
   supervised learning method
   learning rate, epoch, error
o multi-layer feed-forward networks
   be able to calculate output
   what can be represented?
   details of the back-propagation algorithm
   recurrent networks

Ch. 22 - Communication (Sect. 22.1-22.2)
o steps of natural language processing
   analysis (parsing, semantic interpretation, pragmatic interpretation)
   disambiguation
   incorporation
   speech acts
   formal grammar for English

Ch. 24 - Perception (Sect. 24.1-24.3)
o edge detection
o methods for extracting 3-D information
   binocular stereopsis, optical flow, texture, shading
   equations for image formation
   object recognition techniques
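For the Ch. 20 perceptron item flagged above, here is a minimal sketch of a single threshold unit trained with the perceptron learning rule. Using the AND function as the target, integer weights, and a learning rate of 1 are choices made for this illustration; the update rule itself is the standard one.

```python
# Perceptron: a single threshold unit trained with the perceptron learning rule.
# Target is AND (linearly separable); each input vector starts with a fixed bias input of 1.
examples = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

def output(weights, x):
    # Hard-threshold activation: fire iff the weighted sum of the inputs is positive.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def train(examples, rate=1, epochs=20):
    weights = [0, 0, 0]                    # bias weight plus one weight per input
    for _ in range(epochs):                # one epoch = one pass over the training set
        for x, target in examples:
            error = target - output(weights, x)                      # error in {-1, 0, +1}
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
    return weights

w = train(examples)
print(w, [output(w, x) for x, _ in examples])   # e.g. [-2, 2, 1] with outputs [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; trying the same code on XOR shows why a single perceptron cannot represent it and why multi-layer feed-forward networks are needed. The forward "calculate the output" step for a multi-layer network is the same weighted-sum-plus-activation computation applied layer by layer.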