
CS 540: Introduction to Artificial Intelligence
Midterm Exam: 7:15-9:15 pm, October 29, 2002
Room 1240 CS & Stats
CLOSED BOOK (one sheet of notes and a calculator allowed)

Write your answers on these pages and show your work. If you feel that a question is not fully specified, state any assumptions you need to make in order to solve the problem. You may use the backs of these sheets for scratch work. Write your name on this and all other pages of this exam. Make sure your exam contains six problems on twelve pages.

Name
Student ID

Problem   Score   Max Score
1                 15
2                 20
3                  5
4                 20
5                 10
6                 30
TOTAL            100

Problem 1 Decision Trees (15 points)

Assume that you are given the set of labeled training examples below, where each feature has three possible values: a, b, or c. You choose to apply the C5 algorithm to this data and select - as the default output.

F1  F2  F3  Output
a   a   a   +
c   b   c   +
c   a   c   +
b   a   a   -
a   b   c   -
b   b   c   -

a) What score would the information-gain calculation assign to each of the features? Be sure to show all your work (use the back of this or the previous sheet if needed).

b) Which feature would be chosen as the root of the decision tree being built? (Break ties in favor of F1 over F2 over F3.)
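As a cross-check on the hand calculation in parts a and b, here is a minimal sketch of the information-gain computation over the six training examples above (the helper names `entropy` and `info_gain` are mine, not the exam's):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(examples, feature):
    """Information gain of splitting `examples` on `feature`."""
    labels = [ex["out"] for ex in examples]
    n = len(examples)
    remainder = 0.0
    for value in set(ex[feature] for ex in examples):
        subset = [ex["out"] for ex in examples if ex[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# The six training examples from the problem statement.
data = [
    {"F1": "a", "F2": "a", "F3": "a", "out": "+"},
    {"F1": "c", "F2": "b", "F3": "c", "out": "+"},
    {"F1": "c", "F2": "a", "F3": "c", "out": "+"},
    {"F1": "b", "F2": "a", "F3": "a", "out": "-"},
    {"F1": "a", "F2": "b", "F3": "c", "out": "-"},
    {"F1": "b", "F2": "b", "F3": "c", "out": "-"},
]

for f in ("F1", "F2", "F3"):
    print(f, round(info_gain(data, f), 3))  # F1 0.667, F2 0.082, F3 0.0
```

With the split on F1, the b and c branches become pure, which is why F1's gain dominates.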

c) Show the next interior node, if any, that the C5 algorithm would add to the decision tree. Again, be sure to show all your work. (Even if this second interior node does not completely separate the training data, stop after adding this second node.) Be sure to label all the leaf nodes in the decision tree that you have created.

d) Assuming you have the following tuning set, what would the first round of HW1's pruning algorithm do to the decision tree produced in Part c? (If you need to break any ties, default to -.) Justify your answer.

F1  F2  F3  Output
b   a   b   +
a   b   a   +
c   c   a   -

Problem 2 Search (20 points)

a) Consider the search space below, where S is the start node and G1, G2, and G3 satisfy the goal test. Arcs are labeled with the cost of traversing them and the estimated cost to a goal is reported inside nodes. For each of the following search strategies, indicate which goal state is reached (if any) and list, in order, all the states popped off of the OPEN list. When all else is equal, nodes should be removed from OPEN in alphabetical order.

Iterative Deepening
Goal state reached:
States popped off OPEN:

Hill Climbing
Goal state reached:
States popped off OPEN:

A*
Goal state reached:
States popped off OPEN:

[Search-space diagram: start node S; interior nodes A through F; goal nodes G1, G2, G3 (each with estimated cost 0); arcs labeled with traversal costs and nodes labeled with heuristic estimates.]

b) (Note: This question talks about search spaces in general and is not referring to the specific search space used in Part a.) Consider the following scoring function for heuristic search:

score(node) = Wgt × g(node) + (1 − Wgt) × h(node), where 0 ≤ Wgt ≤ 1

i. Which search algorithm do you get with Wgt set to 0?

ii. Which search algorithm do you get with Wgt set to 1?

iii. If h(node) is an admissible heuristic, for what range of Wgt values is the above scoring function guaranteed to produce an admissible search strategy? Explain your answer.
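The effect of Wgt can be explored with a small sketch: setting Wgt = 1 makes the score reduce to g(node) alone, and Wgt = 0 makes it reduce to h(node) alone, so the same loop behaves as different classic algorithms. The graph and heuristic below are illustrative assumptions, not the search space from part a:

```python
import heapq

def weighted_search(graph, h, start, goals, wgt):
    """Best-first search ordered by score = wgt*g(node) + (1-wgt)*h(node).
    wgt=1 orders by g alone (uniform-cost search);
    wgt=0 orders by h alone (greedy best-first search)."""
    open_list = [((1 - wgt) * h[start], 0.0, start, [start])]
    closed = set()
    while open_list:
        score, g, node, path = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)
        if node in goals:
            return node, path, g
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(
                    open_list,
                    (wgt * g2 + (1 - wgt) * h[succ], g2, succ, path + [succ]))
    return None, [], float("inf")

# A tiny illustrative graph: node -> [(successor, arc cost)].
graph = {"S": [("A", 2), ("B", 5)], "A": [("G", 4)], "B": [("G", 1)]}
h = {"S": 5, "A": 4, "B": 1, "G": 0}

for wgt in (0.0, 0.5, 1.0):
    print(wgt, weighted_search(graph, h, "S", {"G"}, wgt))
```

Note that with Wgt = 0 the search chases the low-h node B and commits to the path S-B-G, while with Wgt = 1 it expands by accumulated cost alone; Wgt = 0.5 orders nodes the same way as A* (the A* score g + h scaled by one half).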

Problem 3 Representation using First-Order Logic (5 points)

Convert each of the following English sentences into first-order predicate calculus (FOPC), using reasonably named predicates, functions, and constants. If you feel a sentence is ambiguous, clarify which meaning you're representing in logic. (Write your answers in the space below the English sentence.)

Someone in Madison likes pizza.

Everyone who likes ham also likes cheese.

Problem 4 Representation & Reasoning with Propositional Logic (20 points)

a) Is the following WFF valid? Justify your answer using a truth table.

[ (P → Q) ∧ (Q → R) ] → (P → R)

b) Use propositional logic to represent the following sentences.

i) Fido is always either sleeping or barking.

ii) When Fido is hungry Fido barks, but Fido barking does not necessarily mean that Fido is hungry.
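A truth table over P, Q, and R has only eight rows, so the validity check in part a can be machine-verified by brute-force enumeration; a throwaway sketch (the helper name `implies` is mine):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Check [ (P -> Q) AND (Q -> R) ] -> (P -> R) on all 8 truth assignments.
valid = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([False, True], repeat=3)
)
print(valid)  # True: the WFF holds on every row (hypothetical syllogism)
```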

c) Formally show that S follows from the givens below. (Don't deduce more than 10 additional WFFs.)

Number  WFF            Justification
1       P Z            given
2       ( R W) ( P)    given
3       (W Q) P        given
4       Q W            given
5       Q (S P)        given
6       (P Q) (S R)    given
7-16    (blank rows for your deductions)

Problem 5 Miscellaneous Short Answers (10 points)

Briefly describe each of the following AI concepts and explain each one's significance. (Write your answers below the phrases.)

Simulated Annealing

Expected-Value Calculations

Interpretations

Minimax Algorithm

CLOSED List

Problem 6 Learning, Logic, and Search Combined (30 points)

Imagine that we have a training set of labeled fixed-length feature vectors. However, instead of using C5 to induce a decision tree, this time we want to learn inference rules like the following sample:

(Color=red ∧ Shape=round) → positive example

Assume that each training example is described using features F1 through F3 and each feature has three possible values: a, b, or c. Each training example is labeled as either in the positive (+) category or the negative (-) category. Also assume that you have set aside 10% of the training examples for a test set and another 10% for a tuning set.

Your task is to cast this as a search problem where your search algorithm's job is to find the best propositional-logic WFF for this task. Your WFF should use implication (→) as its main connective, and the WFF should imply when an example belongs to the positive category (e.g., as done in the sample WFF above).

a) Describe a search space that you could use for learning such WFFs. Show a sample node.

b) What would be the initial state?

c) Describe two (2) operators that your search algorithm would use. Be concrete.

i)

ii)

d) Would you prefer to use a goal test for this task, or would you prefer to look for the highest-scoring state you can find? Justify your answer.

e) What would be a good scoring function for this task? Why?
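One way to make the operators of part c and the scoring function of part e concrete is the sketch below, in which a state is a set of feature-value tests whose conjunction implies +, the two operators specialize (add a test) and generalize (drop a test), and the score is training-set accuracy. All names and the tiny example set here are illustrative assumptions, not the exam's intended answer:

```python
def matches(rule, example):
    """A rule fires on an example iff every feature-value test holds."""
    return all(example.get(f) == v for f, v in rule.items())

def specialize(rule, features):
    """Operator 1: add one feature-value test not already in the rule."""
    return [dict(rule, **{f: v})
            for f in features if f not in rule
            for v in features[f]]

def generalize(rule):
    """Operator 2: drop one existing feature-value test."""
    return [{g: w for g, w in rule.items() if g != f} for f in rule]

def score(rule, examples):
    """Fraction classified correctly, predicting + exactly when the rule fires."""
    correct = sum(1 for ex in examples
                  if (ex["out"] == "+") == matches(rule, ex))
    return correct / len(examples)

features = {"F1": "abc", "F2": "abc", "F3": "abc"}
examples = [
    {"F1": "a", "F2": "a", "F3": "a", "out": "+"},
    {"F1": "c", "F2": "b", "F3": "c", "out": "+"},
    {"F1": "b", "F2": "a", "F3": "a", "out": "-"},
]
rule = {"F1": "a"}  # a sample non-initial state: (F1=a) -> +
print(len(specialize(rule, features)))  # 6 children: 2 open features x 3 values
print(score(rule, examples))
```

The empty rule (which fires on everything) is a natural initial state, and each specialization or generalization step yields the children a hill-climbing or beam search would score.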

f) Which search strategy would you use? Why?

g) Show one (1) state (other than the initial state) that might arise during your search, and show three (3) possible children of this state.

h) What would be the analog to decision-tree pruning for this task? Explain how and why pruning could be done in this style of machine learning.