Q1: Draw or describe a node map and heuristic that would cause a greedy search to fail to find any solution. State any necessary assumptions

Q2: You are designing a robot that will navigate its way out of a maze. In this scenario, identify what the environment is, what the agent is, and what the necessary percepts, sensors, actuators, and actions are.

Q3: The triangular 2-puzzle is a simple variant of the 8-puzzle discussed in class. It consists of a large triangular base and 2 smaller triangular tiles numbered 1 and 2, as seen below. These numbered tiles can occupy any one of the three corners and can be moved from any corner to any other unoccupied corner, as shown below. Write a reflex agent that will get any starting state into the goal state, which consists of the 1 tile in the lower left corner and the 2 tile in the lower right corner.

Q4: You are building an AI to translate common spoken phrases from English to French. Explain the type of environment your machine will operate in, particularly whether it is observable, deterministic, static, sequential, discrete, or the opposite of any of these five types. Explain your reasoning.

Q5: Describe a situation where a depth-first search would be complete, and give an example of when it would be better to use depth-first searching instead of breadth-first searching.

Q6: If you use a heuristic with an A* search that is not admissible, is it possible that you will not be able to find a solution? Why?

Q7: Given breadth-first, depth-first, greedy, and A* searches, what would be the effect on each of these if action costs and heuristic costs were all always 0?

1. Suppose two friends live in different cities on a map, such as the Romania map shown in Figure 3.2. On every turn, we can simultaneously move each friend to a neighboring city on the map.
The amount of time needed to move from city i to neighbor j is equal to the road distance d(i, j) between the cities, but on each turn the friend that arrives first must NOT wait until the other one arrives (they are trying to avoid each other as much as possible) before the next turn can begin. Both friends cannot be at the same place at the same time.
a. Write a detailed formulation for this search problem. (You will find it helpful to define some formal notation here.)
b. Are there completely connected maps for which no solution exists?

2. Define the following words: state, state space, search tree, search node, goal, action, completeness, time complexity, space complexity, and optimality.

3. Consider the map of Romania. Which of the uninformed search strategies would you choose when analyzing the shortest route from Bucharest to Zerind? Explain why.

4. Considering any environment type, evaluate whether or not it is accessible, deterministic, episodic, static, and discrete.

5. Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall.
a. Formulate this problem. How large is the state space?
b. In navigating a maze, the only place we need to turn is at the intersection of two or more corridors. Reformulate this problem using this observation. How large is the state space now?

6. Prove each of the following statements, or give a counterexample:
a. Breadth-first search is a special case of uniform-cost search.
b. Depth-first search is a special case of best-first tree search.
c. Uniform-cost search is a special case of A* search.

7. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable, 3-foot-high crates. Give the problem formulation.

Question 1) Formulate the simple 1x2 vacuum world problem used by AIMA.
Question 2) What does the acronym PEAS stand for? What is the PEAS description for an automatic taxi? (Give 5 examples for each letter in the acronym.)
Question 3) What is the Turing (1950) Test? Explain what the purpose of this test was. Turing listed multiple objections; list and explain one of these objections. Is the objection still valid in the present day?
Question 4) What is an admissible heuristic? Why are the properties of an admissible heuristic important?
Question 5) What are the four basic types of agents? Explain.
Question 6) How does depth-limited search work? What is a drawback of this algorithm?
Question 7) Which of the following search methods are optimal: breadth-first, depth-first, uniform-cost, depth-limited, iterative-deepening, greedy search, A*?
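Question 1 above asks for a formulation of the simple 1x2 vacuum world. A minimal sketch of one possible formulation is below; the tuple encoding (agent location, left-square status, right-square status) and the action names are assumptions, chosen for illustration rather than mandated by the question.

```python
# A sketch of one formulation of the 1x2 vacuum world.
# A state is (agent_location, left_status, right_status).
from itertools import product

LOCATIONS = ("Left", "Right")
STATUS = ("Clean", "Dirty")

STATES = list(product(LOCATIONS, STATUS, STATUS))  # 2 * 2 * 2 = 8 states
ACTIONS = ("Left", "Right", "Suck")

def result(state, action):
    """Transition model: the effect of one action on a state."""
    loc, left, right = state
    if action == "Left":
        return ("Left", left, right)
    if action == "Right":
        return ("Right", left, right)
    if action == "Suck":
        if loc == "Left":
            return (loc, "Clean", right)
        return (loc, left, "Clean")

def goal_test(state):
    """Goal: both squares clean, regardless of agent location."""
    return state[1] == state[2] == "Clean"
```

With this encoding the state space has exactly 8 states, and each action costs 1 under the usual unit-step-cost assumption.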
Questions 1-4 refer to the following tree. Node S is the starting node, node I is the goal.
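The tree from the figure is not reproduced in this transcription. As a stand-in, the sketch below uses a hypothetical tree rooted at S containing goal I, purely to illustrate how the breadth-first and depth-first visit orders asked about here can be computed.

```python
# Stand-in tree (hypothetical; the original figure is not available).
# Children are listed left to right.
from collections import deque

TREE = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E", "F"],
    "C": [], "D": [], "E": [], "F": ["I"],
    "I": [],
}

def bfs_order(root, goal):
    """Visit order for breadth-first search (FIFO frontier)."""
    order, frontier = [], deque([root])
    while frontier:
        node = frontier.popleft()
        order.append(node)
        if node == goal:
            break
        frontier.extend(TREE[node])
    return order

def dfs_order(root, goal):
    """Visit order for depth-first search (LIFO frontier)."""
    order, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        order.append(node)
        if node == goal:
            break
        frontier.extend(reversed(TREE[node]))  # push right child first so the left is expanded first
    return order
```

On this stand-in tree, BFS visits S, A, B, C, D, E, F, I while DFS visits S, A, C, D, B, E, F, I.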

Question 1: Using breadth-first search, show the order of nodes that would be visited in the tree above when searching for node I while starting from node S. Also, regarding breadth-first search in general, is it complete? Is it optimal?

Question 2: Using depth-first search, show the order of nodes that would be visited in the tree above when searching for node I while starting from node S. Also, regarding depth-first search in general, is it complete? Is it optimal?

Question 3: Using iterative deepening search, show the order of nodes that would be visited in the tree above when searching for node I while starting from node S. Also, regarding iterative deepening search in general, is it complete? Is it optimal?

Question 4: Suppose that there was an added element to the figure above: cost. Since the graph we are referring to is a tree, it would be very easy for a greedy search to get stuck when searching for the goal node, I, from the starting node, S. Why? Assign costs to the paths in the tree above in a way that a greedy search would result in a loop, never finding the goal. Then assign costs to the paths in the tree above in a way that a greedy best-first search would succeed in finding the goal.

Question 5: 2D Vacuum Cleaner World! An additional level of complexity (a whole extra dimension) was added to the vacuum cleaner world we know from lecture. Consider the following scenario: taking the percepts and actions from the simple vacuum cleaner world, re-evaluate and implement, using a flowchart, table, or pseudocode, the 2D vacuum cleaner world problem for a 2-by-2 world. Then list the sequence of actions that your implementation of the 2D vacuum cleaner world would follow given the scenario above.

Question 6: Remember the following quadrants? Fill in the blanks and give examples of each. Explain why each of the quadrants is important to the study of Artificial Intelligence.
(Opinion) Do you think that one day Artificial Intelligence will be able to replace human intelligence completely (as in every application)?

Question 7: For each of the following environments, determine whether it is: (a) not, partially, or fully observable; (b) deterministic or stochastic; (c) static or dynamic; (d) discrete or continuous; and (e) episodic or sequential.
1. A game of solitaire.
2. A game of backgammon.
3. Internet shopping.
4. Taxi driving.

1. Q: What are the four different branches of AI?
2. Q: What does PEAS stand for?

3. Q: Use PEAS to describe an automated ice cream scooper.
4. Q: What is the difference between a reflex agent and a reflex agent with state?

1. Define the following terms:
a. State
b. Tree
c. Branching factor

2. Define the following terms:
a. Action
b. Successor function
c. Goal

3. Define the following terms:
a. Transition model
b. Utility function
c. Terminal test
d. Zero-sum

4. Provide a basic description of the Turing test, and describe why it is a useful metric in determining the humanity of an artificial agent. Which type of artificial intelligence does the Turing test examine?

5. Is informed search always better than uninformed search? What are the limitations of each?

6. The United States coin system (25, 10, 5, and 1 cent coins) allows greedy algorithms to perform optimally. Provide a description of this algorithm and give an example of a coin system for which greedy won't perform optimally.

7. Describe the function of alpha-beta pruning in minimax search and why it is useful.
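The greedy coin-change algorithm from item 6 above can be sketched in a few lines; the counterexample coin system (1, 3, 4) is one illustrative choice, not the only one.

```python
# Greedy coin change: repeatedly take the largest coin that still fits.
def greedy_change(amount, coins):
    """Return the list of coins greedy would hand out for `amount`."""
    coins = sorted(coins, reverse=True)
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

# With US coins, greedy is optimal: 67 -> 25 + 25 + 10 + 5 + 1 + 1.
greedy_change(67, [25, 10, 5, 1])

# With the system (1, 3, 4), greedy makes 6 as 4 + 1 + 1 (three coins),
# but 3 + 3 (two coins) is better -- greedy is not optimal here.
greedy_change(6, [1, 3, 4])
```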

1. Define "optimal" and "complete" for search algorithms.
2. Give an example of a search space that would be inefficient for a depth-first search.
3. Give an example of a problem that can be solved efficiently with A*.
4. Describe the PEAS for a Rubik's Cube playing robot.
5. Describe the state space for the Rubik's Cube, and its cardinality.
6. How many board positions are possible in a game of standard chess?
7. What is your name?
8. What is your quest?
9. What is your favorite color?
10. What is the capital of Assyria?
11. What is the airspeed velocity of an unladen swallow?

Question #1: Please give the different queue contents for the above graph using an A* search.
Question #2: What does the heuristic function represent?
Question #3:

For each of the following environment types, please specify whether or not it holds true for the game of backgammon:
Observable?  Deterministic?  Episodic?  Static?  Discrete?

Question #4: What does PEAS stand for?
Question #5: What are the 4 parts of a search formulation?
Question #6: What search type does the following queue sequence represent?
Question #7: Is the search algorithm used in Question 6 the optimal search algorithm?

Would you use an algorithm with incremental state formulation or complete state formulation when dealing with the n-queens problem?
What is the difference between an exhaustive search and one that is not? Give examples of both.
What is the difference between admissible and consistent heuristics?
Fill in the table below according to environment types:

Environment   Observable?  Deterministic?  Episodic?  Static?  Discrete?
Solitaire
Taxi

Do a depth-first search on this binary tree.
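The admissible-vs-consistent question above can be made concrete with a small checker; the 4-node graph, its edge costs, and the heuristics below are hypothetical, chosen only to exhibit a heuristic that is admissible but not consistent.

```python
# Hypothetical graph: edges with costs, goal node G.
GRAPH = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "G": 5}, "C": {"G": 1}, "G": {}}
H_STAR = {"A": 4, "B": 3, "C": 1, "G": 0}   # h*: true cost to reach G

def admissible(h):
    """Admissible: h never overestimates the true remaining cost."""
    return all(h[n] <= H_STAR[n] for n in h)

def consistent(h):
    """Consistent: h obeys the triangle inequality h(n) <= c(n, n') + h(n')."""
    return all(h[n] <= cost + h[m]
               for n, edges in GRAPH.items()
               for m, cost in edges.items())

# Admissible but NOT consistent: h(A) = 4 > c(A, B) + h(B) = 1 + 1 = 2.
H_BAD = {"A": 4, "B": 1, "C": 1, "G": 0}
```

Every consistent heuristic is admissible, but as `H_BAD` shows, the converse fails: an admissible heuristic can still violate the triangle inequality along an edge.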

Explain the horizon effect. Refer to its use in a game.

Q1: What are the 4 different types of agents and how do they differ?

Q1 Question: What are the four most common categories for definitions of artificial intelligence? Give an example of each.
Q2 Question: Sketch a simple reflex agent. What additional component is necessary for this agent to be considered utility-based? How is this different from a performance measure?
Q3 Question: Consider the 8-puzzle problem. What is the initial state, goal test, successor function, and path cost?
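The 8-puzzle formulation asked for in Q3 could be sketched as below. The flat-tuple state encoding, the example initial state, and the choice of 0 for the blank are assumptions made for illustration.

```python
# 8-puzzle sketch: a state is a tuple of 9 entries read row by row,
# with 0 marking the blank.
INITIAL = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # hypothetical example start
GOAL    = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def successors(state):
    """Successor function: swap the blank with any adjacent tile."""
    moves = []
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            nxt = list(state)
            nxt[i], nxt[j] = nxt[j], nxt[i]
            moves.append(tuple(nxt))
    return moves

def goal_test(state):
    return state == GOAL

# Path cost: each move costs 1, so path cost = number of moves made.
```

A blank in the center has four successors; a blank in a corner has only two, which is why the branching factor of the 8-puzzle varies between 2 and 4.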

Q5 Question: Pick three (3) uninformed and two (2) informed search methods and discuss their performance in terms of completeness, optimality, time complexity, and space complexity.
Q6 Question: Your task is to navigate a maze with an uninformed search method, but your agent has limited memory. Pick a search strategy and defend your choice.
Q7 Question: What is the main difference between A* and greedy best-first search (GBFS)? Describe two different situations where each one is better than the other.

1. What four categories do the different views of AI fall into?
2. Describe the following for an automated taxi: performance measures, environment, actuators, and sensors.
3. Describe the following: simple reflex agent, reflex agent with state, goal-based agent, utility-based agent.
4. Write a problem formulation for the simple vacuum world with two floor tiles, which can either be clean or dirty, and a single vacuum.
5. How are states and nodes different? How are they similar?
6. What advantages do informed search algorithms have over uninformed search algorithms?
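The A*-versus-GBFS contrast in Q7 comes down to the priority function: both pop the frontier node with the lowest priority, but GBFS orders by h(n) alone while A* orders by g(n) + h(n). The sketch below makes that difference explicit; the 4-node graph and heuristic values are hypothetical, chosen so that GBFS returns a costlier path than A*.

```python
# One best-first skeleton covering both algorithms: use_g=False gives
# greedy best-first search (priority h), use_g=True gives A* (priority g + h).
import heapq

def best_first(start, goal, successors, h, use_g):
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in successors(node):
            g2 = g + cost
            priority = g2 + h(nxt) if use_g else h(nxt)
            heapq.heappush(frontier, (priority, g2, nxt, path + [nxt]))
    return None

# Hypothetical graph: GBFS chases low-h node A and returns S-A-G (cost 11);
# A* also weighs path cost and returns the optimal S-B-G (cost 5).
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("G", 10)], "B": [("G", 1)], "G": []}
H = {"S": 3, "A": 1, "B": 1, "G": 0}
```

GBFS tends to expand fewer nodes when the heuristic points the right way, which is when it beats A*; A* wins whenever path cost matters, since with an admissible heuristic it is optimal and GBFS is not.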
