Cognitive Architectures


Cognitive Architectures ACT-R

Outline
- A short glance at the history of ACT-R
- What is ACT-R?
- Mapping ACT-R onto the brain
- The ACT-R 5.0 architecture: components of ACT-R
- What is ACT-R used for?
- General discussion

History of the ACT framework
- 1976: the first ACT theory came out
- 1982: the first ACT implementation appeared
- Since then both the theory and the implementations have been developed further (ACT, ACT-R, ACT-R 2.0, ACT-R 3.0, ACT-R 4.0)
- 2001: release of ACT-R 5.0 (theory and implementation), which has been the state of the art since then

What is ACT-R? ACT-R is a cognitive architecture. Researchers working on ACT-R strive to understand how people organize knowledge and produce intelligent behaviour.

What is ACT-R? ACT-R is also a programming language: models are written in ACT-R, and while a model runs, ACT-R provides the runtime environment. Due to its special design as a cognitive architecture, models in ACT-R can mirror human behavior on cognitive psychology tasks.

What is ACT-R? ACT-R is also a framework based on facts derived from psychology experiments; models in ACT-R each reflect a certain aspect of cognition.

Framework? [Diagram: an ACT-R model embedded in its environment]

ACT-R Architecture
ACT-R's claim: cognition arises from the interaction between specific units of knowledge:
- Declarative knowledge. Unit: chunks (e.g. facts, goals, ...)
- Procedural knowledge. Unit: production rules (e.g. action rules, behavior rules, ...)

ACT-R Architecture [Diagram: chunks and production rules interacting with the environment]

ACT-R Architecture
- Chunks are created by specific modules: the visual module produces the chunk "Christian is in the visual field"; the motor module produces pressure on the left hand.
- Chunks set modules to action: "search for Christian" said to the visual module.
- Modules transmit and retrieve information only through buffers.
- Each module has a specific buffer for its chunks.

ACT-R Architecture [Diagram: modules exchange chunks with buffers; production rules operate on the buffers; the modules interact with the environment]

Mapping ACT-R onto the brain. Question: how is ACT-R related to recent studies in neurobiology and neuroimaging? Answer: all parts of ACT-R are designed to reflect certain brain areas!

Mapping ACT-R onto the brain: Modules. In a few examples we will try to give you a sketch of how ACT-R is designed. Visual system: there are two built-in visual modules in ACT-R, referring to the dorsal "where" pathway (locations) and the ventral "what" pathway (objects).

Mapping ACT-R onto the brain [Diagram: the visual module (occipital etc.) feeds the visual buffer (parietal), which the production rules read; the module interacts with the environment]

Mapping ACT-R onto the brain
As for the visual system, the other modules have been designed to match specific brain areas:
- Manual buffer = motor and somatosensory cortical areas
- Goal buffer = dorsolateral prefrontal cortex (DLPFC)
- Retrieval buffer = ventrolateral prefrontal cortex (VLPFC), long-term declarative memory

Mapping ACT-R onto the brain [Diagram: intentional module (not identified) -> goal buffer (DLPFC); declarative module (temporal/hippocampus) -> retrieval buffer (VLPFC); visual module (occipital etc.) -> visual buffer (parietal); manual module (motor/cerebellum) -> manual buffer (motor); the buffers connect to the production rules; the modules interact with the environment]

Mapping ACT-R onto the brain: Production rules
The basal ganglia are thought to implement production rules in ACT-R:
- Striatum: connected with cortical areas, responsible for pattern recognition (matching)
- Pallidum: inhibitory component, performs the conflict-resolution function (selection)
- Thalamus: projects to all major cortical areas, controls execution of production actions

Mapping ACT-R onto the brain: Production rules [Diagram: as above, with the production system (basal ganglia) decomposed into matching (striatum), selection (pallidum) and execution (thalamus)]

ACT-R Architecture [Diagram: the complete ACT-R 5.0 architecture; chunks sit in the buffers between the modules and the production system; labels: declarative memory, procedural memory, pattern matcher; the whole system is coupled to the environment]

The modules
There are two types of modules:
- memory modules: declarative memory and procedural memory
- perceptual-motor modules: take care of the interface with the simulation of the real world (the visual and the manual modules)

Chunks
Chunks are the units of declarative knowledge; they represent things remembered or perceived. Examples: 2+3=5; Boston is the capital of Massachusetts; there is an attended object in the visual field; ...

Chunks: examples
One way to model the fact 2+3=5.
DEFINITION:
(CHUNK-TYPE integer value)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
INSTANCE:
(three isa integer value 3)
A chunk has a NAME, a TYPE and {ATTRIBUTES}.

Chunks: examples
(CHUNK-TYPE integer value)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
(three    isa integer value 3)
(four     isa integer value 4)
(seven    isa integer value 7)
(fact3+4  isa addition-fact addend1 three addend2 four sum seven)
The slots of fact3+4 hold references to other chunks.

Chunks: examples [Diagram: the chunk FACT3+4 (isa ADDITION-FACT) points via ADDEND1 to THREE, via ADDEND2 to FOUR and via SUM to SEVEN; THREE, FOUR and SEVEN are INTEGER chunks with values 3, 4 and 7]

Chunks: examples
Encoding the fact "The cat sits on the mat."
(Chunk-Type proposition agent action object)
(Add-DM
  (fact007 isa proposition agent cat007 action sits_on object mat)
)
[Diagram: fact007 (isa proposition) with agent cat007, action sits_on, object mat]

Chunks: examples
Encoding the fact "The black cat with 5 legs sits on the mat."
(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)
(Add-DM
  (fact007 isa proposition agent cat007 action sits_on object mat)
  (cat007 isa cat legs 5 color black)
)
[Diagram: fact007 (isa proposition) with agent cat007, action sits_on, object mat; cat007 (isa cat) with legs 5, color black]

Chunks: examples [Diagram: a semantic network; animal (moves, has skin) with subtypes fish (gills, swims) and bird (wings, flies); fish: shark (dangerous), salmon (edible, swims); bird: canary (yellow, sings), ostrich (can't fly, tall)]

Productions
Procedural knowledge serves to achieve a given goal: processes, skills.
A production is a unit of procedural knowledge: a condition-action rule that fires when its conditions are satisfied and executes the specified actions.
[Architecture diagram, with the production system (basal ganglia) highlighted]

Productions
Conditions can depend on:
- the current goal to be achieved
- the state of declarative knowledge (i.e. recall of a chunk)
- the current sensory input from the external environment
Actions can:
- alter the state of declarative memory
- change goals
- initiate motor actions in the external environment

Structure of Productions
(P name
   Specification of Buffer Tests            ; condition part
 ==>                                        ; delimiter
   Specification of Buffer Transformations  ; action part
)

Example of productions
(P increment
   =goal>
      ISA         count-from
      number      =num1
   =retrieval>
      ISA         count-order
      first       =num1
      second      =num2
 ==>
   =goal>
      number      =num2
   +retrieval>
      ISA         count-order
      first       =num2
)
If the goal is to count from =num1, and a chunk of type count-order has been retrieved where the first number is =num1 and it is followed by =num2, then change the goal to continue counting from =num2 and request retrieval of the count-order fact for the number that follows =num2.

Example of productions
(P find-next-word
   =goal>
      ISA         comprehend-sentence
      word        nil              ; no word currently being processed
 ==>
   +visual-location>
      ISA         visual-location
      screen-x    lowest
      attended    nil              ; find the left-most unattended location
   =goal>
      word        looking          ; update state
)

Example of productions
(P attend-next-word
   =goal>
      ISA         comprehend-sentence
      word        looking          ; looking for a word
   =visual-location>
      ISA         visual-location  ; a visual location has been identified
 ==>
   =goal>
      word        attending        ; update state
   +visual>
      ISA         visual-object
      screen-pos  =visual-location ; attend to the object at that location
)

Discussion
- The atomic components of thought?
- Is declarative knowledge (= chunks) available to every cognitive module?
- Can semantics be modeled with arbitrary chunks? Can chunks be of any granularity (a pixel in the visual field vs. "Chris is standing in front of me")?
- Can the timing of the computation be compared with humans?
- Does the division into symbolic and subsymbolic processing make sense?
- Is ACT-R just a strange kind of programming language?


The perceptual-motor modules
- There are no real sensors and effectors: the output of the visual system and the input to the motor system are just modeled.
- The visual and manual modules are the most important ones (because many computer tasks involve scanning the screen, typing, moving the mouse, ...).

The ACT-R visual system
The visual system consists of:
- the visual location module ("where" -> dorsal stream)
- the visual object module ("what" -> ventral stream)
A production sends the visual location module a request consisting of constraints (e.g. screen-x lowest; color: red; screen-y greater than 15; ...) and receives in response a chunk for a location meeting those constraints (e.g. the leftmost word, or a red object among green ones; this supports experimental data on visual pop-out effects).
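To make the request/response idea concrete, here is a minimal sketch in plain Python (not the real ACT-R request syntax; the location format, constraint names and function name are illustrative only):

# Sketch of a "where"-style request: filter candidate locations by
# constraints, then pick the one with the lowest screen-x.
from dataclasses import dataclass

@dataclass
class Location:
    screen_x: int
    screen_y: int
    color: str
    attended: bool

def find_location(candidates, color=None, attended=None):
    """Return the leftmost candidate matching the given constraints, or None."""
    matches = [loc for loc in candidates
               if (color is None or loc.color == color)
               and (attended is None or loc.attended == attended)]
    return min(matches, key=lambda loc: loc.screen_x, default=None)

if __name__ == "__main__":
    scene = [Location(120, 40, "green", True),
             Location(60, 45, "red", False),
             Location(200, 42, "green", False)]
    print(find_location(scene, color="red"))       # the red "pop-out" object
    print(find_location(scene, attended=False))    # leftmost unattended location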

The ACT-R visual system
In the second step, a production takes the chunk representing the visual location and requests an attention shift to that location from the visual object module ("what" -> ventral stream), which then delivers a chunk for the object found there.


The goal module
In humans: suppose the goal is to add 64 + 36. Assumption: the sum is not already stored, so one has to go through a series of substeps to come up with the answer and keep track of the various partial results (e.g. the sum of the tens digits). The goal module has the responsibility of keeping track of what these intentions are, so that behavior will serve that goal.
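As a toy illustration (plain Python, not an ACT-R model; the function name is made up), adding 64 + 36 column by column requires holding partial results such as the ones sum and the carry until the final answer can be assembled:

# Toy illustration of the substeps and partial results in adding 64 + 36;
# each intermediate value corresponds to something held in the goal state.
def add_two_digit(a, b):
    ones_sum = (a % 10) + (b % 10)            # partial result: 4 + 6 = 10
    carry, ones_digit = divmod(ones_sum, 10)  # partial results: carry 1, digit 0
    tens_sum = (a // 10) + (b // 10) + carry  # partial result: 6 + 3 + 1 = 10
    return tens_sum * 10 + ones_digit

print(add_two_digit(64, 36))  # 100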

How could the goal buffer be organized? A good example of goal-subgoal structures in problem solving is the Tower of Hanoi problem. The naive human response is to move the disks toward their ultimate locations (greedy), but a goal-subgoal strategy is often discovered during practice.

The Tower of Hanoi problem. The interest of Anderson et al. (Tower of Hanoi: Evidence for the Cost of Goal Retrieval, 2002) is not which strategy is adopted, but what it tells us about goal-subgoal interaction and what the cognitive costs of implementing a subgoaling strategy are.

Experimental setup:
- the strategy to solve the problem was given and trained
- task: solve the problem as fast as possible
- a goal is formulated by clicking the disk on the source peg, then the destination peg; the move is then either carried out or posted as a goal
- the time between actions and the accuracy of moves were measured; eye movements were also recorded

[Screenshot: user interface of the task]

Strategy - Algorithm
1. Formulate a goal.
2. Decision: if a legal move to achieve your goal is possible, do it and skip the next step (3); otherwise post the goal on the goal stack.
3. Formulate a prerequisite goal: if you cannot move a disk D, find the largest disk that is blocking the move and make it your goal to move it to the peg that is neither the source nor the destination peg of D.
4. Try again: go back to step 2 to see whether you can achieve the last goal you posted.
5. Repeat the process: go back to step 1 until all disks are at their final position.
A sketch of this strategy in code follows below.
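Here is a minimal sketch of the goal-posting strategy in plain Python (not ACT-R code; the state representation, peg labels and function name are illustrative only):

# Goal-stack strategy for the Tower of Hanoi: larger number = larger disk,
# each peg list goes bottom -> top, goals are (disk, destination) pairs.
def solve_hanoi(n_disks=3, source="A", spare="B", target="C"):
    pegs = {source: list(range(n_disks, 0, -1)), spare: [], target: []}
    goal_stack, moves = [], []

    def peg_of(disk):
        return next(p for p in pegs if disk in pegs[p])

    while len(pegs[target]) < n_disks:
        if not goal_stack:
            # 1. formulate a goal: move the largest disk not yet on the target peg
            disk = max(d for d in range(1, n_disks + 1) if d not in pegs[target])
            goal_stack.append((disk, target))
        disk, dest = goal_stack[-1]
        src = peg_of(disk)
        blockers = pegs[src][pegs[src].index(disk) + 1:] + [d for d in pegs[dest] if d < disk]
        if not blockers:
            # 2. the move is legal: do it and pop the goal
            pegs[src].remove(disk)
            pegs[dest].append(disk)
            moves.append((disk, src, dest))
            goal_stack.pop()
        else:
            # 3. post a prerequisite goal: move the largest blocking disk to the
            #    peg that is neither the source nor the destination of this move
            other = next(p for p in pegs if p not in (src, dest))
            goal_stack.append((max(blockers), other))
        # 4./5. loop back and work on the most recently posted goal
    return moves

print(solve_hanoi(3))  # the classic 7-move solution for three disks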

Example demo [Animation: pegs A, B, C with a goal stack; each formulated goal is either carried out ("Do it!") or posted on the stack ("Post it!")]

Results:

Results
- Participants are slower at those points where they must retrieve a goal, and all the slower the longer ago it was posted.
- The accuracy data suggest that participants are forgetting their goals.
- The tendency to inspect the goal stack increases dramatically at those retrieval points.
- Goal retrieval seems to be the major factor limiting performance in this task.

But what does ACT-R have to do with this experiment?

ACT-R
ACT-R was used to model this task. ACT-R 4.0 had a perfect-memory goal stack on which all goals could be stored perfectly and accessed without any retrieval time costs. BUT: the data show clear goal limitations! Altmann & Trafton: memory for goals might behave like any other memory and be subject to forgetting.

New ACT-R model
- Gets rid of the goal stack!
- Relies on ACT-R's general declarative memory to store goals.
- In ACT-R each chunk has a base-level activation that increases each time the chunk is used and decreases with lack of use.
- A Gaussian retrieval-probability function is defined over the base-level activation. (A sketch of base-level activation follows below.)
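For illustration, a minimal sketch assuming the standard ACT-R base-level learning equation B_i = ln(sum_j t_j^(-d)), where t_j are the times since each past use and d is a decay parameter; the parameter values, function names and the simple noise-plus-threshold retrieval rule here are illustrative, not the exact model from the paper:

# Rough sketch of base-level activation and noisy retrieval.
import math, random

def base_level_activation(times_since_use, d=0.5):
    """B_i = ln( sum_j t_j^(-d) ): recent and frequent use -> higher activation."""
    return math.log(sum(t ** -d for t in times_since_use))

def retrieved(times_since_use, threshold=0.0, noise_sd=0.3):
    """Retrieval succeeds if activation plus Gaussian noise exceeds the threshold."""
    return base_level_activation(times_since_use) + random.gauss(0, noise_sd) > threshold

# a goal chunk used 1 s and 5 s ago is easier to retrieve than one used 60 s ago
print(base_level_activation([1.0, 5.0]))   # ~ 0.37
print(base_level_activation([60.0]))       # ~ -2.05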

Results [Plots comparing model predictions and data; each shows a nice fit]

Conclusions of this research:
- Cognitive architectures like ACT-R (4.0) or SOAR are wrong in their assumption of a special goal stack.
- Goals in the subgoaling task are probably no different from other sorts of intentions people set.
- Goals appear to behave like other, more common kinds of declarative memory and show the same effects of practice and retention interval.


The buffers. ACT-R accesses its modules (except for the procedural-memory module) through buffers. For each module, a dedicated buffer serves as the interface with that module. The contents of the buffers at a given moment in time represent the state of ACT-R at that moment.

The buffers
- Each buffer can hold a relatively small amount of information (one chunk).
- Chunks that were former buffer contents are stored in the declarative memory module.
- Buffers are conceptually similar to Baddeley's working-memory slave systems.
- The central cognitive system can only sense the contents of the buffers; the contents of the chunks can only be accessed by the highly specialized modules.

The buffers
The most important buffers in ACT-R are:
- Goal buffer: keeps track of one's internal state in solving a problem; preserves information across production cycles
- Retrieval buffer: holds information retrieved from long-term declarative memory; seat of the chunk activation calculations
- Manual buffer: responsible for control of the hands
- Visual "where" buffer: locations
- Visual "what" buffer: visual objects; attention shifts correspond to buffer transformations


Pattern matcher. The pattern matcher searches for a production that matches the current state of the buffers. Only one such production can be executed at a given moment. That production, when executed, can modify the buffers and thus change the state of the system. Thus, in ACT-R, cognition unfolds as a succession of production firings. (A sketch of this match-select-fire cycle follows below.)
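A simplified sketch of the cycle in plain Python (not ACT-R's actual matcher; buffers are a plain dict and productions are made-up condition/action pairs):

# Simplified match-select-fire cycle: exactly one matching production fires
# per cycle, and firing modifies the buffers.
def run(buffers, productions, max_cycles=10):
    for cycle in range(max_cycles):
        matching = [p for p in productions if p["condition"](buffers)]
        if not matching:
            break                      # no production matches: the model halts
        chosen = matching[0]           # conflict resolution picks a single one
        chosen["action"](buffers)      # firing modifies the buffers
        print(f"cycle {cycle}: fired {chosen['name']}, goal = {buffers['goal']}")

# toy model: count from 2 up to 4 by repeatedly firing one production
productions = [{
    "name": "increment",
    "condition": lambda b: b["goal"]["number"] < b["goal"]["target"],
    "action": lambda b: b["goal"].update(number=b["goal"]["number"] + 1),
}]
run({"goal": {"number": 2, "target": 4}}, productions)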

Production selection
Making choices: conflict resolution.
Expected gain: E = PG - C, where P is the expected probability of success, G is the value of the goal and C is the expected cost.
Probability of choosing production i: P(i) = e^{E_i/t} / sum_j e^{E_j/t}, where t reflects the noise in the evaluation and acts like the temperature in the Boltzmann equation.
P = Successes / (Successes + Failures), with Successes = α + m and Failures = β + n, where α is the prior successes, m the experienced successes, β the prior failures and n the experienced failures.
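A small sketch of these formulas in plain Python (the goal value, cost, priors and function names are made up for illustration):

# Conflict resolution: expected gain E = PG - C, softmax choice over the
# competing productions.
import math

def expected_gain(prior_s, prior_f, succ, fail, G=20.0, C=1.0):
    P = (prior_s + succ) / ((prior_s + succ) + (prior_f + fail))
    return P * G - C

def choice_probabilities(gains, t=1.0):
    weights = [math.exp(E / t) for E in gains]
    total = sum(weights)
    return [w / total for w in weights]

# two competing productions: one with a better success history than the other
gains = [expected_gain(1, 1, succ=8, fail=2),   # mostly successful
         expected_gain(1, 1, succ=3, fail=7)]   # mostly failing
print(gains)                        # e.g. [14.0, 5.67]
print(choice_probabilities(gains))  # the first production is chosen almost always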

Outlook: What is ACT-R used for?

What is ACT-R used for? ACT-R has been used successfully to create models in domains such as learning and memory, problem solving and decision making, language and communication, perception and attention, cognitive development, and individual differences. But not only in tasks of cognitive psychology: ACT-R also has practical applications.

What is ACT-R used for? [Slide with examples of applications]

General Discussion
- Modularity. Fodor: higher-level cognition is impossible to encapsulate into separate components.
- General doubts about the success of function localization in brain-imaging research.

References
Anderson, J. R. & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.
Anderson, J. R. & Bothell, D. (2002). An Integrated Theory of the Mind.
Anderson et al. (2002). Tower of Hanoi: Evidence for the Cost of Goal Retrieval.
http://act.psy.cmu.edu