Social Robots and Human-Robot Interaction Ana Paiva. Lecture 8. Dialogues with Robots

1 Social Robots and Human-Robot Interaction Ana Paiva Lecture 8. Dialogues with Robots

2 Our goal: Build Social Intelligence

3 The problem: When and how should a robot act or say something to the user? - What to say (message content) - When to say it (timing, turn-taking) - How to say it (gestures, non-verbal behaviours). But in order to do that, the robot first needs to understand what the user said.

4 Communication is hard and miscommunication is easy

5 Let's look at communication: say/order → perceive → understand → respond → reply (what, how, when)

6 Let's look at communication: say/order → perceive → understand → respond

7 Let's look at communication: say/order → perceive → understand → respond. How can a robot understand an order from a user and relate it to actions and objects in the physical world?

8 Perceive & Understand: Giving Commands to the Robot. Goal: to infer groundings in the world. Use a specific description: Spatial Description Clauses (SDCs). An SDC corresponds to a constituent of the linguistic input and contains: a figure f, a relation r, and a variable number of landmarks l. Each SDC has a type: EVENT: an action sequence that takes place in the world (e.g. "Move the tire pallet"). OBJECT: a thing in the world (e.g. the forklift, the truck, the person). PLACE: a place in the world (e.g. "next to the tire pallet"). PATH: a path or path fragment through the world (e.g. "past the truck").
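To make the SDC structure concrete, here is a minimal sketch of how such clauses could be represented in code. The class, the field names, and the example decomposition are illustrative assumptions for this sketch, not the representation used in the original forklift system.

from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of a Spatial Description Clause (SDC). Field names follow
# the slide (figure, relation, landmarks, type); everything else is an assumption.

SDC_TYPES = {"EVENT", "OBJECT", "PLACE", "PATH"}

@dataclass
class SDC:
    sdc_type: str                     # one of EVENT, OBJECT, PLACE, PATH
    figure: Optional[str] = None      # the figure f (e.g. "the tire pallet")
    relation: Optional[str] = None    # the relation r (e.g. "next to", "past")
    landmarks: List["SDC"] = field(default_factory=list)  # variable number of landmarks l

    def __post_init__(self):
        if self.sdc_type not in SDC_TYPES:
            raise ValueError(f"Unknown SDC type: {self.sdc_type}")

# "Move the tire pallet next to the truck" might decompose into nested SDCs:
command = SDC(
    sdc_type="EVENT",
    relation="move",
    landmarks=[
        SDC(sdc_type="OBJECT", figure="the tire pallet"),
        SDC(sdc_type="PLACE", relation="next to",
            landmarks=[SDC(sdc_type="OBJECT", figure="the truck")]),
    ],
)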

9 Creating a system based on a collected corpus of actions: a data-driven approach. To train the system, a corpus of natural language commands was collected; the commands were paired with robot actions and environment state sequences; the corpus was used both to train the model and to evaluate end-to-end performance of the system.

10 Giving Commands to the Robot: Creating a Corpus. Using videos of action sequences on Amazon's Mechanical Turk, a corpus was created by collecting language associated with each video. The videos showed a simulated robotic forklift performing an action such as picking up a pallet or moving through the environment. Paired with each video, there was a complete log of the state of the environment and the robot's actions. Subjects were asked to type a natural language command that would cause an expert human forklift operator to carry out the action shown in the video. Commands were collected from 45 subjects for twenty-two different videos showing the forklift executing an action in a simulated warehouse.

11 Evaluation of the Corpus The model was assessed in terms of its performance at predicting the correspondence between the acquired structures (SDCs) and groundings. An evaluation was also performed using known correct and incorrect command-video pairs. C1: subjects saw a command paired with the original video that a different subject watched when creating the command. C2: subjects saw the command paired with a random video that was not used to generate the original command.

12 Giving Commands to the robot

13 Learning from a Robot through Dialogue Problem with the previous approach: the interaction has to be learned (by collecting data) and thus it is limited to that particular domain. If we place no restrictions on speech, interpreting a command given to a robot is a challenging problem, as users may say all kinds of things.

14 Approach: Learning from a Robot through Dialogue and Access to the Web Learning and using task-relevant knowledge from human-robot dialog and access to the Web

15 Learning from a Robot through Dialogue and Access to the Web KnoWDiaL, an approach for robot learning of task-relevant environmental knowledge from human-robot dialog and access to the Web.

16 Learning from a Robot through Dialogue and Access to the Web The speech recognizer returns a set of possible interpretations; these interpretations are the input for the first component of KnoWDiaL, a frame-semantic parser. The parser labels the list of speech-to-text candidates and stores them in pre-defined frame elements, like action references, locations, objects or people.
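As a rough illustration of what labelling speech-to-text candidates with frame elements can look like, here is a toy frame-labelling pass over recognition hypotheses. The Frame structure, the keyword lists, and the regular expressions are assumptions made for this sketch; KnoWDiaL's actual frame-semantic parser is more sophisticated.

import re
from dataclasses import dataclass, field
from typing import Dict, List

# Toy labelling of speech-to-text candidates with frame elements
# (action, object, location). Keyword matching is deliberately naive.

@dataclass
class Frame:
    action: str = ""
    elements: Dict[str, str] = field(default_factory=dict)

ACTIONS = {"go": "GoTo", "bring": "Bring", "take": "Bring", "deliver": "Bring"}
LOCATION_PAT = re.compile(r"\b(?:to|at|in)\s+(the\s+)?(?P<loc>\w+)")
OBJECT_PAT = re.compile(r"\b(?:bring|take|deliver)\s+(?:me\s+)?(?:a|an|the)?\s*(?P<obj>\w+)")

def parse_candidate(text: str) -> Frame:
    frame = Frame()
    lowered = text.lower()
    for word, action in ACTIONS.items():
        if word in lowered:            # naive substring check, enough for the sketch
            frame.action = action
            break
    m = LOCATION_PAT.search(lowered)
    if m:
        frame.elements["location"] = m.group("loc")
    m = OBJECT_PAT.search(lowered)
    if m:
        frame.elements["object"] = m.group("obj")
    return frame

# The speech recogniser returns several hypotheses; each is labelled separately.
hypotheses = ["bring me a coffee to the lab", "ring me a copy to the lab"]
frames = [parse_candidate(h) for h in hypotheses]
print(frames)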

17 Learning from a Robot through Dialogue and Access to the Web The Knowledge Base stores groundings of commands encountered in previous dialogues. A grounding is simply a probabilistic mapping of a specific frame element obtained from the frame-semantic parser to locations in the building or tasks the robot can perform.

18 Learning from a Robot through Dialogue and Access to the Web The Grounding Model uses the information stored in the Knowledge Base to infer the correct action to take when a command is received.

19 Learning from a Robot through Dialogue and Access to the Web Sometimes, not all of the parameters required to ground a spoken command are available in the Knowledge Base. When this happens, the Grounding Model resorts to OpenEval, the fourth component of KnoWDiaL. OpenEval is able to extract information from the World Wide Web to fill in the missing parameters of the Grounding Model.
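The control flow described over the last few slides (look up a grounding in the Knowledge Base, fall back to the Web when parameters are missing, otherwise ask the user) can be sketched as follows. The data layout, the confidence threshold, and the query_web_evaluator stub are placeholders for this sketch, not the KnoWDiaL implementation.

from typing import Dict, Optional, Tuple

# Knowledge Base: probabilistic mappings from frame-element phrases to
# groundings (locations or tasks). The contents below are invented.
KNOWLEDGE_BASE: Dict[Tuple[str, str], Dict[str, float]] = {
    ("location", "the lab"): {"room_7408": 0.8, "room_7412": 0.2},
    ("action", "Bring"): {"task_deliver_object": 1.0},
}

def query_web_evaluator(element_type: str, phrase: str) -> Dict[str, float]:
    """Stand-in for an OpenEval-style web query that scores candidate
    groundings for an unknown phrase. Here it simply returns nothing."""
    return {}

def ground(element_type: str, phrase: str, threshold: float = 0.5) -> Optional[str]:
    candidates = KNOWLEDGE_BASE.get((element_type, phrase), {})
    if not candidates:
        # Missing parameter: fall back to information extracted from the Web.
        candidates = query_web_evaluator(element_type, phrase)
    if not candidates:
        return None  # the dialogue manager would ask the user instead
    best, score = max(candidates.items(), key=lambda kv: kv[1])
    return best if score >= threshold else None

print(ground("location", "the lab"))      # -> room_7408
print(ground("location", "the kitchen"))  # -> None (would trigger a clarification question)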

20 Learning from a Robot through Dialogue and Access to the Web: Example

21 Let's look at communication: say/order → perceive → understand → respond → reply (what, how, when)

22 Components of a Dialogue System for a Social Robot
Perceive & Understand:
- NL (natural language) system: a parser and generator using a grammar developed for human-robot conversation.
- Speech recogniser: can use a speech recognition server with a language model for the language we need (the recognised utterances may correspond to a logical form mapped into a string).
Generate and Synthesize:
- NL (natural language) system: a generator using a grammar developed for human-robot conversation.
- TTS (text-to-speech): a speech synthesizer for the robot's speech output.
Manage Dialogue and Non-verbal Communication:
- Dialogue manager: coordinates multi-modal inputs from the user and interprets the actions of the user (through several modules) as dialogue moves. The dialogue manager must also update and maintain the dialogue context (and common ground), handling questions and miscommunication events.
- Gestures and non-verbal behaviour handler.

23 Typical Architecture
- Dialogue Manager: decision making and action selection system.
- Understanding Module: non-verbal behaviour understanding; verbal behaviour understanding (NLU).
- Generation Module: non-verbal behaviour generation (gaze and gesture); verbal behaviour generation (NLG).
- Input and output components: vision processing, speech recognition system, motion control, text-to-speech.
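A minimal sketch of how these modules might be wired into a perceive-decide-generate loop is shown below. All class and method names are placeholders rather than a particular robot framework; a real system would plug in ASR, NLU, NLG, TTS, vision and motion-control components behind each module.

class UnderstandingModule:
    def interpret(self, speech_text, vision_frame):
        # Verbal (NLU) and non-verbal understanding would be combined here.
        return {"dialogue_move": "request", "text": speech_text, "gaze": vision_frame}

class DialogueManager:
    """Decision making and action selection; maintains the dialogue context."""
    def __init__(self):
        self.context = []

    def decide(self, interpretation):
        self.context.append(interpretation)
        return {"action": "fetch the report"}   # placeholder policy

class GenerationModule:
    def realise(self, decision):
        # Verbal (NLG) and non-verbal (gaze/gesture) generation.
        return {"utterance": f"OK, I will {decision['action']}.", "gesture": "nod"}

def dialogue_step(speech_text, vision_frame, understanding, manager, generation):
    interpretation = understanding.interpret(speech_text, vision_frame)
    decision = manager.decide(interpretation)
    return generation.realise(decision)         # would be sent to TTS and motion control

understanding, manager, generation = UnderstandingModule(), DialogueManager(), GenerationModule()
print(dialogue_step("please bring me the report", "user_facing_robot",
                    understanding, manager, generation))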

24 Components of a Dialogue System for a Social Robot
Perceive & Understand:
- NL (natural language) system: a parser and generator using a grammar developed for human-robot conversation.
- Speech recogniser: can use a speech recognition server with a language model for the language we need (the recognised utterances may correspond to a logical form mapped into a string).
Generate and Synthesize:
- NL (natural language) system: a generator using a grammar developed for human-robot conversation.
- TTS (text-to-speech): a speech synthesizer for the robot's speech output.
Manage Dialogue and Non-verbal Communication:
- Dialogue manager: coordinates multi-modal inputs from the user, interprets the actions of the user (through several modules), and at the same time maintains the dialogue context (and common ground), handling questions and miscommunication events.
- Gestures and non-verbal behaviour handler (associated with communicative functions).

25 Non-verbal Communication as a Mechanism for Managing Conversations. There are different mechanisms for managing conversations:
- Interlocutors of a conversation engage in discourse at varying levels of involvement: their participant roles or footing, the participation structure of the conversation.
- Roles shift among conversational participants through a turn-taking mechanism, which allows interlocutors to seamlessly exchange speaking turns, interrupt, etc.
- Participants in a conversation create a discourse, that is, a composition of discourse segments in particular structures [Grosz and Sidner 1986]. Such structures signal shifts in topic or how information is organized. Speakers also produce a number of cues that signal these structures, invite contributions from other participants, or direct attention to important information (these signals include not only verbal cues but also non-verbal cues, in particular gaze and gestures).

26 Let's focus on non-verbal communication. With robots, given their physical embodiment, we can add to the verbal communication some level of non-verbal communication: gaze, head nods.

27 Let's focus on non-verbal communication. With robots, given their physical embodiment, we can add to the verbal communication some level of non-verbal communication: gaze, head nods.

28 Gaze. During interaction people look each other in the eye while listening and talking; without eye contact, people do not feel they are in communication! Gaze provides a number of potential social cues that people can use to learn about the social context, about the environment (objects and events), or even about the internal (emotional and intentional) states of others. Gaze cues serve a number of functions in conversations: clarify who is addressed; help the speaker hold the floor (turn-taking).

29 Types of Gaze Direction
- Mutual gaze: the attention of two individuals is directed to one another.
- Gaze following: individual A detects that B's gaze is not directed towards them, and follows B's line of sight onto a point in space.
- Joint attention: similar to gaze following, except that there is a focus of attention, for example an object, that the two individuals A and B are looking at, at the same time.
- Shared attention: a combination of mutual attention and joint attention, so the focus of both individuals A's and B's attention is not only on the object but also on each other (example: I know you're looking at X, and you know that I'm looking at X).
- Theory of mind: uses a combination of the previous attentional processes plus higher-order cognitive strategies, allowing an individual to reason about the other's attention.
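For illustration, the first three configurations can be told apart from instantaneous gaze targets alone; shared attention and theory of mind additionally require mutual awareness over time, so a toy classifier like the one below (with invented target labels) only covers the simpler cases.

# Toy classifier for the simpler gaze configurations above, assuming we already
# know each agent's current gaze target ("A", "B", or an object id).

def classify_gaze(a_target: str, b_target: str) -> str:
    """a_target / b_target: what A and B are looking at ("A", "B", or an object)."""
    if a_target == "B" and b_target == "A":
        return "mutual gaze"
    if a_target == b_target and a_target not in ("A", "B"):
        return "joint attention"                 # both look at the same object
    if a_target == "B" and b_target not in ("A", "B"):
        return "A attending to B's gaze"         # A can now follow B's line of sight
    if b_target == "A" and a_target not in ("A", "B"):
        return "B attending to A's gaze"
    return "no shared focus"

print(classify_gaze("B", "A"))      # mutual gaze
print(classify_gaze("cup", "cup"))  # joint attention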

30 Gaze. Gaze cues can also be used in social robots to serve functions in conversations: clarify who is addressed; help the speaker hold the floor (turn-taking); help in signaling changes in the topic of conversation.

31 Gaze and Mechanisms for Managing Conversations. There are different gaze-based mechanisms for managing conversations:
- (Who) Role-signaling mechanisms (participation structure): speaker gaze cues may signal the roles of interlocutors [Bales et al. 1951].
- (When) Turn-taking mechanisms (conversation structure): speaker gaze cues can facilitate turn exchanges (producing turn-yielding, turn-taking, and floor-holding gaze signals) [Kendon 1967].
- (What and How) Topic-signaling mechanisms (information structure): patterns in gaze shifts can be temporally aligned with the structure of the speaker's discourse, signaling changes in topic or shifts between thematic units of utterances [Cassell et al. 1999b].
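A robot's gaze controller could combine these three mechanisms when choosing where to look. The sketch below is purely illustrative, with placeholder probabilities and rules rather than the parameters derived from the data collection described in the following slides.

import random

# Illustrative gaze-target selection combining participant roles (who),
# turn state (when), and topic shifts (what/how). All numbers are placeholders.

def select_gaze_target(roles, turn_state, topic_shift):
    """roles: dict name -> "addressee" | "bystander" | "overhearer"
    turn_state: "holding" | "yielding"
    topic_shift: True at the onset of a new thematic unit."""
    addressees = [p for p, r in roles.items() if r == "addressee"]
    bystanders = [p for p, r in roles.items() if r == "bystander"]

    if topic_shift:
        return "environment"              # gaze aversion often marks a topic shift
    if turn_state == "yielding" and addressees:
        return random.choice(addressees)  # look at who should speak next
    # While holding the floor, mostly look at addressees, occasionally at bystanders.
    if bystanders and random.random() < 0.1:
        return random.choice(bystanders)
    return random.choice(addressees) if addressees else "environment"

roles = {"Ana": "addressee", "Rui": "bystander"}
print(select_gaze_target(roles, turn_state="holding", topic_shift=False))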

32 Modeling Conversational Gaze Mechanisms: Approaches. Approaches: Theory-driven (based on theories of human communication and the role of gaze, models can be built to replicate certain identified functions; e.g. use of the Politeness Theory by Brown and Levinson 1987); Empirically driven (based on experiments and data collected with the precise scenarios in which we will build the robot's gaze behaviour); A combination of both (theory and empirical data).

33 Case study

34 Case: Modeling Conversational Gaze Mechanisms Goal: Build a model for Gaze behaviour based on both theory and empirical data. Initial data collection: - to capture the basic spatial and temporal parameters of gaze cues - to capture aspects of conversational mechanisms that signal information, conversation, and participation structures.

35 Data collection for a Gaze model

36 Data collection for a Gaze model - Subjects' gaze behavior was captured using high-definition cameras placed across from their seats. - Subjects' speech was captured using stereo microphones attached to their collars. - The cameras provided video sequences of subjects' faces (from hair to chin). - An additional camera on the ceiling was used to capture the interaction space. - In total, there were 45 minutes of video for each subject and 180 minutes of data for each triad from four cameras. - The final analysis included an examination of the video data; coding of speech and gaze events from the video; descriptive statistics, in particular calculating the frequencies of and co-occurrences among events; and computing the distribution parameters for the temporal and spatial properties of these events.
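The kind of coding and descriptive statistics mentioned here (event frequencies, co-occurrences, and duration distributions) can be sketched as follows; the event records and time stamps are invented for illustration.

from collections import Counter
from statistics import mean, stdev

# Minimal sketch of descriptive statistics over coded gaze and speech events.

gaze_events = [  # (target, start_s, end_s)
    ("addressee_face", 0.0, 1.8), ("addressee_body", 1.8, 4.0),
    ("environment", 4.0, 4.9), ("addressee_face", 4.9, 6.5),
]
speech_events = [("turn", 0.0, 6.0)]

# Frequencies of each gaze target
print(Counter(target for target, _, _ in gaze_events))

# Distribution parameters of gaze-event durations
durations = [end - start for _, start, end in gaze_events]
print(f"mean duration {mean(durations):.2f}s, sd {stdev(durations):.2f}s")

# Co-occurrence: how many gaze events overlap a speech event
def overlaps(a, b):
    return a[1] < b[2] and b[1] < a[2]

co_occurrences = sum(1 for g in gaze_events for s in speech_events if overlaps(g, s))
print("gaze events overlapping speech:", co_occurrences)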

37 Analysis of data for a Gaze model - Where Do Speakers Look?

38 Analysis of data for a Gaze model - Where Do Speakers Look?

39 Analysis of data for a Gaze model - Where Do Speakers Look?

40 Analysis of data for a Gaze model - How Much Time Do Speakers Spend Looking at Each Target? The speaker looked at his addressees for the majority of the time: 74%, 76%, and 71% in the two-party, two-party-with-bystander, and three-party conversations, respectively. In the first two scenarios, the speaker looked at the bodies of his addressees more than he looked at their faces (26% and 25% at the faces and 48% and 51% at the bodies). In all scenarios, the speaker spent a significant amount of time looking away from addressees (26%, 16%, and 29% of the time in the three conversational situations, respectively).

41 Analysis of data for a Gaze model Each thematic field was mapped onto the speech timeline along with gaze shifts (4000-millisecond periods before the beginning and after the end of the thematic field). This mapping allowed the identification of patterns in gaze shifts that occurred at the onset of each thematic field and the quantification of the frequency of occurrence of each pattern. Two main recurring patterns of gaze shifts were found in the two-party and two-party-with-bystander conversations, and another set of two patterns in the three-party conversation.
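A sketch of this alignment step, using the 4000-millisecond window around thematic-field onsets, might look like the following; the field labels, gaze-shift records, and timestamps are invented.

# Sketch of aligning gaze shifts with thematic-field onsets.

WINDOW_MS = 4000

thematic_fields = [  # (label, onset_ms, offset_ms)
    ("introduce topic", 0, 9500),
    ("new topic", 9500, 21000),
]
gaze_shifts = [  # (target, time_ms)
    ("away_from_addressee", 8900),
    ("to_addressee", 10400),
    ("to_bystander", 15000),
]

def shifts_near_onset(onset_ms, shifts, window_ms=WINDOW_MS):
    """Gaze shifts within +/- window_ms of a thematic-field onset."""
    return [s for s in shifts if abs(s[1] - onset_ms) <= window_ms]

for label, onset, _ in thematic_fields:
    pattern = [target for target, _ in shifts_near_onset(onset, gaze_shifts)]
    print(f"{label!r} onset: {pattern}")
# Counting how often each such pattern recurs across passages would give the
# pattern frequencies reported on this slide.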

42 Analysis of data for a Gaze model

43 Analysis of data for a Gaze model

44 Analysis of data for a Gaze model: Who is who (roles) in a conversation?

45 Analysis of data for a Gaze model: Who is who (roles) in a conversation. Three gaze cues were identified that signal the participant roles of the speaker's interlocutors: (1) Greetings and summonses. (2) The body of the conversation: the speaker spent the majority of his speaking time looking at addressees (74% of the time, and at the environment 26% of the time, in the first scenario) and, in the second scenario, looked towards the addressee, bystander, and the environment 76%, 8%, and 16% of the time, respectively. (3) Turn-exchanges.

46 Analysis of data for a Gaze model: Who is who (roles) in a conversation. Three gaze cues were identified that signal the participant roles of the speaker's interlocutors: (1) Greetings and summonses. (2) The body of the conversation: the speaker spent the majority of his speaking time looking at addressees (74% of the time, and at the environment 26% of the time, in the first scenario) and, in the second scenario, looked towards the addressee, bystander, and the environment 76%, 8%, and 16% of the time, respectively. (3) Turn-exchanges.

47 Gaze patterns in the social robot

48

49 Evaluation of the implemented gaze patterns

50 Hypotheses of the study Hypothesis 1: Subjects will correctly interpret the footing signals that the robot communicates to them and conform to these roles in their participation in the conversation. Hypothesis 2: Addressees will have better recall of the details of the information presented by the robot than bystanders and overhearers will, as the robot will look toward the addressees significantly more. Hypothesis 3: Addressees or bystanders will evaluate the robot more positively than overhearers will. Hypothesis 4: Addressees will express stronger feelings of groupness (with the robot and the other subject) than bystanders and overhearers will.

51 Conditions of the study A total of 72 subjects participated in the experiment in 36 trials Condition 1. The robot produced gaze cues for an addressee and an overhearer (ignoring the individual in the latter role), following the norms of a two-party conversation. Condition 2. Gaze cues were produced for an addressee and a bystander, signaling the participation structure of a two-party conversation with bystander. Condition 3. The robot produced gaze cues for two addressees, following the participant roles of a three-party conversation.

52 Variables of the study The manipulation of the robot's gaze behavior was the only independent variable. The dependent variables involved three kinds of measurements: behavioral, objective, and subjective. Behavioral: subjects' behavior was captured using high-definition cameras, and from the video and audio data it was measured whether subjects took turns in responding to the robot and how long they spoke. Objective: subjects' recall of the information presented by the robot was measured using a post-experiment questionnaire. Subjective: subjects' affective state using the PANAS scale [Watson et al. 1988], perceptions of the robot's physical, social, and intellectual characteristics using a scale developed to evaluate humanlike agents [Parise et al. 1996], feelings of closeness to the robot [Aron et al. 1992], feelings of groupness and ostracism [Williams et al. 2000], perceptions of the task (how much they enjoyed and attended to the task), and demographic information.

53 Results

54 Let's focus on non-verbal communication. With robots, given their physical embodiment, we can add to the verbal communication some level of non-verbal communication: gaze, head nods.

55 Let's focus on non-verbal communication. With robots, given their physical embodiment, we can add to the verbal communication some level of non-verbal communication: gaze, head nods.

56 Head Motion (Head Nods). Head motion naturally occurs during speech utterances and can be either intentional or unconscious. There are strong relationships between head motion and dialogue acts (including turn-taking functions), and between dialogue acts and prosodic features.

57 Case study

58 A model for Robotic Head Nods & Tilts. In the proposed model, nods are generated at the center of the last syllable of utterances with strong phrase boundaries (k, g, q) and of backchannels (bc).
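The placement rule can be sketched as follows: a nod is commanded at the centre of the last syllable of any utterance tagged k, g, q, or bc. The utterance records, the syllable timings, and the function interface are assumptions for illustration, not the authors' implementation.

# Sketch of the nod-placement rule on this slide.

NOD_TRIGGER_TAGS = {"k", "g", "q", "bc"}

def nod_time(utterance):
    """utterance: {"dialogue_act": str, "syllables": [(start_s, end_s), ...]}
    Returns the time (s) at which to command a nod, or None."""
    if utterance["dialogue_act"] not in NOD_TRIGGER_TAGS or not utterance["syllables"]:
        return None
    last_start, last_end = utterance["syllables"][-1]
    return (last_start + last_end) / 2.0   # centre of the last syllable

utterances = [
    {"dialogue_act": "k",  "syllables": [(0.0, 0.3), (0.3, 0.65)]},
    {"dialogue_act": "f",  "syllables": [(1.0, 1.4)]},          # filler: no nod
    {"dialogue_act": "bc", "syllables": [(2.0, 2.25)]},
]
for u in utterances:
    print(u["dialogue_act"], "->", nod_time(u))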

59 Head Motion (Head Nods). Eleven conversation passages with durations between 10 and 20 seconds, including fillers and turn-keeping functions (f and k3), were randomly selected from a database. Head rotation angles (nod, shake, and tilt angles) were computed by the head motion generation model for each conversation passage. Video clips were recorded for each stimulus, resulting in 33 stimuli (11 conversation passages × 3 motion types) for each robot type.

60 Video

61 Results

62 The problem revisited: When and how should a robot act or say something to the user? - What to say (message content) - When to say it (timing, turn-taking) - How to say it (gestures, non-verbal behaviours). It is a complicated task.

63 Discussion
