Determining Emotion in Speech


Determining Emotion in Speech
Charles Van Winkle, University of Washington
2011-02-22

Reviewed Literature
- Toward Detecting Emotions in Spoken Dialogs (2005), by Chul Min Lee and Shrikanth S. Narayanan
- Detecting emotional state of a child in a conversational computer game (2010), by Serdar Yildirim, Shrikanth Narayanan, and Alexandros Potamianos

Toward Detecting Emotions in Spoken Dialogs - Observations & Claims
- The role of spoken language interfaces in human-computer interaction applications has increased, so automatically recognizing emotions from human speech has grown in importance.
- Research in understanding and modeling human emotions is attracting increasing attention from the engineering community.
- There is an increasing need to know not only what information a user conveys but also how it is conveyed.
- Emotions are important in human communication and decision-making, so it is desirable that an intelligent human-machine interface accommodate human emotions in an appropriate way.

Toward Detecting Emotions in Spoken Dialogs - Challenges & Claims
- It is difficult to define precisely what emotion means, and there is disagreement on the number of emotion categories.
- It may not be necessary or practical to recognize a large variety of emotions when developing algorithms for conversational interfaces.
- Long-term properties such as moods must be reconciled with short-term emotional states.
- Previous studies show promise in using higher-level linguistic information for emotion recognition.

Toward Detecting Emotions in Spoken Dialogs - The Old: Acoustic Signal Pattern Recognition
- Maximum likelihood Bayes classification
- Kernel regression
- K-nearest neighbor methods
- Fisher linear discriminant methods
- Ensembles of neural networks
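As an illustration of these earlier pattern-recognition approaches, here is a minimal scikit-learn sketch comparing a maximum-likelihood Gaussian Bayes classifier, a k-nearest neighbor classifier, and Fisher LDA; the feature matrix and labels are random stand-ins, not data from the paper.

    # Sketch: comparing classic classifiers named on this slide with scikit-learn.
    # X and y are placeholder acoustic feature vectors and emotion labels.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))          # stand-in utterance-level features
    y = rng.integers(0, 2, size=200)        # stand-in emotion labels (0/1)

    classifiers = {
        "ML Bayes (Gaussian)": GaussianNB(),
        "k-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
        "Fisher LDA": LinearDiscriminantAnalysis(),
    }
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.2f}")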

Toward Detecting Emotions in Spoken Dialogs - The Old: Acoustic Features
- Pitch-related features: fundamental frequency (pitch, F0) and other formant frequencies; pitch contour
- Energy
- Timing features: speech rate; boundaries of phrases, words, and phonemes
- Spectral information: voiced and unvoiced portions

Toward Detecting Emotions in Spoken Dialogs - The Old: Discourse Information
- Has been used in conjunction with acoustic correlates
- Topic and/or sub-dialog
- Repetition
- Correction information
- Use of swear words
- Negation
How to combine the different information sources (e.g., acoustic and discourse)?
- Fusion at the feature level suffers from potential dimensionality problems for classification as the feature set grows.

Toward Detecting Emotions in Spoken Dialogs - The New
- Favor the notion of application-dependent emotions: examine a reduced space of emotions
  - Negative (anger and frustration in human speech) and non-negative emotions (the complement)
- Data set: speech signals derived from a commercially deployed automatic call center dialog system
- Combine various aspects of spoken language information: acoustic, lexical, discourse
- Intended use: detection of negative emotions as a strategy to improve service quality in automated call center applications

Toward Detecting Emotions in Spoken Dialogs - The Plan: Acoustic
- Leverage previously published results and use a number of acoustic correlates
- Systematically reconcile them through feature selection and feature reduction

Toward Detecting Emotions in Spoken Dialogs - The Plan: Discourse
Separate users' responses into 5 categories:
- Rejection (found more often in negative-emotion utterances)
- Repetition
- Rephrase
- Ask-Start Over
- None of the Above (mostly factual responses to voice prompts, such as giving the name of a person or place in the corpus)

Toward Detecting Emotions in Spoken Dialogs - The Plan: Language
- Introduce a new method for estimating the emotion information conveyed by words (and by sequences of words)
- Automatically calculate the emotional salience of the words in the specific (constrained) data corpus
  - Emotional salience is a measure of how much information a word provides about a given emotion category
How to combine the various information sources?
- Fusion at the decision level
- Linear discriminant classifiers with Gaussian class-conditional probability
- K-nearest neighbor classifiers
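A minimal sketch of an emotional-salience computation in the information-theoretic sense described above (how much observing a word tells us about the emotion class); the toy corpus, smoothing constant, and variable names are illustrative assumptions rather than the authors' exact formulation.

    # Sketch: emotional salience of a word as the mutual information it carries
    # about the emotion classes. Toy corpus and smoothing are assumptions.
    import math
    from collections import Counter, defaultdict

    corpus = [
        ("no no wrong computer", "negative"),
        ("damn it wrong again", "negative"),
        ("baggage arrival in phoenix", "non-negative"),
        ("the flight is delayed", "non-negative"),
    ]

    class_counts = Counter(label for _, label in corpus)
    word_class = defaultdict(Counter)
    for text, label in corpus:
        for w in set(text.split()):
            word_class[w][label] += 1

    n_utts = len(corpus)
    p_class = {c: n / n_utts for c, n in class_counts.items()}

    def salience(word, alpha=0.5):
        """I(E; w): how much observing `word` tells us about the emotion class."""
        total = sum(word_class[word].values()) + alpha * len(p_class)
        s = 0.0
        for c, p_c in p_class.items():
            p_c_given_w = (word_class[word][c] + alpha) / total
            s += p_c_given_w * math.log2(p_c_given_w / p_c)
        return s

    for w in ("wrong", "delayed", "the"):
        print(w, round(salience(w), 3))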

Toward Detecting Emotions in Spoken Dialogs - The Data: Observations & Claims
- Most studies of emotion recognition in speech have used actors' voices: single utterances of archetypal emotions in non-dialog settings. Results from these may not generalize to human-machine interaction scenarios.
- Real data suffers from coverage problems: vast amounts of data are needed to characterize various emotions in various contexts.
- A limited-domain approach allows in-depth focus on a finite set of emotions, using significant amounts of data obtained from realistic human-machine interactions.

Toward Detecting Emotions in Spoken Dialogs - The Data
- Speech data: 8 kHz, 8-bit, µ-law compression
- Obtained from real users engaged in spoken dialog with a machine agent in a commercially deployed call center application
- 1,187 calls, each with an average of 6 utterances (about 7,200 utterances in total)
- The database was whittled down from thousands of calls to include only the fraction with potentially negative emotions
- The authors used some automatic pre-processing and some subjective tagging by 4 different human listeners

Toward Detecting Emotions in Spoken Dialogs - Acoustic Features
- Fundamental frequency (F0): mean, median, standard deviation, maximum, minimum, range, and linear regression coefficient
- Energy: mean, median, standard deviation, maximum, minimum, range, and linear regression coefficient
- Duration: speech rate, ratio of duration of voiced to unvoiced regions, and duration of the longest voiced speech
- Formants: first and second formant frequencies (F1, F2) and their bandwidths (BW1, BW2); also the mean of each feature
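A minimal sketch of extracting utterance-level F0 and energy statistics of the kind listed above, assuming librosa; the file path is a placeholder and this is not the authors' exact feature extractor (formants and speech rate are omitted here).

    # Sketch: utterance-level F0 and energy statistics. Illustrative pipeline only.
    import numpy as np
    import librosa

    y, sr = librosa.load("utterance.wav", sr=8000)   # placeholder path

    f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                      # keep voiced frames only
    energy = librosa.feature.rms(y=y)[0]

    def stats(x):
        """mean, median, std, max, min, range, linear regression slope over frames."""
        slope = np.polyfit(np.arange(len(x)), x, 1)[0] if len(x) > 1 else 0.0
        return dict(mean=x.mean(), median=np.median(x), std=x.std(),
                    max=x.max(), min=x.min(), range=x.max() - x.min(), slope=slope)

    features = {"f0": stats(f0), "energy": stats(energy),
                # fraction of voiced frames, related to the voiced/unvoiced ratio
                "voiced_fraction": voiced_flag.mean()}
    print(features)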

Toward Detecting Emotions in Spoken Dialogs - Acoustic Feature Selection
- Forward selection to reduce dimensionality: two sets of rank-ordered selected features (10-best and 15-best)
- Principal component analysis to possibly reduce dimensionality further
- Male 15-best: ratio of duration of voiced to unvoiced regions, energy standard deviation, energy median, F0 regression coefficient, F0 median, energy regression coefficient, energy max, energy min, energy range, duration of the longest voiced speech, F0 mean, BW1, F0 max, BW2
- Female 15-best: ratio of duration of voiced to unvoiced regions, energy median, F0 regression coefficient, speech rate, energy min, duration of the longest voiced speech, energy regression coefficient, F0 median, F0 mean, F1, energy mean, energy max, F0 max, energy range, energy standard deviation
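A hedged sketch of the selection-then-reduction idea using scikit-learn's SequentialFeatureSelector followed by PCA; the feature matrix, labels, wrapper classifier, and variance cutoff are assumptions for illustration, not the paper's exact configuration.

    # Sketch: forward feature selection (15-best) followed by PCA.
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 23))          # stand-in: 23 raw acoustic features
    y = rng.integers(0, 2, size=300)        # stand-in: negative / non-negative labels

    selector = SequentialFeatureSelector(
        KNeighborsClassifier(n_neighbors=5),
        n_features_to_select=15, direction="forward", cv=5)
    X_15best = selector.fit_transform(X, y)

    pca = PCA(n_components=0.95)            # keep components explaining 95% of variance
    X_reduced = pca.fit_transform(X_15best)
    print(X_15best.shape, X_reduced.shape)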

Toward Detecting Emotions in Spoken Dialogs - Lexical

Word       Salience   Emotion
Wrong      0.72       Negative
Computer   0.72       Negative
Damn       0.72       Negative
No         0.45       Negative
Arrival    0.33       Non-Negative
Phoenix    0.33       Non-Negative
Delayed    0.21       Non-Negative
Baggage    0.20       Non-Negative

After salience calculation, a salient word-pair dictionary was constructed by retaining only word pairs whose salience values exceed a pre-chosen threshold, optimized on held-out data. Gender-independent.
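A small sketch of the thresholding step described above, using the salience values from the table; the threshold value here is a placeholder, since the paper tunes it on held-out data.

    # Sketch: keep only entries whose salience exceeds a threshold.
    salience_scores = {
        "wrong": 0.72, "computer": 0.72, "damn": 0.72, "no": 0.45,
        "arrival": 0.33, "phoenix": 0.33, "delayed": 0.21, "baggage": 0.20,
    }
    threshold = 0.30   # placeholder; in the paper this is optimized on held-out data
    salient_dictionary = {w: s for w, s in salience_scores.items() if s > threshold}
    print(salient_dictionary)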

Toward Detecting Emotions in Spoken Dialogs - Lexical [figure-only slide; content not transcribed]

Toward Detecting Emotions in Spoken Dialogs - Discourse

Tag            Male Neg   Male Non-Neg   Female Neg   Female Non-Neg   Total Neg   Total Non-Neg
Rejection         37           7             72             10            109            17
Repeat             4          35             23             38             27            73
Rephrase          15          34             10             39             25            73
Ask-Startover     29          33             33             44             62            77
Non               57         350             71            448            128           798
Total            142         454            209            579            351          1038

- Labeling was performed by one person, based on utterance transcriptions
- Rephrase is a non-perfect Repeat, and eventually becomes the same category

Toward Detecting Emotions in Spoken Dialogs - Error [figure slide; best k-NN settings: k = 8 for male, k = 4 for female]

Toward Detecting Emotions in Spoken Dialogs - Error [figure slide: male/female comparison]

Toward Detecting Emotions in Spoken Dialogs - Error [figure slide: male results]

Toward Detecting Emotions in Spoken Dialogs - Error [figure slide: female results]


Detecting emotional state of a child in a conversational computer game - Observations & Claims
- Over the last few years, attention to automatic recognition of users' communicative styles within spoken dialog system frameworks has increased.
- It is important to know not only what was said but also how it was communicated to a dialog system.
- Enabling automatic emotion recognition within a multimodal dialog system is an emerging trend.
- Being able to detect the user's emotion can help make such systems more natural and responsive.

computer game - Observations & Claims
- Currently deployed spoken dialog interfaces are limited in handling the rich information contained in speech, so their scope for supporting natural human-machine interaction is limited as well.
- Much of the work on emotion analysis focuses on databases of acted speech. This provides certain useful knowledge, but it is more suitable to work on data that is directly representative of, and suitable for, the domain application in mind.

computer game - Challenges & Claims
- Most research on emotion recognition is primarily targeted at adult users, yet greater variability exists in the acoustic and linguistic characteristics of children's speech, and these parameters change with age and gender.
- Automatic recognition from speech is itself a difficult problem, and it may be difficult to elicit acted speech from children.
- Children are among the potential beneficiaries of computers with spoken interfaces, e.g. for educational applications and games.
- It is important to identify emotionally salient features and to perform emotion recognition as a function of gender and age group.
- It is not necessary to recognize a large set of emotions.

computer game - Survey of the Corpora
- Databases of children's speech have mostly been used for acoustic analysis and modeling; some are read-speech corpora.
- Recent databases cover spontaneous child-machine speech interaction:
  - Open-ended spoken dialog interaction between children and animated characters in a game setting
  - Data from children spontaneously communicating with the AIBO robot (emotional labeling for this corpus is available)
  - A corpus of child-machine spoken dialog interaction in a game setting (used in this paper)

computer game - Previous Acoustic Techniques
- Acoustic signal pattern recognition used to separate the emotional coloring present in (children's) speech
- Popular features: phoneme-, syllable-, and word-level statistics corresponding to F0, energy, duration, spectral parameters, and voice quality parameters

computer game - Previous Aggregate Techniques
- Previous studies show that younger children use fewer overt politeness markers and express more frustration than older children.
- The use of speech and language features for predicting student emotions in human-computer tutoring dialogs has been shown to improve accuracy.
- There are promising results from the combined use of acoustic, spectral, and language information for detecting confidence, puzzlement, and hesitation in child-machine dialog tasks.
- Language model features might be poor predictors of frustration.
- Emotion recognition performance can be improved by using contextual information in addition to acoustic features.

computer game - Proposal
- Focus on two attitudinal states, Polite and Frustrated; the authors believe this is well suited to the domain of child-computer interfaces
- Data set: the Children's Interactive Multimedia Project (ChIMP) database
- Combine various aspects of spoken language information (acoustic and language) and extend the notion of emotional salience
- Intended use: detection of polite and frustrated states in children of different age groups and genders

computer game - The Data
- Spontaneous child-machine spoken dialog interaction in a game setting
- The task was to play the game "Where in the USA is Carmen Sandiego?"; the goal is to identify and arrest a cartoon criminal
- Children had to interact with several animated characters to obtain clues; most children played the game twice
- Contains speech data collected from 160 boys and girls (ages 6-14)
- Wizard-of-Oz technique
- Over 50,000 utterances

computer game - The Data
- Researchers tagged speech from 103 of the 160 players as Neutral, Polite, or Frustrated
- Results are presented as a function of age group and gender
- Number of subjects per category:

Age Group   Female   Male   Total
7-9           19      19      38
10-11         21      14      35
12-14          8      22      30
Total         48      55     103

computer game - The Data
- Goals: identify age and gender trends in emotional state; identify lexical, semantic, and pragmatic markers of emotional state
- Only utterances on which both labelers agreed are used
- Number of instances (speaker turns) for each emotional class, by age group and gender:

           Neutral   Polite   Frustrated   Total
7-9          3966      977        796       5739
10-11        4004     1078        360       5442
12-14        3005      694        705       4404
Male         5940     1236       1061       8237
Female       5035     1513        800       7348
Total      10,975     2749       1861     15,585

computer game - The Data
- Goals: identify age and gender trends in emotional state; identify lexical, semantic, and pragmatic markers of emotional state
- Only utterances on which both labelers agreed are used
- Percentage of instances (speaker turns) for each emotional class, by age group and gender:

           Neutral   Polite   Frustrated   Share of data
7-9          69%       17%       14%           37%
10-11        74%       20%        6%           35%
12-14        68%      15.8%      16%           28%
Male         72%       15%       13%           53%
Female       69%       20%       11%           47%
Total        70%       18%       12%          100%

computer game - Lexical and Pragmatic Markers
- Polite
  - Explicit markers: please, thank you, excuse me
  - Implicit markers: may I, could you, would you
  - Usage of explicit vs. implicit markers varies with age
- Frustrated
  - Typical lexical markers: shut up, oh man, hurry, oops, heck
  - Pragmatic markers: repetition, or getting stuck in the same dialog state for multiple turns, often indicated that a child was experiencing difficulty with the task and was getting frustrated

computer game - Feature Extraction: Acoustic
- 384 features were extracted with the openSMILE feature extraction toolkit from low-level descriptors (LLDs)
- Features comprise utterance-level statistics corresponding to pitch frequency, RMS energy, zero-crossing rate, harmonics-to-noise ratio, and MFCCs 1-12; delta coefficients were also computed for each LLD
- Twelve statistics from each LLD and its delta coefficients: mean, standard deviation, skewness, kurtosis, maximum and minimum value (with relative position), range, and two linear regression coefficients with their mean square error
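A sketch of the twelve utterance-level statistics described above, applied to a single low-level descriptor contour and its delta. The paper used the openSMILE toolkit, so this code only illustrates the statistic set; the synthetic contour is an assumption.

    # Sketch: twelve functionals over one LLD contour (e.g. an F0 or MFCC track).
    import numpy as np
    from scipy.stats import skew, kurtosis

    def functionals(lld):
        """Twelve statistics over a frame-level contour `lld` (1-D numpy array)."""
        t = np.arange(len(lld))
        slope, intercept = np.polyfit(t, lld, 1)            # two linear regression coefficients
        mse = np.mean((slope * t + intercept - lld) ** 2)    # regression mean square error
        return np.array([
            lld.mean(), lld.std(), skew(lld), kurtosis(lld),
            lld.max(), lld.min(),
            lld.argmax() / len(lld), lld.argmin() / len(lld),  # relative positions of extrema
            lld.max() - lld.min(),
            slope, intercept, mse,
        ])

    # Stand-in contour: a noisy rising pitch track
    contour = np.linspace(120, 180, 200) + np.random.default_rng(0).normal(0, 5, 200)
    delta = np.diff(contour, prepend=contour[0])              # delta coefficients of the LLD
    features = np.concatenate([functionals(contour), functionals(delta)])
    print(features.shape)   # 24 features from one LLD and its deltas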

computer game - Feature Extraction: Lexical
- Certain words are associated with specific emotions and attitudes
- Two different modeling approaches are proposed (both widely used in the field):
  1. Information-theoretic analysis is used for lexical feature selection, in conjunction with Bayesian classifiers: calculate emotional salience, then build a Bayesian classifier
  2. Latent semantic analysis (LSA) is used to transform the feature space, and cosine distance metrics are then used to compute the emotional distance between utterances
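A minimal sketch of the LSA-based lexical route (approach 2 above): utterances are projected into a latent semantic space and a new utterance is assigned the class whose centroid is nearest by cosine similarity. The toy utterances and the centroid decision rule are illustrative assumptions, not the paper's exact setup.

    # Sketch: LSA projection + cosine-distance classification of utterances.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    train_texts = ["thank you very much", "could you please show me",
                   "shut up", "oh man hurry up", "stop this"]
    train_labels = np.array(["polite", "polite", "frustrated", "frustrated", "frustrated"])

    vec = TfidfVectorizer()
    svd = TruncatedSVD(n_components=2, random_state=0)
    Z = svd.fit_transform(vec.fit_transform(train_texts))   # latent semantic space

    centroids = {c: Z[train_labels == c].mean(axis=0) for c in np.unique(train_labels)}

    def classify(text):
        z = svd.transform(vec.transform([text]))
        sims = {c: cosine_similarity(z, m.reshape(1, -1))[0, 0] for c, m in centroids.items()}
        return max(sims, key=sims.get)

    print(classify("please show me the suspect"))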

computer game - Feature Extraction: Example Salient Word Pairs

Male          Female          7-9            10-11          12-14         Class
Drop it       Hey you         Do it          No thank       Find the      Frustrated
Get me        You there       Stop miss      Not that       Pick that     Frustrated
Shut up       Someone talk    Need this      My pad         Go talk       Frustrated
Stop this     You repeat      I don't        You pick       To issue      Frustrated
Stop please   You mind        Hello there    Doing mister   Hello I'd     Polite
You good      Suspect can     Please show    You have       Thanks can    Polite
Please tell   Person please   Very much      Please take    Would you     Polite
The phone     You can         You get        Look that      Where'd she   Polite

After salience calculation, a salient word-pair dictionary was constructed by retaining only word pairs whose salience values exceed a pre-chosen threshold, optimized on held-out data.

computer game - Feature Extraction: Discourse and Contextual Information Modeling
- Model the relationship between emotional state and dialog state with a simple Bayesian model
- Assume the emotional state depends directly on the dialog-state history (past three states); context matters because emotions are persistent
- Use the derivative of the acoustic features as an extra parameter
- Examples for 5 of the 9 possible dialog states:

User utterance                        Dialog state
Can I talk to him please?             Talk2Him
Tell me about the suspect             TellmeAbout
Can I see my choices for height?      EnterFeature
Tall for height                       EnterFeature
Thank you                             CloseBook
Tell me where did the suspect go      WhereDid
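A minimal sketch of a simple Bayesian context model along these lines, factoring the emotional state over the past three dialog states (naive Bayes style); the toy histories, smoothing constant, and the particular factorization are illustrative assumptions.

    # Sketch: P(emotion | past three dialog states) via a naive Bayes factorization.
    from collections import Counter, defaultdict
    import math

    # (dialog-state history of length 3, observed emotion label) - toy examples
    training = [
        (("TellmeAbout", "EnterFeature", "EnterFeature"), "frustrated"),
        (("EnterFeature", "EnterFeature", "EnterFeature"), "frustrated"),
        (("Talk2Him", "TellmeAbout", "CloseBook"), "neutral"),
        (("TellmeAbout", "WhereDid", "CloseBook"), "polite"),
    ]

    emotion_counts = Counter(e for _, e in training)
    state_given_emotion = defaultdict(Counter)
    for history, e in training:
        for pos, s in enumerate(history):
            state_given_emotion[(e, pos)][s] += 1

    states = {s for h, _ in training for s in h}

    def posterior(history, alpha=0.5):
        """P(emotion | past three dialog states), with add-alpha smoothing."""
        scores = {}
        for e, n_e in emotion_counts.items():
            logp = math.log(n_e / len(training))
            for pos, s in enumerate(history):
                c = state_given_emotion[(e, pos)]
                logp += math.log((c[s] + alpha) / (sum(c.values()) + alpha * len(states)))
            scores[e] = logp
        z = math.log(sum(math.exp(v) for v in scores.values()))
        return {e: math.exp(v - z) for e, v in scores.items()}

    print(posterior(("EnterFeature", "EnterFeature", "EnterFeature")))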

computer game - Fusion of Classifiers
- Decision-level fusion of the acoustic, lexical, and contextual information sources
- If the classifiers are statistical and produce posterior probabilities, fusion can use the average of the decisions or the product of the decisions (assuming independence)
- The acoustic classifier does not fit that description, so a distance metric is used instead of a decision and passed through a sigmoidal transform
- Evaluated with two-way classification (politeness is more of a speaking style than an emotional state) and three-way classification
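A small numerical sketch of the decision-level fusion rules described above: averaging posteriors, multiplying posteriors under an independence assumption, and first mapping acoustic distances through a sigmoid. All scores are made up for illustration.

    # Sketch: decision-level fusion of three classifiers over the three classes.
    import numpy as np

    def sigmoid(d):
        return 1.0 / (1.0 + np.exp(-d))

    classes = np.array(["neutral", "polite", "frustrated"])

    lexical_post = np.array([0.2, 0.7, 0.1])        # posterior from the lexical classifier
    context_post = np.array([0.3, 0.5, 0.2])        # posterior from the contextual classifier
    acoustic_dist = np.array([-0.4, 1.2, -0.8])     # per-class scores from the acoustic classifier
    acoustic_post = sigmoid(acoustic_dist)
    acoustic_post /= acoustic_post.sum()            # renormalize the sigmoid-transformed scores

    stack = np.vstack([lexical_post, context_post, acoustic_post])
    avg_fusion = stack.mean(axis=0)
    prod_fusion = stack.prod(axis=0)
    prod_fusion /= prod_fusion.sum()                # product rule, assuming independence

    print("average rule:", classes[avg_fusion.argmax()], avg_fusion.round(3))
    print("product rule:", classes[prod_fusion.argmax()], prod_fusion.round(3))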

computer game - Acoustic Evaluation (Unweighted Average Recall)

          MFCC     F0       RMS energy   Voicing   ZCR
Male      67.9%    45.4%    42.9%        40.3%     41.1%
Female    70.4%    51.3%    44.9%        45.8%     46.3%
7-9       70.4%    51.7%    44.2%        45.7%     44.2%
10-11     66.4%    47.9%    45.2%        44.0%     43.6%
12-14     70.6%    49.3%    44.0%        43.0%     44.9%

- Used a k-nearest neighbor classifier (k-NNR) with k = 3
- Classification results for the three categories (neutral, polite, frustrated) are computed using 10-fold cross-validation
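A sketch of the evaluation protocol noted on this slide; unweighted average recall corresponds to macro-averaged recall in scikit-learn, and the feature matrix and labels below are placeholders rather than ChIMP data.

    # Sketch: k-NN (k = 3), 10-fold cross-validation, unweighted average recall.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 384))                 # stand-in openSMILE-style feature vectors
    y = rng.choice(["neutral", "polite", "frustrated"], size=600, p=[0.7, 0.18, 0.12])

    knn = KNeighborsClassifier(n_neighbors=3)
    uar = cross_val_score(knn, X, y, cv=10, scoring="recall_macro")
    print(f"UAR: {uar.mean():.3f} +/- {uar.std():.3f}")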

computer game - Two-Way Classification: Polite vs. Others (Unweighted Average Recall %)

            Male    Female   7-9     10-11   12-14
Acoustic    75.0    79.5     75.2    80.9    75.9
Lex1        76.0    83.1     82.9    83.6    76.1
Lex2        75.4    81.9     78.7    82.4    75.3
LSA         77.5    82.6     83.4    84.1    77.6
Context     62.9    70.1     68.5    68.5    62.2

computer game - Two-Way Classification: Polite vs. Others (Unweighted Average Recall %)

              Male    Female   7-9     10-11   12-14
Acou + Lex1   79.1    83.4     80.9    85.7    80.4
Acou + Lex2   77.8    82.2     79.4    83.7    80.1
Acou + LSA    82.1    85.2     82.9    86.6    83.1
Acou + Ctxt   72.3    78.9     77.5    78.6    77.8

computer game - Two-Way Classification: Polite vs. Others (Unweighted Average Recall %)

                  Male    Female   7-9     10-11   12-14
Acou + Lex1 + C   80.1    84.7     82.4    87.8    83.8
Acou + Lex2 + C   78.8    84.0     81.0    86.5    83.3
Acou + LSA + C    84.0    86.4     84.6    88.8    85.7

computer game - Two-Way Classification: Frustrated vs. Others (Unweighted Average Recall %)

            Male    Female   7-9     10-11   12-14
Acoustic    68.6    69.4     69.2    65.9    73.8
Lex1        62.4    58.5     59.9    50.4    63.7
Lex2        56.1    54.3     49.8    50.1    58.5
LSA         59.8    60.5     57.5    50.0    65.9
Context     65.1    65.8     66.7    64.7    71.7

computer game - Two-Way Classification: Frustrated vs. Others (Unweighted Average Recall %)

              Male    Female   7-9     10-11   12-14
Acou + Lex1   69.2    69.3     69.8    65.1    73.5
Acou + Lex2   67.9    68.5     68.8    64.2    69.8
Acou + LSA    68.0    68.4     69.1    62.3    73.1
Acou + Ctxt   70.0    70.3     71.7    67.8    75.3

computer game - Two-Way Classification: Frustrated vs. Others (Unweighted Average Recall %)

                  Male    Female   7-9     10-11   12-14
Acou + Lex1 + C   70.7    71.4     72.2    69.3    75.1
Acou + Lex2 + C   70.3    71.6     71.9    69.1    75.5
Acou + LSA + C    70.9    71.4     71.9    69.5    75.6

computer game - Three-Way Classification (Unweighted Average Recall %)

            Male    Female   7-9     10-11   12-14
Acoustic    61.4    63.7     62.2    60.5    61.9
Lex1        61.1    63.7     62.9    56.4    61.5
Lex2        55.6    60.3     54.5    55.1    60.2
LSA         60.3    63.9     61.9    56.6    63.8
Context     49.3    54.1     54.9    51.6    51.5

computer game - Three-Way Classification (Unweighted Average Recall %)

              Male    Female   7-9     10-11   12-14
Acou + Lex1   63.0    65.9     64.9    62.6    64.4
Acou + Lex2   62.0    65.6     63.7    62.1    64.1
Acou + LSA    65.5    67.5     66.8    63.7    66.2
Acou + Ctxt   65.2    61.3     61.1    57.5    51.5

computer game - Three-Way Classification (Unweighted Average Recall %)

                  Male    Female   7-9     10-11   12-14
Acou + Lex1 + C   63.8    67.3     66.1    63.5    66.4
Acou + Lex2 + C   62.7    66.9     64.6    63.4    66.8
Acou + LSA + C    66.8    68.8     67.2    65.5    68.3
