What is wrong with apps and web models? Conversation as an emerging paradigm for mobile UI. Bots as intelligent conversational interface agents.


Major types of conversational bots:
- ChatBots (e.g., XiaoIce)
- InfoBots
- Task-Completion Bots (goal-oriented)
- Personal Assistant Bots (the above, plus recommendation)
- etc.

Bots technology overview: three generations; the latest is based on deep RL.
http://venturebeat.com/2016/08/01/how-deep-reinforcement-learning-can-help-chatbots/

Bot landscape: consumer 1st-party bots (head) vs. enterprise 3rd-party bots (long tail).
- Specific knowledge (IQ): form/slot filling; SQL retrieval; multi-turn often needed; dialog management
- Intelligent search: single-turn mostly; session modeling
- General knowledge (EQ): chit-chat; emotion/sentiment analysis; personality

Generation I: Symbolic Rule/Template Based
- Centered on grammatical rules and ontological design by human experts (the early AI approach)
- Easy interpretation, debugging, and system updating
- Popular before the late 90s; still in use in commercial systems and by bot startups
- Limitations: reliance on experts makes it hard to scale over domains; data are used only to help design rules, not for learning
- Example system next
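A Generation-I system of the kind sketched above can be reduced to hand-written pattern-to-template rules. Below is a minimal illustrative sketch (the rules and responses are hypothetical, not from any production system):

```python
# Generation-I bot sketch: hand-crafted regex rules map a user utterance
# to a canned template response. No learning; experts author every rule.
import re

RULES = [
    # (pattern authored by a human expert, template response)
    (re.compile(r"\bticket(s)?\b.*\bmovie\b|\bmovie\b.*\bticket(s)?\b", re.I),
     "Which theater do you want?"),
    (re.compile(r"\bhello|hi\b", re.I), "Hello! How can I help you?"),
]

def respond(utterance: str) -> str:
    """Return the template of the first rule whose pattern matches."""
    for pattern, template in RULES:
        if pattern.search(utterance):
            return template
    return "Sorry, I did not understand."

print(respond("Do you have any tickets for the movie Deadpool?"))
```

The limitation the slide names is visible here: covering a new domain means an expert writing (and debugging) more rules by hand.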

Generation II: Data-Driven (Shallow) Learning
- Data used not to design rules for NLU and action, but to learn statistical parameters in dialogue systems
- Reduces the cost of hand-crafting complex dialogue managers
- Robustness against speech recognition errors in noisy environments
- MDP/POMDP and reinforcement learning for dialogue policy
- Discriminative (CRF) and generative (HMM) methods for NLU
- Popular in academic research 1999-2014 (before deep learning arrived in the dialogue world)
- Limitations: not easy to interpret, debug, and update systems; still hard to scale over domains; models and representations not powerful enough, no end-to-end learning, hard to scale up; remained academic until deep learning arrived
- Example system next

Components of a state-based spoken dialogue system

Generation III: Data-Driven Deep Learning
- Like Generation II, data are used to learn everything in the dialogue system
- Reduces the cost of hand-crafting complex dialogue managers
- Robustness against speech recognition errors in noisy environments, and against NLU errors
- MDP/POMDP and reinforcement learning for dialogue policy (same as Generation II)
- But neural models and representations are much more powerful, and end-to-end learning becomes feasible
- Attracted huge research efforts since 2015 (after deep learning's successes in vision/speech and deep RL's success in Atari games)
- Limitations: still not easy to interpret, debug, and update systems; lacks an interface between continuous neural learning and the symbolic NL structure presented to human users; scaling over domains via deep transfer learning and RL is active research, with no clear success reported yet
- Deep RL and example research next

What is reinforcement learning (RL)?
- RL in Generation II: not working (with unnecessarily complex POMDPs). RL in Generation III: working, thanks to deep learning (like NN vs. DNN in ASR).
- RL is learning what to do so as to maximize a numerical reward signal. "What to do" means mapping from a situation in a given environment to an action.
- Takes inspiration from biology/psychology.
- RL is a characterization of a problem class; it doesn't refer to a specific solution. There are many methods for solving RL problems.
- In its most general form, RL problems:
  - have a stateful environment, where actions can change the state of the environment;
  - learn by trial and error, not by being shown examples of the right action;
  - have delayed rewards, where an action's value may not be clear until some time after it is taken.

Stateful Model for RL

The Agent receives an observation o_t and reward r_t from the Environment and emits an action a_t; a state estimator summarizes the interaction history into the state s_t.

- State: s_t = Summary(o_0, a_0, o_1, a_1, ..., o_{t-1}, a_{t-1}, o_t)
- Trajectory: a_0, r_1, s_1, a_1, r_2, s_2, a_2, ...
- Return: G_t = Σ_{τ=t+1}^∞ γ^{τ−t−1} r_τ, with 0 ≤ γ ≤ 1
- Policy: π: s_t → a_t
- Objective: π* = argmax_π E[Σ_{τ=t+1}^∞ γ^{τ−t−1} r_τ | π, s_t]
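The discounted return above can be computed directly for a finite trajectory. A small illustrative sketch (the reward sequence and discount factor are made up for the example):

```python
# Discounted return G_t = sum_{tau=t+1}^{T} gamma^(tau-t-1) * r_tau
# for a finite list of rewards, where rewards[k] holds r_{k+1}.
def discounted_return(rewards, gamma=0.9, t=0):
    """Return G_t: rewards from step t on, discounted by powers of gamma."""
    return sum(gamma ** i * r for i, r in enumerate(rewards[t:]))

# Delayed reward: only the final step pays off, as in Go (+1 at game end).
# G_0 = 0.5**3 * 1 = 0.125
print(discounted_return([0, 0, 0, 1], gamma=0.5))
```

This also shows why delayed rewards make credit assignment hard: the early actions contribute nothing locally, yet they determined whether the final +1 was reachable.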

RL scenarios compared. For each: actions (action-space size); MDP states (state-space size); RL horizon; rewards.

- AlphaGo (DeepMind): stone placement (max 19x19); board configuration (max 2^{19x19}); very long horizon, >100 moves; win/loss +1/-1 at end of game, local r_t uninformative (value of winning unclear, unlike chess)
- Atari agent (DeepMind): joystick/button control (12); windowed screenshots (~infinite); shorter horizon (~10-50); game score r_t at each action, very clean, no noise
- DeepCRM1 (Microsoft): what the salesforce should do with specific customers (~20); ensemble of customer business status over time (>10,000); short horizon (~2-10); close deal: +1 or $ amount at the end of the sales cycle, otherwise 0
- DeepCRM2, cross/up-sales (Microsoft): what product/SKU to sell (~1M); existing SKUs and customer business status over time (>100,000); short horizon (~2-10); $ amount of SKU sales at the end of the sales cycle, $0 before sales
- DeepCRM3, tele-sale calls (Microsoft): which tenant to call (~600K); posterior distribution of reward from calling tenants (~infinite); long horizon (200/week); +1 if tenant is saved, 0 if lost

Dialogue as RL: user input (o) → language understanding → state s; dialogue policy a = π(s) → language (response) generation → response. Collect rewards (s, a, r, s'); optimize Q(s, a).

Type of bots / State / Action / Reward:
- Social ChatBots: chat history; system response; # of turns maximized, intrinsically motivated reward
- InfoBots (interactive Q/A): user's current question + context/history; answers to the current question by the system; relevance of answer, # of turns minimized
- Task-Oriented Bots: user's current input + context/history; DialogAct with SlotValue in the current turn; task success rate, # of turns minimized
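The turn-level loop this framing implies can be sketched in a few lines. Everything below is a hypothetical stand-in (the environment, policy, and reward values are illustrative, not from the systems in the slides):

```python
# Dialogue as RL: each user turn yields a state s, the policy picks an
# action a, the environment (user) returns a reward r and next state s',
# and the transition (s, a, r, s') is stored for Q-learning.
def run_dialogue(env, policy, max_turns=10):
    transitions = []
    s = env.reset()
    for _ in range(max_turns):
        a = policy(s)
        s_next, r, done = env.step(a)
        transitions.append((s, a, r, s_next))
        if done:
            break
        s = s_next
    return transitions

class ToyTaskEnv:
    """Toy task-oriented dialogue: success once 2 slots are filled."""
    def reset(self):
        self.filled = 0
        return self.filled
    def step(self, action):
        self.filled += 1
        done = self.filled >= 2
        # Task success reward, with a small per-turn cost to keep dialogues short.
        reward = 1.0 if done else -0.1
        return self.filled, reward, done

episode = run_dialogue(ToyTaskEnv(), policy=lambda s: "request_slot")
```

The per-turn penalty plus success bonus mirrors the table's "task success rate; # of turns minimized" reward for task-oriented bots.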

Function Approximation
In many tasks, the (s, a) space is too large for a tabular representation. Estimate the action-value function approximately as Q(s, a; θ), where
- θ parameterizes a linear function (baseline), or
- θ parameterizes a DNN, aka a Deep Q-Network (DQN).
Optimize θ using SGD with respect to the loss (the squared error between Q(s, a; θ) and the target value).
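For the linear baseline, one SGD step has a closed form. A minimal pure-Python sketch (the feature vector, target, and learning rate are illustrative assumptions):

```python
# Linear function approximation: Q(s, a; theta) = theta . phi(s, a),
# trained by SGD on the squared loss 0.5 * (target - Q)^2.
def q_value(theta, phi):
    return sum(t * f for t, f in zip(theta, phi))

def sgd_step(theta, phi, target, lr=0.1):
    """One SGD step; the gradient of the loss w.r.t. theta is -(target - Q) * phi."""
    err = target - q_value(theta, phi)
    return [t + lr * err * f for t, f in zip(theta, phi)]

theta = [0.0, 0.0]
phi = [1.0, 0.5]          # features of one particular (s, a) pair
for _ in range(100):
    theta = sgd_step(theta, phi, target=2.0)
print(round(q_value(theta, phi), 3))
```

A DQN replaces the dot product with a deep network and computes the same gradient step by backpropagation; the training loop is unchanged in shape.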

Q-Learning for DQN [DeepMind 15]
Learning becomes unstable:
- Correlations are present in the sequence of states
- Small updates to Q lead to significant changes of the policy and the data distribution
- Correlations between the to-be-learned Q and the target value r + γ max_{a'} Q(s', a')
Solutions:
- Experience replay: randomize over training samples (s, a, r, s')
- Use a separate Q function to compute the targets y

Q-Learning [Sutton & Barto 98]
Assume Q(s, a) for all s, a can be represented in a table.
1. Initialize an array Q(s, a) arbitrarily.
2. Choose actions based on Q such that all actions are taken in all states (infinitely often in the limit).
3. On each time step, update one element of the array:
   Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t)]
Model-free learning: learning long-term optimal behavior without a model of the environment.
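The tabular algorithm above can be run end-to-end on a toy problem. The environment here is a made-up 4-state chain (states 0..3, "right" moves forward, +1 on reaching state 3), chosen only to exercise the update rule:

```python
# Tabular Q-learning on a tiny deterministic chain (illustrative environment).
from collections import defaultdict
import random

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)                      # deterministic demo run
    Q = defaultdict(float)              # Q[(state, action)], initialized to 0
    actions = ["left", "right"]
    for _ in range(episodes):
        s = 0
        while s != 3:
            # epsilon-greedy action selection (step 2: keep exploring)
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s_next = min(s + 1, 3) if a == "right" else max(s - 1, 0)
            r = 1.0 if s_next == 3 else 0.0
            # step 3: the update rule from the slide
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
            s = s_next
    return Q

Q = q_learning()
```

After training, Q[(2, "right")] approaches 1.0 and Q[(1, "right")] approaches γ·1.0 = 0.9: the delayed reward has propagated backward through the table, which is exactly what "learning long-term optimal behavior" means here.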

DQN training, putting the pieces together:
- Explore via ε-greedy
- Experience replay
- Use a separate Q function to compute the targets y
- Within every C steps, the inner For loop is a regression step with the targets determined using the fixed Q.
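The two stabilizers can be sketched structurally as follows. This is a skeleton with hypothetical names, and the Q "network" in the usage example is a stub dict; a real DQN plugs in a deep network and a gradient-based `update_fn`:

```python
# DQN stabilizers: (1) a replay buffer that breaks state correlations by
# sampling random minibatches, (2) a frozen target copy of Q, refreshed
# only every C steps, used to compute the regression targets y.
import copy
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)
    def add(self, transition):          # transition = (s, a, r, s_next)
        self.buf.append(transition)
    def sample(self, batch_size):
        return random.sample(list(self.buf), batch_size)

def train(q_net, buffer, update_fn, steps, batch_size=4, C=10):
    """Each step regresses q_net toward targets from a frozen copy;
    the frozen copy is refreshed every C steps."""
    target_net = copy.deepcopy(q_net)
    for step in range(steps):
        batch = buffer.sample(batch_size)
        update_fn(q_net, target_net, batch)   # SGD on (y - Q)^2, y from target_net
        if (step + 1) % C == 0:
            target_net = copy.deepcopy(q_net)
    return q_net

# Toy usage: a stub Q and an update_fn that just records each minibatch size.
buffer = ReplayBuffer()
for i in range(20):
    buffer.add((i, 0, 1.0, i + 1))
calls = []
train({"w": 0.0}, buffer, lambda q, t, batch: calls.append(len(batch)), steps=30)
```

Freezing the target for C steps turns each inner loop into an ordinary regression problem, which is the point the slide makes.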

Deep Q-Learning for Games --- optimizing long-term values
- Short-term: maximize immediate reward
- Long-term: optimize lifetime revenue, service usage, and customer satisfaction
- Playing the Breakout game; self-play to improve skills
- Optimizing business decision making

User goal: two tickets for Deadpool, tomorrow 9 PM, at AMC Pacific Place 11 theater, Seattle.

RULE-BASED SYSTEM
User: Do you have any ticket for movie Deadpool at 9:00 pm?
Agent: Which theater do you want?
User: AMC Pacific Place 11 theater, please!
Agent: How many tickets do you want?
User: Two!
Agent: What date?
User: Tomorrow!
Agent: Which city do you want?
User: Seattle!
Agent: Do you have any specification for zip code?
User: No, I do not care.
Agent: Do you have any distance constraint?
User: No, I do not care.
Agent: How about video format? 2D? IMAX 3D?
User: Any kind is okay.
Agent: Do you have any constraint on price?
User: No!
(Wasteful! Very likely the user does not care.)
Agent: Okay, I will issue 2 tickets for you, tomorrow 9:00 pm at AMC Pacific Place 11 theater, Seattle, movie Deadpool.
User: Okay, thanks.
Agent: Thanks!

REINFORCEMENT LEARNING SYSTEM
User: I want two tickets for movie Deadpool.
Agent: What is the start time you prefer?
User: 9:00 PM
Agent: Which day?
User: Tomorrow
Agent: Which city?
User: Seattle
Agent: Which theater do you prefer?
User: AMC Pacific Place 11 theater.
Agent: Okay, I will issue 2 tickets for you, tomorrow 9:00 pm at AMC Pacific Place 11 theater, Seattle, movie Deadpool.
User: Okay, thanks.
Agent: Thanks!

User goal: two tickets for Hail Caesar, tomorrow 7:15 PM, at AMC Lowes Oak Tree 6, Seattle.

RULE-BASED SYSTEM
User: Do you have any ticket for movie Hail Caesar at theater AMC Lowes Oak Tree 6?
Agent: What start time do you prefer?
User: 7:15 pm tomorrow
Agent: Which city are you looking for?
User: Seattle
Agent: How many tickets do you want?
User: Two
Agent: Okay, I have issued two tickets for movie Hail Caesar tomorrow 7:50 pm at theater AMC Lowes Oak Tree 6, Seattle.
User: Oh, no.
Agent: Thanks!
Task failed! (Cannot correct the LU mistake.)

REINFORCEMENT LEARNING SYSTEM
Agent: What start time do you prefer?
User: 7:15 pm tomorrow
Agent: Which city are you looking for?
User: Seattle
Agent: Okay, I will issue you tickets for movie Hail Caesar tomorrow 7:50 pm at theater AMC Lowes Oak Tree 6, Seattle.
User: Oh, no.
Agent: It is 7:50 pm, right?
User: Two tickets for 7:15 pm, please.
Agent: Okay, I will issue you two tickets for movie Hail Caesar tomorrow 7:15 pm at theater AMC Lowes Oak Tree 6, Seattle.
User: Thanks!
Agent: Thanks!

Extrinsically / Intrinsically Motivated RL
- Extrinsic motivation: doing something for a specific reward. RL is driven by task-specific reward; the agent learns task-specific skills but doesn't cope flexibly with new problems.
- Intrinsic motivation: doing something inherently enjoyable. RL is driven by curiosity, without explicit reward, developing broad competence, which makes learning task-specific skills easier.
Applies across Task-Completion Bots, InfoBots, and Social Bots.

integrated design


This joint paper (2012) from the major speech recognition laboratories details the first major industrial application of deep learning.

Achieving Human Parity in Conversational Speech Recognition
- (CNN + LSTM)-HMM hybrid
- Attentional layer-wise context expansion (LACE)
- Spatial smoothing; letter trigrams
- Lowest ASR error rate on SWBD: 5.9%, matching human speech recognition at 5.9%

5 areas of potential new breakthroughs:
1. Better modeling for end-to-end and other specialized architectures capable of disentangling mixed acoustic variability factors (e.g., sequential GAN)
2. Better integrated signal processing and neural learning to combat difficult far-field acoustic environments, especially with mixed speakers
3. Use of neural language understanding to model long-span dependencies for semantic and syntactic consistency in speech recognition outputs; use of semantic understanding in spoken dialogue systems to provide feedback that makes acoustic speech recognition easier
4. Use of naturally available multimodal labels, such as images, printed text, and handwriting, to supplement the current way of providing text labels synchronized with the corresponding acoustic utterances (NIPS Multimodality Workshop)
5. Development of ground-breaking deep unsupervised learning methods to exploit potentially unlimited amounts of naturally found acoustic speech data without the otherwise prohibitively high cost of labeling required by the current deep supervised learning paradigm

Speech-based vs. text-based bots: errors in speech recognition are treated as noise in the text input to text-based bots. Solving this robustness problem is a huge opportunity for integrated design.