Tencent AI Lab Rhino-Bird Visiting Scholar Program: Research Topics


1. Computer Vision Center

Interested in multimedia (both image and video) AI, including:

1.1 Generation: theory and applications of GANs (e.g., cartoon painting, banner generation for ad promotion).
1.2 Editing: image and video low- and mid-level vision, such as image/video super-resolution, enhancement, denoising, deblurring, harmonization, and so on.
1.3 Analysis & Understanding: large-scale image/video classification, video semantic segmentation, video localization, image and video captioning, and so on.
1.4 Recommendation: image and video recommendation and retrieval.
1.5 Vision-driven RL: vision-based RL tasks (e.g., visual object tracking, indoor robot navigation) and their deployment on real-world robots.

2. Speech Processing Center

2.1 Far-field Signal Processing

In the far-field speech recognition task, speech signal energy attenuation, stationary and non-stationary noise, reverberation, and loudspeaker echo during the propagation of the target sound to the microphones all increase the difficulty of speech recognition and voice wake-up. Microphone array signal processing and deep-learning-based speech noise reduction/separation can improve speech quality and thereby address the far-field speech recognition problem. Suggested research areas:

- Microphone array algorithm design to improve speech recognition with multiple speakers and interference sources.
- Dereverberation algorithm design to enhance far-field speech recognition.
- Sound source localization algorithm design to improve positioning accuracy in far-field noisy environments.
- Echo cancellation, noise suppression, and other algorithms designed to enhance speech recognition in noisy environments.
- Neural network algorithm design for single-channel and multi-channel end-to-end far-field speech enhancement.
- Joint training and optimization of front-end speech processing and back-end speech recognition acoustic models, to improve both systems.

Acoustic scene detection aims to determine the current acoustic scene or event from acoustic features, such as stadiums, concert halls, rain, or police car sirens. Topics include:

- End-to-end neural network algorithm design.
- Accurate temporal localization of acoustic scenes/events.
- Accurate detection of multiple scenes/events.

2.2 Speech Recognition

Speech recognition, as one of the most natural ways of human-computer interaction, plays a vital role in the AI era. With the successful application of deep learning in the field of speech recognition, new models and algorithms continue to be proposed and to improve recognition accuracy. On some test sets, speech

recognition systems already perform better than humans. These advances are constantly promoting speech recognition in AI applications. In spite of this, many research problems remain to be solved, including:

- End-to-end speech recognition.
- Multilingual speech recognition.
- Deep-learning-based joint model optimization.
- Robust speech recognition and far-field speech recognition.
- The cocktail party problem.

2.3 Speech Synthesis

Speech synthesis technology is a key part of human-computer speech interaction. The user experience suffers when the synthesized voice is not subjectively attractive to listeners. Personalized expressive speech synthesis aims to build synthesized voices that sound familiar to listeners, such as public figures, famous stars, friends, and family members. However, labeled data for the desired voices, recorded in a clean environment, is usually difficult to collect. Building a synthesized voice with limited data remains a challenging task. We encourage research directions including but not limited to the following:

- Multi-speaker speech synthesis.
- Speaker adaptation to the target voice characteristics and speaking style.
- Multilingual and cross-lingual speech synthesis.
- Expressive speech synthesis with controllable speaking styles.
- Speech synthesis with unlabeled data.
- New paradigms of speech synthesis.

2.4 Speaker Recognition

Identifying a person by his or her voice is an important human ability that most take for granted in natural human-to-human interaction and communication. Automatic speaker recognition systems have emerged as an important means of verifying identity in many e-commerce applications, as well as in general business interactions, intelligent housing systems, forensics, and law enforcement. Future directions include:

- Domain and environment mismatch. Systems often perform very well in the domain/environment for which they are trained.
However, performance suffers when users operate the system in a different domain/environment. How to adapt a system from a resource-rich domain/environment to a resource-limited one, and how to make speaker recognition systems robust to domain/environment mismatch, are great challenges.

- Short utterances in text-independent speaker recognition. The performance of i-vector/PLDA systems degrades rapidly in the presence of short utterances, or utterances with varying durations, because a short utterance contains limited phonemic information and its i-vector has a much larger posterior covariance.

- Text-dependent speaker recognition using short utterances. It is more natural to use HMMs rather than GMMs for text-dependent tasks, but HMMs require local hidden variables, which are difficult to handle because of data fragmentation. More recently, using DNNs/RNNs to extract utterance-level features, or building end-to-end DNN/RNN systems, has been attracting more and more attention.

3. Natural Language Processing Center

3.1 Natural Language Understanding (NLU)

NLU processes, interprets, and analyzes natural language with techniques that help humans or downstream systems understand it. NLU has been the core of NLP for years, given its fundamental role in the first step of processing natural language. NLU research covers many aspects. At Tencent AI Lab, we focus on (but are not limited to) the following topics, which are also the suggested areas for applying for the research funds:

- Fundamental NLP, including word segmentation, part-of-speech tagging, constituent and dependency parsing, named entity recognition, sentiment analysis, key-phrase extraction, etc.
- Semantics, including multi-granularity (word, phrase, sentence, document) embeddings, meaning representation and semantic tagging, etc.
- Knowledge representation and inference, and their combination with deep learning techniques.
- Reading comprehension and causal-relationship extraction.

3.2 Natural Language Generation (NLG)

- Automatic summarization
- Automatic article writing

3.3 Dialogs

Dialog research has been a hot spot for years, since conversational systems are a key part of artificial intelligence, enabling backend systems to interact with people through language to assist, enable, or entertain. At Tencent AI Lab, we focus on (but are not limited to) the following topics, which are also the suggested areas for applying for the research funds:

- Extractive dialog systems, including system construction, question understanding, answer ranking and re-ranking, slot tagging and intent classification, dialog management, etc.
- Dialog response generation, including question-answer modeling, response generation, response quality assessment, etc.
- Dialog management, including learned dialog managers, multi-turn conversation modeling, etc.
- Multi-user, multi-turn, multi-modality dialog systems.

3.4 Machine Translation (MT)

We have two major MT areas, namely neural machine translation (NMT) and interactive machine translation (IMT).
NMT has advanced the state of the art in MT in recent years; however, many problems remain unsolved. IMT is a rising field that is more applicable to industry, interweaving MT with human-computer interaction. At Tencent AI Lab, we focus on (but are not limited to) the following topics, which are also the suggested areas for applying for the research funds:

- Adequacy-oriented NMT, including various techniques for improving the adequacy of translations generated by NMT models.
- NMT visualization and interpretability, including visualizing and interpreting the internal structure and composition of NMT models.
- Interactive MT with NMT, including interactive translation systems on top of NMT models.
- Multi-domain NMT, including building a practical NMT system on large-scale data consisting of bilingual sentences from multiple domains.
- NMT with novel architectures, including building NMT models beyond the standard encoder-decoder framework and/or with novel networks such as capsule networks.
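As one concrete handle on NMT visualization and interpretability, the attention weights of an encoder-decoder model form a soft alignment map between target and source positions that can be inspected directly. A minimal sketch in NumPy, with toy dimensions and random tensors standing in for real encoder/decoder states (all names and sizes here are illustrative assumptions, not any particular system's API):

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention over source positions.

    Returns the context vectors and the attention weights; each row of
    the weights is the alignment distribution an interpretability tool
    would visualize for one target position.
    """
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)          # (tgt, src) similarity
    # numerically stable softmax over the source axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

rng = np.random.default_rng(0)
tgt_len, src_len, d = 3, 5, 8                         # toy lengths and model dim
Q = rng.normal(size=(tgt_len, d))                     # decoder states (stand-ins)
K = rng.normal(size=(src_len, d))                     # encoder keys (stand-ins)
V = rng.normal(size=(src_len, d))                     # encoder values (stand-ins)

context, weights = attention(Q, K, V)
print(weights.shape)  # (3, 5): one alignment distribution per target position
```

Because each row of `weights` sums to one, it can be plotted as a heat map over source tokens, which is the usual starting point for interpreting what an NMT model attends to.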

4. Machine Learning Center

4.1 Deep learning theory and frameworks. Theoretical understanding of deep learning, or replacement frameworks for deep learning.

4.2 Machine learning models and applications. Machine learning models for different applications, such as bandit problems, transfer learning, reinforcement learning, and neural memory mechanisms.

4.3 Unsupervised learning with deep neural networks. The potential and limitations of neural-network-based unsupervised learning methods, new unsupervised generative methods with deep neural networks, and multi-modal learning.

4.4 Large-scale deep graph learning. Node embedding for large-scale social networks, discovering communities in graphs, and applying deep learning techniques to graph learning.

4.5 Distributed optimization algorithms. Design and development of more efficient distributed optimization algorithms with theoretical guarantees and/or outstanding performance in practical applications.

5. Reinforcement Learning Center

5.1 Bridging between simulation and the physical world

Within the past decade, simulation has fostered tremendous progress in modern machine learning, especially in reinforcement learning (e.g., AlphaGo, OpenAI Universe). This is mainly due to three advantages of simulation: a) it can run much faster than real time; b) its cost is much lower than that of collecting real data (e.g., accidents in autonomous driving); c) it is convenient to conduct controlled experiments for almost all cases, and to repeat them. However, it is also extremely challenging to transfer models learned in simulation to the physical world. In this call for proposals, we hope to develop technologies that bridge simulation and the physical world:

- Realistic simulation of the physical world.
- Photorealistic content generation for games.
- Transfer learning and domain adaptation.
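One common attack on the simulation-to-reality gap above is domain randomization: the simulator's physical parameters are resampled every episode, so the learned policy must cope with a distribution of dynamics rather than one fixed configuration. A minimal sketch of the resampling loop, where `run_episode` is a hypothetical placeholder for a real physics engine plus policy rollout:

```python
import random

def run_episode(friction, mass, sensor_noise):
    # Placeholder "simulator": returns a fake scalar reward in (0, 1].
    # In a real proposal this would roll out a policy in a physics engine
    # configured with the sampled parameters.
    return 1.0 / (1.0 + abs(friction - 0.5) + abs(mass - 1.0) + sensor_noise)

def sample_domain(rng):
    # Resample physical parameters each episode so the policy cannot
    # overfit to a single simulator configuration (domain randomization).
    return {
        "friction": rng.uniform(0.2, 0.8),
        "mass": rng.uniform(0.5, 2.0),
        "sensor_noise": rng.uniform(0.0, 0.1),
    }

rng = random.Random(0)
rewards = [run_episode(**sample_domain(rng)) for _ in range(100)]
print(sum(rewards) / len(rewards))
```

The ranges chosen for `sample_domain` are arbitrary here; in practice they would be calibrated so that the real system's parameters plausibly fall inside the randomized distribution.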
5.2 Mastering StarCraft

Despite the promising performance of conventional reinforcement learning algorithms, learning to play a real-time multiplayer strategy game (e.g., StarCraft) remains an important yet challenging task. Compared with chess and Go, StarCraft is orders of magnitude more complex, and you cannot see all of your opponent's troop deployments or construction projects. This forces you to use what you have seen, which is always imperfect, to predict what they may be planning, which can come from a huge action space. In this call for proposals, we encourage researchers to push the state-of-the-art game AI in mastering StarCraft, including but not limited to the following areas:

- APIs to train self-playing StarCraft bots.
- Learning to play StarCraft from game replays.
- Learning to act with imperfect information.
- Learning to coordinate multiple agents in StarCraft.
- Memory and planning in StarCraft.

5.3 Conversational AI

During the past half decade, we have seen an increasing number of so-called intelligent digital assistants (e.g., Alexa, Siri, Google Assistant, Cortana) introduced on various devices. Although the conversational AI technology behind these applications keeps getting better, the expectation of human-level intelligence is far from being met. Here, we call for proposals on bridging the conversational gap between humans and AI bots. We encourage research directions including but not limited to the

following:

- Performance evaluation for conversational AI.
- Natural language understanding for conversational AI.
- Speech understanding for conversational AI.
- Learning to understand human intentions beyond language.
- Dialog planning and management.
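Several of these directions rest on natural language understanding for conversational AI, whose simplest instance is intent classification: mapping a user utterance to one of a fixed set of intents. A minimal sketch using bag-of-words centroids and cosine similarity, with an invented toy intent inventory (the intents, utterances, and helper names are illustrative only, not any production system's):

```python
from collections import Counter
import math

# Toy training data: (utterance, intent) pairs, invented for illustration.
TRAIN = [
    ("play some music", "play_music"),
    ("put on a song", "play_music"),
    ("what is the weather today", "get_weather"),
    ("will it rain tomorrow", "get_weather"),
    ("set an alarm for seven", "set_alarm"),
    ("wake me up at six", "set_alarm"),
]

def bow(text):
    # Bag-of-words vector as a word -> count mapping.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors; Counter
    # returns 0 for missing words, so only keys of `a` matter in the dot.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One bag-of-words centroid per intent, summed over its utterances.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(bow(text))

def classify(utterance):
    # Assign the intent whose centroid is most similar to the utterance.
    return max(centroids, key=lambda lab: cosine(bow(utterance), centroids[lab]))

print(classify("play a song for me"))   # -> play_music
print(classify("is it going to rain"))  # -> get_weather
```

A research-grade system would of course replace the centroids with learned encoders and add slot tagging, but even this sketch exposes the evaluation question raised above: which utterances should count as in-domain for each intent.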