2017 2nd International Conference on Wireless Communication and Network Engineering (WCNE 2017) ISBN:

Classification of Discussion Threads in MOOC Forums Based on Deep Learning

Lin FENG*, Lei WANG, Sheng-lan LIU and Guo-chao LIU
Dalian University of Technology, Dalian, Liaoning Province, China
*Corresponding author

Keywords: MOOCs, Deep learning, Classification of discussion threads, User interactive behaviors.

Abstract. As the only channel through which students and instructors communicate in Massive Open Online Courses (MOOCs), the MOOC forum plays an important role in supervising students' learning. To make MOOC forums usable, classification of discussion threads is essential and should be done reasonably. In this paper, we propose a model built on a convolutional neural network to classify discussion threads in MOOC forums. To ensure that the model is not restricted by course contents or teaching languages, we define 18 features based on user interactive behaviors rather than on the contents of threads. To validate the model and the features, two groups of contrast experiments are conducted on Rossi's dataset. Experimental results show that our features are superior to Rossi's in measuring the characteristics of threads in different subforums, and that the deep model proposed in this paper also clearly outperforms Rossi's model and models built with traditional classification algorithms. In particular, with our features the performance of our model is on average 21% higher than Rossi's work. For Meetups and Assignments, the AUC reaches up to and , respectively.

Introduction

Massive Open Online Courses (MOOCs), proposed in 2008, have made rapid progress with the development of computer technology and have drawn increasing attention in recent years. From 2011 to 2016, 5.8 million people participated in 6850 courses offered by more than 700 institutions [1]. As a new form of education, MOOCs have the following advantages: (1) students can study anytime and anywhere as long as Internet access is available; (2) MOOCs offer rich course resources covering most fields in different languages, and students can join any course simply by registering for it; (3) all courses are free, which means a student can access course resources, timely answers to questions, homework and exams without any fee and enjoy the best education from all over the world.

In MOOCs, the forum is the only way for users to communicate. Students can ask questions, get feedback from instructors or staff, and even make friends with other students. However, as the number of students participating in MOOCs grows, hundreds of new discussion threads appear every day, and it is hard for users to find what they need among so many threads. Therefore, to make MOOC forums usable, classification of discussion threads is essential and should be done reasonably.

Derrick Coetzee et al. study the effect of MOOC forums on student activity and learning outcomes and propose a reputation system that encourages students to make useful posts [2]. Tobias Hecking et al. analyze keywords extracted from threads and the types of users' posts to divide users with common interests into groups [3]. Wise et al. propose a linguistic model that identifies whether threads are related to course content, addressing the problem of overload and chaos in MOOC forums [4]. Siwei Fu et al.
present an interactive visual analysis system called iForum, which can effectively discover and reveal temporal patterns in MOOC forums [5]. Rossi et al. propose a method that works across discussion threads written in different languages and with different technical terminologies, extracting language-independent features based on the types of interactions to classify the discussion threads [6].

In recent years, deep learning has been a hot topic in the field of machine learning. Inspired by the information-processing mechanism of the human brain, deep learning uses the outputs of lower layers as the inputs of higher layers to extract useful structural information from the input data [7]. Since it was proposed, deep learning has been applied in many fields, producing breakthroughs compared with traditional methods. Building on the work of Rossi et al., this paper proposes a model constructed with a convolutional neural network to classify discussion threads in MOOC forums; by extracting features based on user interactive behaviors, the model is not restricted by course contents or teaching languages. Contrast experiments show that our model outperforms Rossi's.

Dataset

The dataset we use to classify forum threads is the first anonymized open Coursera forum dataset, contributed by Rossi and stored in a GitHub repository [8]. It contains the forum data of 60 Coursera courses from March 2013 to August . In general, a Coursera course forum is structured as follows: the forum home page shows the list of subforums, each of which contains threads. Users can respond to a thread with a post, or comment on a particular post. They can also vote on posts and comments to show their attention or agreement. Coursera forums are generally split into 6 subforums: (1) General Discussions, (2) Meetups, (3) Assignments, (4) Lectures, (5) Logistics and (6) Feedback. The dataset maps subforum names onto these 6 default names and includes the threads, posts and comments of 60 courses, 8 of which are not taught in English (they are in French, Spanish and Chinese). The duration of the courses varies from 5 to 17 weeks, with a mean of . On average, each course has 1660 threads, posts and comments, and 2037 users, about 10% of whom are anonymous.
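The dataset description above implies a simple nesting of courses, threads, posts and comments, together with votes, views and author types. As an illustration only, the sketch below shows one possible in-memory representation of that structure; the class and field names are our own and are not taken from Rossi's released files.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author_id: str
    author_type: str          # e.g. "student", "instructor", "staff"
    post_time: float          # Unix timestamp
    votes: int = 0
    anonymous: bool = False

@dataclass
class Post:
    author_id: str
    author_type: str
    post_time: float
    votes: int = 0
    anonymous: bool = False
    comments: List[Comment] = field(default_factory=list)

@dataclass
class Thread:
    subforum: str             # one of the 6 default subforums
    views: int
    forum_chain_depth: int    # depth of the thread in its forum_chain
    posts: List[Post] = field(default_factory=list)

@dataclass
class Course:
    course_id: str
    start_time: float         # course start (c_start)
    end_time: float           # course end (c_end)
    threads: List[Thread] = field(default_factory=list)
```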

Feature

As MOOCs become more and more popular, the number of courses is increasing rapidly, and courses are taught in more and more languages, not only in English. To ensure that our model is not restricted by course contents or teaching languages, we define features based on user interactive behaviors rather than on the contents of threads. In this paper, we define 18 user-interactive-behavior features of threads from the aspects of structure, underlying social network, popularity and quality. These features describe the interactive characteristics of subforums from different angles. Their definitions are given below, where posts and comments are jointly referred to as messages; a small computation sketch follows the list.

Thread structure:
1. Posts (n_post): number of posts in a thread.
2. Comments (n_comment): number of comments in a thread.
3. Depth (n_deep): depth of a thread in its forum_chain.
4. Start time (t_start): the post time of the first post in a thread.
5. Relative time (t_rel): the start time of a thread relative to the duration of the course, computed as t_rel = (t_start - c_start) / (c_end - c_start).

Underlying social network:
6. Messages posted by students (n_stu): number of messages posted by students in a thread.
7. Messages posted by instructors (n_instr): number of messages posted by instructors and community TAs in a thread.
8. Messages posted by staff (n_staff): number of messages posted by staff in a thread.
9. Messages posted by Coursera staff: number of messages posted by Coursera staff and Coursera Tech Support in a thread.
10. Anonymous messages (n_anony): total number of anonymous posts and comments.
11. User chain (u_chain): boolean feature indicating that at least 3 users made more than 1 post each in a thread.

Popularity measures:
12. Views (n_views): number of views of a thread.
13. Active time (t_active): the post time of the last post in a thread is treated as the end time of the thread (t_end); the active time is computed as t_end - t_start.
14. Relative active time (t_rel-active): the active time of a thread relative to the course duration.

Quality measures:
15. Votes (n_votes): total number of votes on posts and comments in a thread.
16. Max vote (max_vote): maximum number of votes received by any message in a thread.
17. User type of the post with max vote (u_max-vote): user type of the post with the most votes in a thread.
18. Order of the post with max vote (o_max-vote): posts are ordered by posting time, and o_max-vote is the position of the post with the most votes in a thread.
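As an illustration of these definitions (not the authors' extraction code), the snippet below computes a subset of the 18 features for one thread, reusing the hypothetical Course/Thread/Post classes sketched in the Dataset section.

```python
def thread_features(course, thread):
    """Compute a subset of the 18 interaction features for one thread (illustrative only)."""
    posts = thread.posts
    comments = [c for p in posts for c in p.comments]
    messages = posts + comments

    t_start = min(p.post_time for p in posts)                 # 4. start time
    t_end = max(p.post_time for p in posts)                   # end time of the thread
    duration = course.end_time - course.start_time

    return {
        "n_post": len(posts),                                  # 1. posts
        "n_comment": len(comments),                            # 2. comments
        "n_deep": thread.forum_chain_depth,                    # 3. depth
        "t_start": t_start,                                    # 4. start time
        "t_rel": (t_start - course.start_time) / duration,     # 5. relative time
        "n_stu": sum(m.author_type == "student" for m in messages),      # 6
        "n_instr": sum(m.author_type == "instructor" for m in messages), # 7
        "n_anony": sum(m.anonymous for m in messages),         # 10. anonymous messages
        "n_views": thread.views,                               # 12. views
        "t_active": t_end - t_start,                           # 13. active time
        "t_rel_active": (t_end - t_start) / duration,          # 14. relative active time
        "n_votes": sum(m.votes for m in messages),             # 15. votes
        "max_vote": max((m.votes for m in messages), default=0),  # 16. max vote
    }
```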
Model

Rossi's approach defines 20 language-independent features, which are used for classification by a support vector machine (SVM) with a linear kernel [6]; all of those features are handcrafted. In this paper, we propose a classification model that uses a convolutional neural network (CNN) to encode the input features. A CNN can extract more advanced deep features through multiple convolutional layers, combining local features learned in lower layers into higher-level features in higher layers. The difference between a CNN and an ordinary neural network is that a CNN contains a feature extractor composed of convolutional layers and pooling layers. In a convolutional layer, the output of the previous layer is the input of the current layer, and each neuron is connected to a local receptive field of the previous layer. The convolution is first performed with the convolution kernel, a bias is then added, and finally an activation function (usually a sigmoid) is applied to extract the local feature, as in the following equation:

conv(x) = S(Σ_i Σ_j Σ_k θ_ijk x_ijk + b)   (1)

In Eq. 1, S(·) is the sigmoid function, θ holds the parameters of the convolution kernel, b is the bias, i and j index the spatial extent of the kernel, and k indexes the feature maps of the previous layer. Because neurons at different locations share the parameters of the convolution kernel, the number of network parameters is reduced, and multiple convolution kernels can be applied to extract multiple features [9].

After the convolutional layers we obtain a very high-dimensional convolutional feature vector. A classifier fed with such a large input easily overfits, which leads to unreasonable results, so it is necessary to add a pooling layer, which reduces the feature dimensions, decreases the number of parameters and prevents overfitting. Commonly used pooling operations are average-pooling and max-pooling, which replace a local region with its average or maximum value. In addition, pooling helps make the representation approximately invariant to small translations of the input [10].
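As a minimal illustration of Eq. 1 (not the authors' implementation), the NumPy snippet below applies one sigmoid-activated convolution kernel to a single local receptive field, summing over the kernel positions i, j and the input feature maps k.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_unit(x_patch, theta, b):
    """Eq. 1: one output of a convolutional neuron.
    x_patch -- local receptive field, shape (i, j, k)
    theta   -- kernel parameters, same shape as x_patch
    b       -- scalar bias
    """
    return sigmoid(np.sum(theta * x_patch) + b)

# Tiny example: a 3x1 receptive field over 2 feature maps of the previous layer.
rng = np.random.default_rng(0)
x_patch = rng.normal(size=(3, 1, 2))
theta = rng.normal(size=(3, 1, 2))
print(conv_unit(x_patch, theta, b=0.1))
```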

The structure of a CNN has a critical influence on classification performance, and an inappropriate network structure easily leads to overfitting or underfitting [11]. In this paper, aiming at the classification of discussion threads in MOOC forums, we modify the basic CNN structure shown in Figure 1(a) as follows; the configuration of the network is shown in Figure 1(b), and a sketch of the resulting architecture is given after the list.

1. Apply batch normalization to the input layer and every hidden layer. Batch normalization reduces the influence of shifts in the data distribution and thus speeds up training.
2. Apply a pooling layer only after the last convolutional layer rather than after every convolutional layer. Thread classification, the problem addressed in this paper, is different from image recognition or automatic speech recognition: there is no spatial relation between the input features, so invariance to translation is unnecessary. The purpose of using a CNN here is only to encode features, so the use of pooling layers is reduced; only one average-pooling layer is applied, after the last convolutional layer, to reduce the feature dimensions.
3. Use a global pooling layer instead of a fully connected layer. According to [12], global average-pooling over the feature maps in the classification layer is easier to interpret and less prone to overfitting than a traditional fully connected layer.

Figure 1. (a) Convolutional neural network structure; (b) configuration of the network.
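A minimal Keras sketch of a network with these three modifications is shown below. The number of convolutional layers, the filter counts and the kernel sizes are placeholders of our own; the actual configuration is given in Figure 1(b) and is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_thread_cnn(n_features=18, n_classes=6):
    """Illustrative CNN: batch norm everywhere, a single average-pooling layer after
    the last convolution, and global average pooling instead of a dense layer."""
    inputs = layers.Input(shape=(n_features, 1))
    x = layers.BatchNormalization()(inputs)            # modification 1: BN on the input layer
    for filters in (32, 64, 64):                       # placeholder depth and width
        x = layers.Conv1D(filters, kernel_size=3, padding="same", activation="sigmoid")(x)
        x = layers.BatchNormalization()(x)             # modification 1: BN on every hidden layer
    x = layers.AveragePooling1D(pool_size=2)(x)        # modification 2: one pooling layer at the end
    x = layers.Conv1D(n_classes, kernel_size=1)(x)     # one feature map per class (assumed, NIN-style)
    x = layers.GlobalAveragePooling1D()(x)             # modification 3: global average pooling
    outputs = layers.Softmax()(x)
    return models.Model(inputs, outputs)

model = build_thread_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy")
```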

Experiments and results

In this paper, to ensure that the model is not restricted by course contents or teaching languages, the extracted features have nothing to do with thread contents and are based only on user-interaction data. To validate that the model has this ability, we randomly select the forum data of 40 courses taught in English as the training dataset, which contains threads, and take the remaining 12 courses taught in English together with the 8 courses taught in other languages as the test dataset, which contains threads. The distribution of the training threads over the six subforums is shown in Figure 2.

Figure 2. Distribution of threads in different subforums (y-axis: number of threads; x-axis: Gen. Disc., Assignments, Meetups, Feedback, Lectures, Logistics).

From Figure 2 we can see that the number of threads in the different subforums is unevenly distributed. Feedback has the fewest threads, 3518 in total, while Assignments has the most, 28763 in total, accounting for 4% and 36% of the total, respectively. To address this class imbalance in the training dataset, we use the Synthetic Minority Over-sampling Technique (SMOTE) to resample the training data.

We perform two groups of contrast experiments to validate that the CNN model used in this paper performs better. Since AUC is insensitive to the sample distribution and is not affected by the classification probability threshold, AUC is chosen as the evaluation metric; a sketch of this evaluation pipeline is given at the end of this section.

1. Comparison of the CNN with traditional classification algorithms. Besides the model trained with the CNN, we also build models with three traditional classification algorithms, k-Nearest Neighbors (KNN), Logistic Regression (LR) and Random Forest (RF), using the features we defined as input. The results are shown in Table 1.

Table 1. Classification performance (AUC) of different subforums with different algorithms. Rows: Gen. Disc., Assignments, Meetups, Feedback, Lectures, Logistics, Average; columns: KNN, LR, RF, CNN.

2. Comparison of the model proposed in this paper (m1) with the model proposed by Rossi (m2). We apply 18 of the 20 features defined by Rossi (the dataset has no relevant data for the other 2) and the 18 features defined in this paper to m1 and m2, giving four experiments. The results are shown in Table 2.

Table 2. Classification performance (AUC) of Rossi's model and our model with different features. Rows: Gen. Disc., Assignments, Meetups, Feedback, Lectures, Logistics, Average; columns: SVM (Rossi's features), CNN (Rossi's features), SVM (our features), CNN (our features).

Table 1 shows that the classification performance of the CNN is on average 8% higher than KNN, 6% higher than LR, and 3% higher than RF. The AUC of each of the six categories is at least 0.77, and the highest, for Meetups, reaches . These results indicate that the model learned with the CNN outperforms the models built with traditional classification algorithms.

From Table 2 we can see that in the classification results of the linear SVM with Rossi's features, only the AUC of Meetups is greater than 0.8; for the other five categories the AUC lies only within , so the average is only and the classification performance is not good. However, with Rossi's features the performance of the CNN is increased by nearly 7% on average, and the lowest AUC is raised from 0.53 to . Compared with Rossi's features, when the features defined in this paper are used as input, the average performance of the linear SVM and the CNN rises to and respectively, which is 13.5% and 21% higher than Rossi's work. This analysis shows that the user-interactive-behavior features defined in this paper measure the characteristics of threads in different subforums better than Rossi's features, and that the performance of our model is also much better than Rossi's model.
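The snippet below sketches, under our own assumptions, how the evaluation described above could be wired together with scikit-learn and imbalanced-learn: SMOTE oversampling of the training set followed by one-vs-rest AUC per subforum. It is not the authors' code; a Random Forest stands in for the classifier, and the feature matrices are random placeholders for the 18 interaction features.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

SUBFORUMS = ["Gen. Disc.", "Assignments", "Meetups", "Feedback", "Lectures", "Logistics"]

def evaluate(X_train, y_train, X_test, y_test, clf=None):
    """Oversample the training data with SMOTE, fit a classifier, report per-subforum AUC."""
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
    clf = clf or RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_res, y_res)

    proba = clf.predict_proba(X_test)                       # shape (n_samples, 6)
    y_bin = label_binarize(y_test, classes=list(range(len(SUBFORUMS))))
    aucs = {name: roc_auc_score(y_bin[:, k], proba[:, k])   # one-vs-rest AUC per class
            for k, name in enumerate(SUBFORUMS)}
    aucs["Average"] = float(np.mean(list(aucs.values())))
    return aucs

# Placeholder usage with random data standing in for the 18 interaction features.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(600, 18)), rng.integers(0, 6, size=600)
X_te, y_te = rng.normal(size=(200, 18)), rng.integers(0, 6, size=200)
print(evaluate(X_tr, y_tr, X_te, y_te))
```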

Conclusion

As a branch of machine learning, deep learning has received more and more attention in recent years, and with the development of computer technology MOOCs have also developed rapidly. This paper applies a convolutional neural network to the classification of discussion threads in MOOC forums, aiming to improve classification performance and the usability of the forums, and thereby to improve course quality and students' learning in MOOCs. We propose a model constructed with a convolutional neural network. From the dataset released by Rossi, which contains 60 Coursera courses and nearly threads, we define 18 features based on user interactive behaviors rather than on thread contents to train the model. The experimental results show that the classification performance of our model reaches an AUC of at least 0.77 for each of the six thread categories; for Meetups and Assignments, the AUC reaches up to and , respectively. Compared with the model proposed by Rossi, the performance is increased by 21% on average. In addition, we compare the CNN with three traditional classification algorithms, KNN, LR and RF, and the results show that the CNN outperforms all three.

References

[1] Wei, Xiaocong, et al. "A Convolution-LSTM-Based Deep Neural Network for Cross-Domain MOOC Forum Post Classification." Information 8.3 (2017): 92.
[2] Coetzee, Derrick, A. Fox, and M. A. Hearst. "Should your MOOC forum use a reputation system?" ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2014.
[3] Hecking, Tobias, I. A. Chounta, and H. U. Hoppe. "Analysis of User Roles and the Emergence of Themes in Discussion Forums." Network Intelligence Conference. IEEE, 2015.
[4] Wise, Alyssa Friend, Y. Cui, and J. Vytasek. "Bringing order to chaos in MOOC discussion forums with content-related thread identification." International Conference on Learning Analytics & Knowledge. ACM, 2016.
[5] Fu, S., et al. "Visual Analysis of MOOC Forums with iForum." IEEE Transactions on Visualization & Computer Graphics 23.1 (2017): 201.
[6] Rossi, Lorenzo A., and O. Gnawali. "Language independent analysis and classification of discussion threads in Coursera MOOC forums." IEEE International Conference on Information Reuse and Integration. IEEE, 2014.
[7] LeCun, Yann, Y. Bengio, and G. Hinton. "Deep learning." Nature 521 (2015): 436-444.
[8] L. A. Rossi. Anonymized Coursera Discussion Threads Dataset. GitHub, July.
[9] Lawrence, S., et al. "Face recognition: a convolutional neural-network approach." IEEE Transactions on Neural Networks 8.1 (1997): 98.
[10] Abdel-Hamid, Ossama, et al. "Applying Convolutional Neural Networks concepts to hybrid NN-HMM model for speech recognition." IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2012.
[11] Abdel-Hamid, O., L. Deng, and D. Yu. "Exploring convolutional neural network structures and optimization techniques for speech recognition." 2013.
[12] Lin, Min, Q. Chen, and S. Yan. "Network in Network." arXiv preprint arXiv:1312.4400 (2013).
