An Improved Segmentation of Online English Handwritten Text Using Recurrent Neural Networks


Cuong Tuan Nguyen and Masaki Nakagawa
Department of Computer and Information Sciences, Tokyo University of Agriculture and Technology

Abstract

Segmentation of online handwritten text benefits from exploiting the context of the strokes written before and after each candidate segmentation point. This paper presents an application of Bidirectional Long Short-Term Memory (BLSTM) recurrent neural networks to the segmentation of online handwritten English text. The networks incorporate long-range context from both the forward and backward directions to improve the confidence of segmentation under uncertainty. We show that applying the method to the semi-incremental recognition of online handwritten English text reduces waiting time by up to 62% and processing time by 50%. Moreover, the recognition rate of the system also improves remarkably, by 3 points from 71.7%.

1. Introduction

Due to the spread of pen-based and touch-based devices such as tablet PCs, smartphones, electronic whiteboards and digital pens, online handwriting recognition is receiving renewed attention. In particular, online handwritten text recognition is a practical input method for devices without a keyboard [1, 2]. Research focuses on improving the recognition rate and reducing the processing time and dictionary size of recognition systems [3, 4], which enables a handwritten text recognition system to run effectively and reliably on small portable devices.

Compared to isolated character or word recognition, handwritten text recognition faces the difficulty of word segmentation and character segmentation due to the inherent ambiguity of segmentation. Moreover, in continuous handwriting, characters tend to be written more cursively. To deal with this problem, applying context to segmentation is crucial. The typical approach is over-segmentation in combination with recognition results and linguistic context [3]. Based on geometric features, all potential segmentation positions are determined to build up hypothetical segmentation paths; recognition results and linguistic context are then combined to evaluate the paths and find the best one. The SVM method, which has been widely applied to numerous classification tasks, achieves good performance on segmentation of online handwritten text [5]. The segmentation task, however, can be further improved by incorporating context from both the forward and backward directions. Bidirectional Long Short-Term Memory (BLSTM) [12], an improved bidirectional recurrent neural network, allows the network to access long-range context and has shown its effectiveness in many sequence classification tasks.
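To make the over-segmentation and best-path evaluation outlined above concrete, the following is a minimal sketch rather than the method of [3] or of this paper: word_candidates and lm_score are hypothetical callables standing in for the word recognizer and the linguistic context, and the best path over the candidate segmentation points is found by dynamic programming.

# Minimal sketch (not the paper's algorithm): dynamic programming over candidate
# segmentation points. A candidate word pattern spans segments i..j-1 and is scored by a
# (hypothetical) word recognizer plus a (hypothetical) linguistic score.
def best_path(n_segments, word_candidates, lm_score, max_span=4):
    # best[k] = (accumulated score, last word, word sequence) of the best path ending at k
    best = {0: (0.0, None, [])}
    for j in range(1, n_segments + 1):
        for i in range(max(0, j - max_span), j):
            if i not in best:
                continue
            prev_score, prev_word, prev_words = best[i]
            for word, rec_score in word_candidates(i, j):
                score = prev_score + rec_score + lm_score(prev_word, word)
                if j not in best or score > best[j][0]:
                    best[j] = (score, word, prev_words + [word])
    return best[n_segments][2]

# Toy usage: three segments, one fake word candidate per span, a flat language model.
fake_candidates = lambda i, j: [("w{}{}".format(i, j), -(j - i))]
print(best_path(3, fake_candidates, lambda prev, word: 0.0))

In the actual system, the candidate and linguistic scores come from the word recognizer and the trigram language model described later, and the candidate positions come from over-segmentation.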
There are two methods for text recognition. The batch recognition method, which recognizes handwritten text after the user has finished writing, can easily employ the full context to achieve a high recognition rate [3]. If all segmentation and recognition processes are performed only after the whole text has been written, however, the user suffers a long waiting time, and the more text is written, the longer the wait becomes. The other is the incremental recognition method [6, 7], which recognizes handwriting while the user is writing. Although it does not incur a large waiting time after the user has finished writing, it may degrade the recognition rate due to local processing of every stroke (a sequence of finger-tip or pen-tip coordinates from finger/pen-down to finger/pen-up) and increase the total CPU time due to repeated processing after every stroke is received. The semi-incremental recognition method, which resumes segmentation and recognition as new strokes arrive, makes such context available for segmentation and recognition. It realizes a very small waiting time, a low CPU burden and no significant loss in recognition rate for both Japanese and English text [8, 9]. In this work, we apply BLSTM to improve segmentation and evaluate its effect on the semi-incremental English recognition method.

2. Segmentation of online handwritten text

To recognize online handwritten text, there are two main streams: the segmentation-free method and the dissection method. In this paper, we focus on the dissection method, since it is better suited to Chinese and Japanese text recognition [1, 3] and it could produce better results even for Western handwriting recognition, for which the segmentation-free method has been dominant.

2.1. Segmentation-recognition strategy

Online handwritten text recognition deals with the problem of recognizing handwritten text consisting of many text lines. For this problem, handwritten text is segmented into text lines, and then each text line is segmented into words. The segmentation can be a hard decision (yes or no) or a soft decision (allowing multiple possibilities). Segmentation is based on geometric layout features (e.g., the gap between strokes, stroke histograms, their interrelationship and so on). Due to the instability and ambiguity of these features in practical handwriting, however, it is difficult to segment correctly without recognition cues and linguistic context. Thus, we employ the soft-decision approach.

Segmentation-recognition is accomplished in two steps: over-segmentation and path evaluation and search. A text line is over-segmented into primitive segments such that each segment composes a single word or a part of a word. A segment or a sequence of a few consecutive segments is taken as a candidate word pattern, which is recognized by a word recognizer returning a list of candidate categories. Multiple ways of segmenting into candidate words and multiple ways of recognizing each candidate as a word are represented by a segmentation-recognition candidate lattice [3]. Text recognition is performed by searching for the best path through the lattice, considering geometric and linguistic context as well as word recognition scores.

2.2. Features for segmentation

Starting from the local and global features described in [9], we extend the feature set to the nine geometric features listed in Table 2, using the terms defined in Table 1.

Table 1. Terms used in the feature definitions.

  Sp      Immediately preceding stroke
  Ss      Immediately succeeding stroke
  Bp      Bounding box of Sp
  Bs      Bounding box of Ss
  Bp_all  Bounding box of all the preceding strokes
  Bs_all  Bounding box of all the succeeding strokes
  P       Pattern of all strokes
  Psub    Sub-pattern of Sp and Ss

Table 2. Features for English word segmentation.

  F1  Distance between Bs_all and Bp_all along the x-axis
  F2  Average horizontal stroke length of P
  F3  Average horizontal stroke length of Psub
  F4  Overlap length between Bp and Bs along the x-axis
  F5  Overlap length between Bp and Bs along the y-axis
  F6  Minimum point distance between Ss and Sp
  F7  Angle between the x-axis and the vector joining the centroids of Bs and Bp
  F8  Ratio of Bs width to Bp width
  F9  Ratio of Bs height to Bp height

2.3. Segmentation by an SVM classifier

For word over-segmentation of a text line, the work in [9] uses an SVM classifier to classify each off-stroke into two classes: segmentation point (SP) or non-segmentation point (NSP). An SP off-stroke separates two words, while an NSP off-stroke lies within a word. Off-strokes with low confidence are classified as undecided points (UP). In training the SVM, however, due to the imbalance between the numbers of positive and negative training patterns (i.e., the number of SPs and that of NSPs), we need to adjust the costs of false positives and false negatives [10]. The higher the cost of false positives, the higher the precision of determining SP; the same logic applies to false negatives and the precision of determining NSP. We therefore use a combination of two SVMs: one with high precision for determining SP, the other with high precision for determining NSP.
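To illustrate how such a pair of asymmetric SVMs can be combined into SP/NSP/UP decisions, the following is a minimal sketch rather than the authors' implementation: it assumes scikit-learn, uses class weights as a loose stand-in for the cost-factor adjustment of [10], and takes the nine features of Table 2 as an already-computed feature matrix.

# Minimal sketch (not the paper's implementation): two RBF-kernel SVMs with asymmetric
# class weights label each off-stroke as SP, NSP or UP. Class 1 = SP, class 0 = NSP.
import numpy as np
from sklearn.svm import SVC

# Weighting the NSP class more heavily makes this SVM predict SP only when confident,
# i.e. it has high precision for SP. The weights are illustrative, not the paper's values.
svm_sp_precise = SVC(kernel="rbf", class_weight={0: 5.0, 1: 1.0})
# Conversely, weighting the SP class more heavily gives high precision for NSP.
svm_nsp_precise = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0})

def train(features: np.ndarray, labels: np.ndarray) -> None:
    """Train both SVMs on the same off-stroke feature vectors (Table 2 features)."""
    svm_sp_precise.fit(features, labels)
    svm_nsp_precise.fit(features, labels)

def classify_off_strokes(features: np.ndarray) -> list:
    """Combine the two SVMs: trusted SP, trusted NSP, otherwise undecided (UP)."""
    sp_pred = svm_sp_precise.predict(features)
    nsp_pred = svm_nsp_precise.predict(features)
    labels = []
    for a, b in zip(sp_pred, nsp_pred):
        if a == 1 and b == 1:        # SP-precise SVM says SP, NSP-precise SVM does not insist on NSP
            labels.append("SP")
        elif a == 0 and b == 0:      # NSP-precise SVM says NSP, SP-precise SVM does not insist on SP
            labels.append("NSP")
        else:                        # no trusted decision, or the trusted decisions conflict
            labels.append("UP")
    return labels

The class weights above are arbitrary illustrative values; the cost factors actually used in the experiments are reported in Section 4.2.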
2.4. Segmentation by a BLSTM classifier

One of the key benefits of RNNs is their ability to use previous context. For standard RNN architectures, however, the range of context that can be accessed in practice is limited by the vanishing gradient problem [11]. Long Short-Term Memory (LSTM) [11] is an RNN architecture designed to address the vanishing gradient problem. An LSTM layer consists of multiple recurrently connected memory blocks. Each block contains a set of internal units, known as cells, whose activations are controlled by three multiplicative gate units. The effect of the gates is to allow the cells to store and access information over long periods of time. For many tasks it is useful to have access to future as well as past context. Bidirectional LSTM (BLSTM) [12] provides this by using two separate hidden layers, which process the input in the forward and backward directions and are both connected to the same output layer, giving access to long-range context in both directions.

We use BLSTM to exploit the context of the strokes written before and after an off-stroke when classifying that off-stroke for segmentation. Training a BLSTM does not suffer from the imbalance in the numbers of class patterns. We therefore use a BLSTM with two thresholds to perform over-segmentation. For over-segmentation, we need to find all potential segmentation points among the off-strokes (which may later be determined as segmentation or non-segmentation points); the remaining off-strokes are non-segmentation points. Thus, we set a threshold TH1 and treat an off-stroke as a potential segmentation point if its score is above TH1 and as a non-segmentation point if its score is below TH1. Likewise, we set another threshold TH2 and treat an off-stroke as a potential non-segmentation point if its score is below TH2 and as a segmentation point if it is above TH2. Off-strokes whose scores fall between TH1 and TH2 are classified as UP. Fig. 1 illustrates this method.

Figure 1. Over-segmentation using BLSTM.
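As an illustration of the two-threshold scheme, the following sketch assumes a PyTorch implementation with a single bidirectional LSTM layer and a sigmoid output per off-stroke; it is not the authors' code, and the feature sequence for each off-stroke is assumed to be prepared elsewhere (e.g., from the features of Table 2).

# Minimal sketch (not the paper's implementation): a bidirectional LSTM scores each
# off-stroke, and two thresholds TH1 < TH2 split off-strokes into NSP / UP / SP.
import torch
import torch.nn as nn

class BLSTMSegmenter(nn.Module):
    def __init__(self, n_features: int = 9, hidden_size: int = 20):
        super().__init__()
        # One bidirectional LSTM layer; both directions feed a shared output layer.
        self.blstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_size, 1)   # one score per off-stroke

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, off-stroke sequence, features) -> per-off-stroke scores in [0, 1]
        h, _ = self.blstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)

def over_segment(scores: torch.Tensor, th1: float = 0.1, th2: float = 0.9) -> list:
    """Map per-off-stroke scores to 'NSP', 'UP' or 'SP' using two thresholds."""
    labels = []
    for s in scores.tolist():
        if s < th1:
            labels.append("NSP")    # confidently within a word
        elif s > th2:
            labels.append("SP")     # confidently between words
        else:
            labels.append("UP")     # undecided; resolved later by the path search
    return labels

# Example usage with random features for a line of 12 off-strokes:
model = BLSTMSegmenter()
features = torch.randn(1, 12, 9)
print(over_segment(model(features)[0]))

The thresholds TH1 = 0.1 and TH2 = 0.9 used here simply mirror the values reported in Section 4.2, where they are chosen from the distribution of the network's output scores.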

3. Semi-incremental recognition method

3.1. Processing flow

The semi-incremental method, like the incremental method, performs recognition in the background while the user is writing. It propagates the effect of newly written strokes to the recognition of previously written strokes by resuming segmentation, recognition and best-path search. To do so, the method determines a segmentation resuming point (Seg_rp) from which segmentation is resumed, and a processing window, termed the scope, within which the segmentation-recognition candidate lattice (src-lattice) is updated and the best-path search is resumed. Fig. 2 shows the processing flow of the semi-incremental recognition method. First, we receive new strokes. Secondly, we update Seg_rp. Thirdly, we apply segmentation from Seg_rp. Fourthly, we determine the scope. Fifthly, we update the src-lattice for this scope. Finally, we resume the best-path search from the beginning of this scope to obtain the text recognition result. The segmentation and text recognition results of the scope are used in the next processing cycle.

Figure 2. Flow of semi-incremental recognition.

3.2. Seg_rp determination and segmentation process

At the beginning of each processing cycle, from the result of text recognition up to the latest scope, we update Seg_rp to the candidate segmentation point before a fixed number (N_seg) of the latest recognized words. The segmentation process is then divided into two steps. Firstly, we apply segmentation with the SVM or BLSTM classifier from Seg_rp. Secondly, we fix as SP those UP off-strokes before the N_seg_fix latest recognized words that match the word segmentation points retrieved from the text recognition result. Both N_seg and N_seg_fix are determined experimentally.

3.3. Determination of scope

To determine the scope, we use the result of the segmentation process. The segmentations of the strokes before and after the method has received the new strokes are compared with each other. If there is an off-stroke whose classification has changed (we call it a classification-changed off-stroke), we consider the strokes before the earliest classification-changed off-stroke to be stably classified, while the strokes after it are not. Otherwise, the off-stroke just before the newly added strokes is taken as the earliest classification-changed off-stroke. This earliest classification-changed off-stroke may occur within a candidate word block or between two candidate word blocks. We define the scope as the sequence of strokes starting from the first stroke of the candidate word block containing, or just preceding, the earliest classification-changed off-stroke and ending at the last stroke.
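The scope determination described above can be sketched as follows under simplified assumptions (the off-stroke between stroke i and stroke i+1 is indexed by i, and candidate word blocks are given by the index of their first stroke); this is an illustration, not the authors' implementation.

# Minimal sketch (not the paper's code) of scope determination: compare off-stroke
# classifications before and after new strokes arrive, find the earliest
# classification-changed off-stroke, and start the scope at the first stroke of the
# candidate word block containing or just preceding it.
def determine_scope(old_labels, new_labels, block_starts, n_strokes):
    """
    old_labels / new_labels: per-off-stroke labels ('SP', 'NSP', 'UP') before and
        after receiving the new strokes (old_labels is the shorter list).
    block_starts: stroke indices at which candidate word blocks start, ascending.
    n_strokes: total number of strokes after adding the new ones.
    Returns (scope_start_stroke, scope_end_stroke).
    """
    changed = [i for i, (a, b) in enumerate(zip(old_labels, new_labels)) if a != b]
    if changed:
        earliest = changed[0]            # earliest classification-changed off-stroke
    else:
        earliest = len(old_labels) - 1   # off-stroke just before the newly added strokes
    # The scope starts at the first stroke of the block containing or just preceding
    # that off-stroke and always extends to the last stroke.
    scope_start = max((s for s in block_starts if s <= earliest), default=0)
    return scope_start, n_strokes - 1

# Example: the 4th off-stroke changed from 'UP' to 'SP' after new strokes arrived.
old = ['SP', 'NSP', 'NSP', 'UP', 'NSP']
new = ['SP', 'NSP', 'NSP', 'SP', 'NSP', 'NSP', 'UP']
print(determine_scope(old, new, block_starts=[0, 2, 5], n_strokes=8))   # -> (2, 7)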

4. Experiments

4.1. Metrics of segmentation evaluation

First, over-segmentation is applied, and then segmentation is determined along with word recognition and best-path search. We evaluate over-segmentation as well as segmentation. The over-segmentation process classifies each off-stroke as an SP, NSP or UP off-stroke; a UP off-stroke may then be further classified as SP or NSP in the text recognition process. The performance of over-segmentation is evaluated by the following measures.

Precision is the ratio of correctly classified SP off-strokes to detected SP off-strokes:

\text{Precision} = \frac{\#\,\text{correctly classified SP off-strokes}}{\#\,\text{detected SP off-strokes}} \quad (3)

Recall is the ratio of correctly classified SP off-strokes plus detected UP off-strokes to true SP off-strokes:

\text{Recall} = \frac{\#\,\text{correctly classified SP off-strokes} + \#\,\text{detected UP off-strokes}}{\#\,\text{true SP off-strokes}} \quad (4)

The inclusion of detected UPs in the numerator is typical for over-segmentation, since UP off-strokes retain the possibility of being classified correctly later.

F-measure is calculated from precision and recall:

F\text{-measure} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (5)

The detection rate is the ratio of detected SP off-strokes to detected SP and detected UP off-strokes:

\text{Detection} = \frac{\#\,\text{detected SP off-strokes}}{\#\,\text{detected SP off-strokes} + \#\,\text{detected UP off-strokes}} \quad (6)

Although UP off-strokes retain the possibility of being classified correctly, and thus increase recall as noted above, they decrease the recognition speed of the system. Therefore, along with the F-measure, we also evaluate the performance of over-segmentation by the detection rate.

4.2. Experiment setup

We employ the IAM online database (IAM-OnDB) [13], which consists of pen trajectories collected from 221 different writers using an electronic whiteboard. We follow the handwritten text recognition task IAM-OnDB-t1, in which the database is divided into a training set, two validation sets and a test set containing 5,364, 1,438, 1,518 and 3,859 written lines, respectively. We use a trigram table extracted from the LOB text corpus for language modeling. For segmentation, we train both the SVM classifiers and the BLSTM classifier on the segmented words of IAM-OnDB. The SVM classifiers use the RBF kernel with a cost factor of 0.1 for high precision of SP determination and 7.5 for high precision of NSP determination. The BLSTM classifier uses a bidirectional layer of 20 LSTM blocks with one cell in each block. After training the BLSTM classifier, we set TH1 = 0.1 and TH2 = 0.9 based on the distribution of output scores. We compare the performance of two semi-incremental recognition systems: the first uses the SVM classifiers for segmentation (the SVM system) and the second uses the BLSTM classifier for segmentation (the BLSTM system).

4.3. Over-segmentation

The BLSTM system outperforms the SVM system in both recall and detection rate. The recall rate improves by 1.5 points, while the detection rate improves significantly from 48% to 86%, as shown in Table 3.

Table 3. Over-segmentation results: Recall, Precision, F-measure and Detection for the SVM and LSTM systems.

4.4. Recognition rate

The recognition rates of the SVM system and the BLSTM system for varying N_seg are shown in Fig. 3 and Fig. 4. The result at each N_seg includes the maximum, minimum and average recognition rate when running with the number of strokes per incremental recognition step (Ns) from 1 to 10. The BLSTM system improves the recognition rate by about 3 points. With its high detection rate, the BLSTM system produces far fewer UPs than the SVM system; it therefore reduces the ambiguity in the best-path search and improves the recognition rate.

Figure 3. Recognition rate of the SVM system.
Figure 4. Recognition rate of the BLSTM system.

4.5. Waiting time

We measure the average waiting time of the two systems for varying Ns. The BLSTM system reduces the average waiting time by 36.65% to 62.43% relative to the SVM system, as shown in Fig. 5.
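For concreteness, the over-segmentation measures of Section 4.1 can be computed as in the following sketch, which assumes per-off-stroke predicted labels (SP, NSP, UP) and ground-truth labels (SP, NSP) and follows equations (3) to (6) directly; it is an illustration, not the evaluation code used for the reported results.

# Minimal sketch (not the paper's code) of the over-segmentation measures of Section 4.1.
def over_segmentation_measures(predicted, truth):
    det_sp = sum(p == 'SP' for p in predicted)                       # detected SP off-strokes
    det_up = sum(p == 'UP' for p in predicted)                       # detected UP off-strokes
    true_sp = sum(t == 'SP' for t in truth)                          # true SP off-strokes
    correct_sp = sum(p == 'SP' and t == 'SP' for p, t in zip(predicted, truth))

    precision = correct_sp / det_sp if det_sp else 0.0               # eq. (3)
    recall = (correct_sp + det_up) / true_sp if true_sp else 0.0     # eq. (4)
    f_measure = (2 * precision * recall / (precision + recall)       # eq. (5)
                 if precision + recall else 0.0)
    detection = det_sp / (det_sp + det_up) if det_sp + det_up else 0.0   # eq. (6)
    return precision, recall, f_measure, detection

# Example with a short line of six off-strokes:
pred = ['SP', 'UP', 'NSP', 'SP', 'NSP', 'UP']
truth = ['SP', 'SP', 'NSP', 'NSP', 'NSP', 'SP']
print(over_segmentation_measures(pred, truth))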

Figure 5. Average waiting time of the two systems.

4.6. CPU time

We also compare the two systems in CPU time per stroke. Fig. 6 shows the results of the BLSTM and SVM systems for varying Ns. The BLSTM system also reduces CPU time by about 50% compared with the SVM system.

Figure 6. CPU time of the two systems.

5. Discussion

The detection rate gives the ratio of segmentation points over all potential segmentation points. For the two systems, at the same recall rate, a higher detection rate reduces the number of undecided points. Since each undecided point doubles the number of candidate word patterns that need to be recognized, a high detection rate reduces processing time and waiting time. Each undecided point also doubles the number of search paths. Therefore, a higher detection rate reduces the number of search paths, lowers ambiguity, and improves the recognition rate.

6. Conclusion

In this paper, we proposed a system using a BLSTM recurrent neural network for segmentation of online handwritten English text. Through a large improvement in the detection rate of over-segmentation, BLSTM reduces the number of undecided points, each of which doubles the number of candidate character patterns. The reduction of candidate character patterns is vital since character recognition is applied to each candidate. The BLSTM system reduces waiting time by up to 62.34% and CPU time by around 50% compared with the SVM system. Moreover, reducing the undecided points also reduces the number of search paths and lowers the ambiguity of recognition, so that BLSTM improves the recognition rate of the system by 3 points from 71.7%.

Acknowledgement

This work is being supported by the Grant-in-Aid for Scientific Research (B).

References

[1] C. L. Liu, S. Jaeger, and M. Nakagawa, "Online Recognition of Chinese Characters: The State-of-the-Art," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 2, February 2004.
[2] R. Plamondon and S. N. Srihari, "On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, January 2000.
[3] B. Zhu, X. D. Zhou, C. L. Liu, and M. Nakagawa, "A robust model for on-line handwritten Japanese text recognition," International Journal on Document Analysis and Recognition, vol. 13, no. 2, June 2010.
[4] B. Zhu and M. Nakagawa, "Building a compact online MRF recognizer for large character set by structured dictionary representation and vector quantization technique," Pattern Recognition, 47(3), 2014.
[5] B. Zhu and M. Nakagawa, "Segmentation of On-Line Freely Written Japanese Text Using SVM for Improving Text Recognition," IEICE Transactions on Information and Systems, vol. E91-D, no. 1 (2010).
[6] H. Tanaka, "Implementation of real-time box-free online Japanese handwriting recognition system," Japanese Patent, issued March 13, 2002 (in Japanese).
[7] D. H. Wang, C. L. Liu, and X. D. Zhou, "An approach for real-time recognition of online Chinese handwritten sentences," Pattern Recognition, vol. 45.
[8] C. T. Nguyen, B. Zhu, and M. Nakagawa, "A semi-incremental recognition method for on-line handwritten Japanese text," Proc. 12th Int. Conf. on Document Analysis and Recognition, Washington, D.C., USA, Aug. 2013.
[9] C. T. Nguyen, B. Zhu, and M. Nakagawa, "A semi-incremental recognition method for on-line handwritten English text," Proc. 14th Int. Conf. on Frontiers in Handwriting Recognition, Crete, Greece, Sept. 2014.
[10] K. Morik, P. Brockhausen, and T. Joachims, "Combining statistical learning with a knowledge-based approach - a case study in intensive care monitoring," Proc. 16th Int. Conf. on Machine Learning (ICML-99), 1999.
[11] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, 9(8), 1997.
[12] A. Graves and J. Schmidhuber, "Framewise phoneme classification with bidirectional LSTM and other neural network architectures," Neural Networks, 18(5-6), July 2005.
[13] M. Liwicki and H. Bunke, "IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard," Proc. 8th Int. Conf. on Document Analysis and Recognition, 2005.
