Training a Neural Network to Answer 8th Grade Science Questions
Steven Hewitt, An Ju, Katherine Stasaski


Problem Statement and Background

Given a collection of 8th grade science questions, possible answer choices, and a set of educational resources, can a neural network correctly identify the correct answer choice? For example, given the question "Which of the following does not allow sound to travel through?" with possible answer choices of solid, liquid, gas, and vacuum, a model that successfully completes this task would output "vacuum" as the correct answer.

This task was posed as a previous Kaggle challenge and has broad implications for natural language processing (NLP) research. Question answering (QA) is a major area of NLP research and is a crucial part of shaping how humans interact with computer systems. Additionally, the ability to weigh different answer choices against each other poses an interesting question about how a network synthesizes information and makes decisions. This also has a larger application in education: if we can isolate the pieces of information that helped our model answer a question, we could potentially help a student who is struggling in an area find a resource that would help them answer the question and better learn the material.

The team that won the initial Kaggle challenge achieved an accuracy of 59.3% on the test question set. Second place achieved 58.3%, and third place achieved 48.1%. These accuracies were calculated on a slightly different test set than we use in our project, but the results should be comparable.

State-of-the-art approaches to this task include both neural and non-neural methods. One team used textbooks and outside sources to hand-craft features that are scored by a scoring function [1]. The approach starts by examining each question and forming a hypothesis for each answer choice, combining the choice with the base of the question. From there, hand-crafted features (including Tf-idf, BM25, and entailment) comparing each hypothesis against textbooks and other knowledge sources were used in an SVM with a scoring function to select the most probable hypothesis. This approach obtained 47.8% accuracy. Another state-of-the-art approach uses a sequence-to-sequence model to rank each answer choice on relevancy [2]. The model begins by creating a full-sentence hypothesis for each answer choice. It then creates sentence embeddings from both open-source textbook sources and each hypothesis and compares them; the network used to do this is a recurrent neural network with attention. A scoring mechanism scores each hypothesis and selects a final answer. This approach obtained 44.1% accuracy. The winner of the Kaggle challenge used a combination of 28 hand-crafted features and a regression model to output probabilities for each answer choice [3]. These features were created by searching through a variety of external documents to determine the relevancy of answer choices. An SVM was then trained to predict the correct answer choice. Their final accuracy was 59.3%.

Approach

We gathered data from two sources: a collection of 8th grade science questions from AI2 and open-source textbooks from ck12.org. We also augmented the science question data with a textual entailment dataset, the Stanford Natural Language Inference (SNLI) corpus [4].

The AI2 8th Grade Science Dataset includes 8th grade science questions, already divided into training, development, and test sets. To preprocess the data, we remove all columns of the AI2 question CSV except for the question, the four answer choices, and the correct answer. The evidence that we pass to our model to answer the questions comes from an open-source collection of science textbooks from ck12.org. Textbooks provide a hierarchical representation of material that may be advantageous to our model, as information is structured and topics are clearly defined by chapter and section titles. We preprocessed these resources by removing hyperlinks and author information, which appear on every page of the textbooks and skew our retrieval method.

Tools

TensorFlow was used to implement all of the models reported on. Compared with other machine learning libraries such as Caffe or Theano, TensorFlow provides APIs at various levels, allowing us to improve our baseline models beyond what is possible with off-the-shelf solutions. Additionally, TensorFlow provides easy GPU support and easy-to-use visualization tools. For retrieving relevant evidence from the textbook corpus, we used PyLucene, an indexing and querying search engine library whose scoring is based on Tf-idf. Additionally, a pretrained GloVe model was used for word vectors [5].
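As a minimal sketch of how pretrained GloVe vectors can be read into memory and used to embed a sentence, consider the following; the file name and embedding dimension are placeholder assumptions, not the exact configuration we used.

```python
import numpy as np

def load_glove(path):
    """Load pretrained GloVe vectors into a {word: vector} dict.

    Each line of a GloVe text file is a token followed by its vector
    components, separated by spaces.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Hypothetical usage: embed a sentence as a sequence of vectors,
# falling back to a zero vector for out-of-vocabulary tokens.
glove = load_glove("glove.6B.100d.txt")  # assumed file name
dim = len(next(iter(glove.values())))
sentence = "sound cannot travel in vacuum".split()
embedded = np.stack([glove.get(w, np.zeros(dim, dtype=np.float32))
                     for w in sentence])
```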

Models

We have two main approaches to the question answering problem: entailment models and general QA models. We implemented a total of four entailment models (including our baseline) and two general question-answering models.

Entailment Models

The structure of our entailment model pipeline is shown in Figure 1, as introduced by Baudis et al. [2].

Figure 1: Entailment model structure.

The model is composed of four modules:

1. Form Hypothesis (FH): Given a question and an answer choice, the FH module generates a hypothesis. For example, given the question "Which of the following does not allow sound to travel through?" and the answer choice "vacuum", FH outputs the hypothesis "vacuum does not allow sound to travel through." The FH module is implemented mainly with regular expression substitution.

2. Find Evidence (FE): Given a hypothesis, the FE module searches the textbooks for supporting evidence, i.e., sentences that support or refute the hypothesis. For example, given the hypothesis "vacuum does not allow sound to travel through," the FE module returns several sentences from the textbooks such as "sound cannot travel in vacuum," "sound must travel through air," and "vacuum means the absence of air in an environment." The FE module also returns a confidence score for each piece of evidence. The FE module is implemented with PyLucene.

3. Entailment (E): The entailment model computes the entailment relation between two input vectorized strings. If sentence A entails sentence B, then given the information in sentence A, the information presented in sentence B logically follows. Conversely, if sentence A contradicts sentence B, then given sentence A, sentence B cannot logically follow. For each sentence pair, the output is a probability distribution over three categories: contradiction, entailment, and neutral. This distribution is further converted into a single score, where -1 means contradiction, 0 means neutral, and 1 means entailment. Each of the four entailment models was tested as this component of the pipeline.

4. Evidence Weighing (EW): Given the confidence $c_i$ and the entailment score $r_i$ for each piece of evidence $i$, EW computes a single score for the answer (a sketch of this computation appears after Figure 2):

$y = \sum_i c_i r_i$

The final answer is chosen as the answer choice with the maximum $y$ value.

Baseline Model

The Baseline model, shown in Figure 2, is a bidirectional RNN that takes as input the two sentences concatenated, applies dropout at the word level, and passes the sentence embedding to a fully connected layer with dropout and regularization to determine entailment.

Figure 2: Baseline model architecture.
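To make the FH and EW modules concrete, here is a minimal sketch of both steps. The single substitution rule and the evidence numbers are illustrative assumptions, not the project's actual rules or data.

```python
import re

def form_hypothesis(question, choice):
    """FH sketch: splice an answer choice into the question stem.

    Real hypothesis formation used a set of regular-expression
    substitutions; this single rule is an illustrative assumption.
    """
    if "which of the following" in question.lower():
        stem = re.sub(r"which of the following", choice, question,
                      flags=re.IGNORECASE)
        return stem.rstrip("?").strip() + "."
    return question.rstrip("?").strip() + " " + choice + "."

def weigh_evidence(evidence):
    """EW: combine (confidence, entailment score) pairs as y = sum_i c_i * r_i."""
    return sum(c * r for c, r in evidence)

# Hypothetical per-choice evidence: (retrieval confidence, entailment score).
scores = {
    "vacuum": weigh_evidence([(0.9, 1.0), (0.4, 0.2)]),
    "liquid": weigh_evidence([(0.7, -0.8), (0.3, 0.0)]),
}
answer = max(scores, key=scores.get)  # choice with maximum y
```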

Siamese Model

The Siamese model, shown in Figure 3, is a similar approach to the Baseline model. However, the two sentence embeddings are generated separately by a shared encoder, and their difference vector is fed as input to a fully connected layer to determine entailment [6]. We hypothesized that by generating sentence embeddings separately and having the fully connected layer compare the two embeddings, the entailment model would be more robust and outperform the baseline model.

Figure 3: Siamese model architecture.
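Below is a minimal sketch of the Siamese idea, written against the modern tf.keras API rather than our original TensorFlow code; the vocabulary size, embedding dimension, and layer widths are placeholder assumptions.

```python
import tensorflow as tf

def build_siamese(vocab_size=20000, dim=100, hidden=128):
    # Shared encoder: both sentences pass through the same weights.
    encoder = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, dim, mask_zero=True),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden)),
    ])
    premise = tf.keras.Input(shape=(None,), dtype="int32")
    hypothesis = tf.keras.Input(shape=(None,), dtype="int32")
    a, b = encoder(premise), encoder(hypothesis)
    # Difference vector of the two sentence embeddings feeds the
    # fully connected classifier.
    diff = tf.keras.layers.Subtract()([a, b])
    x = tf.keras.layers.Dropout(0.5)(diff)
    out = tf.keras.layers.Dense(3, activation="softmax")(x)  # contradiction / neutral / entailment
    return tf.keras.Model([premise, hypothesis], out)
```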

MFF Model

We wanted to compare our sequence-based models to a non-sequence model, based on Parikh et al. [7]. We implemented a multi feed-forward (MFF) model, shown in Figure 4. This model creates an attention map from the two sentences and uses several feed-forward neural networks to synthesize the aligned information and determine entailment. As shown in Figure 4, each of the three modules contains a nonlinear function, implemented as a feed-forward network with a ReLU layer. The hidden layers of these networks are all the same size. We tested two settings: MFF-32, which has hidden layers of size 32, and MFF-64, which has hidden layers of size 64.

Figure 4: MFF model architecture.

Convolutional Neural Network (CNN) Model

As a variant of the MFF model, we propose using a convolutional neural network in place of the feed-forward networks to synthesize attention information, as shown in Figure 5. We use multiple attention maps so that we can capture different types of word correlation. Each attention map is treated as an input channel for the CNN, which can capture both local and global information in the attention maps. A fully connected layer with softmax is used at the end of the network to output a probability distribution over the three categories. Limited by computing resources, we use only one CNN layer with a single 1x1 kernel, which combines the channels into a single matrix.

Figure 5: CNN model architecture.
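A minimal sketch of the channel-combining idea follows, again in tf.keras; the batch size, sentence lengths, and number of attention maps are assumptions, and the maps themselves are random placeholders.

```python
import tensorflow as tf

# Stack of hypothetical attention maps: a batch of 2 sentence pairs,
# 12x12 word-by-word attention, and 4 maps treated as input channels.
attention_maps = tf.random.uniform((2, 12, 12, 4))

# One conv layer with a single 1x1 kernel collapses the channels
# into one matrix: a learned weighted sum of the attention maps.
combine = tf.keras.layers.Conv2D(filters=1, kernel_size=1, activation="relu")
combined = combine(attention_maps)  # shape (2, 12, 12, 1)

# Flatten and classify into contradiction / neutral / entailment.
logits = tf.keras.layers.Dense(3, activation="softmax")(
    tf.keras.layers.Flatten()(combined))
```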

General Question Answering Models

In addition to comparing multiple entailment models, we wanted to compare our results to an entirely different approach to question answering, one that searches through a paragraph of information for the answer to a question instead of comparing two sentences to determine entailment.

Dynamic Memory Network

The Dynamic Memory Network (DMN) model was introduced by Kumar et al. in 2016 [8]. It is composed of five parts: a semantic memory module, an input module, a question module, an episodic memory module, and an answer module, as shown in Figure 6. The episodic memory module is the core of the DMN. It works as a soft attention over facts (evidence from PyLucene in our case) and is designed to emulate how human attention shifts over time when answering a question. Attention is controlled by a recurrent neural network, which is initialized with the question embedding and takes as input the information synthesized from the facts under the current attention. As shown in the paper, this module can also provide a compelling visualization of the question answering process: attention shifts over time much as a human's would while answering the question, which can be interpreted as the model performing reasoning over the facts.

Figure 6: Architecture of the DMN model [8].

End-to-End Memory Network

To compare the DMN to another type of QA system, we also tested an end-to-end (E2E) memory network, as introduced by Bordes et al. in 2015 [9]. The E2E memory system relies on a memory structure and a number of hops over the input passage to reason about the question and produce an answer. In the original model implementation, the data was labeled with which passage lines were necessary to answer the question. However, we did not have the time or bandwidth to hand-label all of the test and training passages with relevant line numbers. Therefore, we ran this model with two different data settings: one which listed all lines in the passage as necessary to attend to, and one which listed none of them. We wanted to compare the two settings and see whether insights could be gained by running the model regardless.

Figure 7: (a) A single layer of the E2E network. (b) The combination of three layers that comprise the final model [9].
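To sketch what a single memory hop looks like, here is a minimal NumPy version of the attend-then-read-out step; the embedding dimension and the random sentence/question vectors are assumptions standing in for learned embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_sentences = 50, 6

# Hypothetical embeddings: one input memory (m) and one output
# memory (c) vector per evidence sentence, plus a question vector u.
m = rng.normal(size=(n_sentences, d))   # memories used for addressing
c = rng.normal(size=(n_sentences, d))   # memories used for readout
u = rng.normal(size=d)                  # question embedding

# One hop: attend over the sentences, then read out a weighted sum
# and add it to the question state. Stacking hops repeats this step.
p = softmax(m @ u)   # attention over evidence sentences
o = p @ c            # readout
u_next = u + o       # state passed to the next hop / answer layer
```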

Results

As a preliminary step, we evaluated our entailment models on the SNLI dataset, a collection of sentence pairs catered specifically to entailment [4]. Each pair of sentences is accompanied by a label (entailment, contradiction, or neutral) as well as a confidence score. This was done to see whether our models could successfully complete an entailment task independent of the science question pipeline. Results are summarized in Figure 8.

Figure 8: Results of each model on SNLI entailment data.

After confirming that our models could complete entailment tasks, we turned to testing our models on the AI2 science question data. Results for these experiments can be seen in Figure 9.

Figure 9: Results of each model on AI2 science question data. (Note: "E2E w/ lines" has all lines included in attention, while "E2E no lines" has none.)

The non-neural Tf-idf baseline was computed by choosing the answer choice whose hypothesis had the highest Tf-idf similarity to a sentence from the textbook.
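The Tf-idf baseline logic can be sketched in a few lines. Our retrieval actually used PyLucene; this sketch swaps in scikit-learn's TfidfVectorizer to show the scoring, and the textbook sentences and hypotheses are toy assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical textbook sentences and per-choice hypotheses.
textbook = [
    "sound cannot travel in vacuum",
    "sound must travel through air",
    "a solid can transmit sound waves",
]
hypotheses = {
    "solid": "solid does not allow sound to travel through",
    "vacuum": "vacuum does not allow sound to travel through",
}

vectorizer = TfidfVectorizer().fit(textbook)
evidence = vectorizer.transform(textbook)

# Score each choice by its best Tf-idf cosine similarity to any
# textbook sentence, then answer with the highest-scoring choice.
scores = {
    choice: cosine_similarity(vectorizer.transform([h]), evidence).max()
    for choice, h in hypotheses.items()
}
answer = max(scores, key=scores.get)
```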

Lessons Learned

Running experiments and analyzing the performance of the six models we ultimately implemented yielded sub-par results. In fact, most of our neural models were not able to surpass the non-neural Tf-idf baseline of 33%. We therefore explored why our models were underperforming and hypothesized several reasons.

Question Difficulties

One hypothesis for why our models were not performing well on the AI2 data was that the questions in the training and test sets were varied enough that no single network would be optimal for every question. We examined the questions by hand and evaluated them in two ways: by question length and by question type. Each was compared using one representative sequence-based model (Baseline) and one non-sequence model (MFF).

Question length refers to whether or not a question includes one or more informative sentences that are necessary to answer it. For example, the question "The metal lid on a glass jar is hard to open so it is held under warm running water. What causes the jar to open easily after it was held under the water?" requires information from its first sentence to choose the correct answer choice. Results can be seen in Figure 10. Our hypothesis was that both networks would perform better on shorter questions, as these are typically more conceptual and require less memory ability. This held true for the MFF model, but an interesting result was the Baseline model's comparative advantage with long questions. Even though MFF still outperforms it on long questions, the Baseline model does better on longer questions than on shorter questions. Thus, it may be beneficial to split questions and train networks specifically to answer longer or shorter questions.

Question type refers to the category of a question, determined by the wh-word and other key words it contains; results can be seen in Figure 11. It is evident that each model has certain strengths and certain shortcomings. Because of this, it may be worth exploring a multi-network approach which trains multiple networks, each on a single type of question.

Figure 10: The accuracy of the two representative models on questions of different lengths.

Figure 11: The accuracy of the two representative models on questions of different types.
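As an illustration of this kind of shallow categorization, here is a sketch of a wh-word-based question-type tagger; the category list is an assumption, since the exact taxonomy behind Figure 11 is not reproduced here.

```python
import re

# Illustrative wh-word categories; the report's exact taxonomy
# (used for Figure 11) is assumed, not reproduced.
WH_PATTERNS = [
    ("which", r"\bwhich\b"),
    ("what",  r"\bwhat\b"),
    ("why",   r"\bwhy\b"),
    ("how",   r"\bhow\b"),
    ("when",  r"\bwhen\b"),
]

def question_type(question):
    """Tag a question with the first matching wh-word category."""
    q = question.lower()
    for label, pattern in WH_PATTERNS:
        if re.search(pattern, q):
            return label
    return "other"

assert question_type("Which of the following does not allow "
                     "sound to travel through?") == "which"
```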

Model Shortcomings

Entailment Models

The significant deterioration in performance when switching from entailment data to science question data leads us to believe that entailment might not have been the right approach to this problem. One obvious shortcoming of an entailment model is that it is designed to compare two sentences to each other directly. This means that any information we want the network to use to choose a given answer choice has to be perfectly captured in one sentence. This, however, is not the case for many questions in the dataset: they either require information from more than one sentence or require a level of complex reasoning that an entailment model does not capture.

General QA Models

The memory-based QA models were implemented as a first step toward testing the hypothesis that entailment models were not the optimal way to answer 8th grade science questions. Memory-based methods work best when the specific lines in the passage that the network should use to answer a question are explicitly stated. However, our DMN implementation assumes that all lines in the evidence are equally relevant (which is not necessarily the case), while the E2E approach tried both the setting in which all lines are equally relevant and the setting which gives no line numbers to the network. Giving the model no information about line numbers slightly outperformed assuming all lines were relevant, which speaks to the importance of backpropagating through the locations in the passage that the network should attend to.

Neural Models

It has been noted in several papers, blogs, and talks that the AI2 dataset is difficult for neural models [10][11][12]. The fact that a simple non-neural baseline outperforms almost all of our neural models seems to confirm this argument. As discussed above, textual-entailment-based models have the intrinsic shortcoming of not being able to capture correlations among pieces of evidence; the QA models were studied to address this issue. However, it is still questionable whether these models can perform human-like reasoning. According to our observations, answering certain questions in the AI2 dataset requires not only word-based reasoning but also pattern-based reasoning. Word-based reasoning only requires the model to capture information through word matching, such as the tasks in the bAbI dataset on which the DMN and E2E memory networks perform incredibly well [9]. Pattern-based reasoning, however, requires the model to capture information beyond surface forms: syntax, semantics, definitions, background knowledge, and correlations between agents are all important for answering such questions. Understanding any one of these is a difficult task with a rich literature of its own, to say nothing of combining all the pieces and reasoning over them.

Nonetheless, this problem is still worth studying with neural models. The argument above assumes that neural models answer questions the same way humans do, which might not be the case. It is possible that a neural model, with enough training data and external resources, could answer these questions with high accuracy in a purely look-up way. If such a model exists, it could still have a huge impact on areas such as search and education.

Next Steps

We are interested in further exploring the findings of this report in several directions. To obtain a fully comparable QA model, we wish to hand-label the attention targets of the training and test data and create a more robust memory QA model. Additionally, we aim to study the differences between the entailment and QA models in terms of performance on different types of questions as well as attention patterns, such as whether the attentions for the same question are similar to each other. We also aim to compare non-neural models with neural models, with the goal of understanding which types of questions non-neural models answer better, as well as which patterns captured by non-neural models are missed by neural models. Finally, we would like to explore further ways of improving our neural models as well as the non-neural parts of the entailment pipeline that we have not yet considered. All in all, there are a multitude of ways to further explore this topic area, and we wish to continue analyzing neural approaches to science question answering.

Team Contributions and Workload Percentage

An Ju (1/3): Wrote the basic training structure in TensorFlow. Integrated and tested the baseline model. Wrote scripts to speed up training and to test model modules. Wrote the Siamese, MFF, and CNN models in TensorFlow. Helped write the DMN model in TensorFlow. Hyperparameter tuning.

Steven Hewitt (1/3): Created and improved GloVe-based word vectorization, improved hypothesis generation, and wrote data gathering scripts. Helped write the DMN model. Hyperparameter tuning.

Katherine Stasaski (1/3): Created the initial evidence retrieval method from textbooks, improved evidence gathering by switching to PyLucene, created the first version of the entailment model in TensorFlow (later improved by An), preprocessed data, created the naive hypothesis generator (later improved by Steven), found the MFF paper, and created the end-to-end question answering model in TensorFlow. Hyperparameter tuning.

References

[1] Sachan, M., Dubey, A., & Xing, E. P. (n.d.). Science Question Answering using Instructional Materials.
[2] Baudis, P., Stanko, S., & Sedivy, J. Joint Learning of Sentence Embeddings for Relevance and Entailment.
[3]
[4] Bowman, S. R., Angeli, G., Potts, C., & Manning, C. D. (2015). A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[5] Pennington, J., Socher, R., & Manning, C. D. GloVe: Global Vectors for Word Representation.
[6] Mueller, J., & Thyagarajan, A. Siamese Recurrent Architectures for Learning Sentence Similarity. AAAI.
[7] Parikh, A. P., Täckström, O., Das, D., & Uszkoreit, J. A Decomposable Attention Model for Natural Language Inference. arXiv.
[8] Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., Zhong, V., Paulus, R., & Socher, R. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. NIPS.
[9] Bordes, A., Usunier, N., Chopra, S., & Weston, J. Large-scale simple question answering with memory networks. arXiv preprint.
[10] May, R. "How We Approached The Allen A.I. Challenge on Kaggle." Blog post, 11 Jan. Accessed 14 Dec.
[11] Vorontsov, K. "DeepHack.Q&A: Regularization of Topic Models for Question Answering." YouTube, 1 Feb. Accessed 14 Dec.
[12] "Implementing Dynamic Memory Networks." YerevaNN blog, 5 Feb. Accessed 14 Dec.
