Similarity-Weighted Association Rules for a Name Recommender System
Benjamin Letham
Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA
bletham@mit.edu

Abstract. Association rules are a simple yet powerful tool for making item-based recommendations. As part of the ECML PKDD 2013 Discovery Challenge, we use association rules to form a name recommender system. We introduce a new measure of association rule confidence that incorporates user similarities, and show that this increases prediction performance. With no special feature engineering and no separate treatment of special cases, we produce one of the top-performing recommender systems in the discovery challenge.

Keywords: association rule, collaborative filtering, recommender system, ranking

1 Introduction

Association rules are a classic tool for making item-based recommendations. An association rule $a \to b$ states that the presence of item(set) $a$ in an observation implies that item $b$ is also in the observation. Association rules were originally developed for retail transaction databases, but the same idea applies to any setting where the observations are sets of items. In this paper, as part of the ECML PKDD 2013 Discovery Challenge, we consider a setting where each observation is the set of names in which a user has expressed interest. We then form association rules $a \to b$, meaning that interest in name $a$ (or, in general, set of names $a$) implies interest in name $b$. The strength with which $a$ implies $b$ is called the confidence of the rule; in Section 2.2 we explore different measures of confidence.

Association rules provide an excellent basis for a recommender system because they are scalable and interpretable. The scalability of association rule algorithms has been well studied, and is often linear in the number of items [1]. Using rules to make recommendations gives a natural interpretability: we recommend name $b$ because the user has expressed interest in name $a$.
Interpretability is an important quality of predictive models in many contexts, and is especially important in recommender systems, where it has been shown that providing the user an explanation for the recommendation increases acceptance and performance [2, 3].
One of the most successful tools for recommender systems, particularly at large scale, is collaborative filtering [4, 5]. Collaborative filtering refers to a large class of methods; here we focus on user-based and item-based collaborative filtering [6]. In user-based collaborative filtering, recommendations are made by finding the most similar users in the database and recommending their preferred items. In item-based collaborative filtering, similarity is measured between items, and the items most similar to those already selected by the user are recommended. Like association rules, collaborative filtering algorithms generally have excellent scalability.

Our main contribution is to use ideas from collaborative filtering to create a new measure of association rule confidence, which we call similarity-weighted adjusted confidence. We maintain the excellent scalability and interpretability of collaborative filtering and association rules, yet see a significant increase in performance compared to either approach alone. Our method was developed in the context of creating a name recommender system for the ECML PKDD 2013 Discovery Challenge, and so we compare the similarity-weighted adjusted confidence to other collaborative filtering and association rule-based approaches on the Nameling dataset released for the challenge.

2 Similarity-Weighted Association Rule Confidence

We begin by introducing the notation used throughout the rest of the paper. We then discuss measures of confidence, introduce our similarity-weighted adjusted confidence, and discuss strategies for combining association rules into a recommender system.

2.1 Notation

We consider a database with $m$ observations $x_1, \ldots, x_m$ and a collection of $n$ items $Z = \{z_1, \ldots, z_n\}$. For instance, the observations may correspond to $m$ visitors to a name recommendation site, with $Z$ the set of valid names. Each observation is a set of items: $x_i \subseteq Z$ for all $i$.
We denote the number of items in $x_i$ as $|x_i|$. We consider rules $a \to b$ where the left-hand side $a$ is an itemset ($a \subseteq Z$) and the right-hand side is a single item ($b \in Z$). Note that $a$ may contain only a single item. We denote by $A$ the collection of itemsets that we are willing to consider as left-hand sides: $a \in A$. One option for $A$ is the collection of all itemsets, $A = 2^Z$. If $Z$ is very large this can be computationally prohibitive, and some restriction may be necessary. In our experiments in Section 3 we took $A = Z$, that is, all itemsets of size 1.

2.2 Confidence and Similarity-Weighted Confidence

The standard definition of the confidence of the rule $a \to b$ is exactly the empirical conditional probability of $b$ given $a$:

\[
\mathrm{Conf}(a \to b) = \frac{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i \text{ and } b \in x_i]}{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i]}, \qquad (1)
\]
where $\mathbf{1}[\text{condition}]$ denotes 1 if the condition holds, and 0 otherwise. This measure of confidence corresponds to the maximum likelihood estimate of a specific probability model, in which the observations are i.i.d. draws from a Bernoulli distribution that determines whether or not $b$ is present. Because of the i.i.d. assumption, all observations in the database are weighted equally when estimating the likelihood that $a$ implies $b$. In reality, preferences are often quite heterogeneous. If we are trying to determine whether a new user $x_l$ will select item $b$ given that he or she has previously selected itemset $a$, then the users more similar to user $x_l$ are likely more informative. This leads to the similarity-weighted confidence for user $x_l$:

\[
\mathrm{SimConf}(a \to b \mid x_l) = \frac{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i \text{ and } b \in x_i]\, \mathrm{sim}(x_l, x_i)}{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i]\, \mathrm{sim}(x_l, x_i)}, \qquad (2)
\]

where $\mathrm{sim}(x_l, x_i)$ is a measure of the similarity between users $x_l$ and $x_i$. The similarity-weighted confidence reduces to the standard definition of confidence under the similarity measure $\mathrm{sim}(x_l, x_i) = 1$, as well as under

\[
\mathrm{sim}(x_l, x_i) = \begin{cases} 1, & \text{if } x_l \cap x_i \neq \emptyset, \\ 0, & \text{otherwise.} \end{cases}
\]

Giving more weight to more similar users is precisely the idea behind user-based collaborative filtering. A variety of similarity measures have been developed for use in collaborative filtering; one of the more popular is the cosine similarity, which we use here:

\[
\mathrm{sim}(x_l, x_i) = \frac{|x_l \cap x_i|}{\sqrt{|x_l|}\,\sqrt{|x_i|}}. \qquad (3)
\]

2.3 Bayesian Shrinkage and the Adjusted Confidence

In [7], we show how the usual definition of confidence can be improved by placing a beta prior distribution on the conditional probability and using the maximum a posteriori estimate. The resulting measure is called the adjusted confidence:

\[
\mathrm{Conf}_K(a \to b) = \frac{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i \text{ and } b \in x_i]}{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i] + K}, \qquad (4)
\]

where $K$ is a user-specified amount of adjustment, corresponding to a pseudocount in the usual Bayesian interpretation.
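As a concrete illustration (a minimal sketch with invented toy data, not the paper's implementation), the confidence (1), cosine similarity (3), and similarity-weighted confidence (2) can be computed directly from users represented as sets of items:

```python
import math

def confidence(a, b, observations):
    """Standard confidence (1): empirical probability of b given itemset a."""
    support_a = sum(1 for x in observations if a <= x)
    support_ab = sum(1 for x in observations if a <= x and b in x)
    return support_ab / support_a if support_a > 0 else 0.0

def cosine_sim(x, y):
    """Cosine similarity (3) between users represented as sets of items."""
    if not x or not y:
        return 0.0
    return len(x & y) / (math.sqrt(len(x)) * math.sqrt(len(y)))

def sim_confidence(a, b, target_user, observations):
    """Similarity-weighted confidence (2): each observation's vote is
    weighted by its similarity to the target user."""
    num = sum(cosine_sim(target_user, x) for x in observations
              if a <= x and b in x)
    den = sum(cosine_sim(target_user, x) for x in observations if a <= x)
    return num / den if den > 0 else 0.0

# Toy database: each user is the set of names they interacted with.
users = [
    {"Emma", "Olivia", "Ava"},
    {"Emma", "Olivia"},
    {"Emma", "Mia"},
    {"Olivia", "Ava"},
]
target = {"Emma", "Olivia"}

print(confidence({"Emma"}, "Olivia", users))                    # 0.6666666666666666
print(sim_confidence({"Emma"}, "Olivia", target, users) > 2/3)  # True
```

The weighted estimate exceeds the unweighted one here because the target user most resembles the two users who also chose Olivia.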
In particular, the adjusted confidence is equivalent to adding $K$ observations that contain $a$ but not $b$. This reduces the confidence of $a \to b$ by an amount inversely proportional to the support of $a$, allowing low-support, high-confidence rules to be used in the computation while giving more weight to those with higher support. In terms of the bias-variance tradeoff, the adjusted confidence increases performance by reducing the variance of the estimate for itemsets with low support. The Nameling dataset used here is quite sparse, so we apply the same adjustment to our similarity-weighted confidence, producing the similarity-weighted adjusted confidence:

\[
\mathrm{SimConf}_K(a \to b \mid x_l) = \frac{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i \text{ and } b \in x_i]\, \mathrm{sim}(x_l, x_i)}{\sum_{i=1}^{m} \mathbf{1}[a \subseteq x_i]\, \mathrm{sim}(x_l, x_i) + K}. \qquad (5)
\]

When $K = 0$, this reduces to the similarity-weighted confidence in (2).

2.4 Combining Association Rules to Form a Recommender System

The similarity-weighted adjusted confidence provides a powerful tool for estimating the likelihood that $b \in x_l$ given that $a \subseteq x_l$. In general there will be many itemsets $a$ satisfying $a \subseteq x_l$, so to use association rules as the basis for a recommender system we must also have a strategy for combining confidence measures across multiple left-hand sides. For each left-hand side $a \in A$ satisfying $a \subseteq x_l$, we can consider $\mathrm{SimConf}_K(a \to b \mid x_l)$ to be an estimate of the probability of item $b$ given itemset $x_l$. There is a large body of literature on combining probability estimates [8, 9], in which one of the most common approaches is simply to compute their sum. Thus we score each item $b$ as

\[
\mathrm{Score}(b \mid x_l) = \sum_{a \subseteq x_l,\; a \in A} \mathrm{SimConf}_K(a \to b \mid x_l). \qquad (6)
\]

A ranked list of recommendations is then obtained by sorting items by score.

A natural extension of this combination strategy is a weighted sum of confidence estimates. We consider this strategy in [10], where we use a supervised ranking framework and empirical risk minimization to choose the weights that give the best prediction performance. That approach requires choosing a smooth, preferably convex, loss function for the optimization problem. In [10] we use the exponential loss as a surrogate for area under the ROC curve (AUC); however, in the experiments that follow in Section 3 the evaluation metric was mean average precision.
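Continuing the sketch (toy data and names invented; singleton left-hand sides, $A = Z$, as in our experiments), the adjusted confidence (5) and score (6) combine into a small recommender:

```python
import math

def cosine_sim(x, y):
    """Cosine similarity between users represented as sets of items."""
    if not x or not y:
        return 0.0
    return len(x & y) / (math.sqrt(len(x)) * math.sqrt(len(y)))

def sim_conf_adjusted(a, b, target_user, observations, K):
    """Similarity-weighted adjusted confidence (5): the pseudocount K
    shrinks estimates for left-hand sides with low (weighted) support."""
    num = sum(cosine_sim(target_user, x) for x in observations
              if a <= x and b in x)
    den = sum(cosine_sim(target_user, x) for x in observations if a <= x)
    return num / (den + K)

def score(b, target_user, observations, K):
    """Score (6): sum adjusted confidences over all singleton left-hand
    sides contained in the target user's set (A = Z here)."""
    return sum(sim_conf_adjusted({a}, b, target_user, observations, K)
               for a in target_user)

def recommend(target_user, observations, candidates, K, top=5):
    """Rank candidate items not already chosen by the target user."""
    ranked = sorted((c for c in candidates if c not in target_user),
                    key=lambda b: score(b, target_user, observations, K),
                    reverse=True)
    return ranked[:top]

# Toy database of users as sets of names.
users = [
    {"Emma", "Olivia", "Ava"},
    {"Emma", "Olivia"},
    {"Emma", "Mia"},
    {"Olivia", "Ava"},
]
print(recommend({"Emma"}, users, {"Olivia", "Mia", "Ava", "Zoe"}, K=1))
```

Ranking is deterministic here because `sorted` orders the candidates by their (distinct) scores; Olivia comes first, since she co-occurs with Emma in the two most similar observations.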
Optimizing for AUC does not in general optimize mean average precision [11], and we found that the exponential loss was a poor surrogate for mean average precision on the Nameling dataset.

2.5 Collaborative Filtering Baselines

We use two simple collaborative filtering algorithms as baselines in our experimental results in Section 3. For user-based collaborative filtering, we use the cosine similarity between two users in (3) to compute

\[
\mathrm{Score}_{\mathrm{UCF}}(b \mid x_l) = \sum_{i=1}^{m} \mathbf{1}[b \in x_i]\, \mathrm{sim}(x_l, x_i). \qquad (7)
\]
For item-based collaborative filtering, for any item $b$ we define $\mathrm{Nbhd}(b)$ as the set of observations containing $b$: $\mathrm{Nbhd}(b) = \{i : b \in x_i\}$. The cosine similarity between two items is then defined as before:

\[
\mathrm{sim}_{\mathrm{item}}(b, d) = \frac{|\mathrm{Nbhd}(b) \cap \mathrm{Nbhd}(d)|}{\sqrt{|\mathrm{Nbhd}(b)|}\,\sqrt{|\mathrm{Nbhd}(d)|}}, \qquad (8)
\]

and the item-based collaborative filtering score of item $b$ is

\[
\mathrm{Score}_{\mathrm{ICF}}(b \mid x_l) = \sum_{d \in x_l} \mathrm{sim}_{\mathrm{item}}(b, d). \qquad (9)
\]

In addition to these two baselines, we consider the extremely simple baseline of ranking items by their frequency in the training set. We call this the frequency baseline.

3 Name Recommendations with the Nameling Dataset

We now demonstrate our similarity-weighted adjusted confidence measure on the Nameling dataset released for the ECML PKDD 2013 Discovery Challenge, and compare it to the alternative confidence measures and baseline methods from Section 2. A description of the Nameling dataset can be found in [12], and details about the challenge task can be found in the introduction to these workshop proceedings. For the sake of self-containment, we give a brief description here.

3.1 The Nameling Public Dataset

The dataset contains the interactions of users with the Nameling website http://nameling.net. A user enters a name, and the Nameling system provides a list of similar names; some of the similar names are given category descriptions, like "English given names" or "Hypocorisms". There are five types of interactions in the dataset: ENTER_SEARCH, when the user enters a name into the search field; LINK_SEARCH, when the user clicks on one of the listed similar names to search for it; LINK_CATEGORY_SEARCH, when the user clicks on a category name to list other names of the same category; NAME_DETAILS, when the user clicks for more details about a name; and ADD_FAVORITE, when the user adds a name to his or her list of favorites.
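The collaborative filtering baselines of Section 2.5, equations (7) through (9), admit an equally short sketch (toy data invented for illustration). Note that (8) is just the cosine similarity of (3) applied to item neighborhoods rather than user sets, which the code below exploits:

```python
import math

def cosine_sim(x, y):
    """Cosine similarity between two sets (users or item neighborhoods)."""
    if not x or not y:
        return 0.0
    return len(x & y) / (math.sqrt(len(x)) * math.sqrt(len(y)))

def score_ucf(b, target_user, observations):
    """User-based CF score (7): similarity-weighted count of users who chose b."""
    return sum(cosine_sim(target_user, x) for x in observations if b in x)

def nbhd(b, observations):
    """Nbhd(b): indices of the observations containing item b."""
    return {i for i, x in enumerate(observations) if b in x}

def score_icf(b, target_user, observations):
    """Item-based CF score (9): sum of item cosine similarities (8)
    between b and each item the target user has already chosen."""
    nb = nbhd(b, observations)
    return sum(cosine_sim(nb, nbhd(d, observations)) for d in target_user)

# Toy data: the same set-of-names representation as before.
users = [
    {"Emma", "Olivia", "Ava"},
    {"Emma", "Olivia"},
    {"Emma", "Mia"},
    {"Olivia", "Ava"},
]
print(score_ucf("Olivia", {"Emma"}, users) > score_ucf("Zoe", {"Emma"}, users))  # True
```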
The dataset contains 515,848 interactions from 60,922 users. The data were split into training and test sets by, for users with sufficiently many ENTER_SEARCH interactions, setting the last two ENTER_SEARCH interactions aside as a test set. Additional handling was applied to duplicate entries; see the introduction to the workshop proceedings for details. The end result was a training set of 443,178 interactions from the 60,922 users, and a test set consisting of the last two ENTER_SEARCH names for 13,008 of the users. The task was to use the interactions in the training set to predict the two
names in the test set for each test user by producing a ranked list of recommended names. The evaluation metric was mean average precision over the first 1000 recommendations; see the proceedings introduction for more details.

3.2 Data Pre-processing

We did minimal data pre-processing, to highlight the ability of similarity-weighted adjusted confidence to perform well without carefully crafted features or manual handling of special cases. We discarded users with no ENTER_SEARCH interactions, which left 54,439 users. For each user $i$, we formed the set of items $x_i$ as the pairs (name, interaction type) for all interactions from that user. For example, (Primrose, ENTER_SEARCH) was the item indicating that the user did an ENTER_SEARCH for the name Primrose. The full item collection $Z$ contained (name, interaction type) for all of the entries in the interaction database; the total number of items in $Z$ was n = 34,070. No other data pre-processing was done. To form rules, we took as left-hand sides $a$ all individual interaction items: $A = Z$. We considered as right-hand sides $b$ all valid names to be recommended (among other things, this excluded names previously entered by that user; see the proceedings introduction for details on which names were excluded from the test set). An example rule is (Primrose, ENTER_SEARCH) → Katniss.

3.3 Results

We applied confidence, adjusted confidence, similarity-weighted confidence, and similarity-weighted adjusted confidence to the training set to generate recommendations for the test users. For the adjusted measures, we found the best performance on the test set with $K = 4$ for similarity-weighted adjusted confidence and $K = 10$ for adjusted confidence, as shown in Figure 1. We also applied the user-based collaborative filtering, item-based collaborative filtering, and frequency baselines to generate recommendations.
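The item construction of Section 3.2 can be sketched as follows; the log rows and field layout are invented for illustration:

```python
from collections import defaultdict

# Hypothetical interaction log: (user_id, interaction_type, name) rows.
log = [
    (1, "ENTER_SEARCH", "Primrose"),
    (1, "LINK_SEARCH", "Rose"),
    (2, "ENTER_SEARCH", "Katniss"),
    (2, "NAME_DETAILS", "Primrose"),
    (3, "LINK_CATEGORY_SEARCH", "Rose"),
]

# Each user becomes a set of (name, interaction type) items.
user_items = defaultdict(set)
for user_id, interaction, name in log:
    user_items[user_id].add((name, interaction))

# Users with no ENTER_SEARCH interaction are discarded.
kept = {u: items for u, items in user_items.items()
        if any(t == "ENTER_SEARCH" for _, t in items)}

print(sorted(kept))  # [1, 2]
```

User 3 is dropped because their only interaction is a LINK_CATEGORY_SEARCH; the surviving sets are the observations $x_i$ fed to the association rules.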
For all of these recommender system approaches, the mean average precision at 1000 on the test set is shown in Table 1. Similarity-weighted adjusted confidence gave the best performance: similarity weighting led to a 4.2% increase in performance over (unweighted) adjusted confidence, and the adjustment led to a 9.7% increase in performance from similarity-weighted confidence to similarity-weighted adjusted confidence. User-based collaborative filtering performed well relative to the frequency baseline, but was outperformed by similarity-weighted adjusted confidence by 11.4%. Item-based collaborative filtering performed very poorly. An advantage of using association rules, as opposed to techniques based on regression or matrix factorization, is that no explicit error minimization problem is being solved. This means that association rules generally do not have the same propensity to overfit as algorithms based on empirical risk minimization.
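Mean average precision at 1000, the challenge metric (two relevant names per test user), can be sketched generically as follows; this is a plain textbook implementation, not the challenge's official scorer:

```python
def average_precision(ranked, relevant, cutoff=1000):
    """Average precision at a cutoff: the mean of precision@k over the
    ranks k at which a relevant item appears, divided by the number of
    relevant items."""
    hits = 0
    total = 0.0
    for k, item in enumerate(ranked[:cutoff], start=1):
        if item in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(rankings, relevants, cutoff=1000):
    """MAP: the average of the per-user average precisions."""
    pairs = list(zip(rankings, relevants))
    return sum(average_precision(r, rel, cutoff) for r, rel in pairs) / len(pairs)

# Relevant names "a" and "c" at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6.
print(average_precision(["a", "b", "c"], {"a", "c"}))
```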
Fig. 1. Test performance for adjusted confidence (blue) and similarity-weighted adjusted confidence (red) for varying amounts of adjustment K.

Table 1. Mean average precision at 1000 for the recommender system approaches discussed in the paper.

Recommender system:
- Similarity-weighted adjusted confidence, K = 4
- Adjusted confidence, K = 10
- Similarity-weighted confidence
- User-based collaborative filtering
- Confidence
- Frequency
- Item-based collaborative filtering

We found that the performance on the discovery challenge hold-out dataset was similar to that measured on the public test set in Table 1.

4 Conclusions

Similarity-weighted adjusted confidence is a natural fit for the Nameling dataset and the name recommendation task. First, the dataset is extremely sparse (see [12]): the Bayesian adjustment K increases performance by reducing variance for low-support itemsets, and this dataset contains many low-support yet informative itemsets. Second, preferences for names are very heterogeneous: incorporating the similarity weighting from user-based collaborative filtering into the confidence measure helps focus the estimation on the more informative users. Association rules with similarity-weighted adjusted confidence are powerful tools for creating a scalable and interpretable recommender system that will perform well in many domains.
Acknowledgments. Thanks to Stephan Doerfel, Andreas Hotho, Robert Jäschke, Folke Mitzlaff, and Juergen Mueller for organizing the ECML PKDD 2013 Discovery Challenge, and for making their excellent Nameling dataset publicly available. Thanks also to Cynthia Rudin for support and for many discussions on using rules for predictive modeling.

References

1. Zaki, M.J., Ogihara, M.: Theoretical foundations of association rules. In: 3rd ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (1998)
2. Herlocker, J.L., Konstan, J.A., Riedl, J.: Explaining collaborative filtering recommendations. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, CSCW '00 (2000)
3. McSherry, D.: Explanation in recommender systems. Artificial Intelligence Review 24(2) (2005)
4. Herlocker, J.L., Konstan, J.A., Terveen, L.G., Riedl, J.T.: Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems 22, 5-53 (2004)
5. Breese, J.S., Heckerman, D., Kadie, C.: Empirical analysis of predictive algorithms for collaborative filtering. In: Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, UAI '98 (1998)
6. Sarwar, B., Karypis, G., Konstan, J., Riedl, J.: Item-based collaborative filtering recommendation algorithms. In: Proceedings of the 10th International Conference on World Wide Web, WWW '01 (2001)
7. Rudin, C., Letham, B., Salleb-Aouissi, A., Kogan, E., Madigan, D.: Sequential event prediction with association rules. In: Proceedings of the 24th Annual Conference on Learning Theory, COLT '11 (2011)
8. Kittler, J., Hatef, M., Duin, R.P.W., Matas, J.: On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (1998)
9. Tax, D.M., van Breukelen, M., Duin, R.P., Kittler, J.: Combining multiple classifiers by averaging or by multiplying? Pattern Recognition 33 (2000)
10. Letham, B., Rudin, C., Madigan, D.: Sequential event prediction. Machine Learning (2013), in press
11. Yue, Y., Finley, T., Radlinski, F., Joachims, T.: A support vector method for optimizing average precision. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '07 (2007)
12. Mitzlaff, F., Stumme, G.: Recommending given names. arXiv preprint (2013)
More informationRule-based Expert Systems
Rule-based Expert Systems What is knowledge? is a theoretical or practical understanding of a subject or a domain. is also the sim of what is currently known, and apparently knowledge is power. Those who
More informationUsing dialogue context to improve parsing performance in dialogue systems
Using dialogue context to improve parsing performance in dialogue systems Ivan Meza-Ruiz and Oliver Lemon School of Informatics, Edinburgh University 2 Buccleuch Place, Edinburgh I.V.Meza-Ruiz@sms.ed.ac.uk,
More informationCSL465/603 - Machine Learning
CSL465/603 - Machine Learning Fall 2016 Narayanan C Krishnan ckn@iitrpr.ac.in Introduction CSL465/603 - Machine Learning 1 Administrative Trivia Course Structure 3-0-2 Lecture Timings Monday 9.55-10.45am
More informationThe Impact of Test Case Prioritization on Test Coverage versus Defects Found
10 Int'l Conf. Software Eng. Research and Practice SERP'17 The Impact of Test Case Prioritization on Test Coverage versus Defects Found Ramadan Abdunabi Yashwant K. Malaiya Computer Information Systems
More informationhave to be modeled) or isolated words. Output of the system is a grapheme-tophoneme conversion system which takes as its input the spelling of words,
A Language-Independent, Data-Oriented Architecture for Grapheme-to-Phoneme Conversion Walter Daelemans and Antal van den Bosch Proceedings ESCA-IEEE speech synthesis conference, New York, September 1994
More informationOn the Combined Behavior of Autonomous Resource Management Agents
On the Combined Behavior of Autonomous Resource Management Agents Siri Fagernes 1 and Alva L. Couch 2 1 Faculty of Engineering Oslo University College Oslo, Norway siri.fagernes@iu.hio.no 2 Computer Science
More informationCalibration of Confidence Measures in Speech Recognition
Submitted to IEEE Trans on Audio, Speech, and Language, July 2010 1 Calibration of Confidence Measures in Speech Recognition Dong Yu, Senior Member, IEEE, Jinyu Li, Member, IEEE, Li Deng, Fellow, IEEE
More informationSpeech Recognition at ICSI: Broadcast News and beyond
Speech Recognition at ICSI: Broadcast News and beyond Dan Ellis International Computer Science Institute, Berkeley CA Outline 1 2 3 The DARPA Broadcast News task Aspects of ICSI
More informationBridging Lexical Gaps between Queries and Questions on Large Online Q&A Collections with Compact Translation Models
Bridging Lexical Gaps between Queries and Questions on Large Online Q&A Collections with Compact Translation Models Jung-Tae Lee and Sang-Bum Kim and Young-In Song and Hae-Chang Rim Dept. of Computer &
More informationCustomized Question Handling in Data Removal Using CPHC
International Journal of Research Studies in Computer Science and Engineering (IJRSCSE) Volume 1, Issue 8, December 2014, PP 29-34 ISSN 2349-4840 (Print) & ISSN 2349-4859 (Online) www.arcjournals.org Customized
More informationVariations of the Similarity Function of TextRank for Automated Summarization
Variations of the Similarity Function of TextRank for Automated Summarization Federico Barrios 1, Federico López 1, Luis Argerich 1, Rosita Wachenchauzer 12 1 Facultad de Ingeniería, Universidad de Buenos
More informationInstructor: Mario D. Garrett, Ph.D. Phone: Office: Hepner Hall (HH) 100
San Diego State University School of Social Work 610 COMPUTER APPLICATIONS FOR SOCIAL WORK PRACTICE Statistical Package for the Social Sciences Office: Hepner Hall (HH) 100 Instructor: Mario D. Garrett,
More informationCLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH
ISSN: 0976-3104 Danti and Bhushan. ARTICLE OPEN ACCESS CLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH Ajit Danti 1 and SN Bharath Bhushan 2* 1 Department
More informationDRAFT VERSION 2, 02/24/12
DRAFT VERSION 2, 02/24/12 Incentive-Based Budget Model Pilot Project for Academic Master s Program Tuition (Optional) CURRENT The core of support for the university s instructional mission has historically
More informationSpeech Emotion Recognition Using Support Vector Machine
Speech Emotion Recognition Using Support Vector Machine Yixiong Pan, Peipei Shen and Liping Shen Department of Computer Technology Shanghai JiaoTong University, Shanghai, China panyixiong@sjtu.edu.cn,
More informationUnsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model
Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.
More informationData Fusion Through Statistical Matching
A research and education initiative at the MIT Sloan School of Management Data Fusion Through Statistical Matching Paper 185 Peter Van Der Puttan Joost N. Kok Amar Gupta January 2002 For more information,
More informationThe Strong Minimalist Thesis and Bounded Optimality
The Strong Minimalist Thesis and Bounded Optimality DRAFT-IN-PROGRESS; SEND COMMENTS TO RICKL@UMICH.EDU Richard L. Lewis Department of Psychology University of Michigan 27 March 2010 1 Purpose of this
More informationMatrices, Compression, Learning Curves: formulation, and the GROUPNTEACH algorithms
Matrices, Compression, Learning Curves: formulation, and the GROUPNTEACH algorithms Bryan Hooi 1, Hyun Ah Song 1, Evangelos Papalexakis 1, Rakesh Agrawal 2, and Christos Faloutsos 1 1 Carnegie Mellon University,
More informationNCEO Technical Report 27
Home About Publications Special Topics Presentations State Policies Accommodations Bibliography Teleconferences Tools Related Sites Interpreting Trends in the Performance of Special Education Students
More informationProduct Feature-based Ratings foropinionsummarization of E-Commerce Feedback Comments
Product Feature-based Ratings foropinionsummarization of E-Commerce Feedback Comments Vijayshri Ramkrishna Ingale PG Student, Department of Computer Engineering JSPM s Imperial College of Engineering &
More informationIterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages
Iterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages Nuanwan Soonthornphisaj 1 and Boonserm Kijsirikul 2 Machine Intelligence and Knowledge Discovery Laboratory Department of Computer
More informationarxiv: v1 [cs.lg] 15 Jun 2015
Dual Memory Architectures for Fast Deep Learning of Stream Data via an Online-Incremental-Transfer Strategy arxiv:1506.04477v1 [cs.lg] 15 Jun 2015 Sang-Woo Lee Min-Oh Heo School of Computer Science and
More informationTeam Formation for Generalized Tasks in Expertise Social Networks
IEEE International Conference on Social Computing / IEEE International Conference on Privacy, Security, Risk and Trust Team Formation for Generalized Tasks in Expertise Social Networks Cheng-Te Li Graduate
More informationComment-based Multi-View Clustering of Web 2.0 Items
Comment-based Multi-View Clustering of Web 2.0 Items Xiangnan He 1 Min-Yen Kan 1 Peichu Xie 2 Xiao Chen 3 1 School of Computing, National University of Singapore 2 Department of Mathematics, National University
More informationCitrine Informatics. The Latest from Citrine. Citrine Informatics. The data analytics platform for the physical world
Citrine Informatics The data analytics platform for the physical world The Latest from Citrine Summit on Data and Analytics for Materials Research 31 October 2016 Our Mission is Simple Add as much value
More informationCross-lingual Text Fragment Alignment using Divergence from Randomness
Cross-lingual Text Fragment Alignment using Divergence from Randomness Sirvan Yahyaei, Marco Bonzanini, and Thomas Roelleke Queen Mary, University of London Mile End Road, E1 4NS London, UK {sirvan,marcob,thor}@eecs.qmul.ac.uk
More informationEvolutive Neural Net Fuzzy Filtering: Basic Description
Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:
More informationMath-U-See Correlation with the Common Core State Standards for Mathematical Content for Third Grade
Math-U-See Correlation with the Common Core State Standards for Mathematical Content for Third Grade The third grade standards primarily address multiplication and division, which are covered in Math-U-See
More informationISFA2008U_120 A SCHEDULING REINFORCEMENT LEARNING ALGORITHM
Proceedings of 28 ISFA 28 International Symposium on Flexible Automation Atlanta, GA, USA June 23-26, 28 ISFA28U_12 A SCHEDULING REINFORCEMENT LEARNING ALGORITHM Amit Gil, Helman Stern, Yael Edan, and
More informationPreference Learning in Recommender Systems
Preference Learning in Recommender Systems Marco de Gemmis, Leo Iaquinta, Pasquale Lops, Cataldo Musto, Fedelucio Narducci, and Giovanni Semeraro Department of Computer Science University of Bari Aldo
More informationGeorgetown University at TREC 2017 Dynamic Domain Track
Georgetown University at TREC 2017 Dynamic Domain Track Zhiwen Tang Georgetown University zt79@georgetown.edu Grace Hui Yang Georgetown University huiyang@cs.georgetown.edu Abstract TREC Dynamic Domain
More informationOCR for Arabic using SIFT Descriptors With Online Failure Prediction
OCR for Arabic using SIFT Descriptors With Online Failure Prediction Andrey Stolyarenko, Nachum Dershowitz The Blavatnik School of Computer Science Tel Aviv University Tel Aviv, Israel Email: stloyare@tau.ac.il,
More informationApplications of data mining algorithms to analysis of medical data
Master Thesis Software Engineering Thesis no: MSE-2007:20 August 2007 Applications of data mining algorithms to analysis of medical data Dariusz Matyja School of Engineering Blekinge Institute of Technology
More informationWHEN THERE IS A mismatch between the acoustic
808 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 14, NO. 3, MAY 2006 Optimization of Temporal Filters for Constructing Robust Features in Speech Recognition Jeih-Weih Hung, Member,
More informationTIMSS ADVANCED 2015 USER GUIDE FOR THE INTERNATIONAL DATABASE. Pierre Foy
TIMSS ADVANCED 2015 USER GUIDE FOR THE INTERNATIONAL DATABASE Pierre Foy TIMSS Advanced 2015 orks User Guide for the International Database Pierre Foy Contributors: Victoria A.S. Centurino, Kerry E. Cotter,
More informationWhy Did My Detector Do That?!
Why Did My Detector Do That?! Predicting Keystroke-Dynamics Error Rates Kevin Killourhy and Roy Maxion Dependable Systems Laboratory Computer Science Department Carnegie Mellon University 5000 Forbes Ave,
More informationSoftware Maintenance
1 What is Software Maintenance? Software Maintenance is a very broad activity that includes error corrections, enhancements of capabilities, deletion of obsolete capabilities, and optimization. 2 Categories
More informationAUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION
JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 1642-6037 Marek WIŚNIEWSKI *, Wiesława KUNISZYK-JÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders
More informationLearning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models
Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Stephan Gouws and GJ van Rooyen MIH Medialab, Stellenbosch University SOUTH AFRICA {stephan,gvrooyen}@ml.sun.ac.za
More informationMassachusetts Institute of Technology Tel: Massachusetts Avenue Room 32-D558 MA 02139
Hariharan Narayanan Massachusetts Institute of Technology Tel: 773.428.3115 LIDS har@mit.edu 77 Massachusetts Avenue http://www.mit.edu/~har Room 32-D558 MA 02139 EMPLOYMENT Massachusetts Institute of
More informationStatewide Framework Document for:
Statewide Framework Document for: 270301 Standards may be added to this document prior to submission, but may not be removed from the framework to meet state credit equivalency requirements. Performance
More informationEfficient Online Summarization of Microblogging Streams
Efficient Online Summarization of Microblogging Streams Andrei Olariu Faculty of Mathematics and Computer Science University of Bucharest andrei@olariu.org Abstract The large amounts of data generated
More informationUniversidade do Minho Escola de Engenharia
Universidade do Minho Escola de Engenharia Universidade do Minho Escola de Engenharia Dissertação de Mestrado Knowledge Discovery is the nontrivial extraction of implicit, previously unknown, and potentially
More informationIntroduction to the Practice of Statistics
Chapter 1: Looking at Data Distributions Introduction to the Practice of Statistics Sixth Edition David S. Moore George P. McCabe Bruce A. Craig Statistics is the science of collecting, organizing and
More informationLearning Methods in Multilingual Speech Recognition
Learning Methods in Multilingual Speech Recognition Hui Lin Department of Electrical Engineering University of Washington Seattle, WA 98125 linhui@u.washington.edu Li Deng, Jasha Droppo, Dong Yu, and Alex
More informationDisambiguation of Thai Personal Name from Online News Articles
Disambiguation of Thai Personal Name from Online News Articles Phaisarn Sutheebanjard Graduate School of Information Technology Siam University Bangkok, Thailand mr.phaisarn@gmail.com Abstract Since online
More informationBootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition
Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition Tom Y. Ouyang * MIT CSAIL ouyang@csail.mit.edu Yang Li Google Research yangli@acm.org ABSTRACT Personal
More information