Lecture 2: More Similarity Searching; Multidimensional Scaling

36-350: Data Mining
28 August 2009

Reading: Principles of Data Mining, sections 14.1-14.4 (skipping 14.3.3 for now) and 3.7.

Let's recap where we left similarity searching for documents. We represent each document as a bag of words, i.e., a vector giving the number of times each word occurred in the document. This abstracts away all the grammatical structure, context, etc., leaving us with a matrix whose rows are feature vectors, a data frame. To find documents which are similar to a given document Q, we calculate the distance between Q and all the other documents, i.e., the distance between their feature vectors.

1 Queries

If we have a document in hand which we like, and we want to find the k documents closest to it, we can do this once we know the distances between that document and all the others. But how can we get around needing that one good document to begin with? The trick is that a query, whether an actual sentence ("What are the common problems of the 2001 model year Saturn?") or just a list of key words ("problems 2001 model Saturn"), is itself a small document. If we represent user queries as bags of words, we can use our similarity searching procedure on them. This is really all it takes.
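To make this concrete, here is a minimal sketch in R (the language used elsewhere in these notes, e.g., cmdscale below) of treating a query as a tiny document and ranking documents by distance to it. The names dtm (a documents-by-words count matrix) and query (a count vector over the same vocabulary) are hypothetical, not taken from any particular package.

# Rank documents by Euclidean distance to a query's bag-of-words vector.
# dtm: one row per document, one column per word; query: counts over the same words.
nearest.docs <- function(dtm, query, k = 5) {
  dists <- apply(dtm, 1, function(doc) sqrt(sum((doc - query)^2)))
  order(dists)[1:k]   # indices of the k closest documents
}

# Toy example with a three-word vocabulary:
dtm <- rbind(c(2, 0, 1), c(0, 3, 1), c(1, 1, 0))
query <- c(1, 0, 1)
nearest.docs(dtm, query, k = 2)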

1.1 Evaluating Similarity Search

When someone uses a search engine, they have some idea of which of the results are what they were looking for. In the jargon, we say that the good results were relevant to the query. There are actually two aspects to finding relevant documents, both of which are important:

- Most of the results should be relevant; that is, the precision of the search should be high.

- Most of the relevant items should be returned as results; that is, the recall should be high, too.

Formally, if the search returns k items, r of which are relevant, and there are R relevant items in the whole corpus of N items, the precision is the ratio r/k, and the recall is the ratio r/R. (This is for one query, but we can average over queries.) Notice that r ≤ k, so there are limits on how high the recall can be when k is small.

As we change k for a given query, we get different values for the precision and the recall. Generally, we expect that increasing k will increase recall (more relevant things can come in) but lower precision (more irrelevant things can come in, too). A good search method is one where the trade-off between precision and recall is not very sharp, where we can gain a lot of recall while losing only a little precision.

A visual way of representing the precision-recall trade-off is to plot precision (on the vertical axis) against recall (on the horizontal axis) for multiple values of k. If the method is working well, when k is small the precision should be high, though the recall will be limited by k; as k grows, the recall should increase, moving us to the right, but the precision will fall, moving us down. So the precision-recall curve should go from somewhere near the low-recall, high-precision corner to somewhere near the high-recall, low-precision corner. The total area under the curve is often used as a measure of how good the search method is. Of course, there is nothing magic about changing k; if we have a different search technique with a tunable setting, we can make a precision-recall curve for it, too.

Search, Hypothesis Testing, Signal Detection, ROC

It is no coincidence that the difference between precision and recall is very like the difference between type I and type II errors in hypothesis testing. High precision is like having a low type I error rate (most of the hits are real); high recall is like having a low type II error rate (most things which should be hits are). The same idea applies to signal detection as well, where a type I error is called a false alarm (you thought there was signal when there was just noise) and a type II error is called a miss (you mistook signal for noise). The precision-recall curves actually come from signal detection theory, where they are called receiver operating characteristic curves, or ROC curves.

Practice

In practice, the only way to tell whether your search engine's results are relevant is to ask actual people (Figure 1). The major services do this with lab experiments, with special browsers given to testers to quiz them on whether the results were relevant, and by taking random samples of their query log and having testers repeat the queries to see whether the results were relevant. Naturally, this use of human beings is slow and expensive, especially because the raters have to be trained, so the amount of this data is limited, and the services are very reluctant to share it. Notice, by the way, that when dealing with something like the web, or indeed any large collection where users give arbitrary queries, it is a lot easier to estimate precision than recall (how do you find R, the number of genuinely relevant documents in the whole corpus?).
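As a small illustration of the definitions above, here is a sketch in R that computes precision and recall at every cutoff k, given a ranking of the corpus (best match first) and a 0/1 vector marking which documents are truly relevant; both inputs are hypothetical stand-ins for a search engine's output and human relevance judgments.

# Precision and recall at each k, for one query.
# ranking: document indices in order of decreasing similarity to the query.
# relevant: 0/1 vector over the whole corpus, 1 = truly relevant.
precision.recall <- function(ranking, relevant) {
  R <- sum(relevant)                 # number of relevant documents in the corpus
  hits <- cumsum(relevant[ranking])  # r = relevant items among the top k
  k <- seq_along(ranking)
  data.frame(k = k, precision = hits / k, recall = hits / R)
}

# pr <- precision.recall(ranking, relevant)
# plot(pr$recall, pr$precision, type = "l", xlab = "recall", ylab = "precision")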

Figure 1: Search-engine evaluation in practice. Source: http://icanhascheezburger.com/2007/01/11/this-is-relevant-to-my-interests-2/

2 Classification

One very important data-mining task is classifying new pieces of data, that is, assigning them to one of a fixed number of classes. Last time, our two classes were stories about music and stories about the other arts. Usually, new data doesn't come with a class label, so we have to somehow guess the class from the features. (If it does come with a label, we just read the label.) Two very basic strategies become available as soon as we can measure similarity or distance.

1. With a nearest neighbor strategy, we guess that the new object is in the same class as the closest already-classified object. (We saw this at the end of the last lecture.) Similarity search is in a way just the reverse: we guess that the nearest neighbor is in the same class ("is relevant") as the query.

2. With a prototype strategy, we pick out the most representative member of each class, or perhaps the average of each class, as its prototype, and guess that new objects belong to the class with the closer prototype.

We will see many other classification methods before the course is over. All classification methods can be evaluated on their error rate or mis-classification rate, which is simply the fraction of cases they get wrong by assigning them to the wrong class. (A classifier's mis-classification rate is also sometimes just called its inaccuracy.) A more refined analysis distinguishes between different kinds of errors. For each class i, we record what fraction of i's are guessed to be of class j, and get a little matrix called the confusion matrix. (The diagonal entries show probabilities of correct classifications.) For two classes, this gives us the type I and type II error rates again, though which is which is arbitrary.

3 Inverse Document Frequency

Someone asked in class last time about selectively paying less attention to certain words, especially common words, and more to the rest. This is an excellent notion. Not all features are going to be equally useful, and some words are so common that they give us almost no ability at all to discriminate between relevant and irrelevant documents. In (most) collections of English documents, looking at "the", "of", "a", etc., is a waste of time. We could handle this with a fixed list of stop words, which we just don't count, but this is at once too crude (all or nothing) and too much work (we need to think up the list).

Inverse document frequency (IDF) is a more adaptive approach. The document frequency of a word w is the number of documents it appears in, n_w. The IDF weight of w is

\[ \mathrm{IDF}(w) = \log \frac{N}{n_w} \]

where N is the total size of our collection. Now when we make our bag-of-words vector for the document Q, the number of times w appears in Q, Q_w, is multiplied by IDF(w). Notice that if w appears in every document, n_w = N and it gets an IDF weight of zero; we won't use it to calculate distances. This takes care of most of the things we'd use a list of stop-words for, but it also takes into account, implicitly, the kind of documents we're using. (In a database of papers on genetics, "gene" and "DNA" are going to have IDF weights of near zero too.) On the other hand, if w appears in only a few documents, it will get a weight of about log N, and all documents containing w will tend to be close to each other.
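Here is a minimal R sketch of this weighting, directly following the definition above; dtm is again a hypothetical documents-by-words count matrix, not any particular package's object.

# IDF-weight a document-term count matrix.
idf.weight <- function(dtm) {
  N <- nrow(dtm)                 # number of documents in the collection
  n.w <- colSums(dtm > 0)        # document frequency of each word
  idf <- log(N / n.w)            # words appearing in every document get weight zero
  sweep(dtm, 2, idf, `*`)        # multiply each word's counts by its IDF weight
}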

Table 1 shows how including IDF weighting, along with Euclidean length normalization, dramatically improves our ability to classify posts as either about music or about the other arts.

Normalization       Equal weight   IDF weight
None                     38            52
Word count               39            37
Euclidean length         44            19

Table 1: Number of mis-classifications in a collection of 102 stories from the Times about music (45 stories) and the other arts (57 stories) when using the nearest neighbor method, with different choices of normalization and with or without IDF weighting. (Cf. Fig. 2.) Note that an idiot who always guessed "art" would only make 45 mistakes.

You could tell a similar story about any increasing function, not just log, but log happens to work very well in practice, in part because it's not very sensitive to the exact number of documents. So this is not the same log we will see in information theory, or the log in psychophysics. Notice also that this is not guaranteed to work. Even if w appears in every document, so IDF(w) = 0, it might be common in some of them and rare in others, so we'll ignore what might have been useful information. (Maybe genetics papers about laboratory procedures use "DNA" more often, and papers about hereditary diseases use "gene" more often.)

This is our first look at the problem of feature selection: how do we pick out good, useful features from the very large, perhaps infinite, collection of possible features? We will come back to this in various ways throughout the course. Right now, concentrate on the fact that in search, and other classification problems, we are looking for features that let us discriminate between the classes.

4 More Wrinkles to Similarity Search

4.1 Stemming

It is a lot easier to decide what counts as a word in English than in some other languages.[2] Even so, we need to decide whether "car" and "cars" are the same word, for our purposes, or not. Stemming takes derived forms of words (like "cars", "flying") and reduces them to their stem ("car", "fly"). Doing this well requires linguistic knowledge (so the system doesn't think the stem of "potatoes" is "potatoe", or that "gravity" is the same as "grave"), and it can even be harmful (if the document has "Saturns", plural, it's most likely about the cars).

[2] For example, Turkish is what is known as an agglutinative language, in which grammatical units are glued together to form compound words whose meaning would be a whole phrase or sentence in English, e.g., gelemiyebelirim, "I may be unable to come", yapabilecekdiyseniz, "if you were going to be able to do", or calistirilmamaliymis, "supposedly he ought not to be made to work". (German does this too, but not so much.) This causes problems with Turkish-language applications, because many sequences-of-letters-separated-by-punctuation are effectively unique. See, for example, L. Özgür, T. Güngör and F. Gürgen, "Adaptive antispam filtering for agglutinative languages: a special case for Turkish", Pattern Recognition Letters 25 (2004): 1819-1831, available from http://www.cmpe.boun.edu.tr/~gungort/.

4.2 Feedback

People are much better at telling whether you've found what they're looking for than they are at explaining what it is that they're looking for. (They know it when they see it.) Queries are users trying to explain what they're looking for (to a computer, no less), so they're often pretty bad. An important idea in data mining is that people should do things at which they are better than computers, and vice versa: here they should be deciders, not explainers. Rocchio's algorithm takes feedback from the user about which documents were relevant, and then refines the search, giving more weight to what they like, and less to what they don't like.

The user gives the system some query, whose bag-of-words vector is Q_t. The system responds with various documents, some of which the user marks as relevant (R) and others as not-relevant (NR). (See Fig. 1 again.) The system then modifies the query vector:

\[ Q_{t+1} = \alpha Q_t + \frac{\beta}{|R|} \sum_{\mathrm{doc} \in R} \mathrm{doc} \;-\; \frac{\gamma}{|NR|} \sum_{\mathrm{doc} \in NR} \mathrm{doc} \]

where |R| and |NR| are the number of relevant and non-relevant documents, and α, β and γ are positive constants. α says how much continuity there is between the old search and the new one; β and γ gauge our preference for recall (we find more relevant items) versus precision (more of what we find is relevant). The system then runs another search with Q_{t+1}, and the cycle starts over. As this repeats, Q_t gets closer to the bag-of-words vector which best represents what the user has in mind, assuming they have something definite and consistent in mind.

N.B.: A word can't appear in a document a negative number of times, so ordinarily bag-of-words vectors have non-negative components. Q_t, however, can easily come to have negative components, indicating words whose presence is evidence that the document isn't relevant. Recalling the example of problems with used 2001 Saturns, we probably don't want anything which contains "Titan" or "Rhea", since it's either about mythology or astronomy, and giving our query negative components for those words suppresses those documents.

Rocchio's algorithm works with any kind of similarity-based search, not just text. It's related to many machine-learning procedures which incrementally adjust in the direction of what has worked and away from what has not: the stochastic approximation algorithm for estimating functions and curves, reinforcement learning for making decisions, Bayesian learning for updating conditional probabilities, and multiplicative weight training for combining predictors (which we'll look at later in the course). This is no accident; they are all special cases of adaptive evolution by means of natural selection.
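A minimal R sketch of one round of the Rocchio update above; the inputs (the current query vector and matrices whose rows are the documents the user marked) and the particular values of the constants are hypothetical, chosen only for illustration.

# One Rocchio update of the query vector.
# query: current bag-of-words vector; relevant.docs, nonrelevant.docs:
# matrices with one marked document per row, columns matching query.
rocchio.update <- function(query, relevant.docs, nonrelevant.docs,
                           alpha = 1, beta = 0.75, gamma = 0.15) {
  alpha * query +
    beta  * colMeans(relevant.docs) -     # pull toward the average relevant document
    gamma * colMeans(nonrelevant.docs)    # push away from the average non-relevant document
}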

5 Visualization: Multidimensional Scaling

The bag-of-words vectors representing our documents generally live in spaces with lots of dimensions, certainly more than three, which are hard for ordinary humans to visualize. However, we can compute the distance between any two vectors, so we know how far apart they are. Multidimensional scaling (MDS) is the general name for a family of algorithms which take high-dimensional vectors and map them down to two- or three-dimensional vectors, trying to preserve all the relevant distances. Abstractly, the idea is that we start with vectors v_1, v_2, ..., v_n in a p-dimensional space, where p is large, and we want to find new vectors x_1, x_2, ..., x_n in R^2 or R^3 such that

\[ \sum_{i=1}^{n} \sum_{j \neq i} \left( \delta(v_i, v_j) - d(x_i, x_j) \right)^2 \]

is as small as possible, where δ is the distance in the original space and d is the Euclidean distance in the new space. Note that the new or image points x_i are representations of the v_i, i.e., representations of representations. There is some trickiness to properly minimizing this objective function: for instance, if we rotate all the x_i through a common angle, their distances are unchanged, so it's not really a new solution; and it's not usually possible to make the objective exactly zero. (See Sec. 3.7 in the textbook for details.)

We will see a lot of multidimensional scaling plots, because they are nice visualization tools, but we will also see a lot of other data reduction or dimensionality reduction methods, because sometimes it's more important to preserve properties other than distances. Notice that while the bag-of-words representation gives each of our original coordinates/features some meaning (it says something very definite about the document being represented), that's not the case with the coordinates we get after doing the MDS. If nothing else, the fact that we could rotate all of the image points arbitrarily makes it very hard to assign any interpretation to where the images fall on the axes. This is true of many other dimensionality-reduction methods as well.
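Figure 2 below was produced with R's built-in cmdscale (classical MDS). Here is a small self-contained sketch along those lines; the matrix vectors stands in for the (IDF-weighted, length-normalized) bag-of-words matrix, with toy random data used only so the snippet runs on its own.

# Classical multidimensional scaling with cmdscale, as used for Figure 2.
vectors <- matrix(rnorm(50 * 10), nrow = 50)   # 50 toy "documents", 10 features
distances <- dist(vectors)                     # pairwise distances in the original space
mds.coords <- cmdscale(distances, k = 2)       # two-dimensional image points
plot(mds.coords[, 1], mds.coords[, 2],
     xlab = "mds.coords[,1]", ylab = "mds.coords[,2]")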

Figure 2 (two panels: Euclidean-length normalization, and IDF weights with Euclidean-length normalization): Illustrations of multidimensional scaling for the 102 art/music stories (art=red, music=blue), with and without IDF weights. This was produced using the R command cmdscale (plus a little extra code to plot it nicely). Notice that with IDF weights, the two classes are far more distinct visually, which comes through in the classification results in Table 1.