Introduction to Machine Learning


Introduction to Machine Learning. D. De Cao, R. Basili. Web Mining e Retrieval course, a.y. 2008-9. April 7, 2009.

Outline: Introduction to Machine Learning; Decision Tree; Naive Bayes; K-nearest neighbor.

Introduction to Machine Learning. Machine learning resembles human learning from past experiences: a computer system learns from data, which represent past experiences of an application domain. Our focus: learning a target function that can be used to predict the values of a discrete class attribute. This task is commonly called supervised learning, or classification.

Introduction to Machine Learning: Example. You need to write a program that, given a level hierarchy of a company and an employee described through some attributes (the number of attributes can be very high), assigns the employee to the correct level in the hierarchy. How many if statements would be necessary to select the correct level? How much time would be needed to study the relations between the hierarchy and the attributes? Solution: learn the function that links each employee to the correct level.

Supervised Learning: Data and Goal. Data: a set of data records (also called examples, instances, or cases) described by k attributes A_1, A_2, ..., A_k and a class: each example is labelled with a pre-defined class. In the previous example, the data can be obtained from an existing database. Goal: to learn a classification model from the data that can be used to predict the classes of new (future, or test) cases/instances.
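The shape of such data can be sketched in a few lines of Python; the attribute names and values below are illustrative toy records in the spirit of the loan example, not the lecture's actual dataset:

```python
# Each record carries k attributes plus a pre-defined "Class" label.
examples = [
    {"Age": "young", "Own_house": False, "Has_job": True,  "Class": "yes"},
    {"Age": "young", "Own_house": False, "Has_job": False, "Class": "no"},
    {"Age": "old",   "Own_house": True,  "Has_job": False, "Class": "yes"},
]

# The k attributes A_1..A_k are every field except the class label.
attributes = [a for a in examples[0] if a != "Class"]
labels = [e["Class"] for e in examples]
print(attributes)  # ['Age', 'Own_house', 'Has_job']
print(labels)      # ['yes', 'no', 'yes']
```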

Supervised vs. Unsupervised Learning. Supervised learning needs supervision: the data (observations, measurements, etc.) are labeled with pre-defined classes, as if a "teacher" gave the classes. New (test) data are classified into these classes too. Unsupervised learning: the class labels of the data are unknown; given a set of data, the task is to establish the existence of classes or clusters in the data.

Supervised learning process: two steps. Learning (Training): learn a model using the training data. Testing: test the model using unseen test data to assess the model's accuracy.
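The two steps can be sketched with a deliberately trivial "majority class" model; the data and the split point are made up for illustration:

```python
from collections import Counter

# Toy labeled data: (input, class). First 4 records train, last 2 test.
data = [("a", "yes"), ("b", "yes"), ("c", "no"),
        ("d", "yes"), ("e", "no"), ("f", "yes")]
train, test = data[:4], data[4:]

# Step 1 - Learning (Training): fit the model on training data only.
# Here the "model" is just the most frequent training class.
majority = Counter(label for _, label in train).most_common(1)[0][0]

# Step 2 - Testing: measure accuracy on unseen test data.
accuracy = sum(label == majority for _, label in test) / len(test)
print(majority, accuracy)  # yes 0.5
```

The point of the split is that accuracy is assessed on data the model never saw during training.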

Learning Algorithms. Boolean functions (decision trees). Probabilistic functions (Bayesian classifiers). Functions that partition the vector space: non-linear (KNN, neural networks, ...) and linear (support vector machines, perceptron, ...).

Decision Tree: Domain Example The class to learn is: approve a loan

Decision Tree: example for the loan problem.

Is the decision tree unique? No: here is a simpler tree. We want a tree that is both small and accurate: smaller trees are easier to understand and tend to perform better. Finding the best tree is NP-hard, so all current tree-building algorithms are heuristic. A decision tree can be converted to a set of rules.

From a decision tree to a set of rules. Each path from the root to a leaf is a rule. Rules: Own_house = true -> Class = yes; Own_house = false, Has_job = true -> Class = yes; Own_house = false, Has_job = false -> Class = no.
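Applying such a rule set is just checking the conditions along each root-to-leaf path; a minimal sketch (the function name and argument types are illustrative):

```python
def classify(own_house: bool, has_job: bool) -> str:
    """Apply the three rules extracted from the loan decision tree."""
    if own_house:        # Own_house = true -> Class = yes
        return "yes"
    if has_job:          # Own_house = false, Has_job = true -> Class = yes
        return "yes"
    return "no"          # Own_house = false, Has_job = false -> Class = no

print(classify(True, False))   # yes
print(classify(False, True))   # yes
print(classify(False, False))  # no
```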

Algorithm for decision tree learning. Basic algorithm (a greedy divide-and-conquer algorithm): assume attributes are categorical for now (continuous attributes can be handled too). The tree is constructed in a top-down recursive manner. At the start, all the training examples are at the root. Examples are partitioned recursively based on selected attributes. Attributes are selected on the basis of an impurity function (e.g., information gain). Conditions for stopping the partitioning: all examples for a given node belong to the same class; there are no remaining attributes for further partitioning (the majority class becomes the leaf); there are no examples left.
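The greedy top-down algorithm can be sketched compactly; this is a minimal illustration with entropy-based gain on a tiny made-up dataset, not a production implementation:

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def build(rows, attrs):
    """rows: list of (attribute_dict, class_label). Returns a leaf label
    or a node (attribute, {value: subtree})."""
    labels = [c for _, c in rows]
    majority = Counter(labels).most_common(1)[0][0]
    # Stopping: pure node, or no attributes left -> majority-class leaf.
    if len(set(labels)) == 1 or not attrs:
        return majority
    # Greedy choice: the attribute with maximum information gain.
    def gain(a):
        rem = 0.0
        for v in {r[a] for r, _ in rows}:
            subset = [c for r, c in rows if r[a] == v]
            rem += len(subset) / len(rows) * entropy(subset)
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    # Partition recursively on the chosen attribute's values.
    children = {}
    for v in {r[best] for r, _ in rows}:
        subset = [(r, c) for r, c in rows if r[best] == v]
        children[v] = build(subset, [a for a in attrs if a != best])
    return (best, children)

rows = [
    ({"Own_house": "true",  "Has_job": "false"}, "yes"),
    ({"Own_house": "false", "Has_job": "true"},  "yes"),
    ({"Own_house": "false", "Has_job": "false"}, "no"),
]
tree = build(rows, ["Own_house", "Has_job"])
print(tree)
```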

Choose an attribute to partition the data. How do we choose the best attribute? The objective is to reduce the impurity, or uncertainty, in the data as much as possible. A subset of the data is pure if all its instances belong to the same class. The heuristic is to choose the attribute with the maximum information gain or gain ratio, based on information theory.

Information Gain. Entropy of D: given a set of examples D, the original entropy of the dataset is H[D] = -Σ_{j=1}^{|C|} P(c_j) log2 P(c_j), where C is the set of classes. Entropy of an attribute A_i: if we make attribute A_i, with v values, the root of the current tree, this partitions D into v subsets D_1, D_2, ..., D_v. The expected entropy if A_i is used as the current root is H_{A_i}[D] = Σ_{j=1}^{v} (|D_j| / |D|) H[D_j].

Information Gain. The information gained by selecting attribute A_i to branch or partition the data is the difference between the prior entropy and the expected entropy of the selected attribute: gain(D, A_i) = H[D] - H_{A_i}[D]. We choose the attribute with the highest gain to branch/split the current tree.
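The two formulas translate directly into code; a sketch where records are (attribute_dict, class_label) pairs and the tiny dataset is made up for illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H[D] = -sum_j P(c_j) log2 P(c_j) over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(records, attr):
    """gain(D, A_i) = H[D] - H_{A_i}[D]."""
    labels = [cls for _, cls in records]
    expected = 0.0  # H_{A_i}[D]: weighted entropy of each value's subset D_j
    for v in {attrs[attr] for attrs, _ in records}:
        subset = [cls for attrs, cls in records if attrs[attr] == v]
        expected += len(subset) / len(records) * entropy(subset)
    return entropy(labels) - expected

records = [({"A": "x"}, "yes"), ({"A": "x"}, "yes"),
           ({"A": "y"}, "no"),  ({"A": "y"}, "yes")]
print(round(info_gain(records, "A"), 3))  # 0.311
```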

Example.
H[D] = -(6/15) log2(6/15) - (9/15) log2(9/15) = 0.971
H_{Own_house}[D] = (6/15) H[D_1] + (9/15) H[D_2] = (6/15) * 0 + (9/15) * 0.918 = 0.551
gain(D, Age) = 0.971 - 0.888 = 0.083
gain(D, Own_house) = 0.971 - 0.551 = 0.420
gain(D, Has_job) = 0.971 - 0.647 = 0.324
gain(D, Credit) = 0.971 - 0.608 = 0.363
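The arithmetic for the Own_house split checks out numerically; the sketch below assumes the 9-example subset splits 3 vs. 6 between the classes, which is what H[D_2] = 0.918 implies:

```python
from math import log2

def H(probs):
    """Entropy of a class distribution given as probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

h_d = H([6/15, 9/15])                               # original entropy of D
# Own_house = true: 6 pure examples (entropy 0);
# Own_house = false: 9 examples split 3/6 (entropy 0.918, per the slide).
h_oh = (6/15) * H([1.0]) + (9/15) * H([3/9, 6/9])
print(round(h_d, 3), round(h_oh, 3), round(h_d - h_oh, 3))  # 0.971 0.551 0.42
```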

Algorithm for decision tree learning