CHAPTER 1: INTRODUCTION


Big Data

- Widespread use of personal computers and wireless communication leads to big data.
- We are both producers and consumers of data.
  - Producers: when we buy a product, rent a movie, write a blog, or post on social media.
  - Consumers: we want products and services specialized to us; we expect our needs to be understood and our interests to be predicted.
- Data is not random; it has structure, e.g., customer behavior.
- We need "big theory" to extract that structure from data, for (a) understanding the process and (b) making predictions for the future.

Why Learn?

- Machine learning is programming computers to optimize a performance criterion using example data or past experience.
- There is no need to learn to calculate payroll.
- Learning is used when:
  - Human expertise does not exist (navigating on Mars).
  - Humans are unable to explain their expertise (speech recognition).
  - The solution changes over time (routing on a computer network).
  - The solution needs to be adapted to particular cases (user biometrics).
  - There seems to be a hidden pattern in the data, but we cannot pinpoint it.

What We Talk About When We Talk About Learning

- Learning general models from data of particular examples.
- Data is cheap and abundant (data warehouses, data marts); knowledge is expensive and scarce.
- Example in retail, going from customer transactions to consumer behavior: "People who bought Blink also bought Outliers" (www.amazon.com).
- Build a model that is a good and useful approximation to the data; it allows us to predict future behavior, find hidden patterns, etc.

Data Mining

- The application of machine learning to large databases is called data mining.
- In data mining, a large volume of data is processed to construct a simple model with valuable use, e.g., high predictive accuracy.
- There are abundant application areas:
  - Retail: market basket analysis, customer relationship management (CRM)
  - Finance: credit scoring, fraud detection
  - Manufacturing: control, robotics, troubleshooting
  - Medicine: medical diagnosis
  - Telecommunications: spam filters, intrusion detection
  - Bioinformatics: motifs, alignment
  - Web mining: search engines
  - ...

What is Machine Learning?

- Machine learning is not just a database problem; it is also a part of Artificial Intelligence (AI): to be intelligent, a system must have the ability to learn.
- Machine learning is programming computers to optimize a performance criterion using example data or past experience.
- This is often done by building a model (say, a predictive model trained on past data to make predictions on future data).
- Such models are described by a number of parameters.
- Learning is the execution of a computer program that optimizes the values of these parameters, which in effect optimizes the performance criterion.
- Thus machine learning is essentially about solving an optimization problem efficiently using computers (a minimal sketch of this view follows below).
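To make the "adjust parameters to optimize a performance criterion" view concrete, here is a minimal sketch, not taken from the lecture, that fits the two parameters of a linear model by gradient descent on a mean-squared-error criterion; the toy data and learning rate are made up for illustration.

```python
import numpy as np

# Toy "past experience": inputs x and observed outputs y (made-up data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Model y_hat = w * x + b, described by two parameters.
w, b = 0.0, 0.0
lr = 0.01  # learning rate (assumed value)

for _ in range(2000):
    y_hat = w * x + b
    err = y_hat - y
    # Performance criterion: mean squared error; follow its gradient.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # roughly w = 2, b = 0 here
```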

What is Machine Learning?

- Role of statistics/probability: machine learning uses the theory of statistics to build mathematical models whose parameters are found by solving an optimization problem; such a model is then used to make inferences from a sample.
- Role of computer science, which is twofold:
  - First, we need efficient algorithms to solve the optimization problem (i.e., to train a model) and to store and process possibly massive amounts of data.
  - Second, once a model is learned, its representation and the algorithmic solution for inference need to be efficient as well.

Examples of Machine Learning Applications

- Association
- Supervised learning
  - Classification
  - Regression
- Unsupervised learning
  - Clustering
  - Dimension reduction
- Reinforcement learning

Learning Associations

- Market basket analysis: learning the conditional probability P(Y | X, D) that somebody who buys X also buys Y, where X and Y are products/services and D is the set of customer attributes, e.g., gender, age, marital status, etc.
- Example: P(chips | beer) = 0.7 (see the sketch below).
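A minimal sketch, not part of the slides, of how the conditional probability in the example could be estimated from transaction data by simple counting; the transactions below are made up.

```python
# Estimate P(chips | beer) from made-up market-basket transactions.
transactions = [
    {"beer", "chips", "salsa"},
    {"beer", "chips"},
    {"beer", "diapers"},
    {"chips", "soda"},
    {"beer", "chips", "soda"},
]

beer_baskets = [t for t in transactions if "beer" in t]
both = sum(1 for t in beer_baskets if "chips" in t)

# P(chips | beer) = #(baskets with beer and chips) / #(baskets with beer)
p_chips_given_beer = both / len(beer_baskets)
print(f"P(chips | beer) = {p_chips_given_beer:.2f}")  # 3/4 = 0.75 for this toy data
```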

Classification

- Classification corresponds to learning a function (mapping) from input to output, where the output is restricted to a finite number of classes (typically two, but there can be more).
- Such a learned function is called a classifier.
- The input is any object represented by a set of attributes/features.
- Such an input-output function can be modeled in different ways and is typically expressed by a set of model parameters.
- Training a classifier corresponds to learning the optimum values of these model parameters; this requires past data (input-output pairs).
- Testing a classifier corresponds to predicting the class label (output) of an unseen data point given only the input (its features/attributes).
- Example: credit scoring
  - Two classes: high-risk and low-risk.
  - Each data point is a customer with two features/attributes: income and savings.
  - Classification corresponds to differentiating between low-risk and high-risk customers from their income and savings.
  - Discriminant: IF income > θ1 AND savings > θ2 THEN low-risk ELSE high-risk (see the sketch below).
  - A discriminant is a function that separates different classes.
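A minimal sketch, not from the slides, of the credit-scoring discriminant written as code; the threshold values standing in for θ1 and θ2 are illustrative placeholders that training would normally choose from past (input, output) pairs.

```python
# Rule-based discriminant for the credit-scoring example.
# THETA1 and THETA2 are the model parameters; the values are assumed for illustration.
THETA1 = 30_000  # income threshold (assumed)
THETA2 = 10_000  # savings threshold (assumed)

def classify(income: float, savings: float) -> str:
    """IF income > theta1 AND savings > theta2 THEN low-risk ELSE high-risk."""
    return "low-risk" if income > THETA1 and savings > THETA2 else "high-risk"

print(classify(income=45_000, savings=20_000))  # low-risk
print(classify(income=45_000, savings=2_000))   # high-risk
```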

Classification

- What are the model parameters in this example?
- In this example, how is the number of features/attributes related to the number of model parameters?
- Food for thought: what happens when you have a large number of features? (We have seen that in the bag-of-words format we can have thousands of features.)
- Discriminant: IF income > θ1 AND savings > θ2 THEN low-risk ELSE high-risk.
- A discriminant is a function that separates different classes.

Classification: Applications

- Also known as pattern recognition.
- Face recognition: pose, lighting, occlusion (glasses, beard), make-up, hair style.
- Character recognition: different handwriting styles.
- Speech recognition: temporal dependency.
- Medical diagnosis: from symptoms to illnesses.
- Biometrics: recognition/authentication using physical and/or behavioral characteristics: face, iris, signature, etc.
- Outlier/novelty detection.

Face Recognition

[Figure: training examples of a person and test images, from the ORL dataset, AT&T Laboratories, Cambridge, UK.]

Regression

- Regression corresponds to learning a function (mapping) from input to output, where the output is any real number.
- Such a learned function is called a regressor.
- The input is any object represented by a set of attributes/features.
- Such an input-output function can be modeled in different ways and is typically expressed by a set of model parameters.
- Training a regression model corresponds to learning the optimum values of these model parameters; this requires past data (input-output pairs).
- Testing a trained regression model corresponds to predicting the scalar output value of an unseen data point given only the input (its features/attributes).
- Example: price of a car (see the sketch below)
  - The input is the car's attributes; the output is the car's price.
  - Regression model: y = g(x | θ), where θ are the model parameters.
  - In this example the model parameters control the orientation of the fitted line (the blue line on the slide).
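A minimal sketch, not from the slides, of a one-feature regressor y = g(x | θ) fit by ordinary least squares with NumPy; the mileage and price numbers are made up.

```python
import numpy as np

# Made-up data: x = car mileage (in thousands of km), y = price (in thousands of dollars).
x = np.array([20.0, 50.0, 80.0, 110.0, 140.0])
y = np.array([24.0, 20.5, 16.0, 12.5, 9.0])

# Fit y = theta1 * x + theta0 by ordinary least squares.
theta1, theta0 = np.polyfit(x, y, deg=1)

def g(x_new: float) -> float:
    """The learned regressor y = g(x | theta)."""
    return theta1 * x_new + theta0

print(f"theta = ({theta1:.3f}, {theta0:.3f})")
print(f"predicted price at 100k km: {g(100.0):.1f} thousand dollars")
```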

Regression Applications

- Navigating a car: angle of the steering wheel.
- Kinematics of a robot arm: given a target position (x, y), predict the joint angles α1 = g1(x, y) and α2 = g2(x, y).
- Response surface design.

Supervised Learning: Uses

- Prediction of future cases: use the rule to predict the output for future inputs.
- Knowledge extraction: the rule is easy to understand.
- Compression: the rule is simpler than the data it explains.
- Outlier detection: exceptions that are not covered by the rule, e.g., fraud.

Unsupervised Learning

- Learning what normally happens; there is no output (label).
- Clustering: grouping similar instances (see the sketch below).
- Example applications:
  - Customer segmentation in CRM
  - Image compression: color quantization
  - Bioinformatics: learning motifs
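A minimal clustering sketch, not from the slides: k-means (here via scikit-learn, an assumed library choice) grouping made-up two-feature customer records into segments, in the spirit of customer segmentation in CRM.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up customer data: each row is (annual spend, visits per month).
X = np.array([
    [200,  2], [220,  3], [250,  2],    # low spenders
    [900, 10], [950, 12], [880,  9],    # frequent high spenders
    [600,  1], [650,  2], [620,  1],    # occasional big-basket shoppers
], dtype=float)

# Group the customers into 3 segments (k chosen by hand here).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("segment of each customer:", kmeans.labels_)
print("segment centers:\n", kmeans.cluster_centers_)
```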

Reinforcement Learning

- Learning a policy: a sequence of outputs.
- There is no supervised output, only a delayed reward, which gives rise to the credit assignment problem (a tiny illustrative sketch follows below).
- Examples: game playing, a robot in a maze, multiple agents, partial observability, ...
- Reinforcement learning is a separate sub-area of machine learning; unfortunately we will not cover it in this class.
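Although reinforcement learning is not covered in this class, a minimal sketch may help make "delayed reward" concrete: tabular Q-learning on a tiny made-up corridor world, where the only reward is at the far end and value propagates back to earlier states over episodes. The environment, constants, and episode count are all assumptions.

```python
import random

# Tiny corridor world: states 0..4, reward only on reaching the right end (state 4).
# Actions: 0 = left, 1 = right. Illustrates delayed reward / credit assignment.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed constants

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

for _ in range(500):  # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # Q-learning update: the delayed reward is pushed back toward earlier states.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print("greedy action per state:", [q.index(max(q)) for q in Q[:GOAL]])  # expect all 1 (right)
```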

Resources: Datasets

- UCI Repository: http://www.ics.uci.edu/~mlearn/mlrepository.html
- Statlib: http://lib.stat.cmu.edu/

Resources: Journals

- Journal of Machine Learning Research (www.jmlr.org)
- Machine Learning
- Neural Computation
- Neural Networks
- IEEE Transactions on Neural Networks and Learning Systems
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- Journals on statistics, data mining, signal processing, natural language processing, bioinformatics, ...

Resources: Conferences

- International Conference on Machine Learning (ICML)
- European Conference on Machine Learning (ECML)
- Neural Information Processing Systems (NIPS)
- Uncertainty in Artificial Intelligence (UAI)
- Computational Learning Theory (COLT)
- International Conference on Artificial Neural Networks (ICANN)
- International Conference on AI & Statistics (AISTATS)
- International Conference on Pattern Recognition (ICPR)
- ...