Machine Learning Lecture 1: Introduction


What is Machine Learning?
- Building machines that automatically learn from experience
- A sub-area of artificial intelligence
- A (very) small sampling of applications:
  - Detection of fraudulent credit card transactions
  - Filtering spam email
  - Autonomous vehicles driving on public highways
  - Self-customizing programs, e.g., a web browser that learns what you like and seeks it out
- Applications we can't program by hand, e.g., speech recognition
- Many different answers, depending on the field you're considering and whom you ask: artificial intelligence vs. psychology vs. education vs. neurobiology vs. ...

Does Memorization = Learning?
- Test #1: Thomas learns his mother's face. He memorizes examples of it, but will he recognize her in a new pose?
- Test #2: Nicholas learns about trucks. He memorizes examples of trucks, but will he recognize other trucks he hasn't seen? If so, he can generalize beyond what he's seen!

So learning involves the ability to generalize from labeled examples. In contrast, memorization is trivial, especially for a computer.

When Do We Use Machine Learning?
- Human expertise does not exist (navigating on Mars)
- Humans are unable to explain their expertise (speech recognition; face recognition; driving)
- The solution changes over time (routing on a computer network; driving)
- The solution needs to be adapted to particular cases (biometrics; speech recognition; spam filtering)
- In short, when one needs to generalize from experience in a non-obvious way

When Do We Not Use Machine Learning?
- Calculating payroll
- Sorting a list of words
- Serving web pages
- Word processing
- Monitoring CPU usage
- Querying a database
- In short, when we can definitively specify how all cases should be handled

More Formal Definition of (Supervised) Machine Learning
- Given several labeled examples of a concept
  - E.g., trucks vs. non-trucks (binary labels); height (real-valued labels)
- Examples are described by features
  - E.g., number-of-wheels (int), relative-height (height divided by width), hauls-cargo (yes/no)
- A machine learning algorithm uses these examples to create a hypothesis that will predict the label of new (previously unseen) examples
- Pipeline: labeled training data (labeled examples with features) → learning algorithm → hypothesis → predicted labels for unlabeled data (unlabeled examples)

Hypotheses can take on many forms.

Hypothesis Type: Decision Tree
- Very easy for humans to comprehend
- Compactly represents if-then rules
- Example tree for the truck concept: first test hauls-cargo (no → not a truck), then num-of-wheels (< 4 vs. ≥ 4), then relative-height (< 1 vs. ≥ 1)
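The truck decision tree above can be sketched as a chain of if-then rules. The feature names and thresholds follow the lecture's example; the exact leaf labels are a plausible reading of the slide's (partly garbled) tree diagram, so treat them as illustrative.

```python
def is_truck(hauls_cargo: bool, num_of_wheels: int, relative_height: float) -> bool:
    """Classify an example with the truck decision tree from the lecture.

    relative_height is height divided by width. The leaf assignments below
    (e.g., a truck must be wider than it is tall) are an assumption, not
    taken verbatim from the slides.
    """
    if not hauls_cargo:           # root split: does it haul cargo?
        return False
    if num_of_wheels < 4:         # second split: wheel count
        return False
    return relative_height < 1    # third split: shape of the vehicle

print(is_truck(hauls_cargo=True, num_of_wheels=6, relative_height=0.8))
```

Each internal `if` corresponds to one node of the tree, which is why decision trees translate so directly into human-readable rules.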

Hypothesis Type: Artificial Neural Network
- Designed to simulate brains
- "Neurons" (processing units) communicate via connections, each with a numeric weight
- Learning comes from adjusting the weights

Hypothesis Type: k-Nearest Neighbor
- Compare a new (unlabeled) example x_q with the training examples
- Find the k training examples most similar to x_q
- Predict the label as the majority vote among those k

Other Hypothesis Types
- Support vector machines: a major variation on artificial neural networks
- Bagging and boosting: performance enhancers for learning algorithms
- Bayesian methods: build probabilistic models of the data
- Many more

Variations
- Regression: real-valued labels
- Probability estimation: predict the probability of a label
- Unsupervised learning (clustering, density estimation): no labels; simply analyze the examples
- Semi-supervised learning: some data labeled, others not (can we buy labels?)
- Reinforcement learning: used, e.g., for controlling autonomous vehicles
- Missing attributes: must somehow estimate their values or tolerate them
- Sequential data, e.g., genomic sequences and speech: hidden Markov models
- Outlier detection, e.g., intrusion detection
- And more

Issue: Model Complexity
- It is possible to find a hypothesis that perfectly classifies all the training data, but should we necessarily use it?
- (Slide figure: an overly complex boundary fit to "football player?" labels)
- To generalize well, we need to balance accuracy with simplicity
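The k-nearest-neighbor rule described above can be sketched in a few lines: sort the training examples by distance to the query and take a majority vote among the k closest. The tiny two-feature dataset here is invented for illustration.

```python
# Minimal k-nearest-neighbor sketch: Euclidean distance + majority vote.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector."""
    # Sort training examples by Euclidean distance to the query.
    by_distance = sorted(train, key=lambda ex: math.dist(ex[0], query))
    # Majority vote among the labels of the k nearest examples.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "no"), ((0, 1), "no"),
         ((5, 5), "yes"), ((6, 5), "yes"), ((5, 6), "yes")]
print(knn_predict(train, (4, 4), k=3))  # the 3 nearest neighbors are all "yes"
```

Note that k-NN builds no explicit hypothesis at training time; all the work happens at prediction, which is why it is sometimes called a "lazy" learner.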

Issue: What If We Have Little Labeled Training Data?
- E.g., there are billions of web pages out there, but labeling them is tedious
- Conventional ML approach: labeled training data → learner (e.g., a decision tree) → labels for unlabeled data
- Active learning approach: the learner sends label requests to human labelers, who return labels for items from the unlabeled data
- Label requests target the data the ML algorithm is most unsure of

Machine Learning vs. Expert Systems
- Many old real-world applications of AI were expert systems: essentially a set of if-then rules that emulate a human expert
  - E.g., "If medical test A is positive and test B is negative and the patient is chronically thirsty, then diagnosis = diabetes with confidence 0.85"
- The rules were extracted via interviews with human experts
- ES: extracting expertise is tedious; ML: automatic
- ES: the rules might not capture the expert's intuition, which can mask the true reasons for an answer; e.g., in medicine, the reasons given for diagnosis x might not be the objectively correct ones, and the expert might be unconsciously picking up on other information; ML: more objective
- ES: the expertise might not be comprehensive, e.g., a physician might not have seen some types of cases; ML: automatic, objective, and data-driven, though only as good as the available data

Relevant Disciplines
- Artificial intelligence: learning as a search problem; using prior knowledge to guide learning
- Probability theory: computing probabilities of hypotheses
- Computational complexity theory: bounds on the inherent complexity of learning
- Control theory: learning to control processes so as to optimize performance measures
- Philosophy: Occam's razor (everything else being equal, the simplest explanation is best)
- Psychology and neurobiology: practice improves performance; biological justification for artificial neural networks
- Statistics: estimating generalization performance
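The active-learning query step described above ("label requests are on data the algorithm is unsure of") can be sketched as uncertainty sampling: ask for the label of the unlabeled example whose predicted probability is closest to 0.5. The uncertainty measure and the toy one-feature model are assumptions for illustration, not part of the lecture.

```python
# Uncertainty sampling sketch: pick the pool item the model is least sure about.

def most_uncertain(unlabeled, predict_proba):
    """Return the example whose predicted probability is closest to 0.5."""
    return min(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))

# Toy model: probability of the positive class grows with the feature value.
def predict_proba(x):
    return min(max(x / 10.0, 0.0), 1.0)

pool = [0.5, 9.0, 5.2, 2.0]          # unlabeled examples (one feature each)
print(most_uncertain(pool, predict_proba))  # 5.2, whose probability is 0.52
```

In a full loop, the returned example would be sent to a human labeler, added to the training set, and the model retrained before the next query.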

More Detailed Example: Image Retrieval
- Given a database of hundreds of thousands of images, how can users easily find what they want?
- One idea: users query the database by image content, e.g., "give me images with a waterfall"
- One approach: someone annotates each image with text describing its content
  - Tedious; the terminology is ambiguous and may be subjective
- Another approach: query by example
  - Users give examples of the images they want
  - The program determines what is common among them and finds more like them
- (Slide figure: user's query images, the system's response, and the user's yes/no feedback on each result)
- The user's feedback then labels the new images, which serve as additional training examples, yielding a new hypothesis, and more images are retrieved

How Does the System Work?
- For each pixel in the image, extract its color plus the colors of its neighbors
- These colors (and their relative positions in the image) are the features the learner uses (replacing, e.g., number-of-wheels)
- A learning algorithm takes examples of what the user wants, produces a hypothesis of what is common among them, and uses it to label new images

Conclusions
- ML started as a field mainly for research purposes, with a few niche applications; now its applications are very widespread
- ML can automatically find patterns in data that humans cannot
- However, it is still very far from emulating human intelligence: each artificial learner is task-specific
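The per-pixel feature extraction described above can be sketched as follows: for each pixel, collect its value together with the values of its four neighbors. A real system would use RGB colors and richer neighborhoods; this grayscale version with zero-padding at the border is an illustrative assumption.

```python
# Sketch of pixel-neighborhood features: a pixel's value plus its 4 neighbors.

def pixel_features(image, r, c):
    """Feature vector for pixel (r, c): its value, then up/down/left/right."""
    rows, cols = len(image), len(image[0])
    feats = [image[r][c]]
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        rr, cc = r + dr, c + dc
        # Pad with 0 where a neighbor falls outside the image.
        feats.append(image[rr][cc] if 0 <= rr < rows and 0 <= cc < cols else 0)
    return feats

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
print(pixel_features(image, 1, 1))  # [5, 2, 8, 4, 6]
```

Each pixel thus yields one fixed-length feature vector, so the same supervised learners discussed earlier (decision trees, k-NN, neural networks) apply unchanged.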