February 2016 Statistics: Multiple Regression in R

How to Use This Course Book
This course book accompanies the face-to-face session taught at IT Services. It contains a copy of the slideshow and the worksheets.
Software Used
We might use Excel to capture your data, but no other software is required. Since this is a Concepts course, we will concentrate on exploring ideas and underlying concepts that researchers will find helpful in undertaking data collection and interpretation.
Revision Information
Version 1.0, February 2016, John Fresen: Course book version 1

Copyright
The copyright of this document lies with Oxford University IT Services.

Contents
1 Introduction
1.1 What You Should Already Know
1.2 What You Will Learn
2 Your Resources for These Exercises
2.1 Help and Support Resources
3 What Next?
3.1 Statistics Courses
3.2 IT Services Help Centre

1 Introduction
Welcome to the course Multiple Regression in R. This course introduces the concept of regression using Sir Francis Galton's parent-child height data, and then extends the concept to multiple regression using real examples. The course has an applied focus and makes minimal use of mathematics. No derivations of formulae are presented.
1.1 What You Should Already Know
We assume that you are familiar with entering and editing text, rearranging and formatting text (drag and drop, copy and paste), printing and previewing, and managing files and folders. The computer network at IT Services may differ slightly from the one you are used to in your College or Department; if you are confused by the differences, ask the teacher for help.
1.2 What You Will Learn
In this course we will cover the following topics:
What is regression?
Simple linear regression
Influential observations
Multiple regression
Model selection
Post-selection inference
Cross-validation
Where to get help
From problem to data to conclusions
Topics covered in related Statistics courses, should you be interested, are given in Section 3.1.

2 Your Resources for These Exercises
The exercises in this handbook will introduce you to some of the tasks you will need to carry out when working with WebLearn. Some sample files and documents are provided for you; if you are on a course held at IT Services, they will be on your network drive H:\ (find it under My Computer).
During a taught course at IT Services, there may not be time to complete all the exercises. You will need to be selective, and choose your own priorities among the variety of activities offered here. However, those exercises marked with a star (*) should not be skipped. Please complete the remaining exercises later in your own time, or book a Computer8 session at IT Services for classroom assistance (see Section 3.2).
2.1 Help and Support Resources
You can find support information for the exercises on this course and your future use of WebLearn as follows:
WebLearn Guidance: https://weblearn.ox.ac.uk/info (this should be your first port of call)
If at any time you are not clear about any aspect of this course, please make sure you ask John for help. If you are away from the class, you can get help and advice by emailing the central address weblearn@it.ox.ac.uk.
The website for this course, including reading material and other material, can be found at https://weblearn.ox.ac.uk/x/mvkigl
You are welcome to contact John about statistical issues and questions at john.fresen@gmail.com

3 What Next?
3.1 Statistics Courses
Now that you have a grasp of some basic concepts in Statistics, you may want to develop your skills further. IT Services offers further Statistics courses; details are available at http://courses.it.ox.ac.uk. In particular, you might like to attend the course Statistics: Introduction: this is a four-session module which covers the basics of statistics and aims to provide a platform for learning more advanced tools and techniques.
Courses on particular discipline areas or data analysis packages include:
Statistics: Designing clinical research and biostatistics
SPSS: An introduction
SPSS: An introduction to using syntax
STATA: An introduction to data access and management
STATA: Data manipulation and analysis
STATA: Statistical, survey and graphical analyses
3.2 IT Services Help Centre
The IT Services Help Centre at 13 Banbury Road is open by appointment during working hours, and on a drop-in basis from 6:00 pm to 8:30 pm, Monday to Friday. The Help Centre is also a good place to get advice about any aspect of using computer software or hardware. You can contact the Help Centre on (2)73200 or by email at help@it.ox.ac.uk

Your safety is important
Where is the fire exit?
Beware of hazards: tripping over bags and coats
Please report any equipment faults to us
Let us know if you have any other concerns

Your comfort is important
The toilets are along the corridor outside the lecture rooms
The rest area is where you registered; it has vending machines and a water cooler
The seats at the computers are adjustable
You can adjust the monitors for height, tilt and brightness

Session 1: The concept of regression, from Galton
Thanks to: Dave Baker, IT Services; Jill Fresen, IT Services; Jim Hanley, McGill University; Ian Sinclair, REES Group Oxford

Sir Francis Galton (16 February 1822 - 17 January 1911)
http://en.wikipedia.org/wiki/Francis_Galton
Sir Francis Galton was an incredible polymath, and a cousin of Charles Darwin.
General question (genetics): What do we inherit from our ancestors?
Particular question: Do tall parents have tall children, and short parents short children? That is, does the height of children depend on the height of parents?
Data: the famous 1885 study of 205 sets of parents and 928 offspring. mph = average height of the parents; ch = child height.
Galton's peas experiment: 700 pea pods of selected sizes; average diameter of parent peas; average diameter of child peas.
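If you want to explore Galton's height data yourself, a version of the 928-offspring table is available in the HistData package on CRAN (using this package, rather than the course's own files, is an assumption of convenience):

install.packages("HistData")   # once
library(HistData)
data(Galton)                   # 928 rows: parent (midparent height), child
head(Galton)
plot(jitter(child) ~ jitter(parent), data = Galton)  # jitter reveals overlapping points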

Francis Galton: Do tall parents have tall children, and short parents short children? Does the height of the child depend on the height of the parents?
[Figure: frequency scatterplot of the Galton data, child height against midparent height (62 to 74 inches), showing the count of children in each cell.]

Galton data: boxplots of the conditional distributions of child height given parent height; histograms of the marginal distributions of child height and parent height.
[Figure: four panels of the child-parent data, for parent heights of roughly 64 to 72.5 inches: a plot of the data; a sunflower plot; the data with child height jittered; and the distributions of child height given parent height.]

Regression is a plot/trace of the means of the conditional distributions.
[Figure: three panels of the regression of Child on Midparent: the trace of the actual conditional means; the trace of the linear regression means, which assumes the means lie on a straight line; and the two superimposed.]
The trace of actual means has no assumptions in it, but the end distributions have a lot of sampling variation because of the small number of observations in those distributions. Linear regression stabilises that.
[Figure: the linear regression model, and the linear regression model fitted to the data.]
The linear regression model assumes:
1. Conditional distributions are normal
2. Conditional means lie on a straight line
3. Conditional distributions all have the same spread
In words: the distribution of child height, conditional on a given midparent height, is normal, with means lying on the straight line, and constant spread.
In mathematics: ch | mph ~ Normal(β0 + β1*mph, σ²)
This model can be extended in many ways.

The Linear Model can be extended in many ways. Here are three; there are more:
1. Model the mean by a more general function, such as a polynomial, a trigonometric function, a Fourier series, radial basis functions, or some nonparametric function.
2. Model the variance as a function (for example, of the predictors) rather than as a constant.
3. Generalize from the Normal distribution to the Exponential Family, which includes the normal, exponential, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, Wishart, inverse Wishart and many other distributions.
But in all cases we are modelling the mean and other parameters of conditional distributions. These are called Generalized Linear Models.
In R: lm() fits a linear model, glm() fits a generalized linear model, and gam() fits a generalized additive model.
Does the average diameter of child peas depend on the average diameter of parent peas? What are the sketches telling us? Would a linear regression model be suitable? An important point about this example is that in regression it is the slope of the regression line that is important, not the intercept.
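As a minimal sketch of those three calls, assuming a hypothetical data frame galton with columns child and mph (the gam() fit needs the mgcv package):

fit.lm  <- lm(child ~ mph, data = galton)                    # linear model: straight-line mean
fit.glm <- glm(child ~ mph, family = gaussian, data = galton) # generalized linear model, normal family made explicit
library(mgcv)
fit.gam <- gam(child ~ s(mph), data = galton)                # generalized additive model: smooth mean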

Go to Lecture 2 A: Example 1, UWC Analysis.
Detecting Influential Observations using R
Influential observations may suggest that your model is incorrect or that a data point has been mis-recorded.

Detecting Influential Observations in R
Case 1: Outlying in y-space, not x-space. Use studentized residuals to detect these.
Case 2: Outlying in x-space, not y-space. A high-leverage point. Use hatvalues to detect this.
Case 3: Outlying in both x-space and y-space. A high-leverage point.
The tools: studentized residuals, hatvalues, DFFITS, Cook's Distance, DFBETAS.
Outliers in Y-space: To detect outliers in the Y-space we compute the studentized residuals, defined as t_i = e_i / (s_(i) * sqrt(1 - h_ii)): the i-th residual divided by an estimate of its standard deviation. This is just the standardizing transformation on the residuals. Thus, we expect about 99% of studentized residuals to lie between -3 and 3. Values outside this range suggest possible outliers. In R these are plotted on a graph using the statement plot(rstudent(fit), type="h").

Outliers in X-space: To detect outliers in the X-space we compute the hatvalues, defined as the diagonal elements of the hat matrix: hatvalues = diag(X (X'X)^(-1) X'). The i-th hatvalue measures the distance of case i's x-values from the centroid of the x-values. In R these are plotted on a graph using the statement plot(hatvalues(fit), type="h"). Some of these will be small, some intermediate, and some large. They are assessed in a relative sense; there is, in general, no absolute cut-off point. We want to avoid the situation in which one observation dominates the others.
DFFITS (change (DF) in FITted values): DFFITS measures the effect or influence of removing the i-th observation on the predicted value of the i-th observation, divided by a standardizing quantity: DFFITS_i = (Yhat_i - Yhat_i(i)) / constant. This is a local measure of influence: it is only concerned with what happens at case i if case i is removed from the data. In R these are plotted on a graph using the statement plot(dffits(fit), type="h"). Some of these will be small, some intermediate, and some large. They are assessed in a relative sense; there is, in general, no absolute cut-off point. We want to avoid the situation in which one observation dominates the others.

Cook's Distance measures the effect or influence of removing the i-th observation on all predicted values, divided by a standardizing quantity to normalize it: D_i = sum_j (Yhat_j - Yhat_j(i))^2 / constant. This is a global measure of influence: it considers the effect of removing case i on all predicted values. In R these are plotted on a graph using the statement plot(cooks.distance(fit), type="h"). Some of these will be small, some intermediate, and some large. They are assessed in a relative sense; there is, in general, no absolute cut-off point.
DFBETAS (change (DF) in the BETAS): DFBETAS measures the effect or influence of removing the i-th observation on the estimated regression coefficients, divided by a standardizing quantity: DFBETAS_k(i) = (betahat_k - betahat_k(i)) / constant. In R these are plotted using the statement plot(dfbetas(fit)[,k], type="h"). dfbetas(fit) is an n x p matrix; the entry in row i and column k represents the effect of removing the i-th observation on the k-th regression coefficient. Some of these will be small, some intermediate, and some large. They are assessed in a relative sense; there is, in general, no absolute cut-off point.
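A minimal sketch pulling these five diagnostics together, assuming a model already fitted with lm() and stored in fit (the Step 9 code in the worked examples later in the book does the same for the executive salary model):

par(mfrow=c(2,3))                    # 2x3 grid of plots
plot(rstudent(fit), type="h")        # outliers in y-space
plot(hatvalues(fit), type="h")       # outliers in x-space (leverage)
plot(dffits(fit), type="h")          # local influence on fitted values
plot(cooks.distance(fit), type="h")  # global influence on fitted values
plot(dfbetas(fit)[,2], type="h")     # influence on the second coefficient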

Go to Lecture 2 B: UWC Analysis, outliers and influential observations.

Model Selection
The Executive Salary Data has the variables: lsalary exper educat bonus numemp assets board age profits internat sales
Question: How do we select which variables to include in a model for estimating the mean log(salary)? Model selection is still a vibrant research topic in statistics, with many contentious issues and controversy. A guiding principle (Occam's razor): among competing hypotheses, the one with the fewest assumptions should be selected. Choose the simplest model that gives an adequate description of the data.

Model selection criteria include the Akaike information criterion (AIC) and the Bayesian information criterion (BIC):
AIC = -2*log(likelihood(model)) + 2*(no. of predictors)
BIC = -2*log(likelihood(model)) + (no. of predictors)*log(no. of obs)
The first term, -2*log(likelihood(model)), gets smaller with more predictors, but the model is penalized by the second term, which grows with the number of predictors (and, for BIC, with the sample size). We choose the model with the smallest AIC (or BIC); a short R sketch follows the list below.
Recent excellent articles are:
Gerda Claeskens (2016), Statistical model choice.
Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang and Linda Zhao (2013), Valid post-selection inference, Ann. Statist. 41(2), 802-837.
Problem: If we have k predictors, we have 2^k possible models (without considering interactions and transformations such as log, etc.). For the Exec Salary Data we have 10 predictors, so there are 2^10 = 1024 models to consider without interactions or transformations. We therefore consider stepwise selection:
Forward selection involves starting with no variables in the model, testing the addition of each variable using a chosen model comparison criterion, adding the variable (if any) that improves the model the most, and repeating this process until no addition improves the model.
Backward elimination involves starting with all candidate variables, testing the deletion of each variable using a chosen model comparison criterion, deleting the variable (if any) whose removal improves the model the most, and repeating this process until no further improvement is possible.
Bidirectional elimination is a combination of the above, testing at each step for variables to be included or excluded.
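A minimal sketch of comparing two candidate models by AIC and BIC in R, assuming the executive salary data frame esd from the practicals (AIC() and BIC() in the stats package work on any fitted lm object):

fit.small <- lm(lsalary ~ exper + educat, data = esd)                   # two predictors
fit.large <- lm(lsalary ~ exper + educat + numemp + assets, data = esd) # four predictors
AIC(fit.small, fit.large)   # smaller AIC is preferred
BIC(fit.small, fit.large)   # BIC penalizes extra predictors more heavily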

Inference after Model Selection
The following articles provide a great discussion of post-selection inference and make suggestions for how to proceed:
Ernst Wit, Edwin van den Heuvel and Jan-Willem Romeijn (2012), 'All models are wrong...': an introduction to model uncertainty, Statistica Neerlandica, doi:10.1111/j.1467-9574.2012.00530.x
Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang and Linda Zhao (2013), Valid post-selection inference, Ann. Statist. 41(2), 802-837.
Two major problems arise in the process of model selection:
First, the distributions of the estimated regression parameters are no longer valid. (This means that the tests and confidence intervals normally calculated are no longer valid.)
Second, we should see how well the model works on unseen data. This might conceptually be achieved by splitting the data into two subsets, a training set and a validation set. This is called cross-validation. Cross-validation is important in guarding against testing hypotheses suggested by the data (called "Type III errors").

Leave-p-out cross-validation
Leave-p-out cross-validation (LpO CV) uses p observations as the validation set and the remaining observations as the training set. This is repeated over all ways to cut the original sample into a validation set of p observations and a training set. It is computationally very expensive.
Leave-one-out cross-validation
Leave-one-out cross-validation (LOOCV) is the particular case of leave-p-out cross-validation with p = 1. LOOCV does not have the computational problem of general LpO cross-validation.
k-fold cross-validation
In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k - 1 subsamples are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsamples used exactly once as the validation data. The k results from the folds can then be averaged (or otherwise combined) to produce a single estimate. 2-fold cross-validation is the simplest special case; a short k-fold sketch in R follows.
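A minimal base-R sketch of k-fold cross-validation for the executive salary model of Lecture 3 (it assumes the data frame esd from the practicals is loaded; the cvTools package used later in the book packages the same idea):

set.seed(1)                                        # reproducible folds
k <- 5
folds <- sample(rep(1:k, length.out = nrow(esd)))  # random fold labels
cv.err <- numeric(k)
for (i in 1:k) {
  train <- esd[folds != i, ]                       # training set
  test  <- esd[folds == i, ]                       # validation set
  fit   <- lm(lsalary ~ exper + educat + numemp + assets, data = train)
  pred  <- predict(fit, newdata = test)
  cv.err[i] <- mean((test$lsalary - pred)^2)       # mean squared error on the fold
}
mean(cv.err)                                       # cross-validated MSE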

Go to Lecture 5, Example 4: Executive salary data.

Lecture 3, Example 2: Multiple Regression. Executive Salary Data, four variables.
Problem and Data
This data set was collected by a large Human Resources placement company specializing in the placement of Chief Executive Officers (CEOs). The objective of the research is to develop a model that will predict the average salary of CEOs based on a number of factors or variables associated with the executive and the company. Because of its skewed nature, salary was converted to log-salary and recorded as lsalary to bring it closer to normality. The purpose of the model is to help guide executives as to the distribution of salaries (mean and standard deviation) earned by executives with similar qualifications and backgrounds. The data is given in execsaldata.xlsx, execsaldata.txt and execsaldata.csv. For this exercise we will only consider the following predictors: experience (exper); number of employees (numemp); number of years of formal education (educat); company assets in millions of dollars (assets).
Model: lsalary = β0 + β1*exper + β2*educat + β3*numemp + β4*assets + noise, where the noise follows a normal distribution with a mean of zero and constant variance.
The data were obtained by taking a random sample of 100 companies listed on the New York Stock Exchange during 2008 and are given in the comma-delimited file execsaldata.csv; an Excel version is also given.
Assignment: Perform an analysis that will lead to a regression model for predicting the average salary of executives based on their experience and background in terms of the factors considered above. You might consider, inter alia, the following points in your analysis, but do not be restricted to this list.
1. Read the data into R and attach it.
2. Summarize the univariate marginal distributions of experience, number of employees, number of years of formal education and company assets.
3. Provide the scatterplot matrix of these variables using the pairs statement, compute the correlation coefficients, and comment on the plots and the correlation matrix.
4. Fit the regression model to the data. Construct and interpret the SUMMARY and ANOVA tables.
5. How accurate is the prediction equation? Assess graphically the accuracy of the model by plotting the observed salary against the fitted values, say salhat.
6. Perform a graphical analysis of the residuals to assess whether the assumptions on the error terms are reasonable.
7. Assess the possibility of outliers and influential observations.
8. Can you make recommendations about estimating the average salary of executives from the predictor variables considered here?
9. Can you criticize the data or the model?

Recommended Steps (you may copy and paste into R)
Variables: lsalary exper educat bonus numemp assets board age profits internat sales

Step 1 Read and attach the data. My data is stored in the directory Data of my memory stick; one can specify any directory.
esd = read.csv("e:/data/execsaldata.csv", header=TRUE, sep=",")
attach(esd)
head(esd)
names(esd)

Step 2 Plot the marginal distributions of the response variable and the four predictor variables in a 2x3 matrix of plots.
par(mfrow=c(2,3))
hist(lsalary,prob=TRUE,col="gray");lines(density(lsalary));rug(lsalary)
hist(exper,prob=TRUE,col="gray");lines(density(exper));rug(exper)
hist(educat,prob=TRUE,col="gray");lines(density(educat));rug(educat)
hist(numemp,prob=TRUE,col="gray");lines(density(numemp));rug(numemp)
hist(assets,prob=TRUE,col="gray");lines(density(assets));rug(assets)
# notice that these are on vastly different scales

Step 3 Pairs plot (matrix of scatterplots).
newdata <- cbind(lsalary,exper,educat,numemp,assets)
pairs(newdata)

Step 4 Steps 2 and 3 can be combined (see help for pairs).
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nb <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nb], 0, breaks[-1], y, col = "cyan", ...)
}
pairs(newdata, panel = panel.smooth, diag.panel = panel.hist, cex.labels = 1.0, font.labels = 2)

Step 5 Compute the correlation coefficients.
cor(newdata)
# or, to have fewer decimal places
round(cor(newdata),3)

Step 6 Fit the linear model to the data and get the summary and anova.
fit1 = lm(lsalary~exper+educat+numemp+assets)
summary(fit1)
anova(fit1)

Step 7 Compute predicted values and residuals.
salhat = fitted.values(fit1)
res = residuals(fit1)

Step 8
# check the fit and assumptions on noise terms i.e.
# are they normal and independent of predictor and model?
par(mfrow=c(2,4))
plot(salhat,lsalary,xlim=c(10.6,12.1),ylim=c(10.6,12.1), main="observed vs fitted values")
abline(0,1) #superimposes an ideal straight line
plot(res) # sequential plot of residuals
qqnorm(res);qqline(res) #normal probability plot of residuals
plot(exper,res) #residuals vs exper
plot(educat,res) #residuals vs educat
plot(numemp,res) #residuals vs numemp
plot(assets,res) #residuals vs assets
plot(salhat,res) #residuals vs fitted values

Step 9
# Detecting possible outlying and influential observations
# Are there outliers in the y-space or x-space?
# Are there influential observations as measured by
# DFFITS, Cook's Distance or DFBETAS?
par(mfrow=c(3,3))
plot(rstudent(fit1),type="h")
plot(hatvalues(fit1),type="h")
plot(dffits(fit1),type="h")
plot(cooks.distance(fit1),type="h")
plot(dfbetas(fit1)[,1],type="h")
plot(dfbetas(fit1)[,2],type="h")
plot(dfbetas(fit1)[,3],type="h")
plot(dfbetas(fit1)[,4],type="h")
plot(dfbetas(fit1)[,5],type="h")
# We might wish to plot hatvalues against cooks.distance and even
# consider other combinations of plots.
#---------------
# An interesting 2x2 matrix of plots is provided by plot(fit1)
par(mfrow=c(2,2))
plot(fit1)

Step 10 What are our conclusions? First interpret the diagnostic plots for the assumptions on the residuals, and then consider the possibility of influential observations and outliers. Do the assumptions on the noise terms seem reasonable? If the diagnostics are satisfactory, we then look at the summary and ANOVA to assess and interpret the model: what is the mean function, and what is the spread about the mean function? Is the model a reasonable approximation to the data? Can we criticize the data or the model and make suggestions for further analysis or research?

Example 1: UWC Analysis
Mathematical model: result = β0 + β1*rating + noise, where we assume the noise follows a normal distribution with a mean of zero and variance σ². We could write this as result | rating ~ Normal(β0 + β1*rating, σ²): the conditional distribution of result given rating is normal, with a mean of β0 + β1*rating and a variance of σ².
Suggested steps:
Step 1
# Read data into R, attach data, print first 6 lines
uwc = read.table("e:/data/uwcdata.csv", header=TRUE, sep=",")
attach(uwc)
head(uwc)
Step 2
# Plot marginal distributions
par(mfrow=c(1,2)) #1 row by 2 cols graphics window
hist(rating,prob=TRUE,col="gray");lines(density(rating));rug(rating)
hist(result,prob=TRUE,col="gray");lines(density(result));rug(result)
Step 3
# fit the linear model (linear regression model)
fit1 <- lm(result ~ rating)
# fit1 is an object generated by the routine lm containing a lot
# of information about the fitted model.
# The rest of the steps are simply accessing information in fit1
Step 4
# obtain summary and anova of fit
# compute fitted values and residuals
summary(fit1)
anova(fit1)
yhat <- fitted.values(fit1) # fitted values
res <- residuals(fit1) # residuals
Step 5
# various common plots put into a 2x3 matrix of scatter-plots to
# check the fit and assumptions on noise terms i.e.
# are they normal and independent of predictor and model?
#
# plot 1: scatterplot of data and superimposed fitted model -
#   only do this plot when there is only a single predictor
# plot 2: scatterplot of observed values vs fitted values to see
#   how close the fitted values are to the observed data -
#   do this plot no matter how many predictors
# plot 3: plot of residuals (random or pattern?)
# plot 4: Q-Q plot of residuals (are they approx normal?)
# plot 5: residuals vs predictor (random or pattern?)
#   if there are many predictors we plot residuals against
#   each predictor in turn
# plot 6: residuals vs fitted (random or pattern?)

par(mfrow=c(2,3))
plot(rating,result,ylim=c(0,100), pch=19,cex=1.5, main="Result vs Rating UWC data\n showing pass mark and fitted model")
abline(fit1) #superimposes fitted straight line
abline(h=48,lty=2) # superimposes dashed horizontal line at 48
plot(yhat,result,xlim=c(0,100),ylim=c(0,100), main="observed vs fitted values")
abline(0,1) #superimposes an ideal straight line
plot(res)
qqnorm(res);qqline(res) #normal probability plot of residuals
plot(rating,res) #residuals vs predictor
plot(yhat,res) #residuals vs fitted values
Step 6
# Detecting possible outlying and influential observations
# Are there outliers in the y-space or x-space?
# Are there influential observations as measured by
# DFFITS, Cook's Distance or DFBETAS?
par(mfrow=c(2,3))
plot(rstudent(fit1),type="h")
plot(hatvalues(fit1),type="h")
plot(dffits(fit1),type="h")
plot(cooks.distance(fit1),type="h")
plot(dfbetas(fit1)[,1],type="h")
plot(dfbetas(fit1)[,2],type="h")
Step 7 What are our conclusions? First interpret the diagnostic plots for the assumptions on the residuals, and then consider the possibility of influential observations and outliers. If the diagnostics are satisfactory, we then look at the summary and ANOVA to assess and interpret the model: what is the mean function, and what is the spread about the mean function? Can we criticize the data or the model and make suggestions for further analysis or research?


Lecture 4, Example 5: Model Selection. Executive Salary Data, all variables. Stepwise Regression in R.
Recommended Steps (you may copy and paste into R)
Variables: lsalary exper educat bonus numemp assets board age profits internat sales

Step 1 Read and attach the data. My data is stored in the directory Data of my memory stick; one can specify any directory.
esd = read.csv("e:/data/execsaldata.csv", header=TRUE, sep=",")
attach(esd)
head(esd)
names(esd)

Step 2 Invoke the MASS library that contains the stepAIC function.
library(MASS)

Step 3 Collect the predictors for a pairs plot (matrix of scatterplots).
newdata <- cbind(exper,educat,numemp,assets,age,profits,sales)

Step 4 A combined alternative to Step 3 (see help for pairs).
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nb <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nb], 0, breaks[-1], y, col = "cyan", ...)
}
pairs(newdata, panel = panel.smooth, diag.panel = panel.hist, cex.labels = 1.0, font.labels = 2)

Step 5 Compute the correlation coefficients.
round(cor(newdata),3)

Step 6 Perform stepwise regression.
fit1 <- lm(lsalary ~ ., data=esd)
esd.step <- stepAIC(fit1, direction = "backward")

Step 7 Fit the full linear model to the data and get the summary and anova.
fit2 <- lm(lsalary ~ exper+educat+bonus+numemp+assets+board+age+profits+internat+sales)
summary(fit2)
anova(fit2)

Step 8 Fit the reduced model: take out the non-significant terms.
fit3 <- lm(lsalary ~ exper+educat+bonus+numemp+assets)
summary(fit3)
anova(fit3)

Step 9 Compute predicted values and residuals, then check the fit and the assumptions on the noise terms, i.e. are they normal and independent of predictor and model?
salhat = fitted.values(fit3)
res = residuals(fit3)
par(mfrow=c(3,3))
plot(salhat,lsalary,xlim=c(10.6,12.1),ylim=c(10.6,12.1), main="observed vs fitted values")
abline(0,1) #superimposes an ideal straight line
plot(res) # sequential plot of residuals
qqnorm(res);qqline(res) #normal probability plot of residuals
plot(exper,res) #residuals vs exper
plot(educat,res) #residuals vs educat
plot(bonus,res) #residuals vs bonus
plot(numemp,res) #residuals vs numemp
plot(assets,res) #residuals vs assets
plot(salhat,res) #residuals vs fitted values

Step 10
# Detecting possible outlying and influential observations
# Are there outliers in the y-space or x-space?
# Are there influential observations as measured by
# DFFITS, Cook's Distance or DFBETAS?
par(mfrow=c(3,3))
plot(rstudent(fit3),type="h")
plot(hatvalues(fit3),type="h")
plot(dffits(fit3),type="h")
plot(cooks.distance(fit3),type="h")
plot(dfbetas(fit3)[,1],type="h")
plot(dfbetas(fit3)[,2],type="h")
plot(dfbetas(fit3)[,3],type="h")
plot(dfbetas(fit3)[,4],type="h")
plot(dfbetas(fit3)[,5],type="h")
plot(dfbetas(fit3)[,6],type="h")

Step 11 What are our conclusions? First interpret the diagnostic plots for the assumptions on the residuals, and then consider the possibility of influential observations and outliers. Do the assumptions on the noise terms seem reasonable? If the diagnostics are satisfactory, we then look at the summary and ANOVA to assess and interpret the model: what is the mean function, and what is the spread about the mean function? Is the model a reasonable approximation to the data? Can we criticize the data or the model and make suggestions for further analysis or research?

Lecture 5: Inference after model selection
The following articles provide a great discussion of post-selection inference and make suggestions for how to proceed:
Ernst Wit, Edwin van den Heuvel and Jan-Willem Romeijn (2012), 'All models are wrong...': an introduction to model uncertainty, Statistica Neerlandica, doi:10.1111/j.1467-9574.2012.00530.x
Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang and Linda Zhao (2013), Valid post-selection inference, Ann. Statist. 41(2), 802-837.
Two major problems arise in the process of model selection:
First, the distributions of the estimated regression parameters are no longer valid. (This means that the tests and confidence intervals normally calculated are no longer valid.)
Second, we should see how well the model works on unseen data. This might conceptually be achieved by splitting the data into two subsets, a training set and a validation set. This is called cross-validation. Cross-validation is important in guarding against testing hypotheses suggested by the data (called "Type III errors").
Exhaustive cross-validation
Exhaustive cross-validation methods learn and test on all possible ways to divide the original sample into a training set and a validation set.
Leave-p-out cross-validation
Leave-p-out cross-validation (LpO CV) uses p observations as the validation set and the remaining observations as the training set. This is repeated over all ways to cut the original sample into a validation set of p observations and a training set. LpO cross-validation requires learning and validating C(n,p) times (where n is the number of observations in the original sample), so as soon as n is reasonably big it becomes impossible to calculate.
Leave-one-out cross-validation
Leave-one-out cross-validation (LOOCV) is the particular case of leave-p-out cross-validation with p = 1. LOOCV does not have the calculation problem of general LpO cross-validation because C(n,1) = n.
k-fold cross-validation
In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k - 1 subsamples are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsamples used exactly once as the validation data. The k results from the folds can then be averaged (or otherwise combined) to produce a single estimate.

2-fold cross-validation
This is the simplest variation of k-fold cross-validation, also called the holdout method. For each fold, we randomly assign data points to two sets d0 and d1, so that both sets are of equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and test on d1, followed by training on d1 and testing on d0.
Cross-validation only yields meaningful results if the validation set and the training set are drawn from the same population, and only if selection biases are controlled.
Cross-validation in R. Example 5: Executive salary data
esd = read.csv("e:/data/execsaldata.csv", header=TRUE, sep=",")
attach(esd)
head(esd)
fit3 <- lm(lsalary ~ exper+educat+bonus+numemp+assets)
library(cvTools)
cvFit(fit3, data=esd, y=esd$lsalary, cost=rmspe, K=5, R=1, foldType="consecutive")

Practical 1: Ht Wt UCLA Data
The dataset UCLA ht wt sample.csv contains 250 records of human heights and weights. These were obtained by taking a random sample of 250 from an original sample of 25,000 children, collected in 1993 by a Growth Survey of children from birth to 18 years of age recruited from Maternal and Child Health Centres (MCHC) and schools, and used to develop Hong Kong's current growth charts for weight, height, weight-for-age, weight-for-height and body mass index (BMI).
http://wiki.stat.ucla.edu/socr/index.php/socr_data_dinov_020108_heightsweights
To reduce the size of the data for this exercise, a random sample of 250 rows was generated and stored in UCLA ht wt sample.csv. Large data sets require a different treatment that we won't cover here. The columns contain the new index (new.index), the original index (index), the height (ht) and the weight (wt). Use the UWC analysis to guide your R code; a sketch of Steps 3 and 4 follows the list of steps.
Step 1
# Read data into R, attach data, print first 6 lines
d <- read.csv("e:/data/ucla ht wt sample.csv", header=TRUE)
attach(d)
head(d)
Step 2
# Plot marginal distributions of height and weight
# Provide the boxplot of the conditional distributions of weight
# given height
boxplot(wt~ht,range=0,varwidth=TRUE,col="gray", main="Boxplots of the distributions\n of weight given height", xlab="height (in)", ylab="weight (lb)")
Step 3
# fit the linear model of wt on ht
Step 4
# obtain summary and anova of fit
# compute fitted values and residuals
Step 5
# various common plots put into a 2x3 matrix of scatter-plots to
# check the fit and assumptions on noise terms i.e.
# are they normal and independent of predictor and model?
#
# plot 1: scatterplot of data and superimposed fitted model -
#   only do this plot when there is only a single predictor
# plot 2: scatterplot of observed values vs fitted values to see
#   how close the fitted values are to the observed data -
#   do this plot no matter how many predictors
# plot 3: plot of residuals (random or pattern?)
# plot 4: Q-Q plot of residuals (are they approx normal?)
# plot 5: residuals vs predictor (random or pattern?)
#   if there are many predictors we plot residuals against
#   each predictor in turn
# plot 6: residuals vs fitted (random or pattern?)
Step 6
# Detecting possible outlying and influential observations
# Are there outliers in the y-space or x-space?
# Are there influential observations as measured by
# DFFITS, Cook's Distance or DFBETAS?
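One possible completion of Steps 3 and 4, modelled on the UWC analysis (a sketch, not the only acceptable answer; it assumes the data frame d has been attached as in Step 1):

fit1 <- lm(wt ~ ht)           # Step 3: linear model of weight on height
summary(fit1)                 # Step 4: summary table
anova(fit1)                   # Step 4: ANOVA table
yhat <- fitted.values(fit1)   # fitted values
res  <- residuals(fit1)       # residuals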

Step 7 What are our conclusions? First interpret the diagnostic plots for the assumptions on the residuals, and then consider the possibility of influential observations and outliers. If the diagnostics are satisfactory, we then look at the summary and ANOVA to assess and interpret the model: what is the mean function, and what is the spread about the mean function? Can we criticize the data or the model and make suggestions for further analysis or research?

Practical 2: Salamander
Problem and Data
The data set for this assignment was obtained from Bill Peterman during the fall of 2005, then a postgraduate student in Ecology and Conservation at the University of Missouri. See his webpages at http://senr.osu.edu/our-people/william-peterman and http://petermanresearch.weebly.com/dr-bill-peterman.html
The data given in the appendix were collected on 45 salamanders to ascertain the time to anesthetization (seconds) when submerged in different concentrations of Tricaine Methanesulfonate, or MS-222 for short. It is a fine white powder that easily dissolves in water. The salamanders were placed in a container with the solution and were completely submerged. The temperature of the water-anesthetic solution (MS-222 was the anesthetic) was measured in degrees Celsius. The covariates considered were snout-vent length (sl) measured in millimeters, total length (tl) measured in millimeters, mass measured in grams, the pH of the solution, and the temperature.
The study was motivated because Bill needed to insert electronic tracking devices into the salamanders so that they could be easily tracked. However, he could find no guidelines about the concentration required for anesthetization of salamanders. The objective was to develop a model to predict the time required for anesthetization in terms of the concentration and the size of the salamander as measured by the mass. We will ignore the pH and the temperature.
Model Building Considerations
It seems appropriate to exclude temperature and pH from the analysis because these were strongly correlated with the concentration. Further, it seemed sensible to use mass rather than either snout-vent length (sl) or total length (tl) in the analysis. (Because of their high correlation only one of these measurements would be included, and mass is by far the more reliable and intuitively appealing measurement.) After considering the scatterplots of time to anesthetization (anes [measured in minutes]) against concentration (conc [mg/l]), it seemed that the analysis should be based on log transformations of anesthetization time, concentration and mass. Thus, the complete model to be contemplated is
ln(anes) = β0 + β1*ln(conc) + β2*ln(mass) + noise
Analysis: Perform an analysis that will lead to a regression model for predicting the average time to anesthetization in terms of concentration and mass.
Suggested steps for the analysis of the Salamander Data (a short R sketch of steps 4 and 7 follows this list):
1. Read the data into R.
2. Analyse the marginal distributions of the original/untransformed data and comment on these (e.g. stem-and-leaf, summary, Q-Q plots, histograms, etc.).
3. Obtain the scatterplot and correlation matrices of the original data and comment on these.
4. Transform the data to logs: log(anes), log(conc) and log(mass).
5. Check that the transformed data are approximately normal. (At this stage we are not so interested in the means and sd's but in the shape of their distributions: are they approximately normal?)
6. Repeat step 3 for the transformed data as a precursor to the model fitting.
7. Fit and assess the contemplated model. Model 1: ln(anes) = β0 + β1*ln(conc) + β2*ln(mass) + noise.
8. Provide the ANOVA and summary table, check that the assumptions on the error terms are reasonable for that model, and perform the usual diagnostic plots, looking for outliers in the Y-space and the X-space, and at the DFFITS, Cook's Distance and DFBETAS.
9. Interpretations.
10. Criticisms and recommendations on the experiment, the data and the model.
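A minimal sketch of steps 4 and 7, assuming the salamander data have been read into a data frame sal with columns anes, conc and mass (the object and column names are assumptions, since the data are supplied in the appendix):

lanes <- log(sal$anes)              # step 4: natural-log transforms
lconc <- log(sal$conc)
lmass <- log(sal$mass)
fit1  <- lm(lanes ~ lconc + lmass)  # step 7: Model 1
summary(fit1)                       # summary table
anova(fit1)                         # ANOVA table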

Picture of the salamander species used in the anesthetization study. Pictures by Bill Peterman.

Practical 4: Birthweight data
The data JRHbirthwt.csv for this exercise come from a recent research project at the John Radcliffe Hospital. The 17 variables are:
Age, PAPPA, hcg, NT, trisomy, Parity (categorical), BMI, Smoking (categorical), Ethnicity (categorical), Conception, Gestation in weeks (won't use this), Gestation in days, Delivery (categorical), Centile, Birthwt, PET2 (categorical), G3M (categorical).
The objective is to develop a model to predict birthweight from the other variables.
I plotted all the marginal distributions, re-coded to eliminate sparse categories, and then converted the categorical variables to factors:
Ethnicity.new <- 1*(Ethnicity==1)+2*(Ethnicity==2)+1*(Ethnicity>2)
Ethnicity.new <- as.factor(Ethnicity.new)
PET.new <- 1*(PET2==1)+2*(PET2>1)
PET.new <- as.factor(PET.new)
G3M.new <- 1*(G3M==1)+2*(G3M>1)
G3M.new <- as.factor(G3M.new)
Smoking <- as.factor(Smoking)
Parity.new <- 0*(Parity==0)+1*(Parity==1)+1*(Parity==2)
Parity.new <- as.factor(Parity.new)
Conception.new <- 1*(Conception==1)+2*(Conception>1)
Conception.new <- as.factor(Conception.new)
Then, as sketched below:
Perform a stepwise regression on these variables.
Fit the best resulting model.
Obtain the summary and anova.
Select your model.
Assess the assumptions and influential observations.
What are your conclusions?
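A minimal sketch of the stepwise step, assuming JRHbirthwt.csv has been read into a data frame bw, attached, and the re-coded factors above created (the file path and the exact set of predictors are assumptions):

bw <- read.csv("e:/data/JRHbirthwt.csv", header=TRUE)
attach(bw)
# ... create the .new factors as above ...
library(MASS)                      # for stepAIC, as in Lecture 4
fit.full <- lm(Birthwt ~ Age + PAPPA + hcg + NT + trisomy + BMI +
                 Smoking + Ethnicity.new + Conception.new +
                 Parity.new + PET.new + G3M.new)
bw.step <- stepAIC(fit.full, direction = "backward")
summary(bw.step)                   # the selected model
anova(bw.step)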