Unleashing Evaluation: Giving Perspective to Power, Precision and Problems

By Wayne F. Adams, MS Applied Stats, Stat-Ease, Inc., Minneapolis, MN (wayne@statease.com)

*Presentation is posted at www.statease.com/webinar.html. To avoid disrupting the Voice over Internet Protocol (VoIP) system, I will mute all attendees. Please use the Questions feature on GoToWebinar; we will answer as many questions as time allows. Feel free to email questions to webinar@statease.com, which we will answer off-line. -- Wayne

Getting Started: Stat-Ease Resources
New to Design of Experiments? Take advantage of all the free resources available to you! Stat-Ease on the internet:
Beginner resources: http://www.statease.com/beginner.html
Webinars: http://www.statease.com/webinar.html
Articles: http://www.statease.com/articles.html
Tutorials: http://www.statease.com/software/dx9-tut.html
YouTube: Search for the Stat-Ease YouTube Channel

Getting Started: Other Resources
New to Design of Experiments? Take advantage of all the free resources available to you! LinkedIn groups:
The Design of Experiment (DOE) Group: a great place to post general questions about DOEs
ASQ Statistics Division: more general statistics and DOE
The Stat-Ease Professional Network: friends and clients of Stat-Ease

Unleashing Evaluation
What is Design Evaluation?
When Should Design Evaluation Be Used?

What Is Design Evaluation?
A set of tools to determine the capability of a design.
Display the Alias Structure: What effects can be cleanly estimated?
Show How Degrees of Freedom Are Spent: The runs pay for the model and for upgrades to the model.
Present the Correlation Statistics: How imbalanced and non-orthogonal is it?
Expose Matrix Measures: Things only statisticians care about.

Where Is Design Evaluation?
Design-Expert software provides design evaluation throughout the build of the design. When a 2^2 factorial design is built, a warning is displayed regarding not having enough information to test all the effects. When a 2^3 factorial design is built, a warning is displayed regarding not having enough power. When a fractional design is built, the alias structure is displayed. Factorial design builds include a stop where power can be estimated.

Where Is Design Evaluation?
The rest of this discussion concentrates on the Evaluation node, available after the design is built and the factor columns are populated. Evaluation is usually used before data has been gathered, but it can also be used post-analysis to verify the usefulness of the models.

Evaluation Needs a Model
Click the Results tab to unleash the evaluation. The Terms list shows which terms are in the model, in the error pool, or excluded (ignored) from consideration. Order provides a shortcut to select all the terms up to a certain level. Model switches which types of terms and orders will be displayed. Add Term is used to add higher-order terms one at a time rather than trying to find them in the terms list. Response defaults to Design Only, which means the whole design; if a response has data, then that response can be selected.

It's All in the Bookmarks
The Bookmarks tool makes navigating the Evaluation report easy. Click a button to move that section to the top of the screen. The Pop-Out View button creates a clone of the evaluation report with its own Bookmarks tool. The clone stays open even if you leave the Evaluation node.

It's All in the Bookmarks: Aliasing
The Aliasing section shows the relationships between the non-excluded terms on the Model tab.

Alias Matrix ([Est. Terms] = Aliased Terms):
[Intercept] = Intercept
[A] = A
[B] = B
[C] = C
[D] = D
[AB] = AB - AD - BD - C^2
[AC] = AC - AD + CD + B^2
[BC] = BC + BD + CD + B^2 + C^2 + D^2
[A^2] = A^2 + B^2 + C^2 + D^2

When there are too many terms for the design to handle, the estimate of one term's coefficient biases the estimates of the others that appear after the same equals (=) sign in the alias matrix.
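
As a hedged illustration (a minimal sketch using the standard alias-matrix formula, not Design-Expert's internal code), the relationships above can be computed from the model matrix X1 of estimated terms and a matrix X2 of the extra terms that cannot be estimated; the helper name below is hypothetical:

```python
import numpy as np

def alias_matrix(X1, X2):
    # Rows show how each extra term in X2 biases an estimated coefficient:
    # E[b1] = beta1 + (X1'X1)^-1 X1'X2 * beta2
    return np.linalg.solve(X1.T @ X1, X1.T @ X2)
```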

It's All in the Bookmarks: Degrees of Freedom
One degree of freedom (df) comes from each run. The table shows how the df are used to compute coefficients and noise.

Degrees of Freedom for Evaluation:
Model: 8
Residuals: 0
Lack of Fit: 0
Pure Error: 0
Corr Total: 8

The df used to compute the intercept term is not part of the table. The remaining 8 df are being used for the model coefficients, with nothing left over for the residuals. No residuals = no ANOVA tests.
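
To make the accounting concrete, here is a minimal sketch (a hypothetical helper using standard df bookkeeping, not Design-Expert's routine): corrected total = runs - 1; model = model-matrix rank - 1, since the intercept's df is excluded; residual = runs - rank; pure error = replicates beyond the first at each distinct design point; lack of fit = residual - pure error.

```python
import numpy as np

def df_table(X, factor_rows):
    # X: model matrix including the intercept column
    # factor_rows: the factor settings for each run, one tuple per run
    n = len(factor_rows)
    rank = np.linalg.matrix_rank(X)
    pure_error = n - len(set(map(tuple, factor_rows)))  # sum of (replicates - 1)
    residual = n - rank
    return {"Model": rank - 1,
            "Residuals": residual,
            "Lack of Fit": residual - pure_error,
            "Pure Error": pure_error,
            "Corr Total": n - 1}
```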

It's All in the Bookmarks: Terms (Power)
The Terms (Power) section contains correlation statistics. A VIF of 1 is ideal; it indicates no correlation between the terms. Power is the probability of detecting an effect. The size of the effect is measured in standard deviations, also called the signal-to-noise ratio.

Term | StdErr | VIF | Ri-Squared | Power (5% alpha) at 0.5 Std. Dev. | at 1 Std. Dev. | at 2 Std. Dev.
A | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
B | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
C | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
D | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
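
For reference, a VIF is 1/(1 - Ri^2), where Ri^2 comes from regressing term i on the other terms; one standard computation is the diagonal of the inverse correlation matrix of the model columns. A minimal sketch (hypothetical helper; Design-Expert's coding conventions may differ in detail):

```python
import numpy as np

def vifs(X):
    # X: model-matrix columns for the terms (no intercept, no constant columns)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize columns
    corr = (Z.T @ Z) / (len(X) - 1)                   # correlation matrix
    return np.diag(np.linalg.inv(corr))               # VIF_i = 1/(1 - Ri^2)
```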

It's All in the Bookmarks: Leverage
The leverage of a run depends on where it and the other runs are located in the factor space. Runs with high leverage have more influence on the model than other runs.

Run | Leverage | Space Type
1 | 0.6111 | Unknown
2 | 0.6111 | Unknown
3 | 0.6111 | Unknown
4 | 0.6111 | Unknown
5 | 0.1111 | Center
6 | 0.6111 | Unknown
7 | 0.6111 | Unknown
8 | 0.6111 | Unknown
9 | 0.6111 | Unknown
Average = 0.5556
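
Leverages are the diagonal of the hat matrix H = X(X'X)^-1 X'. They sum to the number of model coefficients p, so the average leverage is p/n; the 0.5556 average above is consistent with 5 coefficients over 9 runs. A minimal sketch (hypothetical helper):

```python
import numpy as np

def leverages(X):
    # diagonal of the hat matrix H = X (X'X)^-1 X'
    return np.diag(X @ np.linalg.solve(X.T @ X, X.T))
```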

It's All in the Bookmarks: Matrix Measures
The matrix measures are statistics used to compare designs to a standard or to each other.

Condition Number of Coefficient Matrix = 1.000
Maximum Variance Mean = 0.278
Average Variance Mean = 0.222
Minimum Variance Mean = 0.111
G Efficiency = 200.0 %
Scaled D-optimality Criterion = 1.383
Determinant of (X'X)^-1 = 8.573E-5
Trace of (X'X)^-1 = 0.778
I (Cuboidal) = 0.33333
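
Several of these are simple functions of the information matrix X'X; the sketch below (hypothetical helper; Design-Expert's scaling and coding conventions may differ) shows the general idea:

```python
import numpy as np

def matrix_measures(X):
    XtX = X.T @ X
    XtX_inv = np.linalg.inv(XtX)
    return {
        "determinant of (X'X)^-1": np.linalg.det(XtX_inv),
        "trace of (X'X)^-1": np.trace(XtX_inv),
        "condition number": np.linalg.cond(XtX),  # ratio of extreme eigenvalues
    }
```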

It's All in the Bookmarks: Correlation Plots
The correlation plots are another way to show the relationships between terms in the model. The ideal design has a completely uncorrelated structure. This only happens with factorial designs and interaction models.

It's All in the Bookmarks: Correlation Plots
Response surface designs for higher-order models are impossible to make ideal. A good design uncorrelates as much as it can. Main effects are uncorrelated with the other effects, but quadratic terms are correlated with each other.

It's All in the Bookmarks: Correlation Plots
This design is not ideal for all the terms, but it will work for a subset of the terms. Bonus points if you can tell me what design was used to produce the graphs!

Unleashing Evaluation
What is Design Evaluation?
When Should Design Evaluation Be Used?

When to Use Design Evaluation
To Make Sure the Design Is Able to Meet Goals: How many runs does it take to get to a useful model?
Check the Impact of Design Modifications: What happens when levels change and runs are not completed?
To See How Well an Existing Data Set Will Perform: Can we use all this data that we've had for years?
To Compare Designs: Another thing only statisticians care about.

Define: Able to Meet Goals
1. Estimate the polynomial chosen by the experimenter well.
2. Give sufficient information to allow a test for lack of fit: have more unique design points than coefficients in the model, and provide an estimate of pure error.
3. Remain insensitive to outliers, influential values, and bias from model misspecification.
4. Be robust to errors in the control of the factor levels.
5. Provide a check on variance assumptions, e.g., studentized residuals are NID(0, σ²); that is, normally and independently distributed with a mean of zero and constant variance.
6. Generate useful information throughout the region of interest.
7. Do not contain an excessively large number of trials.

Evaluate: Useful Information (Power and Precision)
Factorial DOE: During screening and characterization (factorials), the emphasis is on identifying factor effects. What are the important design factors? For this purpose, power is an ideal metric to evaluate design suitability.
Response Surface Methods: When the goal is optimization (usually the case for RSM), the emphasis is on the fitted surface. How well does the surface represent true behavior? For this purpose, precision is a good metric to evaluate design suitability.

Evaluate: Useful Information (Power)
Power is the probability of a true effect testing as significant on the ANOVA, given some expected noise. Power is calculated both up front as the design is built and as part of the evaluation report. For response R1:

Signal (delta) = 2.00
Noise (sigma) = 1.00
Signal/Noise (delta/sigma) = 2.00
A: 46.3%, B: 46.3%, C: 46.3%, D: 46.3%

Evaluate: Useful Information (Power)
The evaluation needs to be set up correctly to show the power. Change the Order on the Model tab to evaluate the main effects. Click on the Options button to choose signal-to-noise ratios.

Evaluate: Useful Information (Power)
The Signal is the minimum size of a critical effect. The Noise is the unexplained variation in the system; think of it as the best estimate of what the standard deviation will be on the ANOVA once the correct model is fit. Divide the Signal by the Noise to get the value to enter. Change at least one box to match the signal-to-noise ratio for the experiment.

Evaluate: Useful Information (Power)
Click on the Results tab and the Terms (Power) bookmark to get the power estimates.

Term | StdErr | VIF | Ri-Squared | Power (5% alpha) at 0.5 Std. Dev. | at 1 Std. Dev. | at 2 Std. Dev.
A | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
B | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
C | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%
D | 0.41 | 1.00 | 0.0000 | 7.7% | 15.9% | 46.3%

For a design to be considered capable, the power should be 80% or more.
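
For intuition, the power of a two-sided t-test on a coefficient can be computed from the noncentral t distribution. The sketch below is a generic illustration, not Design-Expert's exact routine; it assumes the tabulated StdErr is in units of sigma, that the tested coefficient equals half the signal (the usual +/-1 factorial coding), and that the caller supplies the residual degrees of freedom:

```python
from scipy import stats

def effect_power(std_err, resid_df, signal_to_noise, alpha=0.05):
    # coefficient = signal/2 under +/-1 coding; its SE = std_err * sigma
    nc = (signal_to_noise / 2.0) / std_err            # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, resid_df)
    # two-sided power: probability |t| exceeds the critical value
    return (1 - stats.nct.cdf(t_crit, resid_df, nc)
            + stats.nct.cdf(-t_crit, resid_df, nc))
```

With StdErr = 0.41 from the table, this yields power figures of the same order as those shown; exact agreement depends on the residual df and conventions Design-Expert uses.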

Evaluate: Useful Information (Precision)
Precision estimates come from the Fraction of Design Space (FDS) graph, found under the Evaluation Graphs tab. Set the Model type to Polynomial (if it isn't already). Change the Order, or select the model from the terms lists, before clicking the Graphs tab.

Evaluate: Useful Information (Precision)
On the FDS Graph tool, change the d box to the +/- amount (a.k.a. margin of error, or interval half-width) that provides acceptable precision. Change the s box to represent the unexplained variation in the system; think of it as the best estimate of what the standard deviation will be on the ANOVA once the correct model is fit.

Evaluate: Useful Information (Precision)
[FDS Graph (Design-Expert software): Std Error Mean vs. Fraction of Design Space. Min Std Error Mean: 0.394; Avg Std Error Mean: 0.520; Max Std Error Mean: 0.781. Spherical radius = 1; Points = 50000; t(0.05/2, 10) = 2.22814; d = 5, s = 4; FDS = 0.82 at Std Error Mean = 0.561.]
For this new design, using d = 5 and s = 4, about 82% of the design space will have a confidence interval no more than +/- 5 units wide. Removing insignificant terms improves the post-analysis precision.
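
The FDS value can be reproduced by sampling the design space and asking what fraction of points have a confidence-interval half-width t × s × StdErrMean(x) within d; note that 0.561 × 4 × 2.22814 ≈ 5 in the annotation above. A minimal sketch (hypothetical helper; the residual df and the point sampling are assumptions):

```python
import numpy as np
from scipy import stats

def fds(X, points, d, s, resid_df, alpha=0.05):
    # X: model matrix of the design; points: model-expanded rows for a
    # large random sample of the design space (e.g., 50,000 locations)
    XtX_inv = np.linalg.inv(X.T @ X)
    se_mult = np.sqrt(np.einsum('ij,jk,ik->i', points, XtX_inv, points))
    t_crit = stats.t.ppf(1 - alpha / 2, resid_df)
    half_width = t_crit * s * se_mult   # CI half-width at each point
    return np.mean(half_width <= d)     # fraction meeting the margin d
```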

Evaluate: Useful Information (Sizing the Design)
For more details on these topics, please see Brooks Henderson's October 2013 webinar, "How Many Runs Do I Need? How to Use Power and Precision to Size Factorial, Response Surface Method and Mixture Designs": http://www.statease.com/training/webinar.html

When to Use Design Evaluation
To Make Sure the Design Is Able to Meet Goals: How many runs does it take to get to a useful model?
Check the Impact of Design Modifications: What happens when levels change and runs are not completed?
To See How Well an Existing Data Set Will Perform: Can we use all this data that we've had for years?
To Compare Designs: Another thing only statisticians care about.

Evaluate: Design Modifications (Changing a Run or Two)
To create this example, a 2^3 full two-level factorial design consisting of 8 vertices was built. This design is balanced and orthogonal. The extreme low and extreme high vertices were then modified, as it is believed those conditions will not produce meaningful results:
{-1, -1, -1} became {-0.5, -0.5, -0.5}
{+1, +1, +1} became {+0.5, +0.5, +0.5}
The design is no longer balanced and orthogonal, but is it still useful?
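
A sketch of the modified design matrix (hypothetical code mirroring the description above); it can be fed to helpers like the vifs() sketch earlier to reproduce the kind of diagnostics shown next:

```python
import numpy as np
from itertools import product

# full 2^3 factorial in coded units
runs = np.array(list(product([-1.0, 1.0], repeat=3)))
# pull the two extreme vertices halfway toward the center
runs[(runs == -1).all(axis=1)] = -0.5
runs[(runs == 1).all(axis=1)] = 0.5
# model columns for the 3FI model: A, B, C, AB, AC, BC, ABC
A, B, C = runs.T
X = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
```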

Evaluate: Design Modifications (Aliasing and Terms (Power))
No aliases found for the 3FI model.

Term | StdErr | VIF | Ri-Squared
A | 0.52 | 1.78 | 0.4378
B | 0.52 | 1.78 | 0.4378
C | 0.52 | 1.78 | 0.4378
AB | 0.55 | 1.76 | 0.4315
AC | 0.55 | 1.76 | 0.4315
BC | 0.55 | 1.76 | 0.4315
ABC | 0.58 | 2.00 | 0.4996

Evaluate: Design Modifications (Check Aliasing)
"No aliasing found" is the best thing to see.
1st- and 2nd-order terms aliased with 3rd- or higher-order terms is acceptable for characterization and optimization designs.
2nd-order terms aliased with other 2nd-order terms is acceptable for screening designs.
1st-order terms aliased with 2nd-order terms is only acceptable for verification designs.

Evaluate: Design Modifications (Check Terms (Power))
The power will be lower even though there are the same number of runs. (Remember, when evaluating power, set the model to Main Effects.)
Look at the VIF column; it is no longer all 1.00.
Small values, 10 or less, are not cause for concern.
Values between 10 and 100 indicate the orthogonality is compromised.
Values from 100 to 1000 indicate severe compromise.
Over 1000 is bad; it may not be possible to obtain a model.
(Ignore all of this for designs with constraints, including mixture designs. Look at the FDS to see the effect of modifications.)

Evaluate: Design Modifications (Check FDS)
The modified design's FDS curve is not as flat and low as the unmodified design's. This is happening because the model can still predict at the original vertices; the predictions there are poor due to the lack of data.
[FDS Graph: Std Error Mean (0.000 to 1.200) vs. Fraction of Design Space (0.00 to 1.00), comparing the Modified and Unmodified designs.]

Evaluate: Design Modifications (Losing a Run or Two)
For the second example, a 2^3 full two-level factorial design consisting of 8 vertices was built. This design is balanced and orthogonal. The extreme low and extreme high vertices were included in the design because no effort was made to manually evaluate the design. Those two runs failed to produce a meaningful response, so it is now a six-run design. It is no longer balanced and orthogonal, but is it still useful?

Evaluate: Design Modifications (Check Aliasing)
Factorial Effects Aliases ([Est. Terms] = Aliased Terms):
[Intercept] = Intercept - BC
[A] = A - ABC
[B] = B - ABC
[C] = C - ABC
[AB] = AB - BC
[AC] = AC - BC

Following the rules outlined earlier, 2nd-order terms aliased with other 2nd-order terms is acceptable for screening designs. These six runs can be used to screen whether or not A, B and C are important to the process, but the interactions are lost.
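
These relationships can be checked numerically with the alias_matrix() sketch from earlier, built on the six remaining runs (hypothetical code; the -1 entries in the result correspond to the -BC and -ABC biases listed above):

```python
import numpy as np
from itertools import product

# the six runs left after dropping {-1,-1,-1} and {+1,+1,+1}
runs = np.array([r for r in product([-1.0, 1.0], repeat=3)
                 if abs(sum(r)) != 3])
A, B, C = runs.T
ones = np.ones(len(runs))
X1 = np.column_stack([ones, A, B, C, A*B, A*C])  # estimated terms
X2 = np.column_stack([B*C, A*B*C])               # terms the design lost
# same formula as the earlier alias_matrix() sketch
M = np.linalg.solve(X1.T @ X1, X1.T @ X2)
print(np.round(M, 3))
```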

When to Use Design Evaluation
To Make Sure the Design Is Able to Meet Goals: How many runs does it take to get to a useful model?
Check the Impact of Design Modifications: What happens when levels change and runs are not completed?
To See How Well an Existing Data Set Will Perform: Can we use all this data that we've had for years?
To Compare Designs: Another thing only statisticians care about.

Evaluation: Existing Data
The checks are pretty much the same as evaluating design modifications. If there are problems with a design, you build a new design. If there are problems with existing data, you can:
1. Augment the design to add the runs necessary to make the design able to meet its goals.
2. Use your subject-matter knowledge to decide which factors are the true drivers of the response changes, then delete the other factors.
Which way you go depends on what you know!

When to Use Design Evaluation
To Make Sure the Design Is Able to Meet Goals: How many runs does it take to get to a useful model?
Check the Impact of Design Modifications: What happens when levels change and runs are not completed?
To See How Well an Existing Data Set Will Perform: Can we use all this data that we've had for years?
To Compare Designs: Another thing only statisticians care about.

Evaluate: Best Design
The goal of I-optimality is to minimize the integral under the FDS curve, which makes it lower and flatter. This provides a more precise model. The goal of D-optimality is to maximize the determinant of the X'X matrix. This minimizes the volume of the joint confidence region for the coefficient estimates, improving the power of the design to detect significant effects.
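
Both criteria are simple functions of the information matrix X'X; a minimal sketch (hypothetical helpers; real optimal-design software adds scaling conventions and point-exchange algorithms on top of these):

```python
import numpy as np

def d_criterion(X):
    # D-optimal designs maximize |X'X|, shrinking the joint
    # confidence region for the coefficients
    return np.linalg.det(X.T @ X)

def i_criterion(X, region_points):
    # I-optimal designs minimize the average prediction variance over
    # the region, approximated by a sample of model-expanded points
    XtX_inv = np.linalg.inv(X.T @ X)
    return np.mean(np.einsum('ij,jk,ik->i',
                             region_points, XtX_inv, region_points))
```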

Evaluate: Best Design (Using FDS)
The FDS graph provides a way to compare the precision of the model predictions. A lower and flatter FDS curve indicates better precision around the model predictions. But that is not the whole story.
[FDS Graph: Std Error Mean (0.000 to 1.200) vs. Fraction of Design Space (0.00 to 1.00), comparing D-optimal and I-optimal designs.]

Evaluate: Best Design (Using Matrix Measures)

I-optimal:
Condition Number of Coefficient Matrix = 83.074
Maximum Variance Mean = 0.655
Average Variance Mean = 0.347
Minimum Variance Mean = 0.180
G Efficiency = 101.8 %
Scaled D-optimality Criterion = 7.331
Determinant of (X'X)^-1 = 5.762E-13
Trace of (X'X)^-1 = 10.427
I (Cuboidal) = 0.48476

D-optimal:
Condition Number of Coefficient Matrix = 184.314
Maximum Variance Mean = 1.067
Average Variance Mean = 0.578
Minimum Variance Mean = 0.289
G Efficiency = 62.5 %
Scaled D-optimality Criterion = 5.806
Determinant of (X'X)^-1 = 5.439E-15
Trace of (X'X)^-1 = 10.975
I (Cuboidal) = 0.60038

The best matrix measure to use depends on the goal of the experiment.

Evaluation Unleashed!
Don't forget that you know things that statistics doesn't. Take the time to look the design over. Fix any runs that might be a problem, then use the evaluation tools.
Use the evaluation tools before starting the experiment. It is much easier to prevent problems than to fix them.
Use the tools when the experiment doesn't go as planned. Modifications to a design may or may not cause a problem for the analysis; the evaluation tools provide a way to check.
Use the tools when you already have historical data. If the data set has a similar structure to a design, then it analyzes like a design. If it doesn't, then it won't, and the problems will need to be fixed.

Thank You!
Thank you for attending our webinar. I will keep the webinar open for a little while to receive and answer questions. Please feel free to email any questions about the presentation to webinar@statease.com; we will reply as soon as possible.
Brooks, Mark, Wayne, Pat, Shari, Martin
