Choosing an Artificial Intelligence Solution: Start with the Business Challenge


CLIENT ONLY RESEARCH BRIEF | RESEARCH & ADVISORY NETWORK
KRIS HAMMOND | JULY 2016

THE BIG IDEAS

There are three main types of functionality that we want out of AI systems in general: assessment, prediction, and advisement.

To understand machine learning, deep learning, and reasoning based on evidence, consider the type of human reasoning each technology is aiming to emulate.

When choosing an AI solution, start with the problem you want to solve to understand which AI technology may be the best fit.

Focusing on the Problem

We've been given the gift of an ever-growing set of systems that fall under the umbrella of Artificial Intelligence (AI). In my previous research brief, The Intelligent Systems Ecosystem, I outlined the different subfields of AI. In this brief, I want to examine the functional capabilities of these systems; capabilities that include recommendation, learning for predictive analytics, and business-focused advisory systems.

With this wealth of technology, it might be easy to fall into a trap where our excitement drives our decisions, and we spend our time looking for nails for our shiny new hammers. Here, I want to focus on the nails and on which hammers in our tool kit are appropriate for them. My strategy is to characterize the kinds of problems best solved by the different systems and use the problems to drive the selection and implementation process. Through this approach, we can build out a set of metrics specific to the common problems faced and choose between the different tools that are available to us. That is, let the requirements of our nails tell us which of the hammers we should use.

Given that intelligent systems are aimed at doing things that people do, a good way to understand the problems they can solve is to look at them from a human perspective. In much the same way that we might choose someone for a project because they are good at analysis or great with managing people, we should choose the right technology for the right problem based on capabilities.

Assess, Predict and Advise

In The Intelligent Systems Ecosystem, I categorized AI technologies into three functional roles: assessment, prediction, and advisement. That is, seeing what is happening now, predicting what will happen next, and providing advice as to how to respond.¹ This baseline breakdown is a good first take on characterizing problems.

¹ Please note that I have explicitly excluded discussion of systems that take action on their own, in particular robots, simply because they are out of scope for most of the decisions that people reading this report will be making.

The next step is to think about the nature of the data that these systems use and how they generate the rules they use to process it. Now, we will look at the three major types of systems that reflect three different takes on reasoning: machine learning based on analytics, deep learning, and evidence-based reasoning.

Before moving on to data and process details, I want to make one point to help frame the discussion: these systems are all inference engines. In one form or another, they all look at data and draw conclusions as to what that data mean. Different systems look at very different kinds of data and may draw very different kinds of conclusions. But in the end, they are all using some data set to infer new ideas about what is happening in the world. The differences between them come down to what sort of data they look at and what processes they use to draw their conclusions.

Different Reasoning, Different Systems

One of the best ways to understand the differences between AI systems is to look at the different types of human reasoning as compared to each technology. In general, each of these systems is designed around a different aspect of human reasoning and is best understood in the context of that reasoning. This perspective also enables us to recognize when we have a version of that task to which a given system can be applied.

MACHINE LEARNING

The current rise of Artificial Intelligence has been driven by a growing number of Machine Learning (ML) success stories. While there are many systems that fall under the umbrella of ML, the goal of these systems in general tends to be the construction of rules based on trends, relationships, and correlations between features. These relationships are discovered through the analysis of possible relationships found in historical data and are then used to generate rules that reduce those relationships down to a set of equations. This sort of ML is well grounded in the technologies of function analysis, probability theory, and set theory, as well as a few others. Which is just to say that ML is essentially based on the analysis of structured data.

Some of the information we care about can be expressed as simple snapshots that characterize relationships between things in the world. For example, if we have data about shoppers, we may build up a set of categories defined by people who tend to buy collections of products together, such as one brand of shampoo, conditioner, and hair spray. This kind of characterization can then be used to cluster customers together or to improve interaction with them based on the products that they tend to purchase and the dynamic of those purchases.
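To make the shopper-clustering idea concrete, here is a minimal Python sketch of grouping customers by what they tend to buy together. The transaction table, the column names (customer_id, product), and the choice of two clusters are invented for illustration; a real project would estimate all of this from historical sales data.

```python
# A minimal sketch of clustering shoppers by what they buy together.
# Assumptions (not from the brief): a transactions table with columns
# "customer_id" and "product", and two clusters chosen arbitrarily.
import pandas as pd
from sklearn.cluster import KMeans

transactions = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "product": ["shampoo", "conditioner", "hair spray",
                "shampoo", "conditioner",
                "gas grill", "patio furniture", "charcoal",
                "gas grill", "patio furniture"],
})

# Build a customer-by-product purchase matrix (how often each product was bought).
basket = pd.crosstab(transactions["customer_id"], transactions["product"])

# Cluster customers with similar purchase patterns together.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
basket["cluster"] = model.fit_predict(basket)

# Hair-care buyers typically land in one cluster, outdoor buyers in the other.
print(basket)
```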

Even more powerful is that these techniques can provide us with the rules needed to suggest or recommend one product based on the presence of others. That is, use the presence of one set of features to predict others that have been linked together in the past. To extend ML into the realm of prediction, we can add the element of time and look for correlations (collections of products bought together) that scope over time. For example, looking at historical information, we may be able to see the correlation between purchases of gas grills and patio furniture that can be used to predict one from the other.

Of course, the nature of these snapshots, recommendations, and predictions is scoped by the nature of the data. If the only available data is sales, then the ability to see correlations and trends is limited to the relationships between purchases. As we add in data defining location, date, time of day, demographics, neighborhood information, and even weather, we expand the features a system can use and thus the features that will participate in the relationships it discovers and the rules that it can learn.

An additional issue arises related to what these systems produce at the content level. As we gather more and more data to fuel ML systems, we increase the scope of what can be learned exponentially. This is a plus in some ways, in that these systems can learn about relationships between elements that we might not have anticipated. It is a minus, however, in that we might end up with a system that learns a great deal about things that are irrelevant to our businesses. For example, I might have traffic, weather, and stock data, but I have no particular interest in learning about the correlations between hail storms and commute time, or between gray skies in the morning and temporary dips in prices at the opening bell. I am more interested in the ML system using the data at hand to provide me with the information that will best equip me to characterize, predict, and respond to the specific outcomes that most significantly impact my business.

So, while I may have technologies that can discover these somewhat off-point relationships, I probably want to guide them towards those relationships that actually impact my business. In particular, I want to be able to predict those events and features that I want to control before they happen. To provide focus, a good part of the ML process involves modeling the features that you care about (and thus want to predict) and those that you have access to (and can thus act as predictors). This phase is important in that it is part of the process of ensuring the technology is aligned with the business.
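As an illustration of using the presence of one purchase to suggest another, the following sketch counts how often pairs of products appear in the same basket and recommends the strongest co-occurring item. The baskets are invented; a production system would, of course, learn these relationships from far larger transaction histories.

```python
# A sketch of "customers who bought X also bought Y" from co-occurrence counts.
# The baskets below are invented; in practice they come from historical sales data.
from collections import Counter
from itertools import combinations

baskets = [
    {"gas grill", "patio furniture", "charcoal"},
    {"gas grill", "patio furniture"},
    {"shampoo", "conditioner", "hair spray"},
    {"shampoo", "conditioner"},
    {"gas grill", "charcoal"},
]

# Count how often each unordered pair of products is bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(product):
    """Return the product most often bought alongside `product`."""
    partners = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            partners[b] += count
        elif b == product:
            partners[a] += count
    return partners.most_common(1)[0][0] if partners else None

print(recommend("gas grill"))  # charcoal or patio furniture (tied in this toy data)
print(recommend("shampoo"))    # conditioner
```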

Somewhat orthogonal to all this are issues of data granularity, accuracy, and uncertainty. It is important to understand that in the current world, no matter how good our data are, there will be issues with them that result in uncertainty in the inferences that they afford. For example, there may be a strong correlation between gender, age, or economic status and the types of products that people buy, but that correlation will have a certainty value associated with it. The more data you have and the more granular it is, the more accurate the assessment of these correlations will be. But given the current state of the art, there will always be wiggle room. If you forget this, you will end up betting on what seems to be a sure thing when it is really still a throw of the dice.

In summary, these features come together to provide three basic metrics that are the starting point for evaluating when to apply ML to a problem:

1. First and foremost, is there a clear articulation of the problem that needs to be solved? Is there clarity around what the organization needs to know?

2. Does data exist that can provide the link between the features the organization wants to know and those that are already available? If we are using ML to build inference rules that tell us about sales, customer churn, or product performance with different pricing, we need to have those features in the historical data, as well as the features that are potentially predictive of them.

3. Is the data good enough for the job? Is the data both accurate and precise enough to provide predictive power at a level of certainty that makes the resulting determinations useful?

These three metrics really come down to understanding that ML can be applied in your organization if 1) you have a clear articulation of what you wish to learn, 2) you have the corresponding data that makes it possible, and 3) you understand the level of precision and accuracy of that data. By using these guiding metrics, ML is no longer a magic bullet but a useful technology that can be applied correctly and excel.
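One hedged sketch of how the third metric might be checked in practice: train a simple model on the features you have, hold some data out, and see whether the resulting accuracy clears a bar at which its determinations would actually be useful. The synthetic data, column meanings, and the 0.75 threshold below are assumptions made purely for illustration.

```python
# A sketch of metric #3: is the data precise enough to be useful?
# We train a simple model to predict a business outcome (here, churn) and
# check whether its held-out accuracy clears a threshold we would act on.
# The column names, the 0.75 threshold, and the synthetic data are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
returns_last_month = rng.poisson(1.5, n)   # predictor we already have
support_calls = rng.poisson(0.8, n)        # predictor we already have
# Outcome we care about: churn is more likely with many returns and calls.
churn = (returns_last_month + support_calls + rng.normal(0, 1, n) > 3).astype(int)

X = np.column_stack([returns_last_month, support_calls])
X_train, X_test, y_train, y_test = train_test_split(X, churn, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

USEFUL_THRESHOLD = 0.75  # an arbitrary bar for "good enough to act on"
print(f"held-out accuracy: {accuracy:.2f}")
print("useful" if accuracy >= USEFUL_THRESHOLD else "not precise enough yet")
```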

DEEP LEARNING

We tend to find analytically-focused machine learning aimed at data sets where there are a large number of elements (billions of transactions), but the elements themselves are made up of components that make sense to us. I may have a huge number of SKUs, but I know what they are, and I can assume there is some sort of relationship between the things people buy and the things they like. The world of Deep Learning (DL) is somewhat different in that the data sets are not only deep (millions of examples) and broad (thousands if not tens of thousands of features), but the components that make up any given example often have little relationship to semantic categories that make sense to humans. DL systems function in the world of pixels, audio signals, and sensor values. While their inputs may be difficult for us to understand directly, their outputs are what we have seen before: assessments and predictions.

Deep learning is based on reasoning using neural nets. This kind of learning makes use of layers of input nodes sending signals to a series of internal layers of nodes (hidden layers), each of which sends signals to the next layer until the output layer is reached. No single node does all the work; the network as a whole produces the result for any given input. Work in deep learning is inspired by the layering of computation that takes place in the cerebral cortex of the human brain.

Each of the connections between the nodes has a weight associated with it that is adjusted during learning. On the input side we might have all of the pixel values of an image, with output values that stand for a category like cat or house. If the output determined by the passing of values through these links is not the same as the output value set by the category, each node failing to match sends a signal back indicating that there was an error and that the weights on the relevant links must change. Over time, these tiny changes steer the network toward the set of weights that enable it to correctly assess that a new input is in the appropriate category: the activations sent from one side of the network pass through the equations associated with each node and result in the right values at the other end. As with most modern AI technologies, DL systems depend on data. These systems learn incrementally, and only over time do they converge on correct answers.

The functional difference between these different types of systems lies in the type of data that they process. ML systems tend to be applied to problems in which the data is associated with features that make some sense to us, and they learn rules that we have at least a chance of understanding. Correlations between purchases of lawn chairs and gas grills make sense to us. The relationship between pixels and categories such as cat, face, and car is a bit more difficult for us to understand. In fact, DL systems are designed to take data sets in which elements have huge numbers of features and, with each layer of their networks, combine those features into smaller and smaller sets of more complex features. Pixels become lines and curves. Lines become whiskers and triangles. Curves become eyes. Triangles become ears. And then these compound features become the recognition that the system is looking at the face of a cat.
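The mechanism described above (weighted connections, hidden layers, and small corrections driven by output errors) can be illustrated with a toy network. The following NumPy sketch trains a tiny one-hidden-layer network on the XOR problem; it is a teaching example, not how production deep learning systems are built, and the learning rate, layer sizes, and iteration count are arbitrary choices.

```python
# A toy illustration of the mechanism described above: inputs pass through a
# hidden layer of weighted connections, the output is compared to the correct
# category, and the error is sent back to nudge every weight a little.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # target "category" per input

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    hidden = sigmoid(X @ W1 + b1)        # activations flow forward...
    output = sigmoid(hidden @ W2 + b2)
    error = y - output                   # ...and the mismatch flows backward,
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out         # each weight is adjusted a tiny amount
    b2 += 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 += 0.5 * X.T @ d_hid
    b1 += 0.5 * d_hid.sum(axis=0, keepdims=True)

# After training, the outputs typically approach the target pattern [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```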

The question of whether to apply DL to a problem is again a question of data and task: in particular, what concepts need to be recognized, and does the data support that recognition? No matter what the application, the dynamic is similar to that of other ML systems. Historical data is used to train a network that is then used to process new examples. The network itself becomes the encoding of the rules that are run after training.

In general, there are fewer enterprise applications of DL than of standard ML techniques. DL is primarily being used by the larger technology companies that have the data and compute power to manage it. Google, Facebook, and Microsoft all use DL techniques to train systems for image, audio, and character recognition tasks. In general, these tasks have the feel mentioned above: they provide links between elements with huge feature sets and an assessment of what those elements are. They deal with problems that we tend to think of as perception rather than reasoning.

There are, however, some applications beginning to pop up in the enterprise where DL techniques might be applicable. One example is fraud detection. This application has a shape similar to the pre-cognitive perception tasks that are the bread and butter of DL: huge data sets, large numbers of features associated with any given example, and recognition rules that can be too complex to easily understand.

It is this last feature that can be troublesome in DL systems. While there is strong analytical validity to what they produce, the reasoning that results is completely opaque. Which means that it might be possible to build out a solution to a problem such as fraud detection that is highly accurate but inexplicable. While this is less problematic when we are building face recognition engines, it can be highly problematic in areas that look more like reasoning.

In looking at Deep Learning technologies, the same questions arise as with standard Machine Learning solutions, with two additional considerations:

1. How rich is the data set? Is it so rich that a more standard ML technique cannot be used?

2. Is the lack of transparency of the underlying reasoning a problem? That is, will an answer with no explanation or auditable justification suffice?

REASONING FROM EVIDENCE

Both standard ML and DL are aimed at learning from historical data so they can then assess and predict: characterizations of the present and projections into the future based on new data. For ML, the goal is an equation of sorts that defines the relationship between features. For DL, there is not one but multiple equations that define not only the relationships between features but the features themselves. But these are not the only approaches to learning and reasoning in the current suite of AI systems.

An alternative approach to assessment and prediction is the notion of evidence-based reasoning that is embodied in IBM Watson. While not the only instance of this approach, Watson is certainly the most visible and can be used as a proxy for the general notion of coming to conclusions based on the aggregation of thousands of pieces of evidence.

Watson is an evidence engine that answers questions by finding patterns that indicate answers in the text it has available. Given any information request, Watson transforms that request into what it thinks the answer would look like and then uses that query to search within its corpus. Of course, the answer could take many forms, so it has to manage thousands of patterns and then balance out the matches for that result.

If one starts with a question, a list of symptoms, or a set of goals, Watson provides an answer, a diagnosis, or advice by building up an argument based on a search for the truth of its responses. It fires off thousands of rules that map its information needs onto patterns of answers in the text that it reads. Each rule has a weight associated with it, so rules with the same answer can reinforce each other while rules with different answers compete. At the end of it all, the answers with the best overall value bubble to the top.

For example, if Watson were asked to find the King of Spain in 1829, it would have to craft a set of search queries such as:

<X> was the king of Spain in 1829
In 1829, <X> ruled Spain
<X> reigned over Spain in 1829
In 1829, Spain's king was <X>
<X>, who was king of Spain at the time (close to the year 1829 in the text)
<X> was king of Spain from Y to Z (where Y <= 1829 and Z >= 1829)

For any given information need, it will build out thousands of such queries. Some will not match at all. Some will match with X having the appropriate value, Ferdinand VII. Some will match with other values altogether, such as Maria Christina, who is mentioned as having married the King of Spain in 1829. The values associated with these matches are then summed up, giving each answer a score based on the number of patterns that resulted in that value. Of course, some patterns might be more reliable than others, so each is given a weight associated with its reliability, and these weights end up being part of the calculation that Watson goes through to score its answers.

Watson starts with language and has a wide range of techniques for mapping questions and queries onto a focus for its own reasoning. These include rules of syntax (such as knowing that the subject of a verb comes before it), semantic components (such as knowing that Spain is a country), and some special rules for certain domains, such as knowing that diagnostic queries consist of lists of symptoms. These techniques allow Watson to determine its focus. When looking at a question like "What is the best financial instrument for long-term retirement planning?", Watson understands that "what" means it is looking for a thing, "financial instrument" defines the class of things it is looking for, and "long-term retirement planning" defines the role this object has to play. This is the focus.

Watson then applies rules for finding information in the available corpus of text. These rules look for patterns in the text that link elements of the query to possible answers. Each rule, with the information from the focus, is sent out to find patterns in the text and propose an answer. The weights for each of the rules that point to any given answer are summed up, giving the score for each possible answer. The answer with the highest score wins. Some rules may match against patterns where the X matches Roth IRA, while others match against text where the answer seems to be 401(k). Depending on how many rules provide evidence for each answer and what their weights are, one of these ends up the winner.

Watson also has a substantial learning component. It learns the weights of each of the rules by looking at questions and known answers and then modifying the weight of each rule depending on how well it helps the system get to the correct answer. In effect, Watson learns how well each of the pieces of its own reasoning is working and rewards those that work best. Unlike other learning systems, Watson is learning about itself rather than about the world.

Watson's answers are all the result of matching patterns against the text it has. Every time Watson finds a match in its corpus, it ends up being one more piece of evidence for the answer contained within that pattern. Like the other systems, the availability of data is crucial to Watson, but the data is unstructured text. This is very different from the data requirements of ML and DL, which tend to require more numeric or symbolic information.
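The pattern-and-weight idea can be sketched in a few lines of Python: each pattern that matches the corpus casts a weighted vote for the answer it extracts, and the answer with the highest total wins. The corpus, patterns, and weights below are invented for illustration and are in no way Watson's actual implementation.

```python
# A sketch of evidence aggregation: each pattern that matches the corpus casts a
# weighted vote for the answer it extracts; the answer with the highest total wins.
# The corpus, patterns, and weights are invented; this is not Watson's actual code.
import re
from collections import defaultdict

corpus = [
    "Ferdinand VII was the king of Spain in 1829.",
    "In 1829, Ferdinand VII ruled Spain.",
    "Maria Christina married the king of Spain in 1829.",
]

# (regex with one capture group standing in for <X>, reliability weight)
patterns = [
    (r"(\w[\w ]*?) was the king of Spain in 1829", 1.0),
    (r"In 1829, (\w[\w ]*?) ruled Spain", 0.9),
    (r"(\w[\w ]*?) married the king of Spain in 1829", 0.3),  # weak evidence
]

scores = defaultdict(float)
for sentence in corpus:
    for pattern, weight in patterns:
        match = re.search(pattern, sentence)
        if match:
            scores[match.group(1).strip()] += weight

print(dict(scores))              # {'Ferdinand VII': 1.9, 'Maria Christina': 0.3}
print(max(scores, key=scores.get))  # Ferdinand VII
```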

In looking at Watson or other evidence-based approaches, the first issue is access to text that contains information about the questions a user might want to ask. Collections of text such as best practices, operational guidelines, case studies, and even FAQ files are all candidates for data sets that could support this type of reasoning. The second issue is what kind of answer a user needs. Because these sorts of systems return elements that come from existing text, users must be able to understand and make use of those answers. Texts that include answers and explanations tend to be good targets for these sorts of systems.

These two issues suggest the primary considerations when evaluating these sorts of systems:

1. Does the data set contain the information needed to support user needs? For example, does it contain the answers to the questions that are going to be asked?

2. Is there enough user flexibility to accommodate the uncertainty that is inherent in these types of systems?

ANALYSIS AND ADVICE

Taking a step back, there are three main types of functionality that AI systems should provide: assess, predict, and advise. We get the first two of these elements, assessment and prediction, as a direct result of the types of analysis discussed above. The third element, advice, is very different. It is driven by business rules.

Consider, for example, an analysis of customer satisfaction (assessment) and churn (prediction). Using transactional data, we might be able to learn that any customer with a series of product returns totaling $2,000 or more within a one-month period has reached a particular dissatisfaction threshold, and we are about to lose him or her. Our analysis of historical data can provide us with exactly this type of information. Now the question is, how do we respond?

The immediate reaction is to think that we might want to break the chain of events that will lead to losing that customer. But if that customer is not a moneymaker, has been the source of high-volume returns in the past, or has been a customer service nightmare, we might want to simply let them go. Or, even if a customer costs more than they are worth, there might be a reason (such as brand management) to make sure that they stay happy. The point is that the analysis provides us with the assessment and the prediction of what might happen. The business now has to provide us with how to respond. This partnership of analysis and business rules is essential.
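A minimal sketch of that partnership: the model supplies the prediction (this customer is likely to churn), and explicit business rules decide the response. The thresholds, field names, and rules are invented for illustration; the point is only that advice comes from combining analytic output with business logic.

```python
# A sketch of the analysis + business-rule partnership: the model supplies an
# assessment/prediction (likely churn), and business rules decide the response.
# Thresholds, field names, and the rules themselves are invented for illustration.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    churn_risk: float      # from the predictive model (0..1)
    annual_profit: float   # from finance data
    return_volume: float   # value of returns in the last month

def advise(c: Customer) -> str:
    if c.churn_risk < 0.5:
        return "no action"                      # not predicted to leave
    if c.annual_profit <= 0 and c.return_volume > 2000:
        return "let them go"                    # costly customer, high returns
    if c.annual_profit <= 0:
        return "retain only if strategically justified (e.g., brand management)"
    return "offer retention incentive"          # profitable and about to leave

for customer in [
    Customer("A", churn_risk=0.8, annual_profit=1200, return_volume=500),
    Customer("B", churn_risk=0.9, annual_profit=-300, return_volume=2500),
    Customer("C", churn_risk=0.2, annual_profit=400, return_volume=0),
]:
    print(customer.name, "->", advise(customer))
```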

For example, in 2012, Target was able to use transactional data to predict a variety of customer features, including when particular customers might be pregnant. This led to mailings of product offers for newborns to those customers who fit the right profile. The campaign led to customer complaints, as some felt the marketing was too intimate, and Target cancelled the promotion almost immediately. While Target's push towards greater analysis of customer behavior remains unchanged, they have found ways to use these results more subtly.

The take-away is that there is a difference between the analysis that provides situational assessments and predictions and the rules associated with the business steps that they support. The best approach is to combine the results of analysis with goals related to the business. And while it may seem obvious, this means that the primary metric for advisory systems is:

1. Can you articulate your business goals in a way that links them to the output of your analytic systems?

Different Systems, Different Metrics

Each of these types of systems is aimed at specific types of problems. The issue of which one to select is a matter of looking at the features of the problem as a guide. In the end, it is the problem, the associated data, and the type of result you're seeking that determine the solution you should choose. Before asking what technologies you want to apply, you need to identify the problem, find the data, and then establish the desired business goals to drive the eventual results.

About the Author

KRIS HAMMOND

In addition to being Chief Scientist of Narrative Science, Kris is a professor of Computer Science and Journalism at Northwestern University. Prior to joining the faculty at Northwestern, Kris founded the University of Chicago's Artificial Intelligence Laboratory. His research has been primarily focused on artificial intelligence, machine-generated content, and context-driven information systems. Kris recently served on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Kris received his PhD from Yale. He was named 2014 Innovator of the Year by the Best in Biz Awards and the Illinois Technology Association's 2015 Technologist of the Year. He is the author of the book Practical Artificial Intelligence for Dummies.

IIANALYTICS.COM

Copyright 2016 International Institute for Analytics. Proprietary to subscribers. IIA research is intended for IIA members only and should not be distributed without permission from IIA. All inquiries should be directed to membership@.