Reinforcement Metrics: How to Use Analytics to Optimize the Effectiveness of Your Learning Reinforcement Plan

www.swissvbs.com

Contents

Forgetting is Ruining Your Investment in Learning
Why Do Learning Reinforcement Metrics Matter?
    Insight into the Learning Journey
    Maximizing ROI
    Empowering Learners
    Accessibility and Cost Effectiveness
    Getting the Buy-In
    Fine-Tuning the Content
Reinforcement Metrics Are Different from Traditional Training Measures
What to Measure: Reinforcement Metrics
    Engagement Metrics
    Outcome Metrics
    Mastery Metrics
Reinforcement Metrics in Action
    In the Early Stages
    Fine-Tuning Stages
Making Your Move

Forgetting is Ruining Your Investment in Learning

The fundamental purpose of corporate training is to empower employees to apply newly gained knowledge to their work. Without retention of what has been learned, reaching this goal is impossible. Think of retention as the gap that separates the learning moment from the doing moment. The bridge that closes this gap is the learning reinforcement plan.

If you're new to the concept, be sure to read "How to Overcome the Illusion of Knowing" and "How the Mobile Revolution Can Overcome the Forgetting Curve". But if you're at the stage where you find yourself grappling with the question "How can I measure and improve the effectiveness of my reinforcement plans?", read on.

Using analytics to improve the performance of your learning reinforcement plans is easier than you think. With just a small investment, it's relatively simple to measure and optimize employees' retention. Implement the right metrics to gain insightful and actionable data, and then use that data to guide your efforts moving forward.

Why Do Learning Reinforcement Metrics Matter?

There are countless reasons that prove the value of reinforcement metrics; you'd be hard pressed to find a valid reason not to measure your learning reinforcement plans.

Picture this scenario: John is the Director of Learning and Development for a large retail chain. Over the last 5 years, he's run several customer service training programs for the chain's frontline employees. Realizing there was a problem with knowledge transfer, in the last 2 years he implemented a 6-week reinforcement plan that followed the company's in-store training event. John knows from the results of the exit test that the employees have acquired a good portion of what was taught. But John still doesn't know the extent to which his learning reinforcement plan is helping the employees retain that knowledge two or three weeks after the event. And to top it all off, Retail Operations is asking for assurance that the employees won't have to be retrained a few months from now. John needs metrics.

Insight into the Learning Journey

The learning journey of an employee doesn't end when the initial training event is over. It continues during the critical few weeks following the event, when the employee is back on the job and the forgetting curve is working hard to erode what's been learned.

Reinforcement metrics allow you to measure the effectiveness of regular retrieval practices and the other activities in your learning reinforcement plan. The insights you gain by expanding your radar from the learning event to the full learning journey are profound.

Maximizing ROI

The ROI of your learning programs is intricately linked to your employees' retention rate. The more they remember, the better they perform; and the better they perform, the greater your success and the higher the return on your training investment. Metrics are fundamental to true ROI maximization, and yet so many training ROI projections don't even include the retention phase, which is even more important than measuring the effectiveness of the training course itself. With reinforcement metrics, you have the ability to assess your learners' journeys in real time, months after the training event. ROI calculations that focus solely on test scores achieved at the end of the learning moment miss out entirely on the key measure of knowledge transfer.

Empowering Learners

Picture how significant it would be for learners to be presented with insightful data about their own learning process. By sharing relevant and personalized reinforcement metrics with each employee, you allow them to become stakeholders in their individual learning journeys.

Accessibility and Cost Effectiveness

Using data to measure outcomes is nothing new, but thanks to mobile, cloud and AI technologies, metrics have never been easier to collect and analyze. Generating, storing and analyzing large volumes of reinforcement measurements now costs next to nothing. The end result is the ability to leverage analytics in a way that achieves heightened retention effectiveness.

Getting the Buy-In

The objective, actionable data you can obtain from learning reinforcement metrics helps you demonstrate unquestionable value to upper management. This makes it far easier to obtain approval for learning and development budgets, as well as for the expansion of your learning reinforcement plan.

Fine-Tuning the Content

A learning reinforcement plan is a constant work in progress, and the right metrics play a key role in the continual tweaking and optimization of your plan. Metrics tell you which concepts require added reinforcement, allowing you to optimize the initial training material and fine-tune your learning reinforcement schedules and retrieval practices.

Reinforcement Metrics Are Different from Traditional Training Measures

Were your learners paying attention during initial training? Do they understand what you taught them? Can they use the new knowledge effectively? Are they able to translate what they learned into what they do?

Learning reinforcement plans are fundamentally different from traditional training programs, so it's only logical that the way we measure them differs too. Training programs are usually time-bound (a 4-hour course, a 1-day course, a 3-day course), and as a result there is very little opportunity to use metrics during the training event itself. Metrics are important, but they're typically used to make adjustments to the program for the next cohort of learners, not the current one.

On the other hand, learning reinforcement plans last several weeks, so there's plenty of time to perfect the very plan you're measuring. Subsequent retrieval practices can be altered dynamically to customize and improve the learner's journey, both at an individual and at a collective level.

What to Measure: Reinforcement Metrics

Our experience has shown that the following three core metrics form the backbone of every learning reinforcement plan:

I. Learner Engagement
II. Outcome
III. Long-term Mastery

And in order to gain the greatest insight into your learning reinforcement plans, you need to view these metrics along two dimensions (a small sketch follows the list):

1. Self-Dimension: how each employee's metrics change over time
2. Cohort-Dimension: how each employee's metrics compare to the average of the cohort of employees involved in the same learning reinforcement plan
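
To make the two dimensions concrete, here is a minimal Python sketch, assuming each learner's weekly engagement scores have already been collected; the learner names and numbers are invented for illustration and are not data from this whitepaper.

```python
from statistics import mean

# Hypothetical weekly engagement scores per learner (illustrative only).
weekly_scores = {
    "maria": [4, 6, 7, 9],
    "dev":   [8, 8, 7, 8],
    "li":    [2, 3, 5, 6],
}

def self_dimension(learner: str) -> list[int]:
    """Self-dimension: how one learner's metric changes week over week."""
    scores = weekly_scores[learner]
    return [later - earlier for earlier, later in zip(scores, scores[1:])]

def cohort_dimension(learner: str) -> list[float]:
    """Cohort-dimension: the gap between the learner and the cohort average, per week."""
    cohort_avg = [mean(week) for week in zip(*weekly_scores.values())]
    return [round(own - avg, 2) for own, avg in zip(weekly_scores[learner], cohort_avg)]

print(self_dimension("li"))      # [1, 2, 1] -> steady week-over-week growth
print(cohort_dimension("li"))    # negative values -> still below the cohort average
```

Read together, the two views tell a richer story than either alone: a learner can be improving steadily (self-dimension) while still trailing the cohort (cohort-dimension).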

Engagement Metrics

Learner engagement is defined as the extent to which the learner takes part in retrieval activities throughout the duration of the reinforcement plan. This measure is foundational in the sense that both the outcome and mastery metrics depend on it: no engagement means no outcome and no mastery. Frequency of use and the amount of time spent performing retrieval practices (ideally a blend of the two) are the core indicators of a learner's engagement. Measuring a learner's engagement with the recommended learning resources of the reinforcement plan is secondary, but it can still be useful. A good reinforcement plan design assigns points to practice attempts and additional points to practice questions answered correctly; by doing this, you reward learners for their engagement.

Outcome Metrics

Outcome is defined as the learner's performance on tests and quizzes throughout the reinforcement plan. Test scores can be used to create gamified and competitive designs to encourage heightened engagement and learner interest, if you wish. For example, you might decide to use team leaderboards to incentivize your learners' engagement during the reinforcement experience. The extent to which you use gamification depends largely on your industry and company culture. Beyond using outcome metrics to promote engagement, a good design avoids positioning outcome as a success criterion and instead focuses on measuring learners' mastery levels, irrespective of quiz scores (see Mastery Metrics below).

Outcome metrics can include:

- Number of questions answered correctly
- Accumulated quiz scores
- Number of awards earned
- Position in a leaderboard
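
The points-based design described above (points for every practice attempt, extra points for correct answers) can be captured in a few lines. This is only a sketch under assumed point values and field names; none of them are prescribed by the whitepaper.

```python
from dataclasses import dataclass

# Illustrative point values -- tune these to your own reinforcement plan design.
POINTS_PER_ATTEMPT = 1
POINTS_PER_CORRECT = 2

@dataclass
class Attempt:
    question_id: str
    correct: bool
    seconds_spent: int

def practice_metrics(attempts: list[Attempt]) -> dict:
    """Engagement rewards participation itself; outcome tallies quiz performance."""
    correct = sum(a.correct for a in attempts)
    return {
        "attempts": len(attempts),                                   # frequency of use
        "minutes_practised": sum(a.seconds_spent for a in attempts) / 60,
        "engagement_points": len(attempts) * POINTS_PER_ATTEMPT + correct * POINTS_PER_CORRECT,
        "questions_answered_correctly": correct,                     # an outcome metric
    }

history = [Attempt("q1", True, 40), Attempt("q2", False, 65), Attempt("q3", True, 30)]
print(practice_metrics(history))
# {'attempts': 3, 'minutes_practised': 2.25, 'engagement_points': 7,
#  'questions_answered_correctly': 2}
```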

Mastery Metrics

An employee achieves mastery in a given competency when he or she has comprehensive knowledge of that competency. To achieve long-term mastery, the employee needs to demonstrate proficiency in a balanced (across competencies) and sustained (over time) manner.

Mastery is a binary measure: at any given time, an employee has either reached the mastery level for a competency or has not. The benchmark that separates the two differs by industry and subject matter. For example, in one reinforcement plan, answering 2 questions in a row correctly may represent mastery, whereas in another, mastery may not be achieved until 3 out of 5 questions are answered correctly over a 2-week period.

So how do we measure mastery? Start by mapping each retrieval practice question in your reinforcement plan to one or more competencies. Then measure how frequently the learner correctly answers the questions associated with a specific competency over a defined period of time. Use these measures to determine which employees have achieved mastery for each competency, and how many have maintained that mastery.

Avoid the temptation to use quiz scores (an outcome metric) as a proxy for mastery. What does this mean? If you use a leaderboard, for example, you might assign a losing point to each wrong answer, but it would be inaccurate to factor this into the measure of mastery. Here's why: say Mary engages in 3 retrieval practices each week over the course of 4 weeks. In the first two weeks, 80% of her answers are wrong, resulting in several losing points. But during the third week only 40% of her answers are wrong, and in the fourth week only 5%. This, in fact, demonstrates long-term mastery, yet the accumulated losing points would distort that reality.

Mastery metrics are powerful. Use them to fine-tune your learning reinforcement plan.
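
As a concrete illustration of the measurement just described, the sketch below maps questions to competencies and checks a learner's recent accuracy against a benchmark. The "3 of the last 5 answers within a 2-week window" rule is borrowed from the example above; the question IDs, competency names and dates are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical mapping of retrieval-practice questions to competencies.
QUESTION_COMPETENCY = {"q1": "returns_policy", "q2": "returns_policy", "q3": "upselling"}

# One benchmark from the example above: at least 3 of the last 5 answers
# correct within a 2-week window. Your own benchmark will differ.
WINDOW_DAYS, MIN_ANSWERS, MIN_CORRECT = 14, 5, 3

def has_mastery(answers, competency: str, today: date) -> bool:
    """answers: list of (question_id, was_correct, answered_on) tuples for one learner."""
    cutoff = today - timedelta(days=WINDOW_DAYS)
    recent = [correct for qid, correct, when in answers
              if QUESTION_COMPETENCY.get(qid) == competency and when >= cutoff]
    recent = recent[-MIN_ANSWERS:]                 # keep only the last five answers
    return len(recent) == MIN_ANSWERS and sum(recent) >= MIN_CORRECT

answers = [("q1", False, date(2024, 3, 1)), ("q2", True, date(2024, 3, 5)),
           ("q1", True, date(2024, 3, 8)),  ("q2", True, date(2024, 3, 10)),
           ("q1", True, date(2024, 3, 12))]
print(has_mastery(answers, "returns_policy", today=date(2024, 3, 14)))  # True: 4 of 5 correct
```

Note how this deliberately ignores leaderboard points: only the recent, per-competency accuracy drives the binary mastery decision.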

Consider exposing your employees to their own mastery metrics, or to the average mastery metric of their peers. Show learners how their mastery of different competencies has changed over the duration of the reinforcement plan, and equip your frontline managers with the mastery metrics for their team members so they are better prepared for their coaching sessions. When it comes to this metric you have options: you can involve learners by showing them how their mastery metrics evolve throughout the learning reinforcement plan, or grant access to managers only.

Achieving mastery doesn't mean your learners should no longer be exposed to retrieval practices for that competency. Simply adjust your learning reinforcement plan schedule to allow a little forgetting to set in before you recheck a learner on a mastered competency. As you decide when to re-expose learners to particular practices, consider how often they use those concepts in a given day on the job.
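
One possible way to "let a little forgetting set in" is to push each successive re-check of a mastered competency further out, adjusted for how often the concept is used on the job. The intervals below, and the direction of the on-the-job adjustment, are illustrative assumptions rather than a schedule recommended by the whitepaper.

```python
from datetime import date, timedelta

# Illustrative expanding gaps (in days) between re-checks of a mastered competency.
RECHECK_GAPS = [7, 14, 30, 60]

def next_recheck(last_check: date, rechecks_so_far: int, used_daily_on_job: bool) -> date:
    """Schedule the next retrieval practice for an already-mastered competency."""
    gap = RECHECK_GAPS[min(rechecks_so_far, len(RECHECK_GAPS) - 1)]
    if used_daily_on_job:
        gap *= 2    # assumption: concepts exercised daily at work erode more slowly
    return last_check + timedelta(days=gap)

print(next_recheck(date(2024, 4, 1), rechecks_so_far=0, used_daily_on_job=False))  # 2024-04-08
```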

Reinforcement Metrics in Action

Now that you know which metrics to use to measure your learning reinforcement plan, how do you use them to improve its effectiveness? Learning reinforcement metrics help you optimize performance, but you need solid A/B testing to perfect your plan. Create two groups: one acts as a control and the other receives the treatment plan. However you choose to use A/B testing, be sure to use equivalent benchmark tests to assess your cohort's knowledge levels at both the beginning and the end of the reinforcement plan.

In the Early Stages

When you first roll out your plan, use two groups: one that receives the reinforcement plan and one that receives no reinforcement whatsoever. This will give you an immediate indication of the learning reinforcement plan's effect on retention and mastery, and it will go a long way toward maintaining executive buy-in and dispelling any skepticism within your company.
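
A minimal sketch of that early-stage comparison, assuming you record the equivalent benchmark score for each employee at the start and end of the period; the group sizes and scores below are made up purely for illustration.

```python
from statistics import mean

# (pre-benchmark, post-benchmark) scores per employee -- hypothetical numbers.
control   = [(55, 58), (60, 57), (48, 52), (62, 63)]   # no reinforcement
treatment = [(54, 71), (61, 78), (50, 66), (59, 74)]   # 6-week reinforcement plan

def average_gain(group) -> float:
    """Average change from the pre-benchmark to the post-benchmark."""
    return mean(post - pre for pre, post in group)

print(f"Average gain without reinforcement: {average_gain(control):.1f} points")
print(f"Average gain with reinforcement:    {average_gain(treatment):.1f} points")
print(f"Retention lift from the plan:       {average_gain(treatment) - average_gain(control):.1f} points")
```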

Fine-Tuning Stages

Once your reinforcement plans are better established, you'll want to start testing different versions as you fine-tune the content (see "How to Create First-Rate Reinforcement Content" for more). As you test and measure, be sure to focus on a single parameter at a time. A few of the most important parameters your A/B tests can focus on optimizing are:

- Spacing of retrieval practices
- Types of questions used in retrieval practices (knowledge checks, application questions, etc.)
- Number of questions in each retrieval practice
- Format of retrieval practices (multiple choice questions, open questions, flashcards, etc.)
- Length of the entire reinforcement plan
- Number of competencies covered
- Type and timing of feedback for retrieval questions

For example, you set the spacing of the retrieval practices for Group A to every day and that of Group B to every other day. If the employees in Group A show a higher measure on the mastery metric, then serving daily retrieval practices is more effective. Having determined an optimum value for the spacing variable, you're ready to run the next A/B test, this time measuring the impact of another parameter, say, feedback timing. The options are endless, but by testing, retesting, measuring and re-measuring, it won't be long before the right changes become clear. And don't forget: of the three reinforcement metrics, the two to focus on in your A/B tests are the engagement and mastery metrics, in that order.
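
Sticking with the spacing example, here is a sketch of how the end-of-plan comparison might look, assuming each learner's mastery status is already available from the mastery metric described earlier; the flags below are invented for illustration.

```python
# Hypothetical end-of-plan mastery flags per learner (True = reached mastery).
group_a = [True, True, False, True, True, False, True, True]    # daily retrieval practice
group_b = [True, False, False, True, False, True, False, True]  # every-other-day practice

def mastery_rate(flags: list) -> float:
    """Share of the group that reached mastery by the end of the plan."""
    return sum(flags) / len(flags)

print(f"Group A (daily):           {mastery_rate(group_a):.0%}")
print(f"Group B (every other day): {mastery_rate(group_b):.0%}")
# 75% vs 50% here; with a real cohort, check that the gap exceeds noise
# (e.g. a two-proportion test) before locking in the spacing parameter.
```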

Making Your Move

It's never too early to start thinking about reinforcement metrics. A good reinforcement plan backed by intelligent metrics is what puts you in a position to maximize your ROI, improve learning retention and boost company performance. When you take the time to effectively structure and measure your reinforcement plan, you end up with continuous, actionable, real-time data that gives you a pulse on every learner. You'll be able to understand what's working and what's not, and know how to fix it. In other words, as one of our clients put it quite succinctly: "Reinforcement metrics take the guesswork out of the successful transfer of learning."

MORE ABOUT SWISSVBS:

As a leader in learning reinforcement solutions, we empower you with our award-winning ECHO app to improve your employees' retention and performance. Our dynamic platform and learning reinforcement experts will maximize your training investments and equip you with powerful data to transform your learning initiatives.

For over 16 years, our customers have relied on our innovative products and services to improve employee performance and business outcomes in industries as diverse as health, retail, insurance, manufacturing, and finance.

Find out more at: www.swissvbs.com/en/echo or email info@swissvbs.com

TORONTO
333 Adelaide St. West, Suite 200
Toronto ON, Canada
+1 416 848 3744

ST. GALLEN
Winkelriedstr. 35
9000 St. Gallen, Switzerland
+41 71 845 5936

MUNICH
Osterwaldstr. 10, Building G19, 2nd floor
80805 München, Germany
+49 89 307 68 895