Measuring the Effectiveness of Your Training
In today’s business environment, it is essential to demonstrate the impact of learning initiatives. It’s disconcerting, then, that a 2015 Talent Pulse Report found that although U.S. organizations spend $164 billion on training and development annually, only 21 percent measure whether learning is used on the job.
Another report, from CLO Media and IDC, found that between 2010 and 2015, an overwhelming number of CLOs were dissatisfied with the tools, resources or data available to them to measure learning impact. This landscape is changing, however. While L&D measurement seems to have fallen behind other areas over the past five years, a 2016 survey reported that more than 60 percent of CLOs believe their measurement processes are “fully aligned” with their learning strategy, and more are satisfied than dissatisfied with their approach. Still, the focus remains on “happy sheets” and knowledge quizzes, with fewer than 50 percent of CLOs measuring retention.
Something must change so that business leaders can obtain useful, pertinent information about the effectiveness of training programs, make better use of the training budget, and achieve the results they seek. CEOs don’t care how many people completed training last year or how much fun they had. What they care about is the result. The key question to ask is: What did participants do differently, and how did those differences impact the business?
Jack and Patti Phillips’ model of evaluation emphasizes the importance of tying evaluation to learning objectives. For evaluation to be truly effective, we must start with the end in mind. Only when we set the objectives at the beginning can we effectively evaluate at the end. Part of the problem has been that when people think about evaluation, they often think about it after the initiative has run, but it’s too late then! We need to decide what the objectives are at the very start, before the intervention has even been designed, and then evaluate the outcomes in relation to those objectives.
If the objective of the course is simply to finish on time and collect some “happy sheets” about what participants intend to do, then it is very easy to fudge success. While valuable, reaction evaluation is not enough. If the objective of the course is to ensure that participants can effectively demonstrate their new skill at the end of the course, then, again, it is too easy to distort success. Learning evaluation is not enough, either, because the presence of learning or new skills doesn’t necessarily mean participants will use that learning or skill back in the workplace. The absolute minimum objective, and the minimum level of evaluation, is application and behavioral change.
To help create application and behavioral change in a learning intervention, a structured approach to the transfer of learning can be very beneficial. Following this approach gives you data and visibility into what’s really happening, which forms a much stronger basis for your evaluation. Examples of such data include observations from the participants themselves, observations from their managers, business KPIs, and re-calibration of the individuals’ chosen application objectives.
Measurement has always been a challenge for the learning profession. As awareness grows around the importance of the application of learning, and as more data becomes available, the industry can surely get closer to what CLOs want to see.
About the Author
Emma Weber is the founder of Lever – Transfer of Learning and developer of the Turning Learning into Action™ (TLA) methodology. Her second book, “Making Change Work: How to create behavioural change in organizations to drive impact and ROI,” co-authored with Jack and Patti Phillips of the ROI Institute, was published in 2016 by Kogan Page.