Important Metrics for Evaluating a Learning Software Pilot Program
You’ve just completed the arduous task of running a learning software pilot. The participants have done their due diligence and submitted their surveys. After carefully selecting survey questions that mapped to organizational business objectives and the desired features and benefits, you’re ready to analyze the data. So, what are the important metrics you need to capture and share in order to decide whether to move forward with this initiative?
There are three critical areas: functionality, pricing and support.
The industry has moved quickly in the last several years toward a more learner-centric approach, as opposed to the old-guard “push” training methodology. This means that site usability is more important than ever: the site must let employees dictate what, how and when they want to acquire knowledge. How quickly someone can find content and then play it on their medium of choice is an important benchmark in your decision criteria.
Two other critical measurements are adoption and use. How likely are you to attract users into the environment and keep them coming back? The homepage must be welcoming and informative and speak to the company as a whole to accomplish this goal. You’ll need very high evaluation marks here to move forward. In a recent Capterra survey, 53 percent of respondents said functionality was the most important factor in their LMS decision-making process.
As we all know, the bottom line – the quote, the fees, the total cost – is the one page in your multi-page presentation that the leadership team looks for. How much is the software going to cost?
Even though you may think this metric is as easy as including the vendor’s quote, consider that Capterra’s survey suggested recent LMS purchasers underestimated their overall expenses by 59 percent – actual costs ran 59 percent over budget. Multiplying your expected costs by 1.59 will give you a more realistic estimate.
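As a quick sanity check, that adjustment can be sketched as a one-line calculation. The 1.59 multiplier comes from the survey figure above; the line items and dollar amounts are hypothetical placeholders, not real quote data.

```python
# Adjust a vendor quote for the typical cost overrun reported above.
# The 1.59 factor reflects actual costs running 59 percent over budget;
# the line items and dollar amounts below are hypothetical.
UNDERESTIMATE_FACTOR = 1.59

def realistic_cost(line_items: dict[str, float]) -> float:
    """Sum the quoted line items, then apply the overrun factor."""
    return sum(line_items.values()) * UNDERESTIMATE_FACTOR

quote = {
    "licenses": 40_000.0,
    "implementation": 15_000.0,
    "training": 5_000.0,
}
print(round(realistic_cost(quote)))  # 60,000 quoted -> prints 95400
```

Adding the multiplier as a named constant keeps the assumption visible, so it is easy to revise if your own organization's overrun history differs from the survey average.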
ROI is a little more difficult to calculate accurately, because you must take into consideration how the new system will cut costs, save time and improve productivity. Examples of these benefits include reducing IT infrastructure via the cloud; avoiding penalties from compliance tracking, travel expenses, and materials for instructors; reducing turnover; and making onboarding faster. When evaluating the new system, outline where and when the organization will realize these savings.
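The reasoning above boils down to the standard ROI formula, (savings − cost) / cost. Here is a minimal sketch; the savings categories mirror the benefit examples listed, and every dollar figure is hypothetical.

```python
def simple_roi(annual_savings: dict[str, float], total_cost: float) -> float:
    """Return first-year ROI as a fraction: (savings - cost) / cost."""
    total_savings = sum(annual_savings.values())
    return (total_savings - total_cost) / total_cost

# Hypothetical savings mapped to the benefit categories above.
savings = {
    "it_infrastructure": 20_000.0,           # cloud vs. on-prem hosting
    "compliance_penalties_avoided": 8_000.0,
    "travel_and_materials": 12_000.0,
    "faster_onboarding": 10_000.0,
}
print(f"{simple_roi(savings, total_cost=40_000.0):.0%}")  # prints 25%
```

Itemizing the savings this way also answers the "where and when" question: each dictionary key is a line you can assign an owner and a realization date.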
Every vendor who offers you a demonstration will likely also give you some variation of this statement: “Our support is world-class, state-of-the-art, top-notch, technically superior and industry-leading.” Some or all of those claims may well be true. However, their systems are constantly being upgraded and beta-tested, and they have to be compatible across all browsers and all equipment, which are also being continually upgraded. It’s not an easy task.
When considering this metric, pay special attention to what the vendor offers in these areas, and grade accordingly; consider service level agreements, key performance indicators, hours of support, number of administrators, contact points, support request tracking and levels of support. The system will break from time to time, but how your vendor handles the challenges will be an important metric. Consider whether you encountered any support issues during the pilot and whether the program ran slowly on your network.
Consider what your stakeholders are accustomed to. The vendor’s support must be at least as good as – and ideally better than – your current benchmark, because you now have another layer that you don’t control. The vendor you choose has to become a partner, as it will have access to some of your most critical data and will be partly responsible for meeting critical compliance-related organizational deadlines.
When evaluating a learning software pilot program, the metrics you compile are only as good as your planning prior to the pilot itself. Make sure you have included a good cross section of your organization and effectively captured the business objectives along with any must-have features, such as mobile capability. Incorporating those findings into clear and concise survey questions should enable you to derive the metrics that matter.
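One way to roll the survey results for the three critical areas into a single comparable number per vendor is a weighted scorecard. This is only a sketch: the weights and vendor scores below are hypothetical (the functionality weight loosely echoes the 53 percent survey figure), and you would substitute your own pilot-survey averages.

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine 0-10 category scores using normalized weights."""
    total_weight = sum(weights.values())
    return sum(scores[cat] * w for cat, w in weights.items()) / total_weight

# Hypothetical weights and pilot-survey scores for one vendor.
weights = {"functionality": 0.5, "pricing": 0.3, "support": 0.2}
vendor = {"functionality": 8.0, "pricing": 6.0, "support": 9.0}
print(round(weighted_score(vendor, weights), 2))  # prints 7.6
```

Because the weights are normalized inside the function, stakeholders can assign raw importance values in any scale and the scores remain comparable across vendors.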
TJ Coyle is the CLO of Alphanumeric.