Tracking is not Learning

SCORM and the LMS are the Achilles heel of training. Tracking data has become synonymous with measurement. This week I got an email from a vendor promoting the tracking capabilities of its product. It made me realize how often tracking data is used to misrepresent training success.

Many people think that metrics pulled out of an LMS indicate the success of training programs. They see tracking metrics as “performance measurement”. But tracking is not measurement, and it is no indication that learning took place or that learning will transfer to job performance. Relying on tracking data exposes our collective weakness in measuring training effectiveness.
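
To make “tracking data” concrete, here is a minimal sketch of the SCORM 1.2 calls a typical eLearning course makes to report completion. The Scorm12API interface and the one-line API lookup are simplifications for illustration; real courses search parent and opener frames for the window.API object the LMS provides.

```typescript
// Minimal slice of the SCORM 1.2 runtime API, as exposed by the LMS.
interface Scorm12API {
  LMSInitialize(arg: ""): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
}

// Simplified discovery; production code walks parent/opener frames for window.API.
const api = (window as any).API as Scorm12API;

api.LMSInitialize("");
// This single call is the entire basis of a "completion" statistic.
// Nothing about it requires evidence of learning.
api.LMSSetValue("cmi.core.lesson_status", "completed");
api.LMSCommit("");
api.LMSFinish("");
```

Everything the LMS “knows” about this learner is whatever the course chose to write here.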

Reliance on tracking data also means we aren’t asking the right questions about our training programs. How many times are you asked the following questions about training:

  • How many learners completed the eLearning course?
  • How many people attended training?

These are important stats, but they have little, if any, correlation to job performance. Just look at how effective the Secret Service ethics training was prior to the debacle in Colombia. Everyone completed the training, but it was clearly not effective.

Reliance on tracking data also lets instructional designers avoid being honest about real measurement and effectiveness. It’s easy to hide behind tracking numbers, and those numbers often give the impression that training programs are effective and valuable. It certainly sounds better to say that 90% of learners completed training than to admit you have no idea whether they learned anything.

Yes, we can design training courses to measure learning through activities and assessments (not just quizzes, but real assessments). In those cases, the tracking metrics do provide value. If the course has rigorous assessment, then your completion stat is an indicator of learning. But how many courses have you developed that truly assess learning? How many have you taken online or attended? Instructional designers are often between a rock and a hard place – there is an expectation from management (or customers) that people complete training, so we are under pressure to ensure they do. It’s our job as course developers to make sure that completion is not the most important metric.
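
If completion is going to mean anything, tie it to the assessment. Here is a hedged sketch, again against SCORM 1.2, of setting status from a score instead of from attendance. The reportAssessment function and the fallback threshold of 80 are illustrative assumptions; the authoritative pass mark would come from the course manifest, exposed as cmi.student_data.mastery_score.

```typescript
// Same minimal slice of the SCORM 1.2 runtime API as in the earlier sketch.
interface Scorm12API {
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
}

// Report an assessment result and derive status from it.
function reportAssessment(api: Scorm12API, rawScore: number): void {
  // The pass mark normally comes from the manifest via
  // cmi.student_data.mastery_score; 80 is an assumed fallback.
  const masteryStr = api.LMSGetValue("cmi.student_data.mastery_score");
  const mastery = masteryStr !== "" ? Number(masteryStr) : 80;

  api.LMSSetValue("cmi.core.score.min", "0");
  api.LMSSetValue("cmi.core.score.max", "100");
  api.LMSSetValue("cmi.core.score.raw", String(rawScore));

  // "passed" or "failed" tells a report reader more than a bare "completed".
  const status = rawScore >= mastery ? "passed" : "failed";
  api.LMSSetValue("cmi.core.lesson_status", status);
  api.LMSCommit("");
}
```

With this in place, a completion report is at least a proxy for passing an assessment rather than for reaching the last slide.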

And we won’t necessarily find the answer in Kirkpatrick’s Level 3 evaluation. That’s a great idea, but impractical in many organizations. If you can do Level 3 evaluations, then do them. We really need to look more seriously at the types of integrated assessments we do in training and at how learners can measure their own success. We can’t be afraid to let people fail the assessment, and we shouldn’t punish those who do.

But instructional designers and course developers need to start at the beginning and ensure that the people asking for reports on training success understand what the data means. We also have to ask the right questions before we start developing training:

  • Why is this training important to the organization?
  • What criteria will management use to determine success?

If the answer to the second question is something like “everyone will complete the training”, then you’d better go back to the first question and dig deeper into the problem.

In the long run, reliance on tracking data and the lack of learning assessment will come back to bite us. The sooner you figure out that your training isn’t effective, the sooner you can make adjustments, before it’s too late. Achilles was a mighty warrior, but in the end he was defeated by his one vulnerability. Don’t let weak assessment and measurement be yours.
