We often get so caught up in the ‘what’ of training – microlearning, gamification, classroom vs online, SCORM vs Tin Can – that we lose sight of why we create training in the first place. This article will explain why learning analytics are important and show you how to use learning data to quickly demonstrate the value of your course.
Let’s start at the beginning and nail down some basics. There are two things that you should always identify before you open your authoring tool to develop a course:
- Why are you creating the course?
- How are you planning to measure success?
Without thoroughly exploring these, you might as well save your time and money and invest it elsewhere.
“One of the great mistakes is to judge policies and programs by their intentions rather than their results” – Milton Friedman
Why are you creating the course?
If you don’t understand why you are creating training, how can you demonstrate whether you’ve been successful?
Let’s ask ourselves some basic questions:
- What is the reason for creating the course?
- What is the one thing that must happen to ensure this training will have been worth creating in the first place?
- What are the consequences if this doesn’t happen?
If you don’t know the answers to these questions, then you shouldn’t be allowed to open your elearning authoring software and create training.
How do you plan to measure success?
Figuring out why you are creating a training course is not enough. Once you have established training as the best solution to the problem (sometimes it is not!), you must next consider how you are going to demonstrate that it has been successful.
After all, delivering training you can’t prove worked makes your position as a learning designer less stable.
“The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong question” – Peter Drucker
The theory of learning analytics
Firstly, you need to decide what success looks like. The most common learning evaluation model is Kirkpatrick which is organized around four levels:
- reaction of student – what they thought and felt about the training
- learning – the resulting increase in knowledge or capability
- behavior – extent of behavior and capability improvement and implementation/application
- results – the effects on the business or environment resulting from the trainee’s performance
Keeping it simple
While the Kirkpatrick model makes complete sense, it doesn’t take into account how much time and money is required to measure each level.
So let’s simplify further and break the different ways to measure training down into two categories:
- Analyzing real world change (e.g. Change in learner behavior)
- Analyzing training metrics (e.g. Data showing course completion/passed assessment)
Isn’t analyzing real change difficult?
Let’s not beat around the bush – measuring real-world change in learners after training is notoriously difficult.
First of all, you need to capture data before the training is delivered. You need to know exactly what you plan to measure, and then measure that both before and after the training, to prove there has been a change.
While this is possible, most of us don’t have extravagant budgets to undertake this type of research. If you were going to do this properly, you would need to run a separate project in addition to course development to measure success.
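If you do have baseline data, the before-and-after comparison itself is straightforward. Here is a minimal sketch in Python, assuming you can export each learner’s assessment score before and after training – the data and field layout are hypothetical:

```python
from statistics import mean

# (learner_id, score_before, score_after) -- illustrative data only
scores = [
    ("a01", 55, 72),
    ("a02", 60, 58),
    ("a03", 48, 75),
    ("a04", 70, 80),
]

# Compare the group's average score before and after the training
before = mean(s[1] for s in scores)
after = mean(s[2] for s in scores)
improvement = after - before

print(f"Mean score before training: {before}")
print(f"Mean score after training:  {after}")
print(f"Mean improvement:           {improvement} points")
```

The hard (and expensive) part isn’t this calculation – it’s designing the measurement up front and capturing the “before” data at all.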
Instead, the practical answer for most teams is to create courses using solid instructional design practices, ensuring you have taken every reasonable step to achieve the desired objective.
If you can’t measure real change, what can you measure?
You can measure data. Using data captured as the learner progresses through the course is the fastest way to demonstrate a variety of metrics. You don’t need a huge budget and you don’t need months of research.
Therefore, it is much more feasible to use this type of analysis to understand how successful your course has been.
What learning data should you measure?
There is no right or wrong answer to this question. It depends completely on what you’re trying to achieve. But it is so easy to become lost in the various data points that you end up measuring nothing and moving on to developing the next course.
The key is to measure something – especially if you have never measured the value of your courses before – so you can refine your process and technique for future courses.
“First get your facts; then you can distort them at your leisure.” – Mark Twain
Focusing on a few key metrics will help you avoid being overwhelmed by a sea of analytics, so here is our pick of the metrics you should be analyzing:
- Total Users: The total number of learners who have access to the course. Obviously a critical data point – this is your audience. Start with this number.
- Completion Rate: Now you know your total number of users, you need to know how many of them have completed the course. Use this metric with care. An incomplete course doesn’t necessarily mean the course isn’t effective; it may just mean that the learner dipped into the course to find the information they needed.
- Time Spent Per Session: What is a session? A session is a single continuous visit to a course – typically treated as ended after around half an hour of inactivity. This is a much more meaningful statistic than how many learners accessed the course. If a learner opened the course and then left immediately, that isn’t good news! But if they’ve been active within the course for over thirty minutes, they are likely to be getting value (e.g. over 50% of learners who started the course spent more than 30 minutes learning).
- Page progress: You may have non-linear courses, so understanding which parts of the course your learners are engaging with is useful. This also lets you improve your courses after delivery. For example, if 90% of your learners are exiting the course after a specific lesson, it may be that your training is confusing, boring or irrelevant to the learner. This will allow you to go back in and refine the content.
- Device type: Measuring the type of device used is less beneficial for measuring whether training is successful, but it will give you key insights into how your audience likes to learn.
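To make these metrics concrete, here is a rough sketch in Python of how they could be computed, assuming your LMS or authoring tool can export one record per learner – the field names and sample data are hypothetical:

```python
from collections import Counter

# One exported record per learner (illustrative data only)
records = [
    {"user": "u1", "completed": True,  "active_minutes": 42, "last_page": "quiz"},
    {"user": "u2", "completed": False, "active_minutes": 5,  "last_page": "lesson-2"},
    {"user": "u3", "completed": True,  "active_minutes": 35, "last_page": "quiz"},
    {"user": "u4", "completed": False, "active_minutes": 31, "last_page": "lesson-2"},
]

# Total Users: your audience size
total_users = len(records)

# Completion Rate: share of the audience who finished the course
completion_rate = sum(r["completed"] for r in records) / total_users

# Time Spent: share of learners active for more than 30 minutes
engaged = sum(r["active_minutes"] > 30 for r in records) / total_users

# Page progress: where do non-completers stop? A cluster on one page
# can flag confusing, boring or irrelevant content.
drop_off = Counter(r["last_page"] for r in records if not r["completed"])

print(f"Total users:       {total_users}")
print(f"Completion rate:   {completion_rate:.0%}")
print(f"Spent >30 minutes: {engaged:.0%}")
print(f"Drop-off pages:    {drop_off.most_common()}")
```

Tracked release over release, these few numbers give you trends to act on rather than one-off figures.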
How to measure learning data
What can be measured depends on what your software is capable of measuring! If you use a standalone elearning tool to develop content, you will also need an LMS to capture data.
However, there are some powerful new authoring tools being introduced that remove the need for development software and an LMS.
You can also use free software such as Google Analytics to go much deeper into the data for each course.
Measuring the effectiveness of a training course can feel like an overwhelming task. But it doesn’t have to be, if you start with some basic learning analytics to identify why you are delivering the course and how you plan to demonstrate success.
Using the learning metrics listed in this article will start you off on the right foot.
Are there any other metrics you have found useful? What other quick ways do you use to prove the efficacy of your courses? Tweet us your thoughts @elucidat.
- Download and read our guide on How to Deliver and Prove the Business Value of Your Elearning
- Read our article on 5 Best Practices to Help You Deliver Real ROI from Learning