Evaluation: How to approach the 5th phase in the elearning production process

Getting an elearning project off the ground in the New Year? Need some guidance about how to approach the final phase of the process? In this article we’ll look at key steps in evaluating your project.

To kick off the New Year, we are examining good practice in setting up an elearning production process. So far we’ve looked at the first four phases: analysis, design, specification and production. In this week’s post we’ll look at the final phase, evaluation.

Evaluation involves checking the project against your original specification. It’s important to test the product in all the technical environments that you specified at the start.

1. QA (Quality Assurance) Testing

Let’s have a look at QA testing, which typically covers two broad areas: content and functionality.

Content is mostly about proofing at this stage, checking that the ‘t’s are crossed and the ‘i’s are dotted. Make sure you check against a style guide.

Functionality is mainly technical: does the project behave correctly in all required technical environments? With more browsers and platforms coming onto the market all the time, it's important to restrict testing to the environments most appropriate for the project, and ideally to those you specified at the beginning of the elearning production process.

Depending on what you are doing, QA testing can fall under any of the following categories:

Multi device testing

In 2016, your project may need to work on several different devices, so you must test on each device type. Services such as BrowserStack enable you to test in different environments, but if the project needs to run on a touchscreen device, you should test it on the actual hardware so you can see how the touchscreen behaves.


Multi browser testing

BrowserStack can help you quickly test your course on different operating systems and browsers. Remember to test both portrait and landscape views. You can learn more about this in a great article by Elucidat's Debbie Hill.
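Before booking testing time, it helps to enumerate exactly which combinations need checking. Here is a minimal sketch in Python that builds a test matrix from a hypothetical spec; the operating systems, browsers, and excluded pairs shown are illustrative, not a recommendation, and your own specification should drive the lists:

```python
from itertools import product

# Hypothetical target environments lifted from a project specification.
operating_systems = ["Windows 10", "macOS", "iOS", "Android"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
orientations = ["portrait", "landscape"]

# Pairs the spec rules out because the browser is not relevant on that OS.
excluded = {
    ("Windows 10", "Safari"),
    ("macOS", "Edge"),
    ("Android", "Safari"),
    ("Android", "Edge"),
}

def build_test_matrix():
    """Return every (OS, browser, orientation) combination to check."""
    return [
        (os_name, browser, orientation)
        for os_name, browser, orientation in product(
            operating_systems, browsers, orientations
        )
        if (os_name, browser) not in excluded
    ]

matrix = build_test_matrix()
for os_name, browser, orientation in matrix:
    print(f"[ ] {os_name} / {browser} / {orientation}")
```

Printing the matrix as a checklist makes it easy to hand testing off to colleagues and to spot combinations that were specified but never checked.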


Stress testing: try to break it

Stress testing lets you see how your course performs at and beyond the specified number of concurrent users. For example, there are services that can bombard your course with requests from multiple servers so you can analyze how it handles the increase in traffic. If you are deploying your project on an LMS, you can test the course in situ.
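The idea behind a stress test can be sketched in a few lines of Python. This is a simplified simulation, not a load-testing tool: `simulated_request` is a hypothetical stand-in for a learner loading the course, and a real test would issue HTTP requests against the hosted course instead:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id: int) -> float:
    """Stand-in for one learner loading the course; returns latency in seconds.

    A real stress test would issue an HTTP request to the hosted course here.
    """
    started = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # pretend network + server time
    return time.perf_counter() - started

def stress_test(concurrent_users: int) -> dict:
    """Fire requests for `concurrent_users` learners at once and summarise."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(simulated_request, range(concurrent_users)))
    return {
        "users": concurrent_users,
        "max_latency": max(latencies),
        "mean_latency": sum(latencies) / len(latencies),
    }

# Ramp past the number of concurrent users the spec promised (say, 50)
# and watch how the latency figures change as the load grows.
for load in (50, 100, 200):
    report = stress_test(load)
    print(f"{report['users']:>3} users: mean {report['mean_latency']:.4f}s")
```

The point of ramping the load in steps is to find where performance starts to degrade relative to the concurrency figure in your spec, rather than simply confirming the course survives one fixed level of traffic.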


If your course needs to be in different languages, make sure you test by both exporting and importing the course so you can see if the languages are translated correctly. In most cases you will need to engage someone who can read and write the target language to check this effectively.
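Two failure modes are easy to catch automatically before handing the course to a native speaker: strings that went missing in the round trip, and strings that came back identical to the source. A minimal sketch, assuming a hypothetical key–value string table for the export and import:

```python
# Hypothetical string tables: what was exported for translation and what
# came back after import. Keys are screen identifiers; values are the text.
exported_en = {
    "welcome.title": "Welcome to the course",
    "welcome.body": "Select Start to begin.",
    "quiz.submit": "Submit answer",
}
imported_fr = {
    "welcome.title": "Bienvenue dans le cours",
    "welcome.body": "Select Start to begin.",  # came back untranslated
    # "quiz.submit" is missing: dropped somewhere in the round trip
}

def check_round_trip(source: dict, translated: dict) -> dict:
    """Flag keys that were lost or came back identical to the source text."""
    missing = sorted(set(source) - set(translated))
    untranslated = sorted(
        key for key in source
        if key in translated and translated[key] == source[key]
    )
    return {"missing": missing, "untranslated": untranslated}

issues = check_round_trip(exported_en, imported_fr)
print("Missing keys:     ", issues["missing"])
print("Untranslated keys:", issues["untranslated"])
```

A check like this only proves the text survived the round trip; whether the translation is actually correct still needs a human reader of the target language.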

2. Acceptance testing

Acceptance testing is done to determine if the course meets the requirements originally set out in the specification. So go back to your spec and test against all the aspects listed there. Ideally, these should include use cases, which set out how actual users might interact with the course.

For example, test against the workflow that a group of people might embark on. This is really about making sure the course has the integrity to support a valid learning experience, and that it works in the real world.

Use the workflow or a use case to create a test plan to help you drive the testing. Here are some examples of things you might build into the plan:

  • First impressions: is it clear what the user is being asked to do?
  • Navigation: is the layout intuitive, and were users able to find what they needed?
  • Functionality: did users experience any technical issues?
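A test plan like the one above can be captured as a simple data structure so that results are recorded consistently across testers. This is a minimal sketch with hypothetical field names, seeded with the three checks just listed:

```python
# A minimal, hypothetical test plan: each entry pairs an area of the course
# with the question a tester answers, plus slots to record the outcome.
test_plan = [
    {"area": "First impressions",
     "check": "Is it clear what the user is being asked to do?"},
    {"area": "Navigation",
     "check": "Is the layout intuitive, and can users find what they need?"},
    {"area": "Functionality",
     "check": "Did users experience any technical issues?"},
]

def record_result(plan, area, passed, notes=""):
    """Mark every item in the given area as passed or failed."""
    for item in plan:
        if item["area"] == area:
            item["result"] = "pass" if passed else "fail"
            item["notes"] = notes
    return plan

# Example: a tester found the menu unusable on small screens.
record_result(test_plan, "Navigation", False, "Menu hidden on small screens")
failures = [item for item in test_plan if item.get("result") == "fail"]
print(f"{len(failures)} failing item(s)")
```

Keeping the plan as data means failures can be counted, filtered, and fed straight into a fix list rather than living in scattered notes.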

3. Engagement

Too much elearning is released and then forgotten about. With Google Analytics you can get very detailed stats about how your course is performing, providing valuable information about how people are using it and how you can improve it.

Here are some things you should analyze from Google Analytics:

  • How long is someone spending on a page?
  • Are some pages more popular than others?
  • How long does your course really take? If you say it takes 10 minutes but it takes people 30, it may be over-scoped, and you can scope the next project more accurately.
  • Where are people accessing the course from?
  • Are people doing this in their own time, or are they doing it in work time? You can look at daily and weekly patterns to identify spikes.
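The over-scoping check above is simple arithmetic once you have the data. Here is a minimal sketch using Python's standard library; the completion times and the 1.5× threshold are illustrative assumptions, not figures from any real course:

```python
from statistics import median

# Hypothetical per-learner completion times (minutes) pulled from analytics.
completion_minutes = [28, 31, 25, 35, 30, 29, 33, 27, 32, 30]
claimed_minutes = 10

def scoping_check(times, claimed):
    """Compare the advertised duration with what learners actually take.

    The median resists distortion from learners who leave a tab open,
    and the 1.5x threshold is an arbitrary, adjustable rule of thumb.
    """
    typical = median(times)
    return {
        "median_minutes": typical,
        "over_scoped": typical > 1.5 * claimed,
    }

result = scoping_check(completion_minutes, claimed_minutes)
print(f"Claimed {claimed_minutes} min, median actual {result['median_minutes']} min")
if result["over_scoped"]:
    print("Course takes far longer than advertised; revisit the scope.")
```

With the sample data above, the median actual time is three times the claimed duration, which is exactly the over-scoping signal the bullet point describes.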


This kind of data allows you to make tweaks to improve the learning experience. For example, you may decide to provide extra information to support a task or decision.

The data may also allow you to see correlations so you can start looking for interesting trends. It might be the time of day versus the pass rate, and you can use this information to suggest the ideal time for people to be working on the course.

You can take it one step further and start to correlate data against other systems you have in your organization, such as Salesforce. For example, an Elucidat customer may be exploring correlations between Salesforce data and real sales in the field. This data enables them to measure how effective the training materials are in delivering ROI.

Data is powerful, and it provides real intelligence to help improve learning.

Final thoughts

The more disciplined you were about the specification, and the better you understood your audience at the beginning of the process, the simpler the evaluation phase will be. Create a test plan that sets out exactly what has to be tested, and in what order, and you'll find the evaluation phase surprisingly manageable.

Steve Penfold

Steve Penfold is the Chief Executive of Elucidat. He helps large companies and training providers speed up and simplify their elearning authoring.
