Learning Evaluation

A practical guide to learning evaluation

In eLearning Design by Sean


I have worked with learning evaluation teams, mostly from the instructional design side: creating objectives and assessments, conducting observations, and then handing over the data for analysis. I’ll share some of what I have learned. As a reminder, Kirkpatrick’s levels are commonly used to define the depth of evaluation that will take place:

 

Level 1 Reaction: To what degree participants react favorably to the learning event.

Level 2 Learning: To what degree participants acquire the intended knowledge, skills and attitudes based on the learning experience.

Level 3 Behavior: To what degree participants apply what they learned during training when they are back on the job.

Level 4 Results: To what degree your targeted outcomes occur as a result of the training.

Level 5 Return on Investment: Measure the cost savings from your business results against the cost of the training (a quick worked example follows this list).
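
As a rough illustration of the Level 5 arithmetic, here is a minimal sketch in Python; the figures are entirely hypothetical and only show the basic calculation:

# Minimal ROI sketch with hypothetical figures, not real project data.
monetary_benefit = 120_000.0  # estimated annual cost saving attributed to the training
training_cost = 40_000.0      # design, delivery, participant time, etc.

net_benefit = monetary_benefit - training_cost
roi_percent = (net_benefit / training_cost) * 100

print(f"Net benefit: {net_benefit:,.0f}")   # 80,000
print(f"ROI: {roi_percent:.0f}%")           # 200%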

 

From my experience, here are some of my tips:

 

You need to define the level of evaluation well before the design of the program starts, as there is often data to be collected before the training/course takes place.

When creating your learning objectives, keep evaluation in mind, i.e. be clear how you plan to measure each of them afterwards.

Level 2 can be measured via tests within the course or a certain period afterwards. Often the same pre- and post-test (or at least the same question bank) is used so that the results can be compared. Doing the test as part of the course can be logistically easier and helps ensure you capture all results, but it doesn’t account for how long the learning is retained over time.
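
As a minimal sketch of that Level 2 comparison, assuming you have matched pre- and post-test scores for the same participants (the scores below are made up), you could look at the average gain and, if you want, a paired t-test:

from statistics import mean
from scipy import stats  # only needed for the optional paired t-test

# Hypothetical matched scores: same participants, same question bank.
pre_scores  = [55, 60, 48, 70, 62, 58, 65, 50]
post_scores = [72, 78, 60, 85, 70, 74, 80, 66]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Average gain: {mean(gains):.1f} points")

# Paired t-test: are post-test scores reliably higher than pre-test scores?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")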

Level 3 can be achieved via observation or a review of existing data. Note that for Level 3 it is important to have the ‘before’ sample in place. For example, in a customer service scenario you wish to improve the experience shop assistants deliver at the counter. Let’s say the learning objectives are to improve the customer greeting, the up-sell and the closing. You observe a sample of interactions and rate each section of the experience on a scale you have created, then repeat the activity after the training is complete to compare the results.
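
Continuing that shop-assistant example, here is a minimal sketch of how the before/after observation ratings might be compared; the 1-5 scale and the ratings themselves are hypothetical:

from statistics import mean

# Hypothetical observation ratings on a 1-5 scale, one value per observed interaction.
before = {
    "greeting": [2, 3, 2, 3, 2],
    "up-sell":  [1, 2, 2, 1, 2],
    "closing":  [3, 3, 2, 3, 3],
}
after = {
    "greeting": [4, 4, 3, 4, 5],
    "up-sell":  [3, 2, 3, 3, 4],
    "closing":  [4, 3, 4, 4, 4],
}

# Average rating per learning objective, before vs. after the training.
for section in before:
    change = mean(after[section]) - mean(before[section])
    print(f"{section}: {mean(before[section]):.1f} -> {mean(after[section]):.1f} ({change:+.1f})")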

Sources of data may already exist, depending on the scenario at hand. The measures you choose to track a behavior are known as your ‘leading indicators’, and there are ideas for these online for various scenarios.

Level 4 results can be measured by reviewing statistics and performance reports from before and after the training.
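
For example, a minimal sketch comparing a few business metrics before and after the training; the metric names and values are placeholders:

# Hypothetical Level 4 figures pulled from performance reports.
metrics = {
    # metric name: (before, after)
    "average transaction value": (18.50, 21.10),
    "customer satisfaction score": (7.2, 7.9),
    "complaints per week": (14, 9),
}

for name, (before, after) in metrics.items():
    pct_change = (after - before) / before * 100
    print(f"{name}: {before} -> {after} ({pct_change:+.1f}%)")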

There is a common discussion about how we know it was the training or learning experience that had the impact, as it could have been any number of things. This is true, but there are techniques to isolate the training impact by accounting for other activities that are occurring at the same time. For now, let’s just say it is important to consider this.
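
One common technique, sketched here with assumed figures rather than as the definitive approach, is to compare the change in a trained group against the change in a similar group that was not trained (a simple difference-in-differences):

from statistics import mean

# Hypothetical weekly sales per person for a trained group and a comparable untrained (control) group.
trained_before = [102, 98, 110, 95, 105]
trained_after  = [118, 112, 125, 108, 120]
control_before = [100, 97, 108, 96, 103]
control_after  = [104, 100, 111, 99, 106]

trained_change = mean(trained_after) - mean(trained_before)
control_change = mean(control_after) - mean(control_before)

# The control group's change approximates what would have happened anyway
# (seasonality, promotions, other initiatives), so subtract it out.
training_effect = trained_change - control_change
print(f"Trained group change:      {trained_change:+.1f}")
print(f"Control group change:      {control_change:+.1f}")
print(f"Estimated training effect: {training_effect:+.1f}")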

Learning evaluation is very much about data analysis, whether utilizing existing data or creating new data from qualitative research. Having someone on board who understands statistical analysis is important for producing meaningful results.

 

Measuring the results of training or learning experiences is a whole skill set in itself and really should be considered a separate project from the course design and delivery itself. I have seen a consistent bias towards underestimating the time required to complete meaningful evaluation. For Level 3 and above, it should be a considerable percentage of the overall project time. If you are limited on time, as we often are, consider focusing on Levels 1 and 2 and make implementing what you learn from them the priority.

 

If you are about to implement an LMS or are planning to deliver a custom-made training program in your company, feel free to contact us for a free phone consultation about what would work best for your business needs.