Anderson Model of Training Evaluation

Anderson’s Value of Learning Model was published in 2006 by the Chartered Institute of Personnel and Development.  While it lacks the publicity of the Kirkpatrick Model of Training Evaluation, it offers the advantage of recency in an industry saturated with failed trainings.  Here’s how it works:

In three stages, Anderson’s Value of Learning Model seeks to conquer the Evaluation Challenge and the Value Challenge, the two main struggles faced by organizations.  Organizations report that they “struggle to do evaluations well” and “require evidence showing the value of learning and training.”  In other words, they want to know that their training works, but struggle to find the evidence.  Here are the three stages Anderson’s Value of Learning Model uses to accomplish these two tasks.

Determine current alignment against strategic priorities.  This stage asks an important question: Is our training in line with our strategy?  If a company’s goal is to drive sales, do the trainings support that goal?

Use a range of methods to assess and evaluate the contribution of learning.  This stage outlines four areas of evaluation: Return on Investment Measures (the cost of learning programs vs. the bottom line; see the sketch below), Return on Expectation Measures (has the expectation of your training been met?), Benchmark and Capacity Measures (how are we doing relative to other organizations?), and Learning Function Measures (how efficiently does the program run in your business?).

Establish the most relevant approaches for your organization.
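
To make the Return on Investment measure from stage two concrete, here is a minimal sketch in Python.  The figures and the simple benefit estimate are invented for illustration and are not part of the CIPD model itself.

    # Hypothetical illustration of Anderson's Return on Investment measure.
    # Both figures below are invented for the example.
    program_cost = 40_000.00        # delivery, materials, and trainee time (assumed)
    estimated_benefit = 55_000.00   # estimated bottom-line contribution (assumed)

    # ROI compares what the program returned against what it cost.
    roi_percent = (estimated_benefit - program_cost) / program_cost * 100
    print(f"Estimated ROI: {roi_percent:.1f}%")  # -> Estimated ROI: 37.5%

Simple as the arithmetic is, the hard part is the estimated benefit: that number only exists if the organization has already collected performance data, which is exactly the struggle described above.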

As with the Kirkpatrick Model, the most difficult challenge is collecting the data needed to determine where an organization stands in each of the three stages.  When we ask organizations how they collect the needed data, we hear, “It’s difficult.”  Anderson’s Value of Learning Model is a wonderful model.  If you solve the data collection issue, the model becomes much more useful.  See how Sprezie can help your organization effortlessly get the data required to use Anderson’s model.

 

The TVS Model of Training Evaluation

The TVS Model of Training Evaluation is a lesser-known but incredibly useful evaluation model.  You may be familiar with the Kirkpatrick Model (est. 1959), the CIPP Model (est. 1987), and the IPO Model (est. 1990) of evaluation.  Every few years (or decades) a new training evaluation model is released, and, like the TVS Model, it oftentimes serves as a more contemporary take on previous models.  This is necessary because, well, organizations change.  For instance, Kaufman’s Model of Learning Evaluation expands upon the Kirkpatrick Model’s Level 1 (reaction to the training), splitting it into two distinct parts: one for the gathering of resources and one for the delivery of those materials.

At the time Kirkpatrick’s Model was released, YouTube didn’t exist.  Now, considering the resources available to us (social media, YouTube, etc., etc., ETC.), the collection of resources is a demanding step all on its own.  The TVS Model of Training Evaluation (est. 1994) is no different, except that it serves as a more modern expansion upon a wonderful idea.

Here’s how the TVS Model works, in four steps.

  1. Situation: Evaluate (there’s that word again) current performance and decide how you’d like to perform in the future.
  2. Intervention: Figure out why a gap exists between current performance and desired performance.  This is also where you decide whether training is the right solution to close the gap (see how Sprezie deals with this step), a function unique to the TVS Model of Training Evaluation.
  3. Impact: This is the tough one: evaluate the difference between pre- and post-training data.
  4. Value: Measure the difference in performance in terms of dollars (see the sketch after this list).
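
Here is a minimal sketch of steps 3 and 4, Impact and Value, in Python.  Every number in it is invented for illustration; the TVS Model itself does not prescribe these figures or this particular performance measure.

    # Hypothetical sketch of TVS steps 3 and 4: Impact (the pre vs. post difference)
    # and Value (that difference expressed in dollars). All numbers are invented.
    pre_training_output = 120    # average units handled per week before training (assumed)
    post_training_output = 138   # the same measure after training (assumed)
    value_per_unit = 50.00       # dollar value the business assigns to one unit (assumed)

    impact = post_training_output - pre_training_output   # step 3: Impact
    value = impact * value_per_unit                        # step 4: Value
    print(f"Impact: +{impact} units/week, Value: ${value:,.2f}/week")
    # -> Impact: +18 units/week, Value: $900.00/week

The catch, of course, is that the pre- and post-training numbers have to come from somewhere, which is exactly the pain discussed next.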

So let’s talk about Step 3: Impact.  It seems that step three in ANY learning evaluation model is the doozy.  The TVS Model of Training Evaluation is no different.  But here’s the deal: we understand your pain.  Only a small percentage of organizations using Kirkpatrick’s Model, for instance, ever realize Level 3.  Unfortunately, Level 3 often stands as an insurmountable barrier to Level 4, and Level 4 (seeing the value of training) is every trainer’s ultimate goal.  The reason for the difficulty?  Behavior change data is hard to gather.  See how Sprezie works with the TVS Model of Training Evaluation to make Step 3 a breeze.

CIPP Model of Training Outcome Evaluation

According to Daniel L. Stufflebeam, the creator of the CIPP Model of Evaluation, “The primary reason for evaluation is to aid in decision-making and thereby help us improve what we are doing.”  The CIPP Evaluation Model is designed to guide evaluators and stakeholders to the proper questions at the beginning of a project, while it’s being worked on, and at its end.

The CIPP Model of Evaluation represents four concepts:

  1. Context
  2. Input
  3. Process
  4. Product

Context, Input, and Process are considered to be “formative” steps intended to produce the Product, which is the “summative” step.  

Context is essentially the same thing as Stephen Covey’s “Begin with the end in mind.”  Managers check to make sure that goals fit the organization’s needs and decide whether or not the project’s objectives will - if accomplished - lead to the realization of those goals.  In order for the CIPP Model of Evaluation to work, one must be crystal clear about company objectives.

If the CIPP Evaluation Model were a recipe, Inputs would serve as the ingredients: goals, strategies, the skills of those who learn, the skills of those who teach, the books/materials/equipment needed, yadda yadda.  Basically, inputs are anything needed to accomplish the project within the context of a company.

The Process is the challenging concept.  Every training evaluation has a need to, well, evaluate.  And that requires a lot of data.  In other words: work, time, and money.  Process seeks to assess the participants’ willingness and ability to carry out their roles.  Is Jenny on top of her (insert objective here)?  Has Joey been applying his goal to do (insert goal here)?  There are only three ways to find out:  Evaluate, evaluate, EVALUATE!  And I think managers agree that evaluation oftentimes proves more costly than beneficial.  (See how Sprezie makes this process easy.)

Product is simple:  Did the project succeed?  One caveat to this step’s simplicity is the MOUND of evaluation that needs to take place.  And even more evaluation is needed if a company wants to assess success throughout the entirety of the project.
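
One lightweight way to keep the four concepts straight is to jot them down as a checklist.  The Python sketch below does exactly that; the wording of the questions is paraphrased from the descriptions above, not taken from Stufflebeam.

    # A rough checklist of CIPP's four concepts. The role labels follow the
    # formative/summative split described above; the questions are paraphrased.
    cipp_checklist = {
        "Context": ("formative", "Do the project's objectives fit the organization's needs?"),
        "Input":   ("formative", "Do we have the goals, strategies, skills, and materials required?"),
        "Process": ("formative", "Are participants willing and able to carry out their roles?"),
        "Product": ("summative", "Did the project succeed?"),
    }

    for stage, (role, question) in cipp_checklist.items():
        print(f"{stage:<8} ({role}): {question}")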

 

Kaufman’s Model Of Learning Evaluation

In Kaufman’s Model of Learning Evaluation - which is essentially a contemporary take on Kirkpatrick’s Model - there are five levels of evaluation that need to take place:

  1. Input:  What are the resources available to us?  This could be anything: digital videos, handouts, or any other training resources.
  2. Process:  The delivery of those materials.
  3. Micro:  This combines Kirkpatrick’s Model of Training Evaluation steps 2 and 3: Did the trainee learn the concepts, AND did the trainee apply what he or she learned?
  4. Macro:  The performance of the company vs. the cost of training.
  5. Mega:  Measures (or attempts to measure) societal benefit.
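
As a quick aide-memoire, the sketch below pairs each of the five levels with the kind of evidence an organization might collect for it.  The example metrics are our own assumptions, not part of Kaufman’s model.

    # Hypothetical pairing of Kaufman's five levels with example evidence.
    # The metrics in the third column are assumptions made for illustration.
    kaufman_levels = [
        ("Input",   "Were the right resources available?",        "count of videos and handouts sourced"),
        ("Process", "Were the materials delivered effectively?",  "attendance and delivery ratings"),
        ("Micro",   "Did trainees learn and apply the material?", "test scores plus on-the-job observations"),
        ("Macro",   "Did company performance outweigh the cost?", "performance gain vs. training spend"),
        ("Mega",    "Did society benefit?",                       "client or community outcomes"),
    ]

    for name, question, example_metric in kaufman_levels:
        print(f"{name:<8} {question} (e.g., {example_metric})")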

Kaufman’s Model of Learning Evaluation gains an advantage by expanding upon Level 1 of Kirkpatrick’s Model of Training Evaluation (reaction).  It’s 2017 (at least, it was when this was written), and we have a whole world of training resources at our fingertips.  This is why Kaufman’s model treats Levels 1 and 2 as separate entities: nowadays, gathering and deciding upon which resources to use is a daunting task all by itself.

However, any savvy trainer can make an attempt at Kaufman’s Model of Learning Evaluation and say, “Okay, step one, step two… got it…  Step three… oh.”  And then your boss asks, “Hey, how goes the training?”

And then you get that pit in your stomach.

As usual, the big question raised by any learning evaluation model is this: How do we generate all this data?  Distributing the material at hand can be an easy task.  What’s always difficult, however, is determining the efficacy of a training program (Kaufman’s Model Level 3): making sure trainees apply what they learned.  It can seem a bit like herding cats.  And it always costs a lot, because of the labor required or the programs used.

See how Sprezie makes this process effortless.

The Kirkpatrick Model of Training Evaluation

The Kirkpatrick Training Evaluation Model:
Simple, Brilliant and Now Practical Thanks to Sprezie

The Kirkpatrick Model is the most widely accepted training evaluation model in the world. In fact, there are more Google searches associated with “Kirkpatrick” than with “training evaluation.” The model was developed by Dr. Donald L. Kirkpatrick in 1959, and it has changed how the world thinks about measuring training.  It is a wonderful model.  However, while it is simple to understand, few use it because it’s too difficult and expensive to implement - until now.

Here’s how the Kirkpatrick Model of Training Evaluation works:

Did people:

  1. Like the training
  2. Learn the concepts
  3. Use the behaviors
  4. Produce the desired organizational results
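
To see why Levels 3 and 4 are where organizations stall (more on that below), here is a toy Python sketch that records, for a handful of imaginary trainees, how far the evaluation evidence reaches.  The trainee records and field names are fabricated for illustration; only the four levels themselves come from Kirkpatrick.

    # Illustrative only: tally how far each trainee's evaluation evidence reaches.
    # The records below are fabricated; the four fields mirror Kirkpatrick's levels.
    from collections import Counter

    trainees = [
        {"liked": True, "learned": True,  "used": False, "results": False},
        {"liked": True, "learned": True,  "used": True,  "results": False},
        {"liked": True, "learned": False, "used": False, "results": False},
    ]

    def highest_level(record):
        """Highest consecutive Kirkpatrick level for which we have evidence."""
        level = 0
        for key in ("liked", "learned", "used", "results"):
            if not record[key]:
                break
            level += 1
        return level

    print(Counter(highest_level(t) for t in trainees))
    # e.g., Counter({2: 1, 3: 1, 1: 1}) - most evidence stops before Levels 3 and 4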

Kirkpatrick and Associates refer to Level 3 as the missing link.   It's the most time-consuming and difficult to measure, stopping most organizations from even making the attempt. 

Now, perhaps you’re familiar with the qualm, “We would love to know our training works, but we... uh… can’t.  We don’t know how to measure it.”  We’ve got good news for you: you’re in some high-caliber company.

The Kirkpatrick Barriers

At a recent ATD conference, an informal survey of attendees found that 85% of the people in attendance knew the ins and outs of the Kirkpatrick model.  Of those who knew the model, 30% had made it to Level 2 (learn the concepts).  Here’s the interesting part: less than one percent said they had achieved Level 4.  One person explained how his organization had spent tens of thousands of dollars to reach Level 4 for a single training program. The process was so expensive and time-intensive that they could never do it again. Therein lies the Kirkpatrick problem - it’s easy to understand and extremely difficult to implement.

In our research we found three significant barriers to implementing the Kirkpatrick Model.

First, most organizations do not have the research experts or experience needed to collect the necessary data.  Writing a good survey question is difficult.  Focus groups are more difficult.  Randomized controlled trials are extremely difficult, even for people with advanced degrees.  Organizations don’t keep researchers in their training departments, and even if they did, training populations constantly move throughout the organization.  This renders data collection almost impossible, and implementing the Kirkpatrick Model a Herculean task.

Second, everything is extra work, requiring its own project management, budget, and resources.  To do it right using the Kirkpatrick Model, measuring training can be more expensive than the training itself.  In addition to extra work, it often requires extra people, so that the people doing the evaluation are not the same people doing the training.  Few organizations have the budget, time, or resources to do a full Level 4 evaluation.

Third, there tends to be little benefit for the trainee to participate after the training is over.  All the data goes to the organization and the trainers.  The way the model is designed, trainees don’t see the results, but the Kirkpatrick Model still requires their time and effort.  Data collection is usually a difficult task, and it is even more difficult when there is no incentive for people to participate.

Sprezie removes each of the three barriers

First, using Sprezie, each trainee sets a Do Differently™ goal and that goal becomes the basis of all the questions that follow.  No more experts needed. 

Second, data collection is built into the Sprezie process of supporting trainees as they work to develop new behaviors on the job.  Data collection is not a separate process; it’s a by-product of achieving the primary training objective - change.

Third, Sprezie is designed to help each trainee succeed, which is why trainees are motivated to participate.  Trainees have access to their own data, so they can use it to track their progress.

Sprezie is the missing link that makes the Kirkpatrick Model of Training Evaluation practical.