In Kaufman’s Model of Learning Evaluation, which is essentially a contemporary take on Kirkpatrick’s Model, there are five levels of evaluation that need to take place:
- Input: What resources are available to us? This could be anything: digital videos, handouts, or any other training materials.
- Process: The delivery of those materials.
- Micro: This combines steps 2 and 3 of Kirkpatrick’s Model of Training Evaluation: Did the trainee learn from the training? And did the trainee apply what they learned?
- Macro: The performance of the company versus the cost of the training.
- Mega: Measures (or attempts to measure) the benefit to society.
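To make the Macro level concrete, here is a minimal sketch of a return-on-investment calculation, assuming you can attach a monetary value to the performance gains from training. The function name and figures are hypothetical, used only for illustration:

```python
def training_roi(monetary_benefit, training_cost):
    """Return ROI as a percentage: (benefit - cost) / cost * 100."""
    return (monetary_benefit - training_cost) / training_cost * 100

# Hypothetical figures: $12,000 in improved performance
# against $8,000 spent on training.
print(training_roi(12_000, 8_000))  # 50.0
```

The hard part, of course, is not the arithmetic but attributing a dollar figure to the benefit in the first place.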
Kaufman’s Model of Learning Evaluation gains an advantage by expanding upon Level 1 of Kirkpatrick’s Model of Training Evaluation (reaction). We live in 2017 (at least, we did when this was written) and have a whole world of training resources at our fingertips. This is why Kaufman’s model treats levels 1 and 2 as separate entities: nowadays, gathering resources and deciding which ones to use is a daunting task all by itself.
However, any savvy trainer attempting Kaufman’s Model of Learning Evaluation will say, “Okay, step one, step two… got it… step three… oh.” And then your boss asks, “Hey, how goes the training?”
And then you get that pit in your stomach.
As usual, the big question raised by any learning evaluation model is this: How do we generate all this data? Distributing the training materials is easy enough. What is always difficult is determining the efficacy of a training program (Kaufman’s Level 3, Micro): making sure trainees apply what they learned. It can feel a bit like herding cats, and it always costs a lot because of the labor or tools required.