Please feel free to use this page to build a collaborative course resource for others to share

Here's something to be going on with, from Cathy Moore (who else?).

Nuts and Bolts: a refreshingly concise look at evaluation.

In 1959, Donald Kirkpatrick published a taxonomy (not a model, not a theory) of criteria for evaluating instruction that is widely regarded as the standard for evaluating training. Most often referred to as “Levels,” the Kirkpatrick taxonomy classifies types of evaluation as:
  1. Type (Level) 1: Learner satisfaction;
  2. Type (Level) 2: Learner demonstration of understanding;
  3. Type (Level) 3: Learner demonstration of skills or behaviors on the job; and
  4. Type (Level) 4: Impact of those new behaviors or skills on the organization (results).

Compare it with this one:

Brinkerhoff's "Success Case Method" (SCM) identifies both positive results and the organizational factors that supported or hindered the training effort. Unlike the activities around Kirkpatrick's levels, Brinkerhoff's SCM helps tell us how to "fix" training that may not be as effective as hoped. At the risk of oversimplifying his approach, he suggests we can learn best from the outliers: those who have been most and least successful in applying new learning to work. The method asks evaluators to:
  1. Identify individuals or teams that have been most successful in using some new capability or method provided through the training;
  2. Document the nature of the success; and
  3. Compare to instances of non-success.

Then there's Stufflebeam, who focuses less on proving past performance and more on improving future efforts. He offers a framework for evaluation that views training as part of a system. Evaluators can employ the Stufflebeam model even with programs in progress, so it serves as a means of formative as well as summative evaluation.