SONATAnotes

Why you are probably asking learners the wrong questions…


People in the learning industry love training evaluation surveys. Just as you can’t have a birthday party without cake, it seems you can’t end a training session or e-learning module without a lengthy evaluation form asking learners to rate everything from the instructor’s fashion sense to the background music.

Sarcasm aside, the rationale for training evaluation surveys seems obvious. How can we possibly develop effective, learner-centered training without asking learners for their feedback? Learner feedback has the added benefit of being easier to collect than other performance metrics and easier for stakeholders outside the training department to understand.

The problem is –

most training evaluation surveys are NOT learner-centered, but rather training-department-centered. Instead of asking learners “Did this learning experience meet your needs?” we ask “How well did we do our job?” And that is usually not a question our learners can answer.

When we turn to learners and say “tell us what you want” instead of seeking deeper insight into their needs, we abdicate our responsibility as learning professionals. Fortunately, there are better ways to design and use training evaluation surveys to gather data that can truly help us improve our learners’ experience.

A Customer Satisfaction Tool: No More, No Less

Most organizations evaluate their training using some version of the “Kirkpatrick Model of Training Evaluation”, popularized by organizational psychologist Donald Kirkpatrick in the 1950s. The Kirkpatrick Model presents learner feedback (“reaction”) as the first of four levels of evaluation, describing it as “basically a measure of customer satisfaction.”

Because a learner’s immediate reaction is easier to measure than long-term performance improvement, organizations will often ignore Kirkpatrick’s other three levels of training evaluation and judge the effectiveness of their training programs solely on the basis of participant feedback.

This is problematic, as –

while learner reaction data can tell us that our training programs might need improvement – on their own, training evaluation forms cannot tell us how to improve a training program.

For example, when restaurants ask patrons to fill out comment cards, they might use that feedback to evolve their menu, but they aren’t looking for patrons’ advice on exactly how much turmeric to put in the hollandaise sauce. Similarly, if drivers rated a car’s handling poorly, the manufacturer might make some adjustments in the design of next year’s model, but they wouldn’t ask customers to dictate the suspension tuning.

Yet that’s exactly what we are doing when we ask learners overly specific process- and design-related questions on training evaluation forms, like “the e-learning module was visually appealing and the graphs helped me to understand the concepts being illustrated – agree or disagree” or “there were an appropriate number of activities and opportunities for interaction during the workshop – agree or disagree”.

Granted, if our design choices or delivery are so poor that even a non-expert can spot the deficiencies, such questions might be helpful. Otherwise, we are asking learners to give an “expert” opinion on matters they likely haven’t thought much about before.

Learners as “Net Promoters”

If we accept that training evaluation forms are primarily for measuring customer satisfaction, not design feedback, how might that change the questions we ask and how we use training evaluation data to improve the learning experience?

We can start by taking a cue from marketing, where the art and science of measuring customer satisfaction have evolved considerably since Don Kirkpatrick defined his training evaluation model back in the 1950s.

A recent development in that field which we can readily apply to learning is the “Net Promoter Score” (NPS).

The NPS model is concerned with one question above all else: how enthusiastically would customers (learners) recommend our services (training) to their peers?

While it uses a familiar scale of 0 to 10 (we typically condense it to 1 to 5), the Net Promoter Score model sets the bar for satisfaction high.

  • A 9 or 10 (or 5/5) indicates someone is a “promoter” and will enthusiastically recommend our training.
  • 7 or 8 (4/5) is “passive”, meaning the learner might speak favorably of the training if asked but not actively champion it.
  • A rating of 6 (3/5) or lower indicates a “detractor”. If someone were to ask the learner about our training, they would be indifferent at best, harshly critical at worst.

“But wait,” you might ask, “3 (or 5-6) represents the middle of the scale! Shouldn’t we look at that as neutral, rather than negative?”

Alas, the reality is that most people are inclined to be nice, and will rate their experience more favorably on a training evaluation form than they would in a private discussion with their peers. Given this, we should take anyone’s willingness to give a low mark as a sign of strong dissatisfaction. In other words, if you see more than a few detractor ratings, don’t dismiss them as outliers – view them as smoke, and start looking for hidden fires.
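To make the bucketing concrete, here is a minimal sketch in Python (the ratings below are invented sample data, and the arithmetic is the conventional NPS calculation – promoter share minus detractor share – applied to our condensed 1-to-5 scale):

    # Classify 5-point ratings into NPS-style buckets and compute a net score.
    # 5 = promoter, 4 = passive, 3 or lower = detractor.
    def nps_bucket(rating: int) -> str:
        if rating == 5:
            return "promoter"
        if rating == 4:
            return "passive"
        return "detractor"

    ratings = [5, 4, 4, 5, 3, 5, 2, 4, 5, 3]  # invented example data

    buckets = [nps_bucket(r) for r in ratings]
    promoter_share = buckets.count("promoter") / len(buckets)
    detractor_share = buckets.count("detractor") / len(buckets)

    # Conventional NPS: percentage of promoters minus percentage of detractors.
    net_score = round((promoter_share - detractor_share) * 100)
    print(f"Promoters: {promoter_share:.0%}  Detractors: {detractor_share:.0%}  Net score: {net_score}")

Even a handful of detractor ratings drags the net score down quickly – which is exactly the “smoke” worth investigating.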

It can be scary to recalibrate our training evaluation forms in a way that might paint a less rosy picture for our stakeholders inside and outside the training department. However, by raising the bar for ourselves, we can start converting learners into allies and champions, instead of settling for polite indifference.

The Right Questions

If we only ask one question on a training evaluation form, it should be “Would you recommend this training to others in your position?” That said, there is value in collecting other feedback, so long as we keep it short and focus on gauging satisfaction, not asking for design guidance.

At Sonata Learning, we typically present the following statements on feedback surveys, and ask learners to rate them on a scale of 5 (strongly agree), 4 (mostly agree), 3 (unsure, mixed feelings), 2 (mostly disagree) or 1 (strongly disagree):

  • The content of the training was relevant and helpful for my job
  • The presentation of the content held my interest and was easy to follow
  • [For classroom training] The time and location of the training session were convenient and fit within my schedule and travel budget
  • [For online training] It was easy to connect to and navigate within the online training platform
  • I feel better prepared to apply the skills and concepts from the training in my work
  • I feel the experience was worth the time, effort and/or expense required to participate
  • I would recommend this training to others in my position

(Note that all of the statements are phrased so that “5 / Strongly Agree” is a positive rating and “1 / Strongly Disagree” is a negative rating – reversing the polarity of some items not only confuses learners, it also makes the ratings harder to calculate and analyze.)
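Because every statement shares the same polarity, tallying the results stays simple. The sketch below (hypothetical response data, with shortened statement labels) computes each statement’s average rating and its share of detractor-level responses (3 or lower):

    # Per-statement summary for a 5-point survey where 5 is always the positive end.
    # The response data below is hypothetical.
    from statistics import mean

    responses = {
        "Content was relevant and helpful":    [5, 4, 5, 3, 4, 5],
        "Presentation held my interest":       [4, 4, 3, 2, 4, 3],
        "Better prepared to apply the skills": [5, 5, 4, 4, 5, 4],
        "Would recommend this training":       [5, 4, 4, 3, 5, 4],
    }

    for statement, ratings in responses.items():
        detractor_share = sum(r <= 3 for r in ratings) / len(ratings)
        print(f"{statement:38s} avg={mean(ratings):.1f}  detractors={detractor_share:.0%}")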

What Learners Can (and Can’t) Tell Us

Looking at the list above, you might find that many common training evaluation questions are absent.

We never ask learners to rate the performance of their instructor or whether they felt an online training module provided enough “interaction”. We don’t ask these questions because the overwhelming majority of learners can’t answer them.

For instance, when it comes to rating an instructor’s performance, studies have shown that learners can be blinded by charisma and theatricality, and base their rating on how much fun they had or how much they liked the instructor personally, rather than how much they learned and retained. Similarly, asking learners to quantify their own increase in skills or knowledge is generally pointless due to the Dunning-Kruger effect (people who know little about a skill or subject tend to overestimate their ability, while those with more expertise tend to underestimate it).

So, does this mean training evaluation forms can’t tell us anything about an instructor’s performance? Not exactly.

We can use learners’ answers about their personal reaction to direct our own evaluation of the curriculum design and delivery. For example, looking at question 1 on our sample evaluation form, if learners tell us that the content of an instructor-led training session was not relevant or helpful to their jobs, we might ask…

  • Did the instructor adhere to the curriculum and recommended delivery methods? It might be worth dropping in on their next session to observe – if it turns out they are following both, then the fault might lie in the training program’s design.
  • Is the training effective, despite its unpopularity with learners? Looking at the level 2 (“learning”) and level 3 (“behavior”) data – do low feedback ratings correlate with low performance in the training session and on the job? (See the sketch after this list.) If a training program is delivering good results but generating poor feedback, we might consider adjusting the tone, doing more to highlight the benefits to learners or finding ways to make participation more convenient, while leaving the curriculum and training approach more or less intact.
  • Are we doing a great job of addressing a non-existent need? Do our learners really need to master these skills and concepts to succeed in their jobs, or are we just wasting their time? It might be worth analyzing the level 3 (“behavior”) and level 4 (“results”) data, then conferring with managers and other key stakeholders, to make sure our learning objectives are aligned with our audience’s training needs.
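As a rough sketch of the check described in the second bullet (all the numbers below are invented, and it assumes you can match each learner’s feedback rating to their assessment score), a simple correlation shows whether low ratings actually track low performance:

    # Compare level 1 feedback ratings with level 2 assessment scores
    # for the same learners. All values are invented examples.
    from statistics import correlation  # requires Python 3.10+

    feedback_ratings  = [5, 4, 2, 3, 5, 4, 2, 3, 4, 5]            # 1-5 survey ratings
    assessment_scores = [92, 85, 88, 90, 95, 80, 86, 89, 84, 93]  # post-test scores (%)

    r = correlation(feedback_ratings, assessment_scores)
    print(f"Correlation between feedback and assessment scores: {r:+.2f}")

A weak or near-zero correlation would suggest the training “works” despite its unpopularity – an argument for adjusting tone and convenience rather than overhauling the curriculum.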

In each case, the point is that we are using level 1 reaction data to guide our own inquiry, asking learners where the “pain points” are without expecting them to diagnose the underlying problem or prescribe a solution.

The Inevitable Exceptions

Does this mean we should not listen to learners when they have specific suggestions for improving a training experience? Hardly. Good ideas come from everywhere, and it’s possible a learner has taken a similar training that covered the same content better.

Hence, all training evaluation forms should have at least one open-ended comment box and we should follow up with anyone who rates a learning experience poorly (6 or less out of 10 / 3 or less out of 5) to get more specific feedback about what they disliked. While we don’t need to take every suggestion at face value, these conversations can yield insights that lead us to solutions.

We might also want to use a longer, more granular training evaluation form during the initial prototyping/piloting of a training program, when we are looking for answers to specific design questions and are still getting a sense of our audience’s needs, interests, habits, mindset, prior knowledge and preferences. But even then, questions should be formulated in a way that asks learners how the experience made them feel, not to evaluate the experience from the perspective of a learning expert.

Conclusion

Hopefully, this piece has given you some useful insight on how to design better training evaluation forms and put the data you collect to better use.

To save the trouble of copying and pasting, you can download a Microsoft Word version of Sonata Learning’s standard evaluation form here.

If you need help developing your organization’s training evaluation strategy beyond collecting learner feedback, or with redesigning a training program, feel free to contact us »
