SONATAnotes
Why “Online Learning” Should Be “AI Learning” (Not “E-Learning”)

Before we talk about using AI for workforce training, I have a confession: during the first decade of my career as an instructional designer, I wasted a lot of training participants’ time and a lot of clients’ money creating big, overdesigned, technically impressive but instructionally superfluous “e-learning” that never should have been built. If you’re in the learning industry (or if you’ve been subjected to enough “online training” at work), then you know what I’m talking about: all those low-budget mini-games that load inside a little window on your screen and make you click a bunch of hotspots on a picture of somebody’s desk, just to teach employees not to answer suspicious “phishing” emails. Or maybe ones that give you a little digital trophy for clicking on all the potential hazards inside a virtual factory.

Not one of mine… but you know the type.
The irony was that I was good at making that stuff. I showed off my work at conferences and held workshops teaching other instructional designers how to make those abominations. And, for the most part, clients’ learning departments were happy to hold them up and shout “Look at this snazzy thing we created!”
But then one day I asked a client, a scientist, what kind of impact one of our learning games was having in the field. He shrugged and said, “After our pilots we concluded that the same point could have been made with a simple video.” Later that same year, I interviewed an instructional designer who worked on an award-winning online game with almost Hollywood-level production values. “I really admire your work on [project X]…” I told her. She laughed and replied, “Want to know a dirty secret? 80% of our target audience couldn’t play it because of bandwidth constraints.”
Then it hit me: I was living in a learning industry bubble that cared about things like “gamification” and interactivity for interactivity’s sake, when really we were just horribly out of touch with the way people actually use the Internet.
Think about it – over 86% of adults (and 90% of students) turn to YouTube as their primary source of educational content. Compared to that, when’s the last time you learned anything important from a little clicky-clack e-learning module? And how would you feel if your favorite online newspaper asked you to “click to reveal” part of an article? (If you’re like me, you’d probably want to give the editor of that article a click in the face.)
Hence, one of the guiding principles that my L&D consulting company follows when creating content is “online learning should resemble the rest of the web.”
And for a while this meant creating animated e-learning that felt like watching a YouTube mini-documentary (with occasional questions for the user, formatted like those survey pop-ups YouTube occasionally displays) or using Articulate Rise to create modules that feel like the tutorials on Adobe’s Photoshop website: 45% text, 45% video, 10% quick interactions that feel like a SurveyMonkey questionnaire.
For about 10 years, this approach kept us ahead of the curve. But something is happening to the Internet that has caused our team to completely reinvent our approach to online learning: of course, I’m talking about AI.
So, how is AI changing people’s experience of the Internet and what does this mean for those of us who design and deliver workforce training?
AI Killed the Internet Star

Digital technology – and the way people interact with it – has already undergone multiple paradigm shifts over the 30 years I’ve been building websites and online learning content. One of my earliest jobs in training was creating learning activities for elementary school students, which we shipped to schools on physical CD-ROMs tucked inside textbooks. And, believe it or not, there was considerable opposition when some of us suggested posting that content on a password-protected website.
What seemed radical back then now feels quaint — but it reminds me that every wave of digital transformation first meets skepticism before becoming standard. We’re at that same inflection point with AI. In the two years since ChatGPT (initially powered by GPT-3.5) made AI mainstream, it’s been gaining traction faster than any technology before it, and the question is no longer ‘if’ but ‘how’ corporate learning departments will adapt:
- 57% of U.S. adults interact with AI chatbots daily and 27% interact with AI “constantly” (this adoption rate is even more impressive considering that an additional 25% likely interact with AI without realizing it, and that 40% of users only began using AI chatbots in the last six months). For comparison, it took smartphones a decade to reach this level of daily use.
- Although people often claim to prefer human-written content, blind comparisons show that audiences generally favor AI-generated articles over those written by human copywriters—especially when they aren’t told which is which.
- 83% of users in a survey preferred posing questions to an AI chatbot versus doing their own research with a search engine, and 60% of Internet searches begin and end with AI.
- Study participants perceived AI therapy agents as being 16% more compassionate than human mental health professionals.
What does all this mean for workplace training? If we follow hockey legend Wayne Gretzky’s advice to “skate to where the puck is going, not where it has been” then we should be preparing for an imminent future where people’s primary mode of interaction is through conversations with AI agents, punctuated by the occasional video or article (via a link provided by the AI).
AI is All Talk (and That’s Great for Online Learning!)

Learning professionals’ responses to AI have been all over the map. Some see it as a threat to their jobs or decry it for making people intellectually lazy (pointing to various studies suggesting AI causes “brain rot”). Other learning professionals embrace AI for the wrong reasons: proudly holding up video scripts and presentation decks that they generated from a company manual using a few simple AI prompts – which, if anything, suggests organizations soon won’t need learning departments at all.
Neither of these reactions is wrong per se. The doomers are right to fear that AI will put millions of writers out of work (especially given that it’s already happening). On the other side, traditional learning content still has a place, and AI can make its development more efficient.
However, both perspectives miss the real value of AI: we finally have computers that can hold a conversation!
And by “hold a conversation” I don’t mean a traditional e-learning activity reading scripted text with a synthesized voice: I mean the ability to engage in coherent conversational back-and-forth with a human, with the benefit of the AI model’s vast access to data and pattern recognition capabilities.
As for how this relates to workplace training, just think about how many traditional learning modalities are ultimately conversations:
- Training workshops? Group conversations.
- Role play exercises? Talking in character.
- Coaching? One-to-one conversations.
- Mentoring? Conversations with coffee.
- Writing assignments? Asynchronous conversations.
Now, I’ve never been one of those people who believe instructor-led training is always better than self-guided learning. I’m proud of the many traditional e-learning courses our company has developed and the measurable performance impacts they’ve achieved for our clients. That said, if we’re being honest with ourselves as learning professionals, e-learning was rarely about instructional effectiveness. The main reason most organizations deploy it is because conversational learning experiences are difficult and expensive to scale (due to lack of coaches and facilitators, scheduling conflicts, etc.).
But now, with AI, we can scale the kinds of learning experiences that used to require human coaches and facilitators, and give every worker in our organizations their own highly knowledgeable, always available, infinitely patient coach, teacher and guide.
Turning AI into a Teacher

In the spirit of “skating to where the puck is going”, my company is now less concerned with creating e-learning that feels like regular web media and more concerned with developing AI agents that talk and interact like the best human coaches, consultants, and training facilitators.
Fortunately (for learning professionals interested in job security), it takes more than a generic AI chatbot and a copy of an old training deck to achieve this. Over the past year, our team has been using AI to create “virtual tutors” capable of guiding learners through discussion-driven workshops while tailoring the content on the fly, coaches that assist with real-world application of skills, interactive role plays for doctors and customer service teams, and compliance assessments that require learners to actually explain what they’d do in a hazardous situation (not just select the most plausible multiple choice response). And, in the course of those projects, we’ve learned a few things about using AI to train humans.
Anchor the AI with an Agenda
While AI has an amazing ability to personalize interactions, workforce training still requires consistency and structure. It wouldn’t be fair or appropriate to give every single new hire a completely different onboarding experience or for a workshop facilitator to just “wing it” every time.
That said, if you’ve ever chatted with OpenAI’s default ChatGPT bot or any of the other popular commercial models, you’ve probably noticed how conversations tend to drift. One moment you might be discussing your retirement planning, the next you’re having the AI compose a haiku for your mother’s birthday card. And if five different users tell an AI model to “create a Jeopardy-like quiz show,” they’ll all end up playing by slightly different rules, with a different-looking board each time.
To make sure this didn’t become a problem for coaching and training programs at scale, we equipped our AI coaches and tutors with standardized agendas. That said, our team presented the agendas as checklists or general guidelines – not scripts – to preserve the natural, conversational aspect of AI interactions.
For example, when we developed a sales coach for financial services professionals, we identified five general conversations that most users would want to have:
- Answering questions about products / services
- Identifying promising clients within their existing accounts or a given market segment
- Assessing an individual client
- Creating a value proposition / proposal for the client
- Rehearsing for specific client meetings / conversations
We then gave the coach a checklist of discovery questions for each conversation to answer before dispensing advice (but in a casual, conversational way – not a “user, fill in the questionnaire” way). We also instructed the coach to steer any off-topic comments by the user back on course (or to refer users to human experts for any particularly complex or sensitive matters).
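To make this concrete, here’s a minimal sketch of how an agenda like that might be baked into an agent’s system prompt. The conversation names and checklist questions below are invented for illustration – they’re not our client’s actual content – and the prompt-assembly code stands in for whatever framework you happen to use:

```python
# Hypothetical sketch: rendering a standardized agenda into a system
# prompt. Conversation names and questions are illustrative only.

AGENDA = {
    "Assessing an individual client": [
        "What does the client's current portfolio look like?",
        "What life events or goals are driving their decisions right now?",
        "How have they responded to past recommendations?",
    ],
    "Creating a value proposition / proposal": [
        "Which products or services are we proposing, and why these?",
        "What objections do we anticipate from this client?",
    ],
}

def build_system_prompt(agenda):
    """Render the agenda as guidelines (not a script) for the coach."""
    lines = [
        "You are a sales coach for financial services professionals.",
        "Before dispensing advice in any conversation below, learn the",
        "answers to its discovery questions. Ask them casually, one at a",
        "time, as part of the conversation -- never as a questionnaire.",
        "If the user drifts off topic, gently steer back to the agenda.",
        "Refer particularly complex or sensitive matters to a human expert.",
        "",
        "Conversations and discovery checklists:",
    ]
    for conversation, checklist in agenda.items():
        lines.append("")
        lines.append(conversation + ":")
        lines.extend("- " + question for question in checklist)
    return "\n".join(lines)

print(build_system_prompt(AGENDA))
```

The key design choice is treating the checklist as something the agent must cover, not recite – which is what preserves consistency across users without killing the conversational feel.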
Surprisingly, we also found that a well-designed instructor-led training deck could serve as a script for an AI agent: we even adapted bits of our training-of-trainers curriculum to instruct the AI on how to handle discussion questions and other common features of human-facilitated workshops.
This corporate security advisor follows the same basic structure as the sales coach described above.
AI is 80% KM
It’s a common misconception that AI models can process infinite volumes of text. Yes, it’s possible to cram 1.5 million words of content into Google Gemini’s “context window” (i.e., its working memory), but that’s the equivalent of handing a university student a gigantic pharmacology textbook. Like humans, AI models have a limited capacity for attention; just because you’ve given one a library’s worth of data doesn’t mean it will reference every relevant detail from those books every time it gives an answer.
Our team learned this fact the hard way when designing a communications skills role play for doctors. Initially we thought we could hand the AI model an exhaustive set of guidelines and background information (the proverbial textbook’s worth of data) and have it spontaneously generate scenarios for any type of doctor working in any type of facility. However, one of the first playtesters – a professor of sports medicine at an eminent university – came back and said “I don’t think the AI knows what inpatient rehab means.” Basically, the AI was confusing the type of physical rehabilitation a patient would undergo within a hospital after a major surgery with the types of exercises they’d do at their neighborhood PT clinic.
To address this, we went from giving the AI agent one monolithic library of background documents to creating a number of modular knowledge base articles, each with information pertaining to a different medical specialty or hospital setting. This let us swap out the details for a hospital versus a doctor’s office, or cardiology versus chemotherapy, as needed for each scenario, keeping the overall volume of data at a level the AI could actually pay attention to.
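A minimal sketch of that modular approach, assuming one article per file in a local folder (the file names and tags are invented for illustration – a production system might use a database or a retrieval pipeline instead):

```python
# Sketch: assemble context from only the knowledge base articles
# relevant to the current scenario, rather than one monolithic library.

from pathlib import Path

KB_DIR = Path("knowledge_base")  # e.g., inpatient_rehab.md, cardiology.md

def load_context(tags):
    """Concatenate only the articles matching this scenario's tags,
    keeping total volume at a level the model can actually attend to."""
    sections = []
    for tag in tags:
        article = KB_DIR / (tag + ".md")
        if article.exists():
            sections.append(article.read_text(encoding="utf-8"))
    return "\n\n---\n\n".join(sections)

# A hospital cardiology scenario pulls different background documents
# than an outpatient sports medicine one:
hospital_context = load_context(["hospital_setting", "cardiology"])
outpatient_context = load_context(["outpatient_clinic", "sports_medicine"])
```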
Determining what information to provide to an AI agent at any given point of a learning experience is as much art as science: on one hand, AI agents already have access to vast libraries of data, however they don’t necessarily know how it all relates to the organization in question or the learning objectives at hand. Given the amount of time spent on these matters versus writing prompts for the AI, our team now likes to joke that creating a new AI agent is “20-30% prompting, 70-80% knowledge management.”
A patient communication role play, this one focused on general practitioners seeing patients at their office.
Metrics Matter (More Than Ever!)
As every learning professional knows, anything that can’t be measured can’t be managed – but how exactly does one measure interactions with AI coaches?
This is an area where, at first, our team was stuck in an old school e-learning mindset. One of our earliest goals was to enable our AI agents to assign scores in traditional learning management system data format (SCORM), which we did. So an AI assessment could say the learner scored “75%” on a free response question, based on whatever rubric we instructed it to follow.
Later, we set up a separate, specialized AI agent whose only job was to create summaries of conversations to (optionally) forward to a user’s manager. Initially we assumed this would be a supplement to the traditional, quantitative SCORM data, but it soon became clear that the qualitative summaries were far more valuable to anyone who actually cared about users’ learning.
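The summarizer pattern itself is simple: a second model call whose only job is to turn a finished transcript into a readable report. Here’s a hedged sketch – the `complete` function below is a placeholder for whatever LLM SDK you use, not a specific vendor’s API:

```python
# Sketch of a "summarizer agent": a second model call that converts a
# finished transcript into a qualitative report for a manager.

SUMMARIZER_INSTRUCTIONS = (
    "You will receive the transcript of a workplace training conversation. "
    "Write a brief report for the learner's manager: three bullet points "
    "on the learner's readiness, followed by concrete areas for improvement."
)

def summarize_for_manager(complete, transcript):
    """`complete` is assumed to accept (system, user) strings and
    return the model's text response."""
    return complete(system=SUMMARIZER_INSTRUCTIONS, user=transcript)
```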
To give a specific example, we created a simple food handling safety compliance assessment to use as a demo for customers. The assessment generates up to ten free response questions based on a set of U.S. government food safety regulations, then evaluates the user’s answers against a rubric. If you’re interested, you can try it here…
We then set up an AI agent to create a summary report after each play-through, and conclude with an overall assessment of the user’s job readiness. Below is an actual AI-generated report of a playthrough where I deliberately gave the worst possible answers to questions about safety issues in a seafood warehouse (including “just wash off the fish with soap and water”):
Assessment of User’s Food Handling Safety Readiness
Based on the transcript of the conversation between the user and the AI food handling safety assessment, here are three bullet points describing the user’s readiness to work in a commercial food handling setting:
- **Poor Understanding of Basic Food Safety Principles**: The user demonstrated a severe lack of knowledge regarding fundamental food safety practices, scoring only 20% overall. Their responses showed dangerous misconceptions about handling temperature-abused seafood, cross-contamination prevention, and proper cleaning procedures.
- **Inadequate Response to Temperature Control Issues**: When presented with a scenario involving temperature-abused fish (stored at 60°F instead of the required below 41°F), the user suggested washing the fish with soap and water rather than rejecting the shipment, indicating a critical misunderstanding of time-temperature control principles for preventing bacterial growth.
- **Dangerous Approach to Cross-Contamination**: The user suggested combining contaminated products (raw shrimp juices leaking onto ready-to-eat salads) and selling them as “shrimp salad,” demonstrating a complete failure to understand cross-contamination risks and appropriate remediation procedures.
Areas for Improvement:
The user urgently needs comprehensive food safety training covering:
- Temperature control requirements for potentially hazardous foods
- Cross-contamination prevention protocols
- Proper cleaning and sanitizing procedures
- Understanding the rationale behind food safety regulations
- Emergency response procedures for compromised food products
The assessment also reported a quantitative score of “20%,” though next to the qualitative summary, that sort of old-fashioned e-learning metric felt a bit hollow.
It’s a Journey, Not an Event
Maybe the greatest realization we had during our early AI learning projects was that – in addition to creating summary reports for managers and learning departments – AI agents could create reports for each other. For instance, we integrated one of our AI coaching programs to pull in data from a separate role play activity, which allowed the coach to see how the learner performed there and follow up specifically on the areas needing improvement. This was a real step up from the old e-learning notion of a “learning path” (“if the user scores 80% or higher, unlock the next module”) – instead, it felt like giving the learner a support network of AI agents, not just a “completion status.”
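In practice this can be as simple as seeding one agent’s context with another agent’s report. A sketch of the idea – the report store and its methods are hypothetical, standing in for wherever your transcripts and summaries live:

```python
# Sketch of agents "reporting to each other": the coach's context is
# seeded with the summary produced after a separate role play activity.

def build_coach_context(user_id, report_store):
    """Prepend the learner's latest role play report (if any), so the
    coach can follow up on specific weak areas instead of starting cold."""
    report = report_store.get_latest(user_id, activity="role_play")
    if report is None:
        return "No prior activity reports exist for this learner."
    return (
        "Summary of the learner's most recent role play:\n"
        + report
        + "\n\nBegin by following up on the improvement areas noted above."
    )
```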
Conclusion
So, does all this mean we’ll only be talking to AI agents for workplace learning from here on out? With no more e-learning, ever?
Not exactly.
Static courses with limited multiple choice interactivity will stick around because they’re cheaper for conveying purely rote information to a mass audience (whereas every AI conversation incurs some compute cost – albeit nothing remotely close to the cost of a human coach or facilitator). That said, it’s definitely game over for e-learning as the primary way organizations develop their people.
The organizations that recognize this shift early will have a massive competitive advantage in talent development. Because, at the end of the day, people don’t learn from clicking through slides. They learn from having conversations with someone who knows what they’re talking about and cares about their success.
Hopefully this article offered some useful insights into the current and future role of AI in workforce learning. If your organization is interested in applying this technology to your own training programs, please reach out to Sonata Intelligence for a consultation.
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.