SONATAnotes
When “The New Guy” Is AI: Introducing AI Tools To Your Team

Over the past 30 years, I’ve helped deploy new technologies at companies large and small – from content management systems for high school textbooks to video conference platforms for online training. But now that my current company is helping clients deploy AI agents for workforce training, I can honestly say that this technology is different.
While AI applications need all the usual setup, testing, and user training, in other ways deploying one is more like onboarding a new employee than installing software. Case in point: while Microsoft 365 and Salesforce have knowledge bases and tutorial videos, neither is capable of saying, “Hi, Carlos – I’m your new productivity software. How can I help you today?” (though they probably will be, soon).
However, an AI-based application can address the user like a colleague. And because of that, you could even say AI doesn’t need “user onboarding” so much as it needs an introduction.
So, how can organizations make sure their human users and AI agents get off on the right foot?
Transparency is the Best Policy

As mentioned earlier, my company develops AI agents for workforce training. And one great advantage of using AI for training is that – unlike a video or a knowledge base – AI agents can recognize who they’re talking to and tailor their output to each learner’s needs.
However, this requires the AI to track and save information about each user, which some people – understandably – find a bit creepy, given the privacy implications.
Recognizing this, we encourage clients to adopt a “maximum transparency” policy with AI agents. The platform we built – Parrotbox.ai – notifies users whenever conversations will be saved (with an acknowledgement button), after which we either display a short introductory screen explaining privacy policies or have the AI agent explain the privacy policies (and answer questions honestly) during the initial conversation with a new user.

Getting to Know You

Just like someone’s first interaction with a new coworker, a user’s first experience with an AI agent can set the tone for the entire working relationship – and determine whether the user views the AI as helpful or intrusive.
Our platform streamlines a user’s initial conversation with an AI agent by letting an admin enter some general background information, sparing the user from having to explain their job role or current projects to the AI. However, giving the AI agent this knowledge can raise privacy concerns (“How much does this thing know about me?”). To make things less awkward, we have AI agents acknowledge what they know about the user in a natural, human way – as if they were a new colleague or outside consultant who had been directed to drop by the user’s office and introduce themselves:
“Hi Elise, it’s so great to meet you! My name is Gina and I’m an AI advisor designed to help you present climate finance products to your bank’s clients. I’ve been told that you primarily deal with manufacturing businesses, but I’d love to hear more about who you’re working with and any questions or challenges you might have with promoting climate finance opportunities…”
Speaking Each Other’s Language

In addition to creating AI workforce training tools, my company offers a range of courses, including one on intercultural communication for globally distributed teams. If I had to distill that course into a single sentence, it would be this: when talking to someone from a different cultural background, you need to be cognizant of how what you say might be interpreted differently by the other party.
This advice applies doubly to human-AI interactions. The fact that today’s generative AI models are so good at emulating human speech obscures the fact that AI takes a completely alien approach to analyzing text and making decisions.
So, while you don’t need end users to become machine learning experts, it’s helpful to give users a brief overview of how AI works and what that means for their day-to-day interactions with AI agents. For instance:
- Explaining how an AI model’s “training data” is frozen at a certain point in time, and how – while some agents can augment their training data with web searches – users should never assume an AI agent’s sources are completely up-to-the-minute.
- Suggesting that users invite AI agents to challenge their assumptions (“How might I be wrong?”) or brainstorm multiple answers (“What are the top 5 most likely reasons?”) rather than asking for a single definitive answer, as – by default – most AI models are trained to be a bit too polite and are generally more useful as brainstorming partners than as all-knowing oracles.
- Pointing out how AI models don’t actually read text but rather identify patterns in it. Their responses might reflect popular misconceptions if there’s bogus information in their training data, or they might occasionally get their sources mixed up, since they triangulate answers from many different places. (Bottom line: Google and verify anything AI says before basing important decisions on it – the same way you would Google something a friend, relative, or coworker told you.)
Our company even created a short video that clients can share with their employees before they start using the agents we develop, to help set expectations.
Providing this sort of practical “AI 101” can keep users from being disappointed when an AI agent is 95% accurate rather than 100% – and also keep them from becoming so complacent that they abdicate all their critical thinking to the AI.
Conclusion
We’re still in the very early days of the relationship between humans and AI in the workplace. Just as working relationships between colleagues evolve over time, so too will the ways humans and AI interact.
For now, successfully integrating AI into your workforce requires a thoughtful introduction, clear communication about roles and boundaries, and ongoing education about how to work together effectively. Organizations that approach AI deployment as a relationship-building exercise rather than merely a software installation will see higher adoption rates and better outcomes.
Hopefully this article provided some useful context for AI adoption. If your organization is interested in using AI agents for workforce training or on-the-job support, please consider reaching out to Parrotbox.ai / Sonata Intelligence for a consultation.
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.