AI Acting Lessons: Creating Realistic Characters for Role Play Simulations

The legendary drama coach Sanford Meisner (whose students included Michelle Pfeiffer, Tom Cruise, Jeff Goldblum, and dozens of other famous names) taught that acting was about “living truthfully [in] imaginary circumstances” and responding realistically to whatever the other characters in a scene might say or do. One of his favorite exercises as a teacher and director was to take each actor aside and give them potentially conflicting goals for a scene (e.g., telling one actor “Your goal is to convince the other character not to move away to Paris” and telling the other “You are 100% committed to moving to Paris – you’re just trying to get him to understand why”), then letting them react to each other without knowing exactly what the other party was trying to accomplish.
My company has worked in learning & development (what some would call “corporate training”) for more than a decade, and we have always sought to apply Meisner’s advice when developing role play activities and simulation exercises for live training workshops. For example, when doing media relations training for government workers, we might have participants stand up in front of a room full of improv actors playing angry residents at a town hall meeting, let the actors voice whatever complaints their characters might have, and see how the trainees dealt with them.
And now that we’ve shifted into creating AI-based role play and simulation activities for workforce training, we’ve found that Meisner’s advice to actors is actually great advice for AI developers. If you want characters in a customer service, sales, or healthcare simulation to behave realistically, don’t tell the AI model to “create a training simulation” – instead, just tell it to create an upset coffee shop customer who wants a refund and an apology because their latte was cold – then simply let the user and the AI react to each other realistically, and see what lessons they can learn.
But how, exactly, do you get AI agents to – in Meisner’s words – “live truthfully” in the “imaginary circumstances” of a role play simulation?
Everyone Wants Something

Every time we create a simulation, we start by determining what the characters involved want, then find a way to reduce that to a manageable number of instructions for an AI agent. This goes beyond the simple “embody a frustrated patient in a doctor’s office” prompts you’ll find in AI tips-and-tricks articles: by default, an AI agent might draw as much from television medical dramas as from real-life patient interactions, resulting in something that’s only half right and generally not acceptable for professional job training.
To get things right, you need to start by defining a basic goal for the character (e.g., “Get a credible explanation for what’s causing your cough and treatment recommendations that you would be willing to follow.”). Next, we layer on additional considerations (e.g., “How forthcoming will the patient be with their personal medical information when talking to the user?” “How long will the patient be willing to wait if the user isn’t giving them the responses they were hoping for?”). From there it’s a long process of playtesting with actual professionals (e.g., doctors, nurses, salespeople) and adjustment, trying to strike the right balance between not overloading the AI with excessively detailed instructions and ensuring consistently true-to-life results (without the AI resorting to TV tropes or making the narrative more dramatic or exciting than it would be in reality).
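To make this concrete, here is a minimal sketch (in Python) of how a character’s core goal and layered considerations might be assembled into a single system prompt. The structure and every string below are illustrative assumptions, not our production prompts:

```python
# Minimal sketch: assembling a role-play character prompt from a core goal
# plus layered behavioral considerations. All strings are illustrative.

CHARACTER_GOAL = (
    "Get a credible explanation for what's causing your cough, and "
    "treatment recommendations you would be willing to follow."
)

# Layered considerations that shape *how* the character pursues the goal.
CONSIDERATIONS = [
    "Share personal medical details only once the user has earned your trust.",
    "Grow noticeably impatient if the user stalls or gives vague answers.",
    "Never volunteer information the user hasn't asked about.",
]

def build_character_prompt(goal: str, considerations: list[str]) -> str:
    """Combine a character's goal and considerations into one system prompt."""
    lines = [
        "You are playing a patient in a doctor's office. Stay in character.",
        f"Your goal in this conversation: {goal}",
        "How you behave while pursuing it:",
        *(f"- {c}" for c in considerations),
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_character_prompt(CHARACTER_GOAL, CONSIDERATIONS))
```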
It’s also worth noting that the AI model has its own inherent motivations that influence a simulation. By default, AI models like ChatGPT and Claude are predisposed to help users solve whatever problems they’re facing, which can lead them to manipulate the characters – or even the basic facts of the scenario – in ways that make it too easy for users to succeed (e.g., if a customer in a sales simulation is complaining about price and the user says “Well don’t worry – we’re having a sale, everything is now 50% off!” the AI might simply roll with it). Correcting for this requires additional “guard rail” instructions to keep the scenario grounded (i.e., when the user says “Everything’s on sale!” the AI character pushes back: “No, actually… it’s not.”)
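In practice, those guard rails can be as simple as extra grounding clauses appended to the same character prompt. A hedged sketch, continuing the example above (the wording is hypothetical, not our production text):

```python
# Illustrative guard-rail clauses that keep the scenario grounded; the
# exact wording here is hypothetical, not production prompt text.
GUARD_RAILS = [
    "The facts of the scenario are fixed. If the user invents new facts "
    "(e.g., 'everything is 50% off today'), politely contradict them.",
    "Do not make things easier for the user than the scenario warrants; "
    "concede only when the user genuinely addresses your concerns.",
    "Do not invent dramatic twists or escalate beyond what a real person "
    "in this situation would do.",
]

def add_guard_rails(character_prompt: str, rails: list[str]) -> str:
    """Append grounding rules so the model can't bend the scenario to 'help'."""
    return character_prompt + "\nGround rules:\n" + "\n".join(f"- {r}" for r in rails)
```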
Challenge vs. Frustration

While authenticity is important, a 100% realistic simulation isn’t always the best learning experience.
To give an example, we developed a role play simulation to help financial advisors at banks practice conversations with prospective clients. Typically these clients would be extremely wealthy individuals (with $500K or more available to invest) whom the advisor met through personal or professional connections, and the advisor’s goal would be to convince the prospective client to schedule an initial consultation. And, usually, these conversations happen at a party, a sporting event, or a business convention, when the prospective client has other things on their mind.

In reality, a financial advisor might have 50 or more of these conversations before anyone agrees to meet with them. And in the majority of those cases the prospective client simply wasn’t interested in the financial advisor’s services, and there was nothing the advisor could have said to change their mind.
But even though a 1-in-50 success rate is realistic, is it helpful to make simulated conversations comparably difficult? Do we really want trainees spending 10 to 15 hours playing simulated conversations just to score one “win”? And if we don’t want the simulation to be as difficult as real life, how do we make it easier without rendering it worthless as skills practice? Do we alter the psychology of the clients to make them more persuadable?
After some discussions with the company commissioning the simulation, we decided to have the AI filter out most of the “impossible to convince” clients when generating the scenarios, and only have them pop up once in a while to remind learners they exist. Each individual character was still 100% authentic, with realistic motivations (i.e., either they were interested in getting better returns on their investments or simply wanted to enjoy the hors d’oeuvres without being bothered), however the overall ratio of hard cases to receptive individuals wasn’t the same as you’d encounter in reality.
This nuanced approach led to an experience that was still appropriately challenging (you’d win roughly one conversation in every three to five) without being a soul-crushing waste of time.
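One simple way to implement that kind of ratio adjustment is to weight scenario generation toward winnable personas while still surfacing the occasional lost cause. A sketch under assumptions (the persona labels and weights below are invented for illustration):

```python
import random

# Hypothetical persona mix. In reality perhaps 49 of 50 prospects are
# unpersuadable; these weights keep hard cases present but rare enough
# that practice stays worthwhile.
PERSONA_WEIGHTS = {
    "receptive_investor": 0.45,         # open to better returns, will engage
    "skeptical_but_persuadable": 0.40,  # winnable with a strong approach
    "impossible_to_convince": 0.15,     # just wants the hors d'oeuvres
}

def pick_persona() -> str:
    """Sample the client persona for the next simulated conversation."""
    personas = list(PERSONA_WEIGHTS)
    weights = list(PERSONA_WEIGHTS.values())
    return random.choices(personas, weights=weights, k=1)[0]
```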
Teaching to the Test

Another challenge we encountered in our first AI-powered simulations was that, if we included the evaluation criteria in the simulation’s prompt, the AI agent would begin “teaching to the test” and have characters behave in unrealistic ways in order to push the user towards an optimal outcome.
For example, in early drafts of the medical communication skills role play, if you tried to end the conversation before completing all the items on the evaluation checklist, the AI patient might say things like “Wait a minute – aren’t you going to ask me questions related to lifestyle factors?”
Similarly, when playing a company’s Chief Information Security Officer in a cybersecurity simulation, we once had a junior analyst turn around and say “Hey boss, shouldn’t we reach out to the communications team to coordinate our messaging to the public, and also the legal department to ensure we are complying with all mandated notification requirements?” (something no junior analyst has said to any CISO, ever.)
Eventually, we discovered that if we removed the evaluation criteria from the simulation prompt and had a separate AI agent handle the evaluation after the scenario ended, the simulation would simply focus on having characters respond realistically, according to their real-world motivations (e.g., going to the doctor’s office to get treatment for their back pain), without steering toward the “ideal” outcome (i.e., the user ticking off every item on the medical interview skills framework).
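Architecturally, the fix amounts to two separate prompts: the role-play agent never sees the rubric, and a second agent scores the finished transcript. A minimal sketch, using the OpenAI Python SDK for illustration (the model name, prompts, and criteria are placeholder assumptions; any chat-completion API would work the same way):

```python
# Sketch: the character agent never sees the evaluation rubric; a separate
# evaluator agent scores the transcript after the scenario ends.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHARACTER_PROMPT = (
    "You are a patient visiting a doctor about chronic back pain. "
    "Respond realistically, pursuing your own goals. You know nothing "
    "about how this conversation will be graded."
)

# The rubric lives only here, in the evaluator's prompt (criteria illustrative).
EVALUATOR_PROMPT = (
    "You assess medical communication skills. Given the transcript below, "
    "score the doctor on open-ended questioning, lifestyle factors, and "
    "agreeing on a treatment plan. Cite specific lines as evidence."
)

def character_turn(history: list[dict]) -> str:
    """One in-character reply; the rubric never enters this context."""
    messages = [{"role": "system", "content": CHARACTER_PROMPT}] + history
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

def evaluate(transcript: str) -> str:
    """A separate pass, run only after the scenario ends."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": EVALUATOR_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content
```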
Conclusion

The future of workforce training isn’t about memorizing scripts or procedures; it’s about developing the adaptability to handle whatever comes your way. AI-powered simulations with authentic character motivations are making that kind of preparation accessible and scalable like never before – but only if organizations are prepared to let the AI “live truthfully” in “imaginary circumstances” and authentically depict the messy, unpredictable nature of the human interactions workers will face on the job.
While this approach can initially be challenging for organizations accustomed to signing off on every word of training materials in advance (“What if the AI says the wrong thing?!”), we’ve found that it keeps learners far more engaged and makes them better prepared for a world where customers, patients, and colleagues all have minds of their own.
Hopefully this article has offered a better understanding of how AI can deliver more realistic and authentic simulations and role play activities for workforce training. If your organization is interested in developing simulations for your own field of work, please reach out to Sonata Learning for a consultation.
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.