
Case Study: Creating Interactive Sales Training Role-Plays for Financial Advisors with Generative AI

Financial advisors (FAs) have one of the toughest sales jobs in the world: convincing wealthy people to let the advisor’s firm manage their money.  To succeed, it takes flawless conversational skills, deep financial expertise, and a willingness to persevere no matter how many prospective clients say “No, thanks.”

Our company, Sonata Learning, has worked with several top-rated sales coaches for financial advisors to develop “client acquisition” training programs for some of the world’s largest financial institutions. Traditionally, a large portion of the time in these programs is spent on FAs role-playing client conversations with a coach. But while these activities do help improve financial advisors’ sales skills, the amount of practice they can provide has been limited by the effort required to design role plays and by the availability of coaches.

More recently, however, our team began using generative AI to create interactive role-play activities on demand. To give an example, check out the demo below and see if you can convince a wealthy investor to let you manage their money:

While the idea of self-guided role-play activities is nothing new, in the past they could only be built as scripted “point-and-click” conversation simulators, where players choose from a limited set of responses to a limited set of situations. By contrast, AI allows us to generate an infinite variety of role-play scenarios in real time and lets the financial advisor say anything to the virtual client and receive a realistic response.

So, if an advisor wants to practice chatting up aviation industry executives at a business conference, the AI can determine everything from the CEO’s favorite skiing destination to their stock portfolio’s performance to what hors d’oeuvres are being served at the cocktail reception in less time than it takes to load a typical web page. Or, if a financial advisor prefers working with retirees, the AI can generate the details of a bingo hall and the names of a former postal worker’s grandchildren in milliseconds.

So what goes into designing one of these AI-powered scenarios?  In this article, we’ll break down the process step by step, while focusing mainly on the “business case” aspects, rather than the technology.

Setting the Stage

To produce a realistic advisor-client conversation, we had to do more than type “Pretend to be a rich person at a party and I’ll be a Financial Advisor trying to schedule a meeting with you” into ChatGPT. In the end, it took about 8,000 words’ worth of instructions for the AI to produce satisfyingly realistic back-and-forth dialogue with virtual clients.

So what went into those instructions for the AI?

On one hand, the instructions didn’t contain any scripted dialogue, nor did they contain much information about types of investors (e.g. restaurant owners versus doctors) or an FA’s job description.  The AI already had most of the information it needed from having been “trained” on the entire contents of the Internet.  

What the instructions did contain was a very clear definition of the objectives of the simulation (i.e., “approach a high-net-worth individual in a public setting and find a way to schedule an introductory meeting to discuss their finances”) and an exhaustive set of guidelines for how to evaluate the user’s statements and determine a psychologically realistic response from the client.  For instance:

  • “If the user said or asked something relevant and insightful about finances that the prospective client did not already know and strongly agrees with, then increase the Credibility Score by 20.”

    Or
  • “The prospect’s initial Motivation Score will be 30 unless they were introduced to the user by a mutual acquaintance or saw the user deliver a presentation on financial planning at an event, in which case the initial Motivation Score will be 55.  However, even if the Motivation Score is high, the prospect will still be shocked and offended if the user pressures them into a professional relationship without establishing equally high Trust and Credibility scores.”
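
To give a rough sense of how guidelines like these fit into a working system, here is a simplified, hypothetical sketch (not our actual prompt or code, and the model name is just a placeholder) of how such rules might be bundled into a system prompt and sent to a chat model along with the conversation so far:

```python
# Hypothetical sketch only: packaging evaluation rules like the ones above
# into a system prompt for a chat-completion model.
from openai import OpenAI

EVALUATION_RULES = """
You are role-playing a high-net-worth prospective client at a social event.
The user is a financial advisor trying to earn an introductory meeting.
Track hidden Trust, Credibility, and Motivation scores from 0 to 100.
- If the user says something relevant and insightful about finances that the
  prospect did not already know and strongly agrees with, raise Credibility by 20.
- Motivation starts at 30, or at 55 if the prospect was introduced by a mutual
  acquaintance or saw the user present on financial planning at an event.
- Even if Motivation is high, the prospect is shocked and offended if pressured
  into a professional relationship without equally high Trust and Credibility.
Reply only with the prospect's next line of natural, in-character dialogue.
"""

client = OpenAI()  # assumes an OpenAI API key is set in the environment

def client_reply(conversation: list[dict]) -> str:
    """Return the virtual client's next line, given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "system", "content": EVALUATION_RULES}] + conversation,
    )
    return response.choices[0].message.content
```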

We also made a conscious choice to only simulate a financial advisor’s very first interaction with a prospective client, to avoid having to provide the AI with extensive information about a specific company’s investment products.  While it is possible to incorporate that level of detail, we wanted to start out with something more universal to FAs at all financial services firms.

Tweaking the Details

As Sonata’s simulation designers (“prompt engineers”) began play-testing the interactive role-play, they noticed the AI tended to favor certain situations over others.  Early on, it seemed nearly every prospective client was a “tech entrepreneur” and the vast majority of the encounters were taking place at art galleries.

Why was this the case?  Was AI just biased towards technology entrepreneurs since that’s who created it?  Did it choose art galleries out of some weird personal preference?

The workings of AI are a bit mysterious: unlike traditional computer software, which is “programmed” with specific instructions, AI is “trained” by having an algorithm process trillions of words of text and map connections between concepts. So our best guess is that, because tech entrepreneurs like Elon Musk get so much media coverage, the AI had a distorted sense of how many wealthy people earn their money in tech. As for the art gallery setting – we had included art galleries as one possible meeting place out of many in an example in the instructions, and for some reason the AI fixated on that one option.

To correct this, the prompt engineers created tables with lists of industries (“manufacturing”, “hospitality”, “healthcare”…) and settings (“sporting event”, “industry conference”…) and forced the AI to select the virtual client’s field of work and the setting of the encounter at random. This approach soon produced a reasonable distribution of lawyers, real estate developers, and neurosurgeons in addition to the tech types.
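
As a minimal illustration of that fix, the random selection can happen in ordinary code before the model is ever called, with the result injected into the prompt; the lists below are abbreviated stand-ins for the full tables:

```python
# Illustrative sketch: choosing scenario details at random in code,
# rather than letting the model default to its favorite settings.
import random

INDUSTRIES = ["manufacturing", "hospitality", "healthcare", "law",
              "real estate development", "aviation", "technology"]
SETTINGS = ["sporting event", "industry conference", "charity fundraiser",
            "art gallery", "golf club"]

def scenario_preamble() -> str:
    """Pick the prospect's industry and the meeting place before the role-play starts."""
    industry = random.choice(INDUSTRIES)
    setting = random.choice(SETTINGS)
    return (f"The prospective client made their wealth in {industry}. "
            f"The conversation takes place at a {setting}.")
```

The returned sentence is simply appended to the scenario’s instructions for that session.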

Calibrating the Difficulty

Adjusting the superficial details of the simulation was relatively straightforward.  But a bigger challenge was replicating just how difficult it is for financial advisors to convince wealthy people to become clients.   

While, on a superficial level, AI did a great job of capturing the back-and-forth of social conversation, once the discussion turned to money the virtual clients were pushovers.  All it took was the financial advisor saying “I can get a really good return on your investments!” for a wealthy corporate executive to hand over their hard-earned money.  

This naivete was likely due to AI’s innate predisposition to do whatever it can to please humans and help users succeed.  But while that’s great when you want AI to help draft an email or analyze research data, it made the simulations too easy – to the point where they weren’t very helpful as sales practice.

To correct for this, Sonata’s prompt engineers consulted experienced financial advisors to identify typical client objections to meeting with a financial advisor, then added more probability tables for the AI to consult when generating virtual clients.  For example: 

  • “There is only a 5% chance the prospect is actively looking for a financial advisor.” 
  • “There is a 30% chance the prospect has a very negative opinion of financial advisors.” 
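
As a rough sketch of how rules like these can be applied (using only the two example percentages above; the prompt wording is hypothetical), each probability can be resolved with a quick dice roll in code and the outcome added to the client’s instructions before the conversation begins:

```python
# Illustrative sketch: rolling the probability tables up front and turning
# the results into extra instructions for the virtual client.
import random

def roll(probability: float) -> bool:
    """Return True with the given probability (0.0 to 1.0)."""
    return random.random() < probability

def client_attitude() -> str:
    """Translate the example probability tables into prompt lines."""
    lines = []
    if roll(0.05):
        lines.append("The prospect is actively looking for a financial advisor.")
    else:
        lines.append("The prospect is not looking for a financial advisor.")
    if roll(0.30):
        lines.append("The prospect has a very negative opinion of financial advisors.")
    return "\n".join(lines)
```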

Once those rules were applied, the virtual clients stopped being so easily swayed.  Now, a wealthy hotel owner who was happy to chat about cars or their daughter’s upcoming wedding would take a step back and become more guarded once the user mentioned they were a financial advisor.

Keeping Up with the Technology

As we started piloting the AI simulations with real financial advisors, Sonata’s prompt engineers continued refining the AI’s instructions. Then, a few weeks into testing, OpenAI announced a new version of the model behind ChatGPT.

At first, we and our clients were excited about the new model’s improved performance (20% faster!) and lower cost (half the price!). When we first upgraded, the simulation appeared to behave more or less the same. After a few more days of testing, however, we noticed the virtual clients behaving differently. Where the previous model’s clients were amiable and trusting to a fault, the new model made them extremely evasive – even hostile – whenever the user tried to steer the conversation toward investments.

For example…

On one hand, it was great to see the AI no longer pulling its punches when it came to realism. However, we didn’t want to totally discourage our trainees: while it might well take 10, 20, or 30 real-life conversations to land a meeting with an actual investor, we didn’t want to force FAs to repeat the exercise that many times just to get an appointment.

Thus, after rolling the “live” version of the simulation back to the previous model, our prompt engineers began investigating how the new version of ChatGPT processed the same set of instructions. Essentially, we found that, where before we had to exaggerate the difficulty descriptions and probability tables just to get the AI to make the scenarios somewhat challenging, ChatGPT 4 was taking our words at face value.

Hence, statements like: 

“The prospect should do nothing to help the user secure a meeting and change the subject or end the conversation if the user is not extraordinarily compelling or extraordinarily subtle in every statement wherein the user seeks to advance the user’s goal of securing an appointment…”

Or

“The prospect should react in a deeply offended manner if the user offers any form of unsolicited financial advice without first establishing a reasonable degree of trust, credibility, and personal rapport…”

…had to be rewritten with fewer superlatives like “extraordinarily” and “deeply”, now that the AI was taking us at our word.

In addition to the increased difficulty, we found that the new version of ChatGPT had a tendency to write things out as bullet-point lists rather than naturally flowing paragraphs, so we had to provide some extra examples of the kind of writing we wanted.

All that said, on the whole the new version was an improvement over its predecessor. We found it was much better at writing natural-sounding dialogue and giving characters personality (it even did a better job of reacting appropriately to humor and detecting when a financial advisor was being a boring conversationalist – two things the previous version struggled with). However, it made clear that migrating from one version of an AI model to another isn’t as simple as going from Windows 10 to Windows 11: it’s more akin to casting a new actor in the lead role of a play and working with them, as a director would, to get the performance you’re looking for.

Conclusion

Hopefully this case study provided some insight into what goes into working with AI as a training tool – or into applying AI to specific business tasks in general. It’s a field where our own team is making new discoveries and drawing new lessons every day, which is an exciting and rewarding part of helping clients harness this new technology.

If you’re interested in discussing how your organization can leverage generative AI for workforce training, or how AI-based simulations could fit into your learning curriculum, please reach out via our website at https://www.sonatalearning.com/ai
