SONATAnotes
“POWER-UP” OR “CHEAT CODE”? RIGHT AND WRONG WAYS TO USE AI AT WORK

There are two narratives around AI in the workplace.
On one hand, 85% of senior executives view AI implementation as a top priority, commanding their teams to search everywhere for cases where AI can streamline operations (and, let’s be honest, reduce head count).
At the same time, 59% of frontline managers are concerned about AI misuse by employees, scrutinizing every report and email for telltale em dashes, bullet lists, and “AI words” like “delve” and “elevate” as evidence that workers are secretly using AI to do their jobs.
So what’s the reality of the situation? If we think of work as a video game, is AI a “cheat code” that lets employees avoid putting in effort, or a “power-up” that will “elevate” productivity to the next level?
As always, the answer is “it depends,” and there’s a right way and a wrong way to go about everything. Let’s look at three common applications for AI in the workplace, and how employees and organizations can use AI as a true force multiplier rather than a cop-out.
Using AI for Writing

Cheat Code: AI as Your Ghostwriter
Not too long ago, one of our job applicants proudly declared, “I never write anything without AI!”
Today, this might have prompted a few follow-up questions, but at the time AI was still novel and – given they seemed like a cool and capable person and we were positioning ourselves as an “AI” company – we didn’t want to come across as squares by saying “OK, yeah – but can you write without it?” So we gave them a chance.
The very first project the person worked on was a tough one. We were creating a course to help government policymakers bring community health workers into the mainstream medical system. The focus was on things like allocating budget for community health work, setting standards for hiring and training and wages, and ensuring proper oversight.
During the meetings and discovery interviews with the client, our new AI-positive teammate nodded along. But when we asked basic questions afterwards, they said “Let me check my notes” – by which, in hindsight, they meant their AI auto-transcription. Likewise, when we asked questions about whether certain client-provided documents were relevant to the course, they again said “Let me check my notes.”
When at last they submitted their first draft, it contained several thousand words about how community health workers should do hands-on tasks like administering vaccines and malaria tests – not the higher-level Ministry of Health policy content the course was supposed to be about. And, to lay to rest any question of how things went so wrong, telltale phrases like “Would you like me to draft a more detailed step-by-step breakdown?” were still present in the text.
The above is an excellent example of where most people go wrong. They ask questions or dump a bunch of documents on a topic they know nothing about into ChatGPT, and expect to come out sounding like an expert.
The moral of the story? If you don’t ask AI the right questions, you won’t be able to evaluate whether the answers make sense in context. In other words, if you abdicate understanding to AI, you’ll end up with content that sounds authoritative but completely misses the mark (in ways that aren’t immediately obvious to non-experts).
Power-Up: AI as Your Editorial Assistant
One of my early jobs was being the assistant to a Pulitzer Prize-winning business and economics journalist. Even in the early 2000s, he preferred dictating his ideas into a cassette recorder, which his secretary converted to MP3 files for me to review.
At various points in his dictation, he’d pause and say something like, “Go dig up some examples of occupational licensing requirements preventing entrepreneurs from starting service businesses.” That was my cue to start digging through online journals and back issues of the Wall Street Journal, hunting for the perfect case study to illustrate his point.
Today, that same journalist wouldn’t need to hire someone like me. He could sketch out his argument, drop in “XXX” and “YYY” placeholders throughout his rough draft, then ask ChatGPT to find suitable examples to fill in the blanks. A quick Google search to verify the facts, and he’s done. Basically a highbrow game of Mad Libs – and with voice input, he could even skip the cassette recorder entirely.
This reflects how we actually use AI for content development at my company. Our writers start by “shoveling” text from source documents into structured outlines, distilling and consolidating that raw information into clear bullet points. Only then do we use AI to produce a rough prose draft – one subsection at a time – which gets checked against the original bullets and adjusted as needed. The result? We cut writing time by 15-25% depending on complexity, while human understanding and judgment remain front and center.
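For the technically curious, here’s roughly what that subsection-at-a-time step looks like in code. This is a minimal sketch using the OpenAI Python SDK – the function and prompt wording are illustrative, not our actual production pipeline:

```python
# Minimal sketch: turn one subsection's vetted bullet points into rough prose.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

def draft_subsection(title: str, bullets: list[str]) -> str:
    """Draft prose for a single subsection from human-curated bullets."""
    prompt = (
        f"Write a rough prose draft of the subsection '{title}' using ONLY "
        "the facts in these bullet points. Do not add new claims:\n"
        + "\n".join(f"- {b}" for b in bullets)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    # The draft still gets checked against the original bullets by a human.
    return response.choices[0].message.content
```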
Level Up: Giving an AI Agent a Style Guide and Talking Points
Recently, we did a project with a financial industry association that needed to create hundreds of plain-language scenarios illustrating how complex regulations apply in real-world situations. Typically, this work was done by members drafting a few scenarios per week on their lunch breaks – reliable, but incredibly slow.
When we developed an AI agent to take over the task, we didn’t just tell it to “write some compliance examples.” Instead, we created a comprehensive instruction set (sketched in code after the list below) that included:
- Mandatory reference formatting (regulation numbers had to appear at the top of each scenario)
- Exemplars and outlines showing exactly how to organize each example (context, incident, response, outcome, etc.)
- All of the same editorial guidelines the association had been giving to human contributors for years
- A “cheat sheet” of pre-approved terminology and phrasing
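Under the hood, an instruction set like this simply gets assembled into the agent’s system prompt. Here’s a simplified sketch – the file names and wording are hypothetical stand-ins for the association’s actual materials:

```python
# Simplified sketch: concatenate an instruction set into one system prompt.
# File names and wording are hypothetical stand-ins for the real materials.
from pathlib import Path

def build_system_prompt() -> str:
    sections = {
        "REFERENCE FORMATTING (regulation numbers at the top)":
            Path("reference_rules.md").read_text(),
        "SCENARIO OUTLINE (context, incident, response, outcome)":
            Path("scenario_outline.md").read_text(),
        "EDITORIAL GUIDELINES": Path("editorial_guidelines.md").read_text(),
        "APPROVED TERMINOLOGY": Path("terminology_cheat_sheet.md").read_text(),
    }
    parts = ["You draft plain-language compliance scenarios.",
             "Follow every rule in the sections below."]
    for heading, text in sections.items():
        parts.append(f"## {heading}\n{text}")
    return "\n\n".join(parts)
```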
The results were remarkable: 90-94% of the AI-generated scenarios passed editorial review on the first pass, compared to 80-85% for human-written content. More importantly, we compressed the association’s typical 5-7 month development cycle down to 4-6 weeks.
But here’s what made it work: the association was only automating the drafting, not the final editorial review, and their editors knew their subject matter inside and out. They could evaluate the AI’s output, catch the 6-10% that needed revision, and maintain quality control throughout the process. While the AI was cutting labor hours, it wasn’t replacing human expertise – it was accelerating and amplifying it.
We use a similar approach internally. Our marketing AI agents work from a 3,000-word style guide that covers everything from preferred terminology (we say “AI agent” or “AI coach,” never just “AI” in isolation) to tone and positioning. When our AI agent Weston edits a blog post or LinkedIn update, he’s not just checking grammar – he’s ensuring brand consistency across every piece of content we produce.
The difference between this and the “cheat code” hack? We’re not asking AI to be the expert. We’re asking it to be the expert’s most efficient assistant – one that never forgets the style guide, never gets tired, and can work at whatever scale the business demands.
Using AI to Analyze Documents and Data

One of the most persistent myths about AI is that it somehow replaces tools like databases and search engines. While it’s true that AI models can hold hundreds of thousands of words in their active memory (the “context window”), and can save hours of Googling for certain commonly asked questions, you can’t just dump a mountain of data into ChatGPT and expect meaningful insights. The reality is far more nuanced.
Cheat Code: Asking an AI Model to Find All Occurrences of X in a Pile of Data
Picture this: you’ve got twelve months of safety incident reports from a chemical plant, and you want AI to identify every incident related to lack of preventative maintenance. So you upload a mammoth Excel file to ChatGPT and ask it to “find all maintenance-related incidents.”
The problem here is that the AI model won’t actually read through the entire spreadsheet line by line, like a human would. Instead, it identifies patterns between words, numbers, and phrases. And when you feed it a massive blob of text all at once, the model’s tendency is to conserve processing power by looking for general patterns rather than analyzing every line of input in detail.
So, in the case of feeding ChatGPT the safety incident reports, it might call out a few relevant examples or make some general observations about frequency, but – depending on the volume and complexity of the data – it’s virtually guaranteed to miss many (if not most) of the preventative maintenance incidents.
The fault here lies less with AI than with human wishful thinking. The fantasy (which many AI companies do nothing to dispel) is that AI can retroactively compensate for poor data management, such as 12 months of failure to consistently tag the cause of safety incidents in a factory. Meanwhile, the truth is exactly the opposite: disciplined knowledge management and proper data structure make AI analysis exponentially more valuable.
Power-Up: Asking AI to Pick Diamonds From the Muck
Recently, the organizers of an upcoming industry conference sent my marketing manager an attendee list with thousands of names from hundreds of companies. The job titles were all over the map, with no consistency between organizations and plenty of creative variations on standard roles (e.g. “Senior Learning Experience Consultant” versus “Instructional Design Manager,” and so on).
Instead of dumping the entire list into an AI model, our marketing manager fed it to our AI agent Weston in manageable chunks of 50 records at a time. Weston was able to identify high-priority contacts like Chief Learning Officers and Senior VPs of Human Resources, plus flag cases where a seemingly junior person at a major company might actually control significant budget (for instance, a Director of Talent Development at Amazon might have a larger budget than the CHRO of most midsize firms).
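Reproducing that chunked approach takes only a few lines of code. Here’s a sketch assuming the OpenAI Python SDK and a CSV file with hypothetical name / title / company columns:

```python
# Sketch of the "manageable chunks" approach: triage a big attendee list
# 50 records at a time. The CSV column names here are hypothetical.
import csv
from openai import OpenAI

client = OpenAI()
CHUNK_SIZE = 50

def triage_attendees(csv_path: str) -> list[str]:
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    shortlists = []
    for i in range(0, len(rows), CHUNK_SIZE):
        chunk = rows[i:i + CHUNK_SIZE]
        listing = "\n".join(
            f"{r['name']} | {r['title']} | {r['company']}" for r in chunk
        )
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": (
                "From these conference attendees, list ONLY those likely to "
                "control significant L&D budget (e.g. CLOs, senior HR/talent "
                "leaders, or directors at very large companies):\n" + listing
            )}],
        )
        shortlists.append(response.choices[0].message.content)
    return shortlists  # a human still reviews the combined shortlist
```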
Did Weston catch every high-profile lead from the list of thousands? No, but in this case that didn’t matter. Weston was able to compile a list of 250 priority contacts in a matter of minutes, versus waiting days for human assistants to manually sort through everything. That let our team start reaching out to high-priority leads immediately, giving us a critical head start.
Level Up: Having an AI Agent Review Records One at a Time
OK, but what about situations where we want to take advantage of AI’s ability to process “messy” input, but the bar for completeness and consistency is higher?
While generative AI is by definition non-deterministic (read: never 100% consistent, just like humans), there are ways to have AI agents do a more granular analysis of a data set.
Instead of asking the “generic” ChatGPT chatbot to analyze a massive data set all at once, you can use automation tools that connect to LLMs to set up a workflow that feeds individual records to the AI model one at a time, allowing it to process each record independently according to the instructions in its prompt.
For example, a real estate firm could set up a workflow to examine individual records in a property listings database and “Determine whether this property listing would interest Client X based on the following criteria…” followed by extensive parameters pulled from a separate database record of that client’s preferences. The AI processes each listing individually, applying the same standards every time.
Once the dataset is processed, a separate conversational AI agent can present the results to users and highlight relevant insights from the pre-analyzed data. Meanwhile, if you save the original data alongside the AI output, you’ll have an audit trail: critical for regulated industries or high-stakes business decisions.
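Here’s a minimal sketch of that record-at-a-time pattern, using the property-listing example above. It assumes the OpenAI Python SDK; the field names, criteria text, and JSON format are illustrative:

```python
# Sketch of a record-at-a-time workflow with an audit trail: each listing is
# scored independently, and the input is stored alongside the AI's verdict.
# Field names and the criteria text are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()

def evaluate_listing(listing: dict, client_criteria: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable output
        messages=[{"role": "user", "content": (
            "Determine whether this property listing would interest the "
            f"client, based on the following criteria:\n{client_criteria}\n\n"
            f"Listing record:\n{json.dumps(listing)}\n\n"
            'Respond in JSON: {"match": true|false, "reason": "..."}'
        )}],
    )
    verdict = json.loads(response.choices[0].message.content)
    # Audit trail: persist the original record next to the model's verdict.
    return {"input": listing, "output": verdict}
```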
And that’s just the most basic form of this use case. If you’re dealing with a million-dollar analysis need, you might even invest in training a custom “Small Language Model” optimized for repetitive, narrowly defined tasks. For instance, imagine an AI agent that exists entirely within the world of automobile parts catalogs and knows every specification and compatibility issue.
To give another real-life example, we’re applying this approach right now to help social services agencies. Our AI agent will process thousands of job postings individually, evaluating each one against the specific capabilities and preferences of job seekers with disabilities. Instead of generic job search results, clients get curated opportunities that actually match their situations. And this is where AI becomes genuinely transformative for business operations.
Asking AI for Advice

More and more people are turning to AI to answer questions that they would once ask Google or – heaven forbid – a human professional like a doctor, accountant, lawyer, spiritual leader, or therapist. And this is where the gap between “consumer” AI use and serious business applications becomes most glaring.
Cheat Code: Asking an AI Model “What Should I Do?”
For simple questions with straightforward answers – “What spices go in chicken pot pie?” or “How do I keep a goldfish bowl warm at night?” – generic AI models work fine. The collective wisdom of the internet is usually sufficient for these kinds of basic how-to questions.
But when we start asking complex, technical, high-stakes questions, those quick LLM responses start to fall apart fast.
Most “tips and tricks” guides will include advice like “tell the AI to play the role of an expert supply chain analyst…” and nothing more, expecting the machine to magically turn into a seasoned logistics consultant. In reality, it’s more like asking a 10-year-old to put on a white coat and plastic stethoscope and play doctor.
To their credit, companies like OpenAI and Anthropic have started recruiting experts in medicine, law, and other fields to help train their models. Yet, despite this, ChatGPT does not come pre-equipped to address complex questions within any given domain of knowledge, and is essentially “cramming for the exam” every time you ask it to embody a physical therapist or workers’ comp attorney, doing a quick survey of publicly available information.
Additionally, generic AI models really want to answer your questions as quickly as possible and thus will often skip the basic discovery that human experts do before offering advice. Case in point: I asked ChatGPT, “Should I switch to a vegan diet?” It immediately launched into an essay-length response concluding that I should “Start gradually and supplement wisely.”
Then I mentioned, “I have celiac disease, does that change things?”
In response, ChatGPT referred me to a dietitian while listing a great many additional considerations, from having to triple-check food labels to cooking celiac-safe staples in bulk. Which was all great advice, though it took the human asking to surface it.
Power Up: Asking AI “How Might I Be Wrong?”
As we mentioned earlier, AI models – by their very nature – want to complete patterns and answer questions. But this creates a genie-in-a-bottle situation where you need to be careful what you wish for. With generic models, how you phrase questions massively impacts the output you’ll receive.
Recognizing this, one useful approach is to make an assertion, then ask the AI model “How might I be wrong?” This fundamentally changes the AI’s goal from spouting answers to analyzing for gaps.
For instance, when I asked “I need to lose weight and I’m thinking of going vegan. How might I be wrong?” I got a much more succinct and useful analysis. It pointed out that eating vegan doesn’t automatically translate to weight loss (you still need to count calories), raises the risk of nutrient deficiencies, and can be psychologically and socially difficult to maintain – and it advised consulting a doctor right off the bat.
Level Up: Building AI Agents That Actually Think Like Experts
It’s one thing to apply prompting hacks to improve an AI model’s answers to questions in everyday life, and quite another to ask AI agents to participate in consequential business and operational decisions at scale.
Case in point: we recently developed an AI “climate finance advisor” for a client that needed to train thousands of bank staff in dozens of countries on how to promote loans to businesses for energy efficiency and renewable energy upgrades. The challenge wasn’t just technical knowledge: it was creating an AI system that could conduct the same kind of structured discovery and analysis that experienced climate finance experts use, with the same level of accountability and oversight.
To accomplish this, we did the following (a simplified sketch of the first mechanism appears after the list):
- Divided the interaction into distinct stages: instead of rushing to give answers or wandering off topic, the AI agent could not proceed to the next phase of a conversation until it had ticked off a predefined checklist of discovery questions to answer and topics to discuss with the user.
- Set up a modular knowledge base to let the AI agent pull in specific guidance for the country and industry in question, so a loan officer in Nairobi focusing on agricultural clients gets guidance on Kenyan regulations and solutions for Kenyan agriculture, not generic advice that assumes the client is an organic tomato farm in California supplying Whole Foods.
- Allowed the AI agent to look up a user profile, populated by the organization, so it would know a relationship manager’s location, portfolio, key accounts, and experience level – even if the user omitted these details during conversations.
- Created a second AI agent to review conversations and alert a human technical advisor to step in if a particular manager seemed to be struggling.
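To illustrate the first of those mechanisms, here’s a stripped-down sketch of stage gating – the agent simply cannot advance until every checklist item for the current stage has been covered. The stage names and checklist items are invented for illustration:

```python
# Stripped-down sketch of stage gating: the agent may not advance until every
# checklist item for the current stage has been covered. The stage names and
# checklist items below are invented for illustration.
STAGES = [
    ("discovery", ["client_industry", "client_country", "loan_purpose"]),
    ("analysis", ["energy_baseline", "eligible_upgrades"]),
    ("recommendation", ["financing_options", "next_steps"]),
]

class StageGate:
    def __init__(self) -> None:
        self.stage_index = 0
        self.covered: set[str] = set()

    def current_stage(self) -> str:
        return STAGES[self.stage_index][0]

    def mark_covered(self, item: str) -> None:
        # Called after each user reply, once the agent confirms a topic
        # was actually discussed.
        self.covered.add(item)

    def try_advance(self) -> bool:
        """Advance only if the current stage's checklist is complete."""
        _, checklist = STAGES[self.stage_index]
        if all(item in self.covered for item in checklist):
            self.stage_index = min(self.stage_index + 1, len(STAGES) - 1)
            return True
        return False
```

In the real agent, a controller like this sits outside the LLM: the model only ever sees the questions for the stage it’s in, which is what keeps it from rushing ahead or wandering off topic.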
The result was an AI agent that could be trusted to advise not just one but thousands of users in a consistent manner aligned with the organization’s policies and guidelines – asking the right questions in the right order, giving the right answers, leaving a reporting trail, and even knowing when to call in a human expert for support.
Conclusion
While the out-of-the-box capabilities of AI models are impressive, they are not magical. And there’s a difference between “prompting hacks” and enterprise AI platforms, just as there’s a difference between employees learning a few Excel shortcuts and implementing a full ERP system.
The workers and organizations that thrive in the coming decades won’t be the ones that use AI as a shortcut or a lazy “cheat”, but those that invest in the skills and systems to make AI a workforce productivity multiplier on both an individual and institutional level.
Hopefully this article offered some useful insights on how to use – and how not to use – AI, particularly within a workplace context. If your organization is interested in discussing how you could integrate AI agents into your operations, reach out to Sonata Intelligence for a consultation.
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.