Does Your Team Need “AI Skills”… Or AI Tools?

The fun part of my job is that I get to talk about artificial intelligence with people from all kinds of organizations: just this week it was a national Ministry of Health, an international bank, a consulting firm for the mining industry, and a well-known fast food chain.

But when we start discussing their plans for incorporating AI into their work, they often say “Well, right now we’re just teaching our people the basics…”

Depending on the organization, “our people” could refer to nurses, loan officers, marketing directors, hotel managers, high school teachers, lawyers, accountants, or chemists.  And by “the basics”… well, that varies.  Half the time the client asks “What do you think we should be teaching our people?”

To which I reply, “Good question – what does a [insert profession here] need to know about mega-prompts and machine learning?”

Five Levels of AI Skills

“AI Skills” can mean a wide range of things, from knowing how to generate personalized birthday cards with Canva to developing AI systems that cross-reference MRI scans, genomic sequencing, and real-time blood biomarkers to diagnose early-stage cancer.  It’s all just a matter of degree.

For sanity’s sake, we can classify AI-related tasks – and, by extension, “AI skills” – per the following levels:

Level 1 – Generic AI Autopilot – This covers tasks that most large language models (ChatGPT, Claude) can perform easily with no specific guidance.  For instance, telling ChatGPT to ‘summarize this article’ or ‘help me write an email requesting a refund for a damaged package’ is the equivalent of using a calculator to add numbers – simple and low risk.

Level 2 – Generic AI + Common Sense – This addresses tasks that don’t require any specialized expertise beyond the collective wisdom of the Internet, which an LLM can usually get right with a bit of plain-language guidance from a non-expert human.  For instance, if you ask an LLM to suggest home renovation projects that add value, and it recommends installing a pool, a savvy homeowner from Alaska might point out that pools actually decrease property values in colder climates.

Level 3 – Generic AI + Domain Expertise – These are tasks where an LLM can be helpful as long as the human knows enough to judge the accuracy, quality, and completeness of the output. For instance, if you were to ask ‘What are some reasons why a person might have elevated liver enzymes?’ and the LLM responds only with “excessive alcohol consumption”, an experienced hepatologist would know it could also be caused by other conditions, such as autoimmune hepatitis.

Level 4 – Generic AI + Clever Prompting – This is more of an add-on to any of the previous three levels: sometimes, applying some simple AI-prompting ‘tips and tricks’ can improve output.  For example, telling an LLM to “structure your output like the following example” or “embody the role of an employment law attorney” or “double-check your previous answer for accuracy” (though most LLM providers are already incorporating these basic strategies into their newer models by default).
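
For illustration, here’s a minimal sketch of how those three tricks might be combined programmatically.  Everything in it is hypothetical: call_llm() is a stand-in for whichever LLM API you actually use, and the policy-review scenario is invented.

```python
# Hypothetical sketch: combining role prompting, an output exemplar, and a
# self-check instruction in one prompt. call_llm() is a placeholder, not a
# real library function.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider's API")

def build_prompt(policy_text: str) -> str:
    # Trick 1: assign the model a role
    role = "Embody the role of an employment law attorney."
    # Trick 2: show an exemplar of the desired output structure
    exemplar = (
        "Structure your output like the following example:\n"
        "RISK: Misclassification of contractors\n"
        "SEVERITY: High\n"
        "RECOMMENDATION: Audit all current contractor agreements"
    )
    task = "Review the policy below and list any compliance risks:\n" + policy_text
    # Trick 3: ask the model to double-check its own work
    self_check = "Before answering, double-check your response for accuracy."
    return "\n\n".join([role, exemplar, task, self_check])

# Usage: answer = call_llm(build_prompt(policy_text))
```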

Level 5 – Purpose-Built Solutions – At this level, we’re dealing with highly complex, multi-step tasks that LLMs cannot perform reliably out of the box, even with expert guidance and supervision.  For instance:

  • High-stakes automation (like analyzing loan applications)
  • Applying expert frameworks or leveraging proprietary data (tasks where the ‘collective wisdom of the Internet’ isn’t enough, like helping a large manufacturing company assess *all* the impacts of government-imposed limits on carbon emissions.)
  • Integrating multiple AI models and/or traditional software applications (for instance, building a system where an AI agent handles patient intake for a medical clinic – see the sketch after this list)
  • Complex prompting (going beyond 10-word or even 500-word prompts and instructing an AI to do things like model the behavior of vendors in an electronics company’s supply chain in different scenarios)
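
To make the Level 5 distinction concrete, below is a heavily simplified sketch of the patient-intake example – multiple model calls chained together with traditional code.  Everything here is hypothetical (call_llm() and the scheduler hand-off are placeholders), and a real system would add error handling, safety checks, and human review.

```python
# Hypothetical sketch of a multi-step AI pipeline for patient intake.
# call_llm() is a placeholder for a real LLM API; a production system
# would need validation, error handling, and compliance safeguards.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider's API")

def handle_intake(patient_message: str) -> dict:
    # Step 1: one model call turns a free-text message into structured data
    extraction = call_llm(
        "Return JSON with keys name, date_of_birth, symptoms, and "
        "insurance_provider, extracted from this message:\n" + patient_message
    )
    record = json.loads(extraction)

    # Step 2: deterministic code (not AI) enforces business rules
    if not record.get("insurance_provider"):
        record["status"] = "needs_human_review"
        return record

    # Step 3: a second, differently prompted model call triages urgency
    record["urgency"] = call_llm(
        "Classify these symptoms as routine, urgent, or emergency: "
        + str(record.get("symptoms"))
    )

    # Step 4: hand off to traditional software (scheduling, EHR, billing)
    # clinic_scheduler.book(record)  # hypothetical integration point
    record["status"] = "scheduled"
    return record
```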

And perhaps you could add a “Level 6” for the people who actually build the AI models that enable all of the above.

All of which raises the question: “How high up the AI skills ladder will most people need to go?”

Should People DIY with AI?

There’s an old saying: “Give a person a fish and you feed them for a day; teach them to fish and you feed them for a lifetime.”  But while I’m all about empowering people with new skills (it’s kind of my “day job” as an organizational learning & development consultant), that saying assumes we all want to be full-time fishermen.

With every technology there will always be a small group of professionals and super-hobbyists who WILL figure out how to push systems to their limits, for fun or profit.  It’s like when my father walked in on me and a geek friend assembling a high-performance gaming computer in the basement: he was puzzled at first, then suddenly announced “I get it… this is like when my buddies and I used to hot rod our cars!”  (Ironically, that high school friend is now head of DevOps for my AI startup.)

But, as with cars and personal computers, the AI revolution won’t come from everyone learning to fine-tune their own custom Llama 3.1 405B instance running on a Ryzen Threadripper 7995WX.  Rather, it will happen when people with other forms of expertise (in law, architecture, graphic design, etc.) can take AI tools crafted by tech experts and use them to amplify their talents.

To be clear – I’m not saying the DIY ethic is bad (it’s beautiful!), and I’m not saying an artist shouldn’t understand their tools (e.g., most serious cyclists will swap out a few components whenever they buy a new bike).  But there’s something to be said for specialization, and for focusing on the things you, personally, do best.

The Workforce of Tomorrow

So if we can’t all become AI engineers, does that mean the rest of us will be unemployed?

Hardly.

Assuming we’re able to navigate this technological revolution with a sense of social responsibility (versus the billionaires using AI to seize all the Earth’s resources and blast off to their moon colony, leaving the rest of us to bake in the global warming oven)… most people will fall into one of these employment categories:

    • Hardcore AI technology expert (the aforementioned machine learning geeks with degrees from MIT or Tsinghua, who will create the foundational AI and computing technologies the rest of us depend on for… pretty much everything.)
    • AI application expert with intermediate domain knowledge (e.g. an AI developer / prompt engineer who understands enough about healthcare to help research hospital clinicians build medical diagnostic tools).
    • Domain expert with intermediate AI skills (e.g., high-powered attorneys who know how to use multiple AI tools – some of them custom built specifically for their firm – to conduct research and develop their arguments, but otherwise focus on client relationships and overall case strategy)
    • Human-centered service providers (Because let’s face it – nobody with money to spend wants an AI marriage counselor, kindergarten teacher, or bartender)
    • Workers in fields where automation costs exceed labor costs (At least until robotics technology catches up… though even plumbers, hairdressers and wilderness search & rescue teams will likely have AI assistants)
    • The voluntarily underemployed (Living on Universal Basic Income payments from their government – funded by taxing the surplus production of the other groups – and spending their time pursuing passion projects like running artisanal cupcake shops or simply enjoying a modest but low-stress lifestyle)

While this might seem like science fiction, it’s honestly hard to imagine a different future unless AI technology somehow hits a wall (e.g. energy costs, lack of training data, etc.) that prevents it from growing beyond its current capabilities. And, even if I didn’t co-own an AI company, I wouldn’t bet against the rate of progress slowing any time soon.

But Wait… Shouldn’t AI be Easy?

If the idea of society consisting of a small number of AI-assisted experts and a ton of underemployed people taking it easy sounds far-fetched, the leaders of the major AI tech companies are all promising something even more radical.  What if AI gets so smart that it can pretty much develop itself and do everything better than humans?

Would we not need domain experts anymore?  Would we not even need AI experts anymore?  Would it be as simple as telling your personal AI “cure my cancer” and presto – cancer cured?

Maybe it will be that easy.  Maybe it should be that easy.  In fact, a large part of my work developing AI coaches and assistants for professionals in other fields focuses on making it seem that easy.

Yet, somehow, I suspect that there will always be at least one human with AI and/or domain expertise hiding somewhere behind the curtain.

Consider race cars: the average Formula 1 car is 5 times faster than the fastest racehorse and 8 times faster than the fastest human Olympian.  And an F1 car can sustain its top speed for around 45 minutes before its tires wear out, while a racehorse can only hold its maximum speed for a minute or two, and a human sprinter for mere seconds.

Yet there’s no denying that an F1 car plus a world champion human driver will win more races over the course of a season than an identical car with a less skilled human driver.  And AI will likely follow the same dynamic: while more and more tasks will fall into the “generic AI on autopilot” level, the highest-performing AI systems will still include a highly skilled human element.

Teaching “AI Skills” Today

So, after that long digression we come back to our original question: “What does a [nurse / hotel manager / lawyer] need to know about AI?”

Here’s a simple framework to get your organization started:

1. Focus on problems

Whatever you teach people about AI, frame it in terms of YOUR real-world problems.

Don’t ask “How are we going to use AI?”  Instead, ask “What takes up too much of our people’s time?” or “Where do we keep making expensive mistakes?” or “What knowledge/expertise do we wish we could clone?”  Then ask “Can AI help with this?”

For example:

  • A hospital might realize their nurses spend 40% of their time on documentation
  • A construction company might struggle to consistently estimate project costs
  • A nonprofit might want to help field workers make better decisions about resource allocation

2. Match the problem to the task level

Remember those five levels we discussed?

Some problems can be solved by teaching people to ask ChatGPT better questions (Levels 1-2), while others might require a team of experts to build your people some custom, purpose-built AI tools (Level 5).

For instance, if a hospital wants to use AI to automate patient intake and claims processing, that’s probably going to require a purpose-built tool (not just “ChatGPT training for nurses”).  Meanwhile, a construction company might get huge value just from teaching estimators to use generic AI tools with some expert oversight (Levels 3-4).

3. Allocate training according to need

Once you know what level of solution you need to address each problem, you can decide who needs to receive what type of AI skills training.

Levels 1-2:

  • Basic awareness (just enough to demystify the technology – on a “Bill Nye the Science Guy” level)
  • Responsible usage (e.g. don’t take everything an AI model says at face value, treat it as something a random coworker told you – don’t get lazy, and verify)
  • General policies (e.g., don’t enter sensitive data into a free consumer-level ChatGPT account – it might be used to train their public model)

Level 3:

  • Helping domain experts validate and leverage AI outputs
  • Using AI as a collaborative brainstorming tool (versus a search engine)
  • Explaining why it’s not easy for AI models to cite their sources
  • How to phrase questions
  • How to give exemplars of desired output

Level 4:

  • Selective prompt engineering training for power users
  • Possible training on low-code tools for developing agents and automations

Level 5:

  • Decide if you need in-house AI solution development expertise (and, at minimum, train managers and executives to be educated consumers of AI related services)
  • Capturing knowledge in AI-friendly formats (for RAG, etc. – see the sketch after this list)
  • Cross-training traditional software developers on non-deterministic programming (i.e. prompt engineering)
  • Training prompt engineers / developers to translate domain-specific frameworks and methodologies into actionable AI prompts
  • Comparing the relative strengths and weaknesses of different AI models
  • Working with APIs
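
To illustrate the “AI-friendly formats” point, here’s a bare-bones sketch of the retrieval-augmented generation (RAG) pattern.  Again, embed() and call_llm() are hypothetical stand-ins for a real embedding model and LLM API, and a production system would use a vector database rather than brute-force search.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG): answer
# questions from an organization's own documents rather than from the
# model's general training data. embed() and call_llm() are placeholders.

import math

def embed(text: str) -> list[float]:
    raise NotImplementedError("wire this to an embedding model")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider's API")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def answer_from_knowledge_base(question: str, documents: list[str]) -> str:
    # Index: embed each chunk of institutional knowledge
    index = [(doc, embed(doc)) for doc in documents]

    # Retrieve: find the three chunks most similar to the question
    q_vec = embed(question)
    top = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)[:3]

    # Generate: instruct the model to answer from the retrieved text only
    context = "\n---\n".join(doc for doc, _ in top)
    return call_llm(
        "Using only the reference material below, answer the question.\n\n"
        "Reference material:\n" + context + "\n\nQuestion: " + question
    )
```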

Conclusion

While generative AI is a new technology, the field has already developed some clichés.  Perhaps the most common is “AI won’t replace humans – but humans with AI will replace humans without AI.”

And while I’m old enough to be skeptical about such pronouncements (how many corporate CEOs still don’t comprehend online marketing… or how website domains work?) it’s probably true there won’t be much demand for people who can’t work effectively with AI.

So, to answer the question in the clickbait title – ‘Does your team need AI skills or AI tools?’ –  they probably need both, just in different proportions depending on their role and the problems they’re trying to solve.

Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.

Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.

If your organization is interested in developing AI-powered training solutions, please reach out to Sonata Learning for a consultation.
