SONATAnotes

Artificial Generic Intelligence: Why AGI Isn’t the Future (Nor the Limit) of AI

As AI startups go, my company’s ambitions are fairly modest. We created a platform that lets organizations use AI as a coach for employees and generate roleplay simulations for worker training. And while that’s pretty cool and innovative in the context of corporate learning & development, it’s not like we’re using AI to reprogram the human genome to fight cancer or anything. That’s why, most days, I just mind my company’s business and avoid commenting on the larger state of the AI industry.

Still, recent developments in the industry – including an MIT report finding that 95% of AI implementations fail, plus the wave of hype (and subsequent wave of disappointment) around the release of GPT-5 – really got to me. For the record, I actually wasn’t dissatisfied with GPT-5 at all: after plugging it into some of my company’s AI coaching agents, it seems slightly smarter and a lot faster than the previous versions… overall a pretty decent upgrade. And our own initial pilots with clients have been going fairly well, all things considered.

However, the problem is that OpenAI wasn’t promising “pretty decent” while hyping up GPT-5 – they were promising magic – and most organizations implementing generative AI projects expect comparably mind-blowing returns with relatively little planning or effort (basically repeating the early mistakes of cloud computing implementations a decade ago, which had a comparably dismal failure rate circa 2015).

Compared to those expectations, GPT-5 and most large-scale AI implementations have utterly misfired (many users feel that GPT-5 isn’t even as good as the previous generation of models – GPT-4o, GPT-4.1, and o3).

This roller coaster of public and investor expectations can be irritating for those of us trying to do practical and useful things with current AI technology. The big AI companies like OpenAI and Anthropic have been conditioning the public to expect magic – a future where anyone can tell an AI model “you are the second coming of Steve Jobs… invent an amazing new product on par with the iPhone” and – ABRACADABRA! ALAKAZAM! – it spits out the schematics and marketing materials for the next world-changing gadget.

Each time Sam Altman or other industry leaders proclaim the dawn of “Artificial General Intelligence” (AI models that surpass humans at every intellectual task) – only to walk it back – it risks distracting from the real miracles that researchers at OpenAI, Anthropic, and other companies have already achieved, and from the very real utility of current AI models.

So, what would be a more realistic set of expectations for AI development? And would that be enough to satisfy the rest of us?

Why AI Progress Can’t Be Rushed

In 1965, Gordon Moore – cofounder of Intel – observed that the number of transistors engineers could fit on a microchip – and, by extension, the computing power of those microchips – doubled every year. In 1975, the prediction was revised down to a doubling every two years, but “Moore’s Law” would otherwise hold true for the next 35 years, until around 2010. By that point transistors had shrunk to an atomic scale, imposing a near-absolute physical limit on how much smaller they could get.
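To appreciate how relentless that compounding was, here’s a back-of-the-envelope sketch in Python. The 2,300-transistor baseline is the Intel 4004’s count, borrowed purely as a convenient reference point; the numbers are illustrative, not a precise industry model:

```python
# Back-of-the-envelope illustration of Moore's Law compounding.
# Assumes one doubling every two years from a 1975 baseline.

BASE_YEAR = 1975
BASE_TRANSISTORS = 2_300  # Intel 4004 count, used as an illustrative baseline

def projected_transistors(year: int, doubling_years: float = 2.0) -> float:
    """Idealized transistor count if the doubling held perfectly."""
    periods = (year - BASE_YEAR) / doubling_years
    return BASE_TRANSISTORS * 2 ** periods

for year in (1975, 1985, 1995, 2005, 2010):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

# 35 years of doubling every two years is a factor of 2**17.5,
# roughly 185,000x: growth that had to stop once transistors
# approached the size of individual atoms.
```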

When hype started building around generative AI in the 2020s, the companies behind models like ChatGPT, Gemini, and Claude suggested AI technology would follow its own version of Moore’s Law, with the capabilities of AI models doubling every few months thanks to newer and better AI algorithms running on ever-faster hardware with access to more and more data. And some of the dramatic early leaps between models like GPT-3.5 and GPT-4 seemed to confirm that trajectory.

But while investors took comfort in drawing parallels (“AI is going to keep getting better and better, just like microchips back in the day!”), measuring growth in AI “capability” has proven to be a slippery fish.

First, where Moore’s Law was quantitative (you can count the transistors on a microchip), improvements in “AI capabilities” are qualitative. While each new Claude model is undoubtedly better than its predecessors at writing software code, there’s no precise way to measure the improvement (Lines of code produced? The number of lines that make it through quality assurance into commercial software products? Developer hours saved?). Likewise, while my AI sales and marketing assistant seems smarter with GPT-5 than with GPT-4.1, it’s not like I’m going to run two different sales campaigns with two different versions of the AI agent for six months to see which one’s recommendations produce better results.

In some ways, improving AI models is more like breeding racehorses than stacking transistors on chips. Unlike traditional software where developers start with the previous version then revise and expand the code, different AI models are more like organisms grown in a petri dish of data. No one involved can predict the exact performance, capabilities, and quirks of a new model at the outset, as it develops through a slow process of algorithmic trial and error – analyzing data, predicting what comes next, then being rewarded or penalized by its developers until a new intelligence starts to emerge. What emerges is more a sibling or cousin of previous models – not a software version in the traditional sense – and while developers can apply lessons from previous iterations, each model is very much its own beast.
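To make that “petri dish” image a little more concrete: at its core, training means having the model predict what comes next in its data, then reinforcing predictions that score well. Here is a deliberately tiny sketch in Python – a character-level bigram counter standing in for the billions of weighted parameters in a real model – purely to illustrate the predict-and-reinforce loop, not how production systems are actually built:

```python
from collections import Counter, defaultdict

# Toy stand-in for "analyze data, predict what comes next".
# Real models adjust billions of weights via gradient descent;
# here we simply count which character tends to follow which.

corpus = "the cat sat on the mat. the cat ate."

follow_counts: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1  # reinforce the observed continuation

def predict_next(ch: str) -> str:
    """Predict the most frequently observed next character."""
    return follow_counts[ch].most_common(1)[0][0]

print(predict_next("c"))  # 'a': every 'c' in this corpus is followed by 'a'
print(predict_next("h"))  # 'e': as in "the"
```

Even in this toy, the “knowledge” lives in accumulated statistics rather than hand-written rules – which is part of why nobody can fully predict what a new model will do until it has been trained and tested.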

Artificial General Intelligence = Artificial Generic Intelligence?

Companies like OpenAI and Anthropic find themselves in an awkward position – they have already delivered an incredible technology, but investors demand more miracles. It’s as if the first human to control fire presented a flaming torch to their chieftain, only to have the chieftain reply “Cool – what else you got?”

To meet such stratospheric expectations, today’s AI model providers have to believe AGI will emerge and replace nearly every other type of software, in every industry. But that would go against decades of precedent, in which a few big players deliver the foundational technology (think Microsoft, Google, and Amazon Web Services) and small to midsize players (Epic, Intuit, Autodesk) build on it to create solutions for healthcare, finance, engineering, and other specific verticals.

If AGI doesn’t magically buck that trend, we will likely see OpenAI or Anthropic handling “generic” knowledge work (the way Microsoft and Google handle everyone’s spreadsheets, email, and word processing) while industry-specific AI systems are created by a second and third tier of more specialized niche players with domain expertise and proprietary data you won’t find on the public internet.

After the Ice Age: The Next Phase of AI Evolution

The real danger isn’t that AGI will replace all other software and put everybody except Sam Altman out of a job; it’s that unfulfilled AGI hype will make life difficult for all the other AI companies working hard to deliver more modest but tangible value.

If investors call “BS” on AGI, it wouldn’t be the first time. From the 1950s to the 1980s, IBM, the US military, and other large organizations invested heavily in early AI development before concluding the technology of that era wasn’t ready for prime time. This led to an era known as the “AI winter,” when AI research struggled to find funding. Something similar happened to early Internet companies a decade later: in the 1990s, Wall Street investors weren’t sure which companies would make money from the World Wide Web, so they poured money into network hardware manufacturers like Cisco before realizing that cable and wifi routers were cheap commodities anyone could produce.

If history repeats itself and the AGI / superintelligence hype cools even slightly, it could lead to another tech-industry ice age. But that doesn’t mean AI itself will disappear. During the dot-com crash of 2000, many hyped-up early Internet companies took massive hits, and while some didn’t survive, others repositioned as a new generation of more specialized Internet startups emerged. Cisco’s stock dropped 86% during the crash but stabilized at a more reasonable level, because networks still needed routers. Amazon lost over 90% of its value but still had a clear path to profitability, building on its success selling books (and, years later, its web services business). And nobody could have anticipated the emergence of new, category-defining players like Facebook, Airbnb, and Uber.

The companies that survive or arise after the next (likely brief) “AI winter” will be those using AI technology to solve specific problems with clear ROI and real revenue models – “we make the best legal document analyzer,” “we’re the premier medical diagnostic assistant” (or, if I may, “the best tool for workforce training”) – not fantasies of a digital god-in-a-box.

Make no mistake, AI is still going to transform industries and change life as we know it. But it’s going to take hard work, not AGI magic.

Conclusion

If the past half century of information technology – from the personal computer to the web to cloud computing – has taught us anything, it’s that bubbles burst and early implementations at big organizations seldom go smoothly. Is AI heading for a similar speed bump? Probably – and in a changing world, that kind of familiar pattern is actually somewhat reassuring.

If anything is certain, it’s that the AI companies delivering tangible value are the likeliest to survive the inevitable shake-out of the market – and that in a generation, we’ll probably be asking the same questions about some other technology… whether it’s quantum communications, teleportation, nano-pharmaceuticals, or whatever wonders humans + AI create next.

Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.

Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.

If your organization is interested in developing AI-powered training solutions, please reach out to Sonata Learning for a consultation.
