One real reason AI isn't delivering: Meatbags in manglement
Feature Every company today is doing AI. From boardrooms to marketing campaigns, companies proudly showcase new generative AI pilots and chatbot integrations. Enterprise investment in GenAI has grown to roughly $30-40 billion, yet research indicates 95 percent of organizations report zero measurable return on these efforts.
In fact, only about 5 percent of custom AI initiatives ever make it from pilot into widespread production, according to a widely shared MIT report published over the summer. This is the paradox of the current AI boom: adoption is high, hype is higher, but meaningful business impact remains elusive. AI is everywhere except on the bottom line.

Why this disconnect? It's not that AI technology suddenly hit a wall. The models are more powerful than ever. The problem is how companies are using AI, not what AI can or cannot do. Organizations have treated AI like just another software deployment, expecting a plug-and-play solution. But AI behaves less like software and more like a new form of labor, one that requires training, context, and workflow integration.
The GenAI divide is between companies that install AI tools and those that build the capability to use them. Many enterprises are on the wrong side of this divide, convinced that buying an AI tool is equivalent to having an AI solution.
At the same time, employees often get more value from shadow AI than officially sanctioned AI projects. Businesses are deploying plenty of AI, but only a handful have figured out how to extract real value from it.
Why companies fail
Simply bolting AI onto old processes doesn't work. Yet that's what most companies do. They treat AI as a plug-in to existing workflows that were never designed for predictive or adaptive tools. The result is that pilot projects abound, but they die on the vine. In fact, companies on average run tens of AI experiments, but few ever make it past the proof-of-concept stage.
MIT research shows that the vast majority of the pilots were executed in isolation, without rethinking how the work itself should change.
An AI agent might generate accurate outputs in a demo, but in the real world it breaks the moment it encounters an edge case or an outdated procedure. If enterprises don't redesign the workflow around the AI – for example, to catch its errors, use its predictions, and complement its strengths – the AI will remain a science experiment rather than a production tool.
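That kind of workflow redesign can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline: the function names, vendor check, and amount thresholds are all hypothetical, and the AI extraction step is passed in as a stub.

```python
# Hypothetical sketch: wrapping an AI step in workflow guardrails
# instead of piping its raw output straight into production.
# All names and rules here are illustrative, not a real product's API.

def process_invoice(invoice_text, ai_extract, known_vendors):
    """Run an AI extraction step, then validate before acting on it."""
    result = ai_extract(invoice_text)  # e.g. {"vendor": ..., "amount": ...}

    # Guardrail 1: route edge cases the model may mishandle to a human
    if result.get("vendor") not in known_vendors:
        return {"status": "needs_human_review", "reason": "unknown vendor"}

    # Guardrail 2: sanity-check numeric outputs before anything is paid
    amount = result.get("amount")
    if amount is None or amount <= 0 or amount > 1_000_000:
        return {"status": "needs_human_review", "reason": "implausible amount"}

    # Only validated outputs flow onward automatically
    return {"status": "approved", "vendor": result["vendor"], "amount": amount}
```

The point is that the validation and escalation logic lives in the workflow, not the model: the AI's errors are caught and complemented rather than trusted blindly.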
Another issue stems from how AI models handle data and context. When AI pilots fail, executives blame the technology, but research found a deeper problem: the AI tools didn't learn. They couldn't retain context or improve over time. In simple terms, the AI was intelligent but suffered from amnesia after every interaction. This is the illusion many firms fall into: they think they have a smart system, but what they really have is a stateless algorithm that never improves.
Companies keep focusing on better models or more training data, but what they need is AI that accumulates context like an employee: learning company terminology, remembering past decisions, and getting better with each task. Lacking this, even a state-of-the-art model will disappoint in practice.
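The stateless-versus-stateful distinction can be sketched minimally. The model call is stubbed out below; the class names and the memory format are illustrative assumptions, since the point is the accumulated context, not the model itself.

```python
# Hypothetical sketch of a stateless AI call versus a context-accumulating
# wrapper. The "model" is just a callable; everything here is illustrative.

class StatelessAssistant:
    def ask(self, model, question):
        # Every call starts from scratch - amnesia after each interaction
        return model(question)


class ContextualAssistant:
    def __init__(self):
        self.memory = []  # company terminology, past decisions, corrections

    def remember(self, fact):
        self.memory.append(fact)

    def ask(self, model, question):
        # Accumulated context rides along with every request, so answers
        # can improve with each task instead of resetting to zero
        context = "\n".join(self.memory)
        return model(f"{context}\n{question}")
```

Real systems do this with retrieval, fine-tuning, or structured memory stores, but the contrast is the same: one design forgets, the other compounds.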
The standout successes did something different. They brought in people who understood processes, not just models, hiring or contracting process designers, workflow architects, and domain experts who could translate AI capabilities into day-to-day operations.

The MIT study found that companies trying to build everything in-house had much lower success rates. Internal AI projects succeeded only about a third of the time, whereas collaborations with external partners, who often bring in domain-specific solutions, doubled the chances of success.
Another striking pattern among successful deployments was a bottom-up approach. They often began with frontline employees tinkering with AI to solve real problems. When these experiments showed promise, management backed them and scaled them up. That meant AI was solving a felt need, rather than being a top-down mandated solution in search of a problem.
The bottom line is that the 5 percent focus on capability, not just tech. They align projects to real business goals, partner for domain expertise, and continuously adapt.
Where AI actually works
Another contrarian finding here is that the real ROI from AI isn't coming from the shiny, customer-facing projects everyone talks about. It's in the back office, in the "boring" stuff companies often overlook.
A massive investment bias plagues many enterprises, with considerable AI budgets allocated to marketing and sales because those initiatives are visible and excite executives. Yet ironically, the biggest payoffs are being realized in corners like operations, finance, and supply chain.
In fact, some of the most significant cost savings come from automating back-office workflows, such as invoice processing, compliance monitoring, and report generation. One reason is low-hanging fruit, as many back-office processes involve manual drudgery or are outsourced to BPO firms, so an AI that can handle those tasks yields immediate savings.

So why do companies keep throwing money at AI for sales, marketing, and customer chatbots instead? It's a case of visibility over value. Front-office projects have easily observable metrics, which make for great headlines and happy board members. On the other hand, the back-office improvements often go unnoticed outside of CFO circles.
In the end, the story of AI in 2025 is a mirror to every major technology upheaval we've seen. Technology alone changes nothing unless organizations shift too. The grand irony is that we have powerful AI models at our fingertips, yet most businesses are stuck in pilot purgatory, scratching their heads at the lack of ROI.
The evidence is clear that this isn't a tech failure. It's a management failure. The divide between the AI winners and laggards is not driven by model quality or regulation, but by approach. AI won't transform business until the enterprise is willing to transform itself. That is the crux of the paradox, and the challenge that forward-looking leaders must answer. ®