Insight

The AI Recalibration—Why Smart Companies Are Slowing Down

ROI Concerns Are Real!

Gartner predicts that 30% of GenAI projects will be abandoned after proof of concept by 2025 due to poor ROI and unclear business value. More strikingly, an MIT Media Lab report (as covered by Fortune) found that 95% of generative AI pilots at companies are failing to deliver a measurable financial return on investment, despite $30-40 billion in enterprise investment. 

The disconnect is massive. Tech companies are projected to spend approximately $400 billion this year on AI infrastructure, whereas American consumers spend only $12 billion annually on AI services. 

US Census Bureau data from September show that AI adoption is declining among companies with more than 250 employees, stagnating among mid-sized companies, and continuing to grow only among small companies with 1-4 employees. 

Interestingly, despite these concerns, nearly two-thirds of U.S. deal value went to AI and machine learning startups in the first half of 2025, up from 23% in 2023. The big tech companies are still committing enormous sums. 

So yes, there’s definitely a recalibration happening, with companies becoming more skeptical and demanding clearer ROI. It's not yet a full retreat, but more of a growing tension between continued massive investments and disappointing practical returns. And this shift is creating real problems for everyone in the industry. 

This growing skepticism has created a significant challenge for anyone trying to sell or implement AI solutions. 

 

The AI Marketing Problem

Decision-makers are increasingly wary of AI pitches after seeing so many failed pilots and unclear ROI. Leading conversations with AI-driven solutions might actually trigger skepticism rather than interest. Everyone claims to do “AI” now, so it doesn’t really provide valid differentiation between vendors; those claims have become noise.  

Companies need proven ROI, practical implementation, and business transformation. They need help separating AI hype from practical value, strategic guidance on when and where to apply AI/ML, properly built data foundations before any ML work, and someone who can deliver ROI-focused solutions.

 

Why I Didn't Rush Into the AI Gold Rush

After decades in software engineering, I've watched countless "revolutionary" technologies come and go. My approach has always been healthy skepticism. I give new tech time to prove itself before diving in. This isn't about being a Luddite; it's about being strategic. Waiting for the dust to settle has consistently allowed me to learn what actually works, separate signal from noise, and adopt technologies when they're ready for real-world use. 

Here's the thing about AI: it's a tool, not a revolution. Yes, it's a powerful tool—but it belongs in your toolbox alongside databases, APIs, and all the other tools we use to solve problems. From day one, I approached AI the same way I've approached every other technology: with curiosity tempered by pragmatism. I learned its strengths, acknowledged its limitations, and most importantly, recognized what it isn't—a silver bullet. While others were racing to slap "AI-powered" on everything, I've been saying what more people are finally starting to realize: we're moving too fast, and AI isn't the answer to every question. 

 

What Actually Works - A Framework for Thoughtful AI Adoption

As a software engineer, architect, and consultant, I've learned one fundamental truth: everything starts with the business problem. Technology is always secondary. 

Before AI ever enters the conversation, I need to understand the whole picture. What's the actual process we're trying to improve? How does information flow from point A to point B? How are end users currently interacting with the system, whether that's through software, manual workflows, or automated processes? Only after mapping this out can I honestly assess whether AI is even a part of the solution. 

And if AI does make sense? That's just the beginning. You can't simply bolt AI onto unprepared infrastructure and expect magic. There's real work involved: getting your data foundations ready, ensuring your systems can support what you're trying to build, and achieving actual "AI readiness" before a single line of development code gets written. 

However, what most technical discussions overlook is the people problem. AI's promise to "replace" workers has created genuine fear, and fear kills adoption faster than any technical limitation. I've seen brilliant solutions fail not because the technology didn't work, but because the people who needed to use it were never brought along for the journey. 

This is where change management becomes critical—and it needs to start on day zero. Work with your end users, not against them. Train them, involve them in the process, and help them see AI as a tool that enhances their jobs, rather than rendering them obsolete. When you enforce change from the top down without this foundation, you're setting yourself up for resistance, workarounds, and ultimately, failure. 

 

The "AI Readiness" Checklist

Here’s where 70% of AI projects fail: the human element. Even the most technically sound AI implementation will crash if your organization isn’t prepared for algorithmic decision-making. This involves training teams to work alongside AI systems, establishing new approval workflows, and transitioning from intuition-based to data-driven decision-making. Change management isn’t an afterthought; it’s the thread that weaves through every stage of implementation. Start building AI literacy and cultural readiness from day one, not after your technology is deployed.

 

Before AI even enters the conversation, you need a framework to evaluate whether you're actually ready for it. Here's my approach at the highest level: 

  • Audit Data Infrastructure
  • Standardize Processes 
  • Implement Change Management 
  • Automate with Technology 

Notice the order here. It's intentional. 

Do not buy the latest AI technology and then force-fit your processes, hoping it works. That's how you take existing chaos and multiply it exponentially.


Instead, start with the right question:

"What processes can we optimize, standardize, and then automate?"

 NOT

"What technology should we buy?" 

 

The Five Dimensions of AI Readiness

Actual readiness isn't just technical; it spans five critical areas:

Business Readiness

  • Can you clearly define the problem you're solving? 
  • Can you articulate the specific outcomes and metrics you want to improve? 
  • Have you confirmed that a simpler, non-AI solution won't work? 

Technical Readiness

  • Is your data clean, accessible, and properly structured? 
  • Can your infrastructure support what you're building? 
  • Do you have a team to maintain the solution long-term? 
  • Are monitoring and observability capabilities in place? 

Organizational Readiness

  • Do you have executive sponsorship and a realistic budget? 
  • Are there champions within the organization driving adoption? 
  • Is there a clear owner for the AI solution? 

Cultural Readiness

  • Can you implement change management from day zero? 
  • Are you planning to work with end users rather than impose change on them? 
  • Can you build trust through transparent communication and training? 

Risk & Governance Readiness

  • Do you have the resources to address data privacy, security, and compliance requirements? 
  • Do you have proper governance frameworks in place?  
  • Is there a risk management plan for when things go wrong, created BEFORE you implement the solution? 
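As a rough illustration, the five dimensions above can be turned into a simple self-assessment. This is a hypothetical sketch, not a formal scoring methodology; the questions come from the checklist above, but the scoring approach and the idea of flagging the weakest dimension are my own assumptions for illustration.

```python
# Illustrative AI-readiness self-assessment built from the five dimensions above.
# The scoring scheme is an assumption for illustration, not a formal methodology.

DIMENSIONS = {
    "Business": [
        "Can you clearly define the problem you're solving?",
        "Can you articulate the specific outcomes and metrics you want to improve?",
        "Have you confirmed that a simpler, non-AI solution won't work?",
    ],
    "Technical": [
        "Is your data clean, accessible, and properly structured?",
        "Can your infrastructure support what you're building?",
        "Do you have a team to maintain the solution long-term?",
        "Are monitoring and observability capabilities in place?",
    ],
    "Organizational": [
        "Do you have executive sponsorship and a realistic budget?",
        "Are there champions within the organization driving adoption?",
        "Is there a clear owner for the AI solution?",
    ],
    "Cultural": [
        "Can you implement change management from day zero?",
        "Are you planning to work with end users rather than impose change?",
        "Can you build trust through transparent communication and training?",
    ],
    "Risk & Governance": [
        "Do you have resources for data privacy, security, and compliance?",
        "Do you have proper governance frameworks in place?",
        "Is there a risk management plan created before implementation?",
    ],
}

def assess(answers):
    """Given a dict mapping question -> True/False, return the fraction of
    'yes' answers per dimension (unanswered questions count as 'no')."""
    return {
        dim: sum(answers.get(q, False) for q in questions) / len(questions)
        for dim, questions in DIMENSIONS.items()
    }

def weakest_dimension(scores):
    """Return the dimension with the lowest score -- the place to start."""
    return min(scores, key=scores.get)
```

The point of the exercise isn't the number itself: it's that the lowest-scoring dimension, not the newest model, tells you where the next month of work belongs.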

One final critical point: Ensure the vendor or partner you're working with is a collaborator, not just an implementer. You don't need a "yes-team" that will build whatever you ask for. You need someone who will push back when AI isn't the right answer.


I know that's a lot to consider. In the real world, companies are constantly tempted to cut corners. They skip steps to save time and money upfront, and launch implementations hoping everything will work out. 

That's like walking a tightrope between the World Trade Center and Central Park Tower without a balance pole. You might feel confident for the first few steps, but the fall is inevitable.

 

The Cost of Getting It Wrong

Remember those numbers from the beginning? 95% of generative AI pilots fail to deliver measurable ROI. $30-40 billion in enterprise investment with little to show for it. These aren't abstract statistics. They represent real companies that skipped steps, cut corners, and paid the price. 

But financial waste is only part of the story. Here's what actually happens when you bypass AI readiness: 

Your data becomes a liability instead of an asset. Without proper infrastructure, you're feeding garbage into expensive AI systems.  

The results? Hallucinations, wrong recommendations, and decisions based on flawed outputs. Now, not only have you wasted money on the AI implementation, but you've also undermined trust in your data entirely. 

Your team becomes demoralized. Engineers and data scientists spend months building something that doesn't work because the foundation was never there. They know it's doomed, but leadership is committed—morale tanks. Your best people start looking for the exit. 

Your users revolt—quietly. Without change management, employees will find workarounds. They disregard the new AI tool and continue using their spreadsheets. Your expensive solution becomes shelfware, and adoption rates plummet.  

The ROI you promised? It never materializes because nobody's actually using what you built. 

You've created technical debt on steroids. Now you have AI systems built on unstable foundations that need constant patching. You can't move forward because you're stuck maintaining something that should never have been built this way.  

And unwinding it? That costs more than doing it right the first time. 

This is the pattern I've watched repeat itself across the industry. Companies racing to say they "do AI" without asking whether they should, or more importantly, whether they're ready. 

 

A Different Approach

I’ve learned that the most valuable thing I can tell a client is “no, not yet” or sometimes “no, not AI.”

At Paper Plane Consulting, we don’t lead with technology. We lead with understanding your business. We map your processes, audit your data infrastructure, and assess your organizational readiness; only then do we discuss whether AI is a suitable solution. Sometimes it is. Often it isn’t. And when it is, we build the foundation first.

We can help you navigate this exact challenge: separating AI hype from practical value and delivering solutions that actually generate ROI.

If you’re tired of vendors trying to sell you AI solutions before understanding your problems, let’s talk.

Ready for a different conversation about AI? Let's start with your business problem, not technology.


© 2025 | Paper Plane Consulting