Washington and Brussels Think They Know What's Killing AI Innovation. They're Both Wrong.
How funding structures reward AI adoption over AI success—and why 95% of projects fail as a result.
The Trump administration is dismantling state AI regulations. Brussels is second-guessing the EU AI Act. Somehow, Democrats and Republicans, European regulators and American deregulators have all agreed on the same enemy: compliance requirements are strangling AI innovation.
Here’s the problem.
The data says they’re fighting a battle that doesn’t exist.
The regulations everyone’s mad about barely exist
Let’s examine what some of these regulations actually require—and when they actually take effect.
California SB 53 (signed September 2025, not enforceable until January 1, 2026) applies only to “frontier developers”—companies training AI models using more than 10^26 computational operations. That’s the top tier: OpenAI, Anthropic, Google DeepMind. The requirements?
Publish a “frontier AI framework” on your website describing how you incorporate safety standards
Report critical safety incidents to California’s Office of Emergency Services
Protect whistleblowers who raise safety concerns
Face civil penalties up to $1 million for non-compliance
That’s it. No approval process. No technical mandates. No required shutdown capabilities. Just: explain your safety approach publicly and report serious incidents.
Colorado’s AI Act (signed May 2024, enforcement delayed to June 30, 2026) applies to “high-risk artificial intelligence systems”—AI making “consequential decisions” about employment, education, financial services, healthcare, housing, insurance, or legal services. For companies deploying these systems:
Conduct annual impact assessments examining potential algorithmic discrimination
Implement “reasonable care” to protect consumers from discrimination
Provide notices to consumers when high-risk AI affects them
Notice what’s not in these requirements: no approval processes, no AI-specific licenses, no technology mandates. Colorado is essentially requiring what good companies should already be doing: know what your AI does, check if it discriminates, and tell people when you’re using it to make important decisions.
Both laws take effect in 2026. Yet the failure rate everyone’s citing? That’s happening right now, in November 2025, before comprehensive state AI regulation has even taken effect.
If regulatory compliance were really the bottleneck, we should see companies outside regulated use cases successfully deploying AI at scale.
That’s emphatically not what’s happening.
Companies are spending billions on AI that doesn’t work
MIT researchers documented something extraordinary: enterprises have invested $30-40 billion in AI systems, and 95% of those initiatives are getting zero return.
Not “disappointing returns.” Not “slower than expected adoption.”
Zero.
The failure rates are staggering. In 2025, 42% of companies abandoned most of their AI initiatives, up from just 17% in 2024. The average company scrapped 46% of AI proofs-of-concept before reaching production. More than 80% of AI projects fail—twice the failure rate of regular IT projects.
Boston Consulting Group surveyed 1,000 executives: 74% of companies struggle to achieve and scale AI value. Only 4% achieve “transformative” AI capabilities at scale.
So why are they doing it?
Follow the money (this is where it gets interesting)
Companies are facing a perfect storm of capital dynamics that reward AI adoption regardless of outcomes.
Venture capital has pivoted entirely to AI. In 2024, AI startups raised over $100 billion—an 80% increase from 2023. AI companies now command 34% of all VC investment despite representing only 18% of funded companies. Meanwhile, traditional SaaS startups had raised just $4.7 billion by May 2024.
The stock market rewards AI theater over substance. A European Central Bank analysis found that a 1 percentage point increase in GenAI discussions in earnings calls correlated with a 0.62% rise in quarterly stock prices—creating direct financial incentive for AI mentions regardless of implementation reality. AI mentions in S&P 500 earnings calls increased more than tenfold since early 2023.
Cloud providers are pushing AI adoption too—Microsoft, AWS, and Google collectively provide up to $450,000 in AI-specific cloud credits, while NVIDIA provides free AI development resources. But here’s the catch: these programs fund AI adoption, not business technology strategy. You can get $150,000 to build an AI feature. Try getting that funding for the database migration or process documentation that would actually improve operations.
Even federal small-business tech funding has shifted to AI or disappeared. The Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs—historically the largest source of early-stage tech funding for small businesses, at $4.4 billion annually—expired on September 30, 2025, without reauthorization.
The numbers tell a clear story: When 34% of all venture capital flows to just 18% of companies (those with “AI” in their pitch), you’re not making technology decisions based on business needs. You’re making them based on funding availability.
This is what a perverse incentive structure looks like in real time. Companies aren’t asking “should we adopt AI to solve our operational challenges?” They’re asking “how do we access the capital that requires AI adoption?”
Even when the AI doesn’t work. Even when it costs more than it saves.
Why companies with resources fail anyway
Here’s the puzzle: large enterprises with unlimited budgets fail at the same rate as resource-constrained small businesses.
IBM’s May 2025 CEO Study surveyed 2,000 global CEOs and found 64% acknowledge “the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring.”
The competitive pressure is measurable. ABBYY’s 2024 survey found 63% of IT leaders worried their company will be left behind if they don’t use AI. Despite these fears—and concerns about implementation costs—companies invested an average of $879,000 in AI last year, with 96% planning to increase investment.
The SEC (under Biden) made a point of prosecuting this behavior. In March 2024, the SEC fined two investment advisors for “AI washing”—marketing AI capabilities that didn’t exist. In April 2025, the SEC and DOJ filed criminal charges against a founder who raised $42 million for a purportedly AI-powered shopping app whose transactions were actually “being processed manually by contract workers in foreign countries.” It’s a safe bet that the Trump administration’s SEC will not focus on deterring similar behavior, which means there is going to be more of it.
Here’s the deeper problem: enterprises aren’t just overstating the role of AI in their organizations. The AI they do have has been implemented badly.
RAND Corporation’s research found AI projects fail at twice the rate of traditional IT projects. The reason? Boston Consulting Group documented that successful AI leaders allocate 10% of resources to algorithms, 20% to technology and data, and 70% to people and processes. Most failing companies invert this ratio—they think AI is a technology purchase when it’s actually organizational transformation.
McKinsey’s 2025 State of AI report found 63% of organizations remain stuck in the pilot phase, unable to scale. Only 17% could attribute even 5% of EBIT to AI in the past year.
And then they try to hire their way out (it doesn’t work)
Can’t companies simply hire the expertise they’re missing? In theory, yes. In practice, the structure of AI funding and organizational dynamics prevent this.
For large enterprises: The problem isn’t money—it’s organizational dysfunction. Large companies lose approximately 20% of their data scientists annually, well above the 13% average turnover for tech roles. The reason? 87% of data science projects never reach production. Talented people leave when their work doesn’t matter.
Even when enterprises hire successfully, they lack infrastructure. Only 22% of enterprises have architecture capable of supporting AI workloads. Nearly one-third report that up to 25% of their legacy systems cannot support AI tools. These problems take 24+ months to fix—but boards and investors want AI results in quarters, not years.
For small companies: The barriers are different but equally insurmountable. Responsible AI adoption actually costs $170K-$365K:
Data engineer: $80K-$120K annually
Governance framework: $20K-$40K
Staff training: $5K-$15K
Ongoing evaluation: $20K-$40K annually
Implementation: $30K-$100K+
Most small businesses get $50K-$100K in restricted cloud credits but very little in the way of support. The funding is enough to deploy AI, but not enough to deploy it well.
Here’s what actually works (and nobody’s doing it)
We don’t need to guess about effective technology incentives. We have decades of data.
The federal Technology Modernization Fund provides incremental funding tied to milestone achievement, requires technical assistance, and conditions continued funding on demonstrated outcomes.
Success rate: 80%.
Traditional IT funding provides upfront money for adoption regardless of results.
Success rate: 13%.
Outcome-based funding works roughly six times better than adoption-based funding (80% versus 13%).
Pay-for-Success contracts show the same pattern. The global impact bond market has deployed $745 million across 276 projects where governments pay only after verified outcomes.
Payment follows impact, not promises.
So what does current AI policy do? The exact opposite. The EU AI Act threatens fines of up to €35 million but offers only vague “priority access” to regulatory sandboxes. The NIST AI Risk Management Framework is entirely voluntary, with no way to measure whether you’re implementing it well. Seventy-nine percent of AI governance programs are principles without enforcement mechanisms.
They tell companies what responsible AI looks like. They provide zero incentive to actually do it.
What’s missing
Current policy funds adoption, not evaluation. It rewards deployment, not outcomes. And it provides no support for companies that determine AI doesn’t serve their needs.
Evidence-based AI funding would look different:
Phase 1: Fund assessment of governance capacity, technical readiness, and business alignment—payment regardless of whether you decide to adopt AI.
Phase 2: Only for companies completing Phase 1. Fund policy development, training, data governance, evaluation systems—payment tied to infrastructure milestones, not adoption.
Phase 3: Only for companies completing Phase 2 and determining AI serves their needs. Fund deployment with payment tied to demonstrated outcomes.
Alternative track: Companies that decide AI doesn’t serve their needs get equal funding for whatever technology they identify as most valuable.
This rewards evaluation over adoption. It funds infrastructure before tools. It conditions payment on verified outcomes. And it treats “no” as a valid answer.
The barrier isn’t what they think it is
Washington wants to kill state regulations. Brussels is rethinking the EU AI Act. They’re both fighting about compliance costs.
They’re missing the actual problem.
The real barriers aren’t regulatory requirements. They’re funding incentives that reward adoption over evaluation, competitive pressure that forces companies to invest before understanding value, and organizational realities that money can’t quickly fix.
Neither more regulation nor less regulation fixes these dynamics.
The data showing 95% failure rates isn’t evidence that AI doesn’t work. It’s evidence that companies are adopting AI they haven’t evaluated, can’t sustain, and don’t actually need.
That’s not a regulatory problem. That’s an incentive design problem.
We have the evidence. Outcome-based funding works six times better. Infrastructure-first approaches prevent catastrophic failures. But instead of learning from earlier technology transitions that wasted tens of billions of dollars, we’re replicating those exact failures with AI while claiming to promote innovation.
The regulations everyone’s fighting about won’t take effect until 2026. But every month that passes with adoption-based funding locks in more failed projects, more wasted capital, and more companies making decisions driven by a fear of missing out rather than a mapped and measured business need.
The barrier isn’t compliance requirements. It’s that we’ve built a system that financially rewards hasty AI adoption decisions over good ones.
While governments continue to argue about the scope of AI regulation, investors and industry leaders need to update their evaluation frameworks to reward the most effective AI adoption, not the quickest.
