The $25B Opportunity: Why Verifiable AI Is the Next Great Infrastructure Layer
Open-weight AI models have quietly caught up, delivering ~90% of the performance of closed models at ~15% of the cost. Yet they capture only ~4% of revenue. This $25B trust gap is the single largest inefficiency in the AI economy today.
For all the talk about "frontier AI," one of the most important trends in the AI economy has gone almost unnoticed: open AI models have quietly caught up.
New research analyzing millions of model calls across the inference market shows something remarkable. Open-weight models now deliver ~90% of the performance of closed models at ~15% of the cost. And yet, they capture only ~4% of revenue.
This gap between capability and adoption, which we call the $25B trust gap, is the single largest inefficiency in the AI economy today. [1]
It is also precisely the problem Prufold was founded to solve.
The Paradox: Open Models Win on Capability and Price, Yet Lose in the Market
If the market were rational in the classical economic sense, enterprises would be stampeding toward open models.
- ~6× cheaper
- 90% performance parity
- Rapid innovation cycles: open models now catch up to closed models in ~13 weeks
- Multiple providers competing: lower prices, more geographic spread
So why isn't this happening? Because enterprises are not buying tokens; they are buying trust.
While open models match closed models on benchmarks like GPQA, MMLU-Pro, and LiveCodeBench, benchmarks don't measure the things enterprises actually care about:
- policy stability
- compliance guarantees
- determinism under load
- auditability
- indemnification
- predictable behavior across updates
- safety constraints that cannot be silently bypassed
Closed providers like OpenAI and Anthropic have built their dominance not on perfect performance, but on trust architecture: SLAs, indemnification, responsible-AI infrastructure, and risk absorption.
Open models lack the equivalent foundations. This means the market's "preference" for closed models isn't irrational; it is structural.
The $20–$25B "trust gap" represents annual spend that could immediately shift to open models if reliable guarantees existed. Only a fraction of that requires deep cryptographic verification, but even serving the high-trust, high-compliance slice (finance, healthcare, insurance, autonomous systems) represents a reachable $6–10B serviceable market today, growing rapidly as agentic systems proliferate. In other words: enabling trust in open models is not a niche improvement; it unlocks one of the largest unclaimed efficiency gains in the AI economy.
If history is a guide, AI will mirror the evolution of the cloud. Early adopters gravitated toward fully managed, proprietary platforms; but as the ecosystem matured, cost pressure and standardization pushed most workloads toward open, multi-vendor, commodity infrastructure. We expect the same dynamic in AI. As open models reach performance parity, the dominant long-term equilibrium is a world where the majority of inference runs on open, interchangeable, competition-driven models. This happens once trust, compliance, and verification are solved.
Another lesson from the cloud era is that workloads gravitate toward where the data lives. Enterprises moved to public cloud not just for elasticity, but because colocating compute with data drastically reduces friction, latency, and cost. The same dynamic applies to LLMs. Most enterprise data already sits in public cloud environments, so running open models "next to the data" becomes the natural, efficient default. As soon as trust and verification are solved, open models deployed close to enterprise data will outcompete proprietary endpoints that sit outside the organization's data plane.
The Missing Layer: Verifiable Compute
Today's AI systems, especially agentic systems, rely on brittle scaffolding: prompt filters, behavioral heuristics, and post-hoc monitoring.
None of these guarantee that an AI system actually followed the rules you set.
There is a missing layer in the stack:
A cryptographic trust layer that provides mathematical guarantees about how AI systems behave.
This layer makes it possible to say, with precision:
- This model followed this policy.
- This agent did not access prohibited tools.
- This workflow adhered to GDPR/HIPAA constraints.
- This model did not leak PII in the response.
- These decisions are reproducible and auditable.
- No step of this computation deviated from the verified execution plan.
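Concretely, one can imagine each workflow step emitting a signed attestation record that an auditor can later verify. The sketch below is illustrative only: the record fields and the HMAC-based signing are assumptions for the example, not Prufold's actual format. A production system would anchor the signature in TEE remote attestation or a zero-knowledge proof rather than a shared-secret MAC, but the auditing logic, a tamper-evident claim that "this step followed this policy," is the same.

```python
import hashlib
import hmac
import json

def sign_attestation(record: dict, key: bytes) -> str:
    """Sign a canonical JSON encoding of the attestation record.
    Illustrative only: a real system would use a TEE quote or a
    ZK proof instead of a shared-secret HMAC."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_attestation(record: dict, signature: str, key: bytes) -> bool:
    """Check that the record has not been altered since signing."""
    return hmac.compare_digest(sign_attestation(record, key), signature)

# A hypothetical attestation: "this step followed this policy".
record = {
    "model": "llama-3-70b",                            # model that ran
    "policy_hash": hashlib.sha256(b"no-pii-v2").hexdigest(),
    "tools_used": ["search"],                          # tool calls actually made
    "pii_detected": False,                             # result of an output scan
}
key = b"demo-attestation-key"
sig = sign_attestation(record, key)
assert verify_attestation(record, sig, key)

# Tampering with any field invalidates the signature.
record["pii_detected"] = True
assert not verify_attestation(record, sig, key)
```

The point of the sketch is the auditability property: anyone holding the verification key can confirm the claim after the fact, without re-running the workflow.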
And critically for the open model ecosystem:
This open model is as reliable as its closed-model alternative, and we can prove it.
This is what turns economic potential into economic reality.
Enter Prufold: The Trust Layer for the Autonomous Economy
Prufold exists to make open models, and ultimately all AI systems, verifiable by design.
Our platform stitches together:
1. Trusted Execution Environments (TEEs)
Ensuring the code running your AI workflow is exactly the code you expect.
2. Zero-Knowledge Proofs (ZKPs)
Providing cryptographic evidence that a workflow's steps followed the rules without exposing sensitive data.
3. Formal Verification
Encoding safety and compliance requirements as provable constraints, not heuristics.
4. A Policy-Aware DSL for AI Workflows
Letting developers express what "safe," "allowed," and "compliant" actually mean, and enforce those definitions cryptographically.
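To make the fourth component concrete, here is a toy policy object in plain Python. Everything in it, the `Policy` class, its field names, and the trace format, is a hypothetical illustration of what a policy-aware layer might express, not Prufold's actual DSL; in practice such constraints would be compiled into the cryptographic machinery above rather than checked in application code.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A toy policy: which tools an agent may call, and which
    regex patterns must never appear in outputs. Illustrative
    names only -- not a real Prufold API."""
    allowed_tools: set = field(default_factory=set)
    forbidden_patterns: list = field(default_factory=list)

    def check_trace(self, trace: list) -> list:
        """Return every violation found in a workflow trace."""
        violations = []
        for step in trace:
            if step.get("tool") and step["tool"] not in self.allowed_tools:
                violations.append(f"prohibited tool: {step['tool']}")
            for pat in self.forbidden_patterns:
                if re.search(pat, step.get("output", "")):
                    violations.append(f"forbidden pattern matched: {pat}")
        return violations

policy = Policy(
    allowed_tools={"search", "calculator"},
    forbidden_patterns=[r"\b\d{3}-\d{2}-\d{4}\b"],  # US SSN-like strings
)
trace = [
    {"tool": "search", "output": "weather is sunny"},
    {"tool": "shell", "output": "rm -rf /"},          # not on the allow-list
    {"tool": "search", "output": "SSN 123-45-6789"},  # PII leak
]
violations = policy.check_trace(trace)
assert violations == [
    "prohibited tool: shell",
    "forbidden pattern matched: " + r"\b\d{3}-\d{2}-\d{4}\b",
]
```

The design choice worth noting is that the policy is data, not prose: because "allowed" and "compliant" are expressed as machine-checkable constraints, the same policy can be evaluated at runtime, attested inside a TEE, or proven in zero knowledge.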
We do not compete with model vendors. We make any model, from any provider, provably trustworthy.
Why This Matters Now
Three forces are converging:
1. Capability parity
Open models have never been more competitive, and the economics are overwhelmingly in their favor.
2. Regulatory pressure
The EU AI Act, U.S. NIST frameworks, and insurance underwriters all require some form of verifiable AI risk management. This isn't a niche security requirement; it is becoming standard operating procedure.
3. The rise of autonomous agents
When AI systems begin executing workflows, calling APIs, or manipulating financial or robotic systems, confidence in "benchmarks" collapses. Enterprises need proof.
The market is moving up the verification curve faster than anyone expected.
Our Thesis: The Trust Advantage Will Flip
Closed models win today because they have trust infrastructure. Open models win when trust is disaggregated from the model provider and supplied by an independent verification layer. When that happens, enterprises can choose based on:
- capability
- cost
- specialization
- latency
- geography
- privacy
rather than on fear.
The moment trust becomes portable, the economics shift dramatically:
- Open models become the default.
- Multi-model orchestration becomes normal.
- Closed models compete on performance, not lock-in.
- AI systems evolve from "black boxes" to auditable, verifiable pipelines.
- The $25B gap doesn't just shrink; it becomes the foundation of a new infrastructure category.
Conclusion: The Autonomous Economy Will Be Verifiable or It Won't Scale
AI is rapidly becoming an economic actor: making decisions, drafting contracts, approving transactions, orchestrating tools, interacting with machines. For this world to function, we need more than performance. We need proof.
Proof that the system followed the rules. Proof that outputs are traceable, that the workflow is compliant, and that trust doesn't require blind faith.
This is the infrastructure Prufold is building: the trust layer that closes the $25B gap. It is a platform that liberates the economic potential of open models, and a foundation on which the next decade of AI systems will rely, not because it is convenient, but because it is necessary.
References
[1] Eddins, S., Ramesh, A., & Chen, M. (2025). "The Economics of Open-Weight AI Models: Performance, Cost, and Market Dynamics." *SSRN Electronic Journal*. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5767103