Fintech Forward
Why “Do More With Less” May Decide the AI Race
Anthropic is betting that efficiency, not brute force, will define the next phase of AI, and that bet has big implications for fintech, cloud economics, and capital markets.
For the last three years, the story of artificial intelligence has been told in one dominant language: scale.
Bigger models. Bigger data centers. Bigger chip orders. Bigger balance sheets. In Silicon Valley, size has increasingly been treated as destiny — and nowhere is that clearer than in OpenAI’s roughly $1.4 trillion in headline compute and infrastructure commitments, a figure that would have sounded absurd even a few years ago.
But as we enter 2026, a quieter counter-narrative is gaining credibility.
At Anthropic, co-founder and president Daniela Amodei keeps returning to a deceptively simple principle: do more with less. It’s not an anti-scaling argument. It’s a challenge to the assumption that brute force alone will decide who wins the AI race.
And for fintech operators, investors, and builders, this debate matters far more than it might first appear.
Scale built the modern AI boom — but it also created fragility
The modern AI economy is built on a belief known as the scaling paradigm: if you increase compute, data, and model size, performance improves in a surprisingly predictable way.
That belief has done more than shape research. It has underwritten:
Hyperscaler capex plans measured in the hundreds of billions
Sky-high valuations for chipmakers
Private-market enthusiasm for companies still burning cash at historic rates
Daniela and Dario Amodei helped popularize this worldview themselves. They know better than most why it works.
But they also see the risk: when an entire industry aligns around one lever — spend more, scale faster — it starts to ignore second-order effects.
What happens if adoption lags capability?
What happens if enterprise integration moves slower than model improvement?
What happens if the exponential curve everyone is counting on… flattens?
As Daniela Amodei put it: “The exponential continues until it doesn’t.”
That sentence captures both the optimism and anxiety underpinning today’s AI buildout.
Anthropic’s bet: capability per dollar, not raw capability
Anthropic is not operating cheaply by normal standards. The company has roughly $100 billion in compute commitments, and that figure will grow if it wants to stay at the frontier.
But relative to peers, it has consistently operated with a fraction of the capital and compute — and still delivered models that rank among the most powerful in the world.
The difference lies in where the company focuses its effort:
Higher-quality training data instead of simply more data
Post-training techniques that improve reasoning without massive pre-training runs
Product decisions that reduce inference costs — where the real, recurring compute bill lives
This matters because training is a one-time expense; inference is forever.
In fintech terms, Anthropic is optimizing unit economics, not just headline growth. It’s a mindset more familiar to payments processors or SaaS operators than to moonshot research labs.
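To make the "training is one-time, inference is forever" point concrete, here is a toy cost model. All figures are illustrative assumptions chosen for the sketch, not Anthropic's (or anyone's) actual economics:

```python
# Toy model: why recurring inference cost, not the one-time training run,
# tends to dominate a model's lifetime compute bill.
# Every number below is a hypothetical assumption for illustration only.

def lifetime_cost(training_cost, cost_per_query, queries_per_day, days):
    """One-time training cost plus recurring inference cost over `days`."""
    inference_cost = cost_per_query * queries_per_day * days
    return training_cost + inference_cost

# Hypothetical inputs: a $100M training run, $0.002 of compute per query,
# 500M queries/day, over three years of serving the model.
training = 100_000_000
per_query = 0.002
daily_queries = 500_000_000
days = 3 * 365

total = lifetime_cost(training, per_query, daily_queries, days)
inference_share = (total - training) / total

print(f"lifetime compute: ${total:,.0f}")
print(f"inference share:  {inference_share:.1%}")
```

Under these assumptions, inference ends up as the large majority of lifetime spend, which is why shaving even a fraction off per-query cost compounds into far more savings than trimming the training budget. The specific ratio depends entirely on the inputs; the structural point is that one cost is fixed and the other scales with usage.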
Commitments are not the same as economics
One of the most revealing parts of Daniela Amodei's comments is her skepticism about how AI spending numbers are discussed.
Not all “commitments” are apples to apples. Many are long-dated, contingent, or structured in ways that lock companies into future capacity before demand is fully proven.
Why does that matter?
Because AI’s technology curve and its economic adoption curve are not the same thing.
From a technical perspective, Anthropic sees no slowdown. Models keep getting better. Capabilities keep compounding.
From a business perspective, reality moves slower.
Enterprises face procurement cycles, regulatory review, change management, and human resistance. Even the best tool doesn’t generate ROI until it’s deeply embedded into workflows.
This gap between what AI can do and what organizations actually deploy is where financial risk accumulates.
Why this matters for fintech specifically
Fintech sits at the intersection of regulation, legacy systems, and high-stakes decision-making — exactly the environment where adoption friction is highest.
That’s why Anthropic’s enterprise-first positioning is so important.
Rather than chasing consumer virality, much of its revenue comes from companies embedding Claude into products, internal tools, and decision systems. That usage tends to be:
Stickier
More predictable
More aligned with long-term contracts
The company reports 10x revenue growth year over year for three straight years, while making its models available across all major cloud platforms — even those that sell competing AI systems.
This multi-cloud posture isn’t détente. It’s strategy.
By avoiding a single infrastructure bet, Anthropic maintains flexibility on cost, availability, and performance — a classic fintech lesson applied to frontier AI.
Capital markets are quietly entering the room
There’s another reason this debate is intensifying as 2026 begins.
Both Anthropic and OpenAI are behaving like companies that expect public-market scrutiny, even as they remain private. They’re adding governance, forecasting discipline, and operational rigor — while still raising massive sums and signing long-term compute deals.
That tension is unresolved.
Public markets are far less forgiving than private ones when it comes to:
Fixed costs
Capital efficiency
Unclear paths to sustained margins
If capital remains abundant, scale-at-all-costs may continue to dominate. If conditions tighten — or if AI adoption proves bumpier than expected — efficiency will stop being a philosophical preference and start being a survival trait.
The real question for 2026
Anthropic’s bet isn’t that scaling doesn’t work.
It’s that scaling alone is not a business model.
The next phase of AI competition will likely be decided by who can:
Deliver more capability per dollar of compute
Integrate faster into real enterprise workflows
Build economics the broader economy can actually sustain
For fintech leaders, investors, and builders, this should feel familiar. Every financial innovation eventually moves from “what’s possible” to “what’s profitable.”
AI is reaching that inflection point faster than most people expected.
The exponential may continue. Or it may bend.
Either way, the companies that win won't just be the ones that spent the most. They'll be the ones that learned how to spend well.