What the dependency structure actually looks like
Step back from Anthropic specifically and look at the shape of the problem.
Model providers sit in a strange position. They are publicly visible, they carry the brand, and they sign the enterprise contracts. But underneath, they depend on a short list of infrastructure owners: hyperscalers like Amazon, Google, and Microsoft, plus newer, more concentrated actors like xAI and CoreWeave.
Amazon has invested heavily in Anthropic. Google has too. Both relationships come with compute commitments built in. When Anthropic reaches beyond those arrangements to a competitor's infrastructure, it tells you something: demand is outrunning even well-funded supply agreements.
This is not unique to Anthropic. OpenAI runs primarily on Microsoft Azure but has also sought additional capacity elsewhere. Meta runs its own infrastructure at scale partly because it learned the hard way what dependency costs. The pattern is consistent: the companies with the most public profile in AI are often the most structurally exposed underneath.
For anyone building products on top of these models, that exposure is worth mapping. Your chosen model provider's uptime, latency, and pricing are downstream of decisions made three layers below you, by people whose interests do not necessarily align with yours.
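One way to make that mapping concrete is to treat the stack as a small dependency graph and walk it. The sketch below is illustrative only; every name in it is a placeholder, not a real provider topology, and the point is simply that a transitive walk surfaces upstream dependencies (like a shared compute supplier) that a one-hop view of "my model provider" would miss.

```python
# A minimal sketch: a product's infrastructure dependency chain as a
# simple graph, so shared upstream dependencies can be enumerated.
# All node names are illustrative placeholders, not a real topology.

from collections import deque

# edges: component -> list of things it directly depends on
DEPENDS_ON = {
    "your_product": ["model_api"],
    "model_api": ["cloud_compute_a", "cloud_compute_b"],
    "cloud_compute_a": ["gpu_supply"],
    "cloud_compute_b": ["gpu_supply"],
    "gpu_supply": [],
}

def transitive_deps(node: str) -> set[str]:
    """Breadth-first walk collecting every upstream dependency of `node`."""
    seen: set[str] = set()
    queue = deque(DEPENDS_ON.get(node, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDS_ON.get(dep, []))
    return seen

print(sorted(transitive_deps("your_product")))
# -> ['cloud_compute_a', 'cloud_compute_b', 'gpu_supply', 'model_api']
```

Note that `gpu_supply` appears once in the output even though two separate compute providers depend on it; that single shared node three layers down is exactly the kind of exposure the paragraph above is describing.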