Technology · 6 min read

When your vendor's former CTO won't trust the CEO under oath

May 9, 2026

In a courtroom in San Francisco, Mira Murati was asked whether she trusted Sam Altman. She said no. Murati joined OpenAI in 2018 and served as its CTO until she resigned in late 2024. She was also the person who ran the company for a few days in November 2023, when the board fired Altman, before the reversal that brought him back. She was not a peripheral figure. She was the person who knew the product roadmap, the safety debates, and the internal mechanics of the organisation better than almost anyone.

Her testimony came during the ongoing lawsuit brought by Elon Musk against Altman and OpenAI. Whatever you think of that case, Murati's words are now part of the public record. And for anyone building a product, a workflow, or a client delivery on top of OpenAI's APIs, that record matters.


This is not a drama to watch from the sidelines

The natural reaction is to treat this as tech industry gossip. Executive leaves. Litigation follows. Quotes get taken out of context. Move on.

That reaction is wrong.

When the former technical lead of your primary AI vendor states, under penalty of perjury, that she could not trust its CEO, you are not watching internal politics. You are receiving a signal about organisational coherence at the top of a company whose infrastructure you may be running client work through every single day.

This is not about whether Altman is a good or bad person. It is about what that testimony tells you about the governance of a firm that controls access to one of the most consequential technology stacks in the industry right now. Governance matters because it shapes priorities, safety decisions, product continuity, and the terms under which you can rely on a vendor to behave consistently over time.

Vendor trust used to mean uptime and pricing. Now it includes: do the people running this company have a shared understanding of what it is for?

Max Pinas, Studio Hyra

What procurement teams are actually buying

When an agency or product team integrates a foundation model into a client workflow, they are buying more than an API. They are buying:

  • Continuity. The model they build on today should behave roughly the same in six months. OpenAI has a documented history of deprecating model versions, sometimes quickly.
  • Safety consistency. The outputs their tool produces need to stay within acceptable parameters as the model is updated. Post-training alignment choices are made by the vendor, not by you.
  • Pricing stability. GPT-4 input pricing dropped significantly between 2023 and 2024. That can work in your favour, but a company under legal and governance pressure can also move pricing in the other direction.
  • Reputational alignment. If your client's branded AI assistant is built on a vendor that is making headlines for internal breakdowns, that becomes your problem in a client review.

None of this requires you to believe one side of a lawsuit. It requires you to take the information seriously.


The multi-model case just got stronger

For the past two years, building on a single foundation model was a defensible choice. GPT-4 had a real capability lead, the API was stable enough, and the operational overhead of maintaining multiple vendor integrations did not pay off for most teams.

That calculation is shifting. Not because the capability lead has disappeared, though it has narrowed considerably with competition from Anthropic, Google, and Mistral. It is shifting because the non-technical risks are now visible in ways they were not before.

A multi-model architecture does not mean running every request through four APIs. It means designing your system so that the model layer is genuinely swappable. Abstract the model calls behind an internal interface. Keep your prompts, your retrieval logic, and your post-processing in code you own. Treat the foundation model as a commodity utility, not a platform you build on top of.
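The swappable model layer described above can be sketched in a few lines. This is a minimal illustration, not any vendor's real SDK: the provider classes, model names, and stubbed `complete` calls are all assumptions standing in for actual API clients.

```python
# Sketch of a swappable model layer. Provider classes and model names
# are illustrative stand-ins; real API calls would replace the stubs.
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """The internal interface your own code depends on."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenAIModel:
    model: str = "gpt-4o"  # hypothetical default

    def complete(self, prompt: str) -> str:
        # An OpenAI API call would go here; stubbed for illustration.
        return f"[{self.model}] {prompt}"


@dataclass
class AnthropicModel:
    model: str = "claude-sonnet"  # hypothetical default

    def complete(self, prompt: str) -> str:
        # An Anthropic API call would go here; stubbed for illustration.
        return f"[{self.model}] {prompt}"


def summarise(model: ChatModel, text: str) -> str:
    # Prompting, retrieval, and post-processing live in code you own.
    # The vendor is injected, never hard-coded into the workflow.
    prompt = f"Summarise: {text}"
    return model.complete(prompt)
```

The point of the `Protocol` is that `summarise` never imports a vendor SDK directly; swapping providers means constructing a different object at the call site, not rewriting the workflow.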

This is not a new idea. But testimony like Murati's is the kind of event that turns a theoretical best practice into an operational priority. The agencies that did this work quietly over the past eighteen months are now in a much better position with their clients.

Three questions worth asking before you ship

If you are a founder or head of product at an agency with AI in your delivery stack, these are the questions that belong in your next architecture review.

Can you swap the model in under a week? Not theoretically. Practically, with your actual codebase. If the answer is no, you have a dependency that is worth pricing into your client proposals.

Does your client know which vendors are in the stack? Most do not ask. That does not mean they do not have a right to know, especially in regulated sectors. Data residency, content moderation policy, and model governance are increasingly part of procurement and compliance conversations at the enterprise level.

What is your fallback if a key model is deprecated or significantly changed? OpenAI has deprecated GPT-4 variants before. Anthropic has done the same with Claude versions. A fallback plan is not paranoia. It is the same logic as a server redundancy requirement.
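The fallback logic above is the same pattern as server redundancy, and it can be sketched just as simply. The `FlakyModel` class here is a hypothetical stand-in for a vendor client that may be deprecated or unavailable.

```python
# Sketch of ordered fallback across model providers. FlakyModel is a
# hypothetical stand-in for a real vendor client.
class FlakyModel:
    def __init__(self, name: str, fails: bool = False):
        self.name = name
        self.fails = fails  # simulates an outage or a deprecated model

    def complete(self, prompt: str) -> str:
        if self.fails:
            raise RuntimeError(f"{self.name} unavailable")
        return f"[{self.name}] {prompt}"


def complete_with_fallback(models, prompt: str) -> str:
    """Try each provider in order; raise only if every one fails."""
    last_err = None
    for model in models:
        try:
            return model.complete(prompt)
        except RuntimeError as err:
            last_err = err  # log and move to the next provider
    raise RuntimeError("all model providers failed") from last_err
```

In production this would also cover differences in prompt format and output parsing between vendors, which is exactly why keeping those in code you own matters.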

None of this requires you to stop building on OpenAI. For many tasks it is still the right tool. But building on it with eyes open is different from building on it with uncritical vendor loyalty.


The agencies that will keep client trust through this period are the ones treating their model stack as an engineering decision, not a brand affiliation.

Max Pinas, Studio Hyra

What this period actually asks of us

The AI tooling landscape is not a stable utility market. It is a set of young companies, several of them in active legal disputes, some undergoing significant leadership change, all racing to ship product faster than they can resolve internal questions about what they are building and why.

Murati's testimony is one data point. The November 2023 board crisis at OpenAI was another. The ongoing questions about governance at AI labs are a pattern, not a series of isolated incidents.

For an agency, the professional response is not cynicism and it is not blind confidence. It is the same thing we ask of any good technical decision: acknowledge the risk, design for it, and stay honest with clients about what you know and what you do not.

That is what it means to build carefully right now. Not slower. Not with less ambition. Just with a clear view of what the ground actually looks like.
