Technology · 6 min read

What Murati's testimony tells us about AI safety culture

May 9, 2026

Sworn testimony is rare in the AI industry. Press releases are not. So when Mira Murati, OpenAI's former CTO, stated under oath that Sam Altman had lied to her, it landed differently than the usual stream of departures, blog posts, and carefully worded statements that the industry runs on.

This is not a gossip story. It is a governance story. And if you are building anything serious on top of AI infrastructure, or advising organizations that are, you should pay attention to what it actually tells us.


Safety culture is not what the safety blog posts say it is

Every major AI lab publishes a safety philosophy. The documents are serious and long. They invoke concepts like alignment, interpretability, and responsible deployment. They are written by thoughtful people.

But a safety culture is not a document. It is what happens in the room when the decision is hard and the competitive pressure is high. Testimony from inside that room is almost never available.

Murati's deposition, given in the context of Elon Musk's lawsuit against OpenAI and Sam Altman, is one of the closest things to primary source evidence we have ever had about how safety decisions are actually processed at the highest level of the most prominent AI organization in the world. That makes it worth reading carefully, not just quoting for the drama.

Her account describes a pattern that anyone who has worked inside a fast-moving organization will recognize. Information was shared selectively. Decisions moved faster than the stated process allowed. The person nominally responsible for safety was not always in the loop on choices that had safety implications.

None of that is unique to OpenAI. Most of it is structural.

A safety culture is not a document. It is what happens in the room when the decision is hard and the competitive pressure is high.

Max Pinas, Studio Hyra

The structural problem that testimony exposes

Here is the uncomfortable part. OpenAI is not an outlier in the way safety responsibility is distributed inside AI organizations. The pattern is almost universal.

You have a dedicated safety function, often staffed by people who are genuinely skilled and motivated. That function has formal authority on paper. It produces frameworks, red-teaming protocols, and deployment guidelines. It is treated as a serious part of the organization during calm periods.

Then the competitive environment accelerates. A rival ships something unexpected. A board meeting changes the timeline. A product decision is made in a smaller room, faster, and the safety function is consulted after the fact, or not fully consulted at all. The people responsible for safety learn about consequences they were not asked to anticipate.

This is not a story about bad actors. It is a story about incentive structures. Speed is rewarded. Market position is concrete. Safety risk is probabilistic and often invisible until it is not. In that environment, even well-intentioned organizations will tend to compress safety review cycles when they feel they have to.

Murati's testimony gives us a named, sworn account of that compression happening at the top of the hierarchy, not at the middle-management level where it is usually assumed to live.


What the departure patterns already told us

Murati's deposition did not arrive without context. In the two years before it, OpenAI lost a significant portion of its safety-focused leadership. Ilya Sutskever, a co-founder and the architect of much of OpenAI's early safety thinking, departed in May 2024. The entire "superalignment" team, which had been tasked with solving the problem of aligning superintelligent systems, was effectively disbanded by mid-2024, with its leads, including Jan Leike, leaving publicly and, in several cases, explaining why.

Leike's departure statement was blunt. He wrote that safety culture and processes had taken a back seat to product development. That was a voluntary resignation statement, not sworn testimony. But the pattern it described is now corroborated, in a different context, by testimony under oath.

When multiple senior people, across different tenures and roles, describe the same structural dynamic independently and in different legal and professional contexts, that is worth treating as a data point rather than a narrative.

The data point is this: at the organization that has done more than any other to publicly define what responsible AI development looks like, the people responsible for safety have repeatedly found themselves outside the information flow when it mattered.

Speed is rewarded. Market position is concrete. Safety risk is probabilistic and often invisible until it is not.

Max Pinas, Studio Hyra

What this means if you are building on AI infrastructure

I am not writing this as a warning against using AI tools. We use them every day at Studio Hyra. The practical value is real and the systems keep getting better.

But if your organization is making commitments based on the assumption that the AI systems you depend on are governed in the way their public documentation describes, Murati's testimony is a reason to update that assumption.

A few concrete things follow from that.

Vendor safety documentation is not governance evidence. A usage policy, a responsible AI framework, a deployment checklist: these are signals, not proof. The question is not what the document says. The question is what process actually runs when the product team wants to ship something and the safety team is not sure.

The gap between stated and operational safety culture is probably larger than you think. This is not specific to AI. It is true in pharmaceuticals, in finance, in aviation before the Boeing MAX failures forced a reckoning. The AI industry is younger and moves faster. The gap is likely wider.

Regulatory pressure will eventually force disclosure. The EU AI Act requires documentation of risk assessment processes. Litigation, as we are seeing, generates sworn testimony. As these mechanisms mature, the delta between what labs say about their safety culture and what actually happens inside it will become harder to maintain. Organizations that have built dependencies on AI infrastructure should think now about what that visibility will look like when it arrives.

None of this means you stop using AI. It means you stop treating vendor safety claims as a substitute for your own judgment about risk.


The value of primary sources in a hype cycle

The AI industry generates an enormous volume of opinion. Everyone has a take on whether AI is dangerous, whether the labs are responsible, whether regulation is needed, and in what form. Most of that opinion is built on secondary sources, public statements, and inference.

Deposition testimony is different. It is produced under penalty of perjury. It is produced in adversarial conditions where the other side has an incentive to expose inconsistencies. It is not a press release or a conference talk or a carefully drafted resignation letter.

That does not make it infallible. Witnesses misremember. Context matters. Litigation has its own distortions.

But it is the closest thing to ground truth we are likely to get about what actually happened inside a consequential decision-making process at one of the most important organizations in the current AI moment. The fact that it describes a gap between stated safety culture and operational safety culture should inform how anyone with professional responsibility around AI thinks about the claims they are accepting at face value.

I think the honest conclusion is this: safety culture at the frontier AI labs is real in some places and aspirational in others. The two often look identical from the outside. Murati's testimony is a rare, if unwelcome, instrument for telling them apart.
