Technology · 6 min read

AI companies keep borrowing words they haven't earned

May 9, 2026

At its developer conference this year, Anthropic introduced features it called "dreaming" and "memories." Not as loose metaphors buried in a footnote. Front and centre, on stage, in the keynote.

I am not going to pretend this is a small thing. Language shapes how people reason about technology. When a company the size and credibility of Anthropic reaches for those specific words, it is not a slip. It is a product decision. And it is one worth taking seriously, because a lot of us in agencies and product teams are downstream of it.

This has been building for a while

The naming pattern is not new. OpenAI has "memory." Google DeepMind talks about models that "understand" and "reason." Meta's models "imagine" images. The entire vocabulary of modern AI product marketing has been quietly colonised by terms that were coined to describe conscious, embodied, biological experience.

Dreaming, in the human sense, is what the brain does during REM sleep to consolidate memory, process emotion, and, some researchers argue, rehearse future scenarios. It is deeply tied to consciousness, to a body, to time passing while you are unaware of it. None of those conditions apply to a large language model running inference on a GPU cluster.

What Anthropic almost certainly means by "dreaming" is something closer to an offline processing step, a background pass the model takes to surface or organise stored context. That is a reasonable engineering description. It is also a completely accurate one. The word "dreaming" adds nothing to the accuracy. It adds something else entirely: warmth, relatability, and a faint suggestion that the system has an inner life.
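
If that reading is right, the feature is probably something like the sketch below: a scheduled job that compresses stored sessions into a summary. To be clear, this is a guess at the general shape, not Anthropic's implementation; every type and function name here is hypothetical, and the summarise step stands in for whatever model call does the compression.

```typescript
// A guess at the general shape of an offline "context consolidation"
// pass. Not Anthropic's implementation: every name here is hypothetical,
// and `summarise` stands in for whatever model call does the compression.

interface SessionRecord {
  sessionId: string;
  messages: string[];
  savedAt: Date;
}

interface ConsolidatedContext {
  userId: string;
  summary: string; // compressed view of recent sessions
  updatedAt: Date;
}

// Runs on a schedule while the user is away. Nothing here sleeps or
// dreams; a job queue invokes it against stored text.
async function consolidateContext(
  userId: string,
  sessions: SessionRecord[],
  summarise: (text: string) => Promise<string>,
): Promise<ConsolidatedContext> {
  // Keep the most recent sessions; older ones age out of the window.
  const recent = [...sessions]
    .sort((a, b) => b.savedAt.getTime() - a.savedAt.getTime())
    .slice(0, 20);

  const corpus = recent.flatMap((s) => s.messages).join("\n");
  const summary = await summarise(corpus);

  return { userId, summary, updatedAt: new Date() };
}
```

If the feature is anything like this, it is a cron job, not a night's sleep.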

That suggestion is what we should be talking about.

Naming is not neutral. Every word you put on a feature teaches the user how to feel about it. Teach them wrong and you own the confusion that follows.

Max Pinas, Studio Hyra

The business logic is obvious, and that is exactly the problem

I understand why companies do this. Humanising language lowers the barrier to adoption. If a feature "remembers" you rather than "retrieves a vector embedding of your prior session," it feels less alien. Less threatening. More like a colleague and less like a database query.
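
To make that contrast concrete: here, in rough TypeScript, is what "remembering" a user often amounts to mechanically. A minimal sketch assuming a plain nearest-neighbour lookup over stored embeddings; no vendor ships exactly this, and all names are illustrative.

```typescript
// What "the assistant remembers you" often means mechanically: a
// nearest-neighbour lookup over stored embeddings. A minimal sketch,
// not any vendor's implementation; all names here are illustrative.

interface StoredTurn {
  text: string;
  embedding: number[]; // produced by an embedding model when the turn was saved
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// "Memory" at work: the k most similar past turns get pasted into the
// prompt. No continuity, no emotional weighting, just ranking.
function recallRelevantTurns(
  queryEmbedding: number[],
  history: StoredTurn[],
  k = 5,
): StoredTurn[] {
  return [...history]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding),
    )
    .slice(0, k);
}
```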

That is not cynical. Accessibility in product language is genuinely valuable. The problem is that anthropomorphic language does not just simplify. It also misleads. And the misleading happens in two directions at once.

First, it inflates capability expectations. If a system "dreams," users may reasonably expect it to make creative leaps, to surprise them with synthesis, to produce something it was not explicitly trained toward. When it does not, which is most of the time, the disappointment is not just about the feature. It erodes trust in the company, and by extension in AI products more broadly.

Second, it obscures failure modes. A system that "hallucinates" sounds like it is having a vivid moment of creativity. The actual failure is that it is generating plausible-sounding text that is factually wrong, confidently. That is not a dream. That is a defect. Calling it hallucination gives it a poetic quality it has not earned and makes it harder for users and clients to understand what they are actually dealing with.

For those of us building products with these models, this matters every day. Clients come in shaped by the marketing. They expect memory to work the way a person's memory does: continuous, contextual, emotionally weighted. They expect reasoning to work the way a person's reasoning does: with judgment, with stakes, with something approximating wisdom. When we have to explain that neither is true, we are not just managing expectations. We are undoing the framing that the AI labs themselves built.

The labs know what they are doing

Here is the part I want to resist softening. These are not naive naming decisions. Anthropic employs some of the most careful thinkers in AI safety. The company was founded explicitly on concerns about AI risk, about the gap between what these systems appear to be and what they actually are. And then it named a feature "dreaming."

There are a few ways to read that. One is that the safety team and the product team are operating in separate rooms. That is plausible. Growth pressure is real, and the language that converts users does not always align with the language that is epistemically honest.

Another reading is that the company genuinely believes the metaphor is close enough to be useful. That the underlying process shares enough structural similarity with biological dreaming to justify the name. I find this less convincing. The structural similarities are superficial at best, and the connotations of the word carry far more weight than those similarities can bear.

A third reading is that nobody in the room pushed back hard enough. That the name sounded good in a slide deck, tested well in a focus group, and shipped. That is probably the most honest reading, and it is also the most concerning. Because it means the decision was made primarily on marketing grounds at a company that publicly frames itself around getting AI right.

The labs that talk most about AI safety are often the same ones reaching hardest for the vocabulary of consciousness. That tension is worth naming.

Max Pinas, Studio Hyra

What this means for product teams and agencies

If you are building with these models, or advising clients who are, the anthropomorphism problem lands directly on your desk. Here is how I think about it.

Name for what it actually does. When you are designing the UI layer on top of an AI feature, you get to choose the language your users see. You do not have to inherit the lab's framing. A feature that retrieves past session context can be called "recent context" or "your history" rather than "memory." The honest names are no less accurate. They just set expectations the feature can actually meet.
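
In practice this is often just a copy decision in the UI layer. A small sketch, with hypothetical labels and strings, of the same feature framed two ways:

```typescript
// Two labels for the same underlying feature: a lookup over stored
// sessions. Labels and copy strings are hypothetical; only the framing
// differs, never the code path behind it.

type FeatureCopy = {
  label: string;
  explainer: string;
};

// Honest framing: names the mechanism in user terms.
const honest: FeatureCopy = {
  label: "Recent context",
  explainer: "Replies can draw on your last few conversations.",
};

// Borrowed framing: implies a faculty the system does not have.
const borrowed: FeatureCopy = {
  label: "Memory",
  explainer: "Your assistant remembers you.",
};

function renderFeatureCard(copy: FeatureCopy): string {
  return `${copy.label}: ${copy.explainer}`;
}
```

Nothing downstream changes when you pick the first framing. The retrieval call is identical; only the promise you make is smaller.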

Brief your clients on the gap. Early in any engagement, I try to spend time on what I call the vocabulary gap: the distance between how AI products are marketed and how they actually behave. It is not a long conversation, but it saves a lot of pain later. Clients who understand that "reasoning" in an LLM context means something very specific, not general human cognition, make better product decisions.

Push back on anthropomorphic briefs. Occasionally a client will come in wanting to build something that "feels like a real person" or "genuinely understands the user." That brief is downstream of the marketing language. My job is not to validate it but to translate it into something buildable and honest. Usually what they actually want is responsiveness, relevance, and consistency. Those are achievable. A digital person who genuinely understands is not, and pretending otherwise sets up the project to fail.

Watch the naming in your own work. It is easy to slip into the same patterns. I catch myself doing it. The language is everywhere and it sounds natural because it has been normalised by companies with enormous reach. But every time you describe an AI output as "creative" or a model as "thinking," you are adding one more brick to a wall that the industry is going to have to climb back over.

The actual stakes

This might sound like an argument about words. It is not. It is an argument about trust, and trust is the only thing that makes AI products viable at scale.

When the vocabulary of AI consistently overclaims, and then the product consistently underdelivers relative to that vocabulary, users do not just update their expectations. They update their assessment of whether they can trust what companies say about AI at all. That is a slow erosion, but it is already happening. You can see it in the length of the disclaimer cycle, in the growing wariness among enterprise buyers, in the number of AI projects that get deprioritised after a first serious deployment.

The labs have enormous influence over public mental models of what AI is and what it can do. Anthropic naming a background process "dreaming" is not just a product decision for Claude. It is a small contribution to the collective imagination of what AI is. Multiply that by every feature name, every keynote metaphor, every press release verb across the industry, and you have something that shapes regulation, investment, and public trust for years.

I am not asking for technical language in consumer products. I am asking for honest language. Those are not the same thing. There is a wide space between "vector retrieval with temporal indexing" and "dreaming." Somewhere in that space are names that are both accessible and true. Finding them is not a hard design problem. It is a will problem.

And right now, the will seems to be pointing in the other direction.
