AI companies keep borrowing the words we use for being human

Technology · 6 min read · May 9, 2026

At its developer conference this week, Anthropic introduced two features with names that stopped me mid-scroll: dreaming and memories. Technically, I understand exactly what they do. Dreaming refers to a background processing mode where the model runs inference during idle cycles. Memories is persistent context storage across sessions. Both are real, useful capabilities. Neither one dreams. Neither one remembers anything in the way that word means when you use it about a person.

And yet. There the words are, sitting in the product announcement like they belong there.

This is not an Anthropic problem. It is an industry habit. And at this point it is worth naming clearly, because the choice of words in a product is never neutral. It shapes how people think the thing works. It shapes what they trust it with. It shapes what they expect when it fails.


The pattern has a history

The vocabulary started accumulating early. Models got trained (fine, that one is defensible). Then they started learning. Then they developed understanding. Then reasoning. OpenAI ships a model called o3 and describes its chain-of-thought process as thinking. Google describes its orchestration layer as an agent that plans. Meta has talked about AI forming opinions.

Each individual choice looks like a reasonable shorthand. Taken together, they build a picture of something that is more like a mind than it is.

The AI industry did not invent this pattern. Interface designers have used skeuomorphism for decades: the desktop metaphor, the shopping cart, the folder. You put a familiar skin on an unfamiliar thing so people can approach it. That is a legitimate strategy. The difference is that a shopping cart icon on a website does not make anyone believe the server is going shopping. The word memories applied to a language model does make some people believe the model has something at stake in remembering them.

The word 'memories' applied to a language model does make some people believe the model has something at stake in remembering them. That is not a harmless confusion.

Max Pinas, Studio Hyra

Why companies keep doing it

Three reasons, none of them conspiratorial.

First, it converts faster. A feature called persistent context storage requires explanation. A feature called memories does not. When you are writing a product page or a conference keynote, the human word earns the click and the applause. The accurate word makes people work.

Second, engineers reach for the nearest analogy. The researchers who build these systems spend years describing model behavior in shorthand. Attention, hallucination, temperature, grounding. These words started inside papers and research notes where everyone in the room understood they were metaphors. They escaped into product copy without the disclaimer.

Third, there is a subtler commercial pressure. If your model thinks and dreams and remembers, it sounds like a colleague rather than a tool. Colleagues command more trust and more budget than tools do. The language does quiet commercial work.

None of this means anyone sat in a room and decided to mislead people. It means the path of least resistance in AI product naming runs straight through the vocabulary of consciousness.


What gets distorted

The cost is not philosophical. It is practical.

When people believe a model remembers them, they share more than they would share with a database. They disclose things they would not disclose to a search engine, because a search engine does not feel like it knows them. Privacy behaviors shift based on the metaphor, not based on the underlying architecture.

When people believe a model is thinking, they interpret its output as considered judgment rather than statistical prediction. They are slower to check it. They push back less. The error rate of the model does not change. The rate at which users catch errors does.

When a model hallucinates (another borrowed human word, for what is really a probabilistic output that lands outside the factual distribution), people treat it as an aberration, a bad day, rather than as a structural property of how the system works. That framing delays serious conversations about when these systems should and should not be trusted.

I am not arguing for sterile technical language across all surfaces. I am arguing that the words we choose to describe AI behavior have consequences downstream, in user behavior, in trust calibration, and eventually in policy. Choosing them carelessly is a design decision, even if no designer made it.

What an honest naming practice looks like

This is not an impossible problem. It just requires the same discipline you would apply to any consequential product decision.

Start with what the feature actually does. Persistent context. Background inference. Retrieval from prior sessions. Then ask: is there a plain word for this that does not imply inner experience? Often there is. History instead of memories. Background processing instead of dreaming. Recall is borderline. Reflection is over the line.

Where a human analogy genuinely helps onboarding, use it, but contain it. Put it in the tooltip, not the feature name. Let the marketing layer do metaphor work, but keep the product layer honest.

The companies doing this well tend to be the ones where design and engineering share a vocabulary. When designers understand enough about how a model works to push back on a borrowed word, and when engineers care enough about user mental models to listen, you get names that are both clear and honest. That collaboration is not common. It should be.

For agencies like ours, working at the intersection of AI capability and product experience, this is one of the more interesting problems on the table right now. Not because naming is glamorous, but because every word in a product interface is a small claim about how the world works. In AI products, those claims are adding up.


Every word in a product interface is a small claim about how the world works. In AI products, those claims are adding up fast.

Max Pinas, Studio Hyra

The question worth sitting with

Anthropic's engineers did not build a feature that dreams. They built a feature that runs inference in the background. The naming choice tells you something about the moment we are in: an industry that has built something genuinely new and is still reaching for old words to explain it.

That reach is understandable. It may even be inevitable during the early years of a technology that has no real precedent. But it will need to mature. The more capable these systems become, the more consequential the gap between the word and the thing will be.

At some point, the industry will have to build a vocabulary that belongs to AI rather than one borrowed from the experience of being human. That is a design problem as much as a linguistic one. And no one has solved it yet.
