At its developer conference this week, Anthropic introduced two features with names that stopped me mid-scroll: dreaming and memories. Technically, I understand exactly what they do. Dreaming refers to a background processing mode in which the model runs inference during idle cycles. Memories is persistent context storage across sessions. Both are real, useful capabilities. Neither one dreams. Neither one remembers anything in the sense that word carries when you use it about a person.
And yet. There the words are, sitting in the product announcement like they belong there.
This is not an Anthropic problem. It is an industry habit. And at this point it is worth naming clearly, because the choice of words in a product is never neutral. It shapes how people think the thing works. It shapes what they trust it with. It shapes what they expect when it fails.



