The business logic is obvious, and that is exactly the problem
I understand why companies do this. Humanising language lowers the barrier to adoption. If a feature "remembers" you rather than "retrieves a vector embedding of your prior session," it feels less alien. Less threatening. More like a colleague and less like a database query.
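To make that contrast concrete, here is a minimal sketch of what a "memory" feature often reduces to mechanically: a similarity lookup over stored vectors. The `embed` stub, the stored snippets, and the `recall` helper are hypothetical stand-ins for illustration, not any vendor's actual implementation, and the stub embedding carries no real semantics; it only shows the shape of the operation.

```python
# Illustrative sketch only: "memory" as nearest-neighbour lookup, not recollection.
# embed() is a hypothetical stand-in; a real product would call an embedding model
# and a vector store, but the mechanics are the same.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: a pseudo-random unit vector per text (no real semantics)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# "Prior sessions" reduced to stored vectors.
memory = {snippet: embed(snippet) for snippet in [
    "User prefers concise answers",
    "User is migrating a Django app to Postgres",
    "User asked about GDPR data retention",
]}

def recall(query: str, top_k: int = 1) -> list[str]:
    """Return the stored snippets whose vectors score highest against the query."""
    q = embed(query)
    scored = sorted(memory.items(), key=lambda kv: float(q @ kv[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

print(recall("what database was the user working with?"))
```

Nothing in that lookup is continuous, contextual, or emotionally weighted. It is a ranked retrieval, which is the point the marketing language papers over.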
That is not cynical. Accessibility in product language is genuinely valuable. The problem is that anthropomorphic language does not just simplify. It also misleads. And the misleading happens in two directions at once.
First, it inflates capability expectations. If a system "dreams," users may reasonably expect it to make creative leaps, to surprise them with synthesis, to produce something it was not explicitly trained toward. When it does not, which is most of the time, the disappointment is not just about the feature. It erodes trust in the company, and by extension in AI products more broadly.
Second, it obscures failure modes. A system that "hallucinates" sounds like it is having a vivid moment of creativity. The actual failure is that it is generating plausible-sounding text that is factually wrong, confidently. That is not a dream. That is a defect. Calling it hallucination gives it a poetic quality it has not earned and makes it harder for users and clients to understand what they are actually dealing with.
For those of us building products with these models, this matters every day. Clients come in shaped by the marketing. They expect memory to work the way a person's memory works: continuous, contextual, emotionally weighted. They expect reasoning to work the way a person's reasoning works: with judgment, with stakes, with something approximating wisdom. When we have to explain that neither is true, we are not just managing expectations. We are undoing the framing that the AI labs themselves built.