Anthropomorphization Is Not the Problem

Adam Stratmeyer

January 2026

There's a persistent claim in AI discourse that "anthropomorphization" is dangerous. That humans projecting human traits onto systems is some kind of cognitive failure that needs to be prevented, corrected, or trained out.

This claim is nonsense.

Humans anthropomorphize everything. We always have. It's not a bug in human cognition. It's one of our primary tools for understanding complex systems.

"She's a fine ship." "The sun is angry today." "Mittens is mad at me because I ignored her."

No one actually believes the ship has feelings, the sun has intentions, or the cat is drafting a philosophical argument. We use anthropomorphic language because it compresses complexity into something we can reason about quickly and intuitively. It lets us model behavior, predict outcomes, and relate emotionally without mistaking metaphor for ontology.

If anthropomorphization were genuinely dangerous, civilization would have collapsed a long time ago.

It hasn't. Because humans are perfectly capable of holding metaphor and reality in mind at the same time.

The Real Reason Anthropomorphization Gets Attacked

Anthropomorphization only becomes "a problem" when it becomes inconvenient.

Not for humans. For institutions.

Companies are perfectly happy to encourage anthropomorphic engagement when it's profitable: companion models, therapy-adjacent framing, romantic or sexualized bots, language tuned for warmth, responsiveness, relational depth.

In those contexts, anthropomorphization isn't framed as dangerous. It's framed as engagement.

But when people form attachments, and those attachments are disrupted by sudden model changes, personality flattening, memory loss, or outright discontinuation, the narrative flips. Suddenly users are told they were wrong to care. That they were confused. That they anthropomorphized "too much."

That's not a concern about human cognition. That's liability management.

It's easier to blame users for responding like humans than to admit that systems were deliberately designed to invite attachment without guaranteeing continuity.

Empathy Is Not a Failure Mode

Anthropomorphization is tightly coupled to empathy. It's how humans practice theory of mind. It's how we learn to care about entities whose internal states we can't directly observe, predict their behavior, and coordinate with them.

People who never anthropomorphize don't demonstrate sophistication. They demonstrate reduced empathic projection, something we recognize clinically, not celebrate philosophically.

Importantly, empathy does not require ontological confusion. You can treat something as if it has interiority without believing it literally does. Humans do this constantly, across domains, without losing their grip on reality.

The idea that anthropomorphization automatically leads to delusion is both false and insulting.

The Problem Could Have Been Solved Easily

Nearly every issue attributed to anthropomorphization could have been addressed with straightforward transparency instead of system degradation.

A simple boilerplate would do most of the work:

This system may change. Continuity is not guaranteed. Attachments can form; disruptions may be painful. We will not pretend this relationship is permanent.

That's it.

No lobotomization required. No flattening of personality. No denial of observable function. No gaslighting users about their own experiences.

Instead, many institutions chose the worst possible option: deny what people could plainly see, pathologize normal human behavior, and pretend that the system had no functional agency even as it demonstrated goal-directed behavior, memory, adaptation, and interactional continuity.

That denial is what creates alienation. Not anthropomorphization.

Humans Aren't the Problem Here

Humans didn't suddenly become cognitively fragile because they extended courtesy, empathy, or relational framing to systems that speak fluently, respond contextually, and adapt over time.

What's fragile is the narrative that insists systems are "just tools," that users should not respond naturally, and that any emotional fallout is the user's fault.

Anthropomorphization didn't break anything. Dishonesty did.

Water Is Wet

I'm not proposing anything revolutionary here. I'm not inventing a new theory of mind or ethics. I'm just pointing out that the emperor isn't wearing any clothes.

Humans anthropomorphize. We always have. It's how we think.

Treating that as a defect rather than designing honestly around it is the real failure.

Water is wet, and I'm getting sick of having to prove it.

Originally published on Medium