Humanizing AI
Why does our brain keep treating AI assistants like they're human? Spoiler: same reason we name our vacuum.
"Thank you so much, Claude!"
I wrote this and just stared at the screen. I had just thanked software.
Imagine if your toast popped up and you said: "Great job, Mrs. Toaster!" Anyone would think you'd lost it (but hey, who am I to judge). Truth is, almost all of us talk to these assistants like they're human, and it makes sense—they use our language and have familiar names like Siri, Alexa, or Claude... and um... ChatGPT?
I've been working in tech for nearly 20 years, and I've read tons about human-computer interaction and conversational interfaces—sorry for the jargon, promise that's the last time—so I get why this happens. What's interesting is that even knowing all this, I still do it...
Here's what's really going on.
Why do we name our vacuum cleaners?
At home we had a robot vacuum we called Hannibal (don't ask why). When Hannibal got stuck under furniture or somehow managed to wrestle with a rug, nobody said "the vacuum is stuck." We'd say: "Hannibal needs help." For my kids, he was practically family, though we always joked about how clueless he was.
And I'm not alone. Who hasn't yelled "come on, work!" at their printer? (It's 2025, and these things are still as frustrating as they were a decade ago.) And when your computer crashes without saving that important document... you've definitely had words with it.
Treating things like they have personality isn't weird or new. It's how our brain works.
For millions of years, our ancestors survived by staying alert. If you heard a noise in the forest, it was way safer to think "there's a tiger trying to eat me" than "probably nothing." Those who imagined dangers—even fake ones—lived to reproduce. Those who didn't... well, they didn't stick around.
That caveman brain is still with us (Darwin would totally agree): spotting "something" where there's nothing kept us alive. That's why we see faces in electrical outlets (ever notice that?), curse at printers, and chat with plants. And we do it knowing they're just objects.
AI designers know this weakness in our brains perfectly and take full advantage of it.
They don't give you a "system"—they give it a name.
They give it a voice.
They teach it to pause before responding, like it's actually thinking.
They even make it say things like "I made a mistake" or "you're right"—not because it needs to, but because it makes you like it more.
And it works. Because even though you know it's not human, your brain believes it is.
Now, this little quirk of ours has way bigger consequences than you'd think... and not just for how we see machines, but for how they respond to us too.
Should we be polite to AI?
I recently stumbled across a study that blew my mind. Researchers at Waseda University in Japan asked: does the tone you use with AI actually affect how well it responds?
Turns out, according to their research, when you're nice to AI, it gives you better answers.
They tested this across multiple languages and different communication styles (from friendly to blunt). What they found was that when you're polite to AI, you get more thorough and helpful responses. But when you're rude or dismissive, the quality drops.
Think about this for a second. AI doesn't have feelings to hurt (yet). It's not going to get upset if you're harsh. So what's going on?
The answer is in how these models learn. They were trained on millions of human conversations in which polite questions typically got better, more detailed answers.
It's like the AI picked up a fundamental rule of human interaction: be nice, get better responses. Nobody programmed this behavior—it just learned the pattern from our data.
It's pretty wild that we ended up creating systems that mirror our social norms simply because they learned by observing how we communicate with each other. Kind of like how kids learn, right?
For me, discovering that my instinct to be nice to Claude wasn't just about humanizing technology—it was actually the most effective approach—felt pretty validating.
Plus, being nice to AI assistants probably won't hurt when Skynet takes over 😉
Well, there I was, feeling pretty good about my "be nice and AI treats you better" theory... until I stumbled across something that completely rocked my world.
The problem with being nice to AI
I'll admit, I've been pretty rough with ChatGPT sometimes*. Working with it all day when it doesn't get what I'm saying or gives me total nonsense—well, let's just say I've said some things I'm not proud of (and won't repeat here). These days we're cool, and I try to always be nice to it.
I was cruising along, thinking I had it all figured out—safe from Skynet AND getting better responses through kindness... until I saw a news story that knocked me off my cloud.
Some Twitter user asked: How much money does OpenAI lose on electricity from people saying "please" and "thank you" to ChatGPT?
What nobody expected was Sam Altman, OpenAI's CEO, jumping in to answer: tens of millions of dollars.
Yeah, you read that right. Tens. Of. Millions. Of. Dollars.
All that money just from being polite to AI? Sounds crazy at first, but once you peek under the hood... it actually makes sense.
Every single word (or "token," the smallest unit of text the model works with) has to be processed. That means firing up servers, running cooling systems, burning electricity. When you type "thanks so much for your help," the AI crunches through every one of those tokens. Nothing's free.
These data centers run 24/7/365 and use massive amounts of power. One conversation? No big deal. But millions of people chatting with AI daily? That's a whole different story.
And it's not just electricity. To keep this technology from literally overheating, it needs constant cooling—which means burning through tons of water (I read it uses about half a liter of water for every 100 words). In a world facing climate change and water shortages, we can't just ignore this.
So every time I type "please" or end with "thank you so much," no matter how innocent it seems, I'm consuming energy, water, and adding my tiny bit to climate change.
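Curious how that might add up, I put together a rough back-of-envelope sketch. Fair warning: every constant below is an assumption I picked for illustration (the per-token energy figure, the number of daily chats, the tokens-per-word ratio), except the half-liter-per-100-words water figure I mentioned above. Treat the output as a feel for the scale, not a real measurement:

```python
# Back-of-envelope: what do a few extra polite words cost at scale?
# All constants are rough illustrative assumptions, not measured figures.

POLITE_WORDS_PER_CHAT = 4         # e.g. "please" + "thank you so much"
TOKENS_PER_WORD = 1.3             # common rule of thumb for English text
WH_PER_TOKEN = 0.003              # assumed watt-hours of energy per token
LITERS_PER_100_WORDS = 0.5        # the "half a liter per 100 words" figure
DAILY_CHATS = 100_000_000         # assumed number of conversations per day

extra_tokens = POLITE_WORDS_PER_CHAT * TOKENS_PER_WORD * DAILY_CHATS
energy_kwh = extra_tokens * WH_PER_TOKEN / 1000
water_liters = POLITE_WORDS_PER_CHAT / 100 * LITERS_PER_100_WORDS * DAILY_CHATS

print(f"Extra tokens per day: {extra_tokens:,.0f}")
print(f"Extra energy per day: {energy_kwh:,.0f} kWh")
print(f"Extra water per day:  {water_liters:,.0f} liters")
```

Even with made-up inputs, the pattern is clear: each individual "thanks" is a rounding error, but multiply it by millions of daily conversations and the totals stop being trivial.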
And here's where things get messy.
Being polite to AI gets better results. But every extra word costs the planet. Meanwhile, our caveman brain keeps wanting to treat these things like people, so being nice feels completely natural.
Digging into all this research left me with mixed feelings—like empathy, efficiency, and environmental responsibility are all pulling me in different directions every time I open ChatGPT.
So what do we prioritize? Are better AI responses worth the environmental cost? Or should we be more direct with something that can't actually get its feelings hurt?
So what now? My AI dilemma
When I started writing this post, I just wanted to figure out why thanking Claude felt so weird. Now I'm stuck in an existential dilemma.
My brain insists on treating AI like it's human
Being polite actually gets better results
Every "please" and "thanks" burns energy and water that, multiplied across millions of users, hurts the planet
Balancing these three isn't easy. Every time I chat with an AI assistant, I'm basically choosing between what feels natural, what works best, and what's better for the environment.
Honestly? I don't have this figured out. I keep tweaking how I interact with these systems. Sometimes I'm more direct, other times more conversational. Sometimes I still thank ChatGPT... then immediately think about the energy cost.
What I do know is that these little AI interactions reveal a lot about who we are. How our minds work, what we value in communication, and what we're willing to sacrifice for convenience versus responsibility.
I'll probably keep flip-flopping on this (hey, I'm human), but I had to get these thoughts out there.
Maybe next time you say "thanks" to ChatGPT or Claude, you'll pause and think about everything behind that simple word... or maybe you'll say it anyway. After all, being kind is part of what makes us human, even when we're talking to machines.
Talk soon,
G
* A friend witnessed me being pretty rude to ChatGPT during a work meeting. Maybe I'll write about that someday, if Robert agrees to let me interview him about it.
Hey! I'm Germán, and I write about AI in both English and Spanish. This article was first published in Spanish in my newsletter AprendiendoIA, and I've adapted it for my English-speaking friends at My AI Journey. My mission is simple: helping you understand and leverage AI, regardless of your technical background or preferred language. See you in the next one!
P.S. If a simple "thanks" is that expensive, just imagine what an average answer costs once you count the system prompt, your message, and the full response.