Less than a month ago, Sam Altman, CEO of OpenAI, proudly announced that AI can now handle PhD-level tasks... Well, I've got news for him: ChatGPT just got demolished by a 1977 Atari. On easy mode.
Wait, what? Can't ChatGPT play chess? How does a machine from the disco era demolish an AI that supposedly thinks like someone with a PhD?
The answer has everything to do with how this technology actually works under the hood. Let's break it down.
...and don't worry—you don't need to know how to play chess to follow along here.
An Atari, seriously?
I was born in 1979 (one of the younger Gen Xers), and as a kid, I got to experience the magic of Atari. If you were born after 1990, you might not realize this was THE gaming console of the 70s and 80s... and wow, I just aged myself there.
This technological marvel packed a whopping 128 bytes of RAM. You read that right: bytes. Not kilobytes, not megabytes, not gigabytes—just plain old bytes. To put that in perspective, my coffee maker probably has more memory.
And this little Atari could actually play chess. Just look at this beauty:
Here's the kicker: it did something ChatGPT still can't do... it actually followed the rules.
ChatGPT can whip up cooking recipes, write stories, summarize scientific papers—heck, it can even explain Einstein's theory of relativity using cute dinosaur illustrations. But chess? Well, that's where things get messy.
Go ahead, ask ChatGPT about chess rules. It'll answer every single question you throw at it, probably better than a grandmaster could teach you. But here's the thing: knowing how to explain something and actually being able to do it are completely different skills.
But... isn't ChatGPT supposed to be more intelligent?
Here's what's really happening: ChatGPT isn't actually playing chess at all. It's imitating a conversation about chess moves.
It's not "thinking" about which move gives it the best advantage. It's not anticipating your next move. It's not strategizing. All it's doing is trying to predict what the next word should be in what it thinks is a chess conversation.
That's it. That's the whole trick.
In chess, the first moves are called "openings"—memorized sequences that every player learns. After a few moves, the opening phase ends and players have to rely on actual skill.
ChatGPT was probably trained on thousands of texts describing these classic openings, so it can fake its way through the early game just fine. But once it runs out of moves it has "seen before," things get... interesting.
Sometimes it makes illegal moves. Sometimes it just forgets the rules entirely. And sometimes—I kid you not—people have caught it trying to move pieces that aren't even on the board anymore!
Why? Because it doesn't actually "see" the board. It has no clue where any of the pieces are. It's just looking at text and trying to guess what word should come next.
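To make that concrete, here's a deliberately silly toy sketch (this is NOT how ChatGPT is actually implemented, and the data is made up): a "player" that only continues text patterns it has seen before, with no board anywhere in sight.

```python
# Toy illustration, NOT ChatGPT's real mechanism: a "player" that only
# continues text patterns. Notice there is no board anywhere in this code.
import random

# Pretend training data: which moves followed which sequences in text.
SEEN_CONTINUATIONS = {
    ("e4", "e5"): ["Nf3", "Bc4", "f4"],   # common replies after 1.e4 e5
    ("e4", "e5", "Nf3"): ["Nc6", "Nf6"],
}

def predict_next_move(moves):
    """Pick whatever *looks* right, based purely on which moves
    followed this sequence in the 'training' text."""
    options = SEEN_CONTINUATIONS.get(tuple(moves))
    if options:
        return random.choice(options)
    # Past the memorized openings: invent something move-shaped.
    # Nothing here checks whether the move is legal, or whether
    # the piece even exists on the board anymore.
    return random.choice(["Qh5", "Nxe9", "Bb8"])  # "Nxe9" isn't even a square

print(predict_next_move(["e4", "e5"]))        # looks like chess: it's memorized
print(predict_next_move(["d4", "d5", "c4"]))  # unseen sequence: anything goes
```

Inside the openings it has "seen," the output is indistinguishable from real play. One step outside them, it's just producing chess-flavored text.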
This doesn't only happen in chess...
Look, ChatGPT is genuinely amazing at tons of stuff. It can write that email to your boss, summarize that 900-page report you're definitely not reading cover-to-cover, and break down complex topics into bite-sized pieces that actually make sense.
But here's the thing: explaining something isn't the same as actually doing it.
Knowing all the rules of chess doesn't mean you can play chess well.
Some tasks—like chess, or really anything where you need to understand how systems work and change—require way more than just knowing the right words to say. You need to understand something that's constantly evolving, keep track of all the moving pieces, and never lose sight of where you're trying to go.
To actually play chess, you need to:
Remember what happened before
Track exactly where every piece is on the board
Anticipate what your opponent is planning
Choose moves that aren't just legal, but actually get you closer to winning
In other words, you have to think way beyond just the next move. You need to imagine possible futures, weigh decisions, assess risks... all while keeping your eye on the prize.
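This is exactly what the Atari (and every real chess engine since) keeps that a text predictor doesn't: an explicit board state, with every move validated against it. A minimal sketch, reduced to a single knight for clarity:

```python
# Minimal sketch of engine-style state tracking: an explicit board,
# and moves checked against it. Reduced to one knight for clarity.
board = {"g1": "N"}  # a white knight on g1, standard square names

def knight_targets(square):
    """All squares a knight on `square` could jump to."""
    file, rank = ord(square[0]) - 97, int(square[1]) - 1
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {chr(f + 97) + str(r + 1)
            for df, dr in jumps
            for f, r in [(file + df, rank + dr)]
            if 0 <= f < 8 and 0 <= r < 8}

def is_legal(piece_square, target):
    # A move only exists if the piece is actually there AND the
    # target is reachable. No hallucinated pieces, no invented squares.
    return piece_square in board and target in knight_targets(piece_square)

print(is_legal("g1", "f3"))  # True: Nf3 is a real move
print(is_legal("g1", "e7"))  # False: the knight can't jump there
print(is_legal("d4", "f3"))  # False: there's no piece on d4 to move
```

With 128 bytes of RAM, the Atari couldn't afford to be wrong about where its pieces were. That constraint forced it to do the one thing ChatGPT skips: represent the actual state of the game.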
And here's the thing: this applies to way more than just chess. These are the exact same skills you need for complex challenges in life and business.
Oh, and don't get me started on visual thinking. For AI to truly play chess, it would need to actually picture the board, mentally move pieces around, understand spatial relationships. Right now? It can't even handle that. Heck, it struggles with Pokémon! 😄
Can't this be fixed?
Well... yes and no. Sort of. It's complicated.
There are ways to make ChatGPT play a bit better—and by "better" I mean it can fake competence for longer before it starts making up moves that don't exist.
Here's the thing: in problems like this (not just chess), every single move changes everything. New possibilities open up, promising strategies suddenly become dead ends, you have to pivot on the fly... and despite all the PhD-level marketing, ChatGPT is terrible at handling these dynamic situations.
One approach is to feed it more information—not just context, but real-time updates about how the situation has changed. In chess, you could describe every move that's happened, or tell it exactly where each piece sits on the board.
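In practice, that workaround looks something like this sketch (the function name and prompt wording are mine, purely illustrative): since the model keeps no state of its own, you restate the entire game in every single prompt.

```python
# Illustrative sketch of the "feed it more context" workaround:
# restate the whole game (and optionally the position) in every prompt,
# because the model itself remembers nothing between moves.

def build_prompt(move_history, position_fen=None):
    """Pack the full game state into text for the model."""
    lines = ["You are playing chess as Black.",
             "Moves so far: " + " ".join(move_history) + "."]
    if position_fen:
        # FEN is a standard one-line text encoding of a chess position.
        lines.append("Current position (FEN): " + position_fen)
    lines.append("Reply with one legal move in algebraic notation.")
    return "\n".join(lines)

prompt = build_prompt(
    ["e4", "e5", "Nf3"],
    position_fen="rnbqkbnr/pppp1ppp/8/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R b KQkq - 1 2",
)
print(prompt)
```

Note what's happening: the board state lives entirely in your prompt, not in the model. You're doing the remembering for it.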
Does it work?
Yes, performance improves a bit.
Does it solve the core problem?
Nope.
Because the issue isn't about having the right information—it's about how ChatGPT processes that information. You could give it every move, photos of the board, heck, even a lucky Kasparov charm... and ChatGPT would still do the exact same thing: predict what word should come next in its chess conversation.
ChatGPT doesn't calculate, doesn't strategize, doesn't see the board, doesn't even care about your lucky Kasparov charm... it's just writing something that sounds like a chess move.
Think of it like this: imagine a Hollywood actress playing a chess grandmaster in a movie. She'll sound completely convincing, draw you right into the story, position herself at the board exactly like Judit Polgár or Magnus Carlsen would...
But here's the thing:
She's not playing chess. She's just acting.
And speaking of brilliant acting... have you watched The Queen's Gambit on Netflix?
PhD, but in marketing!
Every time I hear an AI company CEO claim their system has "PhD-level intelligence" (looking at you, Sam Altman), I can't help but think of Tony Stark.
The more "intelligent" their AI sounds, the more their company is worth, the more headlines they grab, the more investors pile in... and sure enough, claiming your product performs at doctorate level? That sells.
Whether it actually has PhD-level capabilities is a whole other discussion (spoiler: I'm skeptical). The tests these models take are pretty narrow and don't really evaluate the kind of thinking process you'd expect from someone with a doctorate.
Look, I'm not saying ChatGPT isn't incredibly powerful—we know it can do amazing things. But I worry that when people hear "PhD-level AI," they get excited and start thinking they can replace their lawyer or doctor with this technology.
That would be a complete disaster.
There's a huge difference between sounding like an expert and actually being one.
Checkmate.
What's my point?
I don't think ChatGPT is dumb. But it's not brilliant either.
It's something else entirely.
It's a machine trained to predict text. An incredibly powerful tool that can sound like an expert without being one, explain concepts without truly understanding them the way humans do, and yes, even attempt to play chess without actually seeing the board.
Does that make it useless? Absolutely not. It's a remarkable tool that I use daily, and it's genuinely changing the world. But if we want to use it effectively, we need to understand how it actually works—and sometimes the marketing hype gets in the way of that understanding.
AI isn't magic. It's technology. And there's one thing it fundamentally lacks: judgment.
That's where we come in.
A personal note
I absolutely love playing chess. Every morning, after the kids head off to school, I sit down with my coffee for a couple of games on chess.com before the day really begins.
I also play with my son almost daily. He's only 9, but he regularly beats adults (myself included).
My dad is the one who taught me to play.
He taught my son too.
Thanks, Dad.
– Germán
Hey! I'm Germán, and I write about AI in both English and Spanish. This article was first published in Spanish in my newsletter AprendiendoIA, and I've adapted it for my English-speaking friends at My AI Journey. My mission is simple: helping you understand and leverage AI, regardless of your technical background or preferred language. See you in the next one!