Checkmate: Atari 2600 Console From 1977 Pummels ChatGPT In A Chess Match

ChatGPT and its friends are great for research, as assistants, or as meme image generators, but if you need actual smarts, what you really want is a 1977-vintage Atari 2600 console. That's the ultimate finding of Robert Caruso, a Citrix engineer who pitted the venerable console that started it all against one of the latest-and-greatest AIs on the planet in a simple game of chess, Atari's Video Chess to be precise.

The story goes that Caruso was chatting away with Mr. GPT about the history of chess, and the bot offered to play a game. Things went south quickly, as ChatGPT got "absolutely wrecked" even on the beginner level, mistaking some pieces for others along the way.

Initially, Caruso fed ChatGPT the actual (and rather primitive) graphical output of the game via the Stella emulator. Perhaps understandably, the bot complained that the piece designs in those graphics were rather abstract, an issue likely not helped by the emulator's scanline effect. Caruso then accommodated the bot by switching to standard chess notation.
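
For the curious, "standard notation" here means plain-text formats like algebraic moves and FEN strings. The article doesn't say exactly which flavor Caruso typed, but a minimal sketch using the third-party python-chess library (an assumption for illustration, not part of Caruso's actual setup) shows what a text-only board description looks like:

```python
# Illustrative only: python-chess is an assumption, not part of Caruso's setup.
import chess

board = chess.Board()        # standard starting position
board.push_san("e4")         # play 1. e4 in Standard Algebraic Notation
board.push_san("c5")         # reply 1... c5

# Forsyth-Edwards Notation (FEN) packs the whole board state into one line of
# text -- the kind of thing a human could paste into a chat turn each move.
print(chess.Board().fen())
# rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
print(board.fen())           # position after 1. e4 c5
```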

The King has fallen. ChatGPT thought it was a cheese wheel.

Unsurprisingly, this had no positive effect, and the bot kept forgetting the board state, forcing Caruso to re-enter it, sometimes multiple times per turn. ChatGPT even went as far as claiming it would perform better "if we just started over", perhaps inspired by Tool's Sober epic. Caruso powered through and even helped the bot when it blundered, but gave up on the experiment after a solid 90 minutes.

This small but important experiment very clearly highlights the difference between actual logic and pattern recognition. In one corner sat ChatGPT, trained on around a trillion tokens over several weeks using millions of TFLOPS of compute; in the other, an 8-bit CPU running at a whopping 1.19 MHz (mega, not giga). It's worth pointing out that modern chess engines like Stockfish long ago surpassed human players, as they can calculate board states dozens of moves ahead, while even a grandmaster reportedly manages only about three. In fact, Apple's recently published and widely publicized report on the actual intelligence (or lack thereof) of "AI" bots is quite enlightening, as self-serving as it may be for the company.
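
To make the "logic versus pattern recognition" point concrete: dedicated chess programs, from the 2600 cartridge's tiny routines up to Stockfish, win by explicitly searching future positions rather than predicting plausible-sounding moves. The sketch below (Python with the python-chess library, both assumptions chosen purely for illustration; real engines are vastly more sophisticated) shows that brute-force lookahead idea in its simplest form:

```python
# A deliberately tiny lookahead search, for illustration only -- not how
# Video Chess or Stockfish is actually implemented. Requires python-chess.
import chess

# Crude material values; real engines use far richer evaluation functions.
PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def evaluate(board: chess.Board) -> float:
    """Material balance from the perspective of the side to move."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score if board.turn == chess.WHITE else -score

def negamax(board: chess.Board, depth: int) -> float:
    """Look `depth` half-moves ahead and return the best reachable score."""
    if board.is_checkmate():
        return -10_000               # the side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)       # horizon (or draw): fall back to material
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))  # opponent's best is our worst
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the legal move whose resulting position searches out best."""
    best_score, choice = -float("inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, choice = score, move
    return choice

# With this bare-bones evaluation, opening moves all tie at 0, so the first
# legal move generated wins the tie; the point is the search, not the strength.
print(best_move(chess.Board()))
```

Even this toy version examines thousands of positions per move; real engines do the same kind of explicit calculation orders of magnitude faster and smarter, which is exactly what a next-token predictor never does.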