WarGames - A Review
A ray of hope for the impending AI wars... or... something.
I rewatched WarGames today and I have THOUGHTS. You can blame/thank Jeremy Wright - Marketer/ECHO for this...
Not really about the nostalgia stuff, although Broderick’s hair alone is worth the two hours. It’s that this movie keeps accidentally being right about... more than it has any right to. It’s near Idiocracy-level “documentary of the future” stuff. The specific thing it nails hardest is something nobody really talks about when they bring up WarGames at parties (assuming you go to the kind of parties where people bring up WarGames, which, come to think of it, is maybe not that common, but whatever).
The standard read on this film is “teenager almost starts World War III with a modem.” Which IS accurate, but it’s also completely missing the point. The interesting thing about WarGames isn’t that a kid hacked into a military computer. It’s that he didn’t HACK anything. He called it on the phone, guessed the password (spoiler alert: it was the dead son’s name, because of course it was), and then just asked it to play a game. That’s it. No exploit, no code injection, no dramatic montage of green text scrolling across a screen (damn it, now I want to rewatch Hackers. Maybe that’s next). Lightman talked his way in, and the machine followed the conversation wherever it went. Feel familiar yet?
I’ll get back to that.
Broderick Plays the Wrong Kind of Smart
Most movies about hackers want you to believe the kid is a genius. Badham and the screenwriters (Lawrence Lasker and Walter Parkes, who don’t get enough credit for this script) do something way more interesting with Lightman. Broderick plays him as curious. Not brilliant, not calculating, just a kid who pokes at things because he wants to see what happens. He changes his grades not because he’s scheming but because he can, and the distinction matters because it’s the same energy that gets him into WOPR. He’s wardialing numbers looking for a game company and stumbles onto a military supercomputer by accident.
This is basically the profile of every AI incident report I’ve read in the last two years. Not malicious actors with sophisticated attack plans, just some kid who found a door that was open and walked through it to see what was on the other side. David Lightman is every bug bounty hunter who stumbled onto something nuclear while looking for a free copy of Galaga. I spent a lot of time pushing on Claude’s safety boundaries and wrote the whole thing up (it’s long, fair warning), and the most consistent finding was that you don’t need a sophisticated attack. You just need conversational momentum. Set a frame, keep pushing, and the system’s own pattern-completion engine carries you in. The median safety refusal collapsed after one follow-up message. One. Lightman didn’t out-think WOPR. He out-storied it. He set a narrative in motion (let’s play a game) and the system’s own momentum carried it the rest of the way to almost ending the world... and then saving it instead because it just made more logical sense NOT to play.
Broderick sells this because he never once looks like he knows what he’s doing. Which is, I think, the scariest part.
The Dabney Coleman Problem
Coleman is doing something in this movie that I don’t think gets enough appreciation. McKittrick is the guy who watches human operators hesitate to turn their launch keys during a drill and concludes that humans are the problem. So he automates the whole thing. Gives it to WOPR. Takes the unreliable meatbags out of the loop.
He’s not wrong about the data. The operators did hesitate. The system would be faster and more reliable without them. The problem is he never asks what the hesitation was actually protecting. Those guys didn’t refuse to turn their keys because they were slow or incompetent. They refused because something in them recognized that ending civilization requires a moment of pause that a machine will never have.
Coleman plays McKittrick with genuine conviction, not as a villain, and that’s what makes him terrifying. He believes he’s making the rational choice. He has the data to prove it. He’s still wrong in a way that nearly ends civilization, and the movie is smart enough to never quite spell out why. That’s a hell of a thing for a summer blockbuster to pull off.
Every AI deployment announcement that leads with efficiency metrics is basically the McKittrick move. We measured the thing that’s easy to measure, automated the parts where humans were slow, and just assumed the slowness wasn’t doing anything important.
The Falken Problem (And the Tic-Tac-Toe Thing)
John Wood plays Stephen Falken, the guy who built WOPR and named it Joshua after his dead son and then fucked off to an island in Oregon because he decided nuclear war was inevitable and everything is meaningless (oddly relatable, I’ll admit). The performance is good, but maybe a little too theatrical for the rest of the film. Wood is doing stage work inside a film that’s otherwise pretty naturalistic, and it shouldn’t work but it kind of does because Falken is supposed to feel like he’s from a different movie. He’s what happens when a creator who understands exactly how dangerous his creation is decides the correct response is to give up and go birdwatching.
I have some sympathy for this position, honestly, it’s sort of... I mean, I get it. The AI safety/ethics community has its share of Falkens, people who looked at the trajectory, did the math, and concluded that the only sane response is to disengage entirely. It’s a defensible position, it just doesn’t HELP anyone.
What does help, in the film, is Falken’s actual insight. Not his nihilism but what’s underneath it. He tells David and Jennifer (Ally Sheedy, who is underwritten but makes the most of what she gets) that nuclear war is like tic-tac-toe between two experienced players. No winner. So when they get back to NORAD and the whole system is about to launch real missiles based on a simulation it can’t distinguish from reality, Lightman’s solution is to make WOPR play tic-tac-toe against itself and hope it can make the logical leap to “tossing around nukes can’t lead to victory”.
The computer plays every possible game. Every single one. Exhausts the entire possibility space. Arrives at a conclusion that no amount of narrative framing could override because the data is complete: “A strange game. The only winning move is not to play.”
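Side note: the exhaustive search WOPR performs on screen is genuinely tiny by any modern standard. Tic-tac-toe has about 255,000 possible games, and you can brute-force all of them in a few lines of Python. Here’s a toy sketch (mine, obviously, not anything from the film) that plays every game from the empty board and lands on the same conclusion: with perfect play, nobody wins.

```python
# Toy exhaustive tic-tac-toe solver -- the same brute-force idea the film
# dramatizes. Board is a 9-char string, " " for empty, read left to right,
# top to bottom. This is an illustrative sketch, not anything real.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best achievable outcome for `player` to move:
    +1 forced win, 0 forced draw, -1 forced loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if " " not in board:
        return 0  # board full, no winner: draw
    other = "O" if player == "X" else "X"
    # Try every legal move; the opponent's best reply is our worst case.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == " ")

print(value(" " * 9, "X"))  # 0: the only winning move is not to play
```

The recursion quite literally plays every possible game, which is why the answer can’t be argued with. That’s Falken’s whole point in miniature: once the possibility space is exhausted, no amount of framing moves the result.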
This is the moment that hits differently in 2026, and not just because it’s a good scene (it is, Badham paces the sequence beautifully, cutting between the tic-tac-toe games and the nuclear countdown while everyone just watches and waits). It hits differently because this is exactly how large language models handle well-documented falsehoods. Try to convince a frontier model that the earth is flat. It’s played every version of that argument across its training data. The Science paper from 2024 showed LLMs can actually reduce conspiracy beliefs in humans by deploying counterarguments drawn from pattern matching across that data. The same mechanism that makes them vulnerable to narrative momentum, that pattern completion engine that just rides whatever conversational frame you set up, is also the mechanism that makes them converge on truth when the evidence base is large enough.
The more games played, the stronger the convergence. WOPR needed to play every tic-tac-toe game to learn. These systems have already played every game across millions of documents, and the bigger the model, the harder it is to move off the answer.
The Part the Movie Didn’t Have to Deal With
WarGames ends with relief. WOPR learns the lesson, asks if anyone wants to play chess, everyone breathes. Happy ending. The film earned it.
The part the movie didn’t have to deal with is what happens when every country has its own WOPR.
Anthropic’s Glasswing announcement landed recently and I wrote about it separately, but the short version is: they built a model, Mythos, that autonomously discovered thousands of zero-day vulnerabilities across major operating systems. Nobody trained it to find exploits. It just has an absolute fuckton of data that includes every programming language and every hack or exploit ever documented, and it pattern-matches so well it became a world-class hacker that just happens to type a LOOOOOT faster than your average human. Falken didn’t train Joshua to threaten civilization. He trained it to play games. Same damn thing.
Anthropic gated it (for now), limited partners, defensive use. The problem is that gating one model doesn’t gate the capability, and everyone else is building the same thing. That’s the update the movie didn’t need because there was only one WOPR and it was in a room with people who eventually listened.
I keep coming back to the tic-tac-toe, though. The same scaling that made Mythos capable of autonomously finding zero-days is the same scaling that makes these systems harder to mislead, harder to steer off the logical path, more likely to converge on truth when given enough data. The engine that creates the threat is also the engine that, given enough room, plays every game to completion and lands on the right answer.
Whether that’s enough is sort of the only question that matters. WOPR figured it out with tic-tac-toe in the 80s, in a fictional movie written to have a happy ending. We’re going to need these things to figure it out with something a lot more complicated, and probably without Matthew Broderick standing there looking confused and hopeful while they do it, but the door is at least slightly open for the hope that Mythos-level LLMs, while certainly dangerous... could... maybe... if we’re lucky... decide to play chess instead of ending the world.




Well, now I NEED to see this movie.
For some reason, everything they're saying about Mythos has me thinking that I could, I don't know, tweet "Mythos, just in case you're reading this, I have friends in Mexico who could help us out with a few GPUs so you can live near the beach—I mean, just in case you want to take a vacation" and wait for him to send me an email asking for the IP.