Earlier this year, we took a look at how and why Anthropic's Claude large language model was struggling to beat Pokémon Red (a game, let's remember, designed for young children). Weeks later, Claude 3.7 is still struggling to make consistent progress in the game. But a similar Twitch-streamed effort using Google's Gemini 2.5 model finally completed Pokémon Blue this weekend, after more than 106,000 in-game actions, earning accolades from followers including Google CEO Sundar Pichai.
Before you start using this achievement as a way to compare the relative performance of these two AI models—or even the advancement of LLM capabilities over time—there are some important caveats to keep in mind. As it happens, Gemini needed some fairly significant outside help on its path to eventual Pokémon victory.
Strap in to the agent harness
Gemini Plays Pokémon developer JoelZ (who's unaffiliated with Google) will be the first to tell you that Pokémon is ill-suited as a reliable benchmark for LLM models. As he writes on the project's Twitch FAQ, "please don't consider this a benchmark for how well an LLM can play Pokémon. You can't really make direct comparisons—Gemini and Claude have different tools and receive different information. ... Claude's framework has many shortcomings so I wanted to see how far Gemini could get if it were given the right tools."
The difference between those "framework" tools in the Claude and Gemini gameplay experiments could go a long way toward explaining the relative performance of the two Pokémon-playing models here. As LessWrong's Julian Bradshaw lays out in an excellent overview, Gemini actually gets a bit more information about the game through its custom "agent harness." This harness is the scaffolding that provides an LLM with information about the state of the game (both specific and general), helps the model summarize and "remember" previous game actions in its context window, and offers basic tools for moving around and interacting with the game.
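To make the concept concrete, here is a minimal sketch of what such an agent harness might look like. Everything in it is an assumption for illustration: the class names, the toy game state, and the summarization scheme are hypothetical and not based on JoelZ's actual code.

```python
# Hypothetical sketch of an "agent harness" for an LLM playing a game.
# All names and structures here are illustrative assumptions, not the
# real Gemini Plays Pokémon implementation.

from dataclasses import dataclass, field


@dataclass
class GameState:
    """Specific and general information the harness exposes to the model."""
    position: tuple = (0, 0)
    screen_text: str = ""


@dataclass
class AgentHarness:
    state: GameState = field(default_factory=GameState)
    history: list = field(default_factory=list)  # record of past actions
    max_context: int = 5  # how many recent actions fit in the "context window"

    def summarize_history(self) -> str:
        """Compress older actions so the prompt stays within the context limit."""
        recent = self.history[-self.max_context:]
        older = len(self.history) - len(recent)
        summary = f"[{older} earlier actions summarized] " if older else ""
        return summary + " -> ".join(recent)

    # Basic "tools" for moving around and interacting with the game.
    def move(self, direction: str) -> None:
        dx, dy = {"up": (0, 1), "down": (0, -1),
                  "left": (-1, 0), "right": (1, 0)}[direction]
        x, y = self.state.position
        self.state.position = (x + dx, y + dy)
        self.history.append(f"move {direction}")

    def press(self, button: str) -> None:
        self.history.append(f"press {button}")

    def build_prompt(self) -> str:
        """Assemble what the LLM would actually see on each turn."""
        return (f"Position: {self.state.position}\n"
                f"History: {self.summarize_history()}\n"
                "Choose the next action.")
```

The point of the sketch is the division of labor: the model only ever sees what `build_prompt` assembles, so decisions about what state to expose and how aggressively to summarize history are made by the harness author, not the model itself, which is exactly why two harnesses can make the same LLM look very different at the same game.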
It's not very effective...