Everyone and their mother

Google hides secret message in name list of 3,295 AI researchers

Gemini 2.5 paper hides Easter egg in massive author list—but why so many contributors?

Benj Edwards

How many Google AI researchers does it take to screw in a lightbulb? A recent research paper detailing the technical core behind Google's Gemini AI assistant may suggest an answer, listing an eye-popping 3,295 authors.

It's a number that recently caught the attention of machine learning researcher David Ha (known as "hardmaru" online), who revealed on X that the first 43 names also contain a hidden message. "There’s a secret code if you observe the authors’ first initials in the order of authorship," Ha wrote, relaying the Easter egg: "GEMINI MODELS CAN THINK AND GET BACK TO YOU IN A FLASH."

The paper, titled "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities," describes Google's Gemini 2.5 Pro and Gemini 2.5 Flash AI models, which were released in March. These large language models, which power Google's chatbot AI assistant, feature simulated reasoning capabilities that produce a string of "thinking out loud" text before generating responses in an attempt to help them solve more difficult problems. That explains "think" and "flash" in the hidden text.

But clever Easter egg aside, the sheer scale of authorship tells its own story about modern AI development. Just seeing the massive list made us wonder: Is 3,295 authors unprecedented? Why so many?

Not the biggest, but still massive

While 3,295 authors represents an enormous collaborative effort within Google, it doesn't break the record for academic authorship. According to Guinness World Records, a 2021 paper by the COVIDSurg and GlobalSurg Collaboratives holds that distinction, with 15,025 authors from 116 countries. In physics, a 2015 paper from CERN's Large Hadron Collider teams featured 5,154 authors across 33 pages—with 24 pages devoted solely to listing names and institutions.

The CERN paper provided the most precise estimate of the Higgs boson mass at the time and represented a collaboration between two massive detector teams. Similarly large author lists have become common in particle physics, where experiments require contributions from thousands of scientists, engineers, and support staff.

In the case of Gemini development at Google DeepMind, building a family of AI models requires experience spanning multiple disciplines. It involves not just machine learning researchers but also software engineers building infrastructure, hardware specialists optimizing for specific processors, ethicists evaluating safety implications, product managers coordinating efforts, and domain experts ensuring the models work across different applications and languages.

And complexity in AI model development has ballooned over a short period. Google's initial Gemini paper from 2023 included a "mere" 1,350 authors. That's a 144 percent increase in authorship headcount in under two years.

A collaborative future

All that being said, we wonder: Does the Gemini 2.5 paper show how modern AI research has become a massive team sport—one where traditional notions of authorship struggle to capture the collaborative reality of pushing the tech frontier, or is Google simply being unusually generous in granting credit?

For comparison, the trend toward astronomical author counts does not necessarily extend beyond Google. At competitor OpenAI, the company's o1 System Card lists 260 authors, and its GPT-4o System Card lists 417 authors. Numerous, no doubt, but not numbering in the thousands. The difference may come down to OpenAI being a smaller company, but also to management decisions about who gets their name on the list. Apparently, Google has adopted very inclusive authorship criteria.

With so many authors on one paper, one might wonder whether listing them all muddies parts of the academic process. For example, should papers include everyone involved, even the person who mops the floor in the server room? Such large author lists can blur the distinction between core contributors and peripheral participants, making it difficult to assess individual contributions. Additionally, with 3,295 authors who might naturally cite the paper in their future work, there's a risk of inflating citation counts in ways that may not accurately reflect the paper's scientific impact.

As one science blogger noted about large physics collaborations, "Papers simply do not have 5000 'authors.' In fact, I would bet that no more than a handful of the 'authors' listed on the record-breaking paper have even read the article, never mind written any of it."

We're not saying that all of those 3,295 people don't deserve credit, but it's a large and unwieldy number to swallow. Meanwhile, AI projects continue to expand in complexity. In fact, if we keep seeing 144 percent increases in authorship numbers every two years, by 2040, Google's AI papers may have over 2.65 million authors. We'll need AI models just to read the author list.
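For the curious, that 2040 figure can be reproduced with a quick back-of-the-envelope calculation. This sketch assumes (purely for illustration) that the 144 percent jump between the 2023 Gemini paper (1,350 authors) and the Gemini 2.5 paper (3,295 authors) compounds every two years starting from 2025:

```python
# Hypothetical projection: author count grows ~2.44x per two-year period.
growth = 3295 / 1350                 # ~2.44, i.e., a 144 percent increase
periods = (2040 - 2025) / 2          # 7.5 two-year periods
projected = 3295 * growth ** periods
print(f"{projected:,.0f} authors")   # on the order of 2.66 million
```

Exponential extrapolations like this one almost never hold in the real world, of course, which is rather the point of the joke.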

Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.