When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to boost their productivity when writing code. However, a new randomized controlled trial has found that experienced open source developers actually became less efficient at coding tasks when they used current AI tools.
For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual "tasks" involved in maintaining those repos, such as "bug fixes, features, and refactors that would normally be part of their regular work." For half of those tasks, the developers were allowed to use AI tools like Cursor Pro or Anthropic's Claude; for the other half, they were instructed not to use any AI assistance. Time forecasts that the developers made for each task before the groups were assigned served as a proxy for difficulty, balancing the two experimental groups, and the time needed to fix pull requests based on reviewer feedback was counted in the overall assessment.
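The core of that design is simple per-task randomization. As a rough illustration (this is not METR's actual code, and the task names and forecast numbers below are hypothetical), the assignment and balance check might look something like this in Python:

```python
import random
from statistics import mean

# Hypothetical task records: (task_id, forecast_minutes) pairs.
# Forecasts are made *before* assignment, as in the study design.
tasks = [("fix-flaky-test", 45), ("refactor-parser", 120),
         ("add-cli-flag", 30), ("bug-1234", 90)]

random.shuffle(tasks)
ai_allowed = tasks[: len(tasks) // 2]     # developer may use AI tools
ai_disallowed = tasks[len(tasks) // 2:]   # no AI assistance permitted

# Random assignment should leave the two groups with similar average
# forecast difficulty, which is what makes completion times comparable.
print(mean(f for _, f in ai_allowed), mean(f for _, f in ai_disallowed))
```

Because each task is assigned at random, the average forecast difficulty of the two groups should come out roughly equal, which is what allows a fair comparison of completion times across conditions.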

Before performing the study, the developers expected that the AI tools would reduce the time needed for their assigned tasks by 24 percent. Even after completing those tasks, the developers believed the AI tools had made them 20 percent faster, on average. In reality, though, the AI-assisted tasks ended up taking 19 percent longer than those completed without AI tools.
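To make those percentages concrete, here is a quick back-of-the-envelope calculation (the one-hour baseline is assumed for illustration; it is not a figure from the study):

```python
# Illustrative arithmetic only; the 60-minute baseline is an assumption,
# not a measurement from the METR study.
baseline = 60.0  # hypothetical minutes to finish a task without AI

expected = baseline * (1 - 0.24)   # developers predicted 24% faster
perceived = baseline * (1 - 0.20)  # developers felt 20% faster afterward
actual = baseline * (1 + 0.19)     # measured result: 19% slower

print(f"expected with AI:  {expected:.1f} min")   # 45.6
print(f"perceived with AI: {perceived:.1f} min")  # 48.0
print(f"actual with AI:    {actual:.1f} min")     # 71.4
```

In other words, under that assumed baseline, a task the developers believed they had finished in about 48 minutes actually took them closer to 71.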
Trade-offs
By analyzing screen recording data from a subset of the studied developers, the METR researchers found that AI tools tended to reduce the average time those developers spent actively coding, testing/debugging, or "reading/searching for information." But those time savings were overwhelmed in the end by "time reviewing AI outputs, prompting AI systems, and waiting for AI generations," as well as "idle/overhead time" during which the screen recordings showed no activity.
Overall, the developers in the study accepted less than 44 percent of AI-generated code without modification. A majority of the developers reported needing to make changes to the code their AI assistant produced, and this kind of review consumed 9 percent of total task time in the "AI-assisted" portion of the study.