Software engineering workflows have been transformed in recent years by an influx of AI coding tools like Cursor and GitHub Copilot, which promise to boost productivity by automatically writing lines of code, fixing bugs, and testing changes. These tools are powered by AI models from OpenAI, Google DeepMind, Anthropic, and xAI that have rapidly improved their performance on a range of software engineering benchmarks in recent years.
However, a new study published Thursday by the non-profit AI research group METR calls into question the extent to which today’s AI coding tools improve productivity for experienced developers.
METR conducted a randomized controlled trial for this study, recruiting 16 experienced open source developers and having them complete 246 real tasks on large code repositories they regularly contribute to. The researchers randomly assigned roughly half of these tasks as “AI-allowed,” giving developers permission to use state-of-the-art AI coding tools such as Cursor Pro, while the other half of tasks forbade the use of AI tools.
Before completing their assigned tasks, the developers forecasted that using AI coding tools would reduce their completion time by 24%. That wasn’t the case.
“Surprisingly, we find that allowing AI actually increases completion time by 19% — developers are slower when using AI tooling,” the researchers said.
Notably, only 56% of the developers in the study had experience using Cursor, the main AI tool provided in the study. While nearly all of the developers (94%) had experience using some web-based LLMs in their coding workflows, this study was the first time some of them used Cursor specifically. The researchers note that the developers were trained on using Cursor in preparation for the study.
Nevertheless, METR’s findings raise questions about the supposed universal productivity gains promised by AI coding tools in 2025. Based on the study, developers shouldn’t assume that AI coding tools, particularly those that have come to be known as “vibe coders,” will immediately speed up their workflows.
METR researchers point to a few potential reasons why AI slowed developers down rather than speeding them up: developers spend far more time prompting the AI and waiting for it to respond when using vibe coders than they would spend actually coding. AI also tends to struggle in large, complex codebases, like the ones used in this test.
The study’s authors are careful not to draw any strong conclusions from these findings, explicitly noting they don’t believe AI systems currently fail to speed up many or most software developers. Other large-scale studies have shown that AI coding tools do speed up software engineering workflows.
The authors also note that AI progress has been substantial in recent years, and that they wouldn’t expect the same results even three months from now. METR has also found that AI coding tools have significantly improved their ability to complete complex, long-horizon tasks in recent years.
Still, the research offers yet another reason to be skeptical of the promised gains of AI coding tools. Other studies have shown that today’s AI coding tools can introduce errors and, in some cases, security vulnerabilities.