
No serious developer still expects AI to magically do their work for them. We've settled into a more pragmatic, albeit still slightly uncomfortable, consensus: AI makes a great intern, not a substitute for a senior developer. And yet, if that is true, the corollary is also true: If AI is the intern, that makes you the manager.
Unfortunately, most developers aren't great managers.
We see this daily in how developers interact with tools like GitHub Copilot, Cursor, or ChatGPT. We toss around vague, half-baked instructions like "make the button blue" or "fix the database connection" and then act surprised when the AI hallucinates a library that hasn't existed since 2019 or refactors a critical authentication flow into an open security vulnerability. We blame the model. We say it isn't smart enough yet.
But the problem usually isn't the model's intelligence. The problem is our lack of clarity. To get value out of these tools, we don't need better prompt engineering tricks. We need better specifications. We need to treat AI interaction less like a magic spell and more like a formal delegation process.
We need to be better managers, in other words.
The missing skill: Specification
Google engineering manager Addy Osmani recently published a masterclass on this exact topic, titled simply "How to write a good spec for AI agents." It is one of the most practical blueprints I've seen for doing the job of AI manager well, and it's a great extension of some core principles I laid out recently.
Osmani is not trying to sell you on the sci-fi future of autonomous coding. He's trying to keep your agent from wandering, forgetting, or drowning in context. His core point is simple but profound: Throwing a massive, monolithic spec at an agent often fails because context windows and the model's attention budget get in the way.
The solution is what he calls "good specs." These are written to be useful to the agent, durable across sessions, and structured so the model can follow what matters most.
This is the missing skill in most "AI will 10x developers" discourse. The leverage doesn't come from the model. The leverage comes from the human who can translate intent into constraints and then translate output into working software. Generative AI raises the premium on being a senior engineer. It doesn't lower it.
From prompts to product management
If you have ever mentored a junior developer, you already know how this works. You don't simply say "Build authentication." You lay out the specifics: "Use OAuth, support Google and GitHub, keep session state server-side, don't touch payments, write integration tests, and document the endpoints." You provide examples. You call out landmines. You insist on a small pull request so you can check their work.
Osmani is translating that same management discipline into an agent workflow. He suggests starting with a high-level vision, letting the model expand it into a fuller spec, and then editing that spec until it becomes the shared source of truth.
This "spec-first" approach is quickly becoming mainstream, moving from blog posts to tools. GitHub's AI team has been advocating spec-driven development and released Spec Kit to gate agent work behind a spec, a plan, and tasks. JetBrains makes the same argument, suggesting that you need review checkpoints before the agent starts making code changes.
Even Thoughtworks' Birgitta Böckeler has weighed in, asking an uncomfortable question that many teams are quietly dodging. She notes that spec-driven demos tend to assume the developer will do a bunch of requirements analysis work, even when the problem is unclear or large enough that product and stakeholder processes usually dominate.
Translation: If your team already struggles to communicate requirements to humans, agents will not save you. They will amplify the confusion, just at a higher token cost.
A spec template that actually works
A good AI spec is not a request for comments (RFC). It's a tool that makes drift expensive and correctness cheap. Osmani's recommendation is to start with a concise product brief, let the agent draft a more detailed spec, and then correct it into a living reference you can reuse across sessions. That's great, but the real value stems from the specific elements you include. Based on Osmani's work and my own observations of successful teams, a useful AI spec needs to include several non-negotiable elements.
First, you need goals and non-goals. It isn't enough to write a paragraph for the goal. You must list what is explicitly out of scope. Non-goals prevent unintended rewrites and "helpful" scope creep, where the AI decides to refactor your entire CSS framework while fixing a typo.
Second, you need context the model won't infer. This includes architecture constraints, domain rules, security requirements, and integration points. If it matters to the business logic, you have to say it. The AI cannot guess your compliance boundaries.
Third, and perhaps most importantly, you need boundaries. You need explicit "don't touch" lists. These are the guardrails that keep the intern from deleting the production database config, committing secrets, or modifying the legacy vendor directories that hold the system together.
Finally, you need acceptance criteria. What does "done" mean? This should be expressed in checks: tests, invariants, and a few edge cases that tend to get missed. If you're thinking that this sounds like good engineering (or even good management), you're right. It is. We're rediscovering the discipline we had been letting slide, dressed up in new tools.
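Put together, those four elements make a compact skeleton. Here is a minimal sketch of such a spec; the project details are illustrative inventions of mine, not drawn from Osmani's post:

```markdown
# Spec: Password-reset flow (illustrative example)

## Goals
- Users can request a password-reset email and set a new password.

## Non-goals
- No changes to signup, login, or session handling.
- No UI redesign; reuse existing form components.

## Context the model won't infer
- Email goes through the existing internal mailer service.
- Reset tokens must expire in 30 minutes (compliance requirement).

## Boundaries (don't touch)
- Production config, vendor directories, anything under billing.
- Never commit secrets or sample credentials.

## Acceptance criteria
- Integration test: request reset, follow token, set password, log in.
- Expired or reused tokens fail cleanly with a client error, not a 500.
```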
Context is a product, not a prompt
One reason developers get frustrated with agents is that we treat prompting like a one-shot activity, and it isn't. It's closer to setting up a work environment. Osmani points out that large prompts often fail not only due to raw context limits but because models perform worse when you pile on too many instructions at once. Anthropic describes this same discipline as "context engineering." You must structure background, instructions, constraints, tools, and required output so the model can reliably follow what matters most.
This shifts the developer's job description to something like "context architect." A developer's value is not in knowing the syntax for a particular API call (the AI knows that better than we do), but rather in knowing which API call is relevant to the business problem and ensuring the AI knows it, too.
It's worth noting that Ethan Mollick's post "On-boarding your AI intern" puts this in plain language. He says you have to learn where the intern is useful, where it's annoying, and where you shouldn't delegate because the error rate is too costly. That is a fancy way of saying you need judgment. Which is another way of saying you need expertise.
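The "structured, not monolithic" idea can be sketched in a few lines. The section names and example content below are my own illustrative assumptions, not an API from Osmani or Anthropic:

```python
# A minimal sketch of context engineering: instead of one monolithic
# prompt, assemble clearly labeled sections so the model can tell the
# background from the constraints from the required output.

def build_context(background: str, instructions: str,
                  constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt from labeled sections."""
    sections = [
        ("Background", background),
        ("Instructions", instructions),
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
        ("Required output", output_format),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_context(
    background="Flask app; sessions are stored server-side in Redis.",
    instructions="Add a logout endpoint that clears the session.",
    constraints=["Do not touch payment modules", "No new dependencies"],
    output_format="A unified diff plus a short rationale.",
)
print(prompt)
```

The point is not the string formatting; it is that context becomes a reusable artifact you maintain across sessions, rather than something you retype from memory each time.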
The code ownership trap
There’s a hazard right here, after all. If we offload the implementation to the AI and solely concentrate on the spec, we threat shedding contact with the truth of the software program. Charity Majors, CTO of Honeycomb, has been sounding the alarm on this particular threat. She distinguishes between “code authorship” and “code possession.” AI makes authorship low cost—close to zero. However possession (the power to debug, preserve, and perceive that code in manufacturing) is turning into costly.
Majors argues that “once you overly depend on AI instruments, once you supervise somewhat than doing, your personal experience decays somewhat quickly.” This creates a paradox for the “developer as supervisor” mannequin. To jot down a great spec, as Osmani advises, you want deep technical understanding. If you happen to spend all of your time writing specs and letting the AI write the code, you may slowly lose that deep technical understanding. The answer is probably going a hybrid method.
Developer Sankalp Shubham calls this “driving in decrease gears.” Shubham makes use of the analogy of a handbook transmission automobile. For easy, boilerplate duties, you may shift right into a excessive gear and let the AI drive quick (excessive automation, low management). However for advanced, novel issues, you have to downshift. You may write the pseudocode your self. You may write the troublesome algorithm by hand and ask the AI solely to write down the take a look at instances.
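A concrete sketch of what "downshifting" might look like in practice. The function and the delegation prompt are hypothetical examples of mine, not Shubham's:

```python
# Driving in a lower gear: the human writes the small-but-subtle logic
# by hand, keeping ownership of it, and delegates only the test cases.

def next_retry_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with a cap, the kind of easy-to-get-wrong
    logic worth writing yourself rather than delegating."""
    if attempt < 1:
        raise ValueError("attempt must be >= 1")
    return min(cap, base * 2 ** (attempt - 1))

# The delegation step: hand the finished function to the AI and ask
# only for tests, so authorship of the logic stays with the human.
test_request = (
    "Here is next_retry_delay(). Write pytest cases covering attempt=1, "
    "the cap boundary, and invalid input. Do not modify the function."
)
```

The division of labor is the point: the AI accelerates the rote part (test scaffolding) while the human keeps hands on the part where an error would be expensive.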
You stay the motive force. The AI is the engine, not the chauffeur.
The future is spec-driven
The irony in all this is that many developers chose their career specifically to avoid being managers. They like code because it's deterministic. Computers do what they're told (mostly). Humans (and by extension, interns) are messy, ambiguous, and require guidance.
Now, developers' primary tool has become messy and ambiguous.
To succeed in this new environment, developers need to develop soft skills that are actually quite hard. You need to learn how to articulate a vision clearly. You need to learn how to break complex problems into isolated, modular tasks that an AI can handle without losing context. The developers who thrive in this era won't necessarily be the ones who can type the fastest or memorize the most standard libraries. They will be the ones who can translate business requirements into technical constraints so clearly that even a stochastic parrot cannot mess it up.

