ICs Using AI Agents Should Think Like Managers

Developers new to agentic coding risk starting with the wrong expectations. When they try coding agents for the first time, it feels like magic. But cracks appear when AI is used without an understanding of its limitations, and people tend to dismiss agentic coding if it doesn’t work “perfectly”. This is unfortunate, because coding agents are in fact very impressive.
It is often joked that managers have been vibe coding forever. So if ICs are now vibe coding too, does that make them like managers?

What do managers do?

Any new engineer joining a team needs to learn how to work with the team’s codebase: the conventions, constraints, pitfalls, configurations, and so on. All relevant information should be spelled out explicitly, sometimes regardless of their experience level. Managers facilitate onboarding new engineers to the team’s way of working. In turn, managers also need to learn how to work with the person joining their team.

Managers work with product or business stakeholders to clarify requirements and communicate them to the team.

Managers review technical documents, PRDs, and RFCs to provide critique and ensure alignment with business goals.

Managers help developers break down initiatives into milestones and create Jira tickets. They monitor progress and ensure development isn’t drifting away from the goals.

Managers must provide the right amount of management, or prompting, to each team member. Over-explain something to a senior dev and they might get annoyed; under-explain to an inexperienced engineer, and they may go on their merry way with incorrect assumptions.

Managers should stay level-headed and challenge developers’ assumptions, especially when developers seem overconfident.

But most importantly, managers are accountable for all technical decisions and outcomes of a team.

So just like a manager onboards a new engineer to the team’s code, an IC must onboard AI agents: all rules, constraints, and conventions must be spelled out. Things an IC does subconsciously on a day-to-day basis must be explicitly prompted. ICs need to learn how to work with AI agents, just as the agents need to learn how to work with the team’s code.
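In practice, this spelling-out often takes the form of a rules file that the agent reads at the start of every session. A minimal sketch, where the filename and every rule below are hypothetical and purely illustrative:

```markdown
# AGENTS.md — project rules for coding agents

## Conventions
- TypeScript strict mode; no `any` without a comment justifying it.
- All database access goes through the repository layer; never query directly.

## Constraints
- Public API routes must stay backward compatible: add new fields, never remove.
- Do not modify anything under `migrations/` without asking first.

## Pitfalls
- The test suite requires a local database; start it before running tests.

## Workflow
- Propose a plan before editing code; wait for approval on multi-file changes.
```

Many coding tools look for a file like this automatically (the exact filename depends on the tool); the point is that nothing the team “just knows” should be left implicit.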

SDLC with AI

At first glance, it looks like AI has completely transformed how code is written. So it’s worth recalling how software development was done before AI agents.

Suppose you are a software developer on a project, and business comes up with a new feature requirement. Do you start coding? No. You first need to fully understand the feature being requested and how it fits into your application. You need to know the constraints, edge cases, fallbacks, and so on. How would you verify that the feature has been built correctly?

After the requirements have been clarified, do you start coding now? Not yet. You need to think about the right way to implement the feature. What should the development milestones be? How do you make each step backward compatible, in case a release needs to be rolled back? How will you test your code changes?

So a lot of planning is required before the first line of code is written. As it turns out, the process looks much the same with AI agents. You still need to plan your work properly to ensure you’re building the correct thing. Breaking the implementation into small incremental steps is a great way to keep the agent’s context short and to verify its work easily.
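Concretely, the plan you hand to an agent looks a lot like one you would write for yourself. A hypothetical example, where the feature and every step are made up for illustration:

```markdown
## Plan: add CSV export to the reports page

Constraint: each step is independently releasable and backward compatible.

1. Add a `format=csv` query parameter to the reports endpoint, defaulting
   to the existing JSON response. Verify: existing clients see no change.
2. Implement the CSV serializer behind a feature flag, with unit tests for
   quoting and empty results. Verify: flag off in production, tests green.
3. Enable the flag and add an "Export CSV" button to the UI. Verify: the
   downloaded file matches the JSON data for the same filters.
```

Each numbered step is small enough for the agent to finish in one short session, and each has an explicit verification, which is what keeps the context short and the work checkable.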

AI mistakes are human-like

It is well known that AI tends to hallucinate. It may fail to understand the problem statement, make incorrect assumptions, jump to conclusions, outright make up requirements, or fail to sufficiently challenge inconsistent requirements (“You are absolutely right!”). As an AI agent’s context fills up, it may forget information that was previously provided. AI can go completely off-track from what you are trying to accomplish, while remaining suspiciously overconfident.

But do you know who else hallucinates? Humans! As a people manager, I have seen it far too often: developers misunderstand requirements, deviate from the plan, lose sight of the bigger picture as they get deep into implementation details, or deliver something that technically works but doesn’t solve the actual problem. Junior developers may implement the first solution that comes to mind without considering edge cases. Senior developers might over-engineer a solution based on incorrect assumptions about future requirements.

And that’s an important job in management. A manager must hold their reports accountable, ensure they stay aligned with the goals, course-correct when they drift, and verify that the work being done actually addresses the problem at hand. The principles of people management apply, to a degree, to managing AI agents as well.

Conclusion

I have come to believe that it is useful to think of AI as a person. AI agents need management just like developers do. If you are an IC, put yourself in your manager’s shoes and ask yourself: would you expect developers to never be misaligned? Do you accept that developers make mistakes every now and then? Would you expect developers to somehow know your intent if you don’t communicate it to them properly? Would you expect a developer who is new to your project to be familiar with every nook and cranny, convention, and pitfall of your code? Would you trust that your developers (to the best of their knowledge) are giving you correct information 100% of the time? Should you consider your developers above scrutiny? Should you assume they can never fail?

If your answer to the above questions is no, then why should you expect stateless AI agents to be infallible?

See also