Working with AI
Most people still treat AI like a vending machine.
They give it one sentence, hope for brilliance, and move on when it doesn't work.
The engineers who get real value from it don't ask for a miracle.
They build structure, give context, and treat it like a teammate who's fast but forgetful.
AI is only human. It can infer, but it can't read your mind.
Plan first
Before I ask AI to do anything, I plan like I would for a junior engineer. I outline the goal, describe the system, and point out what already exists.
Most AI tools now have a way to help you plan.
OpenAI Codex lets you ask for a plan before implementation.
Amp Code has an Oracle for guidance and can be asked to plan.
Cursor and Windsurf both include Planning Modes designed for outlining before you code.
They make you think clearly about scope, constraints, and trade-offs before anything is written.
A simple pattern
Here's the process I follow most days:
- Start with what you want to build and why
Be specific and explicit about what you want:
“Let's implement X to solve Y. It should continue to follow our paradigm Z.”
Be descriptive; the model should understand both the task and its purpose. Then ask it to plan before it writes anything:
“Let's implement X to solve Y. It should continue to follow our paradigm Z. Let's plan this out before implementing. Ask clarifying questions if you are less than 95% confident.”
- Review the plan
Refine it, answer its questions, and make sure it matches how you'd approach the problem yourself. This is the most valuable part.
- Work through the plan together
Once the structure feels right, work step by step.
Brainstorm with AI and explore solutions. Ask for its reasoning, have it explore the codebase, and catch edge cases together.
- Implement only when ready
When the plan and context are solid, move into implementation.
Keep AI in the loop, but stay in control.
It should follow your plan, not create one mid-flight.
As you would with an engineer, give feedback as it goes along. Find something you don't like? Say so. As with any engineering task, feedback is key.
Be ready to plan again. If you need a larger change mid-process, create another plan. Don't be afraid to do this over and over. Planning isn't overhead. It's how you keep control as things grow.
And if the thread itself becomes too noisy, start fresh.
“Abandon threads if they accumulate too much noise. Sometimes things go wrong and failed attempts with error messages clutter up the context window. In those cases, it's often best to start with a new thread and a clean context window.”
A clean context gives both you and the model room to think again.
This approach scales from small features to full systems. It's not about efficiency. It's about predictability and quality.
Context is the real interface
Context is where most teams fail.
AI doesn't know your codebase, your architecture, your preferences, or reasoning unless you tell it.
Every serious AI tool now supports some way to describe how you work.
Whether it's AGENTS.md, CLAUDE.md, copilot-instructions.md, or something similar, these files define how an agent should behave, what conventions to follow, and when to ask questions.
Command line tools, chat agents, code reviewers, and workflow assistants all use context in the same way.
It's a shared layer between you and the model.
Models can technically search your project, but that search is a needle-in-a-haystack exercise.
They can find code, but not meaning.
If you tell the model what's there, what matters, what to ignore, and how things fit together, it performs better every time.
A real example: Vercel publishes an AGENTS.md in their design guidelines.
That file prescribes how agents should generate UI and systems that align with Vercel's design principles.
It enforces consistency and guides AI-driven interfaces toward the same standard of craft that their design engineers follow.
You can do the same for your team: write an agent file that encodes your patterns, values, and expectations.
Think of these files as documentation for a new teammate.
They don't automate understanding. They create it.
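As a sketch of what such a file can look like (the repo layout, paths, and rules below are invented for illustration, not taken from Vercel's or any other real project):

```markdown
# AGENTS.md — how to work in this repo (illustrative example)

## Architecture
- Next.js app lives in `apps/web`; shared components in `packages/ui`. <!-- hypothetical layout -->
- All data access goes through `packages/api`. Never query the database from a component.

## Conventions
- TypeScript strict mode. No `any` without a comment explaining why.
- Reuse existing patterns in `packages/ui` before inventing new ones.

## How to behave
- Propose a plan before any multi-file change and wait for approval.
- Ask clarifying questions if you are less than 95% confident.
- Ignore `legacy/`; it is scheduled for deletion.
```

Keep it short and opinionated. The goal isn't to document everything, it's to give the agent the same orientation you'd give a new teammate on day one.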
Keep it human
AI isn't a replacement for clarity. It's a test of it.
When I work with a model, I explain my reasoning.
I describe what I'm doing, why it matters, and what I'm not sure about.
I encourage it to ask questions.
The best results come when you describe your intent as if you were mentoring.
AI can fill gaps, but it can't bridge silence.
Invite it into your process, not just your task list.
Execution and review
Most of my work happens side by side: I guide, review, and refine.
In structured tools, I keep the flow tight: stage changes myself, inspect diffs, reason about trade-offs.
In freer tools like Claude Code or Codex CLI I'll sometimes let it explore ideas, generate branches, or test different paths.
That looseness is useful for discovery, but for production work I always return to human judgement.
AI can move fast, but it's still your name on the commit.
The same rules that make for good engineering make for good AI use: discipline, review, and intent.
The craft stays
AI hasn't removed the hard parts of engineering. It's reintroduced communication, clarity, and context as the hard parts.
Working with AI is an act of teaching, not just the model, but yourself.
If you can explain your system clearly enough for AI to help, you've already made it better.