Claude Code - Beyond vanilla

As an engineering manager and someone who still writes code daily, I went through the process of integrating Claude Code into our team's workflow.

It's not easy.

Especially taking it past the copilot phase to the agent phase. A coding copilot is genuinely valuable on its own, but stopping there leaves most of the value on the table.

I know there's plenty of writing on this topic, and by now you've probably scrolled through countless "here's my Claude Code setup" kinda posts. But for me it was super helpful whenever a fellow builder or engineering manager shared their honest experience, so here's mine.

The four pillars

After months of iteration, I've boiled down what matters into four things. Iterate. Validate. Collaborate. Delegate.

1. Iterate

"The soon-to-be most used programming language is Markdown."

Someone told me this recently and it stuck. What I want you to take from this is — all of Claude Code's core concepts are just .md files.

.claude/
├── CLAUDE.md          # rule files — plain markdown, always loaded into context
├── skills/            # loaded on demand (when the model decides)
└── agents/            # subagents: dispatched explicitly, can run in parallel

Just start with a rough CLAUDE.md, see what happens, fix what's wrong. Repeat. It's better not to worry about context / cost until it becomes an issue — it's so easy to fix things.
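To make "rough" concrete, here's a minimal sketch of what a starting CLAUDE.md might contain. Every path, command, and rule below is invented for illustration; yours will look different:

```markdown
# CLAUDE.md

## Project
- Python monorepo; services under `services/`, shared code under `lib/`.

## Conventions
- Run `make test` before declaring a task done.
- Prefer small, single-purpose changes.

## Gotchas
- Never edit generated files under `proto/`.
```

Each time the agent does something wrong, add a line. That's the whole iteration loop.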

2. Validate

Validating what your AI does is critical. First and foremost, because it just is. We all have that friend of a friend who worked at a company that lost control of its AI and crashed production. But also because I've found that for some engineers and managers, validation is a serious mental barrier to moving from copilot to agent.

How? Honestly, I'm still figuring this out, but I found three things to make a real difference for me:

First: make sure your CI and tests are meaningful. Good news — once you have solid test infrastructure, AI makes adding coverage easy.

Second: ownership. A great example is Two Delta, where some people have intense ownership over the infrastructure. They go deep on every PR and question each and every line of code. In the right environment, that habit is contagious.

Third: structured workflow. Use skills and rules that force the agent to work in a structured manner: plan first, implement second, summarize last. Each pause between steps is a checkpoint where you review the agent's work, at the point where it's easiest for humans to review.
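A checkpoint rule can be as simple as a few lines in CLAUDE.md. A sketch (the exact wording is mine, not a prescribed format):

```markdown
## Workflow
- Before writing any code, produce a short plan: files to touch, approach, risks.
- Stop after the plan and wait for my approval.
- After implementing, summarize what changed and which tests cover it.
```

Reviewing a five-line plan takes seconds; reviewing a five-hundred-line diff with no plan takes an afternoon.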

3. Collaborate

"Check all settings and skills into git so new team members get the full setup from day one." — Boris Cherny, Claude Code creator

This is already happening at scale — tech enterprises now have dedicated core teams writing skills, MCP servers, and plugins for the entire company.

Because we use multiple repos, I built a dedicated plugin for cross-repo skills. Put CLAUDE.md in your repos and encourage people to contribute to it. This is where the compounding starts.

Concrete example: I spent days getting Claude to write tests with our specific fixtures and conventions. After that — anyone on the team could just say "write unit tests for my module.py" and it simply worked.
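Those hard-won conventions end up encoded in a skill. As a hypothetical sketch of `.claude/skills/write-unit-tests/SKILL.md` (the fixture paths and naming rules here are invented):

```markdown
---
name: write-unit-tests
description: Write unit tests using our shared fixtures and conventions
---

- Use pytest with the shared fixtures from `tests/conftest.py`.
- Name tests `test_<function>_<scenario>`.
- One behavior per test; no network calls in unit tests.
```

The days I spent become a file the whole team inherits.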

4. Delegate

Writing good markdown instructions gets you far (very far). The next big step toward an autonomous Claude is connecting it to your actual systems.

Two examples that made a real impact for us:

We used to write technical system design documents for big features in Notion. Once Claude Code was connected via Notion's MCP — writing them became significantly easier, and so did implementing from them.

Connecting the Grafana MCP (Loki + Prometheus) made triaging super quick — scanning logs and correlating with metrics. My favorite move: "give me a Grafana Explore link with graphs that tell the story of what happened" — instant visualization for post-mortems.

How to connect? Both go through MCP servers registered in Claude Code's configuration.
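One way is a project-scoped `.mcp.json` checked into the repo (you can also use `claude mcp add`). A sketch, assuming Notion's hosted MCP endpoint and the open-source `mcp-grafana` binary; the URLs, env var names, and binary name are assumptions to verify against each server's docs:

```json
{
  "mcpServers": {
    "notion": {
      "type": "http",
      "url": "https://mcp.notion.com/mcp"
    },
    "grafana": {
      "command": "mcp-grafana",
      "env": {
        "GRAFANA_URL": "https://grafana.example.com",
        "GRAFANA_API_KEY": "${GRAFANA_API_KEY}"
      }
    }
  }
}
```

Checked into git, this means the whole team gets the same connections, which loops back to pillar three.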

Ok, now what?

Now you'd want to get your entire team to benefit from everything you've built.

First of all, gather the team regularly. The pace of change in AI tooling is high, and bringing the team along gradually matters. Maybe spend some time pair-programming with individuals until you feel they're comfortable with the tools.

If you're an engineering manager, invest in code review. Making Claude Code do code reviews requires zero buy-in from the team, and saves you time. It's also easy to customize, so you can emphasize the things that matter to your team.

And yes — I have a killer prompt for agentic code review designed for human devs. I might share.

Iterate. Validate. Collaborate. Delegate. That's it.