pavel 1ar.ionov

20 tips for building software with AI

Practical, no-fluff rules I follow when building with AI coding agents. Numbered list, no philosophy.

I prepared these for a friend who asked me to share my workflow. Turned out to be a pretty decent general checklist. So here it is, cleaned up for everyone.

These assume you’re using an agentic coding tool - Claude Code, Cursor, Codex, or similar. Most tips apply regardless of the tool.

Planning

1. Spend more time in plan mode than in build mode. The best sessions start with a plan, not with code. Ask the agent to draft an implementation plan before writing a single line.

2. Read the plans your AI creates. Don’t just approve and go. If something looks off - edit the plan, don’t wait for the code to be wrong.

3. Tell the AI to interview you. Before it starts planning, ask it to ask you as many questions as it needs to understand the scope. Most agents have interview or clarification modes. Use them.

4. Make sure the AI understands scale. Building for 10 users is different from building for 10,000 concurrent users. State this upfront - it changes architecture, database choices, and cost.

5. Ask the AI to explain tech stack decisions in plain language. Then ask if there are better alternatives. Then ask it to look things up online. The space changes fast - don’t let the model rely only on training data.

Instructions and context

6. Don’t bloat your instructions file. Whether it’s CLAUDE.md, AGENTS.md, or .cursorrules - keep it focused. Don’t micromanage the AI; it is likely already smarter than us (at code). Guide intent and constraints, not implementation steps.

7. Do commit your instructions file. Share it with collaborators via the repo. But never put secrets or API keys in there.

8. Add scoped instructions in subfolders. If you have src/components/, put an additional instructions file there describing how those components should be used. The agent picks it up when it enters that directory.
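As a sketch, using Claude Code’s file naming (Cursor and other tools have their own conventions), a layout might look like:

```
repo/
├── CLAUDE.md                 # project-wide intent and constraints
└── src/
    └── components/
        └── CLAUDE.md         # scoped: naming, props, and usage rules for components
```

The nested file only comes into play when the agent works under src/components/, so it doesn’t cost context anywhere else.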

9. If you have a design system, describe it in a separate doc. Some people use JSON, some use markdown. Point to it from your main instructions file: “whenever we work on design, follow the path/to/design-system.”
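For example, a minimal design-token doc in JSON might look like this (all names and values invented for illustration):

```
{
  "colors":  { "primary": "#2563eb", "surface": "#f8fafc", "text": "#0f172a" },
  "spacing": { "sm": "4px", "md": "8px", "lg": "16px" },
  "type":    { "body": "16px/1.5 Inter, sans-serif" }
}
```

Whether you use JSON or markdown matters less than having one canonical file the agent is always pointed at.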

10. Don’t add too many MCPs, skills, and integrations. Every connection you add eats context window and confuses the model. Only connect what you actually use in most sessions in that particular project.

Code quality

11. Create granular commits with clear messages. One feature per commit. You can always squash later, but you can’t split a 500-line commit into meaningful history after the fact.
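A minimal sketch of squashing two granular commits, run in a throwaway repo (file names and messages are invented; `git reset --soft` rewrites history, so only do this before pushing):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "chore: init"

# Two granular commits, one logical change each.
echo "parse"  > parser.txt   && git add parser.txt   && git commit -q -m "feat: add parser"
echo "render" > renderer.txt && git add renderer.txt && git commit -q -m "feat: add renderer"

# Squash the last two commits into one; the working tree stays untouched.
git reset -q --soft HEAD~2
git commit -q -m "feat: add parser and renderer"
git log --oneline   # init commit plus one squashed commit
```

Going the other way - splitting one big commit into a meaningful sequence - has no equally mechanical inverse, which is the whole argument for committing small in the first place.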

12. Ask the AI to write and run tests. “Builds without errors” does not mean “works as intended”, even though the AI tends to assume it does. If you’re implementing new functionality, ask the agent to write tests for the core logic and run them before committing.
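As a toy illustration of letting tests gate the commit (the slug helper and file names are made up), the `&&` chain below only commits if the test script exits cleanly:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q && git config user.email demo@example.com && git config user.name demo

# A tiny "feature": a slug helper.
cat > slug.sh <<'EOF'
slug() { printf '%s\n' "$1" | tr 'A-Z ' 'a-z-'; }
EOF

# A test for its core logic.
cat > slug_test.sh <<'EOF'
. ./slug.sh
[ "$(slug 'Hello World')" = "hello-world" ] || exit 1
EOF

# Run the test first; the commit happens only if it passes.
sh slug_test.sh && git add . && git commit -q -m "feat: add slug helper"
```

The same shape works with any real test runner in place of `sh slug_test.sh`.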

13. Have a code review pipeline. Ideally, use a different model to review. If you build with Claude, ask Codex to review, or vice versa. Run a review skill or command before committing - fix what matters, then commit.

14. Spend time on maintenance. Simplify code, clean up components, remove dead paths. Create dedicated cleanup plans. There are skills like code-simplifier you can run after implementation.

15. Automate visual checks. If you do frontend work - use Playwright, built-in browser automation, or MCP extensions to catch regressions. This gives the agent a feedback loop and keeps momentum. Otherwise QA will become your bottleneck at some point.

Dependencies and deployment

16. If it’s critical, own it. If your app depends on something loaded from a CDN or external URL, it can break without you changing anything. Install critical dependencies properly - unless you know what you are doing, don’t load them from someone else’s server.

17. Work in branches if you auto-deploy. If pushing to main triggers a deploy (e.g. you’ve set up auto-deployment on Vercel or Netlify), work in feature branches. Merge when you’ve tested locally.
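A minimal sketch of that flow in a throwaway repo (branch and file names invented; `git init -b` needs git 2.28+):

```shell
set -e
d=$(mktemp -d) && cd "$d"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "chore: init"

# Do the work on a feature branch, not on main.
git switch -q -c feature/login
echo "login form" > login.txt
git add login.txt && git commit -q -m "feat: login form"

# Tested locally? Merge into main - pushing main is what triggers the deploy.
git switch -q main
git merge -q --no-edit feature/login
```

Until the merge, nothing on main changes, so nothing deploys.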

18. Don’t overcomplicate your workflow. If you’re the only maintainer, you probably don’t need pull requests. Review code locally before you push. PRs are useful for teams, but too much ceremony will slow you down.

The workflow

19. Test locally, deploy from main. Work in feature branches, test locally, and merge to main when it works - main is what deploys to your infra. If you need a web preview of a branch, most platforms let you point a preview URL at any branch. But honestly, for most solo work: commit often, run and test locally, push when it works.

20. Use issues, not your head. GitHub Issues is free and built in. Write down bugs and feature ideas there. You can tell the AI “tackle issue #3” - it reads the issue and plans it, and a closing keyword in your commit message (e.g. “Fixes #3”) closes the issue automatically once it lands on the default branch. For small teams, Issues with GitHub Projects is more than enough.


None of this is revolutionary. That’s the point - the best AI-assisted workflow is a disciplined one. The genie might be powerful; your job is to clearly tell it what you want.

If you want to see this in practice - 1ar labs runs entirely on these principles. 1ar labs can help you ideate, bring renowned expertise and vision, hold workshops and keynotes, provide consultation services, or build your project entirely.