The developer world has been buzzing this week about a Twitter thread from Boris Cherny—the person who actually built Claude Code at Anthropic. When asked how he uses his own creation, his answer surprised everyone.
It's not some elaborate setup with custom scripts and complicated configurations. It's what he calls "surprisingly vanilla."
But here's what caught my attention as a solo builder: his approach isn't about being a better coder. It's about being a better manager of AI tools. And that's something any of us can do.
The Big Shift: From Coder to Commander
Boris doesn't sit there typing code line by line. Instead, he runs 10 to 15 Claude sessions at once: around five in his terminal, another five to ten in his browser, and sometimes a few more from his phone.
One developer who tried this approach said it "feels more like Starcraft than traditional coding." You're not writing syntax anymore. You're commanding autonomous units.
Think about that for a second. The creator of one of the most powerful coding tools in the world doesn't write code the old-fashioned way. He orchestrates AI agents.
This is exactly what I keep saying about the barrier to building software: it's shifted from "can you code?" to "can you think clearly about problems and communicate with AI?"
Pay the Compute Tax, Skip the Correction Tax
Here's an insight that applies whether you're a seasoned developer or someone just starting to build with AI:
Boris uses Opus 4.5, Anthropic's most capable model, for everything, even though it's slower and more expensive. Why? Because steering a smarter model takes less effort: you spend compute upfront instead of spending time correcting mistakes later.
For solo builders, this translates to a simple principle: use the best model you can afford for important tasks. The time you save by not having to fix AI mistakes is worth more than the extra tokens cost.
The Memory File That Changes Everything
Every member of the Claude Code team contributes to a single file called CLAUDE.md that lives in their code repository. Whenever Claude makes a mistake, someone adds a note to the file so the same mistake doesn't happen again.
The file might include things like:
- "Use early returns instead of nested if statements"
- "Run tests before pushing code"
- "Update documentation when behavior changes"
This turns your codebase into what one observer called "a self-correcting organism." The AI learns from its mistakes—not by magic, but because you're documenting the lessons.
What this means for you: If you're building anything with AI assistance, start keeping a simple file of "lessons learned." Every time Claude (or any AI) gets something wrong, write it down. Then include those notes in future prompts. Your AI interactions will get better over time.
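To make that concrete, here's a minimal sketch of what such a file could look like. The headings and rules below are invented for illustration; your own file should collect the corrections that actually come up in your project.

```markdown
# CLAUDE.md (illustrative example, not the Claude Code team's actual file)

## Code style
- Use early returns instead of nested if statements.
- Keep functions short and focused.

## Workflow
- Run the test suite before pushing code.
- Update the documentation whenever behavior changes.

## Known gotchas
- The reports page caches aggressively; hard-refresh when checking UI changes.
```

Claude Code picks this file up automatically at the start of a session, so the notes travel with the repository instead of living in someone's head.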
→ Learn how to build your own CLAUDE.md file
Plan First, Execute Fast
Boris starts almost every session in "Plan Mode." He goes back and forth with Claude until the plan looks solid. Only then does he switch to auto-accept mode and let Claude execute.
His words: "A good plan is really important."
This is the opposite of what most people do. They jump straight into asking the AI to build something, then spend hours fixing the mess. Boris invests time upfront to get the plan right, then the execution happens quickly—often in a single shot.
The takeaway: Don't rush to "just build it." Spend time getting your requirements clear. Ask the AI to outline its approach before writing any code. The five minutes you spend planning saves an hour of debugging.
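If you're not sure what that looks like in practice, here's an illustrative prompt you could adapt. The wording is mine, not a quote from Boris:

```text
Before writing any code, give me a short plan: which files you'll touch,
what you'll change in each one, and how we'll verify the result.
Wait for my go-ahead before making any edits.
```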
Give AI a Way to Check Its Own Work
Boris calls this "probably the most important thing to get great results."
When Claude can verify its own output—by running tests, checking a browser, or validating against some criteria—the quality of the final result improves dramatically. He says it's a 2-3x improvement.
This is why building with AI isn't just about prompting. It's about creating feedback loops. The AI needs a way to know if it got things right.
For builders: Whenever possible, give your AI assistant a way to verify what it produced. Ask it to test its code. Have it check its work against your requirements. Build that verification step into your process.
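As a concrete (and entirely hypothetical) example of a feedback loop: suppose you asked Claude to write a `slugify` helper for your blog. Handing it a small test file like the sketch below means it can run `pytest` after every change and see for itself whether the work passes, instead of you eyeballing the output. The function, its rules, and the file name are all made up for illustration.

```python
# test_slugify.py - hypothetical verification target for an AI coding session.
# Run with: pytest test_slugify.py

def slugify(title: str) -> str:
    """Turn a post title into a URL-friendly slug (lowercase, hyphen-separated)."""
    cleaned = "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ")
    return "-".join(cleaned.split())


def test_lowercases_and_hyphenates():
    assert slugify("Plan First, Execute Fast") == "plan-first-execute-fast"


def test_strips_punctuation():
    assert slugify("Boris's Setup!") == "boriss-setup"
```

The specifics don't matter. The principle is that "done" gets defined by something the AI can check on its own, not by how plausible its answer sounds.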
The Bottom Line for Non-Technical Builders
Here's what strikes me most about Boris's setup: it's not complicated. There's no elaborate customization or clever hacks. It's disciplined application of simple fundamentals.
The developers who are struggling? They're often the ones skipping planning to "save time," then spending more time fixing mistakes.
The ones who are winning? They're building systems around AI. They're treating it less like a chatbot and more like a team member who needs clear instructions, documentation, and ways to check their work.
You don't need to be a traditional programmer to do this. You need to think clearly about what you want, communicate it well, and build in feedback loops.
That's the real skill that matters now.
Claude Code Learning Path
- Claude Code Developer Cheatsheet - Quick reference for Boris Cherny's workflow patterns
- Building Your CLAUDE.md: The File That Makes AI Remember - How to create persistent project context
- How I Built This Website with Claude Code - A real-world case study applying these principles
Official Resources
- Claude Code Documentation - Complete guide to Claude Code features and setup
- Boris Cherny on Twitter - Follow the creator of Claude Code for insights
- Anthropic Blog - Latest updates and announcements from Anthropic
Building something with AI? I'd love to hear what's working for you. Drop me a note or share your approach—we're all figuring this out together.