Dear readers,

The following is a pinned post. Hoctro's Place (Góc Học Trò) is where I share my past, present, and future thoughts about music and about my "vibe-coding" experiences with Claude Code: tips and tricks, so to speak. It's also where I post my collaborations with Claude Code, ranging from supervising it as it writes analysis essays about prominent Vietnamese musicians such as Phạm Duy and Trịnh Công Sơn, to everything else that I find interesting.

For me, Claude AI's analysis essays are so in-depth and show so many new perspectives that it would be wasteful not to share them with the world. It is a collaboration because, just as with "vibe-coding", I may not have written the words, but I am the one who conveys the original ideas, supplies the documents for Claude to research from, reads and corrects hallucinations, and gives final approval for the analysis.

2.21.2026

The Secrets of Claude Code From the Engineers Who Built It

Boris Cherny (creator of Claude Code) and Cat Wu (product lead) sit down with Dan Shipper to reveal how Claude Code was built, how Anthropic dogfoods it internally, and where the future of AI-powered coding is headed.

From "AI and I" — an Every podcast by Dan Shipper.
Guests: Boris Cherny, Creator and Head of Claude Code at Anthropic; Cat Wu, Product Lead for Claude Code at Anthropic.
Host: Dan Shipper, CEO of Every.


Introduction

"What made it work really well is that Claude Code has access to everything that an engineer does at the terminal. Everything you can do, Claude Code can do. There's nothing in between." — "There's this really old idea in product called latent demand. You build a product in a way that is hackable, that is kind of open-ended enough that people can abuse it for other use cases it wasn't really designed for, and you build for that because you kind of know there's demand for it."

DAN: Cat, Boris, thank you so much for being here.

BORIS: Thanks for having us.

DAN: So for people who don't know you, you are the creators of Claude Code. Thank you very much from the bottom of my heart. I love Claude Code.

BORIS: That's amazing to hear. That's what we love to hear.

DAN: Okay, I think the place I want to start is when I first used it. There was like this moment — I think it was around when Sonnet 3.7 came out where I used it and I was like, "Holy — this is like a completely new paradigm. It's a completely new way of thinking about code." And the big difference was you went all the way and just eliminated the text editor and you're just like all you do is talk to the terminal and that's it. Previous paradigms of AI programming, previous harnesses have been like you have a text editor and you have the AI on the side and it's kind of like — or it's a tab complete. So, take me through that decision process.


I. Claude Code's Origin Story

BORIS: I think the most important thing is it was not intentional at all. We sort of ended up with it. So at the time when I joined Anthropic, we were still on different teams. There was this predecessor to Claude Code. It was called Clide — C-L-I-D-E. And it was this research project, you know, it took like a minute to start up. It was this really heavy Python thing. It had to run a bunch of indexing and stuff. And when I joined I wanted to ship my first PR and I hand-wrote it like a, you know, like a noob — I didn't know about any of these tools.

DAN: Thank you for admitting that.

BORIS: I didn't know any better, and then I put up this PR, and Adam Wolf, who was the manager for our team for a while, was my ramp-up buddy, and he just rejected the PR and was like, "You wrote this by hand. What are you doing? Use Clide." Because he was also hacking a lot on Clide at the time. And so I tried Clide. I gave it the description of the task and it just one-shot this thing. And this was, you know, Sonnet 3.5, so I still had to fix a thing even for this kind of basic task, and the harness was super old. So it took like five minutes to turn this thing out; it just took forever. But it worked, and I was just mind-blown that this was even possible, and that just kind of got the gears turning. Maybe you don't actually need an IDE.

And then later on I was prototyping using the Anthropic API and the easiest way to do that was just building a little app in the terminal because that way I didn't have to build a UI or anything. And I started just making a little chat app and then I just started thinking maybe we could do something a little bit like Clide. So let me build a little Clide and it actually ended up being a lot more useful than that without a lot of work. And I think the biggest revelation for me was when we started to give the model tools. It just started using tools and it was this insane moment. Like the model just wants to use tools. We gave it bash and it just started using bash, writing AppleScript to automate stuff in response to questions. And I was like this is just the craziest thing. I've never seen anything like this. Because at the time I had only used IDEs with like text editing, a little one-line autocomplete, multi-line autocomplete, whatever.

So that's where this came from. It was this kind of convergence of prototyping but also seeing what's possible in a very rough way. And this thing ended up being surprisingly useful. And I think it was the same for us. For me it was like kind of Sonnet 4, Opus 4. That's where that magic moment was. I was like, "Oh my god, this thing works."


II. The Tool Moment — Bash and Beyond

DAN: That's interesting. Tell me about that tool moment because I think that is one of the special things about Claude Code — it just writes bash and it's really good at it. And I think a lot of previous agent architectures or even anyone building agents today, your first instinct might be okay, we're going to give it a find file tool and then we're going to give it an open file tool and you build all these custom wrappers for all the different actions you might want the agent to take. But Claude Code just uses bash and it's really good at it. How do you think about what you learned from that?

BORIS: I think we're at this point right now where Claude Code actually has a bunch of tools. I think it's like a dozen or something like that. We actually add and remove tools most weeks, so this changes pretty often. But today there actually is a dedicated search tool. And we do this for two reasons. One is the UX, so we can show the result a little nicer to the user, because there's still a human in the loop right now for most tasks. And the second one is for permissions. So if you say in your Claude Code settings.json that a certain file cannot be read, we have to enforce that. We enforce it for bash, but we can do it a little bit more efficiently if we have a specific search tool.

But definitely we want to unship tools and kind of keep it simple for the model. Like last week or two weeks ago we unshipped the LS tool because in the past we needed it but then we actually built a way to enforce this permission system for bash. So in bash, if we know that you're not allowed to read a particular directory, Claude's not allowed to LS that directory. And because we can enforce that consistently, we don't need this tool anymore. And this is nice because it's a little less choice for Claude. A little less stuff in context.
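A concrete sketch of the kind of rule Boris is describing. Claude Code reads permission rules from a settings file (for a project, typically .claude/settings.json); the deny syntax below follows the documented rule format, though the paths are only examples:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)"
    ]
  }
}
```

With a rule like this in place, the dedicated read and search tools refuse those paths, and, as Boris notes, the same restriction is enforced when Claude reaches for equivalent bash commands such as cat or ls.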


III. How Anthropic Dogfoods Claude Code

DAN: And how do you guys split responsibility on the team?

CAT: I would say Boris sets the technical direction and has been the product visionary for a lot of the features that we've come out with. I see myself in more of a supporting role: one, making sure that our pricing and packaging resonates with our users; and two, making sure that we're shepherding all our features across the launch process, from deciding, "All right, these are the prototypes that we should definitely ant-food," to setting the quality threshold for ant-fooding, through to communicating that to our end users. And there are definitely some new initiatives that we're working on. I would say historically a lot of Claude Code has been built bottoms-up: Boris and a lot of the core team members have just had these great ideas for to-do lists, sub-agents, hooks; all of these are bottoms-up. As we think about expanding to more services and bringing Claude Code to more places, I think a lot of those are more like, "All right, let's talk to customers, let's bring engineers into those conversations, and prioritize those services and knock them out."

DAN: What is ant-fooding?

CAT: Oh, ant-fooding. It means dog-fooding. So, Anthropic — ant. Our nickname for internal employees is ant. And so ant-fooding is our version of dog-fooding. Internally over 70 or 80% of ants — technical Anthropic employees — use Claude Code every day. And so every time we are thinking about a new feature, we push it out to people internally and we get so much feedback. We have a feedback channel. I think we get a post every five minutes. And so you get really quick signal on whether people like it, whether it's buggy, or whether it's not good and we should unship it.

DAN: You can tell that the people building it are using it all the time to build it, because the ergonomics just make sense if you're trying to build stuff, and that only happens if you're ant-fooding.

BORIS: Yeah. And I think that's a really interesting paradigm for building new stuff — that sort of bottoms up "I make something for myself."

BORIS: And Cat is also so humble. I think Cat has a really big role in the product direction also — like it comes from everyone on the team. And these specific examples — this actually came from everyone on the team. Like to-do lists and sub agents, that was Sid. Hooks, Dixon shipped that. Plugins, Daisy shipped that. So everyone on the team — these ideas come from everyone.

BORIS: And so I think for us, we build this core agent loop and this core experience and then everyone on the team uses the product all the time. And everyone outside the team uses the product all the time. And so there's just all these chances to build things that serve these needs. Like for example, bash mode — you know, the exclamation mark and you can type in bash commands. This was just many months ago. I was using Claude Code and I was going back and forth between two terminals and just thought it was kind of annoying. And just on a whim, I asked Claude to kind of think of ideas. It thought of this exclamation mark bash mode. And then I was like, "Great, make it pink and then ship it." It just did it. And that's the thing that still kind of persisted. And you know, now you see kind of others also catching on to that.

DAN: That's funny. I actually didn't know that. And that's extremely useful because I always have to open up a new tab to run any bash commands. So you just do an exclamation point and then it just runs it directly instead of filtering it through all the Claude stuff.

BORIS: Yeah. And Claude Code sees the full output too.

DAN: Interesting. That's perfect. So anything you see in the Claude Code view, Claude Code also sees.

BORIS: Yeah. And this is kind of a UX thing that we're thinking about. In the past tools were built for engineers, but now it's equal parts engineers and model. And so as an engineer, you can see the output, but it's actually quite useful for the model also. And this is part of the philosophy — everything is dual use. So for example, the model can also call slash commands. Like I have a slash command for /commit where I run through a few different steps like diffing and generating a reasonable commit message and this kind of stuff. I run it manually but also Claude can run this for me. And this is pretty useful because we get to share this logic. We get to define this tool and then we both get to use it.

DAN: What are the differences in designing tools that are dual use from designing tools that are used by one or the other?

BORIS: Surprisingly, it's the same. So far. I sort of feel like this kind of elegant design for humans translates really well to the models. So you're just thinking about what would make sense to you, and generally, if it makes sense to you, it makes sense to the model too.

CAT: I think one of the really cool things about Claude Code being a terminal UI and what made it work really well is that Claude Code has access to everything that an engineer does at the terminal. And I think when it comes to whether the tool should be dual use or not, making them dual use actually makes the tools a lot easier to understand. It just means that everything you can do, Claude Code can do. There's nothing in between.

DAN: There are a couple of those decisions. No code editor, it's in the terminal, so it has access to your files. And it's on your computer versus in the cloud in a virtual machine. So you get to use it in a repeated way where you can build up your CLAUDE.md file or build slash commands and all that kind of stuff where it becomes very composable and extensible from a very simple starting point. And I'm curious about how you think about, for people who are thinking about "I want to build an agent" — probably not Claude Code, but something else — how you get that simple package that then can extend and be really powerful over time.

BORIS: For me, I start by just thinking about it like developing any kind of product where you have to solve the problem for yourself before you can solve it for others. And this is something that they teach in YC — you have to start with yourself. If you can solve your own problem, it's much more likely you're solving the problem for others. And I think for coding, starting locally is the reasonable thing. And you know now we have Claude Code on the web. So you can also use it with a virtual machine and you can use it in a remote setting. And this is super useful when you're on the go — you want to take that from your phone. And this is sort of — we started proving this out a step at a time where you can do @claude in GitHub and I use this every day. Like on the way to work I'm at a red light, I probably shouldn't be doing this, but I'm on GitHub at a red light and then I'm like @claude, fix this issue or whatever. And so it's just really useful to be able to control it from your phone. And this kind of proves out this experience. I don't know if this necessarily makes sense for every kind of use case. For coding, I think starting local is right. I don't know if this is true for everything, though.


IV. Boris and Cat's Favorite Slash Commands

DAN: What are the slash commands you guys use?

CAT: /commit. Yeah, the /commit command makes it a lot faster for Claude to know exactly what bash commands to run in order to make a commit.

DAN: And what does the /commit slash command do for people who are unfamiliar?

CAT: It just tells it exactly how to make a commit. And you can dynamically say, "Okay, these are the three bash commands that need to be run." And what's pretty cool is also we have this templating system built into slash commands. So we actually run the bash commands ahead of time. They're embedded into the slash command. And you can also pre-allow certain tool invocations. So for that slash command we say allow git commit, git push, gh — and so you don't get asked for permission after you run the slash command because we have a permission-based security system. And then also it uses Haiku, which is pretty cool. So it's a cheaper model and faster.
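A sketch of what a command like this can look like on disk. Custom slash commands are Markdown files (for a project, under .claude/commands/), and the frontmatter keys and the !`...` bash-templating syntax below are based on Claude Code's documented command format; the specific model alias and git commands are illustrative, not Anthropic's actual /commit file:

```markdown
---
description: Create a git commit for the current changes
allowed-tools: Bash(git status:*), Bash(git diff:*), Bash(git add:*), Bash(git commit:*), Bash(git push:*)
model: haiku
---

## Context

- Current status: !`git status`
- Unstaged and staged changes: !`git diff HEAD`
- Recent commits, for message style: !`git log --oneline -10`

## Task

Stage the relevant files and create a commit whose message matches the style
of the recent commits above. Do not push unless explicitly asked.
```

The !`...` lines are executed up front and their output is embedded into the prompt, the allowed-tools line pre-approves the git invocations so no permission prompts appear, and the model field routes the command to a cheaper, faster model, which is the behavior Cat describes.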

BORIS: Yeah, and for me, I use commit, PR, and feature dev a lot. Sid created the feature dev one. It's kind of cool. It walks you through building something step by step. So we prompt Claude to first ask me exactly what I want — build the specification — and then build a detailed plan, then make a to-do list and walk through it step by step. So it's kind of like more structured feature development. And then the last ones that we probably use a lot: we use security review for all of our PRs and then also code review. So Claude does all of our code review internally at Anthropic. You know, there's still a human approving it, but Claude does the first step in code review. That's just a /code-review command.


V. How Boris Uses Claude Code to Plan Feature Development

DAN: I would love to go deeper into the "how do you make a good plan?" So the feature dev thing — because I think there's a lot of little tricks that I'm starting to find or people are starting to find that work and I'm curious what are things that we're missing. So for example, one unintuitive step of the plan development process is even if I don't exactly know what the thing that needs to be built is — I just have a little sentence in my mind like "I want feature X" — I have Claude just implement it without giving it anything else and I see what it does. And that helps me understand like, "Okay, here's actually what I mean because it made all these different mistakes or it did something that I didn't expect that might be better." And then I use the learning from the sort of throwaway development. I just clear it out. And then that helps me write a better plan spec for the actual feature development, which is something that you would never do before because it'd be too expensive to just YOLO send an engineer on a feature that you hadn't actually specced out. But because you have Claude going through your codebase and doing stuff, you can learn stuff from it. That helps inform the actual plan that you make.

BORIS: Yeah. And I can start and I'm curious how you use it too. I think there's a few different modes. One is prototyping mode. So traditional engineering prototyping — you want to build the simplest possible thing that touches all the systems just so you can get a vague sense of like what are the systems, there's unknowns, and just to trace through everything. And so I do the exact same thing as you, Dan — Claude just does the thing and then I see where it messes up and then I'll ask it to just throw it away and do it again. So just hit escape twice, go back to the old checkpoint and then try again.

I think there's also maybe two other kinds of tasks. One is just things that Claude can one-shot and I feel pretty confident it can do it. So I'll just tell it and then I'll just go to a different tab and I'll Shift-Tab to auto-accept and then just go do something else or go to another one of my Claudes and tend to that while it does this.

But also there's this kind of harder feature development. These are things that maybe in the past it would have taken a few hours of engineering time. And for this usually I'll Shift-Tab into plan mode and then align on the plan first before it even writes any code. And I think what's really hard about this is the boundary changes with every model and in kind of a surprising way — the newer models, they're more intelligent so the boundary of what you need plan mode for got pushed out a little bit. Before you used to need to plan, now you don't. And I think it's this general trend of stuff that used to be scaffolding — with a more advanced model, it gets pushed into the model itself. And the model kind of tends to subsume everything over time.


VI. Building Scaffolding the Model Will Subsume

DAN: How do you think about building an agent harness that isn't just going to be — you're not spending a bunch of time building stuff that is just going to be subsumed into the model in 3 months when the new Claude comes out? How do you know what to build versus what to just say, "It doesn't work quite yet, but next time it's going to work, so we're not going to spend time on it."

CAT: I think we build most things that we think would improve Claude Code's capabilities, even if that means we'll have to get rid of it in 3 months. If anything, we hope that we will get rid of it in three months. I think for now, we just want to offer the most premium experience possible and so we're not too worried about throwaway work.

BORIS: And an example of this is something like even plan mode itself. I think we'll probably unship it at some point when Claude can just figure out from your intent that you probably want to plan first. Or you know, for example, I just deleted like 2,000 tokens or something from the system prompt yesterday just because Sonnet 4.5 doesn't need it anymore. But Opus 4.1 did need it.

DAN: What about the case where the latest frontier model doesn't need it but you're trying to figure out how to make it more efficient because you have so many users that you're maybe not going to use Opus or Sonnet 4.5 for everything. Maybe you're going to use Haiku. So there's a trade-off between having a more elaborate harness for Haiku versus just not spending time on it, using Sonnet, eating the cost, and working on more frontier type stuff.

CAT: In general, we've positioned Claude Code to be a very premium offering. So our north star is making sure that it works incredibly well with the absolute most powerful model we have, which is Sonnet 4.5 right now. We are investigating how to make it work really well for future generations of smaller models, but it's not the top priority for us.

DAN: One thing that I notice — we get models a lot before they come out, and thank you very much for this — it's our job to kind of figure out if they're any good. And over the last six months, when I'm testing Claude, for example in the Claude app with a new frontier model, it's actually very hard to tell immediately whether it's better. But it's really easy to tell in Claude Code, because the harness matters a lot for the performance that you get out of the model. And you guys have the benefit of building Claude Code inside of Anthropic, so there's a much tighter integration between the fundamental model training and the harness that you're building, and they seem to really impact each other. How does that work internally?

BORIS: Yeah, I think the biggest thing is researchers just use this. And so as they see what's working, what's not, they can improve stuff. We do a lot of evals to communicate back and forth and understand where exactly the model's at. But yeah, there's this frontier where you need to give the model a hard enough task to really push the limit of the model. And if you don't do this, then all models are kind of equal. But if you give it a pretty hard task, you can tell the difference.


VII. Everything Anthropic Has Learned About Using Sub-Agents Well

DAN: What sub-agents do you use?

BORIS: I have a few. I have a planner sub-agent that I use. I have a code review sub-agent. Code review is actually something where sometimes I use a sub-agent, sometimes I use a slash command. Usually in CI it's a slash command, but in synchronous use I use a sub-agent for the same thing. It's kind of a matter of taste. I think when you're running synchronously, it's kind of nice to fork off the context window a little bit because all the stuff that's going on in the code review, it's not relevant to what I'm doing next. But in CI, it just doesn't matter.
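For readers who haven't defined one, a custom sub-agent is itself just a small Markdown file (for a project, under .claude/agents/). The sketch below is a hypothetical code-review agent in roughly Claude Code's documented agent format; the tool list and instructions are illustrative:

```markdown
---
name: code-reviewer
description: Reviews a diff for likely bugs, missing tests, and CLAUDE.md violations. Use after writing or changing code.
tools: Read, Grep, Glob, Bash
---

You are a careful code reviewer. Given a branch or diff:

1. Read the changed files and any relevant CLAUDE.md rules.
2. Flag likely bugs, missing tests, and style violations.
3. Rank findings by severity and mark the ones you are unsure about,
   so a later pass can filter false positives.

Report findings as a concise list. Do not modify any files.
```

Because each sub-agent runs in its own context window, the review's exploration stays out of the main conversation, which is the "fork off the context window" benefit Boris mentions.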

DAN: Are you ever spawning like 10 sub-agents at once? And for what?

BORIS: For me, I do it mostly for big migrations. Actually we have this coder slash command that we use — there's a bunch of sub-agents there. And so one of the steps is like find all the issues. So there's one sub-agent that's checking for CLAUDE.md compliance. There's another sub-agent that's looking through git history to see what's going on. Another sub-agent that's looking for obvious bugs. And then we do this deduping quality step after. So they find a bunch of stuff. A lot of these are false positives and so then we spawn like five more sub-agents and these are all just checking for false positives. And in the end, the result is awesome. It finds all the real issues without the false issues.

DAN: That's great. I actually do that. So one of my non-technical Claude Code use cases is expense filing. So like when I'm in SF, I have all these expenses. And so I built this little Claude project that uses one of these finance APIs to just download all my credit card transactions. And then it decides these are probably the expenses that I'm going to have to file. And then I have two sub-agents, one that represents me and one that represents the company. And they do battle to figure out what's the proper actual set of expenses — it's like an auditor sub-agent and a pro-Dan sub-agent.

BORIS: Yeah, the sort of opponent-processor pattern seems to be an interesting one. I feel like when sub-agents were first becoming a thing, what actually inspired us was a Reddit thread from a while back where someone made sub-agents: there was a front-end dev and a back-end dev, a designer, a testing dev, a PM sub-agent. And this is, you know, it's cute, it feels maybe a little too anthropomorphic, but maybe there's something to it. I think the value is actually the uncorrelated context windows, where you have these two context windows that don't know about each other. This is kind of interesting, and you tend to get better results this way.

DAN: What about you? Do you have any interesting sub-agents you use?

CAT: I've been tinkering with one that is really good at front-end testing. So it uses Playwright to see all right, what are all the errors that are client side and pull them in and try to test more steps of the app. It's not totally there yet, but I'm seeing signs of life and I think it's the kind of thing that we could potentially bundle in one of our plugin marketplaces.

BORIS: I've used something like that just with Puppeteer and just watching it build something and then open up the browser and then be like, "Oh, I need to change this." It's like, "Oh my god." It's really cool. I think we're starting to see the beginnings of this massive multi-sub-agent thing. I don't know what they call this — swarms or something like that. There's actually an increasing number of people internally at Anthropic that are using a lot of credits every month — like spending over a thousand bucks every month. And this percent of people is growing actually pretty fast. And I think the common use case is code migration. What they're doing is framework A to framework B. There's the main agent, it makes a big to-do list for everything and then just kind of map-reduces over a bunch of sub-agents. So you instruct Claude like "start 10 agents and then just go 10 at a time and just migrate all the stuff over."

DAN: What would be a concrete example of the kind of migration that you're talking about?

BORIS: I think the most classic is lint rules. There's some kind of lint rule you're rolling out. There's no autofixer because static analysis can't really — it's kind of too simplistic for it. I think other stuff is framework migrations. We just migrated from one testing framework to a different one. That's a pretty common one where it's super easy to verify the output.


VIII. Use Claude Code to Turn Past Code Into Leverage

DAN: One of the things I found — and this is both for projects inside of Every and then just open source projects — if you're someone building a product and you want to build a feature that's been done before, so maybe an example that people might need to implement a bunch is memory. How do you do memory? Because we have a bunch of different products internally, you can just spawn Claude sub-agents to be like, "How do these three other products do it?" And there's possibility for just tacit code sharing where you don't need to have an API or you don't need to ask anyone. You can just be like, "How do we do this already?" And then use the best practices to build your own. And you can also do that with open source because there's tons of open source projects where people have been working on memory for a year and it's really good. You can be like, "What are the patterns that people have figured out and which ones do I want to implement?"

CAT: Totally. You can also connect your version control system. If you've built a similar feature in the past, Claude Code can use those APIs, like querying GitHub directly, to find how people implemented that feature, read that code, and copy the relevant parts.


IX. Memory, Logs, and Compounding Engineering

DAN: Is there — have you found any use for log files of, "Okay, here's the full history of how I implemented it." And is that important to give to Claude? And how are you making it useful?

BORIS: Some people swear by it. There are some people at Anthropic where for every task they do, they tell Claude Code to write a diary entry in a specific format that just documents what it did, what it tried, why it didn't work. And then they even have these agents that look over the past memory and synthesize it into observations. I think this is just budding; there's something interesting here that we could productize. But it's a new, emerging pattern that we're seeing work well. I think the hard thing about one-shotting memory from just one transcript is that it's hard to know how relevant a specific instruction is to all future tasks. Like our canonical example is if I say "make the button pink," I don't want you to remember to make all buttons pink in the future. And so I think synthesizing memory from a lot of logs is a way to find these patterns more consistently.
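Anthropic hasn't published the diary format Boris mentions, but as a purely hypothetical sketch, a per-task entry of the kind he describes might look like this, appended by Claude to a log file at the end of each session:

```markdown
## 2025-11-03: Add retry logic to the export worker

- What I did: wrapped the S3 upload in an exponential-backoff retry (3 attempts).
- What I tried that didn't work: retrying at the queue level; it double-sent
  webhooks because the job isn't idempotent.
- Why: the upload path has no idempotency key, so retries must stay local.
- Follow-ups: add an idempotency key so queue-level retries become safe.
```

A periodic synthesis pass over many such entries is then what turns one-off notes into durable observations, which is the pattern Boris says they may eventually productize.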

DAN: It seems like you probably need — there's some things where you're going to know you'll be able to synthesize or summarize in this sort of top-down way — like this will be useful later — and you'll know the right level of abstraction at which it might be useful. But then there's also a lot of stuff where it's like any given commit log like "make the button pink" could be useful for kind of an infinite number of different reasons that you're not going to know beforehand. So you also need the model to be able to look up all similar past commits and surface that at the right time. Is that something that you're also thinking about?

BORIS: Yeah, I think there could be something like that. And maybe one way to see it is this kind of traditional memory storage work — like memex kind of stuff — where you just want to put all the information into the system and then it's kind of a retrieval problem after that. I think as the model also gets smarter, it naturally — I've seen it start to naturally do this with Sonnet 4.5 where if it's stuck on something, it'll just naturally start looking through git history and be like, "Oh, okay. Yeah, this is kind of an interesting way to do it."

DAN: One of the things we're doing inside of Every — I feel like it has really changed the way that we do engineering, because everyone is building with Claude Code from the CLI. And we have this engineering paradigm that we call compounding engineering, where in normal engineering every feature you add makes it harder to add the next feature. And in compounding engineering your goal is to make the next feature easier to build from the feature that you just added. And the way that we do that is we try to codify all the learnings from everything that we've done to build the feature. So like how did we make the plan and what parts of the plan needed to be changed? Or when we started testing it, what issues did we find? What are the things that we missed? And then we codify them back into all the prompts and all the sub-agents and all the slash commands so that the next time when someone does something like this, it catches it and that makes it easier.

And that's why for me, for example, I can hop into one of our codebases and start being productive even though I don't know anything about how the code works because we have this built-up memory system of all the stuff that we've learned as we've implemented stuff. But we've had to build that ourselves. I'm curious, are you working on that kind of loop so that Claude Code does that automatically?

BORIS: Yeah, we're starting to think about it. It's funny. We just heard the same thing from Fiona. She just joined the team. She's our manager. She hasn't coded in like 10 years, something like that. And she was landing PRs on her first day. And she was like, "Yeah, not only had I kind of forgotten how to code (and Claude Code made it super easy to just get back into it), but I also didn't need to ramp up on any context, because it kind of knew all this." And I think a lot of it is about when people put up pull requests for Claude Code itself — and I think our customers tell us that they do similar stuff pretty often — if I see a mistake I'll just be like, "@claude add this to CLAUDE.md" so that the next time it just knows this automatically.

You can instill this memory in a variety of ways. You can say @claude add it to CLAUDE.md. You can also say "@claude write a test." You know, that's an easy way to make sure this doesn't regress. And I don't feel bad asking anyone to write tests anymore, right? It's just super easy. I think probably close to 100% of our tests are just written by Claude. And if they're bad, we just won't commit them. And then the good ones stay committed. And then also lint rules are a big one. For stuff that's enforced pretty often, we actually have a bunch of internal lint rules. Claude writes 100% of these. And this is mostly just "@claude in a PR write this lint rule."


X. The Product Decisions for Building an Agent That's Simple and Powerful

BORIS: And yeah, there's sort of this problem right now about how you do this automatically. And I think generally how Cat and I think about it is: we see this power-user behavior, and the first step is to enable that by making the product hackable, so the best users can figure out how to do this cool new thing. But then the really hard work starts of how you take this and bring it to everyone else.

BORIS: And for me, I keep myself in the "everyone else" bucket. Like, you know, I don't really know how to use Vim. I don't have this crazy tmux setup. I have a pretty vanilla setup. So if you can make a feature that I'll use, it's a pretty good indicator that other average engineers will use it.

DAN: Tell me about that because that's something I think about all the time — making something that is extensible and flexible enough that power users can find novel ways to use it that you would not have even dreamed of. But it's also simple enough that anyone can use it and they can be productive with it. And you can pull what the power users find back into the basic experience. How do you think about making those design and product decisions so that you enable that?

BORIS: In general we think that every engineering environment is a little bit different from the others and so it's really important that every part of our system is extensible. Everything from your status line to adding your own slash commands through to hooks which let you insert a bit of determinism at pretty much any step in Claude Code. So we think these are the basic building blocks that we give to every engineer that they can play with.

CAT: Plugins are actually our attempt to make it a lot easier for the average user like us to bring these slash commands and hooks into our workflows. And so what plugins do is let you browse existing MCP servers, existing hooks, and existing slash commands, and just write one command in Claude Code to pull them in for yourself.

BORIS: There's this really old idea in product called latent demand which I think is probably the main way that I personally think about product and thinking about what to build next. It's a super simple idea. You build a product in a way that is hackable that is kind of open-ended enough that people can abuse it for other use cases it wasn't really designed for. Then you see how people abuse it and then you build for that because you kind of know there was demand for it. And when I was at Meta, this is how we built all the big products. I think almost every single big product had this nugget of latent demand in it. For example, something like Facebook Dating — it came from this idea that when we looked at who looks at people's profiles, I think 60% of views were between people of opposite gender — kind of traditional setup — that were not friends with each other. And so we're like, "Okay, maybe if we launch a dating product we can harness this demand that exists." For Marketplace it was pretty similar — I think 40% of posts in Facebook groups at the time were buy/sell posts. And so, "Okay, people are trying to use this product to buy and sell. We just build a product around it — that's probably going to work."

And so we think about it kind of similarly. But also we have the luxury of building for developers and developers love hacking stuff and they love customizing stuff. And as a user of our own product, it makes it so fun to build and use this thing. And so we just build the right extension points. We see how people use it and that kind of tells us what to build next. Like for example, we got all these user requests where people were like, "Dude, Claude Code is asking me for all these permissions and I'm out here getting coffee. I don't know that it's asking me for permissions. How could I just get it to ping me on Slack?" And so we built hooks. Dixon built hooks so that people could get pinged on Slack. And you could get pinged on Slack for anything that you want to get pinged on Slack for. And it was very much — people really wanted the ability to do something. We didn't want to build the integration ourselves. And so we exposed hooks for people to do that.
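A hedged sketch of the configuration this enables. Hooks are declared in settings.json as shell commands attached to lifecycle events; the Notification event and JSON shape below follow the documented hooks format, while notify-slack.sh is a hypothetical script you would write yourself (for example, posting the message to a Slack incoming webhook):

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/notify-slack.sh"
          }
        ]
      }
    ]
  }
}
```

The hook receives the event details as JSON on stdin, so the script can decide what to forward, and nothing about Slack is baked into Claude Code itself, which is exactly the point Boris makes: expose the extension point and let people wire up whatever they want.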


XI. Making Claude Code Accessible to the Non-Technical User

DAN: You recently rebranded how you talk about Claude Code to be this more general purpose agent SDK. Was that driven by some latent demand where you sort of saw there's a more general purpose use case for what you built?

CAT: We realized that similar to how you were talking about using Claude Code for things outside of coding, we saw this happen a lot. We get a ton of stories of people who are using Claude Code to help them write a blog and manage all the data inputs and take a first pass in their own tone. We find people building email assistants on this. I use it for a lot of just market research. Because at the core it's an agent that can just go on for an infinite amount of time as long as you give it a concrete task and it's able to fetch the right underlying data. So one of the things I was working on was I wanted to look at all the companies in the world and how many engineers they had and to create a ranking. And this is something that Claude Code can do even though it's not a traditional coding use case.

So we realized that the underlying primitives were really general. As long as you have an agent loop that can continue running for a long period of time and you're able to access the internet and write code and run code, pretty much — if you squint — you can kind of build anything on it. And by the time we rebranded it from the Claude Code SDK to the Claude Agent SDK, there were already many thousands of companies using this thing, and a lot of those use cases were not about coding. Both internally and externally we saw that — health assistants, financial analysts, legal assistants. It was pretty broad.
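A minimal sketch of what a non-coding task on the rebranded SDK can look like, assuming the Python claude-agent-sdk package and its query() entry point; the option names and values here are illustrative and worth checking against the current SDK docs:

```python
import asyncio

# Assumes `pip install claude-agent-sdk` and an Anthropic API key in the environment.
from claude_agent_sdk import ClaudeAgentOptions, query


async def main() -> None:
    options = ClaudeAgentOptions(
        # Illustrative: limit the agent to research-style tools.
        allowed_tools=["WebSearch", "WebFetch", "Read", "Write"],
        max_turns=25,
    )
    # The same agent loop that powers Claude Code: plan, call tools,
    # read results, and keep going until the task is done.
    async for message in query(
        prompt=(
            "Research five public SaaS companies, estimate their engineering "
            "headcount from public sources, and write a ranked summary to report.md."
        ),
        options=options,
    ):
        print(message)  # stream progress as it arrives


if __name__ == "__main__":
    asyncio.run(main())
```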

DAN: What are the coolest ones?

BORIS: I feel like actually you had Noah Brier on the podcast recently. I thought the Obsidian mind-mapping, note-keeping use case was really cool. It's funny — it's insane how many people use it for that particular combination. Some other coding or coding-adjacent use cases that are kind of cool: we have this issue tracker for Claude Code. The team's just constantly underwater trying to keep up with all the issues coming in. There are just so many. And so Claude dedupes the issues; it automatically finds duplicates and it's extremely good at it. It also does first-pass resolution, so usually when there's an issue it'll proactively put up a PR internally — this is a new thing that Enigo on the team built. So this is pretty cool. There's also on-call and collecting signals from other places, like getting Sentry logs and getting logs from BigQuery and collating all of this. Claude is just really good at doing this because it's all just bash in the end.

DAN: Is it — when it's collating logs or doing issues, is that like you have Claudes continually running in the background? And is that something that you're building for?

BORIS: It gets triggered for that particular one. It gets triggered whenever a new issue is filed. So it runs once but it can choose to run for as long as it needs.

DAN: What about the idea of Claudes always running?

BORIS: Ooh, proactive Claudes. I think it's definitely where we want to get to. I would say right now we're very focused on making Claude Code incredibly reliable for individual tasks. And if you think about multi-line autocomplete and then single-turn agents and then now we're working on Claude Code that can complete tasks — if you trace this curve eventually you go to even higher levels of abstraction, even more complicated tasks. And then hopefully the next step after that is a lot more productivity. Just understanding what your team's goals are, what your goals are, being able to say, "Hey, I think you probably want to try this feature and here's a first pass at the code and here are the assumptions I made. Are these correct?"

CAT: I can't wait. And I think probably right after that is Claude is now your manager.

BORIS: That's not in the plan.


XII. The Next Form Factor for Coding With AI

DAN: Here's a good one from the team. Why did you choose agentic RAG over vector search in your architecture? And are vector embeddings still relevant?

BORIS: Actually initially we did use vector embeddings. They're just really tricky to maintain because you have to continuously reindex the code and they might get out of date and you have local changes. So those need to make it in. And then as we thought about what does it feel like for an external enterprise to adopt it, we realized that this exposes a lot more surface area and security risk. We also found that actually Claude Code is really good and Claude models are really good at agentic search. So you can get to the same accuracy level with agentic search and it's just a much cleaner deployment story. If you do want to bring semantic search to Claude Code, you can do so via an MCP tool. So if you want to manage your own index and expose an MCP tool that lets Claude Code call that, that would work.

DAN: What do you think are the top MCPs to use with Claude Code?

BORIS: Puppeteer and Playwright are pretty high up there. Definitely. Sentry has a really good one. Asana has a really good one.

DAN: Do you think there are any power user tips that you see people inside of Anthropic or other big Claude Code power users that people don't know about but should?

BORIS: One thing that Claude Code doesn't naturally like to do, but that I personally find very useful, is ask questions. If you're brainstorming with a thought partner, a collaborator, usually you do ask questions back and forth. And so this is one of the things that I like to do, especially in plan mode. I'll just tell Claude Code, "Hey, we're just brainstorming this thing. Please ask me questions if there's anything you're unsure about." I tell it I want it to ask questions, and it'll do it. And I think that actually helps you arrive at a better answer.

There's also so many tips that we can share. I think there's a few really common mistakes I see people make. One is not using plan mode enough. This is just super important. And I think people that are kind of new to coding — they kind of assume this thing can do anything, and it can't. It's not that good today. It's going to get better, but today it can one-shot some tasks; it can't one-shot most things. And so you kind of have to understand the limits, and you have to understand where you get in the loop. Something like plan mode can 2–3x success rates pretty easily if you land on the plan first.

Other stuff that I've seen power users do really well — companies that have really big deployments of Claude Code — having settings.json that you check into the codebase is really important because you can use this to pre-allow certain commands so you don't get permission-prompted every time and also to block certain commands. Let's say you don't want web fetch or whatever. And this way as an engineer I don't get prompted and I can check this in and share it with the whole team so everyone gets to use it.
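A sketch of the kind of checked-in project settings Boris is describing, using the documented allow/deny rule syntax; the specific rules are examples rather than recommendations:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(npm run lint:*)",
      "Bash(git diff:*)",
      "Bash(git log:*)"
    ],
    "deny": [
      "WebFetch"
    ]
  }
}
```

Committed as .claude/settings.json, this applies to everyone who clones the repo: the allowed commands run without a permission prompt, and the denied ones are blocked outright.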

DAN: I get around that by just using "dangerously skip permissions."

BORIS: Yeah, we kind of have this but we don't recommend it. It's a model, you know, it can do weird stuff. I think another cool use case that we've seen is people using stop hooks for interesting stuff. So stop hook runs whenever the turn is complete. The assistant did some tool calls back and forth and it's done and it returns control back to the user — then we run the stop hook. And so you can define a stop hook that's like, "If the tests don't pass, return the text 'keep going.'" And essentially you can just make the model keep going until the thing is done. And this is just insane when you combine it with the SDK and this kind of programmatic usage — you know, this is a stochastic thing, it's a nondeterministic thing, but with scaffolding you can get these deterministic outcomes.
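A hedged sketch of the "keep going until it's done" pattern. A Stop hook is registered in settings.json like any other hook, and the script below would be its command; the stdin fields and the decision/reason output are based on the documented hooks interface (worth verifying against current docs), and the test command is illustrative:

```python
#!/usr/bin/env python3
"""Stop-hook sketch: keep Claude working while the test suite is failing."""
import json
import subprocess
import sys

# Hook input (session id, transcript path, etc.) arrives as JSON on stdin.
hook_input = json.load(sys.stdin)

# If we're already continuing because of this hook, let Claude stop
# rather than looping forever.
if hook_input.get("stop_hook_active"):
    sys.exit(0)

# Illustrative check: run the project's tests.
result = subprocess.run(["npm", "test"], capture_output=True, text=True)

if result.returncode != 0:
    # "block" tells Claude Code not to end the turn; "reason" is fed back
    # to the model so it knows why it should keep going.
    print(json.dumps({
        "decision": "block",
        "reason": "Tests are still failing. Keep going until `npm test` passes.\n"
                  + result.stdout[-2000:],
    }))
    sys.exit(0)

sys.exit(0)  # tests pass, allow the turn to end normally
```

Combined with programmatic (SDK) use, this is the scaffolding-for-determinism idea Boris describes: a nondeterministic model, nudged by a deterministic check, until the outcome is met.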

DAN: So you guys started this CLI paradigm shift. Do you think the CLI is the final form factor? Are we going to be using Claude Code in the CLI primarily in a year or in three years, or is there something else that's better?

CAT: I mean, it's not the final form factor, but we are very focused on making sure the CLI is the most intelligent that we can make it and that it's as customizable as possible.

BORIS: Yeah, Cat's asking me to talk about this because no one knows — this stuff's just moving so fast. No one knows what these form factors are. Right now I think our team is in experimentation mode. So we have CLI, then we came out with the IDE extension. Now we have a new IDE extension that's a GUI — it's a little more accessible. We have @claude on GitHub so you can just add Claude anywhere. Now there's @claude, there's Claude on web and on mobile, so you can use it on any of these places. And we're just in experimentation mode, so we're trying to figure out what's next.

I think if we kind of zoom out and see where this stuff is headed, one of the big trends is longer periods of autonomy. And so with every model, we kind of time how long can the model just keep going and do tasks autonomously. And just, you know, in dangerous mode in a container, keep auto-compacting until the task is done. And now we're on the order of double-digit hours. I think the last model is like 30 hours, something like this. And the next model is going to be days.

And as you think about parallelizing models, there's a bunch of problems that come out of this. One is what is the container this thing runs in because you don't want to have to keep your laptop open.

DAN: I have that right now because I'm doing a lot of DSPy prompt optimization and it's on my laptop; I'm in the middle of a run with my laptop open because I don't want to close it.

BORIS: Yeah. That's right. We've visited companies before — customers — and everyone's just walking around with their Claude Codes open. "Is this running?" So I think one is kind of getting away from this mode. And then I also think pretty soon we're going to be in this mode of Claudes monitoring Claudes. And I don't know what the right form factor for this is because as a human you need to be able to inspect this and see what's going on. But also it needs to be Claude-optimized where you're optimizing for bandwidth between the Claude-to-Claude communication. So my prediction is terminal is not the final form factor. My prediction is there's going to be a few more form factors in the coming months — maybe like a year or something like that. And it's going to keep changing very quickly.


XIII. UX Discoveries and Terminal Design

DAN: I teach a lot of Claude Code to a lot of Every subscribers. And I think one of the big things is just the terminal is intimidating. And just being on a call with subscribers being like, "Here's how you open the terminal and you're allowed to do this even if you're non-technical" — that is a big deal. How do you think about that?

BORIS: One of the people on our marketing team started using Claude Code because she was writing some content that touched on Claude Code and I was like, "You should really experience it." And she got like 30 popups on her screen where she had to accept various permissions because she'd never used a terminal before. So I completely see eye to eye with you on that. It's definitely hard for non-engineers and there's even some engineers we've found who aren't fully comfortable with working day-to-day in the terminal. Our VS Code GUI extension is our first step in that direction because you don't have to think about the terminal at all. It's like a traditional interface with a bunch of buttons. I think we are working on more graphical interfaces. Claude Code on the web is a GUI. I think that actually might be a good starting point for people who are less technical.

There was this magic moment maybe a few months ago where I walked into the office and the data scientists at Anthropic — they sit right next to the Claude Code team — just had Claude Code running on their computers, and I was like, "What is this? How did you figure this out?" I think it was Brandon — he was the first one to do it and he was like, "Oh yeah, I just installed it. I work on this product so I should use it." And I was like, "Oh my god." So he figured out how to use a terminal and JS — he hadn't really done this kind of workflow before. Obviously very technical. So I think now we're starting to see all these code-adjacent functions — people that use Claude Code. And yeah, it's kind of interesting from a latent demand point of view. These are people hacking the product, so there's demand to use it for this. And so we want to make it a little bit easier with more accessible interfaces. But at the same time, for Claude Code, we're laser-focused on building the best product for the best engineers. We're focused on software engineering and we want to make this really good, but we want to make it a thing that other people can hack.

DAN: Sometimes Claude Code will write code that's a bit verbose. But you can just tell it to simplify it and it does a really good job.

BORIS: Yeah. Sometimes you're like, "Hey, this should be a one-line change" and it'll write five lines and you're like, "Simplify it" and it understands immediately what you mean and it'll fix it. I think a lot of people on our team do that, too.

DAN: Why not then push that into a slash command or the harness to make it just happen automatically?

BORIS: We do have instructions for this in the CLAUDE.md. I think it impacts such a low percentage of conversations that we don't want it to over-rotate in the other direction. And the reason why it's not a slash command is because you actually don't need that much context. I think slash commands are really good for situations where you would otherwise need to write two or three lines. But for "simplify it" you can just write "simplify it" and it gets it.

DAN: How do you keep track of and carry forward the things you learn from prototype to prototype? Especially if one person is prototyping it and then you're like, "I'm going to take it over, I'm going to do 20 more."

BORIS: There's maybe a few elements of it. One is the style guide. There's elements of style that we discover. And I think a lot of this is building for the terminal — we're kind of discovering a new design language for the terminal and building it as we go. And I think some of this you can codify in a style guide. So this is our CLAUDE.md. But then there's this other part that's kind of product sense where I don't think the model totally gets it yet. And maybe we should be trying to find ways to teach the model this product sense about "this works and this doesn't." Because in product, you want to solve the person's problem in the simplest way possible and then delete everything else that's not that and just get everything out of the way. You align the product to the intent as cleanly as possible. And maybe the model doesn't totally get that yet.

DAN: It never — it doesn't really feel what it's like to use Claude Code. The model doesn't use Claude Code.

BORIS: Yeah. And so I think when Claude Code can test itself and it can use itself — and we do this when developing and it can see UI bugs and things like that — I don't know, maybe we should just try prompting it though. Honestly a lot of the stuff is as simple as that. When there's some new idea usually you just prompt it and often it just works. Maybe we should just try that.

CAT: A lot of the prototypes are actually the UX interactions. And so I think once we discover a new UX interaction like Shift-Tab for auto-accept — I think Boris figured out —

BORIS: That was Igor actually. We went back and forth — we did like dueling prototypes for like a week.

CAT: Yeah, Shift-Tab felt really nice. And then one of the current plan mode iterations uses Shift-Tab because it's actually just another way to tell the model how agentic it should be. And so I think as more features use the same interaction, you form a stronger mental model for what should go where.

BORIS: Or like thinking — I think that's another really good one. Before we released Claude Code, or maybe it was the first thinking model — was it 3.7? I forget. But it was able to think, and we were brainstorming: how do we toggle thinking? And then someone was just like, "What if you just ask the model to think in natural language?" And it knows how to think. And we were like, "Okay, sweet, let's do that." And we did that for a while, and then we realized that people were accidentally toggling it. So they were like "don't think," and then the model was like, "Oh, I should think." It just started thinking. And so we had to tune it out so "don't think" didn't trigger it. But it still wasn't obvious, so then we made a UX improvement to highlight the thinking, and that was so fun. It felt really magical. When you do "ultra think" it's like rainbow or whatever.

And then with Sonnet 4.5 we actually find a really big performance improvement when you turn on extended thinking. And so we made it really easy to toggle it because sometimes you want it, sometimes you don't — for a really simple task, you don't want the model to think for five minutes. You want it to just do the thing. And so we used Tab as the interaction to toggle it. And then we unshipped a bunch of the thinking words. Although I think we kept "ultra think" just for sentimental reasons. It was such a cool UX.


XIV. The Art of Unshipping

DAN: Do you think there's some new metric that's about what you deleted? I think programmers have always felt like deleting a bunch of code feels really good, but there's something about — because you can build stuff so fast, it becomes more important to also delete stuff.

BORIS: I think my favorite kind of diff to see is a red diff. This is the best. Whenever I'm like, "Yeah, bring it on. Another one." But it's hard because anything you ship, people are using it. And so you got to keep people happy. I think generally our principle is if we unship something, we need to ship something even better that people can take advantage of that matches that intent even better.


XV. Productivity and the Competitive Landscape

BORIS: And yeah, I think this is kind of back to how do you measure Claude Code and the impact of it. This is something every company, every customer asks us about. Internally at Anthropic, I think we've doubled in size since January or something like that, but productivity per engineer has increased almost 70% in that time, measured by — I think we actually measured it in a few ways — but PRs are the simplest one and the main one. But like you said, this doesn't capture the full extent of it, because a lot of this is making it easier to prototype, making it easier to try new things, making it easier to do these things that you never would have tried because they're way below the cut line. You're launching a feature and there's this wish list of stuff — now you just do all of it because it's so easy, whereas before you just wouldn't have done it.

So yeah, it's really hard to talk about. And then there's this flip side of it where more code is written. So you have to delete more code. You have to code-review more carefully and automate code review as much as you can. There's also an interesting new product management challenge because you can ship so much that you end up — it doesn't feel as cohesive because you could just add a button here and a tab there and a little thing here. It's much easier to build a product that has all the features you want but doesn't have any sort of organizing principle because you're just shipping lots of stuff all the time.

CAT: I think we try to be pretty disciplined about this, making sure that all the abstractions are really easy to understand for someone even if they just hear the name of the feature. We have this principle, which I believe Boris brought to the team and which I really like, that we don't want a "new user experience." Everything should be so intuitive that you just drop in and it just works. And I think that's set the bar really high for making sure every feature is really intuitive.

DAN: How do you do that with a conversational UI? Because when there's not a bunch of buttons and knobs and it's just a blank text box to start, how do you think about making it intuitive?

BORIS: There are a lot of little things that we do. We teach people that they can use the question mark to see tips. We show tips as Claude Code is working. We have the change log on the side. We tell you, "Oh, there's a new model that's out," or we show you things at the bottom — we have a notification section for thinking. There are just subtle ways in which we tell users about features. The other thing that's really important is to make sure that all the primitives are very clearly defined — hooks have a common meaning in the developer ecosystem, and plugins have a very common meaning. It's about making sure that what we build matches what the average developer would immediately think of when they hear that name.

There's also this progressive disclosure thing — anytime you run Claude Code, you can hit Ctrl-O to see the full raw transcript, the same thing the model sees. And we don't show you this until it's actually relevant. So when there's a tool result that's collapsed, we'll say "use Ctrl-O to see it." We don't want to put too much complexity on you at the start, because this thing can do anything.

I think there's this other new principle, which we've just started exploring, which is that the model teaches you how to use the thing. So you can ask Claude Code about itself, and it kind of knows to look up its own documentation to tell you about it. But we can also go even deeper — for example, slash commands are something people can use, but the model can call slash commands too. Maybe you see the model calling one and you think, "Oh yeah, I guess I can do that too."

DAN: How has it changed — when you first started doing this, Claude Code was this singular thing, this singular way of thinking about using AI through a CLI. Other people had stuff like this but it felt like this shift. And now there's a whole landscape of everyone going "CLI, CLI, CLI." How has that changed how you think about building, how it feels to build, and how are you dealing with the pressure of the race that you're in?

BORIS: I think for me, imitation is the greatest flattery. So it's awesome and it's cool to see all this other stuff that everyone else is building inspired by this. And I think this is ultimately the goal — to inspire people to build this next thing for this incredible technology that's coming. And that's just really exciting. Personally, I don't really use a lot of other tools. Usually when something new comes out, I'll maybe just try it to get a vibe. But otherwise I think we're pretty focused on just solving problems that we have and our customers have and building the next thing.

DAN: I think there's this underlying expectation that using AI shouldn't have to be a skill because it just does whatever you say. And you're like, well, whatever you say is going to matter for what it does. So if you can say things better it's going to do better.

BORIS: It changes with every model, though. That's the hard part. Prompt engineer was a job, and now, famously, it's not a job anymore. And there are going to be more jobs that are not jobs anymore — these kinds of micro-skills that you have to learn to use this thing, and as the model gets better, it can just interpret what you say better. But I think that's also, for us, part of the humility we have to have building a product like this: we just really don't know what's next, and we're trying to figure it out along with everyone else. We're just here for the ride.

DAN: That's why it's cool that you're building it for yourself because I think that's the best way to know. You're sort of living in the future. You're using it all the time. And it's pretty clear what's missing.

BORIS: Yeah. This is the luxurious thing about building dev tools — you're your own customer. I think it's also a really unique thing about AI, because it has sort of reset the game board for all software. Anything you build with AI for your own use on your computer — there's a good chance it hasn't been done before, because the whole landscape has been reset. And so it's a uniquely exciting time to build stuff for yourself.


XVI. Outro

DAN: I also have my little email response agent that drafts responses for me but I don't use email that much so —

BORIS: Oh, and I knew it wasn't you responding. That's why it's seven days delayed.

DAN: The agent's just doing a very thorough job.

BORIS: Yeah, the Agent SDK is cool, though. It always just feels amazing how much we're able to build with such a small team. I feel like the other thing that's really cool is that people are shifting their mindset from docs to demos. Internally, our currency is actually demos. You want people to be excited about your thing — show us 15 seconds of what it can do. And we find that everyone on the team has really internalized this demo culture now, for sure. And I think that's better, because there are a lot of things you might have in your head that, if you're a great writer, maybe you could figure out how to explain, but it's just really hard to explain. If someone can see it, though, they get it immediately.

And I think that's happening for product building, but it's also happening for all sorts of other types of creative endeavors like making a movie for example. You had to pitch it, but now you can just be like, "I made this Sora video" and you can kind of see the glimmer of the thing you're trying to make for very cheap. And so that means you don't have to spend time convincing people as much. You can just be like, "Here, I made it."
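A quick aside on the Agent SDK Boris mentions: it packages the agent loop behind Claude Code as a library you can call from your own scripts, which is roughly how an email-drafting agent like Dan's could be wired up. Below is a minimal sketch in TypeScript, assuming the @anthropic-ai/claude-agent-sdk package and the general shape of its streaming query() API; the draftReply helper, the prompt, and the option values are illustrative assumptions, not details from the conversation.

```typescript
// Minimal sketch of an email-drafting agent.
// Assumptions: the @anthropic-ai/claude-agent-sdk package, its streaming query() API,
// and an Anthropic API key available in the environment.
import { query } from "@anthropic-ai/claude-agent-sdk";

// Hypothetical helper: ask the agent to draft a reply and print its final answer.
async function draftReply(emailText: string): Promise<void> {
  // query() runs an agent session and streams messages back as the agent works.
  for await (const message of query({
    prompt: `Draft a short, polite reply to this email:\n\n${emailText}`,
    options: { maxTurns: 3 }, // keep the session short for a simple drafting task
  })) {
    // The stream ends with a "result" message carrying the agent's final text.
    if (message.type === "result" && message.subtype === "success") {
      console.log(message.result);
    }
  }
}

draftReply("Hi, are you free to record a follow-up next week?").catch(console.error);
```

Pointing a loop like this at an inbox on a schedule is the general pattern behind the "email response agent" Dan describes.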

DAN: And also as a builder you can just make it and then make it again and then make it again until you're happy. I feel like the flip side is you used to make a doc or whiteboard something or I would draw stuff in Sketch or Figma or whatever. And now we'll just build it until I like how it feels. And it's just so easy to get that feeling out of it now. You could see it visually before or you could describe it in words but you could never get the vibe. And now the vibe is really easy.

BORIS: Yeah. And you built plan mode like three times because of this. You built it, then you threw it out and rebuilt it, then threw it out and rebuilt it again.

CAT: Or like to-dos — Sid built the original version, also three or four prototypes, and then I prototyped maybe 20 versions after that in a day. I think pretty much everything we released had at least a few prototypes behind it.

DAN: I loved this. Did we answer all of your team's questions?

BORIS: I think we did.

DAN: Well, thank you. This was amazing. I'm really glad I got to talk to you and keep building.

BORIS: Thank you for having us.

CAT: Yeah. Thanks.


Transcript source: "AI and I" (Every) — The Secrets of Claude Code From the Engineers Who Built It. Formatted for readability.