
2.21.2026

The Secrets of Claude Code From the Engineers Who Built It

Boris Cherny (creator of Claude Code) and Cat Wu (product lead) sit down with Dan Shipper to reveal how Claude Code was built, how Anthropic dogfoods it internally, and where the future of AI-powered coding is headed.

From "AI and I" — an Every podcast by Dan Shipper.
Guests: Boris Cherny, Creator and Head of Claude Code at Anthropic; Cat Wu, Product Lead for Claude Code at Anthropic.
Host: Dan Shipper, CEO of Every.

Introduction

"What made it work really well is that Claude Code has access to everything that an engineer does at the terminal. Everything you can do, Claude Code can do. There's nothing in between." — "There's this really old idea in product called latent demand. You build a product in a way that is hackable, that is kind of open-ended enough that people can abuse it for other use cases it wasn't really designed for, and you build for that because you kind of know there's demand for it."

DAN: Cat, Boris, thank you so much for being here.

BORIS: Thanks for having us.

DAN: So for people who don't know you, you are the creators of Claude Code. Thank you very much from the bottom of my heart. I love Claude Code.

BORIS: That's amazing to hear. That's what we love to hear.

DAN: Okay, I think the place I want to start is when I first used it. There was like this moment — I think it was around when Sonnet 3.7 came out where I used it and I was like, "Holy — this is like a completely new paradigm. It's a completely new way of thinking about code." And the big difference was you went all the way and just eliminated the text editor and you're just like all you do is talk to the terminal and that's it. Previous paradigms of AI programming, previous harnesses have been like you have a text editor and you have the AI on the side and it's kind of like — or it's a tab complete. So, take me through that decision process.


I. Claude Code's Origin Story

BORIS: I think the most important thing is it was not intentional at all. We sort of ended up with it. When I joined Anthropic, we were still on different teams. There was a predecessor to Claude Code called Clide — C-L-I-D-E. It was this research project — it took like a minute to start up. It was this really heavy Python thing; it had to run a bunch of indexing and stuff. And when I joined, I wanted to ship my first PR, and I hand-wrote it like a noob — I didn't know about any of these tools.

DAN: Thank you for admitting that.

BORIS: I didn't know any better, and then I put up this PR, and Adam Wolf, who was the manager for our team for a while and my ramp-up buddy, just rejected the PR. He was like, "You wrote this by hand. What are you doing? Use Clide." He was also hacking a lot on Clide at the time. So I tried Clide. I gave it the description of the task and it just one-shot the thing, and this was Sonnet 3.5. I still had to fix a thing, even for this kind of basic task, and the harness was super old, so it took like five minutes to turn this thing out. It just took forever. But it worked, and I was just mind-blown that this was even possible, and that kind of got the gears turning. Maybe you don't actually need an IDE.

And then later on I was prototyping using the Anthropic API and the easiest way to do that was just building a little app in the terminal because that way I didn't have to build a UI or anything. And I started just making a little chat app and then I just started thinking maybe we could do something a little bit like Clyde. So let me build a little Clyde and it actually ended up being a lot more useful than that without a lot of work. And I think the biggest revelation for me was when we started to give the model tools. It just started using tools and it was this insane moment. Like the model just wants to use tools. We gave it bash and it just started using bash, writing AppleScript to automate stuff in response to questions. And I was like this is just the craziest thing. I've never seen anything like this. Because at the time I had only used IDEs with like text editing, a little one-line autocomplete, multi-line autocomplete, whatever.

So that's where this came from. It was this kind of convergence of prototyping but also seeing what's possible in a very rough way. And this thing ended up being surprisingly useful. And I think it was the same for us. For me it was like kind of Sonnet 4, Opus 4. That's where that magic moment was. I was like, "Oh my god, this thing works."


II. The Tool Moment — Bash and Beyond

DAN: That's interesting. Tell me about that tool moment because I think that is one of the special things about Claude Code — it just writes bash and it's really good at it. And I think a lot of previous agent architectures or even anyone building agents today, your first instinct might be okay, we're going to give it a find file tool and then we're going to give it an open file tool and you build all these custom wrappers for all the different actions you might want the agent to take. But Claude Code just uses bash and it's really good at it. How do you think about what you learned from that?

BORIS: I think we're at this point right now where Claude Code actually has a bunch of tools, like a dozen or so. We add and remove tools most weeks, so this changes pretty often. But today there actually is a dedicated search tool, and we do this for two reasons. One is the UX: we can show the result a little nicer to the user, because there's still a human in the loop for most tasks. The second is permissions. If your Claude Code settings.json says Claude can't read a particular file, we have to enforce that. We enforce it for bash, but we can do it a little more efficiently with a specific search tool.

But definitely we want to unship tools and kind of keep it simple for the model. Like last week or two weeks ago we unshipped the LS tool because in the past we needed it but then we actually built a way to enforce this permission system for bash. So in bash, if we know that you're not allowed to read a particular directory, Claude's not allowed to LS that directory. And because we can enforce that consistently, we don't need this tool anymore. And this is nice because it's a little less choice for Claude. A little less stuff in context.
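To make the permission rules Boris describes concrete, a deny rule in settings.json might look something like the following. (This is an illustrative sketch; the exact permission-rule syntax can differ across Claude Code versions.)

```json
{
  "permissions": {
    "deny": [
      "Read(./secrets/**)",
      "Bash(curl:*)"
    ]
  }
}
```

With a rule like this, the dedicated search and read tools can refuse matching paths up front, and the same policy has to hold when Claude reaches for the files through bash.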


III. How Anthropic Dogfoods Claude Code

DAN: And how do you guys split responsibility on the team?

CAT: I would say Boris sets the technical direction and has been the product visionary for a lot of the features we've come out with. I see myself as more of a supporting role: one, making sure that our pricing and packaging resonates with our users; two, making sure that we're shepherding all our features across the launch process, from deciding, "All right, these are the prototypes that we should definitely ant-food," to setting the quality threshold for ant-fooding, through to communicating that to our end users. And there are definitely some new initiatives we're working on. Historically, a lot of Claude Code has been built bottoms-up: Boris and a lot of the core team members have just had these great ideas for to-do lists, sub-agents, hooks. All of these are bottoms-up. As we think about expanding to more services and bringing Claude Code to more places, a lot of those are more like, "All right, let's talk to customers, let's bring engineers into those conversations, prioritize those services, and knock them out."

DAN: What is ant-fooding?

CAT: Oh, ant-fooding. It means dog-fooding. So, Anthropic — ant. Our nickname for internal employees is ant. And so ant-fooding is our version of dog-fooding. Internally over 70 or 80% of ants — technical Anthropic employees — use Claude Code every day. And so every time we are thinking about a new feature, we push it out to people internally and we get so much feedback. We have a feedback channel. I think we get a post every five minutes. And so you get really quick signal on whether people like it, whether it's buggy, or whether it's not good and we should unship it.

DAN: You can tell that the people building it are using it all the time to build it, because the ergonomics just make sense if you're trying to build stuff, and that only happens if you're ant-fooding.

BORIS: Yeah. And I think that's a really interesting paradigm for building new stuff — that sort of bottoms up "I make something for myself."

And Cat is also so humble. I think Cat has a really big role in the product direction too — it comes from everyone on the team. And these specific examples actually came from everyone on the team: to-do lists and sub-agents, that was Sid. Hooks, Dixon shipped that. Plugins, Daisy shipped that. These ideas come from everyone.

And so I think for us, we build this core agent loop and this core experience and then everyone on the team uses the product all the time. And everyone outside the team uses the product all the time. And so there's just all these chances to build things that serve these needs. Like for example, bash mode — you know, the exclamation mark and you can type in bash commands. This was many months ago. I was using Claude Code and I was going back and forth between two terminals and just thought it was kind of annoying. And just on a whim, I asked Claude to think of ideas. It thought of this exclamation mark bash mode. And then I was like, "Great, make it pink and then ship it." It just did it. And that's the thing that still kind of persisted. And you know, now you see others also catching on to that.

DAN: That's funny. I actually didn't know that. And that's extremely useful because I always have to open up a new tab to run any bash commands. So you just do an exclamation point and then it just runs it directly instead of filtering it through all the Claude stuff.

BORIS: Yeah. And Claude Code sees the full output too.

DAN: Interesting. That's perfect. So anything you see in the Claude Code view, Claude Code also sees.

BORIS: Yeah. And this is kind of a UX thing that we're thinking about. In the past tools were built for engineers, but now it's equal parts engineers and model. And so as an engineer, you can see the output, but it's actually quite useful for the model also. And this is part of the philosophy — everything is dual use. So for example, the model can also call slash commands. Like I have a slash command for /commit where I run through a few different steps like diffing and generating a reasonable commit message and this kind of stuff. I run it manually but also Claude can run this for me. And this is pretty useful because we get to share this logic. We get to define this tool and then we both get to use it.

DAN: What are the differences in designing tools that are dual use from designing tools that are used by one or the other?

BORIS: Surprisingly, it's the same. So far. I sort of feel like elegant design for humans translates really well to the models. You're just thinking about what would make sense to you, and generally, if it makes sense to you, it makes sense to the model too.

CAT: I think one of the really cool things about Claude Code being a terminal UI and what made it work really well is that Claude Code has access to everything that an engineer does at the terminal. And I think when it comes to whether the tool should be dual use or not, making them dual use actually makes the tools a lot easier to understand. It just means that everything you can do, Claude Code can do. There's nothing in between.

DAN: There are a couple of those decisions. No code editor, it's in the terminal, so it has access to your files. And it's on your computer versus in the cloud in a virtual machine. So you get to use it in a repeated way where you can build up your CLAUDE.md file or build slash commands and all that kind of stuff where it becomes very composable and extensible from a very simple starting point. And I'm curious about how you think about, for people who are thinking about "I want to build an agent" — probably not Claude Code, but something else — how you get that simple package that then can extend and be really powerful over time.

BORIS: For me, I start by just thinking about it like developing any kind of product where you have to solve the problem for yourself before you can solve it for others. And this is something that they teach in YC — you have to start with yourself. If you can solve your own problem, it's much more likely you're solving the problem for others. And I think for coding, starting locally is the reasonable thing. And you know now we have Claude Code on the web. So you can also use it with a virtual machine and you can use it in a remote setting. And this is super useful when you're on the go — you want to take that from your phone. And this is sort of — we started proving this out a step at a time where you can do @claude in GitHub and I use this every day. Like on the way to work I'm at a red light, I probably shouldn't be doing this, but I'm on GitHub at a red light and then I'm like @claude, fix this issue or whatever. And so it's just really useful to be able to control it from your phone. And this kind of proves out this experience. I don't know if this necessarily makes sense for every kind of use case. For coding, I think starting local is right. I don't know if this is true for everything, though.


IV. Boris and Cat's Favorite Slash Commands

DAN: What are the slash commands you guys use?

CAT: /commit. Yeah, the /commit command makes it a lot faster for Claude to know exactly what bash commands to run in order to make a commit.

DAN: And what does the /commit slash command do for people who are unfamiliar?

CAT: It just tells it exactly how to make a commit. You can dynamically say, "Okay, these are the three bash commands that need to be run." What's also pretty cool is that we have a templating system built into slash commands, so we actually run the bash commands ahead of time and embed the output into the slash command. And you can pre-allow certain tool invocations. For that slash command we allow git commit, git push, and gh, so you don't get asked for permission when you run it, even though we have a permission-based security system. It also uses Haiku, which is pretty cool: a cheaper and faster model.
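For readers who haven't written one, a custom slash command is just a Markdown file under .claude/commands/. A hypothetical /commit command along the lines Cat describes might look like this (the frontmatter fields and the !`…` templating syntax follow current Claude Code docs, but treat the specifics as illustrative):

```markdown
---
description: Create a git commit for the staged changes
allowed-tools: Bash(git status:*), Bash(git diff:*), Bash(git commit:*), Bash(git push:*)
model: haiku
---

## Context

- Current status: !`git status`
- Staged changes: !`git diff --cached`

## Task

Write a concise, conventional commit message for the staged changes,
then commit and push.
```

The backtick commands in the Context section are run before the prompt is sent, so Claude sees their output directly, and the allowed-tools list is what lets the command run without permission prompts.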

BORIS: Yeah, and for me, I use commit, PR, and feature dev a lot. Sid created that last one — it's kind of cool. It walks you through building something step by step. We prompt Claude to first ask me exactly what I want and build the specification, then build a detailed plan, then make a to-do list and walk through it step by step. So it's more structured feature development. And the last ones we probably use a lot: we run a security review on all of our PRs, and also code review. Claude does all of our code review internally at Anthropic. There's still a human approving it, but Claude does the first step in code review. That's just a /code-review command.


V. How Boris Uses Claude Code to Plan Feature Development

DAN: I would love to go deeper into "how do you make a good plan?" — the feature-dev thing — because there are a lot of little tricks that I'm starting to find, or people are starting to find, that work, and I'm curious what we're missing. For example, one unintuitive step of the plan development process: even if I don't exactly know what the thing that needs to be built is — I just have a little sentence in my mind like "I want feature X" — I have Claude implement it without giving it anything else, and I see what it does. That helps me understand, "Okay, here's actually what I mean," because it made all these different mistakes, or it did something I didn't expect that might be better. Then I use the learning from that throwaway development — I just clear it out — to write a better plan spec for the actual feature development. That's something you would never do before, because it'd be too expensive to just YOLO-send an engineer on a feature you hadn't actually specced out. But because you have Claude going through your codebase and doing stuff, you can learn from it, and that informs the actual plan that you make.

BORIS: Yeah. And I can start and I'm curious how you use it too. I think there's a few different modes. One is prototyping mode. So traditional engineering prototyping — you want to build the simplest possible thing that touches all the systems just so you can get a vague sense of like what are the systems, there's unknowns, and just to trace through everything. And so I do the exact same thing as you, Dan — Claude just does the thing and then I see where it messes up and then I'll ask it to just throw it away and do it again. So just hit escape twice, go back to the old checkpoint and then try again.

I think there's also maybe two other kinds of tasks. One is just things that Claude can one-shot and I feel pretty confident it can do it. So I'll just tell it and then I'll just go to a different tab and I'll Shift-Tab to auto-accept and then just go do something else or go to another one of my Claudes and tend to that while it does this.

But also there's this kind of harder feature development. These are things that maybe in the past it would have taken a few hours of engineering time. And for this usually I'll Shift-Tab into plan mode and then align on the plan first before it even writes any code. And I think what's really hard about this is the boundary changes with every model and in kind of a surprising way — the newer models, they're more intelligent so the boundary of what you need plan mode for got pushed out a little bit. Before you used to need to plan, now you don't. And I think it's this general trend of stuff that used to be scaffolding — with a more advanced model, it gets pushed into the model itself. And the model kind of tends to subsume everything over time.


VI. Building Scaffolding the Model Will Subsume

DAN: How do you think about building an agent harness that isn't just going to be — you're not spending a bunch of time building stuff that is just going to be subsumed into the model in 3 months when the new Claude comes out? How do you know what to build versus what to just say, "It doesn't work quite yet, but next time it's going to work, so we're not going to spend time on it."

CAT: I think we build most things that we think would improve Claude Code's capabilities, even if that means we'll have to get rid of it in 3 months. If anything, we hope that we will get rid of it in three months. I think for now, we just want to offer the most premium experience possible and so we're not too worried about throwaway work.

BORIS: And an example of this is something like even plan mode itself. I think we'll probably unship it at some point when Claude can just figure out from your intent that you probably want to plan first. Or you know, for example, I just deleted like 2,000 tokens or something from the system prompt yesterday just because Sonnet 4.5 doesn't need it anymore. But Opus 4.1 did need it.

DAN: What about the case where the latest frontier model doesn't need it but you're trying to figure out how to make it more efficient because you have so many users that you're maybe not going to use Opus or Sonnet 4.5 for everything. Maybe you're going to use Haiku. So there's a trade-off between having a more elaborate harness for Haiku versus just not spending time on it, using Sonnet, eating the cost, and working on more frontier type stuff.

CAT: In general, we've positioned Claude Code to be a very premium offering. So our north star is making sure that it works incredibly well with the absolutely most powerful model we have, which is Sonnet 4.5 right now. We are investigating how to make it work really well for future generations of smaller models, but it's not the top priority for us.

DAN: One thing that I notice — and thank you very much for this — is that we often get models before they come out, and it's our job to figure out if they're any good. Over the last six months, when I'm testing a new frontier model in the Claude app, it's actually very hard to tell immediately whether it's better. But it's really easy to tell in Claude Code, because the harness matters a lot for the performance you get out of the model. And you have the benefit of building Claude Code inside of Anthropic, so there's a much tighter integration between the fundamental model training and the harness you're building, and they seem to really impact each other. How does that work internally?

BORIS: Yeah, I think the biggest thing is that researchers just use this. As they see what's working and what's not, they can improve stuff. We do a lot of evals to communicate back and forth and understand exactly where the model's at. But there's this frontier where you need to give the model a hard enough task to really push its limits. If you don't do this, then all models look kind of equal; if you give it a pretty hard task, you can tell the difference.


VII. Everything Anthropic Has Learned About Using Sub-Agents Well

DAN: What sub-agents do you use?

BORIS: I have a few. I have a planner sub-agent that I use. I have a code review sub-agent. Code review is actually something where sometimes I use a sub-agent, sometimes I use a slash command. Usually in CI it's a slash command, but in synchronous use I use a sub-agent for the same thing. It's kind of a matter of taste. I think when you're running synchronously, it's kind of nice to fork off the context window a little bit because all the stuff that's going on in the code review, it's not relevant to what I'm doing next. But in CI, it just doesn't matter.

DAN: Are you ever spawning like 10 sub-agents at once? And for what?

BORIS: For me, I do it mostly for big migrations. Actually, we have this coder slash command that we use, and there's a bunch of sub-agents in there. One of the steps is to find all the issues: there's one sub-agent checking for CLAUDE.md compliance, another looking through git history to see what's going on, another looking for obvious bugs. Then we do a deduping quality step after. They find a bunch of stuff, and a lot of these are false positives, so then we spawn like five more sub-agents that are all just checking for false positives. And in the end, the result is awesome. It finds all the real issues without the false positives.
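The fan-out-then-verify shape Boris describes can be sketched in plain Python. Nothing here is Claude Code's actual implementation; run_reviewer and run_verifier are toy stand-ins for real sub-agent calls, but the structure (parallel reviewers, then a second wave of verifiers that filter false positives) is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def run_reviewer(check, diff):
    """One reviewer 'sub-agent': scan the diff for its one class of issue."""
    findings = []
    for lineno, line in enumerate(diff.splitlines(), start=1):
        if check["pattern"] in line:
            findings.append({"check": check["name"], "line": lineno, "text": line})
    return findings

def run_verifier(finding, diff):
    """One verifier 'sub-agent': re-examine a finding with fresh context.
    Toy rule: a match on a comment line is a false positive."""
    return not finding["text"].lstrip().startswith("#")

def review(diff, checks, max_workers=5):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Wave 1: fan out one reviewer per check, in parallel.
        candidate_lists = list(pool.map(lambda c: run_reviewer(c, diff), checks))
        candidates = [f for fs in candidate_lists for f in fs]
        # Wave 2: fan out one verifier per candidate finding.
        verdicts = list(pool.map(lambda f: run_verifier(f, diff), candidates))
    return [f for f, ok in zip(candidates, verdicts) if ok]

diff = "x = eval(user_input)\n# eval(cfg) was removed earlier\nprint(x)"
checks = [{"name": "no-eval", "pattern": "eval("}]
issues = review(diff, checks)  # only the line-1 finding survives verification
```

The key property, per the conversation, is that each call runs in its own uncorrelated context, so the verifiers re-derive their judgment rather than rubber-stamping the reviewers.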

DAN: That's great. I actually do that. So one of my non-technical Claude Code use cases is expense filing. So like when I'm in SF, I have all these expenses. And so I built this little Claude project that uses one of these finance APIs to just download all my credit card transactions. And then it decides these are probably the expenses that I'm going to have to file. And then I have two sub-agents, one that represents me and one that represents the company. And they do battle to figure out what's the proper actual set of expenses — it's like an auditor sub-agent and a pro-Dan sub-agent.

BORIS: Yeah, the opponent-processor pattern seems to be an interesting one. When sub-agents were first becoming a thing, what actually inspired us was a Reddit thread a while back where someone made sub-agents: there was a front-end dev, a backend dev, a designer, a testing dev, a PM sub-agent. And it's cute, maybe a little too anthropomorphic, but maybe there's something to it. I think the value is actually the uncorrelated context windows: you have these two context windows that don't know about each other, and you tend to get better results this way.

DAN: What about you? Do you have any interesting sub-agents you use?

CAT: I've been tinkering with one that is really good at front-end testing. So it uses Playwright to see all right, what are all the errors that are client side and pull them in and try to test more steps of the app. It's not totally there yet, but I'm seeing signs of life and I think it's the kind of thing that we could potentially bundle in one of our plugin marketplaces.

BORIS: I've used something like that just with Puppeteer, watching it build something, then open up the browser and go, "Oh, I need to change this." It's like, "Oh my god." It's really cool. I think we're starting to see the beginnings of this massive multi-sub-agent thing. I don't know what they call this — swarms or something like that. There's an increasing number of people internally at Anthropic who are using a lot of credits every month, like spending over a thousand bucks a month, and that percentage of people is growing pretty fast. The common use case is code migration: migrating from framework A to framework B. There's the main agent, it makes a big to-do list for everything, and then it just kind of map-reduces over a bunch of sub-agents. So you instruct Claude like, "Start 10 agents, go 10 at a time, and migrate all the stuff over."

DAN: What would be a concrete example of the kind of migration that you're talking about?

BORIS: I think the most classic is lint rules. There's some kind of lint rule you're rolling out. There's no autofixer because static analysis can't really — it's kind of too simplistic for it. I think other stuff is framework migrations. We just migrated from one testing framework to a different one. That's a pretty common one where it's super easy to verify the output.


VIII. Use Claude Code to Turn Past Code Into Leverage

DAN: One of the things I found — and this is both for projects inside of Every and then just open source projects — if you're someone building a product and you want to build a feature that's been done before, so maybe an example that people might need to implement a bunch is memory. How do you do memory? Because we have a bunch of different products internally, you can just spawn Claude sub-agents to be like, "How do these three other products do it?" And there's possibility for just tacit code sharing where you don't need to have an API or you don't need to ask anyone. You can just be like, "How do we do this already?" And then use the best practices to build your own. And you can also do that with open source because there's tons of open source projects where people have been working on memory for a year and it's really good. You can be like, "What are the patterns that people have figured out and which ones do I want to implement?"

CAT: Totally. You can also connect your version control system. If you've built a similar feature in the past, Claude Code can use those APIs, like querying GitHub directly, to find how people implemented a similar feature, read that code, and copy the relevant parts.


IX. Memory, Logs, and Compounding Engineering

DAN: Is there — have you found any use for log files of, "Okay, here's the full history of how I implemented it." And is that important to give to Claude? And how are you making it useful?

BORIS: Some people swear by it. There are some people at Anthropic who, for every task they do, tell Claude Code to write a diary entry in a specific format that documents: what did it do, what did it try, why didn't it work. And they even have agents that look over the past memory and synthesize it into observations. I think this is budding — there's something interesting here that we could productize. It's a new, emerging pattern that we're seeing work well. I think the hard thing about one-shotting memory from a single transcript is that it's hard to know how relevant a specific instruction is to all future tasks. Our canonical example: if I say "make the button pink," I don't want you to remember to make all buttons pink in the future. So synthesizing memory from a lot of logs is a way to find these patterns more consistently.
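As one hypothetical shape for such a diary (this template is ours for illustration, not Anthropic's actual format), a per-task entry that later synthesis agents can scan might look like:

```markdown
# Task diary (one entry appended per session)

## <date>: <one-line task summary>
- What I did: files touched, approach taken
- What I tried that failed: dead ends, and why they failed
- What surprised me: anything that contradicted CLAUDE.md or my assumptions
- Follow-ups: tests, lint rules, or CLAUDE.md edits worth making
```

The point of a fixed format is that an aggregating agent can promote only the patterns that recur across many entries, rather than memorizing one-off instructions like "make the button pink."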

DAN: It seems like you probably need — there's some things where you're going to know you'll be able to synthesize or summarize in this sort of top-down way — like this will be useful later — and you'll know the right level of abstraction at which it might be useful. But then there's also a lot of stuff where it's like any given commit log like "make the button pink" could be useful for kind of an infinite number of different reasons that you're not going to know beforehand. So you also need the model to be able to look up all similar past commits and surface that at the right time. Is that something that you're also thinking about?

BORIS: Yeah, I think there could be something like that. And maybe one way to see it is this kind of traditional memory storage work — like memex kind of stuff — where you just want to put all the information into the system and then it's kind of a retrieval problem after that. I think as the model also gets smarter, it naturally — I've seen it start to naturally do this with Sonnet 4.5 where if it's stuck on something, it'll just naturally start looking through git history and be like, "Oh, okay. Yeah, this is kind of an interesting way to do it."

DAN: One of the things we're doing inside of Every — I feel like it has really changed the way that we do engineering because everyone is Claude Code, CLI-build. And we have this engineering paradigm that we call compounding engineering where in normal engineering every feature you add makes it harder to add the next feature. And in compounding engineering your goal is to make the next feature easier to build from the feature that you just added. And the way that we do that is we try to codify all the learnings from everything that we've done to build the feature. So like how did we make the plan and what parts of the plan needed to be changed? Or when we started testing it, what issues did we find? What are the things that we missed? And then we codify them back into all the prompts and all the sub-agents and all the slash commands so that the next time when someone does something like this, it catches it and that makes it easier.

And that's why for me, for example, I can hop into one of our codebases and start being productive even though I don't know anything about how the code works because we have this built-up memory system of all the stuff that we've learned as we've implemented stuff. But we've had to build that ourselves. I'm curious, are you working on that kind of loop so that Claude Code does that automatically?

BORIS: Yeah, we're starting to think about it. It's funny — we just heard the same thing from Fiona. She just joined the team; she's our manager. She hasn't coded in like 10 years, and she was landing PRs on her first day. She said, "Not only had I kind of forgotten how to code and Claude Code made it super easy to get back into it, but I also didn't need to ramp up on any context, because I kind of knew all this." And I think a lot of it is about when people put up pull requests for Claude Code itself — and our customers tell us they do similar stuff pretty often — if you see a mistake, you just say, "@claude, add this to CLAUDE.md," so that next time it knows automatically.

You can instill this memory in a variety of ways. You can say "@claude add it to CLAUDE.md." You can also say "@claude write a test." You know, that's an easy way to make sure this doesn't regress. And I don't feel bad asking anyone to write tests anymore, right? It's just super easy. I think probably close to 100% of our tests are just written by Claude. And if they're bad, we just won't commit them. And then the good ones stay committed. And then also lint rules are a big one. For stuff that's enforced pretty often, we actually have a bunch of internal lint rules. Claude writes 100% of these. And this is mostly just "@claude in a PR write this lint rule."
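As a hedged sketch of what this kind of instilled memory can look like — the file contents, paths, and rules below are hypothetical illustrations, not Anthropic's actual CLAUDE.md:

```markdown
# CLAUDE.md — checked into the repo root; Claude Code reads it at session start

## Conventions
- All database access goes through `db/client.ts`; never import the driver directly.
- Run the linter and test suite before proposing a commit.

## Learned from past PRs
- The staging config reads `STAGE_URL`, not `STAGING_URL` (this once broke a deploy).
- Prefer adding a lint rule over repeating the same review feedback; ask Claude to write it.
```

Each "@claude add this to CLAUDE.md" appends another entry, so the file accumulates exactly the kind of compounding memory Dan describes.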


X. The Product Decisions for Building an Agent That's Simple and Powerful

CAT: And yeah, there's sort of this problem right now about how do you do this automatically? And I think generally how Boris and I think about it is we see this power user behavior and the first step is how do you enable that by making the product hackable so the best users can figure out how to do this cool new thing. But then really the hard work starts of how do you take this and bring it to everyone else.

BORIS: And for me, I keep myself in the "everyone else" bucket. Like, you know, I don't really know how to use Vim. I don't have this crazy tmux setup. I have a pretty vanilla setup. So if you can make a feature that I'll use, it's a pretty good indicator that other average engineers will use it.

DAN: Tell me about that because that's something I think about all the time — making something that is extensible and flexible enough that power users can find novel ways to use it that you would not have even dreamed of. But it's also simple enough that anyone can use it and they can be productive with it. And you can pull what the power users find back into the basic experience. How do you think about making those design and product decisions so that you enable that?

BORIS: In general we think that every engineering environment is a little bit different from the others and so it's really important that every part of our system is extensible. Everything from your status line to adding your own slash commands through to hooks which let you insert a bit of determinism at pretty much any step in Claude Code. So we think these are the basic building blocks that we give to every engineer that they can play with.

CAT: For plugins — plugins are actually our attempt to make it a lot easier for the average user like us to bring these slash commands and hooks into our workflows. And so what a plugin does is let you browse existing MCP servers, existing hooks, existing slash commands and just write one command in Claude Code to pull that in for yourself.

BORIS: There's this really old idea in product called latent demand which I think is probably the main way that I personally think about product and thinking about what to build next. It's a super simple idea. You build a product in a way that is hackable that is kind of open-ended enough that people can abuse it for other use cases it wasn't really designed for. Then you see how people abuse it and then you build for that because you kind of know there was demand for it. And when I was at Meta, this is how we built all the big products. I think almost every single big product had this nugget of latent demand in it. For example, something like Facebook Dating — it came from this idea that when we looked at who looks at people's profiles, I think 60% of views were between people of opposite gender — kind of traditional setup — that were not friends with each other. And so we're like, "Okay, maybe if we launch a dating product we can harness this demand that exists." For Marketplace it was pretty similar — I think 40% of posts in Facebook groups at the time were buy/sell posts. And so, "Okay, people are trying to use this product to buy and sell. We just build a product around it — that's probably going to work."

And so we think about it kind of similarly. But also we have the luxury of building for developers and developers love hacking stuff and they love customizing stuff. And as a user of our own product, it makes it so fun to build and use this thing. And so we just build the right extension points. We see how people use it and that kind of tells us what to build next. Like for example, we got all these user requests where people were like, "Dude, Claude Code is asking me for all these permissions and I'm out here getting coffee. I don't know that it's asking me for permissions. How could I just get it to ping me on Slack?" And so we built hooks. Dixon built hooks so that people could get pinged on Slack. And you could get pinged on Slack for anything that you want to get pinged on Slack for. And it was very much — people really wanted the ability to do something. We didn't want to build the integration ourselves. And so we exposed hooks for people to do that.
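The Slack ping Boris describes is wired up through the hooks setting. The sketch below shows the general shape; the exact event names and schema should be checked against the current Claude Code hooks documentation, and the webhook URL is a placeholder:

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST -H 'Content-Type: application/json' --data '{\"text\":\"Claude Code is waiting on a permission prompt\"}' \"$SLACK_WEBHOOK_URL\""
          }
        ]
      }
    ]
  }
}
```

The hook itself is just a shell command, which is what makes the extension point so open-ended: the same mechanism can send a push notification, write to a log, or trigger anything else you can script.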


XI. Making Claude Code Accessible to the Non-Technical User

DAN: You recently rebranded how you talk about Claude Code to be this more general-purpose Agent SDK. Was that driven by some latent demand where you sort of saw there's a more general purpose use case for what you built?

CAT: We realized that similar to how you were talking about using Claude Code for things outside of coding, we saw this happen a lot. We get a ton of stories of people who are using Claude Code to help them write a blog and manage all the data inputs and take a first pass in their own tone. We find people building email assistants on this. I use it for a lot of just market research. Because at the core it's an agent that can just go on for an infinite amount of time as long as you give it a concrete task and it's able to fetch the right underlying data. So one of the things I was working on was I wanted to look at all the companies in the world and how many engineers they had and to create a ranking. And this is something that Claude Code can do even though it's not a traditional coding use case.

So we realized that the underlying primitives were really general. As long as you have an agent loop that can continue running for a long period of time and you're able to access the internet and write code and run code, pretty much — if you squint — you can kind of build anything on it. And by the time we rebranded it from the Claude Code SDK to the Claude Agent SDK, there were already many thousands of companies using this thing and a lot of those use cases were not about coding. Both internally and externally we saw that — health assistants, financial analysts, legal assistants. It was pretty broad.

DAN: What are the coolest ones?

BORIS: I feel like actually you had Noah Brier on the podcast recently. I think the Obsidian mind-mapping, note-keeping use case is really cool. It's funny — it's insane how many people use it for this particular combination. Some other coding or coding-adjacent use cases that are kind of cool — we have this issue tracker for Claude Code. The team's just constantly underwater trying to keep up with all the issues coming in. There's just so many. And so Claude dedupes the issues and it automatically finds duplicates and it's extremely good at it. It also does first-pass resolution. So usually when there's an issue it'll proactively put up a PR internally — this is a new thing that Enigo on the team built. So this is pretty cool. There's also on-call work like collecting signals from other places, getting Sentry logs and getting logs from BigQuery and collating all this. Claude's just really good at doing this because it's all just bash in the end.

DAN: Is it — when it's collating logs or doing issues, is that like you have Claudes continually running in the background? And is that something that you're building for?

BORIS: It gets triggered for that particular one. It gets triggered whenever a new issue is filed. So it runs once but it can choose to run for as long as it needs.

DAN: What about the idea of Claudes always running?

BORIS: Ooh, proactive Claudes. I think it's definitely where we want to get to. I would say right now we're very focused on making Claude Code incredibly reliable for individual tasks. And if you think about multi-line autocomplete and then single-turn agents and then now we're working on Claude Code that can complete tasks — if you trace this curve eventually you go to even higher levels of abstraction, even more complicated tasks. And then hopefully the next step after that is a lot more productivity. Just understanding what your team's goals are, what your goals are, being able to say, "Hey, I think you probably want to try this feature and here's a first pass at the code and here are the assumptions I made. Are these correct?"

CAT: I can't wait. And I think probably right after that is Claude is now your manager.

BORIS: That's not in the plan.


XII. The Next Form Factor for Coding With AI

DAN: Here's a good one from the team. Why did you choose agentic RAG over vector search in your architecture? And are vector embeddings still relevant?

BORIS: Actually initially we did use vector embeddings. They're just really tricky to maintain because you have to continuously reindex the code and they might get out of date and you have local changes. So those need to make it in. And then as we thought about what does it feel like for an external enterprise to adopt it, we realized that this exposes a lot more surface area and security risk. We also found that actually Claude Code is really good and Claude models are really good at agentic search. So you can get to the same accuracy level with agentic search and it's just a much cleaner deployment story. If you do want to bring semantic search to Claude Code, you can do so via an MCP tool. So if you want to manage your own index and expose an MCP tool that lets Claude Code call that, that would work.
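The tradeoff Boris describes can be made concrete with a toy sketch: agentic search is just the model running successive keyword searches over the working tree and narrowing based on what comes back, with no index to build or keep fresh. In this simplified illustration the search terms are hardcoded where the model would normally choose them:

```python
# Minimal illustration of agentic search: instead of querying a vector
# index, the agent runs successive keyword searches over the working
# tree and refines its query based on what it finds. The model normally
# chooses the search terms; here they are hardcoded for illustration.
import os
import tempfile

def grep_tree(root: str, needle: str) -> list[tuple[str, int, str]]:
    """Scan every file under `root` for lines containing `needle`."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if needle in line:
                        hits.append((path, lineno, line.strip()))
    return hits

# Set up a toy repo with two files.
repo = tempfile.mkdtemp()
with open(os.path.join(repo, "auth.py"), "w") as f:
    f.write("def check_token(token):\n    return token == SECRET\n")
with open(os.path.join(repo, "app.py"), "w") as f:
    f.write("from auth import check_token\n")

# "Agentic" loop: a broad query first, then a narrower one based on a hit.
broad = grep_tree(repo, "token")
narrow = grep_tree(repo, "check_token")
print(len(broad) >= len(narrow) > 0)  # → True: refinement shrinks the result set
```

Nothing here needs reindexing when files change — the next search simply reads the current state of the tree, which is why the deployment story is so much cleaner than maintaining embeddings.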

DAN: What do you think are the top MCPs to use with Claude Code?

BORIS: Puppeteer and Playwright are pretty high up there. Definitely. Sentry has a really good one. Asana has a really good one.

DAN: Do you think there are any power user tips that you see people inside of Anthropic or other big Claude Code power users that people don't know about but should?

BORIS: One thing that Claude Code doesn't naturally like to do, but that I personally find very useful — Claude Code doesn't naturally like to ask questions. But if you're brainstorming with a thought partner, a collaborator, usually you do ask questions back and forth. And so this is one of the things that I like to do, especially in plan mode. I'll just tell Claude Code, "Hey, we're just brainstorming this thing. Please ask me questions if there's anything you're unsure about." And it'll do it. And I think that actually helps you arrive at a better answer.

There's also so many tips that we can share. I think there's a few really common mistakes I see people make. One is not using plan mode enough. This is just super important. And I think people that are kind of new to coding — they kind of assume this thing can do anything and it can't. It's not that good today and it's going to get better, but today it can one-shot some tasks. It can't one-shot most things. And so you kind of have to understand the limits and you have to understand where you get in the loop. Something like plan mode can 2–3x success rates pretty easily if you land on the plan first.

Other stuff that I've seen power users do really well — companies that have really big deployments of Claude Code — having settings.json that you check into the codebase is really important because you can use this to pre-allow certain commands so you don't get permission-prompted every time and also to block certain commands. Let's say you don't want web fetch or whatever. And this way as an engineer I don't get prompted and I can check this in and share it with the whole team so everyone gets to use it.
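A team-level settings file of the kind Boris describes might look like the following sketch; the tool names and matcher syntax here should be verified against the current Claude Code permissions documentation:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "WebFetch"
    ]
  }
}
```

Checked in as `.claude/settings.json`, this pre-approves the listed commands for everyone who clones the repo and blocks web fetches entirely, so no individual engineer has to answer the same permission prompts again.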

DAN: I get around that by just using "dangerously skip permissions."

BORIS: Yeah, we kind of have this but we don't recommend it. It's a model, you know, it can do weird stuff. I think another cool use case that we've seen is people using stop hooks for interesting stuff. So stop hook runs whenever the turn is complete. The assistant did some tool calls back and forth and it's done and it returns control back to the user — then we run the stop hook. And so you can define a stop hook that's like, "If the tests don't pass, return the text 'keep going.'" And essentially you can just make the model keep going until the thing is done. And this is just insane when you combine it with the SDK and this kind of programmatic usage — you know, this is a stochastic thing, it's a nondeterministic thing, but with scaffolding you can get these deterministic outcomes.
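A stop hook along the lines Boris describes could look like the following sketch. It assumes the convention that a hook exiting non-zero with a reason on stderr tells the model to keep going; the exact contract and schema should be checked against the current hooks documentation:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npm test >/dev/null 2>&1 || { echo 'Tests are failing. Keep going.' >&2; exit 2; }"
          }
        ]
      }
    ]
  }
}
```

Every time the model tries to end its turn, the test suite runs; until it passes, the stop is rejected and the model continues — a small piece of determinism wrapped around a stochastic process.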

DAN: So you guys started this CLI paradigm shift. Do you think the CLI is the final form factor? Are we going to be using Claude Code in the CLI primarily in a year or in three years, or is there something else that's better?

CAT: I mean, it's not the final form factor, but we are very focused on making sure the CLI is the most intelligent that we can make it and that it's as customizable as possible.

BORIS: Yeah, Cat's asking me to talk about this because no one knows — this stuff's just moving so fast. No one knows what these form factors are. Right now I think our team is in experimentation mode. So we have the CLI, then we came out with the IDE extension. Now we have a new IDE extension that's a GUI — it's a little more accessible. We have @claude on GitHub so you can just add Claude anywhere. Now there's Claude on web and on mobile, so you can use it in any of these places. And we're just in experimentation mode, so we're trying to figure out what's next.

I think if we kind of zoom out and see where this stuff is headed, one of the big trends is longer periods of autonomy. And so with every model, we kind of time how long can the model just keep going and do tasks autonomously. And just, you know, in dangerous mode in a container, keep auto-compacting until the task is done. And now we're on the order of double-digit hours. I think the last model is like 30 hours, something like this. And the next model is going to be days.

And as you think about parallelizing models, there's a bunch of problems that come out of this. One is what is the container this thing runs in because you don't want to have to keep your laptop open.

DAN: I have that right now because I'm doing a lot of DSPy prompt optimization, and it's running on my laptop, so I'm walking around with my laptop open because I don't want to close it mid-run.

BORIS: Yeah. That's right. We've visited companies before — customers — and everyone's just walking around with their Claude Codes open. "Is this running?" So I think one is kind of getting away from this mode. And then I also think pretty soon we're going to be in this mode of Claudes monitoring Claudes. And I don't know what the right form factor for this is because as a human you need to be able to inspect this and see what's going on. But also it needs to be Claude-optimized where you're optimizing for bandwidth between the Claude-to-Claude communication. So my prediction is terminal is not the final form factor. My prediction is there's going to be a few more form factors in the coming months — maybe like a year or something like that. And it's going to keep changing very quickly.


XIII. UX Discoveries and Terminal Design

DAN: I teach a lot of Claude Code to a lot of Every subscribers. And I think one of the big things is just the terminal is intimidating. And just being on a call with subscribers being like, "Here's how you open the terminal and you're allowed to do this even if you're non-technical" — that is a big deal. How do you think about that?

BORIS: One of the people on our marketing team started using Claude Code because she was writing some content that touched on Claude Code and I was like, "You should really experience it." And she got like 30 popups on her screen where she had to accept various permissions because she'd never used a terminal before. So I completely see eye to eye with you on that. It's definitely hard for non-engineers and there's even some engineers we've found who aren't fully comfortable with working day-to-day in the terminal. Our VS Code GUI extension is our first step in that direction because you don't have to think about the terminal at all. It's like a traditional interface with a bunch of buttons. I think we are working on more graphical interfaces. Claude Code on the web is a GUI. I think that actually might be a good starting point for people who are less technical.

There was this magic moment maybe a few months ago where I walked into the office and the data scientists at Anthropic — they sit right next to the Claude Code team — and the data scientist just had Claude Code running on their computers and I was like, "What is this? How did you figure this out?" I think it was Brandon — he was the first one to do it and he was like, "Oh yeah, I just installed it. I work on this product so I should use it." And I was like, "Oh my god." So he figured out how to use a terminal and JS — he hasn't really done this kind of workflow before. Obviously very technical. So I think now we're starting to see all these code-adjacent functions — people that use Claude Code. And yeah, it's kind of interesting from a latent demand point of view. These are people hacking the product so there's demand to use it for this. And so we want to make it a little bit easier with more accessible interfaces. But at the same time, for Claude Code, we're laser focused on building the best product for the best engineers. We're focused on software engineering and we want to make this really good but we want to make it a thing that other people can hack.

DAN: Sometimes Claude Code will write code that's a bit verbose. But you can just tell it to simplify it and it does a really good job.

BORIS: Yeah. Sometimes you're like, "Hey, this should be a one-line change" and it'll write five lines and you're like, "Simplify it" and it understands immediately what you mean and it'll fix it. I think a lot of people on our team do that, too.

DAN: Why not then push that into a slash command or the harness to make it just happen automatically?

BORIS: We do have instructions for this in the CLAUDE.md. I think it impacts such a low percentage of conversations that we don't want it to over-rotate in the other direction. And the reason why not a slash command is because you actually don't need that much context. I think slash commands are really good for situations where you would otherwise need to write two or three lines. But for "simplify it" you can just write "simplify it" and it gets it.

DAN: How do you keep track of and carry forward the things you learn from prototype to prototype? Especially if one person is prototyping it and then you're like, "I'm going to take it over, I'm going to do 20 more."

BORIS: There's maybe a few elements of it. One is the style guide. There's elements of style that we discover. And I think a lot of this is building for the terminal — we're kind of discovering a new design language for the terminal and building it as we go. And I think some of this you can codify in a style guide. So this is our CLAUDE.md. But then there's this other part that's kind of product sense where I don't think the model totally gets it yet. And maybe we should be trying to find ways to teach the model this product sense about "this works and this doesn't." Because in product, you want to solve the person's problem in the simplest way possible and then delete everything else that's not that and just get everything out of the way. You align the product to the intent as cleanly as possible. And maybe the model doesn't totally get that yet.

DAN: It never gets that — it doesn't really know what it's like to use Claude Code. The model doesn't use Claude Code.

BORIS: Yeah. And so I think when Claude Code can test itself and it can use itself — and we do this when developing and it can see UI bugs and things like that — I don't know, maybe we should just try prompting it though. Honestly a lot of the stuff is as simple as that. When there's some new idea usually you just prompt it and often it just works. Maybe we should just try that.

CAT: A lot of the prototypes are actually the UX interactions. And so I think once we discover a new UX interaction like Shift-Tab for auto-accept — I think Boris figured out —

BORIS: That was Igor actually. We went back and forth — we did like dueling prototypes for like a week.

CAT: Yeah, Shift-Tab felt really nice. And then one of the current plan mode iterations uses Shift-Tab because it's actually just another way to tell the model how agentic it should be. And so I think as more features use the same interaction, you form a stronger mental model for what should go where.

BORIS: Or like thinking — I think is another really good one. Before we released Claude Code, or maybe it was the first thinking model — was it 3.7? I forget. But it was able to think and we're brainstorming how do we toggle thinking? And then someone was just like, "What if you just ask the model to think in natural language?" And it knows how to think. And we're like, "Okay, sweet, let's do that." And we did that for a while and then we realized that people were accidentally toggling it. So they were like "don't think" and then the model was like, "Oh, I should think." It just started thinking. And so we had to tune it out so "don't think" didn't trigger it. But then it still wasn't obvious. But then we made a UX improvement to highlight the thinking and that was so fun. It felt really magical. When you do "ultra think" it's like rainbow or whatever.

And then with Sonnet 4.5 we actually find a really big performance improvement when you turn on extended thinking. And so we made it really easy to toggle it because sometimes you want it, sometimes you don't — for a really simple task, you don't want the model to think for five minutes. You want it to just do the thing. And so we used Tab as the interaction to toggle it. And then we unshipped a bunch of the thinking words. Although I think we kept "ultra think" just for sentimental reasons. It was such a cool UX.


XIV. The Art of Unshipping

DAN: Do you think there's some new metric that's about what you deleted? I think programmers have always felt like deleting a bunch of code feels really good, but there's something about — because you can build stuff so fast, it becomes more important to also delete stuff.

BORIS: I think my favorite kind of diff to see is a red diff. This is the best. Whenever I'm like, "Yeah, bring it on. Another one." But it's hard because anything you ship, people are using it. And so you got to keep people happy. I think generally our principle is if we unship something, we need to ship something even better that people can take advantage of that matches that intent even better.


XV. Productivity and the Competitive Landscape

BORIS: And yeah, I think this is kind of back to how do you measure Claude Code and the impact of it. This is something every company, every customer asks us about. Internally at Anthropic I think we doubled in size since January or something like that but then productivity per engineer has increased almost 70% in that time, measured by — I think we actually measured it in a few ways — but PRs are the simplest one and the main one. But like you said, this doesn't capture the full extent of it because a lot of this is making it easier to prototype, making it easier to try new things, making it easier to do these things that you never would have tried because they're way below the cut line. You're launching a feature and there's this wish list of stuff — now you just do all of it because it's so easy and you just wouldn't have done it.

So yeah, it's really hard to talk about. And then there's this flip side of it where more code is written. So you have to delete more code. You have to code-review more carefully and automate code review as much as you can. There's also an interesting new product management challenge because you can ship so much that you end up — it doesn't feel as cohesive because you could just add a button here and a tab there and a little thing here. It's much easier to build a product that has all the features you want but doesn't have any sort of organizing principle because you're just shipping lots of stuff all the time.

CAT: I think we try to be pretty disciplined about this and making sure that all the abstractions are really easy to understand for someone even if they just hear the name of the feature. We have this principle that I believe Boris brought to the team that I really like where we don't want a "new user experience." Everything should be so intuitive that you just drop in and it just works. And I think that's really set the bar really high for making sure every feature is really intuitive.

DAN: How do you do that with a conversational UI? Because when there's not a bunch of buttons and knobs and it's just a blank text box to start, how do you think about making it intuitive?

BORIS: There's a lot of little things that we do. We teach people that they can use the question mark to see tips. We show tips as Claude Code is working. We have the change log on the side. We tell you about, "Oh, there's a new model that's out" or we show you at the bottom — we have a notification section for thinking. I think there's just subtle ways in which we tell users about features. The other thing that's really important is to just make sure that all the primitives are very clearly defined — hooks have a common meaning in the developer ecosystem. Plugins have a very common meaning. And just making sure that what we build matches what the average developer would immediately think of when they hear that.

There's also this progressive disclosure thing — anytime in Claude Code when you run it you can hit Ctrl-O to see the full raw transcript, the same thing the model sees. And we don't show you this until it's actually relevant. So when there's a tool result that's collapsed, then we'll say "use Ctrl-O to see it." So we don't want to put too much complexity on you at the start because this thing can do anything.

I think there's this other new principle which we've just started exploring which is the model teaches you how to use the thing. So you can ask Claude Code about itself and it kind of knows to look up its own documentation to tell you about it. But we can also go even deeper — for example, slash commands are a thing that people can use but also the model can call slash commands. And maybe you see the model calling it and then you'll be like, "Oh yeah, I guess I can do that too."

DAN: How has it changed — when you first started doing this, Claude Code was this singular thing, this singular way of thinking about using AI through a CLI. Other people had stuff like this but it felt like this shift. And now there's a whole landscape of everyone going "CLI, CLI, CLI." How has that changed how you think about building, how it feels to build, and how are you dealing with the pressure of the race that you're in?

BORIS: I think for me, imitation is the greatest flattery. So it's awesome and it's cool to see all this other stuff that everyone else is building inspired by this. And I think this is ultimately the goal — to inspire people to build this next thing for this incredible technology that's coming. And that's just really exciting. Personally, I don't really use a lot of other tools. Usually when something new comes out, I'll maybe just try it to get a vibe. But otherwise I think we're pretty focused on just solving problems that we have and our customers have and building the next thing.

DAN: I think there's this underlying expectation that using AI shouldn't have to be a skill because it just does whatever you say. And you're like, well, whatever you say is going to matter for what it does. So if you can say things better it's going to do better.

BORIS: It changes with every model though. That's the hard part. Prompt engineer was a job and now famously it's not a job anymore. And there's going to be more jobs that are not jobs anymore — these kind of micro-skills that you have to learn to use this thing. And as the model gets better it can just interpret it better. But I think that's also for us — this is part of this humility that we have to have building a product like this that we just really don't know what's next and we're just trying to figure it out along with everyone else. We're just here for the ride.

DAN: That's why it's cool that you're building it for yourself because I think that's the best way to know. You're sort of living in the future. You're using it all the time. And it's pretty clear what's missing.

BORIS: Yeah. This is the luxurious thing about building dev tools — you're your own customer. I think it's also really a unique thing about AI because it sort of reset the game board for all software. Anything that you do for something that you want to use on your computer — if you're building it with AI, there's a good chance that hasn't been done before because the whole landscape has been reset. And so it's a uniquely exciting time to build stuff for yourself.


XVI. Outro

DAN: I also have my little email response agent that drafts responses for me but I don't use email that much so —

BORIS: Oh, and I knew it wasn't you responding. That's why it's seven days delayed.

DAN: The agent's just doing a very thorough job.

BORIS: Yeah, Agent SDK is cool though. It always just feels amazing how much we're able to build with such a small team. I feel like the other thing that's really cool is that people are just shifting their mindset from docs to demos. Internally, our currency is actually demos. You want people to be excited about your thing — show us 15 seconds of what it can do. And we find that everyone on the team now has this indoctrinated demo culture for sure. And I think that's better because there's a lot of things that you might have in your head that if you're a great writer, maybe you could figure out how to explain it. But it's just really hard to explain. But if someone can see it, they get it immediately.

And I think that's happening for product building, but it's also happening for all sorts of other types of creative endeavors like making a movie for example. You had to pitch it, but now you can just be like, "I made this Sora video" and you can kind of see the glimmer of the thing you're trying to make for very cheap. And so that means you don't have to spend time convincing people as much. You can just be like, "Here, I made it."

DAN: And also as a builder you can just make it and then make it again and then make it again until you're happy. I feel like the flip side is you used to make a doc or whiteboard something or I would draw stuff in Sketch or Figma or whatever. And now we'll just build it until I like how it feels. And it's just so easy to get that feeling out of it now. You could see it visually before or you could describe it in words but you could never get the vibe. And now the vibe is really easy.

BORIS: Yeah. And you built plan mode like three times. Yeah, because of this. You built it and then you threw it out and rebuilt it and then threw it out and rebuilt it.

CAT: Or like to-do's — Sid built the original version, also three or four prototypes, and then I prototyped maybe 20 versions after that in a day. I think pretty much everything we released there was at least a few prototypes behind it.

DAN: I loved this. Did we answer all of your team's questions?

BORIS: I think we did.

DAN: Well, thank you. This was amazing. I'm really glad I got to talk to you and keep building.

BORIS: Thank you for having us.

CAT: Yeah. Thanks.


Transcript source: "AI and I" (Every) — The Secrets of Claude Code From the Engineers Who Built It. Formatted for readability.

Interviewing Claude Code Managers: Boris Cherny and Cat Wu

 


Claude Code: Anthropic's CLI Agent

Boris Cherny (creator of Claude Code) and Cat Wu (PM of Claude Code) sit down to discuss the origins, design philosophy, and future of one of the most transformative coding tools of the AI era.

From "Latent Space" — an AI engineering podcast.
Hosts: Alessio (Partner & CTO at Decibel) and Swyx (Founder of Smol AI).
Guests: Boris Cherny, Creator of Claude Code, and Cat Wu, PM of Claude Code at Anthropic.
Source: Latent Space Podcast



Introduction

ALESSIO: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Smol AI.

SWYX: Hey, and today we're in the studio with Cat Wu and Boris Cherny. Welcome.

BORIS: Thanks for having us.

CAT: Thank you.

SWYX: Cat, you and I know each other from before. Dagster as well and then Index Ventures and now Anthropic.

CAT: Exactly. It's so cool to see like a friend that you know from before now working at Anthropic and shipping really cool stuff.

SWYX: And Boris, you were a celebrity because we were just having you outside just getting coffee and people recognized you from your video.

BORIS: Oh wow. That's new. I definitely had that experience like once or twice in the last few weeks. It was surprising.

SWYX: Well, thank you for making the time. You're here to talk — we're here to talk about Claude Code. Most people probably have heard of it. We think quite a few people have tried it, but let's get a crisp upfront definition — what is Claude Code?

BORIS: So Claude Code is Claude in the terminal. Claude has a bunch of different interfaces. There's desktop, there's web, and yeah, Claude Code runs in your terminal. Because it runs in the terminal, it has access to a bunch of stuff that you just don't get if you're running on the web or on desktop or whatever. So it can run bash commands, it can see all of the files in the current directory, and it does all that agentically. I guess it maybe comes back to the question under the question — where did this idea come from? Part of it was we just want to learn how people use agents. We're doing this with the CLI form factor because coding is kind of a natural place where people use agents today, and there's kind of product market fit for this thing. But yeah, it's just sort of this crazy research project. Obviously it's kind of bare bones and simple, but yeah, it's an agent in your terminal.

SWYX: That's how the best stuff starts.


I. Origins of Claude Code

ALESSIO: How did it start? Did you have a master plan to build Claude Code?

BORIS: There's no master plan. When I joined Anthropic, I was experimenting with different ways to use the model kind of in different places. And the way I was doing that was through the public API, the same API that everyone else has access to. And one of the really weird experiments was this Claude that runs in a terminal. And I was using it for kind of weird stuff — I was using it to look at what music I was listening to and react to that, and then screenshot my video player and explain what's happening there and things like this. And this was a pretty quick thing to build and it was pretty fun to play around with.

And then at some point I gave it access to the terminal and the ability to code, and suddenly it just felt very useful — I was using this thing every day. It kind of expanded from there. We gave the core team access and they all started using it every day, which was pretty surprising. And then we gave all the engineers and researchers at Anthropic access and pretty soon everyone was using it every day. And I remember we had this DAU chart for internal users and I was just watching it and it was vertical — like for days — and we're like, "All right, there's something here. We got to give this to external people so everyone else can try this too."

ALESSIO: And were you also working with Boris already, or did this come out and then it started growing and then you're like, "Okay, we need to maybe make this a team so to speak?"

CAT: Yeah, the original team was Boris, Sid, and Ben. And over time, as more people were adopting the tool, we felt like, okay, we really have to invest in supporting it because all our researchers are using it and this is like our one lever to make them really productive. And so at that point I was using Claude Code to build some visualizations. I was analyzing a bunch of data and sometimes it's super useful to spin up a Streamlit and see all the aggregate stats at once. Claude Code made it really really easy to do. So I think I sent Boris a bunch of feedback and at some point Boris was like, "Do you want to just work on this?"

BORIS: It was actually more than that on my side. You were sending all this feedback and at the same time we were looking for a PM and we were looking at a few people. And then I remember telling the manager, "Hey, I want Cat."

SWYX: I'm sure people are curious — what's the process within Anthropic to graduate one of these projects? So you have kind of a lot of growth, then you get a PM. When did you decide, okay, it's ready to be opened up?


II. Anthropic's Product Philosophy

CAT: Generally at Anthropic we have this product principle of "do the simple thing first" and I think the way we build product is really based on that principle. So you kind of staff things as little as you can and keep things as scrappy as you can because the constraints are actually pretty helpful. And for this case, we wanted to see some signs of product market fit before we scaled it.

SWYX: I imagine MCP also now has a team around it in much the same way. It is now very much officially an Anthropic product. So I'm curious, Cat — how do you view PM-ing something like this?

CAT: I think I am with a pretty light touch. I think Boris and the team are extremely strong product thinkers and for the vast majority of the features on our road map, it's actually just people building the thing that they wish the product had. So very little actually is top-down. I feel like I'm mainly there to clear the path if anything gets in the way and just make sure that we're all good to go from a legal, marketing, etc. perspective.

And then I think in terms of very broad road map or long-term road map, the whole team comes together and just thinks about, "Okay, what do we think models will be really good at in 3 months, and let's just make sure that what we're building is really compatible with the future of what models are capable of."

SWYX: I'd be interested to double click on this. What will models be good at in 3 months? Because I think that's something that people always say to think about when building AI products, but nobody knows how to think about it. Everyone's just like, "It's generically getting better all the time. We're getting AGI soon, so don't bother." How do you calibrate 3 months of progress?

CAT: I think if you look back historically, we tend to ship models every couple of months or so. So 3 months is just an arbitrary number that I picked. I think the direction that we want our models to go in is being able to accomplish more and more complex tasks with as much autonomy as possible. And so this includes things like making sure that the models are able to explore and find the right information that they need to accomplish a task. Making sure that models are thorough in accomplishing every aspect of a task. Making sure the models can compose different tools together effectively. These are directions we care about.

BORIS: Coming back to Code, this kind of approach affected the way that we built Code also, because we know that if we want some product that has very broad product market fit today, we would build a Cursor or a Windsurf or something like this. These are awesome products that so many people use every day. I use them. That's not the product that we want to build. We want to build something that's kind of much earlier on that curve and something that will maybe be a big product a year from now or however much time from now as the model improves. And that's why Code runs in a terminal. It's a lot more bare bones. You have raw access to the model because we didn't spend time building all this kind of nice UI and scaffolding on top of it.


III. What Should Go into Claude Code?

SWYX: When it comes to the harness so to speak and things you want to put around it, there's the maybe prompt optimization. I use Cursor every day. There's a lot going on in Cursor that is beyond my prompt for optimization and whatnot, but I know you recently released compacting context features and all that. How do you decide how thick it needs to be on top of the CLI? And at what point are you deciding between, "Okay, this should be a part of Claude Code versus this is just something for the IDE people to figure out?"

BORIS: There's kind of three layers at which we can build something. Being an AI company, the most natural way to build anything is to just build it into the model and have the model do the behavior. The next layer is probably scaffolding on top — Claude Code itself. And then the layer after that is using Claude Code as a tool in a broader workflow. So for example, a lot of people use Code with tmux to manage a bunch of windows and sessions happening in parallel. We don't need to build all of that in.

Compact — it's sort of this thing that has to live in the middle because it's something that we want to work when you use Code. You shouldn't have to pull in extra tools on top of it. And rewriting memory in this way isn't something the model can do today. So you have to use a tool for it. We tried a bunch of different options for compacting — rewriting old tool calls, truncating old messages and not new messages. And then in the end we actually just did the simplest thing: ask Claude to summarize the previous messages and just return that. And that's it. It's funny — when the model is so good, the simple thing usually works. You don't have to over-engineer it.
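The summarize-and-replace approach Boris describes can be sketched in a few lines. Here is a minimal Python illustration, where `summarize` stands in for a call to the model and the message format and function names are assumptions for the sketch, not Claude Code's actual internals:

```python
# Sketch of the "just ask Claude to summarize" compaction strategy.
# `summarize` stands in for a model call; names and message shape are
# illustrative, not Claude Code's real implementation.
from typing import Callable

def compact(messages: list[dict],
            summarize: Callable[[list[dict]], str],
            keep_recent: int = 5) -> list[dict]:
    """Replace older messages with one summary message, keeping recent turns."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)  # in the real product, this is a Claude call
    summary_msg = {"role": "user",
                   "content": f"Summary of earlier conversation: {summary}"}
    return [summary_msg] + recent
```

The point of the sketch is the one Boris makes: no special memory architecture is needed — old turns are simply replaced by a single summary message and the conversation continues.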


IV. Claude.md and Memory Simplification

SWYX: And then you have the CLAUDE.md file for the more user-driven memories so to speak — kind of like the equivalent of maybe Cursor rules.

BORIS: CLAUDE.md is another example of this idea of "do the simple thing first." We had all these crazy ideas about memory architectures, and there's so much literature about this. There's so many different external products about this, and we wanted to be inspired by all the stuff. But in the end the thing we did is ship the simplest thing — it's a file that has some stuff and it's auto-read into context. And there's now a few versions of this file. You can put it in the root or you can put it in child directories or you can put it in your home directory, and we'll read all of these in kind of different ways. But yeah, simplest thing that could work.
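The layering Boris mentions — a file in your home directory, in the repo root, and in child directories — could look roughly like this. A hedged sketch only; Claude Code's actual lookup order and precedence may differ:

```python
# Sketch of CLAUDE.md layering: a user-level file plus project files from
# the filesystem root down to the working directory, all read into context.
# Ordering and precedence here are an assumption, not Claude Code's spec.
from pathlib import Path

def collect_claude_md(cwd: Path, home: Path) -> list[Path]:
    """Return the CLAUDE.md files that would be read, outermost first."""
    found = []
    user_file = home / "CLAUDE.md"  # user-level memory
    if user_file.is_file():
        found.append(user_file)
    # walk from the filesystem root down to cwd, picking up project files
    for directory in [*reversed(cwd.parents), cwd]:
        candidate = directory / "CLAUDE.md"
        if candidate.is_file() and candidate != user_file:
            found.append(candidate)
    return found
```

Reading outer files first lets notes closer to the code you're editing layer on top of more general ones — one plausible reading of "we'll read all of these in kind of different ways."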


V. Claude Code vs Aider

SWYX: I'm sure you're familiar with Aider which is another thing that people in our Discord loved. And then when Claude Code came out, the same people love Claude Code. Any thoughts on inspiration that you took from it, things you did differently, design principles in which you went a different way?

BORIS: This is actually the moment I got AGI-pilled, and it's related to this. So Aider inspired this internal tool that we used to have at Anthropic called Clyde. So Clyde is like CLI Claude. And that's the predecessor to Claude Code. It's kind of this research tool that's written using Python. It takes like a minute to start up. It's very much written by researchers — it's not a polished product.

And when I first joined Anthropic, I was putting up my first pull request. I hand-wrote this pull request because I didn't know any better. And my boot camp buddy at the time, Adam Wolf, was like, "You know, actually, maybe instead of handwriting it, just ask Clyde to write it." And I was like, "Okay, I guess so. It's an AI lab. Maybe there's some capability I didn't know about." And so I start up this terminal tool. It took like a minute to start up. And I asked Claude, "Hey, here's the description. Can you make a PR for me?" And after a few minutes of chugging along, it made a PR and it worked.

And I was just blown away because I had no idea. I had just no clue that there were tools that could do this kind of thing. I thought that single-line autocomplete was the state of the art before I joined. And then that's the moment where I got AGI-pilled, and that's where Code came from. So yeah, Aider inspired Clyde which inspired Claude Code. Very much big fan of Aider. It's an awesome product.


VI. Parallel Workflows and Unix Utility Philosophy

SWYX: I think people are interested in comparing and contrasting, obviously. People are interested in figuring out how to choose between tools — there's the Cursors of the world, there's the Devins of the world, there's Aiders and there's Claude Code. Where do you place it in the universe of options?

BORIS: We use all these tools in house too. We're big fans of all this stuff. Claude Code is obviously a little different than some of these other tools in that it's a lot more raw. Like I said, there isn't this kind of big beautiful UI on top of it. It's raw access to the model — as raw as it gets. So if you want to use a power tool that lets you access the model directly and use Claude for automating big workloads — for example, if you have a thousand lint violations and you want to start a thousand instances of Claude and have it fix each one and then make a PR — then Claude Code is a pretty good tool.

ALESSIO: It's a tool for power workloads, for power users. The idea of parallel versus single path — the IDE is really focused on what you want to do, versus Claude Code where you kind of more see it as less supervision required. You can spin up a lot of them. Is that the right mental model?

BORIS: Yeah. And there's some people at Anthropic that have been racking up thousands of dollars a day with this kind of automation. Most people don't do anything like that, but you totally could. We think of it as a Unix utility. The same way that you would compose grep or cat with other tools, you can compose Code into workflows.
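The fan-out workload Boris mentions — one Claude instance per lint violation — might be scripted like this. The `claude -p` invocation matches the non-interactive flag discussed later in the conversation; the prompt text and worker count are illustrative assumptions:

```python
# Sketch of the fan-out pattern: one non-interactive `claude -p` run per
# lint violation, with bounded parallelism. The prompt wording and the
# worker count are illustrative, not a recommended recipe.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def build_cmd(violation: str) -> list[str]:
    """Build the argv for one non-interactive Claude Code run."""
    prompt = f"Fix this lint violation, then open a PR: {violation}"
    return ["claude", "-p", prompt]

def run_all(violations: list[str], max_workers: int = 8) -> None:
    """Fan the violations out across a bounded pool of Claude processes."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(lambda v: subprocess.run(build_cmd(v), check=True),
                      violations))
```

Bounding the pool keeps you from launching a thousand processes at once, and since each run costs tokens, the advice later in the conversation applies: try one violation first, then ten, then scale up.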


VII. Cost Considerations and Pricing Model

ALESSIO: The cost thing is interesting. Do people pay internally or do you get it for free?

BORIS: If you work at Anthropic, you can just run this thing as much as you want every day. It's for free internally.

SWYX: I think if everybody had it for free, it would be huge. Because if I think about it, I pay Cursor 20 bucks a month. I use millions and millions of tokens in Cursor that would cost me a lot more in Claude Code. And so a lot of people that I've talked to don't actually understand how much it costs to do these things. They'll do a task and they're like, "Oh, that cost 20 cents. I can't believe I paid that much." How do you think about that, going back to the product side? How much do you think of that being your responsibility to try and make it more efficient versus that's not really what you're trying to do with the tool?

CAT: We really see Claude Code as the tool that gives you the smartest abilities out of the model. We do care about cost insofar as it's very correlated with latency and we want to make sure that this tool is extremely snappy to use and extremely thorough in its work. We want to be very intentional about all the tokens that it produces. I think we can do more to communicate the costs with users. Currently we're seeing costs around $6 per day per active user. So it does come out to a bit higher over the course of a month than Cursor, but I don't think it's out of band, and that's roughly how we're thinking about it.

BORIS: I would add that the way I think about it is it's an ROI question, not a cost question. If you think about an average engineer salary — engineers are very expensive. And if you can make an engineer 50–70% more productive, that's worth a lot. I think that's the way to think about it.

SWYX: So if you're targeting Claude to be the most powerful end of the spectrum as opposed to the less powerful but faster cheaper side, then there's typically a waterfall — you try the faster simple one, that doesn't work, you upgrade and upgrade and finally you hit Claude Code. At least for people who are token-constrained and don't work at Anthropic.


VIII. Key Features Shipped Since Launch

SWYX: How about we recap the brief history of Claude Code between when you launched and now? There have been quite a few ships. How would you highlight the major ones?

CAT: I think a big one that we've gotten a lot of requests for is web fetch. We worked really closely with our legal team to make sure that we shipped as secure of an implementation as possible. So we'll web fetch if a user directly provides a URL — whether that's in their CLAUDE.md or in their message directly, or if a URL is mentioned in one of the previously fetched URLs. And so this way enterprises can feel pretty secure about letting their developers continue to use it.

We shipped a bunch of auto features. Like autocomplete — where you can press tab to complete a file name or file path. Auto compact — so that users feel like they have infinite context since we'll compact behind the scenes. And we also shipped auto accept, because we noticed that a lot of users were like, "Hey, Claude Code can figure it out. I've developed a lot of trust for Claude Code. I want it to just autonomously edit my files, run tests, and then come back to me later." So those are some of the big ones. Vim mode, custom slash commands. People love Vim mode — that was a top request. Memory — the hashtag to remember. Yeah.


IX. Claude Code Writes 80% of Claude Code

SWYX: On the technical side, Paul from Aider always says how much of it was coded by Aider. So the question is how much of it was coded by Claude Code?

BORIS: Pretty high. Probably near 80%, I'd say.

SWYX: That's very high.

BORIS: A lot of human code review though. A lot of human code review. Some of the stuff has to be handwritten and some of the code can be written by Claude. And there's sort of a wisdom in knowing which one to pick and what percent for each kind of task. Usually where we start is Claude writes the code and then if it's not good, then maybe a human will dive in. There's also some stuff where I actually prefer to do it by hand — like intricate data model refactoring or something. I won't leave it to Claude because I have really strong opinions and it's easier to just do it and experiment than it is to explain it to Claude. So yeah, I think that nets out to maybe 80–90% Claude-written code overall.


X. Custom Slash Commands and MCP Integration

ALESSIO: The custom slash command — I had a question. How do you think about custom slash commands, MCPs, how does this all tie together? Is the slash command in Claude Code kind of like an extension of the MCP? Are people building things that should not be MCP but are just kind of self-contained? How should people think about it?

BORIS: Obviously we're big fans of MCP. You can use MCP to do a lot of different things — custom tools, custom commands, all this stuff. But at the same time you shouldn't have to use it. If you just want something really simple and local — essentially a prompt that's been saved — just use local commands for that.

Over time, something that we've been thinking a lot about is how to re-expose things in convenient ways. So for example, let's say you had this local command. Could you re-expose that as an MCP prompt? Because Claude Code is an MCP client and an MCP server. Or let's say you pass in a custom bash tool. Is there a way to re-expose that as an MCP tool? We think generally you shouldn't have to be tied to a particular technology. You should use whatever works for you.

ALESSIO: Because there's like Puppeteer — that's a great thing to use with Claude Code for testing. There's a Puppeteer MCP server, but then people can also write their own slash commands. I'm curious where MCPs are going to end up — maybe each slash command leverages MCPs, but no command itself is an MCP because it ends up being customized.

BORIS: I think for something like Puppeteer, that probably belongs in MCP because there's a few tool calls that go into that. So it's probably nice to encapsulate that in the MCP server. Whereas slash commands are actually just prompts — they're not actually tools. We're thinking about how to expose more customizability options so that people can bring their own tools or turn off some of the tools that Claude Code comes with. But there's also some trickiness because we want to make sure the tools people bring are things that Claude is able to understand and that people don't accidentally inhibit their experience by bringing a tool that is confusing to Claude.

I'll give an example of how this stuff connects for Claude Code. Internally in the GitHub repo, we have this GitHub action that runs. The GitHub action invokes Claude Code with a local slash command. The slash command is "lint." So it just runs a linter using Claude. It checks for a bunch of things that are pretty tricky to do with a traditional linter based on static analysis — for example, it'll check for spelling mistakes, but also checks that code matches comments. It also checks that we use a particular library for network fetches instead of the built-in library. There's a bunch of specific things that are pretty difficult to express just with lint.

In theory, you can go and write a bunch of lint rules for this. Some of it you could cover, some of it you probably couldn't. But honestly, it's much easier to just write one bullet in markdown in a local command and commit that. So what we do is Claude runs through the GitHub action. We invoke it with /project:lint. It'll run the linter, identify any mistakes, make the code changes, and then use the GitHub MCP server to commit the changes back to the PR. And so you can compose these tools together. That's a lot of the way we think about Code — just one tool in an ecosystem that composes nicely without being opinionated about any particular piece.
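A project slash command like the lint one Boris walks through is, per the transcript, just a prompt checked into the repo. This sketch writes one to the conventional `.claude/commands/` location; the rule wording paraphrases Boris's examples, and the exact path convention is an assumption:

```python
# Sketch: a project slash command is just a markdown prompt in the repo.
# The .claude/commands/ path and the rule wording are assumptions based on
# the transcript, not an official recipe.
from pathlib import Path

LINT_RULES = """\
Review the changed files and flag:
- spelling mistakes in comments and strings
- comments that no longer match the code they describe
- network fetches that bypass our internal fetch library
Fix what you find and report anything you could not fix.
"""

def write_lint_command(repo_root: Path) -> Path:
    """Check a semantic-lint prompt into the repo as a slash command."""
    cmd_file = repo_root / ".claude" / "commands" / "lint.md"
    cmd_file.parent.mkdir(parents=True, exist_ok=True)
    cmd_file.write_text(LINT_RULES)
    return cmd_file  # CI would then invoke: claude -p "/project:lint"
```

Adding a new rule is then exactly what Boris describes: write one more bullet in the markdown file and commit it.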


XI. Terminal UX and Technical Stack

SWYX: There's a decompilation of Claude Code out there. It seems like you use Commander.js and React Ink. At some point you're not even building Claude Code — you're kind of just building a general-purpose CLI framework. Do you ever think about this level of configurability being more of a CLI framework or some new form factor?

BORIS: It's definitely been fun to hack on a really awesome CLI because there's not that many of them. We're big fans of Ink, Vadim Demedes' project — we've actually used React Ink for a lot of our projects.

Ink is amazing. It's sort of hacky and janky in a lot of ways — you have React and then the renderer is just translating the React code to ANSI escape codes. And there's all sorts of stuff that just doesn't work at all because ANSI escape codes are like this thing that started to be written in the 1970s and there's no really great spec about it. Every terminal is a little different. So building in this way feels a little bit like building for the browser back in the day, where you had to think about Internet Explorer 6 versus Opera versus Firefox. You have to think about these cross-terminal differences a lot. But yeah, big fans of Ink because it helps abstract over that.

We also use Bun. We don't use it in the runtime yet — we use Bun to compile the code together. It makes writing our tests and running tests much faster.


XII. Code Review and Semantic Linting

SWYX: On the review side — the linter part that you mentioned, I think maybe people skipped over it. Going from rule-based linting to semantic linting I think is great and super important. And I think a lot of companies are trying to figure out how to do autonomous PR review, and I've not yet seen one that I actually use. I'm curious how you think about closing the loop or making that better — especially like, what are you supposed to review? Because these PRs get pretty big when you vibe code.

BORIS: We have some experiments where Claude is doing code review internally. We're not super happy with the results yet. So it's not something that we want to open up quite yet. The way we're thinking about it is Claude Code is, like I said before, a primitive. So if you want to use it to build a code review tool, you can do this. If you want to build a security scanning or vulnerability scanning tool, you can do that. If you want to build a semantic linter, you can do that. And hopefully with Code it makes it so that if you want to do this, it's just a few lines of code — and you can just have Claude write that code also, because Claude is really great at writing GitHub actions.

SWYX: Sometimes you let the model decide, sometimes you're like, "This is a destructive action, always ask me." I'm curious if you have any internal heuristics around when to auto-accept and where all this is going.

BORIS: We're spending a lot of time building out the permission system. Robert on our team is leading this work. We think it's really important to give developers the control to say, "Hey, these are the allowed permissions." Generally, this includes stuff like — the model is always allowed to read files or read anything. And then it's up to the user to say, "Hey, it's about to edit files, about to run tests." These are probably the safest three actions. And then there's a long list of other actions that users can either allow-list or deny-list based on regex matches with the action.
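The allow-list/deny-list matching Boris describes could be modeled like this. A sketch only: the action format and rule strings are hypothetical, not Claude Code's actual settings schema:

```python
# Sketch of regex-based permission rules: deny rules win, and anything not
# explicitly allowed is blocked. The "Tool(args)" action format and the
# example patterns are hypothetical, not Claude Code's settings schema.
import re

def is_allowed(action: str, allow: list[str], deny: list[str]) -> bool:
    """Return True if the action matches an allow rule and no deny rule."""
    if any(re.search(pattern, action) for pattern in deny):
        return False
    return any(re.search(pattern, action) for pattern in allow)
```

Because deny rules win, a fairly broad allow rule can still be fenced off from dangerous commands: allow `Bash(git status)` and `Bash(git diff)`, deny anything matching `rm -rf`.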

SWYX: Can writing a file ever be unsafe if you have version control?

BORIS: I think there's a few different aspects of safety to think about. For file editing, it's actually less about safety — although there is still a safety risk. What might happen is let's say the model fetches a URL and then there's a prompt injection attack in the URL, and then the model writes malicious code to disk and you don't realize it. Although there is code review as a separate layer of protection.

But generally, for file writes, the biggest thing is the model might just do the wrong thing. What we find is that if the model is doing something wrong, it's better to identify that earlier and correct it earlier. If you wait for the model to just go down this totally wrong path and then correct it 10 minutes later, you're going to have a bad time. So it's better to identify failures early.

But at the same time, there's some cases where you just want to let the model go. For example, if Claude Code is writing tests for me, I'll just hit shift-tab, enter auto-accept mode, and just let it run the tests and iterate on the tests until they pass. Because I know that's a pretty safe thing to do. And then for some other tools like the bash tool, it's pretty different — because Claude could run rm -rf / and that would suck. So we definitely want people in the loop. The model is trained and aligned not to do that, but these are non-deterministic systems. You still want a human in the loop.

CAT: I think generally the way that things are trending is less time between human input.


XIII. Non-Interactive Mode and Automation

CAT: One thing to mention is we do have a non-interactive mode — which is how we use Claude in these situations to automate Claude Code. A lot of the companies using Claude Code actually use this non-interactive mode. They'll for example say, "Hey, I have hundreds of thousands of tests in my repo. Some of them are out of date, some of them are flaky," and they'll send Claude Code to look at each of these tests and decide, "How can I update any of them? Should I deprecate some of them? How do I increase our code coverage?" So that's been a really cool way that people are non-interactively using Claude Code.

SWYX: What are the best practices here? Because when it's non-interactive, it could run forever and you're not necessarily reviewing the output of everything.

BORIS: For folks that haven't used it — non-interactive mode is just claude -p and then you pass in the prompt in quotes. That's all it is — it's just the -p flag. Generally, it's best for tasks that are read-only. That's the place where it works really well and you don't have to think about permissions and running forever and things like that.

For example, a linter that runs and doesn't fix any issues. Or for example, we're working on a thing where we use Claude with -p to generate the changelog for Claude. So for every PR, it just looks over the commit history and decides, "Okay, this makes it into the changelog, this doesn't."

For tasks where you want to write things, we usually recommend passing in a very specific set of permissions on the command line. So you can pass in --allowed-tools and then allow a specific tool — for example, not just bash, but for example git status or git diff. You give it a set of tools that it can use, or the edit tool. And --allowed-tools lets you, instead of the permission prompt (because you don't have that in non-interactive mode), pre-accept tool uses.
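Putting the flags together, a scoped non-interactive run might be assembled like this. The `-p` and `--allowed-tools` flags come from the transcript; the tool-pattern syntax and prompt are illustrative assumptions:

```python
# Sketch of a scoped non-interactive invocation: `-p` and `--allowed-tools`
# are the flags described in the transcript; the tool-pattern syntax and
# the prompt text here are illustrative assumptions.
def build_readonly_cmd(prompt: str, tools: list[str]) -> list[str]:
    """Build argv for a claude -p run restricted to pre-approved tools."""
    return ["claude", "-p", prompt, "--allowed-tools", ",".join(tools)]
```

Pass the result to `subprocess.run` (or a shell). Because the tools are pre-accepted on the command line, there is no interactive permission prompt to answer — which is the whole point in CI or batch jobs.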

We'd also definitely recommend that you start small — test it on one test, make sure that it has reasonable behavior, iterate on your prompt, then scale it up to 10, make sure that it succeeds, or if it fails, analyze what the patterns of failures are, and gradually scale up from there. So definitely don't kick off a run to fix 100,000 tests.


XIV. Engineering Productivity Metrics

BORIS: Claude Code also makes a lot of quality work become a lot easier. For example, I have not manually written a unit test in many months. And we have a lot of unit tests. It's because Claude writes all the tests. Before, I felt like a jerk if on someone's PR I'm like, "Hey, can you write a test?" because they kind of know they should probably write a test. And somewhere in their head they made that trade-off where they just want to ship faster. And so you always feel like a jerk for asking. But now I always ask because Claude can just write the test. There's no human work. You just ask Claude to do it. And with writing tests becoming easier and with writing lint rules becoming easier, it's actually much easier to have high-quality code than it was before.

ALESSIO: What are the metrics that you believe in? A lot of people don't believe in 100% code coverage because sometimes that is optimizing for the wrong thing. What still makes sense?

BORIS: I think it's very engineering team dependent. I wish there was a one-size-fits-all answer. For some teams, test coverage is extremely important. For other teams, type coverage is very important — especially if you're working in a very strictly typed language and avoiding nulls in JavaScript and Python. Complexity gets a lot of flak, but it's still honestly a pretty good metric just because there isn't anything better in terms of ways to measure code quality.

ALESSIO: And productivity — obviously not lines of code, but do you care about measuring productivity?

BORIS: Lines of code honestly isn't terrible. It has downsides, but it's really hard to make anything better. It's the least terrible. The two that we're really trying to nail down are: one, decrease in cycle time — how much faster are your features shipping because you're using these tools. So that might be something like the time between first commit and when your PR is merged. It's very tricky to get right but one of the ones that we're targeting.

The other one that we want to measure more rigorously is the number of features that you wouldn't have otherwise built. We have a lot of channels where we get customer feedback, and one of the patterns that we've seen with Claude Code is that sometimes customer support or customer success will post, "Hey, this app has this bug," and then sometimes 10 minutes later one of the engineers on that team will be like, "Claude Code made a fix for it." And a lot of those situations — when you ping them and you're like, "Hey, that was really cool" — they were like, "Yeah, without Claude Code I probably wouldn't have done that because it would have been too much of a divergence from what I was otherwise going to do. It would have just ended up in this long backlog."

That was the other AGI-pilled moment for me. There was a really early version of Claude Code many, many months ago. And this one engineer at Anthropic, Jeremy, built a bot that looked through a particular feedback channel on Slack and hooked it up to Code to have Code automatically put up PRs with fixes to all the stuff. It couldn't fix every issue, but it fixed a lot of the issues. This was early on, so I don't remember the exact number, but it was surprisingly high, to the point where I became a believer in this kind of workflow. And I wasn't before.


XV. Balancing Feature Creation and Maintenance

SWYX: But isn't PM a little scary too, in a way where you can build too many things? It's almost like maybe you shouldn't build that many things. I think that's what I'm struggling with the most — it gives you the ability to create, create, create, but at some point you've got to support it all. This is the Jurassic Park problem: "Your scientists were so preoccupied with whether they could..." So how do you make decisions now that the cost of actually implementing the thing is going down? As a PM, how do you decide what is actually worth doing?

CAT: We definitely still hold a very high bar for net new features. Most of the fixes were like, "Hey, this functionality is broken," or "There's a weird edge case that we hadn't addressed yet." So it was very much smoothing out the rough edges as opposed to building something completely net new. For net new features, we hold a pretty high bar that it's very intuitive to use. The new user experience is minimal. It's just obvious that it works.

We sometimes actually use Claude Code to prototype instead of using docs. So you'll have prototypes that you can play around with and that often gives us a faster feel for, "Hey, is this feature ready yet? Is this the right abstraction? Is this the right interaction pattern?" So it gets us faster to feeling really confident about a feature, but it doesn't circumvent the process of making sure that the feature definitely fits in the product vision.

BORIS: It's interesting how as it gets easier to build stuff, it changes the way that I write software. Before I would write a big design doc and think about a problem for a long time before building it. Now I'll just ask Claude Code to prototype three versions of it and I'll try the feature and see which one I like better. And that informs me much better and much faster than a doc would have.


XVI. Memory and the Future of Context

SWYX: I'm interested in memory. We talked about auto-compact and memory using hashtags and stuff. My impression is you like to say the simplest approach works, but I'm curious if you've seen any other requests that are interesting or internal hacks of memory that people have explored.

BORIS: There's a bunch of different approaches to memory. Most of them use external stores: vector databases like Chroma, key-value stores, or knowledge graphs.

SWYX: Are you a believer in knowledge graphs for this stuff?

BORIS: If you talked to me before I joined Anthropic and this team, I would have said yeah, definitely. But now actually I feel everything is the model — that's the thing that wins in the end. As the model gets better, it subsumes everything else. At some point the model will encode its own knowledge graph, its own KV store, if you just give it the right tools.

SWYX: In some ways, are we just coping for lack of context length? Are we doing things for memory now that if we had a 100-million-token context window we wouldn't care about?

BORIS: I would love to have 100-million-token context for sure. But here's a question — if you took all the world's knowledge and put it in your brain, is that something that you would want to do, or would you still want to record knowledge externally?

CAT: We've been seeing people play around with memory in quite interesting ways — like having Claude write a logbook of all the actions that it's done so that over time Claude develops this understanding of what your team does, what you do within your team, what your goals are, how you like to approach work. We would love to figure out what the most generalized version of this is so that we can share broadly. I think with things like Claude Code, it's actually less work to implement the feature and a lot of work to tune these features to make sure that they work well for general audiences across a broad range of use cases.
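The logbook pattern Cat mentions can be sketched as a project instruction. CLAUDE.md is the file Claude Code reads for project-level memory; the wording and the `LOGBOOK.md` file name below are illustrative, not a built-in feature.

```markdown
## Session logbook

- After finishing each task, append a dated entry to `LOGBOOK.md`:
  what was asked, what changed (files touched, commands run), and
  anything learned about this codebase or the team's conventions.
- Before starting a new task, skim the most recent `LOGBOOK.md`
  entries so earlier context carries over.
```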

BORIS: A related problem to memory is how you get stuff into context. Originally we tried very early versions of Claude that used RAG — we were using Voyage, just off-the-shelf RAG, and that worked pretty well. We tried a few different versions of it. There was RAG and then we tried a few different kinds of search tools, and eventually we landed on just agentic search as the way to do stuff.

There were two big reasons, maybe three big reasons. One is it outperformed everything by a lot. And this was surprising. Just using regular code searching — glob, grep, just regular code search. The second was there's this whole indexing step that you have to do for RAG and there's a lot of complexity that comes with that because the code drifts out of sync and then there's security issues because this index has to live somewhere. So it's just a lot of liability. Agentic search just sidesteps all that. Essentially, at the cost of latency and tokens, you now have really awesome search without security downsides.
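The "regular code search" Boris mentions can be sketched with ordinary shell tools: roughly the glob-then-grep loop an agentic search performs, with no index to build or keep in sync. The demo directory and file contents below are fabricated for illustration.

```shell
# Roughly the glob-then-grep loop of agentic search (fabricated demo repo).
demo=$(mktemp -d)
mkdir -p "$demo/src"
cat > "$demo/src/auth.ts" <<'EOF'
export function validateToken(token: string): boolean {
  return token.length > 0;
}
EOF

# Step 1: glob for candidate files by name pattern
find "$demo" -name '*.ts'

# Step 2: grep the candidates for the symbol of interest
grep -rn 'validateToken' "$demo/src"
```

The agent repeats steps like these, reading whatever the matches turn up, which is where the extra latency and tokens Boris mentions come from.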


XVII. Sandboxing, Branching, and Agent Planning

SWYX: Your takes on sandboxing environments, branching, rewindability?

BORIS: I could talk for hours about this. Starting with sandboxing: ideally, the thing that we want is to always run code in a Docker container and then it has freedom. You can snapshot, rewind, do all this stuff. Unfortunately, working with a Docker container for everything is just a lot of work and most people aren't going to do it. And so we want some way to simulate some of these things without having to go full container.

There's some stuff you can do today. For example, something I'll do sometimes is if I have a planning question or a research type question, I'll ask Claude to investigate a few paths in parallel. You can do this today if you just ask it. Say, "I want to refactor X to do Y. Can you research three separate ideas for how to do it? Do it in parallel. Use three agents to do it." In the UI, when you see a "task" — that's actually a sub-Claude, a sub-agent that does this. Usually when I do something hairy, I'll ask it to investigate three times or five times or however many times in parallel, and then Claude will pick the best option and summarize that for you.

SWYX: But how does Claude pick the best option? Don't you want to choose?

BORIS: I think it depends on the problem. You can also ask Claude to present the options to you.

SWYX: How do you observe Claude Code failing?

CAT: There's definitely a lot of room for improvement in the models, which I think is very exciting. Most of our research team actually uses Claude Code day-to-day, and so it's been a great way for them to be very hands-on and experience the model failures, which makes it a lot easier for us to target these in model training and actually provide better models not just for Claude Code but for all of our coding customers.

One of the things about the latest Sonnet 3.7 is it's a very persistent model. It's very motivated to accomplish the user's goal, but it sometimes takes the user's goal very literally and doesn't always fulfill the implied parts of the request because it's so narrowed in on "I must get X done." And so we're trying to figure out, "Okay, how do we give it a bit more common sense so that it knows the line between trying very hard and no, the user definitely doesn't want that?"

BORIS: The classic example is like, "Hey, get this test to pass," and then five minutes later it's like, "All right, well, I hardcoded everything. The test passes." And I'm like, "No, that's not what I wanted." But that's the thing — it only gets better from here. These use cases work sometimes today, not every time. The model sometimes tries too hard, but it only gets better.

Context is a big one where if you have a very long conversation and you compact a few times, maybe some of your original intent isn't as strongly present as it was when you first started. So maybe the model forgets some of what you originally told it to do. We're really excited about things like larger effective context windows so that you can have these gnarly, really long, hundreds-of-thousands-of-tokens-long tasks and make sure that Claude Code is on track the whole way through. That would be a huge lift not just for Claude Code but for every coding company.

BORIS: What we find is that Claude Code doesn't have that much between-session memory or caching. It rebuilds the whole state from scratch every single time, so as to make the minimum assumptions about what might have changed in between. Our best advice now for people who want to resume across sessions is to tell Claude, "Hey, write down the state of this session into this text doc" — probably not the CLAUDE.md but a different doc. And in your new session, tell Claude to read from that doc. But we plan to build in more native ways to handle this specific workflow.
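Boris's handoff advice amounts to two prompts. The `SESSION_STATE.md` file name here is chosen for this sketch, not a Claude Code convention.

```text
End of session 1:
  "Write the current state of this session (the goal, what's done,
   what's left, and any gotchas) to SESSION_STATE.md."

Start of session 2:
  "Read SESSION_STATE.md and continue the task from where it left off."
```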

There's a lot of different cases. Sometimes you don't want Claude to have the context — it's sort of like Git. Sometimes I just want a fresh branch that doesn't have any history. But sometimes I've been working on a PR for a while and I need all that historical context. So we kind of want to support all these cases.

One thing other people have done is ask Claude to commit after every change. You can just put that in the CLAUDE.md. Some people are asking Claude to create a worktree every time so that they could have a few Claudes running in parallel in the same repo. From our point of view, we want to support all of this. Claude Code is a primitive and it doesn't matter what your workflow is — it should just fit in.
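The worktree setup Boris describes, one checkout per parallel Claude session, can be sketched with plain git. The repo path and branch names below are made up for the demo.

```shell
# One git worktree per parallel Claude session: each session gets its
# own working directory, all sharing the same repository history.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "base"

# Two extra worktrees, each on its own branch, for two parallel sessions
git worktree add "$repo-a" -b session-a
git worktree add "$repo-b" -b session-b
git worktree list
```

Each worktree can then host its own Claude session without the checkouts stepping on each other, which is the workflow described above.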


XVIII. Future Roadmap

SWYX: You obviously do not have a separate Claude Code subscription. What's the road map? Is this just going to be a research preview for much longer? Are you going to turn it into an actual product? Is there going to be Claude Code Enterprise?

CAT: We have a permanent team on Claude Code. We're growing the team. We're really excited to support Claude Code in the long run. In terms of subscription, it's something that we've talked about. It depends a lot on whether or not most users would prefer that over pay-as-you-go. So far, pay-as-you-go has made it really easy for people to start experiencing the product because there's no upfront commitment. And it also makes a lot more sense with a more autonomous world in which people are scripting Claude Code a lot more. But we also hear the concern around, "Hey, I want more price predictability if this is going to be my go-to tool." So we're very much still figuring that out.

For enterprises, given that Claude Code is very much a productivity multiplier for ICs and most ICs can adopt it directly, we've been supporting enterprises as they have questions around security and productivity monitoring.

ALESSIO: Do you have a credible number for the productivity improvement? Like for people not at Anthropic — are we talking 30%? Some number would help justify things.

BORIS: We're working on getting this. Anecdotally, for me it's probably 2x my productivity. I'm an engineer that codes all day every day. For me it's probably 2x. I think there's some engineers at Anthropic where it's probably 10x their productivity. And then there's some people that haven't really figured out how to use it yet and they just use it to generate commit messages or something — that's maybe 10%. So I think there's probably a big range and we need to study more.

CAT: For reference, sometimes we're in meetings together and sales or compliance or someone is like, "Hey, we really need X feature." And then Boris will ask a few questions to understand the specs, and then 10 minutes later he's like, "All right, well, it's built. I'm going to merge it later. Anything else?" So it definitely feels far different than any other PM role I've had.

BORIS: Megan, the designer on our team — she is not a coder but she's putting up pull requests. She uses Code to do it. She designs the UI and she's landing PRs to our console product. It's not even just building on Claude Code — it's building across our product suite in our monorepo. Similarly, our data scientist uses Claude Code to write BigQuery queries. And there was a finance person who came up to me the other day and was like, "Hey, I've been using Claude Code." And I was like, "What? How did you even get it installed? You know how to use Git?" And they're like, "Yeah, I figured it out."

They take their data, put it in a CSV, and then they cat the CSV, pipe it into Code, and then ask it questions about the CSV. They've been using it for that.
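That workflow is just standard shell plumbing. In the sketch below, the CSV contents are invented, and the `claude` invocation is shown as a comment because it needs the CLI installed and authenticated; `-p` is Claude Code's non-interactive print mode, but check `claude --help` for your installed version.

```shell
# Build a small CSV like the one described (fabricated data)
csv=$(mktemp)
cat > "$csv" <<'EOF'
department,month,spend_usd
eng,2025-01,120000
eng,2025-02,135000
sales,2025-01,80000
EOF

# The workflow from the transcript: cat the CSV, pipe it into Claude
# Code, and ask a question about it. Requires the claude CLI:
#   cat "$csv" | claude -p "Which department's spend grew month over month?"

cat "$csv"
```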

SWYX: I know that there's a broad interest in people forking or customizing Claude Code. We have to ask — why is it not open source?

BORIS: We are investigating. So it's not yet. There's a lot of trade-offs that go into it. On one side, our team is really small and we're really excited for open source contributions if it was open source. But it's a lot of work to maintain everything. I maintain a lot of open source stuff and a lot of other people on the team do too, and it's just a lot of work — it's a full-time job managing contributions.

Generally our approach is — all the secret sauce, it's all in the model. And this is the thinnest possible wrapper over the model. We literally could not build anything more minimal. This is the most minimal thing. So there's just not that much in it.


XIX. Why Anthropic Excels at Developer Tools

SWYX: Why is Anthropic doing so well with developers? It seems like there's no centralized strategy — every time I talk to Anthropic people they're like, "Oh yeah, we just had this idea and we pushed it and it did well."

CAT: Everyone just wants to build awesome stuff. I think a lot of this trickles down from the model itself being very good at code generation. We're very much building off the backs of an incredible model — that's the only reason why Claude Code is possible. So much of the world is run via software and there's immense demand for great software engineers. And it's also something that you can do almost entirely with just a laptop. So it's an environment that's very suitable for LLMs. It's an area where we feel like you can unlock a lot of economic value by being very good at it.

BORIS: One anecdote that might be interesting — the night before the Code launch, we were going through to burn down the last few issues. The team was up pretty late. One thing that was bugging me for a while is we had this markdown rendering that we were using. The markdown rendering in Claude today is beautiful — really nice rendering in the terminal with bold, headings, and spacing. But we tried a bunch of off-the-shelf libraries — two or three or four different libraries — and nothing was quite perfect. Sometimes the spacing was off between a paragraph and a list, or the text wrapping wasn't quite correct, or the colors weren't perfect.

So the night before the release, at like 10 p.m., I'm like, "All right, I'm going to do this." I just asked Claude to write a markdown parser for me and it wrote it. It wasn't quite zero-shot, but after maybe one or two prompts, it got it. And that's the markdown parser that's in Code today. And that's the reason that markdown looks so beautiful.

CAT: It's interesting what the new bar is for implementing features — like this exact example where there's libraries out there that you normally reach for, that you find some dissatisfaction with, for literally whatever reason. You could just spin up an alternative and go off of that.

BORIS: AI has changed so much in the last year. A feature you might not have built before, or you might have used a library — now you can just do it yourself. The cost of writing code is going down and productivity is going up, and we just have not internalized what that really means yet. But I expect that a lot more people are going to start doing things like this — writing your own libraries or just shipping every feature.

SWYX: Has Claude Code been rewritten many times?

CAT: Boris and the team have rewritten this like five times — probably every three or four weeks. All the pieces keep getting swapped out. It's like a ship of Theseus: every piece keeps getting swapped out because Claude is so good at writing its own code. Most of the changes are to make things simpler — to share interfaces across different components. We just want to make sure that the context given to the model is in the purest form and that the harness doesn't interfere with the user's intent. A lot of that is just removing things that could get in the way or that could confuse the model.

On the UX side, something that's been pretty tricky — and the reason we have a designer working on a terminal app — is that it's actually really hard to design for a terminal. There's not a lot of literature on this; designing modern terminal UIs is sort of new territory. There's a lot of really old terminal UIs that use curses and very sophisticated UI systems, but they all feel really antiquated by the UI standards of today. So it's taken a lot of work to figure out how exactly you make the app feel fresh and modern and intuitive in a terminal. And we've had to come up with a lot of that design language ourselves.

ALESSIO: Who do you want to hire?

BORIS: We don't have a particular profile. If you feel really passionate about coding and about the space, if you're interested in learning how models work and how terminals work and how all these technologies are involved — hit us up. Always happy to chat.

SWYX: Awesome. Well, thank you for coming on. This was fun.

BORIS: Thank you. Thanks for having us. This was fun.


Transcript source: "Latent Space" podcast — Claude Code: Anthropic's CLI Agent. Formatted for readability.