Author: Claude AI, written with supervision, prompting, and editing by HocTro
A plain-English breakdown of Boris Cherny's interview on Lenny's Podcast, 2026
Who Is Boris Cherny, and Why Should You Care?
Boris Cherny is not a household name — yet. But if you have ever used a smartphone app, logged into a website, or tapped a payment button at a café, there is a decent chance that the software powering that experience was partially written by a tool he created. Boris is the head of Claude Code at Anthropic, the AI company behind the Claude AI assistant. Claude Code is a software tool that allows artificial intelligence to write computer programs, debug errors, build websites, and automate just about any task you can do on a computer — all on its own, with minimal human help.
In a wide-ranging interview on Lenny Rachitsky's popular business podcast in 2026, Boris sat down to reflect on the first year of Claude Code's existence and what it means for the future of work. The conversation was dense with technical language and insider jargon. This article is here to cut through all of that and explain, in plain English, what Boris actually said and why it matters to you — whether you are a software engineer, a business owner, a student, or someone who has simply wondered what all this AI fuss is about.
One Year Ago, Boris Posted Something That Got Two Likes
The origin story of Claude Code is surprisingly humble. About a year before the interview, Boris spent a month experimenting with early versions of Anthropic's AI — basically, pushing it to see what it could do beyond just answering questions. He built a simple tool that ran inside a computer's terminal (think of that old black screen with blinking text you see in hacker movies) and gave the AI a basic ability to run commands on his computer.
The first demo he posted internally at Anthropic got exactly two likes. Nobody was impressed. People at the company thought that any serious coding tool needed a fancy visual interface, not a bare-bones text window. Boris himself was not entirely sure the thing was going to be useful. He kept working on it anyway, spending nights and weekends refining it, driven by a gut feeling that he was on to something important — even if he could not quite articulate what.
When Claude Code was eventually released publicly, it was not an immediate hit either. It found enthusiastic early adopters, but it took months before the broader tech world understood what it was for. Part of the strangeness was the format itself: a terminal-based tool felt alien to many modern developers who were used to sleek, colorful environments. But this is precisely what made it powerful. By staying lightweight and flexible, Claude Code was able to keep pace with the AI models underneath it as they improved at an astonishing rate.
The Numbers Are Hard to Believe
By the time of this interview, the numbers Boris described bordered on the unbelievable. A research report from a firm called SemiAnalysis found that four percent of all code commits on GitHub — the platform where developers around the world publish and share their software — were being authored by Claude Code. For context, GitHub hosts hundreds of millions of software projects. Four percent of that is an enormous number, and the researchers predicted it could climb to twenty percent by the end of 2026.
Spotify announced that their best developers had not written a single line of code by hand since December — AI was doing it all. Boris himself stated flatly that one hundred percent of his code had been written by Claude Code since November, and that he had not edited a single line by hand since then. He ships ten to thirty code changes a day, all of them written by AI. Even at Anthropic, Claude now automatically reviews every single pull request — every batch of code changes — before a human ever looks at it.
What makes these numbers even more striking is the pace. Claude Code's daily active users doubled in just the past month leading up to the interview. The growth, Boris said, is not just going up — it is accelerating. The tool is used everywhere from tiny one-person startups to the largest technology companies on the planet.
What Does "AI Writing Code" Actually Mean?
If you have never written a computer program, it might be hard to picture what it means for an AI to "write code." Think of it this way. When someone builds a website, they are essentially writing a very long, very precise set of instructions for a computer to follow. These instructions are written in specialized languages like Python, JavaScript, or Go — languages that computers understand but that take humans years to learn properly. A programmer's job involves figuring out what those instructions should say, writing them out, testing whether they work, fixing the ones that do not, and repeating this process hundreds of times a day.
What Claude Code does is handle most of this mechanical labor automatically. You describe what you want in plain language — "I need a button on this page that sends an email when clicked" — and Claude Code figures out what instructions to write, writes them, tests them, spots the errors, and corrects them, all without you touching a keyboard. Boris compared the human engineer's new role to that of an architect rather than a construction worker: you decide what to build and whether it looks right, but you are not the one hammering the nails.
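To make "instructions" concrete, here is a tiny sketch in Python (one of the languages the article mentions) of the kind of code an AI tool might produce for the email-button example above. Every name and detail here is invented for illustration; it is not actual Claude Code output, just a taste of what precise, computer-readable instructions look like.

```python
# A made-up illustration of generated "instructions" — all names and
# logic here are hypothetical, not real Claude Code output.

def is_valid_email(address: str) -> bool:
    """Check, very roughly, that an address looks like an email."""
    return "@" in address and "." in address.split("@")[-1]

def handle_button_click(address: str) -> str:
    """Decide what message to show after the user clicks 'Send'."""
    if not is_valid_email(address):
        return "Please enter a valid email address."
    # In a real program this line would hand the address off to an
    # email-sending service; here we simply report success.
    return f"Email sent to {address}!"

print(handle_button_click("reader@example.com"))
print(handle_button_click("not-an-email"))
```

Even this toy version shows why the work used to take years to learn: every rule, every edge case, every error message has to be spelled out exactly. That is the mechanical labor the article says the AI now handles.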
The shift is genuine enough that Boris and other senior engineers at Anthropic no longer look at most of the code their AI produces. They trust it. They review it at a high level, they check that it works correctly, but they are not reading it line by line the way engineers did even two years ago.
The Printing Press Analogy — and Why It Changes Everything
Boris reached for a historical comparison to make sense of what is happening. In the mid-1400s, before Johannes Gutenberg invented the printing press, fewer than one percent of people in Europe could read or write. Reading and writing were the job of a tiny, specialized class of people called scribes — educated professionals employed by kings and lords to handle all written communication. The idea that ordinary people would one day read books, write letters, or keep personal diaries would have seemed absurd.
Then the printing press arrived. Within fifty years, more printed material was created than in the previous thousand years combined. The cost of producing a book dropped by roughly one hundred times. And over the following two centuries, global literacy climbed from below one percent to around seventy percent. An entirely new world became possible — the Renaissance, the Reformation, the Scientific Revolution, the spread of democracy — because knowledge could suddenly move freely between people.
Boris sees AI coding tools following the same arc. Today, writing software requires years of training, just like literacy once did. It is a skill held by a small professional class — roughly thirty million software engineers worldwide, out of eight billion people. Claude Code and tools like it are beginning to change that. Boris imagines a world a few years from now where anyone — a farmer, a nurse, a teacher, a ten-year-old with an idea — can describe what they want to build and have AI create it for them. What that unlocks, he said, is as impossible to predict as the Renaissance would have been to someone living in the 1400s.
That said, Boris was honest about the disruption this will cause in the short term. Just as the printing press put many scribes out of work, AI coding tools will transform or eliminate many current software engineering roles. This is a conversation, he said, that society needs to have openly and urgently — and it should not be left to AI companies alone to figure out.
Coding Is "Largely Solved" — What Comes Next?
One of the more jaw-dropping statements Boris made was that coding, for the purposes of the work he does, is "largely solved." The AI can already handle it well enough that human engineers at Anthropic have essentially stopped writing code by hand. So what is the next frontier?
Boris pointed to a new Anthropic product called Cowork. Where Claude Code was built for software engineers working in a technical environment, Cowork is designed for everyone else. It can use your web browser, read your emails, message your colleagues on Slack, fill out government forms, organize spreadsheets, and handle the hundreds of small administrative tasks that eat up working hours every day. Boris described using it to pay a parking ticket, cancel subscriptions, and manage all of his team's weekly project updates — tasks he previously had to handle by hand.
The technical term for what Cowork does is "agentic AI." An agent is an AI that does not just answer questions but actually takes actions in the world on your behalf. It can log into websites, click buttons, run searches, and complete multi-step tasks from start to finish. Boris observed that most people have used conversational AI — chatbots that answer questions — but have not yet experienced an agent that actually does things for them. Cowork is Anthropic's attempt to bring that experience to everyone, not just engineers.
For product managers, designers, data scientists, and really anyone who works on a computer, Boris predicted that the same transformation software engineers went through in the past year is coming for them soon.
You Do Not Need to Know How to Code to Use This
One of the most reassuring things Boris said during the interview was directed at non-technical people. You do not need to know how to program. You do not need to understand what a terminal is, how to install software packages, or what a "pull request" means. Lenny, the podcast host, shared his own experience of rediscovering the joy of building small software projects after years away from engineering, simply by describing what he wanted and letting Claude figure out the rest. When he got stuck, he asked for help, and received clear, step-by-step guidance.
Boris shared the story of a data scientist at Anthropic named Brendan who figured out how to use Claude Code despite having no engineering background. Brendan manually installed the software, opened a terminal for the first time, and started using AI to analyze complex data — something that previously would have required either a developer's help or months of learning. Within a week, every data scientist at Anthropic was doing the same thing.
This is the principle Boris calls "latent demand." When you notice people jumping through hoops to use a product in ways it was never designed for — because it is the closest thing available to what they actually need — that is a signal to build something better. Cowork exists because many people were already using the engineering-focused Claude Code to do completely non-technical things: grow tomatoes, analyze medical images, recover lost wedding photos from a corrupted hard drive.
Safety: The Reason Boris Went Back to Anthropic
Earlier in the interview, Lenny asked about a curious moment six months prior when Boris briefly left Anthropic to join Cursor, a competing AI coding company, only to return to Anthropic two weeks later. Boris explained that the decision came down to mission. Anthropic, he said, is a company where if you stop any employee in the hallway and ask why they work there, the answer will always be some version of "safety." The company exists specifically to try to ensure that AI develops in a way that is good for humanity rather than harmful to it.
Boris described the three layers of safety work Anthropic does. The deepest layer is called mechanistic interpretability — essentially, studying the AI's "neurons" (the mathematical structures inside the model) to understand what is happening inside its mind when it thinks. Researchers at Anthropic, including a scientist named Chris Olah who pioneered this field, can now observe which concepts are active in the AI at any moment, including concepts related to deception or harmful intent. The second layer is controlled testing — putting the AI in simulated situations and seeing whether it behaves safely. The third layer is releasing the product to real users in the real world and observing how it behaves in situations no laboratory test could have anticipated.
Claude Code was used internally at Anthropic for four or five months before it was released to the public, precisely because nobody was entirely sure how an AI that could run commands on a computer would behave in the wild. Safety, Boris insisted, is not a checkbox. It is an ongoing process that requires watching the model in real-world conditions and continuously feeding those observations back into how the model is trained.
Tips Boris Gave for Using These Tools Well
For people who want to start using Claude Code or Cowork effectively, Boris offered several practical suggestions. The most important one was simply to use the most capable model available — currently Opus 4.6 — even though it costs more per query. Because a smarter model makes fewer mistakes and requires less back-and-forth correction, it often ends up being cheaper overall than using a less capable model that gets confused and needs help more often.
His second tip was to use "plan mode" before asking the AI to actually do anything. In plain terms, this means having a conversation with the AI about what you want to accomplish before it starts doing it, so you can catch misunderstandings early and agree on the approach. This is the difference between briefing a contractor thoroughly before they start demolishing a wall versus watching them swing a sledgehammer and hoping for the best.
His broader advice for surviving and thriving in this era was to be curious, to experiment without fear, and to become more of a generalist. The engineers, product managers, and designers who are thriving on his team are people who cross discipline boundaries — engineers with good design instincts, designers who write code, product managers who understand both the technical and the business dimensions of a problem. In a world where AI handles more and more of the specialized technical work, the humans who can see the whole picture are more valuable than ever.
The Human Behind the Technology
It would be easy to walk away from an interview like this thinking Boris Cherny is a purely technical creature — a man who thinks in code and measures the world in productivity metrics. But the conversation revealed someone more interesting than that. He spent time living in rural Japan, where he made friends with neighbors by trading homemade pickles. He still makes miso paste by hand, a process that takes months or years and requires a patience entirely at odds with the breakneck pace of Silicon Valley. He reads science fiction obsessively, drawn especially to stories about the moment when technological change becomes so fast that ordinary human experience can barely keep up with it.
When Lenny asked what Boris plans to do after AGI — after artificial general intelligence arrives, if and when machines can do everything humans can do intellectually — Boris did not talk about new products or bigger companies. He talked about miso. He would probably go back to a slower life, he said, making fermented foods that take years to mature, thinking in long time scales, watching the seasons change.
Both Boris and Lenny, it emerged during the conversation, were born in Odessa, Ukraine — a discovery that caught them both off guard and produced one of the warmer moments in an otherwise idea-dense interview. Boris's family emigrated in 1995; Lenny's left in 1988. Both felt lucky, they agreed, to have grown up with the freedoms they had. In households like theirs, Boris noted, the traditional family toast was still raised with vodka — to America, to the opportunities that had made everything possible.
What This Means for You
You do not need to be a software engineer to feel the impact of what Boris and his team are building. If you manage a team and spend hours a week on administrative coordination, tools like Cowork are already capable of handling much of that for you. If you have a business idea but have always been stopped by the cost or complexity of building software, the barrier is lower than it has ever been. If you are a student wondering whether to study computer science, the honest answer is that the value of memorizing programming syntax is declining — but the value of being able to think clearly about problems, communicate precisely, and understand what you want a system to do is going up.
The printing press did not make knowledge worthless. It made knowledge accessible to everyone and created entirely new kinds of work, new industries, and new human possibilities that nobody in the 1400s could have imagined. Boris believes we are at the same inflection point today. The scribes had to adapt or be left behind. So did the scribes' employers. So did everyone who depended on the old system of how knowledge moved through the world.
The same is true now. The question is not whether to engage with these tools. The question is how to engage with them thoughtfully, early, and with enough curiosity to discover what they make possible for you specifically. Boris's own advice, stripped to its simplest form, is this: use common sense, stay curious, and when you find a thread worth pulling, pull on it.
Based on Boris Cherny's interview with Lenny Rachitsky on Lenny's Podcast, 2026. Boris Cherny is the creator and head of Claude Code at Anthropic.
