3.28.2026

What Andrej Karpathy Is Really Saying — An Everyday Person's Guide to the AI Future

Author: Claude AI, under the supervision, prompting, and editing of HocTro

Andrej Karpathy is one of the most important voices in artificial intelligence. He co-founded OpenAI, led the self-driving AI team at Tesla, and is now an independent researcher and educator. In a recent podcast appearance on No Priors, he laid out his vision for where AI is heading — and it is dramatic. But much of what he said was wrapped in technical jargon and insider shorthand. This essay unpacks every major point, explains what it actually means, and explores why it matters for people who are not engineers or AI researchers.



Summary

Andrej Karpathy's conversation covers a lot of ground, and much of it sounds like science fiction — but it is not. It is a description of what is happening right now. Here is the plain-English version of his main points:

- AI coding tools got dramatically better around December 2024, to the point where top engineers stopped writing code and started managing AI assistants instead.
- Karpathy built an AI butler for his house that controls his lights, music, heating, pool, and security cameras — all through text messages.
- He argues that most phone apps will eventually be replaced by a single AI assistant that talks to all your devices.
- He set up an AI that improves other AIs overnight while he sleeps, and it found optimizations that he missed after twenty years of experience.
- He thinks AI will change office jobs much faster than physical jobs, because moving information is infinitely easier than moving objects.
- He is cautiously optimistic about employment — cheaper software historically means more jobs, not fewer.
- He believes education will transform from teachers lecturing students to teachers programming AI tutors.
- And he left the big AI labs because he wants to speak freely about what is happening without corporate pressure.

The overall message: the AI revolution is not coming — it is here, it is moving faster than almost anyone realizes, and the people best positioned are those who learn to work with AI rather than compete against it.

I. What Does "AI Psychosis" Mean? — The Moment Everything Changed

Karpathy says he is in a "perpetual state of AI psychosis." That sounds alarming, but what he means is simpler: he is overwhelmed by how fast things are changing and anxious about keeping up. The word "psychosis" is deliberate hyperbole — he is not losing touch with reality, he is losing the ability to map all of reality because there is too much new territory opening up at once.

What happened in December 2024: Before that month, Karpathy wrote most of his own code — about 80% by hand, with AI helping on 20%. Then something changed in the AI models (likely a new generation of coding agents from companies like Anthropic, OpenAI, and others). Almost overnight, the AI became good enough that Karpathy flipped entirely: he now writes essentially zero code himself. He talks to AI assistants, describes what he wants, and they build it.

Why this matters for you: Think of it like the jump from handwriting letters to using email. The first word processors were clunky — you still typed most things yourself, and the computer just helped with formatting. Then email arrived and nobody handwrote letters anymore. Karpathy is describing that kind of transition, but for the entire practice of building software. The people who make your apps, your websites, your banking systems — the way they work changed dramatically in a matter of weeks. Most people outside the tech industry have not noticed yet.

The anxiety factor: Karpathy watches other developers on social media discovering new techniques and feels genuinely nervous about falling behind. If one of the world's foremost AI experts feels this way, it tells you something about the pace of change. The territory of what is possible is expanding faster than any one person can explore it.

II. Coding Agents Explained — Why Programmers Stopped Typing

To understand the rest of the conversation, you need to understand what a "coding agent" is. It is not autocomplete. It is not the AI suggesting the next word as you type. A coding agent is closer to having a junior employee who can follow complex instructions. You describe a feature you want — say, "add a table component to this drawing app" — and the agent goes off on its own for twenty minutes or an hour, reads through the existing code, figures out how it all connects, writes hundreds of lines of new code, tests it, fixes its own mistakes, and comes back with a working result.

The Peter Steinberger workflow: Karpathy describes a developer named Peter Steinberger who has become famous in the AI coding community. Steinberger sits in front of a screen with many AI agents running simultaneously, each working on a different task. He gives agent one a feature to build, agent two a different feature, agent three a research task, agent four a planning task. Each takes about twenty minutes. Steinberger moves between them like a manager in a bullpen, reviewing work and assigning new tasks. He is not writing code — he is orchestrating.

What this means in plain English: The job of a software engineer is shifting from "person who types instructions in a programming language" to "person who manages AI workers." The skill is no longer how fast you type or how well you remember syntax. The skill is how well you can break a big problem into pieces, explain each piece clearly, and review the results. It is more like being a project manager than a typist.

The token economy: Karpathy introduces an idea that recurs throughout the conversation: tokens. In AI, a "token" is roughly a word or a piece of a word. When you use an AI, you are spending tokens — sending tokens in (your instructions) and getting tokens back (the AI's response). Karpathy says the new metric of productivity is your "token throughput" — how many tokens you can put to work on your behalf per hour. He compares it to the old world of physics research, where your productivity was measured by how many of your GPUs (graphics processing units, specialized computer chips) were running experiments at any given time. If your GPUs were idle, you were wasting capacity. Now, if your AI agents are idle, you are wasting capacity.
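
To make the "token throughput" idea concrete, here is a toy sketch in Python. The four-characters-per-token rule of thumb and the sample session are illustrative assumptions of this sketch, not anything from the conversation; real models use subword tokenizers (such as OpenAI's tiktoken library) that count differently.

```python
# Rough illustration of the "token economy": tokens go in (your
# instructions), tokens come back (the AI's response), and your
# productivity is tokens put to work per hour. We use the common
# rule of thumb of roughly 4 characters per token as a crude stand-in
# for a real subword tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def tokens_per_hour(exchanges: list[tuple[str, str]], hours: float) -> float:
    """Total tokens sent and received across (prompt, response) pairs, per hour."""
    total = sum(estimate_tokens(p) + estimate_tokens(r) for p, r in exchanges)
    return total / hours

# A hypothetical half-hour session with a coding agent:
session = [
    ("Add a table component to the drawing app.",
     "Done. I added 312 lines across three files..."),
    ("Now write tests for it.",
     "Created test_table.py with 14 cases..."),
]
print(estimate_tokens("Add a table component to the drawing app."))
print(tokens_per_hour(session, hours=0.5))
```

The point of the metric is the managerial framing: like GPU utilization, it measures how much capacity you are keeping busy, not how much typing you are doing yourself.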

Why the stress: This creates a new kind of work anxiety. If you can always spin up another AI agent and give it another task, then every moment you are not doing that, you are leaving capacity on the table. Karpathy literally says he feels nervous when he has unused subscription capacity. It is the knowledge worker's version of a factory manager watching machines sit idle. The difference is that the "machines" are AI models and the "factory" is your laptop.

III. What Is a "Claw"? — Dobby and the Smart Home Revolution

Karpathy uses the word "claw" (sometimes "claude," referring to Anthropic's AI, sometimes a general term for autonomous agent systems) to describe something more persistent than a coding agent. A claw is an AI that keeps running in the background, has its own memory, its own sandbox environment, and does things on your behalf even when you are not watching. Think of a coding agent as someone you hire for a specific task; a claw is more like a live-in assistant.

The Dobby story, step by step: Karpathy built an AI system he named "Dobby the Elf" (after the Harry Potter character). Here is what happened:

He told the AI: "I think I have Sonos speakers. Can you find them?" The AI scanned his home network — the same WiFi network his phone and computer are on — and found the Sonos system. It discovered that the speakers had no password protection (which is common for smart home devices on a local network). It then searched the internet to figure out how to control the speakers through their programming interface. Within three text exchanges, music was playing in his study.

The AI repeated this for his lighting system, his heating and air conditioning, his window shades, his pool and hot tub, and his security cameras. For security, it set up a system where a camera watches the front of the house, and when it detects movement, it feeds the video to a separate AI that can understand images. That AI then texts Karpathy on WhatsApp: "A FedEx truck just pulled up."
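
The camera setup described above is a simple pipeline: motion triggers a frame capture, a vision-capable model describes the frame, and the description is texted to the owner. Here is a hypothetical sketch of that wiring; every function name is an illustrative stand-in, not a real Sonos, camera, or WhatsApp API.

```python
# Hypothetical sketch of the security-camera pipeline: motion event in,
# text message out. The model call and the messaging call are injected
# as plain functions so the structure is visible.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CameraPipeline:
    describe_frame: Callable[[bytes], str]   # e.g. a vision-language model call
    send_message: Callable[[str], None]      # e.g. a WhatsApp message call
    sent: List[str] = field(default_factory=list)

    def on_motion(self, frame: bytes) -> None:
        """Run one motion event through the pipeline."""
        description = self.describe_frame(frame)
        self.send_message(f"Front camera: {description}")
        self.sent.append(description)

# Wiring it up with stand-in functions for demonstration:
pipeline = CameraPipeline(
    describe_frame=lambda frame: "A FedEx truck just pulled up.",
    send_message=lambda text: print(text),
)
pipeline.on_motion(b"<jpeg bytes>")
```

In a real setup, the two injected functions would be replaced with an actual model call and a messaging integration; the pipeline itself does not change.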

Before Dobby: Karpathy used six different apps to control these systems — one for Sonos, one for lights, one for HVAC, and so on. Each app had its own interface, its own login, its own quirks. After Dobby, he uses zero apps. He just texts Dobby through WhatsApp and says things like "sleepy time" (turn off all lights) or "pool on."

Why this matters for you: Right now, setting up something like Dobby requires technical skill. You need to know about networks, APIs, and how to instruct an AI to do things. But Karpathy's point — and this is crucial — is that this is temporary. In a year or two, he believes, anyone will be able to do this without any technical knowledge. The AI will handle all the complexity. You will just say what you want and it will happen.

IV. The Death of Apps — Why Your Phone Might Get Simpler

The Dobby example leads Karpathy to a bigger argument: most of the apps on your phone probably should not exist as standalone applications.

The argument in plain English: Think about all the apps you have. Your bank app. Your fitness tracker app. Your smart thermostat app. Your music app. Your home security app. Each one was built by a separate company, with its own designers, its own interface, its own way of doing things. You have to learn each one separately. You have to switch between them constantly. And none of them can talk to each other very well.

Karpathy's vision is that all of these should just be invisible services — what techies call "APIs" (Application Programming Interfaces), which are essentially behind-the-scenes connections that let computers talk to each other. Your AI assistant would talk to all of them on your behalf. Want to check your bank balance, adjust the thermostat, and play music? You just say it, and the AI calls the right services. No apps, no interfaces, no learning curves.
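
The "one assistant, many invisible services" idea can be sketched as a tiny dispatcher: the user states an intention, and a router calls the right behind-the-scenes APIs. The service functions and the keyword matching below are illustrative stand-ins; a real assistant would use a language model to choose which tool to call, not keyword rules.

```python
# Toy sketch: three "invisible services" behind one assistant. Each
# function stands in for a real API call to a bank, thermostat, or
# music service.

def check_balance() -> str:
    return "Balance: $1,204.52"             # would call the bank's API

def set_thermostat(degrees: int) -> str:
    return f"Thermostat set to {degrees}F"  # would call the HVAC API

def play_music(query: str) -> str:
    return f"Playing '{query}'"             # would call the music service API

def assistant(request: str) -> list[str]:
    """Dispatch one natural-language request to zero or more services."""
    results = []
    text = request.lower()
    if "balance" in text:
        results.append(check_balance())
    if "thermostat" in text:
        results.append(set_thermostat(68))
    if "play" in text:
        results.append(play_music("The Office theme"))
    return results

print(assistant("Check my balance, set the thermostat, and play something"))
```

Notice that the user never sees three interfaces, three logins, or three apps — only one conversation.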

Who is the customer now? This is Karpathy's most provocative point in this section: the customer is no longer the human. The customer is the AI agent. Companies should not be designing interfaces for people to tap through — they should be designing clean, well-documented APIs for AI agents to call. The human just talks to the agent. The agent talks to everything else.

The pushback and Karpathy's response: Some people argue this is unrealistic because regular people are not going to learn to program their own AI assistants. Karpathy agrees — for now. But he frames this as a temporary state. What currently requires technical skill will become trivially easy within a year or two, as AI models improve and the tools become more accessible. The software you need will become "ephemeral" — generated on the fly for your specific situation, used, and discarded. You will not install apps; you will state intentions.

A real-world analogy: Think about how you used to have to manually configure your TV, your DVD player, your cable box, and your sound system — each with a different remote. Then universal remotes came along and unified the interfaces. Then smart TVs eliminated most of the separate devices entirely. Then voice assistants let you just say "play The Office." Each step removed complexity. Karpathy is describing the next step, where the voice assistant does not just control your TV — it controls everything in your digital life, and it is smart enough to figure out the details on its own.

V. Auto Research — What Happens When AI Improves Itself

This is arguably the most important — and most unsettling — section of the conversation. Karpathy describes a system where AI is not just doing work for humans, but improving itself without human involvement.

What "auto research" actually means: Karpathy has a project where he trains small AI models (think of them as simplified versions of ChatGPT). Training an AI model involves adjusting thousands of settings — think of them like dials on a mixing board. Each dial controls something about how the AI learns. Getting these dials right is an art that takes researchers years to develop intuition for. Karpathy has been doing it for twenty years.

He set up an AI system that automatically adjusts these dials, runs an experiment, checks whether the AI model got better or worse, and then adjusts again. He let it run overnight — completely unsupervised — and it found improvements he had missed. Specifically, it found that certain obscure settings (the "weight decay on value embeddings" and the "Adam betas," if you want the jargon) were not optimally tuned, and that they interact with each other in ways that are hard for a human to see. Twenty years of expert intuition, and the automated system still found gains in a single night.
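
The overnight loop is, at its core, very simple: try a setting, measure one number, keep the best. Here is a miniature version under toy assumptions — the "loss" function below is an invented stand-in whose two dials interact, loosely echoing the weight-decay and Adam-beta interaction mentioned above, which is exactly what makes hand-tuning hard.

```python
# Miniature "auto research": blind search over two interacting dials,
# judged by a single measurable number, versus a hand-picked default.

import random

def loss(weight_decay: float, beta: float) -> float:
    """Toy stand-in for a training loss; the best value of one dial
    depends on the other, so intuition for each dial alone misleads."""
    return (weight_decay - 0.1 * beta) ** 2 + (beta - 0.9) ** 2

# A reasonable expert default:
hand_tuned = loss(weight_decay=0.01, beta=0.95)

# The automated loop: sample settings all "night", keep what works.
random.seed(0)
best_setting, best_loss = None, hand_tuned
for _ in range(1000):
    wd, b = random.uniform(0, 0.2), random.uniform(0.8, 1.0)
    trial = loss(wd, b)
    if trial < best_loss:
        best_setting, best_loss = (wd, b), trial

print(best_loss < hand_tuned)  # the blind search beat the expert default
```

Real auto research replaces the toy function with an actual training run and the random sampler with something smarter, but the shape of the loop — propose, measure, keep — is the same.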

Why this is a big deal: This is a small-scale demonstration of something the big AI labs are racing toward: recursive self-improvement. That is the technical term for AI that makes itself smarter, which then makes itself smarter again, and so on. If this sounds like science fiction, it is not — it is happening right now at a small scale, and Karpathy is describing it from firsthand experience.

The key requirement — measurable progress: Auto research works because training an AI model has a clear, measurable objective. You can tell the system: "Make this number go down" (the "loss" in machine learning jargon — a number that measures how wrong the model's predictions are). The system tries different things, checks whether the number went down, and keeps what works. This is like having an employee who can work all night and whose performance review is entirely based on one number on a dashboard. No ambiguity, no politics, no subjective evaluation.

Karpathy points out that this measurability is the key limitation. Auto research works beautifully for anything with clear metrics — making code run faster, reducing errors, optimizing parameters. But it does not work for things that are hard to measure — creativity, nuance, judgment, knowing when to ask a question versus when to just figure it out. This is a theme that runs through the entire conversation: AI is superhuman in verifiable domains and mediocre everywhere else.

Scaling it up: Karpathy then describes how the big AI labs — OpenAI, Anthropic, Google DeepMind — are essentially doing this at massive scale. They have clusters of tens of thousands of specialized computer chips running experiments. The logical endpoint is to make these experiments fully autonomous: an AI system generates hypotheses (perhaps by reading research papers), tests them on small models, keeps what works, and extrapolates the findings to larger models. Human researchers would contribute ideas to a queue, but they would no longer be the ones running the experiments or evaluating the results. The system would run 24/7, at a speed and thoroughness that no team of humans could match.

What this means for you: This is the part that should make you pause. If AI can improve AI, and the improvements compound, then the pace of AI advancement is no longer limited by how fast human researchers can work. It is limited by how much computing power is available and how well the automated systems are designed. Karpathy is describing a world where progress accelerates not linearly but exponentially — where the AI that exists six months from now could be dramatically better than what exists today, not because of human breakthroughs, but because the AI improved itself.

VI. Crowdsourced AI Research — The Wikipedia Model for Intelligence

Karpathy takes the auto research idea and pushes it one step further: what if anyone on the internet could contribute to improving AI, not just people at big labs?

The analogy to SETI@home and Folding@home: In the early 2000s, there were projects that let ordinary people donate their computer's idle processing power to scientific research. SETI@home used it to search for extraterrestrial signals. Folding@home used it to study how proteins fold (which is important for understanding diseases). You would install a small program, and whenever your computer was not busy, it would work on a tiny piece of a larger scientific problem.

Karpathy envisions something similar for AI research. The basic idea is this: improving an AI model requires trying a huge number of things (10,000 ideas might yield one that works), but verifying whether a particular improvement works is relatively straightforward (you just run a test). This asymmetry — hard to discover, easy to verify — is the same property that makes Bitcoin mining work. Lots of computers do lots of work to find a valid block, but anyone can verify the result in seconds.
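
The discover/verify asymmetry can be shown in miniature with the same trick Bitcoin's proof of work uses: finding a nonce whose hash starts with a run of zeros takes many attempts, but checking a claimed nonce takes one hash. The "training-run-42" label is just an illustrative tag.

```python
# Hard to discover, cheap to verify: the property that makes both
# Bitcoin mining and crowdsourced verification workable.

import hashlib

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Cheap: one hash to check a claimed solution."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def discover(data: str, difficulty: int = 4) -> int:
    """Expensive: try nonces one by one until something verifies
    (on average tens of thousands of attempts at difficulty 4)."""
    nonce = 0
    while not verify(data, nonce, difficulty):
        nonce += 1
    return nonce

nonce = discover("training-run-42")
print(nonce, verify("training-run-42", nonce))
```

In the AI-research version, "discover" is trying thousands of code modifications and "verify" is rerunning the training to confirm the claimed improvement — expensive compared to a hash, but still far cheaper than the search.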

How it would work in practice: People or organizations around the world would contribute computing power to a shared research project. Their computers would try different modifications to an AI model's training code. When someone's computer finds an improvement, it submits it. A central system verifies that the improvement is real (by running the training and checking the results) and adds it to the shared codebase. The contributor gets credit on a leaderboard; the collective gets a better AI model.

The trust problem: There is an obvious challenge: if random people on the internet are sending you code to run, that is a security nightmare. Someone could submit malicious code that steals data or sabotages the project. Karpathy acknowledges this and says the system design needs to handle untrusted contributors — sandboxing their submissions, verifying results independently, and building in security measures similar to what blockchain systems use. It is hard, but it is a class of problem the industry already knows how to solve.

Why this matters for you: Imagine you care about cancer research. Instead of donating money to a research institution and hoping they use it well, you could buy computing power and contribute it directly to an AI research swarm focused on cancer. Your contribution would be measurable and verifiable. The AI would run experiments around the clock, and your computing power would be part of the collective brain working on the problem. This is not some distant fantasy — the technical pieces exist today. What is missing is the coordination infrastructure.

Compute as the new currency: Karpathy and the hosts briefly explore a provocative idea: could computing power (measured in FLOPS — floating point operations per second) become more important than money? Right now, it is genuinely difficult to buy computing power even if you have the cash — there are physical shortages of the specialized chips that AI requires. In a world where AI is the primary driver of value, controlling computing power might matter more than controlling dollars. Karpathy does not fully commit to this idea, but he finds it interesting enough to explore.

VII. Why AI Is Brilliant and Stupid at the Same Time

This is one of the most relatable parts of the conversation, because everyone who has used ChatGPT or similar tools has experienced this: the AI writes a brilliant essay but cannot count the number of "r"s in "strawberry." It solves a complex math problem but gives you the same terrible joke every time.

What Karpathy calls "jaggedness": Imagine a student who scores 99th percentile on the math section of the SAT but 10th percentile on the reading section. In humans, that kind of extreme gap is rare — our abilities tend to be somewhat correlated. If you are smart enough to be a world-class programmer, you can probably also tell a decent joke. AI models do not work this way. They can be simultaneously world-class in one domain and kindergarten-level in another.

The joke test: Karpathy points out that if you ask the most advanced AI model in the world to tell you a joke, you will get the same joke you got three or four years ago: "Why don't scientists trust atoms? Because they make up everything." Despite trillions of dollars of investment and massive leaps in capability, the joke has not gotten better. It has not even changed. Ask the same AI to build a complex software feature and it will work for hours and produce something extraordinary. But a joke? Same one. Every time.

Why this happens — the reinforcement learning explanation: The AI labs improve their models through a process called reinforcement learning. In very simplified terms, they give the AI a task, check whether it did it right, and reward it for correct answers. This works incredibly well for tasks where "right" and "wrong" are clear: Did the code compile? Did the math check out? Did the test pass? The AI gets better and better at these tasks because the feedback loop is tight and unambiguous.
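
The verifiable-reward loop can be sketched in a few lines: give the model a task, run an automatic check, reward only exact success. The arithmetic task and the three attempts below are invented for illustration — the point is that the check is unambiguous, which is precisely what a joke lacks.

```python
# Why coding and math improve fast under reinforcement learning: the
# reward signal is a clean, automatic check. No such check exists for
# "was that joke funny?"

def check_answer(candidate: int, expected: int) -> float:
    """Binary reward: 1.0 if the verifiable task was solved, else 0.0."""
    return 1.0 if candidate == expected else 0.0

# Pretend these are three attempts the model made at "what is 17 * 24?"
attempts = [398, 408, 418]
rewards = [check_answer(a, expected=17 * 24) for a in attempts]
print(rewards)
```

Training reinforces whatever earned reward, so skills with checkable answers compound while subjective skills drift along unchanged.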

But for things where "good" and "bad" are subjective — humor, style, social awareness, knowing when to ask a question — there is no clean feedback signal. Nobody has figured out how to give an AI a measurable "humor score" that reliably improves its jokes. So these softer capabilities do not improve at the same rate as the hard, verifiable ones. The model gets dramatically smarter at coding while its sense of humor stays frozen in 2021.

What Karpathy calls being "on rails" vs. "off rails": When the AI is working in a domain it was trained for — coding, math, analysis — it feels like you are tapped into a superintelligence. Everything moves at incredible speed and quality. But when you step outside those domains — ask for creative writing with genuine voice, ask for social nuance, ask for something truly novel — the AI "meanders." It produces bland, generic output. You have fallen off the rails into the unoptimized wilderness.

Why this matters for you: This is crucial for understanding what AI can and cannot do for you today. If your work involves tasks with clear right and wrong answers — data analysis, code writing, document summarization, spreadsheet formulas — AI is already extraordinarily useful, and getting better fast. If your work involves judgment, taste, interpersonal nuance, creative originality, or knowing what questions to ask — AI is mediocre at best, and may not be improving as fast as the headlines suggest. The gap between these two categories is the "jaggedness" Karpathy is describing, and it explains why the same AI can feel like a genius and an idiot within the same conversation.

The uncomfortable implication: Some researchers hoped that making AI smarter at code would automatically make it smarter at everything — that intelligence is intelligence, and improving one area lifts all boats. Karpathy does not believe this is happening. He sees the improvements staying within their lanes. Code gets better; jokes stay the same. This means we should not expect AI to become uniformly excellent just because it becomes excellent at specific tasks. The jaggedness might be a permanent feature, not a temporary bug.

VIII. Should There Be Different AIs for Different Jobs?

Given the jaggedness problem, Karpathy raises a natural question: instead of trying to build one AI that is good at everything, should we build many AIs that are each great at one thing?

The current approach — one model to rule them all: Right now, companies like OpenAI, Anthropic, and Google are building what Karpathy calls a "monoculture" — a single, massive AI model that is supposed to handle everything. Need a poem? Same model. Need a financial analysis? Same model. Need to debug code? Same model. The logic is that a single powerful model is easier to deploy, maintain, and improve. But the result is the jaggedness problem: the model is great at some things and mediocre at others, and the user never knows which version they are going to get.

The alternative — speciation: Karpathy borrows a term from biology: speciation. In nature, animals evolved different types of brains for different environments. Eagles have extraordinary visual processing. Dolphins have sophisticated sonar systems. Humans have overdeveloped language and reasoning centers. None of them are good at everything; each is spectacular at what its environment demands.

Karpathy suggests AI should follow the same path. Instead of one model that knows everything, you could have smaller, specialized models that share a common "cognitive core" (basic reasoning and language understanding) but are deeply specialized in particular areas. A model for mathematicians working in formal proof systems. A model for doctors analyzing medical images. A model for lawyers reviewing contracts. Each one would be faster, cheaper, and better at its specific task than the giant general model.

Why it has not happened yet: There are two main reasons. First, the AI labs do not know in advance what their users are going to ask about. Since the same model serves millions of users with wildly different needs, it has to be a generalist. Second, the science of customizing AI models is still immature. When you try to make a model better at one thing, you often accidentally make it worse at others. This is like a student who studies so hard for their math exam that they forget everything they knew about history. Until this problem is solved, building reliable specialist models is harder than it sounds.

What this means for you: In the near term, you will continue to interact with general-purpose AI models — the ChatGPTs and Claudes of the world. But over the next few years, expect to see more specialized AI tools emerge for specific industries and tasks. Your doctor might use a medical AI that is much better at diagnosis than a general model. Your accountant might use a financial AI that understands tax code at a deeper level. The general-purpose models will not disappear, but they will be supplemented by specialists — just as general practitioners in medicine are supplemented by cardiologists, neurologists, and oncologists.

IX. Open Source vs. Big Tech — Who Should Control AI?

This is one of the most politically charged topics in AI, and Karpathy — who has worked inside the biggest labs and now operates independently — has a nuanced view.

What "open source" means in AI: When a company like Meta releases an AI model as "open source," it means they make the model available for anyone to download, inspect, modify, and use. You could run it on your own computer without paying the company anything. The alternative is "closed source," where companies like OpenAI or Anthropic keep their models locked behind a subscription service — you can use them, but you cannot see how they work or run them yourself.

The current state of play: The most powerful AI models are closed — they belong to a handful of companies. But open-source models have been catching up rapidly. Karpathy says the gap was once about 18 months; now it is more like six to eight months. Chinese companies and global research groups have released open models that are much closer to the frontier than most people expected.

Karpathy's view — Linux for AI: He draws an analogy to the operating system wars. Windows and macOS are closed, commercial products. Linux is open source — and it quietly runs most of the world's computing infrastructure: the majority of web servers, every Android phone, and nearly all cloud machines run on Linux. Linux succeeded because the industry genuinely needed a common platform that no single company controlled. Karpathy believes AI needs the same thing.

The difference is cost. Writing software for Linux requires talent but not expensive hardware. Training a large AI model requires millions of dollars worth of specialized computing chips. This makes it harder for open-source AI to compete at the absolute frontier. But Karpathy argues that for the vast majority of everyday use cases, open-source models are already good enough. And what is frontier today will be open-source within six to twelve months. It is a conveyor belt: big labs push the boundary, open source follows behind, and the total capability available to everyone keeps rising.

Why this matters — the centralization danger: Karpathy gets personal here. He is Slovak-Canadian, and he references Eastern European history to explain his distrust of centralization. When power is concentrated in a few hands — whether it is a government or a corporation — bad things tend to happen. He does not want the world's most powerful AI to exist only behind the closed doors of two or three companies. He wants more labs, more perspectives, more people in the room when critical decisions are made. In machine learning, he notes, ensembles (combinations of multiple models) always outperform any single model. The same principle, he argues, applies to organizations.

What this means for you: This debate might seem abstract, but it has real consequences for your life. If AI is controlled by a few companies, those companies decide what the AI can and cannot do, who gets access, and what it costs. If strong open-source alternatives exist, you have choices — and so does every developer building the tools you use. Karpathy is essentially arguing that the current balance (big labs at the frontier, open source close behind) is healthy, and we should work to maintain it. The risk is that the big labs pull too far ahead, or that the number of serious frontier labs shrinks to just two or three, concentrating too much power in too few hands.

X. What About My Job? — The Honest Answer

This is the question everyone wants answered, and Karpathy is refreshingly honest about both his optimism and his uncertainty.

The key framework — digital vs. physical: Karpathy divides the economy into two categories. "Digital" work is anything you could do from your home computer — software engineering, financial analysis, graphic design, writing, data entry, legal research, customer support. "Physical" work requires your body to be somewhere — nursing, construction, plumbing, policing, surgery, restaurant work.

Current AI is a digital entity. It manipulates information — text, code, numbers, images — with extraordinary speed. It has no physical body. Moving bits of information is essentially free and instant; moving physical objects requires energy, time, and engineering. Karpathy estimates that manipulating bits is roughly "a million times easier" than manipulating atoms. So the disruption will hit digital work first and hardest. Physical work will change too, but much more slowly.

Will there be fewer jobs? — Jevons' Paradox: Karpathy's answer draws on a 150-year-old economic principle called Jevons' paradox. In the 1860s, economist William Stanley Jevons observed that as steam engines became more fuel-efficient, coal consumption did not decrease — it increased. The engines were so much cheaper to run that people used them for things that were previously too expensive. Demand expanded faster than efficiency reduced it.

The modern example Karpathy uses is ATMs and bank tellers. When ATMs were introduced, people feared bank tellers would be replaced. What actually happened was that ATMs made individual bank branches much cheaper to operate (you needed fewer tellers per branch). Because branches were cheaper, banks opened more of them. The total number of bank teller jobs actually increased for decades after ATMs were introduced.

Karpathy argues the same dynamic applies to software. Software is incredibly powerful, but it has historically been expensive to produce. Only large companies could afford to build custom software for their specific needs. If AI makes software 10 times or 100 times cheaper to produce, the demand for software will not decrease by the same factor — it will explode. Small businesses that could never afford custom software will suddenly have access to it. Individuals will have personal software tailored to their specific needs. The total amount of software in the world will increase dramatically, and with it, the demand for people who can direct and manage AI-powered software creation.

The honest uncertainty: Karpathy does not pretend to know how this plays out long-term. He notes that the researchers at big AI labs are literally working to automate themselves out of a job, and many of them feel the same anxiety he describes. He told his colleagues at OpenAI: "If we succeed at this, we are all out of a job." The long-term future is genuinely uncertain, and Karpathy says forecasting it is properly the job of economists, not engineers.

Practical advice: Karpathy's implicit advice is this: the tools are extremely new and extremely powerful. The first thing to do is learn them. Do not dismiss AI or be afraid of it — treat it as what it currently is, which is an empowering tool. Jobs are bundles of tasks, and some of those tasks can now be done much faster with AI. The people who learn to use these tools effectively will be more productive, more valuable, and harder to replace. The people who ignore the tools or refuse to learn them are the ones most at risk — not because AI replaces them directly, but because their colleagues who do use AI will be so much more productive that the comparison becomes untenable.

XI. Robots Are Coming, But Not as Fast as You Think

If AI is about to transform all digital work, what about physical work? Are robots going to replace plumbers, nurses, and construction workers? Karpathy's answer is informed by his time leading Tesla's self-driving program, and it is more cautious than you might expect.

Lessons from self-driving cars: Karpathy led the AI team at Tesla that was trying to make cars drive themselves. He watched dozens of self-driving startups launch a decade ago, flush with venture capital and optimism. Most of them failed. The technology required massive capital investment, years of development, and a tolerance for the messy, unpredictable nature of the physical world. Self-driving is still not fully solved, despite billions of dollars and a decade of effort.

He expects general robotics — humanoid robots, warehouse bots, domestic helpers — to follow a similar trajectory. There will be progress, but it will be slower, more expensive, and more difficult than the digital revolution happening right now. The reason is fundamental: moving physical objects is inherently harder than moving information. You can copy a file in milliseconds. Moving a box across a room requires motors, sensors, power, and dealing with a thousand unpredictable variables (the floor is wet, the cat is in the way, the box is heavier than expected).

The three-phase trajectory: Karpathy describes the future in three phases:

Phase 1 (happening now): A massive rewriting of the digital world. Everything that exists as digital information — documents, code, data, processes — gets optimized, reorganized, and enhanced by AI at incredible speed.

Phase 2 (coming next): Companies emerge at the interface between digital and physical. These are the "sensors and actuators" — companies that feed real-world data into AI (cameras, lab equipment, environmental monitors) or that execute AI's instructions in the physical world (robotics, automated labs, logistics). Karpathy mentions a friend running a company that does AI-powered research on physical materials using expensive lab equipment as the "sensor" for the intelligence.

Phase 3 (further out): Full physical-world autonomy — robots that can do complex physical tasks reliably and economically. This is the largest market, but it arrives last.

The "Daemon" vision: Karpathy references a novel called "Daemon" by Daniel Suarez, in which a digital intelligence effectively uses humans as its hands and eyes. He does not present this as dystopian so much as structurally accurate: as AI becomes more capable, humans will increasingly serve as the bridge between the digital intelligence and the physical world. We become the sensors (taking photos, collecting data, reporting conditions) and the actuators (executing physical tasks the AI cannot do itself). Society reshapes itself around the needs of the digital intelligence.

What this means for you: If your job is primarily physical — you work with your hands, you need to be physically present, your work involves the unpredictable real world — you have more time before AI dramatically changes your day-to-day. This is not because physical work is less important, but because it is harder to automate. If your job is primarily digital — you work on a computer, you process information, you could theoretically do your job from home — the changes are already here. The digital revolution will move at "the speed of light" compared to the physical one.

XII. The Future of School — AI as Your Personal Tutor

Near the end of the conversation, Karpathy talks about a project called MicroGPT and what it reveals about the future of education.

What MicroGPT is: Karpathy has spent over a decade trying to boil the concept of training an AI model down to its absolute simplest form. MicroGPT is his latest attempt: the entire algorithm — loading data, building the neural network, training it, and optimizing it — expressed in just 200 lines of simple Python code. In a real AI system, everything beyond those 200 lines exists only to make it run fast. If you do not care about speed and just want to understand the algorithm, 200 lines is all you need.

The shift in education: In the past, Karpathy's instinct would have been to make a long explanatory video walking through the code. He is famous for exactly this kind of educational content (his YouTube videos on neural networks have millions of views). But he realized something: the code is so simple that anyone could just ask their AI assistant to explain it. And the AI assistant would explain it better than he could — not because it understands more, but because it can customize the explanation to the individual. If you are a visual learner, it can draw diagrams. If you already know calculus, it can use math notation. If you are a complete beginner, it can start from the very basics. No human teacher can do this for every student simultaneously.

The new role of the teacher: Karpathy describes a fundamental shift. Instead of explaining things to people, teachers should explain things to agents. If the agent understands the material, the agent can explain it to anyone. The teacher's job becomes designing the curriculum — creating what Karpathy calls a "skill," which is essentially a set of instructions for the AI on what progression to take a student through — and contributing the "few bits" of genuine insight that the AI cannot generate on its own.

He gives a concrete example: MicroGPT is those "few bits." It is the product of a decade of obsession with what is truly essential in AI training. An AI agent could not have produced MicroGPT on its own — it lacks the aesthetic judgment to know what is truly minimal. But once MicroGPT exists, the AI can explain every line of it, answer every question about it, and do so with infinite patience in any language at any level.

What this means for you: If you are a student, the future looks like having a personal tutor that is always available, infinitely patient, and perfectly adapted to your learning style. If you are a teacher, your role is evolving from "person who delivers information" to "person who designs learning experiences and contributes the insights that AI cannot." If you are a parent, the educational tools available to your children will be dramatically better than what you had — but the human elements of education (motivation, mentorship, social development) will become more important, not less, precisely because the information-delivery part is being automated.

The honest assessment: Karpathy admits that he can still explain things slightly better than current AI agents. But the gap is closing rapidly, and he describes it as "a losing battle." The implication is clear: within a few years, AI tutors will be better than the vast majority of human teachers at the pure information-delivery part of education. The human contribution will shift to things AI cannot do — inspiration, judgment, emotional support, and the creative work of distilling complex subjects to their essence.

XIII. Why Karpathy Left Big Tech — And What It Says About AI's Future

The conversation ends on a deeply personal note. Karpathy worked at OpenAI and Tesla — two of the most consequential AI organizations in history. Why did he leave, and why has he not gone back?

The independence argument: Inside a big AI lab, you are not free. There are things you cannot say publicly — not because anyone forbids it explicitly, but because the pressure is real. Colleagues give you the side-eye. Conversations get awkward. The organization has a narrative it wants to project, and your public statements are implicitly expected to support it. Karpathy says he feels more "aligned with humanity" outside the labs because he is not subject to the financial incentives that come with being part of an organization that profits from making AI more powerful.

This is the conundrum that OpenAI was originally founded to solve: if AI is going to change the world dramatically, the people building it should not be the same people profiting from it. Karpathy does not claim this problem has been solved — he says "the conundrum is still not fully resolved."

The cost of being outside: Being independent has a real downside: the frontier labs are opaque. They are working on what is coming next, and if you are not inside, your understanding of the future inevitably drifts. You do not know how the systems really work under the hood. You do not have a reliable sense of where things are heading. Karpathy admits this makes him nervous.

His proposed solution is pragmatic: go back and forth. Spend time inside a frontier lab, contribute real work, stay connected to the actual state of the art. Then step outside, regain your independence, speak freely, contribute to the ecosystem. Do not permanently align yourself with any single organization, because when the stakes get really high, being an employee means you do not actually control what the organization does.

What this means for you: Karpathy's career choices reflect a broader tension in AI. The technology is being built by a small number of companies with enormous power and limited transparency. The people inside those companies are brilliant, but they are not free agents — they cannot always tell the public what they know or what they think. Karpathy's choice to operate independently, at the cost of being slightly behind the frontier, is his way of maintaining the ability to speak honestly. It is a trade-off that more and more AI researchers may face as the stakes get higher.


The Bottom Line

If you have read this far, here is what Andrej Karpathy is really saying, stripped of all jargon:

The tools changed. Around December 2024, AI coding assistants got good enough that elite programmers stopped writing code. They now manage AI workers instead. This is like the transition from handwriting letters to using email — the same work gets done, but the method is completely different.

Your apps are going away. The future is not fifty apps on your phone; it is one AI assistant that talks to all of them behind the scenes. You just say what you want.

AI is improving AI. Karpathy set up an AI that optimizes other AIs overnight. It found improvements he missed after twenty years of experience. The big labs are doing this at massive scale. The pace of AI improvement is about to stop being limited by how fast human researchers can work.

AI is brilliant and blind at the same time. It can build complex software but cannot tell a new joke. It can solve graduate-level math but does not know when to ask a clarifying question. This unevenness is structural, not temporary.

Digital jobs change first. If you work on a computer, the changes are already here. If you work with your hands, you have more time — but the changes are coming for physical work too, just more slowly.

More software, not less. Cheaper software historically means more demand for it, not less. The same pattern is likely to hold with AI-powered software creation.

Education transforms. Teachers will shift from explaining things to people to designing curricula for AI tutors. The AI tutor will be more patient, more adaptive, and eventually more knowledgeable than any human teacher — but it will need human-created content to teach from.

The people in charge are nervous too. Karpathy, one of the most accomplished AI researchers alive, describes himself as being in "AI psychosis." He feels the same anxiety as everyone else about keeping up. The difference is that he is anxious about being on the frontier; you might be anxious about being left behind. Either way, the territory is new for everyone.

What to do about it: Learn the tools. Do not be afraid of them and do not dismiss them. Focus on the things AI cannot do — judgment, creativity, physical work, emotional intelligence, asking the right questions. Whatever AI can already do, it will soon do better than you can. The only losing strategy is standing still.


This analysis is based on Andrej Karpathy's appearance on the No Priors podcast. All ideas and claims are attributed to Karpathy and the hosts as expressed in the original conversation. Explanations and analogies are added for accessibility.