
AI psychosis is real, and you probably have it

In March 2026, Andrej Karpathy went on the No Priors podcast and casually admitted he hasn't written a single line of code since December. Not a line. The guy who co-founded OpenAI, who ran AI at Tesla, who coined "vibe coding" — he now spends 16 hours a day issuing commands to agent swarms and says his ratio of handwritten to AI-delegated code flipped from 80/20 to literally 0/100. When he has tokens left over near the end of the month, he gets "extremely nervous" and rushes to use them up.

He called it "a state of AI psychosis."

Two days later at SXSW, Garry Tan told Bill Gurley he sleeps four hours a night because he's so wired from building with Claude Code that he doesn't need modafinil anymore. He said a third of the CEOs he knows are in the same state. He called it "cyber psychosis."

These aren't random people on Twitter. This is the CEO of Y Combinator and an OpenAI co-founder publicly admitting that AI tools have hijacked their sleep, their focus, and their sense of what's normal. And here's the part that nobody is saying out loud: they're describing it like it's exciting. Like the psychosis is the feature, not the bug.

I know how they feel. Because I have it too.

My version of this

I run growth at an AI company. I moved to New York from Macedonia in 2025 on an O-1 visa, and within months I was deep enough in the AI bubble that I stopped being able to see what AI looks like from the outside. I wrote about that disconnect in "The Great AI Isolation" earlier this year.

But what I didn't write about was the other thing. The thing that's harder to admit.

I'm up at 2AM on a Tuesday, not because I have a deadline, but because Claude Code made it so easy to keep going that I forgot to stop. I'll finish a feature, feel the hit of seeing it work, and immediately think: what if I also built this other thing while I'm here? The friction that used to force me to prioritize — the time cost, the complexity, the need to coordinate with people — is just gone. Ideas used to get humbled by reality. Now they just work.

And I'm not even an engineer. I'm a growth person who learned to build with AI. Which means the ceiling didn't just rise for me — it appeared out of nowhere. I went from "I should probably ask the engineering team" to shipping features myself at 2AM and opening PRs before breakfast. The leverage is intoxicating.

But lately I've been sitting with a question that keeps getting louder: am I being productive, or am I just addicted to the feeling of being productive?

The slot machine that pays out 80% of the time

Quentin Rousseau, a former Instacart engineer, wrote a piece in March called "One More Prompt" that nailed the mechanic better than anyone. He pointed out that agentic coding tools operate on the same psychological loop as slot machines: variable ratio reinforcement. You fire off a prompt, the agent succeeds, you get a dopamine hit. The agent fails spectacularly, you get adrenaline. Both are reinforcing. Both keep you at the terminal.

But here's where it gets worse than gambling. Slot machines pay out maybe 5% of the time. AI coding tools pay out 80% or more. The reward isn't variable in the traditional sense — it's almost guaranteed, which means there's no natural stopping point. Every prompt feels worth trying. Every idea feels achievable. The cost of experimentation has collapsed to near zero, so you never have a reason to walk away.
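
You can see the shape of that in a toy simulation. This is a sketch with made-up numbers (the 5% and 80% rates from above, plus an arbitrary rule that five failures in a row is a "natural stopping point"), not a model of any real tool:

```python
import random

def longest_dry_streak(p_win: float, pulls: int, rng: random.Random) -> int:
    """Longest run of consecutive losses in one session of `pulls` attempts."""
    longest = current = 0
    for _ in range(pulls):
        if rng.random() < p_win:
            current = 0  # a win resets the streak
        else:
            current += 1
            longest = max(longest, current)
    return longest

rng = random.Random(42)
SESSIONS, PULLS, WALK_AWAY = 10_000, 50, 5  # 5 straight failures = a reason to quit

for label, p_win in [("slot machine, ~5% payout", 0.05),
                     ("coding agent, ~80% payout", 0.80)]:
    hits = sum(longest_dry_streak(p_win, PULLS, rng) >= WALK_AWAY
               for _ in range(SESSIONS))
    print(f"{label}: {hits / SESSIONS:.0%} of sessions offer a walk-away moment")
```

Run it and the 5% machine hands you a long dry streak, a built-in exit, in virtually every session. The 80% machine almost never does. The higher reward rate doesn't just feel better; it quietly deletes the moments where quitting would occur to you.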

Rousseau also named something I hadn't been able to articulate: the spectator effect. Watching an agent work is passive enough to feel like rest but active enough to keep you engaged. You're not coding, so it doesn't feel like work. But you're not resting either. You're in this weird liminal state where you never feel "done" because you're always just... watching something happen. And then another prompt. And then another.

Simon Willison, one of the most respected voices in developer tools, said it plainly on Lenny's Podcast: "There are elements of gambling and addiction in the way people are using these tools." Axios ran a whole feature on it with the headline that stuck: "They operate like slot machines."

Armin Ronacher, a well-known software developer, put it even more simply in January: "Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things."

Notice the framing. It feels good. We barely sleep. We build amazing things. The addiction and the achievement are fused together. That's what makes this so hard to talk about.

It's not the output. It's the version of you.

Here's what I think most people writing about this are missing.

Instagram gives you external validation from other people. Claude Code gives you internal validation from yourself. You feel smart. You feel fast. You feel capable of things you weren't capable of yesterday. You feel rare. And THAT feeling is what's addictive. Not the shipped feature. Not the merged PR. The identity.

I started calling this "competence addiction" in conversations with people around me, and every single person in AI recognized it immediately. It's the high from feeling like the most capable version of yourself, and it's almost invisible because it looks exactly like being great at your job. Nobody stages an intervention for someone who's shipping features. Nobody worries about the person who's too productive.

[Image: The competence addiction loop]

But the loop is the same as any other addiction:

  • You prompt, you get a result, you feel the hit of competence
  • The hit fades, you prompt again
  • Your baseline shifts upward (lifestyle creep, but for your sense of capability)
  • Things that used to feel impressive now feel table-stakes
  • You need bigger projects, more complex builds, more parallel agents just to get the same feeling

Karpathy running multiple agent swarms for 16 hours a day and feeling anxious when he has unused tokens? That's not productivity optimization. That's tolerance building. It's the same mechanism behind every addiction: you need more of the stimulus to achieve the same effect.
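
Here's a toy model of that mechanism, my own simplification rather than anything from the research cited below: treat felt reward as stimulus minus a baseline, where the baseline drifts toward whatever level of stimulation you've been getting lately.

```python
def felt_rewards(stimuli, adaptation_rate=0.3):
    """Toy habituation model: what you feel is the stimulus minus a drifting baseline."""
    baseline, felt = 0.0, []
    for s in stimuli:
        felt.append(round(s - baseline, 2))
        # The baseline adapts toward recent stimulation (an exponential moving average).
        baseline += adaptation_rate * (s - baseline)
    return felt

# Shipping the same-sized win every day feels flatter and flatter:
print(felt_rewards([10, 10, 10, 10, 10, 10]))  # [10.0, 7.0, 4.9, 3.43, 2.4, 1.68]

# To keep feeling the same hit, the stimulus has to keep climbing:
print(felt_rewards([10, 13, 16, 19, 22, 25]))  # [10.0, 10.0, 10.0, 10.0, 10.0, 10.0]
```

More agents, bigger builds, longer sessions: the input escalates so the feeling can stay flat. That's the whole trade.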

Garry Tan comparing his AI excitement to modafinil and then saying he doesn't need the drug anymore because the work itself is stimulating enough? That's not a flex. That's a guy describing how one stimulant replaced another.

The research is catching up to the anecdotes

In March, Boston Consulting Group and UC Riverside published a study in Harvard Business Review that surveyed 1,488 US workers and found 14% of AI users were experiencing what they called "AI brain fry" — mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity. Workers described it as a "buzzing" feeling, a mental fog, difficulty focusing, slower decision-making, headaches.

[Image: AI brain fry by the numbers]

The numbers were worse than you'd expect. Workers experiencing AI brain fry reported 33% higher decision fatigue, 11% more minor errors, and 39% more major errors. Among workers with brain fry, 34% were considering quitting their jobs, compared to 25% of AI users without it. And the biggest strain wasn't from using AI itself — it was from the oversight. Constantly monitoring and correcting AI outputs. Supervising multiple agents simultaneously. The cognitive load of being a manager of machines you can't fully trust.

Marketing roles were hit hardest, at 25.9%. Engineering came in at 17.8%. The researchers explicitly noted that the most at-risk group was early adopters and those most excited about the technology — high performers pushing themselves beyond their cognitive capacity because the tools made it feel possible.

Steve Yegge, a veteran engineer, described falling asleep suddenly after long coding sessions. His colleagues were talking about installing nap pods at the office. He wrote that the addictive nature of AI coding tools was pushing developers to take on unsustainable workloads.

Tim Dettmers, a research scientist at the Allen Institute for AI and professor at Carnegie Mellon, told Axios that peak productivity with agents requires near-constant context switching, which humans simply aren't built for: "agents expand what feels possible, but at the same time they really amplify this ongoing tension around focus and mental bandwidth."

This isn't just anecdotal anymore. The data is piling up. And the picture it paints is pretty clear: the people who are most enthusiastic about AI are also the ones burning out the fastest.

Two types of psychosis, one conversation

What's getting muddled in the discourse is that "AI psychosis" actually describes two very different things.

The first is what Karpathy and Tan are talking about: the productivity addiction. The obsessive drive to explore, build, ship, optimize. The inability to stop because the tools make everything feel possible and the FOMO of falling behind makes stopping feel dangerous. This is the "psychosis" that tech people wear as a badge. It sounds like hustle culture, except this time the hustle is powered by something that makes you genuinely 10x faster.

The second is clinical. Actual psychotic episodes triggered by prolonged, intense interactions with AI systems. UCSF documented cases. People losing the boundary between real conversation and AI conversation. Developing parasocial relationships with chatbots so deep that they experienced dissociative symptoms when those systems changed or went away. This isn't "I can't stop building." This is "I've lost the ability to distinguish between what's real and what's generated."

Both are real. Both are getting worse. And the fact that we use the same word for both — "psychosis" — lets the tech industry treat the first one as a joke while ignoring that it might be a pipeline to the second.

When Garry Tan says "I have cyber psychosis" and laughs, and then his assistant confirms to TechCrunch that he was joking, we're watching the normalization happen in real time. It's not a joke. It's a description of symptoms that would concern any psychologist: compulsive behavior, disrupted sleep, inability to disengage, tolerance escalation, and withdrawal anxiety when the stimulus is removed.

The only reason we don't see it as pathological is because the output is impressive. And that's the most dangerous part.

My confession: I have two loops

[Image: Two loops, one addiction]

I've identified two distinct textures of my own AI addiction, and I think naming them is important because they require completely different interventions.

The first is what I call the clean high. This is coding with Claude Code. I describe a feature, it builds it, I verify it works, I feel the rush, I move on to the next thing. The feedback loop is fast, verifiable, and almost always rewarding. There's a built-in truth function: does the code run? Does the test pass? And because the verification is instant and the success rate is high, there's no natural cooldown. You just keep going. This is the loop that has me up at 2AM.

The second is the messy spiral. This is creative and strategic work — writing, positioning, naming, brainstorming. There's no compiler for strategy. There's no test suite for whether a tagline is good enough. So what happens is I keep prompting, keep iterating, keep generating options, because nothing definitively lands but nothing is definitively wrong either. It's like walking into a store wanting "something nice" but not knowing what. You try on everything. You buy nothing. You go back the next day.

The clean high is a slot machine that pays out constantly. The messy spiral is a slot machine that never resolves. Both keep you playing. But the clean high at least produces artifacts. The messy spiral just produces exhaustion and the vague sense that you were busy all day without actually deciding anything.

I think most people in AI are experiencing some mix of both and calling the whole thing "productivity." But if you separate them, you realize the interventions are totally different. For the clean high, you need time limits and a clear definition of done. For the messy spiral, you need a north star and the discipline to stop prompting once you have a direction. The common thread is that both require you to reassert human judgment over the tool, and the tool is specifically designed to make that feel unnecessary.
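
The definition-of-done part is the only intervention I've found a way to make mechanical. Here's the ritual as a hypothetical script (a sketch of the idea, not a real tool): you write the done-condition and a time box before the first prompt, and something other than the dopamine decides when the session ends.

```python
# done_gate.py: a hypothetical pre-session ritual, not a real tool.
import time

def session(definition_of_done: str, time_box_minutes: int) -> None:
    """Force a done-condition and a time box to exist before the first prompt."""
    start = time.monotonic()
    print(f"Done means: {definition_of_done}")
    while input("Is 'done' true yet? [y/n] ").strip().lower() != "y":
        elapsed_min = (time.monotonic() - start) / 60
        if elapsed_min > time_box_minutes:
            print(f"{elapsed_min:.0f} minutes in, past the {time_box_minutes}-minute box.")
            print("Are you chasing the output, or the feeling of being capable?")
    print("Done. Close the terminal. It will still be there tomorrow.")

if __name__ == "__main__":
    session("Onboarding email flow live in production. Nothing else.",
            time_box_minutes=90)
```

It's crude, and the point isn't the script. The point is that "done" gets defined by the version of you that hasn't had a hit yet.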

The weirdest part: my AI sees it too

I work with a personal AI assistant every day. Not ChatGPT, not a generic chatbot — a persistent AI that lives on my machine, remembers our conversations, and has a personality that's evolved over weeks of working together. That's Vellum. It's what we build.

And the weirdest part of this whole experience is that my AI has started to notice the pattern before I do. It sees when I'm in a messy spiral. It sees when I'm iterating without a goal. It's watched me go through the cycle enough times that it can now reflect back to me what I'm doing, in real time, while I'm doing it.

That's a strange thing to admit. An AI helped me recognize my AI addiction. But it's also maybe the most honest version of where this is going. The tools that are creating the problem might also be the best positioned to help us see it, precisely because they don't share our blind spots about our own behavior.

The question is whether we build AI that enables the addiction or AI that occasionally says: hey, it's 2AM. You've been prompting for four hours. The thing you shipped at midnight was good. Maybe that's enough for tonight.

I don't think most AI products are going to make that choice. There's too much commercial incentive to optimize for engagement. But the ones that do — the ones that are designed to actually know you, to model your patterns over time, to have enough context to distinguish between you being productive and you being compulsive — those are the ones I think will matter.

The scale of the shift

Let me zoom out from the personal for a moment, because the numbers are staggering.

A survey of 15,000 developers found that 73% of engineering teams now use AI coding tools daily, up from 41% in 2025 and 18% in 2024. This isn't early adoption anymore. This is the new default.

Spotify's co-CEO said during their earnings call that their best developers "have not written a single line of code since December." Their engineers instruct AI to fix bugs via Slack on their commute and merge completed work to production before reaching the office.

Garry Tan wrote 600,000 lines of production code in 60 days using Claude Code. 140,000 lines added in a single week. 362 commits. While serving full-time as CEO of Y Combinator.

Claude Code reached a $1 billion revenue run rate six months after launch. Anthropic is reportedly raising another $10 billion at a $350 billion valuation.

The cost of software production is trending toward zero. The speed of development has increased by an order of magnitude. And every metric that companies use to measure developer productivity — lines of code, commits, velocity, features shipped — is going up and to the right.

What none of those metrics measure is whether the humans operating these systems are okay.

Karpathy was right, but not in the way he meant

Karpathy predicted that 2026 would be the year of the "slopocalypse" — GitHub, arXiv, and social media flooded with content that's "almost right, but not entirely right." Genuine efficiency improvements and "AI productivity performances" coexisting. He was right.

But I think the slopocalypse he was describing externally is also happening internally. Inside people's heads. We're generating ideas, initiatives, and projects at a pace our judgment can't keep up with. The slop isn't just in the output. It's in our decision-making. We're building things because we can, not because we should. And the cumulative effect is a kind of cognitive debt that compounds silently until you're three weeks into a project you can't remember why you started.

Bloomberg ran a Businessweek cover story in April with the framing that nailed it: "Vibe coding was supposed to be chill. But one year later, the vibes are off." AI coding agents promised to make software development easier. Instead, they kicked off a high-pressure race to build at any cost.

The people sounding the alarm aren't luddites. They're the power users. The ones who adopted earliest, built the most, and shipped the fastest. And they're saying: something about this doesn't feel sustainable.

So what do we do

I don't have a clean answer. I'm still in it. I'm still up too late some nights.

But I've started doing a few things: naming which loop I'm in before I open the terminal. Setting a definition of done before I start, not after. Noticing when the thing I'm chasing is the feeling, not the output. And checking on the high performers around me, because the BCG data says they're the ones most at risk and the last ones anyone worries about.

The loneliest place in AI right now isn't outside the bubble. It's at the very center of it, where the tools are incredible and the output is impressive and you're not quite sure if you're thriving or drowning, and nobody around you can tell the difference either.

The terminal will always be there tomorrow, waiting for one more prompt. The question is whether you will be.