Anthropic uses Claude Code for roughly 60% of its own internal development. That number, if you sit with it, is either a confidence-inspiring signal or a slightly vertiginous one, depending on your disposition. It means the team building the AI is trusting the AI to build the team's software. The ouroboros is fully formed.

Claude Code lives in your terminal. Not in an IDE, not in a browser tab, but in the shell, where it can read files, run commands, manage git, and edit code the way a developer would. That choice of environment is the product's central argument: the terminal is where things actually happen, and an agent that lives there has fewer layers of abstraction between its intentions and your filesystem.

The tool-use numbers are worth knowing. Claude Code chains an average of 21.2 tool calls per session without human intervention, a 116% increase over earlier versions. In practice, this means it can take a task like "refactor the authentication module to use the new token format" and actually carry it through: finding relevant files, reading them, proposing changes, running tests, catching failures, revising. Not always correctly, but with enough coherence to be genuinely useful rather than a series of isolated completions.

Auto-Accept Mode (triggered with Shift+Tab) removes the approval step entirely, letting Claude Code run test-fix loops without waiting for you. This is where the tool earns its category distinction from something like Cursor. Cursor is an IDE with AI built in. Claude Code is an agent that happens to understand code. The difference matters for long-horizon tasks: architecture migrations, cross-repository refactors, anything that would normally require hours of sustained, context-heavy focus.
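The shape of that loop is simple to sketch. Here is a hedged, conceptual version in Python; `run_tests`, `propose_fix`, and `apply_fix` are illustrative stand-ins for what the agent actually does, not names from Claude Code itself:

```python
def auto_accept_loop(run_tests, propose_fix, apply_fix, max_rounds=5):
    """Sketch of a test-fix loop with no approval step: every proposed
    fix is applied immediately, and the loop stops only on green tests
    or after max_rounds attempts."""
    for round_num in range(max_rounds):
        passed, output = run_tests()
        if passed:
            return round_num          # fix rounds needed before green
        fix = propose_fix(output)     # agent reads the failure output
        apply_fix(fix)                # applied without asking the user
    return None                       # gave up; a human takes over


# Simulated run: tests fail twice, then pass after two applied fixes.
state = {"failures": 2}
result = auto_accept_loop(
    run_tests=lambda: (state["failures"] == 0, f"{state['failures']} failing"),
    propose_fix=lambda out: "patch",
    apply_fix=lambda fix: state.update(failures=state["failures"] - 1),
)
print(result)  # 2
```

The point of the sketch is the missing step: in a conventional assistant, a human approval sits between `propose_fix` and `apply_fix`. Auto-Accept Mode removes it, which is what makes hours-long autonomous sessions possible and also why reviewing the result afterward matters.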

The CLAUDE.md convention is, in a small way, influential. Drop a CLAUDE.md file at the root of your project, and Claude Code reads it at the start of every session: the stack, the naming conventions, the things not to touch, the tests to run before committing. Other tools adopted the pattern. Cursor uses .cursorrules. Codex has AGENTS.md. The idea of giving an AI agent persistent, project-level context has become standard practice, and Claude Code was early to it.
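A minimal CLAUDE.md might look like the following. The specifics here (stack, commands, directory names) are illustrative, not drawn from any real project:

```markdown
# Project notes for Claude

## Stack
- TypeScript, Node 20, PostgreSQL via Prisma

## Conventions
- Named exports only; no default exports
- Tests live next to source files as *.test.ts

## Before committing
- Run `npm test` and `npm run lint`

## Do not touch
- prisma/migrations/ (generated; never edit by hand)
```

Because the file is read at the start of every session, it works like a standing briefing: the agent doesn't have to rediscover the project's conventions each time, and "things not to touch" survive across sessions.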

Multi-agent mode is the experimental edge. Claude Code can spawn parallel subagents, each working on a different part of a task, coordinated by a lead agent. The use cases that make sense here are genuinely complex: running different test suites simultaneously, parallel code review passes, coordinated refactoring across a monorepo. The Claude Code Review feature (automated PR analysis that catches bugs before human reviewers see them) is a productized version of this for teams on enterprise plans.

The IDE extensions (VS Code, Cursor, Windsurf, JetBrains) let Claude Code surface inside your existing editor rather than requiring a terminal split, which removes the main friction for people not already living in the shell. But the terminal experience is still the primary one, and the tool is clearly designed for developers who are comfortable there.

Who it's for: developers who want an agent that can take a real task and run with it, people working on complex or unfamiliar codebases where context matters, and teams that want to automate code review without adding another service to their stack.

Who it's less suited to: designers doing occasional light coding, people who want IDE-style visual diffing and step-by-step approval before anything is written, or anyone who finds CLI tools alienating. The terminal-first philosophy is a genuine commitment, not a cosmetic choice.

Pricing is folded into Claude Pro at $20 per month, which makes the entry point reasonable. Heavy usage through the API is billed separately, and that's where costs can grow for teams running long autonomous sessions. The $1 billion ARR milestone, reached within months of general availability, suggests the pricing is landing well enough.

The honest summary is that Claude Code is the most capable autonomous coding agent available, and the most honest about what autonomy actually requires: trust, good instructions, and a willingness to review what it produces.

Latest Updates

Finding 1: The longer models reason, the more incoherent they become. This holds across every task and model we tested—whether we measure reasoning tokens, agent actions, or optimizer steps.

On December 8, the Perseverance rover safely trundled across the surface of Mars. This was the first AI-planned drive on another planet. And it was planned by Claude.

Plugin support is available today as a research preview for all paid plans. Org-wide sharing and management is coming soon. Learn more: claude.com/blog/cowork-pl…

Cowork now supports plugins. Plugins let you bundle any skills, connectors, slash commands, and sub-agents together to turn Claude into a specialist for your role, team, and company.

We're open-sourcing 11 plugins for sales, finance, legal, data, marketing, support, and more. Get started here: claude.com/plugins-for/co…