Bogdan Dragomir

Feb 18, 2026

How about now?

A year ago, the best you could do with AI was autocomplete on steroids. Copilot would finish your line, maybe guess the next function. Useful, but not transformative. You still wrote the code, made the decisions, did the debugging. The AI was a sidekick with a good memory.

That's not where we are anymore.

In February 2025, Andrej Karpathy posted a throwaway tweet about something he called "vibe coding" — describing what you want in plain language and letting AI generate the code. A year later, he retired the term. Not because it failed, but because it evolved into something bigger. He now calls it "agentic engineering": 99% of the time you're not writing code yourself, you're orchestrating the agents that do. The skill shifted from typing to directing.

This isn't hypothetical. 92% of US developers now use AI coding tools daily. Y Combinator reported that 25% of their Winter 2025 batch had codebases that were 95% AI-generated. Claude Code alone hit $1 billion in annual run-rate revenue by December 2025. These aren't early adopter numbers. This is mainstream.

The barriers fell quietly

What changed isn't one thing. It's the stack.

Reasoning models got good. OpenAI's o3, DeepSeek R1, Gemini 3 — they don't just predict the next token anymore. They break problems into steps, test hypotheses, backtrack when something doesn't work. The kind of thinking that used to require a senior developer staring at a whiteboard now happens in seconds. DeepSeek trained their frontier model for roughly $5.5 million. A year before, that would've cost hundreds of millions.

Context windows expanded. Models can now hold entire codebases in memory. Claude operates with a 1M token context window in beta. That means an agent can read your whole project, understand how the pieces fit, and make changes that actually make sense across files. No more "it fixed this file but broke that one."
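
To make that concrete, here's a rough sketch of what "hold the whole project in memory" looks like with the Anthropic Python SDK: read every file, concatenate, send one request. The model id and the long-context beta flag below are placeholders from memory, not gospel.

```python
# Sketch: feed a whole (smallish) project to the model in a single request.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API key in
# the ANTHROPIC_API_KEY environment variable. The model id and beta header
# are placeholders; verify them against the current docs.
import pathlib
import anthropic

client = anthropic.Anthropic()

project = pathlib.Path("./my-app")
codebase = "\n\n".join(
    f"# file: {path.relative_to(project)}\n{path.read_text()}"
    for path in sorted(project.rglob("*.py"))
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=2048,
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},  # placeholder beta flag
    messages=[{
        "role": "user",
        "content": codebase + "\n\nWhere does the payment flow touch the database?",
    }],
)
print(message.content[0].text)
```

No chunking, no vector database, no retrieval pipeline. The model just reads it.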

Agents went local. Claude Code runs on your machine, not in some cloud sandbox. It reads your files, runs your tests, commits to your repo. It has access to the same tools you do. The agent isn't looking at your code through a keyhole anymore — it's sitting in your chair.

And the open-source side kept pace. DeepSeek, Qwen, Mistral — Chinese and European labs are releasing models that compete with the best proprietary ones. Competition drove prices down and quality up. Running a capable model locally on a decent machine is no longer science fiction.

The bottleneck moved

Here's the part that matters: the bottleneck in building things is no longer the ability to write code.

Read that again. The thing that stopped most ideas from becoming real — "I'd need to learn React", "I'd need a backend developer", "I'd need six months" — that's gone. Not reduced. Gone.

The bottleneck now is the ability to shape the product itself. What should it do? Who's it for? What's the right interaction model? Clarity. These creative and strategic questions were always the hard part; they just got overshadowed by implementation complexity.

Solo developers are shipping things that used to require teams. A single person can build a full-stack app, generate professional-quality assets, write tests, deploy — in a weekend. Not a prototype. A real thing. The gap between "idea" and "working product" collapsed from months to days.

The tools matured

2025 was messy. Everyone was figuring out how to use these things. Prompt engineering felt like alchemy. Agents would hallucinate, lose context, declare victory halfway through a task.

2026 is different. The tools caught up to the capability.

MCP (Model Context Protocol) gave agents a standard way to interact with external tools — databases, browsers, APIs, design tools. Skills and memory systems let agents retain context across sessions instead of starting fresh every time. Coding agents now run benchmarks on their own work, take screenshots to verify UI changes, and flag when they're stuck instead of silently producing garbage.
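
To make MCP concrete: a tool server is just a small program that declares what it can do and speaks the protocol over stdio or HTTP. Here's a minimal sketch using the official Python SDK's FastMCP helper; the server name and the count_open_orders tool are invented for illustration.

```python
# A toy MCP server: one database-style tool an agent can discover and call.
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-db")

@mcp.tool()
def count_open_orders(customer_id: str) -> int:
    """Return the number of open orders for a customer (stubbed here)."""
    # A real server would query an actual database; this one is hardcoded.
    fake_db = {"acme": 3, "globex": 0}
    return fake_db.get(customer_id, 0)

if __name__ == "__main__":
    # Serves the tool over stdio, so a local agent can launch and talk to it.
    mcp.run()
```

Point an MCP-capable agent at that script and the tool shows up next to its built-in ones; the agent decides when to call it. One protocol instead of a bespoke integration per tool.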

The workflow shifted from "babysitting AI" to "reviewing AI's work." You set the direction, the agent does the work, you check the output. More like managing a junior developer than writing code yourself. Except this junior developer works at 3 AM, doesn't get tired, and learns from every mistake instantly.

So why now

Every few months for the past decade, someone wrote a "now is the time" article about some technology. Most of them were premature. The tech wasn't ready, the tooling was bad, the learning curve was too steep.

This time the convergence is real. Reasoning models that actually reason. Agents that persist across sessions. Context windows that fit real projects. Open-source alternatives that keep costs down. A developer ecosystem that's matured past the "wow, it can write a function" phase into serious production tooling.

The numbers tell the story. 40% of enterprise applications will work with AI agents by end of 2026, up from under 5% in 2025. The vibe coding market is projected at nearly $3 billion in 2025, heading toward $325 billion by 2040. Gartner says 33% of enterprise apps will include autonomous agents by 2028.

But forget the enterprise numbers. The real story is what this means for someone with an idea and a laptop. You don't need to raise money to hire a team. You don't need to spend years learning a framework. You don't need permission from anyone.

The question people kept asking was "is AI ready?" It's been ready for a while now. The question was always whether you were going to use it.

How about now.