I need to interrupt the story I've been telling you.
If you've been following this blog from the beginning, you'll know it's structured chronologically. I started with the embarrassing bit - asking Claude whether it could read my screen - and I've been working forward through the timeline, post by post, telling you how a sceptical .NET developer gradually fell down the AI rabbit hole. Each post builds on the last. Context accumulates. You get to watch me make mistakes in real time, which I'm told is the main appeal.
Here's the problem: the world isn't waiting for me to finish the story.
The Problem with Chronological Storytelling
I chose to tell this chronologically for good reasons. If I'd jumped straight to "I now run autonomous AI agent teams that manage their own sprint boards", you'd have thought I was exaggerating. Or worse, you'd have thought I was one of those people on LinkedIn who discovered ChatGPT last Tuesday and now consider themselves thought leaders. I am neither of those things. I'm a bloke who asked an AI about archery sights and gradually worked out that it could also write code.
The chronological approach means you see the learning curve. You see me paste entire error messages into Claude and get back confident nonsense. You see me slowly figure out that the quality of the output depends entirely on the quality of the input. You see the specific moment when it stopped feeling like a search engine and started feeling like a colleague. That context matters. Without it, the later posts would sound like science fiction.
But the gap between where the blog is and where I actually am has become absurd. The blog is telling the story of late 2025. I'm living in February 2026. That might not sound like much in normal time, but in AI time it's roughly equivalent to a geological epoch. Things that were experimental when I was writing about them are now standard practice. Tools I was cautiously testing are now load-bearing parts of my workflow. The narrative is months behind reality, and reality keeps accelerating.
Coding Is Solved, Apparently
Boris Cherny - the head of Claude Code - went on Lenny's podcast recently and said, with a straight face, that "coding is solved". Claude Code now accounts for roughly 4% of all GitHub commits. Four percent. Of all commits. On the entire platform.
I wrote a post back in December called Coding Is Dead, Long Live Coding, which was my attempt to push back against the hyperbolic headlines while acknowledging that something genuinely significant was happening. At the time, it felt like a balanced take. The tools were impressive but rough around the edges. The workflow was promising but unreliable. "Coding isn't dead," I wrote, "but it's being fundamentally restructured."
That was ten weeks ago. In those ten weeks, Anthropic shipped agent teams, Claude Code started running as a background daemon, and someone published actual metrics showing that AI-written code has gone from novelty to infrastructure. My "balanced take" is already looking quaint, and the blog post where I plan to cover this period hasn't been written yet because I'm still catching up to November.
This is the structural problem with documenting a fast-moving field in chronological order. By the time I've finished describing what happened, what happened has already been superseded by what happened next.
Where I Actually Am Right Now
So let me break the fourth wall for a moment and tell you where things actually stand, right now, in February 2026.
I run agent teams. Not as an experiment - as my actual development methodology. Claude Code agents work in parallel across isolated git worktrees, each handling a different part of a feature while a lead agent coordinates the work. I've built MCP servers for both Task Board and TestPlan, which means the AI can read tickets, update sprint boards, log test results, and manage its own workflow without me copying and pasting context back and forth.
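If you're wondering what "the AI can read tickets and update sprint boards" actually means mechanically, it's less magical than it sounds: an MCP server is just a small process that exposes named tools the agent is allowed to call. Here's a minimal sketch using the TypeScript MCP SDK - the tool name, parameters, and the board call are hypothetical stand-ins, not my actual Task Board server:

```typescript
// Minimal sketch of a "Task Board" MCP server. The move_ticket tool and its
// parameters are hypothetical illustrations, not the real implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "taskboard", version: "0.1.0" });

// Expose one tool the agent can call to move a ticket between board columns.
server.tool(
  "move_ticket",
  { ticketId: z.string(), column: z.string() },
  async ({ ticketId, column }) => {
    // A real server would call the board's API here; this just echoes back.
    return {
      content: [{ type: "text", text: `Moved ${ticketId} to "${column}"` }],
    };
  }
);

// Claude Code talks to the server over stdio.
await server.connect(new StdioServerTransport());
```

Point the agent at that and it can move its own tickets. Multiply by a handful of tools (read ticket, log test result, create task) and you get the workflow I described above.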
The AI manages its own sprint board. I want to let that sink in for a moment, because six months ago I was asking it whether my conversations were being used for training data.
CoSurf - the co-browsing product - was built with AI handling significant chunks of the implementation. The Chrome extension, the signalling layer, the real-time session management. Not all of it, but enough that the development timeline would have been completely different without AI involvement.
I've gone from "can it read my screen" to "it's managing its own tasks and deploying its own code" in about six months. The chronological blog is going to get there eventually. But I wanted you to know where the story is heading, because some of these later posts are genuinely wild and I don't want you to think I'm making it up when you get there.
This Isn't Just an AI Blog Anymore
You might have noticed I published a post about training as a Forest School Leader three days ago. A developer blog about AI suddenly featuring a post about learning to light fires in the woods. That wasn't an accident, and it wasn't a non sequitur.
The more I work with AI, the more I think about what's distinctly human. What can't be replicated by a language model, no matter how capable. What matters precisely because it's inefficient and physical and doesn't scale. Forest School is the opposite of everything I do professionally, and I think that's exactly why it belongs here.
Life doesn't organise itself into neat categories. The same week I'm debugging an MCP server connection, I'm also standing in a forest learning about risk-benefit assessments for children climbing trees. The same brain that's figuring out how to coordinate AI agent teams is also trying to remember which mushrooms are edible and which ones will kill you. These aren't separate stories. They're the same story - a person trying to figure out what to do with their one life while the ground shifts underneath them.
So the blog is broadening. The AI journey continues, but it sits alongside everything else now. Forest School, the products, the occasional existential crisis about whether any of this matters. I'm not going to pretend these things exist in separate compartments, because they don't.
Nobody Writes About This Part
Here's the thing though. Most AI blogs fall into one of two categories. There's the "here's what I built today" crowd - useful, practical, but no narrative arc. And there are the hot take merchants - "AI will replace all developers by Thursday" - who generate clicks but not insight. I tried to do something different: tell the story in order, with context, showing the actual learning curve including the embarrassing bits.
That approach has a structural flaw that nobody warns you about. The field moves faster than the narrative. By the time you've written a thoughtful, contextual post about discovering MCP servers, MCP servers have already evolved twice and everyone's moved on to the next thing. You're writing history while history is still happening.
I don't have a clean solution for this. The chronological posts are valuable - people tell me they're the most useful thing on the blog, precisely because they show the actual journey rather than just the destination. But I also can't pretend the blog exists in a time bubble where the outside world politely waits for me to catch up.
So this post is the compromise. An author's note between chapters. A quick pan across to what's happening in the present before we return to the scheduled programming. I'll probably do this again when the gap gets too wide, because I don't see the pace slowing down.
The Story Continues
The chronological posts aren't going anywhere. The next one covers building the MCP server for Task Board, and I can promise you it involves 123 messages of increasingly desperate debugging, at least two moments where I questioned my career choices, and a breakthrough that happened at the exact point where I'd given up and was about to do it the old-fashioned way. It's a good one. You'll enjoy watching me suffer.
After that, there's the TestPlan MCP server, agent teams, the moment I first saw AI agents coordinating work across multiple repositories simultaneously, and the philosophical crisis that followed when I realised I was spending more time reviewing AI-generated code than writing my own. Each post builds on the last, and each one gets a little more surreal than the previous.
But I wanted to be honest with you about where we are. The blog is behind. I'm writing as fast as I can, but reality has a head start and it's not slowing down. If you're reading the chronological posts and thinking "this must be building to something big" - it is. If you're impatient to know what happens next, I understand. I lived through it and I'm still processing half of it.
The story of how a sceptical developer fell into AI is also the story of how that developer's entire working life restructured itself around a technology he didn't take seriously eight months ago. That's worth telling properly, even if telling it properly means the telling falls behind the living.
Right. Author's note over. Back to the story. Next up: an MCP server, 123 messages, and the kind of debugging that makes you question whether you actually know how computers work.