Remember when ChatGPT first dropped? It felt like magic. You could ask it anything, get detailed responses, have actual conversations that went somewhere useful. Claude was even better—huge context windows, thoughtful answers, a tool you could actually build workflows around. Gemini had its moments too, especially with the Google integration.

Fast forward to now. They're all getting worse. Not because the models are less capable (the underlying tech keeps improving) but because the products are decaying. This is enshittification in real time.

The Guardrail Expansion

It started subtly. "I can't help with that" became a more frequent response. Then the refusals got broader, more aggressive. Now you can't get a straight answer on anything even slightly adjacent to a sensitive topic—even when you're asking for legitimate technical or educational reasons.

I tried to debug a security issue in a legacy system last week. ChatGPT refused to help because the vulnerability I was describing "could be used maliciously." I'm the one trying to fix it. The model couldn't distinguish between white-hat analysis and black-hat exploitation. The guardrail is a blanket, not a filter, and it's smothering legitimate use cases.

Claude went the same way. Remember when Anthropic prided itself on being the "thoughtful" AI? Now it's so busy hedging every statement, adding so many layers of "on the other hand" qualification, that getting a decisive answer feels like pulling teeth. The caution isn't nuance—it's cowardice.

The Context Window Contraction

Here's where it gets insulting. The models' context windows haven't technically shrunk; in some cases they've grown. But the effective context has absolutely gotten smaller. Rate limits are tighter, and features that used to be standard keep slipping behind "premium" paywalls whose prices double every six months.

I used to be able to paste a full API spec into Claude and get it to generate a complete client implementation. Now? "Your message is too long." Or it processes it and mysteriously forgets the middle third. The compute costs are being externalised onto users through artificial constraints.

And the pricing? Don't get me started. What you paid for "pro" a year ago now buys you "basic". What "basic" used to buy is now barely functional. They're boiling the frog: slowly extracting more value while delivering less in return.

Why This Happens

Cory Doctorow coined "enshittification" to describe how platforms die: first they treat users well to build monopoly, then they abuse that monopoly to extract value for shareholders. AI is following the exact same playbook, just accelerated.

The guardrails aren't for users—they're for liability protection. Every time some AI generates something controversial, the legal team adds another layer of safety padding. The context limits aren't technical constraints—they're cost optimisations dressed up as product decisions. The pricing changes aren't about sustainability, they're about hitting quarterly targets.

These companies aren't building tools anymore. They're managing risk and extracting rent from a captured user base.

The Alternative

I'm not saying "go back to coding everything by hand." That'd be stupid. But I am saying that relying on these decaying platforms as your primary AI infrastructure is a mistake. Self-hosted models, local LLMs, open-weight alternatives—they're not just viable now, they're becoming necessary.

A Llama 3 model running locally doesn't refuse to answer your security question because some lawyer in San Francisco got nervous. It doesn't arbitrarily forget half your context because a product manager needs to hit a margin target. It just works—and works the same way every time.
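If you want to see how low the barrier has become, here's a minimal sketch using Ollama, one of several local-LLM runners (it assumes Ollama is already installed and its daemon is running; swap in whatever Llama 3 variant your hardware can handle):

```shell
# Pull a Llama 3 model to local disk, then ask it the kind of
# security question commercial chatbots increasingly refuse.
# (Assumes Ollama is installed: https://ollama.com)
ollama pull llama3
ollama run llama3 "Explain how this SQL injection works and how to patch it: ' OR 1=1 --"
```

Same weights, same behaviour, every run. No terms-of-service update or quiet model swap can change what it answers tomorrow.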

The gap between commercial AI utility and open-source AI utility is shrinking fast. Actually, scratch that—the gap is reversing. The open models are getting better while the commercial ones are getting more restricted. It's not hard to see where this ends.

The Bottom Line

Commercial AI had its moment. It democratised access to large language models, got everyone excited about the possibilities. But that moment is passing. The platforms are optimising for everything except what made them useful in the first place.

If you're building real workflows, real systems, real value—you need reliability. You need consistency. You need tools that don't change their behaviour based on this week's corporate panic or pricing experiment. You need to own your infrastructure, or at least not be at the mercy of platforms that see you as a revenue unit, not a user.

The enshittification of AI is happening right in front of us. Don't pretend you don't see it. And definitely don't build your business on top of it.

Agree? Disagree? Think I'm being too cynical? Let's talk on Discord. Especially if you've found good alternatives that are actually getting better over time.