
AI’s Got a Ferrari Engine while Infrastructure’s Got a Shopping Cart

April 15, 2026
Research
By Sasivarnan Kanaghasalam Sathyapriya

Why is AI infrastructure lagging behind on AI adoption even though it powers the whole AI ecosystem as its backbone? This question has bugged me for a long time, especially given the pace at which the industry is growing. We are literally running Formula 1 cars (cutting-edge AI applications) on dirt roads (the AI infrastructure layer). To understand why I call infrastructure a dirt road, we need a quick recap of the current AI application layer.

And one of the most electric examples? Vibe coding.

The Rise of "Just Describe What You Want"

The term was coined by Andrej Karpathy, co-founder of OpenAI and former AI lead at Tesla, back in February 2025. The idea is dead simple: you describe what you want in plain English, and AI writes the code. No syntax. No Stack Overflow rabbit holes. Just intent → output.

And it's not a toy anymore. 25% of Y Combinator's Winter 2025 batch had codebases that were 95% AI-generated. Google's CEO said over 25% of Google's code is now AI-generated. Microsoft's Satya Nadella confirmed 30% of their code is written by AI. Spotify's best developers reportedly haven't written a single line of code since December 2025, shipping 50+ features using AI coding workflows instead. Anthropic's own head of Claude Code hasn't personally written code in over two months. 70–90% of Anthropic's code is now AI-generated.

Platforms like Claude Code and Cursor have gotten so good that someone with zero coding experience can prototype an idea, test it, iterate on it, and ship it. The barrier to building software has effectively dropped to zero. As Microsoft's David Fowler put it: "Anyone can do it."

These are facts that everyone is talking about these days. But here’s the thing nobody wants to talk about…

The Infrastructure Behind All This? It's Not Ready.

Let me draw a parallel that I think captures this perfectly: the levels of autonomous driving.

| Level | Autonomous Driving | AI Analogy |
|-------|--------------------|------------|
| L0 | No automation – driver does everything | Manual coding, no AI assistance |
| L1 | Basic assist (cruise control, lane keeping) | Code autocomplete, simple suggestions |
| L2 | Partial automation – car handles steering + acceleration, driver supervises | AI pair programming – Copilot-style tools, human still in control |
| L3 | Conditional automation – car drives in defined conditions, human takes over when asked | Vibe coding – AI handles entire features, human reviews and guides |
| L4 | High automation – car drives itself in defined domains, no human fallback needed | Autonomous coding agents – Claude Code running 30+ hour sessions, multi-agent teams |
| L5 | Full automation – no human needed, anywhere, anytime | Fully autonomous software development – we're not here yet |

Coding IDEs like Claude Code and Cursor? They're solidly at L3, pushing into L4. Claude Code can now run autonomously for 30+ hours on complex tasks, deploy multi-agent teams that tackle different parts of a project simultaneously, and even use sandboxing to work securely without constant permission prompts. That's not autocomplete. That's an autonomous system with real agency.

AI infrastructure? It's stuck at L1–L2. You still have to babysit everything. Manual cluster provisioning. Brittle scaling. Disconnected systems that don't talk to each other. Salesforce's 2026 Connectivity Report found that 50% of deployed AI agents operate in complete isolation - they can't share context, can't coordinate, can't even see what the agent next to them is doing. 96% of organizations report barriers to using data for AI, with 40% pointing directly at outdated IT architecture.

This is a big gap, and closing it requires change at the foundational level.

We Need a Total Infrastructure Overhaul

Here's what I believe, and I'll say it plainly: we need a new operating system. Reimagined from scratch for the AI era.

Not a patched-together Frankenstein of legacy tools. Not another Kubernetes wrapper with a marketing rebrand. A genuinely new, AI-native distributed operating system - built to run clusters of compute, not a single PC.

Because let's be honest - even our PC operating systems are archaic. Windows. macOS. They were designed for an era of mice, keyboards, and file folders. An era before AI. They're beautiful relics of a computing paradigm that's rapidly becoming irrelevant.

The numbers back this up. Deloitte's 2026 Tech Trends report explicitly states that enterprise infrastructure is "misaligned with AI's unique demands." DDN's 2026 State of AI Infrastructure Report found that 44% of IT leaders say infrastructure constraints are the top barrier to expanding AI, and a staggering 65% of AI infrastructure sits idle while still consuming power. The American Action Forum is blunt: the transition to 2026 "places infrastructure and regulation at the core of the AI agenda."

As I said at the start: we're trying to run a Formula 1 car on dirt roads.

If I Had to Imagine an AI-Native OS...

OK, don't judge me - but the first thing that comes to mind is Jarvis from Iron Man. 😄

Think about it though. Seriously. Voice-enabled. Context-aware. Proactive, not reactive. Non-deterministic - it doesn't just follow scripts, it reasons. It anticipates what you need before you ask. It orchestrates complex systems behind the scenes while presenting you with a simple, intuitive interface.

Now here's the question that keeps me up at night: we have all the tools needed to build this. Large language models. Voice interfaces. Real-time reasoning. Multi-agent orchestration. Context-aware computing. The building blocks are all here.

So why don't we have it yet???

That's the trillion-dollar question. And I think the answer is that we need to build something for distributed compute at scale, not for a single machine.

Enter Aranya

Aranya is building exactly this.

The first AI-native distributed operating system. Not another layer on top of legacy infrastructure - a fundamentally new OS designed for clusters. Built to deploy 1000+ node GPU clusters in minutes. Built with ClusterdOS, a portable Kubernetes operating system that enables true multi-tenancy and federated architecture. Built to manage hundreds of clusters from a single dashboard, with unused compute automatically recycled to cut costs by up to 30%.
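To make the idle-compute-recycling idea concrete, here is a minimal sketch of the underlying scheduling problem: given several clusters with spare GPU capacity, greedily place pending jobs onto the largest idle pools. Everything here is hypothetical illustration — the `Cluster` class, the `recycle_idle` function, and the greedy policy are my own simplification, not ClusterdOS's actual API or algorithm.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """Hypothetical view of one cluster's GPU capacity."""
    name: str
    total_gpus: int
    allocated_gpus: int

    @property
    def idle_gpus(self) -> int:
        # Capacity that is powered on but doing no work.
        return self.total_gpus - self.allocated_gpus

def recycle_idle(clusters: list[Cluster], pending_jobs: dict[str, int]) -> dict[str, str]:
    """Greedily place pending jobs (name -> GPUs needed) onto idle capacity.

    Largest jobs are placed first, each onto the cluster with the most
    idle GPUs that can fit it. Returns job name -> cluster name.
    """
    placements: dict[str, str] = {}
    for job, need in sorted(pending_jobs.items(), key=lambda kv: -kv[1]):
        for cluster in sorted(clusters, key=lambda c: -c.idle_gpus):
            if cluster.idle_gpus >= need:
                cluster.allocated_gpus += need  # claim the idle capacity
                placements[job] = cluster.name
                break
    return placements

# Example: two clusters with spare capacity, two queued jobs.
clusters = [Cluster("east", 8, 6), Cluster("west", 16, 4)]
placements = recycle_idle(clusters, {"train": 8, "eval": 2})
print(placements)  # both jobs land on "west", which had 12 idle GPUs
```

A real federated scheduler would also weigh data locality, tenancy boundaries, and preemption, but even this toy version shows where the "65% of AI infrastructure sits idle" number leaves money on the table: the idle GPUs are already powered and paid for.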

It’s not another wrapper. Not another band-aid. It’s the paved road.

The AI revolution has its application layer. It has its models. What it's been missing is the infrastructure to actually run at the scale this moment demands.

Current solutions work, just not well enough. It’s about time that changed.