Yuval Avidani
AI Just Settled the Typed vs. Untyped Debate - And the Data Is Stunning
A recent GitHub Blog post by Cassidy Williams reveals a statistic that should change how we think about language choice: 94% of LLM-generated compilation errors are type-check failures. This single data point explains a massive shift happening right now in our industry.
We're watching TypeScript overtake both Python and JavaScript as the most-used language on GitHub. The Octoverse 2025 report shows TypeScript grew by over 1 million contributors in 2025 alone, reaching 2.6 million developers total. But this isn't just about one language's popularity - it's about how AI is fundamentally changing which tools we reach for.
The Core Problem: Code We Didn't Write
Here's what we're all dealing with: AI tools are generating more of our code than ever before. GitHub Copilot, Claude, ChatGPT - these tools are writing functions, implementing features, and scaffolding entire projects. The challenge? We're no longer in control of every single line.
Think of it like this: when we write code ourselves, we understand the assumptions and contracts inherent in each function. But when AI generates that code, those implicit understandings aren't automatically transferred. Subtle type mismatches can slip through - sending a string where we expected a number, or returning undefined where we needed an object.
With dynamic languages, these errors only surface at runtime. And that runtime might be in production, after deployment, when a user hits a specific edge case we didn't test. The "move fast and break things" philosophy becomes genuinely risky when the things breaking aren't even code we wrote.
Why Type Systems Are the Safety Net We Need
Type systems solve this problem elegantly. They act as a shared contract between us, our frameworks, and our AI coding tools. Every function signature, every interface, every type annotation is a guarantee about what data flows where.
Here's what this looks like in practice:
// Without types - AI might generate this
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
// What happens when items is undefined? Or when price is a string?
// We only find out at runtime.

// With TypeScript - the contract is clear
interface Item {
  price: number;
  name: string;
}

function calculateTotal(items: Item[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}
// Now the compiler catches mismatches before our code runs.
// AI-generated code must conform to this contract.
The academic research backs this up dramatically. A 2025 study found that 94% of LLM compilation errors were type-check failures. In other words, the overwhelming majority of the mistakes AI makes are exactly the kind a compiler catches before the code ever runs, instead of surfacing later as integration bugs. Our skin would clear, we'd get taller, and we'd certainly have fewer "why is this returning a string now?" debugging sessions.
It's Not Just TypeScript
The trend extends beyond TypeScript. The Octoverse 2025 data shows growth across the typed language ecosystem:
- Luau (Roblox's scripting language) - Saw >194% YoY growth as a gradually typed language. Gradual typing lets us add type safety incrementally, which is perfect for migrating existing codebases (see the TypeScript sketch after this list).
- Typst - A typesetting language with a typed, functional scripting layer, often compared to LaTeX, grew >108% YoY. This shows demand for type safety even in document generation.
- Java, C++, C# - These established typed languages are seeing renewed momentum. Not because they're trendy, but because they provide the guardrails we need when AI is generating scaffolding.
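Gradual typing is easiest to see in TypeScript itself, which is what the rest of this post uses. Here's a minimal sketch (parsePrice is a made-up example, not from any particular codebase) of starting loose and tightening later:

// Step 1: legacy function migrated as-is - 'any' keeps it compiling
// (parsePrice is a hypothetical example for illustration)
function parsePrice(raw: any): any {
  return Number(raw);
}

// Step 2: later, tighten the same boundary - accept 'unknown',
// validate, and return a real number
function parsePriceStrict(raw: unknown): number {
  if (typeof raw === 'string' || typeof raw === 'number') {
    const value = Number(raw);
    if (!Number.isNaN(value)) {
      return value;
    }
  }
  throw new Error(`Invalid price: ${String(raw)}`);
}

// Nothing else has to change on day one - the rest of the codebase
// can keep calling parsePrice until each call site is migrated.

That incremental path is exactly why gradually typed languages pair so well with AI-assisted refactoring: the type boundary can move forward one function at a time.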
From My Experience: The Practical Impact
As a GitHub Star working with AI tools daily, I've seen this shift firsthand. When I'm using GitHub Copilot to generate boilerplate or implement features, having TypeScript as the foundation means the AI's suggestions already respect our type contracts.
The compiler becomes a second pair of eyes, catching integration issues before I even run the code. This is especially valuable in agent mode, where AI tools are iterating on code autonomously. Types give the agent clear boundaries and contracts to work within.
Here's a concrete example from a recent project:
// AI-generated function that respects existing types
interface User {
  id: string;
  email: string;
  preferences: UserPreferences;
}

interface UserPreferences {
  theme: 'light' | 'dark';
  notifications: boolean;
}

// Copilot generated this function, and TypeScript
// ensured it matched our existing contracts
function updateUserTheme(user: User, theme: 'light' | 'dark'): User {
  return {
    ...user,
    preferences: {
      ...user.preferences,
      theme
    }
  };
}
// If AI tried to pass an invalid theme or return the wrong type,
// the compiler would catch it immediately.
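To see the contract at work, here's a quick usage check against the types above (the sample user data is illustrative):

// Sample data for illustration only
const demoUser: User = {
  id: 'u_1',
  email: 'demo@example.com',
  preferences: { theme: 'light', notifications: true },
};

const updated = updateUserTheme(demoUser, 'dark'); // compiles fine
// updateUserTheme(demoUser, 'blue');
// Error: Argument of type '"blue"' is not assignable to
// parameter of type '"light" | "dark"'.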
What This Means for Our Workflow
The implications are significant for how we structure our development process:
- Type-first architecture - We should define interfaces and types before implementing features. This gives AI tools clear contracts to work within when generating implementations (see the sketch after this list).
- Gradual migration paths - Tools like TypeScript allow us to add type safety incrementally. We can start with loose types and tighten them as AI helps refactor.
- Better AI prompts - When asking AI to generate code, including type information in our prompts leads to more reliable output. "Generate a function that takes a User object and returns a formatted string" is much clearer than "make a user formatter."
- Confidence in automation - With types in place, we can trust AI agents to make more autonomous changes to our codebase. The type system prevents them from breaking contracts.
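Here's a minimal sketch of that type-first flow. The Invoice types and names are hypothetical, purely for illustration: we define the contract up front, then let the AI fill in an implementation the compiler can hold to it.

// 1. Define the contract first - this is what we review and own
//    (hypothetical domain types for illustration)
interface Invoice {
  id: string;
  lineItems: { description: string; amountCents: number }[];
}

interface InvoiceSummary {
  id: string;
  totalCents: number;
  itemCount: number;
}

// 2. Declare the signature the AI must satisfy
type SummarizeInvoice = (invoice: Invoice) => InvoiceSummary;

// 3. AI-generated implementation - the compiler rejects anything
//    that doesn't match the declared contract
const summarizeInvoice: SummarizeInvoice = (invoice) => ({
  id: invoice.id,
  totalCents: invoice.lineItems.reduce((sum, item) => sum + item.amountCents, 0),
  itemCount: invoice.lineItems.length,
});

The design choice here is deliberate: the types are the part we write and review by hand, and the implementation is the part we're comfortable delegating.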
The Balance: When Dynamic Languages Still Make Sense
This isn't about declaring typed languages "better" across the board. Dynamic languages like Python and JavaScript are still phenomenal for rapid prototyping, scripting, and projects where we control every line of code.
The shift toward types is specifically about managing code we didn't write ourselves. When AI is generating features, implementing scaffolding, or building entire components, that's when we need the safety net.
Think of it this way: types are like automated tests for our data flow. We wouldn't skip testing in a production codebase with multiple contributors. Similarly, we shouldn't skip type checking when AI is a major contributor.
My Take: This Is Just the Beginning
In my view, this trend toward typed languages is accelerating as AI coding tools evolve. We're moving from simple autocomplete to true agent mode - where AI plans, iterates, and fixes its own mistakes autonomously.
As these agents get more sophisticated, the need for clear contracts and guardrails becomes even more critical. Types aren't just catching errors anymore; they're defining the boundaries within which AI can operate safely.
The data from Octoverse 2025 and the research showing 94% type-check failures aren't anomalies. They're signals of a fundamental shift in how we build software. We're entering an era where the language we choose affects not just our productivity, but our ability to safely leverage AI assistance.
For teams adopting AI coding tools, investing in type systems now will pay dividends. Whether that's migrating to TypeScript, adopting gradual typing, or strengthening type annotations in existing codebases - the safety net is worth building.
Read Cassidy Williams' full analysis on the GitHub Blog: Why AI is pushing developers toward typed languages
