
Why Every Piece of Software Got 3-10x Less Valuable

Samuel Colvin on shallow moats, slop forks, and what survives in the agentic era.

May 11, 2026 40m

Samuel Colvin

Expert Insights

Samuel Colvin, creator of Pydantic and founder of the company behind it, makes the case that software just went through a foundational repricing. Every piece of software written before Q4 2024, in his estimate, is three to ten times less valuable than it was. The moat it provided just became that much shallower. AI can clone Redis in Rust on a weekend. It can replicate enterprise platforms in days. The old defensibility playbook stopped working.

He argues that we're somewhere in 1992 again: a fundamental capability change, no clear winners, and nobody (not even VCs) has a clue what's going to take off. The way teams build software is being rewired underneath. Pydantic's 25-person engineering team now operates more like 20 managers, each overseeing three to five AI agents in parallel. Features that took three weeks now take a few hours of pressing OK in Claude Code.

Here are the key insights into his perspective:

    • Every piece of software written before Q4 2024 is three to ten times less valuable than it was, because the moat it provided just became that much shallower under AI-assisted competition.

    • Source-available software as a defensible business model is largely dead. When AI can replicate Redis or CPython in days, "complex code as moat" stops being durable defensibility.

    • Type safety has become load-bearing infrastructure for AI-written code. It gives the AI a fast, benign way to verify semantic correctness without human review.

    • Teams of engineers are reorganizing into teams of managers overseeing parallel AI agents. Pydantic's 25 people effectively operate as 100, with each developer running 3-5 agents simultaneously.

    • LLMs excel at well-defined technical problems (B-tree implementations, sandbox runtimes), but consistently undervalue bold business decisions like raising prices. They're trained on data where loud complainers outnumber silent supporters.
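The type-safety point above can be sketched with a minimal, hypothetical Python example (the `Invoice` model and `total_cents` function are illustrative, not from Pydantic's codebase). Type annotations give a static checker such as mypy a fast, automatic signal when agent-written code drifts semantically, with no human in the loop:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int  # integer cents, deliberately not a float of dollars


def total_cents(invoices: list[Invoice]) -> int:
    """Sum invoice amounts in cents.

    The annotated return type is the contract: a checker like mypy
    rejects any agent rewrite that silently returns a different type.
    """
    return sum(inv.amount_cents for inv in invoices)


# An agent "improvement" such as:
#     return sum(inv.amount_cents for inv in invoices) / 100
# would be flagged by mypy (float is not int) before any human review.

invoices = [Invoice("a", 1250), Invoice("b", 99)]
print(total_cents(invoices))  # 1349
```

The design point is that the checker runs in seconds and never tires, so each of a developer's parallel agents gets immediate, benign feedback on every edit.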

Samuel: "We put our prices up in January. We were hopelessly too cheap last year. One of our team asked an AI: here's the pricing change, imagine you're five different ICPs, what will you think about this? It came back saying people would be furious about the pricing change. We said: we've got to do it, we're going ahead. And sure enough, almost no one complained. A bunch of people messaged just to say: thank goodness you put your price up. The LLM reproduced what people normally say about pricing changes — people complain loudly and are happy quietly. So if you let LLMs just make decisions for you, you would never do a bold thing like actually change how your prices work."

Monterail Team Analysis

What this means for software development teams operating in the agentic era:

    • Reassess what's defensible in your codebase: Complex algorithmic work that can be reproduced from public specifications (databases, runtimes, parsers) is now low-defensibility. Defensibility shifts toward platforms with infrastructure depth, brand trust, and integrations that can't be cleanly unit-tested. Ask yourself: if Claude Code had a week, could it reproduce my product?

    • Treat type safety as critical infrastructure: When AI is writing the majority of your code, type safety becomes the cheapest way to verify semantic correctness without human review. Languages and libraries that lack strong type checking are accumulating hidden costs in the agentic era.

    • Reorganize teams around agent oversight: If your engineers are still writing the majority of their own code, you're operating well below the productivity ceiling. The model that scales now: 1 engineer = manager of 3-5 parallel agents. Features that used to take weeks become hours of supervision.

    • Don't let LLMs make your bold business decisions: LLMs underestimate willingness to pay and overweight historic complaint patterns. They're trained on data where loud complainers outnumber silent supporters. If you let an AI make pricing or strategy calls, you'll consistently default to conventional wisdom. Real opportunity sits outside it.

    • Plan for the headless platform debate: As AI agents become primary consumers of your product, your UI may become optional. Decide deliberately: are you a platform users visit, or an API agents call? Both can work. Pretending you don't have to choose is the failure mode.

    • Bet on talent and adaptability: When the rules of defensibility are being rewritten, the only investment that compounds is bright people who can move fast. Samuel: "I would be making talent bets — finding bright people who can move fast, who have a track record of innovating, and investing in them."