I. The Disruption is Real (But Misunderstood)
The past week in the markets felt less like a correction and more like a capitulation. When software stocks collectively shed 20% of their value in days, it triggered the usual chorus of binary takes, mostly screaming that "Software is Dead".
The trigger was specific: the January release of AI tools like Clawdbot. For the first time, the market stopped treating AI as a background feature and started treating it as a credible threat to the software layer itself.
My initial reaction was skepticism. On a technical level, these agents are not magic; they are just efficient engineering. I wondered, "Haven't we been through this hype cycle before?" But dismissing the market’s reaction as irrational is a mistake. The sell-off is not panic; it is rational risk management in the face of uncertainty.
Think of it as hearing a structural creak in your house. You do not know for sure if the roof is collapsing, but the rational move is to get out of the house until you know. Investors are exiting because the threat to software margins has moved from "theoretical" to "plausible".
Why This Time is Different
We have seen this movie before. In 2023, ChatGPT had just launched. AI hype spiked, threatened software valuations, and then receded when companies realized the models were good at poetry but bad at complex logic. The incumbents posted strong earnings, and everyone went back to business as usual.
This cycle feels different because the friction has dropped significantly. We are seeing a convergence of new concepts like "Skills" and "Cowork" in tools like Claude, along with the emergence of usable agents like Clawdbot.
It is not that these agents are perfect or fully reliable yet. They are not. But they have crossed a threshold of usability that proves the concept is viable.
In 2023, AI was a concept car on a rotating stage, beautiful to look at but purely theoretical. Today, it is a prototype being driven around the parking lot. It may still be clunky, and it is definitely not ready for the highway, but the fact that it moves under its own power changes the conversation. The threat has moved from "is this physically possible?" to "how soon will this be good enough?"
The Real Risk (Not What Wall Street Thinks)
The prevailing narrative is that AI will replace all software companies. That is the wrong lens. The probability of massive incumbents vanishing overnight is low.
The actual risk is more subtle, and therefore more dangerous: the dismantling of traditional moats.
Based on the market's reaction, three specific barriers that SaaS companies have relied on for decades are now under fire:
- High Cost of Entry & Scale: Historically, the sheer cost of engineering and the advantage of scale kept competitors out. AI coding tools lower this barrier, allowing smaller teams to build "good enough" alternatives cheaply.
- User Habits & Interface: Companies protected themselves with complex interfaces that users learned and did not want to leave. If an AI agent can execute tasks via API without the user ever opening the app, that "interface stickiness" dissolves.
- The Per-Seat Business Model: Most SaaS revenue grows linearly with headcount. If AI allows companies to do the same work with fewer people, the per-seat revenue model faces a structural contraction.
The danger is not that software companies die immediately. It is that their "hidden value" (the ability to charge premium prices for scale and seats) gets stripped away. If you lose your pricing power and your interface lock-in, you are no longer a high-margin technology business; you are a utility.
II. Three Disruption Vectors
The question "Will AI replace software?" is too broad. Instead, we are seeing three specific currents eroding the traditional SaaS business model. Each demands a distinct response.
Vector 1: Commoditization Through Low-Cost Alternatives
AI has dropped the cost of building software by an order of magnitude. Small teams can now spin up "good enough" alternatives to core features in weeks, not years. This supply shock destroys the premium pricing incumbents once charged simply for existing.
Vector 2: The Per-Seat Model Under Siege
For two decades, SaaS revenue grew linearly with customer headcount. AI breaks this link. If AI tools allow one employee to do the work of five, your customer will hire fewer people.
Under the per-seat model, you are now financially punished for your customer's efficiency. A structural conflict exists where the customer wants to automate work, but the vendor needs them to add seats.
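The structural conflict can be made concrete with a toy revenue model. All numbers below are hypothetical illustrations, not data from any real vendor; the point is only the shape of the two curves when AI compresses customer headcount.

```python
# Illustrative only: hypothetical numbers showing how per-seat revenue
# contracts when AI lets a customer do the same work with fewer people.

def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Revenue grows (and shrinks) linearly with customer headcount."""
    return seats * price_per_seat

def usage_revenue(tasks: int, price_per_task: float) -> float:
    """Revenue tracks the work performed, regardless of who performs it."""
    return tasks * price_per_task

# Baseline: 100 seats at $50/month, performing 10,000 tasks at $0.50/task.
before_seats = per_seat_revenue(100, 50.0)   # $5,000/month
before_usage = usage_revenue(10_000, 0.50)   # $5,000/month

# AI scenario: the same 10,000 tasks now need only 20 people.
after_seats = per_seat_revenue(20, 50.0)     # $1,000/month, an 80% contraction
after_usage = usage_revenue(10_000, 0.50)    # $5,000/month, unchanged
```

Under these assumptions, the same customer efficiency gain that cuts per-seat revenue by 80% leaves usage-based revenue untouched, because the work itself did not go away.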
Vector 3: The Entrance Layer is Migrating
This is the most critical shift. In the traditional workflow, users open your app to get a job done. In the agentic workflow, users state an intent to an AI model ("Update the project status"), and the AI orchestrates the API calls. The user may never see your interface.
The Large Language Model is becoming the new distribution layer. It, not your app icon, now owns the user's attention.
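A minimal sketch of the agentic workflow described above: the user states an intent, and an "agent" maps it to an API call without the user ever opening the vendor's app. The intent matching and the project API here are hypothetical stand-ins (a real agent would use an LLM to pick the tool and its arguments), included only to show where the interface disappears from the loop.

```python
# Hypothetical sketch: an agent routes a natural-language intent to a
# vendor API. The vendor still does the work, but its UI is never seen.

def update_project_status_api(project: str, status: str) -> dict:
    # Stand-in for a vendor's REST endpoint, not a real product's API.
    return {"project": project, "status": status, "ok": True}

# Tool registry: the only surface the agent interacts with.
TOOLS = {"update_project_status": update_project_status_api}

def agent(intent: str) -> dict:
    # A real agent would have an LLM choose the tool and arguments;
    # a keyword match simulates that decision here.
    if "status" in intent.lower():
        return TOOLS["update_project_status"]("Apollo", "on track")
    raise ValueError("no matching tool for intent")

result = agent("Update the project status")
```

The design point: once the agent owns this routing step, the vendor's defensibility lives in the API behind `TOOLS`, not in the screen the user used to log into.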
III. What Will NOT Be Disrupted (Your Defensible Territory)
While the disruption vectors are real, the narrative that "AI eats everything" is structurally flawed. AI models are probabilistic engines; they are brilliant at predicting the next token but terrible at guaranteeing the next outcome. This limitation creates clear, defensible territories where traditional software architecture remains superior.
Complex, Multi-Step Workflows
AI excels at narrow, discrete tasks with clear goals, such as "write a SQL query" or "summarize this PDF". However, it struggles significantly with end-to-end enterprise workflows that involve edge cases, exceptions, and human judgment.
In a complex business process—like closing a financial quarter or managing a supply chain disruption—the value lies not in the happy path but in the handling of deviations. These workflows require a rigid scaffolding that probabilistic models cannot reliably provide on their own. The software that defines, manages, and constrains these complex chains of logic will remain the operating system of the enterprise.
Accountability and Auditability
The single biggest barrier to AI adoption in the enterprise is not capability; it is liability. When an AI agent executes a trade, deletes a database record, or approves a loan, the immediate question is: who is responsible if it is wrong?
This tension creates a permanent demand for a governance layer. Enterprises need software that wraps AI execution in strict audit trails, permission systems, and rollback mechanisms. The value shifts from the doing of the task to the verifying of the task. Software that provides the "chain of custody" for AI decisions becomes essential infrastructure for compliance.
Proprietary Data and Domain Expertise
General-purpose models are trained on the public internet, which makes them essentially "average" at everything. They lack the context of private, vertical-specific realities.
Vertical SaaS companies that possess closed-loop data, proprietary workflows, and expert-built logic (such as EDA tools for chip design or specialized clinical trial software) hold a significant advantage. This domain expertise acts as a firewall. A general model cannot hallucinate its way through a strict regulatory framework or a physics-based simulation. The deep, messy, un-scrapable knowledge embedded in these platforms is hard to replicate.
Infrastructure and Low-Level Tooling
Paradoxically, an AI-first world requires more traditional software infrastructure, not less. AI agents are voracious consumers of compute, data, and APIs.
This insatiable demand strengthens the position of the underlying layers: cloud providers, security platforms, observability tools, and data pipelines. As the application layer becomes more fluid and agent-driven, the infrastructure layer that powers, secures, and monitors these agents becomes the bedrock of the new stack. These are the pick-and-shovel plays that grow directly alongside AI consumption.
IV. Why the “AI Eats All SaaS” Story Is Overstated
The bear case for software rests on simple logic: if AI drives the cost of writing code to zero, the value of software must also fall to zero. This argument fails because it confuses "feature replication" with "workflow replacement".
Feature Replication vs. Workflow Replacement
Investors often mistake the visible part of software for its total value. AI is exceptionally good at cloning features. A single developer with an AI assistant can now build a functional clone of a popular tool's interface in a weekend.
However, enterprises do not buy features; they buy workflows, controls, and reliability. The value of a platform like Workday is not its interface. The value is in the permissions architecture, the audit logs, and the deep integrations with legacy systems.
Think of it like the difference between a movie set and a functioning building. From the street, a facade built in a day looks identical to a skyscraper. But if you try to move a business into the movie set, you realize there is no plumbing, no electricity, and the walls cannot hold weight. Incumbents own the infrastructure, not just the facade.
The Persistence of the "Hard Parts"
The "AI makes coding free" argument assumes that coding is the bottleneck. It is not. The real friction points are edge cases, data quality, compliance, and change management.
Faster coding increases the supply of apps, but it does not fix the messy reality. An AI might generate a contract management app instantly, but it cannot automatically clean twenty years of disorganized data or navigate multinational compliance needs without human tuning. The software that manages this complexity retains its value because the complexity itself has not gone away.
V. The Winner’s Scorecard: 4 Filters for “AI-Resilient” Software
We must shift from abstract debates about whether "AI disrupts SaaS" to a practical framework for underwriting and operations. Not all software is equally exposed. The following four filters help separate the companies likely to survive the reshuffling from those that will be hollowed out.
1. Pricing Models that Survive Efficiency
The Core Idea: Companies that already charge by usage are safer. Those charging by headcount face a double trap. Even if they successfully pivot to usage-based pricing to capture AI value, their profit margins will be permanently compressed.
The Hard Truth: In the traditional per-seat model, the marginal cost of serving one more user is effectively zero, creating massive gross margins. In a usage-based AI model, every API call and inference incurs real costs in the form of compute tokens. A switch in pricing preserves revenue but destroys the "zero marginal cost" economy that made SaaS so lucrative in the first place.
- Operator Test: "If we switch to usage pricing, does our gross margin hold up against the cost of inference?"
- Investor Signal: Avoid companies where revenue is strictly tied to seats in highly automatable roles; prefer those with existing usage-based unit economics.
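The operator test above is just arithmetic, so it can be sketched directly. The figures below are assumed for illustration (no real vendor's economics), but they show why a pricing pivot can preserve revenue while gutting gross margin.

```python
# Hypothetical gross-margin comparison: per-seat (near-zero marginal cost)
# vs usage-based pricing that pays for inference on every call.

def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_revenue) / revenue

# Per-seat: 1,000 seats at $50/month; serving costs ~$2 per seat.
seat_margin = gross_margin(1_000 * 50.0, 1_000 * 2.0)

# Usage-based: 5M API calls at $0.01 each; inference costs $0.006 per call.
# Revenue is identical ($50,000/month), but every call burns compute.
usage_margin = gross_margin(5_000_000 * 0.01, 5_000_000 * 0.006)

print(f"per-seat margin:  {seat_margin:.0%}")   # 96%
print(f"usage margin:     {usage_margin:.0%}")  # 40%
```

Same top line, very different business: under these assumptions the pivot turns a 96% gross-margin software company into a 40% gross-margin compute reseller.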
2. Low-Error-Tolerance Domains
The Core Idea: The lower the tolerance for mistakes, the more durable the software. AI can propose actions, but regulated or high-stakes industries need rigid systems that prove what happened, when, and why. In industries like payments or healthcare, the inability to explain an error is an existential risk.
Think of it as the difference between a music recommendation and a bank transfer. If the AI suggests a bad song, you skip it. If it misroutes a million dollars, you get sued. The software that prevents lawsuits is safe.
- Operator Test: "Can we guarantee a chain of custody for every decision and data point, including the ability to roll back?"
- Investor Signal: Compliance-first workflows and sticky retention in regulated sectors.
3. Deep Vertical Specialization
The Core Idea: Vertical software wins when the "messy truth" of an industry is private and not learnable from the public internet. Long-tail edge cases and expert constraints become defensible advantages that general models cannot replicate. A general LLM might write code, but it cannot navigate the closed-loop physics of chip design or the localized regulations of clinical trials without proprietary data.
- Operator Test: "Do we own unique workflows and data that a competitor cannot replicate just by scraping the web?"
- Investor Signal: High switching costs driven by deep integration into daily operations, not just UI preference.
4. Backend Moats Over Frontend Moats
The Core Idea: If an AI agent becomes the primary interface, UI differentiation weakens, and backend capability matters more. The "entrance layer" moves away from the app, so defensibility must come from execution, data governance, and integration depth.
- Operator Test: "If customers stopped logging into our dashboard daily but still used our API, would we still control the workflow and the value?"
- Investor Signal: Companies that are API-first and act as a "System of Record" rather than just a "System of Engagement".
These filters do not guarantee winners, but they sharply improve your odds. They tell you where to spend capital and where to build defenses now.
VI. Closing: The Market’s “Rational Exit” and the Real Opportunity
The recent sell-off in software is not an overreaction. It is a rational pricing of new risks. When the fundamental assumptions of an industry (pricing power, distribution, and margin structure) are challenged, the market correctly discounts uncertainty, even before the specific outcomes are known.
However, the opportunity is just as rational as the exit. Markets are efficient at pricing fear but inefficient at distinguishing nuance in the middle of a storm. Right now, every software company is being painted with the same brush.
This creates a window for disciplined investors and operators. The scorecard above provides a way to separate the "UI rent collectors" (who are effectively dead but don't know it yet) from the "moat reset survivors" (who will rebuild their value on firmer ground).
We do not need to know exactly when AI agents will become dominant. We simply need to identify which structures survive that transition. The goal is not to predict the exact date of the takeover; it is to build and back the businesses that are engineered to win when it happens.