Your AI Project Doesn’t Have an LLM Problem. It Has a Process Problem. 

Published by Brian French, March 23, 2026

Recently, George Sivulka, the CEO of Hebbia, published a piece through Andreessen Horowitz arguing that vertical software is not being eaten by foundation models. His argument was sharp, and I think it deserves a lot more attention in the business process space than it will probably get.

His core point: the value of enterprise software is not the code. It’s the encoded understanding of how a specific team, at a specific firm, does their specific work. He calls it “process engineering.” He argues no general-purpose AI system, regardless of how powerful it becomes, can replicate that.

As of today, and probably for the foreseeable future, he is absolutely right. And if you are running an enterprise AI initiative right now, the implications should stop you in your tracks.

One quick note on language before we go further

When I use the word “automation” in this piece, I mean the full spectrum: AI deployments, agentic systems, traditional process automation, and everything in between. The line between AI and automation is blurring fast, and treating them as separate conversations is already an outdated way to think about it. So wherever you see “automation” here, assume AI is part of that picture unless I say otherwise.

Now, back to why your AI project probably has the wrong problem diagnosis.

The Last Mile Is Not a Configuration Problem

Sivulka uses the phrase “last mile” in a way I think is far more useful than how it usually gets thrown around in software conversations. He is not talking about final deployment steps or go-live checklists. He is talking about something much harder to quantify.

He states:

“Last mile” is a … recognition that what you’re deploying isn’t just software but the embodiment of how a specific team of specific people does their specific job. The last mile is where all the differentiated value resides.

He describes two teams at the same bank, doing the same type of work, with entirely different standards for what a good output looks like.

  • One team runs due diligence through a 40-page template.
  • The team down the hall does it in a shared spreadsheet that gets emailed around.
  • Both approaches have worked for years.
  • Both approaches are deeply embedded in how those teams function.
  • Both approaches represent institutional knowledge that took a long time to develop and is genuinely hard to replace.

That last mile, he argues, is not 10 percent of the problem. It is the entire problem.

I’ve been saying a version of this for a long time, but usually in the context of process modeling and automation programs. The situation I keep seeing is this: a company decides to pursue AI, or automation, or both. They pick a platform, they stand up a pilot, they get some early results that look promising, and then they hit a wall.

The wall is not the technology. The wall is that nobody documented how the process works, at least not at the level of detail that matters.

They have:

  • SOPs that describe the high-level steps.
  • Documentation written for compliance purposes, not for actual operational guidance.
  • Tribal knowledge sitting in the heads of three people who have been doing the job for twelve years, and those three people are not in the room when the implementation team is asking questions.

Sound familiar?

This Is What BPMN Was Built For

Here is where I want to make a case that will probably be controversial in some corners of the industry: BPMN and structured process modeling are not relics of the pre-AI era. They are exactly the mechanism that Sivulka is describing when he talks about encoding process knowledge into durable systems.

When people say AI will make BPMN obsolete, I think they are confusing two very different things. They are conflating the automation of tasks with the orchestration of processes. They are assuming that because a foundation model can write code, summarize documents, or draft a response in natural language, it therefore understands how your particular claims team, or your particular credit desk, or your particular compliance workflow actually operates.

It does not. It cannot. That is not what it was built to do.

What BPMN does, when done properly, is force an organization to articulate exactly what Sivulka is describing.

Not the generic version of how an insurance claim gets processed.

The specific version:

  • who touches it
  • when
  • under what conditions
  • what exceptions exist
  • what the handoffs look like
  • what the decision criteria are at each gateway

These details are invisible until you try to automate something and realize they are the foundation of the process. In other words, they cannot be ignored.

Sivulka writes that “software is a stored process.” I would argue BPMN is the language you use to specify what that stored process should be. It is the mechanism for making implicit knowledge explicit, for making tribal knowledge transferable, and for making sure the thing you build reflects how your organization works instead of how someone imagined it works during a whiteboard session.
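To make that concrete, here is a minimal sketch, in plain Python rather than BPMN XML, of what "making implicit knowledge explicit" can look like. Every name, role, and criterion here is a hypothetical stand-in, not a real model or engine API; in practice this knowledge would live in a BPMN 2.0 model executed by an engine.

```python
from dataclasses import dataclass

# Hypothetical, illustrative structures. A real deployment would capture
# this in a BPMN 2.0 model, not hand-rolled dataclasses; the point is
# what gets written down.

@dataclass
class Task:
    id: str
    owner_role: str        # who touches it
    entry_condition: str   # when, and under what conditions
    handoff_to: str        # what the handoff looks like

@dataclass
class Gateway:
    id: str
    # Explicit decision criteria: written condition -> next node id.
    # This is the "last mile" detail that usually lives in people's heads.
    routes: dict
    default_route: str     # the documented exception path

# The *specific* version of a claims intake, not the generic one.
claims_intake = [
    Task("triage", owner_role="claims_analyst",
         entry_condition="claim received via portal or email",
         handoff_to="severity_check"),
    Gateway("severity_check",
            routes={
                "amount <= 5000 and no injury reported": "fast_track",
                "amount > 5000 or injury reported": "adjuster_review",
            },
            default_route="manual_exception_queue"),
]
```

Nothing here is sophisticated. The value is that the owner, the entry conditions, the decision criteria, and the exception path are now written down where they can be reviewed, tested, and handed to an automation.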

If you’re wondering where to start, tools like SPADE can accelerate this dramatically by generating BPMN models directly from your existing documents, recordings, and process artifacts. The point is not to spend six months in a modeling exercise before you touch AI. The point is to make sure the process knowledge exists in a form that an AI system can work with.

Better LLMs Make This More Important, Not Less

One of the most interesting observations in the a16z piece is what happened when OpenAI released its o-series models. The conventional wisdom was that more powerful foundation models would thin out the application layer and squeeze out vertical software companies. The opposite happened. Legal AI had an exceptional year because better models made the orchestration layer more powerful, not redundant.

This same dynamic applies directly to enterprise process automation. When you have IBM BAW running as your orchestration layer, and you are deploying agentic AI capabilities within that framework, a more capable underlying model does not replace the need for a well-modeled process. It amplifies whatever is already there.

If you have a well-designed, properly documented, BPMN-compliant process model, a more capable AI makes that process:

  • faster
  • smarter
  • and more autonomous

It can:

  • handle more of the exceptions
  • make better decisions at the gateways
  • complete tasks with less human intervention

But if your process model is vague, incomplete, or missing entirely, a more capable AI just finds new ways to go wrong at scale.
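Here is one hedged illustration of the difference. The function, its names, and the `call_llm` placeholder are all hypothetical, not any product's API; the point is where the decision authority lives.

```python
# Illustrative sketch: call_llm stands in for any LLM client that takes
# a prompt and returns text.

def route_claim(routes: dict, default_route: str, claim: dict, call_llm) -> str:
    """Let a model choose among *documented* routes only.

    routes maps a written decision criterion to a next-step id,
    the way a well-modeled gateway would.
    """
    allowed = set(routes.values()) | {default_route}
    prompt = (
        "Choose exactly one next step id from this list and reply with "
        f"only that id: {sorted(allowed)}\n"
        f"Decision criteria: {routes}\n"
        f"Claim data: {claim}"
    )
    answer = call_llm(prompt).strip()
    # A more capable model makes better choices *within* these criteria.
    # If it improvises, we fall back to the documented exception path
    # instead of letting the error propagate at scale.
    return answer if answer in allowed else default_route
```

With a vague or missing model, there is no `routes` table to constrain against, and a stronger model simply improvises more confidently.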

There is a version of this I have seen many times with RPA: the technology was doing exactly what it was asked to do, but what it was asked to do turned out to be a description of a broken process. The automation made the broken process execute faster, but what good is that? You just have bad results sooner than you would have before you “fixed” the process.

The same risk exists with agentic AI, and arguably it is larger because the failure modes are less obvious and the scope of what the AI can touch is broader.

The Social Contract That Nobody Talks About

Sivulka makes one observation in the piece I think is genuinely underappreciated, and it connects to something I have encountered in every meaningful process improvement engagement.

He calls load-bearing software “a social contract.”

What he means is that when a team has been working within a particular set of tools and processes for years, that system contains more than just functionality. It contains shared expectations, norms, and institutional memory. Replacing it is not just a technical migration; it is a renegotiation of how people agree to work together.

This is why process-first organizations with existing process modeling and architecture investments have an advantage in the agentic AI era that does not get discussed enough.

They already have that social contract partially established.

They have processes that are:

  • already modeled
  • already documented
  • already embedded in how their people operate

The question for them is not how to build a process foundation from scratch. It is how to extend and augment what they have with AI capabilities.

They have a running head start.

What Enterprise AI Process Modeling Means for AI Deployment

If you are in the middle of an AI deployment, or evaluating one, the a16z piece should prompt a very specific question: can you describe, in enough detail to execute it, the exact process you are asking the AI to participate in?

Not the general shape of it.

The details:

  • the steps
  • the decision points
  • the exception paths

If the answer is no, the priority should not be LLM or technology selection. The priority should be process modeling.

You cannot orchestrate what you have not documented. You cannot trust an AI agent to navigate a workflow that you have not yet bothered to define.
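One rough litmus test, with entirely hypothetical names: write the process down as data and check whether every decision branch and exception path actually lands on a documented step.

```python
# Hypothetical audit: can you walk your own process definition end to
# end without hitting an undocumented step? All names are illustrative.

process = {
    "steps": ["receive", "triage", "severity_check", "fast_track", "close"],
    "decisions": {"severity_check": ["fast_track", "adjuster_review"]},
    "exceptions": {"triage": "manual_exception_queue"},
}

def audit(process):
    """Flag decision branches and exception targets no step defines."""
    known = set(process["steps"])
    missing = []
    for node, branches in process["decisions"].items():
        missing += [b for b in branches if b not in known]
    missing += [e for e in process["exceptions"].values() if e not in known]
    return missing

print(audit(process))  # ['adjuster_review', 'manual_exception_queue']
```

Anything an audit like this flags is tribal knowledge, not a model, and it is far cheaper to surface it here than after an agent is running the process in production.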

The organizations that will get the most out of agentic AI over the next three years are not necessarily the ones who move the fastest. They are the ones who do the disciplined work of understanding their processes before they hand them off to a system that will execute those processes at a speed and scale that makes course correction expensive, especially in highly regulated industries.

Process engineering, as Sivulka calls it, is the differentiator. It always has been. AI did not change that. If anything, it raised the stakes for getting it right.

The last mile is not a final step. It is the entire foundation. Build it before you build anything else.

Want to see what process-first AI deployment actually looks like in practice? Check out spade.salientprocess.com and see how SPADE accelerates the process documentation work that makes everything else possible.