You Can't Automate What You Haven't Defined
Process Diagrams Are the Foundation — Not an Afterthought
I have been sitting with this post for a while. The prompt that finally got me to write it was a recent a16z podcast featuring Steven Sinofsky, a board partner at the firm and former president of Microsoft's Windows division, alongside Aaron Levie and general partners Erik Torenberg and Martin Casado. The conversation covers what AI agents actually are, how they will reshape work, and whether existing processes should dictate how agents operate — or whether agents should just reinvent the workflow from scratch.
The thread that has stayed with me is something Sinofsky said about the nature of the work itself. He observed that algorithmic thinking — breaking work down into discrete, well-defined steps that can be executed consistently — is genuinely hard for the vast majority of people. Most organizations do not operate with that level of process clarity. They have loosely defined ownership, informal handoffs, and a “we’ll figure it out as we see it” approach to edge cases. This works, sort of, when you have experienced people who carry the process in their heads. But it does not scale — not with humans, and certainly not with machines. You cannot automate what you have not defined. And you cannot define what you have never been forced to think through.
That observation crystallized something I have been seeing across tech-enabled services organizations for years. The urgency to adopt AI is real and the opportunity is significant. But the organizations racing to deploy agents on top of undefined processes are going to struggle — not because the technology is not capable, but because they are skipping the foundational step that makes any of it work. That foundational step is the process diagram.
1. An Enforcement Mechanism for Crisp Thinking
The most important function of a process diagram is not documentation. It is the act of forcing people to think through the process with precision — when does something happen, what triggers it, who does it, what do they do it to, and what does the output need to look like for the next step to work. This sounds obvious. In practice, it is genuinely hard and most organizations never do it.
Think about a common operational task in a tech-enabled services organization: delivering an intervention or a campaign to a cohort of members. At face value, this seems straightforward. You identify the population, you deliver the intervention, you track the outcome. But the moment you sit down to actually write the process — to define every state, every decision point, every handoff — the complexity surfaces. Who is included? What are the exclusion criteria, and when are they evaluated? What happens if a member meets inclusion criteria today but an exclusion criterion triggers tomorrow? How do you handle recurrence? What does the handoff to the next step look like, and what data needs to travel with it? What is the definition of “done” for this step?
These are not edge cases. They are the process. And until you have worked through them explicitly, your organization is not really operating a process — it is operating a loose approximation of one that depends on the judgment of a few experienced people to hold it together. That works until it does not: until volume grows, until those people leave, until you try to train someone new, or until you try to hand any part of it to an agent.
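To make the point concrete, here is a minimal sketch of that member-level flow as an explicit state machine. The states, events, and transitions are invented for illustration, not a real eligibility spec, but notice what writing them down forces: a late-firing exclusion, recurrence, and the definition of "done" all have to be legal, named transitions, or the process refuses to run.

```python
from enum import Enum, auto

# Hypothetical states for one member moving through a campaign step.
# Names and rules are illustrative, not a real eligibility specification.
class State(Enum):
    IDENTIFIED = auto()   # met inclusion criteria at evaluation time
    EXCLUDED = auto()     # an exclusion criterion fired, possibly after inclusion
    DELIVERED = auto()    # intervention sent; output handed downstream
    DONE = auto()         # outcome recorded; definition of "done" met

def transition(state: State, event: str) -> State:
    """Every legal move is enumerated; anything else is an error, not a shrug."""
    moves = {
        (State.IDENTIFIED, "exclusion_triggered"): State.EXCLUDED,
        (State.IDENTIFIED, "intervention_sent"):   State.DELIVERED,
        (State.DELIVERED,  "outcome_recorded"):    State.DONE,
        (State.DONE,       "recurrence"):          State.IDENTIFIED,  # re-enters the flow
    }
    try:
        return moves[(state, event)]
    except KeyError:
        raise ValueError(f"Undefined transition: {event!r} from {state.name}")
```

The table of moves is the diagram in executable form. The question "what happens if an exclusion criterion triggers tomorrow?" stops being a hallway conversation and becomes a row you either wrote or did not.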
The discipline of drawing the process diagram is what forces this thinking to happen. It is the enforcement mechanism — not for compliance, but for clarity. It requires the organization to agree, in explicit terms, on how work actually moves. The artifact that comes out the other end is valuable. But the process of creating it is where the real work happens.
2. A Diagnostic Instrument — and a Measurement Framework
Once you have a process diagram, you gain the ability to diagnose problems rather than just observe symptoms. This is the second function, and it follows directly from the first. You can only diagnose what you have defined.
The most common failure mode in operational organizations is that you know something is not working but you cannot locate the break. Volume is dropping. Quality is inconsistent. Certain outcomes are not materializing. Without a defined process, you are tracing a path that only exists in people’s memories. A well-drawn process diagram changes this. When each step is explicit — inputs, outputs, decision criteria, handoff points, desired outcomes — you can measure against it. Where is volume falling off? Where are delays accumulating? Where are handoffs producing malformed inputs for the next step? The diagram gives you coordinates. You can see where things are dropping off, and you can trace back to understand why.
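Here is a toy illustration of what "coordinates" means in practice. The step names and counts are made up, but the mechanic is the point: once the steps are enumerated, locating the worst handoff is a few lines of arithmetic rather than an investigation.

```python
# Illustrative funnel keyed to the diagram's steps; counts are invented.
steps = ["identified", "eligible", "contacted", "delivered", "outcome_recorded"]
counts = {"identified": 1000, "eligible": 840, "contacted": 790,
          "delivered": 460, "outcome_recorded": 430}

def step_conversion(steps, counts):
    """Conversion rate between each adjacent pair of steps in the diagram."""
    return {
        f"{a} -> {b}": counts[b] / counts[a]
        for a, b in zip(steps, steps[1:])
    }

def worst_handoff(steps, counts):
    """The handoff losing the most volume, i.e. where to start diagnosing."""
    conv = step_conversion(steps, counts)
    return min(conv, key=conv.get)
```

With undefined steps there is nothing to key these counts to; with defined steps, "volume is dropping" resolves to a specific handoff you can go look at.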
This diagnostic capability is also the prerequisite for meaningful measurement. I see this pattern frequently: organizations want KPIs and dashboards, they want to know if things are working, but they have not done the process design work first. The result is that the metrics they track are disconnected from the actual flow — they measure activity rather than the quality of outcomes at each step, and they do not have the data elements they would need to actually diagnose a problem. You cannot know what to measure if you have not defined what the process is supposed to produce at each stage. And you cannot ensure that your systems are capturing the right data if you have not mapped the workflow that generates it.
I have spoken about this directly in the context of building data and analytics frameworks — the point being that the framework only works if the underlying workflows and desired outcomes are clearly defined first. The data model follows the process model. When the process is articulated at the level of individual steps, diagnosis becomes tractable: you can decompose a problem, trace the data across the workflow, and identify with precision where something is breaking down and what data you would need to prove it.
3. A Platform for Modular Change
There is a third function that connects the first two to everything happening in AI right now. When your process is defined to the level of individual, well-scoped steps, each step becomes independently modifiable. You can change one component — improve it, replace it, automate it — while understanding the upstream inputs and downstream effects. This is the difference between surgical change and disruptive change, and in a complex operational environment with interdependent workflows, it matters enormously.
Without this decomposition, every modification carries unknown risk. You think you are improving the intake step; you do not realize you have broken how the downstream eligibility check receives its inputs. With a properly drawn state machine as your foundation, the scope of any change is clear. You understand exactly what you are touching, what depends on it, and what the logical flow looks like before and after the change. You can reason about the second-order effects before you make the first-order change.
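A minimal sketch of what a scoped handoff can look like in code, with hypothetical field names: each step's output is a typed contract, so the intake step can be rewritten, improved, or handed to an agent, and as long as the contract still holds, the downstream eligibility check is untouched.

```python
from dataclasses import dataclass

# Illustrative contracts between two adjacent steps. The field names are
# assumptions; the point is that the handoff is a typed interface.
@dataclass
class IntakeOutput:
    member_id: str
    channel: str          # how the member entered the process
    intake_complete: bool

@dataclass
class EligibilityInput:
    member_id: str
    intake_complete: bool

def handoff(out: IntakeOutput) -> EligibilityInput:
    """The only way data moves between the two steps. Change intake however
    you like; if this still holds, the downstream step is unaffected."""
    return EligibilityInput(member_id=out.member_id,
                            intake_complete=out.intake_complete)
```

This is the software-architecture version of "you understand exactly what you are touching": the blast radius of a change is the set of contracts it breaks, which you can enumerate before you make it.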
This modular design discipline is not a new idea — it is a foundational principle of how good systems are built. What the current moment is surfacing is that it applies to operational processes just as it does to software architecture. The organizations that have invested in this kind of process clarity are the ones that can make changes with confidence, can train people against a defined structure, and — when the time comes — can identify exactly where an agent can take on a step and what the boundaries of its ownership should be.
Now You Are Ready to Talk About AI Agents
The thesis of the AI adoption conversation right now is that agents will transform how organizations operate. I agree with that. But the practical question is not whether to adopt agents — it is whether your organization is ready to do it in a way that actually produces durable, measurable results. And that readiness depends almost entirely on whether you have done the process work first.
Organizations that are trying to deploy agents on top of loosely defined workflows are not going to get there. As Sinofsky pointed out, most organizations have not developed the algorithmic thinking required to define their work precisely enough for a human to execute consistently, let alone a machine. The agent is not the bottleneck. The undefined process is.
Once you have your process diagram, the path to incorporating agents becomes deliberate rather than speculative. You can look at each step and ask two concrete questions: what is the volume here, and what is the friction? High volume and high friction is your starting point. The ROI is arithmetic, not abstract — you know the step, the frequency, the cost in human time and error rate, and what good looks like at the output. You can measure an agent against that baseline from the first day you deploy it.
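The arithmetic can be that literal. A back-of-envelope sketch with invented step names, volumes, and error rates: score each candidate step by the human hours at stake per week, inflated by the rework its error rate implies, and the starting point falls out.

```python
# Back-of-envelope prioritization: where does automation pay first?
# Volumes, minutes, and error rates below are invented for illustration.
candidate_steps = [
    {"step": "eligibility_check", "weekly_volume": 1200, "minutes_each": 4,  "error_rate": 0.06},
    {"step": "outreach_drafting", "weekly_volume": 300,  "minutes_each": 12, "error_rate": 0.02},
    {"step": "outcome_logging",   "weekly_volume": 900,  "minutes_each": 2,  "error_rate": 0.10},
]

def friction_score(s):
    """Human hours at stake per week, inflated by rework from errors."""
    hours = s["weekly_volume"] * s["minutes_each"] / 60
    return hours * (1 + s["error_rate"])

best = max(candidate_steps, key=friction_score)
```

The same numbers that pick the step also set the baseline: the agent's performance is measured against the hours and error rate you already wrote down, from day one.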
More importantly, you can introduce agents incrementally, in the same way you would expand the scope of a new team member. You do not give a new hire the entire operation on their first day. You give them a clearly scoped responsibility, define what success looks like for that scope, and measure their performance against it. Over time, as they prove reliable, you expand their ownership into adjacent steps. You maintain oversight of the edge cases until you are confident the agent can handle them. This is just good management — and it applies to agents precisely as it applies to people.
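That gating logic can be written down too. A sketch, with assumed thresholds: expanding an agent's ownership requires both enough volume and a measured success rate over that volume, the same bar you would hold a new hire to before giving them adjacent responsibilities.

```python
# Illustrative scope-expansion policy; the threshold and sample floor are
# assumptions, not recommendations.
def ready_to_expand(recent_outcomes, threshold=0.95, min_samples=200):
    """Expand ownership only with enough volume at high enough accuracy.

    recent_outcomes: list of 1 (step handled correctly) / 0 (needed human fix).
    """
    if len(recent_outcomes) < min_samples:
        return False  # not enough track record yet, regardless of accuracy
    return sum(recent_outcomes) / len(recent_outcomes) >= threshold
```

The point is less the numbers than the shape: expansion is a decision made against recorded outcomes at a defined step, not a feeling that the agent "seems to be doing fine."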
The inverse is also true: just as you cannot develop a person — give them real feedback, set meaningful expectations, help them grow — without a clear definition of their role and responsibilities, you cannot govern an agent without the same clarity. You will not know if it is succeeding. You will not know what to fix when it fails. You will not know when it is safe to expand its scope. The process diagram is the job description. Without it, you are not running an agentic workflow. You are running a black box and hoping for the best.
The Best Practices Have Not Changed
In the midst of all the excitement around AI and the rapid pace of change, it is easy to lose sight of the fundamentals. Careful process design, modular systems, defined ownership, measurement frameworks — these are not legacy ideas that the current moment has made obsolete. They are the foundation. And if anything, they matter more now than they ever have.
The reason is straightforward: agents execute at a speed and scale that amplifies both what is working and what is not. A poorly defined process that a skilled human can navigate through judgment and context becomes a systematic failure mode at machine speed. The edge cases that your most experienced people handle instinctively — and that never get written down — become the failure surface that your agents will hit repeatedly.
The organizations that will successfully adopt and expand AI are not the ones that moved fastest in the short term. They are the ones that invested in process clarity first — that took the time to draw the diagram, define the steps, instrument the workflows, and develop the organizational fluency to think algorithmically about their own operations. That work is harder than buying a tool or deploying a platform. It is also the work that cannot be skipped.
Till next time,
Alphan
