The observation
Most execution infrastructure is designed around process logic: what should happen, in what order, approved by whom. The architecture is rational. The failure mode is human.
I watched a governance process collapse under its own weight. The time it took to make decisions, align stakeholders, and move through approval chains routinely exceeded the deadlines the process was supposed to enforce. The system designed to manage risk was generating risk. Decision fatigue slowed reviewers. Context switching meant approvers were evaluating requests they hadn't thought about in weeks. Alignment meetings consumed more time than the work they were unblocking. The people closest to the work adapted by routing around the system, which meant the system's data no longer reflected reality, which meant the governance reports built on that data were fiction.
The instinct in that situation is to speed up the process. Shorter SLAs, a larger approver pool, parallel review paths. But the problem wasn't velocity. It was architecture. The process assumed that every request had the same risk profile and needed the same depth of review. Redesigning the tiers so that low-risk items self-certified while high-risk ones got deep review reduced cycle time by 80% and improved the quality of review on the cases that actually needed it.
The fix wasn't a process improvement. It was acknowledging that the people inside the system were already telling you it was broken, through their workarounds.
The thesis
Execution infrastructure fails when it models process logic without modeling the humans inside it. Cognitive load, trust dynamics, and behavior under pressure are not soft concerns to address after the architecture is set. They are design inputs that belong in the first draft.
I've built and operated systems across customer health monitoring, program governance, feedback infrastructure, and AI-assisted workflows. The pattern is consistent: the systems that survive contact with reality are the ones that account for how people actually work, not just how they should work. The ones that don't account for it get gamed, ignored, or quietly replaced by shadow processes that do.
What follows is the set of principles I've converged on. Each one was learned from a system that worked or a system that didn't.
Cognitive load as a design constraint
When I build a system, one of the first questions I ask is: what is the cognitive overhead of participating in this?
This isn't about efficiency for its own sake. It's about respect for the person doing the work. A reporting system that demands six hours per week from each assignee is not just expensive. It's a system designed to be gamed. People will copy last week's report, fill in the minimum, and move on. The data flowing upstream will be technically compliant and practically useless.
Reducing that same reporting from six hours to thirty minutes changes the nature of the interaction. At thirty minutes, the system is asking for a status update. At six hours, it's asking for a deliverable. The difference determines whether people engage honestly or perform compliance.
The design implications are concrete (a minimal sketch follows this list):
- Pre-populate everything the system already knows. If the data exists somewhere the system can reach, don't ask a human to re-enter it. Every manual entry point is a friction point where accuracy degrades.
- Make the default path the correct path. If completing the process correctly requires extra steps, the process is designed wrong. The easiest thing to do should also be the right thing to do.
- Measure participation cost, not just output quality. A system that produces perfect reports but burns out the people feeding it is a system with a hidden deadline. It will work until the people inside it stop caring, and you won't see that transition in the data.
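To make the first two implications concrete, here is a minimal sketch of what pre-population might look like. It is illustrative only: the source functions, field names, and the `StatusDraft` shape are assumptions, not a description of any particular reporting tool.

```python
from dataclasses import dataclass, field

# Hypothetical data sources -- in practice, whatever systems already
# hold the information (ticket tracker, CRM, CI, calendar, ...).
def fetch_open_items(owner: str) -> list[str]:
    return ["AUTH-142 rollout", "Q3 vendor review"]

def fetch_last_status(owner: str) -> str:
    return "green"

@dataclass
class StatusDraft:
    owner: str
    open_items: list[str] = field(default_factory=list)
    status: str = "green"
    blockers: str = ""  # the only field a human is asked to type

def build_draft(owner: str) -> StatusDraft:
    """Pre-populate everything the system already knows, so the human
    confirms a draft instead of producing a deliverable."""
    return StatusDraft(
        owner=owner,
        open_items=fetch_open_items(owner),  # pulled, never re-entered
        status=fetch_last_status(owner),     # default = last known state
    )

print(build_draft("asmith"))
```

The design choice embedded here is that accepting the pre-filled draft is the default path, so the easiest action and the correct action are the same action.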
Cognitive load isn't a soft metric. It's a leading indicator of system integrity. When participation cost goes up, data quality goes down. Every time.
Trust as an operating mechanism
I built a customer trust assessment framework for a portfolio of enterprise accounts. The framework included a structured rubric, confidence scoring, and evidence requirements. But the rubric isn't why it worked.
It worked because every assessment conversation was structured around the account team's concerns first, the reporting format second. The first question in every session wasn't "fill out the rubric." It was "what's keeping you up at night about this account?" The rubric was the output container. The conversation was the input mechanism.
This is a design pattern, not a facilitation technique. The system was architected to earn trust from its participants before extracting data from them. The sequence matters:
- Listen before asking. The first interaction with a new participant should surface their concerns, not your data requirements.
- Demonstrate value before requesting input. If the system doesn't give something back to the people feeding it, they'll treat it as overhead.
- Make the data useful to the person generating it. The account team should want the trust assessment because it clarifies their own thinking, not because governance requires it.
- Protect what people share. If candid input gets used against the person who provided it, you've burned the system's credibility permanently. This constraint must be structural, not just cultural.
Trust in a system is not a feeling. It's an observable behavior: people share accurate information voluntarily. When trust is low, information gets filtered before it enters the system. When trust is high, the system sees reality. Every operating model is, at its core, an information-gathering system. Its accuracy is bounded by the trust its participants place in it.
Transition as a design requirement
Every system I build is designed to be owned by someone else. This isn't project management discipline or documentation hygiene. It's empathy applied to organizational continuity.
The question that informs every architectural decision: if I'm not here tomorrow, does this still work?
This constraint shapes design in specific ways (one of them is sketched after the table):
| Design decision | Without transition constraint | With transition constraint |
|---|---|---|
| Process knowledge | Lives in the builder's head | Encoded in runbooks and decision logs |
| Escalation paths | Route through the builder | Route through roles, not individuals |
| Configuration | Optimized for the builder's workflow | Documented with rationale, not just settings |
| Tribal knowledge | Accumulated in context, shared informally | Captured in structured decision records |
| System health | Monitored by the builder's intuition | Observable through dashboards and alerts |
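As one small illustration of "route through roles, not individuals," an escalation map can be keyed by role and resolved against a directory at runtime. The issue types, role names, and directory stub below are illustrative assumptions, not a specific tool's configuration.

```python
# Escalation paths keyed by role, not by a named person.
ESCALATION = {
    "data-quality": ["program-operations-lead", "governance-owner"],
    "customer-risk": ["account-team-lead", "portfolio-owner"],
}

def resolve(role: str) -> str:
    """Look up whoever currently holds a role (stubbed directory)."""
    directory = {
        "program-operations-lead": "on-call program manager",
        "governance-owner": "operations director",
        "account-team-lead": "CSM of record",
        "portfolio-owner": "portfolio director",
    }
    return directory.get(role, "unassigned")

def escalation_chain(issue_type: str) -> list[str]:
    return [resolve(role) for role in ESCALATION.get(issue_type, [])]

print(escalation_chain("data-quality"))
```

When the builder leaves, only a directory entry changes; the escalation logic does not.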
The hardest part of this isn't documentation. It's ego. Building a system that works without you means building a system that doesn't need you. That requires genuinely wanting the next person to succeed, not just saying it in a handoff document.
I've found that systems designed for transition are also better systems. The discipline of making implicit knowledge explicit exposes assumptions you didn't know you were making. Every time I've gone through the exercise of writing "here's why this works this way," I've found at least one decision that no longer made sense.
AI and emotional intelligence
My AI evidence pipeline treats every AI output as untrusted input. Raw collection feeds into verification gates. Analysis is separated from collection. The human reviews evidence before interpretation begins. Confidence scores and data gaps are surfaced, not hidden.
This is usually described as a technical architecture decision. It's not. It's a trust design decision.
The verification gates exist because humans need to maintain judgment authority over high-stakes outputs. When someone uses an AI-assisted draft in a consequential context, their confidence in the result needs to come from their own review of the evidence, not from the AI's apparent confidence. An AI that presents polished output with hidden gaps undermines the human's ability to exercise judgment, even when the output is correct.
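A minimal sketch of what a verification gate along these lines might look like: claims carry source attribution and a confidence score, gaps are disclosed rather than hidden, and the lowest-confidence claims are queued for human review before interpretation begins. The `Claim` structure, threshold, and field names are illustrative assumptions, not the pipeline's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str | None   # attribution; None is a gap to disclose, not hide
    confidence: float    # 0.0-1.0, assigned during collection

def verification_gate(claims: list[Claim], k: int = 3,
                      threshold: float = 0.8) -> dict:
    """Route attention to the weakest claims; nothing is auto-accepted
    without both a source and sufficient confidence."""
    gaps = [c for c in claims if c.source is None]
    needs_review = sorted(
        (c for c in claims if c.confidence < threshold),
        key=lambda c: c.confidence,
    )[:k]  # surface the k lowest-confidence claims for human review
    accepted = [c for c in claims if c.confidence >= threshold and c.source]
    return {"needs_human_review": needs_review,
            "disclosed_gaps": gaps,
            "auto_accepted": accepted}

queue = verification_gate([
    Claim("Renewal risk is low", "account review notes", 0.92),
    Claim("Usage is trending up", None, 0.55),
])
```

The point is not the specific threshold. It is that the human's confidence comes from reviewing what the gate surfaces, not from the polish of the output.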
The same emotional intelligence principles apply:
- Cognitive load: The pipeline is designed so the human's attention is directed to the highest-risk sections, not spread evenly across everything. Surfacing the 2-3 lowest-confidence claims focuses review time where it matters.
- Trust: The system earns trust by showing its work. Source attribution, confidence scores, and explicit gap disclosure mean the human can verify rather than hope.
- Transition: The pipeline is documented so someone else can run it, modify the configs, and extend it to new domains without needing the person who built it.
AI doesn't change the principles. It amplifies the consequences of ignoring them. An AI system that hides its uncertainty trains its users to stop thinking critically. An AI system that surfaces its uncertainty trains its users to think more precisely. The architecture determines which habit forms.
What I'm building next
The principles above converge on a direction I'm actively pursuing.
Teams that own their own playbook
The best outcome of building a system is when the team using it starts modifying it without asking permission. That means the system's logic is transparent enough to be challenged, and the team's ownership is real enough that they feel authorized to improve it. I'm focused on building operating models where the playbook belongs to the team operating it, not the person who wrote the first version.
AI fluency as an organizational capability
AI-assisted program management is currently treated as a personal productivity tool: one person uses AI to do their job faster. That framing undersells the potential and misses the structural opportunity. The real leverage is organizational. When evidence-based AI workflows are embedded in team processes rather than individual toolkits, the quality bar and the institutional knowledge compound across people, not just across sessions.
Scaling emotional intelligence into infrastructure
Individual emotional intelligence helps one person navigate one interaction. Structural emotional intelligence helps every interaction that passes through a system. The opportunity is encoding these principles into the systems themselves: cognitive load budgets in process design, trust dynamics in information architecture, transition requirements in every system spec. This is the difference between a manager who is good with people and an organization whose systems are good for people.
From building systems to building builders
The natural progression of this work is from building systems yourself to creating the conditions for others to build them. The IC-to-builder-of-builders transition means the measure of success changes. It's no longer "does this system work?" It's "can the people I've enabled build the next system without me, and will that system reflect these same principles?"
That's the question I'm working on now.