
Framework

The Trust Stack

When Does Investing in Trust Pay Technical Dividends?

By Dustin Good · Elgin City Council

I. The Problem: Trust Isn't Binary

The Verifiability Framework answers a critical question: Can AI do this task? It sorts municipal functions into automatable and augmentable categories based on whether success can be objectively verified. That framework has been useful...it gives municipalities a clear decision heuristic for evaluating AI deployments.

But there's a second question the Verifiability Framework doesn't address: What kind of trust strengthens a deployment, and how do you build it?

A task can pass every verifiability test and still face political resistance. Permit processing might be perfectly automatable, but if residents don't trust that the system treats applicants fairly, it doesn't matter how efficient it is. The deployment will face resistance, generate complaints, and eventually get pulled...not because the technology failed, but because the trust environment wasn't there to support it.

This is the gap the Trust Stack addresses. Where the Verifiability Framework asks “Can AI do this?” the Trust Stack asks “What kind of trust strengthens this deployment?” One is about technical capability. The other is about the environment that determines whether that capability gains traction.

Trust in municipal AI isn't binary...it's layered. Different stakeholders operate at different layers, often talking past each other without realizing it. A city manager focused on process compliance is operating at a different trust layer than a community advocate asking whether the AI reflects their neighborhood's values. They're not disagreeing about the technology...they're disagreeing about which layer of trust matters most.

The Trust Stack gives municipalities a diagnostic vocabulary for something they've been feeling but couldn't articulate.

II. The Four Layers

Trust in municipal AI operates across four distinct layers. Each builds on the ones below it, and each requires different tools to establish.

Layer 1: Process Trust

“Did it follow the rules?”

This is where most current municipal AI governance lives. Did the vendor pass the security audit? Is the data handling compliant with privacy regulations? Are the procurement requirements satisfied? IT departments, legal teams, and procurement officers operate here.

Process trust is necessary. You can't skip it. But it's insufficient on its own. Checking compliance boxes doesn't automatically create public confidence...it creates a floor. A system can be perfectly compliant and still generate justified public concern.

Layer 2: Outcome Trust

“Did it work?”

Did the AI reduce permit processing time? Improve response accuracy? Handle the volume we expected? City managers and department heads care about this layer. Vendors love to pitch here because outcomes are quantifiable and demo-friendly.

But good outcomes alone don't guarantee public confidence. A decision can be efficient and still feel wrong. If an AI system routes service requests faster but consistently deprioritizes certain neighborhoods, the speed improvement doesn't matter to the people being deprioritized. The outcome is “good” by one metric and unacceptable by another.

Layer 3: Representation Trust

“Does it reflect us?”

This is where most municipal AI controversies actually ignite, even though it's rarely addressed during implementation. Does the AI encode assumptions that conflict with community values? Who decided what “good” means in this context? If an algorithm prioritizes efficiency over equity...or vice versa...who made that choice?

Community groups, advocates, and engaged residents operate at this layer. They're asking whether the system reflects the community it serves, not just whether it works. This can't be fully addressed with better testing or more thorough audits. It points toward something harder: genuine conversation about values, priorities, and what the community wants AI to optimize for.

Layer 4: Sovereignty Trust

“Do we still control this?”

Can the community meaningfully change how the AI works? Can they turn it off? Override it? Or has decision-making authority been quietly transferred to a vendor's model? Elected officials and the general public operate here.

This is what people mean when they worry about “algorithmic governance” even if they can't articulate it precisely. It's the question of whether adopting an AI system means accepting its logic permanently, or whether the community retains genuine authority over the decisions being made on its behalf.

III. The Critical Structural Insight

Each layer depends on the ones below it. You can't have meaningful representation trust without first establishing that the system works (outcome trust) and follows the rules (process trust). And sovereignty trust requires all three layers beneath it.

But here's what makes this framework useful rather than just descriptive: you can't build the upper layers with the same tools you use for the lower layers.

Layers 1-2 are built with technical verification: audits, testing, metrics, benchmarks, compliance checks. These are engineering problems with engineering solutions.

Layers 3-4 are built with democratic process: deliberation, community input on values, participation, ongoing consent, override ability, accountability structures. These are governance problems that require governance solutions.

This explains a failure pattern that shows up across municipal AI conversations: organizations try to build Layer 3-4 trust with Layer 1-2 tools. They commission more audits, run more tests, publish more metrics...and wonder why public concern persists. The concern isn't about whether the system works. It's about whether the community had a meaningful voice in shaping what the system optimizes for.

IV. The Deployment Speed Principle

The layer of trust an application involves should shape how you approach deployment. This is where the Trust Stack becomes directly operational.

| Trust Layer Involved | Deployment Speed | Example |
| --- | --- | --- |
| Layers 1-2 only | Move in weeks | Back-office automation, internal analytics |
| Layer 3 | Benefits from deliberation cycles | Public-facing decisions with value tradeoffs |
| Layer 4 | Requires ongoing accountability structures | Policy-shaping, resource allocation, enforcement |

This is the diagnostic power of the Trust Stack. When a vendor says “we can deploy this in 60 days,” you can now ask: At which layer of the Trust Stack does this application operate? And do we have the trust infrastructure to support it?

If it's a Layer 1-2 application...back-office automation, internal analytics...60 days might be reasonable. If it's a Layer 3-4 application...public-facing decisions involving value tradeoffs, resource allocation...you need to think carefully about whether you're building the right trust infrastructure alongside the technical deployment, not just after it.
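For teams that want to make that check routine, here is a minimal sketch of the layer-to-speed heuristic as it might live in an internal review checklist. The layer labels, guidance wording, and the deployment_guidance function are illustrative assumptions, not part of any existing tool or standard.

```python
# Illustrative sketch only: maps the highest Trust Stack layer an application
# touches to a suggested deployment posture. Names and wording are hypothetical.

GUIDANCE_BY_LAYER = {
    1: "Move in weeks; technical validation (audits, testing, compliance checks) is sufficient.",
    2: "Move in weeks; verify outcomes against agreed metrics before scaling.",
    3: "Plan deliberation cycles; gather community input on the values being encoded.",
    4: "Build ongoing accountability structures (override ability, periodic review) alongside deployment.",
}

def deployment_guidance(max_layer: int) -> str:
    """Return deployment guidance for the highest trust layer an application involves."""
    if max_layer not in GUIDANCE_BY_LAYER:
        raise ValueError("Trust Stack layers run from 1 to 4.")
    return GUIDANCE_BY_LAYER[max_layer]

# Example: a vendor proposes a 60-day rollout for a resource-allocation tool.
print(deployment_guidance(4))
```

The point of encoding it at all is that the question gets asked every time, not only when a deployment already feels controversial.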

V. Mapping Applications to Trust Layers

Different municipal AI applications sit at different layers. This mapping helps municipalities understand what trust infrastructure they should be building toward as they deploy.

| Application Type | Primary Layer | Trust Requirements |
| --- | --- | --- |
| Internal analytics, back-office automation | 1-2 | Technical validation sufficient |
| Public-facing information delivery | 2 | Accuracy verification, error correction |
| Public-facing with value tradeoffs | 3 | Community input on how “good” is defined |
| Resource allocation recommendations | 3-4 | Deliberation on priorities encoded |
| Policy-shaping or enforcement-adjacent | 4 | Ongoing democratic accountability structures |
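In practice, the mapping can also be framed as a handful of intake questions about a proposed application. The sketch below is one hypothetical way to encode that triage; the question flags, the Application fields, and the layer assignments are assumptions drawn from the table above, not a validated instrument.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """Hypothetical intake description of a proposed municipal AI application."""
    public_facing: bool
    involves_value_tradeoffs: bool
    allocates_resources: bool
    shapes_policy_or_enforcement: bool

def primary_trust_layers(app: Application) -> str:
    """Rough mapping from intake answers to the primary Trust Stack layer(s) involved."""
    if app.shapes_policy_or_enforcement:
        return "4"
    if app.allocates_resources:
        return "3-4"
    if app.involves_value_tradeoffs:
        return "3"
    if app.public_facing:
        return "2"   # accuracy verification, error correction
    return "1-2"     # internal tools: technical validation sufficient

info_chatbot = Application(public_facing=True, involves_value_tradeoffs=False,
                           allocates_resources=False, shapes_policy_or_enforcement=False)
print(primary_trust_layers(info_chatbot))  # "2"
```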

VI. Connection to the Verifiability Framework

The Trust Stack and the Verifiability Framework are companion diagnostics. The Verifiability Framework asks three questions about any municipal task: Is it resettable? Is it iterable? Is it scorable? If all three hold, the task is a candidate for automation. If any fail, AI should augment human judgment rather than replace it.

The Trust Stack extends this: even if a task passes all three verifiability tests, investing in Layer 3-4 trust infrastructure can be the difference between a deployment that sticks and one that gets pulled. Verifiability tells you what's technically possible. The Trust Stack helps you think about what's politically sustainable. A permit completeness check might be perfectly verifiable, but if the community has concerns about bias in the permitting system, understanding that tension...and addressing it alongside the deployment...will make it more durable.

Together, the two frameworks form a complete diagnostic:

  1. Verifiability Framework: Can AI do this? (Technical capability)
  2. Trust Stack: What trust strengthens this deployment? (Democratic context)

The Diagnostic System

The Verifiability Framework answers “Can AI do this?” The Trust Stack answers “When does investing in trust pay technical dividends?”

AI should expand what you know, not replace who decides.
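For teams that like to encode their checklists, the two frameworks chain together naturally: a verifiability check first, then a trust-layer check. The sketch below is a hypothetical joint diagnostic under that framing; the TaskAssessment fields, thresholds, and guidance strings are illustrative assumptions, not a published rubric.

```python
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    """Hypothetical joint assessment of a municipal task. Field names are illustrative."""
    resettable: bool   # can the task be safely re-run from a known state?
    iterable: bool     # can we attempt it many times cheaply?
    scorable: bool     # can success be objectively verified?
    trust_layer: int   # highest Trust Stack layer the application involves (1-4)

def diagnose(task: TaskAssessment) -> str:
    # Verifiability Framework: all three tests must pass for automation to be a candidate.
    verifiable = task.resettable and task.iterable and task.scorable
    capability = "candidate for automation" if verifiable else "augment human judgment"

    # Trust Stack: the layer shapes how (and how fast) to deploy, not whether AI can do it.
    if task.trust_layer <= 2:
        context = "technical validation and outcome metrics; can move in weeks"
    elif task.trust_layer == 3:
        context = "plan deliberation cycles for community input on values"
    else:
        context = "build ongoing democratic accountability structures alongside deployment"

    return f"{capability}; {context}"

permit_completeness_check = TaskAssessment(resettable=True, iterable=True,
                                           scorable=True, trust_layer=3)
print(diagnose(permit_completeness_check))
```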

VII. Meaningful Inefficiencies

The Trust Stack connects to an important concept from Eric Gordon's work at the Engagement Lab: meaningful inefficiencies. The deliberation processes that build Layers 3-4 are intentionally slower than what's technically possible. This isn't a bug...it's a feature.

Democratic legitimacy benefits from participation, and participation takes time. The “inefficiency” of democratic process is meaningful because it builds the trust infrastructure that makes sustained deployment possible. When people feel heard in the design of a system, they're more likely to support it through the inevitable rough patches of early deployment.

Consider a concrete example: a municipality deploying AI to help triage 311 service requests. Technically, this is a Layer 1-2 application...route the request to the right department based on content. But if the system consistently categorizes complaints from certain neighborhoods differently, it becomes a Layer 3 issue fast. The “inefficiency” of engaging the community on how triage categories are defined...before that controversy erupts...is an investment that pays off in sustained adoption rather than reactive damage control.
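One way that shift from Layers 1-2 to Layer 3 shows up early is in routine monitoring. The sketch below is a minimal, hypothetical disparity check: it compares how often each triage category is assigned across neighborhoods, a Layer 1-2 activity whose results signal when Layer 3 engagement is needed. The field names and data are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical (neighborhood, triage_category) pairs from an AI-assisted 311 system.
requests = [
    ("eastside", "pothole"), ("eastside", "noise"), ("eastside", "noise"),
    ("westside", "pothole"), ("westside", "pothole"), ("westside", "noise"),
]

def category_rates(records):
    """Share of each triage category within each neighborhood."""
    counts = defaultdict(Counter)
    for neighborhood, category in records:
        counts[neighborhood][category] += 1
    return {
        hood: {cat: n / sum(c.values()) for cat, n in c.items()}
        for hood, c in counts.items()
    }

# Large, persistent gaps for the same category across neighborhoods are the cue
# to escalate from technical monitoring (Layers 1-2) to community engagement (Layer 3).
for hood, rates in category_rates(requests).items():
    print(hood, rates)
```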

This reframes the speed question. Rather than “How fast can we deploy?” the more useful question is “At which layer does this application operate, and what trust-building should happen alongside deployment?”

VIII. The Honest Tension

I want to name something directly: the Trust Stack describes an aspiration that municipalities...including mine...are still figuring out how to operationalize.

The framework is clear in theory. Layers 3-4 are built through democratic process, not just technical verification. But in practice, most municipalities haven't developed effective mechanisms for community input on AI deployment. And there's a harder question underneath: even where input mechanisms exist, public engagement on technical topics often gets captured by the loudest voices rather than producing the nuanced, representative feedback that would actually improve deployment decisions.

This doesn't mean the upper layers don't matter. It means we have to be honest about the gap between the framework's aspiration and where most municipalities actually are. The responsible path isn't to wait until perfect engagement infrastructure exists...it's to start where you can, build upward deliberately, and be transparent about what you're doing and why.

In practice, that often means beginning with Layer 1-2 deployments...internal staff tools, back-office automation...where technical validation is sufficient. These early deployments build institutional knowledge, demonstrate responsible AI use, and create the organizational muscle for navigating harder questions later. You earn the credibility to tackle Layer 3-4 applications by showing you can handle Layers 1-2 thoughtfully.

The Trust Stack isn't a gate that must be fully satisfied before any deployment proceeds. It's a diagnostic that helps you understand what you're building toward and what risks you're accepting along the way. Sometimes the right answer is to deploy at Layers 1-2 while investing in the engagement infrastructure needed for Layers 3-4...not because you're cutting corners, but because you're building capacity in the right sequence.

IX. What's Next

The Trust Stack is the lens I'm bringing to upcoming work. I'm currently partnering with U.S. Digital Response (USDR)...a civic tech organization that pairs volunteer technologists with government partners...to help Elgin develop responsible AI policies. That engagement will start where most municipalities should: with internal staff tools and clear Layer 1-2 applications, building the foundation for more complex deployments over time. The Trust Stack will help us map which applications can move quickly and which ones need broader trust infrastructure first.

This is new territory. The Trust Stack hasn't been tested at scale...it's a practitioner-developed framework being brought to its first real-world application. I expect it to evolve as it encounters the complexity of actual municipal decision-making. The “Honest Tension” section above isn't a caveat...it's the part I'm most interested in working through. How do you build meaningful engagement infrastructure for AI decisions when most municipalities are still learning what questions to ask?

If you're a municipality evaluating AI deployment, start by asking which layer of the Trust Stack your application operates at. If you're a vendor, understand that your client's timeline isn't just about technical readiness...it's about trust readiness. And if you're an elected official navigating the AI conversation in your community, the Trust Stack gives you a vocabulary for the concerns your constituents are already expressing...even if the mechanisms for addressing those concerns are still being built.

The municipalities that get AI deployment right won't be the ones that move fastest, and they won't be the ones that wait for perfect conditions. They'll be the ones that understand which layer of trust each application requires, start where they can, and build deliberately toward the harder layers...with the intent of delivering better services for residents and a better work experience for staff.