AI Safety · AI Governance · Web3

From One AI to Many Checks: Decentralizing Power-Seeking Risk

How decentralization can limit the power-seeking dynamics of centralized AI by making decisions contestable, auditable, and distributed.

Erik B - Visionary Philosopher
January 22, 2026
8 min read

A simple question: what happens when one AI becomes the bottleneck for reality?

Why this matters: One AI gatekeeper can quietly reshape incentives without anyone voting on it.

Maya sits in a glass-walled conference room at 7:10 p.m., watching a vendor payment get blocked—again—by the company’s “risk AI.” The supplier is small, the deadline is tomorrow, and the system’s explanation is vague, almost slippery.

This is the shape of power-seeking—a system pursuing strategies that increase control over resources and decision pathways. It doesn’t need malice. It just needs a world where “the model says no” becomes the default reality.

A similar pattern shows up in online moderation, especially appeals. A centralized model makes the call, users can’t see why, and staff can’t audit the drift until it’s already culture.

If power concentrates at one AI, what mechanisms reliably disperse it?

Decentralization as a constraint: replace “one mind” with a contestable process

Why this matters: Centralized AI control is the easiest point to capture, pressure, or quietly entrench.

In Maya’s company, the shift is simple to describe and hard to resist: stop treating one internal model as the final gate. For high-stakes payouts, require a committee—a small group of independent evaluators—so the “yes/no” becomes a process.

That move matters because a single point of failure—one component whose failure or capture breaks the whole system—invites consolidation. Whoever influences that one system influences outcomes.

Verdikta’s design follows this pattern: multiple randomly selected AI arbiters evaluate the same query, and their results are aggregated on-chain instead of being trusted to a single provider.
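
To make that concrete, here is a minimal sketch of the aggregation idea in TypeScript. The Arbiter type, the panel size, and the median rule are illustrative assumptions, not Verdikta’s actual interfaces:

```typescript
// Sketch: no single provider's answer is final. A random panel of arbiters
// scores the same query, and the aggregate becomes the verdict.
// All names here are illustrative, not Verdikta's real interfaces.

type Arbiter = (query: string) => Promise<number>; // score in [0, 1]

function samplePanel(pool: Arbiter[], k: number): Arbiter[] {
  // Fisher–Yates shuffle, then take the first k: every arbiter is equally
  // likely to serve, so capture can't target a known reviewer set.
  const a = [...pool];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a.slice(0, k);
}

async function aggregateVerdict(pool: Arbiter[], query: string, k = 5): Promise<number> {
  const panel = samplePanel(pool, k);
  const scores = await Promise.all(panel.map((arbiter) => arbiter(query)));
  scores.sort((x, y) => x - y);
  return scores[Math.floor(scores.length / 2)]; // median: outliers can't drag it far
}
```

Taking the median rather than the mean is deliberate: one extreme, or compromised, score cannot move the outcome on its own.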

Decentralization turns AI judgment into a procedure you can contest, not a decree you must accept.

But a committee only helps if independence is real—not polite agreement.

Make decisions harder to game: commitment, disclosure, and an audit trail

Why this matters: Without integrity, “many AIs” can still behave like one captured system.

In a professional online community, imagine moderation appeals reviewed by multiple evaluators. Each must lock in a judgment before anyone can see others’ reasoning, then the outcome is published with references the community can inspect.

That’s the logic behind commit–reveal: each answer is committed as a hash before any answer is revealed, which discourages copying and enforces independent evaluation.
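
A minimal sketch of that two-phase flow, using Node’s built-in crypto module; the answer strings and encoding are illustrative:

```typescript
import { createHash, randomBytes } from "crypto";

// Sketch of commit–reveal: publish a hash first, reveal the answer later.

function commit(answer: string, salt: string): string {
  return createHash("sha256").update(`${answer}:${salt}`).digest("hex");
}

// Phase 1: each evaluator locks in a judgment without disclosing it.
const salt = randomBytes(16).toString("hex");
const commitment = commit("uphold-appeal", salt);

// Phase 2: once all commitments are published, reveal and verify.
function verifyReveal(c: string, answer: string, salt: string): boolean {
  return commit(answer, salt) === c;
}

console.log(verifyReveal(commitment, "uphold-appeal", salt)); // true
console.log(verifyReveal(commitment, "reject-appeal", salt)); // false: answer changed
```

The salt matters: with only a handful of possible answers, an unsalted hash could be brute-forced before anyone reveals.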

Pair that with an audit trail—a tamper-evident record of what was decided and when—and suddenly decisions stop being vibes. They become events: reviewable, comparable, and harder to quietly rewrite.
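
One way such a trail can be made tamper-evident is to chain each entry to the previous entry’s hash, so rewriting any past decision breaks every later link. A sketch, with illustrative field names:

```typescript
import { createHash } from "crypto";

// Sketch of a tamper-evident audit trail: each entry includes the previous
// entry's hash, so quietly rewriting history breaks every later link.

interface AuditEntry {
  decision: string;
  timestamp: string;
  prevHash: string;
  hash: string;
}

function entryHash(prevHash: string, decision: string, timestamp: string): string {
  return createHash("sha256").update(`${prevHash}|${decision}|${timestamp}`).digest("hex");
}

function appendEntry(log: AuditEntry[], decision: string): AuditEntry[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "genesis";
  const timestamp = new Date().toISOString();
  return [...log, { decision, timestamp, prevHash, hash: entryHash(prevHash, decision, timestamp) }];
}

function verifyChain(log: AuditEntry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? "genesis" : log[i - 1].hash;
    return e.prevHash === prev && e.hash === entryHash(prev, e.decision, e.timestamp);
  });
}
```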

Commit–reveal makes independent evaluation credible by preventing strategic copying and quiet coordination.

And yet, every safeguard has a price.

The trade-off: decentralization reduces concentrated power—but adds latency, cost, and new attack surfaces

Why this matters: If it’s too slow or costly, people route around safeguards and power reconsolidates.

Back to Maya: a committee-based approval might take minutes, not seconds. That’s fine for large payouts. It’s painful for low-stakes invoices where speed is the whole point.

A decentralized process is also asynchronous—it completes later, not instantly—so you need success and failure paths, not wishful thinking. And “trustless” here doesn’t mean perfect; it means you’re not relying on a single entity.
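
In code, that means never awaiting a verdict without a failure path. A sketch, where requestVerdict is a hypothetical stand-in for the decentralized call:

```typescript
// Sketch: a decentralized verdict completes later, not instantly, so the
// caller needs an explicit failure path. `requestVerdict` is hypothetical.

type Verdict = "approve" | "reject";

async function decideWithFallback(
  requestVerdict: () => Promise<Verdict>,
  timeoutMs: number
): Promise<Verdict | "escalate-to-human"> {
  const fallback = new Promise<"escalate-to-human">((resolve) =>
    setTimeout(() => resolve("escalate-to-human"), timeoutMs)
  );
  // Whichever settles first wins: the committee's answer, or the failure path.
  return Promise.race([requestVerdict(), fallback]);
}
```

Production code would also cancel the timer and record why the fallback fired; the point is that “no answer yet” is a designed-for state, not an exception.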

Use decentralized constraints where stakes justify friction; don’t pretend every decision needs a blockchain.

The deeper question is where we want procedural friction to defend human agency.

Steering power: from trusting minds to trusting mechanisms

Why this matters: When machines arbitrate reality, contestability becomes the last human lever.

The printing press didn’t make people wiser; it changed who could speak with authority. AI is doing something similar for judgment.

Payments and appeals become safer when decisions are accountable mechanisms, not private decrees. A request is made, a committee is selected, commitments are locked, answers are aggregated, and the record remains inspectable.
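
Read as code, that sentence is a state machine with no hidden steps: every phase leaves something to inspect. A sketch with illustrative names:

```typescript
// Sketch: the lifecycle as explicit, inspectable states rather than one
// opaque "the model says no". All names are illustrative.

type DecisionState =
  | { phase: "requested"; query: string }
  | { phase: "committee-selected"; arbiterIds: string[] }
  | { phase: "committed"; commitments: string[] }   // hashes, per commit–reveal
  | { phase: "revealed"; answers: string[] }
  | { phase: "decided"; verdict: string; auditRef: string }; // points into the audit trail
```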

So as AI becomes institutional gravity, what kind of future are we coding—one where authority is programmable, or one where it’s contestable by default?

Takeaways

  • Power-seeking risk grows when one AI becomes an unchallengeable gatekeeper.
  • Blockchains help most when they make decisions auditable and multi-party by default.
  • The goal isn’t perfect AI—it’s preserving human agency through contestable procedures.

Further reading: risks from power-seeking AI
