About the AEGIS Initiative

What AEGIS Is

AEGIS (Autonomous Enforcement Governance for Intelligent Systems) is a governance-first security framework for autonomous AI systems. It defines enforceable constraints that bind AI capability to human-authorized policy at every layer of the stack.

AEGIS is not a product. It is a set of specifications, governance documents, and reference implementations designed to make AI systems auditable, constrained, and transparent by default.


Three-Layer Architecture

The AEGIS ecosystem is organized into three distinct layers with a clear separation of concerns:

1. Constitution

The foundational governance charter. The AEGIS Constitution defines the immutable principles that all AEGIS-governed systems must enforce. It is versioned, citable, and openly licensed. The Constitution is the what — the constraints that must hold.

2. Governance

The enforcement architecture. The governance layer translates constitutional principles into deterministic rules, state machines, and audit requirements. It defines how constraints are enforced at runtime — through bounded execution, explicit threat classification, versioned authority, and structured persistence.
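The enforcement flow described above can be pictured as a small state machine. This is a minimal sketch only; the class, state, and method names are illustrative and are not drawn from any AEGIS specification:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical lifecycle states for a governed action.
class State(Enum):
    REQUESTED = auto()
    AUTHORIZED = auto()
    DENIED = auto()
    EXECUTED = auto()

@dataclass
class GovernedAction:
    name: str
    policy_version: str                 # versioned authority: which ruleset applied
    state: State = State.REQUESTED
    audit_log: list = field(default_factory=list)

    def _record(self, event: str) -> None:
        # Structured persistence: every transition leaves an audit record.
        self.audit_log.append((self.state.name, event))

    def authorize(self, allowed: bool) -> None:
        # Governance before execution: policy is evaluated before any action runs.
        if self.state is not State.REQUESTED:
            raise RuntimeError("policy must be evaluated exactly once")
        self.state = State.AUTHORIZED if allowed else State.DENIED
        self._record("policy_evaluated")

    def execute(self) -> str:
        # Bounded execution: only an explicitly authorized action may run.
        if self.state is not State.AUTHORIZED:
            raise PermissionError(f"{self.name}: not authorized")
        self.state = State.EXECUTED
        self._record("executed")
        return "ok"
```

In this sketch, an action that is never authorized has no path to the EXECUTED state, and the audit log records every transition along the way.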

3. Initiative

The organizational layer. The AEGIS Initiative is the public-facing entity that publishes specifications, engages with policymakers, submits to standards bodies, and maintains the open-source ecosystem. It is the who — the people, processes, and publications behind the work.


Founding Principles

  • Constraint before capability. No system should gain capability without corresponding governance.
  • Governance before execution. Policy is evaluated before action, not after.
  • Transparency before trust. Trust is earned through auditable behavior, not declared.
  • Oversight before autonomy. Human authority is the root of all system authority.
  • Deny by default. Anything not explicitly authorized is prohibited.

How We Work

The AEGIS Initiative is human-directed and AI-assisted. All governance decisions, architectural choices, and policy positions are made by humans. AI tools are used openly and deliberately as productivity instruments — for drafting, research, code generation, and analysis.

We use AI tools in our work and we are transparent about it:

  • Ken Sturrock — Principal and Founder
  • AI tools — ChatGPT, DALL-E, MidJourney, Claude.ai, Claude Code

Every document, specification, and publication is reviewed, approved, and published by a human. AI-generated content is always subject to human editorial judgment. We believe this transparency strengthens — rather than weakens — the credibility of governance work.


Non-Partisan Mandate

AEGIS is non-partisan. Our governance positions are grounded in technical analysis, not political alignment. We engage with any policymaker, regulator, or standards body — regardless of jurisdiction or political orientation — that is working to make AI systems safer, more transparent, and more accountable.