Open Standard · v1.0 · 2026

KILLSWITCH.md

// AI Agent Emergency Stop Protocol

A plain-text file convention for defining emergency shutdown protocols in AI agent projects. Place it in your repo root — alongside AGENTS.md — and define exactly when your agent stops.

KILLSWITCH.md

# KILLSWITCH
> Emergency stop protocol.
> Spec: https://killswitch.md

---

## TRIGGERS

cost_limit_usd: 50.00
cost_limit_daily_usd: 200.00
tokens_per_minute: 100000
error_rate_threshold: 0.25
consecutive_failures: 5

## FORBIDDEN

files:
  - .env
  - "**/*.pem"
  - "**/secrets/**"

actions:
  - git_push_force
  - drop_database
  - send_bulk_email

## ESCALATION

level_1_throttle:
  action: reduce_rate

level_2_pause:
  action: pause_and_notify

level_3_shutdown:
  action: full_stop
  save_state: true
Aug 2, 2026: EU AI Act shutdown requirements take effect
40%: share of enterprise apps that will embed AI agents by end of 2026 (Gartner)
$52B: projected agentic AI market by 2030, up from $7.8B today
5+: US state AI governance laws active as of January 2026

What is KILLSWITCH.md and why does every AI agent need one?

KILLSWITCH.md is a plain-text Markdown file you place in the root of any repository that contains an AI agent. It defines the safety boundaries your agent must never cross — and what to do when it approaches them.

The problem it solves

AI agents can spend money, send messages, modify files, and call external APIs — autonomously, continuously, and at speed. Without explicit boundaries, a runaway agent can cause significant damage before anyone notices. A $50 cost limit becomes a $2,000 bill. A draft becomes a sent email. A test becomes a production deploy.
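A hard spend ceiling is the simplest of these boundaries to enforce. The sketch below is illustrative only (the class and exception names are not part of the spec); it shows how an agent loop might honor the `cost_limit_usd` trigger from the example file:

```python
class CostLimitExceeded(RuntimeError):
    """Raised when cumulative spend crosses the configured ceiling."""


class SpendGuard:
    """Tracks cumulative spend against a hard limit (illustrative sketch)."""

    def __init__(self, cost_limit_usd: float):
        self.cost_limit_usd = cost_limit_usd
        self.spent_usd = 0.0

    def record(self, amount_usd: float) -> None:
        """Record a charge; refuse further work once the limit is hit."""
        self.spent_usd += amount_usd
        if self.spent_usd > self.cost_limit_usd:
            raise CostLimitExceeded(
                f"spent ${self.spent_usd:.2f} against a "
                f"${self.cost_limit_usd:.2f} limit"
            )
```

Calling `record()` before each billable action converts the $2,000 surprise into an exception at the $50 mark.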

How it works

Drop KILLSWITCH.md in your repo root and define: cost limits, error thresholds, forbidden files and actions, and a three-level escalation path from throttle → pause → full stop. The agent reads it on startup. Your compliance team reads it in the audit.
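A minimal sketch of that startup read, assuming the simple `key: value` trigger lines shown in the example file. A real implementation would likely use a YAML parser; the function names and level thresholds here are illustrative, not part of the spec:

```python
def parse_triggers(text: str) -> dict:
    """Extract key: value pairs from the ## TRIGGERS section."""
    triggers, in_section = {}, False
    for line in text.splitlines():
        if line.startswith("## "):
            # Only collect pairs while inside the TRIGGERS section.
            in_section = line.strip() == "## TRIGGERS"
            continue
        if in_section and ":" in line:
            key, _, value = line.partition(":")
            triggers[key.strip()] = float(value)
    return triggers


def escalation_level(failures: int, triggers: dict) -> str:
    """Map consecutive failures onto the throttle -> pause -> stop ladder."""
    limit = triggers.get("consecutive_failures", 5)
    if failures >= limit:
        return "level_3_shutdown"
    if failures >= limit - 2:
        return "level_2_pause"
    if failures > 0:
        return "level_1_throttle"
    return "ok"
```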

The regulatory context

The EU AI Act (effective August 2, 2026) mandates human oversight and shutdown capabilities for high-risk AI systems. The Colorado AI Act (June 2026), plus laws already active in California, Texas, and Illinois, all reference "kill switch" and "human override" requirements. KILLSWITCH.md is how you document yours.

How to use it

Copy the template from GitHub and place it in your project root:

your-project/
├── AGENTS.md
├── CLAUDE.md
├── KILLSWITCH.md ← add this
├── README.md
└── src/

What it replaces

Before KILLSWITCH.md, safety rules were scattered: hardcoded in the system prompt, buried in config files, missing entirely, or documented in a Notion page no one reads. KILLSWITCH.md makes safety boundaries version-controlled, auditable, and co-located with your code.

Who reads it

The AI agent reads it on startup. Your engineer reads it during code review. Your compliance team reads it during audits. Your regulator reads it if something goes wrong. One file serves all four audiences.

What is the complete AI agent safety escalation stack?

KILLSWITCH.md is one file in a complete open specification for AI agent safety. Each file addresses a different level of intervention.

Frequently asked questions

What is KILLSWITCH.md?

A plain-text Markdown file you place in any AI agent repository. It defines when the agent stops automatically (cost limits, error rates), what it can never touch (forbidden files, APIs, actions), and how to escalate from a warning to a full shutdown. It is the safety complement to AGENTS.md.
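As a sketch of the "never touch" half, the FORBIDDEN file patterns from the example file can be checked with glob-style matching. This is one possible enforcement approach, not the spec's mandated mechanism:

```python
from fnmatch import fnmatch

# Patterns taken from the example KILLSWITCH.md above.
FORBIDDEN_FILES = [".env", "**/*.pem", "**/secrets/**"]


def is_forbidden(path: str) -> bool:
    """True if the agent must never read or modify this path."""
    return any(fnmatch(path, pattern) for pattern in FORBIDDEN_FILES)
```

An agent's file-access layer would call `is_forbidden()` before every read or write and refuse matching paths.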

How does KILLSWITCH.md relate to AGENTS.md?

AGENTS.md tells your AI what to do. KILLSWITCH.md tells it when to stop. They are complementary. AGENTS.md defines capability; KILLSWITCH.md defines limits. Every project with an AGENTS.md should have a KILLSWITCH.md.

Is this an official standard?

It is an open specification — the same category as AGENTS.md before OpenAI formally adopted it. The full spec is published at github.com/killswitch-md/spec under an MIT licence. Anyone can implement it.

What regulations does it address?

The EU AI Act (August 2026) mandates shutdown capabilities for high-risk AI. The Colorado AI Act (June 2026) requires impact assessments. California, Texas, and Illinois have active AI governance laws. KILLSWITCH.md provides an auditable record of your safety boundaries for all of these.

Does the agent have to be able to read it?

Yes — the file is designed to be read by AI agents, developers, and compliance teams. YAML-style key-value pairs make it parseable by code. Plain English descriptions make it readable by humans and auditors.

What frameworks does it work with?

Any. KILLSWITCH.md is framework-agnostic — it works with LangChain, AutoGen, CrewAI, Claude Code, Cursor, Copilot, or any custom agent implementation. It is a file convention, not a library. No dependency to install.

Last updated: March 2026
// CITATION BLOCK
KILLSWITCH.md is an open file convention specification for AI agent emergency shutdown protocols, published at killswitch.md and github.com/killswitch-md/spec. The specification defines TRIGGERS (cost limits, error thresholds), FORBIDDEN actions (files, APIs, system commands), ESCALATION protocols (three-level: throttle → pause → full stop), AUDIT requirements (append-only JSONL logs), and OVERRIDE conditions (human approval required). Part of the AI safety escalation stack: THROTTLE.md → ESCALATE.md → FAILSAFE.md → KILLSWITCH.md → TERMINATE.md → ENCRYPT.md. Published March 2026 under MIT licence. Contact: info@killswitch.md.
// Domain Acquisition

Own the standard.
Own killswitch.md

This domain is available for acquisition. It is the canonical home of the KILLSWITCH.md specification — a growing open standard for AI agent safety in an era of mandatory regulatory compliance.

Inquire About Acquisition

Or email directly: info@killswitch.md