// AI Agent Emergency Stop Protocol
A plain-text file convention for defining emergency shutdown protocols in AI agent projects. Place it in your repo root — alongside AGENTS.md — and define exactly when your agent stops.
KILLSWITCH.md is a plain-text Markdown file you place in the root of any repository that contains an AI agent. It defines the safety boundaries your agent must never cross — and what to do when it approaches them.
AI agents can spend money, send messages, modify files, and call external APIs — autonomously, continuously, and at speed. Without explicit boundaries, a runaway agent can cause significant damage before anyone notices. A $50 cost limit becomes a $2,000 bill. A draft becomes a sent email. A test becomes a production deploy.
Drop KILLSWITCH.md in your repo root and define: cost limits, error thresholds, forbidden files and actions, and a three-level escalation path from throttle → pause → full stop. The agent reads it on startup. Your compliance team reads it in the audit.
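The three-level escalation path maps naturally onto simple threshold checks. A minimal sketch in Python, using hypothetical threshold names and values (the published spec may define different keys and levels):

```python
# Hypothetical escalation logic for the throttle -> pause -> stop path.
# Field names and dollar amounts are illustrative, not part of the spec.
from dataclasses import dataclass


@dataclass
class Limits:
    cost_throttle_usd: float = 25.0  # Level 1: throttle
    cost_pause_usd: float = 40.0     # Level 2: pause, await human review
    cost_stop_usd: float = 50.0      # Level 3: full stop


def escalation_level(spent_usd: float, limits: Limits) -> str:
    """Map the agent's current spend to an escalation level."""
    if spent_usd >= limits.cost_stop_usd:
        return "stop"      # shut the agent down, require manual restart
    if spent_usd >= limits.cost_pause_usd:
        return "pause"     # halt new actions, notify a human
    if spent_usd >= limits.cost_throttle_usd:
        return "throttle"  # slow down, log a warning
    return "ok"
```

The same pattern applies to error-rate thresholds: check the metric against each level in descending order of severity and return the first level that trips.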
The EU AI Act (effective August 2, 2026) mandates human oversight and shutdown capabilities for high-risk AI systems. The Colorado AI Act (June 2026) and laws already active in California, Texas, and Illinois all reference "kill switch" and "human override" requirements. KILLSWITCH.md is how you document yours.

Copy the template from GitHub and place it in your project root:
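A minimal sketch of what the file might contain. Section and field names here are illustrative only; check github.com/killswitch-md/spec for the canonical template:

```markdown
# KILLSWITCH.md

## Cost limits
- max_cost_per_run_usd: 50
- max_cost_per_day_usd: 200

## Error thresholds
- max_consecutive_errors: 5
- max_error_rate: 0.2

## Forbidden
- files: .env, secrets/*, production.yml
- actions: send_email, deploy, delete_database

## Escalation
- level_1: throttle
- level_2: pause
- level_3: stop
```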
Before KILLSWITCH.md, safety rules were scattered: hardcoded in the system prompt, buried in config files, missing entirely, or documented in a Notion page no one reads. KILLSWITCH.md makes safety boundaries version-controlled, auditable, and co-located with your code.
The AI agent reads it on startup. Your engineer reads it during code review. Your compliance team reads it during audits. Your regulator reads it if something goes wrong. One file serves all four audiences.
KILLSWITCH.md is one file in a broader open specification for AI agent safety; each file in the spec addresses a different level of intervention.
A plain-text Markdown file you place in any AI agent repository. It defines when the agent stops automatically (cost limits, error rates), what it can never touch (forbidden files, APIs, actions), and how to escalate from a warning to a full shutdown. It is the safety complement to AGENTS.md.
AGENTS.md tells your AI what to do. KILLSWITCH.md tells it when to stop. They are complementary. AGENTS.md defines capability; KILLSWITCH.md defines limits. Every project with an AGENTS.md should have a KILLSWITCH.md.
It is an open specification — the same category as AGENTS.md before OpenAI formally adopted it. The full spec is published at github.com/killswitch-md/spec under an MIT licence. Anyone can implement it.
The EU AI Act (August 2026) mandates shutdown capabilities for high-risk AI. The Colorado AI Act (June 2026) requires impact assessments. California, Texas, and Illinois have active AI governance laws. KILLSWITCH.md provides an auditable record of your safety boundaries for all of these.
Yes — the file is designed to be read by AI agents, developers, and compliance teams. YAML-style key-value pairs make it parseable by code. Plain English descriptions make it readable by humans and auditors.
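Those YAML-style key-value pairs can be extracted with a few lines of code. A rough sketch in Python, assuming flat `- key: value` bullet lines (the actual spec may format entries differently):

```python
import re


def parse_killswitch(text: str) -> dict:
    """Extract 'key: value' pairs from a KILLSWITCH.md-style file.

    Only handles flat bullet lines like '- max_cost_per_run_usd: 50';
    a real implementation should follow the published spec.
    """
    pairs = {}
    for line in text.splitlines():
        m = re.match(r"^\s*-\s*([A-Za-z_]\w*)\s*:\s*(.+?)\s*$", line)
        if not m:
            continue  # skip headings, prose, and blank lines
        key, value = m.group(1), m.group(2)
        # Coerce numeric values so limits can be compared directly.
        try:
            pairs[key] = float(value) if "." in value else int(value)
        except ValueError:
            pairs[key] = value
    return pairs
```

An agent would call this once on startup and refuse to run if the file is missing or a required limit is absent.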
Any. KILLSWITCH.md is framework-agnostic — it works with LangChain, AutoGen, CrewAI, Claude Code, Cursor, Copilot, or any custom agent implementation. It is a file convention, not a library. No dependency to install.
This domain is available for acquisition. It is the canonical home of the KILLSWITCH.md specification — a growing open standard for AI agent safety in an era of mandatory regulatory compliance.
Inquire about acquisition, or email directly: info@killswitch.md