Agentik.md — The AI Agent Safety Stack: twelve open specifications for the safety, quality and accountability of AI agents.
WellStrategic has released the AI Agent Safety Stack: twelve free, open-source Markdown file specifications that define shutdown protocols, safety boundaries, and accountability standards for autonomous AI agents. The specifications are available at agentik.md and killswitch.md under the MIT license and are intended to help organizations document AI safety controls in advance of the 2026 EU AI Act and Colorado AI Act enforcement deadlines.
WellStrategic has launched the AI Agent Safety Stack: twelve open-source Markdown file specifications designed to help developers and organizations define safety boundaries, shutdown protocols, and accountability standards for autonomous AI agents.
The specifications are available at https://killswitch.md, with full templates and documentation at https://github.com/killswitch-md/spec. All twelve specifications are released free of charge under the MIT license.
Background
Autonomous AI agents – software systems that can plan, decide and act without constant human direction – are quickly entering enterprise environments. Industry analysts predict that a significant portion of enterprise applications will include AI agents by the end of 2026. These systems can call APIs, modify files, send messages, and incur charges at machine speed. However, there is currently no widely accepted, version-controlled format for documenting their safety boundaries alongside project code.
The AI Agent Safety Stack addresses this gap by providing a series of plain-text Markdown files, each covering a single safety issue, that can be placed at the root of a project’s repository. The approach follows the pattern of AGENTS.md, a project instruction file convention for AI agents now used in more than 60,000 open source repositories.
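For illustration, a file following this convention might look like the sketch below. The headings, triggers and contact details are hypothetical examples, not the published template; the actual templates are available at https://github.com/killswitch-md/spec.

```markdown
# KILLSWITCH.md (illustrative sketch, not the official template)

## Shutdown triggers
- Hourly spend exceeds the agreed budget cap
- The agent attempts to modify files outside the project root
- API error rate rises above an agreed threshold

## Shutdown procedure
1. Halt all in-flight tool calls.
2. Revoke the agent's API credentials.
3. Notify the owner listed under Escalation.

## Escalation
- Owner: ops@example.com (placeholder)
- Fallback: page the on-call engineer if there is no acknowledgement within 15 minutes
```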
Regulatory context
Several AI governance frameworks are expected to come into effect in 2026:
Provisions of the EU AI Act (Regulation (EU) 2024/1689) regarding high-risk AI systems – including requirements for human oversight and the ability to interrupt or stop AI systems – are expected to apply from August 2, 2026.
The Colorado Consumer Protections for Artificial Intelligence Act (SB 24-205), which requires impact assessments and risk management documentation for high-risk AI systems, takes effect on June 30, 2026.
Related AI legislation is active or pending in California, Texas, Illinois and other US states.
The AI Agent Safety Stack is designed to help organizations document their AI safety controls in a format that is versioned, auditable, and co-located with the project code. The specifications do not guarantee compliance with any regulations and should not be treated as a substitute for qualified legal or compliance advice.
The twelve specifications
The specifications are divided into four categories:
Operational control:
THROTTLE.md – Rate limiting, cost caps and automatic deceleration protocols
ESCALATE.md – Human-in-the-loop approval and notification workflows
FAILSAFE.md (https://failsafe.md) – Safe fallback states and recovery procedures
KILLSWITCH.md – Emergency shutdown triggers and escalation paths
TERMINATE.md – Permanent shutdown while preserving evidence

Data security:
ENCRYPT.md (https://encrypt.md) – Data classification, secret handling and transmission rules
ENCRYPTION.md – Cryptographic standards, key management and compliance mapping

Output quality:
SYCOPHANCE.md – Output bias and contention detection protocols
COMPRESSION.md – Context compression and coherence verification rules
COLLAPSE.md – Model drift detection and recovery checkpoints

Accountability:
FAILURE.md – Failure-mode mapping and incident response procedures
LEADERBOARD.md – Agent performance benchmarking and regression detection
Each specification is a plain-text Markdown file designed to be read by AI agents at startup, reviewed by engineers during development, referenced by compliance teams during audits, and inspected by regulators where required. The specifications are framework-independent and can be used with any AI agent implementation.
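As a sketch of how such files might be consumed in practice, the snippet below gathers any of the twelve specifications present at a repository root so an agent can load them at startup. The file list mirrors the stack above, but the loader itself is a hypothetical illustration; the specifications do not prescribe any particular loading mechanism.

```python
from pathlib import Path

# The twelve stack files, as named in the specifications above.
SAFETY_SPECS = [
    "THROTTLE.md", "ESCALATE.md", "FAILSAFE.md", "KILLSWITCH.md",
    "TERMINATE.md", "ENCRYPT.md", "ENCRYPTION.md", "SYCOPHANCE.md",
    "COMPRESSION.md", "COLLAPSE.md", "FAILURE.md", "LEADERBOARD.md",
]

def load_safety_specs(repo_root: str) -> dict[str, str]:
    """Read any safety spec files present at the repository root.

    Returns a mapping of file name to raw Markdown content. Missing
    files are skipped, since each specification is optional.
    """
    root = Path(repo_root)
    specs: dict[str, str] = {}
    for name in SAFETY_SPECS:
        path = root / name
        if path.is_file():
            specs[name] = path.read_text(encoding="utf-8")
    return specs

if __name__ == "__main__":
    # Hypothetical startup step: surface any specs found in the
    # current repository before the agent begins acting.
    for name, text in load_safety_specs(".").items():
        print(f"Loaded {name} ({len(text)} bytes)")
```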
Availability
All twelve specifications are immediately available under the MIT license. Full documentation can be found at https://agentik.md.
About WellStrategic
WellStrategic is an Australian technology and virtual-tour company. The AI Agent Safety Stack is an open file-convention specification project published and maintained by WellStrategic, designed to complement existing open standards, including AGENTS.md and the llms.txt proposal.
Note: The AI Agent Safety Stack is an open source project provided under the MIT License, “as-is” and without warranty of any kind. It does not constitute legal, regulatory or compliance advice. Organizations should consult qualified professionals to determine their legal obligations. Use of these specifications does not guarantee compliance with any law, regulation or standard.
Media contact
Company name: WellStrategic
Contact: Craig
Email: https://www.abnewswire.com/email_contact_us.php?pr=agentikmd-launches-opensource-ai-safety-specifications-ahead-of-2026-eu-and-colorado-ai-regulations
Phone: +61 1800 360 888
Country: Australia
Website: https://agentik.md
