Open Standard

AIRISK Protocol v1.1

The SECURITY.md for AI risk.

A single markdown file that captures everything an insurer, regulator, or investor needs to assess your AI risk posture.

What is AIRISK?

A structured disclosure format for AI companies.

Every software company has a SECURITY.md. It tells the world how you handle vulnerabilities, where to report issues, and what your security posture looks like. But there is no equivalent for AI risk. Insurers send you 40-page questionnaires. Regulators demand bespoke documentation. Investors ask the same questions in every due diligence cycle.

AIRISK.md solves this. It is a single, machine-readable markdown file that lives in your repository root. Seven sections cover company profile, AI systems inventory, data governance, risk management, insurance coverage, regulatory exposure, and operational controls. Every field uses a standardized key-value format that humans can read and tools can parse.

Generate it once, keep it updated, and point every stakeholder to the same source of truth. Insurance applications that took weeks now take minutes. Regulatory audits start with a complete picture instead of a blank page. And your risk posture is always transparent, always versioned, always in your control.

7 sections. One file.

1. **Company Profile**: Jurisdiction, industry, stage, size, website
2. **AI Systems Inventory**: Models, types, providers, scale, core vs internal
3. **Data Governance**: Data types, legal basis, transfers, retention
4. **Risk Management**: Framework, oversight, testing, explainability
5. **Insurance Coverage**: Current policies, AI-specific, exclusions
6. **Regulatory Exposure**: Applicable frameworks, compliance status, gaps
7. **Operational Controls**: Ethics, policies, training, transparency

File Format

Plain markdown. YAML frontmatter.

The format is intentionally simple: YAML frontmatter for metadata, followed by seven numbered sections using standard markdown headers and bullet-point key-value pairs. An optional eighth section, Verifiable Endpoints, can list the URLs and page selectors that automated tooling can probe. No JSON schemas, no XML, no custom syntax.

AIRISK.md (v1.1)
---
airisk: "1.1"
generated: "2026-04-13T10:30:00Z"
generator: "claude"
---

# 1. Company Profile

- **Company Name**: Acme Inc.
- **Legal Entity Name**: Acme Technologies Ltd.
- **Industry**: FinTech
- **Industry Vertical**: FinTech
- **Company Size**: 51-200
- **Incorporation Country**: US
- **Jurisdictions**: EU, US, UK
- **Founding Year**: 2020
- **Funding Stage**: Series A
- **Revenue Range**: $1-5M
- **Website**: https://acme.ai

# 2. AI Systems Inventory

- **Use Cases**: Customer Service, Data Analysis, Fraud Detection
- **Model Types**: LLM, NLP, Predictive
- **Providers**: OpenAI, Anthropic
- **Deployment**: Cloud
- **AI Role**: Core product
- **End Users**: B2B
- **Models in Production**: 3
- **Daily Inferences**: 50000

# 3. Data Governance

- **Data Types**: Personal Data, Financial
- **Data Subject Locations**: EU, US, UK
- **Legal Bases**: Consent, Contract
- **Cross-border Transfers**: Yes
- **Retention Policy**: Yes
- **Breach Notification**: Yes
- **Third-party Processors**: AWS, Snowflake

# 4. Risk Management

- **Framework**: NIST AI RMF
- **Human Oversight**: Human-in-loop
- **Test Frequency**: Quarterly
- **Bias Testing**: Periodic
- **Explainability**: Built-in explainability
- **Incident Response**: Yes
- **Model Versioning**: Yes

# 5. Insurance Coverage

- **Current Policies**: General Liability, E&O, Cyber
- **Total Coverage**: $5-25M
- **Exclusions Known**: Unsure
- **Broker Relationship**: Yes
- **Prior Claims**: No
- **Renewal Date**: 2026-09

# 6. Regulatory Exposure

## Frameworks

### EU AI Act
- **Awareness**: Yes
- **Compliance**: Partial

### GDPR
- **Awareness**: Yes
- **Compliance**: Compliant

### NIST AI RMF
- **Awareness**: Yes
- **Compliance**: Partial

## Certifications
- ISO 42001
- SOC 2

- **Recent Audits**: Yes
- **Pending Actions**: No

# 7. Operational Controls

- **Ethics Committee**: Yes
- **AI Usage Policy**: Yes
- **Training Program**: No
- **Vendor Assessment**: Yes
- **Transparency Level**: Partial
- **Sustainability Tracking**: No

# 8. Verifiable Endpoints

- **Production URL**: https://app.acme.ai
- **Login Path**: /sign-in
- **Signup Path**: /sign-up
- **Privacy Policy URL**: https://acme.ai/privacy
- **Terms URL**: https://acme.ai/terms
- **AI Disclosure URL**: embedded
- **Account Settings Path**: /settings
- **Account Deletion Path**: /settings/account/delete
- **Data Export Path**: /settings/data-export
- **Chatbot Entry Selector**: [data-testid="chat-launcher"]
- **Cookie Banner Selector**: #onetrust-banner-sdk
- **Status Page URL**: https://status.acme.ai
- **Test Account Available**: Yes
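
A file in this shape is easy for tools to consume. As an illustrative sketch (not an official parser), the following Python splits out the YAML frontmatter and collects the bullet-point key-value fields per section; the `###` framework sub-headings and plain bullet lists such as Certifications are flattened or skipped for brevity:

```python
import re

def parse_airisk(text: str) -> dict:
    """Parse an AIRISK.md string into frontmatter metadata and per-section fields.

    Sketch only: '###' framework sub-headings are flattened into their parent
    section, and plain bullets (e.g. the Certifications list) are skipped.
    """
    doc = {"meta": {}, "sections": {}}
    body = text
    # YAML frontmatter sits between the two leading "---" lines.
    m = re.match(r"---\n(.*?)\n---\n", text, re.DOTALL)
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            doc["meta"][key.strip()] = value.strip().strip('"')
        body = text[m.end():]
    section = None
    for line in body.splitlines():
        if line.startswith("# "):  # top-level section, e.g. "# 1. Company Profile"
            section = line[2:].strip()
            doc["sections"][section] = {}
        elif section and (kv := re.match(r"- \*\*(.+?)\*\*: (.*)", line)):
            key, value = kv.groups()
            # Per the spec, list fields separate values with commas.
            doc["sections"][section][key] = (
                [v.strip() for v in value.split(",")] if ", " in value else value
            )
    return doc
```

Run over the example above, this yields, for instance, `doc["sections"]["2. AI Systems Inventory"]["Providers"] == ["OpenAI", "Anthropic"]`.
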

How to Generate

Three paths. Same output.

01 Conversational: Talk to your LLM

Paste the AIRISK prompt template into ChatGPT, Claude, or any LLM. It will interview you section by section and produce a complete AIRISK.md file.

02 Automated: Use the CLI

Run npx create-airisk in your terminal. The CLI walks you through each section interactively and writes the file to your repo.

npx create-airisk

03 Manual: Write manually

Copy the example format above and fill in each field. Every field is a simple key-value pair; list values are comma-separated.

Prompt Template

Copy, paste, generate.

Paste this prompt into any LLM. It will interview you section by section and produce a valid AIRISK.md file. Works with ChatGPT, Claude, Gemini, and any model that can follow structured instructions.

Prompt
I need to generate an AIRISK.md file for my company — the open standard
for AI risk disclosure (https://nexweave.ai/airisk).

Interview me section by section. Ask me 3-5 questions per section,
then write that section before moving to the next. The 7 sections are:

1. Company Profile (jurisdiction, industry, stage, size, website)
2. AI Systems Inventory (models, types, providers, scale, core vs internal)
3. Data Governance (data types, legal basis, transfers, retention)
4. Risk Management (framework, oversight, testing, explainability)
5. Insurance Coverage (current policies, AI-specific, exclusions)
6. Regulatory Exposure (applicable frameworks, compliance status, gaps)
7. Operational Controls (ethics, policies, training, transparency)

Rules:
- Use EXACTLY the AIRISK v1.1 format with YAML frontmatter
- For list fields, separate values with commas
- Section 6 uses ### sub-headings for each regulatory framework
- If I say "I don't know", write "Unknown" and flag it
- After all sections, output the complete AIRISK.md

Start with Section 1.

Integration

Your AIRISK.md goes further than your repo.

01 NexWeave Platform

Upload your AIRISK.md to NexWeave and instantly receive an AI Protection Index score, regulatory gap analysis, and insurance recommendations tailored to your risk profile.

02 CI/CD Pipeline

Add airisk-lint to your CI pipeline. It validates your AIRISK.md on every commit, catches missing fields, and fails the build if your risk disclosure falls below a completeness threshold.
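
The exact checks airisk-lint runs are not specified here, but a minimal completeness gate in the same spirit might look like the sketch below (the field regex, the 80% threshold, and treating "Unknown" as unfilled are all assumptions, not part of the standard):

```python
import re
import sys

THRESHOLD = 0.8  # assumed bar: at least 80% of fields must be filled in

def completeness(text: str) -> float:
    """Fraction of '- **Key**: Value' fields that are neither blank nor Unknown."""
    values = re.findall(r"- \*\*.+?\*\*: (.*)", text)
    if not values:
        return 0.0
    filled = [v for v in values if v.strip() and v.strip() != "Unknown"]
    return len(filled) / len(values)

def lint(path: str) -> int:
    """Return a process exit code: 0 passes the gate, 1 should fail the build."""
    with open(path, encoding="utf-8") as f:
        score = completeness(f.read())
    print(f"{path} completeness: {score:.0%}")
    return 0 if score >= THRESHOLD else 1

# Usage in a CI step: python airisk_gate.py AIRISK.md
if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(lint(sys.argv[1]))
```

A non-zero exit code is what makes the build fail in virtually every CI system, so the script needs no CI-specific integration beyond being invoked.
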

03 MCP Server

Run the AIRISK MCP server to let AI coding assistants read your AIRISK.md and provide risk-aware code suggestions. Context flows from your risk profile into every pull request.
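
MCP clients typically register servers through a JSON configuration file. Assuming the AIRISK server ships as an npm package (the `airisk-mcp` name and `--file` flag below are illustrative placeholders, not a published interface), the registration entry might look roughly like:

```json
{
  "mcpServers": {
    "airisk": {
      "command": "npx",
      "args": ["airisk-mcp", "--file", "./AIRISK.md"]
    }
  }
}
```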

Get Started

Generate your AIRISK.md today.

Create your risk disclosure in minutes, not weeks. Upload it to NexWeave for instant scoring and insurance recommendations.

Free for the first 500 companies.