Decision framework for regulated industries

AI governance toolkit for software delivery teams.

A decision framework and control library for governing AI in regulated software delivery — built for risk, compliance, and engineering leaders.

Translate AI usage into board-level decisions. Classify risk, identify controls, and decide whether a use-case is allowed — all aligned to EU AI Act, NIST RMF, and ISO standards.

Risk zones
  • Green: AI supports code drafts and documentation with no sensitive data.
  • Amber: AI may operate inside private tenants with logging, redaction, and retention controls.
  • Red: No client data goes to public AI services. Ever.
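
To make the zoning concrete, here is a minimal sketch of how a team might encode the traffic-light rule as a lookup from data sensitivity to permitted deployment. The types and names (DataTier, RiskZone, classify) are illustrative assumptions, not part of this toolkit:

```ts
// Hypothetical sketch: the zone is derived from the data tier,
// never chosen by the user. Names are illustrative only.

type DataTier = "public" | "internal" | "client-confidential";
type RiskZone = "green" | "amber" | "red";

interface ZonePolicy {
  zone: RiskZone;
  allowedDeployment: string;
  requiredControls: string[];
}

function classify(tier: DataTier): ZonePolicy {
  switch (tier) {
    case "public":
      // Green: code drafts and documentation, no sensitive data.
      return { zone: "green", allowedDeployment: "approved AI tools", requiredControls: [] };
    case "internal":
      // Amber: private tenant only, with operational controls.
      return {
        zone: "amber",
        allowedDeployment: "private tenant",
        requiredControls: ["logging", "redaction", "retention"],
      };
    case "client-confidential":
      // Red: client data never reaches public AI services.
      return { zone: "red", allowedDeployment: "none (public AI prohibited)", requiredControls: ["block"] };
  }
}
```

The point of the encoding is the same as the zones themselves: AI use follows data tiering, not user preference.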

What is AI governance?

For those new to the space — a quick orientation before diving into decisions.

The problem

AI tools like ChatGPT, Copilot, and Claude are being adopted faster than organizations can govern them. Every prompt is an outbound data channel. Every output needs review. Without structure, you get shadow AI, data leakage, and audit failures.

What governance means

AI governance is the set of policies, controls, and processes that ensure AI is used responsibly. It covers who can use AI, what data can be processed, how outputs are reviewed, and how decisions are documented for audit.

Why it matters now

The EU AI Act is now law. Regulators expect evidence of risk classification, human oversight, and data protection. Financial services, healthcare, and public sector organizations face the highest scrutiny.

What this site does

This is a practical toolkit — not theory. Use it to classify AI use-cases, identify required controls, generate policy templates, and build audit-ready evidence. Designed for teams that need to move fast without breaking compliance.

Who

Senior developers lead

The most successful teams treat AI as a junior producer. Seniors define intent, constraints, and acceptance criteria, then redirect until the output is auditable.

Where

Regulated boundaries

Banks, insurers, audit firms, and funds can adopt AI safely when data classification, not convenience, dictates the deployment model.

Why

Data leakage is structural

Every prompt leaves your device, creating a permanent record. Scale does not reduce risk; it makes exposure statistically visible.

Aligned to recognized standards

Controls and guidance mapped to the frameworks regulators and auditors expect.

EU AI Act

Risk-based classification, transparency, and accountability requirements for AI systems in the European Union.

View guidance →

NIST AI RMF

US framework for managing AI risks across the lifecycle: Govern, Map, Measure, Manage.

View framework ↗

ISO/IEC 42001

International standard for AI management systems with controls for responsible development.

View framework ↗

How we align: Each control in our library maps to specific requirements in these frameworks. The Controls page shows which standard each control satisfies.

Start here

Use the decision flow to classify risk, identify controls, and decide whether a use-case is allowed.

Industry → Data type → Deployment → Purpose → Decision

Launch decision flow
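
For orientation, here is a minimal sketch of what the flow's inputs and outcome might look like in code. The field names and rules are hypothetical simplifications; the interactive flow also weighs industry and purpose:

```ts
// Hypothetical sketch of the decision flow:
// Industry -> Data type -> Deployment -> Purpose -> Decision.
// Field names and rules are illustrative, not the toolkit's actual logic.

interface UseCase {
  industry: "banking" | "insurance" | "healthcare" | "public-sector" | "other";
  dataType: "public" | "internal" | "client-confidential";
  deployment: "public-saas" | "private-tenant" | "on-prem";
  purpose: string;
}

type Decision = "allowed" | "allowed-with-controls" | "prohibited";

function decide(uc: UseCase): Decision {
  // Red line first: client data never goes to public AI services.
  if (uc.dataType === "client-confidential" && uc.deployment === "public-saas") {
    return "prohibited";
  }
  // Amber: any sensitive data requires a private tenant with
  // logging, redaction, and retention controls in place.
  if (uc.dataType !== "public") {
    return "allowed-with-controls";
  }
  // Green: no sensitive data involved.
  return "allowed";
}
```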

What this site covers

AI Guidance Academy

Role-based tracks that combine learning paths, decision flows, and governance outputs.

Enter the academy

Use-case library

Industry-specific patterns with risk ratings, deployment guidance, and controls.

Explore use cases

Learning paths

Executive modules on data leakage, operating model, and audit-ready controls.

View learning paths

Example lab

Safe vs unsafe prompts, redaction demos, and quick policy checks.

Enter example lab

Governance pack

Copy-ready policy templates, risk questionnaires, and an acceptable-use matrix.

Open governance pack

EU AI Act guide

Practical summary of requirements, risk classification, and implementation checklist.

Read EU AI Act guide

Governance essentials

These controls are the minimum for approving AI in regulated delivery pipelines.

Control: Data classification

AI use follows data tiering, not user preference.

EU AI Act Art. 10 · NIST Map

Control: Prompt security

Redaction, retention, and logging by policy.

ISO 42001 A.6 · NIST Manage

Control: Audit evidence

Every AI action is attributable and reviewable.

EU AI Act Art. 12 · NIST Govern
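
As an illustration of how these mappings can be made machine-readable, here is a hypothetical sketch of one control-library entry. The shape and IDs are assumptions; the Controls page remains the authoritative mapping:

```ts
// Hypothetical sketch: a control-library entry mapped to framework clauses.
// The interface shape and the "CTL-01" ID are illustrative, not the toolkit's schema.

interface Control {
  id: string;
  name: string;
  statement: string;
  mappings: { framework: "EU AI Act" | "NIST AI RMF" | "ISO/IEC 42001"; clause: string }[];
}

const dataClassification: Control = {
  id: "CTL-01",
  name: "Data classification",
  statement: "AI use follows data tiering, not user preference.",
  mappings: [
    { framework: "EU AI Act", clause: "Art. 10" },
    { framework: "NIST AI RMF", clause: "Map" },
  ],
};
```

Keeping the mapping in data rather than prose lets one control satisfy several frameworks at once, which is what auditors look for when evidence is requested.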

Who this is for

⚖️

Risk & Compliance

Approve use-cases, define policies, evidence controls

💻

Engineering Leaders

Operate AI safely, enforce guardrails, review outputs

📊

Executives

Understand risk, approve budgets, report to board

🔍

Audit & Assurance

Verify controls, review evidence, assess compliance