About This Project

The story behind the Reversibility Index framework and how it was built

What is the Reversibility Index?

The Reversibility Index is a practical framework for evaluating user recoverability in AI systems. It asks a simple but crucial question: "Can users recover when things go wrong?"

Born from the gap between AI ethics principles and day-to-day product decisions, the framework provides teams with concrete assessment criteria across 10 key areas - from undo capabilities to data portability.

This isn't just documentation. It's a working tool that teams can use to score their AI systems, visualise results, and generate stakeholder-ready reports.

Why This Framework Exists

As AI systems become more autonomous and influential in daily life, user agency becomes increasingly critical. Yet most AI ethics frameworks focus on preventing harms rather than preserving user control.

The Reversibility Index fills this gap by providing practical criteria for evaluating whether users can:

  • Undo or revert AI actions
  • Exit systems gracefully
  • Delegate control to humans
  • Recover from errors or confusion
  • Maintain control throughout their interaction

EU AI Act Article 14 Compliance

The EU AI Act’s Article 14 mandates that high-risk AI systems be designed for effective human oversight. This includes the ability to understand AI outputs, intervene when necessary, override decisions, and stop the system entirely.

The Reversibility Index directly addresses these requirements. Our 10 assessment domains map to Article 14’s core mandates:

  • User Control & Exit → Ability to stop or override AI operation
  • Delegation → Escalation to competent human personnel
  • Observability & Clarity of Intent → Understanding AI outputs and behaviour
  • Recovery & Undo → Intervening when things go wrong

Is Your AI System High-Risk?

Under Annex III, high-risk categories include AI used in:

• Employment (hiring, CV screening)
• Credit & finance (loans, insurance)
• Education (admissions, grading)
• Healthcare (diagnosis, treatment)
• Critical infrastructure
• Biometrics & identification
• Law enforcement
• Democratic processes

Even if your system isn’t legally classified as high-risk, building reversible AI protects users, reduces liability, and builds trust.

ETSI EN 304 223 - AI Cybersecurity Evaluation

This platform has been voluntarily evaluated against ETSI EN 304 223, the European standard defining baseline cybersecurity requirements for AI models and systems (published January 2026).

Lifecycle Phase | Status | Key Controls
Secure Design | Implemented | Prompt injection defence (3 layers), privacy by design, minimised attack surface
Secure Development | Implemented | Input validation, dependency scanning, 0 known vulnerabilities
Secure Deployment | Implemented | Security headers (CSP, HSTS; see sketch below), rate limiting, environment separation
Secure Maintenance | Partial | Vulnerability management in place; audit logging and model drift detection identified as gaps
Secure End of Life | Partial | Client-side data with user-controlled clearing; formal discontinuation plan not yet documented

This is a voluntary self-assessment. The Reversibility Index platform is classified as limited risk and is not subject to mandatory conformity assessment under the EU AI Act. Full evaluation details are available in the project's AI System Card.
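
To make the Secure Deployment controls above concrete, the sketch below shows how CSP and HSTS headers might be applied in Next.js middleware. It is a minimal illustration with assumed header values, not the platform's actual configuration.

```typescript
// middleware.ts (illustrative sketch only; header values are assumptions,
// not the platform's actual policy)
import { NextResponse } from 'next/server'

export function middleware() {
  const response = NextResponse.next()

  // HSTS: require HTTPS for two years, including subdomains
  response.headers.set(
    'Strict-Transport-Security',
    'max-age=63072000; includeSubDomains; preload'
  )
  // Restrictive CSP: same-origin by default, API calls only to OpenRouter
  response.headers.set(
    'Content-Security-Policy',
    "default-src 'self'; connect-src 'self' https://openrouter.ai; frame-ancestors 'none'"
  )
  response.headers.set('X-Content-Type-Options', 'nosniff')
  response.headers.set('Referrer-Policy', 'strict-origin-when-cross-origin')

  return response
}

// Apply the headers to every route
export const config = { matcher: '/:path*' }
```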

How It Was Built

This project represents a unique collaboration between human expertise and AI capabilities, with the framework's own principles applied throughout development.

The Team

This was built by a single developer with experience in product development, product management, service design, and UX design, working in partnership with AI tools:

  • Human leadership: Strategic decisions, framework development, user experience design, and security testing
  • Claude Code: Pair programming partner for full-stack development, security implementation, and AI integration

Development Approach

We applied Reversibility Index principles to the tool's own development:

  • User Control: Manual assessment always available alongside AI-guided options
  • Clarity of Intent: AI features explain their purpose and limitations clearly
  • Exit: Clear pathways back to manual mode from any AI interaction
  • Data Portability: All assessments exportable as PDF reports
  • Consent: Explicit opt-in for AI features, with usage limits clearly communicated

Technical Stack

  • Frontend: Next.js 14 with TypeScript and Tailwind CSS
  • Documentation: 40+ pages of markdown-based content
  • Visualisation: Recharts for scoring displays and radar charts
  • Export: Client-side PDF generation for stakeholder reports
  • AI Integration: Claude 3.5 Haiku via OpenRouter for conversational assessments
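
As a rough illustration of how the AI integration works, the sketch below calls Claude 3.5 Haiku through OpenRouter's OpenAI-compatible chat completions endpoint. The function name, system prompt, and response handling are assumptions for illustration, not the tool's actual code.

```typescript
// Illustrative server-side call to Claude 3.5 Haiku via OpenRouter.
// Function name, system prompt, and error handling are assumptions.
export async function askAssessmentGuide(userMessage: string): Promise<string> {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'anthropic/claude-3.5-haiku',
      messages: [
        {
          role: 'system',
          content: 'You guide a Reversibility Index assessment. Be concise and plain-spoken.',
        },
        { role: 'user', content: userMessage },
      ],
    }),
  })

  if (!response.ok) {
    throw new Error(`OpenRouter request failed with status ${response.status}`)
  }

  const data = (await response.json()) as {
    choices: { message: { content: string } }[]
  }
  return data.choices[0].message.content
}
```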

Framework Scope

10 Assessment Areas

1. Undo & Revert - Can users reverse AI actions?
2. Exit - Can users stop or leave the system gracefully?
3. Delegation - Can users escalate to human alternatives?
4. Recovery - Can users recover from errors or confusion?
5. User Control - Do users maintain agency throughout the interaction?
6. Clarity of Intent - Does the system explain what it's doing?
7. Observability - Can users see system processes and reasoning?
8. Consent - Does the system request permission appropriately?
9. Model Transparency - Do users understand system capabilities and limitations?
10. Data Portability - Can users access and export their data?
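
Teams wiring these domains into their own tooling can represent a scored assessment with a very small data structure. The sketch below is a hypothetical shape using an assumed 0 to 4 scale per domain; it is not the assessment tool's internal model.

```typescript
// Hypothetical shape for a scored assessment across the 10 domains.
// The 0 to 4 scale and field names are illustrative assumptions.
type Domain =
  | 'undoRevert'
  | 'exit'
  | 'delegation'
  | 'recovery'
  | 'userControl'
  | 'clarityOfIntent'
  | 'observability'
  | 'consent'
  | 'modelTransparency'
  | 'dataPortability'

type Assessment = Record<Domain, number>

// Overall index as the mean of the 10 domain scores
function overallScore(assessment: Assessment): number {
  const scores = Object.values(assessment)
  return scores.reduce((sum, score) => sum + score, 0) / scores.length
}
```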

8 AI System Types

The framework provides modality-specific guidance for:

• Conversational AI (chatbots, virtual assistants)
• Recommendation Systems
• Automated Decision Systems
• Generative AI (content creation)
• Autonomous Systems
• Surveillance & Monitoring
• Educational AI
• Healthcare AI

Design Principles

Human-Centred Language

No technical jargon. Every concept explained in plain English with real-world examples.

Practical Application

Built for teams who need actionable guidance, not academic theory. Every assessment criterion includes specific examples and implementation patterns.

Accessibility First

Designed to be usable by non-technical stakeholders, with progressive disclosure for teams who want deeper detail.

Privacy by Design

No user tracking, no data collection, no required accounts. All processing happens client-side.

Meta-Application

This project serves as a working example of responsible AI development. By applying the Reversibility Index to our own AI features, we demonstrate how the framework guides real product decisions.

Every AI integration decision - from usage limits to transparency mechanisms - was evaluated against the framework's criteria. The result is an AI tool that preserves user agency while providing genuine value.

Current Status

Version 1.0 includes:

  • Complete framework documentation (10 domains, 8 modalities)
  • AI-guided conversational assessment (Claude 3.5 Haiku)
  • Manual assessment tool with scoring and visualisation
  • Professional PDF export capability
  • 5 detailed case studies
  • Implementation guides and QA checklists
  • Recovery Design Toolkit with implementation patterns
  • Prompt injection defence and rate limiting
  • Evaluated against ETSI EN 304 223

Planned for future releases:

  • Enhanced recommendations engine
  • Community features and shared case studies
  • Audit logging and model drift detection

Open Development

This project is being developed transparently, with methodology and decision-making documented throughout. We believe responsible AI development requires showing our working, not just our results.

Licensing & Attribution

The Reversibility Index framework is freely available under Creative Commons licensing. When using the framework in your work, please include proper attribution:

  • For academic or research use: "Assessment conducted using the Reversibility Index framework (reversibility-index.app)"
  • For team or organisational use: "AI system evaluated against the Reversibility Index criteria"
  • For public presentations or reports: please reference the framework and include a link to reversibility-index.app

The assessment tool remains free for teams to use, with no tracking or data collection.

Want to support the project?

☕ Buy me a coffee to help pay for the AI

Contributions help cover AI development costs and keep the tool free for everyone.

Get Involved

We welcome feedback from practitioners building AI systems. The framework improves through real-world application and community input.

Try the assessment tool, review the framework documentation, and share your experience. The framework is most valuable when it reflects the collective wisdom of teams actually shipping AI products.

For feedback or questions about the framework, reach out through the contact details provided on the site.

Built with Next.js, TypeScript, and thoughtful AI collaboration.
No user data collected. All processing client-side.