About This Project

The story behind the Reversibility Index framework and how it was built

What is the Reversibility Index?

The Reversibility Index is a practical framework for evaluating user recoverability in AI systems. It asks a simple but crucial question: "Can users recover when things go wrong?"

Born from the gap between AI ethics principles and day-to-day product decisions, the framework provides teams with concrete assessment criteria across 10 key areas, from undo capabilities to data portability.

This isn't just documentation. It's a working tool that teams can use to score their AI systems, visualise results, and generate stakeholder-ready reports.

Why This Framework Exists

As AI systems become more autonomous and influential in daily life, user agency becomes increasingly critical. Yet most AI ethics frameworks focus on preventing harms rather than preserving user control.

The Reversibility Index fills this gap by providing practical criteria for evaluating whether users can:

  • Undo or revert AI actions
  • Exit systems gracefully
  • Delegate control to humans
  • Recover from errors or confusion
  • Maintain control throughout their interaction

How It Was Built

This project represents a unique collaboration between human expertise and AI capabilities, with the framework's own principles applied throughout development.

The Team

This was built by a single developer with experience in product development, product management, service design, and UX design, working in partnership with AI tools:

  • Human leadership: Strategic decisions, framework development, and user experience design
  • Claude Code: Pair programming partner for Next.js development, component architecture, and assessment logic
  • GPT: Content strategy, UX writing, and framework documentation
  • Claude: Strategic advisor for framework refinement and development planning

Development Approach

We applied Reversibility Index principles to the tool's own development:

  • User Control: Manual assessment always available alongside AI-guided options
  • Clarity of Intent: AI features explain their purpose and limitations clearly
  • Exit: Clear pathways back to manual mode from any AI interaction
  • Data Portability: All assessments exportable as PDF reports
  • Consent: Explicit opt-in for AI features, with usage limits clearly communicated

Technical Stack

  • Frontend: Next.js 14 with TypeScript and Tailwind CSS
  • Documentation: 40+ pages of markdown-based content
  • Visualisation: Recharts for scoring displays and radar charts
  • Export: Client-side PDF generation for stakeholder reports
  • AI Integration: DeepSeek R1 API for conversational assessments (planned for Version 1.0)

Framework Scope

10 Assessment Areas

1. Undo & Revert - Can users reverse AI actions?
2. Exit - Can users stop or leave the system gracefully?
3. Delegation - Can users escalate to human alternatives?
4. Recovery - Can users recover from errors or confusion?
5. User Control - Do users maintain agency throughout interaction?
6. Clarity of Intent - Does the system explain what it's doing?
7. Observability - Can users see system processes and reasoning?
8. Consent - Does the system request permission appropriately?
9. Model Transparency - Do users understand system capabilities and limitations?
10. Data Portability - Can users access and export their data?
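The ten domains above lend themselves to a simple scorecard data model. Here is a minimal TypeScript sketch: the domain names come from the framework itself, but the 0-4 scale, equal weighting, and the `overallIndex` helper are illustrative assumptions, not the tool's actual scoring logic.

```typescript
// Illustrative sketch only; the real tool's scoring model may differ.
// Assumptions: each domain is scored 0-4 and all domains are weighted equally.

const DOMAINS = [
  "Undo & Revert", "Exit", "Delegation", "Recovery", "User Control",
  "Clarity of Intent", "Observability", "Consent",
  "Model Transparency", "Data Portability",
] as const;

type Domain = (typeof DOMAINS)[number];

interface DomainScore {
  domain: Domain;
  score: number; // assumed scale: 0 (no recoverability) to 4 (full recoverability)
}

const MAX_SCORE = 4;

// Collapse per-domain scores into a 0-100 overall index.
function overallIndex(scores: DomainScore[]): number {
  if (scores.length === 0) return 0;
  const total = scores.reduce((sum, s) => sum + s.score, 0);
  return Math.round((total / (scores.length * MAX_SCORE)) * 100);
}

const example: DomainScore[] = [
  { domain: "Undo & Revert", score: 3 },
  { domain: "Exit", score: 2 },
  { domain: "Data Portability", score: 4 },
];
console.log(overallIndex(example)); // → 75
```

Scores in this shape could also feed the radar-chart visualisation and the PDF export described above, since both consume one value per domain.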

8 AI System Types

The framework provides modality-specific guidance for:

  • Conversational AI (chatbots, virtual assistants)
  • Recommendation Systems
  • Automated Decision Systems
  • Generative AI (content creation)
  • Autonomous Systems
  • Surveillance & Monitoring
  • Educational AI
  • Healthcare AI

Design Principles

Human-Centred Language

No technical jargon. Every concept explained in plain English with real-world examples.

Practical Application

Built for teams who need actionable guidance, not academic theory. Every assessment criterion includes specific examples and implementation patterns.

Accessibility First

Designed to be usable by non-technical stakeholders, with progressive disclosure for teams who want deeper detail.

Privacy by Design

No user tracking, no data collection, no required accounts. All processing happens client-side.

Meta-Application

This project serves as a working example of responsible AI development. By applying the Reversibility Index to our own AI features, we demonstrate how the framework guides real product decisions.

Every AI integration decision, from usage limits to transparency mechanisms, was evaluated against the framework's criteria. The result is an AI tool that preserves user agency while providing genuine value.

Current Status

Version 0.9 includes:

  • Complete framework documentation (10 domains, 8 modalities)
  • Manual assessment tool with scoring and visualisation
  • Professional PDF export capability
  • 5 detailed case studies
  • Implementation guides and QA checklists

Planned for Version 1.0:

  • AI-guided conversational assessment
  • Enhanced recommendations engine
  • Community features and shared case studies

Open Development

This project is being developed transparently, with methodology and decision-making documented throughout. We believe responsible AI development requires showing our working, not just our results.

Licensing & Attribution

The Reversibility Index framework is freely available under Creative Commons licensing. When using the framework in your work, please include proper attribution:

  • Academic or research use: "Assessment conducted using the Reversibility Index framework (reversibility-index.app)"
  • Team or organisational use: "AI system evaluated against the Reversibility Index criteria"
  • Public presentations or reports: please reference the framework and include a link to reversibility-index.app

The assessment tool remains free for teams to use, with no tracking or data collection.

Want to support the project?

☕ Buy me a coffee to help pay for the AI

Contributions help cover AI development costs and keep the tool free for everyone.

Get Involved

We welcome feedback from practitioners building AI systems. The framework improves through real-world application and community input.

Try the assessment tool, review the framework documentation, and share your experience. The framework is most valuable when it reflects the collective wisdom of teams actually shipping AI products.

For feedback or questions about the framework, reach out through the contact details provided on the site.

Built with Next.js, TypeScript, and thoughtful AI collaboration.
No user data collected. All processing client-side.