Clarity of Intent Recovery Pattern

This pattern ensures users understand AI system capabilities and limitations so they can set appropriate expectations and use features effectively.

🧭 Domain: Clarity of Intent ⏱️ Effort: 5-minute fix

Problem

When users can't understand what the AI system is trying to do or what it's capable of, they make wrong assumptions and get unexpected results. This confusion about system purpose and limitations leads to misuse, frustration, and loss of trust when outcomes don't match expectations.

Solution

Provide clear, contextual explanations of what the AI system does, what it can and cannot do, and what users should expect from interactions.

Implementation Example

Pseudocode: Framework-agnostic code showing the core logic and approach. Adapt the syntax and methods to your specific technology stack.

// Pseudocode:
// Contextual capability communication
function showSystemCapabilities(currentContext) {
  const capabilities = {
    canDo: getCapabilitiesForContext(currentContext),
    cannotDo: getLimitationsForContext(currentContext),
    confidence: getConfidenceLevel(currentContext),
    alternatives: getAlternativeActions(currentContext)
  };
  displayCapabilityCard(capabilities);
}

// Explain an in-progress AI action in plain language
function explainCurrentAction(action) {
  const explanation = {
    whatHappening: translateAction(action),
    whyHappening: getReasoning(action),
    whatNext: getNextSteps(action),
    limitations: getActionLimitations(action)
  };
  showInlineExplanation(explanation);
}

// Proactively offer help when behavior suggests the user is stuck
function detectConfusion(session) {
  if (session.retryCount > 3 || session.timeOnStep > STUCK_THRESHOLD) {
    showHelpDialog("It looks like something isn't working as expected. Here's what I can help with...");
  }
}
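The confusion heuristic above can be made concrete in plain JavaScript. This is a minimal sketch: the threshold values and the `session` object's shape (`retryCount`, `msOnCurrentStep`) are illustrative assumptions to calibrate against your own product telemetry, not part of the pattern itself.

```javascript
// Illustrative thresholds -- tune against your product's usage data.
const MAX_RETRIES = 3;          // retries before we assume confusion
const STUCK_MS = 2 * 60 * 1000; // time on one step before offering help

// Returns the help message to display, or null if the session looks healthy.
// `session` is a hypothetical shape: { retryCount, msOnCurrentStep }.
function detectConfusion(session) {
  const stuck =
    session.retryCount > MAX_RETRIES || session.msOnCurrentStep > STUCK_MS;
  return stuck
    ? "It looks like something isn't working as expected. Here's what I can help with..."
    : null;
}
```

Returning the message (rather than calling the dialog directly) keeps the heuristic testable and lets the UI layer decide how intrusively to surface it.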

🧠 What this does: Prevents user confusion and misuse by clearly communicating what the AI system is designed to do and what users can realistically expect.

Try this:

  • Add "What can I help with?" or "Here's what I do" explanations on first interaction
  • Show confidence levels or uncertainty indicators alongside AI outputs
  • Implement "Why did this happen?" explanations for unexpected results
  • Create contextual help that adapts based on what the user is trying to do
  • Display clear boundaries like "I can't access external websites" or "I don't store conversation history"
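The confidence-indicator suggestion above can be sketched as a small formatting helper. The band cutoffs (0.8 and 0.5) and the function names here are assumptions for illustration; calibrate the cutoffs against your model's actual accuracy at each score range before showing them to users.

```javascript
// Map a model confidence score (0..1) to a user-facing label.
// Cutoffs are illustrative; calibrate them per model.
function confidenceLabel(score) {
  if (score >= 0.8) return "High confidence";
  if (score >= 0.5) return "Medium confidence -- please double-check";
  return "Low confidence -- treat this as a guess";
}

// Attach the label to an AI output before display.
function annotateOutput(text, score) {
  return `${text}\n[${confidenceLabel(score)}]`;
}
```

A plain-language label ("treat this as a guess") communicates uncertainty more effectively to non-technical users than a raw probability would.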

Testing Checklist

  • First-time users can quickly understand what the system does and doesn't do
  • Capability explanations are contextual and relevant to current user goals
  • Uncertainty or low-confidence outputs are clearly flagged for users
  • Help content is accessible without interrupting the main workflow
  • System limitations are communicated proactively, not discovered through failure

Related Patterns

User Control Recovery Pattern

This pattern gives users control over AI suggestions, recommendations, and automated actions.


Delegation Recovery Pattern

This pattern allows users to transfer AI tasks to human support when needed.


Exit Recovery Pattern

This pattern ensures users can easily stop, cancel, or exit AI interactions at any point.


Error Recovery Pattern

This pattern lets users correct AI errors, undo actions, or fix missteps.
