Best Practice

Accessibility Testing Workflow

Build a repeatable, efficient accessibility testing workflow that combines automated scanning, manual testing, and user testing — from design handoff to production.

The Testing Pyramid

Accessibility testing mirrors the software testing pyramid. At the base are automated tests (fast and cheap; they catch roughly 35% of issues). In the middle are manual tests (slower and more expensive; they catch another ~50%). At the top are user tests with disabled users (slowest and most expensive, but they provide insight that no tool can replicate).

A mature accessibility workflow runs all three layers, tuned to the stage of development. Design-phase testing prevents expensive fixes. CI/CD automated tests provide continuous assurance. Pre-release manual testing catches what automation misses. Periodic user testing validates the real-world experience.

Phase 1: Design Review

Accessibility testing starts in Figma, not in the browser. Issues found in design cost 10–100x less to fix than issues found in production. Design-phase checks:

  • Color contrast — use Figma contrast plugins (Contrast, A11y Annotation Kit) to verify all text meets 4.5:1 (normal) or 3:1 (large).
  • Tap target sizes — verify all interactive elements are at least 44×44px (the WCAG 2.5.5 AAA target; WCAG 2.2 AA requires a 24×24px minimum per 2.5.8).
  • Focus order documentation — annotate the intended tab order using accessibility annotation kits.
  • State designs — ensure hover, focus, active, disabled, and error states are designed for every interactive component.
  • Heading structure review — verify the page has a logical heading hierarchy.
  • Accessible name review — verify every interactive element has a text label in the design spec.
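The contrast check above follows the WCAG 2.x relative-luminance formula, which can be encoded directly. A minimal sketch (the function and parameter names are illustrative, not from any specific plugin; hex colors only, no alpha):

```typescript
// Linearize one 0–255 sRGB channel (WCAG 2.x formula).
function srgbChannel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a #rrggbb color.
function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = srgbChannel((n >> 16) & 0xff);
  const g = srgbChannel((n >> 8) & 0xff);
  const b = srgbChannel(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio (1:1 to 21:1), lighter luminance over darker.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA thresholds: 4.5:1 for normal text, 3:1 for large text.
function meetsAA(fg: string, bg: string, largeText = false): boolean {
  return contrastRatio(fg, bg) >= (largeText ? 3 : 4.5);
}
```

For example, #767676 on white is about 4.54:1 and passes AA for normal text, while #777777 is about 4.48:1 and fails it (but passes the 3:1 large-text threshold).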

Phase 2: Development — Automated CI Testing

Integrate automated accessibility testing into your CI/CD pipeline so every pull request is checked before merging:

// Example: axe-core with Playwright in CI
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage should have no accessibility violations', async ({ page }) => {
  await page.goto('/');
  const accessibilityScanResults = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
    .analyze();
  expect(accessibilityScanResults.violations).toEqual([]);
});

// Test all page templates
const pageTemplates = ['/', '/guides', '/guides/keyboard-testing', '/contact'];
for (const url of pageTemplates) {
  test(`${url} passes axe`, async ({ page }) => {
    await page.goto(url);
    const results = await new AxeBuilder({ page }).withTags(['wcag2aa']).analyze();
    expect(results.violations).toEqual([]);
  });
}
  • Use axe-core via Playwright, Cypress, or Jest + jsdom.
  • Block merges on any axe violations with impact "critical" or "serious".
  • Run Lighthouse CI for performance + accessibility combined checks.
  • Store results over time to catch regressions — a violation that passes once and fails next sprint is a regression.
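The merge-blocking rule above can be sketched as a small filter over axe results. The `Violation` interface below models only the fields used here (axe-core reports an `impact` per violation); the function name is illustrative:

```typescript
// Sketch: gate CI on "critical" and "serious" axe violations only.
interface Violation {
  id: string;
  impact: 'minor' | 'moderate' | 'serious' | 'critical' | null;
}

const BLOCKING_IMPACTS = new Set(['critical', 'serious']);

// Returns the subset of violations that should fail the check.
function blockingViolations(violations: Violation[]): Violation[] {
  return violations.filter(v => v.impact !== null && BLOCKING_IMPACTS.has(v.impact));
}

// Example: only the serious and critical findings block the merge;
// the moderate one is logged and tracked instead.
const results: Violation[] = [
  { id: 'color-contrast', impact: 'serious' },
  { id: 'region', impact: 'moderate' },
  { id: 'button-name', impact: 'critical' },
];
const blockers = blockingViolations(results);
```

In the Playwright test above, this would replace the strict `toEqual([])` assertion: assert that `blockingViolations(accessibilityScanResults.violations)` is empty, and report lower-impact findings without failing the build.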

Phase 3: Pre-Release Manual Testing

Before each release, run a structured manual test across the modified page templates. Use a testing checklist that mirrors WCAG 2.2 AA criteria:

  • Keyboard navigation — Tab through every interactive element. Check focus order, focus visibility, and keyboard traps.
  • Screen reader — test with NVDA + Firefox at minimum. Verify headings, links, forms, images, and dynamic content announcements.
  • Color contrast — run Colour Contrast Analyser on all new UI components and states.
  • Zoom test — zoom to 200% and 400% in Chrome. Verify no content or functionality is lost and that content reflows without horizontal scrolling at 400% (WCAG 1.4.10).
  • Mobile — test with iOS VoiceOver and Android TalkBack on real devices.
  • Resize text — increase browser font size to 200% (browser settings). Verify layout does not break.

Phase 4: User Testing

Quarterly user testing sessions with disabled users provide qualitative insight that no automated or manual tool can replicate. Recruit participants across disability types: blind users (screen reader users), low-vision users, keyboard-only users, users with cognitive disabilities, and users with motor disabilities using alternative input devices.

Testing sessions should be task-based, not exploratory. Define 4–6 specific tasks (e.g., "Find the keyboard navigation testing guide and save it to your reading list") and observe how participants attempt to complete them. Avoid prompting or helping — the friction points are the findings.

Issue Tracking and Prioritization

Use a consistent severity scale for accessibility issues:

  • Critical — users with disabilities cannot access core functionality. Blocks affected users completely. Examples: keyboard trap, missing form labels, missing alt text on functional images. Fix before release.
  • Serious — users with disabilities have severe difficulty. Partial workaround may exist. Examples: missing heading structure, color-only error indication, missing accessible name on icon buttons. Fix in current sprint.
  • Moderate — some difficulty for users with disabilities. A workaround exists. Examples: suboptimal focus order, inconsistent navigation. Fix in next sprint.
  • Minor — nuisance issues with minimal impact. Fix in backlog.
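To apply this scale consistently in tooling, the four levels can be encoded as a ranking so issue queues sort the same way everywhere. A sketch (the `A11yIssue` shape and function names are hypothetical, not from any tracker's API):

```typescript
// Sketch: encode the severity scale so backlogs sort consistently.
type Severity = 'critical' | 'serious' | 'moderate' | 'minor';

interface A11yIssue {
  title: string;
  severity: Severity;
}

// Lower rank = fix sooner, matching the scale above.
const SEVERITY_RANK: Record<Severity, number> = {
  critical: 0,
  serious: 1,
  moderate: 2,
  minor: 3,
};

// Returns a new array ordered critical → minor, leaving input untouched.
function triage(issues: A11yIssue[]): A11yIssue[] {
  return [...issues].sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);
}

const queue = triage([
  { title: 'Suboptimal focus order on nav', severity: 'moderate' },
  { title: 'Keyboard trap in modal', severity: 'critical' },
  { title: 'Icon button missing accessible name', severity: 'serious' },
]);
```

The keyboard trap surfaces first, matching the "fix before release" rule for critical issues.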

Resources