A structured UI context layer for AI agents.
Makes existing user interfaces understandable and actionable for AI agents.
The world runs on user interfaces. Interfaces solve problems by making state, constraints, and actions explicit.
ai11y exposes this structure so agents can operate existing UIs.
- **Describe** — observe the current UI context. Runtime: local (DOM → structured context).
- **Plan** — get instructions from the agent. Runtime: model/server (context + intent → instructions).
- **Act** — perform actions on the UI. Runtime: local (instructions → DOM actions).
```typescript
import { createClient, plan } from "@ai11y/core";

const client = createClient({
  onNavigate: (route) => window.history.pushState({}, "", route),
});

const ui = client.describe();
const { reply, instructions } = await plan(ui, "click the save button");

for (const instruction of instructions ?? []) {
  client.act(instruction);
}
```

Draw attention to any element with customizable highlight animations. Perfect for tutorials, onboarding flows, and guided experiences.
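A highlight like this is ultimately a small animation applied to the target element. The sketch below is not the ai11y API — it only illustrates the idea, with keyframes kept as plain data so they can be inspected anywhere:

```typescript
// Minimal pulse-highlight sketch (illustrative, not the ai11y API).
// The keyframes are plain objects, independent of any browser API.
function pulseKeyframes(color = "rgba(59, 130, 246, 0.8)") {
  return [
    { boxShadow: `0 0 0 0 ${color}` },          // start: tight ring
    { boxShadow: "0 0 0 8px rgba(59, 130, 246, 0)" }, // end: faded halo
  ];
}

// In a browser, apply with the Web Animations API:
// element.animate(pulseKeyframes(), { duration: 900, iterations: 3 });
```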
Give your agent the ability to interact with buttons, links, and other UI elements. Users describe their intent in natural language; your agent takes the right action.
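Under the hood, acting on an element means routing a planned instruction to the right DOM operation. A sketch of that dispatch step, with a hypothetical instruction shape (the real `@ai11y/core` schema is not shown here) and element lookup abstracted so the routing logic stands alone:

```typescript
// Hypothetical instruction shape — illustrative only, not the library's types.
type Instruction =
  | { type: "click"; target: string }
  | { type: "navigate"; route: string };

// `find` abstracts element lookup (e.g. document.querySelector in a browser),
// so the routing logic itself needs no DOM.
function dispatch(
  instruction: Instruction,
  find: (selector: string) => { click(): void } | null,
  onNavigate: (route: string) => void
): boolean {
  switch (instruction.type) {
    case "click": {
      const el = find(instruction.target);
      if (!el) return false; // target not currently on screen
      el.click();
      return true;
    }
    case "navigate":
      onNavigate(instruction.route);
      return true;
  }
}
```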
Give your agent the ability to read and fill form fields from natural language. It can read current values and fill inputs, textareas, and dropdowns. Emits native browser events for compatibility.
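Emitting native events matters because frameworks like React subscribe to `input`/`change` events rather than watching `.value` directly. A minimal sketch of the fill step, assuming a browser environment:

```typescript
// Sketch of filling one field while emitting native browser events.
// Plain `.value` assignment alone would be invisible to framework listeners.
function fillField(
  el: { value: string; dispatchEvent(event: Event): boolean },
  value: string
): void {
  el.value = value;
  el.dispatchEvent(new Event("input", { bubbles: true }));
  el.dispatchEvent(new Event("change", { bubbles: true }));
}

// In a browser:
// fillField(document.querySelector<HTMLInputElement>("input[name=email]")!, "a@b.co");
```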
Your agent can perform multi-step tasks inside one surface: fill, toggle, save. Instructions chain reliably across the same describe → plan → act loop.
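The chaining itself is just the describe → plan → act loop run until the planner has nothing left to do. A sketch of that control flow, with the client's pieces passed in as parameters (shapes mirror the quick-start example; the parameter names are illustrative):

```typescript
type PlanResult = { reply: string; instructions?: unknown[] };

// Repeat describe → plan → act until the planner returns no instructions.
async function runTask(
  intent: string,
  describe: () => unknown,
  plan: (ui: unknown, intent: string) => Promise<PlanResult>,
  act: (instruction: unknown) => void,
  maxRounds = 5
): Promise<string> {
  let reply = "";
  for (let round = 0; round < maxRounds; round++) {
    const ui = describe(); // re-observe: earlier actions changed the UI
    const result = await plan(ui, intent);
    reply = result.reply;
    if (!result.instructions?.length) break; // nothing left to do
    for (const instruction of result.instructions) act(instruction);
  }
  return reply;
}
```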
Give your agent knowledge beyond the DOM. Expose permissions, user context, and app-specific data so your agent can answer questions about capabilities and constraints.
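One way to picture this: the planner receives a single object combining DOM-derived structure with app-level knowledge. The merge below is a hypothetical sketch — `@ai11y/core` may expose a different mechanism, and all names here are illustrative:

```typescript
// Hypothetical app-level context attached alongside the UI snapshot.
type AppContext = {
  permissions: string[];
  [key: string]: unknown;
};

function withContext<T extends object>(ui: T, context: AppContext) {
  // The planner sees one object: UI structure plus app knowledge.
  return { ...ui, context };
}

// const { reply } = await plan(
//   withContext(ui, { permissions: ["billing:read"] }),
//   "can I edit billing details?"
// );
```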