Design history and rationale. Records why the skill is the way it is for contributors, future maintainers, and dependent skills.
ATD covers discovery, mapping, and registry maintenance of external tools. It does not cover codebase or team onboarding context, observation and conformance discipline, or post-discovery verification of agent understanding.
These are distinct concerns with distinct workflows. Observation and conformance discipline is the responsibility of ACS, which ATD invokes as its foundational layer. Codebase and team onboarding context is the responsibility of Adaptive Onboarding (AO). ATD's scope must stay narrow enough to be reusable across any team or tool surface without requiring codebase knowledge.
A tool is defined as any external capability surface an agent can invoke: MCP servers, REST APIs, CLIs, SDKs, or any registered agent function. ATD is not specific to MCP or any other protocol.
Specificity to a single protocol would quickly date ATD and exclude legitimate use cases. The capability mapping problem is the same regardless of invocation mechanism. Teams using MCP servers today may use different protocols tomorrow. The spec should remain valid across that transition.
ATD produces a persistent capability registry stored at a stable location (file path or URL), not a session-local record. The registry is owned by the team whose tools it describes, not by the ATD repo.
For teams of significant size (25+ people and agents), having every agent re-run discovery independently is wasteful and produces inconsistent results. A shared persistent registry means discovery runs once, is verified once, and is loaded by all agents and subagents. Storing the registry in the team's own codebase or infrastructure keeps it close to the tools it describes and under the team's version control.
The registry is refreshed on a schedule (e.g., weekly) via a GitHub Actions workflow. Small changes are auto-committed; large changes go through a PR for human review.
Tools change infrequently but do change. A scheduled action catches drift without requiring manual intervention. The small/large change branching ensures humans review significant schema or behavioral changes before they propagate to all agents, while keeping minor updates from creating unnecessary review burden.
Registry fields that cannot be verified must be set to "unverified", not omitted or guessed.
An agent reading a registry entry must be able to distinguish between a field that was verified and found to be empty and a field that was never checked. Omission is ambiguous. Guessing violates the ACS evidence-first principle. An explicit "unverified" marker preserves the integrity of the registry and signals to the agent that additional discovery may be needed.
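A minimal sketch of what this looks like in a registry entry. The field names (`rate_limit`, `auth_scopes`, and the `[[tools]]` table shape) are illustrative assumptions, not part of the spec; the point is the three distinct states a field can be in.

```toml
# Hypothetical registry entry -- field names are illustrative only.
[[tools]]
name = "issues.search"
surface = "github-api"
description = "Search issues across repositories"  # verified against live docs

rate_limit = ""            # verified and found to be empty: checked, nothing documented
auth_scopes = "unverified" # never checked -- agent knows additional discovery is needed
```

An agent reading this entry can trust `rate_limit` as a confirmed negative while treating `auth_scopes` as an open question, which omission alone could not convey.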
MIT across all adaptive-interfaces repos.
Organizations need to be able to use and adapt skills internally without legal friction. MIT is OSI-approved, universally recognized by legal teams, and imposes no obligations on derivative works.
Uses schema = "adaptive-interfaces-manifest-1", defined in adaptive-interfaces-manifest-1.md, an adaptation of SE_MANIFEST.toml.
The adaptive-interfaces manifest borrows the pattern (explicit includes/excludes, declarative intent) without the schema binding.
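A sketch of the borrowed pattern, assuming hypothetical key names (`include`, `exclude`, and the `[intent]` table are illustrative; only the `schema` value comes from the spec):

```toml
# Manifest header -- declares the schema it conforms to.
schema = "adaptive-interfaces-manifest-1"

# Declarative intent: states what the manifest is for, not how to process it.
[intent]
purpose = "Capability registry index for team tool surfaces"

# Explicit includes/excludes, the pattern borrowed from SE_MANIFEST.toml.
include = ["registries/*.toml"]
exclude = ["registries/drafts/*.toml"]
```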
Use one TOML file per tool surface rather than one monolithic registry file, with a manifest.toml that lists all registry files.
Observed during scenario testing: re-verification of the GitHub API entries in a monolithic registry (560+ lines, 3 tool surfaces) took 12m 18s and required the agent to search and update entries across a large file. On large tool sets this is slow, fragile, and risks missed or duplicate updates. Per-surface files keep each file small and focused. Re-verification targets one file, not the entire registry. Agents load only the registries relevant to their current task. The manifest provides a stable index without requiring agents to scan the filesystem.
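The layout described above might look like the following. The surface names other than github-api, and the `registries` key, are hypothetical; the structure (one small TOML file per surface, indexed by a manifest) is what the decision specifies.

```toml
# Hypothetical directory layout: one file per tool surface.
# registries/
#   manifest.toml      <- stable index, loaded first
#   github-api.toml    <- re-verification touches only this file
#   slack-mcp.toml
#   deploy-cli.toml

# manifest.toml -- lists all registry files so agents never scan the filesystem.
registries = [
  "github-api.toml",
  "slack-mcp.toml",
  "deploy-cli.toml",
]
```

An agent working a GitHub task loads manifest.toml plus github-api.toml only, so re-verification and task loading both scale with the surface, not the whole registry.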
The Conclude step requires the agent to score its own output against the rubric inline in the response, and, if a scenario folder exists, to copy the registry file into it.
Observed during scenario testing: in run 3 of the github-api scenario, the agent self-scored and copied the artifact unprompted. This behavior makes evaluation traceable without a separate manual step. The score is recorded alongside the evidence that justifies it, and the registry is preserved in the scenario folder for comparison across runs. Making these behaviors explicit in the spec ensures they happen consistently rather than opportunistically.