Profiles are ordered lists of compose modules.
They answer a simple question:
which parts of the body should be awake right now?
The smallest useful local substrate:
- 10-storage.yml
- 20-orchestration.yml
- 32-llamacpp-inference.yml
A local agent-facing runtime surface with a canonical llama.cpp chat path:
- 10-storage.yml
- 20-orchestration.yml
- 32-llamacpp-inference.yml
- 41-agent-api.yml
This profile keeps the default POST /run path on the canonical langchain-api -> llama.cpp lane.
Its new POST /run/federated path stays opt-in and only becomes useful when the federation profile is also present and AOA_FEDERATED_RUN_ENABLED=true.
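The opt-in shape described above can be sketched in a few lines. The flag name `AOA_FEDERATED_RUN_ENABLED` comes from this document; the handler logic is purely illustrative and is not the real langchain-api code:

```python
import os

def federated_run_enabled() -> bool:
    # Opt-in: the federated path stays dark unless the operator sets the flag.
    return os.environ.get("AOA_FEDERATED_RUN_ENABLED", "").lower() == "true"

def route_request(path: str) -> str:
    # The default lane never changes; /run always rides the canonical path.
    if path == "/run":
        return "canonical langchain-api -> llama.cpp lane"
    if path == "/run/federated":
        if not federated_run_enabled():
            return "404: federated run not enabled"
        return "advisory-backed federated lane"
    return "404: unknown path"
```

With the flag unset, POST /run/federated is effectively absent; the canonical lane is unaffected either way.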
The agentic surface plus the current reviewed Intel-oriented serving seam through OVMS:
- 10-storage.yml
- 20-orchestration.yml
- 32-llamacpp-inference.yml
- 31-intel-inference.yml
- 41-agent-api.yml
- 42-agent-api-intel.yml
In the current promoted posture, this routes embeddings to OVMS while keeping the canonical chat path on llama.cpp.
That does not freeze the broader Intel-serving family to embeddings-only forever; wider OVMS, OpenVINO, or OpenVINO GenAI model lanes stay additive and separately reviewed.
The canonical langchain-api path now keeps its text target behind a generic runtime-chat seam, so additive Intel text lanes can be configured explicitly without changing what this profile promotes by default.
An opt-in metadata-only federation seam:
- 43-federation-router.yml
This profile is intended to layer over agentic or intel, but it may also be run by itself for seam debugging.
It reads the following surfaces through the single localhost-only route-api:
- a mirrored aoa-agents contract seam
- an aoa-routing advisory seam
- an aoa-memo recall seam
- an aoa-evals eval selection seam
- an aoa-playbooks activation/composition advisory seam
- an aoa-kag retrieval/regrounding seam
- a source-owned tos-source handoff seam
It also enables filesystem-first memo export candidates under ${AOA_STACK_ROOT}/Logs/memo-exports/ and filesystem-first eval export candidates under ${AOA_STACK_ROOT}/Logs/eval-exports/.
route-api remains advisory-only in this shape, but when this profile is layered onto agentic, langchain-api may consume it through POST /run/federated.
A route-first ToS graph helper surface:
- 10-storage.yml
- 52-tos-graph.yml
This profile keeps the route helper on top of the storage substrate so Neo4j
is available without silently widening the rest of the runtime.
The current slice stays read-first: it loads canonical ToS files from
AOA_TOS_ROOT, exposes a localhost-only helper on 5410, syncs route-scoped
projection state into Neo4j, and keeps writeback deferred. Machine-fit overlays
that do not touch these services are skipped automatically, so curation stays
narrow even when the host has a broader runtime recommendation on file.
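The overlay-skipping behavior can be illustrated like this. The overlay names, the services they touch, and the selection function are all hypothetical; the real logic lives in the runtime tooling:

```python
def applicable_overlays(overlays: dict, active_services: set) -> list:
    """Keep only overlays that touch at least one service in the active profile."""
    return [name for name, services in overlays.items()
            if services & active_services]

# Hypothetical machine-fit overlays, keyed by the services they touch:
OVERLAYS = {
    "neo4j-heap-fit": {"neo4j"},
    "gpu-inference-fit": {"llamacpp"},
}

# The curation profile only runs storage plus the ToS graph helper:
ACTIVE = {"neo4j", "tos-graph-helper"}
```

Under these assumptions, only the Neo4j-facing overlay applies; a broader host recommendation that targets inference services is skipped because none of its services are awake.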
Optional helper surfaces:
- 50-speech.yml
- 51-browser-tools.yml
Optional monitoring stack:
- 60-monitoring.yml
Profiles stay small and legible. A new service should usually enter through a module first; only then should it be included in one or more profiles.
The optional llama.cpp benchmark lane deliberately stays outside the default profiles and presets.
Use LLAMACPP_PILOT only when you want an explicit alternate benchmark or promotion surface beyond the canonical runtime path.
Some modules rely on sibling modules being present in the same profile. The repository validator now checks these inter-module requirements so broken profiles fail fast in CI.
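The inter-module check can be sketched as follows. The requirement declarations and the function shape are hypothetical illustrations of the rule; the real validator lives in the repository tooling:

```python
def validate_profile(modules: list, requires: dict) -> list:
    """Return one error per declared sibling requirement missing from the profile."""
    present = set(modules)
    errors = []
    for module in modules:
        for dep in sorted(requires.get(module, set()) - present):
            errors.append(f"{module} requires {dep}, which is not in the profile")
    return errors

# Hypothetical declarations: the agent API needs orchestration and an inference lane.
REQUIRES = {
    "41-agent-api.yml": {"20-orchestration.yml", "32-llamacpp-inference.yml"},
}
```

A profile that lists 41-agent-api.yml without its siblings would fail fast with two errors instead of surfacing a broken runtime later.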
Profiles can be combined.
This is the intended way to layer optional surfaces like tools and observability onto a base runtime path.
aoa-up --profile agentic --profile tools --profile observability
aoa-up --profile agentic,tools,observability
- profiles are resolved in the order you declare them
- modules are appended in that order
- duplicate modules are kept only once, at first appearance
- optional layers should usually come after the base profile
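The rules above can be sketched in a few lines. This is a minimal illustration of the composition semantics, not the actual aoa-up implementation, and the profile contents are assumptions drawn from this page:

```python
def resolve_modules(profiles: dict, requested: list) -> list:
    """Append modules profile by profile; keep duplicates only at first appearance."""
    seen = set()
    modules = []
    for name in requested:  # profiles resolve in declared order
        for module in profiles[name]:
            if module not in seen:
                seen.add(module)
                modules.append(module)
    return modules

# Assumed profile contents for illustration:
PROFILES = {
    "core": ["10-storage.yml", "20-orchestration.yml", "32-llamacpp-inference.yml"],
    "agentic": ["10-storage.yml", "20-orchestration.yml",
                "32-llamacpp-inference.yml", "41-agent-api.yml"],
    "tools": ["50-speech.yml", "51-browser-tools.yml"],
    "observability": ["60-monitoring.yml"],
}
```

Layering agentic over core under these assumptions only appends 41-agent-api.yml, since the shared substrate modules were already kept at first appearance.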
If you want to see the concrete host-facing endpoints and post-start checks for a profile or profile-combination, read:
If you want named bundles on top of composition, read:
Or use:
aoa-profile-modules --profile agentic --profile tools --paths
aoa-profile-endpoints --profile agentic --profile tools
Bring up the smallest substrate:
aoa-up --profile core
Bring up the main agent runtime:
aoa-profile-modules --profile agentic --paths
aoa-profile-endpoints --profile agentic
aoa-up --profile agentic
Bring up the Intel-aware agent runtime:
aoa-profile-modules --profile intel --paths
aoa-profile-endpoints --profile intel
aoa-up --profile intel
Bring up an agent runtime plus the optional federation seam:
scripts/aoa-sync-federation-surfaces --layer aoa-agents
scripts/aoa-sync-federation-surfaces --layer aoa-routing
scripts/aoa-sync-federation-surfaces --layer aoa-memo
scripts/aoa-sync-federation-surfaces --layer aoa-evals
scripts/aoa-sync-federation-surfaces --layer aoa-playbooks
scripts/aoa-sync-federation-surfaces --layer aoa-kag
scripts/aoa-sync-federation-surfaces --layer tos-source
aoa-profile-modules --profile agentic --profile federation --paths
aoa-profile-endpoints --profile agentic --profile federation
aoa-up --profile agentic --profile federation
If you want the live advisory consumer as well, enable AOA_FEDERATED_RUN_ENABLED=true for langchain-api before starting the combined profile.
Bring up the route-first ToS graph helper:
aoa-profile-modules --profile curation --paths
aoa-profile-endpoints --profile curation
aoa-up --profile curation
Or layer it onto the current core substrate:
aoa-profile-modules --profile core --profile curation --paths
aoa-profile-endpoints --profile core --profile curation
aoa-up --profile core --profile curation
Bring up an agent runtime plus tools and observability:
aoa-profile-modules --profile agentic --profile tools --profile observability --paths
aoa-profile-endpoints --profile agentic --profile tools --profile observability
aoa-up --profile agentic --profile tools --profile observability
Bring up only observability:
aoa-up --profile observability