⏱ Estimated time: 30 minutes
This guide will walk you through setting up the Apollo Federation Supergraph reference architecture on Minikube for local development.
Before you begin, ensure you have:
- Minikube installed and configured
- kubectl installed
- Helm installed
- Docker installed
- jq installed
- curl installed
- An Apollo GraphOS account with a Personal API key:
  - Go to Apollo GraphOS Studio
  - Navigate to User Settings > API Keys
  - Create a new Personal API key or use an existing one
  - Copy the API key value
Follow the Minikube installation guide for your operating system.
Verify each tool is installed:

```
minikube version
kubectl version --client
helm version
docker --version
```

- Copy the environment template:

  ```
  cp scripts/minikube/.env.sample .env
  ```

- Edit `.env` and set your Apollo GraphOS Personal API key and environment:

  ```
  export APOLLO_KEY="your-apollo-personal-api-key"
  export ENVIRONMENT="dev"
  ```

The `ENVIRONMENT` variable is required and allows you to create multiple environments. Each environment references its own Apollo GraphOS variant (e.g., `@dev`, `@prod`) in the SupergraphSchema CRD. Variants are created automatically when schemas are first published to them.
Note: When deploying subgraphs, the scripts look for environment-specific values files at `subgraphs/{subgraph}/deploy/environments/${ENVIRONMENT}.yaml`. If this file exists, it overrides the default `values.yaml`; otherwise the default `values.yaml` is used. The repository includes `dev.yaml` and `prod.yaml` files for all subgraphs. If you create a custom environment name, you can optionally create matching values files for environment-specific configuration.
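The values-file resolution described above can be sketched as a small shell check (the paths follow the pattern quoted in the note; the `SUBGRAPH` value is just an example):

```shell
# Sketch of the values-file resolution described above (illustrative only).
SUBGRAPH="products"                # example subgraph name
ENVIRONMENT="${ENVIRONMENT:-dev}"  # falls back to dev if unset
DEFAULT_VALUES="subgraphs/${SUBGRAPH}/deploy/values.yaml"
ENV_VALUES="subgraphs/${SUBGRAPH}/deploy/environments/${ENVIRONMENT}.yaml"

if [ -f "${ENV_VALUES}" ]; then
  echo "Using environment override: ${ENV_VALUES}"
else
  echo "Using default values: ${DEFAULT_VALUES}"
fi
```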
Run the scripts in order from the repository root:
```
./scripts/minikube/01-setup-minikube.sh
```

This script:
- Starts or creates a Minikube cluster
- Enables the ingress addon for external access
- Configures kubectl to use the Minikube context
```
./scripts/minikube/02-setup-apollo-graph.sh
```

This script:
- Creates an Apollo GraphOS graph
- Creates an Operator API key
- Saves configuration to `.env`
Note: Make sure your .env file has APOLLO_KEY set before running this script.
Note: Variants (e.g., @dev, @prod) are referenced in the SupergraphSchema CRD and will be created automatically when schemas are first published to those variants.
```
source .env
./scripts/minikube/03-setup-cluster.sh
```

This script:
- Creates required namespaces (`apollo-operator`, `apollo`)
- Creates the operator API key secret
- Installs the Apollo GraphOS Operator via Helm
```
./scripts/minikube/04-build-images.sh
```

This script:
- Configures Docker to use Minikube's Docker daemon
- Builds all subgraph images locally
- Builds coprocessor and client images (for future use)
```
./scripts/minikube/05-deploy-subgraphs.sh
```

This script:
- Deploys eight GraphQL subgraphs (checkout, discovery, inventory, orders, products, reviews, shipping, users) using Helm charts
- Deploys the promotions-api REST API service (data source for the promotions Connector)
- Creates Subgraph CRDs with inline SDL schemas for all nine subgraphs, including the promotions Connector (schema-only; uses `http://ignore` as the endpoint)
- Configures images to use local builds
Monitor subgraph deployment:
```
kubectl get subgraphs --all-namespaces
kubectl get pods --all-namespaces
```

The coprocessor handles JWT authentication for the `@authenticated` directive and is required for the router to function properly.
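As a sketch, you can script the readiness check before moving on (this assumes each subgraph runs in a namespace matching its name, which may differ in your setup):

```shell
# Wait for each subgraph's pods to become Ready (namespace-per-subgraph assumed).
count=0
for ns in checkout discovery inventory orders products reviews shipping users; do
  if kubectl wait --for=condition=ready pod --all -n "${ns}" --timeout=60s >/dev/null 2>&1; then
    echo "${ns}: ready"
  else
    echo "${ns}: not ready yet (or namespace missing)"
  fi
  count=$((count+1))
done
echo "checked ${count} subgraph namespaces"
```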
```
./scripts/minikube/06-deploy-coprocessor.sh
```

This script:
- Builds the coprocessor Docker image (if not already built)
- Deploys the coprocessor using Helm
- Waits for coprocessor pods to be ready
Note: The coprocessor validates JWT tokens from the users subgraph's JWKS endpoint and enables the @authenticated directive to work properly. It must be deployed before the router (script 08).
If you’re enabling Apollo Router response caching, deploy Redis first:
```
./scripts/minikube/07-deploy-redis.sh
```

This script:
- Creates a dedicated `redis` namespace
- Installs Redis via Helm (standalone, no auth, no persistence)
- Waits for Redis to be ready
```
./scripts/minikube/08-deploy-operator-resources.sh
```

This script:
- Deploys SupergraphSchema CRD (triggers composition)
- Deploys Supergraph CRD with router configuration (deploys the Apollo Router)
- Waits for the router deployment to be created
Note: The coprocessor (script 06) must be deployed before running this script.
Note: The router is configured with telemetry enabled. If you plan to use distributed tracing, it's recommended to deploy the telemetry stack (script 11) before deploying the router to avoid connection errors in the router logs. However, the router will still function correctly even if the telemetry collector is not available - you'll just see connection errors in the logs until the collector is deployed.
Monitor router deployment:
```
kubectl get supergraphs -n apollo
kubectl get pods -n apollo
kubectl describe supergraph reference-architecture-${ENVIRONMENT} -n apollo
```

Next, set up external access to the router:

```
./scripts/minikube/09-setup-router-access.sh
```

This script:
- Enables and configures the ingress controller addon (required for the client application's Ingress resource)
- Configures the ingress controller as LoadBalancer for `minikube tunnel` support
- Determines and saves the router URL to the `.env` file
- Note: The router does not use an Ingress resource; the client's nginx proxies to it internally. The ingress controller is needed for the client's Ingress.
```
./scripts/minikube/10-deploy-client.sh
```

This script:
- Builds and deploys the client application
- Sets up ingress for client access
Note: The client is required if you want to access the router via the ingress controller (minikube tunnel or NodePort). If you only need direct router access, you can use port-forward (Option 2 in Step 4) and skip this script.
Deploy Zipkin and OpenTelemetry Collector for distributed tracing:
```
./scripts/minikube/11-deploy-telemetry.sh
```

This script:
- Creates the `monitoring` namespace
- Deploys Zipkin for trace visualization
- Deploys OpenTelemetry Collector to receive traces from router and subgraphs
- Waits for both services to be ready
Note: This script is optional. Telemetry is configured for all environments (dev and prod) but the telemetry stack only needs to be deployed once per cluster. The collector receives traces from all subgraphs and the router, then exports them to Zipkin.
Recommended: Deploy telemetry (script 11) before deploying the router (script 08) to avoid connection errors in router logs. If you've already deployed the router, you can deploy telemetry at any time - the router will automatically start sending traces once the collector is available.
Access Zipkin UI:
```
kubectl port-forward -n monitoring svc/zipkin 9411:9411
```

Then open http://localhost:9411 in your browser to view traces.
Deploy the Apollo MCP Server to expose your supergraph to AI agents and LLM tools via the Model Context Protocol:
```
./scripts/minikube/12-deploy-mcp-server.sh
```

This script:
- Creates a Kubernetes secret with Apollo GraphOS credentials and endpoint configuration
- Deploys the Apollo MCP Server via Helm into the `apollo` namespace
- Configures the MCP server to connect to the local Router instance
- Enables OAuth 2.1 authentication using the users subgraph as the authorization server
- Waits for the MCP server pod to be ready
Prerequisites: The router (script 08) and subgraphs (script 05) must be deployed first. The MCP server connects to the router and uses the users subgraph for authentication.
After deploying the MCP server, start the required port-forwards for local access:
```
./scripts/minikube/12a-mcp-port-forwards.sh
```

This script:
- Starts port-forwards for the MCP server (localhost:5001) and the OAuth auth server (localhost:4001)
- Adds the required `/etc/hosts` entry for the OAuth flow (prompts for sudo)
- Verifies connectivity to both services
- Keeps running until you press Ctrl+C
After running this script, see Step 6: Connect AI Agents via MCP for MCP client configuration.
After running all scripts, you can access your supergraph in several ways:
Note: This option requires the client application to be deployed (script 10) because it uses the client's Ingress resource.
The ingress controller has been configured as a LoadBalancer service. To access it via minikube tunnel:
```
minikube tunnel
```

Troubleshooting if the tunnel hangs:
- Check if a tunnel is already running: `ps aux | grep 'minikube tunnel'`
- Stop the existing tunnel: `pkill -f 'minikube tunnel'`
- Try running with explicit cleanup: `minikube tunnel --cleanup`
- On macOS, if the sudo password isn't prompted, try: `sudo minikube tunnel`
Important notes:
- You may see a message "Starting tunnel for service router"; this can be safely ignored
- The "router" is an Ingress resource (not a service), so it doesn't need tunneling
- Only the `ingress-nginx-controller` LoadBalancer service needs tunneling
- Wait for the "Status: running" message
- Access the client UI at: http://127.0.0.1/
Why you see "router" in the tunnel output:
The ingress controller automatically sets a LoadBalancer status on Ingress resources, which makes minikube tunnel think it needs to tunnel them. However, since the ingress controller is already being tunneled, the router is accessible through it. You can safely ignore this message.
Port forward directly to the router service. This method does not require the client application:
```
kubectl port-forward service/reference-architecture-${ENVIRONMENT} -n apollo 4000:80
```

Then access http://localhost:4000 in your browser.
Note: Keep the port-forward command running in a terminal while you access the router.
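With the port-forward running, you can also check the shape of a response. A minimal sketch using jq (the response below is canned so the snippet runs offline; pipe real `curl` output through the same filter):

```shell
# Canned response standing in for: curl -s -X POST http://localhost:4000 ...
RESPONSE='{"data":{"__typename":"Query"}}'
# Extract the root type name; a healthy router returns "Query".
TYPENAME=$(echo "$RESPONSE" | jq -r '.data.__typename')
echo "router returned: ${TYPENAME}"
```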
Note: This option requires the client application to be deployed (script 10) because it uses the client's Ingress resource.
Get the Minikube IP and ingress controller NodePort:
```
MINIKUBE_IP=$(minikube ip)
NODEPORT=$(kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
echo "Access at: http://${MINIKUBE_IP}:${NODEPORT}"
```

Note: This method may not work reliably on macOS due to network routing. Use Option 1 (minikube tunnel) instead.
Test the router with a simple GraphQL query:
```
curl -X POST http://localhost:4000 \
  -H "Content-Type: application/json" \
  -d '{"query":"{ __typename }"}'
```

Or test the health endpoint (if accessible on the main port):

```
curl http://localhost:4000/health
```

If you deployed the client application (script 10), you can log in using the following test credentials:
The application includes three test users:
| Username | Password | Email | Notes |
|---|---|---|---|
| `user1` | Any non-empty password | user1@contoso.org | Has 2 credit cards, cart with items |
| `user2` | Any non-empty password | user2@contoso.org | Has 1 debit card, cart with items |
| `user3` | Any non-empty password | user3@contoso.org | Has debit card and bank account, empty cart |
- Navigate to the client application (typically at `http://127.0.0.1/` if using minikube tunnel)
- Click "Login" in the navigation menu
- Enter one of the test usernames (e.g., `user1`)
- Enter any non-empty password (e.g., `password`)
- Click "Sign In"
Note: The password validation only checks that it's not empty. Any non-empty password will work for authentication.
Scopes are server-assigned based on user data and control access to certain fields:
- `user:read:email`: Allows reading the user's email address (assigned to all users by default)
- `inventory:read`: Allows reading inventory levels (assigned to the `inventoryManager` user)
Available test users and their scopes:
- `user1`, `user2`, `user3`: Have the `user:read:email` scope
- `inventoryManager`: Has the `user:read:email` and `inventory:read` scopes
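As an illustrative sketch (not taken from the repository), this is the shape of an authenticated request against a scope-protected field, assuming the router is port-forwarded on localhost:4000 and you have a JWT:

```shell
# TOKEN is a placeholder; a real JWT is issued when you log in through the client.
TOKEN="<paste-jwt-here>"
QUERY='{"query":"{ me { username email } }"}'

# Sanity-check that the request body is valid JSON before sending:
echo "${QUERY}" | python3 -m json.tool

# Then send it to the router (uncomment once the port-forward is running):
# curl -s -X POST http://localhost:4000 \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer ${TOKEN}" \
#   -d "${QUERY}"
```

Without a valid token, fields behind `@authenticated` or `@requiresScopes` return authorization errors.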
If you deployed the Apollo MCP Server (script 12), you can connect AI agents and LLM tools to your supergraph. This requires a few networking steps because both the MCP server and its OAuth authorization server run inside the Minikube cluster.
For production deployment guidance — including provider-specific IdP configuration (redirect URLs, logout URLs, scopes, audience) for Auth0, Okta, and Keycloak — see the MCP Production Guide.
- Apollo MCP Server deployed (script 12)
- `npx` available (comes with Node.js)
- Two available terminal windows for port-forwarding
The MCP server and the OAuth authorization server (users subgraph) both need to be reachable from your local machine.
Option A: Use the helper script (recommended)
```
./scripts/minikube/12a-mcp-port-forwards.sh
```

This starts both port-forwards, adds the `/etc/hosts` entry (Step 6b), and verifies connectivity. Keep the script running — press Ctrl+C to stop.
Option B: Manual port-forwards
Open two terminal windows:
Terminal 1 — MCP Server:
```
kubectl port-forward -n apollo svc/apollo-mcp-server 5001:8000
```

Terminal 2 — OAuth Authorization Server:

```
kubectl port-forward -n users svc/graphql 4001:4001
```

Keep both terminals running. If either port-forward drops (e.g., after a pod restart), restart it.
If you used the helper script (12a) in Step 6a, this was already handled for you. Skip to Step 6c.
The MCP server's OAuth configuration references the users subgraph by its in-cluster DNS name (graphql.users.svc.cluster.local). For the OAuth flow to work from your local machine, this hostname must resolve to localhost where the port-forward is listening.
Add this entry to your /etc/hosts file:
```
echo '127.0.0.1 graphql.users.svc.cluster.local' | sudo tee -a /etc/hosts
```

Verify it works:

```
curl -s http://graphql.users.svc.cluster.local:4001/.well-known/oauth-authorization-server | python3 -m json.tool
```

You should see OAuth metadata including `authorization_endpoint`, `token_endpoint`, and `registration_endpoint`.
Why is this needed? The MCP server advertises its authorization server URL using the in-cluster DNS name. MCP clients (like mcp-remote) follow this URL to start the OAuth flow. Without the hosts entry, your local machine can't resolve the cluster-internal hostname. Inside the cluster, the same hostname resolves normally via Kubernetes DNS, so the MCP server can validate tokens by fetching JWKS from the same URL.
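A quick sketch to confirm the hosts entry is in place before starting an MCP client:

```shell
# Check /etc/hosts for the cluster-internal hostname used in the OAuth flow.
MCP_AUTH_HOST='graphql.users.svc.cluster.local'
if grep -q "${MCP_AUTH_HOST}" /etc/hosts 2>/dev/null; then
  RESULT="present"
else
  RESULT="missing"
fi
echo "hosts entry for ${MCP_AUTH_HOST}: ${RESULT}"
```

If it reports `missing`, re-run the `tee` command from Step 6b.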
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "apollo-reference-arch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:5001/mcp",
        "--transport",
        "http-only"
      ]
    }
  }
}
```

Restart Claude Desktop. Your browser will open a login page where you enter your username and password (same test users as the client app — see Step 5). After signing in, the MCP tools should appear in the tool list.
Troubleshooting Claude Desktop:
- If you see `EADDRINUSE` errors, kill stale `mcp-remote` processes:

  ```
  pkill -f "mcp-remote.*localhost:5001"
  ```

- If authorization fails, clear the `mcp-remote` auth cache and restart:

  ```
  rm -rf ~/.mcp-auth/mcp-remote-*/
  ```

For Claude Code, add the server with:

```
claude mcp add --transport http apollo-mcp -- npx mcp-remote http://localhost:5001/mcp --transport http-only
```

For Cursor, add to your MCP settings (`.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "apollo-reference-arch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:5001/mcp",
        "--transport",
        "http-only"
      ]
    }
  }
}
```

Use MCP Inspector to verify tools are available:

```
npx @modelcontextprotocol/inspector http://localhost:5001/mcp --transport http
```

This opens a browser at http://127.0.0.1:6274 where you can click Connect and then List Tools to verify the available operations.
The MCP server uses OAuth 2.1 with the users subgraph acting as the authorization server. The full flow:
```
MCP Client (mcp-remote)                 MCP Server                     Users Subgraph
   |                                        |                                |
   |-- GET /.well-known/oauth-auth... ----->|                                |
   |<-- auth server URL --------------------| (graphql.users.svc...:4001)    |
   |                                        |                                |
   |-- POST /register ------------------------------------------------------>|
   |<-- client_id, client_secret --------------------------------------------|
   |                                        |                                |
   |-- GET /authorize (browser) --------------------------------------------->|
   |<-- redirect with auth code ----------------------------------------------|
   |                                        |                                |
   |-- POST /token (exchange code) ------------------------------------------->|
   |<-- JWT access token -------------------------------------------------------|
   |                                        |                                |
   |-- POST /mcp + Authorization: Bearer -->|                                |
   |                                        |-- validate JWT (JWKS) -------->|
   |                                        |-- GraphQL query + token ---> Router
   |<-- tool results -----------------------|                                |
```
- Tool Discovery — The MCP server allows unauthenticated `initialize` and `tools/list` calls (`allow_anonymous_mcp_discovery: true`), so MCP clients can display available tools before the user signs in
- Auth Server Discovery — When a tool is invoked, `mcp-remote` gets a `401` with a `WWW-Authenticate` header pointing to the Protected Resource Metadata, which in turn references the users subgraph as the authorization server
- Client Registration — `mcp-remote` registers itself either via Client ID Metadata Documents (if supported) or dynamically via RFC 7591. The authorization server advertises `client_id_metadata_document_supported: true` and supports both approaches
- Authorization — The user's browser opens a login page at the `/authorize` endpoint, where they sign in with their username and password. For CIMD clients, the login page shows the client's name and redirect hostname
- Token Exchange — `mcp-remote` exchanges the authorization code for a JWT access token
- Authenticated Requests — The MCP server validates the JWT, then forwards it to the Router as a Bearer token. The Router enforces `@authenticated` and `@requiresScopes` directives as usual
The MCP server exposes pre-defined GraphQL operations as tools:
| Tool | Description | Operation |
|---|---|---|
| `MyProfileDetails` | Fetches the authenticated user's profile (username, email, address, loyalty points) | `query { me { id username email shippingAddress ... } }` |
| `MyCart` | Fetches the authenticated user's shopping cart with full product details | `query { me { cart { items { product { ... } } } } }` |
| `introspect` | Explores the GraphQL schema by type name | Built-in schema introspection |
Operations are defined in `deploy/apollo-mcp-server/operations/` and can be customized by adding or modifying `.graphql` files.
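For example, a new tool could be added by dropping a file like the following into that directory (the operation name and fields here are hypothetical; use fields that exist in your schema):

```graphql
# deploy/apollo-mcp-server/operations/MyOrders.graphql (hypothetical example)
query MyOrders {
  me {
    orders {
      id
      status
    }
  }
}
```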
To create a new environment (e.g., "prod"):
- Set the environment variable:

  ```
  export ENVIRONMENT="prod"
  ```

- Run the setup and deploy scripts again with the new environment:
```
./scripts/minikube/02-setup-apollo-graph.sh
source .env
./scripts/minikube/03-setup-cluster.sh
./scripts/minikube/04-build-images.sh
./scripts/minikube/05-deploy-subgraphs.sh
./scripts/minikube/06-deploy-coprocessor.sh
./scripts/minikube/08-deploy-operator-resources.sh
./scripts/minikube/09-setup-router-access.sh
./scripts/minikube/10-deploy-client.sh
```

Note: Script 11 (telemetry) only needs to be run once per cluster, not per environment. All environments share the same telemetry stack.
Each environment will have:
- Its own Apollo GraphOS variant (created automatically when schemas are published)
- Separate Kubernetes resources (namespaces, services, etc.)
- Its own router instance
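To see at a glance which environments have a router deployed, a small sketch (assumes your kubectl context points at the Minikube cluster):

```shell
# Summarize which environment variants have a Supergraph resource.
SUMMARY=""
for env in dev prod; do
  if kubectl get supergraph "reference-architecture-${env}" -n apollo >/dev/null 2>&1; then
    SUMMARY="${SUMMARY}${env}:deployed "
  else
    SUMMARY="${SUMMARY}${env}:absent "
  fi
done
echo "environments -> ${SUMMARY}"
```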
```
minikube delete
minikube start
```

Ensure script 04 built the images and Docker is using Minikube's daemon:
```
eval $(minikube docker-env)
docker images | grep local
```

Check subgraph status:

```
kubectl describe subgraph <subgraph-name> -n <subgraph-namespace>
```

Look for errors in schema extraction or API key authentication.
Check router status:
```
kubectl describe supergraph reference-architecture-${ENVIRONMENT} -n apollo
kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT}
```

If the router is not picking up configuration changes:
- Verify `routerConfig` in the Supergraph CRD:

  ```
  kubectl get supergraph reference-architecture-${ENVIRONMENT} -n apollo -o yaml | grep -A 50 routerConfig
  ```

- Check router deployment status:

  ```
  kubectl describe deployment reference-architecture-${ENVIRONMENT} -n apollo
  kubectl get pods -n apollo
  ```

- Check router logs for configuration errors:

  ```
  kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT} | grep -i "config\|error"
  ```

- Re-apply the router configuration:

  ```
  kubectl apply -f deploy/operator-resources/supergraph-${ENVIRONMENT}.yaml
  ```
For more details on updating router configuration, see the Operator Guide.
If router pods are crashing:
- Check pod logs:

  ```
  kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT} --previous
  ```

- Common causes:
  - Invalid YAML in `spec.routerConfig` (check syntax in the Supergraph CRD)
  - Schema composition issues (check SupergraphSchema status)
  - Missing coprocessor (verify the coprocessor is running)

- Verify configuration syntax:

  ```
  # Check if the Supergraph CRD has valid routerConfig
  kubectl get supergraph reference-architecture-${ENVIRONMENT} -n apollo -o yaml | grep -A 50 routerConfig
  ```
Ensure ingress addon is enabled:
```
minikube addons enable ingress
kubectl get pods -n ingress-nginx
```

If minikube tunnel hangs without prompting for your sudo password:
- Check if a tunnel is already running:

  ```
  ps aux | grep 'minikube tunnel'
  ```

- Stop any existing tunnel processes:

  ```
  pkill -f 'minikube tunnel'
  ```

- Try running with the cleanup flag:

  ```
  minikube tunnel --cleanup
  ```

- On macOS, try running with sudo:

  ```
  sudo minikube tunnel
  ```

- Alternative: use NodePort or port-forward instead:
  - NodePort: access via `http://$(minikube ip):$(kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')`
  - Port-forward: `kubectl port-forward service/reference-architecture-${ENVIRONMENT} -n apollo 4000:80`
- Read the Operator Guide to understand how the Apollo GraphOS Operator works
- Explore your supergraph in Apollo Studio
- Make schema changes and see them automatically composed and deployed