Setup

⏱ Estimated time: 30 minutes

This guide will walk you through setting up the Apollo Federation Supergraph reference architecture on Minikube for local development.

Prerequisites

Before you begin, you need an Apollo GraphOS account and a Personal API key:

Get Your Apollo GraphOS Personal API Key

  1. Go to Apollo GraphOS Studio
  2. Navigate to User Settings > API Keys
  3. Create a new Personal API key or use an existing one
  4. Copy the API key value

Step 1: Install Minikube and Dependencies

Install Minikube

Follow the Minikube installation guide for your operating system.

Verify Installation

minikube version
kubectl version --client
helm version
docker --version

Step 2: Configure Environment Variables

  1. Copy the environment template:
cp scripts/minikube/.env.sample .env
  2. Edit .env and set your Apollo GraphOS Personal API key and environment:
export APOLLO_KEY="your-apollo-personal-api-key"
export ENVIRONMENT="dev"

The ENVIRONMENT variable is required and allows you to create multiple environments. Each environment will reference its own Apollo GraphOS variant (e.g., @dev, @prod) in the SupergraphSchema CRD. Variants are created automatically when schemas are first published to them.

Note: When deploying subgraphs, the scripts will look for environment-specific values files at subgraphs/{subgraph}/deploy/environments/${ENVIRONMENT}.yaml. If this file exists, it will be used to override the default values.yaml. If it doesn't exist, the default values.yaml will be used. The repository includes dev.yaml and prod.yaml files for all subgraphs. If you create a custom environment name, you can optionally create matching values files for environment-specific configurations.
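
For example, here is a minimal sketch of a custom environment values file. The subgraph name, file path, and keys below are illustrative; check the existing dev.yaml files in the repository for the actual structure:

# Hypothetical: values for a custom "staging" environment of the
# products subgraph. The keys are placeholders, not the chart's schema.
mkdir -p subgraphs/products/deploy/environments
cat > subgraphs/products/deploy/environments/staging.yaml <<'EOF'
# Overrides merged on top of the subgraph's default values.yaml
replicaCount: 2
EOF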

Step 3: Run Setup Scripts

Run the scripts in order from the repository root:

Script 01: Setup Minikube Cluster

./scripts/minikube/01-setup-minikube.sh

This script:

  • Starts or creates a Minikube cluster
  • Enables the ingress addon for external access
  • Configures kubectl to use the Minikube context

Script 02: Setup Apollo GraphOS Graph

./scripts/minikube/02-setup-apollo-graph.sh

This script:

  • Creates an Apollo GraphOS graph
  • Creates an Operator API key
  • Saves configuration to .env

Note: Make sure your .env file has APOLLO_KEY set before running this script.

Note: Variants (e.g., @dev, @prod) are referenced in the SupergraphSchema CRD and will be created automatically when schemas are first published to those variants.

Script 03: Setup Kubernetes Cluster

source .env
./scripts/minikube/03-setup-cluster.sh

This script:

  • Creates required namespaces (apollo-operator, apollo)
  • Creates the operator API key secret
  • Installs the Apollo GraphOS Operator via Helm
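
To confirm the operator installed cleanly, check its pods and Helm release (the release name may vary):

kubectl get pods -n apollo-operator
helm list -n apollo-operator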

Script 04: Build Docker Images

./scripts/minikube/04-build-images.sh

This script:

  • Configures Docker to use Minikube's Docker daemon
  • Builds all subgraph images locally
  • Builds coprocessor and client images (used by later scripts)

Script 05: Deploy Subgraphs

./scripts/minikube/05-deploy-subgraphs.sh

This script:

  • Deploys eight GraphQL subgraphs (checkout, discovery, inventory, orders, products, reviews, shipping, users) using Helm charts
  • Deploys the promotions-api REST API service (data source for the promotions Connector)
  • Creates Subgraph CRDs with inline SDL schemas for all nine subgraphs, including the promotions Connector (schema-only; uses http://ignore as endpoint)
  • Configures images to use local builds

Monitor subgraph deployment:

kubectl get subgraphs --all-namespaces
kubectl get pods --all-namespaces
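
If you prefer to block until all subgraphs are up, a sketch like the following works. It assumes each subgraph runs in a namespace matching its name (as the users subgraph does in this repository); adjust the namespace list if yours differ:

# Wait up to 3 minutes for the pods in each subgraph namespace
for ns in checkout discovery inventory orders products reviews shipping users; do
  kubectl wait --for=condition=ready pod --all -n "$ns" --timeout=180s
done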

Script 06: Deploy Coprocessor

The coprocessor handles JWT authentication for the @authenticated directive and is required for the router to function properly.

./scripts/minikube/06-deploy-coprocessor.sh

This script:

  • Builds the coprocessor Docker image (if not already built)
  • Deploys the coprocessor using Helm
  • Waits for coprocessor pods to be ready

Note: The coprocessor validates JWT tokens from the users subgraph's JWKS endpoint and enables the @authenticated directive to work properly. It must be deployed before the router (script 08).
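
To spot-check that the users subgraph is serving its authorization-server metadata (which typically includes the jwks_uri the coprocessor fetches signing keys from), you can port-forward and curl it; the service name and port match those used in Step 6:

kubectl port-forward -n users svc/graphql 4001:4001 &
PF_PID=$!
sleep 2
curl -s http://localhost:4001/.well-known/oauth-authorization-server | python3 -m json.tool
kill $PF_PID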

Script 07: Deploy Redis (for Router response caching)

If you’re enabling Apollo Router response caching, deploy Redis first:

./scripts/minikube/07-deploy-redis.sh

This script:

  • Creates a dedicated redis namespace
  • Installs Redis via Helm (standalone, no auth, no persistence)
  • Waits for Redis to be ready
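
To verify Redis is answering, you can run a one-off ping. The service name redis-master is an assumption based on common chart defaults, so check kubectl get svc -n redis for the actual name:

# Expected output: PONG
kubectl run redis-ping --rm -i --restart=Never -n redis \
  --image=redis:alpine -- redis-cli -h redis-master ping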

Script 08: Deploy Operator Resources

./scripts/minikube/08-deploy-operator-resources.sh

This script:

  • Deploys SupergraphSchema CRD (triggers composition)
  • Deploys Supergraph CRD with router configuration (deploys the Apollo Router)
  • Waits for the router deployment to be created

Note: The coprocessor (script 06) must be deployed before running this script.

Note: The router is configured with telemetry enabled. If you plan to use distributed tracing, it's recommended to deploy the telemetry stack (script 11) before deploying the router to avoid connection errors in the router logs. However, the router will still function correctly even if the telemetry collector is not available - you'll just see connection errors in the logs until the collector is deployed.

Monitor router deployment:

kubectl get supergraphs -n apollo
kubectl get pods -n apollo
kubectl describe supergraph reference-architecture-${ENVIRONMENT} -n apollo
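
You can also block until the router deployment reports available (the deployment name matches the one used in the log commands later in this guide):

kubectl wait --for=condition=available \
  deployment/reference-architecture-${ENVIRONMENT} -n apollo --timeout=300s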

Script 09: Setup Router Access

./scripts/minikube/09-setup-router-access.sh

This script:

  • Enables and configures the ingress controller addon (required for the client application's Ingress resource)
  • Configures the ingress controller as LoadBalancer for minikube tunnel support
  • Determines and saves the router URL to .env file
  • Note: The router does not use an Ingress resource - the client's nginx proxies to it internally. The ingress controller is needed for the client's Ingress.
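
After the script completes, you can confirm the ingress controller service was switched to type LoadBalancer:

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.type}'
# Expected output: LoadBalancer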

Script 10: Deploy Client

./scripts/minikube/10-deploy-client.sh

This script:

  • Builds and deploys the client application
  • Sets up ingress for client access

Note: The client is required if you want to access the router via the ingress controller (minikube tunnel or NodePort). If you only need direct router access, you can use port-forward (Option 2 in Step 4) and skip this script.

Script 11: Deploy Telemetry Stack (Optional)

Deploy Zipkin and OpenTelemetry Collector for distributed tracing:

./scripts/minikube/11-deploy-telemetry.sh

This script:

  • Creates the monitoring namespace
  • Deploys Zipkin for trace visualization
  • Deploys OpenTelemetry Collector to receive traces from router and subgraphs
  • Waits for both services to be ready

Note: This script is optional. Telemetry is configured for all environments (dev and prod) but the telemetry stack only needs to be deployed once per cluster. The collector receives traces from all subgraphs and the router, then exports them to Zipkin.

Recommended: Deploy telemetry (script 11) before deploying the router (script 08) to avoid connection errors in router logs. If you've already deployed the router, you can deploy telemetry at any time - the router will automatically start sending traces once the collector is available.

Access Zipkin UI:

kubectl port-forward -n monitoring svc/zipkin 9411:9411

Then open http://localhost:9411 in your browser to view traces.
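
To generate a trace worth looking at, send any query through the router (assuming it's reachable on localhost:4000 via the port-forward shown later in Step 4, Option 2), then search for it in the Zipkin UI:

curl -s -X POST http://localhost:4000 \
  -H "Content-Type: application/json" \
  -d '{"query":"{ __typename }"}'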

Script 12: Deploy Apollo MCP Server (Optional)

Deploy the Apollo MCP Server to expose your supergraph to AI agents and LLM tools via the Model Context Protocol:

./scripts/minikube/12-deploy-mcp-server.sh

This script:

  • Creates a Kubernetes secret with Apollo GraphOS credentials and endpoint configuration
  • Deploys the Apollo MCP Server via Helm into the apollo namespace
  • Configures the MCP server to connect to the local Router instance
  • Enables OAuth 2.1 authentication using the users subgraph as the authorization server
  • Waits for the MCP server pod to be ready

Prerequisites: The router (script 08) and subgraphs (script 05) must be deployed first. The MCP server connects to the router and uses the users subgraph for authentication.

Script 12a: Start MCP Port Forwards (Optional)

After deploying the MCP server, start the required port-forwards for local access:

./scripts/minikube/12a-mcp-port-forwards.sh

This script:

  • Starts port-forwards for the MCP server (localhost:5001) and the OAuth auth server (localhost:4001)
  • Adds the required /etc/hosts entry for the OAuth flow (prompts for sudo)
  • Verifies connectivity to both services
  • Keeps running until you press Ctrl+C

After running this script, see Step 6: Connect AI Agents via MCP for MCP client configuration.

Step 4: Access Your Supergraph

After running all scripts, you can access your supergraph in several ways:

Option 1: Using Minikube Tunnel (requires Scripts 09 and 10)

Note: This option requires the client application to be deployed (Script 10) because it uses the client's Ingress resource.

The ingress controller has been configured as a LoadBalancer service. To access it via minikube tunnel:

minikube tunnel

Troubleshooting if tunnel hangs:

  • Check if tunnel is already running: ps aux | grep 'minikube tunnel'
  • Stop existing tunnel: pkill -f 'minikube tunnel'
  • Try running with explicit cleanup: minikube tunnel --cleanup
  • On macOS, if sudo password isn't prompted, try: sudo minikube tunnel

Important notes:

  • You may see a message "Starting tunnel for service router" - this can be safely ignored
  • The "router" is an Ingress resource (not a service), so it doesn't need tunneling
  • Only the ingress-nginx-controller LoadBalancer service needs tunneling
  • Wait for the "Status: running" message
  • Access the client UI at: http://127.0.0.1/

Why you see "router" in the tunnel output: The ingress controller automatically sets a LoadBalancer status on Ingress resources, which makes minikube tunnel think it needs to tunnel them. However, since the ingress controller is already being tunneled, the router is accessible through it. You can safely ignore this message.

Option 2: Using Port Forwarding (no client required)

Port forward directly to the router service. This method does not require the client application:

kubectl port-forward service/reference-architecture-${ENVIRONMENT} -n apollo 4000:80

Then access at http://localhost:4000 in your browser.

Note: Keep the port-forward command running in a terminal while you access the router.

Option 3: Using Ingress via NodePort (requires Scripts 09 and 10)

Note: This option requires the client application to be deployed (Script 10) because it uses the client's Ingress resource.

Get the Minikube IP and ingress controller NodePort:

MINIKUBE_IP=$(minikube ip)
NODEPORT=$(kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
echo "Access at: http://${MINIKUBE_IP}:${NODEPORT}"

Note: This method may not work reliably on macOS due to network routing. Use Option 1 (minikube tunnel) instead.

Verify Router is Working

Test the router with a simple GraphQL query:

curl -X POST http://localhost:4000 \
  -H "Content-Type: application/json" \
  -d '{"query":"{ __typename }"}'

Or test the health endpoint (if accessible on the main port):

curl http://localhost:4000/health

Step 5: Logging Into the Client Application

If you deployed the client application (script 10), you can log in using the following test credentials:

Test Users

The application includes three test users:

Username | Password | Email | Notes
--- | --- | --- | ---
user1 | Any non-empty password | user1@contoso.org | Has 2 credit cards, cart with items
user2 | Any non-empty password | user2@contoso.org | Has 1 debit card, cart with items
user3 | Any non-empty password | user3@contoso.org | Has debit card and bank account, empty cart

Login Instructions

  1. Navigate to the client application (typically at http://127.0.0.1/ if using minikube tunnel)
  2. Click "Login" in the navigation menu
  3. Enter one of the test usernames (e.g., user1)
  4. Enter any non-empty password (e.g., password); scopes are server-assigned based on user data, so there is nothing else to select
  5. Click "Sign In"

Note: The password validation only checks that it's not empty. Any non-empty password will work for authentication.

Scopes

Scopes are server-assigned based on user data and control access to certain fields:

  • user:read:email - Allows reading the user's email address (assigned to all users by default)
  • inventory:read - Allows reading inventory levels (assigned to inventoryManager user)

Available test users and their scopes:

  • user1, user2, user3 - Have user:read:email scope
  • inventoryManager - Has user:read:email and inventory:read scopes

Step 6: Connect AI Agents via MCP

If you deployed the Apollo MCP Server (script 12), you can connect AI agents and LLM tools to your supergraph. This requires a few networking steps because both the MCP server and its OAuth authorization server run inside the Minikube cluster.

For production deployment guidance — including provider-specific IdP configuration (redirect URLs, logout URLs, scopes, audience) for Auth0, Okta, and Keycloak — see the MCP Production Guide.

Prerequisites

  • Apollo MCP Server deployed (script 12)
  • npx available (comes with Node.js)
  • Two available terminal windows for port-forwarding

Step 6a: Start Port Forwards

The MCP server and the OAuth authorization server (users subgraph) both need to be reachable from your local machine.

Option A: Use the helper script (recommended)

./scripts/minikube/12a-mcp-port-forwards.sh

This starts both port-forwards, adds the /etc/hosts entry (Step 6b), and verifies connectivity. Keep the script running — press Ctrl+C to stop.

Option B: Manual port-forwards

Open two terminal windows:

Terminal 1 — MCP Server:

kubectl port-forward -n apollo svc/apollo-mcp-server 5001:8000

Terminal 2 — OAuth Authorization Server:

kubectl port-forward -n users svc/graphql 4001:4001

Keep both terminals running. If either port-forward drops (e.g., after a pod restart), restart it.

Step 6b: Add DNS Entry for the Authorization Server

If you used the helper script (12a) in Step 6a, this was already handled for you. Skip to Step 6c.

The MCP server's OAuth configuration references the users subgraph by its in-cluster DNS name (graphql.users.svc.cluster.local). For the OAuth flow to work from your local machine, this hostname must resolve to localhost where the port-forward is listening.

Add this entry to your /etc/hosts file:

echo '127.0.0.1 graphql.users.svc.cluster.local' | sudo tee -a /etc/hosts

Verify it works:

curl -s http://graphql.users.svc.cluster.local:4001/.well-known/oauth-authorization-server | python3 -m json.tool

You should see OAuth metadata including authorization_endpoint, token_endpoint, and registration_endpoint.

Why is this needed? The MCP server advertises its authorization server URL using the in-cluster DNS name. MCP clients (like mcp-remote) follow this URL to start the OAuth flow. Without the hosts entry, your local machine can't resolve the cluster-internal hostname. Inside the cluster, the same hostname resolves normally via Kubernetes DNS, so the MCP server can validate tokens by fetching JWKS from the same URL.

Step 6c: Configure Your MCP Client

Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "apollo-reference-arch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:5001/mcp",
        "--transport",
        "http-only"
      ]
    }
  }
}

Restart Claude Desktop. Your browser will open a login page where you enter your username and password (same test users as the client app — see Step 5). After signing in, the MCP tools should appear in the tool list.

Troubleshooting Claude Desktop:

  • If you see EADDRINUSE errors, kill stale mcp-remote processes:
pkill -f "mcp-remote.*localhost:5001"
  • If authorization fails, clear the mcp-remote auth cache and restart:
rm -rf ~/.mcp-auth/mcp-remote-*/

Claude Code (CLI)

claude mcp add --transport http apollo-mcp -- npx mcp-remote http://localhost:5001/mcp --transport http-only

Cursor

Add to your MCP settings (.cursor/mcp.json):

{
  "mcpServers": {
    "apollo-reference-arch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:5001/mcp",
        "--transport",
        "http-only"
      ]
    }
  }
}

Step 6d: Verify the Connection

Use MCP Inspector to verify tools are available:

npx @modelcontextprotocol/inspector http://localhost:5001/mcp --transport http

This opens a browser at http://127.0.0.1:6274 where you can click Connect and then List Tools to verify the available operations.

How Authentication Works

The MCP server uses OAuth 2.1 with the users subgraph acting as the authorization server. The full flow:

MCP Client (mcp-remote)               MCP Server                  Users Subgraph
        |                                 |                             |
        |-- GET /.well-known/oauth... --->|                             |
        |<-- auth server URL -------------| (graphql.users.svc...:4001) |
        |                                 |                             |
        |-- POST /register -------------------------------------------->|
        |<-- client_id, client_secret ----------------------------------|
        |                                 |                             |
        |-- GET /authorize (browser) ---------------------------------->|
        |<-- redirect with auth code -----------------------------------|
        |                                 |                             |
        |-- POST /token (exchange code) ------------------------------->|
        |<-- JWT access token ------------------------------------------|
        |                                 |                             |
        |-- POST /mcp + Bearer token ---->|                             |
        |                                 |-- validate JWT (JWKS) ----->|
        |                                 |-- GraphQL query + token --> Router
        |<-- tool results ----------------|                             |

  1. Tool Discovery — The MCP server allows unauthenticated initialize and tools/list calls (allow_anonymous_mcp_discovery: true), so MCP clients can display available tools before the user signs in
  2. Auth Server Discovery — When a tool is invoked, mcp-remote gets a 401 with a WWW-Authenticate header pointing to the Protected Resource Metadata (you can fetch this document yourself; see the check after this list), which in turn references the users subgraph as the authorization server
  3. Client Registration — mcp-remote registers itself either via Client ID Metadata Documents (if supported) or dynamically via RFC 7591. The authorization server advertises client_id_metadata_document_supported: true and supports both approaches
  4. Authorization — The user's browser opens a login page at the /authorize endpoint, where they sign in with their username and password. For CIMD clients, the login page shows the client's name and redirect hostname
  5. Token Exchange — mcp-remote exchanges the authorization code for a JWT access token
  6. Authenticated Requests — The MCP server validates the JWT, then forwards it to the Router as a Bearer token. The Router enforces @authenticated and @requiresScopes directives as usual
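
To see the discovery document from step 2 yourself, you can fetch the Protected Resource Metadata directly. The well-known path below follows RFC 9728; whether this deployment serves it at exactly this path is an assumption:

curl -s http://localhost:5001/.well-known/oauth-protected-resource | python3 -m json.tool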

Available MCP Tools

The MCP server exposes pre-defined GraphQL operations as tools:

Tool | Description | Operation
--- | --- | ---
MyProfileDetails | Fetches the authenticated user's profile (username, email, address, loyalty points) | query { me { id username email shippingAddress ... } }
MyCart | Fetches the authenticated user's shopping cart with full product details | query { me { cart { items { product { ... } } } } }
introspect | Explores the GraphQL schema by type name | Built-in schema introspection

Operations are defined in deploy/apollo-mcp-server/operations/ and can be customized by adding or modifying .graphql files.
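
For instance, here is a hypothetical sketch of adding a new tool. The operation name and selected fields are illustrative, so match them to your actual supergraph schema; you will likely need to redeploy the MCP server for it to pick up the new file:

# Hypothetical: expose a new read-only operation as an MCP tool.
cat > deploy/apollo-mcp-server/operations/MyOrders.graphql <<'EOF'
query MyOrders {
  me {
    orders {
      id
    }
  }
}
EOF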

Creating Additional Environments

To create a new environment (e.g., "prod"):

  1. Set the environment variable:
export ENVIRONMENT="prod"
  2. Run the setup scripts again with the new environment:
./scripts/minikube/02-setup-apollo-graph.sh
source .env
./scripts/minikube/03-setup-cluster.sh
./scripts/minikube/04-build-images.sh
./scripts/minikube/05-deploy-subgraphs.sh
./scripts/minikube/06-deploy-coprocessor.sh
./scripts/minikube/08-deploy-operator-resources.sh
./scripts/minikube/09-setup-router-access.sh
./scripts/minikube/10-deploy-client.sh

Note: Script 11 (telemetry) only needs to be run once per cluster, not per environment. All environments share the same telemetry stack.

Each environment will have:

  • Its own Apollo GraphOS variant (created automatically when schemas are published)
  • Separate Kubernetes resources (namespaces, services, etc.)
  • Its own router instance
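
Once the second environment is up, you should see one Supergraph resource (and one router) per environment:

kubectl get supergraphs -n apollo
# Expect one entry per environment, e.g. reference-architecture-dev
# and reference-architecture-prod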

Troubleshooting

Minikube won't start

minikube delete
minikube start

Images not found

Ensure script 04 built the images and Docker is using Minikube's daemon:

eval $(minikube docker-env)
docker images | grep local

Subgraphs not publishing schemas

Check subgraph status:

kubectl describe subgraph <subgraph-name> -n <subgraph-namespace>

Look for errors in schema extraction or API key authentication.

Router not deploying

Check router status:

kubectl describe supergraph reference-architecture-${ENVIRONMENT} -n apollo
kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT}

Router configuration not applied

If the router is not picking up configuration changes:

  1. Verify routerConfig in Supergraph CRD:

    kubectl get supergraph reference-architecture-${ENVIRONMENT} -n apollo -o yaml | grep -A 50 routerConfig
  2. Check router deployment status:

    kubectl describe deployment reference-architecture-${ENVIRONMENT} -n apollo
    kubectl get pods -n apollo
  3. Check router logs for configuration errors:

    kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT} | grep -i "config\|error"
  4. Re-apply router configuration:

    kubectl apply -f deploy/operator-resources/supergraph-${ENVIRONMENT}.yaml

For more details on updating router configuration, see the Operator Guide.

Router pods in CrashLoopBackOff

If router pods are crashing:

  1. Check pod logs:

    kubectl logs -n apollo deployment/reference-architecture-${ENVIRONMENT} --previous
  2. Common causes:

    • Invalid YAML in spec.routerConfig (check syntax in Supergraph CRD)
    • Schema composition issues (check SupergraphSchema status)
    • Missing coprocessor (verify coprocessor is running)
  3. Verify configuration syntax:

    # Check if Supergraph CRD has valid routerConfig
    kubectl get supergraph reference-architecture-${ENVIRONMENT} -n apollo -o yaml | grep -A 50 routerConfig

Ingress not working

Ensure ingress addon is enabled:

minikube addons enable ingress
kubectl get pods -n ingress-nginx

Minikube tunnel hangs or doesn't prompt for password

If minikube tunnel hangs without prompting for your sudo password:

  1. Check if tunnel is already running:

    ps aux | grep 'minikube tunnel'
  2. Stop any existing tunnel processes:

    pkill -f 'minikube tunnel'
  3. Try running with cleanup flag:

    minikube tunnel --cleanup
  4. On macOS, try running with sudo:

    sudo minikube tunnel
  5. Alternative: Use NodePort or port-forward instead:

    • NodePort: Access via http://$(minikube ip):$(kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
    • Port-forward: kubectl port-forward service/reference-architecture-${ENVIRONMENT} -n apollo 4000:80

Next Steps

  • Read the Operator Guide to understand how the Apollo GraphOS Operator works
  • Explore your supergraph in Apollo Studio
  • Make schema changes and see them automatically composed and deployed