Internal notes for the playground service we use to rehearse VIP staging patterns (Postgres, Kafka, outbound HTTP, mail, filesystem) and gateway behaviour (OIDC via Keycloak, mTLS, correlation id, a few headers) before Kong is replaced by APIM plus the self-hosted gateway on DGi’s cluster.
Who reads this: VIP DevOps, Azure / cloud crew (what to wire in APIM), anyone running Bruno or curl against the service.
If you’re on Azure: start with §2. Routes §3.1, backends §5, Kong parity §7, Keycloak §8, quick checklist §10. Policy lives in APIM; SHGW syncs it. DGi runs the gateway pods, DNS, and network into the app namespace (§5).
We’re retiring Kong for backend APIs and putting APIM in front, with SHGW in managed-azure-api-gateway-dev (kustomize: https://git.svc.vrpintegration.dev/ops/kustomize/azure-api-gateway).
vrp-poc-test-service is a small Spring Boot app with REST endpoints so we can walk through JWT checks, optional client certs, and the usual headers without touching a real business API. It’s on dev, feature, and test staging; Bruno and pipelines have been green there.
So far the happy paths were exercised through Kong hostnames (*.kong.stg.vrpintegration.dev, and the .public.stg… variants in §7.1). Once the same API is exposed through APIM/SHGW, rerun those calls against the new base URL — otherwise we’ve only proven Kong, not the new stack.
In practice: VIP can’t click in the APIM tenant. So we do need one or two short sync sessions — we explain the service and the VIP realm / Kong parity; you create the API, backend, policies, and SHGW attachment; we bring a real token or decode claims with you if something’s off; we hit an endpoint together until behaviour matches what we see through Kong today.
- Name: vrp-poc-test-service, Spring Boot, namespaces vrp-poc-test-service-{dev,feature,test}.
- Goal for you: show APIM+SHGW can sit in front of a normal VIP-style workload: JWT from Keycloak realm VIP (scope poc_test_api_access on the /api/v1/test routes), optional mTLS, plus the integration noise (DB, Kafka, etc.) — details in §3.
- No SOAP in this repo. Your EPI work used realm EPI; here it’s VIP — same validate-jwt idea as your existing policy, different discovery URL and almost certainly different aud.
| Task | Azure | VIP DevOps | DGi |
|---|---|---|---|
| API in APIM, operations, backend URL | R/A | C | I |
| validate-jwt / inbound policies | R/A | C (values + sample token if needed) | I |
| mTLS trust on APIM | R/A | C (CA / Kong CA ref) | C |
| API on SHGW | R/A | I | R (Helm/runtime) |
| Gateway token rotation | A (new key in APIM) | I | R (K8s secret + rollout) |
| Keycloak VIP clients & test tokens | I | R/A | I |
| Network: SHGW → app namespace | I | R | C |
R = does it, A = owns outcome, C = consulted, I = informed.
- API (or product) for this service on the PoC SHGW.
- Backend pointing at the right in-cluster URL (§5.1).
- Operations for the paths in §3.1.
- Inbound: at least validate-jwt on what Kong fronted with OIDC (/api/v1/test/**), using OIDC discovery — same shape as EPI: openid-config, issuers, audiences.
- If you want parity: correlation-style header, Cache-Control: no-store, rewrite /core/actuator/health → /actuator/health, mTLS where Kong used mtls-auth (§7).
- When you have it: public base URL / API name → drop into §13.
| Item | Dev / Feature | Test | Notes |
|---|---|---|---|
| OIDC metadata | https://keycloak-apps-dev.public.apps.stg.vrpintegration.dev/auth/realms/VIP/.well-known/openid-configuration | https://keycloak-apps-test.public.apps.stg.vrpintegration.dev/auth/realms/VIP/.well-known/openid-configuration | Normal Keycloak OIDC; fine for validate-jwt. |
| Issuer (iss) | https://keycloak-apps-dev.public.apps.stg.vrpintegration.dev/auth/realms/VIP | https://keycloak-apps-test.public.apps.stg.vrpintegration.dev/auth/realms/VIP | Must match the token exactly, including trailing-slash quirks. |
| Realm | VIP | VIP | Not EPI. |
| Scope (required by Kong) | poc_test_api_access | same | The token used in Bruno must include it. |
| Keycloak admin (clients) | https://keycloak-apps-dev.public.apps.stg.vrpintegration.dev/auth/admin/master/console/#/VIP/clients | https://keycloak-apps-test.public.apps.stg.vrpintegration.dev/auth/admin/master/console/#/VIP/clients | We maintain the clients. |
| Client row (URL id) | 676c51c6-c670-4696-a6fe-695aeaeddeed | f85069ee-1df8-43ba-8d6b-001ca0641b2d | The “Client ID” field in the UI is usually what you need, not this UUID. |
| Client secret | Credentials tab in Keycloak | same | Only needed if APIM must call the token endpoint; incoming Bearer validation is usually metadata + iss + aud. |
| aud for validate-jwt | TBD from a real token | same | Grab any access token from the same flow as Bruno and decode the payload. Not the same as epi_development. |
| Backend | http://vrp-poc-test-service.vrp-poc-test-service-dev.svc.cluster.local:8080 | http://vrp-poc-test-service.vrp-poc-test-service-test.svc.cluster.local:8080 | Feature: swap the namespace to vrp-poc-test-service-feature. Port 8080, plain HTTP inside the cluster. |
Reference write-up (same ballpark as what you’re doing): Keycloak as OIDC in APIM.
aud without a sync call: either side can decode a token from the VIP client (Bruno, or curl to the token endpoint). If the first APIM test returns 401, we compare the token claims to the policy over a quick call or email.
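Decoding the payload needs no verification library — a minimal sketch; the token below is a throwaway demo value, a real VIP access token goes in its place:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Demo with an unsigned throwaway token; the claim values are illustrative, not real VIP values.
demo_payload = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://example/auth/realms/VIP",
                "aud": "account",
                "scope": "openid poc_test_api_access"}).encode()
).rstrip(b"=").decode()
demo_token = "eyJhbGciOiJub25lIn0." + demo_payload + "."

claims = jwt_payload(demo_token)
print(claims["iss"], claims["aud"], claims["scope"])
```

Whatever shows up under aud (and iss) in a real token is what goes into the APIM policy.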
Clone your EPI validate-jwt block: swap openid-config and issuers to VIP, set audiences from a real VIP token, point the backend at the table above. Scope poc_test_api_access is enforced when Keycloak issues the token; APIM often only checks signature + iss + aud unless you add a claim rule for scope.
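The cloned inbound block would look roughly like this — a sketch only; every value is a placeholder to replace from §2.4 (the audience in particular is TBD from a real token), and the scope claim rule is the optional extra mentioned above:

```xml
<inbound>
    <base />
    <!-- Same shape as the EPI policy; discovery URL swapped to the VIP realm (dev shown). -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://keycloak-apps-dev.public.apps.stg.vrpintegration.dev/auth/realms/VIP/.well-known/openid-configuration" />
        <issuers>
            <issuer>https://keycloak-apps-dev.public.apps.stg.vrpintegration.dev/auth/realms/VIP</issuer>
        </issuers>
        <audiences>
            <audience>TBD-from-a-real-VIP-token</audience>
        </audiences>
        <!-- Optional: enforce the scope in APIM too, not only at token issuance. -->
        <required-claims>
            <claim name="scope" match="any" separator=" ">
                <value>poc_test_api_access</value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>
```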
1. VIP: what the app does and which paths are OIDC vs mTLS.
2. Azure: which APIM instance / SHGW name.
3. Azure: API + backend + policy (screen share optional).
4. VIP: Bruno or curl against the SHGW URL, e.g. GET /api/v1/test/todo/1.
5. Write the public URL and API name into §13.
Not covered here: SOAP on this service; SHGW gateway key rotation (Azure issues the key, DGi patches the secret); SHGW Helm upgrades (DGi).
JSON REST only (OpenAPI in the contract repo).
Spring Boot on Postgres (depends on overlay), Kafka via Strimzi with TLS user certs, optional jsonplaceholder call, SMTP for mail, PVC for the filesystem test.
Controller base: /api/v1.
| Method | Path | What happens |
|---|---|---|
| POST | /api/v1/test/database | Insert row, 201 + body |
| GET | /api/v1/test/database/{id} | Fetch or 404 |
| POST | /api/v1/test/filesystem | Write to PVC path |
| GET | /api/v1/test/todo/{id} | Proxies shape from jsonplaceholder |
| POST | /api/v1/test/kafka | Produce to non-transactional topic |
| POST | /api/v1/test/transactional | Produce; consumer writes to DB |
| POST | /api/v1/admin/mail | Sends mail |
Health: /actuator/health/liveness, /actuator/health/readiness (not under /api/v1). Metrics/info per default Spring config.
Topic names per env in §6. Consumer reads the transactional topic and persists. Producer metadata in application.yaml includes source: vrp-poc-test-service, event-type: PocTestService.exampleCreated, version: 1.
| What | Where |
|---|---|
| Deploy (Kong, Kafka, ingress) | https://git.svc.vrpintegration.dev/vip/services/vip-playground-services/vrp-poc-test-service/vrp-poc-test-service-deploy |
| App | https://git.svc.vrpintegration.dev/vip/services/vip-playground-services/vrp-poc-test-service/vrp-poc-test-service-app |
| OpenAPI | Contract repo vrp-poc-test-service-contract-http |
| Kafka contract | vrp-poc-test-service-contract-kafka |
Argo: https://git.svc.vrpintegration.dev/ops/application-deployments/-/tree/staging/blue/vrp-poc-test-service
Namespaces: ops/stg-deployments → namespaces/blue → vrp-poc-test-service-{dev,feature,test}.yaml
| Stage | Namespace | URL |
|---|---|---|
| Dev | vrp-poc-test-service-dev | http://vrp-poc-test-service.vrp-poc-test-service-dev.svc.cluster.local:8080 |
| Feature | vrp-poc-test-service-feature | http://vrp-poc-test-service.vrp-poc-test-service-feature.svc.cluster.local:8080 |
| Test | vrp-poc-test-service-test | http://vrp-poc-test-service.vrp-poc-test-service-test.svc.cluster.local:8080 |
Service name vrp-poc-test-service, port name http → 8080.
SHGW needs to reach the namespace; the Helm values already include:

```yaml
ingressNamespaces:
  - kubernetes.io/metadata.name: managed-azure-api-gateway-dev
```

TLS terminates at the gateway today; the pod is plain HTTP on 8080. Mail egress uses smtp.vrp.dg-i.net:25 in overlays; outbound HTTP may use vip-outbound-proxy-config.
| Stage | Cluster label | Topics CR namespace | Topics |
|---|---|---|---|
| Dev | vip-kafka-dev | managed-kafka-resources-dev | vrp-poc-test-service-dev, vrp-poc-test-service-transactional-dev |
| Feature | vip-kafka-dev | managed-kafka-resources-dev | vrp-poc-test-service-feature, vrp-poc-test-service-transactional-feature |
| Test | vip-kafka-test | managed-kafka-resources-test | vrp-poc-test-service-test, vrp-poc-test-service-transactional-test |
Bootstrap examples: dev/feature vip-kafka-dev-kafka-bootstrap.managed-kafka-cluster-dev.svc.cluster.local:9093, test vip-kafka-test-kafka-bootstrap.managed-kafka-cluster-test.svc.cluster.local:9093.
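For orientation, the bootstrap + TLS-user wiring in a Spring overlay looks roughly like this — a sketch, not copied from the real overlay; the keystore mount paths and password variables are placeholders for wherever the Strimzi user secret is mounted:

```yaml
spring:
  kafka:
    bootstrap-servers: vip-kafka-dev-kafka-bootstrap.managed-kafka-cluster-dev.svc.cluster.local:9093
    security:
      protocol: SSL
    ssl:
      key-store-location: file:/mnt/kafka-user/user.p12      # placeholder mount path
      key-store-password: ${KAFKA_USER_PASSWORD}             # placeholder, from the user secret
      trust-store-location: file:/mnt/kafka-cluster/ca.p12   # placeholder mount path
      trust-store-password: ${KAFKA_CA_PASSWORD}
```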
Kafka user secrets: kafka-vrp-poc-test-service-user-{dev,feature,test}.
Topic UI: test https://rpc-test.apps.stg.vrpintegration.dev/topics?q=poc-test-service, dev/feature https://rpc-dev.apps.stg.vrpintegration.dev/topics?showInternal=true&q=poc-test-service.
This is what we configured in Git; you don’t copy YAML into APIM — match behaviour.
| Stage | Main Kong host | Public variant |
|---|---|---|
| Dev | vrp-poc-test-service-dev.kong.stg.vrpintegration.dev | …-dev.public.stg.vrpintegration.dev |
| Feature | vrp-poc-test-service-feature.kong.stg.vrpintegration.dev | …-feature.public.stg… |
| Test | vrp-poc-test-service-test.kong.stg.vrpintegration.dev | …-test.public.stg… |
Ingress class: kong.
Public API (vrp-poc-test-service-public): prefix /api/v1/test — plugins correlation-id, openid-connect, response-transformer. OIDC: bearer only, scope poc_test_api_access, issuer URLs as in §2.4 metadata (dev for dev+feature, test for test), hide_credentials: true.
Wide ingress (vrp-poc-test-service): path / — response-transformer, mtls-auth, correlation-id. mTLS CA id in Kong: 9183c403-0442-4431-b801-71e830c3a244 (all overlays).
Actuator (vrp-poc-test-service-public-actuator): exact /core/actuator/health rewritten to /actuator/health.
Mail (vrp-poc-test-service-mtls-public): exact /api/v1/admin/mail with response-transformer only in base — double-check how mTLS lines up with hostname in your env.
| Resource | Type | Effect |
|---|---|---|
| vrp-poc-test-service-correlation-id | correlation-id | X-CORRELATION-ID, uuid, echoed downstream |
| vrp-poc-test-service-response-transformer | response-transformer | Cache-Control: no-store, s-maxage=0 |
| actuator-health-rewrite | request-transformer | external /core/actuator/health → /actuator/health |
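If you want parity for these effects in APIM, they map onto standard policies roughly like this — a sketch, not the agreed policy; the rewrite-uri line assumes it sits on an operation defined for /core/actuator/health, not on the whole API:

```xml
<policies>
    <inbound>
        <base />
        <!-- Parity with Kong correlation-id: generate a uuid if the caller sent none. -->
        <set-header name="X-CORRELATION-ID" exists-action="skip">
            <value>@(Guid.NewGuid().ToString())</value>
        </set-header>
        <!-- Parity with actuator-health-rewrite (health operation only). -->
        <rewrite-uri template="/actuator/health" />
    </inbound>
    <outbound>
        <base />
        <!-- Parity with the response-transformer. -->
        <set-header name="Cache-Control" exists-action="override">
            <value>no-store, s-maxage=0</value>
        </set-header>
    </outbound>
</policies>
```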
Realm VIP — admin links and client row ids are in §2.4. Issuer/metadata same as Kong OIDC block in §7.2. Programme wording sometimes says “OAuth server in APIM”; for JWT validation the discovery doc is what you want.
Full scenario list and load ideas: TEST-CATALOGUE-vrp-poc-test-service.md.
Unit/integration: in vrp-poc-test-service-app, ./mvnw test — doesn’t touch Kong/APIM.
Pipeline: a green deploy job means the image was built and gitops updated behind the Kong URL; that is not the same as a manual gateway smoke test.
Bruno: folder PocTest, env per stage, Bearer token with poc_test_api_access where needed. Base URL: Kong from §7.1 until §13 is filled.
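Next to Bruno, a tiny parametrised smoke call keeps the rerun honest — base URL and token below are placeholders; swap in the SHGW base URL from §13 once it exists:

```python
import urllib.request

def smoke_request(base_url: str, token: str,
                  path: str = "/api/v1/test/todo/1") -> urllib.request.Request:
    """Build the same GET we run through Bruno: Bearer token, JSON accept header."""
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )

# Placeholder base URL (Kong dev host from §7.1) and placeholder token.
req = smoke_request("https://vrp-poc-test-service-dev.kong.stg.vrpintegration.dev", "<token>")
print(req.full_url)
# To actually send: urllib.request.urlopen(req) — a 200 with a todo body is the pass,
# a 401 means the token claims don't match the gateway policy.
```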
Argo: https://argocd.apps.stg.vrpintegration.dev/applications?search=poc-test
Mail: POST /api/v1/admin/mail uses VIP_POCTEST_MAIL_RECIPIENT from deployment-env.yaml in the deploy repo overlay. Change the recipient → merge what Argo tracks → wait for rollout (no new image needed). Then hit the endpoint through the relevant ingress (mTLS may apply — §7.2). If you only ever refresh deploy via the app pipeline, use your usual process; the point is getting new env vars onto running pods.
Backend §5.1, routes §3.1, JWT §7.2 + §2.4, mTLS where Kong had it (CA ref §7.2), optional headers/health rewrite, SOAP N/A. Details and RACI: §2.
Maintained by VIP DevOps for cluster facts; Azure fills §13 when naming is stable.
| Stage | APIM API name | Public base URL | Notes |
|---|---|---|---|
| Dev | |||
| Feature | |||
| Test |