diff --git a/.agents/skills/build-from-issue/SKILL.md b/.agents/skills/build-from-issue/SKILL.md index 0c63e2a2..4e73a61b 100644 --- a/.agents/skills/build-from-issue/SKILL.md +++ b/.agents/skills/build-from-issue/SKILL.md @@ -1,6 +1,6 @@ --- name: build-from-issue -description: Given a GitHub issue number, plan and implement the work described in the issue. Operates iteratively - creates an implementation plan, responds to feedback, and only builds when 'agent-ready' label is applied. Includes tests, documentation updates, and PR creation. Trigger keywords - build from issue, implement issue, work on issue, build issue, start issue. +description: Given a GitHub issue number, plan and implement the work described in the issue. Operates iteratively - creates an implementation plan, responds to feedback, and only builds when the 'state:agent-ready' label is applied. Includes tests, documentation updates, and PR creation. Trigger keywords - build from issue, implement issue, work on issue, build issue, start issue. --- # Build From Issue @@ -14,11 +14,11 @@ This skill operates as a stateful workflow — it can be run repeatedly against - The `gh` CLI must be authenticated (`gh auth status`) - You must be in a git repository with a GitHub remote -## Critical: `agent-ready` Label Is Human-Only +## Critical: `state:agent-ready` Label Is Human-Only -The `agent-ready` label is a **human gate**. It signals that a human has reviewed the plan and authorized the agent to build. Under **no circumstances** should this skill or any agent: +The `state:agent-ready` label is a **human gate**. It signals that a human has reviewed the plan and authorized the agent to build. 
Under **no circumstances** should this skill or any agent: -- Apply the `agent-ready` label +- Apply the `state:agent-ready` label - Ask the user to let the agent apply it - Suggest automating its application - Bypass the check by proceeding without it @@ -57,7 +57,7 @@ Fetch issue + comments ├─ No plan comment (🏗️ build-plan) found? │ → Generate plan via principal-engineer-reviewer │ → Post plan comment - │ → Add 'review-ready' label + │ → Add 'state:review-ready' label │ → STOP │ ├─ Plan exists + new human comments since last agent response? @@ -65,20 +65,20 @@ Fetch issue + comments │ → Update the plan comment if feedback requires plan changes │ → STOP │ - ├─ Plan exists + 'agent-ready' label + no 'in-progress' or 'pr-opened' label? + ├─ Plan exists + 'state:agent-ready' label + no 'state:in-progress' or 'state:pr-opened' label? │ → Run scope check (warn if high complexity) │ → Check for conflicting branches/PRs │ → BUILD (Steps 6–14) │ - ├─ 'in-progress' label present? + ├─ 'state:in-progress' label present? │ → Detect existing branch and resume if possible │ → Otherwise report current state │ - ├─ 'pr-opened' label present? + ├─ 'state:pr-opened' label present? │ → Report that PR already exists, link to it │ → STOP │ - └─ Plan exists + no new comments + no 'agent-ready'? + └─ Plan exists + no new comments + no 'state:agent-ready'? → Report: "Plan is posted and awaiting review. No new comments to address." → STOP ``` @@ -93,7 +93,7 @@ gh issue view --json number,title,body,state,labels,author If the issue is closed, report that and stop. -If the issue has the `needs-agent-triage` label, report that the issue has not been triaged yet. Suggest using the `triage-issue` skill first to assess and classify the issue before planning implementation. Stop. +If the issue has the `state:triage-needed` label, report that the issue has not been triaged yet. Suggest using the `triage-issue` skill first to assess and classify the issue before planning implementation. Stop. 
## Step 2: Fetch and Classify Comments @@ -117,7 +117,7 @@ Using the state machine above, determine what to do based on: 1. Whether a plan comment exists 2. Whether there are human comments newer than the last agent comment (plan or conversation) -3. Which labels are present (`review-ready`, `agent-ready`, `in-progress`, `pr-opened`) +3. Which labels are present (`state:review-ready`, `state:agent-ready`, `state:in-progress`, `state:pr-opened`) Follow the appropriate branch below. @@ -193,10 +193,10 @@ EOF )" ``` -### A3: Add the `review-ready` Label +### A3: Add the `state:review-ready` Label ```bash -gh issue edit --add-label "review-ready" +gh issue edit --add-label "state:review-ready" ``` Report to the user that the plan has been posted and is awaiting review. Stop. @@ -267,7 +267,7 @@ Report to the user what feedback was addressed and whether the plan was updated. ## Branch C: Build -If the plan exists and the `agent-ready` label is present (and neither `in-progress` nor `pr-opened` is set), proceed with implementation. +If the plan exists and the `state:agent-ready` label is present (and neither `state:in-progress` nor `state:pr-opened` is set), proceed with implementation. ### Step 4: Scope Check @@ -277,7 +277,7 @@ Read the plan comment and check the **Complexity** and **Confidence** fields. > "This issue is rated High complexity / Low confidence. The plan includes open questions that may need human decisions during implementation. Proceeding, but flagging this for your awareness." - Continue — do not hard-stop. The human chose to apply `agent-ready`. + Continue — do not hard-stop. The human chose to apply `state:agent-ready`. 
### Step 5: Conflict Detection @@ -322,10 +322,10 @@ git pull origin main git checkout -b -/$USERNAME ``` -### Step 7: Add `in-progress` Label +### Step 7: Add `state:in-progress` Label ```bash -gh issue edit --add-label "in-progress" +gh issue edit --add-label "state:in-progress" ``` ### Step 8: Implement the Changes @@ -599,10 +599,10 @@ Include **every test** that ran (not just the new ones) so the reviewer can see #### Update labels -Remove `in-progress` and `review-ready`, add `pr-opened`: +Remove `state:in-progress` and `state:review-ready`, add `state:pr-opened`: ```bash -gh issue edit --remove-label "in-progress" --remove-label "review-ready" --add-label "pr-opened" +gh issue edit --remove-label "state:in-progress" --remove-label "state:review-ready" --add-label "state:pr-opened" ``` #### Report workflow run URL @@ -620,7 +620,7 @@ Report the workflow run URL and suggest the user can use the `watch-github-actio ## Branch D: Resume In-Progress Build -If the `in-progress` label is present, the skill was previously started but may not have completed. +If the `state:in-progress` label is present, the skill was previously started but may not have completed. 1. Check for an existing branch matching the issue ID: ```bash @@ -628,7 +628,7 @@ If the `in-progress` label is present, the skill was previously started but may ``` 2. If found, check it out and inspect the state (are there uncommitted changes? committed but not pushed? pushed but no PR?). 3. Resume from the appropriate step (9, 10, 12, or 13). -4. If the state is unrecoverable, report to the user and suggest starting fresh (remove `in-progress` label and re-run). +4. If the state is unrecoverable, report to the user and suggest starting fresh (remove `state:in-progress` label and re-run). --- @@ -660,7 +660,7 @@ User says: "Build from issue #42" 3. Pass issue to `principal-engineer-reviewer` for analysis 4. 
Reviewer produces a plan: feat type, Medium complexity, 3 implementation steps, unit + integration tests needed 5. Post the plan comment with the `🏗️ build-plan` marker -6. Add `review-ready` label +6. Add `state:review-ready` label 7. Report to user: "Plan posted on issue #42. Awaiting review." ### Second run — human left feedback @@ -683,15 +683,15 @@ User says: "Check issue #42" 4. Edit the plan comment to include search endpoint pagination — Revision 2 5. Report to user: "Updated plan to include search pagination (Revision 2)." -### Fourth run — agent-ready applied +### Fourth run — state:agent-ready applied User says: "Build issue #42" -1. Fetch issue #42 — labels include `agent-ready` +1. Fetch issue #42 — labels include `state:agent-ready` 2. Plan exists (Revision 2), complexity: Medium, confidence: High 3. No conflicting branches or PRs 4. Create branch `feat/42-add-pagination/jmyers` -5. Add `in-progress` label +5. Add `state:in-progress` label 6. Implement pagination for both endpoints per the plan 7. Add unit tests for pagination logic, integration tests for both endpoints 8. `mise run pre-commit` passes on first attempt @@ -699,14 +699,14 @@ User says: "Build issue #42" 10. `arch-doc-writer` updates `architecture/gateway.md` with pagination details 10. Commit, push, create PR with `Closes #42` 11. Post summary comment on issue with PR link -12. Update labels: remove `in-progress` + `review-ready`, add `pr-opened` +12. Update labels: remove `state:in-progress` + `state:review-ready`, add `state:pr-opened` 13. Report PR URL and workflow run status to user ### Run on issue with existing PR User says: "Build issue #42" -1. Fetch issue #42 — `pr-opened` label present +1. Fetch issue #42 — `state:pr-opened` label present 2. Find existing PR #789 linked to the issue 3. Report: "PR [#789](...) already exists for issue #42. Nothing to build." @@ -714,7 +714,7 @@ User says: "Build issue #42" User says: "Build issue #99" -1. 
Fetch issue #99 — `agent-ready` label present +1. Fetch issue #99 — `state:agent-ready` label present 2. Plan exists: complexity High, confidence Low, has open questions 3. Warn user: "Issue #99 is rated High complexity / Low confidence. Proceeding but flagging for your awareness." 4. Continue with build diff --git a/.agents/skills/create-github-issue/SKILL.md b/.agents/skills/create-github-issue/SKILL.md index 7e2fa38f..15063d2c 100644 --- a/.agents/skills/create-github-issue/SKILL.md +++ b/.agents/skills/create-github-issue/SKILL.md @@ -17,12 +17,11 @@ This project uses YAML form issue templates. When creating issues, match the tem ### Bug Reports -Use the `bug` label. The body must include an **Agent Diagnostic** section — this is required by the template and enforced by project convention. +Do not add a type label automatically. The body must include an **Agent Diagnostic** section — this is required by the template and enforced by project convention. Apply area or topic labels only when they are clearly known. ```bash gh issue create \ --title "bug: " \ - --label "bug" \ --body "$(cat <<'EOF' ## Agent Diagnostic @@ -57,12 +56,11 @@ EOF ### Feature Requests -Use the `feat` label. The body must include a **Proposed Design** — not a "please build this" request. +Do not add a type label automatically. The body must include a **Proposed Design** — not a "please build this" request. Apply area or topic labels only when they are clearly known. ```bash gh issue create \ --title "feat: " \ - --label "feat" \ --body "$(cat <<'EOF' ## Problem Statement @@ -107,6 +105,8 @@ EOF )" ``` +GitHub built-in issue types (`Bug`, `Feature`, `Task`) should come from the matching issue template when possible, or be set manually afterward. Do not try to emulate them through labels. 
+ ## Useful Options | Option | Description | diff --git a/.agents/skills/create-github-pr/SKILL.md b/.agents/skills/create-github-pr/SKILL.md index 56719901..63f8dd5a 100644 --- a/.agents/skills/create-github-pr/SKILL.md +++ b/.agents/skills/create-github-pr/SKILL.md @@ -141,7 +141,7 @@ gh pr create --draft --title "WIP: New feature" --assignee "@me" ### With Labels ```bash -gh pr create --title "Title" --label "component::evaluator" --label "bug" +gh pr create --title "Title" --label "area:cli" --label "topic:security" ``` ### Target a Different Branch diff --git a/.agents/skills/create-spike/SKILL.md b/.agents/skills/create-spike/SKILL.md index 29350c3a..faa7aca0 100644 --- a/.agents/skills/create-spike/SKILL.md +++ b/.agents/skills/create-spike/SKILL.md @@ -115,10 +115,10 @@ gh label list --limit 100 Based on the investigation results, select appropriate labels: -- **Always include the issue type** as a label (e.g., `feat`, `fix`, `refactor`, `chore`, `perf`, `docs`) -- **Include component labels** if they exist in the repo (e.g., `sandbox`, `proxy`, `policy`, `cli`) +- **Do not add issue type labels** — GitHub built-in issue types come from issue templates or manual follow-up, not labels +- **Include area labels** if they exist in the repo (e.g., `area:sandbox`, `area:proxy`, `area:policy`, `area:cli`) - **Do not invent labels** — only use labels that already exist in the repo -- **Add `review-ready`** — the issue is ready for human review upon creation +- **Add `state:review-ready`** — the issue is ready for human review upon creation ## Step 4: Create the GitHub Issue @@ -127,7 +127,7 @@ Create the issue with a structured body containing both the stakeholder-readable ```bash gh issue create \ --title ": " \ - --label "" --label "" --label "review-ready" \ + --label "" --label "state:review-ready" \ --body "$(cat <<'EOF' ## Problem Statement @@ -259,7 +259,7 @@ User says: "Allow sandbox egress to private IP space via networking policy" - Reads 
`architecture/security-policy.md` and `architecture/sandbox.md` - Identifies exact insertion points: policy field addition, SSRF check bypass path, OPA rule extension - Assesses: Medium complexity, High confidence, ~6 files -3. Fetch labels — select `feat`, `sandbox`, `proxy`, `policy`, `review-ready` +3. Fetch labels — select `area:sandbox`, `area:proxy`, `area:policy`, `state:review-ready` 4. Create issue: `feat: allow sandbox egress to private IP space via networking policy` — body includes both the summary and full investigation (code references, architecture context, alternative approaches) 5. Report: "Created issue #59. The investigation found that private IP blocking is enforced at the SSRF check layer in the proxy. The proposed approach adds a policy-level override. Review the issue and use `build-from-issue` when ready." @@ -275,7 +275,7 @@ User says: "The proxy retry logic seems too aggressive — I'm seeing cascading - Maps the failure propagation path - Identifies that retries happen without backoff jitter, causing thundering herd - Assesses: Low complexity, High confidence, ~2 files -3. Fetch labels — select `fix`, `proxy`, `review-ready` +3. Fetch labels — select `area:proxy`, `state:review-ready` 4. Create issue: `fix: proxy retry logic causes cascading failures under load` — body includes both the summary and full investigation (retry code references, current behavior trace, comparison to standard backoff patterns) 5. Report: "Created issue #74. The proxy retries without jitter or circuit breaking, which amplifies failures under load. Straightforward fix. Review and use `build-from-issue` when ready." @@ -291,6 +291,6 @@ User says: "Policy evaluation is getting slow — can we cache compiled OPA poli - Reads the policy reload/hot-swap mechanism - Identifies that policies are recompiled on every evaluation - Assesses: Medium complexity, Medium confidence (cache invalidation is a design decision), ~4 files -3. 
Fetch labels — select `perf`, `policy`, `review-ready` +3. Fetch labels — select `area:policy`, `state:review-ready` 4. Create issue: `perf: cache compiled OPA policies to reduce evaluation latency` — body includes both the summary and full investigation (compilation hot path, per-request overhead, cache invalidation strategies with trade-offs) 5. Report: "Created issue #81. Policies are recompiled per-request with no caching. The main design decision is the cache invalidation strategy — flagged as an open question. Review and use `build-from-issue` when ready." diff --git a/.agents/skills/debug-openshell-cluster/SKILL.md b/.agents/skills/debug-openshell-cluster/SKILL.md index 115a2aa5..e2b8c3ac 100644 --- a/.agents/skills/debug-openshell-cluster/SKILL.md +++ b/.agents/skills/debug-openshell-cluster/SKILL.md @@ -13,13 +13,13 @@ Use **only** `openshell` CLI commands (`openshell status`, `openshell doctor log `openshell gateway start` creates a Docker container running k3s with the OpenShell server deployed via Helm. The deployment stages, in order, are: -1. **Pre-deploy check**: `openshell gateway start` in interactive mode prompts to **reuse** (keep volume, clean stale nodes) or **recreate** (destroy everything, fresh start). `mise run cluster` always recreates before deploy. +1. **Pre-deploy check**: `openshell gateway start` in interactive mode prompts to **reuse** (keep volume, clean stale nodes) or **recreate** (destroy everything, fresh start). `mise run cluster` bootstraps only when no healthy cluster is running; otherwise it performs incremental deploy. 2. Ensure cluster image is available (local build or remote pull) 3. Create Docker network (`openshell-cluster`) and volume (`openshell-cluster-{name}`) 4. Create and start a privileged Docker container (`openshell-cluster-{name}`) 5. Wait for k3s to generate kubeconfig (up to 60s) 6. 
**Clean stale nodes**: Remove any `NotReady` k3s nodes left over from previous container instances that reused the same persistent volume -7. **Prepare local images** (if `OPENSHELL_PUSH_IMAGES` is set): In `internal` registry mode, bootstrap waits for the in-cluster registry and pushes tagged images there. In `external` mode, bootstrap uses legacy `ctr -n k8s.io images import` push-mode behavior. +7. **Prepare local images** (if `OPENSHELL_PUSH_IMAGES` is set): bootstrap imports the tagged images into the k3s containerd `k8s.io` namespace so the current cluster can consume locally built component images during incremental development flows. 8. **Reconcile TLS PKI**: Load existing TLS secrets from the cluster; if missing, incomplete, or malformed, generate fresh PKI (CA + server + client certs). Apply secrets to cluster. If rotation happened and the OpenShell workload is already running, rollout restart and wait for completion (failed rollout aborts deploy). 9. **Store CLI mTLS credentials**: Persist client cert/key/CA locally for CLI authentication. 10. Wait for cluster health checks to pass (up to 6 min): @@ -28,6 +28,8 @@ Use **only** `openshell` CLI commands (`openshell status`, `openshell doctor log - TLS secrets `openshell-server-tls` and `openshell-client-tls` exist in `openshell` namespace - Sandbox supervisor binary exists at `/opt/openshell/bin/openshell-sandbox` (emits `HEALTHCHECK_MISSING_SUPERVISOR` marker if absent) +The image build graph now lives in `deploy/docker/Dockerfile.images`. Use target `cluster` for the gateway bootstrap container and target `gateway` for the OpenShell server image. 
+ For local deploys, metadata endpoint selection now depends on Docker connectivity: - default local Docker socket (`unix:///var/run/docker.sock`): `https://127.0.0.1:{port}` (default port 8080) @@ -160,7 +162,7 @@ openshell doctor exec -- kubectl -n kube-system logs -l job-name=helm-install-op Common issues: - **Replicas 0/0**: The StatefulSet has been scaled to zero — no pods are running. This can happen after a failed deploy, manual scale-down, or Helm values misconfiguration. Fix: `openshell doctor exec -- kubectl -n openshell scale statefulset openshell --replicas=1` -- **ImagePullBackOff**: The component image failed to pull. In `internal` mode, verify internal registry readiness and pushed image tags (Step 5). In `external` mode, check `/etc/rancher/k3s/registries.yaml` credentials/endpoints and DNS (Step 8). Default external registry is `ghcr.io/nvidia/openshell/` (public, no auth required). If using a private registry, ensure `--registry-username` and `--registry-token` (or `OPENSHELL_REGISTRY_USERNAME`/`OPENSHELL_REGISTRY_TOKEN`) were provided during deploy. +- **ImagePullBackOff**: The component image failed to pull. Check `/etc/rancher/k3s/registries.yaml` credentials/endpoints and DNS (Step 8). The current bootstrap path configures external registry pulls by default. If using a private registry, ensure `--registry-username` and `--registry-token` (or `OPENSHELL_REGISTRY_USERNAME`/`OPENSHELL_REGISTRY_PASSWORD`) were provided during deploy. - **CrashLoopBackOff**: The server is crashing. Check pod logs for the actual error. - **Pending**: Insufficient resources or scheduling constraints. @@ -177,9 +179,9 @@ Expected port: `30051/tcp` (mapped to configurable host port, default 8080; set ### Step 5: Check Image Availability -Component images (server, sandbox) can reach kubelet via two paths: +Component images (server, sandbox) normally reach kubelet through the configured external registry path. 
For local development, the cluster task can also import locally built images directly into the k3s containerd `k8s.io` namespace. -**Local/external pull mode** (default local via `mise run cluster`): Local images are tagged to the configured local registry base (default `127.0.0.1:5000/openshell/*`), pushed to that registry, and pulled by k3s via `registries.yaml` mirror endpoint (typically `host.docker.internal:5000`). The `cluster` task pushes prebuilt local tags (`openshell/*:dev`, falling back to `localhost:5000/openshell/*:dev` or `127.0.0.1:5000/openshell/*:dev`). +**External pull path** (default bootstrap/runtime path): Images are pulled from the configured registry at runtime, and the entrypoint writes `/etc/rancher/k3s/registries.yaml` accordingly. ```bash # Verify image refs currently used by openshell deployment @@ -189,14 +191,14 @@ openshell doctor exec -- kubectl -n openshell get statefulset openshell -o jsonp openshell doctor exec -- cat /etc/rancher/k3s/registries.yaml ``` -**Legacy push mode**: Images are imported into the k3s containerd `k8s.io` namespace. +**Local imported-image path**: Images are imported into the k3s containerd `k8s.io` namespace for local development flows that set `OPENSHELL_PUSH_IMAGES`. ```bash # Check if images were imported into containerd (k3s default namespace is k8s.io) openshell doctor exec -- ctr -a /run/k3s/containerd/containerd.sock images ls | grep openshell ``` -**External pull mode** (remote deploy, or local with `OPENSHELL_REGISTRY_HOST`/`IMAGE_REPO_BASE` pointing at a non-local registry): Images are pulled from an external registry at runtime. The entrypoint generates `/etc/rancher/k3s/registries.yaml`. +**Registry troubleshooting**: Remote deploy and normal pull-based flows depend on the generated `/etc/rancher/k3s/registries.yaml`. 
```bash # Verify registries.yaml exists and has credentials @@ -290,7 +292,10 @@ If DNS is broken, all image pulls from the distribution registry will fail, as w | `tls handshake eof` from `openshell status` | Server not running or mTLS credentials missing/mismatched | Check StatefulSet replicas (Step 3) and mTLS files (Step 6) | | StatefulSet `0/0` replicas | StatefulSet scaled to zero (failed deploy, manual scale-down, or Helm misconfiguration) | `openshell doctor exec -- kubectl -n openshell scale statefulset openshell --replicas=1` | | Local mTLS files missing | Deploy was interrupted before credentials were persisted | Extract from cluster secret `openshell-client-tls` (Step 6) | -| Container not found | Image not built | `mise run docker:build:cluster` (local) or re-deploy (remote) | +| Container not found | Image not built | `mise run docker:build:cluster` (local, with `OPENSHELL_RUNTIME_BUNDLE_TARBALL` set) or re-deploy (remote, with `--runtime-bundle-tarball`) | +| Local cluster image build now fails before Docker starts with runtime-bundle validation errors | Missing, malformed, wrong-arch, or unstaged `OPENSHELL_RUNTIME_BUNDLE_TARBALL` input for the controlled GPU runtime path | Re-run the cluster-image build with `OPENSHELL_RUNTIME_BUNDLE_TARBALL` pointing at a valid per-arch bundle tarball, and confirm `tasks/scripts/docker-build-image.sh cluster` stages `deploy/docker/.build/runtime-bundle//` successfully for `deploy/docker/Dockerfile.images` target `cluster` | +| Remote deploy now fails before Docker starts with runtime-bundle validation errors | `scripts/remote-deploy.sh` was run without `--runtime-bundle-tarball`, or the synced tarball path on the remote host is missing/invalid | Re-run `scripts/remote-deploy.sh` with `--runtime-bundle-tarball ` and confirm the tarball syncs to `${REMOTE_DIR}/.cache/runtime-bundles/` before the remote cluster build starts | +| Multi-arch cluster publish fails before Docker starts with missing runtime-bundle 
variables | One or both per-arch tarballs were not provided to `tasks/scripts/docker-publish-multiarch.sh` | Set `OPENSHELL_RUNTIME_BUNDLE_TARBALL_AMD64` and `OPENSHELL_RUNTIME_BUNDLE_TARBALL_ARM64` to valid per-arch tarballs, then re-run the multi-arch publish command | | Container exited, OOMKilled | Insufficient memory | Increase host memory or reduce workload | | Container exited, non-zero exit | k3s crash, port conflict, privilege issue | Check `openshell doctor logs` for details | | `/readyz` fails | k3s still starting or crashed | Wait longer or check container logs for k3s errors | @@ -303,6 +308,7 @@ If DNS is broken, all image pulls from the distribution registry will fail, as w | mTLS secrets missing | Bootstrap couldn't apply secrets (namespace not ready) | Check deploy logs and verify `openshell` namespace exists (Step 6) | | mTLS mismatch after redeploy | PKI rotated but workload not restarted, or rollout failed | Check that all three TLS secrets exist and that the openshell pod restarted after cert rotation (Step 6) | | Helm install job failed | Chart values error or dependency issue | `openshell doctor exec -- kubectl -n kube-system logs -l job-name=helm-install-openshell` | +| NFD/GFD DaemonSets present (`node-feature-discovery`, `gpu-feature-discovery`) | Cluster was deployed before NFD/GFD were disabled (pre-simplify-device-plugin change) | These are harmless but add overhead. Clean up: `openshell doctor exec -- kubectl delete daemonset -n nvidia-device-plugin -l app.kubernetes.io/name=node-feature-discovery` and similarly for GFD. The `nvidia.com/gpu.present` node label is no longer applied; device plugin scheduling no longer requires it. 
| | Architecture mismatch (remote) | Built on arm64, deploying to amd64 | Cross-build the image for the target architecture | | Port conflict | Another service on the configured gateway host port (default 8080) | Stop conflicting service or use `--port` on `openshell gateway start` to pick a different host port | | gRPC connect refused to `127.0.0.1:443` in CI | Docker daemon is remote (`DOCKER_HOST=tcp://...`) but metadata still points to loopback | Verify metadata endpoint host matches `DOCKER_HOST` and includes non-loopback host | @@ -311,7 +317,7 @@ If DNS is broken, all image pulls from the distribution registry will fail, as w | Pods evicted with "The node had condition: [DiskPressure]" | Host disk full, kubelet evicting pods | Free disk space on host, then `openshell gateway destroy && openshell gateway start` | | `metrics-server` errors in logs | Normal k3s noise, not the root cause | These errors are benign — look for the actual failing health check component | | Stale NotReady nodes from previous deploys | Volume reused across container recreations | The deploy flow now auto-cleans stale nodes; if it still fails, manually delete NotReady nodes (see Step 2) or choose "Recreate" when prompted | -| gRPC `UNIMPLEMENTED` for newer RPCs in push mode | Helm values still point at older pulled images instead of the pushed refs | Verify rendered `openshell-helmchart.yaml` uses the expected push refs (`server`, `sandbox`, `pki-job`) and not `:latest` | +| gRPC `UNIMPLEMENTED` for newer RPCs after local image import | Helm values still point at an older gateway image instead of the imported ref | Verify rendered `openshell-helmchart.yaml` uses the expected gateway push ref and not `:latest` | | Sandbox pods crash with `/opt/openshell/bin/openshell-sandbox: no such file or directory` | Supervisor binary missing from cluster image | The cluster image was built/published without the `supervisor-builder` stage. 
Rebuild with `mise run docker:build:cluster` and recreate gateway. Bootstrap auto-detects via `HEALTHCHECK_MISSING_SUPERVISOR` marker | | `HEALTHCHECK_MISSING_SUPERVISOR` in health check logs | `/opt/openshell/bin/openshell-sandbox` not found in gateway container | Rebuild cluster image: `mise run docker:build:cluster`, then `openshell gateway destroy && openshell gateway start` | diff --git a/.agents/skills/fix-security-issue/SKILL.md b/.agents/skills/fix-security-issue/SKILL.md index b8c8b423..7df9b417 100644 --- a/.agents/skills/fix-security-issue/SKILL.md +++ b/.agents/skills/fix-security-issue/SKILL.md @@ -1,6 +1,6 @@ --- name: fix-security-issue -description: Implement a fix for a reviewed security issue. Takes an issue number or scans for issues labeled "security" and "agent-ready". Reads the security review from the issue comments and implements the remediation plan. Trigger keywords - fix security issue, remediate security, implement security fix, patch vulnerability. +description: Implement a fix for a reviewed security issue. Takes an issue number or scans for issues labeled "topic:security" and "state:agent-ready". Reads the security review from the issue comments and implements the remediation plan. Trigger keywords - fix security issue, remediate security, implement security fix, patch vulnerability. --- # Fix Security Issue @@ -11,7 +11,7 @@ Implement a code fix for a security issue that has already been reviewed by the - The `gh` CLI must be authenticated (`gh auth status`) - You must be in a git repository with a GitHub remote -- The issue **must** have both the `security` and `agent-ready` labels. If either is missing, do not proceed. +- The issue **must** have both the `topic:security` and `state:agent-ready` labels. If either is missing, do not proceed. 
- The issue must have a prior security review comment (posted by `review-security-issue`) with a **Legitimate concern** determination and a remediation plan ## Agent Comment Marker @@ -34,10 +34,10 @@ Strip any leading `#` and proceed to Step 2 with that issue ID. ### If no issue number is provided -Scan for open issues labeled `security` and `agent-ready`: +Scan for open issues labeled `topic:security` and `state:agent-ready`: ```bash -gh issue list --label "security" --label "agent-ready" --state open --json number,title,labels,updatedAt +gh issue list --label "topic:security" --label "state:agent-ready" --state open --json number,title,labels,updatedAt ``` - **If no issues are found**, report to the user that there are no security issues ready for fixing and stop. @@ -52,18 +52,18 @@ Fetch the issue details: gh issue view --json number,title,body,state,labels,author ``` -### Require both `security` and `agent-ready` labels +### Require both `topic:security` and `state:agent-ready` labels **This is a hard gate.** Check the issue's `labels` array from the response above. Both of the following labels **must** be present: -- `security` -- `agent-ready` +- `topic:security` +- `state:agent-ready` If **either label is missing**, do **not** proceed. Report to the user which label(s) are missing and stop. For example: -- Missing `agent-ready`: "Issue #42 has the `security` label but is not marked `agent-ready`. It may still need review or human triage before a fix can be implemented." -- Missing `security`: "Issue #42 is marked `agent-ready` but does not have the `security` label. This skill only handles security issues." -- Missing both: "Issue #42 is missing both the `security` and `agent-ready` labels. Cannot proceed." +- Missing `state:agent-ready`: "Issue #42 has the `topic:security` label but is not marked `state:agent-ready`. It may still need review or human triage before a fix can be implemented." 
+- Missing `topic:security`: "Issue #42 is marked `state:agent-ready` but does not have the `topic:security` label. This skill only handles security issues." +- Missing both: "Issue #42 is missing both the `topic:security` and `state:agent-ready` labels. Cannot proceed." **Do not offer to add the labels or bypass this check.** The labels are a deliberate human-controlled gate. @@ -208,7 +208,7 @@ Create a PR that closes the security issue. Put the full fix summary in the PR d gh pr create \ --title "fix(security): " \ --assignee "@me" \ - --label "security" \ + --label "topic:security" \ --body "$(cat <<'EOF' > **🔧 security-fix-agent** @@ -262,7 +262,7 @@ Summarize what was done: | Command | Description | | --- | --- | -| `gh issue list --label "security" --label "agent-ready" --state open` | Find open security issues ready for fixing | +| `gh issue list --label "topic:security" --label "state:agent-ready" --state open` | Find open security issues ready for fixing | | `gh issue view --json number,title,body,state,labels,author` | Fetch full issue metadata | | `gh issue view --json comments` | Fetch all comments on an issue | | `gh pr create --title "..." --body "..."` | Create a pull request | @@ -290,7 +290,7 @@ User says: "Fix security issue #42" User says: "Fix any ready security issues" -1. Query for open issues with labels `security` + `agent-ready` +1. Query for open issues with labels `topic:security` + `state:agent-ready` 2. Find issue #78: "SQL injection in search endpoint" 3. Fetch the review comment -- determination is "Legitimate concern" 4. Implement parameterized queries @@ -307,20 +307,20 @@ User says: "Fix security issue #99" 3. Report to the user: "Issue #99 was reviewed and determined to be not actionable. No fix is needed." 4. Stop -### Issue missing `agent-ready` label +### Issue missing `state:agent-ready` label User says: "Fix security issue #55" 1. Fetch issue #55 metadata -2. Labels are `["security"]` -- missing `agent-ready` -3. 
Report to the user: "Issue #55 has the `security` label but is not marked `agent-ready`. It may still need review or human triage before a fix can be implemented." +2. Labels are `["topic:security"]` -- missing `state:agent-ready` +3. Report to the user: "Issue #55 has the `topic:security` label but is not marked `state:agent-ready`. It may still need review or human triage before a fix can be implemented." 4. Stop ### Issue without a review User says: "Fix security issue #60" -1. Fetch issue #60 metadata -- labels include both `security` and `agent-ready` +1. Fetch issue #60 metadata -- labels include both `topic:security` and `state:agent-ready` 2. Fetch comments -- no `security-review-agent` comment found 3. Report to the user: "Issue #60 has not been reviewed yet. Run the review-security-issue skill first." 4. Stop diff --git a/.agents/skills/review-security-issue/SKILL.md b/.agents/skills/review-security-issue/SKILL.md index 0445774b..caac9fd1 100644 --- a/.agents/skills/review-security-issue/SKILL.md +++ b/.agents/skills/review-security-issue/SKILL.md @@ -40,7 +40,7 @@ gh issue view --json title,body,state,labels,author First, check the issue's labels from the metadata fetched in Step 1. -- **If the issue has the `agent-ready` label**, the issue has already been reviewed and is ready for implementation. There is no review to perform. Report to the user that this issue is already reviewed and marked as agent-ready, and suggest using the `fix-security-issue` skill instead. Stop. +- **If the issue has the `state:agent-ready` label**, the issue has already been reviewed and is ready for implementation. There is no review to perform. Report to the user that this issue is already reviewed and marked as `state:agent-ready`, and suggest using the `fix-security-issue` skill instead. Stop. 
Next, fetch existing comments on the issue: @@ -133,12 +133,12 @@ EOF )" ``` -## Step 5: Add `review-ready` Label +## Step 5: Add `state:review-ready` Label -After posting the review comment (whether legitimate or not actionable), add the `review-ready` label to the issue: +After posting the review comment (whether legitimate or not actionable), add the `state:review-ready` label to the issue: ```bash -gh issue edit --add-label "review-ready" +gh issue edit --add-label "state:review-ready" ``` This signals to humans and downstream skills (e.g., `fix-security-issue`) that the review is complete. @@ -163,7 +163,7 @@ For each unanswered human comment: | `gh issue view --json title,body,state,labels,author` | Fetch full issue metadata as JSON | | `gh issue view --json comments --jq '.comments[].body'` | Fetch all comments on an issue | | `gh issue comment --body "..."` | Post a comment on an issue | -| `gh issue edit --add-label "review-ready"` | Add a label to an issue | +| `gh issue edit --add-label "state:review-ready"` | Add a label to an issue | ## Example Usage @@ -176,7 +176,7 @@ User says: "Review security issue #42" 3. No prior review found -- pass issue to `principal-engineer-reviewer` with security lens 4. Reviewer determines it's a legitimate XSS vulnerability in the API response handler 5. Post a comment with severity assessment and remediation plan -6. Add the `review-ready` label to the issue +6. Add the `state:review-ready` label to the issue 7. Report the finding and posted comment to the user ### Re-review with new comments diff --git a/.agents/skills/sync-agent-infra/SKILL.md b/.agents/skills/sync-agent-infra/SKILL.md index 4139bdc7..2c87dd41 100644 --- a/.agents/skills/sync-agent-infra/SKILL.md +++ b/.agents/skills/sync-agent-infra/SKILL.md @@ -18,7 +18,7 @@ Detect and fix drift across the agent-first infrastructure files. 
These files re | `.github/workflows/issue-triage.yml` | Comment text referencing skills | | `.agents/skills/triage-issue/SKILL.md` | Skill name references in gate check and diagnosis steps | | `.agents/skills/openshell-cli/SKILL.md` | Companion skills table | -| `.agents/skills/build-from-issue/SKILL.md` | `needs-agent-triage` label awareness | +| `.agents/skills/build-from-issue/SKILL.md` | `state:triage-needed` label awareness | ## When to Run @@ -60,7 +60,7 @@ The canonical workflow chains are defined in `AGENTS.md` under "## Workflow Chai ### Labels -The canonical label set is used by skills and templates. The key labels are: `agent-ready`, `review-ready`, `in-progress`, `pr-opened`, `security`, `bug`, `feat`, `needs-agent-triage`, `good-first-issue`, `spike`. +The canonical label set is used by skills and templates. The key labels are: `state:agent-ready`, `state:review-ready`, `state:in-progress`, `state:pr-opened`, `state:triage-needed`, `topic:security`, `good first issue`, `spike`, and the relevant `area:*`, `topic:*`, `integration:*`, and `test:*` labels. ## Step 2: Check Each File for Drift diff --git a/.agents/skills/triage-issue/SKILL.md b/.agents/skills/triage-issue/SKILL.md index 083dcdd8..e4998a05 100644 --- a/.agents/skills/triage-issue/SKILL.md +++ b/.agents/skills/triage-issue/SKILL.md @@ -1,6 +1,6 @@ --- name: triage-issue -description: Assess, classify, and route community-filed issues. Takes a specific issue number or processes all open issues with the needs-agent-triage label in batch. Validates agent-first gate compliance, attempts diagnosis using relevant skills, and classifies issues for routing into the spike-build pipeline. Trigger keywords - triage issue, triage, assess issue, review incoming issue, triage issues. +description: Assess, classify, and route community-filed issues. Takes a specific issue number or processes all open issues with the state:triage-needed label in batch. 
Validates agent-first gate compliance, attempts diagnosis using relevant skills, and classifies issues for routing into the spike-build pipeline. Trigger keywords - triage issue, triage, assess issue, review incoming issue, triage issues. --- # Triage Issue @@ -12,9 +12,9 @@ Assess, classify, and route community-filed issues. This is the front door for c - The `gh` CLI must be authenticated (`gh auth status`) - You must be in a git repository with a GitHub remote -## Critical: `agent-ready` Label Is Human-Only +## Critical: `state:agent-ready` Label Is Human-Only -The `agent-ready` label is a **human gate**. Triage **never** applies this label. Triage assesses and classifies — humans decide what gets built. This is a non-negotiable safety control. +The `state:agent-ready` label is a **human gate**. Triage **never** applies this label. Triage assesses and classifies — humans decide what gets built. This is a non-negotiable safety control. ## Agent Comment Marker @@ -45,10 +45,10 @@ Assess one specific issue. Proceed to Step 1 with the given issue number. triage issues ``` -Query all open issues with the `needs-agent-triage` label and process them in sequence: +Query all open issues with the `state:triage-needed` label and process them in sequence: ```bash -gh issue list --label "needs-agent-triage" --state open --json number,title --jq '.[].number' +gh issue list --label "state:triage-needed" --state open --json number,title --jq '.[].number' ``` For each issue returned, run the full triage workflow (Steps 1-6). Report a summary at the end listing each issue and its classification. @@ -81,9 +81,9 @@ Check whether the issue body contains a substantive agent diagnostic section. Lo **If the diagnostic section is missing or clearly placeholder:** -1. Add the `needs-agent-triage` label if not already present: +1. 
Add the `state:triage-needed` label if not already present: ```bash - gh issue edit --add-label "needs-agent-triage" + gh issue edit --add-label "state:triage-needed" ``` 2. Post a comment with the triage marker: ``` @@ -132,12 +132,12 @@ Based on the investigation, classify the issue into one of these categories: | Classification | Criteria | Action | |---------------|----------|--------| -| **bug-confirmed** | Agent diagnostic and codebase analysis confirm a real defect | Label `bug`, remove `needs-agent-triage` | -| **feature-valid** | Design proposal is sound, feasible given the architecture | Label `feat`, remove `needs-agent-triage` | +| **bug-confirmed** | Agent diagnostic and codebase analysis confirm a real defect | Apply relevant `area:*` or `topic:*` labels as needed, remove `state:triage-needed`, and manually assign the built-in `Bug` issue type where applicable | +| **feature-valid** | Design proposal is sound, feasible given the architecture | Apply relevant `area:*` or `topic:*` labels as needed, remove `state:triage-needed`, and manually assign the built-in `Feature` issue type where applicable | | **duplicate** | An existing open issue covers this | Link the duplicate, close with comment | | **user-error** | The reported behavior is expected, or the issue is a misconfiguration | Comment with explanation and guidance, close | -| **needs-more-info** | Report is substantive but missing critical reproduction details | Comment requesting specifics, keep `needs-agent-triage` | -| **needs-investigation** | Report appears valid but requires deeper analysis (spike candidate) | Label `spike`, remove `needs-agent-triage` | +| **needs-more-info** | Report is substantive but missing critical reproduction details | Comment requesting specifics, keep `state:triage-needed` | +| **needs-investigation** | Report appears valid but requires deeper analysis (spike candidate) | Label `spike`, remove `state:triage-needed` | ## Step 6: Post Triage Comment @@ -162,7 +162,7 @@ Post a 
structured comment with the triage marker: Apply the appropriate labels as determined in Step 5. -**Do not apply `agent-ready`.** That is always a human decision. +**Do not apply `state:agent-ready`.** That is always a human decision. ## Relationship to Other Skills @@ -175,7 +175,7 @@ Community issue filed | create-spike (if classification is needs-investigation) | - build-from-issue (if human applies agent-ready) + build-from-issue (if human applies state:agent-ready) ``` - **triage-issue** decides whether an issue is valid and how to classify it. diff --git a/.gitattributes b/.gitattributes index 78b56857..af03ae26 100644 --- a/.gitattributes +++ b/.gitattributes @@ -8,3 +8,6 @@ python/openshell/_proto/*_pb2.pyi linguist-generated # Generated Rust protobuf code (excludes hand-written mod.rs) crates/openshell-core/src/proto/openshell.*.rs linguist-generated + +# Vendored OCSF schemas fetched from schema.ocsf.io +crates/openshell-ocsf/schemas/** linguist-generated diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index a4b531a0..fbb62245 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -1,7 +1,6 @@ name: Bug Report description: Report a bug. Your agent should investigate first — see CONTRIBUTING.md. -title: "bug: " -labels: ["bug"] +type: Bug body: - type: markdown attributes: diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml index cbdce243..bdf2cacc 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.yml +++ b/.github/ISSUE_TEMPLATE/feature_request.yml @@ -1,7 +1,6 @@ name: Feature Request description: Propose a feature with a design. Not a "please build this" request. 
-title: "feat: " -labels: ["feat"] +type: Feature body: - type: markdown attributes: diff --git a/.github/workflows/branch-e2e.yml b/.github/workflows/branch-e2e.yml index a59f84b6..84d151b2 100644 --- a/.github/workflows/branch-e2e.yml +++ b/.github/workflows/branch-e2e.yml @@ -10,7 +10,7 @@ permissions: jobs: build-gateway: - if: contains(github.event.pull_request.labels.*.name, 'e2e') + if: contains(github.event.pull_request.labels.*.name, 'test:e2e') uses: ./.github/workflows/docker-build.yml with: component: gateway @@ -18,12 +18,17 @@ jobs: runner: build-arm64 build-cluster: - if: contains(github.event.pull_request.labels.*.name, 'e2e') + if: contains(github.event.pull_request.labels.*.name, 'test:e2e') uses: ./.github/workflows/docker-build.yml with: component: cluster platform: linux/arm64 runner: build-arm64 + runtime-bundle-url: ${{ vars.OPENSHELL_RUNTIME_BUNDLE_URL_ARM64 }} + runtime-bundle-github-repo: ${{ github.repository_owner }}/nvidia-container-toolkit + runtime-bundle-release-tag: devel + runtime-bundle-filename-prefix: openshell-gpu-runtime-bundle + runtime-bundle-version: devel e2e: needs: [build-gateway, build-cluster] @@ -31,3 +36,5 @@ jobs: with: image-tag: ${{ github.sha }} runner: build-arm64 + run-tool-smoke-validations: true + run-installer-selection-smoke: true diff --git a/.github/workflows/docker-build.yml b/.github/workflows/docker-build.yml index 48f68ab6..2f820a3d 100644 --- a/.github/workflows/docker-build.yml +++ b/.github/workflows/docker-build.yml @@ -32,6 +32,41 @@ on: required: false type: string default: "" + runtime-bundle-url: + description: "Per-arch runtime bundle tarball URL for single-arch cluster builds" + required: false + type: string + default: "" + runtime-bundle-url-amd64: + description: "amd64 runtime bundle tarball URL for multi-arch cluster builds" + required: false + type: string + default: "" + runtime-bundle-url-arm64: + description: "arm64 runtime bundle tarball URL for multi-arch cluster builds" + 
required: false + type: string + default: "" + runtime-bundle-github-repo: + description: "Runtime bundle producer GitHub repository" + required: false + type: string + default: "" + runtime-bundle-release-tag: + description: "Runtime bundle release tag used for derived defaults" + required: false + type: string + default: "" + runtime-bundle-filename-prefix: + description: "Runtime bundle asset filename prefix" + required: false + type: string + default: "" + runtime-bundle-version: + description: "Runtime bundle version token used in asset filenames" + required: false + type: string + default: "" env: MISE_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} @@ -87,7 +122,47 @@ jobs: uses: ./.github/actions/setup-buildx - name: Build ${{ inputs.component }} image + if: inputs.component != 'cluster' env: DOCKER_BUILDER: openshell OPENSHELL_CARGO_VERSION: ${{ steps.version.outputs.cargo_version }} + # Enable dev-settings feature for test settings (dummy_bool, dummy_int) + # used by e2e tests. + EXTRA_CARGO_FEATURES: openshell-core/dev-settings run: mise run --no-prepare docker:build:${{ inputs.component }} + + - name: Build cluster image + if: inputs.component == 'cluster' + env: + DOCKER_BUILDER: openshell + OPENSHELL_CARGO_VERSION: ${{ steps.version.outputs.cargo_version }} + OPENSHELL_RUNTIME_BUNDLE_URL: ${{ inputs.runtime-bundle-url }} + OPENSHELL_RUNTIME_BUNDLE_URL_AMD64: ${{ inputs.runtime-bundle-url-amd64 }} + OPENSHELL_RUNTIME_BUNDLE_URL_ARM64: ${{ inputs.runtime-bundle-url-arm64 }} + OPENSHELL_RUNTIME_BUNDLE_GITHUB_REPO: ${{ inputs.runtime-bundle-github-repo }} + OPENSHELL_RUNTIME_BUNDLE_RELEASE_TAG: ${{ inputs.runtime-bundle-release-tag }} + OPENSHELL_RUNTIME_BUNDLE_FILENAME_PREFIX: ${{ inputs.runtime-bundle-filename-prefix }} + OPENSHELL_RUNTIME_BUNDLE_VERSION: ${{ inputs.runtime-bundle-version }} + run: | + set -euo pipefail + + if [[ "${DOCKER_PLATFORM}" == *","* ]]; then + bash tasks/scripts/ci-build-cluster-image.sh \ + --platform "${DOCKER_PLATFORM}" \ + 
--runtime-bundle-url-amd64 "${OPENSHELL_RUNTIME_BUNDLE_URL_AMD64}" \ + --runtime-bundle-url-arm64 "${OPENSHELL_RUNTIME_BUNDLE_URL_ARM64}" \ + --runtime-bundle-github-repo "${OPENSHELL_RUNTIME_BUNDLE_GITHUB_REPO}" \ + --runtime-bundle-release-tag "${OPENSHELL_RUNTIME_BUNDLE_RELEASE_TAG}" \ + --runtime-bundle-filename-prefix "${OPENSHELL_RUNTIME_BUNDLE_FILENAME_PREFIX}" \ + --runtime-bundle-version "${OPENSHELL_RUNTIME_BUNDLE_VERSION}" + else + bash tasks/scripts/ci-build-cluster-image.sh \ + --platform "${DOCKER_PLATFORM}" \ + --runtime-bundle-url "${OPENSHELL_RUNTIME_BUNDLE_URL}" \ + --runtime-bundle-url-amd64 "${OPENSHELL_RUNTIME_BUNDLE_URL_AMD64}" \ + --runtime-bundle-url-arm64 "${OPENSHELL_RUNTIME_BUNDLE_URL_ARM64}" \ + --runtime-bundle-github-repo "${OPENSHELL_RUNTIME_BUNDLE_GITHUB_REPO}" \ + --runtime-bundle-release-tag "${OPENSHELL_RUNTIME_BUNDLE_RELEASE_TAG}" \ + --runtime-bundle-filename-prefix "${OPENSHELL_RUNTIME_BUNDLE_FILENAME_PREFIX}" \ + --runtime-bundle-version "${OPENSHELL_RUNTIME_BUNDLE_VERSION}" + fi diff --git a/.github/workflows/e2e-test.yml b/.github/workflows/e2e-test.yml index f14ccb88..c21d10e5 100644 --- a/.github/workflows/e2e-test.yml +++ b/.github/workflows/e2e-test.yml @@ -1,6 +1,27 @@ name: E2E Test on: + workflow_dispatch: + inputs: + image-tag: + description: "Image tag to test (typically the commit SHA)" + required: true + type: string + runner: + description: "GitHub Actions runner label for the core E2E suite and optional smoke slices" + required: false + type: string + default: "build-amd64" + run-tool-smoke-validations: + description: "Add the first-class tool smoke evidence slice after the core E2E suite" + required: false + type: boolean + default: false + run-installer-selection-smoke: + description: "Add the installer selection smoke slice after the core E2E suite" + required: false + type: boolean + default: false workflow_call: inputs: image-tag: @@ -8,10 +29,20 @@ on: required: true type: string runner: - description: 
"GitHub Actions runner label" + description: "GitHub Actions runner label for the core E2E suite and optional smoke slices" required: false type: string default: "build-amd64" + run-tool-smoke-validations: + description: "Add the first-class tool smoke evidence slice after the core E2E suite" + required: false + type: boolean + default: false + run-installer-selection-smoke: + description: "Add the installer selection smoke slice after the core E2E suite" + required: false + type: boolean + default: false permissions: contents: read @@ -62,7 +93,26 @@ jobs: - name: Install SSH client for Rust CLI e2e tests run: apt-get update && apt-get install -y --no-install-recommends openssh-client && rm -rf /var/lib/apt/lists/* - - name: Run E2E tests + - name: Run core E2E suite run: | mise run --no-prepare --skip-deps e2e:python mise run --no-prepare --skip-deps e2e:rust + + - name: Record tool smoke evidence slice + if: ${{ inputs.run-tool-smoke-validations }} + run: | + printf 'Enabled first-class tool smoke evidence slice for image-tag=%s\n' "${IMAGE_TAG}" + { + printf '## Tool Smoke Evidence Slice\n\n' + printf -- '- Trigger: `run-tool-smoke-validations=true`\n' + printf -- '- Image tag: `%s`\n' "${IMAGE_TAG}" + printf -- '- Contract: run `tool_adapter_smoke` after the core E2E suite\n' + } >> "$GITHUB_STEP_SUMMARY" + + - name: Run first-class tool smoke evidence slice + if: ${{ inputs.run-tool-smoke-validations }} + run: cargo test --manifest-path e2e/rust/Cargo.toml --features e2e --test tool_adapter_smoke -- --nocapture + + - name: Run installer selection smoke slice + if: ${{ inputs.run-installer-selection-smoke }} + run: bash e2e/install/bash_test.sh diff --git a/.github/workflows/issue-triage.yml b/.github/workflows/issue-triage.yml index 241af7f6..50bdd31e 100644 --- a/.github/workflows/issue-triage.yml +++ b/.github/workflows/issue-triage.yml @@ -37,12 +37,12 @@ jobs: console.log('Agent diagnostic section missing or placeholder. 
Flagging.'); - // Add needs-agent-triage label + // Add state:triage-needed label await github.rest.issues.addLabels({ owner: context.repo.owner, repo: context.repo.repo, issue_number: context.issue.number, - labels: ['needs-agent-triage'] + labels: ['state:triage-needed'] }); // Post redirect comment diff --git a/.github/workflows/release-auto-tag.yml b/.github/workflows/release-auto-tag.yml index c0722853..569c3338 100644 --- a/.github/workflows/release-auto-tag.yml +++ b/.github/workflows/release-auto-tag.yml @@ -6,7 +6,7 @@ name: Release Auto-Tag on: workflow_dispatch: {} schedule: - - cron: "0 2 * * *" # 7 PM PDT + - cron: "0 14 * * 1-5" # 7 AM PDT, weekdays only permissions: contents: write diff --git a/.github/workflows/release-canary.yml b/.github/workflows/release-canary.yml index c4627b7e..ed7e7321 100644 --- a/.github/workflows/release-canary.yml +++ b/.github/workflows/release-canary.yml @@ -4,7 +4,7 @@ on: workflow_dispatch: inputs: tag: - description: "Release tag to test (e.g. devel, v1.2.3)" + description: "Release tag to test (e.g. 
dev, v1.2.3)" required: true type: string workflow_run: @@ -47,7 +47,7 @@ jobs: - name: Install CLI (default / latest) run: | set -euo pipefail - curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh + curl -LsSf https://raw.githubusercontent.com/linuxdevel/OpenShell/main/install.sh | sh - name: Verify CLI installation run: | @@ -115,7 +115,7 @@ jobs: else WORKFLOW_NAME="${{ github.event.workflow_run.name }}" if [ "$WORKFLOW_NAME" = "Release Dev" ]; then - echo "tag=devel" >> "$GITHUB_OUTPUT" + echo "tag=dev" >> "$GITHUB_OUTPUT" elif [ "$WORKFLOW_NAME" = "Release Tag" ]; then TAG="${{ github.event.workflow_run.head_branch }}" if [ -z "$TAG" ]; then @@ -132,7 +132,7 @@ jobs: - name: Install CLI from published install script run: | set -euo pipefail - curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | OPENSHELL_VERSION=${{ steps.release.outputs.tag }} OPENSHELL_INSTALL_DIR=/usr/local/bin sh + curl -LsSf https://raw.githubusercontent.com/linuxdevel/OpenShell/main/install.sh | OPENSHELL_VERSION=${{ steps.release.outputs.tag }} OPENSHELL_INSTALL_DIR=/usr/local/bin sh - name: Verify CLI installation run: | diff --git a/.github/workflows/release-dev.yml b/.github/workflows/release-dev.yml index 488ec101..2ce80820 100644 --- a/.github/workflows/release-dev.yml +++ b/.github/workflows/release-dev.yml @@ -60,6 +60,12 @@ jobs: with: component: cluster cargo-version: ${{ needs.compute-versions.outputs.cargo_version }} + runtime-bundle-url-amd64: ${{ vars.OPENSHELL_RUNTIME_BUNDLE_URL_AMD64 }} + runtime-bundle-url-arm64: ${{ vars.OPENSHELL_RUNTIME_BUNDLE_URL_ARM64 }} + runtime-bundle-github-repo: ${{ github.repository_owner }}/nvidia-container-toolkit + runtime-bundle-release-tag: devel + runtime-bundle-filename-prefix: openshell-gpu-runtime-bundle + runtime-bundle-version: devel e2e: needs: [build-gateway, build-cluster] @@ -96,6 +102,7 @@ jobs: timeout-minutes: 120 outputs: wheel_version: ${{ 
needs.compute-versions.outputs.python_version }} + wheel_filenames: ${{ steps.filenames.outputs.wheel_filenames }} container: image: ghcr.io/nvidia/openshell/ci:latest credentials: @@ -132,6 +139,13 @@ OPENSHELL_CARGO_VERSION="${{ needs.compute-versions.outputs.cargo_version }}" mise run python:build:macos ls -la target/wheels/*.whl + - name: Capture wheel filenames + id: filenames + run: | + set -euo pipefail + WHEEL_FILENAMES=$(ls target/wheels/*.whl | xargs -n1 basename | paste -sd, -) + echo "wheel_filenames=${WHEEL_FILENAMES}" >> "$GITHUB_OUTPUT" + - name: Upload wheel artifacts uses: actions/upload-artifact@v4 with: @@ -297,10 +311,10 @@ retention-days: 5 # --------------------------------------------------------------------------- - # Create / update the devel GitHub Release with CLI binaries and wheels + # Create / update the dev GitHub Release with CLI binaries and wheels # --------------------------------------------------------------------------- - release-devel: - name: Release Devel + release-dev: + name: Release Dev needs: [build-cli-linux, build-cli-macos, build-python-wheels] runs-on: build-amd64 timeout-minutes: 10 @@ -327,7 +341,13 @@ sha256sum *.tar.gz *.whl > openshell-checksums-sha256.txt cat openshell-checksums-sha256.txt - - name: Prune stale wheel assets from devel release + - name: Skip detached checksum signing for dev release + run: | + set -euo pipefail + echo "Dev releases publish checksum manifests without detached signatures in the active path." + echo "Detached checksum signing is deferred backlog work for both dev and tagged release workflows."
+ + - name: Prune stale wheel assets from dev release uses: actions/github-script@v7 env: WHEEL_VERSION: ${{ needs.build-python-wheels.outputs.wheel_version }} @@ -341,20 +361,20 @@ jobs: core.info(`WHEEL_VERSION: ${wheelVersion}`); core.info(`CURRENT_PREFIX: ${currentPrefix}`); - // Fetch the devel release + // Fetch the dev release let release; try { - release = await github.rest.repos.getReleaseByTag({ owner, repo, tag: 'devel' }); + release = await github.rest.repos.getReleaseByTag({ owner, repo, tag: 'dev' }); } catch (err) { if (err.status === 404) { - core.info('No existing devel release found; skipping wheel pruning.'); + core.info('No existing dev release found; skipping wheel pruning.'); return; } throw err; } const assets = release.data.assets; - core.info(`=== Current devel release assets (${assets.length} total) ===`); + core.info(`=== Current dev release assets (${assets.length} total) ===`); for (const a of assets) { core.info(` ${String(a.id).padStart(12)} ${a.name}`); } @@ -374,51 +394,31 @@ jobs: } core.info(`Summary: kept=${kept}, deleted=${deleted}`); - - name: Move devel tag + - name: Move dev tag run: | git config user.name "github-actions[bot]" git config user.email "github-actions[bot]@users.noreply.github.com" - git tag -fa devel -m "Latest Devel" "${GITHUB_SHA}" - git push --force origin devel + git tag -fa dev -m "Latest Dev" "${GITHUB_SHA}" + git push --force origin dev - name: Create / update GitHub Release uses: softprops/action-gh-release@v2 with: name: OpenShell Development Build prerelease: true - tag_name: devel + tag_name: dev target_commitish: ${{ github.sha }} body: | - This build is automatically built on every commit to main that passes CI. + This build is automatically published on every commit to main that passes CI. > **NOTE**: This is a development build, not a tagged release, and may be unstable. + > **NOTE**: Checksum manifests are published in the active path. Detached checksum signing is deferred backlog work. 
### Quick install - Requires the [GitHub CLI (`gh`)](https://cli.github.com) to be installed and authenticated. - - ```bash - sh -c 'ARCH=$(uname -m); OS=$(uname -s); \ - case "${OS}-${ARCH}" in \ - Linux-x86_64) ASSET="openshell-x86_64-unknown-linux-musl.tar.gz" ;; \ - Linux-aarch64) ASSET="openshell-aarch64-unknown-linux-musl.tar.gz" ;; \ - Darwin-arm64) ASSET="openshell-aarch64-apple-darwin.tar.gz" ;; \ - *) echo "Unsupported platform: ${OS}-${ARCH}" >&2; exit 1 ;; \ - esac; \ - gh release download devel --repo NVIDIA/OpenShell --pattern "${ASSET}" -O - \ - | tar xz \ - && sudo install -m 755 openshell /usr/local/bin/openshell' ``` - - ### Assets - - | File | Platform | Install | - |------|----------|---------| - | `openshell-x86_64-unknown-linux-musl.tar.gz` | Linux x86_64 | `gh release download devel --repo NVIDIA/OpenShell --pattern "openshell-x86_64-unknown-linux-musl.tar.gz" -O - \| tar xz && sudo install -m 755 openshell /usr/local/bin/openshell` | - | `openshell-aarch64-unknown-linux-musl.tar.gz` | Linux aarch64 / ARM64 | `gh release download devel --repo NVIDIA/OpenShell --pattern "openshell-aarch64-unknown-linux-musl.tar.gz" -O - \| tar xz && sudo install -m 755 openshell /usr/local/bin/openshell` | - | `openshell-aarch64-apple-darwin.tar.gz` | macOS Apple Silicon | `gh release download devel --repo NVIDIA/OpenShell --pattern "openshell-aarch64-apple-darwin.tar.gz" -O - \| tar xz && sudo install -m 755 openshell /usr/local/bin/openshell` | - | `openshell-*.whl` | Python wheels | `gh release download devel --repo NVIDIA/OpenShell --pattern "openshell-*.whl"` | - | `openshell-checksums-sha256.txt` | — | SHA256 checksums for all archives | + curl -LsSf https://raw.githubusercontent.com/linuxdevel/OpenShell/main/install.sh | OPENSHELL_VERSION=dev sh + ``` files: | release/openshell-x86_64-unknown-linux-musl.tar.gz release/openshell-aarch64-unknown-linux-musl.tar.gz @@ -428,31 +428,23 @@ jobs: trigger-wheel-publish: name: Trigger Wheel Publish - needs: 
[compute-versions, release-devel] + needs: [compute-versions, build-python-wheels, release-dev] runs-on: [self-hosted, nv] timeout-minutes: 10 steps: - - name: Download wheel artifacts - uses: actions/download-artifact@v4 - with: - name: python-wheels - path: release/ - - name: Trigger GitLab CI env: GITLAB_CI_TRIGGER_TOKEN: ${{ secrets.GITLAB_CI_TRIGGER_TOKEN }} GITLAB_CI_TRIGGER_URL: ${{ secrets.GITLAB_CI_TRIGGER_URL }} RELEASE_VERSION: ${{ needs.compute-versions.outputs.python_version }} + WHEEL_FILENAMES: ${{ needs.build-python-wheels.outputs.wheel_filenames }} run: | set -euo pipefail - shopt -s nullglob - wheel_files=(release/*.whl) - if (( ${#wheel_files[@]} == 0 )); then - echo "No wheel artifacts found in release/" >&2 + if [ -z "${WHEEL_FILENAMES}" ]; then + echo "No wheel filenames provided by build job" >&2 exit 1 fi - WHEEL_FILENAMES=$(printf '%s\n' "${wheel_files[@]##*/}" | paste -sd, -) response=$(curl -X POST \ --fail \ --silent \ @@ -462,7 +454,7 @@ jobs: -F "variables[PIPELINE_ACTION]=publish_wheels" \ -F "variables[GITHUB_REPOSITORY]=${GITHUB_REPOSITORY}" \ -F "variables[COMMIT_SHA]=${GITHUB_SHA}" \ - -F "variables[RELEASE_TAG]=devel" \ + -F "variables[RELEASE_TAG]=dev" \ -F "variables[RELEASE_VERSION]=${RELEASE_VERSION}" \ -F "variables[RELEASE_KIND]=dev" \ -F "variables[WHEEL_FILENAMES]=${WHEEL_FILENAMES}" \ diff --git a/.github/workflows/release-tag.yml b/.github/workflows/release-tag.yml index c0bd5deb..e01c54a6 100644 --- a/.github/workflows/release-tag.yml +++ b/.github/workflows/release-tag.yml @@ -75,6 +75,12 @@ jobs: with: component: cluster cargo-version: ${{ needs.compute-versions.outputs.cargo_version }} + runtime-bundle-url-amd64: ${{ vars.OPENSHELL_RUNTIME_BUNDLE_URL_AMD64 }} + runtime-bundle-url-arm64: ${{ vars.OPENSHELL_RUNTIME_BUNDLE_URL_ARM64 }} + runtime-bundle-github-repo: ${{ github.repository_owner }}/nvidia-container-toolkit + runtime-bundle-release-tag: ${{ inputs.tag || github.ref_name }} + runtime-bundle-filename-prefix: 
openshell-gpu-runtime-bundle + runtime-bundle-version: ${{ needs.compute-versions.outputs.semver }} e2e: needs: [build-gateway, build-cluster] @@ -116,6 +122,7 @@ jobs: timeout-minutes: 120 outputs: wheel_version: ${{ needs.compute-versions.outputs.python_version }} + wheel_filenames: ${{ steps.filenames.outputs.wheel_filenames }} container: image: ghcr.io/nvidia/openshell/ci:latest credentials: @@ -153,6 +160,13 @@ jobs: OPENSHELL_CARGO_VERSION="${{ needs.compute-versions.outputs.cargo_version }}" mise run python:build:macos ls -la target/wheels/*.whl + - name: Capture wheel filenames + id: filenames + run: | + set -euo pipefail + WHEEL_FILENAMES=$(ls target/wheels/*.whl | xargs -n1 basename | paste -sd, -) + echo "wheel_filenames=${WHEEL_FILENAMES}" >> "$GITHUB_OUTPUT" + - name: Upload wheel artifacts uses: actions/upload-artifact@v4 with: @@ -345,13 +359,19 @@ jobs: name: python-wheels path: release/ - - name: Generate checksums + - name: Generate required checksum manifest run: | set -euo pipefail cd release sha256sum *.tar.gz *.whl > openshell-checksums-sha256.txt cat openshell-checksums-sha256.txt + - name: Note deferred detached checksum signing + run: | + set -euo pipefail + echo "Tagged releases require release/openshell-checksums-sha256.txt in the active path." + echo "Detached checksum signing remains deferred backlog work and is not enforced by this workflow yet." + - name: Create GitHub Release uses: softprops/action-gh-release@v2 with: @@ -360,12 +380,14 @@ jobs: tag_name: ${{ env.RELEASE_TAG }} generate_release_notes: true body: | - ## OpenShell ${{ env.RELEASE_TAG }} + ## OpenShell ${{ env.RELEASE_TAG }} + + Checksum manifest generation is required for tagged releases. Detached checksum signing remains deferred backlog work and is not enforced in the active release path. 
### Quick install ```bash - curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | OPENSHELL_VERSION=${{ env.RELEASE_TAG }} sh + curl -LsSf https://raw.githubusercontent.com/linuxdevel/OpenShell/main/install.sh | OPENSHELL_VERSION=${{ env.RELEASE_TAG }} sh ``` files: | @@ -377,32 +399,24 @@ jobs: trigger-wheel-publish: name: Trigger Wheel Publish - needs: [compute-versions, release] + needs: [compute-versions, build-python-wheels, release] runs-on: [self-hosted, nv] timeout-minutes: 10 steps: - - name: Download wheel artifacts - uses: actions/download-artifact@v4 - with: - name: python-wheels - path: release/ - - name: Trigger GitLab CI env: GITLAB_CI_TRIGGER_TOKEN: ${{ secrets.GITLAB_CI_TRIGGER_TOKEN }} GITLAB_CI_TRIGGER_URL: ${{ secrets.GITLAB_CI_TRIGGER_URL }} RELEASE_VERSION: ${{ needs.compute-versions.outputs.python_version }} RELEASE_TAG: ${{ env.RELEASE_TAG }} + WHEEL_FILENAMES: ${{ needs.build-python-wheels.outputs.wheel_filenames }} run: | set -euo pipefail - shopt -s nullglob - wheel_files=(release/*.whl) - if (( ${#wheel_files[@]} == 0 )); then - echo "No wheel artifacts found in release/" >&2 + if [ -z "${WHEEL_FILENAMES}" ]; then + echo "No wheel filenames provided by build job" >&2 exit 1 fi - WHEEL_FILENAMES=$(printf '%s\n' "${wheel_files[@]##*/}" | paste -sd, -) response=$(curl -X POST \ --fail \ --silent \ diff --git a/.github/workflows/vouch-check.yml b/.github/workflows/vouch-check.yml index 642a9db1..1b58c275 100644 --- a/.github/workflows/vouch-check.yml +++ b/.github/workflows/vouch-check.yml @@ -12,21 +12,18 @@ jobs: vouch-gate: if: github.repository_owner == 'NVIDIA' runs-on: ubuntu-latest + env: + ORG_READ_TOKEN: ${{ secrets.ORG_READ_TOKEN }} steps: - - name: Check if contributor is vouched + - name: Check org membership + id: org-check + if: env.ORG_READ_TOKEN != '' uses: actions/github-script@v7 with: + github-token: ${{ secrets.ORG_READ_TOKEN }} + result-encoding: string script: | const author = 
context.payload.pull_request.user.login; - const authorType = context.payload.pull_request.user.type; - - // Skip bots (dependabot, renovate, github-actions, etc.). - if (authorType === 'Bot') { - console.log(`${author} is a bot. Skipping vouch check.`); - return; - } - - // Check org membership — members bypass the vouch gate. try { const { status } = await github.rest.orgs.checkMembershipForUser({ org: context.repo.owner, @@ -34,29 +31,27 @@ jobs: }); if (status === 204 || status === 302) { console.log(`${author} is an org member. Skipping vouch check.`); - return; + return 'skip'; } } catch (e) { if (e.status !== 404) { - console.log(`Org membership check error: ${e.message}`); + console.log(`Org membership check error (status=${e.status}): ${e.message}`); } } + return ''; - // Check collaborator status — direct collaborators bypass. - try { - const { status } = await github.rest.repos.checkCollaborator({ - owner: context.repo.owner, - repo: context.repo.repo, - username: author, - }); - if (status === 204) { - console.log(`${author} is a collaborator. Skipping vouch check.`); - return; - } - } catch (e) { - if (e.status !== 404) { - console.log(`Collaborator check error: ${e.message}`); - } + - name: Check if contributor is vouched + if: steps.org-check.outputs.result != 'skip' + uses: actions/github-script@v7 + with: + script: | + const author = context.payload.pull_request.user.login; + const authorType = context.payload.pull_request.user.type; + + // Skip bots (dependabot, renovate, github-actions, etc.). + if (authorType === 'Bot') { + console.log(`${author} is a bot. Skipping vouch check.`); + return; } // Check the VOUCHED.td file on the dedicated "vouched" branch. diff --git a/AGENTS.md b/AGENTS.md index f5cf5269..8e31c4ac 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -19,9 +19,9 @@ These pipelines connect skills into end-to-end workflows. 
Individual skill files - **Community inflow:** `triage-issue` → `create-spike` → `build-from-issue` - Triage assesses and classifies community-filed issues. Spike investigates unknowns. Build implements. - **Internal development:** `create-spike` → `build-from-issue` - - Spike explores feasibility, then build executes once `agent-ready` is applied by a human. + - Spike explores feasibility, then build executes once `state:agent-ready` is applied by a human. - **Security:** `review-security-issue` → `fix-security-issue` - - Review produces a severity assessment and remediation plan. Fix implements it. Both require the `security` label; fix also requires `agent-ready`. + - Review produces a severity assessment and remediation plan. Fix implements it. Both require the `topic:security` label; fix also requires `state:agent-ready`. - **Policy iteration:** `openshell-cli` → `generate-sandbox-policy` - CLI manages the sandbox lifecycle; policy generation authors the YAML constraints. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 88a30acb..f9e1fc1c 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -91,6 +91,8 @@ Skills connect into pipelines. Individual skill files don't describe these relat - **Security:** `review-security-issue` → `fix-security-issue` - **Policy iteration:** `openshell-cli` → `generate-sandbox-policy` +Workflow state labels use the `state:*` prefix, and security work uses `topic:security`. GitHub issue templates assign built-in issue types where applicable, and agent-created issues should use issue types or manual follow-up rather than type labels. + ## Prerequisites Install [mise](https://mise.jdx.dev/). This is used to set up the development environment. 
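+
+As a hedged illustration of the label conventions above (the issue number is
+illustrative, and `state:agent-ready` is applied by humans only):
+
+```bash
+# Inspect the workflow labels currently on an issue.
+gh issue view 123 --json labels
+
+# Apply review/security labels; an agent never applies state:agent-ready.
+gh issue edit 123 --add-label "state:review-ready"
+gh issue edit 123 --add-label "topic:security"
+```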
diff --git a/Cargo.lock b/Cargo.lock index 3d01356a..3ac52b00 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2815,6 +2815,7 @@ dependencies = [ "serde", "serde_json", "tar", + "temp-env", "tempfile", "tokio", "tracing", @@ -2882,6 +2883,18 @@ dependencies = [ "url", ] +[[package]] +name = "openshell-ocsf" +version = "0.1.0" +dependencies = [ + "chrono", + "serde", + "serde_json", + "serde_repr", + "tracing", + "tracing-subscriber", +] + [[package]] name = "openshell-policy" version = "0.1.0" @@ -2910,6 +2923,7 @@ dependencies = [ "serde", "serde_json", "serde_yaml", + "temp-env", "tempfile", "thiserror 2.0.18", "tokio", diff --git a/README.md b/README.md index 1800a468..32ef2151 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,6 @@ OpenShell is built agent-first. The project ships with agent skills for everythi ## Quickstart - ### Prerequisites - **Docker** — Docker Desktop (or a Docker daemon) must be running. @@ -24,7 +23,7 @@ OpenShell is built agent-first. The project ships with agent skills for everythi **Binary (recommended):** ```bash -curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh +curl -LsSf https://raw.githubusercontent.com/linuxdevel/OpenShell/main/install.sh | sh ``` **From PyPI (requires [uv](https://docs.astral.sh/uv/)):** @@ -33,10 +32,12 @@ curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | uv tool install -U openshell ``` +Both methods install the latest stable release by default. To install a specific version, set `OPENSHELL_VERSION` (binary) or pin the version with `uv tool install openshell==`. A [`dev` release](https://github.com/NVIDIA/OpenShell/releases/tag/dev) is also available that tracks the latest commit on `main`. + ### Create a sandbox ```bash -openshell sandbox create -- claude # or opencode, codex, ollama +openshell sandbox create -- claude # or opencode, codex, copilot ``` A gateway is created automatically on first use. 
To deploy on a remote host instead, pass `--remote user@host` to the create command. @@ -45,7 +46,7 @@ The sandbox container includes the following tools by default: | Category | Tools | | ---------- | -------------------------------------------------------- | -| Agent | `claude`, `opencode`, `codex` | +| Agent | `claude`, `opencode`, `codex`, `copilot` | | Language | `python` (3.13), `node` (22) | | Developer | `gh`, `git`, `vim`, `nano` | | Networking | `ping`, `dig`, `nslookup`, `nc`, `traceroute`, `netstat` | @@ -115,9 +116,11 @@ Policies are declarative YAML files. Static sections (filesystem, process) are l ## Providers -Agents need credentials — API keys, tokens, service accounts. OpenShell manages these as **providers**: named credential bundles that are injected into sandboxes at creation. The CLI auto-discovers credentials for recognized agents (Claude, Codex, OpenCode) from your shell environment, or you can create providers explicitly with `openshell provider create`. Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime. +Agents need credentials — API keys, tokens, service accounts. OpenShell manages these as **providers**: named credential bundles that are injected into sandboxes at creation. The CLI auto-discovers credentials for recognized agents (Claude, Codex, OpenCode, Copilot) from your shell environment, or you can create providers explicitly with `openshell provider create`. Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime. + +## GPU Support (Experimental) -## GPU Support +> **Experimental** — GPU passthrough works on supported hosts but is under active development. Expect rough edges and breaking changes. OpenShell can pass host GPUs into sandboxes for local inference, fine-tuning, or any GPU workload. Add `--gpu` when creating a sandbox: @@ -136,8 +139,9 @@ The CLI auto-bootstraps a GPU-enabled gateway on first use. 
GPU intent is also i | [Claude Code](https://docs.anthropic.com/en/docs/claude-code) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `ANTHROPIC_API_KEY`. | | [OpenCode](https://opencode.ai/) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY` or `OPENROUTER_API_KEY`. | | [Codex](https://developers.openai.com/codex) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY`. | +| [GitHub Copilot CLI](https://docs.github.com/en/copilot/github-copilot-in-the-cli) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `GITHUB_TOKEN` or `COPILOT_GITHUB_TOKEN`. | | [OpenClaw](https://openclaw.ai/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from openclaw`. | -| [Ollama](https://ollama.com/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from ollama`. | +| [Ollama](https://ollama.com/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from ollama`. | ## Key Commands diff --git a/architecture/README.md b/architecture/README.md index d65b9b23..255b8ef9 100644 --- a/architecture/README.md +++ b/architecture/README.md @@ -162,6 +162,8 @@ The provider system handles: This approach means users configure credentials once, and every sandbox that needs them receives them automatically at runtime. +Provider records are not the same thing as sandbox tool adapters. A provider answers "which external account or secret material does this sandbox need?" A tool adapter answers "how should a specific CLI tool such as `claude code` or `opencode` receive and use that provider context inside the sandbox?" 
The long-term design keeps both layers explicit so OpenShell can support multiple first-class tools without turning each provider plugin into a tool-specific one-off. + For more detail, see [Providers](sandbox-providers.md). ### Inference Routing @@ -211,6 +213,15 @@ The platform produces three container images: Builds use multi-stage Dockerfiles with caching to keep rebuild times fast. A Helm chart handles Kubernetes-level configuration (service ports, health checks, security contexts, resource limits). Build automation is managed through mise tasks. +OpenShell is the top-level orchestrator, not the only producer in the install chain. The coordinated fork-owned path is split across four repos: + +- `libnvidia-container/` produces the low-level GPU library payload. +- `nvidia-container-toolkit/` produces the verified runtime-bundle release assets that cluster builds consume. +- `OpenShell-Community/` produces sandbox and base images that carry packaged tool environments. +- `OpenShell/` consumes those outputs to assemble cluster and gateway images, apply tool/provider behavior, and expose the currently expected final setup/install interface. + +The final installer/setup owner is still an explicit workspace decision, but the current direction is for `OpenShell/` to remain the user-facing composition layer rather than rebuilding the other repos' responsibilities internally. + For more detail, see [Container Management](build-containers.md). ### Policy Language @@ -224,9 +235,11 @@ Sandbox behavior is governed by policies written in YAML and evaluated by an emb Inference routing to `inference.local` is configured separately at the cluster level and does not require network policy entries. The OPA engine evaluates only explicit network policies; `inference.local` connections bypass OPA entirely and are handled by the proxy's dedicated inference interception path. -Policies are not intended to be hand-edited by end users in normal operation. 
They are associated with sandboxes at creation time and fetched by the sandbox supervisor at startup via gRPC. For development and testing, policies can also be loaded from local files. +Policies are not intended to be hand-edited by end users in normal operation. They are associated with sandboxes at creation time and fetched by the sandbox supervisor at startup via gRPC. For development and testing, policies can also be loaded from local files. A gateway-global policy can override all sandbox policies via `openshell policy set --global`. + +In addition to policy, the gateway delivers runtime **settings** -- typed key-value pairs (e.g., `log_level`) that can be configured per-sandbox or globally. Settings and policy are delivered together through the `GetSandboxSettings` RPC and tracked by a single `config_revision` fingerprint. See [Gateway Settings Channel](gateway-settings.md) for details. -For more detail, see [Policy Language](security-policy.md). +For more detail on the policy language, see [Policy Language](security-policy.md). ### Command-Line Interface @@ -234,7 +247,7 @@ The CLI is the primary way users interact with the platform. It provides command - **Gateway management** (`openshell gateway`): Deploy, stop, destroy, and inspect clusters. Supports both local and remote (SSH) targets. - **Sandbox management** (`openshell sandbox`): Create sandboxes (with optional file upload and provider auto-discovery), connect to sandboxes via SSH, and delete sandboxes. -- **Top-level commands**: `openshell status` (cluster health), `openshell logs` (sandbox logs), `openshell forward` (port forwarding), `openshell policy` (sandbox policy management). +- **Top-level commands**: `openshell status` (cluster health), `openshell logs` (sandbox logs), `openshell forward` (port forwarding), `openshell policy` (sandbox policy management), `openshell settings` (effective sandbox settings and global/sandbox key updates). 
- **Provider management** (`openshell provider`): Create, update, list, and delete external service credentials. - **Inference management** (`openshell cluster inference`): Configure cluster-level inference by specifying a provider and model. The gateway resolves endpoint and credential details from the named provider record. @@ -269,7 +282,11 @@ This performs the same bootstrap flow on the remote host via SSH. For development and testing against the current checkout, use `scripts/remote-deploy.sh` instead. That helper syncs the local repository to an SSH-reachable machine, builds the CLI and Docker images on the remote host, -and then runs `openshell gateway start` there. It defaults to secure gateway +and then runs `openshell gateway start` there. Cluster-image builds in that +flow now also require a runtime-bundle tarball: provide +`--runtime-bundle-tarball ` for normal sync-and-build deploys, or +`--remote-runtime-bundle-tarball --skip-sync` if the bundle is +already staged on the remote host. The helper defaults to secure gateway startup and only enables `--plaintext`, `--disable-gateway-auth`, or `--recreate` when explicitly requested. @@ -297,4 +314,5 @@ This opens an interactive SSH session into the sandbox, with all provider creden | [Policy Language](security-policy.md) | The YAML/Rego policy system that governs sandbox behavior. | | [Inference Routing](inference-routing.md) | Transparent interception and sandbox-local routing of AI inference API calls to configured backends. | | [System Architecture](system-architecture.md) | Top-level system architecture diagram with all deployable components and communication flows. | +| [Gateway Settings Channel](gateway-settings.md) | Runtime settings channel: two-tier key-value configuration, global policy override, settings registry, CLI/TUI commands. | | [TUI](tui.md) | Terminal user interface for sandbox interaction. 
| diff --git a/architecture/build-containers.md b/architecture/build-containers.md index 705b00d6..ef0d241e 100644 --- a/architecture/build-containers.md +++ b/architecture/build-containers.md @@ -6,7 +6,7 @@ OpenShell produces two container images, both published for `linux/amd64` and `l The gateway runs the control plane API server. It is deployed as a StatefulSet inside the cluster container via a bundled Helm chart. -- **Dockerfile**: `deploy/docker/Dockerfile.gateway` +- **Build target**: `deploy/docker/Dockerfile.images` target `gateway` - **Registry**: `ghcr.io/nvidia/openshell/gateway:latest` - **Pulled when**: Cluster startup (the Helm chart triggers the pull) - **Entrypoint**: `openshell-server --port 8080` (gRPC + HTTP, mTLS) @@ -15,12 +15,44 @@ The gateway runs the control plane API server. It is deployed as a StatefulSet i The cluster image is a single-container Kubernetes distribution that bundles the Helm charts, Kubernetes manifests, and the `openshell-sandbox` supervisor binary needed to bootstrap the control plane. -- **Dockerfile**: `deploy/docker/Dockerfile.cluster` +- **Build target**: `deploy/docker/Dockerfile.images` target `cluster` - **Registry**: `ghcr.io/nvidia/openshell/cluster:latest` - **Pulled when**: `openshell gateway start` The supervisor binary (`openshell-sandbox`) is cross-compiled in a build stage and placed at `/opt/openshell/bin/openshell-sandbox`. It is exposed to sandbox pods at runtime via a read-only `hostPath` volume mount — it is not baked into sandbox images. +## Controlled GPU Runtime Bundle Path + +OpenShell's runtime bundle publication contract is tarball-first. The canonical artifact is a per-architecture release tarball whose single top-level bundle directory contains the install-root payload plus `manifest.json`. If OCI publication is added later, it is only a mirror transport for that same bundle contract. + +The current cluster build now consumes that published tarball through the local staged bundle path. 
`tasks/scripts/docker-build-image.sh cluster` requires `OPENSHELL_RUNTIME_BUNDLE_TARBALL`, fails before any Helm packaging or Docker build when the bundle is missing or invalid, and stages the verified install-root payload under `deploy/docker/.build/runtime-bundle//`. `deploy/docker/Dockerfile.images` target `cluster` then copies the runtime binaries, config, and shared libraries from that staged local tree into the final cluster image. + +That requirement now flows through all cluster-image entrypoints instead of only the direct script call: + +- local bootstrap via `tasks/scripts/cluster-bootstrap.sh` requires `OPENSHELL_RUNTIME_BUNDLE_TARBALL` whenever it is going to build the cluster image; prebuilt-image flows can still set `SKIP_CLUSTER_IMAGE_BUILD=1` +- remote gateway deploy via `scripts/remote-deploy.sh` requires either `--runtime-bundle-tarball` (or local `OPENSHELL_RUNTIME_BUNDLE_TARBALL`) for sync-and-build flows, or `--remote-runtime-bundle-tarball` when `--skip-sync` should reuse a tarball already staged on the remote host; the script exports the resolved remote path before invoking the remote cluster build +- multi-arch publishing via `tasks/scripts/docker-publish-multiarch.sh` requires `OPENSHELL_RUNTIME_BUNDLE_TARBALL_AMD64` and `OPENSHELL_RUNTIME_BUNDLE_TARBALL_ARM64`, builds one verified per-arch cluster image at a time, then assembles the final multi-arch manifest from those architecture-specific tags +- GitHub workflow cluster builds now consume release-asset URLs rather than local tarball paths directly: `tasks/scripts/download-runtime-bundle.sh` downloads per-arch tarballs into `deploy/docker/.build/runtime-bundles/`, `tasks/scripts/ci-build-cluster-image.sh` maps single-arch builds to `docker:build:cluster` and multi-arch builds to `docker:build:cluster:multiarch`, and `.github/workflows/docker-build.yml` passes explicit bundle URLs from workflow inputs or repo variables into that helper path + +The intended first OpenShell tarball 
consumption path is the `tasks/scripts/docker-build-image.sh cluster` -> `deploy/docker/Dockerfile.images` target `cluster` flow: + +1. `tasks/scripts/docker-build-image.sh cluster` receives the per-architecture runtime bundle tarball path through `OPENSHELL_RUNTIME_BUNDLE_TARBALL` before `docker buildx build`. +2. The script verifies the single top-level bundle-directory shape, requires valid JSON `manifest.json` content inside that bundle directory with a matching `architecture`, validates manifest-declared checksums and sizes, and checks the required runtime payload paths before staging. +3. The script stages the tarball payload into `deploy/docker/.build/runtime-bundle//`, preserving the bundle directory and install-root layout expected by OpenShell. +4. `deploy/docker/Dockerfile.images` target `cluster` loads the staged local bundle tree in the `runtime-bundle` stage and copies the verified runtime files into the same final image paths OpenShell already expects. + +The tarball payload must contain the exact runtime assets the cluster image expects today: + +- `/usr/bin/nvidia-cdi-hook` +- `/usr/bin/nvidia-container-runtime` +- `/usr/bin/nvidia-container-runtime-hook` +- `/usr/bin/nvidia-container-cli` +- `/usr/bin/nvidia-ctk` +- `/etc/nvidia-container-runtime/` +- `/usr/lib/*-linux-gnu/libnvidia-container*.so*` + +This handoff keeps the OpenShell build package-manager-free for the runtime dependency itself. Standard OS image layers can remain upstream inputs, but the GPU runtime contents enter the build as a verified tarball payload rather than through a distro package repository. OCI, if later added, mirrors this same tarball-defined payload instead of changing the OpenShell consumption contract. + ## Sandbox Images Sandbox images are **not built in this repository**. They are maintained in the [openshell-community](https://github.com/nvidia/openshell-community) repository and pulled from `ghcr.io/nvidia/openshell-community/sandboxes/` at runtime. 
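+
+Returning to the controlled runtime bundle path above: assuming only the
+script, task names, and env var already named in this section (the tarball
+location is illustrative), the local staged-bundle cluster build reduces to:
+
+```bash
+# Point the build at a verified per-arch bundle tarball (path illustrative).
+export OPENSHELL_RUNTIME_BUNDLE_TARBALL=/tmp/runtime-bundle-amd64.tar.gz
+
+# Verify, stage, and build the cluster image.
+tasks/scripts/docker-build-image.sh cluster
+```
+
+If the tarball is missing or fails manifest validation, the script exits
+before any Helm packaging or Docker build, as described above.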
@@ -42,7 +74,7 @@ The incremental deploy (`cluster-deploy-fast.sh`) fingerprints local Git changes | Changed files | Rebuild triggered | |---|---| | Cargo manifests, proto definitions, cross-build script | Gateway + supervisor | -| `crates/openshell-server/*`, `Dockerfile.gateway` | Gateway | +| `crates/openshell-server/*`, `deploy/docker/Dockerfile.images` | Gateway | | `crates/openshell-sandbox/*`, `crates/openshell-policy/*` | Supervisor | | `deploy/helm/openshell/*` | Helm upgrade | diff --git a/architecture/gateway-security.md b/architecture/gateway-security.md index 14543640..f6598626 100644 --- a/architecture/gateway-security.md +++ b/architecture/gateway-security.md @@ -229,7 +229,7 @@ These are used to build a `tonic::transport::ClientTlsConfig` with: - `identity()` -- presents the shared client certificate for mTLS. The sandbox calls two RPCs over this authenticated channel: -- `GetSandboxPolicy` -- fetches the YAML policy that governs the sandbox's behavior. +- `GetSandboxSettings` -- fetches the YAML policy that governs the sandbox's behavior. - `GetSandboxProviderEnvironment` -- fetches provider credentials as environment variables. ## SSH Tunnel Authentication diff --git a/architecture/gateway-settings.md b/architecture/gateway-settings.md new file mode 100644 index 00000000..ba0fb7a8 --- /dev/null +++ b/architecture/gateway-settings.md @@ -0,0 +1,561 @@ +# Gateway Settings Channel + +## Overview + +The settings channel provides a two-tier key-value configuration system that the gateway delivers to sandboxes alongside policy. Settings are runtime-mutable name-value pairs (e.g., `log_level`, feature flags) that flow from the gateway to sandboxes through the existing `GetSandboxSettings` poll loop. The system supports two scopes -- sandbox-level and global -- with a deterministic merge strategy and per-key mutual exclusion to prevent conflicting ownership. + +## Architecture + +```mermaid +graph TD + CLI["CLI / TUI"] + GW["Gateway
(openshell-server)"] + OBJ["Store: objects table
(gateway_settings,
sandbox_settings blobs)"] + POL["Store: sandbox_policies table
(revisions for sandbox-scoped
and __global__ policies)"] + SB["Sandbox
(poll loop)"] + + CLI -- "UpdateSettings
(policy / setting_key + value)" --> GW + CLI -- "GetSandboxSettings
GetGatewaySettings
ListSandboxPolicies
GetSandboxPolicyStatus" --> GW + GW -- "load/save settings blobs
(delivery mechanism)" --> OBJ + GW -- "put/list/update
policy revisions
(audit + versioning)" --> POL + GW -- "GetSandboxSettingsResponse
(policy + settings +
config_revision +
global_policy_version)" --> SB + SB -- "diff settings
reload OPA on policy change" --> SB +``` + +## Settings Registry + +**File:** `crates/openshell-core/src/settings.rs` + +The `REGISTERED_SETTINGS` static array defines the allowed setting keys and their value types. The registry is the source of truth for both client-side validation (CLI, TUI) and server-side enforcement. + +```rust +pub const REGISTERED_SETTINGS: &[RegisteredSetting] = &[ + RegisteredSetting { key: "log_level", kind: SettingValueKind::String }, + RegisteredSetting { key: "dummy_int", kind: SettingValueKind::Int }, + RegisteredSetting { key: "dummy_bool", kind: SettingValueKind::Bool }, +]; +``` + +| Type | Proto variant | Description | +|------|---------------|-------------| +| `String` | `SettingValue.string_value` | Arbitrary UTF-8 string | +| `Int` | `SettingValue.int_value` | 64-bit signed integer | +| `Bool` | `SettingValue.bool_value` | Boolean; CLI accepts `true/false/yes/no/1/0/on/off` via `parse_bool_like()` | + +The reserved key `policy` is excluded from the registry. It is handled by dedicated policy commands and stored as a hex-encoded protobuf `SandboxPolicy` in the global settings' `Bytes` variant. Attempts to set or delete the `policy` key through settings commands are rejected. 
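+
+As a rough sketch (hypothetical caller code, not the actual CLI
+implementation; helper signatures are assumed), a client-side check against
+the registry could look like:
+
+```rust
+// Hypothetical sketch: reject the reserved key and unknown keys before
+// building an UpdateSettingsRequest.
+fn validate_setting_key(key: &str) -> Result<&'static RegisteredSetting, String> {
+    if key == "policy" {
+        return Err("reserved key `policy`: use the policy commands instead".to_string());
+    }
+    setting_for_key(key).ok_or_else(|| {
+        format!("unknown setting `{key}`; valid keys: {}", registered_keys_csv())
+    })
+}
+```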
+ +Helper functions: +- `setting_for_key(key)` -- look up a `RegisteredSetting` by name, returns `None` for unknown keys +- `registered_keys_csv()` -- comma-separated list of valid keys for error messages +- `parse_bool_like(raw)` -- flexible bool parsing from CLI string input + +## Proto Layer + +**File:** `proto/sandbox.proto` + +### New Message Types + +| Message | Fields | Purpose | +|---------|--------|---------| +| `SettingValue` | `oneof value { string_value, bool_value, int_value, bytes_value }` | Type-aware setting value | +| `EffectiveSetting` | `SettingValue value`, `SettingScope scope` | A resolved setting with its controlling scope | +| `SettingScope` enum | `UNSPECIFIED`, `SANDBOX`, `GLOBAL` | Which tier controls the current value | +| `PolicySource` enum | `UNSPECIFIED`, `SANDBOX`, `GLOBAL` | Origin of the policy in a settings response | + +### New RPCs + +**File:** `proto/openshell.proto` + +| RPC | Request | Response | Called by | +|-----|---------|----------|-----------| +| `GetSandboxSettings` | `GetSandboxSettingsRequest { sandbox_id }` | `GetSandboxSettingsResponse { policy, version, policy_hash, settings, config_revision, policy_source, global_policy_version }` | Sandbox poll loop, CLI `settings get` | +| `GetGatewaySettings` | `GetGatewaySettingsRequest {}` | `GetGatewaySettingsResponse { settings, settings_revision }` | CLI `settings get --global`, TUI dashboard | + +### `UpdateSettingsRequest` + +The `UpdateSettings` RPC multiplexes policy and setting mutations through a single request message: + +| Field | Type | Description | +|-------|------|-------------| +| `setting_key` | `string` | Key to mutate (mutually exclusive with `policy` payload) | +| `setting_value` | `SettingValue` | Value to set (for upsert operations) | +| `delete_setting` | `bool` | Delete the key from the specified scope | +| `global` | `bool` | Target gateway-global scope instead of sandbox scope | + +Validation rules: +- `policy` and `setting_key` cannot both be 
present
+- At least one of `policy` or `setting_key` must be present
+- `delete_setting` cannot be combined with a `policy` payload
+- The reserved `policy` key requires the `policy` field (not `setting_key`) for set operations
+- `name` is required for sandbox-scoped updates but not for global updates
+
+## Server Implementation
+
+**File:** `crates/openshell-server/src/grpc.rs`
+
+### Storage Model
+
+The settings channel uses two storage mechanisms: the `objects` table for settings blobs (fast delivery) and the `sandbox_policies` table for versioned policy revisions (audit/history).
+
+#### Settings blobs (`objects` table)
+
+Settings are persisted using the existing generic `objects` table with two object types:
+
+| Object type string | Record ID | Record name | Purpose |
+|--------------------|-----------|-------------|---------|
+| `gateway_settings` | `"global"` | `"global"` | Singleton global settings (includes reserved `policy` key for delivery) |
+| `sandbox_settings` | `"settings:{sandbox_uuid}"` | sandbox name | Per-sandbox settings |
+
+The sandbox settings ID is prefixed with `settings:` to avoid a primary key collision with the sandbox's own record in the `objects` table. The `sandbox_settings_id()` function computes this key.
+
+The payload is a JSON-encoded `StoredSettings` struct:
+
+```rust
+struct StoredSettings {
+    revision: u64,                                  // Monotonically increasing
+    settings: BTreeMap<String, StoredSettingValue>, // Sorted for determinism
+}
+
+enum StoredSettingValue {
+    String(String),
+    Bool(bool),
+    Int(i64),
+    Bytes(String), // Hex-encoded binary (used for global policy)
+}
+```
+
+#### Policy revisions (`sandbox_policies` table)
+
+Global policy revisions are stored in the `sandbox_policies` table using the sentinel `sandbox_id = "__global__"` (`GLOBAL_POLICY_SANDBOX_ID` constant).
This reuses the same schema as sandbox-scoped policy revisions:
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `id` | `TEXT` | UUID primary key |
+| `sandbox_id` | `TEXT` | `"__global__"` for global revisions, sandbox UUID for sandbox-scoped |
+| `version` | `INTEGER` | Monotonically increasing per `sandbox_id` |
+| `policy_payload` | `BLOB` | Protobuf-encoded `SandboxPolicy` |
+| `policy_hash` | `TEXT` | Deterministic SHA-256 hash of the policy |
+| `status` | `TEXT` | `pending`, `loaded`, `failed`, or `superseded` |
+| `load_error` | `TEXT` | Error message (populated on `failed` status) |
+| `created_at_ms` | `INTEGER` | Epoch milliseconds when the revision was created |
+| `loaded_at_ms` | `INTEGER` | Epoch milliseconds when the revision was marked loaded |
+
+The `sandbox_policies` table provides history and an audit trail (queried by `policy list --global` and `policy get --global`). The `gateway_settings` blob's `policy` key is the authoritative source that `GetSandboxSettings` reads for fast poll resolution. Both are written on `policy set --global` -- this dual-write is intentional.
+
+### Two-Tier Resolution (`merge_effective_settings`)
+
+The `GetSandboxSettings` handler resolves the effective settings map by merging the sandbox and global tiers:
+
+1. **Seed registered keys**: All keys from `REGISTERED_SETTINGS` are inserted with `scope: UNSPECIFIED` and `value: None`. This ensures registered keys always appear in the response even when unset.
+2. **Apply sandbox values**: Sandbox-scoped settings overlay the registered defaults. Scope becomes `SANDBOX`.
+3. **Apply global values**: Global settings override sandbox values. Scope becomes `GLOBAL`.
+4. **Exclude reserved keys**: The `policy` key is excluded from the merged settings map (it is delivered as the top-level `policy` field in the response).
+
+```mermaid
+flowchart LR
+    REG["REGISTERED_SETTINGS<br/>(seed: scope=UNSPECIFIED)"]
+    SB["Sandbox settings<br/>(scope=SANDBOX)"]
+    GL["Global settings<br/>(scope=GLOBAL)"]
+    OUT["Effective settings map"]
+
+    REG --> OUT
+    SB -->|"overlay"| OUT
+    GL -->|"override"| OUT
+```
+
+### Global Policy as a Setting
+
+The reserved `policy` key in global settings stores a hex-encoded protobuf `SandboxPolicy`. When present, `GetSandboxSettings` uses the global policy instead of the sandbox's own policy:
+
+1. `decode_policy_from_global_settings()` checks for the `policy` key in global settings
+2. If present, the global policy replaces the sandbox policy in the response
+3. `policy_source` is set to `GLOBAL`
+4. The sandbox policy version counter is preserved for status APIs
+5. The `global_policy_version` field is populated from the latest `__global__` revision in the `sandbox_policies` table
+
+This allows operators to push a single policy that applies to all sandboxes via `openshell policy set --global --policy FILE`.
+
+### Global Policy Lifecycle
+
+Global policies are versioned through a full revision lifecycle stored alongside sandbox policies. The sentinel `sandbox_id = "__global__"` (constant `GLOBAL_POLICY_SANDBOX_ID`) distinguishes global revisions from sandbox-scoped revisions in the same `sandbox_policies` table.
+
+#### State Machine
+
+```mermaid
+stateDiagram-v2
+    [*] --> NoGlobalPolicy
+
+    NoGlobalPolicy --> v1_Loaded : policy set --global<br/>(creates v1, marks loaded)
+
+    v1_Loaded --> v1_Loaded : policy set --global<br/>(same hash, dedup no-op)
+    v1_Loaded --> v2_Loaded : policy set --global<br/>(different hash)
+    v1_Loaded --> AllSuperseded : policy delete --global
+
+    v2_Loaded --> v2_Loaded : policy set --global<br/>(same hash, dedup no-op)
+    v2_Loaded --> v3_Loaded : policy set --global<br/>(different hash)
+    v2_Loaded --> AllSuperseded : policy delete --global
+
+    v3_Loaded --> v3_Loaded : policy set --global<br/>(same hash, dedup no-op)
+    v3_Loaded --> AllSuperseded : policy delete --global
+
+    AllSuperseded --> NewVersion_Loaded : policy set --global<br/>(any hash, no dedup)
+
+    state "No Global Policy" as NoGlobalPolicy
+    state "v1: Loaded" as v1_Loaded
+    state "v2: Loaded, v1: Superseded" as v2_Loaded
+    state "v3: Loaded, v1-v2: Superseded" as v3_Loaded
+    state "All Revisions Superseded<br/>(no active global policy)" as AllSuperseded
+    state "vN: Loaded, older: Superseded" as NewVersion_Loaded
+```
+
+#### Key behaviors
+
+- **Dedup on set**: When the latest global revision has status `loaded` and its hash matches the submitted policy, no new revision is created. The settings blob is still ensured to have the `policy` key (a reconciliation step guarding against the case where a pod restart lost the blob entry while the `sandbox_policies` table retained the revision). See `crates/openshell-server/src/grpc.rs` -- `update_settings()`, lines around the `current.policy_hash == hash && current.status == "loaded"` check.
+
+- **No dedup against superseded**: If the latest revision has status `superseded` (e.g., after a `policy delete --global`), the same hash creates a new revision. This supports the toggle pattern: delete the global policy, then re-set the same policy. The dedup check explicitly requires `status == "loaded"`.
+
+- **Immediate load**: Global policy revisions are marked `loaded` immediately upon creation (no sandbox confirmation needed). The gateway calls `update_policy_status(GLOBAL_POLICY_SANDBOX_ID, next_version, "loaded", ...)` right after `put_policy_revision()`. Sandboxes pick up changes via the 10-second poll loop.
+
+- **Supersede on set**: When a new global revision is created, `supersede_older_policies(GLOBAL_POLICY_SANDBOX_ID, next_version)` marks all older revisions with `pending` or `loaded` status as `superseded`.
+
+- **Delete supersedes all**: `policy delete --global` removes the `policy` key from the `gateway_settings` blob and calls `supersede_older_policies()` with `latest.version + 1` to mark ALL `__global__` revisions as `superseded`. This restores sandbox-level policy control.
+
+- **Dual-write**: `policy set --global` writes to BOTH the `sandbox_policies` revision table (for audit/listing via `policy list --global`) AND the `gateway_settings` blob (for fast delivery via `GetSandboxSettings`).
The revision table provides history; the settings blob is the authoritative source that sandboxes poll.
+
+- **Concurrency**: All global mutations acquire `ServerState.settings_mutex` (a `tokio::sync::Mutex<()>`) for the duration of the read-modify-write cycle. This prevents races between concurrent global policy set/delete operations and global setting mutations.
+
+#### Global policy effects on sandboxes
+
+When a global policy is active (the `policy` key exists in `gateway_settings`):
+
+| Operation | Effect |
+|-----------|--------|
+| `GetSandboxSettings` | Returns the global policy payload instead of the sandbox's own policy. `policy_source = GLOBAL`. `global_policy_version` set to the active revision's version number. |
+| Sandbox-scoped `policy set` | Rejected with `FailedPrecondition: "policy is managed globally; delete global policy before sandbox policy update"` |
+| `rule approve` | Rejected with `FailedPrecondition: "cannot approve rules while a global policy is active; delete the global policy to manage per-sandbox rules"` |
+| `rule approve-all` | Rejected with the same `FailedPrecondition` as `rule approve` |
+| Revoking an approved chunk (via `rule reject` on an `approved` chunk) | Rejected with the same `FailedPrecondition` -- revoking would modify the sandbox policy, which is not currently in effect |
+| Rejecting a `pending` chunk | Allowed -- rejection does not modify the sandbox policy |
+| `settings set/delete` at sandbox scope | Allowed -- settings and policy are independent channels |
+| Draft chunk collection | Continues normally -- the sandbox proxy still generates proposals. Chunks are visible but cannot be approved. |
+
+The blocking logic is implemented by `require_no_global_policy()` in `crates/openshell-server/src/grpc.rs`, which checks for the `policy` key in global settings and returns `FailedPrecondition` if present.
+
+### `config_revision` and `global_policy_version`
+
+**`config_revision`** (`u64`): Content hash of the merged effective config.
Computed by `compute_config_revision()` from three inputs: `policy_source` (as 4 LE bytes), the deterministic policy hash (if policy present), and sorted settings entries (key bytes + scope as 4 LE bytes + type tag byte + value bytes). The SHA-256 digest is truncated to 8 bytes and interpreted as `u64` (little-endian). Changes when the global policy, sandbox policy, settings, or policy source changes. Used by the sandbox poll loop for change detection. + +**`global_policy_version`** (`u32`): The version number of the active global policy revision. Populated in `GetSandboxSettingsResponse` when `policy_source == GLOBAL` by looking up the latest revision for `GLOBAL_POLICY_SANDBOX_ID`. Zero when no global policy is active or when `policy_source == SANDBOX`. Displayed in the TUI dashboard and sandbox metadata pane, and logged by the sandbox on reload. + +### Per-Key Mutual Exclusion + +Global and sandbox scopes cannot both control the same key simultaneously: + +| Operation | Global key exists | Behavior | +|-----------|-------------------|----------| +| Sandbox set | Yes | `FailedPrecondition`: "setting '{key}' is managed globally; delete the global setting before sandbox update" | +| Sandbox delete | Yes | `FailedPrecondition`: "setting '{key}' is managed globally; delete the global setting first" | +| Sandbox set | No | Allowed | +| Sandbox delete | No | Allowed | +| Global set | (any) | Always allowed (global overrides) | +| Global delete | (any) | Allowed; unlocks sandbox control for the key | + +This prevents conflicting values at different scopes. An operator must delete a global key before a sandbox-level value can be set for the same key. 
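The per-key mutual-exclusion rule above can be sketched as a small check. This is an illustrative std-only sketch, not the gateway's actual code; `StoredSettingValue` and `check_sandbox_write_allowed` are hypothetical stand-ins for the real types in `grpc.rs`:

```rust
use std::collections::BTreeMap;

/// Simplified stand-in for the stored setting value (illustrative).
#[derive(Clone, Debug, PartialEq)]
enum StoredSettingValue {
    String(String),
}

/// Sketch of the mutual-exclusion rule: a sandbox-scoped write is
/// rejected while a global value exists for the same key.
fn check_sandbox_write_allowed(
    global: &BTreeMap<String, StoredSettingValue>,
    key: &str,
) -> Result<(), String> {
    if global.contains_key(key) {
        // Mirrors the FailedPrecondition message described above.
        return Err(format!(
            "setting '{key}' is managed globally; delete the global setting before sandbox update"
        ));
    }
    Ok(())
}

fn main() {
    let mut global = BTreeMap::new();
    global.insert(
        "log_level".to_string(),
        StoredSettingValue::String("warn".into()),
    );

    // Sandbox write to a globally managed key is rejected...
    assert!(check_sandbox_write_allowed(&global, "log_level").is_err());

    // ...and allowed again once the global key is deleted.
    global.remove("log_level");
    assert!(check_sandbox_write_allowed(&global, "log_level").is_ok());
    println!("ok");
}
```

Note how global deletion is the only way to "unlock" the key: there is no force flag at sandbox scope in the design described above.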
+ +### Sandbox-Scoped Policy Update Interaction + +When a global policy is set, sandbox-scoped policy updates via `UpdateSettings` are rejected with `FailedPrecondition`: + +``` +policy is managed globally; delete global policy before sandbox policy update +``` + +Deleting the global policy (`openshell policy delete --global`) removes the `policy` key from global settings and restores sandbox-level policy control. + +## Sandbox Implementation + +### Poll Loop Changes + +**File:** `crates/openshell-sandbox/src/lib.rs` (`run_policy_poll_loop`) + +The poll loop uses `GetSandboxSettings` (not a policy-specific RPC) and tracks `config_revision` as the change-detection signal: + +1. **Fetch initial state**: Call `poll_settings(sandbox_id)` to establish baseline `current_config_revision`, `current_policy_hash`, and `current_settings`. +2. **On each tick**: Compare `result.config_revision` against `current_config_revision`. If unchanged, skip. +3. **Determine what changed**: + - Compare `result.policy_hash` against `current_policy_hash` to detect policy changes + - Call `log_setting_changes()` to diff the settings map and log individual changes +4. **Conditional OPA reload**: Only call `opa_engine.reload_from_proto()` when `policy_hash` changes. Settings-only changes update the tracked state without touching the OPA engine. +5. **Status reporting**: Report policy load status only for sandbox-scoped revisions (`policy_source == SANDBOX` and `version > 0`). Global policy overrides trigger a reload but do not write per-sandbox policy status history. 
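The tick decision in steps 2–4 can be distilled into a small pure function. This is a hypothetical distillation for illustration; the names (`TickAction`, `decide`) are not the actual `run_policy_poll_loop` code:

```rust
/// Possible outcomes of one poll tick (illustrative names).
#[derive(Debug, PartialEq)]
enum TickAction {
    Skip,
    ReloadPolicyAndLogSettings,
    LogSettingsOnly,
}

/// Sketch of the change-detection logic: config_revision gates all work,
/// and policy_hash decides whether the OPA engine must be reloaded.
fn decide(
    current_revision: u64,
    current_hash: &str,
    new_revision: u64,
    new_hash: &str,
) -> TickAction {
    if new_revision == current_revision {
        // config_revision unchanged: nothing to do this tick.
        TickAction::Skip
    } else if new_hash != current_hash {
        // Policy bytes changed: reload OPA and diff-log settings.
        TickAction::ReloadPolicyAndLogSettings
    } else {
        // Settings-only change: update tracked state, no OPA reload.
        TickAction::LogSettingsOnly
    }
}

fn main() {
    assert_eq!(decide(7, "abc", 7, "abc"), TickAction::Skip);
    assert_eq!(decide(7, "abc", 8, "def"), TickAction::ReloadPolicyAndLogSettings);
    assert_eq!(decide(7, "abc", 8, "abc"), TickAction::LogSettingsOnly);
    println!("ok");
}
```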
+
+```mermaid
+sequenceDiagram
+    participant PL as Poll Loop
+    participant GW as Gateway
+    participant OPA as OPA Engine
+
+    PL->>GW: GetSandboxSettings(sandbox_id)
+    GW-->>PL: policy + settings + config_revision
+
+    loop Every interval (default 10s)
+        PL->>GW: GetSandboxSettings(sandbox_id)
+        GW-->>PL: response
+
+        alt config_revision unchanged
+            PL->>PL: Skip
+        else config_revision changed
+            PL->>PL: log_setting_changes(old, new)
+            alt policy_hash changed
+                PL->>OPA: reload_from_proto(policy)
+                PL->>GW: ReportPolicyStatus (if sandbox-scoped)
+            else settings-only change
+                PL->>PL: Update tracked state (no OPA reload)
+            end
+        end
+    end
+```
+
+### Per-Setting Diff Logging
+
+**File:** `crates/openshell-sandbox/src/lib.rs` (`log_setting_changes`)
+
+When `config_revision` changes, the sandbox logs each individual setting change:
+
+- **Changed**: `info!(key, old, new, "Setting changed")` -- logs old and new values
+- **Added**: `info!(key, value, "Setting added")` -- new key not in previous snapshot
+- **Removed**: `info!(key, "Setting removed")` -- key in previous snapshot but not in new
+
+Values are formatted by `format_setting_value()`: strings as-is, bools and ints as their string representation, bytes as `<bytes>`, unset as `<unset>`.
+
+### `SettingsPollResult`
+
+**File:** `crates/openshell-sandbox/src/grpc_client.rs`
+
+```rust
+pub struct SettingsPollResult {
+    pub policy: Option<SandboxPolicy>,
+    pub version: u32,
+    pub policy_hash: String,
+    pub config_revision: u64,
+    pub policy_source: PolicySource,
+    pub settings: HashMap<String, EffectiveSetting>,
+    pub global_policy_version: u32,
+}
+```
+
+The `poll_settings()` method maps the full `GetSandboxSettingsResponse` into this struct. The `settings` field carries the effective settings map for diff logging. The `global_policy_version` field is propagated from the response and used for logging when the sandbox reloads a global policy.
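The added/changed/removed classification that `log_setting_changes` performs can be sketched as a snapshot diff. This is an illustrative std-only sketch over plain string values; the real code diffs the proto settings map and emits `tracing` events instead of returning vectors:

```rust
use std::collections::BTreeMap;

/// Sketch of the per-setting diff: compare an old and a new snapshot and
/// classify each key as added, changed, or removed.
fn diff_settings(
    old: &BTreeMap<String, String>,
    new: &BTreeMap<String, String>,
) -> (Vec<String>, Vec<String>, Vec<String>) {
    let mut added = Vec::new();
    let mut changed = Vec::new();
    let mut removed = Vec::new();
    for (key, value) in new {
        match old.get(key) {
            None => added.push(key.clone()),
            Some(prev) if prev != value => changed.push(key.clone()),
            _ => {} // unchanged: nothing logged
        }
    }
    for key in old.keys() {
        if !new.contains_key(key) {
            removed.push(key.clone());
        }
    }
    (added, changed, removed)
}

fn main() {
    let mut old = BTreeMap::new();
    old.insert("log_level".to_string(), "info".to_string());
    old.insert("dummy_int".to_string(), "1".to_string());
    let mut new = BTreeMap::new();
    new.insert("log_level".to_string(), "debug".to_string());
    new.insert("dummy_bool".to_string(), "true".to_string());

    let (added, changed, removed) = diff_settings(&old, &new);
    assert_eq!(added, vec!["dummy_bool"]);
    assert_eq!(changed, vec!["log_level"]);
    assert_eq!(removed, vec!["dummy_int"]);
    println!("ok");
}
```

Using `BTreeMap` keeps the iteration order deterministic, so repeated diffs log keys in a stable order.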
+
+## CLI Commands
+
+**File:** `crates/openshell-cli/src/main.rs` (`SettingsCommands`), `crates/openshell-cli/src/run.rs`
+
+### `settings get [name] [--global]`
+
+Display effective settings for a sandbox or the gateway-global scope.
+
+```bash
+# Sandbox-scoped effective settings
+openshell settings get my-sandbox
+
+# Gateway-global settings
+openshell settings get --global
+```
+
+Sandbox output includes: sandbox name, config revision, policy source (sandbox/global), policy hash, and a table of settings with key, value, and scope (sandbox/global/unset).
+
+Global output includes: scope label, settings revision, and a table of settings with key and value. Registered keys without a configured value display as `<unset>`.
+
+### `settings set [name] --key K --value V [--global] [--yes]`
+
+Set a single setting key at sandbox or global scope.
+
+```bash
+# Sandbox-scoped
+openshell settings set my-sandbox --key log_level --value debug
+
+# Global (requires confirmation)
+openshell settings set --global --key log_level --value warn
+openshell settings set --global --key dummy_bool --value yes
+openshell settings set --global --key dummy_int --value 42
+
+# Skip confirmation
+openshell settings set --global --key log_level --value info --yes
+```
+
+Value parsing is type-aware: bool keys accept `true/false/yes/no/1/0/on/off` via `parse_bool_like()`. Int keys parse as base-10 `i64`. String keys accept any value.
+
+### `settings delete [name] --key K [--global] [--yes]`
+
+Delete a setting key from the specified scope.
+
+```bash
+# Global delete (unlocks sandbox control)
+openshell settings delete --global --key log_level --yes
+```
+
+### `policy set --global --policy FILE [--yes]`
+
+Set a gateway-global policy that overrides all sandbox policies. Creates a versioned revision in the `sandbox_policies` table and writes the policy to the `gateway_settings` blob for delivery.
+ +```bash +openshell policy set --global --policy policy.yaml --yes +``` + +The `--wait` flag is rejected for global policy updates with: `"--wait is not supported for global policies; global policies are effective immediately"`. See `crates/openshell-cli/src/main.rs`. + +### `policy delete --global [--yes]` + +Delete the gateway-global policy, restoring sandbox-level policy control. Removes the `policy` key from the `gateway_settings` blob and supersedes all `__global__` revisions. + +```bash +openshell policy delete --global --yes +``` + +Note: `policy delete` without `--global` is not supported (sandbox policies are managed through versioned updates, not deletion). The CLI returns: `"sandbox policy delete is not supported; use --global to remove global policy lock"`. + +### `policy list --global [--limit N]` + +List global policy revision history. Uses `ListSandboxPolicies` with `global: true`, which routes to the `__global__` sentinel in the `sandbox_policies` table. + +```bash +openshell policy list --global +openshell policy list --global --limit 10 +``` + +### `policy get --global [--rev N] [--full]` + +Show a specific global policy revision (or the latest). Uses `GetSandboxPolicyStatus` with `global: true`. + +```bash +# Latest global revision +openshell policy get --global + +# Specific version +openshell policy get --global --rev 3 + +# Full policy payload as YAML +openshell policy get --global --full +``` + +### HITL Confirmation + +All `--global` mutations require human-in-the-loop confirmation via an interactive prompt. The `--yes` flag bypasses the prompt for scripted/CI usage. In non-interactive mode (no TTY), `--yes` is required -- otherwise the command fails with an error. 
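The HITL gating above reduces to a small decision: prompt, proceed, or fail. A minimal sketch, assuming hypothetical names (`confirmation_mode` is not the CLI's actual function; in real code, TTY detection can use `std::io::IsTerminal`):

```rust
/// Sketch of the confirmation gate: `--yes` bypasses the prompt;
/// without a TTY, `--yes` is mandatory.
///
/// Returns Ok(true) when an interactive prompt should be shown,
/// Ok(false) when the mutation may proceed without prompting.
fn confirmation_mode(yes_flag: bool, stdin_is_tty: bool) -> Result<bool, String> {
    if yes_flag {
        return Ok(false); // explicitly confirmed: no prompt needed
    }
    if stdin_is_tty {
        return Ok(true); // interactive session: show the prompt
    }
    // Non-interactive (CI, piped input) without --yes: refuse.
    Err("confirmation required: re-run with --yes in non-interactive mode".to_string())
}

fn main() {
    assert_eq!(confirmation_mode(true, false), Ok(false)); // --yes in CI
    assert_eq!(confirmation_mode(true, true), Ok(false));  // --yes interactively
    assert_eq!(confirmation_mode(false, true), Ok(true));  // prompt the human
    assert!(confirmation_mode(false, false).is_err());      // CI without --yes
    println!("ok");
}
```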
+
+The confirmation message varies:
+- **Global setting set**: warns that this will override sandbox-level values for the key
+- **Global setting delete**: warns that this re-enables sandbox-level management
+- **Global policy set**: warns that this overrides all sandbox policies
+- **Global policy delete**: warns that this restores sandbox-level control
+
+## TUI Integration
+
+**File:** `crates/openshell-tui/src/`
+
+### Dashboard: Global Policy Indicator
+
+**File:** `crates/openshell-tui/src/ui/dashboard.rs`
+
+The gateway row in the dashboard shows a yellow `Global Policy Active (vN)` indicator when a global policy is active. The TUI detects this by calling `ListSandboxPolicies` with `global: true, limit: 1` on each polling tick and checking whether the latest revision has `PolicyStatus::Loaded`. The version number and active flag are tracked in `App.global_policy_active` and `App.global_policy_version`.
+
+### Dashboard: Global Settings Tab
+
+The dashboard's middle pane has a tabbed interface: **Providers** | **Global Settings**. Press `Tab` to switch.
+
+The Global Settings tab displays registered keys with their current values, fetched via `GetGatewaySettings`. Features:
+
+- **Navigate**: `j`/`k` or arrow keys to select a setting
+- **Edit** (`Enter`): Opens a type-aware editor:
+  - Bool keys: toggle between true/false
+  - String/Int keys: text input field
+- **Delete** (`d`): Remove the selected key's value
+- **Confirmation modals**: Both edit and delete operations show a confirmation dialog before applying
+- **Scope indicators**: Each key shows its current value or `<unset>`
+
+### Sandbox Metadata Pane: Global Policy Indicator
+
+**File:** `crates/openshell-tui/src/ui/sandbox_detail.rs`
+
+When the sandbox's policy source is `GLOBAL` (detected via `policy_source` in the `GetSandboxSettings` response), the metadata pane shows `Policy: managed globally (vN)` in yellow. The version comes from `global_policy_version` in the response.
Tracked in `App.sandbox_policy_is_global` and `App.sandbox_global_policy_version`. + +### Network Rules Pane: Global Policy Warning + +**File:** `crates/openshell-tui/src/ui/sandbox_draft.rs` + +When `sandbox_policy_is_global` is true, the Network Rules pane displays a yellow bottom title: `" Cannot approve rules while global policy is active "`. Draft chunks are still rendered but their status styles are greyed out (`t.muted`). Keyboard actions for approve (`a`), reject/revoke (`x`), and approve-all are intercepted client-side with status messages like `"Cannot approve rules while a global policy is active"` and `"Cannot modify rules while a global policy is active"`. See `crates/openshell-tui/src/app.rs` -- draft key handling. + +### Sandbox Screen: Settings Tab + +The sandbox detail view's bottom pane has a tabbed interface: **Policy** | **Settings**. Press `l` to switch tabs. + +The Settings tab shows effective settings for the selected sandbox, fetched as part of the `GetSandboxSettings` response. Features: + +- Same navigation and editing as the global settings tab +- **Scope indicators**: Each key shows `(sandbox)`, `(global)`, or `(unset)` to indicate the controlling tier +- Sandbox-scoped edits are blocked for globally-managed keys (server returns `FailedPrecondition`) + +### Data Refresh + +Settings are refreshed on each 2-second polling tick alongside the sandbox list and health status. The global settings revision is tracked to detect changes. Sandbox settings are refreshed when viewing a specific sandbox. Global policy active status is detected on each tick via `ListSandboxPolicies` with `global: true`. + +## Data Flow: Setting a Global Key + +End-to-end trace for `openshell settings set --global --key log_level --value debug --yes`: + +1. 
**CLI** (`crates/openshell-cli/src/run.rs` -- `gateway_setting_set()`): + - `parse_cli_setting_value("log_level", "debug")` -- looks up `SettingValueKind::String` in the registry, wraps as `SettingValue { string_value: "debug" }` + - `confirm_global_setting_takeover()` -- skipped because `--yes` + - Sends `UpdateSettingsRequest { setting_key: "log_level", setting_value: Some(...), global: true }` + +2. **Gateway** (`crates/openshell-server/src/grpc.rs` -- `update_settings()`): + - Acquires `settings_mutex` for the duration of the operation + - Detects `global=true`, `has_setting=true` + - `validate_registered_setting_key("log_level")` -- passes (key is in registry) + - `load_global_settings()` -- reads `gateway_settings` record from store + - `proto_setting_to_stored()` -- converts proto value to `StoredSettingValue::String("debug")` + - `upsert_setting_value()` -- inserts into `BTreeMap`, returns `true` (changed) + - Increments `revision`, calls `save_global_settings()` + - Returns `UpdateSettingsResponse { settings_revision: N }` + +3. **Sandbox** (next poll tick in `run_policy_poll_loop()`): + - `poll_settings(sandbox_id)` returns new `config_revision` + - `log_setting_changes()` logs: `Setting changed key="log_level" old="" new="debug"` + - `policy_hash` unchanged -- no OPA reload + - Updates tracked `current_config_revision` and `current_settings` + +## Data Flow: Setting a Global Policy + +End-to-end trace for `openshell policy set --global --policy policy.yaml --yes`: + +1. **CLI** (`crates/openshell-cli/src/main.rs`, `crates/openshell-cli/src/run.rs` -- `sandbox_policy_set_global()`): + - Rejects `--wait` flag with `"--wait is not supported for global policies; global policies are effective immediately"` + - Loads and parses the YAML policy file into a `SandboxPolicy` protobuf + - Sends `UpdateSettingsRequest { policy: Some(sandbox_policy), global: true }` + +2. 
**Gateway** (`crates/openshell-server/src/grpc.rs` -- `update_settings()`): + - Acquires `settings_mutex` + - Detects `global=true`, `has_policy=true` + - `ensure_sandbox_process_identity()` -- ensures process identity defaults to "sandbox" + - `validate_policy_safety()` -- rejects unsafe policies (e.g., root process) + - `deterministic_policy_hash()` -- computes SHA-256 hash of the policy + - **Dedup check**: Fetches `get_latest_policy(GLOBAL_POLICY_SANDBOX_ID)` + - If latest exists with `status == "loaded"` and same hash → no-op (ensures settings blob has `policy` key, returns existing version) + - If no latest, or latest is `superseded`, or hash differs → create new revision + - `put_policy_revision(id, "__global__", next_version, payload, hash)` -- persists revision + - `update_policy_status("__global__", next_version, "loaded")` -- marks loaded immediately + - `supersede_older_policies("__global__", next_version)` -- marks all older revisions as superseded + - Stores hex-encoded payload in `gateway_settings` blob under `policy` key via `upsert_setting_value()` + - Returns `UpdateSettingsResponse { version: N, policy_hash: "..." }` + +3. 
**Sandbox** (next poll tick, ~10 seconds): + - `poll_settings(sandbox_id)` returns response with `policy_source: GLOBAL`, `global_policy_version: N` + - `config_revision` changed → enters change processing + - `policy_hash` changed → calls `opa_engine.reload_from_proto(global_policy)` + - Logs `"Policy reloaded successfully (global)"` with `global_version=N` + - Does NOT call `ReportPolicyStatus` (global policies skip per-sandbox status reporting) + +## Cross-References + +- [Gateway Architecture](gateway.md) -- Persistence layer, gRPC service, object types +- [Sandbox Architecture](sandbox.md) -- Poll loop, `CachedOpenShellClient`, OPA reload lifecycle +- [Policy Language](security-policy.md) -- Live policy updates, global policy CLI commands +- [TUI](tui.md) -- Settings tabs in dashboard and sandbox views diff --git a/architecture/gateway-single-node.md b/architecture/gateway-single-node.md index 8dc270ac..63f998a1 100644 --- a/architecture/gateway-single-node.md +++ b/architecture/gateway-single-node.md @@ -29,7 +29,7 @@ Out of scope: - `crates/openshell-bootstrap/src/push.rs`: Local development image push into k3s containerd. - `crates/openshell-bootstrap/src/paths.rs`: XDG path resolution. - `crates/openshell-bootstrap/src/constants.rs`: Shared constants (image name, container/volume/network naming). -- `deploy/docker/Dockerfile.cluster`: Container image definition (k3s base + Helm charts + manifests + entrypoint). +- `deploy/docker/Dockerfile.images` target `cluster`: Container image definition (k3s base + Helm charts + manifests + entrypoint). - `deploy/docker/cluster-entrypoint.sh`: Container entrypoint (DNS proxy, registry config, manifest injection). - `deploy/docker/cluster-healthcheck.sh`: Docker HEALTHCHECK script. 
- Docker daemon(s):
@@ -58,7 +58,11 @@
 For remote dev/test deploys from a local checkout, `scripts/remote-deploy.sh`
 wraps a different workflow: it rsyncs the repository to a remote host, builds
 the release CLI plus cluster/server/sandbox images on that machine, and then
 invokes `openshell gateway start` with explicit flags such as `--recreate`,
-`--plaintext`, or `--disable-gateway-auth` only when requested.
+`--plaintext`, or `--disable-gateway-auth` only when requested. The remote
+cluster-image build now also requires a runtime-bundle tarball: provide
+`--runtime-bundle-tarball <path>` for normal sync-and-build deploys, or
+`--remote-runtime-bundle-tarball <path> --skip-sync` when the tarball is
+already present on the remote host.
 
 ## Local Task Flows (`mise`)
 
 Development task entrypoints split bootstrap behavior:
 
 |---|---|
 | `mise run cluster` | Bootstrap or incremental deploy: creates gateway if needed (fast recreate), then detects changed files and rebuilds/pushes only impacted components |
 
-For `mise run cluster`, `.env` acts as local source-of-truth for `GATEWAY_NAME`, `GATEWAY_PORT`, and `OPENSHELL_GATEWAY`. Missing keys are appended; existing values are preserved. If `GATEWAY_PORT` is missing, the task selects a free local port and persists it.
+For `mise run cluster`, `.env` acts as local source-of-truth for `GATEWAY_PORT` and `OPENSHELL_GATEWAY`. Missing keys are appended; existing values are preserved. If `GATEWAY_PORT` is missing, the task selects a free local port and persists it.
 
 Fast mode ensures a local registry (`127.0.0.1:5000`) is running and configures k3s to mirror pulls via `host.docker.internal:5000`, so the cluster task can push/pull local component images consistently.
+When that flow needs to rebuild the cluster image, it also requires `OPENSHELL_RUNTIME_BUNDLE_TARBALL`; prebuilt-image paths can still skip the local cluster-image build with `SKIP_CLUSTER_IMAGE_BUILD=1`.
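The "selects a free local port" step above is the standard bind-to-port-zero trick: asking the OS for an ephemeral port guarantees it is free at selection time. A minimal sketch (the actual task is a script; this is only an illustration of the technique):

```rust
use std::net::TcpListener;

/// Bind to port 0 so the OS assigns a currently free ephemeral port.
/// The listener is dropped on return, freeing the port for the gateway
/// to claim (a small race window is inherent to this technique).
fn pick_free_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    Ok(listener.local_addr()?.port())
}

fn main() -> std::io::Result<()> {
    let port = pick_free_port()?;
    assert!(port > 0);
    // The task would persist this into .env as GATEWAY_PORT=<port>.
    println!("GATEWAY_PORT={port}");
    Ok(())
}
```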
## Bootstrap Sequence Diagram @@ -97,7 +102,7 @@ sequenceDiagram B->>R: start_container B->>R: clean_stale_nodes (kubectl delete node) B->>R: wait_for_gateway_ready (180 attempts, 2s apart) - B->>R: poll for secret openshell-cli-client (90 attempts, 2s apart) + B->>R: extract client PKI from reconciled cluster secrets R-->>B: ca.crt, tls.crt, tls.key B->>B: atomically store mTLS bundle B->>B: create and persist gateway metadata JSON @@ -196,7 +201,7 @@ The gateway StatefulSet also uses a Kubernetes `startupProbe` on the gRPC port b ### 5) mTLS bundle capture -TLS is always required. `fetch_and_store_cli_mtls()` polls for Kubernetes secret `openshell-cli-client` in namespace `openshell` (90 attempts, 2 seconds apart, 3 min total). Each attempt checks the container is still running. The secret's base64-encoded `ca.crt`, `tls.crt`, and `tls.key` fields are decoded and stored. +TLS is enabled unless deploy options explicitly disable it. After the bootstrap applies or reuses the server/client TLS secrets, the CLI extracts the client bundle from cluster-managed PKI and stores it locally for later authenticated commands. The current flow does not depend on polling a separate `openshell-cli-client` secret; it reads the reconciled client credentials from the cluster state that bootstrap just verified or created. Storage location: `~/.config/openshell/gateways/{name}/mtls/` @@ -228,7 +233,7 @@ After deploy, the CLI calls `save_active_gateway(name)`, writing the gateway nam ## Container Image -The gateway image is defined in `deploy/docker/Dockerfile.cluster`: +The gateway bootstrap container image is defined by `deploy/docker/Dockerfile.images` target `cluster`: ``` Base: rancher/k3s:v1.35.2-k3s1 @@ -238,7 +243,7 @@ Layers added: 1. Custom entrypoint: `deploy/docker/cluster-entrypoint.sh` -> `/usr/local/bin/cluster-entrypoint.sh` 2. Healthcheck script: `deploy/docker/cluster-healthcheck.sh` -> `/usr/local/bin/cluster-healthcheck.sh` -3. 
Packaged Helm charts: `deploy/docker/.build/charts/*.tgz` -> `/var/lib/rancher/k3s/server/static/charts/` +3. Packaged Helm charts: `deploy/docker/.build/charts/*.tgz` -> `/opt/openshell/charts/` at image build time, then copied into `/var/lib/rancher/k3s/server/static/charts/` by the entrypoint at container start 4. Kubernetes manifests: `deploy/kube/manifests/*.yaml` -> `/opt/openshell/manifests/` Bundled manifests include: @@ -276,9 +281,9 @@ Copies bundled manifests from `/opt/openshell/manifests/` to `/var/lib/rancher/k When environment variables are set, the entrypoint modifies the HelmChart manifest at `/var/lib/rancher/k3s/server/manifests/openshell-helmchart.yaml`: -- `IMAGE_REPO_BASE`: Rewrites `repository:`, `sandboxImage:`, and `jobImage:` in the HelmChart. -- `PUSH_IMAGE_REFS`: In push mode, parses comma-separated image refs and rewrites the exact gateway, sandbox, and pki-job image references (matching on path component `/gateway:`, `/sandbox:`, `/pki-job:`). -- `IMAGE_TAG`: Replaces `:latest` tags with the specified tag on gateway, sandbox, and pki-job images. Handles both quoted and unquoted `tag: latest` formats. +- `IMAGE_REPO_BASE`: Rewrites the gateway image repository in the HelmChart. +- `PUSH_IMAGE_REFS`: In push mode, parses comma-separated image refs and rewrites the exact gateway image reference used by the StatefulSet. +- `IMAGE_TAG`: Replaces the gateway image `:latest` tag with the specified tag. Handles both quoted and unquoted `tag: latest` formats. - `IMAGE_PULL_POLICY`: Replaces `pullPolicy: Always` with the specified policy (e.g., `IfNotPresent`). - `SSH_GATEWAY_HOST` / `SSH_GATEWAY_PORT`: Replaces `__SSH_GATEWAY_HOST__` and `__SSH_GATEWAY_PORT__` placeholders. - `EXTRA_SANS`: Builds a YAML flow-style list from the comma-separated SANs and replaces `extraSANs: []`. 
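The `EXTRA_SANS` rewrite above is a simple comma-list-to-YAML-flow-list transformation. The entrypoint performs it in shell; this is a hedged sketch of the same transformation with an illustrative function name:

```rust
/// Turn a comma-separated SAN list into a YAML flow-style list,
/// suitable for substituting into `extraSANs: []`.
fn sans_to_yaml_flow(extra_sans: &str) -> String {
    let items: Vec<String> = extra_sans
        .split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(|s| format!("\"{s}\""))
        .collect();
    format!("[{}]", items.join(", "))
}

fn main() {
    assert_eq!(
        sans_to_yaml_flow("gw.example.com, 10.0.0.5"),
        r#"["gw.example.com", "10.0.0.5"]"#
    );
    // An empty input yields an empty flow list, leaving the default intact.
    assert_eq!(sans_to_yaml_flow(""), "[]");
    println!("ok");
}
```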
@@ -287,10 +292,12 @@ When environment variables are set, the entrypoint modifies the HelmChart manife `deploy/docker/cluster-healthcheck.sh` validates cluster readiness through a series of checks: -1. **Kubernetes API**: `kubectl get --raw='/readyz'` -2. **OpenShell StatefulSet**: Checks that `statefulset/openshell` in namespace `openshell` exists and has 1 ready replica. -3. **Gateway**: Checks that `gateway/openshell-gateway` in namespace `openshell` has the `Programmed` condition. -4. **mTLS secret** (conditional): If `NAV_GATEWAY_TLS_ENABLED` is true (or inferred from the HelmChart manifest using the same two-path detection logic as the bootstrap code), checks that secret `openshell-cli-client` exists with non-empty `ca.crt`, `tls.crt`, and `tls.key` data. +1. **DNS preflight**: verifies the configured registry host is resolvable unless it is already an IP literal. +2. **Kubernetes API**: `kubectl get --raw='/readyz'` +3. **Node pressure markers**: emits `HEALTHCHECK_NODE_PRESSURE` warnings if the kubelet reports `DiskPressure`, `MemoryPressure`, or `PIDPressure`. +4. **OpenShell StatefulSet**: checks that `statefulset/openshell` in namespace `openshell` exists and has 1 ready replica. +5. **Sandbox supervisor binary**: checks that `/opt/openshell/bin/openshell-sandbox` exists and is executable, otherwise emits `HEALTHCHECK_MISSING_SUPERVISOR`. +6. **TLS secrets** (when TLS is enabled): checks that `openshell-server-tls` and `openshell-client-tls` exist in namespace `openshell`. ## GPU Enablement @@ -298,9 +305,10 @@ GPU support is part of the single-node gateway bootstrap path rather than a sepa - `openshell gateway start --gpu` threads a boolean deploy option through `crates/openshell-cli`, `crates/openshell-bootstrap`, and `crates/openshell-bootstrap/src/docker.rs`. - When enabled, the cluster container is created with Docker `DeviceRequests`, which is the API equivalent of `docker run --gpus all`. 
-- `deploy/docker/Dockerfile.cluster` installs NVIDIA Container Toolkit packages in a dedicated Ubuntu stage and copies the runtime binaries, config, and `libnvidia-container` shared libraries into the final Ubuntu-based cluster image. +- `tasks/scripts/docker-build-image.sh cluster` now validates a staged local runtime-bundle tarball and places the verified payload under `deploy/docker/.build/runtime-bundle//` before Docker runs. +- `deploy/docker/Dockerfile.images` target `cluster` copies the runtime binaries, config, and `libnvidia-container` shared libraries from that staged local bundle into the final Ubuntu-based cluster image instead of installing toolkit packages from an apt repository during the build. - `deploy/docker/cluster-entrypoint.sh` checks `GPU_ENABLED=true` and copies GPU-only manifests from `/opt/openshell/gpu-manifests/` into k3s's manifests directory. -- `deploy/kube/gpu-manifests/nvidia-device-plugin-helmchart.yaml` installs the NVIDIA device plugin chart, currently pinned to `0.18.2`, along with GPU Feature Discovery and Node Feature Discovery. +- `deploy/kube/gpu-manifests/nvidia-device-plugin-helmchart.yaml` installs the NVIDIA device plugin chart, currently pinned to `0.18.2`. NFD and GFD are disabled; the device plugin's default `nodeAffinity` (which requires `feature.node.kubernetes.io/pci-10de.present=true` or `nvidia.com/gpu.present=true` from NFD/GFD) is overridden to empty so the DaemonSet schedules on the single-node cluster without requiring those labels. - k3s auto-detects `nvidia-container-runtime` on `PATH`, registers the `nvidia` containerd runtime, and creates the `nvidia` `RuntimeClass` automatically. - The OpenShell Helm chart grants the gateway service account cluster-scoped read access to `node.k8s.io/runtimeclasses` and core `nodes` so GPU sandbox admission can verify both the `nvidia` `RuntimeClass` and allocatable GPU capacity before creating a sandbox. 
@@ -370,8 +378,8 @@ flowchart LR - Docker API failures from inspect/create/start/remove. - SSH connection failures when creating the remote Docker client. - Health check timeout (6 min) with recent container logs. - - Container exit during any polling phase (health, mTLS) with diagnostic information (exit code, OOM status, recent logs). - - mTLS secret polling timeout (3 min). + - Container exit during any polling phase with diagnostic information (exit code, OOM status, recent logs). + - PKI extraction or local mTLS storage failure after bootstrap reconciles the TLS materials. - Local image ref without registry prefix: clear error with build instructions rather than a failed Docker Hub pull. ## Auto-Bootstrap from `sandbox create` @@ -411,7 +419,7 @@ Environment variables that affect bootstrap behavior when set on the host: |---|---| | `OPENSHELL_CLUSTER_IMAGE` | Overrides entire image ref if set and non-empty | | `IMAGE_TAG` | Sets image tag (default: `"dev"`) when `OPENSHELL_CLUSTER_IMAGE` is not set | -| `NAV_GATEWAY_TLS_ENABLED` | Overrides HelmChart manifest for TLS enabled check (`true`/`1`/`yes`/`false`/`0`/`no`) | +| `DISABLE_TLS` | Disables TLS-secret checks in the container healthcheck when set to `true` | | `XDG_CONFIG_HOME` | Base config directory (default: `$HOME/.config`) | | `DOCKER_HOST` | When `tcp://` and non-loopback, the host is added as a TLS SAN and used as the gateway endpoint | | `OPENSHELL_PUSH_IMAGES` | Comma-separated image refs to push into the gateway's containerd (local deploy only) | @@ -454,7 +462,7 @@ openshell/ - `crates/openshell-cli/src/main.rs` -- CLI command definitions - `crates/openshell-cli/src/run.rs` -- CLI command implementations - `crates/openshell-cli/src/bootstrap.rs` -- auto-bootstrap from sandbox create -- `deploy/docker/Dockerfile.cluster` -- container image definition +- `deploy/docker/Dockerfile.images` -- shared image build graph (`cluster` target for gateway bootstrap) - `deploy/docker/cluster-entrypoint.sh` -- 
container entrypoint script - `deploy/docker/cluster-healthcheck.sh` -- Docker HEALTHCHECK script - `deploy/kube/manifests/openshell-helmchart.yaml` -- OpenShell Helm chart manifest diff --git a/architecture/gateway.md b/architecture/gateway.md index ca541c7b..39f97c8c 100644 --- a/architecture/gateway.md +++ b/architecture/gateway.md @@ -82,7 +82,7 @@ Proto definitions consumed by the gateway: | `proto/openshell.proto` | `openshell.v1` | `OpenShell` service, sandbox/provider/SSH/watch messages | | `proto/inference.proto` | `openshell.inference.v1` | `Inference` service: `SetClusterInference`, `GetClusterInference`, `GetInferenceBundle` | | `proto/datamodel.proto` | `openshell.datamodel.v1` | `Sandbox`, `SandboxSpec`, `SandboxStatus`, `Provider`, `SandboxPhase` | -| `proto/sandbox.proto` | `openshell.sandbox.v1` | `SandboxPolicy`, `NetworkPolicyRule` | +| `proto/sandbox.proto` | `openshell.sandbox.v1` | `SandboxPolicy`, `NetworkPolicyRule`, `SettingValue`, `EffectiveSetting`, `SettingScope`, `PolicySource`, `GetSandboxSettingsRequest/Response`, `GetGatewaySettingsRequest/Response` | ## Startup Sequence @@ -141,6 +141,9 @@ pub struct ServerState { pub sandbox_index: SandboxIndex, pub sandbox_watch_bus: SandboxWatchBus, pub tracing_log_bus: TracingLogBus, + pub ssh_connections_by_token: Mutex>, + pub ssh_connections_by_sandbox: Mutex>, + pub settings_mutex: tokio::sync::Mutex<()>, } ``` @@ -149,6 +152,7 @@ pub struct ServerState { - **`sandbox_index`** -- in-memory bidirectional index mapping sandbox names and agent pod names to sandbox IDs. Used by the event tailer to correlate Kubernetes events. - **`sandbox_watch_bus`** -- `broadcast`-based notification bus keyed by sandbox ID. Producers call `notify(&id)` when the persisted sandbox record changes; consumers in `WatchSandbox` streams receive `()` signals and re-read the record. - **`tracing_log_bus`** -- captures `tracing` events that include a `sandbox_id` field and republishes them as `SandboxLogLine` messages. 
Maintains a per-sandbox tail buffer (default 200 entries). Also contains a nested `PlatformEventBus` for Kubernetes events. +- **`settings_mutex`** -- serializes settings mutations (global and sandbox) to prevent read-modify-write races. Held for the duration of any setting set/delete or global policy set/delete operation. See [Gateway Settings Channel](gateway-settings.md#global-policy-lifecycle). ## Protocol Multiplexing @@ -225,13 +229,14 @@ Full CRUD for `Provider` objects, which store typed credentials (e.g., API keys | `UpdateProvider` | Updates an existing provider by name. Preserves the stored `id` and `name`; replaces `type`, `credentials`, and `config`. | | `DeleteProvider` | Deletes a provider by name. Returns `deleted: true/false`. | -#### Policy and Provider Environment Delivery +#### Policy, Settings, and Provider Environment Delivery -These RPCs are called by sandbox pods at startup to bootstrap themselves. +These RPCs are called by sandbox pods at startup and during runtime polling. | RPC | Description | |-----|-------------| -| `GetSandboxPolicy` | Returns the `SandboxPolicy` from a sandbox's spec, looked up by sandbox ID. | +| `GetSandboxSettings` | Returns effective sandbox config looked up by sandbox ID: policy payload, policy metadata (version, hash, source, `global_policy_version`), merged effective settings, and a `config_revision` fingerprint for change detection. Two-tier resolution: registered keys start unset, sandbox values overlay, global values override. The reserved `policy` key in global settings can override the sandbox's own policy. When a global policy is active, `policy_source` is `GLOBAL` and `global_policy_version` carries the active revision number. See [Gateway Settings Channel](gateway-settings.md). | +| `GetGatewaySettings` | Returns gateway-global settings only (excluding the reserved `policy` key). Returns registered keys with empty values when unconfigured, and a monotonic `settings_revision`. 
| | `GetSandboxProviderEnvironment` | Resolves provider credentials into environment variables for a sandbox. Iterates the sandbox's `spec.providers` list, fetches each `Provider`, and collects credential key-value pairs. First provider wins on duplicate keys. Skips credential keys that do not match `^[A-Za-z_][A-Za-z0-9_]*$`. | #### Policy Recommendation (Network Rules) @@ -242,9 +247,9 @@ These RPCs support the sandbox-initiated policy recommendation pipeline. The san |-----|-------------| | `SubmitPolicyAnalysis` | Receives pre-formed `PolicyChunk` proposals from a sandbox. Validates each chunk, persists via upsert on `(sandbox_id, host, port, binary)` dedup key, notifies watch bus. | | `GetDraftPolicy` | Returns all draft chunks for a sandbox with current draft version. | -| `ApproveDraftChunk` | Approves a pending or rejected chunk. Merges the proposed rule into the active policy (appends binary to existing rule or inserts new rule). | -| `RejectDraftChunk` | Rejects a pending chunk or revokes an approved chunk. If revoking, removes the binary from the active policy rule. | -| `ApproveAllDraftChunks` | Bulk approves all pending chunks for a sandbox. | +| `ApproveDraftChunk` | Approves a pending or rejected chunk. Merges the proposed rule into the active policy (appends binary to existing rule or inserts new rule). **Blocked when a global policy is active** -- returns `FailedPrecondition`. | +| `RejectDraftChunk` | Rejects a pending chunk or revokes an approved chunk. If revoking, removes the binary from the active policy rule. Rejection of `pending` chunks is always allowed. **Revoking approved chunks is blocked when a global policy is active** -- returns `FailedPrecondition`. | +| `ApproveAllDraftChunks` | Bulk approves all pending chunks for a sandbox. **Blocked when a global policy is active** -- returns `FailedPrecondition`. | | `EditDraftChunk` | Updates the proposed rule on a pending chunk. 
| | `GetDraftHistory` | Returns all chunks (including rejected) for audit trail. | @@ -457,12 +462,16 @@ Objects are identified by `(object_type, id)` with a unique constraint on `(obje ### Object Types -| Object type string | Proto message | Traits implemented | -|--------------------|---------------|-------------------| -| `"sandbox"` | `Sandbox` | `ObjectType`, `ObjectId`, `ObjectName` | -| `"provider"` | `Provider` | `ObjectType`, `ObjectId`, `ObjectName` | -| `"ssh_session"` | `SshSession` | `ObjectType`, `ObjectId`, `ObjectName` | -| `"inference_route"` | `InferenceRoute` | `ObjectType`, `ObjectId`, `ObjectName` | +| Object type string | Proto message / format | Traits implemented | Notes | +|--------------------|------------------------|-------------------|-------| +| `"sandbox"` | `Sandbox` | `ObjectType`, `ObjectId`, `ObjectName` | | +| `"provider"` | `Provider` | `ObjectType`, `ObjectId`, `ObjectName` | | +| `"ssh_session"` | `SshSession` | `ObjectType`, `ObjectId`, `ObjectName` | | +| `"inference_route"` | `InferenceRoute` | `ObjectType`, `ObjectId`, `ObjectName` | | +| `"gateway_settings"` | JSON `StoredSettings` | Generic `put`/`get` | Singleton, id=`"global"`. Contains the reserved `policy` key for global policy delivery. | +| `"sandbox_settings"` | JSON `StoredSettings` | Generic `put`/`get` | Per-sandbox, id=`"settings:{sandbox_uuid}"` | + +The `sandbox_policies` table stores versioned policy revisions for both sandbox-scoped and global policies. Global revisions use the sentinel `sandbox_id = "__global__"`. See [Gateway Settings Channel](gateway-settings.md#storage-model) for schema details. 
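The keying scheme above can be sketched as follows. The helper names are hypothetical, but the `(object_type, id)` shapes and the `__global__` sentinel come straight from the tables:

```rust
// Illustrative sketch only -- helper names are hypothetical; the key shapes
// match the object-type table and the sandbox_policies sentinel above.

/// Key for the singleton gateway-wide settings blob.
fn gateway_settings_key() -> (&'static str, String) {
    ("gateway_settings", "global".to_string())
}

/// Key for a per-sandbox settings blob.
fn sandbox_settings_key(sandbox_uuid: &str) -> (&'static str, String) {
    ("sandbox_settings", format!("settings:{sandbox_uuid}"))
}

/// `sandbox_id` value for a row in the sandbox_policies table.
/// Global revisions use the `__global__` sentinel.
fn policy_revision_owner(global: bool, sandbox_id: &str) -> String {
    if global {
        "__global__".to_string()
    } else {
        sandbox_id.to_string()
    }
}

fn main() {
    assert_eq!(gateway_settings_key(), ("gateway_settings", "global".to_string()));
    assert_eq!(
        sandbox_settings_key("abc"),
        ("sandbox_settings", "settings:abc".to_string())
    );
    assert_eq!(policy_revision_owner(true, "abc"), "__global__");
}
```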
### Generic Protobuf Codec @@ -559,6 +568,7 @@ Updated by the sandbox watcher on every Applied event and by gRPC handlers durin ## Cross-References - [Sandbox Architecture](sandbox.md) -- sandbox-side policy enforcement, proxy, and isolation details +- [Gateway Settings Channel](gateway-settings.md) -- runtime settings channel, two-tier resolution, CLI/TUI commands - [Inference Routing](inference-routing.md) -- end-to-end inference interception flow, sandbox-side proxy logic, and route resolution - [Container Management](build-containers.md) -- how sandbox container images are built and configured - [Sandbox Connect](sandbox-connect.md) -- client-side SSH connection flow diff --git a/architecture/inference-routing.md b/architecture/inference-routing.md index 0d3a95af..57d53dd5 100644 --- a/architecture/inference-routing.md +++ b/architecture/inference-routing.md @@ -53,6 +53,16 @@ Each profile also defines `credential_key_names` (e.g. `["OPENAI_API_KEY"]`) and Unknown provider types return `None` from `profile_for()` and default to `Bearer` auth with no default headers via `auth_for_provider_type()`. +## Vendor Auth Projection Boundaries + +Inference provider profiles describe backend request shape, not full sandbox tool auth projection. The next phase keeps these concerns separate: + +- **Anthropic path:** `claude code` and any other Anthropic-facing tool may rely on provider records that discover `ANTHROPIC_API_KEY` or related fields, but the decision to expose those values to the child process belongs to the tool adapter contract in the sandbox layer. +- **GitHub / Copilot path:** GitHub token discovery (`GITHUB_TOKEN`, `GH_TOKEN`) is not enough on its own to define a GitHub Copilot model-access contract. Any Copilot-backed model flow must explicitly document which endpoints, token shapes, and tool adapters are supported. 
+- **Fail-closed rule:** if a tool/vendor combination is not explicitly documented, OpenShell should not silently treat generic provider discovery as authorization to project those credentials into a sandbox child process. + +In other words, provider profiles remain the source of truth for upstream inference protocol handling once traffic is routed, but tool adapters remain the source of truth for what vendor-facing auth material can enter the child process before routing begins. + ## Control Plane (Gateway) File: `crates/openshell-server/src/inference.rs` diff --git a/architecture/sandbox-custom-containers.md b/architecture/sandbox-custom-containers.md index b72cf047..e44de29d 100644 --- a/architecture/sandbox-custom-containers.md +++ b/architecture/sandbox-custom-containers.md @@ -67,7 +67,7 @@ The server applies these transforms to every sandbox pod template (`sandbox/mod. 3. Overrides the agent container's `command` to `/opt/openshell/bin/openshell-sandbox`. 4. Sets `runAsUser: 0` so the supervisor has root privileges for namespace creation, proxy setup, and Landlock/seccomp. -These transforms apply to both generated templates and user-provided `pod_template` overrides. +These transforms apply to every generated pod template. ## CLI Usage diff --git a/architecture/sandbox-providers.md b/architecture/sandbox-providers.md index dca36c59..ac19a999 100644 --- a/architecture/sandbox-providers.md +++ b/architecture/sandbox-providers.md @@ -18,6 +18,8 @@ supervisor rewrites those placeholders back to the real secret values before for Access is enforced through the sandbox policy — the policy decides which outbound requests are allowed or denied based on the providers attached to that sandbox. +Providers are only one half of the runtime contract. The other half is the tool adapter that defines how a sandboxed CLI actually receives provider context. 
OpenShell's first-class tool targets are currently `claude code` and `opencode`, with the expectation that future tools follow the same adapter model instead of adding bespoke credential plumbing to the sandbox supervisor. + Core goals: - manage providers directly via CLI, @@ -26,6 +28,67 @@ Core goals: - project provider context into sandbox runtime, - drive sandbox policy to allow or deny outbound access to third-party services. +## Tool Adapter Matrix + +The tool adapter layer sits between generic provider discovery and the sandbox child process. It answers four questions for each first-class tool: + +- which env vars are allowed to appear in the child process +- which config file paths may be projected or synthesized +- whether a value must remain a placeholder until proxy rewrite time or may be projected directly +- which outbound endpoint categories the tool is expected to use + +Current first-class targets: + +| Tool | Primary purpose | Expected vendor families | Notes | +|---|---|---|---| +| `claude code` | Anthropic-oriented coding agent CLI | Anthropic first, future adapter growth possible | Should prefer placeholder env projection and explicit config-file mapping rather than raw secret sprawl | +| `opencode` | Open coding/runtime CLI with multiple provider backends | GitHub/Copilot, Anthropic, OpenAI-compatible families | Needs an adapter boundary that does not collapse GitHub auth, inference routing, and tool configuration into one concept | +| Future tool adapters | Extension point | Tool-specific | Must define env/config/endpoint needs explicitly before sandbox projection is allowed | + +This matrix is intentionally separate from provider discovery. For example, a `github` provider may supply credentials that `opencode` can use, but the decision to project those credentials into an `opencode` child process belongs to the tool adapter contract, not the provider plugin alone. 
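A minimal sketch of that boundary, using a hypothetical `ToolAdapter` type (not the real OpenShell API): provider discovery produces candidate env vars, but only the adapter's allowlist decides what reaches the child process.

```rust
// Hypothetical sketch of the adapter boundary: discovery yields candidates,
// the tool adapter's allowlist decides projection. Names are illustrative.
use std::collections::HashMap;

struct ToolAdapter {
    name: &'static str,
    allowed_env: &'static [&'static str],
}

/// Filter discovered provider env vars down to what this tool's
/// contract permits in the child process.
fn project_env(
    adapter: &ToolAdapter,
    discovered: &HashMap<String, String>,
) -> HashMap<String, String> {
    discovered
        .iter()
        .filter(|(k, _)| adapter.allowed_env.contains(&k.as_str()))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

fn main() {
    let claude = ToolAdapter {
        name: "claude code",
        allowed_env: &["ANTHROPIC_API_KEY"],
    };
    let mut discovered = HashMap::new();
    discovered.insert("ANTHROPIC_API_KEY".to_string(), "placeholder://anthropic".to_string());
    discovered.insert("GITHUB_TOKEN".to_string(), "placeholder://github".to_string());

    let env = project_env(&claude, &discovered);
    // Discovery alone does not authorize projection: the GitHub token
    // is discovered but not in the claude code allowlist.
    assert!(env.contains_key("ANTHROPIC_API_KEY"));
    assert!(!env.contains_key("GITHUB_TOKEN"));
    println!("{} projects {} var(s)", claude.name, env.len());
}
```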
+ +## Tool Projection Contract + +The sandbox projection contract is per tool, not global. The current design target is: + +### `claude code` + +- **Environment variables:** only adapter-approved Anthropic-related variables should appear in the child process, and they should prefer placeholder values that are resolved at the proxy boundary rather than raw secret values. +- **Config files:** adapter-managed projection may eventually populate Claude-specific config or credential file locations, but only from an explicit allowlist of paths. +- **Direct secret projection:** disallowed by default; any exception must be documented as a tool-specific requirement. +- **Outbound endpoints:** Anthropic API endpoints plus any documented non-model support endpoints required by the tool. +- **Vendor-auth boundary:** Anthropic credential discovery and inference routing remain provider-layer responsibilities; the `claude code` adapter only decides which approved Anthropic-facing fields can enter the child process and whether they remain placeholders until proxy rewrite time. + +### `opencode` + +- **Environment variables:** adapter-approved variables may include provider-specific keys used by `opencode`, but only when the tool contract explicitly allows them. +- **Config files:** adapter-managed projection may populate `opencode` config file paths from an allowlisted set. For the current local `opencode` + Copilot validation slice, the approved compatibility contract includes projecting upstream `opencode` device-flow auth state at `~/.local/share/opencode/auth.json`. +- **Direct secret projection:** disallowed by default; exceptions require an explicit tool/vendor contract. The `auth.json` projection above is an `opencode`-specific compatibility exception, not a general sandbox framework rule. 
+- **Outbound endpoints:** GitHub/Copilot-related endpoints plus OpenAI-compatible or Anthropic-compatible inference endpoints only when the selected `opencode` adapter path explicitly supports them. +- **Vendor-auth boundary:** Upstream `opencode` persists device-flow auth in `~/.local/share/opencode/auth.json` and uses the stored OAuth token for `Authorization: Bearer` requests to `api.githubcopilot.com`. OpenShell normally prefers placeholder env projection, but this local validation slice documents a narrow compatibility exception for that upstream contract rather than inferring it from generic `github` or `opencode` provider discovery alone. + +### Future tool adapters + +Before a new tool becomes first-class, it must define: + +- env-var projection needs +- config-file projection needs +- whether any direct secret projection is unavoidable +- outbound endpoint categories + +If that contract is not defined, the intended end state is for OpenShell to fail closed rather than guessing how to inject provider state into the child process. In the current Phase 1 slice, fail-closed enforcement only applies to detected first-class tool commands; all other commands still use the legacy generic projection path until later phases replace that fallback. + +## Vendor Auth Risk Notes + +The next phase of this work adds vendor-native auth/model design on top of the tool adapter layer. The critical constraints are: + +- Anthropic and GitHub/Copilot auth flows may require different projection shapes even when both ultimately drive model access. +- Provider discovery does not automatically imply that a credential is safe to inject into a child process for a given tool. +- Endpoint allowlists must be tied to explicit tool/vendor contracts, not to broad assumptions like "all GitHub endpoints" or "all Anthropic-compatible hosts". 
+- If a vendor flow depends on direct config-file or session-state projection rather than placeholder env vars, that must be documented as a deliberate exception and tested separately. +- For the current `opencode` + Copilot local path, projecting `~/.local/share/opencode/auth.json` likely makes readable auth material available to the agent process inside the sandbox. Treat that as a temporary compatibility trade-off, not the final security design. +- TODO: replace raw readable `auth.json` projection with a harder boundary, such as adapter-mediated token handoff, short-lived derived credentials, or a projection format that keeps bearer material out of the child-visible filesystem. + ## Data Model Provider is defined in `proto/datamodel.proto`: @@ -241,7 +304,7 @@ variables (injected into the pod spec by the gateway's Kubernetes sandbox creati In `run_sandbox()` (`crates/openshell-sandbox/src/lib.rs`): -1. loads the sandbox policy via gRPC (`GetSandboxPolicy`), +1. loads the sandbox policy via gRPC (`GetSandboxSettings`), 2. fetches provider credentials via gRPC (`GetSandboxProviderEnvironment`), 3. if the fetch fails, continues with an empty map (graceful degradation with a warning). diff --git a/architecture/sandbox.md b/architecture/sandbox.md index a8e4d247..9fb768e5 100644 --- a/architecture/sandbox.md +++ b/architecture/sandbox.md @@ -36,6 +36,28 @@ All paths are relative to `crates/openshell-sandbox/src/`. ## Startup and Orchestration +## First-Class Sandbox Tools + +The sandbox runtime is intentionally generic about the child process it launches, but OpenShell still needs an explicit first-class tool matrix so credential projection, config-file layout, and network policy stay predictable. The current first-class tool targets are: + +- `claude code` +- `opencode` + +Future tools should fit the same adapter shape rather than bypassing it with tool-specific ad hoc logic in the supervisor. 
+ +Phase 1 only enforces this boundary for detected first-class tool commands (`claude`, `opencode`). Other commands still use the legacy generic provider-env projection path until later phases tighten the adapter model further. + +The key separation is: + +- **tool adapter**: defines what a specific CLI tool needs inside the sandbox (env vars, config-file projection, trust-store expectations, endpoint categories) +- **provider/model routing**: defines how OpenShell discovers credentials, resolves providers, and routes model traffic such as Anthropic-compatible or OpenAI-compatible inference + +That separation matters because one provider may support multiple tools, and one tool may need credentials from multiple providers. The sandbox should therefore project a stable tool contract while the provider and inference layers remain the source of truth for credential discovery and backend routing. + +For the current local `opencode` + Copilot validation slice, the approved tool contract includes one explicit compatibility exception: upstream `opencode` stores device-flow auth in `~/.local/share/opencode/auth.json` and then uses the stored OAuth token for `Authorization: Bearer` requests to `api.githubcopilot.com`. OpenShell's preferred model remains placeholder env projection with proxy-time secret rewrite, but this narrow `opencode`-specific path may project that auth file to preserve upstream behavior during local validation. This is not a general projection framework, and it is not the final design. + +Security caveat: projecting `auth.json` likely makes readable auth material available to the agent process inside the sandbox. Treat that as a temporary local compatibility trade-off only. TODO: replace raw readable auth-file projection with a hardened adapter path that preserves upstream compatibility without exposing bearer material directly to child processes. 
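The Phase 1 routing described above can be sketched like this; the enum and detection logic are illustrative, assuming the first token of the child command line identifies the tool:

```rust
// Illustrative sketch of Phase 1 routing: detected first-class tool
// commands take the adapter path, everything else falls back to the
// legacy generic projection path. Names are not the real API.
#[derive(Debug, PartialEq)]
enum ProjectionPath {
    ToolAdapter(&'static str),
    LegacyGeneric,
}

fn projection_path_for(command: &str) -> ProjectionPath {
    // Assumption: the first whitespace-separated token selects the path.
    match command.split_whitespace().next() {
        Some("claude") => ProjectionPath::ToolAdapter("claude code"),
        Some("opencode") => ProjectionPath::ToolAdapter("opencode"),
        _ => ProjectionPath::LegacyGeneric,
    }
}

fn main() {
    assert_eq!(
        projection_path_for("claude --help"),
        ProjectionPath::ToolAdapter("claude code")
    );
    // Non-first-class commands keep the legacy path in Phase 1.
    assert_eq!(projection_path_for("bash -c ls"), ProjectionPath::LegacyGeneric);
}
```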
+ The `run_sandbox()` function in `crates/openshell-sandbox/src/lib.rs` is the main orchestration entry point. It executes the following steps in order. ### Orchestration flow @@ -311,35 +333,42 @@ In gRPC mode, the sandbox can receive policy updates at runtime without restarti | `landlock` | No | Landlock LSM in pre_exec | Configuration for the above; same restriction | | `process` | No | `setuid`/`setgid` in pre-exec | Privileges dropped irrevocably before exec | -The gateway's `UpdateSandboxPolicy` RPC enforces this boundary: it rejects any update where the static fields (`filesystem`, `landlock`, `process`) differ from the version 1 (creation-time) policy. It also rejects updates that would change the network mode (e.g., adding `network_policies` to a sandbox that started in `Block` mode), because the network namespace and proxy infrastructure are set up once at startup. +The gateway's `UpdateSandboxPolicy` RPC enforces this boundary: it rejects any update where the static fields (`filesystem`, `landlock`, `process`) differ from the version 1 (creation-time) policy. `network_policies` remain live-editable, including transitions between an empty rule set and a non-empty one, because proto-backed sandboxes already start with the proxy and network namespace infrastructure in place. ### Poll loop +The poll loop tracks `config_revision` (a fingerprint of policy + settings + source) as the primary change-detection signal. It separately tracks `policy_hash` to determine whether an OPA reload is needed -- settings-only changes do not trigger OPA reloads. 
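The two-signal change detection can be sketched as follows (names are illustrative; the real loop is `run_policy_poll_loop()`):

```rust
// Sketch of config_revision vs policy_hash change detection. A revision
// match means nothing changed; a hash change forces an OPA reload;
// otherwise it is a settings-only update.
#[derive(Debug, PartialEq)]
enum PollAction {
    Skip,         // config_revision unchanged
    ReloadOpa,    // policy changed: rebuild the OPA engine
    SettingsOnly, // settings changed: update tracked state, no OPA reload
}

fn classify(
    current_revision: u64,
    current_policy_hash: &str,
    new_revision: u64,
    new_policy_hash: &str,
) -> PollAction {
    if new_revision == current_revision {
        PollAction::Skip
    } else if new_policy_hash != current_policy_hash {
        PollAction::ReloadOpa
    } else {
        PollAction::SettingsOnly
    }
}

fn main() {
    assert_eq!(classify(7, "aaa", 7, "aaa"), PollAction::Skip);
    assert_eq!(classify(7, "aaa", 8, "bbb"), PollAction::ReloadOpa);
    // e.g. log_level changed: revision moves, hash does not.
    assert_eq!(classify(7, "aaa", 8, "aaa"), PollAction::SettingsOnly);
}
```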
+ ```mermaid sequenceDiagram - participant PL as Policy Poll Loop + participant PL as Settings Poll Loop participant GW as Gateway (gRPC) participant OPA as OPA Engine (Arc) - PL->>GW: GetSandboxPolicy(sandbox_id) - GW-->>PL: policy + version + hash - PL->>PL: Store initial version + PL->>GW: GetSandboxSettings(sandbox_id) + GW-->>PL: policy + settings + config_revision + PL->>PL: Store initial config_revision, policy_hash, settings loop Every OPENSHELL_POLICY_POLL_INTERVAL_SECS (default 10) - PL->>GW: GetSandboxPolicy(sandbox_id) - GW-->>PL: policy + version + hash - alt version > current_version - PL->>OPA: reload_from_proto(policy) - alt Reload succeeds - OPA-->>PL: Ok - PL->>PL: Update current_version - PL->>GW: ReportPolicyStatus(version, LOADED) - else Reload fails (validation error) - OPA-->>PL: Err (old engine untouched) - PL->>GW: ReportPolicyStatus(version, FAILED, error_msg) + PL->>GW: GetSandboxSettings(sandbox_id) + GW-->>PL: policy + settings + config_revision + alt config_revision unchanged + PL->>PL: Skip + else config_revision changed + PL->>PL: log_setting_changes(old_settings, new_settings) + alt policy_hash changed + PL->>OPA: reload_from_proto(policy) + alt Reload succeeds + OPA-->>PL: Ok + PL->>PL: Update tracked state + PL->>GW: ReportPolicyStatus(version, LOADED) + else Reload fails (validation error) + OPA-->>PL: Err (old engine untouched) + PL->>GW: ReportPolicyStatus(version, FAILED, error_msg) + end + else settings-only change + PL->>PL: Update tracked state (no OPA reload) end - else version <= current_version - PL->>PL: Skip (no update) end end ``` @@ -347,11 +376,14 @@ sequenceDiagram The `run_policy_poll_loop()` function in `crates/openshell-sandbox/src/lib.rs` implements this loop: 1. **Connect once**: Create a `CachedOpenShellClient` that holds a persistent mTLS channel to the gateway. This avoids TLS renegotiation on every poll. -2. 
**Fetch initial version**: Call `poll_policy(sandbox_id)` to establish the baseline `current_version`. On failure, log a warning and retry on the next interval. -3. **Poll loop**: Sleep for the configured interval, then call `poll_policy()` again. -4. **Version comparison**: If `result.version <= current_version`, skip. The version is a monotonically increasing `u32` per sandbox. -5. **Reload attempt**: Call `opa_engine.reload_from_proto(&result.policy)`. This runs the full `from_proto()` pipeline on the new policy, then atomically swaps the inner engine. -6. **Status reporting**: On success, report `PolicyStatus::Loaded` to the gateway via `ReportPolicyStatus` RPC. On failure, report `PolicyStatus::Failed` with the error message. Status report failures are logged but do not affect the poll loop. +2. **Fetch initial state**: Call `poll_settings(sandbox_id)` to establish baseline `current_config_revision`, `current_policy_hash`, and `current_settings` map. On failure, log a warning and retry on the next interval. +3. **Poll loop**: Sleep for the configured interval, then call `poll_settings()` again. +4. **Config comparison**: If `result.config_revision == current_config_revision`, skip. +5. **Per-setting diff logging**: Call `log_setting_changes()` to diff old and new settings maps. Each individual change is logged with old and new values. +6. **Conditional OPA reload**: Only call `opa_engine.reload_from_proto(policy)` when `policy_hash` changes. Settings-only changes (e.g., `log_level` updated) update the tracked state without touching the OPA engine. +7. **Status reporting**: On success/failure, report status only for sandbox-scoped policy revisions (`policy_source = SANDBOX`, `version > 0`). Global policy overrides still trigger OPA reload, but they do not write per-sandbox policy status history. +8. **Global policy logging**: When `global_policy_version > 0`, the sandbox logs `"Policy reloaded successfully (global)"` with the `global_version` field. 
This distinguishes global reloads from sandbox-scoped reloads in the log stream.
+9. **Update tracked state**: After processing, update `current_config_revision`, `current_policy_hash`, and `current_settings` regardless of whether OPA was reloaded.
 
 ### `CachedOpenShellClient`
 
@@ -364,29 +396,40 @@ pub struct CachedOpenShellClient {
     client: OpenShellClient,
 }
 
-pub struct PolicyPollResult {
-    pub policy: ProtoSandboxPolicy,
+pub struct SettingsPollResult {
+    pub policy: Option<ProtoSandboxPolicy>,
     pub version: u32,
     pub policy_hash: String,
+    pub config_revision: u64,
+    pub policy_source: PolicySource,
+    pub settings: HashMap<String, EffectiveSetting>,
+    pub global_policy_version: u32,
 }
 ```
 
 Methods:
 
 - **`connect(endpoint)`**: Establish an mTLS channel and return a new client.
-- **`poll_policy(sandbox_id)`**: Call `GetSandboxPolicy` RPC and return a `PolicyPollResult` containing the policy, version, and hash.
+- **`poll_settings(sandbox_id)`**: Call `GetSandboxSettings` RPC and return a `SettingsPollResult` containing the optional policy payload, policy metadata, effective config revision, policy source, global policy version, and the effective settings map (for diff logging).
 - **`report_policy_status(sandbox_id, version, loaded, error_msg)`**: Call `ReportPolicyStatus` RPC with the appropriate `PolicyStatus` enum value (`Loaded` or `Failed`).
 - **`raw_client()`**: Return a clone of the underlying `OpenShellClient` for direct RPC calls (used by the log push task).
 
 ### Server-side policy versioning
 
-The gateway assigns a monotonically increasing version number to each policy revision per sandbox. The `GetSandboxPolicyResponse` includes `version` and `policy_hash` fields. The `ReportPolicyStatus` RPC records which version the sandbox successfully loaded (or failed to load), enabling operators to query `GetSandboxPolicyStatus` for the current active version and load history.
+The gateway assigns a monotonically increasing version number to each sandbox policy revision.
`GetSandboxSettingsResponse` carries the full effective configuration: policy payload, effective settings map (with per-key scope indicators), a `config_revision` fingerprint that changes when any effective input changes (policy, settings, or source), and a `policy_source` field indicating whether the policy came from the sandbox's own history or from a global override. Proto messages involved: -- `GetSandboxPolicyResponse` (`proto/sandbox.proto`): `policy`, `version`, `policy_hash` +- `GetSandboxSettingsResponse` (`proto/sandbox.proto`): `policy`, `version`, `policy_hash`, `settings` (map of `EffectiveSetting`), `config_revision`, `policy_source`, `global_policy_version` +- `EffectiveSetting` (`proto/sandbox.proto`): `SettingValue value`, `SettingScope scope` +- `SettingScope` enum: `UNSPECIFIED`, `SANDBOX`, `GLOBAL` +- `PolicySource` enum: `UNSPECIFIED`, `SANDBOX`, `GLOBAL` - `ReportPolicyStatusRequest` (`proto/openshell.proto`): `sandbox_id`, `version`, `status` (enum), `load_error` - `PolicyStatus` enum: `PENDING`, `LOADED`, `FAILED`, `SUPERSEDED` - `SandboxPolicyRevision` (`proto/openshell.proto`): Full revision metadata including `created_at_ms`, `loaded_at_ms` +The `global_policy_version` field is zero when no global policy is active or when `policy_source` is `SANDBOX`. When `policy_source` is `GLOBAL`, it carries the version number of the active global revision. The sandbox logs this value on reload (`"Policy reloaded successfully (global)" global_version=N`) and the TUI displays it in the dashboard and sandbox metadata pane. + +See [Gateway Settings Channel](gateway-settings.md) for full details on the settings resolution model, storage, and CLI/TUI commands. 
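A sketch of the two-tier resolution, under the assumption that registered keys default to unset, sandbox values overlay them, and global values win; names are illustrative:

```rust
// Illustrative two-tier resolution: registered -> sandbox overlay ->
// global override, with the winning scope recorded per key.
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Scope {
    Unset,
    Sandbox,
    Global,
}

fn resolve_effective(
    registered: &[&str],
    sandbox: &HashMap<String, String>,
    global: &HashMap<String, String>,
) -> HashMap<String, (Option<String>, Scope)> {
    let mut out = HashMap::new();
    for key in registered {
        let (value, scope) = if let Some(v) = global.get(*key) {
            (Some(v.clone()), Scope::Global) // global always wins
        } else if let Some(v) = sandbox.get(*key) {
            (Some(v.clone()), Scope::Sandbox)
        } else {
            (None, Scope::Unset) // registered but unconfigured
        };
        out.insert(key.to_string(), (value, scope));
    }
    out
}

fn main() {
    let registered = ["log_level", "policy"];
    let mut sandbox = HashMap::new();
    sandbox.insert("log_level".to_string(), "debug".to_string());
    let global = HashMap::new();

    let eff = resolve_effective(&registered, &sandbox, &global);
    assert_eq!(eff["log_level"], (Some("debug".to_string()), Scope::Sandbox));
    assert_eq!(eff["policy"], (None, Scope::Unset));
}
```

The reserved `policy` key follows the same precedence: when a global value exists it overrides the sandbox's own policy, which is what flips `policy_source` to `GLOBAL`.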
+ ### Failure modes | Condition | Behavior | diff --git a/architecture/security-policy.md b/architecture/security-policy.md index cd4d697f..b63179c4 100644 --- a/architecture/security-policy.md +++ b/architecture/security-policy.md @@ -90,12 +90,9 @@ Attempting to change a static field in an update request returns an `INVALID_ARG ### Network Mode Immutability -The network mode (Block vs. Proxy) cannot change after sandbox creation. This is because switching modes requires infrastructure changes that only happen at startup: +Proto-backed sandboxes always run with proxy networking. The proxy, network namespace, and OPA evaluation path are created at sandbox startup and stay in place for the lifetime of the sandbox. -- **Block to Proxy**: Requires creating a network namespace, veth pair, and starting the CONNECT proxy -- none of which exist if the sandbox started in Block mode. -- **Proxy to Block**: Requires removing the proxy, veth pair, and network namespace, and applying a stricter seccomp filter that blocks `AF_INET`/`AF_INET6` -- not possible on a running process. - -An update that adds `network_policies` to a sandbox created without them (or removes all `network_policies` from a sandbox created with them) is rejected. See `crates/openshell-server/src/grpc.rs` -- `validate_network_mode_unchanged()`. +That means `network_policies` can change freely at runtime, including transitions between an empty map (proxy-backed deny-all) and a non-empty map (proxy-backed allowlist). The immutable boundary is the proxy infrastructure itself, not whether the current policy has any rules. 
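That boundary can be sketched as a validation function over a simplified policy shape (illustrative, not the real `SandboxPolicy` or `validate` code):

```rust
// Sketch of the update boundary: static fields must match the
// creation-time (version 1) policy; network_policies may change freely,
// including empty <-> non-empty transitions.
use std::collections::HashMap;

#[derive(Clone)]
struct Policy {
    filesystem: String, // static after creation
    landlock: String,   // static after creation
    process: String,    // static after creation
    network_policies: HashMap<String, Vec<String>>, // live-editable
}

fn validate_update(v1: &Policy, update: &Policy) -> Result<(), &'static str> {
    if update.filesystem != v1.filesystem
        || update.landlock != v1.landlock
        || update.process != v1.process
    {
        return Err("INVALID_ARGUMENT: static policy fields cannot change");
    }
    Ok(()) // any network_policies change is accepted
}

fn main() {
    let v1 = Policy {
        filesystem: "ro".to_string(),
        landlock: "default".to_string(),
        process: "drop".to_string(),
        network_policies: HashMap::new(),
    };
    // Adding rules to an empty map is a legal live edit.
    let mut net_update = v1.clone();
    net_update
        .network_policies
        .insert("github.com".to_string(), vec!["git".to_string()]);
    assert!(validate_update(&v1, &net_update).is_ok());
    // Touching a static field is rejected.
    let mut bad = v1.clone();
    bad.filesystem = "rw".to_string();
    assert!(validate_update(&v1, &bad).is_err());
}
```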
### Update Flow @@ -110,15 +107,14 @@ sequenceDiagram CLI->>GW: UpdateSandboxPolicy(name, new_policy) GW->>GW: Validate static fields unchanged - GW->>GW: Validate network mode unchanged GW->>DB: put_policy_revision(version=N, status=pending) GW->>DB: supersede_pending_policies(before_version=N) GW-->>CLI: UpdateSandboxPolicyResponse(version=N, hash) loop Every 30s (configurable) - SB->>GW: GetSandboxPolicy(sandbox_id) + SB->>GW: GetSandboxSettings(sandbox_id) GW->>DB: get_latest_policy(sandbox_id) - GW-->>SB: GetSandboxPolicyResponse(policy, version=N, hash) + GW-->>SB: GetSandboxSettingsResponse(policy, version=N, hash) end Note over SB: Detects version > current_version @@ -144,7 +140,7 @@ sequenceDiagram Each sandbox maintains an independent, monotonically increasing version counter for its policy revisions: -- **Version 1** is the policy from the sandbox's `spec.policy` at creation time. It is backfilled lazily on the first `GetSandboxPolicy` call if no explicit revision exists in the policy history table. See `crates/openshell-server/src/grpc.rs` -- `get_sandbox_policy()`. +- **Version 1** is the policy from the sandbox's `spec.policy` at creation time. It is backfilled lazily on the first `GetSandboxSettings` call if no explicit revision exists in the policy history table. See `crates/openshell-server/src/grpc.rs` -- `get_sandbox_settings()`. - Each `UpdateSandboxPolicy` call computes the next version as `latest_version + 1` and persists a new `PolicyRecord` with status `"pending"`. - When a new version is persisted, all older revisions still in `"pending"` status are marked `"superseded"` via `supersede_pending_policies()`. This handles rapid successive updates where the sandbox has not yet picked up an intermediate version. - The `Sandbox` protobuf object carries a `current_policy_version` field (see `proto/datamodel.proto`) that is updated when the sandbox reports a successful load. 
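The revision bookkeeping above can be sketched as follows (illustrative names, not the real store API):

```rust
// Sketch: each update gets latest_version + 1, and any older revision
// still "pending" is marked "superseded" before the new one is pushed.
#[derive(Clone, Debug, PartialEq)]
struct PolicyRecord {
    version: u32,
    status: &'static str,
}

fn push_revision(history: &mut Vec<PolicyRecord>) -> u32 {
    let next = history.iter().map(|r| r.version).max().unwrap_or(0) + 1;
    // Rapid successive updates: intermediate pendings never loaded by
    // the sandbox are superseded.
    for rec in history.iter_mut() {
        if rec.status == "pending" {
            rec.status = "superseded";
        }
    }
    history.push(PolicyRecord { version: next, status: "pending" });
    next
}

fn main() {
    let mut history = vec![PolicyRecord { version: 1, status: "loaded" }];
    assert_eq!(push_revision(&mut history), 2);
    assert_eq!(push_revision(&mut history), 3);
    // v2 was never loaded before v3 arrived, so it is superseded.
    assert_eq!(history[1].status, "superseded");
    assert_eq!(history[2].status, "pending");
}
```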
@@ -186,7 +182,7 @@ In gRPC mode, the sandbox spawns a background task that periodically polls the g The poll loop: 1. Connects a reusable gRPC client (`CachedOpenShellClient`) to avoid per-poll TLS handshake overhead. -2. Fetches the current policy via `GetSandboxPolicy`, which returns the latest version, its policy payload, and a SHA-256 hash. +2. Fetches the current policy via `GetSandboxSettings`, which returns the latest version, its policy payload, and a SHA-256 hash. 3. Compares the returned version against the locally tracked `current_version`. If the server version is not greater, the loop sleeps and retries. 4. On a new version, calls `OpaEngine::reload_from_proto()` which builds a complete new `regorus::Engine` through the same validated pipeline as the initial load (proto-to-JSON conversion, L7 validation, access preset expansion). 5. If the new engine builds successfully, it atomically replaces the inner `Mutex`. If it fails, the previous engine is untouched. @@ -210,34 +206,61 @@ Failure scenarios that trigger LKG behavior include: ### CLI Commands -The `nav policy` subcommand group manages live policy updates: +The `openshell policy` subcommand group manages live policy updates: ```bash # Push a new policy to a running sandbox -nav policy set --policy updated-policy.yaml +openshell policy set --policy updated-policy.yaml # Push and wait for the sandbox to load it (with 60s timeout) -nav policy set --policy updated-policy.yaml --wait +openshell policy set --policy updated-policy.yaml --wait # Push and wait with a custom timeout -nav policy set --policy updated-policy.yaml --wait --timeout 120 +openshell policy set --policy updated-policy.yaml --wait --timeout 120 + +# Set a gateway-global policy (overrides all sandbox policies) +openshell policy set --global --policy policy.yaml --yes + +# Delete the gateway-global policy (restores sandbox-level control) +openshell policy delete --global --yes # View the current active policy and its status -nav 
policy get +openshell policy get # Inspect a specific revision -nav policy get --rev 3 +openshell policy get --rev 3 # Print the full policy as YAML (round-trips with --policy input format) -nav policy get --full +openshell policy get --full # Combine: inspect a specific revision's full policy -nav policy get --rev 2 --full +openshell policy get --rev 2 --full # List policy revision history -nav policy list --limit 20 +openshell policy list --limit 20 ``` +#### Global Policy + +The `--global` flag on `policy set`, `policy delete`, `policy list`, and `policy get` manages a gateway-wide policy override. When a global policy is set, all sandboxes receive it through `GetSandboxSettings` (with `policy_source: GLOBAL`) instead of their own per-sandbox policy. Global policies are versioned through the `sandbox_policies` table using the sentinel `sandbox_id = "__global__"` and delivered to sandboxes via the reserved `policy` key in the `gateway_settings` blob. + +| Command | Behavior | +|---------|----------| +| `policy set --global --policy FILE` | Creates a versioned revision (marked `loaded` immediately) and stores the policy in the global settings blob. Sandboxes pick it up on their next poll (~10s). Deduplicates against the latest `loaded` revision by hash. | +| `policy delete --global` | Removes the `policy` key from global settings and supersedes all `__global__` revisions. Sandboxes revert to their per-sandbox policy on the next poll. | +| `policy list --global [--limit N]` | Lists global policy revision history (version, hash, status, timestamps). | +| `policy get --global [--rev N] [--full]` | Shows a specific global revision's metadata, or the latest. `--full` includes the full policy as YAML. | + +Both `set` and `delete` require interactive confirmation (or `--yes` to bypass). The `--wait` flag is rejected for global policy updates: `"--wait is not supported for global policies; global policies are effective immediately"`. 
+ +When a global policy is active, sandbox-scoped policy mutations are blocked: +- `policy set ` returns `FailedPrecondition: "policy is managed globally"` +- `rule approve`, `rule approve-all` return `FailedPrecondition: "cannot approve rules while a global policy is active"` +- Revoking a previously approved draft chunk is blocked (it would modify the sandbox policy) +- Rejecting pending chunks is allowed (does not modify the sandbox policy) + +See [Gateway Settings Channel](gateway-settings.md#global-policy-lifecycle) for the full state machine, storage model, and implementation details. + #### `policy get` flags | Flag | Default | Description | @@ -377,7 +400,7 @@ process: ### `network_policies` -A map of named network policy rules. Each rule defines which binary/endpoint pairs are allowed to make outbound network connections. This is the core of the network access control system. **Dynamic field** -- can be updated on a running sandbox via live policy updates (see [Live Policy Updates](#live-policy-updates)). However, the overall network mode (Block vs. Proxy) is immutable. +A map of named network policy rules. Each rule defines which binary/endpoint pairs are allowed to make outbound network connections. This is the core of the network access control system. **Dynamic field** -- can be updated on a running sandbox via live policy updates (see [Live Policy Updates](#live-policy-updates)). **Behavioral trigger**: The sandbox always starts in **proxy mode** regardless of whether `network_policies` is present. The proxy is required so that all egress can be evaluated by OPA and the virtual hostname `inference.local` is always addressable for inference routing. When `network_policies` is empty, the OPA engine denies all connections. @@ -621,7 +644,7 @@ In proxy mode: When `network_policies` is empty, the OPA engine denies all outbound connections (except `inference.local` which is handled separately by the proxy before OPA evaluation). 
-**Gateway-side validation**: The `validate_network_mode_unchanged()` function on the server still rejects live policy updates that would add `network_policies` to a sandbox created without them or remove all `network_policies` from a sandbox created with them. This prevents unexpected behavioral changes in the OPA allow/deny logic. See `crates/openshell-server/src/grpc.rs` -- `validate_network_mode_unchanged()`. +The gateway validates that static fields stay unchanged across live policy updates, then persists a new policy revision for the supervisor to load. Empty and non-empty `network_policies` revisions follow the same live-update path. **Proxy sub-modes**: In proxy mode, the proxy handles two distinct request types: @@ -937,8 +960,6 @@ These errors are returned by the gateway's `UpdateSandboxPolicy` handler and rej | `filesystem_policy` differs from version 1 | `filesystem policy cannot be changed on a live sandbox (applied at startup)` | | `landlock` differs from version 1 | `landlock policy cannot be changed on a live sandbox (applied at startup)` | | `process` differs from version 1 | `process policy cannot be changed on a live sandbox (applied at startup)` | -| Adding `network_policies` when version 1 had none | `cannot add network policies to a sandbox created without them (Block -> Proxy mode change requires restart)` | -| Removing all `network_policies` when version 1 had some | `cannot remove all network policies from a sandbox created with them (Proxy -> Block mode change requires restart)` | ### Warnings (Log Only) @@ -1388,6 +1409,7 @@ An empty `sources`/`log_sources` list means no source filtering (all sources pas - [Sandbox Architecture](sandbox.md) -- Full sandbox lifecycle, enforcement mechanisms, and component interaction - [Gateway Architecture](gateway.md) -- How the gateway stores and delivers policies via gRPC +- [Gateway Settings Channel](gateway-settings.md) -- Runtime settings channel, global policy override, CLI/TUI settings commands - 
[Inference Routing](inference-routing.md) -- How `inference.local` requests are routed to model backends - [Overview](README.md) -- System-level context for how policies fit into the platform - [Plain HTTP Forward Proxy Plan](plans/plain-http-forward-proxy.md) -- Design document for the forward proxy feature diff --git a/architecture/system-architecture.md b/architecture/system-architecture.md index f0915c18..290d27c6 100644 --- a/architecture/system-architecture.md +++ b/architecture/system-architecture.md @@ -123,7 +123,7 @@ graph TB %% ============================================================ %% CONNECTIONS: Sandbox --> Gateway (control plane) %% ============================================================ - Supervisor -- "gRPC (mTLS):
GetSandboxPolicy,
GetProviderEnvironment,
GetInferenceBundle,
PushSandboxLogs" --> Gateway + Supervisor -- "gRPC (mTLS):
GetSandboxSettings
(policy + settings),
GetProviderEnvironment,
GetInferenceBundle,
PushSandboxLogs" --> Gateway %% ============================================================ %% CONNECTIONS: Sandbox --> External (via proxy) @@ -197,4 +197,4 @@ graph TB 5. **Inference Routing**: Inference requests are handled inside the sandbox by the openshell-router (not through the gateway). The gateway provides route configuration and credentials via gRPC; the sandbox executes HTTP requests directly to inference backends. -6. **Sandbox to Gateway**: The sandbox supervisor uses gRPC (mTLS) to fetch policies, provider credentials, inference bundles, and to push logs back to the gateway. +6. **Sandbox to Gateway**: The sandbox supervisor uses gRPC (mTLS) to fetch policies and runtime settings (via `GetSandboxSettings`), provider credentials, inference bundles, and to push logs back to the gateway. The settings channel delivers typed key-value pairs alongside policy through a unified poll loop. diff --git a/architecture/tui.md b/architecture/tui.md index f54452c8..1a83e96d 100644 --- a/architecture/tui.md +++ b/architecture/tui.md @@ -48,15 +48,35 @@ The TUI divides the terminal into four horizontal regions: ### Dashboard (press `1`) -The Dashboard is the home screen. It shows your cluster at a glance: +The Dashboard is the home screen. It shows your cluster at a glance. -- **Cluster name** and **gateway endpoint** — which cluster you are connected to and how to reach it. -- **Health status** — a live indicator that polls the cluster every 2 seconds: +The dashboard is divided into a top info pane and a middle pane with two tabs: + +- **Top pane**: Cluster name, gateway endpoint, health status, sandbox count. +- **Middle pane**: Tabbed view toggled with `Tab`: + - **Providers** — provider configurations attached to the cluster. + - **Global Settings** — gateway-global runtime settings (fetched via `GetGatewaySettings`). + +**Health status** indicators: - `●` **Healthy** (green) — everything is running normally. 
- `◐` **Degraded** (yellow) — the cluster is up but something needs attention. - `○` **Unhealthy** (red) — the cluster is not operating correctly. - `…` — still connecting or status unknown. -- **Sandbox count** — how many sandboxes exist in the cluster. + +**Global policy indicator**: When a global policy is active, the gateway row shows `Global Policy Active (vN)` in yellow (the `status_warn` style). The TUI detects this by polling `ListSandboxPolicies` with `global: true, limit: 1` on each tick and checking if the latest revision has `PolicyStatus::Loaded`. See `crates/openshell-tui/src/ui/dashboard.rs`. + +#### Global Settings Tab + +The Global Settings tab shows all registered setting keys with their current values. Keys without a configured value display as ``. + +| Key | Action | +|-----|--------| +| `j` / `↓` | Move selection down | +| `k` / `↑` | Move selection up | +| `Enter` | Edit the selected setting (type-aware: bool toggle, string/int text input) | +| `d` | Delete the selected setting's value | + +Both edit and delete operations display a confirmation modal before applying. Changes are sent to the gateway via the `UpdateSandboxPolicy` RPC with `global: true`. ### Sandboxes (press `2`) @@ -82,6 +102,23 @@ Use `j`/`k` or the arrow keys to move through the list. The selected row is high When there are no sandboxes, the view displays: *"No sandboxes found."* +When viewing a specific sandbox (by pressing `Enter` on a selected row), the bottom pane shows a tabbed view toggled with `l`: + +- **Policy** — the sandbox's current active policy, auto-refreshed on version change. +- **Settings** — effective runtime settings for the sandbox (fetched via `GetSandboxSettings`). + +**Global policy indicator on sandbox detail**: When the sandbox's policy is managed globally (`policy_source == GLOBAL` in the `GetSandboxSettings` response), the metadata pane shows `Policy: managed globally (vN)` in yellow. 
Draft chunks in the **Network Rules** pane are greyed out and a yellow warning reads `"Cannot approve rules while global policy is active"`. Approve (`a`), reject/revoke (`x`), and approve-all actions are blocked client-side with status messages. See `crates/openshell-tui/src/ui/sandbox_detail.rs` and `crates/openshell-tui/src/ui/sandbox_draft.rs`. + +#### Sandbox Settings Tab + +The Settings tab shows all registered setting keys with their effective values and scope indicators: + +- **(sandbox)** — value is set at sandbox scope +- **(global)** — value is set at gateway-global scope (overrides sandbox) +- **(unset)** — no value configured at any scope + +Navigation and editing use the same keys as the Global Settings tab (`j`/`k`, `Enter` to edit, `d` to delete). Sandbox-scoped edits to globally-managed keys are rejected by the server with a `FailedPrecondition` error. + ## Keyboard Controls The TUI has two input modes: **Normal** (default) and **Command** (activated by pressing `:`). @@ -112,10 +149,12 @@ Press `Esc` to cancel and return to Normal mode. `Backspace` deletes characters ## Data Refresh -The TUI automatically polls the cluster every **2 seconds**. Both cluster health and the sandbox list update on each tick, so the display stays current without manual refreshing. This uses the same gRPC calls as the CLI — no additional server-side setup is required. +The TUI automatically polls the cluster every **2 seconds**. Cluster health, the sandbox list, and global settings all update on each tick, so the display stays current without manual refreshing. This uses the same gRPC calls as the CLI — no additional server-side setup is required. When viewing a sandbox, the policy pane auto-refreshes when a new policy version is detected. The sandbox list response includes `current_policy_version` for each sandbox; on every tick the TUI compares this against the currently displayed policy version and re-fetches the full policy only when they differ. 
This avoids extra RPCs during normal operation while ensuring policy updates appear within the polling interval. The user's scroll position is preserved across auto-refreshes. +Global settings are refreshed via `GetGatewaySettings` and tracked by `settings_revision` to detect changes. Sandbox settings are fetched as part of the `GetSandboxSettings` response when viewing a specific sandbox. + ## Theme The TUI uses a dark terminal theme based on the NVIDIA brand palette: @@ -143,9 +182,9 @@ The forwarding implementation lives in `openshell-core::forward`, shared between ## What is Not Yet Available -The TUI is in its initial phase. The following features are planned but not yet implemented: +The TUI is in active development. The following features are planned but not yet implemented: -- **Inference and provider views** — browsing inference routes and provider configurations. +- **Inference views** — browsing inference routes and configuration. - **Help overlay** — the `?` key is shown in the nav bar but does not open a help screen yet. - **Command bar autocomplete** — the command bar accepts text but does not offer suggestions. - **Filtering and search** — no `/` search within views yet. 
diff --git a/crates/openshell-bootstrap/Cargo.toml b/crates/openshell-bootstrap/Cargo.toml index ab57ad57..54af87c6 100644 --- a/crates/openshell-bootstrap/Cargo.toml +++ b/crates/openshell-bootstrap/Cargo.toml @@ -25,6 +25,7 @@ tracing = { workspace = true } [dev-dependencies] tempfile = "3" +temp-env = "0.3" [lints] workspace = true diff --git a/crates/openshell-bootstrap/src/docker.rs b/crates/openshell-bootstrap/src/docker.rs index 9c365bfe..fedbbb4d 100644 --- a/crates/openshell-bootstrap/src/docker.rs +++ b/crates/openshell-bootstrap/src/docker.rs @@ -304,22 +304,25 @@ pub async fn find_gateway_container(docker: &Docker, port: Option) -> Resul let matches: Vec = containers .iter() - .filter(|c| is_gateway_image(c) && port.map_or(true, |p| has_port(c, p))) + .filter(|c| is_gateway_image(c) && port.is_none_or(|p| has_port(c, p))) .filter_map(container_name) .collect(); match matches.len() { 0 => { - let hint = if let Some(p) = port { - format!( - "No openshell gateway container found listening on port {p}.\n\ + let hint = port.map_or_else( + || { + "No openshell gateway container found.\n\ Is the gateway running? Check with: docker ps" - ) - } else { - "No openshell gateway container found.\n\ - Is the gateway running? Check with: docker ps" - .to_string() - }; + .to_string() + }, + |p| { + format!( + "No openshell gateway container found listening on port {p}.\n\ + Is the gateway running? 
Check with: docker ps" + ) + }, + ); Err(miette::miette!("{hint}")) } 1 => Ok(matches.into_iter().next().unwrap()), @@ -748,22 +751,22 @@ pub async fn check_port_conflicts( let ports = container.ports.as_deref().unwrap_or_default(); for port in ports { - if let Some(public) = port.public_port { - if needed_ports.contains(&public) { - let cname = names - .first() - .map(|n| n.trim_start_matches('/').to_string()) - .unwrap_or_else(|| { - container - .id - .clone() - .unwrap_or_else(|| "".to_string()) - }); - conflicts.push(PortConflict { - container_name: cname, - host_port: public, - }); - } + if let Some(public) = port.public_port + && needed_ports.contains(&public) + { + let cname = names.first().map_or_else( + || { + container + .id + .clone() + .unwrap_or_else(|| "".to_string()) + }, + |n| n.trim_start_matches('/').to_string(), + ); + conflicts.push(PortConflict { + container_name: cname, + host_port: public, + }); } } } @@ -1091,6 +1094,7 @@ fn is_port_conflict(err: &BollardError) -> bool { #[cfg(test)] mod tests { use super::*; + use temp_env::with_var; #[test] fn normalize_arch_x86_64() { @@ -1152,36 +1156,22 @@ mod tests { #[test] fn docker_not_reachable_error_with_docker_host() { // Simulate: DOCKER_HOST is set but daemon unresponsive. - // We set the env var temporarily (this is test-only). 
- let prev_docker_host = std::env::var("DOCKER_HOST").ok(); - // SAFETY: test-only, single-threaded test runner for this test - unsafe { - std::env::set_var("DOCKER_HOST", "unix:///tmp/fake-docker.sock"); - } - - let err = docker_not_reachable_error( - "daemon not responding", - "Docker socket exists but the daemon is not responding", - ); - let msg = format!("{err:?}"); - - // Restore env - // SAFETY: test-only, restoring previous state - unsafe { - match prev_docker_host { - Some(val) => std::env::set_var("DOCKER_HOST", val), - None => std::env::remove_var("DOCKER_HOST"), - } - } + with_var("DOCKER_HOST", Some("unix:///tmp/fake-docker.sock"), || { + let err = docker_not_reachable_error( + "daemon not responding", + "Docker socket exists but the daemon is not responding", + ); + let msg = format!("{err:?}"); - assert!( - msg.contains("DOCKER_HOST"), - "should mention DOCKER_HOST when it is set" - ); - assert!( - msg.contains("unix:///tmp/fake-docker.sock"), - "should show the current DOCKER_HOST value" - ); + assert!( + msg.contains("DOCKER_HOST"), + "should mention DOCKER_HOST when it is set" + ); + assert!( + msg.contains("unix:///tmp/fake-docker.sock"), + "should show the current DOCKER_HOST value" + ); + }); } #[test] diff --git a/crates/openshell-bootstrap/src/errors.rs b/crates/openshell-bootstrap/src/errors.rs index 1511b362..b487c94a 100644 --- a/crates/openshell-bootstrap/src/errors.rs +++ b/crates/openshell-bootstrap/src/errors.rs @@ -175,21 +175,19 @@ fn diagnose_corrupted_state(gateway_name: &str) -> GatewayFailureDiagnosis { GatewayFailureDiagnosis { summary: "Corrupted cluster state".to_string(), explanation: "The gateway cluster has corrupted internal state, likely from a previous \ - interrupted startup or unclean shutdown." + interrupted startup or unclean shutdown. Resources from the failed deploy have been \ + automatically cleaned up." 
.to_string(), recovery_steps: vec![ + RecoveryStep::new("Retry the gateway start (cleanup was automatic)"), RecoveryStep::with_command( - "Destroy and recreate the gateway", + "If the retry fails, manually destroy and recreate", format!( "openshell gateway destroy --name {gateway_name} && openshell gateway start" ), ), - RecoveryStep::with_command( - "If that fails, remove the volume for a clean slate", - format!("docker volume rm openshell-cluster-{gateway_name}"), - ), ], - retryable: false, + retryable: true, } } @@ -449,16 +447,20 @@ pub fn generic_failure_diagnosis(gateway_name: &str) -> GatewayFailureDiagnosis summary: "Gateway failed to start".to_string(), explanation: "The gateway encountered an unexpected error during startup.".to_string(), recovery_steps: vec![ + RecoveryStep::with_command( + "Check container logs for details", + format!("openshell doctor logs --name {gateway_name}"), + ), + RecoveryStep::with_command( + "Run diagnostics", + format!("openshell doctor check --name {gateway_name}"), + ), RecoveryStep::with_command( "Try destroying and recreating the gateway", format!( "openshell gateway destroy --name {gateway_name} && openshell gateway start" ), ), - RecoveryStep::with_command( - "Check container logs for details", - format!("docker logs openshell-cluster-{gateway_name}"), - ), RecoveryStep::new( "If the issue persists, report it at https://github.com/nvidia/openshell/issues", ), @@ -483,6 +485,87 @@ mod tests { assert!(d.summary.contains("Corrupted")); } + #[test] + fn test_diagnose_corrupted_state_is_retryable_after_auto_cleanup() { + // After the auto-cleanup fix (#463), corrupted state errors should be + // marked retryable because deploy_gateway_with_logs now automatically + // cleans up Docker resources on failure. 
+ let d = diagnose_failure( + "mygw", + "K8s namespace not ready", + Some("configmaps \"extension-apiserver-authentication\" is forbidden"), + ) + .expect("should match corrupted state pattern"); + assert!( + d.retryable, + "corrupted state should be retryable after auto-cleanup" + ); + assert!( + d.explanation.contains("automatically cleaned up"), + "explanation should mention automatic cleanup, got: {}", + d.explanation + ); + } + + #[test] + fn test_diagnose_corrupted_state_recovery_no_manual_volume_rm() { + // The recovery steps should no longer include a manual docker volume rm + // command, since cleanup is now automatic. The first step should tell + // the user to simply retry. + let d = diagnose_failure("mygw", "cannot get resource \"namespaces\"", None) + .expect("should match corrupted state pattern"); + + let all_commands: Vec = d + .recovery_steps + .iter() + .filter_map(|s| s.command.clone()) + .collect(); + let all_commands_joined = all_commands.join(" "); + + assert!( + !all_commands_joined.contains("docker volume rm"), + "recovery steps should not include manual docker volume rm, got: {all_commands_joined}" + ); + + // First step should be a description-only step (no command) about retrying + assert!( + d.recovery_steps[0].command.is_none(), + "first recovery step should be description-only (automatic cleanup)" + ); + assert!( + d.recovery_steps[0] + .description + .contains("cleanup was automatic"), + "first recovery step should mention automatic cleanup" + ); + } + + #[test] + fn test_diagnose_corrupted_state_fallback_step_includes_gateway_name() { + // The fallback recovery step should interpolate the gateway name so + // users can copy-paste the command. 
+ let d = diagnose_failure("my-gateway", "is forbidden", None) + .expect("should match corrupted state pattern"); + + assert!( + d.recovery_steps.len() >= 2, + "should have at least 2 recovery steps" + ); + let fallback = &d.recovery_steps[1]; + let cmd = fallback + .command + .as_deref() + .expect("fallback step should have a command"); + assert!( + cmd.contains("my-gateway"), + "fallback command should contain gateway name, got: {cmd}" + ); + assert!( + cmd.contains("openshell gateway destroy"), + "fallback command should include gateway destroy, got: {cmd}" + ); + } + #[test] fn test_diagnose_no_default_route() { let diagnosis = diagnose_failure( @@ -650,4 +733,196 @@ mod tests { ); assert!(d.retryable); } + + // -- generic_failure_diagnosis tests -- + + #[test] + fn generic_diagnosis_suggests_doctor_logs() { + let d = generic_failure_diagnosis("my-gw"); + let commands: Vec = d + .recovery_steps + .iter() + .filter_map(|s| s.command.clone()) + .collect(); + assert!( + commands.iter().any(|c| c.contains("openshell doctor logs")), + "expected 'openshell doctor logs' in recovery commands, got: {commands:?}" + ); + } + + #[test] + fn generic_diagnosis_suggests_doctor_check() { + let d = generic_failure_diagnosis("my-gw"); + let commands: Vec = d + .recovery_steps + .iter() + .filter_map(|s| s.command.clone()) + .collect(); + assert!( + commands + .iter() + .any(|c| c.contains("openshell doctor check")), + "expected 'openshell doctor check' in recovery commands, got: {commands:?}" + ); + } + + #[test] + fn generic_diagnosis_includes_gateway_name() { + let d = generic_failure_diagnosis("custom-name"); + let all_text: String = d + .recovery_steps + .iter() + .filter_map(|s| s.command.clone()) + .collect::>() + .join(" "); + assert!( + all_text.contains("custom-name"), + "expected gateway name in recovery commands, got: {all_text}" + ); + } + + // -- fallback behavior tests -- + + #[test] + fn namespace_timeout_without_logs_returns_none() { + // This is the most common 
user-facing error: a plain timeout with only + // kubectl output. It must NOT match any specific pattern so the caller + // can fall back to generic_failure_diagnosis. + let diagnosis = diagnose_failure( + "test", + "K8s namespace not ready\n\nCaused by:\n \ + timed out waiting for namespace 'openshell' to exist: \ + error: the server doesn't have a resource type \"namespace\"", + None, + ); + assert!( + diagnosis.is_none(), + "plain namespace timeout should not match any specific pattern, got: {:?}", + diagnosis.map(|d| d.summary) + ); + } + + #[test] + fn namespace_timeout_with_pressure_logs_matches() { + // When container logs reveal node pressure, the diagnosis engine + // should detect it even though the error message itself is generic. + let diagnosis = diagnose_failure( + "test", + "K8s namespace not ready\n\nCaused by:\n \ + timed out waiting for namespace 'openshell' to exist: ", + Some("HEALTHCHECK_NODE_PRESSURE: DiskPressure"), + ); + assert!(diagnosis.is_some(), "expected node pressure diagnosis"); + let d = diagnosis.unwrap(); + assert!( + d.summary.contains("pressure"), + "expected pressure in summary, got: {}", + d.summary + ); + } + + #[test] + fn namespace_timeout_with_corrupted_state_logs_matches() { + // Container logs revealing RBAC corruption should be caught. 
+ let diagnosis = diagnose_failure( + "test", + "K8s namespace not ready\n\nCaused by:\n \ + timed out waiting for namespace 'openshell' to exist: ", + Some( + "configmaps \"extension-apiserver-authentication\" is forbidden: \ + User cannot get resource", + ), + ); + assert!(diagnosis.is_some(), "expected corrupted state diagnosis"); + let d = diagnosis.unwrap(); + assert!( + d.summary.contains("Corrupted"), + "expected Corrupted in summary, got: {}", + d.summary + ); + } + + #[test] + fn namespace_timeout_with_no_route_logs_matches() { + let diagnosis = diagnose_failure( + "test", + "K8s namespace not ready", + Some("Error: no default route present before starting k3s"), + ); + assert!(diagnosis.is_some(), "expected networking diagnosis"); + let d = diagnosis.unwrap(); + assert!( + d.summary.contains("networking"), + "expected networking in summary, got: {}", + d.summary + ); + } + + #[test] + fn diagnose_failure_with_logs_uses_combined_text() { + // Verify that diagnose_failure combines error_message + container_logs + // for pattern matching. The pattern "connection refused" is in logs, + // not in the error message. + let diagnosis = diagnose_failure( + "test", + "K8s namespace not ready", + Some("dial tcp 127.0.0.1:6443: connect: connection refused"), + ); + assert!( + diagnosis.is_some(), + "expected diagnosis from container logs pattern" + ); + let d = diagnosis.unwrap(); + assert!( + d.summary.contains("Network") || d.summary.contains("connectivity"), + "expected network diagnosis, got: {}", + d.summary + ); + } + + // -- end-to-end fallback pattern (mirrors CLI code) -- + + #[test] + fn fallback_to_generic_produces_actionable_diagnosis() { + // This mirrors the actual CLI pattern: + // diagnose_failure(...).unwrap_or_else(|| generic_failure_diagnosis(name)) + // For a plain namespace timeout with no useful container logs, the + // specific matcher returns None and we must fall back to the generic + // diagnosis that suggests doctor commands. 
+ let err_str = "K8s namespace not ready\n\nCaused by:\n \ + timed out waiting for namespace 'openshell' to exist: \ + error: the server doesn't have a resource type \"namespace\""; + let container_logs = Some("k3s is starting\nwaiting for kube-apiserver"); + + let diagnosis = diagnose_failure("my-gw", err_str, container_logs) + .unwrap_or_else(|| generic_failure_diagnosis("my-gw")); + + // Should have gotten the generic diagnosis (no specific pattern matched) + assert_eq!(diagnosis.summary, "Gateway failed to start"); + // Must contain actionable recovery steps + assert!( + !diagnosis.recovery_steps.is_empty(), + "generic diagnosis should have recovery steps" + ); + // Must mention doctor commands + let all_commands: String = diagnosis + .recovery_steps + .iter() + .filter_map(|s| s.command.as_ref()) + .cloned() + .collect::>() + .join("\n"); + assert!( + all_commands.contains("doctor logs"), + "should suggest 'doctor logs', got: {all_commands}" + ); + assert!( + all_commands.contains("doctor check"), + "should suggest 'doctor check', got: {all_commands}" + ); + assert!( + all_commands.contains("my-gw"), + "commands should include gateway name, got: {all_commands}" + ); + } } diff --git a/crates/openshell-bootstrap/src/lib.rs b/crates/openshell-bootstrap/src/lib.rs index b6423aae..fd25e02f 100644 --- a/crates/openshell-bootstrap/src/lib.rs +++ b/crates/openshell-bootstrap/src/lib.rs @@ -305,8 +305,8 @@ where } // Ensure the image is available on the target Docker daemon + log("[status] Downloading gateway".to_string()); if remote_opts.is_some() { - log("[status] Downloading gateway".to_string()); let on_log_clone = Arc::clone(&on_log); let progress_cb = move |msg: String| { if let Ok(mut f) = on_log_clone.lock() { @@ -323,7 +323,6 @@ where .await?; } else { // Local deployment: ensure image exists (pull if needed) - log("[status] Downloading gateway".to_string()); ensure_image( &target_docker, &image_ref, @@ -400,119 +399,143 @@ where )); } - ensure_container( - 
&target_docker, - &name, - &image_ref, - &extra_sans, - ssh_gateway_host.as_deref(), - port, - disable_tls, - disable_gateway_auth, - registry_username.as_deref(), - registry_token.as_deref(), - gpu, - ) - .await?; - start_container(&target_docker, &name).await?; - - // Clean up stale k3s nodes left over from previous container instances that - // used the same persistent volume. Without this, pods remain scheduled on - // NotReady ghost nodes and the health check will time out. - match clean_stale_nodes(&target_docker, &name).await { - Ok(0) => {} - Ok(n) => tracing::debug!("removed {n} stale node(s)"), - Err(err) => { - tracing::debug!("stale node cleanup failed (non-fatal): {err}"); + // From this point on, Docker resources (container, volume, network) are + // being created. If any subsequent step fails, we must clean up to avoid + // leaving an orphaned volume in a corrupted state that blocks retries. + // See: https://github.com/NVIDIA/OpenShell/issues/463 + let deploy_result: Result = async { + ensure_container( + &target_docker, + &name, + &image_ref, + &extra_sans, + ssh_gateway_host.as_deref(), + port, + disable_tls, + disable_gateway_auth, + registry_username.as_deref(), + registry_token.as_deref(), + gpu, + ) + .await?; + start_container(&target_docker, &name).await?; + + // Clean up stale k3s nodes left over from previous container instances that + // used the same persistent volume. Without this, pods remain scheduled on + // NotReady ghost nodes and the health check will time out. + match clean_stale_nodes(&target_docker, &name).await { + Ok(0) => {} + Ok(n) => tracing::debug!("removed {n} stale node(s)"), + Err(err) => { + tracing::debug!("stale node cleanup failed (non-fatal): {err}"); + } } - } - // Reconcile PKI: reuse existing cluster TLS secrets if they are complete and - // valid; only generate fresh PKI when secrets are missing, incomplete, - // malformed, or expiring within MIN_REMAINING_VALIDITY_DAYS. 
- // - // Ordering is: reconcile secrets -> (if rotated and workload exists: - // rollout restart and wait) -> persist CLI-side bundle. - // - // We check workload presence before reconciliation. On a fresh/recreated - // cluster, secrets are always newly generated and a restart is unnecessary. - // Restarting only when workload pre-existed avoids extra rollout latency. - let workload_existed_before_pki = openshell_workload_exists(&target_docker, &name).await?; - let (pki_bundle, rotated) = reconcile_pki(&target_docker, &name, &extra_sans, &log).await?; - - if rotated && workload_existed_before_pki { - // If an openshell workload is already running, it must be restarted so - // it picks up the new TLS secrets before we write CLI-side certs. - // A failed rollout is a hard error — CLI certs must not be persisted - // if the server cannot come up with the new PKI. - restart_openshell_deployment(&target_docker, &name).await?; - } + // Reconcile PKI: reuse existing cluster TLS secrets if they are complete and + // valid; only generate fresh PKI when secrets are missing, incomplete, + // malformed, or expiring within MIN_REMAINING_VALIDITY_DAYS. + // + // Ordering is: reconcile secrets -> (if rotated and workload exists: + // rollout restart and wait) -> persist CLI-side bundle. + // + // We check workload presence before reconciliation. On a fresh/recreated + // cluster, secrets are always newly generated and a restart is unnecessary. + // Restarting only when workload pre-existed avoids extra rollout latency. + let workload_existed_before_pki = openshell_workload_exists(&target_docker, &name).await?; + let (pki_bundle, rotated) = reconcile_pki(&target_docker, &name, &extra_sans, &log).await?; + + if rotated && workload_existed_before_pki { + // If an openshell workload is already running, it must be restarted so + // it picks up the new TLS secrets before we write CLI-side certs. 
+ // A failed rollout is a hard error — CLI certs must not be persisted + // if the server cannot come up with the new PKI. + restart_openshell_deployment(&target_docker, &name).await?; + } + + store_pki_bundle(&name, &pki_bundle)?; + + // Push locally-built component images into the k3s containerd runtime. + // This is the "push" path for local development — images are exported from + // the local Docker daemon and streamed into the cluster's containerd so + // k3s can resolve them without pulling from the remote registry. + if remote_opts.is_none() + && let Ok(push_images_str) = std::env::var("OPENSHELL_PUSH_IMAGES") + { + let images: Vec<&str> = push_images_str + .split(',') + .map(str::trim) + .filter(|s| !s.is_empty()) + .collect(); + if !images.is_empty() { + log("[status] Deploying components".to_string()); + let local_docker = Docker::connect_with_local_defaults().into_diagnostic()?; + let container = container_name(&name); + let on_log_ref = Arc::clone(&on_log); + let mut push_log = move |msg: String| { + if let Ok(mut f) = on_log_ref.lock() { + f(msg); + } + }; + push::push_local_images( + &local_docker, + &target_docker, + &container, + &images, + &mut push_log, + ) + .await?; - store_pki_bundle(&name, &pki_bundle)?; + restart_openshell_deployment(&target_docker, &name).await?; + } + } - // Push locally-built component images into the k3s containerd runtime. - // This is the "push" path for local development — images are exported from - // the local Docker daemon and streamed into the cluster's containerd so - // k3s can resolve them without pulling from the remote registry. 
- if remote_opts.is_none() - && let Ok(push_images_str) = std::env::var("OPENSHELL_PUSH_IMAGES") - { - let images: Vec<&str> = push_images_str - .split(',') - .map(str::trim) - .filter(|s| !s.is_empty()) - .collect(); - if !images.is_empty() { - log("[status] Deploying components".to_string()); - let local_docker = Docker::connect_with_local_defaults().into_diagnostic()?; - let container = container_name(&name); + log("[status] Starting gateway".to_string()); + { + // Create a short-lived closure that locks on each call rather than holding + // the MutexGuard across await points. let on_log_ref = Arc::clone(&on_log); - let mut push_log = move |msg: String| { + let mut gateway_log = move |msg: String| { if let Ok(mut f) = on_log_ref.lock() { f(msg); } }; - push::push_local_images( - &local_docker, - &target_docker, - &container, - &images, - &mut push_log, - ) - .await?; - - restart_openshell_deployment(&target_docker, &name).await?; + wait_for_gateway_ready(&target_docker, &name, &mut gateway_log).await?; } - } - log("[status] Starting gateway".to_string()); - { - // Create a short-lived closure that locks on each call rather than holding - // the MutexGuard across await points. - let on_log_ref = Arc::clone(&on_log); - let mut gateway_log = move |msg: String| { - if let Ok(mut f) = on_log_ref.lock() { - f(msg); + // Create and store gateway metadata. + let metadata = create_gateway_metadata_with_host( + &name, + remote_opts.as_ref(), + port, + ssh_gateway_host.as_deref(), + disable_tls, + ); + store_gateway_metadata(&name, &metadata)?; + + Ok(metadata) + } + .await; + + match deploy_result { + Ok(metadata) => Ok(GatewayHandle { + name, + metadata, + docker: target_docker, + }), + Err(deploy_err) => { + // Automatically clean up Docker resources (volume, container, network, + // image) so the environment is left in a retryable state. 
+ tracing::info!("deploy failed, cleaning up gateway resources for '{name}'"); + if let Err(cleanup_err) = destroy_gateway_resources(&target_docker, &name).await { + tracing::warn!( + "automatic cleanup after failed deploy also failed: {cleanup_err}. \ + Manual cleanup may be required: \ + openshell gateway destroy --name {name}" + ); } - }; - wait_for_gateway_ready(&target_docker, &name, &mut gateway_log).await?; + Err(deploy_err) + } } - - // Create and store gateway metadata. - let metadata = create_gateway_metadata_with_host( - &name, - remote_opts.as_ref(), - port, - ssh_gateway_host.as_deref(), - disable_tls, - ); - store_gateway_metadata(&name, &metadata)?; - - Ok(GatewayHandle { - name, - metadata, - docker: target_docker, - }) } /// Get a handle to an existing gateway. @@ -638,6 +661,21 @@ pub async fn gateway_container_logs( Ok(()) } +/// Fetch the last `n` lines of container logs for a local gateway as a `String`. +/// +/// This is a convenience wrapper for diagnostic call sites (e.g. failure +/// diagnosis in the CLI) that do not hold a Docker client handle. +/// +/// Returns an empty string on any Docker/connection error so callers don't +/// need to worry about error handling. 
+pub async fn fetch_gateway_logs(name: &str, n: usize) -> String { + let Ok(docker) = Docker::connect_with_local_defaults() else { + return String::new(); + }; + let container = container_name(name); + fetch_recent_logs(&docker, &container, n).await +} + fn default_gateway_image_ref() -> String { if let Ok(image) = std::env::var("OPENSHELL_CLUSTER_IMAGE") && !image.trim().is_empty() @@ -984,7 +1022,11 @@ async fn wait_for_namespace( } if attempt + 1 == attempts { - return Err(err).wrap_err("K8s namespace not ready"); + let logs = fetch_recent_logs(docker, container_name, 40).await; + return Err(miette::miette!( + "exec failed on final attempt while waiting for namespace '{namespace}': {err}\n{logs}" + )) + .wrap_err("K8s namespace not ready"); } tokio::time::sleep(backoff).await; backoff = std::cmp::min(backoff.saturating_mul(2), max_backoff); @@ -997,8 +1039,9 @@ async fn wait_for_namespace( } if attempt + 1 == attempts { + let logs = fetch_recent_logs(docker, container_name, 40).await; return Err(miette::miette!( - "timed out waiting for namespace '{namespace}' to exist: {output}" + "timed out waiting for namespace '{namespace}' to exist: {output}\n{logs}" )) .wrap_err("K8s namespace not ready"); } diff --git a/crates/openshell-cli/Cargo.toml b/crates/openshell-cli/Cargo.toml index 61c20450..ef6d8779 100644 --- a/crates/openshell-cli/Cargo.toml +++ b/crates/openshell-cli/Cargo.toml @@ -74,6 +74,9 @@ tracing-subscriber = { workspace = true } [lints] workspace = true +[features] +dev-settings = ["openshell-core/dev-settings"] + [dev-dependencies] futures = { workspace = true } rcgen = { version = "0.13", features = ["crypto", "pem"] } diff --git a/crates/openshell-cli/src/auth.rs b/crates/openshell-cli/src/auth.rs index 53d35ae1..6325ebf9 100644 --- a/crates/openshell-cli/src/auth.rs +++ b/crates/openshell-cli/src/auth.rs @@ -108,23 +108,37 @@ pub async fn browser_auth_flow(gateway_endpoint: &str) -> Result { gateway_endpoint.to_string(), )); + // Allow suppressing 
the browser popup via environment variable (useful for + // CI, e2e tests, and headless environments). + let no_browser = std::env::var("OPENSHELL_NO_BROWSER") + .map(|v| v == "1" || v.eq_ignore_ascii_case("true")) + .unwrap_or(false); + // Prompt the user before opening the browser. eprintln!(" Confirmation code: {code}"); eprintln!(" Verify this code matches your browser before clicking Connect."); eprintln!(); - eprint!("Press Enter to open the browser for authentication..."); - std::io::stderr().flush().ok(); - let mut _input = String::new(); - std::io::stdin().read_line(&mut _input).ok(); - - if let Err(e) = open_browser(&auth_url) { - debug!(error = %e, "failed to open browser"); - eprintln!("Could not open browser automatically."); + + if no_browser { + eprintln!("Browser opening suppressed (OPENSHELL_NO_BROWSER is set)."); eprintln!("Open this URL in your browser:"); eprintln!(" {auth_url}"); eprintln!(); } else { - eprintln!("Browser opened."); + eprint!("Press Enter to open the browser for authentication..."); + std::io::stderr().flush().ok(); + let mut _input = String::new(); + std::io::stdin().read_line(&mut _input).ok(); + + if let Err(e) = open_browser(&auth_url) { + debug!(error = %e, "failed to open browser"); + eprintln!("Could not open browser automatically."); + eprintln!("Open this URL in your browser:"); + eprintln!(" {auth_url}"); + eprintln!(); + } else { + eprintln!("Browser opened."); + } } // Wait for the callback or timeout. 
diff --git a/crates/openshell-cli/src/bootstrap.rs b/crates/openshell-cli/src/bootstrap.rs index eb8f93a3..e976061f 100644 --- a/crates/openshell-cli/src/bootstrap.rs +++ b/crates/openshell-cli/src/bootstrap.rs @@ -139,7 +139,7 @@ pub async fn run_bootstrap( eprintln!(); eprintln!( " Manage it later with: {} or {}", - "openshell gateway status".bold(), + "openshell status".bold(), "openshell gateway stop".bold(), ); eprintln!(); diff --git a/crates/openshell-cli/src/doctor_llm_prompt.md b/crates/openshell-cli/src/doctor_llm_prompt.md index 4d4a6b64..cb057359 100644 --- a/crates/openshell-cli/src/doctor_llm_prompt.md +++ b/crates/openshell-cli/src/doctor_llm_prompt.md @@ -277,7 +277,10 @@ If DNS is broken, all image pulls from the distribution registry will fail, as w | `tls handshake eof` from `openshell status` | Server not running or mTLS credentials missing/mismatched | Check StatefulSet replicas (Step 3) and mTLS files (Step 6) | | StatefulSet `0/0` replicas | StatefulSet scaled to zero (failed deploy, manual scale-down, or Helm misconfiguration) | `openshell doctor exec -- kubectl -n openshell scale statefulset openshell --replicas=1` | | Local mTLS files missing | Deploy was interrupted before credentials were persisted | Extract from cluster secret `openshell-client-tls` (Step 6) | -| Container not found | Image not built | `mise run docker:build:cluster` (local) or re-deploy (remote) | +| Container not found | Image not built | `mise run docker:build:cluster` (local, with `OPENSHELL_RUNTIME_BUNDLE_TARBALL` set) or re-deploy (remote, with `--runtime-bundle-tarball`) | +| Local cluster image build now fails before Docker starts with runtime-bundle validation errors | Missing, malformed, wrong-arch, or unstaged `OPENSHELL_RUNTIME_BUNDLE_TARBALL` input for the controlled GPU runtime path | Re-run the cluster-image build with `OPENSHELL_RUNTIME_BUNDLE_TARBALL` pointing at a valid per-arch bundle tarball, and confirm `tasks/scripts/docker-build-image.sh cluster` 
stages `deploy/docker/.build/runtime-bundle/<arch>/` successfully for `deploy/docker/Dockerfile.images` |
+| Remote deploy now fails before Docker starts with runtime-bundle validation errors | `scripts/remote-deploy.sh` was run without `--runtime-bundle-tarball`, without `--remote-runtime-bundle-tarball` in `--skip-sync` mode, or the resolved tarball path is missing/invalid | Re-run `scripts/remote-deploy.sh` with `--runtime-bundle-tarball <path>` for sync-and-build deploys, or `--remote-runtime-bundle-tarball <path> --skip-sync` when the tarball is already staged remotely |
+| Multi-arch cluster publish fails before Docker starts with missing runtime-bundle variables | One or both per-arch tarballs were not provided to `tasks/scripts/docker-publish-multiarch.sh` | Set `OPENSHELL_RUNTIME_BUNDLE_TARBALL_AMD64` and `OPENSHELL_RUNTIME_BUNDLE_TARBALL_ARM64` to valid per-arch tarballs, then re-run the multi-arch publish command |
 | Container exited, OOMKilled | Insufficient memory | Increase host memory or reduce workload |
 | Container exited, non-zero exit | k3s crash, port conflict, privilege issue | Check `openshell doctor logs` for details |
 | `/readyz` fails | k3s still starting or crashed | Wait longer or check container logs for k3s errors |
diff --git a/crates/openshell-cli/src/main.rs b/crates/openshell-cli/src/main.rs
index 84a323b5..3799b392 100644
--- a/crates/openshell-cli/src/main.rs
+++ b/crates/openshell-cli/src/main.rs
@@ -164,6 +164,7 @@ const HELP_TEMPLATE: &str = "\
 forward: Manage port forwarding to a sandbox
 logs: View sandbox logs
 policy: Manage sandbox policy
+ settings: Manage sandbox and global settings
 provider: Manage provider configuration

\x1b[1mGATEWAY COMMANDS\x1b[0m
@@ -249,9 +250,21 @@ const POLICY_EXAMPLES: &str = "\x1b[1mALIAS\x1b[0m
\x1b[1mEXAMPLES\x1b[0m
 $ openshell policy get my-sandbox
 $ openshell policy set my-sandbox --policy policy.yaml
+ $ openshell policy set --global --policy policy.yaml
+ $ openshell policy delete --global
 $ openshell policy
list my-sandbox
 ";

+const SETTINGS_EXAMPLES: &str = "\x1b[1mEXAMPLES\x1b[0m
+ $ openshell settings get my-sandbox
+ $ openshell settings get --global
+ $ openshell settings set my-sandbox --key log_level --value debug
+ $ openshell settings set --global --key log_level --value warn
+ $ openshell settings set --global --key dummy_bool --value yes
+ $ openshell settings set --global --key dummy_int --value 42
+ $ openshell settings delete --global --key log_level
+";
+
 const PROVIDER_EXAMPLES: &str = "\x1b[1mEXAMPLES\x1b[0m
 $ openshell provider create --name openai --type openai --credential OPENAI_API_KEY
 $ openshell provider create --name anthropic --type anthropic --from-existing
@@ -397,6 +410,13 @@ enum Commands {
 command: Option,
 },

+ /// Manage sandbox and gateway settings.
+ #[command(after_help = SETTINGS_EXAMPLES, help_template = SUBCOMMAND_HELP_TEMPLATE)]
+ Settings {
+ #[command(subcommand)]
+ command: Option<SettingsCommands>,
+ },
+
 /// Manage network rules for a sandbox.
 #[command(visible_alias = "rl", hide = true, help_template = SUBCOMMAND_HELP_TEMPLATE)]
 Rule {
 #[command(subcommand)]
 command: Option,
 },
@@ -1324,7 +1344,7 @@ enum PolicyCommands {
 /// Update policy on a live sandbox.
 #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
 Set {
- /// Sandbox name (defaults to last-used sandbox).
+ /// Sandbox name (defaults to last-used sandbox when not using --global).
 #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
 name: Option<String>,

@@ -1332,6 +1352,14 @@
 /// Policy file path.
 #[arg(long, value_hint = ValueHint::FilePath)]
 policy: String,

+ /// Apply as a gateway-global policy for all sandboxes.
+ #[arg(long)]
+ global: bool,
+
+ /// Skip the confirmation prompt for global policy updates.
+ #[arg(long)]
+ yes: bool,
+
 /// Wait for the sandbox to load the policy.
 #[arg(long)]
 wait: bool,
@@ -1341,10 +1369,10 @@
 timeout: u64,
 },

- /// Show current active policy for a sandbox.
+ /// Show current active policy for a sandbox or the global policy.
#[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
 Get {
- /// Sandbox name (defaults to last-used sandbox).
+ /// Sandbox name (defaults to last-used sandbox). Ignored with --global.
 #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
 name: Option<String>,

@@ -1355,18 +1383,101 @@
 /// Print the full policy as YAML.
 #[arg(long)]
 full: bool,
+
+ /// Show the global policy revision.
+ #[arg(long)]
+ global: bool,
 },

- /// List policy history for a sandbox.
+ /// List policy history for a sandbox or the global policy.
 #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
 List {
- /// Sandbox name (defaults to last-used sandbox).
+ /// Sandbox name (defaults to last-used sandbox). Ignored with --global.
 #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
 name: Option<String>,

 /// Maximum number of revisions to return.
 #[arg(long, default_value_t = 20)]
 limit: u32,
+
+ /// List global policy revisions.
+ #[arg(long)]
+ global: bool,
+ },
+
+ /// Delete the gateway-global policy lock, restoring sandbox-level policy control.
+ #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
+ Delete {
+ /// Delete the global policy setting.
+ #[arg(long)]
+ global: bool,
+
+ /// Skip the confirmation prompt for global policy delete.
+ #[arg(long)]
+ yes: bool,
+ },
+}
+
+#[derive(Subcommand, Debug)]
+enum SettingsCommands {
+ /// Show effective settings for a sandbox or gateway-global scope.
+ #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
+ Get {
+ /// Sandbox name (defaults to last-used sandbox).
+ #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
+ name: Option<String>,
+
+ /// Show gateway-global settings.
+ #[arg(long)]
+ global: bool,
+
+ /// Output as JSON.
+ #[arg(long)]
+ json: bool,
+ },
+
+ /// Set a single setting key.
+ #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
+ Set {
+ /// Sandbox name (defaults to last-used sandbox when not using --global).
+ #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
+ name: Option<String>,
+
+ /// Setting key.
+ #[arg(long)]
+ key: String,
+
+ /// Setting value (string input; bool keys accept true/false/yes/no/1/0).
+ #[arg(long)]
+ value: String,
+
+ /// Apply at gateway-global scope.
+ #[arg(long)]
+ global: bool,
+
+ /// Skip the confirmation prompt for global setting updates.
+ #[arg(long)]
+ yes: bool,
+ },
+
+ /// Delete a setting key (sandbox-scoped or gateway-global).
+ #[command(help_template = LEAF_HELP_TEMPLATE, next_help_heading = "FLAGS")]
+ Delete {
+ /// Sandbox name (defaults to last-used sandbox when not using --global).
+ #[arg(add = ArgValueCompleter::new(completers::complete_sandbox_names))]
+ name: Option<String>,
+
+ /// Setting key.
+ #[arg(long)]
+ key: String,
+
+ /// Delete at gateway-global scope.
+ #[arg(long)]
+ global: bool,
+
+ /// Skip the confirmation prompt for global setting delete.
+ #[arg(long)] + yes: bool, }, } @@ -1730,20 +1841,119 @@ async fn main() -> Result<()> { PolicyCommands::Set { name, policy, + global, + yes, wait, timeout, } => { - let name = resolve_sandbox_name(name, &ctx.name)?; - run::sandbox_policy_set(&ctx.endpoint, &name, &policy, wait, timeout, &tls) + if global { + if wait { + return Err(miette::miette!( + "--wait is not supported for global policies; \ + global policies are effective immediately" + )); + } + run::sandbox_policy_set_global( + &ctx.endpoint, + &policy, + yes, + wait, + timeout, + &tls, + ) .await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_policy_set(&ctx.endpoint, &name, &policy, wait, timeout, &tls) + .await?; + } } - PolicyCommands::Get { name, rev, full } => { - let name = resolve_sandbox_name(name, &ctx.name)?; - run::sandbox_policy_get(&ctx.endpoint, &name, rev, full, &tls).await?; + PolicyCommands::Get { + name, + rev, + full, + global, + } => { + if global { + run::sandbox_policy_get_global(&ctx.endpoint, rev, full, &tls).await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_policy_get(&ctx.endpoint, &name, rev, full, &tls).await?; + } } - PolicyCommands::List { name, limit } => { - let name = resolve_sandbox_name(name, &ctx.name)?; - run::sandbox_policy_list(&ctx.endpoint, &name, limit, &tls).await?; + PolicyCommands::List { + name, + limit, + global, + } => { + if global { + run::sandbox_policy_list_global(&ctx.endpoint, limit, &tls).await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_policy_list(&ctx.endpoint, &name, limit, &tls).await?; + } + } + PolicyCommands::Delete { global, yes } => { + if !global { + return Err(miette::miette!( + "sandbox policy delete is not supported; use --global to remove global policy lock" + )); + } + run::gateway_setting_delete(&ctx.endpoint, "policy", yes, &tls).await?; + } + } + } + + // ----------------------------------------------------------- + // 
Settings commands + // ----------------------------------------------------------- + Some(Commands::Settings { + command: Some(settings_cmd), + }) => { + let ctx = resolve_gateway(&cli.gateway, &cli.gateway_endpoint)?; + let mut tls = tls.with_gateway_name(&ctx.name); + apply_edge_auth(&mut tls, &ctx.name); + + match settings_cmd { + SettingsCommands::Get { name, global, json } => { + if global { + if name.is_some() { + return Err(miette::miette!( + "settings get --global does not accept a sandbox name" + )); + } + run::gateway_settings_get(&ctx.endpoint, json, &tls).await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_settings_get(&ctx.endpoint, &name, json, &tls).await?; + } + } + SettingsCommands::Set { + name, + key, + value, + global, + yes, + } => { + if global { + run::gateway_setting_set(&ctx.endpoint, &key, &value, yes, &tls).await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_setting_set(&ctx.endpoint, &name, &key, &value, &tls).await?; + } + } + SettingsCommands::Delete { + name, + key, + global, + yes, + } => { + if global { + run::gateway_setting_delete(&ctx.endpoint, &key, yes, &tls).await?; + } else { + let name = resolve_sandbox_name(name, &ctx.name)?; + run::sandbox_setting_delete(&ctx.endpoint, &name, &key, &tls).await?; + } } } } @@ -2229,6 +2439,13 @@ async fn main() -> Result<()> { .print_help() .expect("Failed to print help"); } + Some(Commands::Settings { command: None }) => { + Cli::command() + .find_subcommand_mut("settings") + .expect("settings subcommand exists") + .print_help() + .expect("Failed to print help"); + } Some(Commands::Provider { command: None }) => { Cli::command() .find_subcommand_mut("provider") @@ -2803,4 +3020,99 @@ mod tests { other => panic!("expected SshProxy, got: {other:?}"), } } + + #[test] + fn settings_set_global_parses_yes_flag() { + let cli = Cli::try_parse_from([ + "openshell", + "settings", + "set", + "--global", + "--key", + "log_level", + 
"--value", + "warn", + "--yes", + ]) + .expect("settings set --global should parse"); + + match cli.command { + Some(Commands::Settings { + command: + Some(SettingsCommands::Set { + global, + yes, + key, + value, + .. + }), + }) => { + assert!(global); + assert!(yes); + assert_eq!(key, "log_level"); + assert_eq!(value, "warn"); + } + other => panic!("expected settings set command, got: {other:?}"), + } + } + + #[test] + fn settings_get_global_parses() { + let cli = Cli::try_parse_from(["openshell", "settings", "get", "--global"]) + .expect("settings get --global should parse"); + + match cli.command { + Some(Commands::Settings { + command: Some(SettingsCommands::Get { name, global, .. }), + }) => { + assert!(global); + assert!(name.is_none()); + } + other => panic!("expected settings get command, got: {other:?}"), + } + } + + #[test] + fn policy_delete_global_parses() { + let cli = Cli::try_parse_from(["openshell", "policy", "delete", "--global", "--yes"]) + .expect("policy delete --global should parse"); + + match cli.command { + Some(Commands::Policy { + command: Some(PolicyCommands::Delete { global, yes }), + }) => { + assert!(global); + assert!(yes); + } + other => panic!("expected policy delete command, got: {other:?}"), + } + } + + #[test] + fn settings_delete_global_parses_yes_flag() { + let cli = Cli::try_parse_from([ + "openshell", + "settings", + "delete", + "--global", + "--key", + "log_level", + "--yes", + ]) + .expect("settings delete --global should parse"); + + match cli.command { + Some(Commands::Settings { + command: + Some(SettingsCommands::Delete { + key, global, yes, .. 
+ }), + }) => { + assert_eq!(key, "log_level"); + assert!(global); + assert!(yes); + } + other => panic!("expected settings delete command, got: {other:?}"), + } + } } diff --git a/crates/openshell-cli/src/run.rs b/crates/openshell-cli/src/run.rs index 37f11fcb..f5576b6a 100644 --- a/crates/openshell-cli/src/run.rs +++ b/crates/openshell-cli/src/run.rs @@ -24,13 +24,15 @@ use openshell_bootstrap::{ use openshell_core::proto::{ ApproveAllDraftChunksRequest, ApproveDraftChunkRequest, ClearDraftChunksRequest, CreateProviderRequest, CreateSandboxRequest, DeleteProviderRequest, DeleteSandboxRequest, - GetClusterInferenceRequest, GetDraftHistoryRequest, GetDraftPolicyRequest, GetProviderRequest, - GetSandboxLogsRequest, GetSandboxPolicyStatusRequest, GetSandboxRequest, HealthRequest, - ListProvidersRequest, ListSandboxPoliciesRequest, ListSandboxesRequest, PolicyStatus, Provider, + GetClusterInferenceRequest, GetDraftHistoryRequest, GetDraftPolicyRequest, + GetGatewayConfigRequest, GetProviderRequest, GetSandboxConfigRequest, GetSandboxLogsRequest, + GetSandboxPolicyStatusRequest, GetSandboxRequest, HealthRequest, ListProvidersRequest, + ListSandboxPoliciesRequest, ListSandboxesRequest, PolicyStatus, Provider, RejectDraftChunkRequest, Sandbox, SandboxPhase, SandboxPolicy, SandboxSpec, SandboxTemplate, - SetClusterInferenceRequest, UpdateProviderRequest, UpdateSandboxPolicyRequest, - WatchSandboxRequest, + SetClusterInferenceRequest, SettingScope, SettingValue, UpdateConfigRequest, + UpdateProviderRequest, WatchSandboxRequest, setting_value, }; +use openshell_core::settings::{self, SettingValueKind}; use openshell_providers::{ ProviderRegistry, detect_provider_from_command, normalize_provider_type, }; @@ -1248,19 +1250,27 @@ pub(crate) async fn deploy_gateway_with_panel( "x".red().bold(), "Gateway failed:".red().bold(), ); + // Fetch container logs for pattern-based diagnosis + let container_logs = openshell_bootstrap::fetch_gateway_logs(name, 80).await; + let logs_opt = 
if container_logs.is_empty() { + None + } else { + Some(container_logs.as_str()) + }; // Try to diagnose the failure and provide guidance let err_str = format!("{err:?}"); - if let Some(diagnosis) = - openshell_bootstrap::errors::diagnose_failure(name, &err_str, None) - { - print_failure_diagnosis(&diagnosis); - } + let diagnosis = + openshell_bootstrap::errors::diagnose_failure(name, &err_str, logs_opt) + .unwrap_or_else(|| { + openshell_bootstrap::errors::generic_failure_diagnosis(name) + }); + print_failure_diagnosis(&diagnosis); Err(err) } } } else { eprintln!("Deploying {location} gateway {name}..."); - let handle = openshell_bootstrap::deploy_gateway_with_logs(options, |line| { + let result = openshell_bootstrap::deploy_gateway_with_logs(options, |line| { if let Some(status) = line.strip_prefix("[status] ") { eprintln!(" {status}"); } else if line.strip_prefix("[progress] ").is_some() { @@ -1269,9 +1279,35 @@ pub(crate) async fn deploy_gateway_with_panel( eprintln!(" {line}"); } }) - .await?; - eprintln!("Gateway {name} ready."); - Ok(handle) + .await; + match result { + Ok(handle) => { + eprintln!("Gateway {name} ready."); + Ok(handle) + } + Err(err) => { + eprintln!( + "{} {} {name}", + "x".red().bold(), + "Gateway failed:".red().bold(), + ); + // Fetch container logs for pattern-based diagnosis + let container_logs = openshell_bootstrap::fetch_gateway_logs(name, 80).await; + let logs_opt = if container_logs.is_empty() { + None + } else { + Some(container_logs.as_str()) + }; + let err_str = format!("{err:?}"); + let diagnosis = + openshell_bootstrap::errors::diagnose_failure(name, &err_str, logs_opt) + .unwrap_or_else(|| { + openshell_bootstrap::errors::generic_failure_diagnosis(name) + }); + print_failure_diagnosis(&diagnosis); + Err(err) + } + } } } @@ -3749,6 +3785,456 @@ fn parse_duration_to_ms(s: &str) -> Result { Ok(num * multiplier) } +fn confirm_global_setting_takeover(key: &str, yes: bool) -> Result<()> { + if yes { + return Ok(()); + } + + if 
!std::io::stdin().is_terminal() || !std::io::stdout().is_terminal() {
+ return Err(miette::miette!(
+ "global setting updates require confirmation; pass --yes in non-interactive mode"
+ ));
+ }
+
+ let proceed = Confirm::with_theme(&ColorfulTheme::default())
+ .with_prompt(format!(
+ "Setting '{key}' globally will disable sandbox-level management for this key. Continue?"
+ ))
+ .default(false)
+ .interact()
+ .into_diagnostic()?;
+
+ if !proceed {
+ return Err(miette::miette!("aborted by user"));
+ }
+
+ Ok(())
+}
+
+fn confirm_global_setting_delete(key: &str, yes: bool) -> Result<()> {
+ if yes {
+ return Ok(());
+ }
+
+ if !std::io::stdin().is_terminal() || !std::io::stdout().is_terminal() {
+ return Err(miette::miette!(
+ "global setting deletes require confirmation; pass --yes in non-interactive mode"
+ ));
+ }
+
+ let proceed = Confirm::with_theme(&ColorfulTheme::default())
+ .with_prompt(format!(
+ "Deleting global setting '{key}' re-enables sandbox-level management for this key. Continue?"
+ ))
+ .default(false)
+ .interact()
+ .into_diagnostic()?;
+
+ if !proceed {
+ return Err(miette::miette!("aborted by user"));
+ }
+
+ Ok(())
+}
+
+fn parse_cli_setting_value(key: &str, raw_value: &str) -> Result<SettingValue> {
+ let setting = settings::setting_for_key(key).ok_or_else(|| {
+ miette::miette!(
+ "unknown setting key '{}'.
Allowed keys: {}", + key, + settings::registered_keys_csv() + ) + })?; + + let value = match setting.kind { + SettingValueKind::String => setting_value::Value::StringValue(raw_value.to_string()), + SettingValueKind::Int => { + let parsed = raw_value.trim().parse::().map_err(|_| { + miette::miette!( + "invalid int value '{}' for key '{}'; expected base-10 integer", + raw_value, + key + ) + })?; + setting_value::Value::IntValue(parsed) + } + SettingValueKind::Bool => { + let parsed = settings::parse_bool_like(raw_value).ok_or_else(|| { + miette::miette!( + "invalid bool value '{}' for key '{}'; expected one of: true,false,yes,no,1,0", + raw_value, + key + ) + })?; + setting_value::Value::BoolValue(parsed) + } + }; + + Ok(SettingValue { value: Some(value) }) +} + +fn format_setting_value(value: Option<&SettingValue>) -> String { + let Some(value) = value.and_then(|v| v.value.as_ref()) else { + return "".to_string(); + }; + match value { + setting_value::Value::StringValue(v) => v.clone(), + setting_value::Value::BoolValue(v) => v.to_string(), + setting_value::Value::IntValue(v) => v.to_string(), + setting_value::Value::BytesValue(v) => format!("", v.len()), + } +} + +pub async fn sandbox_policy_set_global( + server: &str, + policy_path: &str, + yes: bool, + wait: bool, + _timeout_secs: u64, + tls: &TlsOptions, +) -> Result<()> { + if wait { + return Err(miette::miette!( + "--wait is only supported for sandbox-scoped policy updates" + )); + } + + confirm_global_setting_takeover("policy", yes)?; + + let policy = load_sandbox_policy(Some(policy_path))? + .ok_or_else(|| miette::miette!("No policy loaded from {policy_path}"))?; + + let mut client = grpc_client(server, tls).await?; + let response = client + .update_config(UpdateConfigRequest { + name: String::new(), + policy: Some(policy), + setting_key: String::new(), + setting_value: None, + delete_setting: false, + global: true, + }) + .await + .into_diagnostic()? 
+        .into_inner();
+
+    eprintln!(
+        "{} Global policy configured (hash: {}, settings revision: {})",
+        "✓".green().bold(),
+        if response.policy_hash.len() >= 12 {
+            &response.policy_hash[..12]
+        } else {
+            &response.policy_hash
+        },
+        response.settings_revision,
+    );
+    Ok(())
+}
+
+pub async fn sandbox_settings_get(
+    server: &str,
+    name: &str,
+    json: bool,
+    tls: &TlsOptions,
+) -> Result<()> {
+    let mut client = grpc_client(server, tls).await?;
+    let sandbox = client
+        .get_sandbox(GetSandboxRequest {
+            name: name.to_string(),
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner()
+        .sandbox
+        .ok_or_else(|| miette::miette!("sandbox not found"))?;
+
+    let response = client
+        .get_sandbox_config(GetSandboxConfigRequest {
+            sandbox_id: sandbox.id.clone(),
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    if json {
+        let obj = settings_to_json_sandbox(name, &response);
+        println!("{}", serde_json::to_string_pretty(&obj).into_diagnostic()?);
+        return Ok(());
+    }
+
+    let policy_source =
+        if response.policy_source == openshell_core::proto::PolicySource::Global as i32 {
+            "global"
+        } else {
+            "sandbox"
+        };
+
+    println!("Sandbox: {}", name);
+    println!("Config Rev: {}", response.config_revision);
+    println!("Policy Source: {}", policy_source);
+    println!("Policy Hash: {}", response.policy_hash);
+
+    if response.settings.is_empty() {
+        println!("Settings: No settings available.");
+        return Ok(());
+    }
+
+    println!("Settings:");
+    let mut keys: Vec<_> = response.settings.keys().cloned().collect();
+    keys.sort();
+    for key in keys {
+        if let Some(setting) = response.settings.get(&key) {
+            let scope = match SettingScope::try_from(setting.scope) {
+                Ok(SettingScope::Global) => "global",
+                Ok(SettingScope::Sandbox) => "sandbox",
+                _ => "unset",
+            };
+            println!(
+                " {} = {} ({})",
+                key,
+                format_setting_value(setting.value.as_ref()),
+                scope
+            );
+        }
+    }
+
+    Ok(())
+}
+
+pub async fn gateway_settings_get(server: &str, json: bool, tls: &TlsOptions) -> Result<()> {
+    let mut client = grpc_client(server, tls).await?;
+    let response = client
+        .get_gateway_config(GetGatewayConfigRequest {})
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    if json {
+        let obj = settings_to_json_global(&response);
+        println!("{}", serde_json::to_string_pretty(&obj).into_diagnostic()?);
+        return Ok(());
+    }
+
+    println!("Scope: global");
+    println!("Settings Rev: {}", response.settings_revision);
+
+    if response.settings.is_empty() {
+        println!("Settings: No settings available.");
+        return Ok(());
+    }
+
+    println!("Settings:");
+    let mut keys: Vec<_> = response.settings.keys().cloned().collect();
+    keys.sort();
+    for key in keys {
+        if let Some(setting) = response.settings.get(&key) {
+            println!(" {} = {}", key, format_setting_value(Some(setting)));
+        }
+    }
+    Ok(())
+}
+
+fn settings_to_json_sandbox(
+    name: &str,
+    response: &openshell_core::proto::GetSandboxConfigResponse,
+) -> serde_json::Value {
+    let policy_source =
+        if response.policy_source == openshell_core::proto::PolicySource::Global as i32 {
+            "global"
+        } else {
+            "sandbox"
+        };
+
+    let mut settings = serde_json::Map::new();
+    let mut keys: Vec<_> = response.settings.keys().cloned().collect();
+    keys.sort();
+    for key in keys {
+        if let Some(setting) = response.settings.get(&key) {
+            let scope = match SettingScope::try_from(setting.scope) {
+                Ok(SettingScope::Global) => "global",
+                Ok(SettingScope::Sandbox) => "sandbox",
+                _ => "unset",
+            };
+            settings.insert(
+                key,
+                serde_json::json!({
+                    "value": format_setting_value(setting.value.as_ref()),
+                    "scope": scope,
+                }),
+            );
+        }
+    }
+
+    serde_json::json!({
+        "sandbox": name,
+        "config_revision": response.config_revision,
+        "policy_source": policy_source,
+        "policy_hash": response.policy_hash,
+        "settings": settings,
+    })
+}
+
+fn settings_to_json_global(
+    response: &openshell_core::proto::GetGatewayConfigResponse,
+) -> serde_json::Value {
+    let mut settings = serde_json::Map::new();
+    let mut keys: Vec<_> = response.settings.keys().cloned().collect();
+    keys.sort();
+    for key in keys {
+        if let Some(setting) = response.settings.get(&key) {
+            settings.insert(key, serde_json::json!(format_setting_value(Some(setting))));
+        }
+    }
+
+    serde_json::json!({
+        "scope": "global",
+        "settings_revision": response.settings_revision,
+        "settings": settings,
+    })
+}
+
+pub async fn gateway_setting_set(
+    server: &str,
+    key: &str,
+    value: &str,
+    yes: bool,
+    tls: &TlsOptions,
+) -> Result<()> {
+    let setting_value = parse_cli_setting_value(key, value)?;
+    confirm_global_setting_takeover(key, yes)?;
+
+    let mut client = grpc_client(server, tls).await?;
+    let response = client
+        .update_config(UpdateConfigRequest {
+            name: String::new(),
+            policy: None,
+            setting_key: key.to_string(),
+            setting_value: Some(setting_value),
+            delete_setting: false,
+            global: true,
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    println!(
+        "{} Set global setting {}={} (revision {})",
+        "✓".green().bold(),
+        key,
+        value,
+        response.settings_revision
+    );
+    Ok(())
+}
+
+pub async fn sandbox_setting_set(
+    server: &str,
+    name: &str,
+    key: &str,
+    value: &str,
+    tls: &TlsOptions,
+) -> Result<()> {
+    let setting_value = parse_cli_setting_value(key, value)?;
+
+    let mut client = grpc_client(server, tls).await?;
+    let response = client
+        .update_config(UpdateConfigRequest {
+            name: name.to_string(),
+            policy: None,
+            setting_key: key.to_string(),
+            setting_value: Some(setting_value),
+            delete_setting: false,
+            global: false,
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    println!(
+        "{} Set sandbox setting {}={} for {} (revision {})",
+        "✓".green().bold(),
+        key,
+        value,
+        name,
+        response.settings_revision
+    );
+    Ok(())
+}
+
+pub async fn gateway_setting_delete(
+    server: &str,
+    key: &str,
+    yes: bool,
+    tls: &TlsOptions,
+) -> Result<()> {
+    confirm_global_setting_delete(key, yes)?;
+
+    let mut client = grpc_client(server, tls).await?;
+    let response = client
+        .update_config(UpdateConfigRequest {
+            name: String::new(),
+            policy: None,
+            setting_key: key.to_string(),
+            setting_value: None,
+            delete_setting: true,
+            global: true,
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    if response.deleted {
+        println!(
+            "{} Deleted global setting {} (revision {})",
+            "✓".green().bold(),
+            key,
+            response.settings_revision
+        );
+    } else {
+        println!("{} Global setting {} not found", "!".yellow(), key);
+    }
+    Ok(())
+}
+
+pub async fn sandbox_setting_delete(
+    server: &str,
+    name: &str,
+    key: &str,
+    tls: &TlsOptions,
+) -> Result<()> {
+    let mut client = grpc_client(server, tls).await?;
+    let response = client
+        .update_config(UpdateConfigRequest {
+            name: name.to_string(),
+            policy: None,
+            setting_key: key.to_string(),
+            setting_value: None,
+            delete_setting: true,
+            global: false,
+        })
+        .await
+        .into_diagnostic()?
+        .into_inner();
+
+    if response.deleted {
+        println!(
+            "{} Deleted sandbox setting {} for {} (revision {})",
+            "✓".green().bold(),
+            key,
+            name,
+            response.settings_revision
+        );
+    } else {
+        println!(
+            "{} Sandbox setting {} not found for {}",
+            "!".yellow(),
+            key,
+            name,
+        );
+    }
+    Ok(())
+}
+
 pub async fn sandbox_policy_set(
     server: &str,
     name: &str,
@@ -3767,6 +4253,7 @@ pub async fn sandbox_policy_set(
         .get_sandbox_policy_status(GetSandboxPolicyStatusRequest {
             name: name.to_string(),
             version: 0,
+            global: false,
         })
         .await
         .ok()
@@ -3774,9 +4261,13 @@ pub async fn sandbox_policy_set(
         .map_or(0, |r| r.version);

     let response = client
-        .update_sandbox_policy(UpdateSandboxPolicyRequest {
+        .update_config(UpdateConfigRequest {
             name: name.to_string(),
             policy: Some(policy),
+            setting_key: String::new(),
+            setting_value: None,
+            delete_setting: false,
+            global: false,
         })
         .await
         .into_diagnostic()?;
@@ -3822,6 +4313,7 @@ pub async fn sandbox_policy_set(
         .get_sandbox_policy_status(GetSandboxPolicyStatusRequest {
             name: name.to_string(),
             version: resp.version,
+            global: false,
         })
         .await
         .into_diagnostic()?;
@@ -3876,6 +4368,7 @@ pub async fn sandbox_policy_get(
         .get_sandbox_policy_status(GetSandboxPolicyStatusRequest {
             name: name.to_string(),
             version,
+            global: false,
         })
         .await
         .into_diagnostic()?;
@@ -3914,6 +4407,54 @@
     Ok(())
 }

+pub async fn sandbox_policy_get_global(
+    server: &str,
+    version: u32,
+    full: bool,
+    tls: &TlsOptions,
+) -> Result<()> {
+    let mut client = grpc_client(server, tls).await?;
+
+    let status_resp = client
+        .get_sandbox_policy_status(GetSandboxPolicyStatusRequest {
+            name: String::new(),
+            version,
+            global: true,
+        })
+        .await
+        .into_diagnostic()?;
+
+    let inner = status_resp.into_inner();
+    if let Some(rev) = inner.revision {
+        let status = PolicyStatus::try_from(rev.status).unwrap_or(PolicyStatus::Unspecified);
+        println!("Scope: global");
+        println!("Version: {}", rev.version);
+        println!("Hash: {}", rev.policy_hash);
+        println!("Status: {status:?}");
+        if rev.created_at_ms > 0 {
+            println!("Created: {} ms", rev.created_at_ms);
+        }
+        if rev.loaded_at_ms > 0 {
+            println!("Loaded: {} ms", rev.loaded_at_ms);
+        }
+
+        if full {
+            if let Some(ref policy) = rev.policy {
+                println!("---");
+                let yaml_str = openshell_policy::serialize_sandbox_policy(policy)
+                    .wrap_err("failed to serialize policy to YAML")?;
+                print!("{yaml_str}");
+            } else {
+                eprintln!("Policy payload not available for this version");
+            }
+        }
+    } else {
+        eprintln!("No global policy history found");
+    }
+
+    Ok(())
+}
+
 pub async fn sandbox_policy_list(
     server: &str,
     name: &str,
@@ -3927,6 +4468,7 @@
             name: name.to_string(),
             limit,
             offset: 0,
+            global: false,
         })
         .await
         .into_diagnostic()?;
@@ -3937,11 +4479,39 @@
         return Ok(());
     }

+    print_policy_revision_table(&revisions);
+    Ok(())
+}
+
+pub async fn sandbox_policy_list_global(server: &str, limit: u32, tls: &TlsOptions) -> Result<()> {
+    let mut client = grpc_client(server, tls).await?;
+
+    let resp = client
+        .list_sandbox_policies(ListSandboxPoliciesRequest {
+            name: String::new(),
+            limit,
+            offset: 0,
+            global: true,
+        })
+        .await
+        .into_diagnostic()?;
+
+    let revisions = resp.into_inner().revisions;
+    if revisions.is_empty() {
+        eprintln!("No global policy history found");
+        return Ok(());
+    }
+
+    print_policy_revision_table(&revisions);
+    Ok(())
+}
+
+fn print_policy_revision_table(revisions: &[openshell_core::proto::SandboxPolicyRevision]) {
     println!(
         "{:<8} {:<14} {:<12} {:<24} ERROR",
         "VERSION", "HASH", "STATUS", "CREATED"
     );
-    for rev in &revisions {
+    for rev in revisions {
         let status = PolicyStatus::try_from(rev.status).unwrap_or(PolicyStatus::Unspecified);
         let hash_short = if rev.policy_hash.len() >= 12 {
             &rev.policy_hash[..12]
         } else {
@@ -3962,8 +4532,6 @@ pub async fn sandbox_policy_list(
             error_short,
         );
     }
-
-    Ok(())
 }

 // ---------------------------------------------------------------------------
@@ -4372,8 +4940,9 @@ mod tests {
         GatewayControlTarget, TlsOptions, format_gateway_select_header,
         format_gateway_select_items, gateway_auth_label, gateway_select_with, gateway_type_label,
         git_sync_files, http_health_check, image_requests_gpu, inferred_provider_type,
-        parse_credential_pairs, provisioning_timeout_message, ready_false_condition_message,
-        resolve_gateway_control_target_from, sandbox_should_persist, source_requests_gpu,
+        parse_cli_setting_value, parse_credential_pairs, provisioning_timeout_message,
+        ready_false_condition_message, resolve_gateway_control_target_from, sandbox_should_persist,
+        source_requests_gpu,
     };
     use crate::TEST_ENV_LOCK;
     use hyper::StatusCode;
@@ -4493,6 +5062,49 @@ mod tests {
         ));
     }

+    #[cfg(feature = "dev-settings")]
+    #[test]
+    fn parse_cli_setting_value_parses_bool_aliases() {
+        let yes_value = parse_cli_setting_value("dummy_bool", "yes").expect("parse yes");
+        assert_eq!(
+            yes_value.value,
+            Some(openshell_core::proto::setting_value::Value::BoolValue(true))
+        );
+
+        let zero_value = parse_cli_setting_value("dummy_bool", "0").expect("parse 0");
+        assert_eq!(
+            zero_value.value,
+            Some(openshell_core::proto::setting_value::Value::BoolValue(
+                false
+            ))
+        );
+    }
+
+    #[cfg(feature = "dev-settings")]
+    #[test]
+    fn parse_cli_setting_value_parses_int_key() {
+        let int_value = parse_cli_setting_value("dummy_int", "42").expect("parse int");
+        assert_eq!(
+            int_value.value,
+            Some(openshell_core::proto::setting_value::Value::IntValue(42))
+        );
+    }
+
+    #[cfg(feature = "dev-settings")]
+    #[test]
+    fn parse_cli_setting_value_rejects_invalid_bool() {
+        let err =
+            parse_cli_setting_value("dummy_bool", "maybe").expect_err("invalid bool should fail");
+        assert!(err.to_string().contains("invalid bool value"));
+    }
+
+    #[test]
+    fn parse_cli_setting_value_rejects_unknown_key() {
+        let err =
+            parse_cli_setting_value("unknown_key", "value").expect_err("unknown key should fail");
+        assert!(err.to_string().contains("unknown setting key"));
+    }
+
     #[test]
     fn inferred_provider_type_returns_type_for_known_command() {
         let result = inferred_provider_type(&["claude".to_string(), "--help".to_string()]);
diff --git a/crates/openshell-cli/tests/ensure_providers_integration.rs b/crates/openshell-cli/tests/ensure_providers_integration.rs
index 659edffd..2cd36202 100644
--- a/crates/openshell-cli/tests/ensure_providers_integration.rs
+++ b/crates/openshell-cli/tests/ensure_providers_integration.rs
@@ -11,12 +11,13 @@ use openshell_core::proto::open_shell_server::{OpenShell, OpenShellServer};
 use openshell_core::proto::{
     CreateProviderRequest, CreateSandboxRequest, CreateSshSessionRequest, CreateSshSessionResponse,
     DeleteProviderRequest, DeleteProviderResponse, DeleteSandboxRequest, DeleteSandboxResponse,
-    ExecSandboxEvent, ExecSandboxRequest, GetProviderRequest, GetSandboxPolicyRequest,
-    GetSandboxPolicyResponse, GetSandboxProviderEnvironmentRequest,
-    GetSandboxProviderEnvironmentResponse, GetSandboxRequest, HealthRequest, HealthResponse,
-    ListProvidersRequest, ListProvidersResponse, ListSandboxesRequest, ListSandboxesResponse,
-    Provider, ProviderResponse, RevokeSshSessionRequest, RevokeSshSessionResponse, SandboxResponse,
-    SandboxStreamEvent, ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
+    ExecSandboxEvent, ExecSandboxRequest, GetGatewayConfigRequest, GetGatewayConfigResponse,
+    GetProviderRequest, GetSandboxConfigRequest, GetSandboxConfigResponse,
+    GetSandboxProviderEnvironmentRequest, GetSandboxProviderEnvironmentResponse, GetSandboxRequest,
+    HealthRequest, HealthResponse, ListProvidersRequest, ListProvidersResponse,
+    ListSandboxesRequest, ListSandboxesResponse, Provider, ProviderResponse,
+    RevokeSshSessionRequest, RevokeSshSessionResponse, SandboxResponse, SandboxStreamEvent,
+    ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
 };
 use rcgen::{
     BasicConstraints, Certificate, CertificateParams, ExtendedKeyUsagePurpose, IsCa, KeyPair,
@@ -153,11 +154,18 @@ impl OpenShell for TestOpenShell {
         Ok(Response::new(DeleteSandboxResponse { deleted: true }))
     }

-    async fn get_sandbox_policy(
+    async fn get_sandbox_config(
         &self,
-        _request: tonic::Request<GetSandboxPolicyRequest>,
-    ) -> Result<Response<GetSandboxPolicyResponse>, Status> {
-        Ok(Response::new(GetSandboxPolicyResponse::default()))
+        _request: tonic::Request<GetSandboxConfigRequest>,
+    ) -> Result<Response<GetSandboxConfigResponse>, Status> {
+        Ok(Response::new(GetSandboxConfigResponse::default()))
+    }
+
+    async fn get_gateway_config(
+        &self,
+        _request: tonic::Request<GetGatewayConfigRequest>,
+    ) -> Result<Response<GetGatewayConfigResponse>, Status> {
+        Ok(Response::new(GetGatewayConfigResponse::default()))
     }

     async fn get_sandbox_provider_environment(
@@ -311,10 +319,10 @@ impl OpenShell for TestOpenShell {
         )))
     }

-    async fn update_sandbox_policy(
+    async fn update_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::UpdateSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::UpdateSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::UpdateConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::UpdateConfigResponse>, Status> {
         Err(Status::unimplemented("not implemented in test"))
     }

diff --git a/crates/openshell-cli/tests/mtls_integration.rs b/crates/openshell-cli/tests/mtls_integration.rs
index 8b238da9..5d04239b 100644
--- a/crates/openshell-cli/tests/mtls_integration.rs
+++ b/crates/openshell-cli/tests/mtls_integration.rs
@@ -108,12 +108,21 @@ impl OpenShell for TestOpenShell {
         ))
     }

-    async fn get_sandbox_policy(
+    async fn get_sandbox_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::GetSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::GetSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::GetSandboxConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::GetSandboxConfigResponse>, Status> {
         Ok(Response::new(
-            openshell_core::proto::GetSandboxPolicyResponse::default(),
+            openshell_core::proto::GetSandboxConfigResponse::default(),
+        ))
+    }
+
+    async fn get_gateway_config(
+        &self,
+        _request: tonic::Request<openshell_core::proto::GetGatewayConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::GetGatewayConfigResponse>, Status> {
+        Ok(Response::new(
+            openshell_core::proto::GetGatewayConfigResponse::default(),
         ))
     }

@@ -212,10 +221,10 @@ impl OpenShell for TestOpenShell {
         )))
     }

-    async fn update_sandbox_policy(
+    async fn update_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::UpdateSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::UpdateSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::UpdateConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::UpdateConfigResponse>, Status> {
         Err(Status::unimplemented("not implemented in test"))
     }

diff --git a/crates/openshell-cli/tests/provider_commands_integration.rs b/crates/openshell-cli/tests/provider_commands_integration.rs
index af7e80a3..c5476afe 100644
--- a/crates/openshell-cli/tests/provider_commands_integration.rs
+++ b/crates/openshell-cli/tests/provider_commands_integration.rs
@@ -7,12 +7,13 @@ use openshell_core::proto::open_shell_server::{OpenShell, OpenShellServer};
 use openshell_core::proto::{
     CreateProviderRequest, CreateSandboxRequest, CreateSshSessionRequest, CreateSshSessionResponse,
     DeleteProviderRequest, DeleteProviderResponse, DeleteSandboxRequest, DeleteSandboxResponse,
-    ExecSandboxEvent, ExecSandboxRequest, GetProviderRequest, GetSandboxPolicyRequest,
-    GetSandboxPolicyResponse, GetSandboxProviderEnvironmentRequest,
-    GetSandboxProviderEnvironmentResponse, GetSandboxRequest, HealthRequest, HealthResponse,
-    ListProvidersRequest, ListProvidersResponse, ListSandboxesRequest, ListSandboxesResponse,
-    Provider, ProviderResponse, RevokeSshSessionRequest, RevokeSshSessionResponse, SandboxResponse,
-    SandboxStreamEvent, ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
+    ExecSandboxEvent, ExecSandboxRequest, GetGatewayConfigRequest, GetGatewayConfigResponse,
+    GetProviderRequest, GetSandboxConfigRequest, GetSandboxConfigResponse,
+    GetSandboxProviderEnvironmentRequest, GetSandboxProviderEnvironmentResponse, GetSandboxRequest,
+    HealthRequest, HealthResponse, ListProvidersRequest, ListProvidersResponse,
+    ListSandboxesRequest, ListSandboxesResponse, Provider, ProviderResponse,
+    RevokeSshSessionRequest, RevokeSshSessionResponse, SandboxResponse, SandboxStreamEvent,
+    ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
 };
 use rcgen::{
     BasicConstraints, Certificate, CertificateParams, ExtendedKeyUsagePurpose, IsCa, KeyPair,
@@ -107,11 +108,18 @@ impl OpenShell for TestOpenShell {
         Ok(Response::new(DeleteSandboxResponse { deleted: true }))
     }

-    async fn get_sandbox_policy(
+    async fn get_sandbox_config(
         &self,
-        _request: tonic::Request<GetSandboxPolicyRequest>,
-    ) -> Result<Response<GetSandboxPolicyResponse>, Status> {
-        Ok(Response::new(GetSandboxPolicyResponse::default()))
+        _request: tonic::Request<GetSandboxConfigRequest>,
+    ) -> Result<Response<GetSandboxConfigResponse>, Status> {
+        Ok(Response::new(GetSandboxConfigResponse::default()))
+    }
+
+    async fn get_gateway_config(
+        &self,
+        _request: tonic::Request<GetGatewayConfigRequest>,
+    ) -> Result<Response<GetGatewayConfigResponse>, Status> {
+        Ok(Response::new(GetGatewayConfigResponse::default()))
     }

     async fn get_sandbox_provider_environment(
@@ -265,10 +273,10 @@ impl OpenShell for TestOpenShell {
         )))
     }

-    async fn update_sandbox_policy(
+    async fn update_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::UpdateSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::UpdateSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::UpdateConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::UpdateConfigResponse>, Status> {
         Err(Status::unimplemented("not implemented in test"))
     }

diff --git a/crates/openshell-cli/tests/sandbox_create_lifecycle_integration.rs b/crates/openshell-cli/tests/sandbox_create_lifecycle_integration.rs
index 9fcfeced..d5d39f08 100644
--- a/crates/openshell-cli/tests/sandbox_create_lifecycle_integration.rs
+++ b/crates/openshell-cli/tests/sandbox_create_lifecycle_integration.rs
@@ -8,13 +8,14 @@ use openshell_core::proto::open_shell_server::{OpenShell, OpenShellServer};
 use openshell_core::proto::{
     CreateProviderRequest, CreateSandboxRequest, CreateSshSessionRequest, CreateSshSessionResponse,
     DeleteProviderRequest, DeleteProviderResponse, DeleteSandboxRequest, DeleteSandboxResponse,
-    ExecSandboxEvent, ExecSandboxRequest, GetProviderRequest, GetSandboxPolicyRequest,
-    GetSandboxPolicyResponse, GetSandboxProviderEnvironmentRequest,
-    GetSandboxProviderEnvironmentResponse, GetSandboxRequest, HealthRequest, HealthResponse,
-    ListProvidersRequest, ListProvidersResponse, ListSandboxesRequest, ListSandboxesResponse,
-    PlatformEvent, ProviderResponse, RevokeSshSessionRequest, RevokeSshSessionResponse, Sandbox,
-    SandboxPhase, SandboxResponse, SandboxStreamEvent, ServiceStatus, UpdateProviderRequest,
-    WatchSandboxRequest, sandbox_stream_event,
+    ExecSandboxEvent, ExecSandboxRequest, GetGatewayConfigRequest, GetGatewayConfigResponse,
+    GetProviderRequest, GetSandboxConfigRequest, GetSandboxConfigResponse,
+    GetSandboxProviderEnvironmentRequest, GetSandboxProviderEnvironmentResponse, GetSandboxRequest,
+    HealthRequest, HealthResponse, ListProvidersRequest, ListProvidersResponse,
+    ListSandboxesRequest, ListSandboxesResponse, PlatformEvent, ProviderResponse,
+    RevokeSshSessionRequest, RevokeSshSessionResponse, Sandbox, SandboxPhase, SandboxResponse,
+    SandboxStreamEvent, ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
+    sandbox_stream_event,
 };
 use rcgen::{
     BasicConstraints, Certificate, CertificateParams, ExtendedKeyUsagePurpose, IsCa, KeyPair,
@@ -156,11 +157,18 @@ impl OpenShell for TestOpenShell {
         Ok(Response::new(DeleteSandboxResponse { deleted: true }))
     }

-    async fn get_sandbox_policy(
+    async fn get_sandbox_config(
         &self,
-        _request: tonic::Request<GetSandboxPolicyRequest>,
-    ) -> Result<Response<GetSandboxPolicyResponse>, Status> {
-        Ok(Response::new(GetSandboxPolicyResponse::default()))
+        _request: tonic::Request<GetSandboxConfigRequest>,
+    ) -> Result<Response<GetSandboxConfigResponse>, Status> {
+        Ok(Response::new(GetSandboxConfigResponse::default()))
+    }
+
+    async fn get_gateway_config(
+        &self,
+        _request: tonic::Request<GetGatewayConfigRequest>,
+    ) -> Result<Response<GetGatewayConfigResponse>, Status> {
+        Ok(Response::new(GetGatewayConfigResponse::default()))
     }

     async fn get_sandbox_provider_environment(
@@ -291,10 +299,10 @@ impl OpenShell for TestOpenShell {
         )))
     }

-    async fn update_sandbox_policy(
+    async fn update_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::UpdateSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::UpdateSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::UpdateConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::UpdateConfigResponse>, Status> {
         Err(Status::unimplemented("not implemented in test"))
     }

diff --git a/crates/openshell-cli/tests/sandbox_name_fallback_integration.rs b/crates/openshell-cli/tests/sandbox_name_fallback_integration.rs
index 3fce5d8d..fbadec4c 100644
--- a/crates/openshell-cli/tests/sandbox_name_fallback_integration.rs
+++ b/crates/openshell-cli/tests/sandbox_name_fallback_integration.rs
@@ -8,12 +8,12 @@ use openshell_core::proto::open_shell_server::{OpenShell, OpenShellServer};
 use openshell_core::proto::{
     CreateProviderRequest, CreateSandboxRequest, CreateSshSessionRequest, CreateSshSessionResponse,
     DeleteProviderRequest, DeleteProviderResponse, DeleteSandboxRequest, DeleteSandboxResponse,
-    ExecSandboxEvent, ExecSandboxRequest, GetProviderRequest, GetSandboxPolicyRequest,
-    GetSandboxPolicyResponse, GetSandboxProviderEnvironmentRequest,
-    GetSandboxProviderEnvironmentResponse, GetSandboxRequest, HealthRequest, HealthResponse,
-    ListProvidersRequest, ListProvidersResponse, ListSandboxesRequest, ListSandboxesResponse,
-    ProviderResponse, Sandbox, SandboxResponse, SandboxStreamEvent, ServiceStatus,
-    UpdateProviderRequest, WatchSandboxRequest,
+    ExecSandboxEvent, ExecSandboxRequest, GetGatewayConfigRequest, GetGatewayConfigResponse,
+    GetProviderRequest, GetSandboxConfigRequest, GetSandboxConfigResponse,
+    GetSandboxProviderEnvironmentRequest, GetSandboxProviderEnvironmentResponse, GetSandboxRequest,
+    HealthRequest, HealthResponse, ListProvidersRequest, ListProvidersResponse,
+    ListSandboxesRequest, ListSandboxesResponse, ProviderResponse, Sandbox, SandboxResponse,
+    SandboxStreamEvent, ServiceStatus, UpdateProviderRequest, WatchSandboxRequest,
 };
 use rcgen::{
     BasicConstraints, Certificate, CertificateParams, ExtendedKeyUsagePurpose, IsCa, KeyPair,
@@ -132,11 +132,18 @@ impl OpenShell for TestOpenShell {
         Ok(Response::new(DeleteSandboxResponse { deleted: true }))
     }

-    async fn get_sandbox_policy(
+    async fn get_sandbox_config(
         &self,
-        _request: tonic::Request<GetSandboxPolicyRequest>,
-    ) -> Result<Response<GetSandboxPolicyResponse>, Status> {
-        Ok(Response::new(GetSandboxPolicyResponse::default()))
+        _request: tonic::Request<GetSandboxConfigRequest>,
+    ) -> Result<Response<GetSandboxConfigResponse>, Status> {
+        Ok(Response::new(GetSandboxConfigResponse::default()))
+    }
+
+    async fn get_gateway_config(
+        &self,
+        _request: tonic::Request<GetGatewayConfigRequest>,
+    ) -> Result<Response<GetGatewayConfigResponse>, Status> {
+        Ok(Response::new(GetGatewayConfigResponse::default()))
     }

     async fn get_sandbox_provider_environment(
@@ -224,10 +231,10 @@ impl OpenShell for TestOpenShell {
         )))
     }

-    async fn update_sandbox_policy(
+    async fn update_config(
         &self,
-        _request: tonic::Request<openshell_core::proto::UpdateSandboxPolicyRequest>,
-    ) -> Result<Response<openshell_core::proto::UpdateSandboxPolicyResponse>, Status> {
+        _request: tonic::Request<openshell_core::proto::UpdateConfigRequest>,
+    ) -> Result<Response<openshell_core::proto::UpdateConfigResponse>, Status> {
         Err(Status::unimplemented("not implemented in test"))
     }

diff --git a/crates/openshell-core/Cargo.toml b/crates/openshell-core/Cargo.toml
index eeedd11a..8bccef54 100644
--- a/crates/openshell-core/Cargo.toml
+++ b/crates/openshell-core/Cargo.toml
@@ -20,6 +20,12 @@ serde = { workspace = true }
 serde_json = { workspace = true }
 url = { workspace = true }

+[features]
+## Include test-only settings (dummy_bool, dummy_int) in the registry.
+## Off by default so production builds have an empty registry.
+## Enabled by e2e tests and during development.
+dev-settings = []
+
 [build-dependencies]
 tonic-build = { workspace = true }
 protobuf-src = { workspace = true }
diff --git a/crates/openshell-core/src/config.rs b/crates/openshell-core/src/config.rs
index 750aa98b..8ff6ac22 100644
--- a/crates/openshell-core/src/config.rs
+++ b/crates/openshell-core/src/config.rs
@@ -1,7 +1,7 @@
 // SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 // SPDX-License-Identifier: Apache-2.0

-//! Configuration management for OpenShell components.
+//! Configuration management for `OpenShell` components.

 use serde::{Deserialize, Serialize};
 use std::net::SocketAddr;
@@ -39,7 +39,7 @@ pub struct Config {
     #[serde(default)]
     pub sandbox_image_pull_policy: String,

-    /// gRPC endpoint for sandboxes to connect back to OpenShell.
+    /// gRPC endpoint for sandboxes to connect back to `OpenShell`.
     /// Used by sandbox pods to fetch their policy at startup.
     #[serde(default)]
     pub grpc_endpoint: String,
diff --git a/crates/openshell-core/src/error.rs b/crates/openshell-core/src/error.rs
index 2399368f..e9d03f11 100644
--- a/crates/openshell-core/src/error.rs
+++ b/crates/openshell-core/src/error.rs
@@ -1,15 +1,15 @@
 // SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 // SPDX-License-Identifier: Apache-2.0

-//! Common error types for OpenShell.
+//! Common error types for `OpenShell`.

 use miette::Diagnostic;
 use thiserror::Error;

-/// Result type alias using OpenShell's error type.
+/// Result type alias using `OpenShell`'s error type.
 pub type Result<T> = std::result::Result<T, Error>;

-/// OpenShell error type.
+/// `OpenShell` error type.
 #[derive(Debug, Error, Diagnostic)]
 pub enum Error {
     /// Configuration error.
diff --git a/crates/openshell-core/src/forward.rs b/crates/openshell-core/src/forward.rs
index c7b63fef..7fcf81c2 100644
--- a/crates/openshell-core/src/forward.rs
+++ b/crates/openshell-core/src/forward.rs
@@ -135,18 +135,17 @@ pub fn pid_matches_forward(pid: u32, port: u16, sandbox_id: Option<&str>) -> boo
 /// match is expected.
 pub fn find_forward_by_port(port: u16) -> Result<Option<String>> {
     let dir = forward_pid_dir()?;
-    let entries = match std::fs::read_dir(&dir) {
-        Ok(e) => e,
-        Err(_) => return Ok(None),
+    let Ok(entries) = std::fs::read_dir(&dir) else {
+        return Ok(None);
     };
     let suffix = format!("-{port}.pid");
     for entry in entries.flatten() {
         let file_name = entry.file_name();
         let file_name = file_name.to_string_lossy();
-        if let Some(name) = file_name.strip_suffix(&suffix) {
-            if !name.is_empty() {
-                return Ok(Some(name.to_string()));
-            }
+        if let Some(name) = file_name.strip_suffix(&suffix)
+            && !name.is_empty()
+        {
+            return Ok(Some(name.to_string()));
         }
     }
     Ok(None)
@@ -671,9 +670,8 @@ mod tests {
         // `python3 -m http.server` which listens on [::] by default. The
         // IPv4-only TcpListener::bind("127.0.0.1", port) might succeed, but
         // lsof should detect the listener and the check should still fail.
-        let listener = match TcpListener::bind("[::]:0") {
-            Ok(l) => l,
-            Err(_) => return, // IPv6 not available, skip
+        let Ok(listener) = TcpListener::bind("[::]:0") else {
+            return; // IPv6 not available, skip
         };
         let port = listener.local_addr().unwrap().port();
diff --git a/crates/openshell-core/src/inference.rs b/crates/openshell-core/src/inference.rs
index a06c427f..626092ae 100644
--- a/crates/openshell-core/src/inference.rs
+++ b/crates/openshell-core/src/inference.rs
@@ -105,17 +105,17 @@ pub fn profile_for(provider_type: &str) -> Option<&'static InferenceProviderProf
 /// need the auth/header information (e.g. the sandbox bundle-to-route
 /// conversion).
 pub fn auth_for_provider_type(provider_type: &str) -> (AuthHeader, Vec<(String, String)>) {
-    match profile_for(provider_type) {
-        Some(profile) => {
+    profile_for(provider_type).map_or_else(
+        || (AuthHeader::Bearer, Vec::new()),
+        |profile| {
             let headers = profile
                 .default_headers
                 .iter()
                 .map(|(k, v)| ((*k).to_string(), (*v).to_string()))
                 .collect();
             (profile.auth.clone(), headers)
-        }
-        None => (AuthHeader::Bearer, Vec::new()),
-    }
+        },
+    )
 }

 // ---------------------------------------------------------------------------
diff --git a/crates/openshell-core/src/lib.rs b/crates/openshell-core/src/lib.rs
index 9cf0d620..681a89f6 100644
--- a/crates/openshell-core/src/lib.rs
+++ b/crates/openshell-core/src/lib.rs
@@ -1,7 +1,7 @@
 // SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 // SPDX-License-Identifier: Apache-2.0

-//! OpenShell Core - shared library for OpenShell components.
+//! `OpenShell` Core - shared library for `OpenShell` components.
 //!
 //! This crate provides:
 //! - Protocol buffer definitions and generated code
@@ -15,6 +15,7 @@ pub mod forward;
 pub mod inference;
 pub mod paths;
 pub mod proto;
+pub mod settings;

 pub use config::{Config, TlsConfig};
 pub use error::{Error, Result};
diff --git a/crates/openshell-core/src/proto/openshell.datamodel.v1.rs b/crates/openshell-core/src/proto/openshell.datamodel.v1.rs
index 6abade7f..310497d1 100644
--- a/crates/openshell-core/src/proto/openshell.datamodel.v1.rs
+++ b/crates/openshell-core/src/proto/openshell.datamodel.v1.rs
@@ -55,8 +55,6 @@ pub struct SandboxTemplate {
         ::std::collections::HashMap<::prost::alloc::string::String, ::prost::alloc::string::String>,
     #[prost(message, optional, tag = "7")]
     pub resources: ::core::option::Option<::prost_types::Struct>,
-    #[prost(message, optional, tag = "8")]
-    pub pod_template: ::core::option::Option<::prost_types::Struct>,
     #[prost(message, optional, tag = "9")]
     pub volume_claim_templates: ::core::option::Option<::prost_types::Struct>,
 }
@@ -100,16 +98,12 @@ pub struct Provider {
     pub r#type: ::prost::alloc::string::String,
     /// Secret values used for authentication.
     #[prost(map = "string, string", tag = "4")]
-    pub credentials: ::std::collections::HashMap<
-        ::prost::alloc::string::String,
-        ::prost::alloc::string::String,
-    >,
+    pub credentials:
+        ::std::collections::HashMap<::prost::alloc::string::String, ::prost::alloc::string::String>,
     /// Non-secret provider configuration.
     #[prost(map = "string, string", tag = "5")]
-    pub config: ::std::collections::HashMap<
-        ::prost::alloc::string::String,
-        ::prost::alloc::string::String,
-    >,
+    pub config:
+        ::std::collections::HashMap<::prost::alloc::string::String, ::prost::alloc::string::String>,
 }
 /// High-level sandbox lifecycle phase.
 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, ::prost::Enumeration)]
diff --git a/crates/openshell-core/src/settings.rs b/crates/openshell-core/src/settings.rs
new file mode 100644
index 00000000..b94c08fc
--- /dev/null
+++ b/crates/openshell-core/src/settings.rs
@@ -0,0 +1,245 @@
+// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+//! Registry for sandbox runtime settings keys and value kinds.
+
+/// Supported value kinds for registered sandbox settings.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum SettingValueKind {
+    String,
+    Int,
+    Bool,
+}
+
+impl SettingValueKind {
+    /// Human-readable value kind used in error messages.
+    #[must_use]
+    pub const fn as_str(self) -> &'static str {
+        match self {
+            Self::String => "string",
+            Self::Int => "int",
+            Self::Bool => "bool",
+        }
+    }
+}
+
+/// Static descriptor for one registered sandbox setting key.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub struct RegisteredSetting {
+    pub key: &'static str,
+    pub kind: SettingValueKind,
+}
+
+/// Static registry of currently-supported runtime settings.
+///
+/// `policy` is intentionally excluded because it is a reserved key handled by
+/// dedicated policy commands and payloads.
+///
+/// # Adding a new setting
+///
+/// 1. Add a [`RegisteredSetting`] entry to this array with the key name and
+///    [`SettingValueKind`].
+/// 2. Recompile `openshell-server` (gateway) and `openshell-sandbox`
+///    (supervisor). No database migration is needed -- new keys are stored in
+///    the existing settings JSON blob.
+/// 3. Add sandbox-side consumption in `openshell-sandbox` to read and act on
+///    the new key from the poll loop's `SettingsPollResult::settings` map.
+/// 4. The key will automatically appear in `settings get` (CLI/TUI) and be
+///    settable via `settings set`. The server validates that only registered
+///    keys are accepted.
+/// 5. Add a unit test in this module's `tests` section to cover the new key.
+pub const REGISTERED_SETTINGS: &[RegisteredSetting] = &[
+    // Production settings go here. Add entries following the steps above.
+    //
+    // Test-only keys live behind the `dev-settings` feature flag so they
+    // don't appear in production builds.
+    #[cfg(feature = "dev-settings")]
+    RegisteredSetting {
+        key: "dummy_int",
+        kind: SettingValueKind::Int,
+    },
+    #[cfg(feature = "dev-settings")]
+    RegisteredSetting {
+        key: "dummy_bool",
+        kind: SettingValueKind::Bool,
+    },
+];
+
+/// Resolve a setting descriptor from the registry by key.
+#[must_use]
+pub fn setting_for_key(key: &str) -> Option<&'static RegisteredSetting> {
+    REGISTERED_SETTINGS.iter().find(|entry| entry.key == key)
+}
+
+/// Return comma-separated registered keys for CLI/API diagnostics.
+#[must_use]
+pub fn registered_keys_csv() -> String {
+    REGISTERED_SETTINGS
+        .iter()
+        .map(|entry| entry.key)
+        .collect::<Vec<_>>()
+        .join(", ")
+}
+
+/// Parse common bool-like string values.
+#[must_use] +pub fn parse_bool_like(raw: &str) -> Option<bool> { + match raw.trim().to_ascii_lowercase().as_str() { + "1" | "true" | "yes" | "y" | "on" => Some(true), + "0" | "false" | "no" | "n" | "off" => Some(false), + _ => None, + } +} + +#[cfg(test)] +mod tests { + use super::{ + REGISTERED_SETTINGS, RegisteredSetting, SettingValueKind, parse_bool_like, + registered_keys_csv, setting_for_key, + }; + + #[cfg(feature = "dev-settings")] + #[test] + fn setting_for_key_returns_dev_entries() { + let setting = setting_for_key("dummy_bool").expect("dummy_bool should be registered"); + assert_eq!(setting.kind, SettingValueKind::Bool); + let setting = setting_for_key("dummy_int").expect("dummy_int should be registered"); + assert_eq!(setting.kind, SettingValueKind::Int); + } + + #[test] + fn setting_for_key_returns_none_for_unknown() { + assert!(setting_for_key("nonexistent_key").is_none()); + } + + #[test] + fn setting_for_key_returns_none_for_reserved_policy() { + // "policy" is intentionally excluded from the registry. 
+ assert!(setting_for_key("policy").is_none()); + } + + // ---- parse_bool_like ---- + + #[test] + fn parse_bool_like_accepts_expected_spellings() { + for raw in ["1", "true", "yes", "on", "Y"] { + assert_eq!(parse_bool_like(raw), Some(true), "expected true for {raw}"); + } + for raw in ["0", "false", "no", "off", "N"] { + assert_eq!( + parse_bool_like(raw), + Some(false), + "expected false for {raw}" + ); + } + } + + #[test] + fn parse_bool_like_case_insensitive() { + assert_eq!(parse_bool_like("TRUE"), Some(true)); + assert_eq!(parse_bool_like("True"), Some(true)); + assert_eq!(parse_bool_like("FALSE"), Some(false)); + assert_eq!(parse_bool_like("False"), Some(false)); + assert_eq!(parse_bool_like("YES"), Some(true)); + assert_eq!(parse_bool_like("NO"), Some(false)); + assert_eq!(parse_bool_like("On"), Some(true)); + assert_eq!(parse_bool_like("Off"), Some(false)); + } + + #[test] + fn parse_bool_like_trims_whitespace() { + assert_eq!(parse_bool_like(" true "), Some(true)); + assert_eq!(parse_bool_like("\tfalse\t"), Some(false)); + assert_eq!(parse_bool_like(" 1 "), Some(true)); + assert_eq!(parse_bool_like(" 0 "), Some(false)); + } + + #[test] + fn parse_bool_like_rejects_unrecognized_values() { + assert_eq!(parse_bool_like("maybe"), None); + assert_eq!(parse_bool_like(""), None); + assert_eq!(parse_bool_like("2"), None); + assert_eq!(parse_bool_like("nope"), None); + assert_eq!(parse_bool_like("yep"), None); + assert_eq!(parse_bool_like("enabled"), None); + assert_eq!(parse_bool_like("disabled"), None); + } + + // ---- REGISTERED_SETTINGS entries ---- + + #[test] + fn registered_settings_have_valid_kinds() { + let valid_kinds = [ + SettingValueKind::String, + SettingValueKind::Int, + SettingValueKind::Bool, + ]; + for entry in REGISTERED_SETTINGS { + assert!( + valid_kinds.contains(&entry.kind), + "registered setting '{}' has unexpected kind {:?}", + entry.key, + entry.kind, + ); + } + } + + #[test] + fn registered_settings_keys_are_nonempty_and_unique() { + 
let mut seen = std::collections::HashSet::new(); + for entry in REGISTERED_SETTINGS { + assert!( + !entry.key.is_empty(), + "registered setting key must not be empty" + ); + assert!( + seen.insert(entry.key), + "duplicate registered setting key '{}'", + entry.key, + ); + } + } + + #[test] + fn registered_settings_excludes_policy() { + assert!( + !REGISTERED_SETTINGS.iter().any(|e| e.key == "policy"), + "policy must not appear in REGISTERED_SETTINGS" + ); + } + + #[test] + fn registered_keys_csv_contains_all_keys() { + let csv = registered_keys_csv(); + for entry in REGISTERED_SETTINGS { + assert!( + csv.contains(entry.key), + "registered_keys_csv() missing '{}'", + entry.key, + ); + } + } + + // ---- SettingValueKind::as_str ---- + + #[test] + fn setting_value_kind_as_str_returns_expected_labels() { + assert_eq!(SettingValueKind::String.as_str(), "string"); + assert_eq!(SettingValueKind::Int.as_str(), "int"); + assert_eq!(SettingValueKind::Bool.as_str(), "bool"); + } + + // ---- RegisteredSetting structural ---- + + #[test] + fn registered_setting_derives_debug_clone_eq() { + let a = RegisteredSetting { + key: "test", + kind: SettingValueKind::Bool, + }; + let b = a; + assert_eq!(a, b); + // Debug is exercised implicitly by format! + let _ = format!("{a:?}"); + } +} diff --git a/crates/openshell-ocsf/Cargo.toml b/crates/openshell-ocsf/Cargo.toml new file mode 100644 index 00000000..14cc93ba --- /dev/null +++ b/crates/openshell-ocsf/Cargo.toml @@ -0,0 +1,25 @@ +# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+# SPDX-License-Identifier: Apache-2.0 + +[package] +name = "openshell-ocsf" +description = "OCSF v1.7.0 event types, formatters, and tracing layers for OpenShell sandbox logging" +version.workspace = true +edition.workspace = true +rust-version.workspace = true +license.workspace = true +repository.workspace = true + +[dependencies] +chrono = { version = "0.4", features = ["serde"] } +serde = { workspace = true } +serde_json = { workspace = true } +serde_repr = "0.1" +tracing = { workspace = true } +tracing-subscriber = { workspace = true } + +[dev-dependencies] +tracing-subscriber = { workspace = true, features = ["env-filter", "json"] } + +[lints] +workspace = true diff --git a/crates/openshell-ocsf/schemas/ocsf/README.md b/crates/openshell-ocsf/schemas/ocsf/README.md new file mode 100644 index 00000000..520325a0 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/README.md @@ -0,0 +1,53 @@ +# Vendored OCSF Schemas + +These schemas are vendored from the [OCSF Schema Server](https://schema.ocsf.io/) +for offline test validation. 
+ +## Version + +- **OCSF v1.7.0** — fetched from `https://schema.ocsf.io/api/1.7.0/` + +## Contents + +### Classes (8) + +- `network_activity` [4001] +- `http_activity` [4002] +- `ssh_activity` [4007] +- `process_activity` [1007] +- `detection_finding` [2004] +- `application_lifecycle` [6002] +- `device_config_state_change` [5019] +- `base_event` [0] + +### Objects (17) + +- `metadata`, `network_endpoint`, `network_proxy`, `process`, `actor` +- `device`, `container`, `product`, `firewall_rule`, `finding_info` +- `evidences`, `http_request`, `http_response`, `url`, `attack` +- `remediation`, `connection_info` + +## Updating + +To update to a new OCSF version: + +```bash +VERSION=1.7.0 + +for class in network_activity http_activity ssh_activity process_activity \ + detection_finding application_lifecycle device_config_state_change base_event; do + curl -s "https://schema.ocsf.io/api/${VERSION}/classes/${class}" \ + | python3 -m json.tool > "classes/${class}.json" +done + +for object in metadata network_endpoint network_proxy process actor device \ + container product firewall_rule finding_info evidences \ + http_request http_response url attack remediation connection_info; do + curl -s "https://schema.ocsf.io/api/${VERSION}/objects/${object}" \ + | python3 -m json.tool > "objects/${object}.json" +done + +echo "${VERSION}" > VERSION +``` + +Then update `OCSF_VERSION` in `crates/openshell-ocsf/src/lib.rs` to match. 
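Step 3 of the settings registry's "Adding a new setting" instructions (sandbox-side consumption from the poll loop's `SettingsPollResult::settings` map) is described but not shown in this diff. The following is a minimal, self-contained sketch of that step; it assumes the settings map behaves like a `HashMap<String, String>` and uses the `dummy_bool` dev key as the example. The `parse_bool_like` copy here mirrors the function added in `openshell-core/src/settings.rs`; the map shape and default fallback are illustrative assumptions, not the actual supervisor code.

```rust
use std::collections::HashMap;

// Mirrors `parse_bool_like` from `openshell-core/src/settings.rs`.
fn parse_bool_like(raw: &str) -> Option<bool> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "1" | "true" | "yes" | "y" | "on" => Some(true),
        "0" | "false" | "no" | "n" | "off" => Some(false),
        _ => None,
    }
}

fn main() {
    // Hypothetical stand-in for SettingsPollResult::settings as seen by
    // the poll loop; the real type may differ.
    let mut settings: HashMap<String, String> = HashMap::new();
    settings.insert("dummy_bool".to_string(), "on".to_string());

    // Sandbox-side consumption: read the registered key, coerce it, and
    // fall back to a default when absent or unparseable.
    let enabled = settings
        .get("dummy_bool")
        .and_then(|raw| parse_bool_like(raw))
        .unwrap_or(false);

    println!("{enabled}"); // prints "true"
}
```

Falling back to a default on a missing or malformed value (rather than erroring) keeps the poll loop resilient to settings written by a newer gateway than the supervisor understands.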
diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/VERSION b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/VERSION new file mode 100644 index 00000000..bd8bf882 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/VERSION @@ -0,0 +1 @@ +1.7.0 diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/application_lifecycle.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/application_lifecycle.json new file mode 100644 index 00000000..6cbaf2e6 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/application_lifecycle.json @@ -0,0 +1,1047 @@ +{ + "attributes": [ + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. 
For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. 
Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "app": { + "type": "object_t", + "description": "The application that was affected by the lifecycle event. 
This also applies to self-updating application systems.", + "group": "primary", + "requirement": "required", + "caption": "Application", + "object_name": "Product", + "object_type": "product", + "_source": "application_lifecycle" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is not mapped. 
See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + 
"description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The event category name, as defined by category_uid value: Application Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + 
"description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "600203": { + "description": "Start the application.", + "caption": "Application Lifecycle: Start" + }, + "600206": { + "description": "Enable the application.", + "caption": "Application Lifecycle: Enable" + }, + "600200": { + "caption": "Application Lifecycle: Unknown" + }, + "600201": { + "description": "Install the application.", + "caption": "Application Lifecycle: Install" + }, + "600202": { + "description": "Remove the application.", + "caption": "Application Lifecycle: Remove" + }, + "600299": { + "caption": "Application Lifecycle: Other" + }, + "600204": { + "description": "Stop the application.", + "caption": "Application Lifecycle: Stop" + }, + "600205": { + "description": "Restart the application.", + "caption": "Application Lifecycle: Restart" + }, + "600207": { + "description": "Disable the application.", + "caption": "Application Lifecycle: Disable" + }, + "600208": { + "description": "Update the application.", + "caption": "Application Lifecycle: Update" + } + }, + "description": "The event/finding type ID. 
It identifies the event's semantics and structure. The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "application_lifecycle" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "6": { + "description": "Application Activity events report detailed information about the behavior of applications and services.", + "uid": 6, + "caption": "Application Activity" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "application_lifecycle" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { 
+ "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Application Lifecycle.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time 
period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": "host", + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "base_event" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. 
See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Start the application.", + "caption": "Start" + }, + "6": { + "description": "Enable the application.", + "caption": "Enable" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Install the application.", + "caption": "Install" + }, + "2": { + "description": "Remove the application.", + "caption": "Remove" + }, + "99": { + "description": "The event activity is not mapped. 
See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Stop the application.", + "caption": "Stop" + }, + "5": { + "description": "Restart the application.", + "caption": "Restart" + }, + "7": { + "description": "Disable the application.", + "caption": "Disable" + }, + "8": { + "description": "Update the application.", + "caption": "Update" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "application_lifecycle", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "6002": { + "description": "Application Lifecycle events report installation, removal, start, stop of an application or service.", + "caption": "Application Lifecycle" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "application_lifecycle" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. 
Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "application_lifecycle", + "description": "Application Lifecycle events report installation, removal, start, stop of an application or service.", + "uid": 6002, + "extends": "application", + "category": "application", + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 6, + "caption": "Application Lifecycle", + "category_name": "Application Activity" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/base_event.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/base_event.json new file mode 100644 index 00000000..b9be86a7 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/base_event.json @@ -0,0 +1,964 @@ +{ + "attributes": [ + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. 
In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. 
For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. 
Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. 
See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. 
Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + "description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + 
"caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate 
event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The event category name, as defined by category_uid value.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "0": { + "caption": "Base Event: Unknown" + }, + "99": { + "caption": "Base Event: Other" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. 
The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "base_event" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "0": { + "caption": "Uncategorized" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "base_event" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Base Event.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time period, or 
the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": "host", + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "base_event" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. 
See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "99": { + "description": "The event activity is not mapped. 
See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "base_event", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "0": { + "description": "The base event is a generic and concrete event. It also defines a set of attributes available in most event classes. As a generic event that does not belong to any event category, it could be used to log events that are not otherwise defined by the schema.", + "caption": "Base Event" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "base_event" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "base_event", + "description": "The base event is a generic and concrete event. 
It also defines a set of attributes available in most event classes. As a generic event that does not belong to any event category, it could be used to log events that are not otherwise defined by the schema.", + "uid": 0, + "category": "other", + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control" + ], + "category_uid": 0, + "caption": "Base Event" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/detection_finding.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/detection_finding.json new file mode 100644 index 00000000..256f4db9 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/detection_finding.json @@ -0,0 +1,1397 @@ +{ + "attributes": [ + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "risk_level": { + "profile": null, + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "detection_finding", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time of the least recent event included in the finding.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "finding" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "anomaly_analyses": { + "type": "object_t", + "description": "Describes baseline information about normal activity patterns, along with any detected deviations or anomalies that triggered this finding.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Anomaly Analyses", + "object_name": "Anomaly Analysis", + "object_type": "anomaly_analysis", + "_source": "detection_finding" + } + }, + { + "confidence": { + "profile": null, + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "detection_finding", + "_sibling_of": "confidence_id" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "impact_score": { + "profile": null, + "type": "integer_t", + "description": "The impact as an integer value of the finding, valid range 0-100.", + "group": "context", + "requirement": "optional", + "caption": "Impact Score", + "type_name": "Integer", + "_source": "detection_finding" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. 
The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "is_suspected_breach": { + "profile": "incident", + "type": "boolean_t", + "description": "A determination based on analytics as to whether a potential breach was 
found.", + "group": "context", + "requirement": "optional", + "caption": "Suspected Breach", + "type_name": "Boolean", + "_source": "finding" + } + }, + { + "priority": { + "profile": "incident", + "type": "string_t", + "description": "The priority, normalized to the caption of the priority_id value. In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Priority", + "type_name": "String", + "_source": "finding", + "_sibling_of": "priority_id" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": null, + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. 
See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "detection_finding", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "ticket": { + "profile": "incident", + "type": "object_t", + "description": "The linked ticket in the ticketing system.", + "group": "context", + "requirement": "optional", + "caption": "Ticket", + "object_name": "Ticket", + "object_type": "ticket", + "@deprecated": { + "message": "Use tickets instead.", + "since": "1.5.0" + }, + "_source": "finding" + } + }, + { + "tickets": { + "profile": "incident", + "type": "object_t", + "description": "The associated ticket(s) in the ticketing system. Each ticket contains details like ticket ID, status, etc.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Tickets", + "object_name": "Ticket", + "object_type": "ticket", + "_source": "finding" + } + }, + { + "vendor_attributes": { + "type": "object_t", + "description": "The Vendor Attributes object can be used to represent values of attributes populated by the Vendor/Finding Provider. It can help distinguish between the vendor-provided values and consumer-updated values, of key attributes like severity_id.
The original finding producer should not populate this object. It should be populated by consuming systems that support data mutability.", + "group": "context", + "requirement": "optional", + "caption": "Vendor Attributes", + "object_name": "Vendor Attributes", + "object_type": "vendor_attributes", + "_source": "finding" + } + }, + { + "priority_id": { + "profile": "incident", + "type": "integer_t", + "enum": { + "3": { + "description": "Critical functionality or network access is interrupted, degraded or unusable, having a severe impact on services availability. No acceptable alternative is possible.", + "caption": "High" + }, + "0": { + "description": "No priority is assigned.", + "caption": "Unknown" + }, + "1": { + "description": "Application or personal procedure is unusable, where a workaround is available or a repair is possible.", + "caption": "Low" + }, + "2": { + "description": "Non-critical function or procedure is unusable or hard to use causing operational disruptions with no direct impact on a service's availability. A workaround is available.", + "caption": "Medium" + }, + "99": { + "description": "The priority is not normalized.", + "caption": "Other" + }, + "4": { + "description": "Interruption making a critical functionality inaccessible or a complete network interruption causing a severe impact on services availability. There is no possible alternative.", + "caption": "Critical" + } + }, + "description": "The normalized priority. Priority identifies the relative importance of the incident or finding. 
It is a measurement of urgency.", + "group": "context", + "requirement": "recommended", + "caption": "Priority ID", + "type_name": "Integer", + "sibling": "priority", + "_source": "finding" + } + }, + { + "vulnerabilities": { + "type": "object_t", + "description": "Describes vulnerabilities reported in a Detection Finding.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Vulnerabilities", + "object_name": "Vulnerability Details", + "object_type": "vulnerability", + "_source": "detection_finding" + } + }, + { + "risk_details": { + "profile": null, + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "detection_finding" + } + }, + { + "remediation": { + "type": "object_t", + "description": "Describes the recommended remediation steps to address identified issue(s).", + "group": "context", + "requirement": "optional", + "caption": "Remediation Guidance", + "object_name": "Remediation", + "object_type": "remediation", + "_source": "detection_finding" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is not mapped. 
See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + 
"description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The time of the most recent event included in the finding.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "finding" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The event category name, as defined by category_uid value: Findings.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event 
schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": null, + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. For example, an activity_id of 'Create' could constitute an alertable signal and the value would be true, while 'Close' likely would not and either omit the attribute or set its value to false. Note that other events with the security_control profile may also be deemed alertable signals and may also carry is_alert = true attributes.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "detection_finding" + } + }, + { + "assignee_group": { + "profile": "incident", + "type": "object_t", + "description": "The details of the group assigned to an Incident.", + "group": "context", + "requirement": "optional", + "caption": "Assignee Group", + "object_name": "Group", + "object_type": "group", + "_source": "finding" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "200403": { + "description": "A finding was closed.", + "caption": "Detection Finding: Close" + }, + "200400": { + "caption": "Detection Finding: Unknown" + }, + "200401": { + "description": "A finding was created.", + "caption": "Detection Finding: Create" + }, + "200402": { + "description": "A finding was updated.", + "caption": "Detection Finding: Update" + }, + "200499": { + "caption": "Detection Finding: Other" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. 
The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "detection_finding" + } + }, + { + "confidence_id": { + "profile": null, + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "detection_finding" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "2": { + "description": "Findings events report findings, detections, and possible resolutions of malware, anomalies, or other actions performed by security products.", + "uid": 2, + "caption": "Findings" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "detection_finding" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": 
"The normalized status of the Finding set by the consumer normalized to the caption of the status_id value. In the case of 'Other', it is defined by the source.", + "group": "context", + "requirement": "optional", + "caption": "Status", + "type_name": "String", + "_source": "finding", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "assignee": { + "profile": "incident", + "type": "object_t", + "description": "The details of the user assigned to an Incident.", + "group": "context", + "requirement": "optional", + "caption": "Assignee", + "object_name": "User", + "object_type": "user", + "_source": "finding" + } + }, + { + "malware": { + "profile": null, + "type": "object_t", + "description": "Describes malware reported in a Detection Finding.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "detection_finding" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "src_url": { + "profile": "incident", + "type": "url_t", + "description": "A Url link used to access the original incident.", + "group": "primary", + "requirement": "recommended", + "caption": "Source URL", + "type_name": "URL String", + "_source": "finding" + } + }, + { + "verdict": { + "profile": "incident", + "type": "string_t", + "description": "The verdict assigned to an Incident finding.", + "group": "primary", + "requirement": 
"recommended", + "caption": "Verdict", + "type_name": "String", + "_source": "finding", + "_sibling_of": "verdict_id" + } + }, + { + "impact_id": { + "profile": null, + "type": "integer_t", + "enum": { + "3": { + "description": "The magnitude of harm is high.", + "caption": "High" + }, + "0": { + "description": "The normalized impact is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The magnitude of harm is low.", + "caption": "Low" + }, + "2": { + "description": "The magnitude of harm is moderate.", + "caption": "Medium" + }, + "99": { + "description": "The impact is not mapped. See the impact attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The magnitude of harm is high and the scope is widespread.", + "caption": "Critical" + } + }, + "description": "The normalized impact of the incident or finding. Per NIST, this is the magnitude of harm that can be expected to result from the consequences of unauthorized disclosure, modification, destruction, or loss of information or information system availability.", + "group": "context", + "source": "impact value; impact level", + "references": [ + { + "description": "NIST SP 800-172 from FIPS 199", + "url": "https://doi.org/10.6028/NIST.FIPS.199" + }, + { + "description": "NIST Computer Security Resource Center", + "url": "https://doi.org/10.6028/NIST.FIPS.199" + } + ], + "requirement": "optional", + "caption": "Impact ID", + "type_name": "Integer", + "sibling": "impact", + "_source": "detection_finding" + } + }, + { + "confidence_score": { + "profile": null, + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "detection_finding" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated 
with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "The Finding was reviewed, determined to be benign or a false positive and is now suppressed.", + "caption": "Suppressed" + }, + "6": { + "description": "The Finding was deleted. For example, it might have been created in error.", + "caption": "Deleted" + }, + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The Finding is new and yet to be reviewed.", + "caption": "New" + }, + "2": { + "description": "The Finding is under review.", + "caption": "In Progress" + }, + "99": { + "description": "The status is not mapped. 
See the status attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The Finding was reviewed, remediated and is now considered resolved.", + "caption": "Resolved" + }, + "5": { + "description": "The Finding was archived.", + "caption": "Archived" + } + }, + "description": "The normalized status identifier of the Finding, set by the consumer.", + "group": "context", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "finding" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Detection Finding.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "comment": { + "type": "string_t", + "description": "A user provided comment about the finding.", + "group": "context", + "requirement": "optional", + "caption": "Comment", + "type_name": "String", + "_source": "finding" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "evidences": { + "type": "object_t", + "description": "Describes various evidence artifacts associated to the activity/activities that triggered a security detection.", + "group": "primary", + "is_array": true, + "requirement": "recommended", + "caption": "Evidence Artifacts", + "object_name": "Evidence Artifacts", + 
"object_type": "evidences", + "_source": "detection_finding" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time of the most recent event included in the finding.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "finding" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": null, + "type": "object_t", + "description": "Describes the affected device/host. If applicable, it can be used in conjunction with Resource(s).
e.g. Specific details about an AWS EC2 instance, that is affected by the Finding.
", + "group": "context", + "requirement": "optional", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "finding" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "
The normalized identifier of the event/finding severity.
The normalized severity is a measurement the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "A finding was closed.", + "caption": "Close" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A finding was created.", + "caption": "Create" + }, + "2": { + "description": "A finding was updated.", + "caption": "Update" + }, + "99": { + "description": "The event activity is not mapped. 
See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the finding activity.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "finding", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": null, + "type": "object_t", + "description": "Describes details about malware scan job that triggered this Detection Finding.", + "group": "context", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "detection_finding" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "2004": { + "description": "A Detection Finding describes detections or alerts generated by security products using correlation engines, detection engines or other methodologies. Note: if the event producer is a security control, the security_control profile should be applied and its attacks information, if present, should be duplicated into the finding_info object.
Note: If the Finding is an incident, i.e. requires incident workflow, also apply the incident profile or aggregate this finding into an Incident Finding.", + "caption": "Detection Finding" + } + }, + "description": "The unique identifier of a class. A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "detection_finding" + } + }, + { + "finding_info": { + "type": "object_t", + "description": "Describes the supporting information about a generated finding.", + "group": "primary", + "requirement": "required", + "caption": "Finding Information", + "object_name": "Finding Information", + "object_type": "finding_info", + "_source": "finding" + } + }, + { + "risk_score": { + "profile": null, + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "detection_finding" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "impact": { + "profile": null, + "type": "string_t", + "description": "The impact , normalized to the caption of the impact_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Impact", + "type_name": "String", + "_source": "detection_finding", + "_sibling_of": "impact_id" + } + }, + { + "resources": { + "type": "object_t", + "description": "Describes details about resources that were the target of the activity that triggered the finding.", + "group": "context", + "is_array": true, + "requirement": "recommended", + "caption": "Affected Resources", + "object_name": "Resource Details", + "object_type": "resource_details", + "_source": "detection_finding" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The finding activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "finding", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. 
Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "verdict_id": { + "profile": "incident", + "type": "integer_t", + "enum": { + "3": { + "description": "The incident can be disregarded as it is unimportant, an error or accident.", + "caption": "Disregard" + }, + "6": { + "description": "The incident is a test.", + "caption": "Test" + }, + "0": { + "description": "The type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The incident is a false positive.", + "caption": "False Positive" + }, + "2": { + "description": "The incident is a true positive.", + "caption": "True Positive" + }, + "99": { + "description": "The type is not mapped. 
See the type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The incident is suspicious.", + "caption": "Suspicious" + }, + "5": { + "description": "The incident is benign.", + "caption": "Benign" + }, + "7": { + "description": "The incident has insufficient data to make a verdict.", + "caption": "Insufficient Data" + }, + "8": { + "description": "The incident is a security risk.", + "caption": "Security Risk" + }, + "9": { + "description": "The incident remediation or required actions are managed externally.", + "caption": "Managed Externally" + }, + "10": { + "description": "The incident is a duplicate.", + "caption": "Duplicate" + } + }, + "description": "The normalized verdict of an Incident.", + "group": "primary", + "requirement": "recommended", + "caption": "Verdict ID", + "type_name": "Integer", + "sibling": "verdict", + "_source": "finding" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The time of the least recent event included in the finding.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "finding" + } + } + ], + "name": "detection_finding", + "description": "A Detection Finding describes detections or alerts generated by security products using correlation engines, detection engines or other methodologies. Note: if the event producer is a security control, the security_control profile should be applied and its attacks information, if present, should be duplicated into the finding_info object.
Note: If the Finding is an incident, i.e. requires incident workflow, also apply the incident profile or aggregate this finding into an Incident Finding.", + "uid": 2004, + "extends": "finding", + "category": "findings", + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "incident", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 2, + "caption": "Detection Finding", + "category_name": "Findings" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/device_config_state_change.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/device_config_state_change.json new file mode 100644 index 00000000..9fda4882 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/device_config_state_change.json @@ -0,0 +1,1137 @@ +{ + "attributes": [ + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "prev_security_level": { + "type": "string_t", + "description": "The previous security level of the entity", + "group": "primary", + "requirement": "recommended", + "caption": "Previous Security Level", + "type_name": "String", + "_source": "device_config_state_change", + "_sibling_of": "prev_security_level_id" + } + }, + { + "security_level_id": { + "type": "integer_t", + "enum": { + "3": { + "caption": "Compromised" + }, + "0": { + "caption": "Unknown" + }, + "1": { + "caption": "Secure" + }, + "2": { + "caption": "At Risk" + }, + "99": { + "description": "The security level is not mapped. 
See the security_level attribute, which contains data source specific values.", + "caption": "Other" + } + }, + "description": "The current security level of the entity", + "group": "primary", + "requirement": "recommended", + "caption": "Security Level ID", + "type_name": "Integer", + "sibling": "security_level", + "_source": "device_config_state_change" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. 
The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. 
Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. 
See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. 
Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + "description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + 
"caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate 
event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The event category name, as defined by category_uid value: Discovery.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. 
Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "501900": { + "caption": "Device Config State Change: Unknown" + }, + "501901": { + "description": "The discovered information is via a log.", + "caption": "Device Config State Change: Log" + }, + "501902": { + "description": "The discovered information is via a collection process.", + "caption": "Device Config State Change: Collect" + }, + "501999": { + "caption": "Device Config State Change: Other" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "device_config_state_change" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. 
A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "security_states": { + "type": "object_t", + "description": "The current security states of the device.", + "group": "primary", + "is_array": true, + "requirement": "recommended", + "caption": "Security States", + "object_name": "Security State", + "object_type": "security_state", + "_source": "device_config_state_change" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "5": { + "description": "Discovery events report the existence and state of devices, files, configurations, processes, registry keys, and other objects.", + "uid": 5, + "caption": "Discovery" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "device_config_state_change" + } + }, + { + "prev_security_level_id": { + "type": "integer_t", + "enum": { + "3": { + "caption": "Compromised" + }, + "0": { + "caption": "Unknown" + }, + "1": { + "caption": "Secure" + }, + "2": { + "caption": "At Risk" + }, + "99": { + "description": "The security level is not mapped. 
See the prev_security_level attribute, which contains data source specific values.", + "caption": "Other" + } + }, + "description": "The previous security level of the entity", + "group": "primary", + "requirement": "recommended", + "caption": "Previous Security Level ID", + "type_name": "Integer", + "sibling": "prev_security_level", + "_source": "device_config_state_change" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "state_id": 
{ + "type": "integer_t", + "enum": { + "0": { + "description": "The Config Change state is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Config State Changed to Disabled.", + "caption": "Disabled" + }, + "2": { + "description": "Config State Changed to Enabled.", + "caption": "Enabled" + }, + "99": { + "description": "The Config Change is not mapped. See the state attribute, which contains data source specific values.", + "caption": "Other" + } + }, + "description": "The Config Change State of the managed entity.", + "requirement": "recommended", + "caption": "Config Change State ID", + "type_name": "Integer", + "sibling": "state", + "_source": "device_config_state_change" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Device Config State Change.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "state": { + "type": "string_t", + "description": "The Config Change Stat, normalized to the caption of 
the state_id value. In the case of 'Other', it is defined by the source.", + "requirement": "optional", + "caption": "Config Change State", + "type_name": "String", + "_source": "device_config_state_change", + "_sibling_of": "state_id" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": null, + "type": "object_t", + "description": "The device that is impacted by the state change.", + "group": "primary", + "requirement": "required", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "device_config_state_change" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. 
No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The discovered information is via a log.", + "caption": "Log" + }, + "2": { + "description": "The discovered information is via a collection process.", + "caption": "Collect" + }, + "99": { + "description": "The event activity is not mapped. 
See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "discovery", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "5019": { + "description": "Device Config State Change events report state changes that impact the security of the device.", + "caption": "Device Config State Change" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "device_config_state_change" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": null, + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. 
Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "context", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "device_config_state_change" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "security_level": { + "type": "string_t", + "description": "The current security level of the entity", + "group": "primary", + "requirement": "recommended", + "caption": "Security Level", + "type_name": "String", + "_source": "device_config_state_change", + "_sibling_of": "security_level_id" + } + }, + { + "prev_security_states": { + "type": "object_t", + "description": "The previous security states of the device.", + "group": "primary", + "is_array": true, + "requirement": "recommended", + "caption": "Previous Security States", + "object_name": "Security State", + "object_type": "security_state", + "_source": "device_config_state_change" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "device_config_state_change", + "description": "Device Config State Change events report state changes that impact the security of the device.", + "uid": 5019, + "extends": "discovery", + "category": "discovery", + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 5, + "caption": "Device Config State Change", + "category_name": "Discovery" +} diff --git 
a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/http_activity.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/http_activity.json new file mode 100644 index 00000000..55a8760d --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/http_activity.json @@ -0,0 +1,1380 @@ +{ + "attributes": [ + { + "proxy_http_request": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Request from the proxy server to the remote server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Request", + "object_name": "HTTP Request", + "object_type": "http_request", + "_source": "network" + } + }, + { + "proxy_endpoint": { + "profile": "network_proxy", + "type": "object_t", + "description": "The proxy (server) in a network connection.", + "group": "context", + "requirement": "optional", + "caption": "Proxy Endpoint", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "_source": "network" + } + }, + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "observation_point": { + "type": "string_t", + "description": "Indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity. 
The value is normalized to the caption of the observation_point_id.", + "requirement": "optional", + "caption": "Observation Point", + "type_name": "String", + "_source": "network", + "_sibling_of": "observation_point_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "observation_point_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Neither the source nor destination network endpoint is the observation point.", + "caption": "Neither" + }, + "0": { + "description": "The observation point is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The source network endpoint is the observation point.", + "caption": "Source" + }, + "2": { + "description": "The destination network endpoint is the observation point.", + "caption": "Destination" + }, + "99": { + "description": "The observation point is not mapped. See the observation_point attribute for a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Both the source and destination network endpoint are the observation point. This typically occurs in localhost or internal communications where the source and destination are the same endpoint, often resulting in a connection_info.direction of Local.", + "caption": "Both" + } + }, + "description": "The normalized identifier of the observation point. The observation point identifier indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity.", + "requirement": "optional", + "caption": "Observation Point ID", + "type_name": "Integer", + "sibling": "observation_point", + "_source": "network" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. 
For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "connection_info": { + "type": "object_t", + "description": "The network connection information.", + "group": "primary", + "requirement": "recommended", + "caption": "Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "network" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. 
The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "ja4_fingerprint_list": { + "type": "object_t", + "description": "A list of the JA4+ network fingerprints.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "JA4+ Fingerprints", + "object_name": "JA4+ Fingerprint", + "object_type": "ja4_fingerprint", + "_source": "network" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": 
"base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + 
"99": { + "description": "The disposition is not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block 
it.", + "caption": "Count" + }, + "21": { + "description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "dst_endpoint": { + "type": "object_t", + "description": "The responder (server) in a network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Destination Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": 
"The event category name, as defined by category_uid value: Network Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "http_status": { + "type": "integer_t", + "description": "The Hypertext Transfer Protocol (HTTP) status code returned to the client.", + "group": "primary", + "requirement": "recommended", + "caption": "HTTP Status", + "type_name": "Integer", + "@deprecated": { + "message": "Use the http_response.code attribute instead.", + "since": "1.1.0" + }, + "_source": "http_activity" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "400203": { + "description": "The GET method requests a representation of the specified resource. 
Requests using GET should only retrieve data.", + "caption": "HTTP Activity: Get" + }, + "400206": { + "description": "The POST method submits an entity to the specified resource, often causing a change in state or side effects on the server.", + "caption": "HTTP Activity: Post" + }, + "400200": { + "caption": "HTTP Activity: Unknown" + }, + "400201": { + "description": "The CONNECT method establishes a tunnel to the server identified by the target resource.", + "caption": "HTTP Activity: Connect" + }, + "400202": { + "description": "The DELETE method deletes the specified resource.", + "caption": "HTTP Activity: Delete" + }, + "400299": { + "caption": "HTTP Activity: Other" + }, + "400204": { + "description": "The HEAD method asks for a response identical to a GET request, but without the response body.", + "caption": "HTTP Activity: Head" + }, + "400205": { + "description": "The OPTIONS method describes the communication options for the target resource.", + "caption": "HTTP Activity: Options" + }, + "400207": { + "description": "The PUT method replaces all current representations of the target resource with the request payload.", + "caption": "HTTP Activity: Put" + }, + "400208": { + "description": "The TRACE method performs a message loop-back test along the path to the target resource.", + "caption": "HTTP Activity: Trace" + }, + "400209": { + "description": "The PATCH method applies partial modifications to a resource.", + "caption": "HTTP Activity: Patch" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. 
The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "http_activity" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "4": { + "description": "Network Activity events.", + "uid": 4, + "caption": "Network Activity" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "http_activity" + } + }, + { + "proxy_traffic": { + "profile": "network_proxy", + "type": "object_t", + "description": "The network traffic refers to the amount of data moving across a network, from proxy to remote server at a given point of time.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "time": { + "type": 
"timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "load_balancer": { + "profile": "load_balancer", + "type": "object_t", + "description": "The Load Balancer object contains information related to the device that is distributing incoming traffic to specified destinations.", + "group": "primary", + "requirement": "recommended", + "caption": "Load Balancer", + "object_name": "Load Balancer", + "object_type": "load_balancer", + "_source": "network" + } + }, + { + "app_name": { + "type": "string_t", + "description": "The name of the application associated with the event or object.", + "group": "context", + "requirement": "optional", + "caption": "Application Name", + "type_name": "String", + "_source": "network" + } + }, + { + "src_endpoint": { + "type": "object_t", + "description": "The initiator (client) of the network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Source Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network" + } + }, + { + "proxy_tls": { + "profile": "network_proxy", + "type": "object_t", + 
"description": "The TLS protocol negotiated between the proxy server and the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "http_request": { + "type": "object_t", + "description": "The HTTP Request Object documents attributes of a request made to a web server.", + "group": "primary", + "requirement": "recommended", + "caption": "HTTP Request", + "object_name": "HTTP Request", + "object_type": "http_request", + "_source": "http_activity" + } + }, + { + "traffic": { + "type": "object_t", + "description": "The network traffic for this observation period. Use when reporting: (1) delta values (bytes/packets transferred since the last observation), (2) instantaneous measurements at a specific point in time, or (3) standalone single-event metrics. This attribute represents a point-in-time measurement or incremental change, not a running total. 
For accumulated totals across multiple observations or the lifetime of a flow, use cumulative_traffic instead.", + "group": "primary", + "requirement": "recommended", + "caption": "Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "http_cookies": { + "type": "object_t", + "description": "The cookies object describes details about HTTP cookies", + "group": "primary", + "is_array": true, + "requirement": "recommended", + "caption": "HTTP Cookies", + "object_name": "HTTP Cookie", + "object_type": "http_cookie", + "_source": "http_activity" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "proxy": { + "type": "object_t", + "description": "The proxy (server) in a network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Proxy", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "@deprecated": { + "message": "Use the proxy_endpoint attribute instead.", + "since": "1.1.0" + }, + "_source": "network" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "file": { + "type": "object_t", + "description": "The file that is the target of the HTTP activity.", + "group": "context", + "requirement": "optional", + "caption": "File", + "object_name": "File", + "object_type": "file", + "_source": "http_activity" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: HTTP Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "proxy_connection_info": { + "profile": "network_proxy", + "type": "object_t", + "description": "The connection 
information from the proxy server to the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "network" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": "host", + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "base_event" + } + }, + { + "http_response": { + "type": "object_t", + "description": "The HTTP Response from a web server to a requester.", + "group": "primary", + "requirement": "recommended", + "caption": "HTTP Response", + "object_name": "HTTP Response", + "object_type": "http_response", + "_source": "http_activity" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": 
"Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement of the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "The GET method requests a representation of the specified resource. 
Requests using GET should only retrieve data.", + "caption": "Get" + }, + "6": { + "description": "The POST method submits an entity to the specified resource, often causing a change in state or side effects on the server.", + "caption": "Post" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The CONNECT method establishes a tunnel to the server identified by the target resource.", + "caption": "Connect" + }, + "2": { + "description": "The DELETE method deletes the specified resource.", + "caption": "Delete" + }, + "99": { + "description": "The event activity is not mapped. See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The HEAD method asks for a response identical to a GET request, but without the response body.", + "caption": "Head" + }, + "5": { + "description": "The OPTIONS method describes the communication options for the target resource.", + "caption": "Options" + }, + "7": { + "description": "The PUT method replaces all current representations of the target resource with the request payload.", + "caption": "Put" + }, + "8": { + "description": "The TRACE method performs a message loop-back test along the path to the target resource.", + "caption": "Trace" + }, + "9": { + "description": "The PATCH method applies partial modifications to a resource.", + "caption": "Patch" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "http_activity", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + 
"requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "4002": { + "description": "HTTP Activity events report HTTP connection and traffic information.", + "caption": "HTTP Activity" + } + }, + "description": "The unique identifier of a class. A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "http_activity" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "trace": { + "profile": "trace", + "type": "object_t", + "description": "The trace object contains information about distributed traces which are critical to observability and describe how requests move through a system, capturing each step's timing and status.", + "group": "primary", + "requirement": "recommended", + "caption": "Trace", + "object_name": "Trace", + "object_type": "trace", + "_source": "http_activity" + } + }, + { + "proxy_http_response": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Response from the remote server to the proxy server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Response", + "object_name": "HTTP Response", + "object_type": "http_response", + "_source": "network" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. 
Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "tls": { + "type": "object_t", + "description": "The Transport Layer Security (TLS) attributes.", + "group": "context", + "requirement": "optional", + "caption": "TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "cumulative_traffic": { + "type": "object_t", + "description": "The cumulative (running total) network traffic aggregated from the start of a flow or session. Use when reporting: (1) total accumulated bytes/packets since flow initiation, (2) combined aggregation models where both incremental deltas and running totals are reported together (populate both traffic for the delta and this attribute for the cumulative total), or (3) final summary metrics when a long-lived connection closes. 
This represents the sum of all activity from flow start to the current observation, not a delta or point-in-time value.", + "group": "context", + "requirement": "optional", + "caption": "Cumulative Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "http_activity", + "description": "HTTP Activity events report HTTP connection and traffic information.", + "uid": 4002, + "extends": "network", + "category": "network", + "constraints": { + "at_least_one": [ + "http_request", + "http_response" + ] + }, + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "network_proxy", + "load_balancer", + "trace", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 4, + "caption": "HTTP Activity", + "category_name": "Network Activity" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/network_activity.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/network_activity.json new file mode 100644 index 00000000..1e1f5c9e --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/network_activity.json @@ -0,0 +1,1309 @@ +{ + "attributes": [ + { + "proxy_http_request": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Request from the proxy server to the remote server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Request", + "object_name": "HTTP Request", + "object_type": "http_request", + "_source": "network" + } + }, + { + "proxy_endpoint": { + "profile": "network_proxy", + "type": "object_t", + "description": "The proxy (server) in a network 
connection.", + "group": "context", + "requirement": "optional", + "caption": "Proxy Endpoint", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "_source": "network" + } + }, + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "observation_point": { + "type": "string_t", + "description": "Indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity. The value is normalized to the caption of the observation_point_id.", + "requirement": "optional", + "caption": "Observation Point", + "type_name": "String", + "_source": "network", + "_sibling_of": "observation_point_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "observation_point_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Neither the source nor destination network endpoint is the observation point.", + "caption": "Neither" + }, + "0": { + "description": "The observation point is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The source network endpoint is the observation point.", + "caption": "Source" + }, + "2": { + "description": "The destination network endpoint is the observation point.", + "caption": "Destination" + }, + "99": { + "description": "The observation point is not mapped. See the observation_point attribute for a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Both the source and destination network endpoint are the observation point. This typically occurs in localhost or internal communications where the source and destination are the same endpoint, often resulting in a connection_info.direction of Local.", + "caption": "Both" + } + }, + "description": "The normalized identifier of the observation point. The observation point identifier indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity.", + "requirement": "optional", + "caption": "Observation Point ID", + "type_name": "Integer", + "sibling": "observation_point", + "_source": "network" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. 
For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "connection_info": { + "type": "object_t", + "description": "The network connection information.", + "group": "primary", + "requirement": "recommended", + "caption": "Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "network" + } + }, + { + "is_src_dst_assignment_known": { + "type": "boolean_t", + "description": "true denotes that src_endpoint and dst_endpoint correctly identify the initiator and responder respectively. false denotes that the event source has arbitrarily assigned one peer to src_endpoint and the other to dst_endpoint, in other words that initiator and responder are not being asserted. This can occur, for example, when the event source is a network appliance that has not observed the initiation of a given connection. In the absence of this attribute, interpretation of the initiator and responder is implementation-specific.", + "group": "primary", + "requirement": "recommended", + "caption": "Source/Destination Assignment Known", + "type_name": "Boolean", + "_source": "network_activity" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. 
The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked', etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. 
Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "url": { + "type": "object_t", + "description": "The URL details relevant to the network traffic.", + "group": "primary", + "requirement": "recommended", + "caption": "URL", + "object_name": "Uniform Resource Locator", + "object_type": "url", + "_source": "network_activity" + } + }, + { + "ja4_fingerprint_list": { + "type": "object_t", + "description": "A list of the JA4+ network fingerprints.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "JA4+ Fingerprints", + "object_name": "JA4+ Fingerprint", + "object_type": "ja4_fingerprint", + "_source": "network" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + 
"profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is 
not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + 
"description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "dst_endpoint": { + "type": "object_t", + "description": "The responder of the network connection. In some contexts an event source cannot correctly identify the responder. 
Refer to is_src_dst_assignment_known for certainty.", + "group": "primary", + "requirement": "recommended", + "caption": "Destination Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network_activity" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The event category name, as defined by category_uid value: Network Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. 
Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "400103": { + "description": "The network connection was abnormally terminated or closed by a middle device like firewalls.", + "caption": "Network Activity: Reset" + }, + "400106": { + "description": "Network traffic report.", + "caption": "Network Activity: Traffic" + }, + "400100": { + "caption": "Network Activity: Unknown" + }, + "400101": { + "description": "A new network connection was opened.", + "caption": "Network Activity: Open" + }, + "400102": { + "description": "The network connection was closed.", + "caption": "Network Activity: Close" + }, + "400199": { + "caption": "Network Activity: Other" + }, + "400104": { + "description": "The network connection failed. For example a connection timeout or no route to host.", + "caption": "Network Activity: Fail" + }, + "400105": { + "description": "The network connection was refused. For example an attempt to connect to a server port which is not open.", + "caption": "Network Activity: Refuse" + }, + "400107": { + "description": "A network endpoint began listening for new network connections.", + "caption": "Network Activity: Listen" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. 
The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "network_activity" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "4": { + "description": "Network Activity events.", + "uid": 4, + "caption": "Network Activity" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "network_activity" + } + }, + { + "proxy_traffic": { + "profile": "network_proxy", + "type": "object_t", + "description": "The network traffic refers to the amount of data moving across a network, from proxy to remote server at a given point of time.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "time": { + 
"type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "load_balancer": { + "profile": "load_balancer", + "type": "object_t", + "description": "The Load Balancer object contains information related to the device that is distributing incoming traffic to specified destinations.", + "group": "primary", + "requirement": "recommended", + "caption": "Load Balancer", + "object_name": "Load Balancer", + "object_type": "load_balancer", + "_source": "network" + } + }, + { + "app_name": { + "type": "string_t", + "description": "The name of the application associated with the event or object.", + "group": "context", + "requirement": "optional", + "caption": "Application Name", + "type_name": "String", + "_source": "network" + } + }, + { + "src_endpoint": { + "type": "object_t", + "description": " The initiator of the network connection. In some contexts an event source cannot correctly identify the initiator. 
Refer to is_src_dst_assignment_known for certainty.", + "group": "primary", + "requirement": "recommended", + "caption": "Source Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network_activity" + } + }, + { + "proxy_tls": { + "profile": "network_proxy", + "type": "object_t", + "description": "The TLS protocol negotiated between the proxy server and the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "traffic": { + "type": "object_t", + "description": "The network traffic for this observation period. Use when reporting: (1) delta values (bytes/packets transferred since the last observation), (2) instantaneous measurements at a specific point in time, or (3) standalone single-event metrics. This attribute represents a point-in-time measurement or incremental change, not a running total. 
For accumulated totals across multiple observations or the lifetime of a flow, use cumulative_traffic instead.", + "group": "primary", + "requirement": "recommended", + "caption": "Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "proxy": { + "type": "object_t", + "description": "The proxy (server) in a network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Proxy", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "@deprecated": { + "message": "Use the proxy_endpoint attribute instead.", + "since": "1.1.0" + }, + "_source": "network" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Network Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "proxy_connection_info": { + "profile": "network_proxy", + "type": "object_t", + "description": "The connection information from the proxy server to the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": 
"network" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": "host", + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "base_event" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. 
No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement of the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "The network connection was abnormally terminated or closed by a middle device like firewalls.", + "caption": "Reset" + }, + "6": { + "description": "Network traffic report.", + "caption": "Traffic" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A new network connection was opened.", + "caption": "Open" + }, + "2": { + "description": "The network connection was closed.", + "caption": 
"Close" + }, + "99": { + "description": "The event activity is not mapped. See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The network connection failed. For example a connection timeout or no route to host.", + "caption": "Fail" + }, + "5": { + "description": "The network connection was refused. For example an attempt to connect to a server port which is not open.", + "caption": "Refuse" + }, + "7": { + "description": "A network endpoint began listening for new network connections.", + "caption": "Listen" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "network_activity", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "4001": { + "description": "Network Activity events report network connection and traffic activity.", + "caption": "Network Activity" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "network_activity" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "proxy_http_response": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Response from the remote server to the proxy server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Response", + "object_name": "HTTP Response", + "object_type": "http_response", + "_source": "network" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. 
Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "tls": { + "type": "object_t", + "description": "The Transport Layer Security (TLS) attributes.", + "group": "context", + "requirement": "optional", + "caption": "TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "cumulative_traffic": { + "type": "object_t", + "description": "The cumulative (running total) network traffic aggregated from the start of a flow or session. Use when reporting: (1) total accumulated bytes/packets since flow initiation, (2) combined aggregation models where both incremental deltas and running totals are reported together (populate both traffic for the delta and this attribute for the cumulative total), or (3) final summary metrics when a long-lived connection closes. 
This represents the sum of all activity from flow start to the current observation, not a delta or point-in-time value.", + "group": "context", + "requirement": "optional", + "caption": "Cumulative Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "network_activity", + "description": "Network Activity events report network connection and traffic activity.", + "uid": 4001, + "extends": "network", + "category": "network", + "constraints": { + "at_least_one": [ + "dst_endpoint", + "src_endpoint" + ] + }, + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "network_proxy", + "load_balancer", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 4, + "caption": "Network Activity", + "category_name": "Network Activity" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/process_activity.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/process_activity.json new file mode 100644 index 00000000..beb7655d --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/process_activity.json @@ -0,0 +1,1193 @@ +{ + "attributes": [ + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. 
In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "launch_type": { + "type": "string_t", + "description": "The specific type of Launch activity, normalized to the caption of the launch_type_id value. In the case of Other it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Launch Type", + "type_name": "String", + "_source": "process_activity", + "_sibling_of": "launch_type_id" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "requested_permissions": { + "type": "integer_t", + "description": "The permissions mask that was requested by the process.", + "group": "primary", + "requirement": "recommended", + "caption": "Requested Permissions", + "type_name": "Integer", + "_source": "process_activity" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. 
The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "injection_type": { + "type": "string_t", + "description": "The process injection method, normalized to the caption of the injection_type_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Injection Type", + "type_name": "String", + "_source": "process_activity", + "_sibling_of": "injection_type_id" + } + }, + { + "launch_type_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Denotes that the Launch event represents the \"exec\" step of Unix process creation, where a process replaces its executable image, command line, and environment. WSL1 pico processes on Windows also use the 2-step Unix model.", + "caption": "Exec" + }, + "0": { + "description": "The launch type is unknown or not specified.", + "caption": "Unknown" + }, + "1": { + "description": "Denotes that the Launch event represents atomic creation of a new process on Windows. This launch type ID may also be used to represent both steps of Unix process creation in a single Launch event.", + "caption": "Spawn" + }, + "2": { + "description": "Denotes that the Launch event represents the \"fork\" step of Unix process creation, where a process creates a clone of itself in a parent-child relationship. WSL1 pico processes on Windows also use the 2-step Unix model.", + "caption": "Fork" + }, + "99": { + "description": "The launch type is not mapped. 
See the launch_type attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier for the specific type of Launch activity.", + "group": "primary", + "references": [ + { + "description": "fork() man page", + "url": "https://www.man7.org/linux/man-pages/man2/fork.2.html" + }, + { + "description": "execve() man page", + "url": "https://www.man7.org/linux/man-pages/man2/execve.2.html" + } + ], + "requirement": "recommended", + "caption": "Launch Type ID", + "type_name": "Integer", + "sibling": "launch_type", + "_source": "process_activity" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. 
See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "injection_type_id": { + "type": "integer_t", + "enum": { + "3": { + "caption": "Queue APC" + }, + "0": { + "description": "The injection type is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Remote Thread" + }, + "2": { + "caption": "Load Library" + }, + "99": { + "description": "The injection type is not mapped. See the injection_type attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the process injection method.", + "group": "primary", + "requirement": "recommended", + "caption": "Injection Type ID", + "type_name": "Integer", + "sibling": "injection_type", + "_source": "process_activity" + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + 
"description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + "99": { + "description": "The disposition is not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, 
+ "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block it.", + "caption": "Count" + }, + "21": { + "description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "actual_permissions": { + "type": "integer_t", + "description": "The permissions that were granted to the process in a platform-native format.", + "group": "primary", + "requirement": "recommended", + "caption": "Actual Permissions", + "type_name": "Integer", + "_source": "process_activity" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": "The 
event category name, as defined by category_uid value: System Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "100703": { + "description": "A request by the actor to obtain a handle or descriptor to a process with the aim of performing further actions upon that process. The target is usually a different process but this activity can also be reflexive.", + "caption": "Process Activity: Open" + }, + "100700": { + "caption": "Process Activity: Unknown" + }, + "100701": { + "description": "A request by the actor to launch another process. Refer to the launch_type_id attribute for details of the specific launch type.", + "caption": "Process Activity: Launch" + }, + "100702": { + "description": "A request by the actor to terminate a process. This activity is most commonly reflexive, this being the case when a process exits at its own initiation. 
Note too that Windows security products cannot always identify the actor in the case of inter-process termination. In this case, actor.process and process refer to the exiting process, i.e. indistinguishable from the reflexive case.", + "caption": "Process Activity: Terminate" + }, + "100799": { + "caption": "Process Activity: Other" + }, + "100704": { + "description": "A request by the actor to execute code within the context of a process. The target is usually a different process but this activity can also be reflexive. Refer to the injection_type_id attribute for details of the specific injection type.", + "references": [ + { + "description": "Guidance on the use of \"Module Activity: Load\" and \"Process Activity: Inject\".", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/faqs/schema-faq.md#when-should-i-use-a-module-activity-load-event-and-when-should-i-use-a-process-activity-inject-event" + } + ], + "caption": "Process Activity: Inject" + }, + "100705": { + "description": "A request by the actor to change its user identity by invoking the setuid() system call. Common programs like su and sudo use this mechanism. Note that the impersonation mechanism on Windows is not directly equivalent because it acts at the thread level.", + "caption": "Process Activity: Set User ID" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. 
The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "process_activity" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "1": { + "description": "System Activity events.", + "uid": 1, + "caption": "System Activity" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "process_activity" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "exit_code": { + "type": "integer_t", + "description": "The exit code reported by a process when it terminates. The convention is that zero indicates success and any non-zero exit code indicates that some error occurred.", + "group": "primary", + "requirement": "recommended", + "caption": "Exit Code", + "type_name": "Integer", + "_source": "process_activity" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "module": { + "type": "object_t", + "description": "The module that was injected by the actor process.", + "group": "primary", + "requirement": "recommended", + "caption": "Module", + "object_name": "Module", + "object_type": "module", + "_source": "process_activity" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the 
event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: Process Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time 
period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "process": { + "type": "object_t", + "description": "The process that was launched, injected into, opened, or terminated.", + "group": "primary", + "requirement": "required", + "caption": "Process", + "object_name": "Process", + "object_type": "process", + "_source": "process_activity" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": null, + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "required", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "system" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. 
No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "A request by the actor to obtain a handle or descriptor to a process with the aim of performing further actions upon that process. The target is usually a different process but this activity can also be reflexive.", + "caption": "Open" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A request by the actor to launch another process. 
Refer to the launch_type_id attribute for details of the specific launch type.", + "caption": "Launch" + }, + "2": { + "description": "A request by the actor to terminate a process. This activity is most commonly reflexive, this being the case when a process exits at its own initiation. Note too that Windows security products cannot always identify the actor in the case of inter-process termination. In this case, actor.process and process refer to the exiting process, i.e. indistinguishable from the reflexive case.", + "caption": "Terminate" + }, + "99": { + "description": "The event activity is not mapped. See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A request by the actor to execute code within the context of a process. The target is usually a different process but this activity can also be reflexive. Refer to the injection_type_id attribute for details of the specific injection type.", + "references": [ + { + "description": "Guidance on the use of \"Module Activity: Load\" and \"Process Activity: Inject\".", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/faqs/schema-faq.md#when-should-i-use-a-module-activity-load-event-and-when-should-i-use-a-process-activity-inject-event" + } + ], + "caption": "Inject" + }, + "5": { + "description": "A request by the actor to change its user identity by invoking the setuid() system call. Common programs like su and sudo use this mechanism. 
Note that the impersonation mechanism on Windows is not directly equivalent because it acts at the thread level.", + "caption": "Set User ID" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "references": [ + { + "description": "setuid() man page", + "url": "https://www.man7.org/linux/man-pages/man2/setuid.2.html" + } + ], + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "process_activity", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "1007": { + "description": "Process Activity events report when a process launches, injects, opens or terminates another process, successful or otherwise.", + "caption": "Process Activity" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "process_activity" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + "caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": null, + "type": "object_t", + "description": "The actor that performed the activity on the target process. For example, the process that started a new process or injected code into another process.", + "group": "primary", + "requirement": "required", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "process_activity" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "process_activity", + "description": "Process Activity events report when a process launches, injects, opens or terminates another process, successful or 
otherwise.", + "uid": 1007, + "extends": "system", + "category": "system", + "associations": { + "device": [ + "actor.user" + ], + "actor.user": [ + "device" + ] + }, + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 1, + "caption": "Process Activity", + "category_name": "System Activity" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/ssh_activity.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/ssh_activity.json new file mode 100644 index 00000000..0017b737 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/classes/ssh_activity.json @@ -0,0 +1,1391 @@ +{ + "attributes": [ + { + "proxy_http_request": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Request from the proxy server to the remote server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Request", + "object_name": "HTTP Request", + "object_type": "http_request", + "_source": "network" + } + }, + { + "proxy_endpoint": { + "profile": "network_proxy", + "type": "object_t", + "description": "The proxy (server) in a network connection.", + "group": "context", + "requirement": "optional", + "caption": "Proxy Endpoint", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "_source": "network" + } + }, + { + "server_hassh": { + "type": "object_t", + "description": "The Server HASSH fingerprinting object.", + "group": "primary", + "requirement": "recommended", + "caption": "Server HASSH", + "object_name": "HASSH", + "object_type": "hassh", + "_source": "ssh_activity" + } + }, + { + "severity": { + "type": "string_t", + "description": "The event/finding severity, normalized to the caption of the severity_id value. 
In the case of 'Other', it is defined by the source.", + "group": "classification", + "requirement": "optional", + "caption": "Severity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "severity_id" + } + }, + { + "observation_point": { + "type": "string_t", + "description": "Indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity. The value is normalized to the caption of the observation_point_id.", + "requirement": "optional", + "caption": "Observation Point", + "type_name": "String", + "_source": "network", + "_sibling_of": "observation_point_id" + } + }, + { + "risk_level": { + "profile": "security_control", + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "risk_level_id" + } + }, + { + "status_code": { + "type": "string_t", + "description": "The event status code, as reported by the event source.

For example, in a Windows Failed Authentication event, this would be the value of 'Failure Code', e.g. 0x18.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Code", + "type_name": "String", + "_source": "base_event" + } + }, + { + "start_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "osint": { + "profile": "osint", + "type": "object_t", + "description": "The OSINT (Open Source Intelligence) object contains details related to an indicator such as the indicator itself, related indicators, geolocation, registrar information, subdomains, analyst commentary, and other contextual information. This information can be used to further enrich a detection or finding by providing decisioning support to other analysts and engineers.", + "group": "primary", + "is_array": true, + "requirement": "required", + "caption": "OSINT", + "object_name": "OSINT", + "object_type": "osint", + "_source": "base_event" + } + }, + { + "auth_type": { + "type": "string_t", + "description": "The SSH authentication type, normalized to the caption of 'auth_type_id'. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Authentication Type", + "type_name": "String", + "_source": "ssh_activity", + "_sibling_of": "auth_type_id" + } + }, + { + "confidence": { + "profile": "security_control", + "type": "string_t", + "description": "The confidence, normalized to the caption of the confidence_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "confidence_id" + } + }, + { + "observation_point_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Neither the source nor destination network endpoint is the observation point.", + "caption": "Neither" + }, + "0": { + "description": "The observation point is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The source network endpoint is the observation point.", + "caption": "Source" + }, + "2": { + "description": "The destination network endpoint is the observation point.", + "caption": "Destination" + }, + "99": { + "description": "The observation point is not mapped. See the observation_point attribute for a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Both the source and destination network endpoint are the observation point. This typically occurs in localhost or internal communications where the source and destination are the same endpoint, often resulting in a connection_info.direction of Local.", + "caption": "Both" + } + }, + "description": "The normalized identifier of the observation point. The observation point identifier indicates whether the source network endpoint, destination network endpoint, or neither served as the observation point for the activity.", + "requirement": "optional", + "caption": "Observation Point ID", + "type_name": "Integer", + "sibling": "observation_point", + "_source": "network" + } + }, + { + "policy": { + "profile": "security_control", + "type": "object_t", + "description": "The policy that pertains to the control that triggered the event, if applicable. 
For example the name of an anti-malware policy or an access control policy.", + "group": "primary", + "requirement": "optional", + "caption": "Policy", + "object_name": "Policy", + "object_type": "policy", + "_source": "base_event" + } + }, + { + "connection_info": { + "type": "object_t", + "description": "The network connection information.", + "group": "primary", + "requirement": "recommended", + "caption": "Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "network" + } + }, + { + "action_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "The activity was observed, but neither explicitly allowed nor denied. This is common with IDS and EDR controls that report additional information on observed behavior such as TTPs. The disposition_id attribute should be set to a value that conforms to this action, for example 'Logged', 'Alert', 'Detected', 'Count', etc.", + "caption": "Observed" + }, + "0": { + "description": "The action was unknown. The disposition_id attribute may still be set to a non-unknown value, for example 'Custom Action', 'Challenge'.", + "caption": "Unknown" + }, + "1": { + "description": "The activity was allowed. The disposition_id attribute should be set to a value that conforms to this action, for example 'Allowed', 'Approved', 'Delayed', 'No Action', 'Count' etc.", + "caption": "Allowed" + }, + "2": { + "description": "The attempted activity was denied. The disposition_id attribute should be set to a value that conforms to this action, for example 'Blocked', 'Rejected', 'Quarantined', 'Isolated', 'Dropped', 'Access Revoked, etc.", + "caption": "Denied" + }, + "99": { + "description": "The action is not mapped. See the action attribute which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The activity was modified, adjusted, or corrected. 
The disposition_id attribute should be set appropriately, for example 'Restored', 'Corrected', 'Delayed', 'Captcha', 'Tagged'.", + "caption": "Modified" + } + }, + "description": "The action taken by a control or other policy-based system leading to an outcome or disposition. An unknown action may still correspond to a known disposition. Refer to disposition_id for the outcome of the action.", + "group": "primary", + "requirement": "recommended", + "caption": "Action ID", + "type_name": "Integer", + "sibling": "action", + "_source": "base_event" + } + }, + { + "authorizations": { + "profile": "security_control", + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "base_event" + } + }, + { + "firewall_rule": { + "profile": "security_control", + "type": "object_t", + "description": "The firewall rule that pertains to the control that triggered the event, if applicable.", + "group": "primary", + "requirement": "optional", + "caption": "Firewall Rule", + "object_name": "Firewall Rule", + "object_type": "firewall_rule", + "_source": "base_event" + } + }, + { + "ja4_fingerprint_list": { + "type": "object_t", + "description": "A list of the JA4+ network fingerprints.", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "JA4+ Fingerprints", + "object_name": "JA4+ Fingerprint", + "object_type": "ja4_fingerprint", + "_source": "network" + } + }, + { + "raw_data_hash": { + "type": "object_t", + "description": "The hash, which describes the content of the raw_data field.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": 
"base_event" + } + }, + { + "time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "optional", + "caption": "Event Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "risk_level_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "group": "context", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "base_event", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "risk_details": { + "profile": "security_control", + "type": "string_t", + "description": "Describes the risk associated with the finding.", + "group": "context", + "requirement": "optional", + "caption": "Risk Details", + "type_name": "String", + "_source": "base_event" + } + }, + { + "disposition_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "description": "A suspicious file or other content was moved to a benign location.", + "caption": "Quarantined" + }, + "6": { + "description": "The request was detected as a threat and resulted in the connection being dropped.", + "caption": "Dropped" + }, + "0": { + "description": "The disposition is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Granted access or allowed the action to the protected resource.", + "caption": "Allowed" + }, + "2": { + "description": "Denied access or blocked the action to the protected resource.", + "caption": "Blocked" + }, + 
"99": { + "description": "The disposition is not mapped. See the disposition attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A session was isolated on the network or within a browser.", + "caption": "Isolated" + }, + "5": { + "description": "A file or other content was deleted.", + "caption": "Deleted" + }, + "7": { + "description": "A custom action was executed such as running of a command script. Use the message attribute of the base class for details.", + "caption": "Custom Action" + }, + "8": { + "description": "A request or submission was approved. For example, when a form was properly filled out and submitted. This is distinct from 1 'Allowed'.", + "caption": "Approved" + }, + "9": { + "description": "A quarantined file or other content was restored to its original location.", + "caption": "Restored" + }, + "10": { + "description": "A suspicious or risky entity was deemed to no longer be suspicious (re-scored).", + "caption": "Exonerated" + }, + "11": { + "description": "A corrupt file or configuration was corrected.", + "caption": "Corrected" + }, + "12": { + "description": "A corrupt file or configuration was partially corrected.", + "caption": "Partially Corrected" + }, + "14": { + "description": "An operation was delayed, for example if a restart was required to finish the operation.", + "caption": "Delayed" + }, + "15": { + "description": "Suspicious activity or a policy violation was detected without further action.", + "caption": "Detected" + }, + "16": { + "description": "The outcome of an operation had no action taken.", + "caption": "No Action" + }, + "17": { + "description": "The operation or action was logged without further action.", + "caption": "Logged" + }, + "18": { + "description": "A file or other entity was marked with extended attributes.", + "caption": "Tagged" + }, + "20": { + "description": "Counted the request or activity but did not determine whether to allow it or block 
it.", + "caption": "Count" + }, + "21": { + "description": "The request was detected as a threat and resulted in the connection being reset.", + "caption": "Reset" + }, + "22": { + "description": "Required the end user to solve a CAPTCHA puzzle to prove that a human being is sending the request.", + "caption": "Captcha" + }, + "23": { + "description": "Ran a silent challenge that required the client session to verify that it's a browser, and not a bot.", + "caption": "Challenge" + }, + "24": { + "description": "The requestor's access has been revoked due to security policy enforcements. Note: use the Host profile if the User or Actor requestor is not present in the event class.", + "caption": "Access Revoked" + }, + "25": { + "description": "A request or submission was rejected. For example, when a form was improperly filled out and submitted. This is distinct from 2 'Blocked'.", + "caption": "Rejected" + }, + "26": { + "description": "An attempt to access a resource was denied due to an authorization check that failed. This is a more specific disposition than 2 'Blocked' and can be complemented with the authorizations attribute for more detail.", + "caption": "Unauthorized" + }, + "27": { + "description": "An error occurred during the processing of the activity or request. 
Use the message attribute of the base class for details.", + "caption": "Error" + }, + "13": { + "description": "A corrupt file or configuration was not corrected.", + "caption": "Uncorrected" + }, + "19": { + "description": "The request or activity was detected as a threat and resulted in a notification but request was not blocked.", + "caption": "Alert" + } + }, + "description": "Describes the outcome or action taken by a security control, such as access control checks, malware detections or various types of policy violations.", + "group": "primary", + "requirement": "recommended", + "caption": "Disposition ID", + "type_name": "Integer", + "sibling": "disposition", + "_source": "base_event" + } + }, + { + "type_name": { + "type": "string_t", + "description": "The event/finding type name, as defined by the type_uid.", + "group": "classification", + "requirement": "optional", + "caption": "Type Name", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "type_uid" + } + }, + { + "dst_endpoint": { + "type": "object_t", + "description": "The responder (server) in a network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Destination Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network" + } + }, + { + "end_time": { + "type": "timestamp_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "count": { + "type": "integer_t", + "description": "The number of times that events in the same logical group occurred during the event Start Time to End Time period.", + "group": "occurrence", + "requirement": "optional", + "caption": "Count", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "category_name": { + "type": "string_t", + "description": 
"The event category name, as defined by category_uid value: Network Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "category_uid" + } + }, + { + "unmapped": { + "type": "object_t", + "description": "The attributes that are not mapped to the event schema. The names and values of those attributes are specific to the event source.", + "group": "context", + "requirement": "optional", + "caption": "Unmapped Data", + "object_name": "Object", + "object_type": "object", + "_source": "base_event" + } + }, + { + "is_alert": { + "profile": "security_control", + "type": "boolean_t", + "description": "Indicates that the event is considered to be an alertable signal. Should be set to true if disposition_id = Alert among other dispositions, and/or risk_level_id or severity_id of the event is elevated. Not all control events will be alertable, for example if disposition_id = Exonerated or disposition_id = Allowed.", + "group": "primary", + "requirement": "recommended", + "caption": "Alert", + "type_name": "Boolean", + "_source": "base_event" + } + }, + { + "client_hassh": { + "type": "object_t", + "description": "The Client HASSH fingerprinting object.", + "group": "primary", + "requirement": "recommended", + "caption": "Client HASSH", + "object_name": "HASSH", + "object_type": "hassh", + "_source": "ssh_activity" + } + }, + { + "type_uid": { + "type": "long_t", + "enum": { + "400703": { + "description": "The network connection was abnormally terminated or closed by a middle device like firewalls.", + "caption": "SSH Activity: Reset" + }, + "400706": { + "description": "Network traffic report.", + "caption": "SSH Activity: Traffic" + }, + "400700": { + "caption": "SSH Activity: Unknown" + }, + "400701": { + "description": "A new network connection was opened.", + "caption": "SSH Activity: Open" + }, + "400702": { + "description": "The network connection was 
closed.", + "caption": "SSH Activity: Close" + }, + "400799": { + "caption": "SSH Activity: Other" + }, + "400704": { + "description": "The network connection failed. For example a connection timeout or no route to host.", + "caption": "SSH Activity: Fail" + }, + "400705": { + "description": "The network connection was refused. For example an attempt to connect to a server port which is not open.", + "caption": "SSH Activity: Refuse" + }, + "400707": { + "description": "A network endpoint began listening for new network connections.", + "caption": "SSH Activity: Listen" + } + }, + "description": "The event/finding type ID. It identifies the event's semantics and structure. The value is calculated by the logging system as: class_uid * 100 + activity_id.", + "group": "classification", + "requirement": "required", + "caption": "Type ID", + "type_name": "Long", + "sibling": "type_name", + "_source": "ssh_activity" + } + }, + { + "confidence_id": { + "profile": "security_control", + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "description": "The normalized confidence is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The confidence is not mapped to the defined enum values. See the confidence attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized confidence refers to the accuracy of the rule that created the finding. 
A rule with a low confidence means that the finding scope is wide and may create finding reports that may not be malicious in nature.", + "group": "context", + "requirement": "recommended", + "caption": "Confidence ID", + "type_name": "Integer", + "sibling": "confidence", + "_source": "base_event" + } + }, + { + "category_uid": { + "type": "integer_t", + "enum": { + "4": { + "description": "Network Activity events.", + "uid": 4, + "caption": "Network Activity" + } + }, + "description": "The category unique identifier of the event.", + "group": "classification", + "requirement": "required", + "caption": "Category ID", + "type_name": "Integer", + "sibling": "category_name", + "_source": "ssh_activity" + } + }, + { + "proxy_traffic": { + "profile": "network_proxy", + "type": "object_t", + "description": "The network traffic refers to the amount of data moving across a network, from proxy to remote server at a given point of time.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "time": { + "type": "timestamp_t", + "description": "The normalized event occurrence time or the finding creation time.", + "group": "occurrence", + "requirement": "required", + "caption": "Event Time", + "type_name": "Timestamp", + "_source": "base_event" + } + }, + { + "status": { + "type": "string_t", + "description": "The event status, normalized to the caption of the status_id value. 
In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "recommended", + "caption": "Status", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "status_id" + } + }, + { + "duration": { + "type": "long_t", + "description": "The event duration or aggregate time, the amount of time the event covers from start_time to end_time in milliseconds.", + "group": "occurrence", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "load_balancer": { + "profile": "load_balancer", + "type": "object_t", + "description": "The Load Balancer object contains information related to the device that is distributing incoming traffic to specified destinations.", + "group": "primary", + "requirement": "recommended", + "caption": "Load Balancer", + "object_name": "Load Balancer", + "object_type": "load_balancer", + "_source": "network" + } + }, + { + "app_name": { + "type": "string_t", + "description": "The name of the application associated with the event or object.", + "group": "context", + "requirement": "optional", + "caption": "Application Name", + "type_name": "String", + "_source": "network" + } + }, + { + "src_endpoint": { + "type": "object_t", + "description": "The initiator (client) of the network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Source Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "network" + } + }, + { + "proxy_tls": { + "profile": "network_proxy", + "type": "object_t", + "description": "The TLS protocol negotiated between the proxy server and the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "malware": { + "profile": "security_control", + "type": "object_t", + "description": "A 
list of Malware objects, describing details about the identified malware.", + "group": "primary", + "is_array": true, + "requirement": "optional", + "caption": "Malware", + "object_name": "Malware", + "object_type": "malware", + "_source": "base_event" + } + }, + { + "metadata": { + "type": "object_t", + "description": "The metadata associated with the event or a finding.", + "group": "context", + "requirement": "required", + "caption": "Metadata", + "object_name": "Metadata", + "object_type": "metadata", + "_source": "base_event" + } + }, + { + "traffic": { + "type": "object_t", + "description": "The network traffic for this observation period. Use when reporting: (1) delta values (bytes/packets transferred since the last observation), (2) instantaneous measurements at a specific point in time, or (3) standalone single-event metrics. This attribute represents a point-in-time measurement or incremental change, not a running total. For accumulated totals across multiple observations or the lifetime of a flow, use cumulative_traffic instead.", + "group": "primary", + "requirement": "recommended", + "caption": "Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "confidence_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The confidence score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Confidence Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "proxy": { + "type": "object_t", + "description": "The proxy (server) in a network connection.", + "group": "primary", + "requirement": "recommended", + "caption": "Proxy", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "@deprecated": { + "message": "Use the proxy_endpoint attribute instead.", + "since": "1.1.0" + }, + "_source": "network" + } + }, + { + "enrichments": { + "type": "object_t", + "description": "The 
additional information from an external data source, which is associated with the event or a finding. For example add location information for the IP address in the DNS answers:

[{\"name\": \"answers.ip\", \"value\": \"92.24.47.250\", \"type\": \"location\", \"data\": {\"city\": \"Socotra\", \"continent\": \"Asia\", \"coordinates\": [-25.4153, 17.0743], \"country\": \"YE\", \"desc\": \"Yemen\"}}]", + "group": "context", + "is_array": true, + "requirement": "optional", + "caption": "Enrichments", + "object_name": "Enrichment", + "object_type": "enrichment", + "_source": "base_event" + } + }, + { + "file": { + "type": "object_t", + "description": "The file that is the target of the SSH activity.", + "group": "context", + "requirement": "optional", + "caption": "File", + "object_name": "File", + "object_type": "file", + "_source": "ssh_activity" + } + }, + { + "status_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "The status is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Success" + }, + "2": { + "caption": "Failure" + }, + "99": { + "description": "The status is not mapped. See the status attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the event status.", + "group": "primary", + "requirement": "recommended", + "caption": "Status ID", + "type_name": "Integer", + "sibling": "status", + "_source": "base_event" + } + }, + { + "class_name": { + "type": "string_t", + "description": "The event class name, as defined by class_uid value: SSH Activity.", + "group": "classification", + "requirement": "optional", + "caption": "Class", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "class_uid" + } + }, + { + "status_detail": { + "type": "string_t", + "description": "The status detail contains additional information about the event/finding outcome.", + "group": "primary", + "requirement": "recommended", + "caption": "Status Detail", + "type_name": "String", + "_source": "base_event" + } + }, + { + "proxy_connection_info": { + "profile": "network_proxy", + "type": "object_t", + "description": "The connection 
information from the proxy server to the remote server.", + "group": "context", + "requirement": "recommended", + "caption": "Proxy Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "network" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "group": "primary", + "requirement": "recommended", + "caption": "Message", + "type_name": "String", + "_source": "base_event" + } + }, + { + "end_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The end time of a time period, or the time of the most recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "End Time", + "type_name": "Datetime", + "_source": "base_event" + } + }, + { + "api": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about a typical API (Application Programming Interface) call.", + "group": "context", + "requirement": "optional", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "base_event" + } + }, + { + "device": { + "profile": "host", + "type": "object_t", + "description": "An addressable device, computer system or host.", + "group": "primary", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "base_event" + } + }, + { + "action": { + "profile": "security_control", + "type": "string_t", + "description": "The normalized caption of action_id.", + "group": "primary", + "requirement": "optional", + "caption": "Action", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "action_id" + } + }, + { + "severity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Action is required but the situation is not serious at this time.", + "caption": "Medium" + }, + "6": { + "description": "An error 
occurred but it is too late to take remedial action.", + "caption": "Fatal" + }, + "0": { + "description": "The event/finding severity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Informational message. No action required.", + "caption": "Informational" + }, + "2": { + "description": "The user decides if action is needed.", + "caption": "Low" + }, + "99": { + "description": "The event/finding severity is not mapped. See the severity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Action is required immediately.", + "caption": "High" + }, + "5": { + "description": "Action is required immediately and the scope is broad.", + "caption": "Critical" + } + }, + "description": "

The normalized identifier of the event/finding severity.

The normalized severity is a measurement of the effort and expense required to manage and resolve an event or incident. Smaller numerical values represent lower impact events, and larger numerical values represent higher impact events.", + "group": "classification", + "requirement": "required", + "caption": "Severity ID", + "type_name": "Integer", + "sibling": "severity", + "_source": "base_event" + } + }, + { + "attacks": { + "profile": "security_control", + "type": "object_t", + "description": "An array of MITRE ATT&CK\u00ae objects describing identified tactics, techniques & sub-techniques. The objects are compatible with MITRE ATLAS\u2122 tactics, techniques & sub-techniques.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "base_event" + } + }, + { + "timezone_offset": { + "type": "integer_t", + "description": "The number of minutes that the reported event time is ahead or behind UTC, in the range -1,080 to +1,080.", + "group": "occurrence", + "requirement": "recommended", + "caption": "Timezone Offset", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "activity_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "The network connection was abnormally terminated or closed by a middle device like firewalls.", + "caption": "Reset" + }, + "6": { + "description": "Network traffic report.", + "caption": "Traffic" + }, + "0": { + "description": "The event activity is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A new network connection was opened.", + "caption": "Open" + }, + "2": { + "description": "The network connection was closed.", + "caption":
"Close" + }, + "99": { + "description": "The event activity is not mapped. See the activity_name attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The network connection failed. For example a connection timeout or no route to host.", + "caption": "Fail" + }, + "5": { + "description": "The network connection was refused. For example an attempt to connect to a server port which is not open.", + "caption": "Refuse" + }, + "7": { + "description": "A network endpoint began listening for new network connections.", + "caption": "Listen" + } + }, + "description": "The normalized identifier of the activity that triggered the event.", + "group": "classification", + "requirement": "required", + "caption": "Activity ID", + "type_name": "Integer", + "sibling": "activity_name", + "_source": "ssh_activity", + "suppress_checks": [ + "sibling_convention" + ] + } + }, + { + "malware_scan_info": { + "profile": "security_control", + "type": "object_t", + "description": "Describes details about the scan job that identified malware on the target system.", + "group": "primary", + "requirement": "optional", + "caption": "Malware Scan Info", + "object_name": "Malware Scan Info", + "object_type": "malware_scan_info", + "_source": "base_event" + } + }, + { + "class_uid": { + "type": "integer_t", + "enum": { + "4007": { + "description": "SSH Activity events report remote client connections to a server using the Secure Shell (SSH) Protocol.", + "caption": "SSH Activity" + } + }, + "description": "The unique identifier of a class. 
A class describes the attributes available in an event.", + "group": "classification", + "requirement": "required", + "caption": "Class ID", + "type_name": "Integer", + "sibling": "class_name", + "_source": "ssh_activity" + } + }, + { + "risk_score": { + "profile": "security_control", + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "group": "context", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "base_event" + } + }, + { + "protocol_ver": { + "type": "string_t", + "description": "The Secure Shell Protocol version.", + "group": "context", + "requirement": "recommended", + "caption": "SSH Version", + "type_name": "String", + "_source": "ssh_activity" + } + }, + { + "raw_data_size": { + "type": "long_t", + "description": "The size of the raw data which was transformed into an OCSF event, in bytes.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data Size", + "type_name": "Long", + "_source": "base_event" + } + }, + { + "observables": { + "type": "object_t", + "description": "The observables associated with the event or a finding.", + "group": "primary", + "is_array": true, + "references": [ + { + "description": "OCSF Observables FAQ", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/defining-and-using-observables.md" + } + ], + "requirement": "recommended", + "caption": "Observables", + "object_name": "Observable", + "object_type": "observable", + "_source": "base_event" + } + }, + { + "auth_type_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "Authentication based on the client host's identity.", + "caption": "Host Based" + }, + "6": { + "description": "Paired public key authentication.", + "caption": "Public Key" + }, + "0": { + "description": "The authentication type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "Authentication using digital certificates.", + "caption": "Certificate Based" + }, 
+ "2": { + "description": "GSSAPI for centralized authentication.", + "caption": "GSSAPI" + }, + "99": { + "description": "The authentication type is not mapped. See the auth_type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "Multi-step, interactive authentication.", + "caption": "Keyboard Interactive" + }, + "5": { + "description": "Password Authentication.", + "caption": "Password" + } + }, + "description": "The normalized identifier of the SSH authentication type.", + "group": "primary", + "requirement": "recommended", + "caption": "Authentication Type ID", + "type_name": "Integer", + "sibling": "auth_type", + "_source": "ssh_activity" + } + }, + { + "disposition": { + "profile": "security_control", + "type": "string_t", + "description": "The disposition name, normalized to the caption of the disposition_id value. In the case of 'Other', it is defined by the event source.", + "group": "primary", + "requirement": "optional", + "caption": "Disposition", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "disposition_id" + } + }, + { + "proxy_http_response": { + "profile": "network_proxy", + "type": "object_t", + "description": "The HTTP Response from the remote server to the proxy server.", + "group": "context", + "requirement": "optional", + "caption": "Proxy HTTP Response", + "object_name": "HTTP Response", + "object_type": "http_response", + "_source": "network" + } + }, + { + "activity_name": { + "type": "string_t", + "description": "The event activity name, as defined by the activity_id.", + "group": "classification", + "requirement": "optional", + "caption": "Activity", + "type_name": "String", + "_source": "base_event", + "_sibling_of": "activity_id" + } + }, + { + "cloud": { + "profile": "cloud", + "type": "object_t", + "description": "Describes details about the Cloud environment where the event or finding was created.", + "group": "primary", + "requirement": "required", + 
"caption": "Cloud", + "object_name": "Cloud", + "object_type": "cloud", + "_source": "base_event" + } + }, + { + "actor": { + "profile": "host", + "type": "object_t", + "description": "The actor object describes details about the user/role/process that was the source of the activity. Note that this is not the threat actor of a campaign but may be part of a campaign.", + "group": "primary", + "requirement": "optional", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "base_event" + } + }, + { + "tls": { + "type": "object_t", + "description": "The Transport Layer Security (TLS) attributes.", + "group": "context", + "requirement": "optional", + "caption": "TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "network" + } + }, + { + "raw_data": { + "type": "string_t", + "description": "The raw event/finding data as received from the source.", + "group": "context", + "requirement": "optional", + "caption": "Raw Data", + "type_name": "String", + "_source": "base_event" + } + }, + { + "cumulative_traffic": { + "type": "object_t", + "description": "The cumulative (running total) network traffic aggregated from the start of a flow or session. Use when reporting: (1) total accumulated bytes/packets since flow initiation, (2) combined aggregation models where both incremental deltas and running totals are reported together (populate both traffic for the delta and this attribute for the cumulative total), or (3) final summary metrics when a long-lived connection closes. 
This represents the sum of all activity from flow start to the current observation, not a delta or point-in-time value.", + "group": "context", + "requirement": "optional", + "caption": "Cumulative Traffic", + "object_name": "Network Traffic", + "object_type": "network_traffic", + "_source": "network" + } + }, + { + "start_time": { + "type": "timestamp_t", + "description": "The start time of a time period, or the time of the least recent event included in the aggregate event.", + "group": "occurrence", + "requirement": "optional", + "caption": "Start Time", + "type_name": "Timestamp", + "_source": "base_event" + } + } + ], + "name": "ssh_activity", + "description": "SSH Activity events report remote client connections to a server using the Secure Shell (SSH) Protocol.", + "uid": 4007, + "extends": "network", + "category": "network", + "constraints": { + "at_least_one": [ + "dst_endpoint", + "src_endpoint" + ] + }, + "profiles": [ + "cloud", + "datetime", + "host", + "osint", + "security_control", + "network_proxy", + "load_balancer", + "data_classification", + "container", + "linux/linux_users" + ], + "category_uid": 4, + "caption": "SSH Activity", + "category_name": "Network Activity" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/actor.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/actor.json new file mode 100644 index 00000000..8202259f --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/actor.json @@ -0,0 +1,118 @@ +{ + "attributes": [ + { + "process": { + "type": "object_t", + "description": "The process that initiated the activity.", + "requirement": "recommended", + "caption": "Process", + "object_name": "Process", + "object_type": "process", + "_source": "actor" + } + }, + { + "session": { + "type": "object_t", + "description": "The user session from which the activity was initiated.", + "requirement": "optional", + "caption": "Session", + "object_name": "Session", + "object_type": "session", + "_source": 
"actor" + } + }, + { + "user": { + "type": "object_t", + "description": "The user that initiated the activity or the user context from which the activity was initiated.", + "requirement": "recommended", + "caption": "User", + "object_name": "User", + "object_type": "user", + "_source": "actor" + } + }, + { + "app_name": { + "type": "string_t", + "description": "The client application or service that initiated the activity. This can be in conjunction with the user if present. Note that app_name is distinct from the process if present.", + "requirement": "optional", + "caption": "Application Name", + "type_name": "String", + "_source": "actor" + } + }, + { + "app_uid": { + "type": "string_t", + "description": "The unique identifier of the client application or service that initiated the activity. This can be in conjunction with the user if present. Note that app_name is distinct from the process.pid or process.uid if present.", + "requirement": "optional", + "caption": "Application ID", + "type_name": "String", + "_source": "actor" + } + }, + { + "authorizations": { + "type": "object_t", + "description": "Provides details about an authorization, such as authorization outcome, and any associated policies related to the activity/event.", + "is_array": true, + "requirement": "optional", + "caption": "Authorization Information", + "object_name": "Authorization Result", + "object_type": "authorization", + "_source": "actor" + } + }, + { + "idp": { + "type": "object_t", + "description": "This object describes details about the Identity Provider used.", + "requirement": "optional", + "caption": "Identity Provider", + "object_name": "Identity Provider", + "object_type": "idp", + "_source": "actor" + } + }, + { + "invoked_by": { + "type": "string_t", + "description": "The name of the service that invoked the activity as described in the event.", + "requirement": "optional", + "caption": "Invoked by", + "type_name": "String", + "@deprecated": { + "message": "Use app_name, 
app_uid attributes instead.", + "since": "1.2.0" + }, + "_source": "actor" + } + } + ], + "name": "actor", + "description": "The Actor object contains details about the user, role, application, service, or process that initiated or performed a specific activity. Note that Actor is not the threat actor of a campaign but may be part of a campaign.", + "extends": "object", + "constraints": { + "at_least_one": [ + "process", + "user", + "invoked_by", + "session", + "app_name", + "app_uid" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:Agent.", + "url": "https://next.d3fend.mitre.org/agent/d3f:Agent/" + } + ], + "profiles": [ + "container", + "linux/linux_users" + ], + "caption": "Actor" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/attack.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/attack.json new file mode 100644 index 00000000..ebf95e4f --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/attack.json @@ -0,0 +1,135 @@ +{ + "attributes": [ + { + "version": { + "type": "string_t", + "description": "The ATT&CK\u00ae or ATLAS\u2122 Matrix version.", + "requirement": "recommended", + "caption": "Version", + "type_name": "String", + "_source": "attack" + } + }, + { + "tactics": { + "type": "object_t", + "description": "The Tactic object describes the tactic ID and/or tactic name that are associated with the attack technique, as defined by ATT&CK\u00ae Matrix.", + "is_array": true, + "requirement": "optional", + "caption": "Tactics", + "object_name": "MITRE Tactic", + "object_type": "tactic", + "@deprecated": { + "message": "Use the tactic attribute instead.", + "since": "1.1.0" + }, + "_source": "attack" + } + }, + { + "technique": { + "type": "object_t", + "description": "The Technique object describes the MITRE ATT&CK\u00ae or ATLAS\u2122 Technique ID and/or name associated to an attack.", + "references": [ + { + "description": "ATT&CK\u00ae Matrix", + "url": 
"https://attack.mitre.org/wiki/ATT&CK_Matrix" + }, + { + "description": "ATLAS\u2122 Matrix", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "recommended", + "caption": "MITRE Technique", + "object_name": "MITRE Technique", + "object_type": "technique", + "_source": "attack" + } + }, + { + "mitigation": { + "type": "object_t", + "description": "The Mitigation object describes the MITRE ATT&CK\u00ae or ATLAS\u2122 Mitigation ID and/or name that is associated to an attack.", + "references": [ + { + "description": "ATT&CK\u00ae Matrix", + "url": "https://attack.mitre.org/wiki/ATT&CK_Matrix" + }, + { + "description": "ATLAS\u2122 Matrix", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE Mitigation", + "object_name": "MITRE Mitigation", + "object_type": "mitigation", + "_source": "attack" + } + }, + { + "sub_technique": { + "type": "object_t", + "description": "The Sub-technique object describes the MITRE ATT&CK\u00ae or ATLAS\u2122 Sub-technique ID and/or name associated to an attack.", + "references": [ + { + "description": "ATT&CK\u00ae Matrix", + "url": "https://attack.mitre.org/wiki/ATT&CK_Matrix" + }, + { + "description": "ATLAS\u2122 Matrix", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "recommended", + "caption": "MITRE Sub-technique", + "object_name": "MITRE Sub-technique", + "object_type": "sub_technique", + "_source": "attack" + } + }, + { + "tactic": { + "type": "object_t", + "description": "The Tactic object describes the MITRE ATT&CK\u00ae or ATLAS\u2122 Tactic ID and/or name that is associated to an attack.", + "references": [ + { + "description": "ATT&CK\u00ae Matrix", + "url": "https://attack.mitre.org/wiki/ATT&CK_Matrix" + }, + { + "description": "ATLAS\u2122 Matrix", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "recommended", + "caption": "MITRE Tactic", + "object_name": "MITRE Tactic", + "object_type": 
"tactic", + "_source": "attack" + } + } + ], + "name": "attack", + "description": "The MITRE ATT&CK\u00ae & ATLAS\u2122 object describes the tactic, technique, sub-technique & mitigation associated to an attack.", + "extends": "object", + "constraints": { + "at_least_one": [ + "tactic", + "technique", + "sub_technique" + ] + }, + "references": [ + { + "description": "ATT&CK\u00ae Matrix", + "url": "https://attack.mitre.org/wiki/ATT&CK_Matrix" + }, + { + "description": "ATLAS\u2122 Matrix", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "caption": "MITRE ATT&CK\u00ae & ATLAS\u2122" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/connection_info.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/connection_info.json new file mode 100644 index 00000000..7ad5decc --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/connection_info.json @@ -0,0 +1,3 @@ +{ + "error": "Object connection_info not found" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/container.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/container.json new file mode 100644 index 00000000..8c4f3f80 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/container.json @@ -0,0 +1,150 @@ +{ + "attributes": [ + { + "name": { + "type": "string_t", + "description": "The container name.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "container" + } + }, + { + "runtime": { + "type": "string_t", + "description": "The backend running the container, such as containerd or cri-o.", + "requirement": "optional", + "caption": "Runtime", + "type_name": "String", + "_source": "container" + } + }, + { + "size": { + "type": "long_t", + "description": "The size of the container image.", + "requirement": "recommended", + "caption": "Size", + "type_name": "Long", + "_source": "container" + } + }, + { + "tag": { + "type": "string_t", + "description": "The tag used by the container. 
It can indicate version, format, OS.", + "requirement": "optional", + "caption": "Image Tag", + "type_name": "String", + "@deprecated": { + "message": "Use the labels or tags attribute instead.", + "since": "1.4.0" + }, + "_source": "container" + } + }, + { + "uid": { + "type": "string_t", + "description": "The full container unique identifier for this instantiation of the container. For example: ac2ea168264a08f9aaca0dfc82ff3551418dfd22d02b713142a6843caa2f61bf.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "_source": "container" + } + }, + { + "image": { + "type": "object_t", + "description": "The container image used as a template to run the container.", + "requirement": "recommended", + "caption": "Image", + "object_name": "Image", + "object_type": "image", + "_source": "container" + } + }, + { + "hash": { + "type": "object_t", + "description": "Commit hash of image created for docker or the SHA256 hash of the container. For example: 13550340a8681c84c861aac2e5b440161c2b33a3e4f302ac680ca5b686de48de.", + "requirement": "recommended", + "caption": "Hash", + "object_name": "Fingerprint", + "object_type": "fingerprint", + "_source": "container" + } + }, + { + "labels": { + "type": "string_t", + "description": "The list of labels associated to the container.", + "is_array": true, + "requirement": "optional", + "caption": "Labels", + "type_name": "String", + "_source": "container" + } + }, + { + "tags": { + "type": "object_t", + "description": "The list of tags; {key:value} pairs associated to the container.", + "is_array": true, + "requirement": "optional", + "caption": "Tags", + "object_name": "Key:Value object", + "object_type": "key_value_object", + "_source": "container" + } + }, + { + "network_driver": { + "type": "string_t", + "description": "The network driver used by the container. 
For example, bridge, overlay, host, none, etc.", + "requirement": "optional", + "caption": "Network Driver", + "type_name": "String", + "_source": "container" + } + }, + { + "orchestrator": { + "type": "string_t", + "description": "The orchestrator managing the container, such as ECS, EKS, K8s, or OpenShift.", + "requirement": "optional", + "caption": "Orchestrator", + "type_name": "String", + "_source": "container" + } + }, + { + "pod_uuid": { + "type": "uuid_t", + "description": "The unique identifier of the pod (or equivalent) that the container is executing on.", + "requirement": "optional", + "caption": "Pod UUID", + "type_name": "UUID", + "_source": "container" + } + } + ], + "name": "container", + "description": "The Container object describes an instance of a specific container. A container is a prepackaged, portable system image that runs isolated on an existing system using a container runtime like containerd.", + "extends": "object", + "constraints": { + "at_least_one": [ + "uid", + "name" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:ContainerProcess.", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:ContainerProcess/" + } + ], + "caption": "Container", + "observable": 27 +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/device.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/device.json new file mode 100644 index 00000000..94a37a9d --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/device.json @@ -0,0 +1,798 @@ +{ + "attributes": [ + { + "boot_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time the system was booted.", + "requirement": "optional", + "caption": "Boot Time", + "type_name": "Datetime", + "_source": "device" + } + }, + { + "risk_level": { + "type": "string_t", + "description": "The risk level, normalized to the caption of the risk_level_id value.", + "requirement": "optional", + "caption": "Risk Level", + "type_name": "String", + 
"_source": "device", + "_sibling_of": "risk_level_id" + } + }, + { + "vendor_name": { + "type": "string_t", + "description": "The vendor for the device. For example Dell or Lenovo.", + "requirement": "recommended", + "caption": "Vendor Name", + "type_name": "String", + "_source": "device" + } + }, + { + "type_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "A laptop computer.", + "caption": "Laptop" + }, + "6": { + "description": "A virtual machine.", + "caption": "Virtual" + }, + "0": { + "description": "The type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A server.", + "caption": "Server" + }, + "2": { + "description": "A desktop computer.", + "caption": "Desktop" + }, + "99": { + "description": "The type is not mapped. See the type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A tablet computer.", + "caption": "Tablet" + }, + "5": { + "description": "A mobile phone.", + "caption": "Mobile" + }, + "7": { + "description": "An IOT (Internet of Things) device.", + "caption": "IOT" + }, + "8": { + "description": "A web browser.", + "caption": "Browser" + }, + "9": { + "description": "A networking firewall.", + "caption": "Firewall" + }, + "10": { + "description": "A networking switch.", + "caption": "Switch" + }, + "11": { + "description": "A networking hub.", + "caption": "Hub" + }, + "12": { + "description": "A networking router.", + "caption": "Router" + }, + "14": { + "description": "An intrusion prevention system.", + "caption": "IPS" + }, + "15": { + "description": "A Load Balancer device.", + "caption": "Load Balancer" + }, + "13": { + "description": "An intrusion detection system.", + "caption": "IDS" + } + }, + "description": "The device type ID.", + "requirement": "required", + "caption": "Type ID", + "type_name": "Integer", + "sibling": "type", + "_source": "device" + } + }, + { + "imei_list": { + "type": "string_t", + "description": "The 
International Mobile Equipment Identity values that are associated with the device.", + "is_array": true, + "requirement": "optional", + "caption": "IMEI List", + "type_name": "String", + "_source": "device" + } + }, + { + "first_seen_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The initial discovery time of the device.", + "requirement": "optional", + "caption": "First Seen", + "type_name": "Datetime", + "_source": "device" + } + }, + { + "org": { + "type": "object_t", + "description": "Organization and org unit related to the device.", + "requirement": "optional", + "caption": "Organization", + "object_name": "Organization", + "object_type": "organization", + "_source": "device" + } + }, + { + "type": { + "type": "string_t", + "description": "The device type. For example: unknown, server, desktop, laptop, tablet, mobile, virtual, browser, or other.", + "requirement": "recommended", + "caption": "Type", + "type_name": "String", + "_source": "device", + "_sibling_of": "type_id" + } + }, + { + "interface_uid": { + "type": "string_t", + "description": "The unique identifier of the network interface.", + "requirement": "recommended", + "caption": "Network Interface ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "is_trusted": { + "type": "boolean_t", + "description": "The event occurred on a trusted device.", + "requirement": "optional", + "caption": "Trusted Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "is_shared": { + "type": "boolean_t", + "description": "The event occurred on a shared device.", + "requirement": "optional", + "caption": "Shared Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "boot_uid": { + "type": "string_t", + "description": "A unique identifier of the device that changes after every reboot. 
For example, the value of /proc/sys/kernel/random/boot_id from Linux's procfs.", + "references": [ + { + "description": "Linux kernel's documentation", + "url": "https://docs.kernel.org/admin-guide/sysctl/kernel.html#random" + } + ], + "requirement": "optional", + "caption": "Boot UID", + "type_name": "String", + "_source": "device" + } + }, + { + "subnet": { + "type": "subnet_t", + "description": "The subnet mask.", + "requirement": "optional", + "caption": "Subnet", + "type_name": "Subnet", + "_source": "device" + } + }, + { + "name": { + "type": "string_t", + "description": "The alternate device name, ordinarily as assigned by an administrator.

Note: The Name could be any other string that helps to identify the device, such as a phone number; for example 310-555-1234.

", + "requirement": "optional", + "caption": "Name", + "type_name": "String", + "_source": "device" + } + }, + { + "region": { + "type": "string_t", + "description": "The region where the virtual machine is located. For example, an AWS Region.", + "requirement": "recommended", + "caption": "Region", + "type_name": "String", + "_source": "device" + } + }, + { + "udid": { + "type": "string_t", + "description": "The Apple assigned Unique Device Identifier (UDID). For iOS, iPadOS, tvOS, watchOS and visionOS devices, this is the UDID. For macOS devices, it is the Provisioning UDID. For example: 00008020-008D4548007B4F26", + "references": [ + { + "description": "Apple Wiki", + "url": "https://theapplewiki.com/wiki/UDID" + } + ], + "requirement": "optional", + "caption": "Unique Device Identifier", + "type_name": "String", + "_source": "device" + } + }, + { + "last_seen_time": { + "type": "timestamp_t", + "description": "The most recent discovery time of the device.", + "requirement": "optional", + "caption": "Last Seen", + "type_name": "Timestamp", + "_source": "device" + } + }, + { + "modified_time": { + "type": "timestamp_t", + "description": "The time when the device was last known to have been modified.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Timestamp", + "_source": "device" + } + }, + { + "risk_level_id": { + "type": "integer_t", + "enum": { + "3": { + "caption": "High" + }, + "0": { + "caption": "Info" + }, + "1": { + "caption": "Low" + }, + "2": { + "caption": "Medium" + }, + "99": { + "description": "The risk level is not mapped. 
See the risk_level attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "Critical" + } + }, + "description": "The normalized risk level id.", + "requirement": "optional", + "caption": "Risk Level ID", + "type_name": "Integer", + "sibling": "risk_level", + "_source": "device", + "suppress_checks": [ + "enum_convention" + ] + } + }, + { + "created_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the device was known to have been created.", + "requirement": "optional", + "caption": "Created Time", + "type_name": "Datetime", + "_source": "device" + } + }, + { + "is_mobile_account_active": { + "type": "boolean_t", + "description": "Indicates whether the device has an active mobile account. For example, this is indicated by the itunesStoreAccountActive value within JAMF Pro mobile devices.", + "requirement": "optional", + "caption": "Mobile Account Active", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "hostname": { + "type": "hostname_t", + "description": "The device hostname.", + "requirement": "recommended", + "caption": "Hostname", + "type_name": "Hostname", + "_source": "device" + } + }, + { + "modified_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the device was last known to have been modified.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Datetime", + "_source": "device" + } + }, + { + "mac": { + "type": "mac_t", + "description": "The Media Access Control (MAC) address of the endpoint.", + "requirement": "optional", + "caption": "MAC Address", + "type_name": "MAC Address", + "_source": "endpoint" + } + }, + { + "instance_uid": { + "type": "string_t", + "description": "The unique identifier of a VM instance.", + "requirement": "recommended", + "caption": "Instance ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "domain": { + "type": "string_t", + 
"description": "The network domain where the device resides. For example: work.example.com.", + "requirement": "optional", + "caption": "Domain", + "type_name": "String", + "_source": "device" + } + }, + { + "hw_info": { + "type": "object_t", + "description": "The endpoint hardware information.", + "requirement": "optional", + "caption": "Hardware Info", + "object_name": "Device Hardware Info", + "object_type": "device_hw_info", + "_source": "endpoint" + } + }, + { + "agent_list": { + "type": "object_t", + "description": "A list of agent objects associated with a device, endpoint, or resource.", + "is_array": true, + "requirement": "optional", + "caption": "Agent List", + "object_name": "Agent", + "object_type": "agent", + "_source": "endpoint" + } + }, + { + "autoscale_uid": { + "type": "string_t", + "description": "The unique identifier of the cloud autoscale configuration.", + "requirement": "optional", + "caption": "Autoscale UID", + "type_name": "String", + "_source": "device" + } + }, + { + "imei": { + "type": "string_t", + "description": "The International Mobile Equipment Identity that is associated with the device.", + "requirement": "optional", + "caption": "IMEI", + "type_name": "String", + "@deprecated": { + "message": "Use the imei_list attribute instead.", + "since": "1.4.0" + }, + "_source": "device" + } + }, + { + "uid_alt": { + "type": "string_t", + "description": "An alternate unique identifier of the device if any. For example the ActiveDirectory DN.", + "requirement": "optional", + "caption": "Alternate ID", + "type_name": "String", + "_source": "device" + } + }, + { + "first_seen_time": { + "type": "timestamp_t", + "description": "The initial discovery time of the device.", + "requirement": "optional", + "caption": "First Seen", + "type_name": "Timestamp", + "_source": "device" + } + }, + { + "is_supervised": { + "type": "boolean_t", + "description": "The event occurred on a supervised device. 
Devices that are supervised are typically mobile devices managed by a Mobile Device Management solution and are restricted from specific behaviors such as Apple AirDrop.", + "requirement": "optional", + "caption": "Supervised Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "ip": { + "type": "ip_t", + "description": "The device IP address, in either IPv4 or IPv6 format.", + "requirement": "optional", + "caption": "IP Address", + "type_name": "IP Address", + "_source": "device" + } + }, + { + "image": { + "type": "object_t", + "description": "The image used as a template to run the virtual machine.", + "requirement": "optional", + "caption": "Image", + "object_name": "Image", + "object_type": "image", + "_source": "device" + } + }, + { + "namespace_pid": { + "profile": "container", + "type": "integer_t", + "description": "If running under a process namespace (such as in a container), the process identifier within that process namespace.", + "group": "context", + "requirement": "recommended", + "caption": "Namespace PID", + "type_name": "Integer", + "_source": "endpoint" + } + }, + { + "is_personal": { + "type": "boolean_t", + "description": "The event occurred on a personal device.", + "requirement": "optional", + "caption": "Personal Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "is_compliant": { + "type": "boolean_t", + "description": "The event occurred on a compliant device.", + "requirement": "optional", + "caption": "Compliant Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "boot_time": { + "type": "timestamp_t", + "description": "The time the system was booted.", + "requirement": "optional", + "caption": "Boot Time", + "type_name": "Timestamp", + "_source": "device" + } + }, + { + "vpc_uid": { + "type": "string_t", + "description": "The unique identifier of the Virtual Private Cloud (VPC).", + "requirement": "optional", + "caption": "VPC UID", + "type_name": "String", + "_source": 
"endpoint" + } + }, + { + "meid": { + "type": "string_t", + "description": "The Mobile Equipment Identifier. It's a unique number that identifies a Code Division Multiple Access (CDMA) mobile device.", + "requirement": "optional", + "caption": "MEID", + "type_name": "String", + "_source": "device" + } + }, + { + "vlan_uid": { + "type": "string_t", + "description": "The Virtual LAN identifier.", + "requirement": "optional", + "caption": "VLAN", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "os": { + "type": "object_t", + "description": "The endpoint operating system.", + "requirement": "optional", + "caption": "OS", + "object_name": "Operating System (OS)", + "object_type": "os", + "_source": "endpoint" + } + }, + { + "os_machine_uuid": { + "type": "uuid_t", + "description": "The operating system assigned Machine ID. In Windows, this is the value stored at the registry path: HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Cryptography\\MachineGuid. In Linux, this is stored in the file: /etc/machine-id.", + "requirement": "optional", + "caption": "OS Machine UUID", + "type_name": "UUID", + "_source": "device" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the device. For example the Windows TargetSID or AWS EC2 ARN.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "observable": 47, + "_source": "device" + } + }, + { + "owner": { + "type": "object_t", + "description": "The identity of the service or user account that owns the endpoint or was last logged into it.", + "requirement": "recommended", + "caption": "Owner", + "object_name": "User", + "object_type": "user", + "_source": "endpoint" + } + }, + { + "model": { + "type": "string_t", + "description": "The model of the device. 
For example ThinkPad X1 Carbon.", + "requirement": "optional", + "caption": "Model", + "type_name": "String", + "_source": "device" + } + }, + { + "network_interfaces": { + "type": "object_t", + "description": "The physical or virtual network interfaces that are associated with the device, one for each unique MAC address/IP address/hostname/name combination.

Note: The first element of the array is the network information that pertains to the event.

", + "is_array": true, + "requirement": "optional", + "caption": "Network Interfaces", + "object_name": "Network Interface", + "object_type": "network_interface", + "_source": "device" + } + }, + { + "desc": { + "type": "string_t", + "description": "The description of the device, ordinarily as reported by the operating system.", + "requirement": "optional", + "caption": "Description", + "type_name": "String", + "_source": "device" + } + }, + { + "created_time": { + "type": "timestamp_t", + "description": "The time when the device was known to have been created.", + "requirement": "optional", + "caption": "Created Time", + "type_name": "Timestamp", + "_source": "device" + } + }, + { + "last_seen_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The most recent discovery time of the device.", + "requirement": "optional", + "caption": "Last Seen", + "type_name": "Datetime", + "_source": "device" + } + }, + { + "iccid": { + "type": "string_t", + "description": "The Integrated Circuit Card Identification of a mobile device. Typically it is a unique 18 to 22 digit number that identifies a SIM card.", + "requirement": "optional", + "caption": "ICCID", + "type_name": "String", + "_source": "device" + } + }, + { + "container": { + "profile": "container", + "type": "object_t", + "description": "The information describing an instance of a container. 
A container is a prepackaged, portable system image that runs isolated on an existing system using a container runtime like containerd.", + "group": "context", + "requirement": "recommended", + "caption": "Container", + "object_name": "Container", + "object_type": "container", + "_source": "endpoint" + } + }, + { + "subnet_uid": { + "type": "string_t", + "description": "The unique identifier of a virtual subnet.", + "requirement": "optional", + "caption": "Subnet UID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "zone": { + "type": "string_t", + "description": "The network zone or LAN segment.", + "requirement": "optional", + "caption": "Network Zone", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "eid": { + "type": "string_t", + "description": "An Embedded Identity Document, is a unique serial number that identifies an eSIM-enabled device.", + "requirement": "optional", + "caption": "EID", + "type_name": "String", + "_source": "device" + } + }, + { + "hypervisor": { + "type": "string_t", + "description": "The name of the hypervisor running on the device. For example, Xen, VMware, Hyper-V, VirtualBox, etc.", + "requirement": "optional", + "caption": "Hypervisor", + "type_name": "String", + "_source": "device" + } + }, + { + "interface_name": { + "type": "string_t", + "description": "The name of the network interface (e.g. 
eth2).", + "requirement": "recommended", + "caption": "Network Interface Name", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "risk_score": { + "type": "integer_t", + "description": "The risk score as reported by the event source.", + "requirement": "optional", + "caption": "Risk Score", + "type_name": "Integer", + "_source": "device" + } + }, + { + "location": { + "type": "object_t", + "description": "The geographical location of the device.", + "requirement": "optional", + "caption": "Geo Location", + "object_name": "Geo Location", + "object_type": "location", + "_source": "device" + } + }, + { + "groups": { + "type": "object_t", + "description": "The group names to which the device belongs. For example: [\"Windows Laptops\", \"Engineering\"].", + "is_array": true, + "requirement": "optional", + "caption": "Groups", + "object_name": "Group", + "object_type": "group", + "_source": "device" + } + }, + { + "is_managed": { + "type": "boolean_t", + "description": "The event occurred on a managed device.", + "requirement": "optional", + "caption": "Managed Device", + "type_name": "Boolean", + "_source": "device" + } + }, + { + "is_backed_up": { + "type": "boolean_t", + "description": "Indicates whether the device or resource has a backup enabled, such as an automated snapshot or a cloud backup. 
For example, this is indicated by the cloudBackupEnabled value within JAMF Pro mobile devices or the registration of an AWS ARN with the AWS Backup service.", + "requirement": "optional", + "caption": "Back Ups Configured", + "type_name": "Boolean", + "_source": "device" + } + } + ], + "name": "device", + "description": "The Device object represents an addressable computer system or host, which is typically connected to a computer network and participates in the transmission or processing of data within the computer network.", + "extends": "endpoint", + "constraints": { + "at_least_one": [ + "ip", + "uid", + "name", + "hostname", + "instance_uid", + "interface_uid", + "interface_name" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:Host.", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:Host/" + } + ], + "profiles": [ + "container", + "datetime" + ], + "caption": "Device", + "observable": 20 +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/evidences.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/evidences.json new file mode 100644 index 00000000..a0a4c950 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/evidences.json @@ -0,0 +1,428 @@ +{ + "attributes": [ + { + "data": { + "type": "json_t", + "description": "Additional evidence data that is not accounted for in the specific evidence attributes. 
Use only when absolutely necessary.", + "requirement": "optional", + "caption": "Data", + "type_name": "JSON", + "_source": "evidences" + } + }, + { + "http_response": { + "type": "object_t", + "description": "Describes details about the http response associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "HTTP Response", + "object_name": "HTTP Response", + "object_type": "http_response", + "_source": "evidences" + } + }, + { + "http_request": { + "type": "object_t", + "description": "Describes details about the http request associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "HTTP Request", + "object_name": "HTTP Request", + "object_type": "http_request", + "_source": "evidences" + } + }, + { + "name": { + "type": "string_t", + "description": "The naming convention or type identifier of the evidence associated with the security detection. For example, the @odata.type from Microsoft Graph Alerts V2 or display_name from CrowdStrike Falcon Incident Behaviors.", + "requirement": "optional", + "caption": "Name", + "type_name": "String", + "_source": "evidences" + } + }, + { + "process": { + "type": "object_t", + "description": "Describes details about the process associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Process", + "object_name": "Process", + "object_type": "process", + "_source": "evidences" + } + }, + { + "file": { + "type": "object_t", + "description": "Describes details about the file associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "File", + "object_name": "File", + "object_type": "file", + "_source": "evidences" + } + }, + { + "user": { + "type": "object_t", + "description": "Describes details about the user that was the target or somehow else associated with the activity that triggered the detection.", + "requirement": "recommended", + "caption": 
"User", + "object_name": "User", + "object_type": "user", + "_source": "evidences" + } + }, + { + "script": { + "type": "object_t", + "description": "Describes details about the script that was associated with the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Script", + "object_name": "Script", + "object_type": "script", + "_source": "evidences" + } + }, + { + "device": { + "type": "object_t", + "description": "An addressable device, computer system or host associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Device", + "object_name": "Device", + "object_type": "device", + "_source": "evidences" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the evidence associated with the security detection. For example, the activity_id from CrowdStrike Falcon Alerts or behavior_id from CrowdStrike Falcon Incident Behaviors.", + "requirement": "optional", + "caption": "Unique ID", + "type_name": "String", + "_source": "evidences" + } + }, + { + "query": { + "type": "object_t", + "description": "Describes details about the DNS query associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "DNS Query", + "object_name": "DNS Query", + "object_type": "dns_query", + "_source": "evidences" + } + }, + { + "connection_info": { + "type": "object_t", + "description": "Describes details about the network connection associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Connection Info", + "object_name": "Network Connection Information", + "object_type": "network_connection_info", + "_source": "evidences" + } + }, + { + "url": { + "type": "object_t", + "description": "The URL object that pertains to the event or object associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "URL", + "object_name": "Uniform Resource 
Locator", + "object_type": "url", + "_source": "evidences" + } + }, + { + "email": { + "type": "object_t", + "description": "The email object associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Email", + "object_name": "Email", + "object_type": "email", + "_source": "evidences" + } + }, + { + "tls": { + "type": "object_t", + "description": "Describes details about the Transport Layer Security (TLS) activity that triggered the detection.", + "requirement": "recommended", + "caption": "TLS", + "object_name": "Transport Layer Security (TLS)", + "object_type": "tls", + "_source": "evidences" + } + }, + { + "api": { + "type": "object_t", + "description": "Describes details about the API call associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "API Details", + "object_name": "API", + "object_type": "api", + "_source": "evidences" + } + }, + { + "resources": { + "type": "object_t", + "description": "Describes details about the cloud resources directly related to activity that triggered the detection. 
For resources impacted by the detection, use Affected Resources at the top-level of the finding.", + "is_array": true, + "requirement": "recommended", + "caption": "Cloud Resources", + "object_name": "Resource Details", + "object_type": "resource_details", + "_source": "evidences" + } + }, + { + "actor": { + "type": "object_t", + "description": "Describes details about the user/role/process that was the source of the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Actor", + "object_name": "Actor", + "object_type": "actor", + "_source": "evidences" + } + }, + { + "container": { + "type": "object_t", + "description": "Describes details about the container associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Container", + "object_name": "Container", + "object_type": "container", + "_source": "evidences" + } + }, + { + "database": { + "type": "object_t", + "description": "Describes details about the database associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Database", + "object_name": "Database", + "object_type": "database", + "_source": "evidences" + } + }, + { + "databucket": { + "type": "object_t", + "description": "Describes details about the databucket associated to the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Databucket", + "object_name": "Databucket", + "object_type": "databucket", + "_source": "evidences" + } + }, + { + "dst_endpoint": { + "type": "object_t", + "description": "Describes details about the destination of the network activity that triggered the detection.", + "requirement": "recommended", + "caption": "Destination Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "evidences" + } + }, + { + "ja4_fingerprint_list": { + "type": "object_t", + "description": "Describes details about the JA4+ fingerprints that 
triggered the detection.", + "is_array": true, + "requirement": "recommended", + "caption": "JA4+ Fingerprints", + "object_name": "JA4+ Fingerprint", + "object_type": "ja4_fingerprint", + "_source": "evidences" + } + }, + { + "job": { + "type": "object_t", + "description": "Describes details about the scheduled job that was associated with the activity that triggered the detection.", + "requirement": "recommended", + "caption": "Job", + "object_name": "Job", + "object_type": "job", + "_source": "evidences" + } + }, + { + "src_endpoint": { + "type": "object_t", + "description": "Describes details about the source of the network activity that triggered the detection.", + "requirement": "recommended", + "caption": "Source Endpoint", + "object_name": "Network Endpoint", + "object_type": "network_endpoint", + "_source": "evidences" + } + }, + { + "verdict": { + "type": "string_t", + "description": "The normalized verdict of the evidence associated with the security detection. ", + "requirement": "optional", + "caption": "Verdict", + "type_name": "String", + "_source": "evidences", + "_sibling_of": "verdict_id" + } + }, + { + "verdict_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "The verdict for the evidence is that it should be Disregarded.", + "caption": "Disregard" + }, + "6": { + "description": "The evidence is part of a Test, or other sanctioned behavior(s).", + "caption": "Test" + }, + "0": { + "description": "The type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "The verdict for the evidence has been identified as a False Positive.", + "caption": "False Positive" + }, + "2": { + "description": "The verdict for the evidence has been identified as a True Positive.", + "caption": "True Positive" + }, + "99": { + "description": "The type is not mapped. 
See the type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "The verdict for the evidence is that the behavior has been identified as Suspicious.", + "caption": "Suspicious" + }, + "5": { + "description": "The verdict for the evidence is that the behavior has been identified as Benign.", + "caption": "Benign" + }, + "7": { + "description": "There is insufficient data to render a verdict on the evidence.", + "caption": "Insufficient Data" + }, + "8": { + "description": "The verdict for the evidence is that the behavior has been identified as a Security Risk.", + "caption": "Security Risk" + }, + "9": { + "description": "The verdict for the evidence is Managed Externally, such as in a case management tool.", + "caption": "Managed Externally" + }, + "10": { + "description": "This evidence duplicates existing evidence related to this finding.", + "caption": "Duplicate" + } + }, + "description": "The normalized verdict (or status) ID of the evidence associated with the security detection. For example, Microsoft Graph Security Alerts contain a verdict enumeration for each type of evidence associated with the Alert. 
This is typically set by an automated investigation process or an analyst/investigator assigned to the finding.", + "requirement": "optional", + "caption": "Verdict ID", + "type_name": "Integer", + "sibling": "verdict", + "_source": "evidences" + } + }, + { + "reg_key": { + "type": "object_t", + "description": "Describes details about the registry key that triggered the detection.", + "group": "primary", + "extension": "win", + "requirement": "recommended", + "extension_id": 2, + "caption": "Registry Key", + "object_name": "Registry Key", + "object_type": "win/reg_key", + "_source": "win/evidences", + "_source_patched": "evidences" + } + }, + { + "reg_value": { + "type": "object_t", + "description": "Describes details about the registry value that triggered the detection.", + "group": "primary", + "extension": "win", + "requirement": "recommended", + "extension_id": 2, + "caption": "Registry Value", + "object_name": "Registry Value", + "object_type": "win/reg_value", + "_source": "win/evidences", + "_source_patched": "evidences" + } + }, + { + "win_service": { + "type": "object_t", + "description": "Describes details about the Windows service that triggered the detection.", + "extension": "win", + "requirement": "recommended", + "extension_id": 2, + "caption": "Windows Service", + "object_name": "Windows Service", + "object_type": "win/win_service", + "_source": "win/evidences", + "_source_patched": "evidences" + } + } + ], + "name": "evidences", + "description": "A collection of evidence artifacts associated to the activity/activities that triggered a security detection.", + "extends": "_entity", + "constraints": { + "at_least_one": [ + "actor", + "api", + "connection_info", + "data", + "database", + "databucket", + "device", + "dst_endpoint", + "email", + "file", + "process", + "query", + "src_endpoint", + "url", + "user", + "job", + "script", + "reg_key", + "reg_value", + "win_service" + ] + }, + "profiles": [ + "data_classification", + "cloud", + "container", + 
"linux/linux_users" + ], + "caption": "Evidence Artifacts" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/finding_info.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/finding_info.json new file mode 100644 index 00000000..03b7dc7b --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/finding_info.json @@ -0,0 +1,318 @@ +{ + "attributes": [ + { + "title": { + "type": "string_t", + "description": "A title or a brief phrase summarizing the reported finding.", + "requirement": "recommended", + "caption": "Title", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "desc": { + "type": "string_t", + "description": "The description of the reported finding.", + "requirement": "optional", + "caption": "Description", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "product": { + "type": "object_t", + "description": "Details about the product that reported the finding.", + "requirement": "optional", + "caption": "Product", + "object_name": "Product", + "object_type": "product", + "_source": "finding_info" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the reported finding.", + "requirement": "required", + "caption": "Unique ID", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "types": { + "type": "string_t", + "description": "One or more types of the reported finding.", + "is_array": true, + "requirement": "optional", + "caption": "Types", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "tags": { + "type": "object_t", + "description": "The list of tags; {key:value} pairs associated with the finding.", + "is_array": true, + "requirement": "optional", + "caption": "Tags", + "object_name": "Key:Value object", + "object_type": "key_value_object", + "_source": "finding_info" + } + }, + { + "attacks": { + "type": "object_t", + "description": "The MITRE ATT&CK\u00ae technique and associated tactics related to the finding.", 
+ "is_array": true, + "references": [ + { + "description": "MITRE ATT&CK\u00ae", + "url": "https://attack.mitre.org" + }, + { + "description": "MITRE ATLAS", + "url": "https://atlas.mitre.org/matrices/ATLAS" + } + ], + "requirement": "optional", + "caption": "MITRE ATT&CK\u00ae and ATLAS\u2122 Details", + "object_name": "MITRE ATT&CK\u00ae & ATLAS\u2122", + "object_type": "attack", + "_source": "finding_info" + } + }, + { + "analytic": { + "type": "object_t", + "description": "The analytic technique used to analyze and derive insights from the data or information that led to the finding or conclusion.", + "requirement": "recommended", + "caption": "Analytic", + "object_name": "Analytic", + "object_type": "analytic", + "_source": "finding_info" + } + }, + { + "attack_graph": { + "type": "object_t", + "description": "An Attack Graph describes possible routes an attacker could take through an environment. It describes relationships between resources and their findings, such as malware detections, vulnerabilities, misconfigurations, and other security actions.", + "group": "context", + "references": [ + { + "description": "MS Defender description of Attack Path", + "url": "https://learn.microsoft.com/en-us/azure/defender-for-cloud/how-to-manage-attack-path" + }, + { + "description": "SentinelOne Attack Path documentation", + "url": "https://www.sentinelone.com/cybersecurity-101/cybersecurity/attack-path-analysis/" + } + ], + "requirement": "optional", + "caption": "Attack Graph", + "object_name": "Graph", + "object_type": "graph", + "_source": "finding_info" + } + }, + { + "created_time": { + "type": "timestamp_t", + "description": "The time when the finding was created.", + "requirement": "optional", + "caption": "Created Time", + "type_name": "Timestamp", + "_source": "finding_info" + } + }, + { + "data_sources": { + "type": "string_t", + "description": "A list of data sources utilized in generation of the finding.", + "is_array": true, + "requirement": "optional", + 
"caption": "Data Sources", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "first_seen_time": { + "type": "timestamp_t", + "description": "The time when the finding was first observed. e.g. The time when a vulnerability was first observed.

It can differ from the created_time timestamp, which reflects the time this finding was created.

", + "requirement": "optional", + "caption": "First Seen", + "type_name": "Timestamp", + "_source": "finding_info" + } + }, + { + "kill_chain": { + "type": "object_t", + "description": "The Cyber Kill Chain\u00ae provides a detailed description of each phase and its associated activities within the broader context of a cyber attack.", + "is_array": true, + "requirement": "optional", + "caption": "Kill Chain", + "object_name": "Kill Chain Phase", + "object_type": "kill_chain_phase", + "_source": "finding_info" + } + }, + { + "last_seen_time": { + "type": "timestamp_t", + "description": "The time when the finding was most recently observed. e.g. The time when a vulnerability was most recently observed.

It can differ from the modified_time timestamp, which reflects the time this finding was last modified.

", + "requirement": "optional", + "caption": "Last Seen", + "type_name": "Timestamp", + "_source": "finding_info" + } + }, + { + "modified_time": { + "type": "timestamp_t", + "description": "The time when the finding was last modified.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Timestamp", + "_source": "finding_info" + } + }, + { + "product_uid": { + "type": "string_t", + "description": "The unique identifier of the product that reported the finding.", + "requirement": "optional", + "caption": "Product Identifier", + "type_name": "String", + "@deprecated": { + "message": "Use the uid attribute in the product object instead. See specific usage.", + "since": "1.4.0" + }, + "_source": "finding_info" + } + }, + { + "related_analytics": { + "type": "object_t", + "description": "Other analytics related to this finding.", + "is_array": true, + "requirement": "optional", + "caption": "Related Analytics", + "object_name": "Analytic", + "object_type": "analytic", + "_source": "finding_info" + } + }, + { + "related_events": { + "type": "object_t", + "description": "Describes events and/or other findings related to the finding as identified by the security product. 
Note that these events may or may not be in OCSF.", + "is_array": true, + "requirement": "optional", + "caption": "Related Events/Findings", + "object_name": "Related Event/Finding", + "object_type": "related_event", + "_source": "finding_info" + } + }, + { + "related_events_count": { + "type": "integer_t", + "description": "Number of related events or findings.", + "requirement": "optional", + "caption": "Related Events/Findings Count", + "type_name": "Integer", + "_source": "finding_info" + } + }, + { + "src_url": { + "type": "url_t", + "description": "The URL pointing to the source of the finding.", + "requirement": "optional", + "caption": "Source URL", + "type_name": "URL String", + "_source": "finding_info" + } + }, + { + "traits": { + "type": "object_t", + "description": "The list of key traits or characteristics extracted from the finding.", + "is_array": true, + "requirement": "optional", + "caption": "Traits", + "object_name": "Trait", + "object_type": "trait", + "_source": "finding_info" + } + }, + { + "uid_alt": { + "type": "string_t", + "description": "The alternative unique identifier of the reported finding.", + "requirement": "optional", + "caption": "Alternate ID", + "type_name": "String", + "_source": "finding_info" + } + }, + { + "last_seen_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the finding was most recently observed. e.g. The time when a vulnerability was most recently observed.

It can differ from the modified_time timestamp, which reflects the time this finding was last modified.

", + "requirement": "optional", + "caption": "Last Seen", + "type_name": "Datetime", + "_source": "finding_info" + } + }, + { + "modified_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the finding was last modified.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Datetime", + "_source": "finding_info" + } + }, + { + "first_seen_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the finding was first observed. e.g. The time when a vulnerability was first observed.

It can differ from the created_time timestamp, which reflects the time this finding was created.

", + "requirement": "optional", + "caption": "First Seen", + "type_name": "Datetime", + "_source": "finding_info" + } + }, + { + "created_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the finding was created.", + "requirement": "optional", + "caption": "Created Time", + "type_name": "Datetime", + "_source": "finding_info" + } + } + ], + "name": "finding_info", + "description": "The Finding Information object describes metadata related to a security finding generated by a security tool or system.", + "extends": "object", + "profiles": [ + "data_classification", + "datetime" + ], + "caption": "Finding Information" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/firewall_rule.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/firewall_rule.json new file mode 100644 index 00000000..66ae54d3 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/firewall_rule.json @@ -0,0 +1,135 @@ +{ + "attributes": [ + { + "name": { + "type": "string_t", + "description": "The name of the rule that generated the event.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "rule" + } + }, + { + "type": { + "type": "string_t", + "description": "The rule type.", + "requirement": "optional", + "caption": "Type", + "type_name": "String", + "_source": "rule" + } + }, + { + "version": { + "type": "string_t", + "description": "The rule version. 
For example: 1.1.", + "requirement": "optional", + "caption": "Version", + "type_name": "String", + "_source": "rule" + } + }, + { + "desc": { + "type": "string_t", + "description": "The description of the rule that generated the event.", + "requirement": "optional", + "caption": "Description", + "type_name": "String", + "_source": "rule" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the rule that generated the event.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "_source": "rule" + } + }, + { + "category": { + "type": "string_t", + "description": "The rule category.", + "requirement": "optional", + "caption": "Category", + "type_name": "String", + "_source": "rule" + } + }, + { + "duration": { + "type": "long_t", + "description": "The rule response time duration, usually used for challenge completion time.", + "requirement": "optional", + "caption": "Duration Milliseconds", + "type_name": "Long", + "_source": "firewall_rule" + } + }, + { + "condition": { + "type": "string_t", + "description": "The rule trigger condition for the rule. For example: SQL_INJECTION.", + "requirement": "optional", + "caption": "Condition", + "type_name": "String", + "_source": "firewall_rule" + } + }, + { + "match_details": { + "type": "string_t", + "description": "The data in a request that rule matched. For example: '[\"10\",\"and\",\"1\"]'.", + "is_array": true, + "requirement": "optional", + "caption": "Match Details", + "type_name": "String", + "_source": "firewall_rule" + } + }, + { + "match_location": { + "type": "string_t", + "description": "The location of the matched data in the source which resulted in the triggered firewall rule. 
For example: HEADER.", + "requirement": "optional", + "caption": "Match Location", + "type_name": "String", + "_source": "firewall_rule" + } + }, + { + "rate_limit": { + "type": "integer_t", + "description": "The rate limit for a rate-based rule.", + "requirement": "optional", + "caption": "Rate Limit", + "type_name": "Integer", + "_source": "firewall_rule" + } + }, + { + "sensitivity": { + "type": "string_t", + "description": "The sensitivity of the firewall rule in the matched event. For example: HIGH.", + "requirement": "optional", + "caption": "Sensitivity", + "type_name": "String", + "_source": "firewall_rule" + } + } + ], + "name": "firewall_rule", + "description": "The Firewall Rule object represents a specific rule within a firewall policy or event. It contains information about a rule's configuration, properties, and associated actions that define how network traffic is handled by the firewall.", + "extends": "rule", + "constraints": { + "at_least_one": [ + "name", + "uid" + ] + }, + "caption": "Firewall Rule" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_request.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_request.json new file mode 100644 index 00000000..2837168b --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_request.json @@ -0,0 +1,167 @@ +{ + "attributes": [ + { + "args": { + "type": "string_t", + "description": "The arguments sent along with the HTTP request.", + "requirement": "optional", + "caption": "HTTP Arguments", + "type_name": "String", + "_source": "http_request" + } + }, + { + "version": { + "type": "string_t", + "description": "The Hypertext Transfer Protocol (HTTP) version.", + "requirement": "recommended", + "caption": "HTTP Version", + "type_name": "String", + "_source": "http_request" + } + }, + { + "length": { + "type": "integer_t", + "description": "The length of the entire HTTP request, in number of bytes.", + "requirement": "optional", + "caption": "Request 
Length", + "type_name": "Integer", + "_source": "http_request" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the http request.", + "requirement": "optional", + "caption": "Unique ID", + "type_name": "String", + "_source": "http_request" + } + }, + { + "url": { + "type": "object_t", + "description": "The URL object that pertains to the request.", + "requirement": "recommended", + "caption": "URL", + "object_name": "Uniform Resource Locator", + "object_type": "url", + "_source": "http_request" + } + }, + { + "body_length": { + "type": "integer_t", + "description": "The actual length of the HTTP request body, in number of bytes, independent of a potentially existing Content-Length header.", + "requirement": "optional", + "caption": "Request Body Length", + "type_name": "Integer", + "_source": "http_request" + } + }, + { + "user_agent": { + "type": "string_t", + "description": "The request header that identifies the operating system and web browser.", + "requirement": "recommended", + "caption": "HTTP User-Agent", + "type_name": "String", + "observable": 16, + "_source": "http_request" + } + }, + { + "http_headers": { + "type": "object_t", + "description": "Additional HTTP headers of an HTTP request or response.", + "is_array": true, + "requirement": "recommended", + "caption": "HTTP Headers", + "object_name": "HTTP Header", + "object_type": "http_header", + "_source": "http_request" + } + }, + { + "http_method": { + "type": "string_t", + "enum": { + "OPTIONS": { + "description": "The OPTIONS method describes the communication options for the target resource.", + "caption": "Options" + }, + "GET": { + "description": "The GET method requests a representation of the specified resource. 
Requests using GET should only retrieve data.", + "caption": "Get" + }, + "HEAD": { + "description": "The HEAD method asks for a response identical to a GET request, but without the response body.", + "caption": "Head" + }, + "POST": { + "description": "The POST method submits an entity to the specified resource, often causing a change in state or side effects on the server.", + "caption": "Post" + }, + "PUT": { + "description": "The PUT method replaces all current representations of the target resource with the request payload.", + "caption": "Put" + }, + "DELETE": { + "description": "The DELETE method deletes the specified resource.", + "caption": "Delete" + }, + "TRACE": { + "description": "The TRACE method performs a message loop-back test along the path to the target resource.", + "caption": "Trace" + }, + "CONNECT": { + "description": "The CONNECT method establishes a tunnel to the server identified by the target resource.", + "caption": "Connect" + }, + "PATCH": { + "description": "The PATCH method applies partial modifications to a resource.", + "caption": "Patch" + } + }, + "description": "The HTTP request method indicates the desired action to be performed for a given resource.", + "requirement": "recommended", + "caption": "HTTP Method", + "type_name": "String", + "_source": "http_request" + } + }, + { + "referrer": { + "type": "string_t", + "description": "The request header that identifies the address of the previous web page, which is linked to the current web page or resource being requested.", + "requirement": "optional", + "caption": "HTTP Referrer", + "type_name": "String", + "_source": "http_request" + } + }, + { + "x_forwarded_for": { + "type": "ip_t", + "description": "The X-Forwarded-For header identifying the originating IP address(es) of a client connecting to a web server through an HTTP proxy or a load balancer.", + "is_array": true, + "requirement": "optional", + "caption": "X-Forwarded-For", + "type_name": "IP Address", + "_source": 
"http_request" + } + } + ], + "name": "http_request", + "description": "The HTTP Request object represents the attributes of a request made to a web server. It encapsulates the details and metadata associated with an HTTP request, including the request method, headers, URL, query parameters, body content, and other relevant information.", + "extends": "object", + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:OutboundInternetNetworkTraffic.", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:OutboundInternetNetworkTraffic/" + } + ], + "caption": "HTTP Request" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_response.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_response.json new file mode 100644 index 00000000..65c5f003 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/http_response.json @@ -0,0 +1,96 @@ +{ + "attributes": [ + { + "code": { + "type": "integer_t", + "description": "The Hypertext Transfer Protocol (HTTP) status code returned from the web server to the client. For example, 200.", + "requirement": "required", + "caption": "Response Code", + "type_name": "Integer", + "_source": "http_response" + } + }, + { + "message": { + "type": "string_t", + "description": "The description of the event/finding, as defined by the source.", + "requirement": "optional", + "caption": "Message", + "type_name": "String", + "_source": "http_response" + } + }, + { + "status": { + "type": "string_t", + "description": "The response status. 
For example: A successful HTTP status of 'OK' which corresponds to a code of 200.", + "requirement": "optional", + "caption": "Status", + "type_name": "String", + "_source": "http_response" + } + }, + { + "length": { + "type": "integer_t", + "description": "The length of the entire HTTP response, in number of bytes.", + "requirement": "optional", + "caption": "Response Length", + "type_name": "Integer", + "_source": "http_response" + } + }, + { + "content_type": { + "type": "string_t", + "description": "The request header that identifies the original media type of the resource (prior to any content encoding applied for sending).", + "requirement": "optional", + "caption": "HTTP Content Type", + "type_name": "String", + "_source": "http_response" + } + }, + { + "body_length": { + "type": "integer_t", + "description": "The actual length of the HTTP response body, in number of bytes, independent of a potentially existing Content-Length header.", + "requirement": "optional", + "caption": "Response Body Length", + "type_name": "Integer", + "_source": "http_response" + } + }, + { + "http_headers": { + "type": "object_t", + "description": "Additional HTTP headers of an HTTP request or response.", + "is_array": true, + "requirement": "recommended", + "caption": "HTTP Headers", + "object_name": "HTTP Header", + "object_type": "http_header", + "_source": "http_response" + } + }, + { + "latency": { + "type": "integer_t", + "description": "The HTTP response latency measured in milliseconds.", + "requirement": "optional", + "caption": "Latency", + "type_name": "Integer", + "_source": "http_response" + } + } + ], + "name": "http_response", + "description": "The HTTP Response object contains detailed information about the response sent from a web server to the requester. 
It encompasses attributes and metadata that describe the response status, headers, body content, and other relevant information.", + "extends": "object", + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:InboundInternetNetworkTraffic.", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:InboundInternetNetworkTraffic/" + } + ], + "caption": "HTTP Response" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/metadata.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/metadata.json new file mode 100644 index 00000000..9afd440a --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/metadata.json @@ -0,0 +1,431 @@ +{ + "attributes": [ + { + "untruncated_size": { + "type": "integer_t", + "description": "The original size of the OCSF event data in kilobytes before any truncation occurred. This field is typically populated when is_truncated is true to indicate the full size of the original event.", + "requirement": "optional", + "caption": "Untruncated Size", + "type_name": "Integer", + "_source": "metadata" + } + }, + { + "profiles": { + "type": "string_t", + "description": "The list of profiles used to create the event. Profiles should be referenced by their name attribute for core profiles, or extension/name for profiles from extensions.", + "is_array": true, + "requirement": "optional", + "caption": "Profiles", + "type_name": "String", + "_source": "metadata" + } + }, + { + "loggers": { + "type": "object_t", + "description": "An array of Logger objects that describe the pipeline of devices and logging products between the event source and its eventual destination. 
Note, this attribute can be used when there is a complex end-to-end path of event flow and/or to track the chain of custody of the data.", + "is_array": true, + "requirement": "optional", + "caption": "Loggers", + "object_name": "Logger", + "object_type": "logger", + "_source": "metadata" + } + }, + { + "type": { + "type": "string_t", + "description": "The type of the event or finding as a subset of the source of the event. This can be any distinguishing characteristic of the data. For example 'Management Events' or 'Device Penetration Test'.", + "requirement": "optional", + "caption": "Type", + "type_name": "String", + "_source": "metadata" + } + }, + { + "processed_time": { + "type": "timestamp_t", + "description": "The event processed time, such as an ETL operation.", + "requirement": "optional", + "caption": "Processed Time", + "type_name": "Timestamp", + "_source": "metadata" + } + }, + { + "is_truncated": { + "type": "boolean_t", + "description": "Indicates whether the OCSF event data has been truncated due to size limitations. When true, some event data may have been omitted to fit within system constraints.", + "requirement": "optional", + "caption": "Is Truncated", + "type_name": "Boolean", + "_source": "metadata" + } + }, + { + "transformation_info_list": { + "type": "object_t", + "description": "An array of transformation info that describes the mappings or transforms applied to the data.", + "is_array": true, + "requirement": "optional", + "caption": "Transformation Info", + "object_name": "Transformation Info", + "object_type": "transformation_info", + "_source": "metadata" + } + }, + { + "original_event_uid": { + "type": "string_t", + "description": "The unique identifier assigned to the event in its original logging system before transformation to OCSF format. This field preserves the source system's native event identifier, enabling traceability back to the raw log entry. 
For example, a Windows Event Record ID, a syslog message ID, a Splunk _cd value, or a database transaction log sequence number.", + "requirement": "optional", + "caption": "Original Event ID", + "type_name": "String", + "_source": "metadata" + } + }, + { + "modified_time": { + "type": "timestamp_t", + "description": "The time when the event was last modified or enriched.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Timestamp", + "_source": "metadata" + } + }, + { + "data_classification": { + "profile": "data_classification", + "type": "object_t", + "description": "The Data Classification object includes information about data classification levels and data category types.", + "group": "context", + "requirement": "recommended", + "caption": "Data Classification", + "object_name": "Data Classification", + "object_type": "data_classification", + "@deprecated": { + "message": "Use the attribute data_classifications instead", + "since": "1.4.0" + }, + "_source": "metadata" + } + }, + { + "version": { + "type": "string_t", + "description": "The version of the OCSF schema, using Semantic Versioning Specification (SemVer). For example: 1.0.0. Event consumers use the version to determine the available event attributes.", + "requirement": "required", + "caption": "Version", + "type_name": "String", + "_source": "metadata" + } + }, + { + "modified_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the event was last modified or enriched.", + "requirement": "optional", + "caption": "Modified Time", + "type_name": "Datetime", + "_source": "metadata" + } + }, + { + "original_time": { + "type": "string_t", + "description": "The original event time as reported by the event source. For example, the time in the original format from system event log such as Syslog on Unix/Linux and the System event file on Windows. 
Omit if event is generated instead of collected via logs.", + "requirement": "recommended", + "caption": "Original Time", + "type_name": "String", + "_source": "metadata" + } + }, + { + "reporter": { + "type": "object_t", + "description": "The entity from which the event or finding was first reported.", + "requirement": "recommended", + "caption": "Reporter", + "object_name": "Reporter", + "object_type": "reporter", + "_source": "metadata" + } + }, + { + "extension": { + "type": "object_t", + "description": "The schema extension used to create the event.", + "requirement": "optional", + "caption": "Schema Extension", + "object_name": "Schema Extension", + "object_type": "extension", + "@deprecated": { + "message": "Use the extensions attribute instead.", + "since": "1.1.0" + }, + "_source": "metadata" + } + }, + { + "logged_time": { + "type": "timestamp_t", + "description": "

The time when the logging system collected and logged the event.

This attribute is distinct from the event time in that event time typically contains the time extracted from the original event. Most of the time, these two times will be different.", + "requirement": "optional", + "caption": "Logged Time", + "type_name": "Timestamp", + "_source": "metadata" + } + }, + { + "debug": { + "type": "string_t", + "description": "Debug information about non-fatal issues with this OCSF event. Each issue is a line in this string array.", + "is_array": true, + "requirement": "optional", + "caption": "Debug Information", + "type_name": "String", + "_source": "metadata" + } + }, + { + "processed_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The event processed time, such as an ETL operation.", + "requirement": "optional", + "caption": "Processed Time", + "type_name": "Datetime", + "_source": "metadata" + } + }, + { + "source": { + "type": "string_t", + "description": "The source of the event or finding. This can be any distinguishing name for the logical origin of the data \u2014 for example, 'CloudTrail Events', or a use case like 'Attack Simulations' or 'Vulnerability Scans'.", + "requirement": "optional", + "caption": "Source", + "type_name": "String", + "_source": "metadata" + } + }, + { + "event_code": { + "type": "string_t", + "description": "The identifier of the original event. For example the numerical Windows Event Code or Cisco syslog code.", + "requirement": "optional", + "caption": "Event Code", + "type_name": "String", + "_source": "metadata" + } + }, + { + "log_version": { + "type": "string_t", + "description": "The event log schema version of the original event. For example the syslog version or the Cisco Log Schema version.", + "requirement": "optional", + "caption": "Log Version", + "type_name": "String", + "_source": "metadata" + } + }, + { + "labels": { + "type": "string_t", + "description": "The list of labels attached to the event. 
For example: [\"sample\", \"dev\"]", + "is_array": true, + "requirement": "optional", + "caption": "Labels", + "type_name": "String", + "_source": "metadata" + } + }, + { + "log_provider": { + "type": "string_t", + "description": "The logging provider or logging service that logged the event. For example AWS CloudWatch or Splunk.", + "requirement": "optional", + "caption": "Log Provider", + "type_name": "String", + "_source": "metadata" + } + }, + { + "data_classifications": { + "profile": "data_classification", + "type": "object_t", + "description": "A list of Data Classification objects, that include information about data classification levels and data category types, identified by a classifier.", + "group": "context", + "is_array": true, + "requirement": "recommended", + "caption": "Data Classification", + "object_name": "Data Classification", + "object_type": "data_classification", + "_source": "metadata" + } + }, + { + "log_source": { + "type": "string_t", + "description": "The log system or component where the data originated. For example, a file path, syslog server name or a Windows hostname and logging subsystem such as Security.", + "requirement": "optional", + "caption": "Log Source", + "type_name": "String", + "_source": "metadata" + } + }, + { + "extensions": { + "type": "object_t", + "description": "The schema extensions used to create the event.", + "is_array": true, + "requirement": "optional", + "caption": "Schema Extensions", + "object_name": "Schema Extension", + "object_type": "extension", + "_source": "metadata" + } + }, + { + "transmit_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the event was transmitted from the logging device to its next destination.", + "requirement": "optional", + "caption": "Transmission Time", + "type_name": "Datetime", + "_source": "metadata" + } + }, + { + "logged_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "

The time when the logging system collected and logged the event.

This attribute is distinct from the event time in that event time typically contains the time extracted from the original event. Most of the time, these two times will be different.", + "requirement": "optional", + "caption": "Logged Time", + "type_name": "Datetime", + "_source": "metadata" + } + }, + { + "correlation_uid": { + "type": "string_t", + "description": "A unique identifier used to correlate this OCSF event with other related OCSF events, distinct from the event's uid value. This enables linking multiple OCSF events that are part of the same activity, transaction, or security incident across different systems or time periods.", + "requirement": "optional", + "caption": "Correlation UID", + "type_name": "String", + "_source": "metadata" + } + }, + { + "uid": { + "type": "string_t", + "description": "A unique identifier assigned to the OCSF event. This ID is specific to the OCSF event itself and is distinct from the original event identifier in the source system (see original_event_uid).", + "requirement": "optional", + "caption": "Event UID", + "type_name": "String", + "_source": "metadata" + } + }, + { + "tags": { + "type": "object_t", + "description": "The list of tags; {key:value} pairs associated to the event.", + "is_array": true, + "requirement": "optional", + "caption": "Tags", + "object_name": "Key:Value object", + "object_type": "key_value_object", + "_source": "metadata" + } + }, + { + "log_level": { + "type": "string_t", + "description": "The level at which an event was logged. This can be log provider specific. 
For example the audit level.", + "requirement": "optional", + "caption": "Log Level", + "type_name": "String", + "_source": "metadata" + } + }, + { + "transmit_time": { + "type": "timestamp_t", + "description": "The time when the event was transmitted from the logging device to its next destination.", + "requirement": "optional", + "caption": "Transmission Time", + "type_name": "Timestamp", + "_source": "metadata" + } + }, + { + "tenant_uid": { + "type": "string_t", + "description": "The unique tenant identifier.", + "requirement": "recommended", + "caption": "Tenant UID", + "type_name": "String", + "_source": "metadata" + } + }, + { + "sequence": { + "type": "integer_t", + "description": "Sequence number of the event. The sequence number is a value available in some events, to make the exact ordering of events unambiguous, regardless of the event time precision.", + "requirement": "optional", + "caption": "Sequence Number", + "type_name": "Integer", + "_source": "metadata" + } + }, + { + "product": { + "type": "object_t", + "description": "The product that reported the event.", + "requirement": "required", + "caption": "Product", + "object_name": "Product", + "object_type": "product", + "_source": "metadata" + } + }, + { + "log_name": { + "type": "string_t", + "description": "The event log name, typically for the consumer of the event. For example, the storage bucket name, SIEM repository index name, etc.", + "requirement": "recommended", + "caption": "Log Name", + "type_name": "String", + "_source": "metadata" + } + }, + { + "log_format": { + "type": "string_t", + "description": "The format of data in the log where the data originated. 
For example CSV, XML, Windows Multiline, JSON, syslog or Cisco Log Schema.", + "requirement": "optional", + "caption": "Log Source Format", + "type_name": "String", + "_source": "metadata" + } + } + ], + "name": "metadata", + "description": "The Metadata object describes the metadata associated with the event.", + "extends": "object", + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:Metadata", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:Metadata/" + } + ], + "profiles": [ + "data_classification", + "datetime" + ], + "caption": "Metadata" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_endpoint.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_endpoint.json new file mode 100644 index 00000000..fe97c820 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_endpoint.json @@ -0,0 +1,448 @@ +{ + "attributes": [ + { + "name": { + "type": "string_t", + "description": "The short name of the endpoint.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "owner": { + "type": "object_t", + "description": "The identity of the service or user account that owns the endpoint or was last logged into it.", + "requirement": "recommended", + "caption": "Owner", + "object_name": "User", + "object_type": "user", + "_source": "endpoint" + } + }, + { + "port": { + "type": "port_t", + "description": "The port used for communication within the network connection.", + "requirement": "recommended", + "caption": "Port", + "type_name": "Port", + "_source": "network_endpoint" + } + }, + { + "type": { + "type": "string_t", + "description": "The network endpoint type. 
For example: unknown, server, desktop, laptop, tablet, mobile, virtual, browser, or other.", + "requirement": "optional", + "caption": "Type", + "type_name": "String", + "_source": "network_endpoint", + "_sibling_of": "type_id" + } + }, + { + "os": { + "type": "object_t", + "description": "The endpoint operating system.", + "requirement": "optional", + "caption": "OS", + "object_name": "Operating System (OS)", + "object_type": "os", + "_source": "endpoint" + } + }, + { + "domain": { + "type": "string_t", + "description": "The name of the domain that the endpoint belongs to or that corresponds to the endpoint.", + "requirement": "optional", + "caption": "Domain", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "ip": { + "type": "ip_t", + "description": "The IP address of the endpoint, in either IPv4 or IPv6 format.", + "requirement": "recommended", + "caption": "IP Address", + "type_name": "IP Address", + "_source": "endpoint" + } + }, + { + "location": { + "type": "object_t", + "description": "The geographical location of the endpoint.", + "requirement": "optional", + "caption": "Geo Location", + "object_name": "Geo Location", + "object_type": "location", + "_source": "endpoint" + } + }, + { + "hostname": { + "type": "hostname_t", + "description": "The fully qualified name of the endpoint.", + "requirement": "recommended", + "caption": "Hostname", + "type_name": "Hostname", + "_source": "endpoint" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the endpoint.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "observable": 48, + "_source": "network_endpoint" + } + }, + { + "mac": { + "type": "mac_t", + "description": "The Media Access Control (MAC) address of the endpoint.", + "requirement": "optional", + "caption": "MAC Address", + "type_name": "MAC Address", + "_source": "endpoint" + } + }, + { + "type_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "A 
laptop computer.", + "caption": "Laptop" + }, + "6": { + "description": "A virtual machine.", + "caption": "Virtual" + }, + "0": { + "description": "The type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A server.", + "caption": "Server" + }, + "2": { + "description": "A desktop computer.", + "caption": "Desktop" + }, + "99": { + "description": "The type is not mapped. See the type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A tablet computer.", + "caption": "Tablet" + }, + "5": { + "description": "A mobile phone.", + "caption": "Mobile" + }, + "7": { + "description": "An IOT (Internet of Things) device.", + "caption": "IOT" + }, + "8": { + "description": "A web browser.", + "caption": "Browser" + }, + "9": { + "description": "A networking firewall.", + "caption": "Firewall" + }, + "10": { + "description": "A networking switch.", + "caption": "Switch" + }, + "11": { + "description": "A networking hub.", + "caption": "Hub" + }, + "12": { + "description": "A networking router.", + "caption": "Router" + }, + "14": { + "description": "An intrusion prevention system.", + "caption": "IPS" + }, + "15": { + "description": "A Load Balancer device.", + "caption": "Load Balancer" + }, + "13": { + "description": "An intrusion detection system.", + "caption": "IDS" + } + }, + "description": "The network endpoint type ID.", + "requirement": "recommended", + "caption": "Type ID", + "type_name": "Integer", + "sibling": "type", + "_source": "network_endpoint" + } + }, + { + "agent_list": { + "type": "object_t", + "description": "A list of agent objects associated with a device, endpoint, or resource.", + "is_array": true, + "requirement": "optional", + "caption": "Agent List", + "object_name": "Agent", + "object_type": "agent", + "_source": "endpoint" + } + }, + { + "autonomous_system": { + "type": "object_t", + "description": "The Autonomous System details associated with an IP address.", + 
"requirement": "optional", + "caption": "Autonomous System", + "object_name": "Autonomous System", + "object_type": "autonomous_system", + "_source": "network_endpoint" + } + }, + { + "container": { + "profile": "container", + "type": "object_t", + "description": "The information describing an instance of a container. A container is a prepackaged, portable system image that runs isolated on an existing system using a container runtime like containerd.", + "group": "context", + "requirement": "recommended", + "caption": "Container", + "object_name": "Container", + "object_type": "container", + "_source": "endpoint" + } + }, + { + "hw_info": { + "type": "object_t", + "description": "The endpoint hardware information.", + "requirement": "optional", + "caption": "Hardware Info", + "object_name": "Device Hardware Info", + "object_type": "device_hw_info", + "_source": "endpoint" + } + }, + { + "instance_uid": { + "type": "string_t", + "description": "The unique identifier of a VM instance.", + "requirement": "recommended", + "caption": "Instance ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "interface_name": { + "type": "string_t", + "description": "The name of the network interface (e.g. eth2).", + "requirement": "recommended", + "caption": "Network Interface Name", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "interface_uid": { + "type": "string_t", + "description": "The unique identifier of the network interface.", + "requirement": "recommended", + "caption": "Network Interface ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "intermediate_ips": { + "type": "ip_t", + "description": "The intermediate IP Addresses. 
For example, the IP addresses in the HTTP X-Forwarded-For header.", + "is_array": true, + "requirement": "optional", + "caption": "Intermediate IP Addresses", + "type_name": "IP Address", + "_source": "network_endpoint" + } + }, + { + "isp": { + "type": "string_t", + "description": "The name of the Internet Service Provider (ISP).", + "requirement": "optional", + "caption": "ISP Name", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "isp_org": { + "type": "string_t", + "description": "The organization name of the Internet Service Provider (ISP). This represents the parent organization or company that owns/operates the ISP. For example, Comcast Corporation would be the ISP org for Xfinity internet service. This attribute helps identify the ultimate provider when ISPs operate under different brand names.", + "requirement": "optional", + "caption": "ISP Org", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "namespace_pid": { + "profile": "container", + "type": "integer_t", + "description": "If running under a process namespace (such as in a container), the process identifier within that process namespace.", + "group": "context", + "requirement": "recommended", + "caption": "Namespace PID", + "type_name": "Integer", + "_source": "endpoint" + } + }, + { + "network_scope": { + "type": "string_t", + "description": "Indicates whether the endpoint resides inside the customer\u2019s network, outside on the Internet, or if its location relative to the customer\u2019s network cannot be determined. 
The value is normalized to the caption of the network_scope_id.", + "requirement": "optional", + "caption": "Network Scope", + "type_name": "String", + "_source": "network_endpoint", + "_sibling_of": "network_scope_id" + } + }, + { + "network_scope_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "Unknown whether this endpoint resides within the customer\u2019s network.", + "caption": "Unknown" + }, + "1": { + "description": "The endpoint resides inside the customer\u2019s network.", + "caption": "Internal" + }, + "2": { + "description": "The endpoint is on the Internet or otherwise external to the customer\u2019s network.", + "caption": "External" + }, + "99": { + "description": "The network scope is not mapped. See the network_scope attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the endpoint\u2019s network scope. The normalized network scope identifier indicates whether the endpoint resides inside the customer\u2019s network, outside on the Internet, or if its location relative to the customer\u2019s network cannot be determined.", + "requirement": "optional", + "caption": "Network Scope ID", + "type_name": "Integer", + "sibling": "network_scope", + "_source": "network_endpoint" + } + }, + { + "proxy_endpoint": { + "type": "object_t", + "description": "The network proxy information pertaining to a specific endpoint. 
This can be used to describe information pertaining to network address translation (NAT).", + "requirement": "optional", + "caption": "Proxy Endpoint", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "_source": "network_endpoint" + } + }, + { + "subnet_uid": { + "type": "string_t", + "description": "The unique identifier of a virtual subnet.", + "requirement": "optional", + "caption": "Subnet UID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "svc_name": { + "type": "string_t", + "description": "The service name in service-to-service connections. For example, AWS VPC logs the pkt-src-aws-service and pkt-dst-aws-service fields identify the connection is coming from or going to an AWS service.", + "requirement": "recommended", + "caption": "Service Name", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "vlan_uid": { + "type": "string_t", + "description": "The Virtual LAN identifier.", + "requirement": "optional", + "caption": "VLAN", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "vpc_uid": { + "type": "string_t", + "description": "The unique identifier of the Virtual Private Cloud (VPC).", + "requirement": "optional", + "caption": "VPC UID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "zone": { + "type": "string_t", + "description": "The network zone or LAN segment.", + "requirement": "optional", + "caption": "Network Zone", + "type_name": "String", + "_source": "endpoint" + } + } + ], + "name": "network_endpoint", + "description": "The Network Endpoint object describes characteristics of a network endpoint. 
These can be a source or destination of a network connection.", + "extends": "endpoint", + "constraints": { + "at_least_one": [ + "ip", + "uid", + "name", + "hostname", + "svc_name", + "instance_uid", + "interface_uid", + "interface_name", + "domain" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:ComputerNetworkNode.", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:ComputerNetworkNode/" + } + ], + "profiles": [ + "container" + ], + "caption": "Network Endpoint", + "observable": 20 +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_proxy.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_proxy.json new file mode 100644 index 00000000..f7225c9c --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/network_proxy.json @@ -0,0 +1,448 @@ +{ + "attributes": [ + { + "name": { + "type": "string_t", + "description": "The short name of the endpoint.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "owner": { + "type": "object_t", + "description": "The identity of the service or user account that owns the endpoint or was last logged into it.", + "requirement": "recommended", + "caption": "Owner", + "object_name": "User", + "object_type": "user", + "_source": "endpoint" + } + }, + { + "port": { + "type": "port_t", + "description": "The port used for communication within the network connection.", + "requirement": "recommended", + "caption": "Port", + "type_name": "Port", + "_source": "network_endpoint" + } + }, + { + "type": { + "type": "string_t", + "description": "The network endpoint type. 
For example: unknown, server, desktop, laptop, tablet, mobile, virtual, browser, or other.", + "requirement": "optional", + "caption": "Type", + "type_name": "String", + "_source": "network_endpoint", + "_sibling_of": "type_id" + } + }, + { + "os": { + "type": "object_t", + "description": "The endpoint operating system.", + "requirement": "optional", + "caption": "OS", + "object_name": "Operating System (OS)", + "object_type": "os", + "_source": "endpoint" + } + }, + { + "domain": { + "type": "string_t", + "description": "The name of the domain that the endpoint belongs to or that corresponds to the endpoint.", + "requirement": "optional", + "caption": "Domain", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "ip": { + "type": "ip_t", + "description": "The IP address of the endpoint, in either IPv4 or IPv6 format.", + "requirement": "recommended", + "caption": "IP Address", + "type_name": "IP Address", + "_source": "endpoint" + } + }, + { + "location": { + "type": "object_t", + "description": "The geographical location of the endpoint.", + "requirement": "optional", + "caption": "Geo Location", + "object_name": "Geo Location", + "object_type": "location", + "_source": "endpoint" + } + }, + { + "hostname": { + "type": "hostname_t", + "description": "The fully qualified name of the endpoint.", + "requirement": "recommended", + "caption": "Hostname", + "type_name": "Hostname", + "_source": "endpoint" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the endpoint.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "observable": 48, + "_source": "network_endpoint" + } + }, + { + "mac": { + "type": "mac_t", + "description": "The Media Access Control (MAC) address of the endpoint.", + "requirement": "optional", + "caption": "MAC Address", + "type_name": "MAC Address", + "_source": "endpoint" + } + }, + { + "type_id": { + "type": "integer_t", + "enum": { + "3": { + "description": "A 
laptop computer.", + "caption": "Laptop" + }, + "6": { + "description": "A virtual machine.", + "caption": "Virtual" + }, + "0": { + "description": "The type is unknown.", + "caption": "Unknown" + }, + "1": { + "description": "A server.", + "caption": "Server" + }, + "2": { + "description": "A desktop computer.", + "caption": "Desktop" + }, + "99": { + "description": "The type is not mapped. See the type attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "description": "A tablet computer.", + "caption": "Tablet" + }, + "5": { + "description": "A mobile phone.", + "caption": "Mobile" + }, + "7": { + "description": "An IOT (Internet of Things) device.", + "caption": "IOT" + }, + "8": { + "description": "A web browser.", + "caption": "Browser" + }, + "9": { + "description": "A networking firewall.", + "caption": "Firewall" + }, + "10": { + "description": "A networking switch.", + "caption": "Switch" + }, + "11": { + "description": "A networking hub.", + "caption": "Hub" + }, + "12": { + "description": "A networking router.", + "caption": "Router" + }, + "14": { + "description": "An intrusion prevention system.", + "caption": "IPS" + }, + "15": { + "description": "A Load Balancer device.", + "caption": "Load Balancer" + }, + "13": { + "description": "An intrusion detection system.", + "caption": "IDS" + } + }, + "description": "The network endpoint type ID.", + "requirement": "recommended", + "caption": "Type ID", + "type_name": "Integer", + "sibling": "type", + "_source": "network_endpoint" + } + }, + { + "agent_list": { + "type": "object_t", + "description": "A list of agent objects associated with a device, endpoint, or resource.", + "is_array": true, + "requirement": "optional", + "caption": "Agent List", + "object_name": "Agent", + "object_type": "agent", + "_source": "endpoint" + } + }, + { + "autonomous_system": { + "type": "object_t", + "description": "The Autonomous System details associated with an IP address.", + 
"requirement": "optional", + "caption": "Autonomous System", + "object_name": "Autonomous System", + "object_type": "autonomous_system", + "_source": "network_endpoint" + } + }, + { + "container": { + "profile": "container", + "type": "object_t", + "description": "The information describing an instance of a container. A container is a prepackaged, portable system image that runs isolated on an existing system using a container runtime like containerd.", + "group": "context", + "requirement": "recommended", + "caption": "Container", + "object_name": "Container", + "object_type": "container", + "_source": "endpoint" + } + }, + { + "hw_info": { + "type": "object_t", + "description": "The endpoint hardware information.", + "requirement": "optional", + "caption": "Hardware Info", + "object_name": "Device Hardware Info", + "object_type": "device_hw_info", + "_source": "endpoint" + } + }, + { + "instance_uid": { + "type": "string_t", + "description": "The unique identifier of a VM instance.", + "requirement": "recommended", + "caption": "Instance ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "interface_name": { + "type": "string_t", + "description": "The name of the network interface (e.g. eth2).", + "requirement": "recommended", + "caption": "Network Interface Name", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "interface_uid": { + "type": "string_t", + "description": "The unique identifier of the network interface.", + "requirement": "recommended", + "caption": "Network Interface ID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "intermediate_ips": { + "type": "ip_t", + "description": "The intermediate IP Addresses. 
For example, the IP addresses in the HTTP X-Forwarded-For header.", + "is_array": true, + "requirement": "optional", + "caption": "Intermediate IP Addresses", + "type_name": "IP Address", + "_source": "network_endpoint" + } + }, + { + "isp": { + "type": "string_t", + "description": "The name of the Internet Service Provider (ISP).", + "requirement": "optional", + "caption": "ISP Name", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "isp_org": { + "type": "string_t", + "description": "The organization name of the Internet Service Provider (ISP). This represents the parent organization or company that owns/operates the ISP. For example, Comcast Corporation would be the ISP org for Xfinity internet service. This attribute helps identify the ultimate provider when ISPs operate under different brand names.", + "requirement": "optional", + "caption": "ISP Org", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "namespace_pid": { + "profile": "container", + "type": "integer_t", + "description": "If running under a process namespace (such as in a container), the process identifier within that process namespace.", + "group": "context", + "requirement": "recommended", + "caption": "Namespace PID", + "type_name": "Integer", + "_source": "endpoint" + } + }, + { + "network_scope": { + "type": "string_t", + "description": "Indicates whether the endpoint resides inside the customer\u2019s network, outside on the Internet, or if its location relative to the customer\u2019s network cannot be determined. 
The value is normalized to the caption of the network_scope_id.", + "requirement": "optional", + "caption": "Network Scope", + "type_name": "String", + "_source": "network_endpoint", + "_sibling_of": "network_scope_id" + } + }, + { + "network_scope_id": { + "type": "integer_t", + "enum": { + "0": { + "description": "Unknown whether this endpoint resides within the customer\u2019s network.", + "caption": "Unknown" + }, + "1": { + "description": "The endpoint resides inside the customer\u2019s network.", + "caption": "Internal" + }, + "2": { + "description": "The endpoint is on the Internet or otherwise external to the customer\u2019s network.", + "caption": "External" + }, + "99": { + "description": "The network scope is not mapped. See the network_scope attribute, which contains a data source specific value.", + "caption": "Other" + } + }, + "description": "The normalized identifier of the endpoint\u2019s network scope. The normalized network scope identifier indicates whether the endpoint resides inside the customer\u2019s network, outside on the Internet, or if its location relative to the customer\u2019s network cannot be determined.", + "requirement": "optional", + "caption": "Network Scope ID", + "type_name": "Integer", + "sibling": "network_scope", + "_source": "network_endpoint" + } + }, + { + "proxy_endpoint": { + "type": "object_t", + "description": "The network proxy information pertaining to a specific endpoint. 
This can be used to describe information pertaining to network address translation (NAT).", + "requirement": "optional", + "caption": "Proxy Endpoint", + "object_name": "Network Proxy Endpoint", + "object_type": "network_proxy", + "_source": "network_endpoint" + } + }, + { + "subnet_uid": { + "type": "string_t", + "description": "The unique identifier of a virtual subnet.", + "requirement": "optional", + "caption": "Subnet UID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "svc_name": { + "type": "string_t", + "description": "The service name in service-to-service connections. For example, AWS VPC logs the pkt-src-aws-service and pkt-dst-aws-service fields identify the connection is coming from or going to an AWS service.", + "requirement": "recommended", + "caption": "Service Name", + "type_name": "String", + "_source": "network_endpoint" + } + }, + { + "vlan_uid": { + "type": "string_t", + "description": "The Virtual LAN identifier.", + "requirement": "optional", + "caption": "VLAN", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "vpc_uid": { + "type": "string_t", + "description": "The unique identifier of the Virtual Private Cloud (VPC).", + "requirement": "optional", + "caption": "VPC UID", + "type_name": "String", + "_source": "endpoint" + } + }, + { + "zone": { + "type": "string_t", + "description": "The network zone or LAN segment.", + "requirement": "optional", + "caption": "Network Zone", + "type_name": "String", + "_source": "endpoint" + } + } + ], + "name": "network_proxy", + "description": "The network proxy endpoint object describes a proxy server, which acts as an intermediary between a client requesting a resource and the server providing that resource.", + "extends": "network_endpoint", + "constraints": { + "at_least_one": [ + "ip", + "uid", + "name", + "hostname", + "svc_name", + "instance_uid", + "interface_uid", + "interface_name", + "domain" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 
Ontology d3f:ProxyServer", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:ProxyServer/" + } + ], + "profiles": [ + "container" + ], + "caption": "Network Proxy Endpoint", + "observable": 20 +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/process.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/process.json new file mode 100644 index 00000000..148e47e1 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/process.json @@ -0,0 +1,446 @@ +{ + "attributes": [ + { + "name": { + "type": "process_name_t", + "description": "The friendly name of the process, for example: Notepad++.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "process_entity" + } + }, + { + "pid": { + "type": "integer_t", + "description": "The process identifier, as reported by the operating system. Process ID (PID) is a number used by the operating system to uniquely identify an active process.", + "requirement": "recommended", + "caption": "Process ID", + "type_name": "Integer", + "observable": 15, + "_source": "process_entity" + } + }, + { + "session": { + "type": "object_t", + "description": "The user session under which this process is running.", + "requirement": "optional", + "caption": "Session", + "object_name": "Session", + "object_type": "session", + "_source": "process" + } + }, + { + "file": { + "type": "object_t", + "description": "The process file object.", + "requirement": "recommended", + "caption": "File", + "object_name": "File", + "object_type": "file", + "_source": "process" + } + }, + { + "user": { + "type": "object_t", + "description": "The user under which this process is running.", + "requirement": "recommended", + "caption": "User", + "object_name": "User", + "object_type": "user", + "_source": "process" + } + }, + { + "path": { + "type": "string_t", + "description": "The process file path.", + "requirement": "optional", + "caption": "Path", + "type_name": "String", + "_source": 
"process_entity" + } + }, + { + "group": { + "profile": "linux/linux_users", + "type": "object_t", + "description": "The group under which this process is running.", + "requirement": "recommended", + "caption": "Group", + "object_name": "Group", + "object_type": "group", + "_source": "linux/process", + "_source_patched": "process" + } + }, + { + "tid": { + "type": "integer_t", + "description": "The identifier of the thread associated with the event, as returned by the operating system.", + "requirement": "optional", + "caption": "Thread ID", + "type_name": "Integer", + "@deprecated": { + "message": "tid is deprecated in favor of ptid. ptid has type long_t which can accommodate the thread identifiers returned by all platforms (e.g. 64-bit on MacOS).", + "since": "1.6.0" + }, + "_source": "process" + } + }, + { + "uid": { + "type": "string_t", + "description": "A unique identifier for this process assigned by the producer (tool). Facilitates correlation of a process event with other events for that process.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "observable": 39, + "_source": "process_entity" + } + }, + { + "loaded_modules": { + "type": "string_t", + "description": "The list of loaded module names.", + "is_array": true, + "requirement": "optional", + "caption": "Loaded Modules", + "type_name": "String", + "_source": "process" + } + }, + { + "ancestry": { + "type": "object_t", + "description": "An array of Process Entities describing the extended parentage of this process object. Direct parent information should be expressed through the parent_process attribute. The first array element is the direct parent of this process object. Subsequent list elements go up the process parentage hierarchy. That is, the array is sorted from newest to oldest process. 
It is recommended to only populate this field for the top-level process object.", + "is_array": true, + "references": [ + { + "description": "Guidance on Representing Process Parentage", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/representing-process-parentage.md" + } + ], + "requirement": "optional", + "caption": "Ancestry", + "object_name": "Process Entity", + "object_type": "process_entity", + "_source": "process" + } + }, + { + "cmd_line": { + "type": "string_t", + "description": "The full command line used to launch an application, service, process, or job. For example: ssh user@10.0.0.10. If the command line is unavailable or missing, the empty string '' is to be used.", + "requirement": "recommended", + "caption": "Command Line", + "type_name": "String", + "observable": 13, + "_source": "process_entity" + } + }, + { + "container": { + "profile": "container", + "type": "object_t", + "description": "The information describing an instance of a container. A container is a prepackaged, portable system image that runs isolated on an existing system using a container runtime like containerd.", + "group": "context", + "requirement": "recommended", + "caption": "Container", + "object_name": "Container", + "object_type": "container", + "_source": "process" + } + }, + { + "cpid": { + "type": "uuid_t", + "description": "A unique process identifier that can be assigned deterministically by multiple system data producers.", + "source": "cpid", + "references": [ + { + "description": "OCSF Common Process Identifier (CPID) Specification", + "url": "https://github.com/ocsf/common-process-id" + } + ], + "requirement": "recommended", + "caption": "Common Process Identifier", + "type_name": "UUID", + "_source": "process_entity" + } + }, + { + "created_time": { + "type": "timestamp_t", + "description": "The time when the process was created/started.", + "requirement": "recommended", + "caption": "Created Time", + "type_name": "Timestamp", + "_source": 
"process_entity" + } + }, + { + "environment_variables": { + "type": "object_t", + "description": "Environment variables associated with the process.", + "is_array": true, + "requirement": "optional", + "caption": "Environment Variables", + "object_name": "Environment Variable", + "object_type": "environment_variable", + "_source": "process" + } + }, + { + "integrity": { + "type": "string_t", + "description": "The process integrity level, normalized to the caption of the integrity_id value. In the case of 'Other', it is defined by the event source (Windows only).", + "requirement": "optional", + "caption": "Integrity", + "type_name": "String", + "_source": "process", + "_sibling_of": "integrity_id" + } + }, + { + "integrity_id": { + "type": "integer_t", + "enum": { + "3": { + "caption": "Medium" + }, + "6": { + "caption": "Protected" + }, + "0": { + "description": "The integrity level is unknown.", + "caption": "Unknown" + }, + "1": { + "caption": "Untrusted" + }, + "2": { + "caption": "Low" + }, + "99": { + "description": "The integrity level is not mapped. See the integrity attribute, which contains a data source specific value.", + "caption": "Other" + }, + "4": { + "caption": "High" + }, + "5": { + "caption": "System" + } + }, + "description": "The normalized identifier of the process integrity level (Windows only).", + "requirement": "optional", + "caption": "Integrity Level", + "type_name": "Integer", + "sibling": "integrity", + "_source": "process" + } + }, + { + "lineage": { + "type": "file_path_t", + "description": "The lineage of the process, represented by a list of paths for each ancestor process. 
For example: ['/usr/sbin/sshd', '/usr/bin/bash', '/usr/bin/whoami'].", + "is_array": true, + "requirement": "optional", + "caption": "Lineage", + "type_name": "File Path", + "@deprecated": { + "message": "Use the ancestry attribute.", + "since": "1.4.0" + }, + "_source": "process" + } + }, + { + "namespace_pid": { + "profile": "container", + "type": "integer_t", + "description": "If running under a process namespace (such as in a container), the process identifier within that process namespace.", + "group": "context", + "requirement": "recommended", + "caption": "Namespace PID", + "type_name": "Integer", + "_source": "process" + } + }, + { + "parent_process": { + "type": "object_t", + "description": "The parent process of this process object. It is recommended to only populate this field for the top-level process object, to prevent deep nesting. Additional ancestry information can be supplied in the ancestry attribute.", + "references": [ + { + "description": "Guidance on Representing Process Parentage", + "url": "https://github.com/ocsf/ocsf-docs/blob/main/articles/representing-process-parentage.md" + } + ], + "requirement": "recommended", + "caption": "Parent Process", + "object_name": "Process", + "object_type": "process", + "_source": "process" + } + }, + { + "ptid": { + "type": "long_t", + "description": "The identifier of the process thread associated with the event, as returned by the operating system.", + "requirement": "optional", + "caption": "Process Thread ID", + "type_name": "Long", + "_source": "process" + } + }, + { + "sandbox": { + "type": "string_t", + "description": "The name of the containment jail (i.e., sandbox). 
For example, hardened_ps, high_security_ps, oracle_ps, netsvcs_ps, or default_ps.", + "requirement": "optional", + "caption": "Sandbox", + "type_name": "String", + "_source": "process" + } + }, + { + "terminated_time": { + "type": "timestamp_t", + "description": "The time when the process was terminated.", + "requirement": "optional", + "caption": "Terminated Time", + "type_name": "Timestamp", + "_source": "process" + } + }, + { + "working_directory": { + "type": "string_t", + "description": "The working directory of a process.", + "requirement": "optional", + "caption": "Working Directory", + "type_name": "String", + "_source": "process" + } + }, + { + "xattributes": { + "type": "object_t", + "description": "An unordered collection of zero or more name/value pairs that represent a process extended attribute.", + "requirement": "optional", + "caption": "Extended Attributes", + "object_name": "Object", + "object_type": "object", + "_source": "process" + } + }, + { + "auid": { + "profile": "linux/linux_users", + "type": "integer_t", + "description": "The audit user assigned at login by the audit subsystem.", + "extension": "linux", + "requirement": "optional", + "extension_id": 1, + "caption": "Audit User ID", + "type_name": "Integer", + "_source": "linux/process", + "_source_patched": "process" + } + }, + { + "egid": { + "profile": "linux/linux_users", + "type": "integer_t", + "description": "The effective group under which this process is running.", + "extension": "linux", + "requirement": "optional", + "extension_id": 1, + "caption": "Effective Group ID", + "type_name": "Integer", + "_source": "linux/process", + "_source_patched": "process" + } + }, + { + "euid": { + "profile": "linux/linux_users", + "type": "integer_t", + "description": "The effective user under which this process is running.", + "extension": "linux", + "requirement": "optional", + "extension_id": 1, + "caption": "Effective User ID", + "type_name": "Integer", + "_source": "linux/process", + 
"_source_patched": "process" + } + }, + { + "hosted_services": { + "type": "object_t", + "description": "The Windows services that this process is hosting.", + "extension": "win", + "is_array": true, + "requirement": "optional", + "extension_id": 2, + "caption": "Hosted Services", + "object_name": "Windows Service", + "object_type": "win/win_service", + "_source": "win/process", + "_source_patched": "process" + } + }, + { + "terminated_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the process was terminated.", + "requirement": "optional", + "caption": "Terminated Time", + "type_name": "Datetime", + "_source": "process" + } + }, + { + "created_time_dt": { + "profile": "datetime", + "type": "datetime_t", + "description": "The time when the process was created/started.", + "requirement": "optional", + "caption": "Created Time", + "type_name": "Datetime", + "_source": "process_entity" + } + } + ], + "name": "process", + "description": "The Process object describes a running instance of a launched program.", + "extends": "process_entity", + "constraints": { + "at_least_one": [ + "pid", + "uid", + "cpid" + ] + }, + "references": [ + { + "description": "D3FEND\u2122 Ontology d3f:Process", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:Process/" + } + ], + "profiles": [ + "container", + "linux/linux_users", + "data_classification", + "datetime" + ], + "caption": "Process", + "observable": 25 +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/product.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/product.json new file mode 100644 index 00000000..39fe2403 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/product.json @@ -0,0 +1,139 @@ +{ + "attributes": [ + { + "name": { + "type": "string_t", + "description": "The name of the product.", + "requirement": "recommended", + "caption": "Name", + "type_name": "String", + "_source": "product" + } + }, + { + "version": { + "type": 
"string_t", + "description": "The version of the product, as defined by the event source. For example: 2013.1.3-beta.", + "requirement": "recommended", + "caption": "Version", + "type_name": "String", + "_source": "product" + } + }, + { + "path": { + "type": "string_t", + "description": "The installation path of the product.", + "requirement": "optional", + "caption": "Path", + "type_name": "String", + "_source": "product" + } + }, + { + "uid": { + "type": "string_t", + "description": "The unique identifier of the product.", + "requirement": "recommended", + "caption": "Unique ID", + "type_name": "String", + "_source": "product" + } + }, + { + "feature": { + "type": "object_t", + "description": "The feature that reported the event.", + "requirement": "optional", + "caption": "Feature", + "object_name": "Feature", + "object_type": "feature", + "_source": "product" + } + }, + { + "lang": { + "type": "string_t", + "description": "The two letter lower case language codes, as defined by ISO 639-1. 
For example: en (English), de (German), or fr (French).", + "requirement": "optional", + "caption": "Language", + "type_name": "String", + "_source": "product" + } + }, + { + "cpe_name": { + "type": "string_t", + "description": "The Common Platform Enumeration (CPE) name as described by (NIST) For example: cpe:/a:apple:safari:16.2.", + "requirement": "optional", + "caption": "The product CPE identifier", + "type_name": "String", + "_source": "product" + } + }, + { + "data_classification": { + "profile": "data_classification", + "type": "object_t", + "description": "The Data Classification object includes information about data classification levels and data category types.", + "group": "context", + "requirement": "recommended", + "caption": "Data Classification", + "object_name": "Data Classification", + "object_type": "data_classification", + "@deprecated": { + "message": "Use the attribute data_classifications instead", + "since": "1.4.0" + }, + "_source": "product" + } + }, + { + "data_classifications": { + "profile": "data_classification", + "type": "object_t", + "description": "A list of Data Classification objects, that include information about data classification levels and data category types, identified by a classifier.", + "group": "context", + "is_array": true, + "requirement": "recommended", + "caption": "Data Classification", + "object_name": "Data Classification", + "object_type": "data_classification", + "_source": "product" + } + }, + { + "url_string": { + "type": "url_t", + "description": "The URL pointing towards the product.", + "requirement": "optional", + "caption": "URL String", + "type_name": "URL String", + "_source": "product" + } + }, + { + "vendor_name": { + "type": "string_t", + "description": "The name of the vendor of the product.", + "requirement": "recommended", + "caption": "Vendor Name", + "type_name": "String", + "_source": "product" + } + } + ], + "name": "product", + "description": "The Product object describes characteristics 
of a software product.", + "extends": "_entity", + "constraints": { + "at_least_one": [ + "name", + "uid" + ] + }, + "profiles": [ + "data_classification" + ], + "caption": "Product" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/remediation.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/remediation.json new file mode 100644 index 00000000..06677c16 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/remediation.json @@ -0,0 +1,74 @@ +{ + "attributes": [ + { + "desc": { + "type": "string_t", + "description": "The description of the remediation strategy.", + "requirement": "required", + "caption": "Description", + "type_name": "String", + "_source": "remediation" + } + }, + { + "references": { + "type": "string_t", + "description": "A list of supporting URL/s, references that help describe the remediation strategy.", + "is_array": true, + "requirement": "optional", + "caption": "References", + "type_name": "String", + "_source": "remediation" + } + }, + { + "cis_controls": { + "type": "object_t", + "description": "An array of Center for Internet Security (CIS) Controls that can be optionally mapped to provide additional remediation details.", + "is_array": true, + "references": [ + { + "description": "Center For Internet Security Controls", + "url": "https://www.cisecurity.org/controls/" + } + ], + "requirement": "optional", + "caption": "CIS Controls", + "object_name": "CIS Control", + "object_type": "cis_control", + "_source": "remediation" + } + }, + { + "kb_article_list": { + "type": "object_t", + "description": "A list of KB articles or patches related to an endpoint. 
A KB Article contains metadata that describes the patch or an update.", + "is_array": true, + "requirement": "optional", + "caption": "Knowledgebase Articles", + "object_name": "KB Article", + "object_type": "kb_article", + "_source": "remediation" + } + }, + { + "kb_articles": { + "type": "string_t", + "description": "The KB article/s related to the entity. A KB Article contains metadata that describes the patch or an update.", + "is_array": true, + "requirement": "optional", + "caption": "Knowledgebase Articles", + "type_name": "String", + "@deprecated": { + "message": "Use the kb_article_list attribute instead.", + "since": "1.1.0" + }, + "_source": "remediation" + } + } + ], + "name": "remediation", + "description": "The Remediation object describes the recommended remediation steps to address identified issue(s).", + "extends": "object", + "caption": "Remediation" +} diff --git a/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/url.json b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/url.json new file mode 100644 index 00000000..370a8de9 --- /dev/null +++ b/crates/openshell-ocsf/schemas/ocsf/v1.7.0/objects/url.json @@ -0,0 +1,404 @@ +{ + "attributes": [ + { + "port": { + "type": "port_t", + "description": "The URL port. For example: 80.", + "requirement": "recommended", + "caption": "Port", + "type_name": "Port", + "_source": "url" + } + }, + { + "scheme": { + "type": "string_t", + "description": "The scheme portion of the URL. For example: http, https, ftp, or sftp.", + "requirement": "recommended", + "caption": "Scheme", + "type_name": "String", + "_source": "url" + } + }, + { + "path": { + "type": "string_t", + "description": "The URL path as extracted from the URL. For example: /download/trouble from www.example.com/download/trouble.", + "requirement": "recommended", + "caption": "Path", + "type_name": "String", + "_source": "url" + } + }, + { + "domain": { + "type": "string_t", + "description": "The domain portion of the URL. 
For example: example.com in https://sub.example.com.", + "requirement": "optional", + "caption": "Domain", + "type_name": "String", + "_source": "url" + } + }, + { + "hostname": { + "type": "hostname_t", + "description": "The URL host as extracted from the URL. For example: www.example.com from www.example.com/download/trouble.", + "requirement": "recommended", + "caption": "Hostname", + "type_name": "Hostname", + "_source": "url" + } + }, + { + "query_string": { + "type": "string_t", + "description": "The query portion of the URL. For example: the query portion of the URL http://www.example.com/search?q=bad&sort=date is q=bad&sort=date.", + "requirement": "recommended", + "caption": "HTTP Query String", + "type_name": "String", + "_source": "url" + } + }, + { + "categories": { + "type": "string_t", + "description": "The Website categorization names, as defined by category_ids enum values.", + "is_array": true, + "requirement": "optional", + "caption": "Website Categorization", + "type_name": "String", + "_source": "url", + "_sibling_of": "category_ids" + } + }, + { + "category_ids": { + "type": "integer_t", + "enum": { + "5": { + "caption": "Intimate Apparel/Swimsuit" + }, + "83": { + "caption": "Peer-to-Peer (P2P)" + }, + "35": { + "caption": "Military" + }, + "6": { + "caption": "Nudity" + }, + "93": { + "caption": "Sexual Expression" + }, + "65": { + "caption": "Sports/Recreation" + }, + "64": { + "caption": "Restaurants/Dining/Food" + }, + "11": { + "caption": "Gambling" + }, + "49": { + "caption": "Reference" + }, + "32": { + "caption": "Brokerage/Trading" + }, + "51": { + "caption": "Chat (IM)/SMS" + }, + "27": { + "caption": "Education" + }, + "97": { + "caption": "Content Servers" + }, + "66": { + "caption": "Travel" + }, + "50": { + "caption": "Mixed Content/Potentially Adult" + }, + "96": { + "caption": "Non-Viewable/Infrastructure" + }, + "86": { + "caption": "Proxy Avoidance" + }, + "98": { + "caption": "Placeholders" + }, + "52": { + "caption": 
"Email" + }, + "25": { + "caption": "Controlled Substances" + }, + "38": { + "caption": "Technology/Internet" + }, + "85": { + "caption": "Office/Business Applications" + }, + "110": { + "caption": "Internet Telephony" + }, + "23": { + "caption": "Alcohol" + }, + "29": { + "caption": "Charitable Organizations" + }, + "118": { + "caption": "Piracy/Copyright Concerns" + }, + "53": { + "caption": "Newsgroups/Forums" + }, + "56": { + "caption": "File Storage/Sharing" + }, + "3": { + "caption": "Pornography" + }, + "36": { + "caption": "Political/Social Advocacy" + }, + "17": { + "caption": "Hacking" + }, + "9": { + "caption": "Scam/Questionable/Illegal" + }, + "63": { + "caption": "Personal Sites" + }, + "89": { + "caption": "Web Hosting" + }, + "112": { + "caption": "Media Sharing" + }, + "121": { + "caption": "Marijuana" + }, + "44": { + "caption": "Malicious Outbound Data/Botnets" + }, + "107": { + "caption": "Informational" + }, + "68": { + "caption": "Humor/Jokes" + }, + "33": { + "caption": "Games" + }, + "92": { + "caption": "Suspicious" + }, + "114": { + "caption": "TV/Video Streams" + }, + "106": { + "caption": "E-Card/Invitations" + }, + "45": { + "caption": "Job Search/Careers" + }, + "108": { + "caption": "Computer/Information Security" + }, + "55": { + "caption": "Social Networking" + }, + "84": { + "caption": "Audio/Video Clips" + }, + "101": { + "caption": "Spam" + }, + "111": { + "caption": "Online Meetings" + }, + "87": { + "caption": "For Kids" + }, + "4": { + "caption": "Sex Education" + }, + "0": { + "description": "The Domain/URL category is unknown.", + "caption": "Unknown" + }, + "15": { + "caption": "Weapons" + }, + "88": { + "caption": "Web Ads/Analytics" + }, + "20": { + "caption": "Entertainment" + }, + "40": { + "caption": "Search Engines/Portals" + }, + "18": { + "caption": "Phishing" + }, + "71": { + "caption": "Software Downloads" + }, + "61": { + "caption": "Society/Daily Living" + }, + "30": { + "caption": "Art/Culture" + }, + "22": { + 
"caption": "Alternative Spirituality/Belief" + }, + "47": { + "caption": "Personals/Dating" + }, + "14": { + "caption": "Violence/Hate/Racism" + }, + "59": { + "caption": "Auctions" + }, + "58": { + "caption": "Shopping" + }, + "60": { + "caption": "Real Estate" + }, + "95": { + "caption": "Translation" + }, + "113": { + "caption": "Radio/Audio Streams" + }, + "16": { + "caption": "Abortion" + }, + "54": { + "caption": "Religion" + }, + "46": { + "caption": "News/Media" + }, + "67": { + "caption": "Vehicles" + }, + "103": { + "caption": "Dynamic DNS Host" + }, + "24": { + "caption": "Tobacco" + }, + "57": { + "caption": "Remote Access Tools" + }, + "109": { + "caption": "Internet Connected Devices" + }, + "102": { + "caption": "Potentially Unwanted Software" + }, + "7": { + "caption": "Extreme" + }, + "99": { + "description": "The Domain/URL category is not mapped. See the categories attribute, which contains a data source specific value.", + "caption": "Other" + }, + "34": { + "caption": "Government/Legal" + }, + "26": { + "caption": "Child Pornography" + }, + "43": { + "caption": "Malicious Sources/Malnets" + }, + "90": { + "caption": "Uncategorized" + }, + "37": { + "caption": "Health" + }, + "31": { + "caption": "Financial Services" + }, + "21": { + "caption": "Business/Economy" + }, + "1": { + "caption": "Adult/Mature Content" + } + }, + "description": "The Website categorization identifiers.", + "is_array": true, + "requirement": "recommended", + "caption": "Website Categorization IDs", + "type_name": "Integer", + "sibling": "categories", + "_source": "url" + } + }, + { + "resource_type": { + "type": "string_t", + "description": "The context in which a resource was retrieved in a web request.", + "requirement": "optional", + "caption": "Resource Type", + "type_name": "String", + "_source": "url" + } + }, + { + "subdomain": { + "type": "string_t", + "description": "The subdomain portion of the URL. 
For example: sub in https://sub.example.com or sub2.sub1 in https://sub2.sub1.example.com.", + "requirement": "optional", + "caption": "Subdomain", + "type_name": "String", + "_source": "url" + } + }, + { + "url_string": { + "type": "url_t", + "description": "The URL string. See RFC 1738. For example: http://www.example.com/download/trouble.exe. Note: The URL path should not populate the URL string.", + "requirement": "recommended", + "caption": "URL String", + "type_name": "URL String", + "_source": "url" + } + } + ], + "name": "url", + "description": "The Uniform Resource Locator (URL) object describes the characteristics of a URL.", + "extends": "object", + "constraints": { + "at_least_one": [ + "url_string", + "path" + ] + }, + "references": [ + { + "description": "Defined in RFC 1738", + "url": "https://datatracker.ietf.org/doc/html/rfc1738" + }, + { + "description": "D3FEND\u2122 Ontology d3f:URL", + "url": "https://d3fend.mitre.org/dao/artifact/d3f:URL/" + } + ], + "caption": "Uniform Resource Locator", + "observable": 23 +} diff --git a/crates/openshell-ocsf/src/builders/base.rs b/crates/openshell-ocsf/src/builders/base.rs new file mode 100644 index 00000000..791e5a09 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/base.rs @@ -0,0 +1,114 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Base Event [0]. + +use crate::builders::SandboxContext; +use crate::enums::{SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{BaseEvent, OcsfEvent}; + +/// Builder for Base Event [0] — events without a specific OCSF class. 
+pub struct BaseEventBuilder<'a> { + ctx: &'a SandboxContext, + severity: SeverityId, + status: Option<StatusId>, + message: Option<String>, + activity_name: Option<String>, + unmapped: serde_json::Map<String, serde_json::Value>, +} + +impl<'a> BaseEventBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + severity: SeverityId::Informational, + status: None, + message: None, + activity_name: None, + unmapped: serde_json::Map::new(), + } + } + + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + #[must_use] + pub fn activity_name(mut self, name: impl Into<String>) -> Self { + self.activity_name = Some(name.into()); + self + } + + /// Add an unmapped field. + #[must_use] + pub fn unmapped(mut self, key: &str, value: impl Into<serde_json::Value>) -> Self { + self.unmapped.insert(key.to_string(), value.into()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity_name.as_deref().unwrap_or("Other"); + let mut base = BaseEventData::new( + 0, + "Base Event", + 0, + "Uncategorized", + 99, + activity_name, + self.severity, + self.ctx.metadata(&["container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + if !self.unmapped.is_empty() { + base.unmapped = Some(serde_json::Value::Object(self.unmapped)); + } + + OcsfEvent::Base(BaseEvent { base }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_base_event_builder() { + let ctx = test_sandbox_context(); + let event = BaseEventBuilder::new(&ctx) + .severity(SeverityId::Informational) + .status(StatusId::Success)
.activity_name("Network Namespace Created") + .message("Network namespace created") + .unmapped("namespace", serde_json::json!("openshell-sandbox-abc123")) + .unmapped("host_ip", serde_json::json!("10.42.0.1")) + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 0); + assert_eq!(json["activity_name"], "Network Namespace Created"); + assert_eq!(json["message"], "Network namespace created"); + assert_eq!(json["unmapped"]["namespace"], "openshell-sandbox-abc123"); + } +} diff --git a/crates/openshell-ocsf/src/builders/config.rs b/crates/openshell-ocsf/src/builders/config.rs new file mode 100644 index 00000000..8ff9cae2 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/config.rs @@ -0,0 +1,142 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Device Config State Change [5019] events. + +use crate::builders::SandboxContext; +use crate::enums::{SecurityLevelId, SeverityId, StateId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{DeviceConfigStateChangeEvent, OcsfEvent}; + +/// Builder for Device Config State Change [5019] events. 
+pub struct ConfigStateChangeBuilder<'a> { + ctx: &'a SandboxContext, + severity: SeverityId, + status: Option<StatusId>, + state_id: Option<StateId>, + state_label: Option<String>, + security_level: Option<SecurityLevelId>, + prev_security_level: Option<SecurityLevelId>, + message: Option<String>, + unmapped: serde_json::Map<String, serde_json::Value>, +} + +impl<'a> ConfigStateChangeBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + severity: SeverityId::Informational, + status: None, + state_id: None, + state_label: None, + security_level: None, + prev_security_level: None, + message: None, + unmapped: serde_json::Map::new(), + } + } + + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + + /// Set state with a custom label (OCSF `state_id` + display label). + #[must_use] + pub fn state(mut self, id: StateId, label: &str) -> Self { + self.state_id = Some(id); + self.state_label = Some(label.to_string()); + self + } + + #[must_use] + pub fn security_level(mut self, id: SecurityLevelId) -> Self { + self.security_level = Some(id); + self + } + #[must_use] + pub fn prev_security_level(mut self, id: SecurityLevelId) -> Self { + self.prev_security_level = Some(id); + self + } + + /// Add an unmapped field.
+ #[must_use] + pub fn unmapped(mut self, key: &str, value: impl Into<serde_json::Value>) -> Self { + self.unmapped.insert(key.to_string(), value.into()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let mut base = BaseEventData::new( + 5019, + "Device Config State Change", + 5, + "Discovery", + 1, + "Log", // activity_id=1 (Log) + self.severity, + self.ctx + .metadata(&["security_control", "container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + if !self.unmapped.is_empty() { + base.unmapped = Some(serde_json::Value::Object(self.unmapped)); + } + + OcsfEvent::DeviceConfigStateChange(DeviceConfigStateChangeEvent { + base, + state: self.state_id, + state_custom_label: self.state_label, + security_level: self.security_level, + prev_security_level: self.prev_security_level, + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_config_state_change_builder() { + let ctx = test_sandbox_context(); + let event = ConfigStateChangeBuilder::new(&ctx) + .state(StateId::Enabled, "loaded") + .security_level(SecurityLevelId::Secure) + .prev_security_level(SecurityLevelId::Unknown) + .severity(SeverityId::Informational) + .status(StatusId::Success) + .unmapped("policy_version", serde_json::json!("v3")) + .unmapped("policy_hash", serde_json::json!("sha256:abc123")) + .message("Policy reloaded successfully") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 5019); + assert_eq!(json["state_id"], 2); + assert_eq!(json["security_level"], "Secure"); + assert_eq!(json["unmapped"]["policy_version"], "v3"); + } +} diff --git a/crates/openshell-ocsf/src/builders/finding.rs b/crates/openshell-ocsf/src/builders/finding.rs new file mode 100644 index 00000000..10d770f4 --- /dev/null +++
b/crates/openshell-ocsf/src/builders/finding.rs @@ -0,0 +1,217 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Detection Finding [2004] events. + +use crate::builders::SandboxContext; +use crate::enums::{ActionId, ActivityId, ConfidenceId, DispositionId, RiskLevelId, SeverityId}; +use crate::events::base_event::BaseEventData; +use crate::events::{DetectionFindingEvent, OcsfEvent}; +use crate::objects::{Attack, Evidence, FindingInfo, Remediation}; + +/// Builder for Detection Finding [2004] events. +pub struct DetectionFindingBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + severity: SeverityId, + action: Option<ActionId>, + disposition: Option<DispositionId>, + finding_info: Option<FindingInfo>, + evidences: Vec<Evidence>, + attacks: Vec<Attack>, + remediation: Option<Remediation>, + is_alert: Option<bool>, + confidence: Option<ConfidenceId>, + risk_level: Option<RiskLevelId>, + message: Option<String>, + log_source: Option<String>, +} + +impl<'a> DetectionFindingBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Open, + severity: SeverityId::Medium, + action: None, + disposition: None, + finding_info: None, + evidences: Vec::new(), + attacks: Vec::new(), + remediation: None, + is_alert: None, + confidence: None, + risk_level: None, + message: None, + log_source: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn action(mut self, id: ActionId) -> Self { + self.action = Some(id); + self + } + #[must_use] + pub fn disposition(mut self, id: DispositionId) -> Self { + self.disposition = Some(id); + self + } + #[must_use] + pub fn finding_info(mut self, info: FindingInfo) -> Self { + self.finding_info = Some(info); + self + } + #[must_use] + pub fn is_alert(mut self, alert: bool) -> Self {
self.is_alert = Some(alert); + self + } + #[must_use] + pub fn confidence(mut self, id: ConfidenceId) -> Self { + self.confidence = Some(id); + self + } + #[must_use] + pub fn risk_level(mut self, id: RiskLevelId) -> Self { + self.risk_level = Some(id); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + #[must_use] + pub fn log_source(mut self, source: impl Into<String>) -> Self { + self.log_source = Some(source.into()); + self + } + + /// Add a remediation description. + #[must_use] + pub fn remediation(mut self, desc: &str) -> Self { + self.remediation = Some(Remediation::new(desc)); + self + } + + /// Add an evidence key-value pair. + #[must_use] + pub fn evidence(mut self, key: &str, value: &str) -> Self { + self.evidences.push(Evidence::from_pairs(&[(key, value)])); + self + } + + /// Add evidence from multiple pairs. + #[must_use] + pub fn evidence_pairs(mut self, pairs: &[(&str, &str)]) -> Self { + self.evidences.push(Evidence::from_pairs(pairs)); + self + } + + /// Add a MITRE ATT&CK mapping.
+ #[must_use] + pub fn attack(mut self, attack: Attack) -> Self { + self.attacks.push(attack); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity.finding_label().to_string(); + let mut metadata = self + .ctx + .metadata(&["security_control", "container", "host"]); + if let Some(source) = self.log_source { + metadata.log_source = Some(source); + } + + let mut base = BaseEventData::new( + 2004, + "Detection Finding", + 2, + "Findings", + self.activity.as_u8(), + &activity_name, + self.severity, + metadata, + ); + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + + OcsfEvent::DetectionFinding(DetectionFindingEvent { + base, + finding_info: self + .finding_info + .unwrap_or_else(|| FindingInfo::new("unknown", "Unknown Finding")), + evidences: if self.evidences.is_empty() { + None + } else { + Some(self.evidences) + }, + attacks: if self.attacks.is_empty() { + None + } else { + Some(self.attacks) + }, + remediation: self.remediation, + is_alert: self.is_alert, + confidence: self.confidence, + risk_level: self.risk_level, + action: self.action, + disposition: self.disposition, + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_detection_finding_builder() { + let ctx = test_sandbox_context(); + let event = DetectionFindingBuilder::new(&ctx) + .activity(ActivityId::Open) // Create + .action(ActionId::Denied) + .disposition(DispositionId::Blocked) + .severity(SeverityId::High) + .is_alert(true) + .confidence(ConfidenceId::High) + .risk_level(RiskLevelId::High) + .finding_info( + FindingInfo::new("nssh1-replay-abc", "NSSH1 Nonce Replay Attack") + .with_desc("A nonce was replayed."), + ) + .evidence("nonce", "0xdeadbeef") + .attack(Attack::mitre( + "T1550", + "Use Alternate Authentication Material", + "TA0008", + "Lateral Movement", + )) + .message("NSSH1 nonce 
replay detected") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 2004); + assert_eq!(json["finding_info"]["title"], "NSSH1 Nonce Replay Attack"); + assert_eq!(json["is_alert"], true); + assert_eq!(json["confidence"], "High"); + } +} diff --git a/crates/openshell-ocsf/src/builders/http.rs b/crates/openshell-ocsf/src/builders/http.rs new file mode 100644 index 00000000..9cc49c9e --- /dev/null +++ b/crates/openshell-ocsf/src/builders/http.rs @@ -0,0 +1,178 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for HTTP Activity [4002] events. + +use crate::builders::SandboxContext; +use crate::enums::{ActionId, ActivityId, DispositionId, SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{HttpActivityEvent, OcsfEvent}; +use crate::objects::{Actor, Endpoint, FirewallRule, HttpRequest, HttpResponse, Process}; + +/// Builder for HTTP Activity [4002] events. 
+pub struct HttpActivityBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + action: Option<ActionId>, + disposition: Option<DispositionId>, + severity: SeverityId, + status: Option<StatusId>, + http_request: Option<HttpRequest>, + http_response: Option<HttpResponse>, + src_endpoint: Option<Endpoint>, + dst_endpoint: Option<Endpoint>, + actor: Option<Actor>, + firewall_rule: Option<FirewallRule>, + message: Option<String>, +} + +impl<'a> HttpActivityBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Unknown, + action: None, + disposition: None, + severity: SeverityId::Informational, + status: None, + http_request: None, + http_response: None, + src_endpoint: None, + dst_endpoint: None, + actor: None, + firewall_rule: None, + message: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn action(mut self, id: ActionId) -> Self { + self.action = Some(id); + self + } + #[must_use] + pub fn disposition(mut self, id: DispositionId) -> Self { + self.disposition = Some(id); + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn http_request(mut self, req: HttpRequest) -> Self { + self.http_request = Some(req); + self + } + #[must_use] + pub fn http_response(mut self, resp: HttpResponse) -> Self { + self.http_response = Some(resp); + self + } + #[must_use] + pub fn src_endpoint(mut self, ep: Endpoint) -> Self { + self.src_endpoint = Some(ep); + self + } + #[must_use] + pub fn dst_endpoint(mut self, ep: Endpoint) -> Self { + self.dst_endpoint = Some(ep); + self + } + #[must_use] + pub fn actor_process(mut self, process: Process) -> Self { + self.actor = Some(Actor { process }); + self + } + #[must_use] + pub fn firewall_rule(mut self, name: &str, rule_type: &str) -> Self { + self.firewall_rule = Some(FirewallRule::new(name, rule_type)); + self
+ } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity.http_label().to_string(); + let mut base = BaseEventData::new( + 4002, + "HTTP Activity", + 4, + "Network Activity", + self.activity.as_u8(), + &activity_name, + self.severity, + self.ctx + .metadata(&["security_control", "network_proxy", "container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + + OcsfEvent::HttpActivity(HttpActivityEvent { + base, + http_request: self.http_request, + http_response: self.http_response, + src_endpoint: self.src_endpoint, + dst_endpoint: self.dst_endpoint, + proxy_endpoint: Some(self.ctx.proxy_endpoint()), + actor: self.actor, + firewall_rule: self.firewall_rule, + action: self.action, + disposition: self.disposition, + observation_point_id: Some(2), + is_src_dst_assignment_known: Some(true), + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + use crate::objects::Url; + + #[test] + fn test_http_activity_builder() { + let ctx = test_sandbox_context(); + let event = HttpActivityBuilder::new(&ctx) + .activity(ActivityId::Reset) // Get = 3 + .action(ActionId::Allowed) + .disposition(DispositionId::Allowed) + .severity(SeverityId::Informational) + .http_request(HttpRequest::new( + "GET", + Url::new("https", "api.example.com", "/v1/data", 443), + )) + .actor_process(Process::new("curl", 88)) + .firewall_rule("default-egress", "mechanistic") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 4002); + assert_eq!(json["activity_name"], "Get"); + assert_eq!(json["http_request"]["http_method"], "GET"); + assert_eq!(json["actor"]["process"]["name"], "curl"); + } +} diff --git
a/crates/openshell-ocsf/src/builders/lifecycle.rs b/crates/openshell-ocsf/src/builders/lifecycle.rs new file mode 100644 index 00000000..b0d3a600 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/lifecycle.rs @@ -0,0 +1,104 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Application Lifecycle [6002] events. + +use crate::builders::SandboxContext; +use crate::enums::{ActivityId, SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{ApplicationLifecycleEvent, OcsfEvent}; +use crate::objects::Product; + +/// Builder for Application Lifecycle [6002] events. +pub struct AppLifecycleBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + severity: SeverityId, + status: Option<StatusId>, + message: Option<String>, +} + +impl<'a> AppLifecycleBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Unknown, + severity: SeverityId::Informational, + status: None, + message: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity.lifecycle_label().to_string(); + let mut base = BaseEventData::new( + 6002, + "Application Lifecycle", + 6, + "Application Activity", + self.activity.as_u8(), + &activity_name, + self.severity, + self.ctx.metadata(&["container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + }
base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + + OcsfEvent::ApplicationLifecycle(ApplicationLifecycleEvent { + base, + app: Product::openshell_sandbox(&self.ctx.product_version), + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_app_lifecycle_builder() { + let ctx = test_sandbox_context(); + let event = AppLifecycleBuilder::new(&ctx) + .activity(ActivityId::Reset) // Start + .severity(SeverityId::Informational) + .status(StatusId::Success) + .message("Starting sandbox") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 6002); + assert_eq!(json["activity_name"], "Start"); + assert_eq!(json["app"]["name"], "OpenShell Sandbox Supervisor"); + assert_eq!(json["status"], "Success"); + } +} diff --git a/crates/openshell-ocsf/src/builders/mod.rs b/crates/openshell-ocsf/src/builders/mod.rs new file mode 100644 index 00000000..77004da4 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/mod.rs @@ -0,0 +1,141 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Ergonomic builders for constructing OCSF events. +//! +//! Each event class has a builder that takes a `SandboxContext` reference +//! and provides chainable methods for setting event fields. 
+ +mod base; +mod config; +mod finding; +mod http; +mod lifecycle; +mod network; +mod process; +mod ssh; + +pub use base::BaseEventBuilder; +pub use config::ConfigStateChangeBuilder; +pub use finding::DetectionFindingBuilder; +pub use http::HttpActivityBuilder; +pub use lifecycle::AppLifecycleBuilder; +pub use network::NetworkActivityBuilder; +pub use process::ProcessActivityBuilder; +pub use ssh::SshActivityBuilder; + +use std::net::IpAddr; + +use crate::OCSF_VERSION; +use crate::objects::{Container, Device, Endpoint, Image, Metadata, Product}; + +/// Immutable context created once at sandbox startup. +/// +/// Passed to every event builder to populate shared OCSF fields +/// (metadata, container, device, proxy endpoint). +#[derive(Debug, Clone)] +pub struct SandboxContext { + /// Sandbox unique identifier. + pub sandbox_id: String, + /// Sandbox display name. + pub sandbox_name: String, + /// Container image reference. + pub container_image: String, + /// Device hostname. + pub hostname: String, + /// Product version string. + pub product_version: String, + /// Proxy listen IP address. + pub proxy_ip: IpAddr, + /// Proxy listen port. + pub proxy_port: u16, +} + +impl SandboxContext { + /// Build the OCSF `Metadata` object for any event. + #[must_use] + pub fn metadata(&self, profiles: &[&str]) -> Metadata { + Metadata { + version: OCSF_VERSION.to_string(), + product: Product::openshell_sandbox(&self.product_version), + profiles: profiles.iter().map(|s| (*s).to_string()).collect(), + uid: Some(self.sandbox_id.clone()), + log_source: None, + } + } + + /// Build the OCSF `Container` object. + #[must_use] + pub fn container(&self) -> Container { + Container { + name: self.sandbox_name.clone(), + uid: Some(self.sandbox_id.clone()), + image: Some(Image { + name: self.container_image.clone(), + }), + } + } + + /// Build the OCSF `Device` object. 
+ #[must_use] + pub fn device(&self) -> Device { + Device::linux(&self.hostname) + } + + /// Build the `proxy_endpoint` object for the Network Proxy profile. + #[must_use] + pub fn proxy_endpoint(&self) -> Endpoint { + Endpoint::from_ip(self.proxy_ip, self.proxy_port) + } +} + +#[cfg(test)] +pub(crate) fn test_sandbox_context() -> SandboxContext { + SandboxContext { + sandbox_id: "sandbox-abc123".to_string(), + sandbox_name: "my-sandbox".to_string(), + container_image: "ghcr.io/openshell/sandbox:latest".to_string(), + hostname: "sandbox-abc123".to_string(), + product_version: "0.1.0".to_string(), + proxy_ip: "10.42.0.1".parse().unwrap(), + proxy_port: 3128, + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_sandbox_context_metadata() { + let ctx = test_sandbox_context(); + let meta = ctx.metadata(&["security_control", "container"]); + assert_eq!(meta.version, "1.7.0"); + assert_eq!(meta.product.name, "OpenShell Sandbox Supervisor"); + assert_eq!(meta.profiles.len(), 2); + assert_eq!(meta.uid.as_deref(), Some("sandbox-abc123")); + } + + #[test] + fn test_sandbox_context_container() { + let ctx = test_sandbox_context(); + let container = ctx.container(); + assert_eq!(container.name, "my-sandbox"); + assert_eq!(container.uid.as_deref(), Some("sandbox-abc123")); + } + + #[test] + fn test_sandbox_context_device() { + let ctx = test_sandbox_context(); + let device = ctx.device(); + assert_eq!(device.hostname, "sandbox-abc123"); + } + + #[test] + fn test_sandbox_context_proxy_endpoint() { + let ctx = test_sandbox_context(); + let ep = ctx.proxy_endpoint(); + assert_eq!(ep.ip.as_deref(), Some("10.42.0.1")); + assert_eq!(ep.port, Some(3128)); + } +} diff --git a/crates/openshell-ocsf/src/builders/network.rs b/crates/openshell-ocsf/src/builders/network.rs new file mode 100644 index 00000000..d0a79925 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/network.rs @@ -0,0 +1,233 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION 
& AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Network Activity [4001] events. + +use std::net::IpAddr; + +use crate::builders::SandboxContext; +use crate::enums::{ActionId, ActivityId, DispositionId, SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{NetworkActivityEvent, OcsfEvent}; +use crate::objects::{Actor, ConnectionInfo, Endpoint, FirewallRule, Process}; + +/// Builder for Network Activity [4001] events. +pub struct NetworkActivityBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + activity_name: Option<String>, + action: Option<ActionId>, + disposition: Option<DispositionId>, + severity: SeverityId, + status: Option<StatusId>, + src_endpoint: Option<Endpoint>, + dst_endpoint: Option<Endpoint>, + actor: Option<Actor>, + firewall_rule: Option<FirewallRule>, + connection_info: Option<ConnectionInfo>, + observation_point_id: Option<u8>, + message: Option<String>, + status_detail: Option<String>, + unmapped: Option<serde_json::Map<String, serde_json::Value>>, + log_source: Option<String>, +} + +impl<'a> NetworkActivityBuilder<'a> { + /// Start building a Network Activity event.
+ #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Unknown, + activity_name: None, + action: None, + disposition: None, + severity: SeverityId::Informational, + status: None, + src_endpoint: None, + dst_endpoint: None, + actor: None, + firewall_rule: None, + connection_info: None, + observation_point_id: None, + message: None, + status_detail: None, + unmapped: None, + log_source: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn activity_name(mut self, name: impl Into<String>) -> Self { + self.activity_name = Some(name.into()); + self + } + #[must_use] + pub fn action(mut self, id: ActionId) -> Self { + self.action = Some(id); + self + } + #[must_use] + pub fn disposition(mut self, id: DispositionId) -> Self { + self.disposition = Some(id); + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn src_endpoint_addr(mut self, ip: IpAddr, port: u16) -> Self { + self.src_endpoint = Some(Endpoint::from_ip(ip, port)); + self + } + #[must_use] + pub fn dst_endpoint(mut self, endpoint: Endpoint) -> Self { + self.dst_endpoint = Some(endpoint); + self + } + #[must_use] + pub fn actor_process(mut self, process: Process) -> Self { + self.actor = Some(Actor { process }); + self + } + #[must_use] + pub fn firewall_rule(mut self, name: &str, rule_type: &str) -> Self { + self.firewall_rule = Some(FirewallRule::new(name, rule_type)); + self + } + #[must_use] + pub fn connection_info(mut self, info: ConnectionInfo) -> Self { + self.connection_info = Some(info); + self + } + #[must_use] + pub fn observation_point(mut self, id: u8) -> Self { + self.observation_point_id = Some(id); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message =
Some(msg.into()); + self + } + #[must_use] + pub fn status_detail(mut self, detail: impl Into<String>) -> Self { + self.status_detail = Some(detail.into()); + self + } + #[must_use] + pub fn log_source(mut self, source: impl Into<String>) -> Self { + self.log_source = Some(source.into()); + self + } + + /// Add an unmapped field. + #[must_use] + pub fn unmapped(mut self, key: &str, value: impl Into<serde_json::Value>) -> Self { + self.unmapped + .get_or_insert_with(serde_json::Map::new) + .insert(key.to_string(), value.into()); + self + } + + /// Finalize and return the `OcsfEvent`. + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self + .activity_name + .unwrap_or_else(|| self.activity.network_label().to_string()); + let mut metadata = + self.ctx + .metadata(&["security_control", "network_proxy", "container", "host"]); + if let Some(source) = self.log_source { + metadata.log_source = Some(source); + } + + let mut base = BaseEventData::new( + 4001, + "Network Activity", + 4, + "Network Activity", + self.activity.as_u8(), + &activity_name, + self.severity, + metadata, + ); + + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + if let Some(detail) = self.status_detail { + base.set_status_detail(detail); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + if let Some(unmapped) = self.unmapped { + base.unmapped = Some(serde_json::Value::Object(unmapped)); + } + + OcsfEvent::NetworkActivity(NetworkActivityEvent { + base, + src_endpoint: self.src_endpoint, + dst_endpoint: self.dst_endpoint, + proxy_endpoint: Some(self.ctx.proxy_endpoint()), + actor: self.actor, + firewall_rule: self.firewall_rule, + connection_info: self.connection_info, + action: self.action, + disposition: self.disposition, + observation_point_id: self.observation_point_id, + is_src_dst_assignment_known: Some(true), + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use
crate::builders::test_sandbox_context; + + #[test] + fn test_network_activity_builder() { + let ctx = test_sandbox_context(); + let event = NetworkActivityBuilder::new(&ctx) + .activity(ActivityId::Open) + .action(ActionId::Allowed) + .disposition(DispositionId::Allowed) + .severity(SeverityId::Informational) + .status(StatusId::Success) + .dst_endpoint(Endpoint::from_domain("api.example.com", 443)) + .actor_process(Process::new("python3", 42).with_cmd_line("python3 /app/main.py")) + .firewall_rule("default-egress", "mechanistic") + .observation_point(2) + .message("CONNECT api.example.com:443 allowed") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 4001); + assert_eq!(json["activity_name"], "Open"); + assert_eq!(json["action"], "Allowed"); + assert_eq!(json["disposition"], "Allowed"); + assert_eq!(json["dst_endpoint"]["domain"], "api.example.com"); + assert_eq!(json["actor"]["process"]["name"], "python3"); + assert_eq!(json["firewall_rule"]["name"], "default-egress"); + assert_eq!(json["container"]["name"], "my-sandbox"); + assert_eq!(json["device"]["hostname"], "sandbox-abc123"); + assert_eq!(json["is_src_dst_assignment_known"], true); + } +} diff --git a/crates/openshell-ocsf/src/builders/process.rs b/crates/openshell-ocsf/src/builders/process.rs new file mode 100644 index 00000000..8ede8012 --- /dev/null +++ b/crates/openshell-ocsf/src/builders/process.rs @@ -0,0 +1,170 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for Process Activity [1007] events. + +use crate::builders::SandboxContext; +use crate::enums::{ActionId, ActivityId, DispositionId, LaunchTypeId, SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{OcsfEvent, ProcessActivityEvent}; +use crate::objects::{Actor, Process}; + +/// Builder for Process Activity [1007] events. 
+pub struct ProcessActivityBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + severity: SeverityId, + status: Option<StatusId>, + action: Option<ActionId>, + disposition: Option<DispositionId>, + process: Option<Process>, + actor: Option<Actor>, + launch_type: Option<LaunchTypeId>, + exit_code: Option<i32>, + message: Option<String>, +} + +impl<'a> ProcessActivityBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Unknown, + severity: SeverityId::Informational, + status: None, + action: None, + disposition: None, + process: None, + actor: None, + launch_type: None, + exit_code: None, + message: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn action(mut self, id: ActionId) -> Self { + self.action = Some(id); + self + } + #[must_use] + pub fn disposition(mut self, id: DispositionId) -> Self { + self.disposition = Some(id); + self + } + #[must_use] + pub fn process(mut self, proc: Process) -> Self { + self.process = Some(proc); + self + } + #[must_use] + pub fn actor_process(mut self, process: Process) -> Self { + self.actor = Some(Actor { process }); + self + } + #[must_use] + pub fn launch_type(mut self, lt: LaunchTypeId) -> Self { + self.launch_type = Some(lt); + self + } + #[must_use] + pub fn exit_code(mut self, code: i32) -> Self { + self.exit_code = Some(code); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity.process_label().to_string(); + let mut base = BaseEventData::new( + 1007, + "Process Activity", + 1, + "System Activity", + self.activity.as_u8(), + &activity_name, + self.severity, + self.ctx +
.metadata(&["security_control", "container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + + OcsfEvent::ProcessActivity(ProcessActivityEvent { + base, + process: self.process.unwrap_or_else(|| Process::new("unknown", 0)), + actor: self.actor, + launch_type: self.launch_type, + exit_code: self.exit_code, + action: self.action, + disposition: self.disposition, + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_process_activity_builder_launch() { + let ctx = test_sandbox_context(); + let event = ProcessActivityBuilder::new(&ctx) + .activity(ActivityId::Open) // Launch + .action(ActionId::Allowed) + .disposition(DispositionId::Allowed) + .severity(SeverityId::Informational) + .launch_type(LaunchTypeId::Spawn) + .process(Process::new("python3", 42).with_cmd_line("python3 /app/main.py")) + .actor_process(Process::new("openshell-sandbox", 1)) + .message("Process started: python3 /app/main.py") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 1007); + assert_eq!(json["launch_type"], "Spawn"); + assert_eq!(json["process"]["name"], "python3"); + assert_eq!(json["actor"]["process"]["name"], "openshell-sandbox"); + } + + #[test] + fn test_process_activity_builder_terminate() { + let ctx = test_sandbox_context(); + let event = ProcessActivityBuilder::new(&ctx) + .activity(ActivityId::Close) // Terminate + .severity(SeverityId::Informational) + .process(Process::new("python3", 42)) + .exit_code(0) + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["exit_code"], 0); + } +} diff --git a/crates/openshell-ocsf/src/builders/ssh.rs b/crates/openshell-ocsf/src/builders/ssh.rs new file mode 100644 index 00000000..6df01f3d --- /dev/null +++ 
b/crates/openshell-ocsf/src/builders/ssh.rs @@ -0,0 +1,172 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! Builder for SSH Activity [4007] events. + +use std::net::IpAddr; + +use crate::builders::SandboxContext; +use crate::enums::{ActionId, ActivityId, AuthTypeId, DispositionId, SeverityId, StatusId}; +use crate::events::base_event::BaseEventData; +use crate::events::{OcsfEvent, SshActivityEvent}; +use crate::objects::{Actor, Endpoint, Process}; + +/// Builder for SSH Activity [4007] events. +pub struct SshActivityBuilder<'a> { + ctx: &'a SandboxContext, + activity: ActivityId, + action: Option<ActionId>, + disposition: Option<DispositionId>, + severity: SeverityId, + status: Option<StatusId>, + src_endpoint: Option<Endpoint>, + dst_endpoint: Option<Endpoint>, + actor: Option<Actor>, + auth_type_id: Option<AuthTypeId>, + auth_type_label: Option<String>, + protocol_ver: Option<String>, + message: Option<String>, +} + +impl<'a> SshActivityBuilder<'a> { + #[must_use] + pub fn new(ctx: &'a SandboxContext) -> Self { + Self { + ctx, + activity: ActivityId::Unknown, + action: None, + disposition: None, + severity: SeverityId::Informational, + status: None, + src_endpoint: None, + dst_endpoint: None, + actor: None, + auth_type_id: None, + auth_type_label: None, + protocol_ver: None, + message: None, + } + } + + #[must_use] + pub fn activity(mut self, id: ActivityId) -> Self { + self.activity = id; + self + } + #[must_use] + pub fn action(mut self, id: ActionId) -> Self { + self.action = Some(id); + self + } + #[must_use] + pub fn disposition(mut self, id: DispositionId) -> Self { + self.disposition = Some(id); + self + } + #[must_use] + pub fn severity(mut self, id: SeverityId) -> Self { + self.severity = id; + self + } + #[must_use] + pub fn status(mut self, id: StatusId) -> Self { + self.status = Some(id); + self + } + #[must_use] + pub fn src_endpoint_addr(mut self, ip: IpAddr, port: u16) -> Self { + self.src_endpoint = Some(Endpoint::from_ip(ip, port)); +
self + } + #[must_use] + pub fn dst_endpoint(mut self, ep: Endpoint) -> Self { + self.dst_endpoint = Some(ep); + self + } + #[must_use] + pub fn actor_process(mut self, process: Process) -> Self { + self.actor = Some(Actor { process }); + self + } + #[must_use] + pub fn message(mut self, msg: impl Into<String>) -> Self { + self.message = Some(msg.into()); + self + } + + /// Set auth type with a custom label (e.g., "NSSH1"). + #[must_use] + pub fn auth_type(mut self, id: AuthTypeId, label: &str) -> Self { + self.auth_type_id = Some(id); + self.auth_type_label = Some(label.to_string()); + self + } + + #[must_use] + pub fn protocol_ver(mut self, ver: &str) -> Self { + self.protocol_ver = Some(ver.to_string()); + self + } + + #[must_use] + pub fn build(self) -> OcsfEvent { + let activity_name = self.activity.network_label().to_string(); + let mut base = BaseEventData::new( + 4007, + "SSH Activity", + 4, + "Network Activity", + self.activity.as_u8(), + &activity_name, + self.severity, + self.ctx + .metadata(&["security_control", "container", "host"]), + ); + if let Some(status) = self.status { + base.set_status(status); + } + if let Some(msg) = self.message { + base.set_message(msg); + } + base.set_device(self.ctx.device()); + base.set_container(self.ctx.container()); + + OcsfEvent::SshActivity(SshActivityEvent { + base, + src_endpoint: self.src_endpoint, + dst_endpoint: self.dst_endpoint, + actor: self.actor, + auth_type: self.auth_type_id, + auth_type_custom_label: self.auth_type_label, + protocol_ver: self.protocol_ver, + action: self.action, + disposition: self.disposition, + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::test_sandbox_context; + + #[test] + fn test_ssh_activity_builder() { + let ctx = test_sandbox_context(); + let event = SshActivityBuilder::new(&ctx) + .activity(ActivityId::Open) + .action(ActionId::Allowed) + .disposition(DispositionId::Allowed) + .severity(SeverityId::Informational) +
.src_endpoint_addr("10.42.0.1".parse().unwrap(), 48201) + .auth_type(AuthTypeId::Other, "NSSH1") + .protocol_ver("NSSH1") + .message("SSH handshake accepted via NSSH1") + .build(); + + let json = event.to_json().unwrap(); + assert_eq!(json["class_uid"], 4007); + assert_eq!(json["auth_type"], "NSSH1"); + assert_eq!(json["auth_type_id"], 99); + } +} diff --git a/crates/openshell-ocsf/src/enums/action.rs b/crates/openshell-ocsf/src/enums/action.rs new file mode 100644 index 00000000..0d5eab24 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/action.rs @@ -0,0 +1,67 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `action_id` enum. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Action ID (0-4, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum ActionId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Allowed + Allowed = 1, + /// 2 — Denied + Denied = 2, + /// 3 — Observed + Observed = 3, + /// 4 — Modified + Modified = 4, + /// 99 — Other + Other = 99, +} + +impl ActionId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Allowed => "Allowed", + Self::Denied => "Denied", + Self::Observed => "Observed", + Self::Modified => "Modified", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_action_labels() { + assert_eq!(ActionId::Unknown.label(), "Unknown"); + assert_eq!(ActionId::Allowed.label(), "Allowed"); + assert_eq!(ActionId::Denied.label(), "Denied"); + assert_eq!(ActionId::Observed.label(), "Observed"); + assert_eq!(ActionId::Modified.label(), "Modified"); + assert_eq!(ActionId::Other.label(), "Other"); + } + + #[test] + fn test_action_json_roundtrip() { + let action = ActionId::Denied; + let json = 
serde_json::to_value(action).unwrap(); + assert_eq!(json, serde_json::json!(2)); + let deserialized: ActionId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, ActionId::Denied); + } +} diff --git a/crates/openshell-ocsf/src/enums/activity.rs b/crates/openshell-ocsf/src/enums/activity.rs new file mode 100644 index 00000000..e31fea85 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/activity.rs @@ -0,0 +1,180 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `activity_id` enum — unified across event classes. +//! +//! OCSF defines per-class activity IDs. We use a single enum with variants +//! covering all classes. The `class_uid` context determines which variants +//! are valid for a given event. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Activity ID — unified across event classes. +/// +/// Activity semantics vary by event class. The naming follows the most +/// common OCSF usage. See per-variant docs for which classes use each. 
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum ActivityId { + /// 0 — Unknown (all classes) + Unknown = 0, + + // --- Network/SSH/HTTP Activity (4001, 4002, 4007) --- + // --- Also Detection Finding: Create (2004) --- + // --- Also Application Lifecycle: Install (6002) --- + // --- Also Config State Change: Log (5019) --- + /// 1 — Open (Network/SSH), Connect (HTTP), Create (Finding), Install (Lifecycle), Log (Config) + Open = 1, + /// 2 — Close (Network/SSH), Delete (HTTP), Update (Finding), Remove (Lifecycle), Collect (Config) + Close = 2, + /// 3 — Reset (Network), Get (HTTP), Close (Finding), Start (Lifecycle) + Reset = 3, + /// 4 — Fail (Network/SSH), Head (HTTP), Stop (Lifecycle) + Fail = 4, + /// 5 — Refuse (Network/SSH), Options (HTTP), Restart (Lifecycle) + Refuse = 5, + /// 6 — Traffic (Network), Post (HTTP), Enable (Lifecycle) + Traffic = 6, + /// 7 — Listen (Network/SSH), Put (HTTP), Disable (Lifecycle) + Listen = 7, + /// 8 — Trace (HTTP), Update (Lifecycle) + Trace = 8, + /// 9 — Patch (HTTP) + Patch = 9, + + /// 99 — Other (all classes) + Other = 99, +} + +impl ActivityId { + /// Returns a human-readable label for this activity in a network context. + #[must_use] + pub fn network_label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Open => "Open", + Self::Close => "Close", + Self::Reset => "Reset", + Self::Fail => "Fail", + Self::Refuse => "Refuse", + Self::Traffic => "Traffic", + Self::Listen => "Listen", + Self::Trace => "Trace", + Self::Patch => "Patch", + Self::Other => "Other", + } + } + + /// Returns a human-readable label for HTTP activity context. 
+ #[must_use] + pub fn http_label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Open => "Connect", + Self::Close => "Delete", + Self::Reset => "Get", + Self::Fail => "Head", + Self::Refuse => "Options", + Self::Traffic => "Post", + Self::Listen => "Put", + Self::Trace => "Trace", + Self::Patch => "Patch", + Self::Other => "Other", + } + } + + /// Returns a human-readable label for Detection Finding activity context. + #[must_use] + pub fn finding_label(self) -> &'static str { + match self { + Self::Open => "Create", + Self::Close => "Update", + Self::Reset => "Close", + _ => self.network_label(), + } + } + + /// Returns a human-readable label for Application Lifecycle activity context. + #[must_use] + pub fn lifecycle_label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Open => "Install", + Self::Close => "Remove", + Self::Reset => "Start", + Self::Fail => "Stop", + Self::Refuse => "Restart", + Self::Traffic => "Enable", + Self::Listen => "Disable", + Self::Trace => "Update", + Self::Patch | Self::Other => "Other", + } + } + + /// Returns a human-readable label for Config State Change activity context. + #[must_use] + pub fn config_label(self) -> &'static str { + match self { + Self::Open => "Log", + Self::Close => "Collect", + _ => self.network_label(), + } + } + + /// Returns a human-readable label for Process Activity context. 
+ #[must_use] + pub fn process_label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Open => "Launch", + Self::Close => "Terminate", + Self::Reset => "Open", + Self::Fail => "Inject", + Self::Refuse => "Set User ID", + _ => self.network_label(), + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_activity_network_labels() { + assert_eq!(ActivityId::Open.network_label(), "Open"); + assert_eq!(ActivityId::Close.network_label(), "Close"); + assert_eq!(ActivityId::Refuse.network_label(), "Refuse"); + assert_eq!(ActivityId::Listen.network_label(), "Listen"); + } + + #[test] + fn test_activity_http_labels() { + assert_eq!(ActivityId::Open.http_label(), "Connect"); + assert_eq!(ActivityId::Close.http_label(), "Delete"); + assert_eq!(ActivityId::Reset.http_label(), "Get"); + assert_eq!(ActivityId::Traffic.http_label(), "Post"); + assert_eq!(ActivityId::Listen.http_label(), "Put"); + assert_eq!(ActivityId::Patch.http_label(), "Patch"); + } + + #[test] + fn test_activity_process_labels() { + assert_eq!(ActivityId::Open.process_label(), "Launch"); + assert_eq!(ActivityId::Close.process_label(), "Terminate"); + } + + #[test] + fn test_activity_json_roundtrip() { + let activity = ActivityId::Open; + let json = serde_json::to_value(activity).unwrap(); + assert_eq!(json, serde_json::json!(1)); + let deserialized: ActivityId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, ActivityId::Open); + } +} diff --git a/crates/openshell-ocsf/src/enums/auth.rs b/crates/openshell-ocsf/src/enums/auth.rs new file mode 100644 index 00000000..5b72fe3f --- /dev/null +++ b/crates/openshell-ocsf/src/enums/auth.rs @@ -0,0 +1,79 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `auth_type_id` enum for SSH Activity. 
+ +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Auth Type ID (0-6, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum AuthTypeId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Certificate Based + CertificateBased = 1, + /// 2 — GSSAPI + Gssapi = 2, + /// 3 — Host Based + HostBased = 3, + /// 4 — Keyboard Interactive + KeyboardInteractive = 4, + /// 5 — Password + Password = 5, + /// 6 — Public Key + PublicKey = 6, + /// 99 — Other (used for NSSH1) + Other = 99, +} + +impl AuthTypeId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::CertificateBased => "Certificate Based", + Self::Gssapi => "GSSAPI", + Self::HostBased => "Host Based", + Self::KeyboardInteractive => "Keyboard Interactive", + Self::Password => "Password", + Self::PublicKey => "Public Key", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_auth_type_labels() { + assert_eq!(AuthTypeId::Unknown.label(), "Unknown"); + assert_eq!(AuthTypeId::CertificateBased.label(), "Certificate Based"); + assert_eq!(AuthTypeId::PublicKey.label(), "Public Key"); + assert_eq!(AuthTypeId::Other.label(), "Other"); + } + + #[test] + fn test_auth_type_integer_values() { + assert_eq!(AuthTypeId::Unknown.as_u8(), 0); + assert_eq!(AuthTypeId::CertificateBased.as_u8(), 1); + assert_eq!(AuthTypeId::PublicKey.as_u8(), 6); + assert_eq!(AuthTypeId::Other.as_u8(), 99); + } + + #[test] + fn test_auth_type_json_roundtrip() { + let auth = AuthTypeId::Other; + let json = serde_json::to_value(auth).unwrap(); + assert_eq!(json, serde_json::json!(99)); + let deserialized: AuthTypeId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, AuthTypeId::Other); + } +} diff --git a/crates/openshell-ocsf/src/enums/disposition.rs b/crates/openshell-ocsf/src/enums/disposition.rs new file 
mode 100644 index 00000000..8e4078a1 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/disposition.rs @@ -0,0 +1,148 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `disposition_id` enum. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Disposition ID (0-27, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum DispositionId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Allowed + Allowed = 1, + /// 2 — Blocked + Blocked = 2, + /// 3 — Quarantined + Quarantined = 3, + /// 4 — Isolated + Isolated = 4, + /// 5 — Deleted + Deleted = 5, + /// 6 — Dropped + Dropped = 6, + /// 7 — Custom Action + CustomAction = 7, + /// 8 — Approved + Approved = 8, + /// 9 — Restored + Restored = 9, + /// 10 — Exonerated + Exonerated = 10, + /// 11 — Corrected + Corrected = 11, + /// 12 — Partially Corrected + PartiallyCorrected = 12, + /// 13 — Uncorrected + Uncorrected = 13, + /// 14 — Delayed + Delayed = 14, + /// 15 — Detected + Detected = 15, + /// 16 — No Action + NoAction = 16, + /// 17 — Logged + Logged = 17, + /// 18 — Tagged + Tagged = 18, + /// 19 — Alert + Alert = 19, + /// 20 — Count + Count = 20, + /// 21 — Reset + Reset = 21, + /// 22 — Captcha + Captcha = 22, + /// 23 — Challenge + Challenge = 23, + /// 24 — Access Revoked + AccessRevoked = 24, + /// 25 — Rejected + Rejected = 25, + /// 26 — Unauthorized + Unauthorized = 26, + /// 27 — Error + Error = 27, + /// 99 — Other + Other = 99, +} + +impl DispositionId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Allowed => "Allowed", + Self::Blocked => "Blocked", + Self::Quarantined => "Quarantined", + Self::Isolated => "Isolated", + Self::Deleted => "Deleted", + Self::Dropped => "Dropped", + Self::CustomAction => "Custom Action", + Self::Approved => "Approved", + Self::Restored 
=> "Restored", + Self::Exonerated => "Exonerated", + Self::Corrected => "Corrected", + Self::PartiallyCorrected => "Partially Corrected", + Self::Uncorrected => "Uncorrected", + Self::Delayed => "Delayed", + Self::Detected => "Detected", + Self::NoAction => "No Action", + Self::Logged => "Logged", + Self::Tagged => "Tagged", + Self::Alert => "Alert", + Self::Count => "Count", + Self::Reset => "Reset", + Self::Captcha => "Captcha", + Self::Challenge => "Challenge", + Self::AccessRevoked => "Access Revoked", + Self::Rejected => "Rejected", + Self::Unauthorized => "Unauthorized", + Self::Error => "Error", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_disposition_labels() { + assert_eq!(DispositionId::Unknown.label(), "Unknown"); + assert_eq!(DispositionId::Allowed.label(), "Allowed"); + assert_eq!(DispositionId::Blocked.label(), "Blocked"); + assert_eq!(DispositionId::Detected.label(), "Detected"); + assert_eq!(DispositionId::Logged.label(), "Logged"); + assert_eq!(DispositionId::Rejected.label(), "Rejected"); + assert_eq!(DispositionId::Error.label(), "Error"); + assert_eq!(DispositionId::Other.label(), "Other"); + } + + #[test] + fn test_disposition_integer_values() { + assert_eq!(DispositionId::Unknown.as_u8(), 0); + assert_eq!(DispositionId::Allowed.as_u8(), 1); + assert_eq!(DispositionId::Blocked.as_u8(), 2); + assert_eq!(DispositionId::Rejected.as_u8(), 25); + assert_eq!(DispositionId::Error.as_u8(), 27); + assert_eq!(DispositionId::Other.as_u8(), 99); + } + + #[test] + fn test_disposition_json_roundtrip() { + let disp = DispositionId::Blocked; + let json = serde_json::to_value(disp).unwrap(); + assert_eq!(json, serde_json::json!(2)); + let deserialized: DispositionId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, DispositionId::Blocked); + } +} diff --git a/crates/openshell-ocsf/src/enums/http_method.rs 
b/crates/openshell-ocsf/src/enums/http_method.rs new file mode 100644 index 00000000..b7b04e09 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/http_method.rs @@ -0,0 +1,137 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `http_method` enum — the 9 OCSF-defined HTTP methods. + +use serde::{Deserialize, Serialize}; + +/// HTTP method as defined in the OCSF v1.7.0 `http_request` object schema. +/// +/// The 9 standard methods are typed variants. Non-standard methods use +/// `Other(String)`. +#[derive(Debug, Clone, PartialEq, Eq, Hash)] +pub enum HttpMethod { + /// OPTIONS + Options, + /// GET + Get, + /// HEAD + Head, + /// POST + Post, + /// PUT + Put, + /// DELETE + Delete, + /// TRACE + Trace, + /// CONNECT + Connect, + /// PATCH + Patch, + /// Non-standard method. + Other(String), +} + +impl HttpMethod { + /// Return the canonical uppercase string representation. + #[must_use] + pub fn as_str(&self) -> &str { + match self { + Self::Options => "OPTIONS", + Self::Get => "GET", + Self::Head => "HEAD", + Self::Post => "POST", + Self::Put => "PUT", + Self::Delete => "DELETE", + Self::Trace => "TRACE", + Self::Connect => "CONNECT", + Self::Patch => "PATCH", + Self::Other(s) => s, + } + } +} + +impl std::str::FromStr for HttpMethod { + type Err = std::convert::Infallible; + + /// Parse a method string into a typed variant (case-insensitive). 
+ fn from_str(s: &str) -> Result<Self, Self::Err> { + Ok(match s.to_uppercase().as_str() { + "OPTIONS" => Self::Options, + "GET" => Self::Get, + "HEAD" => Self::Head, + "POST" => Self::Post, + "PUT" => Self::Put, + "DELETE" => Self::Delete, + "TRACE" => Self::Trace, + "CONNECT" => Self::Connect, + "PATCH" => Self::Patch, + _ => Self::Other(s.to_string()), + }) + } +} + +impl Serialize for HttpMethod { + fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> { + serializer.serialize_str(self.as_str()) + } +} + +impl<'de> Deserialize<'de> for HttpMethod { + fn deserialize<D: serde::Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> { + let s = String::deserialize(deserializer)?; + // Infallible: `FromStr::Err` is `std::convert::Infallible`. + Ok(s.parse().unwrap()) + } +} + +impl std::fmt::Display for HttpMethod { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + f.write_str(self.as_str()) + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_from_str_standard_methods() { + assert_eq!("GET".parse::<HttpMethod>().unwrap(), HttpMethod::Get); + assert_eq!("get".parse::<HttpMethod>().unwrap(), HttpMethod::Get); + assert_eq!("Post".parse::<HttpMethod>().unwrap(), HttpMethod::Post); + assert_eq!("DELETE".parse::<HttpMethod>().unwrap(), HttpMethod::Delete); + assert_eq!( + "CONNECT".parse::<HttpMethod>().unwrap(), + HttpMethod::Connect + ); + assert_eq!("PATCH".parse::<HttpMethod>().unwrap(), HttpMethod::Patch); + } + + #[test] + fn test_from_str_non_standard() { + let method: HttpMethod = "PROPFIND".parse().unwrap(); + assert_eq!(method, HttpMethod::Other("PROPFIND".to_string())); + assert_eq!(method.as_str(), "PROPFIND"); + } + + #[test] + fn test_json_roundtrip() { + let method = HttpMethod::Get; + let json = serde_json::to_value(&method).unwrap(); + assert_eq!(json, serde_json::json!("GET")); + + let deserialized: HttpMethod = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, HttpMethod::Get); + } + + #[test] + fn test_json_roundtrip_other() { + let method = HttpMethod::Other("PROPFIND".to_string()); + let json = serde_json::to_value(&method).unwrap(); + assert_eq!(json, serde_json::json!("PROPFIND"));
+ + let deserialized: HttpMethod = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, HttpMethod::Other("PROPFIND".to_string())); + } +} diff --git a/crates/openshell-ocsf/src/enums/launch.rs b/crates/openshell-ocsf/src/enums/launch.rs new file mode 100644 index 00000000..71e6823a --- /dev/null +++ b/crates/openshell-ocsf/src/enums/launch.rs @@ -0,0 +1,63 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `launch_type_id` enum (new in v1.7.0). + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Launch Type ID (0-3, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum LaunchTypeId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Spawn + Spawn = 1, + /// 2 — Fork + Fork = 2, + /// 3 — Exec + Exec = 3, + /// 99 — Other + Other = 99, +} + +impl LaunchTypeId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Spawn => "Spawn", + Self::Fork => "Fork", + Self::Exec => "Exec", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_launch_type_labels() { + assert_eq!(LaunchTypeId::Unknown.label(), "Unknown"); + assert_eq!(LaunchTypeId::Spawn.label(), "Spawn"); + assert_eq!(LaunchTypeId::Fork.label(), "Fork"); + assert_eq!(LaunchTypeId::Exec.label(), "Exec"); + assert_eq!(LaunchTypeId::Other.label(), "Other"); + } + + #[test] + fn test_launch_type_json_roundtrip() { + let launch = LaunchTypeId::Spawn; + let json = serde_json::to_value(launch).unwrap(); + assert_eq!(json, serde_json::json!(1)); + let deserialized: LaunchTypeId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, LaunchTypeId::Spawn); + } +} diff --git a/crates/openshell-ocsf/src/enums/mod.rs b/crates/openshell-ocsf/src/enums/mod.rs new 
file mode 100644 index 00000000..8ab03eb6 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/mod.rs @@ -0,0 +1,61 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF v1.7.0 enum types. + +mod action; +mod activity; +mod auth; +mod disposition; +mod http_method; +mod launch; +mod security; +mod severity; +mod status; + +pub use action::ActionId; +pub use activity::ActivityId; +pub use auth::AuthTypeId; +pub use disposition::DispositionId; +pub use http_method::HttpMethod; +pub use launch::LaunchTypeId; +pub use security::{ConfidenceId, RiskLevelId, SecurityLevelId}; +pub use severity::SeverityId; +pub use status::{StateId, StatusId}; + +/// Trait for OCSF enum types that have an integer ID and a string label. +/// +/// All OCSF "sibling pair" enums implement this trait, enabling generic +/// serialization of `_id` + label field pairs. +pub trait OcsfEnum: Copy + Clone + PartialEq + Eq + std::fmt::Debug { + /// Return the integer representation for JSON serialization. + fn as_u8(self) -> u8; + + /// Return the OCSF string label for this value. + fn label(self) -> &'static str; +} + +/// Implement [`OcsfEnum`] for a type that already has `as_u8()` and `label()` methods. +macro_rules! impl_ocsf_enum { + ($($ty:ty),+ $(,)?) => { + $( + impl OcsfEnum for $ty { + fn as_u8(self) -> u8 { self.as_u8() } + fn label(self) -> &'static str { self.label() } + } + )+ + }; +} + +impl_ocsf_enum!( + ActionId, + AuthTypeId, + ConfidenceId, + DispositionId, + LaunchTypeId, + RiskLevelId, + SecurityLevelId, + SeverityId, + StateId, + StatusId, +); diff --git a/crates/openshell-ocsf/src/enums/security.rs b/crates/openshell-ocsf/src/enums/security.rs new file mode 100644 index 00000000..b0fcffa3 --- /dev/null +++ b/crates/openshell-ocsf/src/enums/security.rs @@ -0,0 +1,160 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+// SPDX-License-Identifier: Apache-2.0 + +//! OCSF security-related enums: `security_level_id`, `confidence_id`, `risk_level_id`. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Security Level ID (0-3, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum SecurityLevelId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Secure + Secure = 1, + /// 2 — At Risk + AtRisk = 2, + /// 3 — Compromised + Compromised = 3, + /// 99 — Other + Other = 99, +} + +impl SecurityLevelId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Secure => "Secure", + Self::AtRisk => "At Risk", + Self::Compromised => "Compromised", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +/// OCSF Confidence ID (0-3, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum ConfidenceId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Low + Low = 1, + /// 2 — Medium + Medium = 2, + /// 3 — High + High = 3, + /// 99 — Other + Other = 99, +} + +impl ConfidenceId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Low => "Low", + Self::Medium => "Medium", + Self::High => "High", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +/// OCSF Risk Level ID (0-5, 99).
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum RiskLevelId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Info + Info = 1, + /// 2 — Low + Low = 2, + /// 3 — Medium + Medium = 3, + /// 4 — High + High = 4, + /// 5 — Critical + Critical = 5, + /// 99 — Other + Other = 99, +} + +impl RiskLevelId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Info => "Info", + Self::Low => "Low", + Self::Medium => "Medium", + Self::High => "High", + Self::Critical => "Critical", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_security_level_labels() { + assert_eq!(SecurityLevelId::Unknown.label(), "Unknown"); + assert_eq!(SecurityLevelId::Secure.label(), "Secure"); + assert_eq!(SecurityLevelId::AtRisk.label(), "At Risk"); + assert_eq!(SecurityLevelId::Compromised.label(), "Compromised"); + } + + #[test] + fn test_confidence_labels() { + assert_eq!(ConfidenceId::Unknown.label(), "Unknown"); + assert_eq!(ConfidenceId::Low.label(), "Low"); + assert_eq!(ConfidenceId::Medium.label(), "Medium"); + assert_eq!(ConfidenceId::High.label(), "High"); + } + + #[test] + fn test_risk_level_labels() { + assert_eq!(RiskLevelId::Unknown.label(), "Unknown"); + assert_eq!(RiskLevelId::Info.label(), "Info"); + assert_eq!(RiskLevelId::Low.label(), "Low"); + assert_eq!(RiskLevelId::Medium.label(), "Medium"); + assert_eq!(RiskLevelId::High.label(), "High"); + assert_eq!(RiskLevelId::Critical.label(), "Critical"); + } + + #[test] + fn test_security_json_roundtrips() { + let sl = SecurityLevelId::Secure; + let json = serde_json::to_value(sl).unwrap(); + assert_eq!(json, serde_json::json!(1)); + + let conf = ConfidenceId::High; + let json = serde_json::to_value(conf).unwrap(); + assert_eq!(json, serde_json::json!(3)); + + let risk = RiskLevelId::High; + let json = 
serde_json::to_value(risk).unwrap(); + assert_eq!(json, serde_json::json!(4)); + } +} diff --git a/crates/openshell-ocsf/src/enums/severity.rs b/crates/openshell-ocsf/src/enums/severity.rs new file mode 100644 index 00000000..4609e0cc --- /dev/null +++ b/crates/openshell-ocsf/src/enums/severity.rs @@ -0,0 +1,115 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `severity_id` enum. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Severity ID (0-6, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum SeverityId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Informational + Informational = 1, + /// 2 — Low + Low = 2, + /// 3 — Medium + Medium = 3, + /// 4 — High + High = 4, + /// 5 — Critical + Critical = 5, + /// 6 — Fatal + Fatal = 6, + /// 99 — Other + Other = 99, +} + +impl SeverityId { + /// Returns the OCSF string label for this severity. + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Informational => "Informational", + Self::Low => "Low", + Self::Medium => "Medium", + Self::High => "High", + Self::Critical => "Critical", + Self::Fatal => "Fatal", + Self::Other => "Other", + } + } + + /// Returns the single-character shorthand for log display. + #[must_use] + pub fn shorthand_char(self) -> char { + match self { + Self::Informational => 'I', + Self::Low => 'L', + Self::Medium => 'M', + Self::High => 'H', + Self::Critical => 'C', + Self::Fatal => 'F', + Self::Unknown | Self::Other => ' ', + } + } + + /// Returns the integer value for JSON serialization. 
+ #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_severity_labels() { + assert_eq!(SeverityId::Unknown.label(), "Unknown"); + assert_eq!(SeverityId::Informational.label(), "Informational"); + assert_eq!(SeverityId::Low.label(), "Low"); + assert_eq!(SeverityId::Medium.label(), "Medium"); + assert_eq!(SeverityId::High.label(), "High"); + assert_eq!(SeverityId::Critical.label(), "Critical"); + assert_eq!(SeverityId::Fatal.label(), "Fatal"); + assert_eq!(SeverityId::Other.label(), "Other"); + } + + #[test] + fn test_severity_shorthand_chars() { + assert_eq!(SeverityId::Unknown.shorthand_char(), ' '); + assert_eq!(SeverityId::Informational.shorthand_char(), 'I'); + assert_eq!(SeverityId::Low.shorthand_char(), 'L'); + assert_eq!(SeverityId::Medium.shorthand_char(), 'M'); + assert_eq!(SeverityId::High.shorthand_char(), 'H'); + assert_eq!(SeverityId::Critical.shorthand_char(), 'C'); + assert_eq!(SeverityId::Fatal.shorthand_char(), 'F'); + assert_eq!(SeverityId::Other.shorthand_char(), ' '); + } + + #[test] + fn test_severity_integer_values() { + assert_eq!(SeverityId::Unknown.as_u8(), 0); + assert_eq!(SeverityId::Informational.as_u8(), 1); + assert_eq!(SeverityId::Low.as_u8(), 2); + assert_eq!(SeverityId::Medium.as_u8(), 3); + assert_eq!(SeverityId::High.as_u8(), 4); + assert_eq!(SeverityId::Critical.as_u8(), 5); + assert_eq!(SeverityId::Fatal.as_u8(), 6); + assert_eq!(SeverityId::Other.as_u8(), 99); + } + + #[test] + fn test_severity_json_roundtrip() { + let severity = SeverityId::High; + let json = serde_json::to_value(severity).unwrap(); + assert_eq!(json, serde_json::json!(4)); + let deserialized: SeverityId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, SeverityId::High); + } +} diff --git a/crates/openshell-ocsf/src/enums/status.rs b/crates/openshell-ocsf/src/enums/status.rs new file mode 100644 index 00000000..90ac4c27 --- /dev/null +++ 
b/crates/openshell-ocsf/src/enums/status.rs @@ -0,0 +1,107 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF `status_id` and `state_id` enums. + +use serde_repr::{Deserialize_repr, Serialize_repr}; + +/// OCSF Status ID (0-2, 99). +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum StatusId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Success + Success = 1, + /// 2 — Failure + Failure = 2, + /// 99 — Other + Other = 99, +} + +impl StatusId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Success => "Success", + Self::Failure => "Failure", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +/// OCSF State ID (0-2, 99) — used by Device Config State Change. +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize_repr, Deserialize_repr)] +#[repr(u8)] +pub enum StateId { + /// 0 — Unknown + Unknown = 0, + /// 1 — Disabled + Disabled = 1, + /// 2 — Enabled + Enabled = 2, + /// 99 — Other + Other = 99, +} + +impl StateId { + #[must_use] + pub fn label(self) -> &'static str { + match self { + Self::Unknown => "Unknown", + Self::Disabled => "Disabled", + Self::Enabled => "Enabled", + Self::Other => "Other", + } + } + + #[must_use] + pub fn as_u8(self) -> u8 { + self as u8 + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_status_labels() { + assert_eq!(StatusId::Unknown.label(), "Unknown"); + assert_eq!(StatusId::Success.label(), "Success"); + assert_eq!(StatusId::Failure.label(), "Failure"); + assert_eq!(StatusId::Other.label(), "Other"); + } + + #[test] + fn test_status_json_roundtrip() { + let status = StatusId::Success; + let json = serde_json::to_value(status).unwrap(); + assert_eq!(json, serde_json::json!(1)); + let deserialized: StatusId = 
serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, StatusId::Success); + } + + #[test] + fn test_state_labels() { + assert_eq!(StateId::Unknown.label(), "Unknown"); + assert_eq!(StateId::Disabled.label(), "Disabled"); + assert_eq!(StateId::Enabled.label(), "Enabled"); + assert_eq!(StateId::Other.label(), "Other"); + } + + #[test] + fn test_state_json_roundtrip() { + let state = StateId::Enabled; + let json = serde_json::to_value(state).unwrap(); + assert_eq!(json, serde_json::json!(2)); + let deserialized: StateId = serde_json::from_value(json).unwrap(); + assert_eq!(deserialized, StateId::Enabled); + } +} diff --git a/crates/openshell-ocsf/src/events/app_lifecycle.rs b/crates/openshell-ocsf/src/events/app_lifecycle.rs new file mode 100644 index 00000000..4dc074eb --- /dev/null +++ b/crates/openshell-ocsf/src/events/app_lifecycle.rs @@ -0,0 +1,61 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF Application Lifecycle [6002] event class. + +use serde::{Deserialize, Serialize}; + +use crate::events::base_event::BaseEventData; +use crate::objects::Product; + +/// OCSF Application Lifecycle Event [6002]. +/// +/// Sandbox supervisor lifecycle events. +#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] +pub struct ApplicationLifecycleEvent { + /// Common base event fields. + #[serde(flatten)] + pub base: BaseEventData, + + /// Application / product info (required). 
+ pub app: Product, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::enums::SeverityId; + use crate::objects::Metadata; + + #[test] + fn test_app_lifecycle_serialization() { + let mut base = BaseEventData::new( + 6002, + "Application Lifecycle", + 6, + "Application Activity", + 3, + "Start", + SeverityId::Informational, + Metadata { + version: "1.7.0".to_string(), + product: Product::openshell_sandbox("0.1.0"), + profiles: vec!["container".to_string()], + uid: Some("sandbox-abc123".to_string()), + log_source: None, + }, + ); + base.set_message("Starting sandbox"); + + let event = ApplicationLifecycleEvent { + base, + app: Product::openshell_sandbox("0.1.0"), + }; + + let json = serde_json::to_value(&event).unwrap(); + assert_eq!(json["class_uid"], 6002); + assert_eq!(json["type_uid"], 600_203); + assert_eq!(json["app"]["name"], "OpenShell Sandbox Supervisor"); + assert_eq!(json["message"], "Starting sandbox"); + } +} diff --git a/crates/openshell-ocsf/src/events/base_event.rs b/crates/openshell-ocsf/src/events/base_event.rs new file mode 100644 index 00000000..97e7c038 --- /dev/null +++ b/crates/openshell-ocsf/src/events/base_event.rs @@ -0,0 +1,286 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF Base Event [0] and shared `BaseEventData`. + +use serde::{Deserialize, Serialize}; + +use crate::enums::{SeverityId, StatusId}; +use crate::objects::{Container, Device, Metadata}; + +/// Common fields shared by all OCSF event classes. +/// +/// Every event class embeds this struct via `#[serde(flatten)]`. +#[derive(Debug, Clone, PartialEq, Eq, Deserialize)] +pub struct BaseEventData { + /// OCSF class UID (e.g., 4001 for Network Activity). + pub class_uid: u32, + + /// Human-readable class name. + pub class_name: String, + + /// OCSF category UID. + pub category_uid: u8, + + /// Human-readable category name. 
+ pub category_name: String, + + /// Activity ID within the class. + pub activity_id: u8, + + /// Human-readable activity name. + pub activity_name: String, + + /// Computed type UID: `class_uid * 100 + activity_id`. + pub type_uid: u32, + + /// Human-readable type name: "`class_name`: `activity_name`". + pub type_name: String, + + /// Event timestamp in milliseconds since epoch. + pub time: i64, + + /// Severity (typed enum, serialized as `severity_id` + `severity` pair). + #[serde(rename = "severity_id")] + pub severity: SeverityId, + + /// Status (typed enum, serialized as `status_id` + `status` pair). + #[serde(rename = "status_id", default, skip_serializing_if = "Option::is_none")] + pub status: Option<StatusId>, + + /// Human-readable event message. + #[serde(skip_serializing_if = "Option::is_none")] + pub message: Option<String>, + + /// Status detail / reason. + #[serde(skip_serializing_if = "Option::is_none")] + pub status_detail: Option<String>, + + /// Event metadata (schema version, product, profiles). + pub metadata: Metadata, + + /// Device info. + #[serde(skip_serializing_if = "Option::is_none")] + pub device: Option<Device>, + + /// Container info (Container profile). + #[serde(skip_serializing_if = "Option::is_none")] + pub container: Option<Container>, + + /// Unmapped fields that don't fit the OCSF schema.
+ #[serde(skip_serializing_if = "Option::is_none")] + pub unmapped: Option<serde_json::Value>, +} + +impl Serialize for BaseEventData { + fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> { + use serde::ser::SerializeMap; + + // Count fields: 9 required + severity pair (2) + up to 6 optional + let mut map = serializer.serialize_map(None)?; + + map.serialize_entry("class_uid", &self.class_uid)?; + map.serialize_entry("class_name", &self.class_name)?; + map.serialize_entry("category_uid", &self.category_uid)?; + map.serialize_entry("category_name", &self.category_name)?; + map.serialize_entry("activity_id", &self.activity_id)?; + map.serialize_entry("activity_name", &self.activity_name)?; + map.serialize_entry("type_uid", &self.type_uid)?; + map.serialize_entry("type_name", &self.type_name)?; + map.serialize_entry("time", &self.time)?; + + // Severity — typed enum → id + label pair + map.serialize_entry("severity_id", &self.severity.as_u8())?; + map.serialize_entry("severity", self.severity.label())?; + + // Status — optional typed enum → id + label pair + if let Some(status) = self.status { + map.serialize_entry("status_id", &status.as_u8())?; + map.serialize_entry("status", status.label())?; + } + + if let Some(ref msg) = self.message { + map.serialize_entry("message", msg)?; + } + if let Some(ref detail) = self.status_detail { + map.serialize_entry("status_detail", detail)?; + } + map.serialize_entry("metadata", &self.metadata)?; + if let Some(ref device) = self.device { + map.serialize_entry("device", device)?; + } + if let Some(ref container) = self.container { + map.serialize_entry("container", container)?; + } + if let Some(ref unmapped) = self.unmapped { + map.serialize_entry("unmapped", unmapped)?; + } + + map.end() + } +} + +impl BaseEventData { + /// Create base event data with required fields.
+ #[allow(clippy::too_many_arguments)] + #[must_use] + pub fn new( + class_uid: u32, + class_name: &str, + category_uid: u8, + category_name: &str, + activity_id: u8, + activity_name: &str, + severity_id: SeverityId, + metadata: Metadata, + ) -> Self { + let type_uid = class_uid * 100 + u32::from(activity_id); + let type_name = format!("{class_name}: {activity_name}"); + + Self { + class_uid, + class_name: class_name.to_string(), + category_uid, + category_name: category_name.to_string(), + activity_id, + activity_name: activity_name.to_string(), + type_uid, + type_name, + time: chrono::Utc::now().timestamp_millis(), + severity: severity_id, + status: None, + message: None, + status_detail: None, + metadata, + device: None, + container: None, + unmapped: None, + } + } + + /// Set the timestamp (milliseconds since epoch). + pub fn set_time(&mut self, time_ms: i64) { + self.time = time_ms; + } + + /// Set status. + pub fn set_status(&mut self, status_id: StatusId) { + self.status = Some(status_id); + } + + /// Set message. + pub fn set_message(&mut self, message: impl Into<String>) { + self.message = Some(message.into()); + } + + /// Set status detail. + pub fn set_status_detail(&mut self, detail: impl Into<String>) { + self.status_detail = Some(detail.into()); + } + + /// Set device info. + pub fn set_device(&mut self, device: Device) { + self.device = Some(device); + } + + /// Set container info. + pub fn set_container(&mut self, container: Container) { + self.container = Some(container); + } + + /// Add an unmapped field. + pub fn add_unmapped(&mut self, key: &str, value: impl Into<serde_json::Value>) { + let map = self + .unmapped + .get_or_insert_with(|| serde_json::Value::Object(serde_json::Map::new())); + if let serde_json::Value::Object(m) = map { + m.insert(key.to_string(), value.into()); + } + } +} + +/// OCSF Base Event [0] — for events that don't fit a specific class. +#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] +pub struct BaseEvent { + /// Common base event fields.
+ #[serde(flatten)] + pub base: BaseEventData, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::objects::Product; + + fn test_metadata() -> Metadata { + Metadata { + version: "1.7.0".to_string(), + product: Product::openshell_sandbox("0.1.0"), + profiles: vec!["container".to_string(), "host".to_string()], + uid: Some("sandbox-abc123".to_string()), + log_source: None, + } + } + + #[test] + fn test_base_event_data_creation() { + let base = BaseEventData::new( + 0, + "Base Event", + 0, + "Uncategorized", + 99, + "Other", + SeverityId::Informational, + test_metadata(), + ); + + assert_eq!(base.class_uid, 0); + assert_eq!(base.type_uid, 99); // 0 * 100 + 99 + assert_eq!(base.type_name, "Base Event: Other"); + assert_eq!(base.severity, SeverityId::Informational); + } + + #[test] + fn test_type_uid_computation() { + let base = BaseEventData::new( + 4001, + "Network Activity", + 4, + "Network Activity", + 1, + "Open", + SeverityId::Informational, + test_metadata(), + ); + + assert_eq!(base.type_uid, 400_101); // 4001 * 100 + 1 + } + + #[test] + fn test_base_event_serialization() { + let mut base = BaseEventData::new( + 0, + "Base Event", + 0, + "Uncategorized", + 99, + "Network Namespace Created", + SeverityId::Informational, + test_metadata(), + ); + base.set_status(StatusId::Success); + base.set_message("Network namespace created"); + base.add_unmapped("namespace", serde_json::json!("openshell-sandbox-abc123")); + + let event = BaseEvent { base }; + let json = serde_json::to_value(&event).unwrap(); + + assert_eq!(json["class_uid"], 0); + assert_eq!(json["class_name"], "Base Event"); + assert_eq!(json["activity_name"], "Network Namespace Created"); + assert_eq!(json["status"], "Success"); + assert_eq!(json["message"], "Network namespace created"); + assert_eq!(json["unmapped"]["namespace"], "openshell-sandbox-abc123"); + } +} diff --git a/crates/openshell-ocsf/src/events/config_state_change.rs b/crates/openshell-ocsf/src/events/config_state_change.rs new file 
mode 100644 index 00000000..a28b4375 --- /dev/null +++ b/crates/openshell-ocsf/src/events/config_state_change.rs @@ -0,0 +1,102 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF Device Config State Change [5019] event class. + +use serde::{Deserialize, Serialize}; + +use crate::enums::{SecurityLevelId, StateId}; +use crate::events::base_event::BaseEventData; + +/// OCSF Device Config State Change Event [5019]. +/// +/// Policy engine and inference routing configuration changes. +#[derive(Debug, Clone, PartialEq, Eq, Deserialize)] +pub struct DeviceConfigStateChangeEvent { + /// Common base event fields. + #[serde(flatten)] + pub base: BaseEventData, + + #[serde(rename = "state_id", default, skip_serializing_if = "Option::is_none")] + pub state: Option<StateId>, + + /// Custom state label (used when `state_id` maps to a non-standard label). + #[serde(rename = "state", default, skip_serializing_if = "Option::is_none")] + pub state_custom_label: Option<String>, + + #[serde( + rename = "security_level_id", + default, + skip_serializing_if = "Option::is_none" + )] + pub security_level: Option<SecurityLevelId>, + + #[serde( + rename = "prev_security_level_id", + default, + skip_serializing_if = "Option::is_none" + )] + pub prev_security_level: Option<SecurityLevelId>, +} + +impl Serialize for DeviceConfigStateChangeEvent { + fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> { + use crate::events::serde_helpers::{insert_enum_pair, insert_enum_pair_custom}; + + let mut base_val = serde_json::to_value(&self.base).map_err(serde::ser::Error::custom)?; + let obj = base_val + .as_object_mut() + .ok_or_else(|| serde::ser::Error::custom("expected object"))?; + + insert_enum_pair_custom!(obj, "state", self.state, self.state_custom_label); + insert_enum_pair!(obj, "security_level", self.security_level); + insert_enum_pair!(obj, "prev_security_level", self.prev_security_level); + + base_val.serialize(serializer) + } +} + 
+#[cfg(test)] +mod tests { + use super::*; + use crate::enums::{SecurityLevelId, SeverityId, StateId}; + use crate::objects::{Metadata, Product}; + + #[test] + fn test_config_state_change_serialization() { + let mut base = BaseEventData::new( + 5019, + "Device Config State Change", + 5, + "Discovery", + 1, + "Log", + SeverityId::Informational, + Metadata { + version: "1.7.0".to_string(), + product: Product::openshell_sandbox("0.1.0"), + profiles: vec!["security_control".to_string()], + uid: Some("sandbox-abc123".to_string()), + log_source: None, + }, + ); + base.set_message("Policy reloaded successfully"); + base.add_unmapped("policy_version", serde_json::json!("v3")); + base.add_unmapped("policy_hash", serde_json::json!("sha256:abc123def456")); + + let event = DeviceConfigStateChangeEvent { + base, + state: Some(StateId::Enabled), + state_custom_label: None, + security_level: Some(SecurityLevelId::Secure), + prev_security_level: Some(SecurityLevelId::Unknown), + }; + + let json = serde_json::to_value(&event).unwrap(); + assert_eq!(json["class_uid"], 5019); + assert_eq!(json["state_id"], 2); + assert_eq!(json["state"], "Enabled"); + assert_eq!(json["security_level"], "Secure"); + assert_eq!(json["unmapped"]["policy_version"], "v3"); + } +} diff --git a/crates/openshell-ocsf/src/events/detection_finding.rs b/crates/openshell-ocsf/src/events/detection_finding.rs new file mode 100644 index 00000000..35ef222c --- /dev/null +++ b/crates/openshell-ocsf/src/events/detection_finding.rs @@ -0,0 +1,140 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF Detection Finding [2004] event class. + +use serde::{Deserialize, Serialize}; + +use crate::enums::{ActionId, ConfidenceId, DispositionId, RiskLevelId}; +use crate::events::base_event::BaseEventData; +use crate::objects::{Attack, Evidence, FindingInfo, Remediation}; + +/// OCSF Detection Finding Event [2004]. 
+/// +/// Security-relevant findings from policy enforcement. +#[derive(Debug, Clone, PartialEq, Eq, Deserialize)] +pub struct DetectionFindingEvent { + /// Common base event fields. + #[serde(flatten)] + pub base: BaseEventData, + + /// Finding details (required). + pub finding_info: FindingInfo, + + /// Evidence artifacts. + #[serde(skip_serializing_if = "Option::is_none")] + pub evidences: Option<Vec<Evidence>>, + + /// MITRE ATT&CK mappings. + #[serde(skip_serializing_if = "Option::is_none")] + pub attacks: Option<Vec<Attack>>, + + /// Remediation guidance. + #[serde(skip_serializing_if = "Option::is_none")] + pub remediation: Option<Remediation>, + + /// Whether this finding is an alert. + #[serde(skip_serializing_if = "Option::is_none")] + pub is_alert: Option<bool>, + + #[serde( + rename = "confidence_id", + default, + skip_serializing_if = "Option::is_none" + )] + pub confidence: Option<ConfidenceId>, + + #[serde( + rename = "risk_level_id", + default, + skip_serializing_if = "Option::is_none" + )] + pub risk_level: Option<RiskLevelId>, + + #[serde(rename = "action_id", default, skip_serializing_if = "Option::is_none")] + pub action: Option<ActionId>, + + #[serde( + rename = "disposition_id", + default, + skip_serializing_if = "Option::is_none" + )] + pub disposition: Option<DispositionId>, +} + +impl Serialize for DetectionFindingEvent { + fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> { + use crate::events::serde_helpers::{insert_enum_pair, insert_optional, insert_required}; + + let mut base_val = serde_json::to_value(&self.base).map_err(serde::ser::Error::custom)?; + let obj = base_val + .as_object_mut() + .ok_or_else(|| serde::ser::Error::custom("expected object"))?; + + insert_required!(obj, "finding_info", self.finding_info); + insert_optional!(obj, "evidences", self.evidences); + insert_optional!(obj, "attacks", self.attacks); + insert_optional!(obj, "remediation", self.remediation); + insert_optional!(obj, "is_alert", self.is_alert); + insert_enum_pair!(obj, "confidence", self.confidence); + insert_enum_pair!(obj, "risk_level", self.risk_level);
+ insert_enum_pair!(obj, "action", self.action); + insert_enum_pair!(obj, "disposition", self.disposition); + + base_val.serialize(serializer) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::enums::{ActionId, ConfidenceId, DispositionId, RiskLevelId, SeverityId}; + use crate::objects::{Metadata, Product}; + + #[test] + fn test_detection_finding_serialization() { + let event = DetectionFindingEvent { + base: BaseEventData::new( + 2004, + "Detection Finding", + 2, + "Findings", + 1, + "Create", + SeverityId::High, + Metadata { + version: "1.7.0".to_string(), + product: Product::openshell_sandbox("0.1.0"), + profiles: vec!["security_control".to_string()], + uid: Some("sandbox-abc123".to_string()), + log_source: None, + }, + ), + finding_info: FindingInfo::new("nssh1-replay-abc", "NSSH1 Nonce Replay Attack") + .with_desc("A nonce was replayed."), + evidences: Some(vec![Evidence::from_pairs(&[ + ("nonce", "0xdeadbeef"), + ("peer_ip", "10.42.0.1"), + ])]), + attacks: Some(vec![Attack::mitre( + "T1550", + "Use Alternate Authentication Material", + "TA0008", + "Lateral Movement", + )]), + remediation: None, + is_alert: Some(true), + confidence: Some(ConfidenceId::High), + risk_level: Some(RiskLevelId::High), + action: Some(ActionId::Denied), + disposition: Some(DispositionId::Blocked), + }; + + let json = serde_json::to_value(&event).unwrap(); + assert_eq!(json["class_uid"], 2004); + assert_eq!(json["finding_info"]["title"], "NSSH1 Nonce Replay Attack"); + assert_eq!(json["is_alert"], true); + assert_eq!(json["confidence"], "High"); + assert_eq!(json["attacks"][0]["technique"]["uid"], "T1550"); + } +} diff --git a/crates/openshell-ocsf/src/events/http_activity.rs b/crates/openshell-ocsf/src/events/http_activity.rs new file mode 100644 index 00000000..fe3b0357 --- /dev/null +++ b/crates/openshell-ocsf/src/events/http_activity.rs @@ -0,0 +1,145 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
+// SPDX-License-Identifier: Apache-2.0
+
+//! OCSF HTTP Activity [4002] event class.
+
+use serde::{Deserialize, Serialize};
+
+use crate::enums::{ActionId, DispositionId};
+use crate::events::base_event::BaseEventData;
+use crate::objects::{Actor, Endpoint, FirewallRule, HttpRequest, HttpResponse};
+
+/// OCSF HTTP Activity Event [4002].
+///
+/// HTTP-level events through the forward proxy and L7 relay.
+#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
+pub struct HttpActivityEvent {
+    /// Common base event fields.
+    #[serde(flatten)]
+    pub base: BaseEventData,
+
+    /// HTTP request details.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub http_request: Option<HttpRequest>,
+
+    /// HTTP response details.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub http_response: Option<HttpResponse>,
+
+    /// Source endpoint.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub src_endpoint: Option<Endpoint>,
+
+    /// Destination endpoint.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub dst_endpoint: Option<Endpoint>,
+
+    /// Proxy endpoint.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub proxy_endpoint: Option<Endpoint>,
+
+    /// Actor (process that made the request).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub actor: Option<Actor>,
+
+    /// Firewall / policy rule.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub firewall_rule: Option<FirewallRule>,
+
+    /// Action taken (typed enum serialized as `action_id` + `action` label).
+    #[serde(rename = "action_id", default, skip_serializing_if = "Option::is_none")]
+    pub action: Option<ActionId>,
+
+    /// Disposition (typed enum serialized as `disposition_id` + `disposition` label).
+    #[serde(
+        rename = "disposition_id",
+        default,
+        skip_serializing_if = "Option::is_none"
+    )]
+    pub disposition: Option<DispositionId>,
+
+    /// Observation point ID (v1.6.0+).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub observation_point_id: Option<u32>,
+
+    /// Whether src/dst assignment is known (v1.6.0+).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub is_src_dst_assignment_known: Option<bool>,
+}
+
+impl Serialize for HttpActivityEvent {
+    fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
+        use crate::events::serde_helpers::{insert_enum_pair, insert_optional};
+
+        let mut base_val = serde_json::to_value(&self.base).map_err(serde::ser::Error::custom)?;
+        let obj = base_val
+            .as_object_mut()
+            .ok_or_else(|| serde::ser::Error::custom("expected object"))?;
+
+        insert_optional!(obj, "http_request", self.http_request);
+        insert_optional!(obj, "http_response", self.http_response);
+        insert_optional!(obj, "src_endpoint", self.src_endpoint);
+        insert_optional!(obj, "dst_endpoint", self.dst_endpoint);
+        insert_optional!(obj, "proxy_endpoint", self.proxy_endpoint);
+        insert_optional!(obj, "actor", self.actor);
+        insert_optional!(obj, "firewall_rule", self.firewall_rule);
+        insert_enum_pair!(obj, "action", self.action);
+        insert_enum_pair!(obj, "disposition", self.disposition);
+        insert_optional!(obj, "observation_point_id", self.observation_point_id);
+        insert_optional!(
+            obj,
+            "is_src_dst_assignment_known",
+            self.is_src_dst_assignment_known
+        );
+
+        base_val.serialize(serializer)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::enums::{ActionId, DispositionId, SeverityId};
+    use crate::objects::{Metadata, Product, Url};
+
+    #[test]
+    fn test_http_activity_serialization() {
+        let event = HttpActivityEvent {
+            base: BaseEventData::new(
+                4002,
+                "HTTP Activity",
+                4,
+                "Network Activity",
+                3,
+                "Get",
+                SeverityId::Informational,
+                Metadata {
+                    version: "1.7.0".to_string(),
+                    product: Product::openshell_sandbox("0.1.0"),
+                    profiles: vec!["security_control".to_string()],
+                    uid: Some("sandbox-abc123".to_string()),
+                    log_source: None,
+                },
+            ),
+            http_request: Some(HttpRequest::new(
+                "GET",
+                Url::new("https", "api.example.com", "/v1/data", 443),
+            )),
+            http_response: None,
+            src_endpoint: None,
+            dst_endpoint: 
Some(Endpoint::from_domain("api.example.com", 443)), + proxy_endpoint: None, + actor: None, + firewall_rule: None, + action: Some(ActionId::Allowed), + disposition: Some(DispositionId::Allowed), + observation_point_id: None, + is_src_dst_assignment_known: None, + }; + + let json = serde_json::to_value(&event).unwrap(); + assert_eq!(json["class_uid"], 4002); + assert_eq!(json["type_uid"], 400_203); + assert_eq!(json["http_request"]["http_method"], "GET"); + } +} diff --git a/crates/openshell-ocsf/src/events/mod.rs b/crates/openshell-ocsf/src/events/mod.rs new file mode 100644 index 00000000..23519e56 --- /dev/null +++ b/crates/openshell-ocsf/src/events/mod.rs @@ -0,0 +1,293 @@ +// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +// SPDX-License-Identifier: Apache-2.0 + +//! OCSF v1.7.0 event class definitions. + +mod app_lifecycle; +pub(crate) mod base_event; +mod config_state_change; +mod detection_finding; +mod http_activity; +mod network_activity; +mod process_activity; +pub(crate) mod serde_helpers; +mod ssh_activity; + +pub use app_lifecycle::ApplicationLifecycleEvent; +pub use base_event::{BaseEvent, BaseEventData}; +pub use config_state_change::DeviceConfigStateChangeEvent; +pub use detection_finding::DetectionFindingEvent; +pub use http_activity::HttpActivityEvent; +pub use network_activity::NetworkActivityEvent; +pub use process_activity::ProcessActivityEvent; +pub use ssh_activity::SshActivityEvent; + +use serde::{Deserialize, Serialize}; + +/// Top-level OCSF event enum encompassing all supported event classes. +/// +/// Serialization delegates directly to the inner event struct (untagged). +/// Deserialization dispatches on the `class_uid` field to select the +/// correct variant, avoiding the ambiguity of `#[serde(untagged)]`. 
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum OcsfEvent {
+    /// Network Activity [4001]
+    NetworkActivity(NetworkActivityEvent),
+    /// HTTP Activity [4002]
+    HttpActivity(HttpActivityEvent),
+    /// SSH Activity [4007]
+    SshActivity(SshActivityEvent),
+    /// Process Activity [1007]
+    ProcessActivity(ProcessActivityEvent),
+    /// Detection Finding [2004]
+    DetectionFinding(DetectionFindingEvent),
+    /// Application Lifecycle [6002]
+    ApplicationLifecycle(ApplicationLifecycleEvent),
+    /// Device Config State Change [5019]
+    DeviceConfigStateChange(DeviceConfigStateChangeEvent),
+    /// Base Event [0]
+    Base(BaseEvent),
+}
+
+impl Serialize for OcsfEvent {
+    fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
+        match self {
+            Self::NetworkActivity(e) => e.serialize(serializer),
+            Self::HttpActivity(e) => e.serialize(serializer),
+            Self::SshActivity(e) => e.serialize(serializer),
+            Self::ProcessActivity(e) => e.serialize(serializer),
+            Self::DetectionFinding(e) => e.serialize(serializer),
+            Self::ApplicationLifecycle(e) => e.serialize(serializer),
+            Self::DeviceConfigStateChange(e) => e.serialize(serializer),
+            Self::Base(e) => e.serialize(serializer),
+        }
+    }
+}
+
+impl<'de> Deserialize<'de> for OcsfEvent {
+    fn deserialize<D: serde::Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
+        // Deserialize into a raw JSON value first, then dispatch on class_uid.
+        let value = serde_json::Value::deserialize(deserializer)?;
+
+        let class_uid = value
+            .get("class_uid")
+            .and_then(serde_json::Value::as_u64)
+            .ok_or_else(|| serde::de::Error::missing_field("class_uid"))?;
+
+        match class_uid {
+            4001 => serde_json::from_value::<NetworkActivityEvent>(value)
+                .map(Self::NetworkActivity)
+                .map_err(serde::de::Error::custom),
+            4002 => serde_json::from_value::<HttpActivityEvent>(value)
+                .map(Self::HttpActivity)
+                .map_err(serde::de::Error::custom),
+            4007 => serde_json::from_value::<SshActivityEvent>(value)
+                .map(Self::SshActivity)
+                .map_err(serde::de::Error::custom),
+            1007 => serde_json::from_value::<ProcessActivityEvent>(value)
+                .map(Self::ProcessActivity)
+                .map_err(serde::de::Error::custom),
+            2004 => serde_json::from_value::<DetectionFindingEvent>(value)
+                .map(Self::DetectionFinding)
+                .map_err(serde::de::Error::custom),
+            6002 => serde_json::from_value::<ApplicationLifecycleEvent>(value)
+                .map(Self::ApplicationLifecycle)
+                .map_err(serde::de::Error::custom),
+            5019 => serde_json::from_value::<DeviceConfigStateChangeEvent>(value)
+                .map(Self::DeviceConfigStateChange)
+                .map_err(serde::de::Error::custom),
+            0 => serde_json::from_value::<BaseEvent>(value)
+                .map(Self::Base)
+                .map_err(serde::de::Error::custom),
+            other => Err(serde::de::Error::custom(format!(
+                "unknown OCSF class_uid: {other}"
+            ))),
+        }
+    }
+}
+
+impl OcsfEvent {
+    /// Returns the OCSF `class_uid` for this event.
+    #[must_use]
+    pub fn class_uid(&self) -> u32 {
+        match self {
+            Self::NetworkActivity(_) => 4001,
+            Self::HttpActivity(_) => 4002,
+            Self::SshActivity(_) => 4007,
+            Self::ProcessActivity(_) => 1007,
+            Self::DetectionFinding(_) => 2004,
+            Self::ApplicationLifecycle(_) => 6002,
+            Self::DeviceConfigStateChange(_) => 5019,
+            Self::Base(_) => 0,
+        }
+    }
+
+    /// Returns the base event data common to all event classes.
+ #[must_use] + pub fn base(&self) -> &BaseEventData { + match self { + Self::NetworkActivity(e) => &e.base, + Self::HttpActivity(e) => &e.base, + Self::SshActivity(e) => &e.base, + Self::ProcessActivity(e) => &e.base, + Self::DetectionFinding(e) => &e.base, + Self::ApplicationLifecycle(e) => &e.base, + Self::DeviceConfigStateChange(e) => &e.base, + Self::Base(e) => &e.base, + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::builders::{ + AppLifecycleBuilder, BaseEventBuilder, ConfigStateChangeBuilder, DetectionFindingBuilder, + HttpActivityBuilder, NetworkActivityBuilder, ProcessActivityBuilder, SshActivityBuilder, + test_sandbox_context, + }; + use crate::enums::*; + use crate::objects::*; + + /// Verify that every event class round-trips through JSON and deserializes + /// to the correct `OcsfEvent` variant (not silently matching the wrong one). + #[test] + fn test_roundtrip_network_activity() { + let ctx = test_sandbox_context(); + let event = NetworkActivityBuilder::new(&ctx) + .activity(ActivityId::Open) + .action(ActionId::Allowed) + .disposition(DispositionId::Allowed) + .severity(SeverityId::Informational) + .dst_endpoint(Endpoint::from_domain("example.com", 443)) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::NetworkActivity(_))); + assert_eq!(deserialized.class_uid(), 4001); + } + + #[test] + fn test_roundtrip_http_activity() { + let ctx = test_sandbox_context(); + let event = HttpActivityBuilder::new(&ctx) + .activity(ActivityId::Reset) + .action(ActionId::Allowed) + .severity(SeverityId::Informational) + .http_request(HttpRequest::new( + "GET", + Url::new("https", "example.com", "/", 443), + )) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::HttpActivity(_))); + 
assert_eq!(deserialized.class_uid(), 4002); + } + + #[test] + fn test_roundtrip_ssh_activity() { + let ctx = test_sandbox_context(); + let event = SshActivityBuilder::new(&ctx) + .activity(ActivityId::Open) + .action(ActionId::Allowed) + .severity(SeverityId::Informational) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::SshActivity(_))); + assert_eq!(deserialized.class_uid(), 4007); + } + + #[test] + fn test_roundtrip_process_activity() { + let ctx = test_sandbox_context(); + let event = ProcessActivityBuilder::new(&ctx) + .activity(ActivityId::Open) + .severity(SeverityId::Informational) + .process(Process::new("test", 1)) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::ProcessActivity(_))); + assert_eq!(deserialized.class_uid(), 1007); + } + + #[test] + fn test_roundtrip_detection_finding() { + let ctx = test_sandbox_context(); + let event = DetectionFindingBuilder::new(&ctx) + .severity(SeverityId::High) + .finding_info(FindingInfo::new("test-uid", "Test Finding")) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::DetectionFinding(_))); + assert_eq!(deserialized.class_uid(), 2004); + } + + #[test] + fn test_roundtrip_application_lifecycle() { + let ctx = test_sandbox_context(); + let event = AppLifecycleBuilder::new(&ctx) + .activity(ActivityId::Reset) + .severity(SeverityId::Informational) + .status(StatusId::Success) + .build(); + + let json = serde_json::to_value(&event).unwrap(); + let deserialized: OcsfEvent = serde_json::from_value(json).unwrap(); + assert!(matches!(deserialized, OcsfEvent::ApplicationLifecycle(_))); + assert_eq!(deserialized.class_uid(), 6002); + } 
+
+    #[test]
+    fn test_roundtrip_config_state_change() {
+        let ctx = test_sandbox_context();
+        let event = ConfigStateChangeBuilder::new(&ctx)
+            .state(StateId::Enabled, "loaded")
+            .severity(SeverityId::Informational)
+            .build();
+
+        let json = serde_json::to_value(&event).unwrap();
+        let deserialized: OcsfEvent = serde_json::from_value(json).unwrap();
+        assert!(matches!(
+            deserialized,
+            OcsfEvent::DeviceConfigStateChange(_)
+        ));
+        assert_eq!(deserialized.class_uid(), 5019);
+    }
+
+    #[test]
+    fn test_roundtrip_base_event() {
+        let ctx = test_sandbox_context();
+        let event = BaseEventBuilder::new(&ctx)
+            .severity(SeverityId::Informational)
+            .message("test")
+            .build();
+
+        let json = serde_json::to_value(&event).unwrap();
+        let deserialized: OcsfEvent = serde_json::from_value(json).unwrap();
+        assert!(matches!(deserialized, OcsfEvent::Base(_)));
+        assert_eq!(deserialized.class_uid(), 0);
+    }
+
+    #[test]
+    fn test_deserialize_unknown_class_uid_errors() {
+        let json = serde_json::json!({"class_uid": 9999});
+        let result = serde_json::from_value::<OcsfEvent>(json);
+        assert!(result.is_err());
+    }
+
+    #[test]
+    fn test_deserialize_missing_class_uid_errors() {
+        let json = serde_json::json!({"severity_id": 1});
+        let result = serde_json::from_value::<OcsfEvent>(json);
+        assert!(result.is_err());
+    }
+}
diff --git a/crates/openshell-ocsf/src/events/network_activity.rs b/crates/openshell-ocsf/src/events/network_activity.rs
new file mode 100644
index 00000000..6cd125fd
--- /dev/null
+++ b/crates/openshell-ocsf/src/events/network_activity.rs
@@ -0,0 +1,142 @@
+// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+//! OCSF Network Activity [4001] event class.
+
+use serde::{Deserialize, Serialize};
+
+use crate::enums::{ActionId, DispositionId};
+use crate::events::base_event::BaseEventData;
+use crate::objects::{Actor, ConnectionInfo, Endpoint, FirewallRule};
+
+/// OCSF Network Activity Event [4001].
+///
+/// Proxy CONNECT tunnel events and iptables-level bypass detection.
+#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
+pub struct NetworkActivityEvent {
+    /// Common base event fields.
+    #[serde(flatten)]
+    pub base: BaseEventData,
+
+    /// Source endpoint.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub src_endpoint: Option<Endpoint>,
+
+    /// Destination endpoint.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub dst_endpoint: Option<Endpoint>,
+
+    /// Proxy endpoint (Network Proxy profile).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub proxy_endpoint: Option<Endpoint>,
+
+    /// Actor (process that initiated the connection).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub actor: Option<Actor>,
+
+    /// Firewall / policy rule that applied.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub firewall_rule: Option<FirewallRule>,
+
+    /// Connection info (protocol name).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub connection_info: Option<ConnectionInfo>,
+
+    /// Action (Security Control profile).
+    #[serde(rename = "action_id", default, skip_serializing_if = "Option::is_none")]
+    pub action: Option<ActionId>,
+
+    /// Disposition.
+    #[serde(
+        rename = "disposition_id",
+        default,
+        skip_serializing_if = "Option::is_none"
+    )]
+    pub disposition: Option<DispositionId>,
+
+    /// Observation point ID (v1.6.0+).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub observation_point_id: Option<u32>,
+
+    /// Whether src/dst assignment is known (v1.6.0+).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub is_src_dst_assignment_known: Option<bool>,
+}
+
+impl Serialize for NetworkActivityEvent {
+    fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
+        use crate::events::serde_helpers::{insert_enum_pair, insert_optional};
+
+        let mut base_val = serde_json::to_value(&self.base).map_err(serde::ser::Error::custom)?;
+        let obj = base_val
+            .as_object_mut()
+            .ok_or_else(|| serde::ser::Error::custom("expected object"))?;
+
+        insert_optional!(obj, "src_endpoint", self.src_endpoint);
+        insert_optional!(obj, "dst_endpoint", self.dst_endpoint);
+        insert_optional!(obj, "proxy_endpoint", self.proxy_endpoint);
+        insert_optional!(obj, "actor", self.actor);
+        insert_optional!(obj, "firewall_rule", self.firewall_rule);
+        insert_optional!(obj, "connection_info", self.connection_info);
+        insert_enum_pair!(obj, "action", self.action);
+        insert_enum_pair!(obj, "disposition", self.disposition);
+        insert_optional!(obj, "observation_point_id", self.observation_point_id);
+        insert_optional!(
+            obj,
+            "is_src_dst_assignment_known",
+            self.is_src_dst_assignment_known
+        );
+
+        base_val.serialize(serializer)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::enums::{ActionId, DispositionId, SeverityId};
+    use crate::objects::{Metadata, Product};
+
+    #[test]
+    fn test_network_activity_serialization() {
+        let event = NetworkActivityEvent {
+            base: BaseEventData::new(
+                4001,
+                "Network Activity",
+                4,
+                "Network Activity",
+                1,
+                "Open",
+                SeverityId::Informational,
+                Metadata {
+                    version: "1.7.0".to_string(),
+                    product: Product::openshell_sandbox("0.1.0"),
+                    profiles: vec!["security_control".to_string(), "network_proxy".to_string()],
+                    uid: Some("sandbox-abc123".to_string()),
+                    log_source: None,
+                },
+            ),
+            src_endpoint: Some(Endpoint::from_ip_str("10.42.0.2", 54321)),
+            dst_endpoint: Some(Endpoint::from_domain("api.example.com", 443)),
+            proxy_endpoint: Some(Endpoint::from_ip_str("10.42.0.1", 3128)),
+            actor: None,
+            firewall_rule: 
Some(FirewallRule::new("default-egress", "mechanistic")),
+            connection_info: None,
+            action: Some(ActionId::Allowed),
+            disposition: Some(DispositionId::Allowed),
+            observation_point_id: Some(2),
+            is_src_dst_assignment_known: Some(true),
+        };
+
+        let json = serde_json::to_value(&event).unwrap();
+        assert_eq!(json["class_uid"], 4001);
+        assert_eq!(json["class_name"], "Network Activity");
+        assert_eq!(json["type_uid"], 400_101);
+        assert_eq!(json["action"], "Allowed");
+        assert_eq!(json["disposition"], "Allowed");
+        assert_eq!(json["dst_endpoint"]["domain"], "api.example.com");
+        assert_eq!(json["firewall_rule"]["type"], "mechanistic");
+        assert_eq!(json["observation_point_id"], 2);
+        assert_eq!(json["is_src_dst_assignment_known"], true);
+    }
+}
diff --git a/crates/openshell-ocsf/src/events/process_activity.rs b/crates/openshell-ocsf/src/events/process_activity.rs
new file mode 100644
index 00000000..0c5829f9
--- /dev/null
+++ b/crates/openshell-ocsf/src/events/process_activity.rs
@@ -0,0 +1,112 @@
+// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+//! OCSF Process Activity [1007] event class.
+
+use serde::{Deserialize, Serialize};
+
+use crate::enums::{ActionId, DispositionId, LaunchTypeId};
+use crate::events::base_event::BaseEventData;
+use crate::objects::{Actor, Process};
+
+/// OCSF Process Activity Event [1007].
+#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
+pub struct ProcessActivityEvent {
+    /// Common base event fields.
+    #[serde(flatten)]
+    pub base: BaseEventData,
+
+    /// The process being acted upon (required in v1.7.0).
+    pub process: Process,
+
+    /// Actor (parent/supervisor process, required in v1.7.0).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub actor: Option<Actor>,
+
+    /// Launch type.
+    #[serde(
+        rename = "launch_type_id",
+        default,
+        skip_serializing_if = "Option::is_none"
+    )]
+    pub launch_type: Option<LaunchTypeId>,
+
+    /// Process exit code (for Terminate activity).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub exit_code: Option<i32>,
+
+    /// Action (Security Control profile).
+    #[serde(rename = "action_id", default, skip_serializing_if = "Option::is_none")]
+    pub action: Option<ActionId>,
+
+    /// Disposition.
+    #[serde(
+        rename = "disposition_id",
+        default,
+        skip_serializing_if = "Option::is_none"
+    )]
+    pub disposition: Option<DispositionId>,
+}
+
+impl Serialize for ProcessActivityEvent {
+    fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
+        use crate::events::serde_helpers::{insert_enum_pair, insert_optional, insert_required};
+
+        let mut base_val = serde_json::to_value(&self.base).map_err(serde::ser::Error::custom)?;
+        let obj = base_val
+            .as_object_mut()
+            .ok_or_else(|| serde::ser::Error::custom("expected object"))?;
+
+        insert_required!(obj, "process", self.process);
+        insert_optional!(obj, "actor", self.actor);
+        insert_enum_pair!(obj, "launch_type", self.launch_type);
+        insert_optional!(obj, "exit_code", self.exit_code);
+        insert_enum_pair!(obj, "action", self.action);
+        insert_enum_pair!(obj, "disposition", self.disposition);
+
+        base_val.serialize(serializer)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::enums::{ActionId, DispositionId, LaunchTypeId, SeverityId};
+    use crate::objects::{Metadata, Product};
+
+    #[test]
+    fn test_process_activity_serialization() {
+        let event = ProcessActivityEvent {
+            base: BaseEventData::new(
+                1007,
+                "Process Activity",
+                1,
+                "System Activity",
+                1,
+                "Launch",
+                SeverityId::Informational,
+                Metadata {
+                    version: "1.7.0".to_string(),
+                    product: Product::openshell_sandbox("0.1.0"),
+                    profiles: vec!["container".to_string()],
+                    uid: Some("sandbox-abc123".to_string()),
+                    log_source: None,
+                },
+            ),
+            process: Process::new("python3", 42).with_cmd_line("python3 /app/main.py"),
+            actor: Some(Actor {
+                process: 
Process::new("openshell-sandbox", 1),
+            }),
+            launch_type: Some(LaunchTypeId::Spawn),
+            exit_code: None,
+            action: Some(ActionId::Allowed),
+            disposition: Some(DispositionId::Allowed),
+        };
+
+        let json = serde_json::to_value(&event).unwrap();
+        assert_eq!(json["class_uid"], 1007);
+        assert_eq!(json["process"]["name"], "python3");
+        assert_eq!(json["actor"]["process"]["name"], "openshell-sandbox");
+        assert_eq!(json["launch_type"], "Spawn");
+    }
+}
diff --git a/crates/openshell-ocsf/src/events/serde_helpers.rs b/crates/openshell-ocsf/src/events/serde_helpers.rs
new file mode 100644
index 00000000..d7881ada
--- /dev/null
+++ b/crates/openshell-ocsf/src/events/serde_helpers.rs
@@ -0,0 +1,71 @@
+// SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+//! Serialization helpers for OCSF event structs.
+//!
+//! These macros reduce boilerplate in custom `Serialize` impls that expand
+//! typed enum fields into OCSF's `*_id` + label pair format.
+
+/// Insert an OCSF enum pair (`*_id` integer + label string) into a JSON map.
+///
+/// If the value is `Some`, inserts both `"<name>_id": <id>` and `"<name>": "<label>"