fix: Update image template for using global.imageRegistry value and change default repository to ghcr.io/svtechnmaa #119
duchieu2k wants to merge 16 commits into
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Walkthrough
Updated many Helm charts: added explicit image registries (mostly ghcr.io/svtechnmaa/*), standardized image values, and passed global values through to the image templates.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Review Summary by Qodo
Migrate container images to ghcr.io/svtechnmaa and update image template
Walkthroughs
Description
• Migrate all container images to ghcr.io/svtechnmaa registry
• Update common image template to support global imageRegistry parameter
• Add registry field to all image configurations in values files
• Fix template rendering for extraInitContainers using tpl function
Diagram
```mermaid
flowchart LR
  A["Image Registries"] -->|"Change from docker.io<br/>to ghcr.io"| B["Values Files"]
  B -->|"Add registry field"| C["Image Configurations"]
  D["Common Image Template"] -->|"Add global parameter"| E["Template Rendering"]
  E -->|"Support imageRegistry"| F["Container Deployments"]
  G["extraInitContainers"] -->|"Apply tpl function"| H["Template Processing"]
```
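The precedence the walkthrough describes (a global imageRegistry overriding each chart's per-image registry) follows the Bitnami-style helper pattern. A minimal sketch, assuming the chart's common.images.image helper is defined along these lines (the actual definition in the repository may differ):

```yaml
{{/*
Hypothetical sketch of a common.images.image-style helper.
global.imageRegistry, when set, takes precedence over imageRoot.registry.
*/}}
{{- define "common.images.image" -}}
{{- $registryName := .imageRoot.registry -}}
{{- if .global }}
  {{- if .global.imageRegistry }}
    {{- $registryName = .global.imageRegistry -}}
  {{- end -}}
{{- end -}}
{{- if $registryName -}}
{{- printf "%s/%s:%s" $registryName .imageRoot.repository (.imageRoot.tag | toString) -}}
{{- else -}}
{{- printf "%s:%s" .imageRoot.repository (.imageRoot.tag | toString) -}}
{{- end -}}
{{- end -}}
```

Called as image: {{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}, this renders the per-chart ghcr.io/svtechnmaa image by default while letting a single global.imageRegistry override redirect every image at once.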
File Changes
1. kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml
|
Code Review by Qodo
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
kubernetes/postfix/values.yaml (1)
138-139: ⚠️ Potential issue | 🔴 Critical
Critical: Plaintext credentials and PII in values file.
The mtpUser and mtpPass values contain plaintext SMTP credentials and an email address (PII) that are committed to version control. This creates multiple security and compliance risks:
- Anyone with repository access can view these credentials
- The password is exposed in git history permanently
- The email address ha.do@svtech.com.vn is PII that should not be in default values

These credentials should be:
- Removed from values.yaml immediately
- Stored in Kubernetes Secrets
- Referenced in the chart templates via secret references
- The current password should be rotated since it's now compromised

🔐 Recommended approach
In values.yaml, replace with secret references:

```diff
 mtpConfig:
   mtpRelay: smtp-relay.gmail.com
   mtpPort: 25
-  mtpUser: ha.do@svtech.com.vn
-  mtpPass: Ha.Do@123456
+  # Reference existing secret for credentials
+  existingSecret: "postfix-smtp-credentials"
+  userKey: "smtp-user"
+  passwordKey: "smtp-password"
   mtpMessageSizeLimit: "102400000"
   mtpMailboxSizeLimit: "51200000000"
   mailLogFile: /dev/stdout
```

Users would create the secret separately:
```shell
kubectl create secret generic postfix-smtp-credentials \
  --from-literal=smtp-user='<actual-user>' \
  --from-literal=smtp-password='<actual-password>'
```

Then update templates to reference these secret values.
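For example, the chart templates could then consume the secret via secretKeyRef. A sketch only: the environment variable names and the mtpConfig.existingSecret/userKey/passwordKey paths below follow the suggested values layout above and are not the chart's current keys:

```yaml
env:
  - name: SMTP_USER
    valueFrom:
      secretKeyRef:
        name: {{ .Values.mtpConfig.existingSecret }}
        key: {{ .Values.mtpConfig.userKey }}
  - name: SMTP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.mtpConfig.existingSecret }}
        key: {{ .Values.mtpConfig.passwordKey }}
```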
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/postfix/values.yaml` around lines 138 - 139, Remove the plaintext values mtpUser and mtpPass from values.yaml (and any committed PII email like ha.do@svtech.com.vn), create a Kubernetes Secret to hold smtp-user and smtp-password, update the chart templates that currently read .Values.mtpUser/.Values.mtpPass to instead reference the Secret (e.g., via secretKeyRef in env or volumeMount templates), and note that the exposed password must be rotated immediately; ensure values.yaml only contains a placeholder flag or instructions to point at the secret name (e.g., smtpCredentialsSecret) rather than the actual credentials.
kubernetes/mage-ai/templates/redis-deployment.yaml (1)
28-52: 🧹 Nitpick | 🔵 Trivial
Consider adding a security context to the Redis container.
Static analysis (Trivy KSV-0118) flags that this deployment uses the default security context, which allows root privileges. While this is pre-existing and not introduced by this PR, consider adding explicit security constraints as a follow-up improvement:
🛡️ Example security context addition
```diff
 containers:
   - image: {{ include "common.images.image" ( dict "imageRoot" .Values.redis.image "global" .Values.global) }}
     {{- if .Values.redis.image.pullPolicy }}
     imagePullPolicy: {{ .Values.redis.image.pullPolicy }}
     {{- end}}
     name: redis
+    securityContext:
+      runAsNonRoot: true
+      allowPrivilegeEscalation: false
+      readOnlyRootFilesystem: true
     env:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/mage-ai/templates/redis-deployment.yaml` around lines 28 - 52, Add an explicit securityContext to the Redis container definition (the container named "redis" in the redis deployment template) to avoid running as root; set runAsNonRoot: true and a non-root runAsUser (e.g., 1000), disable privilege escalation (allowPrivilegeEscalation: false), drop all capabilities and set readOnlyRootFilesystem: true (and any other organization-required constraints). Place this under the container spec for the "redis" container so the livenessProbe and env entries remain unchanged.
kubernetes/influxdb/templates/relay/deployment.yaml (1)
53-92: ⚠️ Potential issue | 🟠 Major
Harden the relay container security defaults in this touched block.
This container block still misses key hardening controls (e.g., allowPrivilegeEscalation: false, dropping capabilities, a seccomp profile, non-root/read-only rootfs), and static analysis flags these in the same changed region.
Suggested baseline hardening pattern
```diff
 containers:
   - name: relay
     image: {{ include "common.images.image" ( dict "imageRoot" .Values.relay.image "global" .Values.global) }}
     imagePullPolicy: {{ .Values.relay.image.pullPolicy | quote }}
-    {{- if .Values.relay.securityContext.enabled }}
     securityContext:
+      allowPrivilegeEscalation: false
+      runAsNonRoot: true
+      readOnlyRootFilesystem: true
+      capabilities:
+        drop:
+          - ALL
+      seccompProfile:
+        type: RuntimeDefault
+    {{- if .Values.relay.securityContext.enabled }}
+    securityContext:
       runAsUser: {{ .Values.relay.securityContext.runAsUser }}
     {{- end }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/influxdb/templates/relay/deployment.yaml` around lines 53 - 92, The relay container's securityContext lacks hardening defaults; update the container spec (the block that currently renders securityContext / runAsUser for relay) to enforce container-level controls: set allowPrivilegeEscalation: false, runAsNonRoot: true (in addition to runAsUser), readOnlyRootFilesystem: true, add a capabilities section that drops all (drop: ["ALL"]), and set securityContext.seccompProfile.type to "RuntimeDefault"; ensure these keys are emitted by the template that renders .Values.relay.securityContext (or as sane defaults when that value is enabled) so the relay container (image/command/args block) has the tightened settings.
kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml (1)
23-60: ⚠️ Potential issue | 🟡 Minor
Add missing imagePullSecrets for the ghcr.io registry.
The akvorado-inlet-deployment.yaml lacks imagePullSecrets configuration while other templates in this PR (redis, rundeck, docxtemplate) correctly define it. Since all akvorado images use the ghcr.io/svtechnmaa/ registry, pods may fail to pull private images without proper authentication.
🔧 Suggested fix
```diff
 spec:
+  imagePullSecrets:
+    - name: ghcr-pull-secret
   initContainers:
     - name: wait-for-orchestrator
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml` around lines 23 - 60, The pod spec for the Akvorado inlet deployment is missing imagePullSecrets, so the initContainer wait-for-orchestrator and the inlet container may fail to pull images from ghcr.io; add an imagePullSecrets entry under the top-level spec (the same place as initContainers/containers/restartPolicy/volumes) that references the chart/global image pull secret value used elsewhere (use the same value/key as other templates in this PR), ensuring both the initContainer (name: wait-for-orchestrator) and the main container (name: inlet) can authenticate to ghcr.io.
kubernetes/grafana/values.yaml (1)
1-11: 🧹 Nitpick | 🔵 Trivial
Consider adding global.imageRegistry for consistency.
Similar to kubernetes/traefik/values.yaml, this file is missing global.imageRegistry while report-ui/values.yaml defines it. For consistency and to support centralized registry override, consider adding it.
🔧 Proposed fix
```diff
 global:
+  imageRegistry: ghcr.io
   adminUser:
   adminPassword:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/grafana/values.yaml` around lines 1 - 11, Add a global.imageRegistry entry to the global block so the chart supports a centralized registry override; specifically, update the global map (where keys like adminUser, adminPassword, sharedVolume, externalIP, frontendVip, mariadb-galera appear) to include imageRegistry: "" (or a sensible default) and ensure any image references in the chart respect global.imageRegistry so behavior matches other charts like report-ui and traefik that support a centralized registry.
kubernetes/traefik/values.yaml (1)
3-6: 🧹 Nitpick | 🔵 Trivial
Add global.imageRegistry for consistency with other charts and to enable global registry overrides.
The deployment template uses the common.images.image helper, which prioritizes global.imageRegistry over the chart's local image.registry when the global value is defined. This file is missing that configuration while other charts in the PR (e.g., report-ui/values.yaml) define global.imageRegistry: ghcr.io. Without this entry, global registry overrides will not apply to the traefik deployment.
🔧 Proposed fix
```diff
 global:
+  imageRegistry: ghcr.io
   sharedVolume:
     enabled: true
   sharedPersistenceVolume: []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/traefik/values.yaml` around lines 3 - 6, Add a global.imageRegistry key under the existing global block so Helm's common.images.image helper can pick up global registry overrides; update the global mapping (where global.sharedVolume and global.sharedPersistenceVolume are defined) to include global.imageRegistry (e.g., set to ghcr.io or an empty string to allow global override) so the traefik deployment respects global image registry settings referenced by common.images.image.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@kubernetes/airflow/charts/postgresql/values.yaml`:
- Around line 60-62: The specified image settings (registry: ghcr.io,
repository: svtechnmaa/postgresql, tag: 11.12.0 in values.yaml) point to a
non-existent or inaccessible image; either publish the image to ghcr.io and make
it public (ensure CI workflow builds and pushes svtechnmaa/postgresql:11.12.0
and that credentials/visibility are correct) or revert these fields back to the
official Bitnami/other known-working image reference (restore
registry/repository/tag to the original values) so Helm will pull a valid
PostgreSQL image.
In `@kubernetes/airflow/values.yaml`:
- Line 555: The BusyBox image YAML entries have an extra space after the colon
(e.g., the line with the key image and the template "{{
.Values.global.imageRegistry | default \"ghcr.io\" }}/svtechnmaa/busybox:1.33");
remove the extra space so there is exactly one space after the colon (change
occurrences of image: "{{ ... }}" to image: "{{ ... }}") for the image keys
referenced (same pattern appears for the other BusyBox image entries).
- Around line 553-556: The scheduler and triggerer deployments render
extraInitContainers with toYaml, which leaves Helm template expressions
unevaluated; update the render calls in scheduler-deployment.yaml and
triggerer-deployment.yaml to wrap the existing toYaml calls with tpl, i.e.
replace the occurrences that reference .Values.scheduler.extraInitContainers and
.Values.triggerer.extraInitContainers so they use tpl (e.g. tpl (toYaml
.Values.scheduler.extraInitContainers) . | nindent ...) and tpl (toYaml
.Values.triggerer.extraInitContainers) . | nindent ... respectively, mirroring
the workers deployment behavior to ensure templates in those value blocks are
evaluated.
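The change this prompt describes, sketched for scheduler-deployment.yaml (the triggerer template is analogous; exact indentation depends on the surrounding pod spec):

```yaml
initContainers:
  {{- tpl (toYaml .Values.scheduler.extraInitContainers) . | nindent 8 }}
```

Wrapping toYaml in tpl makes Helm evaluate template expressions embedded in the values (such as the global.imageRegistry reference in the BusyBox image string above) instead of emitting them verbatim.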
In `@kubernetes/akvorado/templates/akvorado-console-deployment.yaml`:
- Line 26: The Helm template call to include "common.images.image" is using an
extra space after the opening parenthesis in the dict call ("( dict ...") which
triggers YAMLlint; edit the template invocation for include
"common.images.image" so the dict is called without the extra space (use "(dict
\"imageRoot\" .Values.init.image \"global\" .Values.global)" style), ensuring
the include and dict tokens are adjacent.
- Line 43: The Helm template call to include "common.images.image" has
inconsistent spacing: remove the extra space after the opening parenthesis in
the include call so the dict invocation uses (dict "imageRoot"
.Values.console.image "global" .Values.global) instead of ( dict ... ); update
the include expression around include "common.images.image" to use consistent
spacing.
In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml`:
- Line 26: The Helm template include call for the container image uses
inconsistent spacing inside the braces—update the include invocation (include
"common.images.image" (dict "imageRoot" .Values.init.image "global"
.Values.global)) to remove the extra spaces inside the parentheses so it matches
the project's style (same change for the other occurrence at the include
"common.images.image" (dict "imageRoot" .Values.image "global" .Values.global)
line); keep the same symbols and values but normalize spacing around the
dict(...) and include(...) calls for YAMLlint consistency.
In `@kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml`:
- Line 26: Helm template expressions lack required internal spacing and trigger
YAMLlint `braces` errors; update the include/dict invocations so there is a
space after the opening handlebars and before the closing ones and also spaces
inside the dict call — e.g. change occurrences of image: {{ include
"common.images.image" ( dict "imageRoot" .Values.init.image "global"
.Values.global) }} to image: {{ include "common.images.image" ( dict "imageRoot"
.Values.init.image "global" .Values.global ) }}; apply the same spacing fix for
the other occurrences that call include "common.images.image" with dict
"imageRoot" and .Values.global.
In `@kubernetes/akvorado/templates/akvorado-redis-deployment.yaml`:
- Line 30: The Helm template braces lack proper spacing for YAMLlint; update the
image line using the include call (include "common.images.image" (dict
"imageRoot" .Values.redis.image "global" .Values.global)) so that there is a
single space after the opening {{ and before the closing }}, and ensure spacing
inside the parentheses around dict (e.g. {{ include "common.images.image" ( dict
"imageRoot" .Values.redis.image "global" .Values.global ) }}), replacing the
existing expression in the image field.
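For reference, the normalized form these spacing comments converge on (one space inside the mustaches, none after the opening parenthesis):

```yaml
image: {{ include "common.images.image" (dict "imageRoot" .Values.redis.image "global" .Values.global) }}
```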
In `@kubernetes/clickhouse/values.yaml`:
- Line 43: Fix the typo in the Helm values key: replace the non-ASCII key name
"regístry" with the correct "registry" so Helm helpers and image resolution
logic (keys like registry used by templates) can find the registry value; update
any occurrences of "regístry" to "registry" in the values.yaml and ensure
templates referencing .Values.registry continue to work.
In `@kubernetes/icinga2/templates/statefulset-distribute-master.yaml`:
- Line 56: The dict passed to the include call for "common.images.image"
contains a duplicated key "global" (see the dict in the include
"common.images.image" invocation); remove the redundant second "global" entry so
the dict only includes unique keys (e.g., keep "imageRoot": .Values.image and
"global": .Values.global) to avoid ambiguity and redundancy.
In `@kubernetes/influxdb/values.yaml`:
- Around line 909-912: The image entry uses an unverified repository and an
invalid tag; update the image block (the image, registry, repository, and tag
keys) to use a verified Azure CLI image and a valid semantic version tag—for
example set registry to docker.io and repository to bitnami/azure-cli (or
registry mcr.microsoft.com and repository azure-cli) and change tag to a valid
semantic version such as 2.84.0 (or 2.22.1 if restoring the original release);
ensure the chosen repository is official (bitnami or Microsoft) and the tag
matches an existing release.
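As a values fragment, the Microsoft-registry variant suggested above would look like the following (the tag is taken from the comment; verify it corresponds to an existing release before pinning):

```yaml
image:
  registry: mcr.microsoft.com
  repository: azure-cli
  tag: "2.84.0"
```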
In `@kubernetes/maxscale/values.yaml`:
- Around line 8-10: The init container image is built only from
.Values.init.image.repository and .Values.init.image.tag, omitting
.Values.init.image.registry; update the init container in the deployment
template (init container spec in kubernetes/maxscale/templates/deployment.yaml)
to construct the full image from .Values.init.image.registry + "/" +
.Values.init.image.repository + ":" + .Values.init.image.tag (or reuse
common.images.image pattern with .Values.init.image) so the image becomes
ghcr.io/svtechnmaa/mysql:8.2.0 instead of svtechnmaa/mysql:8.2.0; ensure you
reference .Values.init.image.registry, .Values.init.image.repository and
.Values.init.image.tag accordingly.
In `@kubernetes/netops-data/values.yaml`:
- Around line 25-27: The YAML value for the image tag under the
registry/repository block uses an unquoted numeric literal ("tag: 1.33") which
some parsers may treat as a float; update the tag value to a quoted string
(e.g., change the value for the key "tag" in the busybox image block) so it is
always treated as a string and consistent with the other quoted tags in this
file.
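The hazard in a nutshell: YAML parsers may read an unquoted 1.33 as a float, while the quoted form always stays a string:

```yaml
image:
  registry: ghcr.io
  repository: svtechnmaa/busybox
  tag: "1.33"  # quoted: always the string "1.33"; unquoted, 1.33 can parse as a float
```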
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3f475d7b-a4ef-4ecc-8f1e-1842d5817802
📒 Files selected for processing (75)
- kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml
- kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml
- kubernetes/airflow-postgresql-ha/values.yaml
- kubernetes/airflow/charts/postgresql/values.yaml
- kubernetes/airflow/templates/workers/worker-deployment.yaml
- kubernetes/airflow/values.yaml
- kubernetes/akvorado/templates/akvorado-console-deployment.yaml
- kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml
- kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml
- kubernetes/akvorado/templates/akvorado-redis-deployment.yaml
- kubernetes/akvorado/values.yaml
- kubernetes/arangodb/templates/arangodeployment.yaml
- kubernetes/arangodb/values.yaml
- kubernetes/asynqmon/templates/deployment.yaml
- kubernetes/asynqmon/values.yaml
- kubernetes/bngblaster/templates/deploy.yaml
- kubernetes/bngblaster/values.yaml
- kubernetes/clickhouse/templates/clickhouse.yaml
- kubernetes/clickhouse/templates/clickhouseKeeper.yaml
- kubernetes/clickhouse/values.yaml
- kubernetes/csv-view/values.yaml
- kubernetes/debuger/values.yaml
- kubernetes/docxtemplate/templates/deploy.yaml
- kubernetes/docxtemplate/values.yaml
- kubernetes/elastalert-ui/templates/deployment.yaml
- kubernetes/elastalert-ui/values.yaml
- kubernetes/freeradius/values.yaml
- kubernetes/gitlist/values.yaml
- kubernetes/grafana/templates/deployment.yaml
- kubernetes/grafana/templates/init-data.yaml
- kubernetes/grafana/values.yaml
- kubernetes/icinga2-report/templates/deployment.yaml
- kubernetes/icinga2-report/values.yaml
- kubernetes/icinga2/templates/deployment-icinga2-rebalance.yaml
- kubernetes/icinga2/templates/statefulset-distribute-master.yaml
- kubernetes/icinga2/templates/statefulset-distribute-satelite.yaml
- kubernetes/icinga2/values.yaml
- kubernetes/icingaweb/templates/deployment.yaml
- kubernetes/icingaweb/values.yaml
- kubernetes/influxdb/templates/influxdb/statefulset-high-availability.yaml
- kubernetes/influxdb/templates/relay/deployment.yaml
- kubernetes/influxdb/values.yaml
- kubernetes/ingress-nginx/values.yaml
- kubernetes/juniper-syslog-api/values.yaml
- kubernetes/mage-ai/templates/redis-deployment.yaml
- kubernetes/mage-ai/values.yaml
- kubernetes/mariadb-galera/templates/statefulset.yaml
- kubernetes/mariadb-galera/values.yaml
- kubernetes/maxscale/values.yaml
- kubernetes/nagvis/values.yaml
- kubernetes/netops-data/templates/deployment.yaml
- kubernetes/netops-data/values.yaml
- kubernetes/postfix/values.yaml
- kubernetes/preparation/values.yaml
- kubernetes/proxy/values.yaml
- kubernetes/redis/templates/deployment.yaml
- kubernetes/redis/values.yaml
- kubernetes/report-ui/templates/redis-deployment.yaml
- kubernetes/report-ui/templates/reflex-backend.yaml
- kubernetes/report-ui/templates/reflex-frontend.yaml
- kubernetes/report-ui/templates/rq-dashboard-deployment.yaml
- kubernetes/report-ui/values.yaml
- kubernetes/rundeck-option-provider/templates/deployment.yaml
- kubernetes/rundeck-option-provider/values.yaml
- kubernetes/rundeck/templates/deployment.yaml
- kubernetes/rundeck/values.yaml
- kubernetes/snmp-manager/values.yaml
- kubernetes/streamlit/templates/deploy.yaml
- kubernetes/streamlit/values.yaml
- kubernetes/syncthing/values.yaml
- kubernetes/thruk/values.yaml
- kubernetes/topology-app/values.yaml
- kubernetes/topology-web/values.yaml
- kubernetes/traefik/templates/deployment.yaml
- kubernetes/traefik/values.yaml
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
kubernetes/freeradius/templates/Deployment.yaml (1)
69-74: ⚠️ Potential issue | 🟠 Major
Duplicate initContainers key will silently discard user-defined init containers.
Lines 69-70 conditionally render .Values.initContainers, but line 72 unconditionally declares another initContainers: block. In YAML, duplicate keys cause the second to override the first, so any user-provided initContainers from values will be silently ignored.
Additionally, ensure consistency: this template uses init.db.image (MySQL) for wait-for-mariadb, while init-db.yaml uses init.waitForDB.image (BusyBox) for the same-named container. Both approaches work for their respective commands, but the naming inconsistency may cause confusion.
Proposed fix to merge user initContainers with the hardcoded one
```diff
-      {{- if .Values.initContainers }}
-      initContainers: {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
-      {{- end }}
       initContainers:
+        {{- if .Values.initContainers }}
+        {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
+        {{- end }}
         - name: wait-for-mariadb
           image: {{ include "common.images.image" ( dict "imageRoot" .Values.init.db.image "global" .Values.global) }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/freeradius/templates/Deployment.yaml` around lines 69 - 74, The template currently emits two initContainers keys causing user-defined .Values.initContainers to be overwritten; change it to emit a single initContainers list that first renders user entries via the common.tplvalues.render include (referencing .Values.initContainers and the helper common.tplvalues.render) and then appends the required wait-for-mariadb container (name: wait-for-mariadb) so both sets are preserved; also unify the image source for the appended container by using the same value key used elsewhere (either .Values.init.db.image or .Values.init.waitForDB.image) so the included image helper (common.images.image) is fed a consistent value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@kubernetes/maxscale/templates/deployment.yaml`:
- Line 20: The YAML linter error is caused by inconsistent spacing around the
helper call; normalize the spacing in the include invocation so it matches the
project's pattern: use {{ include "common.images.image" (dict "imageRoot"
.Values.init.image "global" .Values.global) }} by removing the extra space after
the opening parenthesis and ensuring a single space between the include, the
helper name, and the (dict ...) argument; update the line containing the include
of "common.images.image" and the (dict "imageRoot" .Values.init.image "global"
.Values.global) argument accordingly.
In `@kubernetes/telegraf/values.yaml`:
- Around line 22-23: Update the init container image template to pass the global
image registry into the common.images.image helper so global.imageRegistry
overrides apply: change the include call that currently uses (dict "imageRoot"
.Values.init.image) to also include "global" .Values.global when rendering the
init container image (the same pattern used for the main container). Also verify
the BusyBox reference (repository svtechnmaa/busybox and tag 1.33) is publicly
available; if not, replace it with an official or accessible image in
Values.init.image.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: ad5f9a63-69d7-4b49-a2eb-bd59aac12541
📒 Files selected for processing (7)
- kubernetes/clickhouse/values.yaml
- kubernetes/freeradius/templates/Deployment.yaml
- kubernetes/freeradius/templates/init-db.yaml
- kubernetes/freeradius/values.yaml
- kubernetes/ingress-nginx/templates/deployment.yaml
- kubernetes/maxscale/templates/deployment.yaml
- kubernetes/telegraf/values.yaml
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@kubernetes/icinga2/templates/statefulset-distribute-master.yaml`:
- Line 441: Remove the trailing inline comment that duplicates the computed
image string on the image line that calls include "common.images.image" with
dict "imageRoot" .Values.master.redis.image "global" .Values.global;
specifically delete the "# {{ .Values.master.redis.image.repository }}:{{
.Values.master.redis.image.tag }}" fragment so the template line only contains
the include call (and clean up any leftover spacing).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1c286d84-773e-408f-b8fc-676deb98f883
📒 Files selected for processing (2)
- kubernetes/icinga2/templates/statefulset-distribute-master.yaml
- kubernetes/telegraf/templates/deployment.yaml
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
kubernetes/report-ui/templates/rq-dashboard-deployment.yaml (1)
25-47: ⚠️ Potential issue | 🟠 Major

Add an explicit container security context in this deployment block.

The `rq-dashboard` container still runs with the default security context (see the static analysis findings in this range). In restricted clusters, this commonly gets denied at admission time. Please set an explicit hardened `securityContext` (e.g., no privilege escalation, dropped capabilities, seccomp profile) and keep resources configurable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/report-ui/templates/rq-dashboard-deployment.yaml` around lines 25 - 47, Add an explicit hardened container securityContext for the rq-dashboard container: in the container spec for name: rq-dashboard, add a securityContext block that sets allowPrivilegeEscalation: false, runAsNonRoot: true (and runAsUser: 1000 or configurable via .Values.dashboard.securityContext.runAsUser), drops all capabilities (capabilities.drop: ["ALL"]), and sets seccompProfile.type: RuntimeDefault (or configurable via .Values.dashboard.securityContext.seccompProfile). Also ensure resources remain configurable by referencing .Values.dashboard.resources in the container spec so resource limits/requests can be overridden.

kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml (1)
43-58: 🧹 Nitpick | 🔵 Trivial

Consider adding securityContext and resource limits in a follow-up.

Trivy flagged several security best practices for the `inlet` container (missing `securityContext.runAsNonRoot`, `allowPrivilegeEscalation: false`, resource limits, etc.). These are pre-existing issues unrelated to this PR's scope, but worth addressing for production hardening.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml` around lines 43 - 58, Add a follow-up change to the inlet container spec (the container named "inlet" in the Akvorado inlet deployment template) to harden it: add a securityContext block with runAsNonRoot: true, a non-root runAsUser (e.g. 1000), allowPrivilegeEscalation: false and readOnlyRootFilesystem where appropriate; and add resources with requests and limits for cpu and memory (e.g. requests.cpu/memory and limits.cpu/memory) so the pod is not unbounded. Keep these additions adjacent to the existing container fields (image, imagePullPolicy, ports, volumeMounts) and ensure they are templated using .Values (e.g. .Values.inlet.securityContext and .Values.inlet.resources) so defaults can be overridden.
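A minimal sketch of the hardening the finding asks for — the `securityContext` and `resources` fields are standard Kubernetes Pod spec API, but the `.Values.inlet.*` paths and the resource numbers are assumptions carried over from the prompt, not taken from the chart:

```yaml
# Sketch only: hardened inlet container spec. Values paths and
# resource figures are illustrative defaults, not chart values.
- name: inlet
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                 # assumed non-root UID
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: RuntimeDefault
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi
```

In practice these would be templated (e.g. via `.Values.inlet.securityContext` and `.Values.inlet.resources`) so cluster operators can override the defaults.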
♻️ Duplicate comments (1)
kubernetes/influxdb/values.yaml (1)
912-914: ⚠️ Potential issue | 🟠 Major

`2.221` does not match Azure CLI's published versioning.

Azure CLI releases are published as `major.minor.patch` versions such as `2.22.1` and `2.84.0`, with Microsoft documenting a monthly release cadence for supported versions. `2.221` does not fit that history, so this tag looks unverifiable and risks breaking the Azure backup job if the mirror does not actually publish it. Please switch back to a verifiable tag or document the mirror-specific tag format if this is intentional. (pypi.org)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kubernetes/influxdb/values.yaml` around lines 912 - 914, The tag value "2.221" for the image repository "svtechnmaa/azure-cli" is not a valid Azure CLI release format; update the tag in the values.yaml (the "tag:" field) to a verifiable upstream Azure CLI release (e.g., a major.minor.patch like "2.22.1" or "2.84.0") or, if this mirror uses a custom scheme, replace the tag with a documented mirror-specific tag and add a brief comment explaining the mirror format and a link to the mirror's tag list so the backup job can be reliably verified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml`:
- Around line 366-369: The template currently adds imagePullSecrets twice: once
via the helper include {{ include "postgresql-ha.image.pullSecrets" . }} and
again with a hardcoded imagePullSecrets: - name: ghcr-pull-secret block; remove
the hardcoded block to avoid duplicate entries and rely on the helper include
which sources secrets from global.imagePullSecrets, or conversely remove the
include and keep the hardcoded block—prefer the first option: delete the
explicit imagePullSecrets: - name: ghcr-pull-secret entry and ensure values.yaml
sets global.imagePullSecrets: ["ghcr-pull-secret"] so the include
(postgresql-ha.image.pullSecrets) provides the secret.
In `@kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml`:
- Around line 531-534: The statefulset currently has a hardcoded
imagePullSecrets block (entry "- name: ghcr-pull-secret") that duplicates the
helper-generated entries from {{ include "postgresql-ha.image.pullSecrets" . }}
(same issue as in pgpool); fix by removing the hardcoded block so only the
helper-generated imagePullSecrets is used, or alternatively remove the helper
call and convert imagePullSecrets to the single hardcoded value—ensure only one
source of imagePullSecrets remains (reference the imagePullSecrets YAML key and
the include "postgresql-ha.image.pullSecrets" helper and the hardcoded
ghcr-pull-secret entry when making the change).
In `@kubernetes/arangodb/templates/arangodeployment.yaml`:
- Line 8: The YAML line using the Helm template include has an extra space
inside the parentheses; update the expression include "common.images.image" (
dict "imageRoot" .Values.image "global" .Values.global ) to remove the spaces
around the dict call so it becomes include "common.images.image"(dict
"imageRoot" .Values.image "global" .Values.global) (i.e., adjust the
include/dict invocation in the arangodeployment.yaml so there is no extra space
inside the parentheses to satisfy YAMLlint).
In `@kubernetes/icinga2/templates/statefulset-distribute.yaml`:
- Around line 403-406: The imagePullSecrets block is currently inside the
conditional governed by .Values.global.sharedVolume.enabled so it only renders
when shared persistence is enabled; move the imagePullSecrets mapping (the lines
creating "- name: ghcr-pull-secret") out of that conditional and place it
alongside other pod spec-level fields so imagePullSecrets is always emitted
regardless of .Values.global.sharedVolume.enabled, ensuring you remove it from
inside the "if .Values.global.sharedVolume.enabled" block and re-add it after
that conditional within the StatefulSet pod spec.
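The move that prompt describes can be sketched as follows — a rough shape only, since the actual field ordering and conditional contents in the chart's pod spec are not shown here:

```yaml
# Sketch: imagePullSecrets emitted unconditionally at pod-spec level,
# moved out of the sharedVolume conditional. Surrounding fields assumed.
spec:
  imagePullSecrets:
    - name: ghcr-pull-secret
  {{- if .Values.global.sharedVolume.enabled }}
  # only the shared-persistence-specific fields stay in the conditional
  {{- end }}
```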
In `@kubernetes/influxdb/values.yaml`:
- Around line 65-66: Update the inline image documentation comments to reflect
the new default image registry and repository by replacing references to
Bitnami/Docker Hub tags with ghcr.io/svtechnmaa/*; specifically search for the
YAML keys "registry" and "repository" and the comment blocks adjacent to them
(e.g., the comment blocks near lines where registry: ghcr.io and repository:
svtechnmaa/influxdb are defined) and change any example/image doc text and
sample tag guidance at the other occurrences noted (around the blocks for lines
referenced in the review: the same comment pairs near 428-429, 816-817, 887-888,
912-913) so examples and troubleshooting tips point to ghcr.io/svtechnmaa images
and tags instead of Bitnami/Docker Hub.
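Since several of these comments hinge on how `common.images.image` resolves the registry, a values sketch may help. The precedence shown (a non-empty `global.imageRegistry` overriding the per-image `registry`) matches the Bitnami common chart's helper, which this PR's template follows; the override registry name and tag are illustrative:

```yaml
global:
  imageRegistry: "my-registry.example.com"  # when set, overrides the per-image registry

image:
  registry: ghcr.io            # default, used only when global.imageRegistry is empty
  repository: svtechnmaa/influxdb
  tag: 1.8.10                  # illustrative tag
# common.images.image would render:
#   my-registry.example.com/svtechnmaa/influxdb:1.8.10
# and, with global.imageRegistry unset:
#   ghcr.io/svtechnmaa/influxdb:1.8.10
```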
---
Outside diff comments:
In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml`:
- Around line 43-58: Add a follow-up change to the inlet container spec (the
container named "inlet" in the Akvorado inlet deployment template) to harden it:
add a securityContext block with runAsNonRoot: true, a non-root runAsUser (e.g.
1000), allowPrivilegeEscalation: false and readOnlyRootFilesystem where
appropriate; and add resources with requests and limits for cpu and memory (e.g.
requests.cpu/memory and limits.cpu/memory) so the pod is not unbounded. Keep
these additions adjacent to the existing container fields (image,
imagePullPolicy, ports, volumeMounts) and ensure they are templated using
.Values (e.g. .Values.inlet.securityContext and .Values.inlet.resources) so
defaults can be overridden.
In `@kubernetes/report-ui/templates/rq-dashboard-deployment.yaml`:
- Around line 25-47: Add an explicit hardened container securityContext for the
rq-dashboard container: in the container spec for name: rq-dashboard, add a
securityContext block that sets allowPrivilegeEscalation: false, runAsNonRoot:
true (and runAsUser: 1000 or configurable via
.Values.dashboard.securityContext.runAsUser), drops all capabilities
(capabilities.drop: ["ALL"]), and sets seccompProfile.type: RuntimeDefault (or
configurable via .Values.dashboard.securityContext.seccompProfile). Also ensure
resources remain configurable by referencing .Values.dashboard.resources in the
container spec so resource limits/requests can be overridden.
---
Duplicate comments:
In `@kubernetes/influxdb/values.yaml`:
- Around line 912-914: The tag value "2.221" for the image repository
"svtechnmaa/azure-cli" is not a valid Azure CLI release format; update the tag
in the values.yaml (the "tag:" field) to a verifiable upstream Azure CLI release
(e.g., a major.minor.patch like "2.22.1" or "2.84.0") or, if this mirror uses a
custom scheme, replace the tag with a documented mirror-specific tag and add a
brief comment explaining the mirror format and a link to the mirror's tag list
so the backup job can be reliably verified.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8f508d18-071c-4973-b252-faf1e4a87d16
📒 Files selected for processing (18)
- kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml
- kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml
- kubernetes/akvorado/templates/akvorado-console-deployment.yaml
- kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml
- kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml
- kubernetes/akvorado/templates/akvorado-redis-deployment.yaml
- kubernetes/arangodb/templates/arangodeployment.yaml
- kubernetes/arangodb/templates/initjob.yaml
- kubernetes/arangodb/values.yaml
- kubernetes/filebeat/templates/filebeat.yml
- kubernetes/icinga2-report/templates/deployment.yaml
- kubernetes/icinga2/templates/statefulset-distribute.yaml
- kubernetes/influxdb/values.yaml
- kubernetes/ingress-nginx/templates/deployment.yaml
- kubernetes/logstash/templates/logstash.yml
- kubernetes/mariadb-galera/values.yaml
- kubernetes/report-ui/templates/redis-deployment.yaml
- kubernetes/report-ui/templates/rq-dashboard-deployment.yaml
Flagged lines:

```yaml
      imagePullSecrets:
        - name: ghcr-pull-secret
```
Potential duplicate imagePullSecrets entries.
Line 39 already calls {{ include "postgresql-ha.image.pullSecrets" . }}, which generates imagePullSecrets from global.imagePullSecrets and individual image pullSecrets (see kubernetes/airflow-postgresql-ha/templates/_helpers.tpl). Adding a hardcoded entry here may cause duplicates if ghcr-pull-secret is also configured in global.imagePullSecrets.
Consider either:
- Remove the hardcoded block and rely on `global.imagePullSecrets: ["ghcr-pull-secret"]` in values.yaml, or
- Remove the `postgresql-ha.image.pullSecrets` include at line 39 and use only the hardcoded approach.
Option 1: Remove hardcoded block (preferred)

```diff
         {{- include "common.tplvalues.render" (dict "value" .Values.pgpool.sidecars "context" $) | nindent 8 }}
         {{- end }}
-
-      imagePullSecrets:
-        - name: ghcr-pull-secret
-
       volumes:
```

Then ensure values.yaml has:
```yaml
global:
  imagePullSecrets:
    - ghcr-pull-secret
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      imagePullSecrets:
        - name: ghcr-pull-secret
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml` around
lines 366 - 369, The template currently adds imagePullSecrets twice: once via
the helper include {{ include "postgresql-ha.image.pullSecrets" . }} and again
with a hardcoded imagePullSecrets: - name: ghcr-pull-secret block; remove the
hardcoded block to avoid duplicate entries and rely on the helper include which
sources secrets from global.imagePullSecrets, or conversely remove the include
and keep the hardcoded block—prefer the first option: delete the explicit
imagePullSecrets: - name: ghcr-pull-secret entry and ensure values.yaml sets
global.imagePullSecrets: ["ghcr-pull-secret"] so the include
(postgresql-ha.image.pullSecrets) provides the secret.
Flagged lines:

```yaml
      imagePullSecrets:
        - name: ghcr-pull-secret
```
Potential duplicate imagePullSecrets entries (same issue as pgpool).
Line 43 already includes {{ include "postgresql-ha.image.pullSecrets" . }} which generates imagePullSecrets from global.imagePullSecrets. Adding a hardcoded block here creates the same duplication risk as in pgpool/deployment.yaml.
Apply the same fix: either rely on the helper with global.imagePullSecrets configuration, or remove the helper and use only hardcoded values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml`
around lines 531 - 534, The statefulset currently has a hardcoded
imagePullSecrets block (entry "- name: ghcr-pull-secret") that duplicates the
helper-generated entries from {{ include "postgresql-ha.image.pullSecrets" . }}
(same issue as in pgpool); fix by removing the hardcoded block so only the
helper-generated imagePullSecrets is used, or alternatively remove the helper
call and convert imagePullSecrets to the single hardcoded value—ensure only one
source of imagePullSecrets remains (reference the imagePullSecrets YAML key and
the include "postgresql-ha.image.pullSecrets" helper and the hardcoded
ghcr-pull-secret entry when making the change).
Flagged lines:

```yaml
  rootPassword: ""

kube-arangodb:
  operator:
```
The image here uses the template `{{ .Values.operator.image }}`, and we don't clone kube-arangodb, so we cannot update its registry.
No description provided.