
fix: Update image template for using global.imageRegistry value and change default repository to ghcr.io/svtechnmaa #119

Open
duchieu2k wants to merge 16 commits into main from fix/image-registry

Conversation

@duchieu2k
Contributor

No description provided.

@coderabbitai

coderabbitai Bot commented Apr 3, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Updated many Helm charts: added explicit image registries (mostly ghcr.io/svtechnmaa/*), standardized image values, passed .Values.global into common.images.image calls across numerous templates, added imagePullSecrets: ghcr-pull-secret in several pod specs, and switched some extraInitContainers rendering to use tpl for embedded templates. (50 words)
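The image-resolution pattern this PR standardizes on can be sketched as follows. This is an illustrative fragment in the Bitnami-style `common.images.image` convention — the chart name, repository, and tag shown here are hypothetical, not taken from the diff:

```yaml
# templates/deployment.yaml (sketch) — pass .Values.global so a global
# registry can override the per-chart one:
#   image: {{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}

# values.yaml (sketch) — per-chart default after this PR:
image:
  registry: ghcr.io
  repository: svtechnmaa/example-app   # hypothetical repository
  tag: "1.0.0"                         # hypothetical tag
  pullPolicy: IfNotPresent

# A parent chart or a values override can then redirect every image at once:
global:
  imageRegistry: registry.internal.example.com
```

With `global.imageRegistry` set, every chart that passes `.Values.global` into the helper resolves its images against that registry instead of the per-chart ghcr.io default.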

Changes

Cohort / File(s) Summary
Airflow PostgreSQL HA
kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml, kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml, kubernetes/airflow-postgresql-ha/values.yaml
Pass .Values.global into common.images.image for init/main containers; add imagePullSecrets: ghcr-pull-secret in pod specs; values switched registries/repos to ghcr.io / svtechnmaa/*.
Airflow chart & values
kubernetes/airflow/charts/postgresql/values.yaml, kubernetes/airflow/templates/workers/worker-deployment.yaml, kubernetes/airflow/values.yaml
Image defaults moved to ghcr.io/svtechnmaa/*; extraInitContainers now rendered with tpl so embedded templates evaluate; busybox image references templated.
Akvorado
kubernetes/akvorado/templates/..., kubernetes/akvorado/values.yaml
Image helper calls extended with global: .Values.global; added imagePullSecrets: ghcr-pull-secret; values updated to GHCR svtechnmaa/*, geoip tag pinned.
Core DB & infra charts
kubernetes/arangodb/..., kubernetes/clickhouse/..., kubernetes/mariadb-galera/..., kubernetes/postgresql/...
Replaced inline repo:tag with common.images.image using global; values add image.registry: ghcr.io and swap repositories to svtechnmaa/*; some charts add init image config and imagePullSecrets.
Observability & UI charts
kubernetes/grafana/..., kubernetes/influxdb/..., kubernetes/report-ui/..., kubernetes/icinga*/*
Multiple container/init images updated to pass .Values.global to image helper; several pods now include imagePullSecrets; values moved images to GHCR and adjusted tags/repositories.
Various templates updated
examples: kubernetes/redis/templates/deployment.yaml, kubernetes/traefik/templates/deployment.yaml, kubernetes/streamlit/templates/deploy.yaml, kubernetes/mage-ai/templates/redis-deployment.yaml
Numerous templates extended to call common.images.image with global: .Values.global (image resolution change only).
extraInitContainers tpl changes
charts such as airflow, elastalert-ui, report-ui, others
Changed `toYaml .Values...
Values structure & init image reorganizations
charts like freeradius, maxscale, arangodb, influxdb
Reorganized/expanded image blocks to include registry, repository, tag, pullPolicy; added new init.* image entries and centralized imagePullSecrets in values.
Wide values additions (registry changes)
many kubernetes/*/values.yaml (examples: grafana, redis, influxdb, report-ui, mage-ai, icinga*, rundeck*, traefik, etc.)
Added image.registry: ghcr.io in many charts; swapped numerous repositories to svtechnmaa/* and updated tags for select images.
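The extraInitContainers change summarized above (rendering through `tpl`) generally takes this shape; a hedged sketch, since the exact values paths and indentation differ per chart:

```yaml
# Template change (sketch):
#   before:  {{- toYaml .Values.extraInitContainers | nindent 8 }}
#   after:   {{- tpl (toYaml .Values.extraInitContainers) $ | nindent 8 }}
# With plain toYaml, any {{ ... }} expressions inside the values below are
# emitted literally; tpl evaluates them against the chart context first.

# values.yaml entry that relies on tpl (image reference is hypothetical):
extraInitContainers:
  - name: wait-for-db
    image: "{{ .Values.global.imageRegistry }}/svtechnmaa/busybox:1.36"
    command: ["sh", "-c", "until nc -z db 5432; do sleep 2; done"]
```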

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • vyvuvivo
  • tuhoanganh
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check — ⚠️ Warning: No pull request description was provided by the author, making it impossible to evaluate whether the description relates to the changeset. Resolution: Add a detailed pull request description explaining the purpose, rationale, and scope of the image registry and template changes across all affected Kubernetes charts.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: The title clearly describes the main change: updating image templates to use global.imageRegistry and switching default repositories to ghcr.io/svtechnmaa.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review

Review Summary by Qodo

Migrate container images to ghcr.io/svtechnmaa and update image template

✨ Enhancement 🐞 Bug fix


Walkthroughs

Description
• Migrate all container images to ghcr.io/svtechnmaa registry
• Update common image template to support global imageRegistry parameter
• Add registry field to all image configurations in values files
• Fix template rendering for extraInitContainers using tpl function
Diagram
flowchart LR
  A["Image Registries"] -->|"Change from docker.io<br/>to ghcr.io"| B["Values Files"]
  B -->|"Add registry field"| C["Image Configurations"]
  D["Common Image Template"] -->|"Add global parameter"| E["Template Rendering"]
  E -->|"Support imageRegistry"| F["Container Deployments"]
  G["extraInitContainers"] -->|"Apply tpl function"| H["Template Processing"]


File Changes

1. kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml


2. kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml ✨ Enhancement +4/-4

Add global parameter to image template calls

kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml


3. kubernetes/airflow-postgresql-ha/values.yaml ⚙️ Configuration changes +8/-8

Update image registries to ghcr.io/svtechnmaa

kubernetes/airflow-postgresql-ha/values.yaml


4. kubernetes/airflow/charts/postgresql/values.yaml ⚙️ Configuration changes +8/-8

Update image registries to ghcr.io/svtechnmaa

kubernetes/airflow/charts/postgresql/values.yaml


5. kubernetes/airflow/templates/workers/worker-deployment.yaml 🐞 Bug fix +1/-1

Apply tpl function to extraInitContainers

kubernetes/airflow/templates/workers/worker-deployment.yaml


6. kubernetes/airflow/values.yaml ⚙️ Configuration changes +3/-3

Update busybox images with global registry support

kubernetes/airflow/values.yaml


7. kubernetes/akvorado/templates/akvorado-console-deployment.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/akvorado/templates/akvorado-console-deployment.yaml


8. kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml


9. kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml ✨ Enhancement +3/-3

Add global parameter to image template calls

kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml


10. kubernetes/akvorado/templates/akvorado-redis-deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/akvorado/templates/akvorado-redis-deployment.yaml


11. kubernetes/akvorado/values.yaml ⚙️ Configuration changes +13/-8

Update image registries to ghcr.io/svtechnmaa

kubernetes/akvorado/values.yaml


12. kubernetes/arangodb/templates/arangodeployment.yaml ✨ Enhancement +1/-1

Use common image template for image rendering

kubernetes/arangodb/templates/arangodeployment.yaml


13. kubernetes/arangodb/values.yaml ⚙️ Configuration changes +3/-3

Update image registry to ghcr.io/svtechnmaa

kubernetes/arangodb/values.yaml


14. kubernetes/asynqmon/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/asynqmon/templates/deployment.yaml


15. kubernetes/asynqmon/values.yaml ⚙️ Configuration changes +4/-4

Update image registry to ghcr.io/svtechnmaa

kubernetes/asynqmon/values.yaml


16. kubernetes/bngblaster/templates/deploy.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/bngblaster/templates/deploy.yaml


17. kubernetes/bngblaster/values.yaml ⚙️ Configuration changes +4/-2

Update image registries to ghcr.io/svtechnmaa

kubernetes/bngblaster/values.yaml


18. kubernetes/clickhouse/templates/clickhouse.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/clickhouse/templates/clickhouse.yaml


19. kubernetes/clickhouse/templates/clickhouseKeeper.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/clickhouse/templates/clickhouseKeeper.yaml


20. kubernetes/clickhouse/values.yaml ⚙️ Configuration changes +8/-6

Update image registries to ghcr.io/svtechnmaa

kubernetes/clickhouse/values.yaml


21. kubernetes/csv-view/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/csv-view/values.yaml


22. kubernetes/debuger/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/debuger/values.yaml


23. kubernetes/docxtemplate/templates/deploy.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/docxtemplate/templates/deploy.yaml


24. kubernetes/docxtemplate/values.yaml ⚙️ Configuration changes +4/-2

Update image registries to ghcr.io/svtechnmaa

kubernetes/docxtemplate/values.yaml


25. kubernetes/elastalert-ui/templates/deployment.yaml 🐞 Bug fix +1/-1

Apply tpl function to extraInitContainers

kubernetes/elastalert-ui/templates/deployment.yaml


26. kubernetes/elastalert-ui/values.yaml ⚙️ Configuration changes +2/-1

Update image registries and add global registry support

kubernetes/elastalert-ui/values.yaml


27. kubernetes/freeradius/values.yaml ⚙️ Configuration changes +4/-3

Update image registries with global registry support

kubernetes/freeradius/values.yaml


28. kubernetes/gitlist/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/gitlist/values.yaml


29. kubernetes/grafana/templates/deployment.yaml ✨ Enhancement +3/-3

Add global parameter to image template calls

kubernetes/grafana/templates/deployment.yaml


30. kubernetes/grafana/templates/init-data.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/grafana/templates/init-data.yaml


31. kubernetes/grafana/values.yaml ⚙️ Configuration changes +8/-5

Update image registries to ghcr.io/svtechnmaa

kubernetes/grafana/values.yaml


32. kubernetes/icinga2-report/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/icinga2-report/templates/deployment.yaml


33. kubernetes/icinga2-report/values.yaml ⚙️ Configuration changes +2/-2

Update image registry to ghcr.io/svtechnmaa

kubernetes/icinga2-report/values.yaml


34. kubernetes/icinga2/templates/deployment-icinga2-rebalance.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/icinga2/templates/deployment-icinga2-rebalance.yaml


35. kubernetes/icinga2/templates/statefulset-distribute-master.yaml ✨ Enhancement +6/-6

Add global parameter to image template calls

kubernetes/icinga2/templates/statefulset-distribute-master.yaml


36. kubernetes/icinga2/templates/statefulset-distribute-satelite.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/icinga2/templates/statefulset-distribute-satelite.yaml


37. kubernetes/icinga2/values.yaml ⚙️ Configuration changes +6/-4

Update image registries to ghcr.io/svtechnmaa

kubernetes/icinga2/values.yaml


38. kubernetes/icingaweb/templates/deployment.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/icingaweb/templates/deployment.yaml


39. kubernetes/icingaweb/values.yaml ⚙️ Configuration changes +6/-4

Update image registries to ghcr.io/svtechnmaa

kubernetes/icingaweb/values.yaml


40. kubernetes/influxdb/templates/influxdb/statefulset-high-availability.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/influxdb/templates/influxdb/statefulset-high-availability.yaml


41. kubernetes/influxdb/templates/relay/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/influxdb/templates/relay/deployment.yaml


42. kubernetes/influxdb/values.yaml ⚙️ Configuration changes +11/-11

Update image registries to ghcr.io/svtechnmaa

kubernetes/influxdb/values.yaml


43. kubernetes/ingress-nginx/values.yaml ⚙️ Configuration changes +2/-1

Update image registry to ghcr.io/svtechnmaa

kubernetes/ingress-nginx/values.yaml


44. kubernetes/juniper-syslog-api/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/juniper-syslog-api/values.yaml


45. kubernetes/mage-ai/templates/redis-deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/mage-ai/templates/redis-deployment.yaml


46. kubernetes/mage-ai/values.yaml ⚙️ Configuration changes +7/-7

Update image registries to ghcr.io/svtechnmaa

kubernetes/mage-ai/values.yaml


47. kubernetes/mariadb-galera/templates/statefulset.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/mariadb-galera/templates/statefulset.yaml


48. kubernetes/mariadb-galera/values.yaml ⚙️ Configuration changes +4/-4

Update image registries to ghcr.io/svtechnmaa

kubernetes/mariadb-galera/values.yaml


49. kubernetes/maxscale/values.yaml ⚙️ Configuration changes +3/-1

Update image registries to ghcr.io/svtechnmaa

kubernetes/maxscale/values.yaml


50. kubernetes/nagvis/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/nagvis/values.yaml


51. kubernetes/netops-data/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/netops-data/templates/deployment.yaml


52. kubernetes/netops-data/values.yaml ⚙️ Configuration changes +4/-3

Update image registries to ghcr.io/svtechnmaa

kubernetes/netops-data/values.yaml


53. kubernetes/postfix/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/postfix/values.yaml


54. kubernetes/preparation/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/preparation/values.yaml


55. kubernetes/proxy/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/proxy/values.yaml


56. kubernetes/redis/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/redis/templates/deployment.yaml


57. kubernetes/redis/values.yaml ⚙️ Configuration changes +3/-3

Update image registry to ghcr.io/svtechnmaa

kubernetes/redis/values.yaml


58. kubernetes/report-ui/templates/redis-deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/report-ui/templates/redis-deployment.yaml


59. kubernetes/report-ui/templates/reflex-backend.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/report-ui/templates/reflex-backend.yaml


60. kubernetes/report-ui/templates/reflex-frontend.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/report-ui/templates/reflex-frontend.yaml


61. kubernetes/report-ui/templates/rq-dashboard-deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/report-ui/templates/rq-dashboard-deployment.yaml


62. kubernetes/report-ui/values.yaml ⚙️ Configuration changes +16/-11

Update image registries to ghcr.io/svtechnmaa

kubernetes/report-ui/values.yaml


63. kubernetes/rundeck-option-provider/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/rundeck-option-provider/templates/deployment.yaml


64. kubernetes/rundeck-option-provider/values.yaml ⚙️ Configuration changes +3/-2

Update image registries to ghcr.io/svtechnmaa

kubernetes/rundeck-option-provider/values.yaml


65. kubernetes/rundeck/templates/deployment.yaml ✨ Enhancement +2/-2

Add global parameter to image template calls

kubernetes/rundeck/templates/deployment.yaml


66. kubernetes/rundeck/values.yaml ⚙️ Configuration changes +3/-2

Update image registries to ghcr.io/svtechnmaa

kubernetes/rundeck/values.yaml


67. kubernetes/snmp-manager/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/snmp-manager/values.yaml


68. kubernetes/streamlit/templates/deploy.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/streamlit/templates/deploy.yaml


69. kubernetes/streamlit/values.yaml ⚙️ Configuration changes +4/-2

Update image registries to ghcr.io/svtechnmaa

kubernetes/streamlit/values.yaml


70. kubernetes/syncthing/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/syncthing/values.yaml


71. kubernetes/thruk/values.yaml ⚙️ Configuration changes +1/-0

Add registry field to image configuration

kubernetes/thruk/values.yaml


72. kubernetes/topology-app/values.yaml ⚙️ Configuration changes +2/-2

Update image registry and tag configuration

kubernetes/topology-app/values.yaml


73. kubernetes/topology-web/values.yaml ⚙️ Configuration changes +2/-2

Update image registry and tag configuration

kubernetes/topology-web/values.yaml


74. kubernetes/traefik/templates/deployment.yaml ✨ Enhancement +1/-1

Add global parameter to image template calls

kubernetes/traefik/templates/deployment.yaml


75. kubernetes/traefik/values.yaml ⚙️ Configuration changes +3/-3

Update image registry to ghcr.io/svtechnmaa

kubernetes/traefik/values.yaml



@qodo-code-review

qodo-code-review Bot commented Apr 3, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (0) 📎 Requirement gaps (0) 🖥 UI issues (0) 🎨 UX Issues (0)



Action required

1. Registry key typo 🐞
Description
clickhouseKeeper.image uses the key regístry (accented i) instead of registry, so
common.images.image reads an empty registry and renders an image without the intended ghcr.io
prefix (defaults to docker.io). This will pull the wrong image or fail if it doesn't exist on
docker.io.
Code

kubernetes/clickhouse/values.yaml[R42-45]

image:
-    repository: "clickhouse/clickhouse-keeper"
-    tag: "25.5"
+    regístry: ghcr.io
+    repository: svtechnmaa/clickhouse-keeper
+    tag: 25.5
Evidence
The ClickHouse Keeper template renders the image via common.images.image using
.Values.clickhouseKeeper.image, and the helper specifically reads .imageRoot.registry; because
the values key is misspelled, the helper will not see the registry and will format the image as
repository:tag only.

kubernetes/clickhouse/values.yaml[41-46]
kubernetes/clickhouse/templates/clickhouseKeeper.yaml[51-55]
kubernetes/common/templates/_images.tpl[6-19]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`kubernetes/clickhouse/values.yaml` defines `clickhouseKeeper.image.regístry` (non-ASCII typo). `common.images.image` reads `.imageRoot.registry`, so the registry is ignored and ClickHouse Keeper will be rendered without `ghcr.io/`.
### Issue Context
The clickhouse-keeper manifest uses `{{ include "common.images.image" ( dict "imageRoot" .Values.clickhouseKeeper.image ... ) }}`.
### Fix Focus Areas
- kubernetes/clickhouse/values.yaml[41-46]
- kubernetes/common/templates/_images.tpl[6-19]
- kubernetes/clickhouse/templates/clickhouseKeeper.yaml[51-55]
### Suggested fix
Rename `regístry` to `registry` under `clickhouseKeeper.image`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
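For context on why a misspelled key silently drops the registry: a Bitnami-style `common.images.image` helper looks up `.imageRoot.registry` by name, so an unknown key like `regístry` simply yields an empty string. A sketch of such a helper follows — the actual `kubernetes/common/templates/_images.tpl` may differ in detail:

```
{{- define "common.images.image" -}}
{{- $registry := .imageRoot.registry -}}
{{- $repository := .imageRoot.repository -}}
{{- $tag := .imageRoot.tag | toString -}}
{{- if .global -}}
  {{- if .global.imageRegistry -}}
    {{- $registry = .global.imageRegistry -}}
  {{- end -}}
{{- end -}}
{{- if $registry -}}
{{- printf "%s/%s:%s" $registry $repository $tag -}}
{{- else -}}
{{- printf "%s:%s" $repository $tag -}}
{{- end -}}
{{- end -}}
```

With the `regístry` typo, `$registry` stays empty and the `else` branch renders `svtechnmaa/clickhouse-keeper:25.5`, which the container runtime resolves against docker.io — exactly the failure mode described in this finding.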


2. Ingress image drops registry 🐞
Description
ingress-nginx/values.yaml now splits image.registry and image.repository, but the deployment
template still renders image: {{ .Values.image.repository }}:{{ .Values.image.tag }}, so ghcr.io
is silently dropped. This will attempt to pull svtechnmaa/controller:v1.9.4 from the default
registry (docker.io) instead of ghcr.io/svtechnmaa/controller:v1.9.4.
Code

kubernetes/ingress-nginx/values.yaml[R13-17]

 image:
-  repository: registry.k8s.io/ingress-nginx/controller
+  registry: ghcr.io
+  repository: svtechnmaa/controller
   tag: v1.9.4
   pullPolicy: IfNotPresent
Evidence
The values change introduces a separate image.registry, but the deployment template never uses it
and concatenates only repository:tag, which will not include ghcr.io when repository is
svtechnmaa/controller.

kubernetes/ingress-nginx/values.yaml[13-17]
kubernetes/ingress-nginx/templates/deployment.yaml[42-46]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Ingress-NGINX image rendering ignores `.Values.image.registry`, so after this PR the chart will pull from the wrong registry.
### Issue Context
`values.yaml` now sets:
- `image.registry: ghcr.io`
- `image.repository: svtechnmaa/controller`
But the deployment template uses only `repository:tag`.
### Fix Focus Areas
- kubernetes/ingress-nginx/values.yaml[13-17]
- kubernetes/ingress-nginx/templates/deployment.yaml[42-46]
### Suggested fix
Either:
1) Switch the template to use `common.images.image`:
`image: {{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}` (and ensure `common` helpers are available),
or
2) Build image with registry explicitly:
`image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}` (handling empty registry if desired).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Init images ignore registry 🐞
Description
freeradius and maxscale values now define init.image.registry, but the templates still render
init images as {{ .Values.init.image.repository }}:{{ .Values.init.image.tag }}, dropping the
registry. This will pull svtechnmaa/mysql:8.2.0 from docker.io instead of
ghcr.io/svtechnmaa/mysql:8.2.0.
Code

kubernetes/freeradius/values.yaml[R19-23]

 init:
   image:
-    repository: mysql
+    registry: ghcr.io
+    repository: svtechnmaa/mysql
     tag: 8.2.0
Evidence
Both charts' init container templates concatenate only repository:tag. After this PR the
repository no longer includes the registry hostname, so the rendered image omits ghcr.io even
though values specify it.

kubernetes/freeradius/values.yaml[19-24]
kubernetes/freeradius/templates/Deployment.yaml[72-75]
kubernetes/freeradius/templates/init-db.yaml[27-31]
kubernetes/maxscale/values.yaml[6-11]
kubernetes/maxscale/templates/deployment.yaml[18-21]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`freeradius` and `maxscale` initContainers drop the newly-added `init.image.registry` field when rendering the init image.
### Issue Context
Values now set `init.image.registry: ghcr.io` and `init.image.repository: svtechnmaa/mysql`, but templates still render `repository:tag`.
### Fix Focus Areas
- kubernetes/freeradius/templates/Deployment.yaml[72-75]
- kubernetes/freeradius/templates/init-db.yaml[27-31]
- kubernetes/freeradius/values.yaml[19-24]
- kubernetes/maxscale/templates/deployment.yaml[18-21]
- kubernetes/maxscale/values.yaml[6-11]
### Suggested fix
Update both templates to render init images via the shared helper:
`image: {{ include "common.images.image" (dict "imageRoot" .Values.init.image "global" .Values.global) }}`
(or explicitly concatenate `{{ .Values.init.image.registry }}/{{ .Values.init.image.repository }}:{{ .Values.init.image.tag }}` with an empty-registry fallback).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Advisory comments

4. Duplicate dict key 🐞
Description
icinga2 passes the global key twice in the dict passed to common.images.image, which is a
copy/paste error and makes future edits error-prone. While it currently resolves to the same value,
it should be cleaned up to a single global entry.
Code

kubernetes/icinga2/templates/statefulset-distribute-master.yaml[56]

+          image: {{ include "common.images.image" ( dict "imageRoot" .Values.image "global" .Values.global "global" .Values.global) }}
Evidence
The template call includes "global" .Values.global twice; in Helm/Sprig maps, the latter
overwrites the former, so it’s redundant and confusing.

kubernetes/icinga2/templates/statefulset-distribute-master.yaml[53-57]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Redundant duplicate `global` key in the `dict` passed to `common.images.image` in the icinga2 master statefulset template.
### Fix Focus Areas
- kubernetes/icinga2/templates/statefulset-distribute-master.yaml[53-57]
### Suggested fix
Change:
`( dict "imageRoot" .Values.image "global" .Values.global "global" .Values.global)`
To:
`( dict "imageRoot" .Values.image "global" .Values.global )`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools




@coderabbitai Bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
kubernetes/postfix/values.yaml (1)

138-139: ⚠️ Potential issue | 🔴 Critical

Critical: Plaintext credentials and PII in values file.

The mtpUser and mtpPass contain plaintext SMTP credentials and an email address (PII) that are committed to version control. This creates multiple security and compliance risks:

  • Anyone with repository access can view these credentials
  • The password is exposed in git history permanently
  • The email address ha.do@svtech.com.vn is PII that should not be in default values

These credentials should be:

  1. Removed from values.yaml immediately
  2. Stored in Kubernetes Secrets
  3. Referenced in the chart templates via secret references
  4. The current password should be rotated since it's now compromised
🔐 Recommended approach

In values.yaml, replace with secret references:

 mtpConfig:
   mtpRelay: smtp-relay.gmail.com
   mtpPort: 25
-  mtpUser: ha.do@svtech.com.vn
-  mtpPass: Ha.Do@123456
+  # Reference existing secret for credentials
+  existingSecret: "postfix-smtp-credentials"
+  userKey: "smtp-user"
+  passwordKey: "smtp-password"
   mtpMessageSizeLimit: "102400000"
   mtpMailboxSizeLimit: "51200000000"
   mailLogFile: /dev/stdout

Users would create the secret separately:

kubectl create secret generic postfix-smtp-credentials \
  --from-literal=smtp-user='<actual-user>' \
  --from-literal=smtp-password='<actual-password>'

Then update templates to reference these secret values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/postfix/values.yaml` around lines 138 - 139, Remove the plaintext
values mtpUser and mtpPass from values.yaml (and any committed PII email like
ha.do@svtech.com.vn), create a Kubernetes Secret to hold smtp-user and
smtp-password, update the chart templates that currently read
.Values.mtpUser/.Values.mtpPass to instead reference the Secret (e.g., via
secretKeyRef in env or volumeMount templates), and note that the exposed
password must be rotated immediately; ensure values.yaml only contains a
placeholder flag or instructions to point at the secret name (e.g.,
smtpCredentialsSecret) rather than the actual credentials.
kubernetes/mage-ai/templates/redis-deployment.yaml (1)

28-52: 🧹 Nitpick | 🔵 Trivial

Consider adding a security context to the Redis container.

Static analysis (Trivy KSV-0118) flags that this deployment uses the default security context, which allows root privileges. While this is pre-existing and not introduced by this PR, consider adding explicit security constraints as a follow-up improvement:

🛡️ Example security context addition
       containers:
       - image: {{ include "common.images.image" ( dict "imageRoot" .Values.redis.image "global" .Values.global) }}
         {{- if .Values.redis.image.pullPolicy }}
         imagePullPolicy: {{ .Values.redis.image.pullPolicy }}
         {{- end}}
         name: redis
+        securityContext:
+          runAsNonRoot: true
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
         env:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/mage-ai/templates/redis-deployment.yaml` around lines 28 - 52, Add
an explicit securityContext to the Redis container definition (the container
named "redis" in the redis deployment template) to avoid running as root; set
runAsNonRoot: true and a non-root runAsUser (e.g., 1000), disable privilege
escalation (allowPrivilegeEscalation: false), drop all capabilities and set
readOnlyRootFilesystem: true (and any other organization-required constraints).
Place this under the container spec for the "redis" container so the
livenessProbe and env entries remain unchanged.
kubernetes/influxdb/templates/relay/deployment.yaml (1)

53-92: ⚠️ Potential issue | 🟠 Major

Harden relay container security defaults in this touched block.

This container block still misses key hardening controls (e.g., allowPrivilegeEscalation: false, dropping capabilities, seccomp profile, non-root/read-only rootfs), and static analysis flags these in the same changed region.

Suggested baseline hardening pattern
       containers:
         - name: relay
           image: {{ include "common.images.image" ( dict "imageRoot" .Values.relay.image "global" .Values.global) }}
           imagePullPolicy: {{ .Values.relay.image.pullPolicy | quote }}
-          {{- if .Values.relay.securityContext.enabled }}
           securityContext:
+            allowPrivilegeEscalation: false
+            runAsNonRoot: true
+            readOnlyRootFilesystem: true
+            capabilities:
+              drop:
+                - ALL
+            seccompProfile:
+              type: RuntimeDefault
+          {{- if .Values.relay.securityContext.enabled }}
+          securityContext:
             runAsUser: {{ .Values.relay.securityContext.runAsUser }}
           {{- end }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/influxdb/templates/relay/deployment.yaml` around lines 53 - 92,
The relay container's securityContext lacks hardening defaults; update the
container spec (the block that currently renders securityContext / runAsUser for
relay) to enforce container-level controls: set allowPrivilegeEscalation: false,
runAsNonRoot: true (in addition to runAsUser), readOnlyRootFilesystem: true, add
a capabilities section that drops all (drop: ["ALL"]), and set
securityContext.seccompProfile.type to "RuntimeDefault"; ensure these keys are
emitted by the template that renders .Values.relay.securityContext (or as sane
defaults when that value is enabled) so the relay container (image/command/args
block) has the tightened settings.
kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml (1)

23-60: ⚠️ Potential issue | 🟡 Minor

Add missing imagePullSecrets for ghcr.io registry.

The akvorado-inlet-deployment.yaml lacks imagePullSecrets configuration while other templates in this PR (redis, rundeck, docxtemplate) correctly define it. Since all akvorado images use ghcr.io/svtechnmaa/ registry, pods may fail to pull private images without proper authentication.

🔧 Suggested fix
     spec:
+      imagePullSecrets:
+      - name: ghcr-pull-secret
       initContainers:
         - name: wait-for-orchestrator
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml` around lines 23
- 60, The pod spec for the Akvorado inlet deployment is missing
imagePullSecrets, so the initContainer wait-for-orchestrator and the inlet
container may fail to pull images from ghcr.io; add an imagePullSecrets entry
under the top-level spec (the same place as
initContainers/containers/restartPolicy/volumes) that references the
chart/global image pull secret value used elsewhere (use the same value/key as
other templates in this PR), ensuring both the initContainer (name:
wait-for-orchestrator) and the main container (name: inlet) can authenticate to
ghcr.io.
kubernetes/grafana/values.yaml (1)

1-11: 🧹 Nitpick | 🔵 Trivial

Consider adding global.imageRegistry for consistency.

Similar to kubernetes/traefik/values.yaml, this file is missing global.imageRegistry while report-ui/values.yaml defines it. For consistency and to support centralized registry override, consider adding it.

🔧 Proposed fix
 global:
+  imageRegistry: ghcr.io
   adminUser:
   adminPassword:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/grafana/values.yaml` around lines 1 - 11, Add a
global.imageRegistry entry to the global block so the chart supports a
centralized registry override; specifically, update the global map (where keys
like adminUser, adminPassword, sharedVolume, externalIP, frontendVip,
mariadb-galera appear) to include imageRegistry: "" (or a sensible default) and
ensure any image references in the chart respect global.imageRegistry so
behavior matches other charts like report-ui and traefik that support a
centralized registry.
kubernetes/traefik/values.yaml (1)

3-6: 🧹 Nitpick | 🔵 Trivial

Add global.imageRegistry for consistency with other charts and to enable global registry overrides.

The deployment template uses the common.images.image helper which prioritizes global.imageRegistry over the chart's local image.registry when the global value is defined. This file is missing that configuration while other charts in the PR (e.g., report-ui/values.yaml) define global.imageRegistry: ghcr.io. Without this entry, global registry overrides will not apply to the traefik deployment.

🔧 Proposed fix
 global:
+  imageRegistry: ghcr.io
   sharedVolume:
     enabled: true
   sharedPersistenceVolume: []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/traefik/values.yaml` around lines 3 - 6, Add a
global.imageRegistry key under the existing global block so Helm's
common.images.image helper can pick up global registry overrides; update the
global mapping (where global.sharedVolume and global.sharedPersistenceVolume are
defined) to include global.imageRegistry (e.g., set to ghcr.io or an empty
string to allow global override) so the traefik deployment respects global image
registry settings referenced by common.images.image.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@kubernetes/airflow/charts/postgresql/values.yaml`:
- Around line 60-62: The specified image settings (registry: ghcr.io,
repository: svtechnmaa/postgresql, tag: 11.12.0 in values.yaml) point to a
non-existent or inaccessible image; either publish the image to ghcr.io and make
it public (ensure CI workflow builds and pushes svtechnmaa/postgresql:11.12.0
and that credentials/visibility are correct) or revert these fields back to the
official Bitnami/other known-working image reference (restore
registry/repository/tag to the original values) so Helm will pull a valid
PostgreSQL image.

In `@kubernetes/airflow/values.yaml`:
- Line 555: The BusyBox image YAML entries have an extra space after the colon
(e.g., the line with the key image and the template "{{
.Values.global.imageRegistry | default \"ghcr.io\" }}/svtechnmaa/busybox:1.33");
remove the extra space so there is exactly one space after the colon (change
occurrences of image:  "{{ ... }}" to image: "{{ ... }}") for the image keys
referenced (same pattern appears for the other BusyBox image entries).
- Around line 553-556: The scheduler and triggerer deployments render
extraInitContainers with toYaml, which leaves Helm template expressions
unevaluated; update the render calls in scheduler-deployment.yaml and
triggerer-deployment.yaml to wrap the existing toYaml calls with tpl, i.e.
replace the occurrences that reference .Values.scheduler.extraInitContainers and
.Values.triggerer.extraInitContainers so they use tpl (e.g. tpl (toYaml
.Values.scheduler.extraInitContainers) . | nindent ...) and tpl (toYaml
.Values.triggerer.extraInitContainers) . | nindent ... respectively, mirroring
the workers deployment behavior to ensure templates in those value blocks are
evaluated.
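The tpl wrapping described above can be sketched as follows (the indent value is illustrative — it must match the surrounding template):

```yaml
# before: embedded {{ ... }} expressions in the values are emitted verbatim
initContainers:
  {{- toYaml .Values.scheduler.extraInitContainers | nindent 8 }}

# after: tpl evaluates the embedded template expressions first
initContainers:
  {{- tpl (toYaml .Values.scheduler.extraInitContainers) . | nindent 8 }}
```

The same change applies to .Values.triggerer.extraInitContainers in the triggerer deployment.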

In `@kubernetes/akvorado/templates/akvorado-console-deployment.yaml`:
- Line 26: The Helm template call to include "common.images.image" is using an
extra space after the opening parenthesis in the dict call ("( dict ...") which
triggers YAMLlint; edit the template invocation for include
"common.images.image" so the dict is called without the extra space (use "(dict
\"imageRoot\" .Values.init.image \"global\" .Values.global)" style), ensuring
the include and dict tokens are adjacent.
- Line 43: The Helm template call to include "common.images.image" has
inconsistent spacing: remove the extra space after the opening parenthesis in
the include call so the dict invocation uses (dict "imageRoot"
.Values.console.image "global" .Values.global) instead of ( dict ... ); update
the include expression around include "common.images.image" to use consistent
spacing.

In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml`:
- Line 26: The Helm template include call for the container image uses
inconsistent spacing inside the braces—update the include invocation (include
"common.images.image" (dict "imageRoot" .Values.init.image "global"
.Values.global)) to remove the extra spaces inside the parentheses so it matches
the project's style (same change for the other occurrence at the include
"common.images.image" (dict "imageRoot" .Values.image "global" .Values.global)
line); keep the same symbols and values but normalize spacing around the
dict(...) and include(...) calls for YAMLlint consistency.

In `@kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml`:
- Line 26: Helm template expressions lack required internal spacing and trigger
YAMLlint `braces` errors; update the include/dict invocations so there is a
space after the opening handlebars and before the closing ones and also spaces
inside the dict call — e.g. change occurrences of image: {{ include
"common.images.image" ( dict "imageRoot" .Values.init.image "global"
.Values.global) }} to image: {{ include "common.images.image" ( dict "imageRoot"
.Values.init.image "global" .Values.global ) }}; apply the same spacing fix for
the other occurrences that call include "common.images.image" with dict
"imageRoot" and .Values.global.

In `@kubernetes/akvorado/templates/akvorado-redis-deployment.yaml`:
- Line 30: The Helm template braces lack proper spacing for YAMLlint; update the
image line using the include call (include "common.images.image" (dict
"imageRoot" .Values.redis.image "global" .Values.global)) so that there is a
single space after the opening {{ and before the closing }}, and ensure spacing
inside the parentheses around dict (e.g. {{ include "common.images.image" ( dict
"imageRoot" .Values.redis.image "global" .Values.global ) }}), replacing the
existing expression in the image field.

In `@kubernetes/clickhouse/values.yaml`:
- Line 43: Fix the typo in the Helm values key: replace the non-ASCII key name
"regístry" with the correct "registry" so Helm helpers and image resolution
logic (keys like registry used by templates) can find the registry value; update
any occurrences of "regístry" to "registry" in the values.yaml and ensure
templates referencing .Values.registry continue to work.

In `@kubernetes/icinga2/templates/statefulset-distribute-master.yaml`:
- Line 56: The dict passed to the include call for "common.images.image"
contains a duplicated key "global" (see the dict in the include
"common.images.image" invocation); remove the redundant second "global" entry so
the dict only includes unique keys (e.g., keep "imageRoot": .Values.image and
"global": .Values.global) to avoid ambiguity and redundancy.

In `@kubernetes/influxdb/values.yaml`:
- Around line 909-912: The image entry uses an unverified repository and an
invalid tag; update the image block (the image, registry, repository, and tag
keys) to use a verified Azure CLI image and a valid semantic version tag—for
example set registry to docker.io and repository to bitnami/azure-cli (or
registry mcr.microsoft.com and repository azure-cli) and change tag to a valid
semantic version such as 2.84.0 (or 2.22.1 if restoring the original release);
ensure the chosen repository is official (bitnami or Microsoft) and the tag
matches an existing release.

In `@kubernetes/maxscale/values.yaml`:
- Around line 8-10: The init container image is built only from
.Values.init.image.repository and .Values.init.image.tag, omitting
.Values.init.image.registry; update the init container in the deployment
template (init container spec in kubernetes/maxscale/templates/deployment.yaml)
to construct the full image from .Values.init.image.registry + "/" +
.Values.init.image.repository + ":" + .Values.init.image.tag (or reuse
common.images.image pattern with .Values.init.image) so the image becomes
ghcr.io/svtechnmaa/mysql:8.2.0 instead of svtechnmaa/mysql:8.2.0; ensure you
reference .Values.init.image.registry, .Values.init.image.repository and
.Values.init.image.tag accordingly.
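Either form described above works; a sketch of both, assuming the chart vendors the Bitnami common helper (the init container name is a placeholder):

```yaml
initContainers:
  - name: init
    # preferred: the common helper also honors global.imageRegistry overrides
    image: {{ include "common.images.image" (dict "imageRoot" .Values.init.image "global" .Values.global) }}
    # manual alternative without the helper:
    # image: "{{ .Values.init.image.registry }}/{{ .Values.init.image.repository }}:{{ .Values.init.image.tag }}"
```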

In `@kubernetes/netops-data/values.yaml`:
- Around line 25-27: The YAML value for the image tag under the
registry/repository block uses an unquoted numeric literal ("tag: 1.33") which
some parsers may treat as a float; update the tag value to a quoted string
(e.g., change the value for the key "tag" in the busybox image block) so it is
always treated as a string and consistent with the other quoted tags in this
file.

---

Outside diff comments:
In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml`:
- Around line 23-60: The pod spec for the Akvorado inlet deployment is missing
imagePullSecrets, so the initContainer wait-for-orchestrator and the inlet
container may fail to pull images from ghcr.io; add an imagePullSecrets entry
under the top-level spec (the same place as
initContainers/containers/restartPolicy/volumes) that references the
chart/global image pull secret value used elsewhere (use the same value/key as
other templates in this PR), ensuring both the initContainer (name:
wait-for-orchestrator) and the main container (name: inlet) can authenticate to
ghcr.io.

In `@kubernetes/grafana/values.yaml`:
- Around line 1-11: Add a global.imageRegistry entry to the global block so the
chart supports a centralized registry override; specifically, update the global
map (where keys like adminUser, adminPassword, sharedVolume, externalIP,
frontendVip, mariadb-galera appear) to include imageRegistry: "" (or a sensible
default) and ensure any image references in the chart respect
global.imageRegistry so behavior matches other charts like report-ui and traefik
that support a centralized registry.

In `@kubernetes/influxdb/templates/relay/deployment.yaml`:
- Around line 53-92: The relay container's securityContext lacks hardening
defaults; update the container spec (the block that currently renders
securityContext / runAsUser for relay) to enforce container-level controls: set
allowPrivilegeEscalation: false, runAsNonRoot: true (in addition to runAsUser),
readOnlyRootFilesystem: true, add a capabilities section that drops all (drop:
["ALL"]), and set securityContext.seccompProfile.type to "RuntimeDefault";
ensure these keys are emitted by the template that renders
.Values.relay.securityContext (or as sane defaults when that value is enabled)
so the relay container (image/command/args block) has the tightened settings.

In `@kubernetes/mage-ai/templates/redis-deployment.yaml`:
- Around line 28-52: Add an explicit securityContext to the Redis container
definition (the container named "redis" in the redis deployment template) to
avoid running as root; set runAsNonRoot: true and a non-root runAsUser (e.g.,
1000), disable privilege escalation (allowPrivilegeEscalation: false), drop all
capabilities and set readOnlyRootFilesystem: true (and any other
organization-required constraints). Place this under the container spec for the
"redis" container so the livenessProbe and env entries remain unchanged.

In `@kubernetes/postfix/values.yaml`:
- Around line 138-139: Remove the plaintext values mtpUser and mtpPass from
values.yaml (and any committed PII email like ha.do@svtech.com.vn), create a
Kubernetes Secret to hold smtp-user and smtp-password, update the chart
templates that currently read .Values.mtpUser/.Values.mtpPass to instead
reference the Secret (e.g., via secretKeyRef in env or volumeMount templates),
and note that the exposed password must be rotated immediately; ensure
values.yaml only contains a placeholder flag or instructions to point at the
secret name (e.g., smtpCredentialsSecret) rather than the actual credentials.
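The migration could look roughly like this (the secret name, key names, and env var names are placeholders, not taken from the chart):

```yaml
# values.yaml — only a pointer to the secret remains
smtpCredentialsSecret: postfix-smtp-credentials

# template — read the credentials via secretKeyRef
env:
  - name: SMTP_USER
    valueFrom:
      secretKeyRef:
        name: {{ .Values.smtpCredentialsSecret }}
        key: smtp-user
  - name: SMTP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.smtpCredentialsSecret }}
        key: smtp-password
```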

In `@kubernetes/traefik/values.yaml`:
- Around line 3-6: Add a global.imageRegistry key under the existing global
block so Helm's common.images.image helper can pick up global registry
overrides; update the global mapping (where global.sharedVolume and
global.sharedPersistenceVolume are defined) to include global.imageRegistry
(e.g., set to ghcr.io or an empty string to allow global override) so the
traefik deployment respects global image registry settings referenced by
common.images.image.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3f475d7b-a4ef-4ecc-8f1e-1842d5817802

📥 Commits

Reviewing files that changed from the base of the PR and between ce7f868 and d5cf96c.

📒 Files selected for processing (75)
  • kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml
  • kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml
  • kubernetes/airflow-postgresql-ha/values.yaml
  • kubernetes/airflow/charts/postgresql/values.yaml
  • kubernetes/airflow/templates/workers/worker-deployment.yaml
  • kubernetes/airflow/values.yaml
  • kubernetes/akvorado/templates/akvorado-console-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-redis-deployment.yaml
  • kubernetes/akvorado/values.yaml
  • kubernetes/arangodb/templates/arangodeployment.yaml
  • kubernetes/arangodb/values.yaml
  • kubernetes/asynqmon/templates/deployment.yaml
  • kubernetes/asynqmon/values.yaml
  • kubernetes/bngblaster/templates/deploy.yaml
  • kubernetes/bngblaster/values.yaml
  • kubernetes/clickhouse/templates/clickhouse.yaml
  • kubernetes/clickhouse/templates/clickhouseKeeper.yaml
  • kubernetes/clickhouse/values.yaml
  • kubernetes/csv-view/values.yaml
  • kubernetes/debuger/values.yaml
  • kubernetes/docxtemplate/templates/deploy.yaml
  • kubernetes/docxtemplate/values.yaml
  • kubernetes/elastalert-ui/templates/deployment.yaml
  • kubernetes/elastalert-ui/values.yaml
  • kubernetes/freeradius/values.yaml
  • kubernetes/gitlist/values.yaml
  • kubernetes/grafana/templates/deployment.yaml
  • kubernetes/grafana/templates/init-data.yaml
  • kubernetes/grafana/values.yaml
  • kubernetes/icinga2-report/templates/deployment.yaml
  • kubernetes/icinga2-report/values.yaml
  • kubernetes/icinga2/templates/deployment-icinga2-rebalance.yaml
  • kubernetes/icinga2/templates/statefulset-distribute-master.yaml
  • kubernetes/icinga2/templates/statefulset-distribute-satelite.yaml
  • kubernetes/icinga2/values.yaml
  • kubernetes/icingaweb/templates/deployment.yaml
  • kubernetes/icingaweb/values.yaml
  • kubernetes/influxdb/templates/influxdb/statefulset-high-availability.yaml
  • kubernetes/influxdb/templates/relay/deployment.yaml
  • kubernetes/influxdb/values.yaml
  • kubernetes/ingress-nginx/values.yaml
  • kubernetes/juniper-syslog-api/values.yaml
  • kubernetes/mage-ai/templates/redis-deployment.yaml
  • kubernetes/mage-ai/values.yaml
  • kubernetes/mariadb-galera/templates/statefulset.yaml
  • kubernetes/mariadb-galera/values.yaml
  • kubernetes/maxscale/values.yaml
  • kubernetes/nagvis/values.yaml
  • kubernetes/netops-data/templates/deployment.yaml
  • kubernetes/netops-data/values.yaml
  • kubernetes/postfix/values.yaml
  • kubernetes/preparation/values.yaml
  • kubernetes/proxy/values.yaml
  • kubernetes/redis/templates/deployment.yaml
  • kubernetes/redis/values.yaml
  • kubernetes/report-ui/templates/redis-deployment.yaml
  • kubernetes/report-ui/templates/reflex-backend.yaml
  • kubernetes/report-ui/templates/reflex-frontend.yaml
  • kubernetes/report-ui/templates/rq-dashboard-deployment.yaml
  • kubernetes/report-ui/values.yaml
  • kubernetes/rundeck-option-provider/templates/deployment.yaml
  • kubernetes/rundeck-option-provider/values.yaml
  • kubernetes/rundeck/templates/deployment.yaml
  • kubernetes/rundeck/values.yaml
  • kubernetes/snmp-manager/values.yaml
  • kubernetes/streamlit/templates/deploy.yaml
  • kubernetes/streamlit/values.yaml
  • kubernetes/syncthing/values.yaml
  • kubernetes/thruk/values.yaml
  • kubernetes/topology-app/values.yaml
  • kubernetes/topology-web/values.yaml
  • kubernetes/traefik/templates/deployment.yaml
  • kubernetes/traefik/values.yaml

Comment thread kubernetes/airflow/charts/postgresql/values.yaml
Comment thread kubernetes/airflow/values.yaml
Comment thread kubernetes/airflow/values.yaml
Comment thread kubernetes/akvorado/templates/akvorado-console-deployment.yaml
Comment thread kubernetes/akvorado/templates/akvorado-console-deployment.yaml
Comment thread kubernetes/clickhouse/values.yaml Outdated
Comment thread kubernetes/icinga2/templates/statefulset-distribute-master.yaml Outdated
Comment thread kubernetes/influxdb/values.yaml
Comment thread kubernetes/maxscale/values.yaml
Comment thread kubernetes/netops-data/values.yaml
@duchieu2k duchieu2k requested a review from a team April 3, 2026 03:16

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
kubernetes/freeradius/templates/Deployment.yaml (1)

69-74: ⚠️ Potential issue | 🟠 Major

Duplicate initContainers key will silently discard user-defined init containers.

Lines 69-70 conditionally render .Values.initContainers, but line 72 unconditionally declares another initContainers: block. In YAML, duplicate keys cause the second to override the first, so any user-provided initContainers from values will be silently ignored.

Additionally, ensure consistency: this template uses init.db.image (MySQL) for wait-for-mariadb, while init-db.yaml uses init.waitForDB.image (BusyBox) for the same-named container. Both approaches work for their respective commands, but the naming inconsistency may cause confusion.

Proposed fix to merge user initContainers with the hardcoded one
-      {{- if .Values.initContainers }}
-      initContainers: {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
-      {{- end }}
       initContainers:
+        {{- if .Values.initContainers }}
+        {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
+        {{- end }}
         - name: wait-for-mariadb
           image: {{ include "common.images.image" ( dict "imageRoot" .Values.init.db.image "global" .Values.global) }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/freeradius/templates/Deployment.yaml` around lines 69 - 74, The
template currently emits two initContainers keys causing user-defined
.Values.initContainers to be overwritten; change it to emit a single
initContainers list that first renders user entries via the
common.tplvalues.render include (referencing .Values.initContainers and the
helper common.tplvalues.render) and then appends the required wait-for-mariadb
container (name: wait-for-mariadb) so both sets are preserved; also unify the
image source for the appended container by using the same value key used
elsewhere (either .Values.init.db.image or .Values.init.waitForDB.image) so the
included image helper (common.images.image) is fed a consistent value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@kubernetes/maxscale/templates/deployment.yaml`:
- Line 20: The YAML linter error is caused by inconsistent spacing around the
helper call; normalize the spacing in the include invocation so it matches the
project's pattern: use {{ include "common.images.image" (dict "imageRoot"
.Values.init.image "global" .Values.global) }} by removing the extra space after
the opening parenthesis and ensuring a single space between the include, the
helper name, and the (dict ...) argument; update the line containing the include
of "common.images.image" and the (dict "imageRoot" .Values.init.image "global"
.Values.global) argument accordingly.

In `@kubernetes/telegraf/values.yaml`:
- Around line 22-23: Update the init container image template to pass the global
image registry into the common.images.image helper so global.imageRegistry
overrides apply: change the include call that currently uses (dict "imageRoot"
.Values.init.image) to also include "global" .Values.global when rendering the
init container image (the same pattern used for the main container). Also verify
the BusyBox reference (repository svtechnmaa/busybox and tag 1.33) is publicly
available; if not, replace it with an official or accessible image in
Values.init.image.
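The fix amounts to passing .Values.global into the helper, mirroring the main container (sketch; the init container name is a placeholder):

```yaml
initContainers:
  - name: init
    # before: image: {{ include "common.images.image" (dict "imageRoot" .Values.init.image) }}
    image: {{ include "common.images.image" (dict "imageRoot" .Values.init.image "global" .Values.global) }}
```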

---

Outside diff comments:
In `@kubernetes/freeradius/templates/Deployment.yaml`:
- Around line 69-74: The template currently emits two initContainers keys
causing user-defined .Values.initContainers to be overwritten; change it to emit
a single initContainers list that first renders user entries via the
common.tplvalues.render include (referencing .Values.initContainers and the
helper common.tplvalues.render) and then appends the required wait-for-mariadb
container (name: wait-for-mariadb) so both sets are preserved; also unify the
image source for the appended container by using the same value key used
elsewhere (either .Values.init.db.image or .Values.init.waitForDB.image) so the
included image helper (common.images.image) is fed a consistent value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: ad5f9a63-69d7-4b49-a2eb-bd59aac12541

📥 Commits

Reviewing files that changed from the base of the PR and between d5cf96c and c402797.

📒 Files selected for processing (7)
  • kubernetes/clickhouse/values.yaml
  • kubernetes/freeradius/templates/Deployment.yaml
  • kubernetes/freeradius/templates/init-db.yaml
  • kubernetes/freeradius/values.yaml
  • kubernetes/ingress-nginx/templates/deployment.yaml
  • kubernetes/maxscale/templates/deployment.yaml
  • kubernetes/telegraf/values.yaml

Comment thread kubernetes/maxscale/templates/deployment.yaml
Comment thread kubernetes/telegraf/values.yaml

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@kubernetes/icinga2/templates/statefulset-distribute-master.yaml`:
- Line 441: Remove the trailing inline comment that duplicates the computed
image string on the image line that calls include "common.images.image" with
dict "imageRoot" .Values.master.redis.image "global" .Values.global;
specifically delete the "# {{ .Values.master.redis.image.repository }}:{{
.Values.master.redis.image.tag }}" fragment so the template line only contains
the include call (and clean up any leftover spacing).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1c286d84-773e-408f-b8fc-676deb98f883

📥 Commits

Reviewing files that changed from the base of the PR and between c402797 and 5b71c6d.

📒 Files selected for processing (2)
  • kubernetes/icinga2/templates/statefulset-distribute-master.yaml
  • kubernetes/telegraf/templates/deployment.yaml

Comment thread kubernetes/icinga2/templates/statefulset-distribute-master.yaml Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
kubernetes/report-ui/templates/rq-dashboard-deployment.yaml (1)

25-47: ⚠️ Potential issue | 🟠 Major

Add an explicit container security context in this deployment block.

The rq-dashboard container still runs with default security context (see static analysis findings in this range). In restricted clusters, this commonly gets denied at admission time. Please set an explicit hardened securityContext (e.g., no privilege escalation, drop capabilities, seccomp profile) and keep resources configurable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/report-ui/templates/rq-dashboard-deployment.yaml` around lines 25
- 47, Add an explicit hardened container securityContext for the rq-dashboard
container: in the container spec for name: rq-dashboard, add a securityContext
block that sets allowPrivilegeEscalation: false, runAsNonRoot: true (and
runAsUser: 1000 or configurable via
.Values.dashboard.securityContext.runAsUser), drops all capabilities
(capabilities.drop: ["ALL"]), and sets seccompProfile.type: RuntimeDefault (or
configurable via .Values.dashboard.securityContext.seccompProfile). Also ensure
resources remain configurable by referencing .Values.dashboard.resources in the
container spec so resource limits/requests can be overridden.
kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml (1)

43-58: 🧹 Nitpick | 🔵 Trivial

Consider adding securityContext and resource limits in a follow-up.

Trivy flagged several security best practices for the inlet container (missing securityContext.runAsNonRoot, allowPrivilegeEscalation: false, resource limits, etc.). These are pre-existing issues unrelated to this PR's scope, but worth addressing for production hardening.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml` around lines 43
- 58, Add a follow-up change to the inlet container spec (the container named
"inlet" in the Akvorado inlet deployment template) to harden it: add a
securityContext block with runAsNonRoot: true, a non-root runAsUser (e.g. 1000),
allowPrivilegeEscalation: false and readOnlyRootFilesystem where appropriate;
and add resources with requests and limits for cpu and memory (e.g.
requests.cpu/memory and limits.cpu/memory) so the pod is not unbounded. Keep
these additions adjacent to the existing container fields (image,
imagePullPolicy, ports, volumeMounts) and ensure they are templated using
.Values (e.g. .Values.inlet.securityContext and .Values.inlet.resources) so
defaults can be overridden.
♻️ Duplicate comments (1)
kubernetes/influxdb/values.yaml (1)

912-914: ⚠️ Potential issue | 🟠 Major

2.221 does not match Azure CLI’s published versioning.

Azure CLI releases are published as major.minor.patch versions such as 2.22.1 and 2.84.0, with Microsoft documenting a monthly release cadence for supported versions. 2.221 does not fit that history, so this tag looks unverifiable and risks breaking the Azure backup job if the mirror does not actually publish it. Please switch back to a verifiable tag or document the mirror-specific tag format if this is intentional. (pypi.org)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/influxdb/values.yaml` around lines 912 - 914, The tag value
"2.221" for the image repository "svtechnmaa/azure-cli" is not a valid Azure CLI
release format; update the tag in the values.yaml (the "tag:" field) to a
verifiable upstream Azure CLI release (e.g., a major.minor.patch like "2.22.1"
or "2.84.0") or, if this mirror uses a custom scheme, replace the tag with a
documented mirror-specific tag and add a brief comment explaining the mirror
format and a link to the mirror's tag list so the backup job can be reliably
verified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml`:
- Around line 366-369: The template currently adds imagePullSecrets twice: once
via the helper include {{ include "postgresql-ha.image.pullSecrets" . }} and
again with a hardcoded imagePullSecrets: - name: ghcr-pull-secret block; remove
the hardcoded block to avoid duplicate entries and rely on the helper include
which sources secrets from global.imagePullSecrets, or conversely remove the
include and keep the hardcoded block—prefer the first option: delete the
explicit imagePullSecrets: - name: ghcr-pull-secret entry and ensure values.yaml
sets global.imagePullSecrets: ["ghcr-pull-secret"] so the include
(postgresql-ha.image.pullSecrets) provides the secret.

In `@kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml`:
- Around line 531-534: The statefulset currently has a hardcoded
imagePullSecrets block (entry "- name: ghcr-pull-secret") that duplicates the
helper-generated entries from {{ include "postgresql-ha.image.pullSecrets" . }}
(same issue as in pgpool); fix by removing the hardcoded block so only the
helper-generated imagePullSecrets is used, or alternatively remove the helper
call and convert imagePullSecrets to the single hardcoded value—ensure only one
source of imagePullSecrets remains (reference the imagePullSecrets YAML key and
the include "postgresql-ha.image.pullSecrets" helper and the hardcoded
ghcr-pull-secret entry when making the change).

In `@kubernetes/arangodb/templates/arangodeployment.yaml`:
- Line 8: The YAML line using the Helm template include has extra spaces inside
the parentheses; update the expression include "common.images.image" ( dict
"imageRoot" .Values.image "global" .Values.global ) by removing the padding
inside the parentheses so it becomes include "common.images.image" (dict
"imageRoot" .Values.image "global" .Values.global) (i.e., adjust the
include/dict invocation in the arangodeployment.yaml so there is no extra space
inside the parentheses to satisfy YAMLlint).
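A before/after sketch of that line (the image: key is an assumption for context; only the include expression itself is quoted from the template):

```yaml
# Before: padding inside the parentheses trips yamllint
#   image: {{ include "common.images.image" ( dict "imageRoot" .Values.image "global" .Values.global ) }}
# After: no extra spaces inside the parentheses
image: {{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}
```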

In `@kubernetes/icinga2/templates/statefulset-distribute.yaml`:
- Around line 403-406: The imagePullSecrets block is currently inside the
conditional governed by .Values.global.sharedVolume.enabled so it only renders
when shared persistence is enabled; move the imagePullSecrets mapping (the lines
creating "- name: ghcr-pull-secret") out of that conditional and place it
alongside other pod spec-level fields so imagePullSecrets is always emitted
regardless of .Values.global.sharedVolume.enabled, ensuring you remove it from
inside the "if .Values.global.sharedVolume.enabled" block and re-add it after
that conditional within the StatefulSet pod spec.
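The intended layout can be sketched as follows; the volume fields inside the conditional are placeholders to show the shape, not the actual icinga2 template content:

```yaml
spec:
  template:
    spec:
      # Always rendered, regardless of shared persistence
      imagePullSecrets:
        - name: ghcr-pull-secret
      {{- if .Values.global.sharedVolume.enabled }}
      # Only shared-volume-specific fields stay inside the conditional
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data-pvc  # placeholder name
      {{- end }}
```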

In `@kubernetes/influxdb/values.yaml`:
- Around line 65-66: Update the inline image documentation comments to reflect
the new default image registry and repository by replacing references to
Bitnami/Docker Hub tags with ghcr.io/svtechnmaa/*; specifically search for the
YAML keys "registry" and "repository" and the comment blocks adjacent to them
(e.g., the comment blocks near lines where registry: ghcr.io and repository:
svtechnmaa/influxdb are defined) and change any example/image doc text and
sample tag guidance at the other occurrences noted (around the blocks for lines
referenced in the review: the same comment pairs near 428-429, 816-817, 887-888,
912-913) so examples and troubleshooting tips point to ghcr.io/svtechnmaa images
and tags instead of Bitnami/Docker Hub.

---

Outside diff comments:
In `@kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml`:
- Around line 43-58: Add a follow-up change to the inlet container spec (the
container named "inlet" in the Akvorado inlet deployment template) to harden it:
add a securityContext block with runAsNonRoot: true, a non-root runAsUser (e.g.
1000), allowPrivilegeEscalation: false and readOnlyRootFilesystem where
appropriate; and add resources with requests and limits for cpu and memory (e.g.
requests.cpu/memory and limits.cpu/memory) so the pod is not unbounded. Keep
these additions adjacent to the existing container fields (image,
imagePullPolicy, ports, volumeMounts) and ensure they are templated using
.Values (e.g. .Values.inlet.securityContext and .Values.inlet.resources) so
defaults can be overridden.
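A minimal sketch of the hardened inlet container, assuming the .Values.inlet.securityContext and .Values.inlet.resources keys suggested above are added to values.yaml (they do not yet exist in the chart):

```yaml
containers:
  - name: inlet
    image: {{ include "common.images.image" (dict "imageRoot" .Values.inlet.image "global" .Values.global) }}
    securityContext:
      runAsNonRoot: true
      # Non-root UID; 1000 is only a default suggestion
      runAsUser: {{ .Values.inlet.securityContext.runAsUser | default 1000 }}
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
    # Bounded CPU/memory, overridable via values
    resources: {{- toYaml .Values.inlet.resources | nindent 6 }}
```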

In `@kubernetes/report-ui/templates/rq-dashboard-deployment.yaml`:
- Around line 25-47: Add an explicit hardened container securityContext for the
rq-dashboard container: in the container spec for name: rq-dashboard, add a
securityContext block that sets allowPrivilegeEscalation: false, runAsNonRoot:
true (and runAsUser: 1000 or configurable via
.Values.dashboard.securityContext.runAsUser), drops all capabilities
(capabilities.drop: ["ALL"]), and sets seccompProfile.type: RuntimeDefault (or
configurable via .Values.dashboard.securityContext.seccompProfile). Also ensure
resources remain configurable by referencing .Values.dashboard.resources in the
container spec so resource limits/requests can be overridden.
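Under the same assumptions (the .Values.dashboard.securityContext keys are suggested additions, not existing chart values), the rq-dashboard container could be hardened like this:

```yaml
containers:
  - name: rq-dashboard
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: {{ .Values.dashboard.securityContext.runAsUser | default 1000 }}
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
    # Limits/requests stay overridable via values
    resources: {{- toYaml .Values.dashboard.resources | nindent 6 }}
```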

---

Duplicate comments:
In `@kubernetes/influxdb/values.yaml`:
- Around line 912-914: The tag value "2.221" for the image repository
"svtechnmaa/azure-cli" is not a valid Azure CLI release format; update the tag
in the values.yaml (the "tag:" field) to a verifiable upstream Azure CLI release
(e.g., a major.minor.patch like "2.22.1" or "2.84.0") or, if this mirror uses a
custom scheme, replace the tag with a documented mirror-specific tag and add a
brief comment explaining the mirror format and a link to the mirror's tag list
so the backup job can be reliably verified.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8f508d18-071c-4973-b252-faf1e4a87d16

📥 Commits

Reviewing files that changed from the base of the PR and between 9384090 and 7997957.

📒 Files selected for processing (18)
  • kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml
  • kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml
  • kubernetes/akvorado/templates/akvorado-console-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-inlet-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-orchestrator-deployment.yaml
  • kubernetes/akvorado/templates/akvorado-redis-deployment.yaml
  • kubernetes/arangodb/templates/arangodeployment.yaml
  • kubernetes/arangodb/templates/initjob.yaml
  • kubernetes/arangodb/values.yaml
  • kubernetes/filebeat/templates/filebeat.yml
  • kubernetes/icinga2-report/templates/deployment.yaml
  • kubernetes/icinga2/templates/statefulset-distribute.yaml
  • kubernetes/influxdb/values.yaml
  • kubernetes/ingress-nginx/templates/deployment.yaml
  • kubernetes/logstash/templates/logstash.yml
  • kubernetes/mariadb-galera/values.yaml
  • kubernetes/report-ui/templates/redis-deployment.yaml
  • kubernetes/report-ui/templates/rq-dashboard-deployment.yaml

Comment on lines +366 to +369

imagePullSecrets:
- name: ghcr-pull-secret


⚠️ Potential issue | 🟠 Major

Potential duplicate imagePullSecrets entries.

Line 39 already calls {{ include "postgresql-ha.image.pullSecrets" . }}, which generates imagePullSecrets from global.imagePullSecrets and individual image pullSecrets (see kubernetes/airflow-postgresql-ha/templates/_helpers.tpl). Adding a hardcoded entry here may cause duplicates if ghcr-pull-secret is also configured in global.imagePullSecrets.

Consider either:

  1. Remove the hardcoded block and rely on global.imagePullSecrets: ["ghcr-pull-secret"] in values.yaml, or
  2. Remove the postgresql-ha.image.pullSecrets include at line 39 and use only the hardcoded approach.
Option 1: Remove hardcoded block (preferred)
        {{- include "common.tplvalues.render" (dict "value" .Values.pgpool.sidecars "context" $) | nindent 8 }}
        {{- end }}
-        
-      imagePullSecrets:
-        - name: ghcr-pull-secret
-
      volumes:

Then ensure values.yaml has:

global:
  imagePullSecrets:
    - ghcr-pull-secret
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
imagePullSecrets:
- name: ghcr-pull-secret
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/airflow-postgresql-ha/templates/pgpool/deployment.yaml` around
lines 366 - 369, The template currently adds imagePullSecrets twice: once via
the helper include {{ include "postgresql-ha.image.pullSecrets" . }} and again
with a hardcoded imagePullSecrets: - name: ghcr-pull-secret block; remove the
hardcoded block to avoid duplicate entries and rely on the helper include which
sources secrets from global.imagePullSecrets, or conversely remove the include
and keep the hardcoded block—prefer the first option: delete the explicit
imagePullSecrets: - name: ghcr-pull-secret entry and ensure values.yaml sets
global.imagePullSecrets: ["ghcr-pull-secret"] so the include
(postgresql-ha.image.pullSecrets) provides the secret.

Comment on lines +531 to +534

imagePullSecrets:
- name: ghcr-pull-secret


⚠️ Potential issue | 🟠 Major

Potential duplicate imagePullSecrets entries (same issue as pgpool).

Line 43 already includes {{ include "postgresql-ha.image.pullSecrets" . }} which generates imagePullSecrets from global.imagePullSecrets. Adding a hardcoded block here creates the same duplication risk as in pgpool/deployment.yaml.

Apply the same fix: either rely on the helper with global.imagePullSecrets configuration, or remove the helper and use only hardcoded values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@kubernetes/airflow-postgresql-ha/templates/postgresql/statefulset.yaml`
around lines 531 - 534, The statefulset currently has a hardcoded
imagePullSecrets block (entry "- name: ghcr-pull-secret") that duplicates the
helper-generated entries from {{ include "postgresql-ha.image.pullSecrets" . }}
(same issue as in pgpool); fix by removing the hardcoded block so only the
helper-generated imagePullSecrets is used, or alternatively remove the helper
call and convert imagePullSecrets to the single hardcoded value—ensure only one
source of imagePullSecrets remains (reference the imagePullSecrets YAML key and
the include "postgresql-ha.image.pullSecrets" helper and the hardcoded
ghcr-pull-secret entry when making the change).

Comment thread kubernetes/arangodb/templates/arangodeployment.yaml
Comment thread kubernetes/icinga2/templates/statefulset-distribute.yaml
Comment thread kubernetes/influxdb/values.yaml
rootPassword: ""

kube-arangodb:
operator:
Contributor Author


The image uses the template {{ .Values.operator.image }}, and we don't clone kube-arangodb, so we cannot update the registry.

