To add a scrape_config to the kube-prometheus-stack Helm chart (formerly the prometheus-operator chart), you have two primary approaches. Here's how to do it:
The Operator-centric way to configure scraping is via ServiceMonitor or PodMonitor CRDs.
**1. Create a Service for your application**

```yaml
# myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: my-namespace
  labels:
    app: myapp  # Used by the ServiceMonitor to identify targets
spec:
  ports:
    - name: http-metrics
      port: 8080  # Port where metrics are exposed
  selector:
    app: myapp  # Selects pods with this label
```
**2. Create a ServiceMonitor**

```yaml
# myapp-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
  namespace: monitoring  # Same namespace as the Prometheus Operator
spec:
  jobLabel: myapp-metrics
  endpoints:
    - port: http-metrics  # Matches the Service port name
      interval: 30s
  namespaceSelector:
    matchNames:
      - my-namespace  # Where your Service lives
  selector:
    matchLabels:
      app: myapp  # Matches the Service labels
```
**3. Apply the resources**

```bash
kubectl apply -f myapp-service.yaml
kubectl apply -f myapp-servicemonitor.yaml
```
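Note: by default the kube-prometheus-stack chart configures Prometheus to select only ServiceMonitors that carry the Helm release label. If your ServiceMonitor isn't picked up, you can relax that selector in your values.yaml:

```yaml
prometheus:
  prometheusSpec:
    # Select all ServiceMonitors, not just those labeled with the Helm release
    serviceMonitorSelectorNilUsesHelmValues: false
```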
For edge cases where ServiceMonitor isn't sufficient, use a raw Prometheus scrape config:
**1. Create a Secret with the additional scrape config**

```yaml
# additional-scrape-configs.yaml
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
  namespace: monitoring
type: Opaque
stringData:
  additional-scrape-configs.yaml: |-
    - job_name: myapp
      scrape_interval: 30s
      static_configs:
        - targets: [myapp-service.my-namespace.svc.cluster.local:8080]
```
**2. Update the Helm values (`values.yaml`)**

```yaml
prometheus:
  prometheusSpec:
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-scrape-configs
      key: additional-scrape-configs.yaml
```
**3. Upgrade the Helm release**

```bash
helm upgrade prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring \
  -f values.yaml
```
Alternatively, skip the Secret and add the scrape config directly to your Helm values.yaml:

```yaml
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: myapp
        scrape_interval: 30s
        static_configs:
          - targets:
              - myapp-service.my-namespace.svc.cluster.local:8080
```
**Verify the configuration**

Check the Prometheus targets UI:

```bash
kubectl port-forward svc/prometheus-stack-kube-prom-prometheus 9090:9090 -n monitoring
```

Then open http://localhost:9090/targets.

Check the rendered Prometheus config:

```bash
kubectl exec -it prometheus-prometheus-stack-kube-prom-prometheus-0 -n monitoring -- cat /etc/prometheus/config_out/prometheus.env.yaml
```
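You can also confirm that the application itself serves Prometheus-format metrics. This sketch assumes the endpoint path is `/metrics`; adjust if your app exposes metrics elsewhere:

```bash
# Forward the app's Service port locally, then fetch the raw metrics
kubectl port-forward svc/myapp-service 8080:8080 -n my-namespace &
sleep 2
curl -s http://localhost:8080/metrics | head
```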
- ServiceMonitor is the preferred method (Operator-native).
- Use `additionalScrapeConfigs` sparingly; raw configs bypass the Operator's abstractions and validation.
- Ensure your application's metrics endpoint is reachable and serves metrics in the Prometheus exposition format.
- For TLS or authentication, add `scheme`, `tlsConfig`, or `basicAuth` to the `endpoints` section.
Example for TLS in a ServiceMonitor:

```yaml
endpoints:
  - port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true  # For self-signed certs
```
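And a sketch of `basicAuth` on an endpoint, referencing credentials stored in a Secret (the Secret name and keys here are hypothetical):

```yaml
endpoints:
  - port: http-metrics
    basicAuth:
      username:
        name: myapp-metrics-auth  # hypothetical Secret in the ServiceMonitor's namespace
        key: username
      password:
        name: myapp-metrics-auth
        key: password
```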
---

```bash
kubectl rollout restart deployment scalingtest --namespace keda
kubectl cp keda/scalingtest-55b595f7b5-856q2:/tmp/service.log service.log
```
I can see you're experiencing an OOMKilled (Out of Memory) error with your container. The container was terminated with exit code 137, which indicates it was killed due to memory constraints. Let me examine your deployment configuration to help diagnose and fix this issue.
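To confirm the kill reason, you can inspect the pod's last state (pod name taken from the `kubectl cp` command above):

```bash
# Expect: Last State: Terminated, Reason: OOMKilled, Exit Code: 137
kubectl describe pod scalingtest-55b595f7b5-856q2 -n keda | grep -A 5 "Last State"
```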
Now let me check the Kubernetes deployment manifest to see the resource limits.

Now I can see the issue! The problem is with the resource limits in your Kubernetes deployment. Let me analyze the issues:
**Root cause:** your container is being killed due to extremely low memory limits. Specifically:
- **Memory limit too low:** you have set `memory: "6Mi"` (6 mebibytes) as the limit, which is extremely small for a Go application.
- **Memory request too low:** the request is set to `memory: "5Mi"` (5 mebibytes).
- **Application memory usage:** looking at your Go code, the application:
  - uses Prometheus client libraries (which consume memory)
  - has goroutines that perform calculations
  - logs to files
  - maintains HTTP server state
A rough memory budget for this workload:

- Go runtime overhead: ~10-20 MB minimum
- Prometheus client libraries: ~5-10 MB
- HTTP server: ~2-5 MB
- Application logic: ~2-5 MB
- Buffer space: ~5-10 MB

**Total minimum: ~25-50 MB**
Recommended fix:

```yaml
resources:
  requests:
    memory: "64Mi"  # 64 mebibytes
    cpu: "100m"
  limits:
    memory: "128Mi"  # 128 mebibytes
    cpu: "200m"
```

A more conservative alternative if node memory is tight:

```yaml
resources:
  requests:
    memory: "32Mi"
    cpu: "50m"
  limits:
    memory: "64Mi"
    cpu: "100m"
```

Or set requests only:

```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "100m"
  # No limits - allows unlimited memory usage
```
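If the app runs on Go 1.19 or newer, you can also point the runtime at the container limit with the `GOMEMLIMIT` environment variable so the garbage collector works within it. A sketch, assuming the recommended 128Mi limit above (the exact value is an assumption, not from your manifest):

```yaml
# Container spec addition: keep the soft limit a little below the hard 128Mi limit
env:
  - name: GOMEMLIMIT
    value: "100MiB"
```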
Additional recommendations:

- **Optimize the Go build:** your Dockerfile already uses good optimization flags (`-ldflags="-s -w"`).
- **Consider memory profiling:** add memory profiling to your Go application to identify memory leaks (see the sketch after this list).
- **Monitor memory usage:** use the Prometheus metrics you're already exposing to track actual memory consumption.
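A minimal sketch of wiring up Go's built-in pprof profiler; the separate port 6060 and standalone goroutine are assumptions, not taken from your code:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve pprof on a separate port so it stays off the public listener.
	// Inspect heap usage with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... rest of the application (metrics server, workers, etc.)
	select {}
}
```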
Would you like me to help you update the victim.yaml file with appropriate memory limits?