# Kubernetes Controllers
Typically, PODs are scheduled via a higher-level abstraction that relies on controllers. Controllers 'find' PODs using `.spec.selector`.
## Deployment

A common way to create groups of PODs with regard to the number of instances running. "Replaces" `ReplicaSet`.

- No POD location guarantees by default
- PODs are suffixed with a random string
- PODs are started at the same time
- PODs are updated one after another (the "previous" one must be `READY` in order to continue) by default. Refer to `DeploymentStrategy` for all options:
  - `Recreate` - terminate all, start all
  - `RollingUpdate` - don't let all PODs go down, upgrade gradually
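The behavior above can be sketched as a minimal manifest; all names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:              # the controller 'finds' its PODs via this selector
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate  # the default; the other option is Recreate
    rollingUpdate:
      maxUnavailable: 1  # don't let all PODs go down at once
  template:              # changing this template triggers a new revision
    metadata:
      labels:
        app: web         # must match .spec.selector
    spec:
      containers:
      - name: web
        image: nginx:1.25
```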
## StatefulSet

Don't use it if the stable network identity described below is not required.

- No POD location guarantees by default
- By default PODs are started one by one (the previous one must be `READY` in order to continue). It is possible to start all at once (see `podManagementPolicy`). The order of deletion is not specified.
- PODs have a stable network identifier (predictable, ordered names instead of a random hash). This includes PVCs as well: the Nth POD will use the Nth PVC.
- A headless `Service` is required (must be added manually)
- PODs are updated in order (from the highest ordinal to the lowest), regardless of the `podManagementPolicy` setting, by default. Refer to `StatefulSetUpdateStrategy`:
  - `OnDelete` - no automatic action when an update is triggered
  - `RollingUpdate` - don't let all PODs go down, upgrade gradually. `StatefulSet` includes a `partition` parameter (default = 0). Only PODs with an ID greater than or equal to `partition` will be updated; the rest will not, even when manually deleted.
Mind:

- https://github.com/kubernetes/kubernetes/issues/82612 (change cause message doesn't work for StatefulSet)
- https://github.com/kubernetes/kubernetes/issues/67250 (cannot rollout undo statefulset from broken StatefulSet replica)
- https://github.com/kubernetes/website/issues/17842 (headless Service requirement clarification)
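A sketch of the points above, including the headless `Service` that has to be added manually; all names and the image are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None           # headless: gives PODs stable DNS names
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db           # the headless Service above
  replicas: 3               # PODs get stable, ordered names: db-0, db-1, db-2
  podManagementPolicy: OrderedReady   # default; Parallel starts all at once
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 1          # only PODs with ordinal >= 1 get updated
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
```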
## DaemonSet

The only allowed POD `RestartPolicy` is `Always`.
- POD location guarantees: one POD on each node
- PODs are started at the same time
- Adding `Nodes` adds `DaemonSet` PODs (with regard to `tolerations`, `nodeSelector`, etc. - see this doc)
- PODs are updated one after another (the "previous" one must be `READY` in order to continue) by default. Refer to `DaemonSetUpdateStrategy`:
  - `RollingUpdate` - don't let all PODs go down, upgrade gradually
  - `OnDelete` - no automatic action when an update is triggered
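A sketch of a DaemonSet with the update strategy and a toleration as mentioned above; the name, image, and toleration are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate    # the default; OnDelete is the other option
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:         # example: also run on control-plane nodes
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: agent
        image: fluent/fluent-bit:2.2
```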
## ReplicaSet

In general: replaced by `kind: Deployment`. Can 'capture' other manually created PODs if they match its labels. Internally, the Deployment uses a ReplicaSet.

- No POD location guarantees by default
- Doesn't support updates
- Doesn't support `rollout` commands
- Starts and terminates all PODs at once
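The 'capture' behavior follows from the selector: any running POD whose labels match `.spec.selector` counts toward `replicas`, even if it was created by hand. A sketch, with placeholder names:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web      # a manually created POD labeled app=web gets 'captured'
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```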
## ReplicationController

The ReplicationController simply ensures that the desired number of PODs matching its label selector are operational. It was replaced by the ReplicaSet.
## Job

Schedules PODs and ensures that a specified number of them completes successfully. It is possible to run PODs in parallel. The Job may be used for work queue processing; just remember to set `.spec.completions` to 1 and `.spec.parallelism` > 0.

- No POD location guarantees by default
- Doesn't support `rollout` commands
- By default runs only one POD
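A sketch of the work-queue setup mentioned above; the name, image, and command are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-worker
spec:
  completions: 1          # per the note above, for work-queue style processing
  parallelism: 3          # run several worker PODs at once
  template:
    spec:
      restartPolicy: Never     # Jobs allow only Never or OnFailure
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo processing item && sleep 5"]
```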
## Rollout

For interaction with the selected controller kind, the `kubectl rollout` command is used. The `.spec.template` must be changed in order to trigger an update; otherwise the same deployment revision is used.

| Information | Command |
|---|---|
| Status | `kubectl rollout status <kind, e.g. deployment, statefulset> <name>` |
| History (the CHANGE-CAUSE is copied from the `kubernetes.io/change-cause` annotation; more details can be obtained using `--revision`) | `kubectl rollout history <kind, e.g. deployment> <name> [--revision=N]` |
| Rollback | `kubectl rollout undo <kind, e.g. deployment> <name>` |
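The CHANGE-CAUSE column is filled from an annotation on the object itself; a fragment (the message text is an example):

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "update image to nginx:1.25"
```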