Kubernetes OneTimeCommand Executor. Create ad-hoc Kubernetes jobs that can be peer-reviewed, approved, and applied to production environments. This allows review of actions in the production environment, e.g. retrieving (limited) production data for debugging purposes. Pipeline output can be encrypted to make sure production data remains sufficiently protected.
- replace `microservice_name` with your value in service-cm
- add the cqlsh query into `query`
- optional: for very specific use cases the annotations must be modified in patch.yaml
- change the date on which you want to execute the query in service-cm.yaml

Tips: remember that the query should end with ";". Also use the table name qualified with the keyspace name, e.g. `providers.client_authentication_means`.
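Putting the steps together, the service-cm values might look like the sketch below. The key names here are assumptions based on the steps above, not the repo's authoritative schema — check the actual template in the repository:

```yaml
# Hypothetical sketch of the relevant service-cm.yaml values;
# key names are illustrative only.
microservice_name: my-service
execution_date: "25-09-2020"
query: |
  SELECT * FROM providers.client_authentication_means LIMIT 10;
```

Note the keyspace-qualified table name and the trailing ";" from the tips above.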
- replace `microservice_name` with your values in service-cm
- replace `database_name` with your values in service-cm; `rds_hostname_prefix` is mapped to `RDSInstances[HostPrefix]` and `rds_instance_name` is mapped to `RDSInstances[Name]`
- add the SQL query into `query`
- change the date on which you want to execute the query in service-cm.yaml
- optional: for very specific use cases the annotations must be modified in patch.yaml
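As with the Cassandra case, the RDS values might be assembled as in the sketch below; the key names are illustrative assumptions, while the `RDSInstances[...]` mapping comes from the steps above:

```yaml
# Hypothetical sketch of service-cm.yaml values for the RDS variant;
# key names are illustrative only.
microservice_name: my-service
database_name: my_database
rds_hostname_prefix: my-host-prefix   # mapped to RDSInstances[HostPrefix]
rds_instance_name: my-instance        # mapped to RDSInstances[Name]
execution_date: "25-09-2020"
query: |
  SELECT COUNT(*) FROM client_authentication_means;
```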
- replace `microservice_name` with your value in service-cm
- modify patch.yaml to include your shell script after the `set -e` line, in the following format:

```yaml
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-curl.sh: $(ENVIRONMENT_NAME)/database/cassa/creds/$(NAMESPACE_PREFIX)$(MICROSERVICE_NAME)
  vault.hashicorp.com/agent-inject-template-curl.sh: |
    ### Uncomment and change accordingly the following line if you need to use secrets from Vault:
    # {{- with secret "$(ENVIRONMENT_NAME)/database/cassa/creds/$(NAMESPACE_PREFIX)$(MICROSERVICE_NAME)" -}}
    #!/bin/bash
    set -e;
    ### Insert your script here.
    ### The following line runs curl to get the health of the service configured in step #1 of this section:
    curl -s http://$(MICROSERVICE_NAME)/$(MICROSERVICE_NAME)/actuator/health
    ### Uncomment the following line if you need to use secrets from Vault:
    # {{- end }}
```

- make sure you do not remove the `run_only_on` date validation; only modify the date:

```bash
#!/bin/bash
set -e +x
today=$(date +%d-%m-%Y)
run_only_on="25-09-2020"
if [ "$run_only_on" == "$(EXECUTION_DATE)" ];
then
  curl -skX POST https://$(MICROSERVICE_NAME)/$(MICROSERVICE_NAME)/XXXXX
fi
```

- change the date on which you want to execute the query in service-cm.yaml
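The `run_only_on` guard can be exercised locally before committing. This is a minimal sketch in which `EXECUTION_DATE` — normally substituted by the pipeline from service-cm.yaml — is hard-coded for illustration:

```shell
#!/bin/bash
# Local sketch of the run_only_on date guard. EXECUTION_DATE is normally
# injected by the pipeline; it is hard-coded here only to demonstrate the check.
EXECUTION_DATE="25-09-2020"
run_only_on="25-09-2020"
if [ "$run_only_on" == "$EXECUTION_DATE" ]; then
  echo "date matches: command would run"
else
  echo "date mismatch: command skipped"
fi
```

If the dates differ, the job runs but the guarded command is silently skipped — which is exactly why only the date, not the check, should be edited.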
- modify the curl in job.yaml
- change the date on which you want to execute the query in service-cm.yaml
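For orientation, the curl lives inside an ordinary Kubernetes Job spec. The sketch below is an assumed layout — only the curl line comes from this document; the image and all other field values are generic illustrations:

```yaml
# Assumed shape of job.yaml; everything except the curl line is a
# generic Kubernetes Job skeleton, not the repo's actual file.
apiVersion: batch/v1
kind: Job
metadata:
  name: kubernetes-otc-example
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: otc
          image: curlimages/curl    # illustrative image
          command: ["sh", "-c"]
          args:
            - curl -skX POST https://$(MICROSERVICE_NAME)/$(MICROSERVICE_NAME)/XXXXX
```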
The production job doesn't print the output of the query. Instead it creates a file called command-output.gpg, encrypted with the GPG public key from the user's GitLab profile (the output is assumed to contain sensitive data). The encrypted file is available as an artifact of the job. To decrypt the output of the command, the user must:

- download and extract the artifact (artifact TTL: 1 hour)
- run `cat command-output.gpg | gpg --decrypt`

On macOS you might get an error saying:

```
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: No secret key
```

If that happens, execute `export GPG_TTY=$(tty)` before the above command. Consider putting it into your rc file (.bashrc, .zshrc, etc.) to make it permanently applicable. The alternative syntax `gpg --decrypt command-output.gpg` works just as well.
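The encrypt/decrypt mechanics can be tried out locally without a real artifact. The sketch below uses a symmetric passphrase instead of your GitLab GPG key pair, purely to illustrate the round trip (it assumes gpg 2.x is installed):

```shell
# Illustrative round-trip with a symmetric passphrase instead of the real
# public-key encryption the pipeline performs; assumes gpg 2.x.
export GPG_TTY=$(tty)    # avoids "Inappropriate ioctl for device" on macOS
echo "sensitive row 1" > command-output.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    --symmetric -o command-output.gpg command-output.txt
decrypted=$(gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    --decrypt command-output.gpg 2>/dev/null)
echo "$decrypted"
```

The real pipeline encrypts with your public key, so only your private key — never a shared passphrase — can decrypt the artifact.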
Key concepts:
- the user must add a GPG public key to their GitLab profile
- encryption works only on `master`, due to a GitLab limitation on fetching a user's public GPG key
- Vault is not used because of its centralization: it adds more headache around granting permissions to decrypt data, and it would potentially allow decrypting the outputs of all previously executed jobs
Please do not include unencrypted sensitive information in the OTC queries, as the queries become part of source control.
- Run it on ACC first. This can already be done on a feature branch. If the execution is successful, the output is simply part of the console output in GitLab.
- In case the execution fails or times out, use `kubectl` to get more details about the error(s):
  - `kubectl get jobs | grep otc` - display the jobs created by kubernetes-otc
  - `kubectl describe job kubernetes-otc-<jobname>` - display details of a specific job; at the end of the output you can find the names of the pod(s) created to execute the job
  - `kubectl logs -f <podname>` - follow the logs of such a pod
If the execution of the job fails, the job is not deleted automatically from the k8s cluster, and the related pod keeps "hanging" in the error state. Remove the job manually using `kubectl delete job kubernetes-otc-<jobname>`.