Config and scripts for deploying Montagu.
The basic idea is that all configuration for the various components of montagu ends up in this repository, separated by machine at present. This leads to a little duplication between machines, which is not ideal, but we can revisit that later.
The components currently described are:
- montagu.yml: Deployment of the montagu core (api, admin and contribution portal, db, etc.)
- packit.yml: Deployment of the reporting portal (packit and its runners)
- privateer.json: Backup and restore of the orderly volume, restoration of the database
- diagnostic-reports.yml: Automatic diagnostic reports (autogenerated)
Historically relevant information will be found in:
We are aiming to progressively streamline this process.
This document is a bit long and rambly while we have OrderlyWeb deployed and are in the middle of migration, but will be thinned down once the migration is complete.
All commands below are run from the montagu-config/ directory on the machine you are working with, after ssh-ing into that machine.
Diagnostic report config is committed to this repo, but if you need to change it, it can be re-generated by running scripts/generate_real_diagnostic_reports_config.py.
This will generate a new yaml file "diagnostic-reports.yml" in the current working directory, which can then be copied into place in the relevant instance config directory.
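For example, to regenerate and drop it into the uat config (the destination path here is illustrative; copy into whichever machine's directory you are updating):

# regenerate the diagnostic reports config
python3 scripts/generate_real_diagnostic_reports_config.py
# copy it into the relevant instance config directory
cp diagnostic-reports.yml uat/diagnostic-reports.yml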
On each machine, you will need to log in to the container registry.
docker login ghcr.io -u vimc-robot
The password is in vault at /secret/vimc-robot/github-tokens/ghcr.
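If you have the vault CLI set up on the machine, you can avoid pasting the password by piping it straight into docker login (the secret field name value is an assumption; check the secret with vault read first):

# read the token from vault and pass it to docker login on stdin
vault read -field=value secret/vimc-robot/github-tokens/ghcr | docker login ghcr.io -u vimc-robot --password-stdin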
We require some python packages: montagu-deploy, packit-deploy and privateer. All are available on PyPI and can be installed with pip3 install --user package-name. We'll set up a pyproject.toml or requirements.txt for this repo at some point in the future, which will streamline things.

Be aware that running pip install is not always sufficient for it to actually install anything, as it may decide that the old version you have is fine and then do nothing. You can resolve this by specifying a version (pip3 install --user montagu-deploy==0.1.2), uninstalling first (pip3 uninstall montagu-deploy) or by passing --force-reinstall (though this reinstalls everything).
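As a starting point, installing (or upgrading) all three tools in one go looks like this, using the package names above:

# install or upgrade the three deployment tools for the current user
pip3 install --user --upgrade montagu-deploy packit-deploy privateer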
You can find out what versions of things you have by running
montagu --version
packit --version
privateer --version
If you are developing the deploy tools, you might like to run hatch build on the source tree and then copy the resulting .whl file to the machine you are working on. You can then install that package:
pip3 install --user packit_deploy-0.0.11-py3-none-any.whl
Again, watch out to see if pip actually installs this, and be particularly careful if you have not changed the version number.
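A typical round trip might look like the following (the wheel filename is taken from the example above and <machine> is a placeholder for wherever you are deploying):

# on your development machine: build the wheel and copy it across
hatch build
scp dist/packit_deploy-0.0.11-py3-none-any.whl <machine>:
# on the target machine: force a reinstall even if the version number is unchanged
pip3 install --user --force-reinstall packit_deploy-0.0.11-py3-none-any.whl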
Each of the tools needs to be told which machine it is working with; the machine names are uat, science or production. You should configure these tools (only needed once) by running
montagu configure uat
packit configure uat
privateer configure uat
at the root of this repository as checked out on the corresponding machine (replacing uat with the machine you want to work with), which will write out configuration information.
Bring up packit and montagu with
packit start --pull
montagu start --pull
The order in which you do this does not matter, but nothing will be accessible until montagu has started, because it includes the proxy.
See https://github.com/vimc/montagu-deploy for more details on the deploy tool.
After deploying montagu you will need to update the data vis tool by running
./scripts/copy-vis-tool
This must be done each time montagu is deployed because it updates files in the proxy container. It is only necessary on production, because nobody uses the vis tool on science or uat.
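So on production, a montagu redeploy is typically followed immediately by the copy step:

montagu start --pull
./scripts/copy-vis-tool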
For production, schedule regular backups with
privateer schedule --as production2 start
You can check on the schedule by running
privateer schedule --as production2 status
To redeploy packit (e.g., after making a change), stop and start the containers using packit-deploy. You probably want the --kill argument, to swiftly but rudely bring down the containers, and the --pull argument, to make sure that you get the most recent copy of the containers to deploy.
packit stop --kill
packit start --pull
If you want to test a branch, you will first need to create a PR in packit so that it builds images for your branch. Then, after that image is pushed, you should return to the relevant machine (most likely you'll be doing this on uat) and edit the appropriate tag: field(s) within uat/packit.yml. You can do this with a local change on the machine, e.g. with vim or nano or by making a branch in montagu-config, depending on the complexity of the changes.
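For example (the exact layout of packit.yml may differ; editing with nano is just one way of doing it):

# find the tag fields that may need updating
grep -n 'tag:' uat/packit.yml
# edit them to point at the image tag built for your branch
nano uat/packit.yml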
If automatic certificates are enabled, you should run the renew-certificate command the first time you deploy montagu, to get the initial certificate.
montagu renew-certificate <path>
This command will need to be run periodically to ensure the certificate stays up-to-date. This is done by installing a systemd timer, using the provided install-timer.sh script.
./scripts/install-timer.sh <path>
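You can sanity-check that the timer is registered with systemd's own tooling; the exact timer name depends on what install-timer.sh sets up, so look through the list for it:

# list all scheduled timers and their next run times
systemctl list-timers --all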
See backup.md for details on backup and restore. See rebuild.md for an account of rebuilding the systems in 2025.

Upgrades to the major version of postgres are disruptive, and will need special work for both packit and montagu itself. This is described in upgrade-db.md.
Testing new features is typically done on uat only, but occasionally it will be needed on science. Avoid testing on production, as that is externally visible and may be in use by an external partner.
For the component under test, edit the appropriate file in uat/, e.g., uat/packit.yml. Each component has a tag field, which can be used to target an in-development branch building on CI (you may need to have made a PR to trigger these builds). For packit, be sure to edit the tag for both api and app if both are required. You can make these edits live on the machine in question (vi and nano are both installed), and then deploy as above with
packit stop
packit start --pull
You will need your GitHub access token for this process. Using stop --kill will make shutdown a bit faster, in theory with more risk of corrupting data, though we have not seen any evidence of problems in practice.