Merged
7 changes: 1 addition & 6 deletions local-preview-playbook.yml
@@ -72,7 +72,7 @@ asciidoc:
 astra-ui: 'Astra Portal'
 astra-url: 'https://astra.datastax.com'
 astra-ui-link: '{astra-url}[{astra-ui}^]'
-db-classic: 'Managed Cluster'
+db-classic: 'Astra Managed Clusters'
 db-serverless: 'Serverless (non-vector)'
 db-serverless-vector: 'Serverless (vector)'
 scb: 'Secure Connect Bundle (SCB)'
@@ -84,7 +84,6 @@ asciidoc:
 astra-stream: 'Astra Streaming'
 starlight-kafka: 'Starlight for Kafka'
 starlight-rabbitmq: 'Starlight for RabbitMQ'
-astra-streaming-examples-repo: 'https://github.com/datastax/astra-streaming-examples'
 sstable-sideloader: '{astra-db} Sideloader'
 zdm: 'Zero Downtime Migration'
 zdm-short: 'ZDM'
@@ -223,10 +222,6 @@ asciidoc:
 capacity-service: 'Capacity Service'
 lcm: 'Lifecycle Manager (LCM)'
 lcm-short: 'LCM'
-cr: 'custom resource (CR)'
-cr-short: 'CR'
-crd: 'custom resource definition (CRD)'
-crd-short: 'CRD'
 # Antora Atlas
 primary-site-url: https://docs.datastax.com/en
 primary-site-manifest-url: https://docs.datastax.com/en/site-manifest.json
6 changes: 3 additions & 3 deletions modules/ROOT/pages/create-target.adoc
@@ -5,7 +5,7 @@ After you review the xref:ROOT:feasibility-checklists.adoc[compatibility require
 This includes the following:
 
 * Create the new cluster that will be the target of your migration.
-* Recreate the schema from your origin cluster on the target cluster.
+* Re-create the schema from your origin cluster on the target cluster.
 * Gather authentication credentials and connection details for the target cluster.
 
 The preparation steps depend on your target platform.
@@ -53,7 +53,7 @@ For example, you could use `scp`:
 scp -i some-key.pem /path/to/scb.zip user@client-ip-or-host:
 ----
 
-. Recreate your client application's schema on your {astra-db} database, including each keyspace and table that you want to migrate.
+. Re-create your client application's schema on your {astra-db} database, including each keyspace and table that you want to migrate.
 +
 [IMPORTANT]
 ====
@@ -96,7 +96,7 @@ Store the authentication credentials securely for use by your client application
 
 . Note your cluster's connection details, including the contact points (IP addresses or hostnames) and port number.
 
-. Recreate your origin cluster's schema on your new cluster, including each keyspace and table that you want to migrate.
+. Re-create your origin cluster's schema on your new cluster, including each keyspace and table that you want to migrate.
 +
 [IMPORTANT]
 ====
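The schema re-creation steps in this file assume you already have the origin cluster's DDL exported (for example, via `DESCRIBE KEYSPACE`). As a hedged sketch, not part of the documented procedure, and with hypothetical helper and table names, a few lines of Python can split such a dump into individual statements and keep only the keyspace plus the tables you intend to migrate:

```python
# Hypothetical helper: split a `DESCRIBE KEYSPACE` dump into individual
# CQL statements and keep only the tables you plan to migrate.
# Assumes the dump contains no semicolons inside string literals.

def split_statements(ddl_dump: str) -> list:
    """Split a CQL dump on semicolons, dropping empty fragments."""
    return [s.strip() + ";" for s in ddl_dump.split(";") if s.strip()]

def filter_tables(statements: list, tables: set) -> list:
    """Keep keyspace DDL plus CREATE TABLE statements for the named tables."""
    kept = []
    for stmt in statements:
        if stmt.startswith("CREATE KEYSPACE"):
            kept.append(stmt)
        elif stmt.startswith("CREATE TABLE"):
            # The qualified table name follows "CREATE TABLE", e.g. "ks.users".
            name = stmt.split()[2]
            if name in tables:
                kept.append(stmt)
    return kept

dump = """
CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy'};
CREATE TABLE ks.users (id uuid PRIMARY KEY);
CREATE TABLE ks.audit (id uuid PRIMARY KEY);
"""
ddl = filter_tables(split_statements(dump), {"ks.users"})
print(len(ddl))  # 2: the keyspace plus the one migrated table
```

On the target cluster, the resulting statements could then be applied with `cqlsh -f`.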
4 changes: 2 additions & 2 deletions modules/ROOT/pages/deploy-proxy-monitoring.adoc
@@ -232,7 +232,7 @@ Only modify the variables in `zdm_proxy_advanced_config.yml` if you have a speci
 ====
 The following advanced configuration variables are immutable.
 If you need to change these variables, {company} recommends that you do so _before_ deploying {product-proxy}.
-Future changes require you to recreate your entire {product-proxy} deployment.
+Future changes require you to re-create your entire {product-proxy} deployment.
 For more information, see xref:ROOT:manage-proxy-instances.adoc#change-immutable-configuration-variables[Change immutable configuration variables].
 ====
 
@@ -257,7 +257,7 @@ This can be changed by setting `metrics_port` to a different value if desired.
 
 All other advanced configuration variables in `zdm_proxy_advanced_config.yml` are mutable.
 You can seamlessly change them after deploying {product-proxy} with a rolling restart.
-Immutable variables require you to recreate the entire deployment and result in downtime for your {product-proxy} deployment.
+Immutable variables require you to re-create the entire deployment and result in downtime for your {product-proxy} deployment.
 For more information, see xref:manage-proxy-instances.adoc[].
 
 [#enable-tls-encryption]
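The mutable-versus-immutable rule this file documents reduces to a simple decision: changing any immutable variable forces a full redeploy (brief downtime), while mutable changes only need a rolling restart. A hedged Python sketch of that decision follows; the variable names in `IMMUTABLE_VARS` are hypothetical examples, while the two playbook names come from the documentation itself:

```python
# Hedged sketch of the rule above: an immutable variable change forces a
# full redeploy; anything else can be applied with a rolling restart.
# The entries in IMMUTABLE_VARS are hypothetical, not the real inventory.

IMMUTABLE_VARS = {"cluster_topology", "listen_port"}  # hypothetical examples

def required_playbook(changed_vars: set) -> str:
    """Pick the least disruptive playbook that can apply the change."""
    if changed_vars & IMMUTABLE_VARS:
        # Re-creates every ZDM proxy instance simultaneously: downtime.
        return "deploy_zdm_proxy.yml"
    # Re-creates containers one at a time: no downtime.
    return "rolling_update_zdm_proxy.yml"

print(required_playbook({"log_level"}))    # rolling_update_zdm_proxy.yml
print(required_playbook({"listen_port"}))  # deploy_zdm_proxy.yml
```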
2 changes: 1 addition & 1 deletion modules/ROOT/pages/feasibility-checklists.adoc
@@ -128,7 +128,7 @@ This means that any keyspace that your client application uses must exist on bot
 However, the keyspaces can have different replication strategies and durable write settings.
 
 At the column level, the schema doesn't need to be a one-to-one match as long as the CQL statements can be executed successfully on both clusters.
-For example, if a table has 10 columns, and your client application uses only five of those columns, then you can recreate the table on the target cluster with only the five required columns.
+For example, if a table has 10 columns, and your client application uses only five of those columns, then you can re-create the table on the target cluster with only the five required columns.
 However, the data from the other columns won't be migrated to the target cluster if those columns don't exist on the target cluster.
 Before you decide to omit a column from the target cluster, make sure that it is acceptable to permanently lose that data after the migration.
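The column-level rule in this hunk can be expressed as a set relationship: the target table must contain every column the application uses, and any origin columns omitted from the target are permanently lost. A small illustrative check, with entirely hypothetical column names:

```python
# Illustrative check of the column-level rule above. Column names are
# hypothetical; only the subset relationship matters.

origin_columns = {"id", "name", "email", "phone", "address",
                  "fax", "pager", "telex", "notes", "legacy_flag"}  # 10 columns
app_columns    = {"id", "name", "email", "phone", "address"}        # the 5 in use

# A valid target schema must contain every column the application uses.
target_columns = app_columns
assert app_columns <= target_columns

# Data in these origin columns will NOT be migrated and cannot be
# recovered after the migration:
dropped = origin_columns - target_columns
print(sorted(dropped))  # → ['fax', 'legacy_flag', 'notes', 'pager', 'telex']
```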
8 changes: 4 additions & 4 deletions modules/ROOT/pages/manage-proxy-instances.adoc
@@ -54,14 +54,14 @@ ubuntu@52772568517c:~$
 ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory
 ----
 
-The rolling restart playbook recreates each {product-proxy} container, one by one.
+The rolling restart playbook re-creates each {product-proxy} container, one by one.
 The {product-proxy} deployment remains available at all times, and you can safely use it throughout this operation.
 If you modified mutable configuration variables, the new containers use the updated configuration files.
 
 The playbook performs the following actions automatically:
 
 . {product-automation} stops one container gracefully, and then waits for it to shut down.
-. {product-automation} recreates the container, and then starts it.
+. {product-automation} re-creates the container, and then starts it.
 . {product-automation} calls the xref:ROOT:metrics.adoc#call-the-liveliness-and-readiness-endpoints[readiness endpoint] to check the container's status:
 +
 * If the status check fails, {product-automation} repeats the check up to six times at 5-second intervals.
@@ -271,7 +271,7 @@ ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
 ----
 
 You can re-run the deployment playbook as many times as necessary.
-However, this playbook decommissions and recreates _all_ {product-proxy} instances simultaneously.
+However, this playbook decommissions and re-creates _all_ {product-proxy} instances simultaneously.
 This results in a brief period of time where the entire {product-proxy} deployment is offline because no instances are available.
 
 For more information, see xref:ROOT:troubleshooting-tips.adoc#configuration-changes-arent-applied-by-zdm-automation[Configuration changes aren't applied by {product-automation}].
@@ -280,7 +280,7 @@ For more information, see xref:ROOT:troubleshooting-tips.adoc#configuration-chan
 == Upgrade the proxy version
 
 The same playbook that you use for configuration changes can also be used to upgrade the {product-proxy} version in a rolling fashion.
-All containers are recreated with the given image version.
+All containers are re-created with the given image version.
 
 [IMPORTANT]
 ====
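The per-container readiness check described in the rolling-restart hunk above (an initial probe, then up to six retries at 5-second intervals) can be sketched as a small retry loop. This is an illustration of the documented behavior, not the automation's actual code; `check_readiness` is injected so the sketch stays self-contained:

```python
# Sketch of the readiness check above: one initial probe, then up to six
# retries at 5-second intervals before the restart is considered failed.
# `check_readiness` and `sleep` are injected so this runs standalone.

import time

def wait_until_ready(check_readiness, retries=6, interval=5.0, sleep=time.sleep):
    """Return True as soon as the readiness endpoint reports OK."""
    if check_readiness():
        return True
    for _ in range(retries):
        sleep(interval)
        if check_readiness():
            return True
    return False

# Simulate a container that becomes ready on the third probe (no real sleeping).
probes = iter([False, False, True])
print(wait_until_ready(lambda: next(probes), sleep=lambda _s: None))  # True
```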
2 changes: 1 addition & 1 deletion modules/ROOT/pages/setup-ansible-playbooks.adoc
@@ -24,7 +24,7 @@ This infrastructure can be on-premise or in any cloud provider of your choice.
 +
 If your {product-proxy} machines use private IPs, which are recommended for production deployments, configure these before running {product-utility}.
 If you enable private IPs later, you must reconfigure and redeploy your {product-proxy} instances.
-This is a disruptive operation that requires a small amount of downtime because the deployment playbook decommissions and recreates all {product-proxy} containers simultaneously.
+This is a disruptive operation that requires a small amount of downtime because the deployment playbook decommissions and re-creates all {product-proxy} containers simultaneously.
 
 * https://docs.docker.com/engine/install/#server[Install Docker] on the machine that will run the Ansible Control Host container, and ensure that the `docker` command https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user[doesn't require superuser privileges].
 +
8 changes: 4 additions & 4 deletions modules/ROOT/pages/troubleshooting-tips.adoc
@@ -54,7 +54,7 @@ To tail (stream) the logs as they are written, use the `--follow` (`-f`) option:
 docker container logs zdm-proxy-container -f
 ----
 
-Keep in mind that Docker logs are deleted if the container is recreated.
+Keep in mind that Docker logs are deleted if the container is re-created.
 --
 
 Collect logs for multiple instances::
@@ -116,7 +116,7 @@ For example, if you used Docker, you can use the following command to export a c
 docker logs my-container > log.txt
 ----
 
-Keep in mind that Docker logs are deleted if the container is recreated.
+Keep in mind that Docker logs are deleted if the container is re-created.
 --
 ======
 
@@ -270,8 +270,8 @@ Allowing these values to change from a rolling restart could propagate a misconf
 
 If you change the value of an immutable configuration variable, you must run the `deploy_zdm_proxy.yml` playbook again.
 You can run this playbook as many times as needed.
-Each time, {product-automation} recreates your entire {product-proxy} deployment with the new configuration.
-However, this doesn't happen in a rolling fashion: The existing {product-proxy} instances are torn down simultaneously, and then they are recreated.
+Each time, {product-automation} re-creates your entire {product-proxy} deployment with the new configuration.
+However, this doesn't happen in a rolling fashion: The existing {product-proxy} instances are torn down simultaneously, and then they are re-created.
 This results in a brief period of downtime where the entire {product-proxy} deployment is unavailable.
 
 === Client application throws unsupported protocol version error
4 changes: 2 additions & 2 deletions modules/sideloader/pages/migrate-sideloader.adoc
@@ -184,7 +184,7 @@ Snapshots have a specific directory structure, such as `*KEYSPACE_NAME*/*TABLE_N
 [#record-schema]
 == Configure the target database
 
-To prepare your target database for the migration, you must record the schema for each table in your origin cluster that you want to migrate, recreate these schemas in your target database, and then set environment variables required to connect to your database.
+To prepare your target database for the migration, you must record the schema for each table in your origin cluster that you want to migrate, re-create these schemas in your target database, and then set environment variables required to connect to your database.
 
 [WARNING]
 ====
@@ -229,7 +229,7 @@ CREATE TABLE smart_home.sensor_readings (
 ) WITH CLUSTERING ORDER BY (room_id ASC, reading_timestamp DESC);
 ----
 
-. Recreate the schemas in your target database:
+. Re-create the schemas in your target database:
 +
 .. In the {astra-ui-link} navigation menu, click *Databases*, and then click the name of your {astra-db} database.
 .. xref:astra-db-serverless:databases:manage-keyspaces.adoc#keyspaces[Create a keyspace] with the exact same name as your origin cluster's keyspace.
2 changes: 1 addition & 1 deletion modules/sideloader/pages/prepare-sideloader.adoc
@@ -266,7 +266,7 @@ If the schemas don't match, the migration fails.
 +
 You don't need to make any changes based on the number of nodes, as long as the keyspaces and table schemas are replicated in the target databases.
 +
-If you want to migrate the same data to multiple databases, you must recreate the schemas in each of those databases.
+If you want to migrate the same data to multiple databases, you must re-create the schemas in each of those databases.
 {sstable-sideloader} requires a schema to be present in the target database in order to migrate data.
 
 . For each target database, initialize a migration to prompt {sstable-sideloader} to create migration buckets for each database.
2 changes: 1 addition & 1 deletion modules/sideloader/pages/troubleshoot-sideloader.adoc
@@ -89,7 +89,7 @@ There are two reasons you might need to do this:
 +
 ** The origin and target schemas don't match.
 ** The migration reached a point that some data was loaded to the target database.
-This is unlikely, but, if this happens, you must xref:astra-db-serverless:databases:manage-collections.adoc#delete-a-table-in-the-astra-portal[drop the table] from your target database, and then recreate the table in the target database.
+This is unlikely, but, if this happens, you must xref:astra-db-serverless:databases:manage-collections.adoc#delete-a-table-in-the-astra-portal[drop the table] from your target database, and then re-create the table in the target database.
 +
 In this case, if the migration _didn't_ fail due to a problem with the snapshot data, you can potentially reuse the existing snapshots for the new migration.