From 56159f0ed379ba6d0c429b789ed5b1efe3a985b9 Mon Sep 17 00:00:00 2001 From: James Date: Fri, 27 Mar 2026 12:01:55 +1100 Subject: [PATCH 1/6] docs: document backfill mode tradeoffs including silent delete loss in Precise mode Add pros/cons for Normal and Precise backfill modes, document the default Automatic behavior, and add a comparison table for choosing between modes. Key addition: Precise mode can silently lose deletes during incremental backfills when rows are deleted before the scanner reaches them. --- site/docs/guides/backfilling-data.md | 57 +++++++++++++++++++++++----- 1 file changed, 48 insertions(+), 9 deletions(-) diff --git a/site/docs/guides/backfilling-data.md b/site/docs/guides/backfilling-data.md index 71bf3e98ecd..3bffecfd2d0 100644 --- a/site/docs/guides/backfilling-data.md +++ b/site/docs/guides/backfilling-data.md @@ -199,33 +199,72 @@ The connectors that use CDC (Change Data Capture) allow fine-grained control of In general, you should not change this setting. Make sure you understand your use case, such as [preventing backfills](#preventing-backfills-during-database-upgrades). ::: -The following modes are available: +### Available modes * **Normal:** backfills chunks of the table and emits all replication events regardless of whether they occur within the backfilled portion of the table or not. - In Normal mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the WAL. - All WAL changes are emitted immediately, whether or not they relate to an unread portion of the table. Therefore, if a change is made, it shows up quickly even if its table is still backfilling. + In Normal mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the replication log. + All replication log changes are emitted immediately, whether or not they relate to an unread portion of the table. 
Therefore, if a change is made, it shows up quickly even if its table is still backfilling. + + **Pros:** + - All change events (inserts, updates, and deletes) are captured during backfill — no replication events are filtered out. + - Changes appear in the destination quickly, even for tables still being backfilled. + + **Cons:** + - Duplicate events are possible during the overlap between the backfill scan and replication log stream. These are deduplicated downstream by the runtime's reduce logic if your materialization uses standard (non-delta) updates, but with delta updates enabled, duplicates will not be deduplicated and may result in duplicate records. + - The ordering of events for a given key is not guaranteed to be logically consistent during the backfill (e.g. you may see an update before the corresponding insert). * **Precise:** backfills chunks of the table and filters replication events in portions of the table which haven't yet been reached. - In Precise mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the WAL. - Any WAL changes for portions of the table that have already been backfilled are emitted. In contrast to Normal mode, however, WAL changes are suppressed if they relate to a part of the table that hasn't been backfilled yet. + In Precise mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the replication log. + Any replication log changes for portions of the table that have already been backfilled are emitted. In contrast to Normal mode, however, replication log changes are suppressed if they relate to a part of the table that hasn't been backfilled yet. - WAL changes and backfill chunks get stitched together to produce a fully consistent logical sequence of changes for each key. For example, you are guaranteed to see an insert before an update or delete. 
+ Replication log changes and backfill chunks get stitched together to produce a fully consistent logical sequence of changes for each key. For example, you are guaranteed to see an insert before an update or delete. Note that Precise backfill is not possible in some cases due to equality comparison challenges when using varying character encodings. + **Pros:** + - Produces a logically consistent sequence of changes per key — no duplicates, correct ordering. + - More efficient — avoids processing redundant events. + + **Cons:** + - **Deletes can be silently lost during incremental backfills.** This applies when performing an incremental backfill (where existing records are already present in the collection and destination). If a row is deleted while the backfill scanner has not yet reached it, the DELETE event from the replication log is filtered out (because it relates to a part of the table "ahead" of the scan position). When the scanner eventually reaches that key range, the row no longer exists in the source table and is never seen. The result is the delete is silently dropped — the old version of the row remains in the collection and destination without a delete marker. This does not affect full data flow resets, where the destination is rebuilt from scratch. + + This delete gap only affects rows deleted *during an active backfill*. Once the backfill completes, all subsequent deletes are captured normally. The risk window is the duration of the backfill scan. + * **Only Changes:** skips backfilling the table entirely and jumps directly to replication streaming for the entire dataset. - No backfill of the table content is performed at all. Only WAL changes are emitted. + No backfill of the table content is performed at all. Only replication log changes are emitted. Use this mode when you only need new changes going forward and don't need historical data, or when you want to avoid the overhead of scanning a large table. 
* **Without Primary Key:** can be used to capture tables without any form of unique primary key. - The connector uses an alternative physical row identifier (such as a Postgres `ctid`) to scan backfill chunks, rather than walking the table in key order. + The connector uses an alternative physical row identifier (such as a Postgres `ctid`) to scan backfill chunks, rather than walking the table in key order. Use this mode when a table lacks a usable primary key or unique index. This mode lacks the exact correctness properties of the Normal backfill mode. -If you do not choose a specific backfill mode, Estuary will default to an automatic mode. +### Default behavior + +If you do not choose a specific backfill mode, Estuary uses **Automatic** mode, which selects the best mode based on the table's characteristics: + +- **Precise** is selected for tables with predictable key ordering (most tables with standard primary keys). +- **Normal** is selected for tables where key ordering is unpredictable (e.g. certain character encodings or collations). +- **Without Primary Key** is selected for tables that lack a usable primary key or unique index. + +For most SQL captures, Automatic will select **Precise**. + +### Choosing a mode + +| Consideration | Normal | Precise | +|---|---|---| +| Deletes during incremental backfill | Captured (safe) | Can be silently lost | +| Event ordering per key | Not guaranteed | Guaranteed | +| Duplicate processing | Possible (deduplicated unless using delta updates) | None | +| Efficiency | Slightly higher data volume | More efficient | +| Default for most tables | No | Yes (via Automatic) | + +If your workload includes hard deletes and you want to ensure no deletes are lost during incremental backfills (e.g. backfills triggered by schema changes), consider setting the backfill mode to **Normal** on affected bindings. The tradeoff is slightly more data processing during the backfill window. 
+ +For MySQL and MariaDB captures, setting `binlog_row_metadata=FULL` can prevent many unnecessary backfills from being triggered by schema changes, reducing the window in which this issue can occur regardless of backfill mode. ## Advanced backfill configuration in specific systems From 31cac1abe4856e204d2e5ded6a0d3ee256e1e666 Mon Sep 17 00:00:00 2001 From: James Date: Fri, 27 Mar 2026 14:32:53 +1100 Subject: [PATCH 2/6] docs: improve readability of backfill mode pros/cons --- site/docs/guides/backfilling-data.md | 18 ++++++------------ 1 file changed, 6 insertions(+), 12 deletions(-) diff --git a/site/docs/guides/backfilling-data.md b/site/docs/guides/backfilling-data.md index 3bffecfd2d0..2cf041dcb56 100644 --- a/site/docs/guides/backfilling-data.md +++ b/site/docs/guides/backfilling-data.md @@ -207,14 +207,12 @@ In general, you should not change this setting. Make sure you understand your us All replication log changes are emitted immediately, whether or not they relate to an unread portion of the table. Therefore, if a change is made, it shows up quickly even if its table is still backfilling. **Pros:** - - All change events (inserts, updates, and deletes) are captured during backfill — no replication events are filtered out. - - Changes appear in the destination quickly, even for tables still being backfilled. + All change events (inserts, updates, and deletes) are captured during backfill — no replication events are filtered out. Changes appear in the destination quickly, even for tables still being backfilled. **Cons:** - - Duplicate events are possible during the overlap between the backfill scan and replication log stream. These are deduplicated downstream by the runtime's reduce logic if your materialization uses standard (non-delta) updates, but with delta updates enabled, duplicates will not be deduplicated and may result in duplicate records. 
- - The ordering of events for a given key is not guaranteed to be logically consistent during the backfill (e.g. you may see an update before the corresponding insert). + Duplicate events are possible during backfill. With standard (non-delta) materializations, duplicates are deduplicated by the runtime. With delta updates enabled, duplicates may result in duplicate records. Event ordering per key is also not guaranteed (e.g. you may see an update captured before the corresponding insert). -* **Precise:** backfills chunks of the table and filters replication events in portions of the table which haven't yet been reached. +* **Precise:** backfills chunks of the table while capturing replication events only for parts of the table that have been backfilled already. In Precise mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the replication log. Any replication log changes for portions of the table that have already been backfilled are emitted. In contrast to Normal mode, however, replication log changes are suppressed if they relate to a part of the table that hasn't been backfilled yet. @@ -224,13 +222,10 @@ In general, you should not change this setting. Make sure you understand your us Note that Precise backfill is not possible in some cases due to equality comparison challenges when using varying character encodings. **Pros:** - - Produces a logically consistent sequence of changes per key — no duplicates, correct ordering. - - More efficient — avoids processing redundant events. + Produces a logically consistent sequence of changes per key — no duplicates, correct ordering. **Cons:** - - **Deletes can be silently lost during incremental backfills.** This applies when performing an incremental backfill (where existing records are already present in the collection and destination). 
If a row is deleted while the backfill scanner has not yet reached it, the DELETE event from the replication log is filtered out (because it relates to a part of the table "ahead" of the scan position). When the scanner eventually reaches that key range, the row no longer exists in the source table and is never seen. The result is the delete is silently dropped — the old version of the row remains in the collection and destination without a delete marker. This does not affect full data flow resets, where the destination is rebuilt from scratch. - - This delete gap only affects rows deleted *during an active backfill*. Once the backfill completes, all subsequent deletes are captured normally. The risk window is the duration of the backfill scan. + **Deletes can be silently lost during incremental backfills** (where existing records are already present in the collection and destination). If a row is deleted while the backfill scanner has not yet reached it, the DELETE event is filtered out. When the scanner reaches that key range, the row no longer exists in the source table and is never seen — the old version remains in the destination without a delete marker. This does not affect full data flow resets, where the destination is rebuilt from scratch. Only rows deleted *during* the backfill are affected; once the backfill completes, all subsequent deletes are captured normally. * **Only Changes:** skips backfilling the table entirely and jumps directly to replication streaming for the entire dataset. @@ -259,10 +254,9 @@ For most SQL captures, Automatic will select **Precise**. 
| Deletes during incremental backfill | Captured (safe) | Can be silently lost | | Event ordering per key | Not guaranteed | Guaranteed | | Duplicate processing | Possible (deduplicated unless using delta updates) | None | -| Efficiency | Slightly higher data volume | More efficient | | Default for most tables | No | Yes (via Automatic) | -If your workload includes hard deletes and you want to ensure no deletes are lost during incremental backfills (e.g. backfills triggered by schema changes), consider setting the backfill mode to **Normal** on affected bindings. The tradeoff is slightly more data processing during the backfill window. +If your workload includes hard deletes and you want to ensure no deletes are lost during incremental backfills (e.g. backfills triggered by schema changes), consider setting the backfill mode to **Normal** on affected bindings. The tradeoff is possible duplicate events during the backfill, which are deduplicated automatically unless you are using delta updates. For MySQL and MariaDB captures, setting `binlog_row_metadata=FULL` can prevent many unnecessary backfills from being triggered by schema changes, reducing the window in which this issue can occur regardless of backfill mode. 
From 060afac904bb9fa5d7eb198a0c10afd57100a041 Mon Sep 17 00:00:00 2001 From: James Date: Wed, 1 Apr 2026 10:43:32 +1100 Subject: [PATCH 3/6] docs: address review feedback on backfill modes --- site/docs/guides/backfilling-data.md | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/site/docs/guides/backfilling-data.md b/site/docs/guides/backfilling-data.md index 2cf041dcb56..297fb9baeef 100644 --- a/site/docs/guides/backfilling-data.md +++ b/site/docs/guides/backfilling-data.md @@ -180,7 +180,11 @@ For example, Postgres currently deletes or requires users to drop logical replic ## Resource configuration backfill modes -The connectors that use CDC (Change Data Capture) allow fine-grained control of backfills for individual tables. These bindings include a "Backfill Mode" dropdown in their resource configuration. This setting then translates to a `mode` field for that resource in the specification. For example: +:::note +Backfill modes apply only to **SQL CDC connectors** (PostgreSQL, MySQL, SQL Server, etc.). Non-SQL connectors and non-CDC connectors do not have this setting. +::: + +SQL CDC connectors allow fine-grained control of backfills for individual tables. These bindings include a "Backfill Mode" dropdown in their resource configuration. This setting then translates to a `mode` field for that resource in the specification. For example: ```yaml "bindings": [ @@ -210,7 +214,7 @@ In general, you should not change this setting. Make sure you understand your us All change events (inserts, updates, and deletes) are captured during backfill — no replication events are filtered out. Changes appear in the destination quickly, even for tables still being backfilled. **Cons:** - Duplicate events are possible during backfill. With standard (non-delta) materializations, duplicates are deduplicated by the runtime. With delta updates enabled, duplicates may result in duplicate records. 
Event ordering per key is also not guaranteed (e.g. you may see an update captured before the corresponding insert).
+  Duplicate events are possible during backfill — the same row may be emitted once as a backfill chunk and again as a replication event. With standard (non-delta) materializations, duplicates are deduplicated by the runtime. With delta updates enabled, duplicates may result in duplicate records. The backfill row for a given key may also arrive after a later replication event for that row, but the most recent replication event always arrives last, so the final value in the destination reflects the current source state.
 
 * **Precise:** backfills chunks of the table while capturing replication events only for parts of the table that have been backfilled already.
 
   In Precise mode, the connector fetches key-ordered chunks of the table for the backfill while performing reads of the replication log.
   Any replication log changes for portions of the table that have already been backfilled are emitted. In contrast to Normal mode, however, replication log changes are suppressed if they relate to a part of the table that hasn't been backfilled yet.
 
@@ -225,11 +229,11 @@ In general, you should not change this setting. Make sure you understand your us
   Produces a logically consistent sequence of changes per key — no duplicates, correct ordering.
 
   **Cons:**
-  **Deletes can be silently lost during incremental backfills** (where existing records are already present in the collection and destination). If a row is deleted while the backfill scanner has not yet reached it, the DELETE event is filtered out. When the scanner reaches that key range, the row no longer exists in the source table and is never seen — the old version remains in the destination without a delete marker. This does not affect full data flow resets, where the destination is rebuilt from scratch. Only rows deleted *during* the backfill are affected; once the backfill completes, all subsequent deletes are captured normally.
+  **During incremental backfills, deletes of rows that the scanner has not yet reached will be silently lost.** If a row is deleted while the backfill scanner has not yet reached it, the DELETE event is filtered out — when the scanner arrives at that key range, the row no longer exists in the source and is never seen, leaving the old version in the destination without a delete marker. The at-risk window spans the entire backfill duration, so larger tables mean a larger window. Normal mode has a much shorter at-risk window for the same scenario (ending when the backfill starts rather than when it finishes), at the cost of possible duplicates. This does not affect full data flow resets, where the destination is rebuilt from scratch.
 
 * **Only Changes:** skips backfilling the table entirely and jumps directly to replication streaming for the entire dataset.
 
-  No backfill of the table content is performed at all. Only replication log changes are emitted. Use this mode when you only need new changes going forward and don't need historical data, or when you want to avoid the overhead of scanning a large table.
+  No backfill of the table content is performed at all. Only replication log changes are emitted. Use this mode when you only need new changes going forward and don't need historical data.
 
 * **Without Primary Key:** can be used to capture tables without any form of unique primary key.
 
@@ -245,7 +249,7 @@ If you do not choose a specific backfill mode, Estuary uses **Automatic** mode,
 - **Normal** is selected for tables where key ordering is unpredictable (e.g. certain character encodings or collations).
 - **Without Primary Key** is selected for tables that lack a usable primary key or unique index.
 
-For most SQL captures, Automatic will select **Precise**.
+For many SQL captures, Automatic will select **Precise**.
 
 ### Choosing a mode
 
@@ -256,14 +260,20 @@ For most SQL captures, Automatic will select **Precise**.
| Duplicate processing | Possible (deduplicated unless using delta updates) | None | | Default for most tables | No | Yes (via Automatic) | -If your workload includes hard deletes and you want to ensure no deletes are lost during incremental backfills (e.g. backfills triggered by schema changes), consider setting the backfill mode to **Normal** on affected bindings. The tradeoff is possible duplicate events during the backfill, which are deduplicated automatically unless you are using delta updates. +If your workload includes hard deletes and you want to ensure no deletes are lost during incremental backfills, consider setting the backfill mode to **Normal** on affected bindings. The tradeoff is possible duplicate events during the backfill, which are deduplicated automatically unless you are using delta updates. -For MySQL and MariaDB captures, setting `binlog_row_metadata=FULL` can prevent many unnecessary backfills from being triggered by schema changes, reducing the window in which this issue can occur regardless of backfill mode. +In most cases, schema changes do not trigger backfills. Exceptions include: +- **SQL Server** (with Automatic Capture Instance Management enabled): schema changes can trigger automatic backfills. +- **MySQL/MariaDB**: schema changes can trigger backfills if executed with binlog writes disabled (common with some schema migration tools), or if the DDL statement cannot be parsed by the connector. Standard `ALTER TABLE` statements executed normally do not trigger backfills. + +For MySQL and MariaDB captures, setting `binlog_row_metadata=FULL` can prevent many schema-change-triggered backfills, reducing the window in which deletes could be missed regardless of backfill mode. 
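+
+For example, you can check and enable this setting from a SQL client (a sketch; it assumes a MySQL 8.0+ or MariaDB 10.5+ server, where `binlog_row_metadata` is available as a dynamic global variable, and a user with privileges to set global variables — to persist the change across restarts, also set it in your server configuration file):
+
+```sql
+-- Check the current value of the binlog row metadata setting
+SHOW VARIABLES LIKE 'binlog_row_metadata';
+
+-- Log full row metadata for all tables in the binlog
+SET GLOBAL binlog_row_metadata = 'FULL';
+```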
## Advanced backfill configuration in specific systems ### PostgreSQL Capture +If a PostgreSQL table's primary key is uncorrelated with physical insert order (such as a UUIDv4 or other random token), a key-ordered backfill requires frequent random page fetches and may run significantly slower than expected. In these cases, using **Without Primary Key** mode (which uses the physical `ctid` row identifier instead of the primary key) can speed up the backfill considerably, at the cost of the ordering guarantees described above. + PostgreSQL's `xmin` system column can be used as a cursor to keep track of the current location in a table. If you need to re-backfill a Postgres table, you can reduce the affected data volume by specifying a minimum or maximum backfill `XID`. Estuary will only backfill rows greater than or less than the specified `XID`. This can be especially useful in cases where you do not want to re-backfill a full table, but cannot complete the steps in [Preventing backfills](#preventing-backfills) above, such as if you cannot pause database writes during an upgrade. From e068d37d16cdd5ce0ed19b9dffd12c2f89d157ea Mon Sep 17 00:00:00 2001 From: James Date: Wed, 1 Apr 2026 10:56:09 +1100 Subject: [PATCH 4/6] docs: clarify backfill modes section applies to SQL CDC captures --- site/docs/guides/backfilling-data.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/site/docs/guides/backfilling-data.md b/site/docs/guides/backfilling-data.md index 297fb9baeef..c77cbe55634 100644 --- a/site/docs/guides/backfilling-data.md +++ b/site/docs/guides/backfilling-data.md @@ -178,7 +178,7 @@ For example, Postgres currently deletes or requires users to drop logical replic 5. Resume database writes. -## Resource configuration backfill modes +## Resource configuration backfill modes for SQL CDC captures :::note Backfill modes apply only to **SQL CDC connectors** (PostgreSQL, MySQL, SQL Server, etc.). 
Non-SQL connectors and non-CDC connectors do not have this setting. From 7aff243d296603095ccfcfc1b14498c2628e8a8d Mon Sep 17 00:00:00 2001 From: James Date: Wed, 1 Apr 2026 10:57:06 +1100 Subject: [PATCH 5/6] docs: fix internal anchor links after section title rename --- site/docs/guides/backfilling-data.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/site/docs/guides/backfilling-data.md b/site/docs/guides/backfilling-data.md index c77cbe55634..8aae6a62804 100644 --- a/site/docs/guides/backfilling-data.md +++ b/site/docs/guides/backfilling-data.md @@ -34,7 +34,7 @@ To perform an incremental backfill: 1. Navigate to the Sources tab in Estuary's web UI 2. Start editing your capture and click the **Backfill** button 3. In the **Backfill mode** dropdown, select the **Incremental backfill (advanced)** option -4. (Optional) Choose a specific [**Resource configuration backfill mode**](#resource-configuration-backfill-modes) for the collection for advanced use cases +4. (Optional) Choose a specific [**Resource configuration backfill mode**](#resource-configuration-backfill-modes-for-sql-cdc-captures) for the collection for advanced use cases 5. Save and publish your changes This option is ideal when you want to ensure your collections have the most up-to-date data without @@ -108,7 +108,7 @@ To perform a dataflow reset: 1. Navigate to the Sources tab in Estuary's web UI 2. Start editing your capture and click the **Backfill** button 3. In the **Backfill mode** dropdown, select the **Dataflow reset** option -4. (Optional) Choose a specific [**Resource configuration backfill mode**](#resource-configuration-backfill-modes) for the collection for advanced use cases +4. (Optional) Choose a specific [**Resource configuration backfill mode**](#resource-configuration-backfill-modes-for-sql-cdc-captures) for the collection for advanced use cases 5. 
Save and publish your changes This option is ideal when you need a complete refresh of your entire data pipeline, especially when @@ -173,7 +173,7 @@ For example, Postgres currently deletes or requires users to drop logical replic 3. Perform the database upgrade. -4. Backfill all bindings of the capture using the ["Only Changes" backfill mode](#resource-configuration-backfill-modes) and make sure to select "Incremental Backfill (Advanced)" from the drop down. +4. Backfill all bindings of the capture using the ["Only Changes" backfill mode](#resource-configuration-backfill-modes-for-sql-cdc-captures) and make sure to select "Incremental Backfill (Advanced)" from the drop down. - This will not cause a full backfill. "Backfilling" all bindings at once resets the WAL (Write-Ahead Log) position for the capture, essentially allowing it to "jump ahead" to the current end of the WAL. The "Only Changes" mode will skip re-reading existing table content. Incremental backfill will append new data to your current collection. 5. Resume database writes. 
From 44af2f1c378648848b94c8c8bc78295671899a13 Mon Sep 17 00:00:00 2001 From: James Date: Wed, 1 Apr 2026 10:57:51 +1100 Subject: [PATCH 6/6] docs: update backfill mode anchor links in PostgreSQL connector docs --- .../Connectors/capture-connectors/PostgreSQL/PostgreSQL.md | 2 +- .../Connectors/capture-connectors/PostgreSQL/Supabase.md | 2 +- .../capture-connectors/PostgreSQL/amazon-rds-postgres.md | 2 +- .../capture-connectors/PostgreSQL/google-cloud-sql-postgres.md | 2 +- .../Connectors/capture-connectors/PostgreSQL/neon-postgres.md | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md index a3d56e93f0e..3ae16acf183 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md @@ -427,7 +427,7 @@ See [connectors](/concepts/connectors.md#using-connectors) to learn more about u | ---------------- | --------- | ------------------------------------------------------------------------------------------ | ------ | ---------------- | | **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required | | **`/stream`** | Stream | Table name. | string | Required | -| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | +| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes-for-sql-cdc-captures) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | | `/priority` | Backfill Priority | Optional priority for this binding. 
The highest priority binding(s) will be backfilled completely before any others. Negative priorities are allowed and will cause a binding to be backfilled after others. | integer | `0` | #### SSL Mode diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/Supabase.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/Supabase.md index afe2ef000dc..b41b02a1cdd 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/Supabase.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/Supabase.md @@ -168,7 +168,7 @@ See [connectors](/concepts/connectors.md#using-connectors) to learn more about u |------------------|-----------|--------------------------------------------------------------------------------------------|--------|------------------| | **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required | | **`/stream`** | Stream | Table name. | string | Required | -| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | +| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes-for-sql-cdc-captures) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | | `/priority` | Backfill Priority | Optional priority for this binding. The highest priority binding(s) will be backfilled completely before any others. Negative priorities are allowed and will cause a binding to be backfilled after others. 
| integer | `0` | #### SSL Mode diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md index ba621862928..1640c935b63 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md @@ -255,7 +255,7 @@ See [connectors](/concepts/connectors.md#using-connectors) to learn more about u | ---------------- | --------- | ------------------------------------------------------------------------------------------ | ------ | ---------------- | | **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required | | **`/stream`** | Stream | Table name. | string | Required | -| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | +| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes-for-sql-cdc-captures) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | | `/priority` | Backfill Priority | Optional priority for this binding. The highest priority binding(s) will be backfilled completely before any others. Negative priorities are allowed and will cause a binding to be backfilled after others. 
| integer | `0` | #### SSL Mode diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/google-cloud-sql-postgres.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/google-cloud-sql-postgres.md index 6ac3fc0fbef..45734e860b4 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/google-cloud-sql-postgres.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/google-cloud-sql-postgres.md @@ -189,7 +189,7 @@ See [connectors](../../../../concepts/connectors.md#using-connectors) to learn m | ---------------- | --------- | ------------------------------------------------------------------------------------------ | ------ | ---------------- | | **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required | | **`/stream`** | Stream | Table name. | string | Required | -| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | +| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes-for-sql-cdc-captures) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | | `/priority` | Backfill Priority | Optional priority for this binding. The highest priority binding(s) will be backfilled completely before any others. Negative priorities are allowed and will cause a binding to be backfilled after others. 
| integer | `0` | ### Sample diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/neon-postgres.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/neon-postgres.md index 8c0263f19c3..bdd676af9d8 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/neon-postgres.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/neon-postgres.md @@ -206,7 +206,7 @@ See [connectors](../../../../concepts/connectors.md#using-connectors) to learn m |------------------|-----------|--------------------------------------------------------------------------------------------|--------|------------------| | **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required | | **`/stream`** | Stream | Table name. | string | Required | -| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | +| `/mode` | [Backfill Mode](/reference/backfilling-data/#resource-configuration-backfill-modes-for-sql-cdc-captures) | How the preexisting contents of the table should be backfilled. This should generally not be changed. | string | `""` | | `/priority` | Backfill Priority | Optional priority for this binding. The highest priority binding(s) will be backfilled completely before any others. Negative priorities are allowed and will cause a binding to be backfilled after others. | integer | `0` | ### Sample