Describe the issue
While jobs correctly respect the deployment trigger presets (https://docs.databricks.com/aws/en/dev-tools/bundles/deployment-modes#custom-presets) defined for an asset bundle target, SQL alerts require an explicit override in the databricks.yaml file; otherwise the schedule trigger pause status is not set correctly.
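For comparison, here is a minimal sketch of a job resource that relies on the preset (the job name and trigger values are hypothetical, not from the bundle above). No `pause_status` is set on the trigger, yet with `trigger_pause_status: PAUSED` in the target's presets, the deployed job trigger ends up paused:

```yaml
# Hypothetical job resource for illustration: pause_status is omitted,
# and the target-level trigger_pause_status preset is expected to apply.
resources:
  jobs:
    my_job:
      trigger:
        periodic:
          interval: 1
          unit: HOURS
```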
Configuration
databricks.yaml

```yaml
targets:
  qa:
    mode: production
    presets:
      trigger_pause_status: PAUSED
    default: true
    resources:
      alerts:
        # without this block, the alert status is incorrectly set to UNPAUSED
        my_alert:
          schedule:
            # in job definitions, this can be left blank, which results in a warning but correctly deploys pause status
            pause_status: "PAUSED"
  prod:
    mode: production
```

alert.yaml

```yaml
resources:
  alerts:
    my_alert:
      ...
      notification:
        notify_on_ok: false
        retrigger_seconds: 3600
        subscriptions:
          # FYI, deploying alert destinations is not supported via DABs, so this must be retrieved from the CLI, which is not ideal.
          - destination_id: "alert destination id"
      schedule:
        pause_status: "UNPAUSED"
        quartz_cron_schedule: "0 15 * * * ?"
        timezone_id: "UTC"
```

Expected Behavior
Jobs correctly apply the top-level preset, while SQL alerts ignore it. Ideally, jobs should also take the pause status from the top-level preset without producing this warning or requiring an explicit null value:

```
Warning: expected a string value, found null at resources.jobs.streamAmazonKafka.continuous.pause_status in dbr_resources/jobs/job.yml:8:22.
```

But at a minimum, SQL alerts should follow the same pattern as jobs and not ignore the preset.
Actual Behavior
SQL alerts ignore the top-level `trigger_pause_status` preset specified for the bundle target.
OS and CLI version
macOS, Databricks CLI 0.283.0 (installed via the `databricks/tap/databricks` Homebrew tap)
Is this a regression?
Unknown