2 changes: 1 addition & 1 deletion .codegen/_openapi_sha
@@ -1 +1 @@
dbf9b0a4e0432e846520442b14c34fc7f0ca0d8c
76dbe1cb1a0a017a4484757cb4e542a30a87e9b3
82 changes: 82 additions & 0 deletions acceptance/bundle/refschema/out.fields.txt

Large diffs are not rendered by default.

9 changes: 6 additions & 3 deletions acceptance/help/output.txt
@@ -33,7 +33,7 @@ Real-time Serving
serving-endpoints The Serving Endpoints API allows you to create, update, and delete model serving endpoints.

Apps
apps Apps run directly on a customers Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on.
apps Apps run directly on a customer's Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on.

Vector Search
vector-search-endpoints **Endpoint**: Represents the compute resources to host vector search indexes.
@@ -81,7 +81,7 @@ Unity Catalog
model-versions Databricks provides a hosted version of MLflow Model Registry in Unity Catalog.
online-tables Online tables provide lower latency and higher QPS access to data from Delta tables.
policies Attribute-Based Access Control (ABAC) provides high leverage governance for enforcing compliance policies in Unity Catalog.
quality-monitors A monitor computes and monitors data or model quality metrics for a table over time.
quality-monitors [DEPRECATED] This API is deprecated.
registered-models Databricks provides a hosted version of MLflow Model Registry in Unity Catalog.
resource-quotas Unity Catalog enforces resource quotas on all securable objects, which limits the number of resources that can be created.
rfa Request for Access enables users to request access for Unity Catalog securables.
@@ -136,7 +136,7 @@ Clean Rooms
clean-rooms A clean room uses Delta Sharing and serverless compute to provide a secure and privacy-protecting environment where multiple parties can work together on sensitive enterprise data without direct access to each other's data.

Quality Monitor
quality-monitor-v2 Manage data quality of UC objects (currently support schema).
quality-monitor-v2 [DEPRECATED] This API is deprecated.

Data Quality Monitoring
data-quality Manage the data quality of Unity Catalog objects (currently support schema and table).
@@ -149,6 +149,9 @@ Tags
tag-policies The Tag Policy API allows you to manage policies for governed tags in Databricks.
workspace-entity-tag-assignments Manage tag assignments on workspace-scoped objects.

Postgres
postgres Use the Postgres API to create and manage Lakebase Autoscaling Postgres infrastructure, including projects, branches, compute endpoints, and roles.

Developer Tools
bundle Databricks Asset Bundles let you express data/AI/analytics projects as code.
sync Synchronize a local directory to a workspace directory
6 changes: 6 additions & 0 deletions bundle/direct/dresources/cluster.go
@@ -44,6 +44,7 @@ func (r *ResourceCluster) RemapState(input *compute.ClusterDetails) *compute.Clu
DockerImage: input.DockerImage,
DriverInstancePoolId: input.DriverInstancePoolId,
DriverNodeTypeId: input.DriverNodeTypeId,
DriverNodeTypeFlexibility: input.DriverNodeTypeFlexibility,
EnableElasticDisk: input.EnableElasticDisk,
EnableLocalDiskEncryption: input.EnableLocalDiskEncryption,
GcpAttributes: input.GcpAttributes,
@@ -64,6 +65,7 @@ func (r *ResourceCluster) RemapState(input *compute.ClusterDetails) *compute.Clu
TotalInitialRemoteDiskSize: input.TotalInitialRemoteDiskSize,
UseMlRuntime: input.UseMlRuntime,
WorkloadType: input.WorkloadType,
WorkerNodeTypeFlexibility: input.WorkerNodeTypeFlexibility,
ForceSendFields: utils.FilterFields[compute.ClusterSpec](input.ForceSendFields),
}
if input.Spec != nil {
@@ -159,6 +161,7 @@ func makeCreateCluster(config *compute.ClusterSpec) compute.CreateCluster {
DockerImage: config.DockerImage,
DriverInstancePoolId: config.DriverInstancePoolId,
DriverNodeTypeId: config.DriverNodeTypeId,
DriverNodeTypeFlexibility: config.DriverNodeTypeFlexibility,
EnableElasticDisk: config.EnableElasticDisk,
EnableLocalDiskEncryption: config.EnableLocalDiskEncryption,
GcpAttributes: config.GcpAttributes,
@@ -179,6 +182,7 @@ func makeCreateCluster(config *compute.ClusterSpec) compute.CreateCluster {
TotalInitialRemoteDiskSize: config.TotalInitialRemoteDiskSize,
UseMlRuntime: config.UseMlRuntime,
WorkloadType: config.WorkloadType,
WorkerNodeTypeFlexibility: config.WorkerNodeTypeFlexibility,
ForceSendFields: utils.FilterFields[compute.CreateCluster](config.ForceSendFields),
}

@@ -206,6 +210,7 @@ func makeEditCluster(id string, config *compute.ClusterSpec) compute.EditCluster
DockerImage: config.DockerImage,
DriverInstancePoolId: config.DriverInstancePoolId,
DriverNodeTypeId: config.DriverNodeTypeId,
DriverNodeTypeFlexibility: config.DriverNodeTypeFlexibility,
EnableElasticDisk: config.EnableElasticDisk,
EnableLocalDiskEncryption: config.EnableLocalDiskEncryption,
GcpAttributes: config.GcpAttributes,
@@ -226,6 +231,7 @@ func makeEditCluster(id string, config *compute.ClusterSpec) compute.EditCluster
TotalInitialRemoteDiskSize: config.TotalInitialRemoteDiskSize,
UseMlRuntime: config.UseMlRuntime,
WorkloadType: config.WorkloadType,
WorkerNodeTypeFlexibility: config.WorkerNodeTypeFlexibility,
ForceSendFields: utils.FilterFields[compute.EditCluster](config.ForceSendFields),
}

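The cluster.go changes above plumb the new DriverNodeTypeFlexibility and WorkerNodeTypeFlexibility fields through state remapping, cluster creation, and cluster edits. As a rough, hedged sketch of the user-facing side, a bundle cluster definition using these fields might look like the following; the resources.clusters nesting and the concrete node type IDs are illustrative assumptions, while driver_node_type_flexibility, worker_node_type_flexibility, and alternate_node_type_ids are taken from the OpenAPI annotations added later in this PR.

resources:
  clusters:
    my_cluster:
      spark_version: 15.4.x-scala2.12    # illustrative values
      node_type_id: i3.xlarge            # primary worker node type
      driver_node_type_id: i3.xlarge     # primary driver node type
      num_workers: 2
      # Fallback node types to use when the primary type is unavailable
      # during cluster launch or upscale.
      worker_node_type_flexibility:
        alternate_node_type_ids:
          - i3.2xlarge
          - m5d.xlarge
      driver_node_type_flexibility:
        alternate_node_type_ids:
          - i3.2xlarge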
88 changes: 88 additions & 0 deletions bundle/internal/schema/annotations_openapi.yml
@@ -242,6 +242,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
The optional ID of the instance pool for the driver of the cluster belongs.
The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not
assigned.
"driver_node_type_flexibility":
"description": |-
Flexible node type configuration for the driver node.
"driver_node_type_id":
"description": |-
The node type of the Spark driver.
@@ -356,6 +359,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
This field can only be used when `kind = CLASSIC_PREVIEW`.

`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
"worker_node_type_flexibility":
"description": |-
Flexible node type configuration for worker nodes.
"workload_type":
"description": |-
Cluster Attributes showing for clusters workload types.
@@ -402,45 +408,61 @@ github.com/databricks/cli/bundle/config/resources.DatabaseInstance:
"effective_capacity":
"description": |-
Deprecated. The sku of the instance; this field will always match the value of capacity.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"deprecation_message": |-
This field is deprecated
"x-databricks-field-behaviors_output_only": |-
true
"effective_custom_tags":
"description": |-
The recorded custom tags associated with the instance.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_enable_pg_native_login":
"description": |-
Whether the instance has PG native password login enabled.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_enable_readable_secondaries":
"description": |-
Whether secondaries serving read-only traffic are enabled. Defaults to false.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_node_count":
"description": |-
The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to
1 primary and 0 secondaries.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_retention_window_in_days":
"description": |-
The retention window for the instance. This is the time window in days
for which the historical data is retained.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_stopped":
"description": |-
Whether the instance is stopped.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_usage_policy_id":
"description": |-
The policy that is applied to the instance.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"enable_pg_native_login":
@@ -990,11 +1012,15 @@ github.com/databricks/cli/bundle/config/resources.SyncedDatabaseTable:
"description": |-
The name of the database instance that this table is registered to. This field is always returned, and for
tables inside database catalogs is inferred database instance associated with the catalog.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"effective_logical_database_name":
"description": |-
The name of the logical database that this table is registered to.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"logical_database_name":
@@ -1790,6 +1816,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
The optional ID of the instance pool for the driver of the cluster belongs.
The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not
assigned.
"driver_node_type_flexibility":
"description": |-
Flexible node type configuration for the driver node.
"driver_node_type_id":
"description": |-
The node type of the Spark driver.
@@ -1904,6 +1933,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
This field can only be used when `kind = CLASSIC_PREVIEW`.

`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
"worker_node_type_flexibility":
"description": |-
Flexible node type configuration for worker nodes.
"workload_type":
"description": |-
Cluster Attributes showing for clusters workload types.
@@ -2164,6 +2196,13 @@ github.com/databricks/databricks-sdk-go/service/compute.MavenLibrary:
"description": |-
Maven repo to install the Maven package from. If omitted, both Maven Central Repository
and Spark Packages are searched.
github.com/databricks/databricks-sdk-go/service/compute.NodeTypeFlexibility:
"_":
"description": |-
Configuration for flexible node types, allowing fallback to alternate node types during cluster launch and upscale.
"alternate_node_type_ids":
"description": |-
A list of node type IDs to use as fallbacks when the primary node type is unavailable.
github.com/databricks/databricks-sdk-go/service/compute.PythonPyPiLibrary:
"package":
"description": |-
@@ -2287,6 +2326,8 @@ github.com/databricks/databricks-sdk-go/service/database.DatabaseInstanceRef:
instance was created.
For a child ref instance, this is the LSN on the instance from which the child instance
was created.
This is an output only field that contains the value computed from the input field combined with
server side defaults. Use the field without the effective_ prefix to set the value.
"x-databricks-field-behaviors_output_only": |-
true
"lsn":
@@ -2904,16 +2945,20 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobDeployment:
The kind of deployment that manages the job.

* `BUNDLE`: The job is managed by Databricks Asset Bundle.
* `SYSTEM_MANAGED`: The job is managed by Databricks and is read-only.
"metadata_file_path":
"description": |-
Path of the file that contains deployment metadata.
github.com/databricks/databricks-sdk-go/service/jobs.JobDeploymentKind:
"_":
"description": |-
* `BUNDLE`: The job is managed by Databricks Asset Bundle.
* `SYSTEM_MANAGED`: The job is managed by Databricks and is read-only.
"enum":
- |-
BUNDLE
- |-
SYSTEM_MANAGED
github.com/databricks/databricks-sdk-go/service/jobs.JobEditMode:
"_":
"description": |-
@@ -3766,6 +3811,18 @@ github.com/databricks/databricks-sdk-go/service/ml.ModelTag:
"value":
"description": |-
The tag value.
github.com/databricks/databricks-sdk-go/service/pipelines.AutoFullRefreshPolicy:
"_":
"description": |-
Policy for auto full refresh.
"enabled":
"description": |-
(Required, Mutable) Whether to enable auto full refresh or not.
"min_interval_hours":
"description": |-
(Optional, Mutable) Specify the minimum interval in hours between the timestamp
at which a table was last full refreshed and the current timestamp for triggering auto full refresh.
If unspecified and autoFullRefresh is enabled then by default min_interval_hours is 24 hours.
github.com/databricks/databricks-sdk-go/service/pipelines.ConnectionParameters:
"source_catalog":
"description": |-
@@ -3869,6 +3926,9 @@ github.com/databricks/databricks-sdk-go/service/pipelines.IngestionPipelineDefin
"connection_name":
"description": |-
Immutable. The Unity Catalog connection that this ingestion pipeline uses to communicate with the source. This is used with connectors for applications like Salesforce, Workday, and so on.
"full_refresh_window":
"description": |-
(Optional) A window that specifies a set of time ranges for snapshot queries in CDC.
"ingest_from_uc_foreign_catalog":
"description": |-
Immutable. If set to true, the pipeline will ingest tables from the
@@ -4025,6 +4085,21 @@ github.com/databricks/databricks-sdk-go/service/pipelines.Notifications:
"email_recipients":
"description": |-
A list of email addresses notified when a configured alert is triggered.
github.com/databricks/databricks-sdk-go/service/pipelines.OperationTimeWindow:
"_":
"description": |-
Proto representing a window
"days_of_week":
"description": |-
Days of week in which the window is allowed to happen
If not specified all days of the week will be used.
"start_hour":
"description": |-
An integer between 0 and 23 denoting the start hour for the window in the 24-hour day.
"time_zone_id":
"description": |-
Time zone id of window. See https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-conf-mgmt-set-timezone.html for details.
If not specified, UTC will be used.
github.com/databricks/databricks-sdk-go/service/pipelines.PathPattern:
"include":
"description": |-
@@ -4313,6 +4388,19 @@ github.com/databricks/databricks-sdk-go/service/pipelines.TableSpec:
"description": |-
Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object and the SchemaSpec.
github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfig:
"auto_full_refresh_policy":
"description": |-
(Optional, Mutable) Policy for auto full refresh, if enabled pipeline will automatically try
to fix issues by doing a full refresh on the table in the retry run. auto_full_refresh_policy
in table configuration will override the above level auto_full_refresh_policy.
For example,
{
"auto_full_refresh_policy": {
"enabled": true,
"min_interval_hours": 23,
}
}
If unspecified, auto full refresh is disabled.
"exclude_columns":
"description": |-
A list of column names to be excluded for the ingestion.
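Taken together, the new pipeline annotations above (AutoFullRefreshPolicy, OperationTimeWindow, full_refresh_window, and auto_full_refresh_policy) could compose roughly as follows in a bundle-defined ingestion pipeline. This is a hedged sketch under stated assumptions: the connection name, the objects/table nesting, and the source and destination identifiers are illustrative, while the new field names and their value ranges follow the descriptions in this diff.

resources:
  pipelines:
    my_ingestion_pipeline:
      ingestion_definition:
        connection_name: my_salesforce_connection   # assumed connection name
        # OperationTimeWindow: when snapshot queries for CDC are allowed to run.
        full_refresh_window:
          days_of_week:          # day names assumed; all days are used if omitted
            - SATURDAY
            - SUNDAY
          start_hour: 1          # 0-23, start of the window in the 24-hour day
          time_zone_id: UTC      # UTC is also the default when unspecified
        objects:
          - table:
              source_table: orders        # assumed table spec identifiers
              destination_catalog: main
              destination_schema: bronze
              table_configuration:
                # Retry runs may automatically full-refresh the table to fix issues.
                auto_full_refresh_policy:
                  enabled: true
                  min_interval_hours: 24  # also the default when unspecified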
3 changes: 2 additions & 1 deletion bundle/internal/validation/generated/enum_fields.go

Some generated files are not rendered by default.
