Draft
Changes from all commits
83 changes: 76 additions & 7 deletions docs/docs/build/connectors/credentials.md
Original file line number Diff line number Diff line change
@@ -51,14 +51,15 @@ The `.env` file serves several important purposes:
Example `.env` file:
```bash
# AWS S3 credentials
-connector.s3.access_key_id=AKIAIOSFODNN7EXAMPLE
-connector.s3.secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
+AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Google Cloud credentials
-connector.gcs.credentials_json={"type":"service_account","project_id":"my-project"}
+GOOGLE_APPLICATION_CREDENTIALS={"type":"service_account","project_id":"my-project"}

-# Database connection
-connector.postgres.dsn=postgres://username:password@localhost:5432/mydb
+# Database connections
+POSTGRES_PASSWORD=mypassword
+SNOWFLAKE_PASSWORD=mysnowflakepassword

# Custom variables
my_custom_variable=some_value
@@ -67,9 +68,77 @@ When creating any connector in Rill via the UI, these will be **automatically ge

Additional variables can then be referenced for [templating](/build/connectors/templating) purposes in the local instance of your project.
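For example, a custom variable defined in `.env` can be referenced with this templating syntax elsewhere in the project. The connector file name and endpoint below are hypothetical:

```yaml
# connectors/my_api.yaml (hypothetical example)
type: connector
driver: https

# Resolves to the value of `my_custom_variable` from the .env file
path: "https://api.example.com/data?key={{ .env.my_custom_variable }}"
```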

### Credentials Naming Schema

-Connector credentials are essentially a form of project variable, prefixed using the `connector.<connector_name>.<property>` syntax. For example, `connector.druid.dsn` and `connector.clickhouse.dsn` are both hard-coded project variables (that happen to correspond to the [Druid](/build/connectors/olap/druid) and [ClickHouse](/build/connectors/olap/clickhouse) OLAP engines respectively). Please see below for each source and its required properties. If you have any questions or need specifics, [contact us](/contact)!
+When you create a connector through Rill's UI, credentials are automatically saved to your `.env` file using a standardized naming convention:

#### Generic Credentials (Shared Across Connectors)

Common cloud provider credentials use standard names without a driver prefix:

| Property | Environment Variable |
|----------|---------------------|
| Google Application Credentials | `GOOGLE_APPLICATION_CREDENTIALS` |
| AWS Access Key ID | `AWS_ACCESS_KEY_ID` |
| AWS Secret Access Key | `AWS_SECRET_ACCESS_KEY` |
| Azure Storage Connection String | `AZURE_STORAGE_CONNECTION_STRING` |
| Azure Storage Key | `AZURE_STORAGE_KEY` |
| Azure Storage SAS Token | `AZURE_STORAGE_SAS_TOKEN` |
| Snowflake Private Key | `PRIVATE_KEY` |

#### Driver-Specific Credentials

Credentials specific to a database driver use the `DRIVER_PROPERTY` format:

| Driver | Property | Environment Variable |
|--------|----------|---------------------|
| PostgreSQL | password | `POSTGRES_PASSWORD` |
| PostgreSQL | dsn | `POSTGRES_DSN` |
| MySQL | password | `MYSQL_PASSWORD` |
| Snowflake | password | `SNOWFLAKE_PASSWORD` |
| ClickHouse | password | `CLICKHOUSE_PASSWORD` |
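Putting the two conventions together, a project's `.env` might contain entries like these (all values are placeholders):

```bash
# Generic cloud credentials (no driver prefix)
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Driver-specific credentials (DRIVER_PROPERTY format)
POSTGRES_PASSWORD=mypassword
CLICKHOUSE_PASSWORD=myclickhousepassword
```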

#### Handling Multiple Connectors

When you create multiple connectors that use the same credential type, Rill automatically appends a numeric suffix to avoid conflicts:

```bash
# First BigQuery connector
GOOGLE_APPLICATION_CREDENTIALS={"type":"service_account",...}

# Second BigQuery connector
GOOGLE_APPLICATION_CREDENTIALS_1={"type":"service_account",...}

# Third BigQuery connector
GOOGLE_APPLICATION_CREDENTIALS_2={"type":"service_account",...}
```

This ensures each connector can reference its own credentials without overwriting existing ones.
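A second connector can then reference its suffixed variable explicitly. The file name below is hypothetical:

```yaml
# connectors/bigquery_marketing.yaml (hypothetical second connector)
type: connector
driver: bigquery

# References the suffixed variable rather than the original
google_application_credentials: "{{ .env.GOOGLE_APPLICATION_CREDENTIALS_1 }}"
```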

#### Referencing Variables in YAML

Use the `{{ .env.VARIABLE_NAME }}` syntax to reference environment variables in your connector YAML files:

```yaml
google_application_credentials: "{{ .env.GOOGLE_APPLICATION_CREDENTIALS }}"
password: "{{ .env.POSTGRES_PASSWORD }}"
aws_access_key_id: "{{ .env.AWS_ACCESS_KEY_ID }}"
```

#### Case-Insensitive Variable Lookups

The `{{ env "VAR_NAME" }}` function provides case-insensitive variable lookups, which can be useful when variable names may have inconsistent casing:

```yaml
# All of these will match POSTGRES_PASSWORD in your .env file:
password: '{{ env "POSTGRES_PASSWORD" }}'
password: '{{ env "postgres_password" }}'
password: '{{ env "Postgres_Password" }}'
```

:::note Legacy Naming Convention
Older projects may use the `connector.<connector_name>.<property>` syntax (e.g., `connector.druid.dsn`, `connector.clickhouse.dsn`). This format is still supported for backwards compatibility.
:::
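A legacy-style reference in a connector YAML file looks like the following; both naming styles resolve against your `.env` file:

```yaml
# Legacy naming convention, still supported for backwards compatibility
dsn: "{{ .env.connector.clickhouse.dsn }}"
```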

:::tip Avoid committing sensitive information to Git

6 changes: 3 additions & 3 deletions docs/docs/build/connectors/data-source/athena.md
@@ -29,14 +29,14 @@ Create a connector with your credentials to connect to Athena. Here's an examp
type: connector

driver: athena
-aws_access_key_id: "{{ .env.connector.athena.aws_access_key_id }}"
-aws_secret_access_key: "{{ .env.connector.athena.aws_secret_access_key }}"
+aws_access_key_id: "{{ .env.AWS_ACCESS_KEY_ID }}"
+aws_secret_access_key: "{{ .env.AWS_SECRET_ACCESS_KEY }}"
output_location: "s3://bucket/path/folder"
region: "us-east-1"
```

:::tip Using the Add Data Form
-You can also use the Add Data form in Rill Developer, which will automatically create the `athena.yaml` file and populate the `.env` file with `connector.athena.aws_access_key_id` and `connector.athena.aws_secret_access_key`.
+You can also use the Add Data form in Rill Developer, which will automatically create the `athena.yaml` file and populate the `.env` file with `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
:::
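The resulting `.env` entries would look something like this (placeholder values):

```bash
# Populated by the Add Data form
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```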

### Local AWS Credentials (Local Development Only)
6 changes: 3 additions & 3 deletions docs/docs/build/connectors/data-source/azure.md
@@ -35,7 +35,7 @@ type: connector
driver: azure

azure_storage_account: rilltest
-azure_storage_key: "{{ .env.connector.azure.azure_storage_key }}"
+azure_storage_key: "{{ .env.AZURE_STORAGE_KEY }}"
```

This approach ensures your Azure Blob Storage sources authenticate consistently across both local development and cloud deployment. Follow the [Azure Documentation](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal) to retrieve your storage account keys.
@@ -49,7 +49,7 @@ type: connector

driver: azure

-azure_storage_connection_string: "{{ .env.connector.azure.azure_storage_connection_string }}"
+azure_storage_connection_string: "{{ .env.AZURE_STORAGE_CONNECTION_STRING }}"
```

This approach ensures your Azure Blob Storage sources authenticate consistently across both local development and cloud deployment. Follow the [Azure Documentation](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal) to retrieve your connection string.
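The corresponding `.env` entry follows Azure's standard connection string format (placeholder values):

```bash
AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=<account_name>;AccountKey=<account_key>;EndpointSuffix=core.windows.net"
```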
@@ -64,7 +64,7 @@ type: connector
driver: azure

azure_storage_account: rilltest
-azure_storage_sas_token: "{{ .env.connector.azure.azure_storage_sas_token }}"
+azure_storage_sas_token: "{{ .env.AZURE_STORAGE_SAS_TOKEN }}"
```

This method provides fine-grained access control and enhanced security for your Azure Blob Storage connections. Follow the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/translator/document-translation/how-to-guides/create-sas-tokens?tabs=Containers) to create your Azure SAS token.
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/data-source/bigquery.md
@@ -45,7 +45,7 @@ type: connector

driver: bigquery

-google_application_credentials: "{{ .env.connector.bigquery.google_application_credentials }}"
+google_application_credentials: "{{ .env.GOOGLE_APPLICATION_CREDENTIALS }}"
project_id: "rilldata"
```

12 changes: 6 additions & 6 deletions docs/docs/build/connectors/data-source/gcs.md
@@ -65,7 +65,7 @@ Create `connectors/my_gcs.yaml`:
type: connector
driver: gcs

-google_application_credentials: "{{ .env.connector.gcs.google_application_credentials }}"
+google_application_credentials: "{{ .env.GOOGLE_APPLICATION_CREDENTIALS }}"
```

**Step 2: Create model configuration**
@@ -86,7 +86,7 @@ refresh:
**Step 3: Add credentials to `.env`**

```bash
-connector.gcs.google_application_credentials=<json_credentials>
+GOOGLE_APPLICATION_CREDENTIALS=<json_credentials>
```

---
@@ -119,8 +119,8 @@ Create `connectors/my_gcs_hmac.yaml`:
type: connector
driver: gcs

-key_id: "{{ .env.connector.gcs.key_id }}"
-secret: "{{ .env.connector.gcs.secret }}"
+key_id: "{{ .env.KEY_ID }}"
+secret: "{{ .env.SECRET }}"
```

**Step 2: Create model configuration**
@@ -141,8 +141,8 @@ refresh:
**Step 3: Add credentials to `.env`**

```bash
-connector.gcs.key_id=GOOG1234567890ABCDEFG
-connector.gcs.secret=your-secret-access-key
+KEY_ID=GOOG1234567890ABCDEFG
+SECRET=your-secret-access-key
```

:::info
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/data-source/https.md
@@ -31,7 +31,7 @@ driver: https
path: "https://api.endpoint.com/v1"

headers:
-Authorization: "Bearer {{ .env.connector.https.token }}"
+Authorization: "Bearer {{ .env.HTTPS_TOKEN }}"
```
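The matching `.env` entry would be (placeholder value):

```bash
HTTPS_TOKEN=<your_bearer_token>
```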

### Public URLs
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/data-source/mysql.md
@@ -36,7 +36,7 @@ host: "localhost"
port: 3306
database: "mydatabase"
user: "myusername"
-password: "{{ .env.connector.mysql.password }}"
+password: "{{ .env.MYSQL_PASSWORD }}"
ssl_mode: "DISABLED"
```

4 changes: 2 additions & 2 deletions docs/docs/build/connectors/data-source/openai.md
@@ -28,11 +28,11 @@ To configure OpenAI access, you'll need to obtain an API key from your OpenAI ac
```yaml
type: connector
driver: openai
-api_key: "{{ .env.connector.openai.openai_api_key }}"
+api_key: "{{ .env.OPENAI_API_KEY }}"
```
:::tip Security Best Practice

-Never commit your OpenAI API key directly to your connector YAML files or version control. Always use environment variables with the `{{ .env.connector.openai.openai_api_key }}` syntax to keep sensitive credentials secure.
+Never commit your OpenAI API key directly to your connector YAML files or version control. Always use environment variables with the `{{ .env.OPENAI_API_KEY }}` syntax to keep sensitive credentials secure.

:::
3. **Set up environment variable:**
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/data-source/postgres.md
@@ -33,7 +33,7 @@ driver: postgres
host: "localhost"
port: "5432"
user: "postgres"
-password: "{{ .env.connector.postgres.password }}"
+password: "{{ .env.POSTGRES_PASSWORD }}"
dbname: "postgres"
```

4 changes: 2 additions & 2 deletions docs/docs/build/connectors/data-source/redshift.md
@@ -32,8 +32,8 @@ Create a connector with your credentials to connect to Redshift. Here's an examp
type: connector

driver: redshift
-aws_access_key_id: "{{ .env.connector.redshift.aws_access_key_id }}"
-aws_secret_access_key: "{{ .env.connector.redshift.aws_secret_access_key }}"
+aws_access_key_id: "{{ .env.AWS_ACCESS_KEY_ID }}"
+aws_secret_access_key: "{{ .env.AWS_SECRET_ACCESS_KEY }}"
database: "dev"
```

4 changes: 2 additions & 2 deletions docs/docs/build/connectors/data-source/s3.md
@@ -34,8 +34,8 @@ Create a connector with your credentials to connect to S3. Here's an example con
type: connector
driver: s3

-aws_access_key_id: "{{ .env.connector.s3.aws_access_key_id }}"
-aws_secret_access_key: "{{ .env.connector.s3.aws_secret_access_key }}"
+aws_access_key_id: "{{ .env.AWS_ACCESS_KEY_ID }}"
+aws_secret_access_key: "{{ .env.AWS_SECRET_ACCESS_KEY }}"
```

This approach ensures your AWS sources authenticate consistently across both local development and cloud deployment environments.
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/data-source/snowflake.md
@@ -26,7 +26,7 @@ Create a connector with your credentials to connect to Snowflake. Here's an exam
type: connector
driver: snowflake

-dsn: "{{ .env.connector.snowflake.dsn }}"
+dsn: "{{ .env.SNOWFLAKE_DSN }}"
```
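The DSN stored in `SNOWFLAKE_DSN` typically follows the Snowflake Go driver format; the values below are placeholders, so adjust them to your account:

```bash
SNOWFLAKE_DSN=<user>:<password>@<account_identifier>/<database>/<schema>?warehouse=<warehouse>&role=<role>
```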

:::tip Using the Add Data Form
4 changes: 2 additions & 2 deletions docs/docs/build/connectors/olap/clickhouse.md
@@ -36,7 +36,7 @@ driver: clickhouse
host: <HOSTNAME>
port: <PORT>
username: <USERNAME>
-password: "{{ .env.connector.clickhouse.password }}"
+password: "{{ .env.CLICKHOUSE_PASSWORD }}"
ssl: true # required for ClickHouse Cloud
```
@@ -62,7 +62,7 @@ Once the file is created, it will be added directly to the `.env` file in the pr
type: connector
driver: clickhouse
-dsn: "{{ .env.connector.clickhouse.dsn }}"
+dsn: "{{ .env.CLICKHOUSE_DSN }}"
```
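For reference, a ClickHouse DSN stored in `CLICKHOUSE_DSN` commonly takes this shape (placeholder values; the exact parameters depend on your deployment):

```bash
CLICKHOUSE_DSN=clickhouse://<user>:<password>@<host>:9440/<database>?secure=true
```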

:::info Check your port
4 changes: 2 additions & 2 deletions docs/docs/build/connectors/olap/druid.md
@@ -24,12 +24,12 @@ driver: druid
host: <HOSTNAME>
port: <PORT>
username: <USERNAME>
-password: "{{ .env.connector.druid.password }}"
+password: "{{ .env.DRUID_PASSWORD }}"
ssl: true

# or

-dsn: "{{ .env.connector.druid.dsn }}"
+dsn: "{{ .env.DRUID_DSN }}"
```

2. You can manually set `DRUID_DSN` in your project's `.env` file or try pulling existing credentials locally using `rill env pull` if the project has already been deployed to Rill Cloud.
4 changes: 2 additions & 2 deletions docs/docs/build/connectors/olap/motherduck.md
@@ -40,7 +40,7 @@ Your MotherDuck access token provides access to your data. Keep it secure and ne

## Configuring Rill Developer with MotherDuck

-Connect to your OLAP engine via Add Data. This will automatically create the motherduck.yaml file in your connectors folder and populate the .env file with `.connector.motherduck.token`.
+Connect to your OLAP engine via Add Data. This will automatically create the `motherduck.yaml` file in your connectors folder and populate the `.env` file with `MOTHERDUCK_TOKEN`.

For more information on supported parameters, see our [MotherDuck connector YAML reference docs](/reference/project-files/connectors#motherduck).

@@ -49,7 +49,7 @@ For more information on supported parameters, see our [MotherDuck connector YAML
type: connector
driver: duckdb

-token: '{{ .env.connector.motherduck.token }}'
+token: '{{ .env.MOTHERDUCK_TOKEN }}'
path: "md:my_database"
schema_name: "my_schema"
```
2 changes: 1 addition & 1 deletion docs/docs/build/connectors/olap/pinot.md
@@ -21,7 +21,7 @@ When using Rill for local development, there are a few options to configure Rill
type: connector
driver: pinot

-dsn: "{{ .env.connector.pinot.dsn }}"
+dsn: "{{ .env.PINOT_DSN }}"
```

1. You can set `PINOT_DSN` in your project's `.env` file or try pulling existing credentials locally using `rill env pull` if the project has already been deployed to Rill Cloud.
6 changes: 3 additions & 3 deletions docs/docs/build/connectors/olap/starrocks.md
@@ -26,7 +26,7 @@ driver: starrocks
host: <HOSTNAME>
port: 9030
username: <USERNAME>
-password: "{{ .env.connector.starrocks.password }}"
+password: "{{ .env.STARROCKS_PASSWORD }}"
catalog: default_catalog
database: <DATABASE>
ssl: false
@@ -40,7 +40,7 @@ Rill can also connect to StarRocks using a DSN connection string. StarRocks uses
type: connector
driver: starrocks

-dsn: "{{ .env.connector.starrocks.dsn }}"
+dsn: "{{ .env.STARROCKS_DSN }}"
```

The DSN format is:
@@ -80,7 +80,7 @@ driver: starrocks
host: starrocks-fe.example.com
port: 9030
username: analyst
-password: "{{ .env.connector.starrocks.password }}"
+password: "{{ .env.STARROCKS_PASSWORD }}"
catalog: iceberg_catalog
database: my_database
```