From cd51880ee153fc576f0507f860dd9cb3914833e6 Mon Sep 17 00:00:00 2001 From: Bartek Tofel Date: Thu, 2 Apr 2026 11:19:54 +0200 Subject: [PATCH 1/3] update local CRE docs --- core/scripts/cre/environment/README.md | 50 ++--- system-tests/tests/smoke/cre/README.md | 242 ++++++------------------- 2 files changed, 83 insertions(+), 209 deletions(-) diff --git a/core/scripts/cre/environment/README.md b/core/scripts/cre/environment/README.md index e1f4ab68f36..d222168d6bf 100644 --- a/core/scripts/cre/environment/README.md +++ b/core/scripts/cre/environment/README.md @@ -96,7 +96,7 @@ Slack: #topic-local-dev-environments - [GH CLI is not installed](#gh-cli-is-not-installed) # QUICKSTART -Setup platform: allocate and configure the default environment with all dependencies. :rocket: +Setup platform: allocate and configure the default environment with all dependencies. ``` go run . env start --auto-setup ``` @@ -104,7 +104,7 @@ Note: this allocates and configures the full stack. It may take a few minutes th Deploy app: your first workflow ``` -go run . workflow deploy -w ./examples/workflows/v2/cron/main.go -n cron_example +go run . workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron_example ``` @@ -145,7 +145,7 @@ It will compile local CRE as `local_cre`. With it installed you will be able to Git access to plugin repositories (e.g. `capabilities`, `confidential-compute`) is required when building the Chainlink image from source, so plugins can be pulled during the Docker build. If you use a pre-built image with plugins already baked in, Git access is not required. -# QUICKSTART +# QUICKSTART (with ECR configured) ``` # e.g. AWS_ECR=.dkr.ecr..amazonaws.com AWS_ECR= go run . env start --auto-setup @@ -199,12 +199,17 @@ go run . env start --with-contracts-version v1 > Important! **Nightly** Chainlink images are retained only for one day and built at 03:00 UTC. That means that in most cases you should use today's image, not yesterday's. 
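The start variants shown above can also be combined in a single invocation. As a sketch (flag spellings taken from the `env start` option list in this README; nothing here beyond those documented flags):

```shell
# Sketch only: run setup, start the environment with Beholder,
# and deploy/verify the example workflow in one command.
# Assumes you are inside core/scripts/cre/environment of a chainlink checkout.
go run . env start --auto-setup --with-beholder --with-example
```

Drop `--with-beholder` if you do not need the workflow event stream.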
Optional parameters:
-- `-a`: Check if all dependencies are present and if not install them (defaults to `false`)
-- `-t`: Topology (`simplified` or `full`)
-- `-w`: Wait on error before removing up Docker containers (e.g. to inspect Docker logs, e.g. `-w 5m`)
-- `-e`: Extra ports for which external access by the DON should be allowed (e.g. when making API calls or downloading WASM workflows)
-- `-x`: Registers an example PoR workflow using CRE CLI and verifies it executed successfuly
-- `-s`: Time to wait for example workflow to execute successfuly (defaults to `5m`)
+- `-a, --auto-setup`: Run setup before starting the environment
+- `-w, --wait-on-error-timeout`: Time to wait before removing Docker containers if startup fails
+- `-l, --cleanup-on-error`: Remove Docker containers if startup fails
+- `-e, --extra-allowed-gateway-ports`: Extra allowed outgoing gateway ports
+- `-x, --with-example`: Deploy and register the example workflow
+- `-u, --example-workflow-timeout`: Time to wait for the example workflow to succeed
+- `-b, --with-beholder`: Deploy Beholder (Chip Ingress + Red Panda)
+- `-d, --with-dashboards`: Deploy the observability stack and Grafana dashboards
+- `--with-observability`: Start the observability stack
+- `--with-billing`: Deploy the billing platform service
+- `-g, --grpc-port`: gRPC port for Chip Ingress
- `-p`: **DEPRECATED** Use `image` in TOML config instead. See [Using a pre-built Chainlink image](#using-a-pre-built-chainlink-image).
- `--with-contracts-version`: Version of workflow/capability registries to use (`v2` by default, use `v1` explicitly for legacy coverage)
@@ -373,18 +378,21 @@ go run . 
workflow deploy [flags] ``` **Key flags:** -- `-w, --workflow-file-path`: Path to the workflow file (default: `./examples/workflows/v2/cron/main.go`) +- `-w, --workflow-file-path`: Path to a compiled base64 workflow file, or to a Go/TypeScript source file when `--compile` is used - `-c, --config-file-path`: Path to the config file (optional) - `-s, --secrets-file-path`: Path to the secrets file (optional) +- `-o, --secrets-output-file-path`: Output path for encrypted secrets (optional) - `-t, --container-target-dir`: Path to target directory in Docker container (default: `/home/chainlink/workflows`) -- `-o, --container-name-pattern`: Pattern to match container name (default: `workflow-node`) -- `-n, --workflow-name`: Workflow name (default: `exampleworkflow`) +- `-p, --container-name-pattern`: Pattern to match workflow node container names (default: `workflow-node`) +- `-n, --name`: Workflow name - `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-i, --chain-id`: Chain ID (default: `1337`) -- `-a, --workflow-registry-address`: Workflow registry address (default: `0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0`) -- `-b, --capabilities-registry-address`: Capabilities registry address (default: `0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512`) +- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) +- `-b, --capabilities-registry-address`: Capabilities registry address (optional; taken from local CRE state if omitted) - `-d, --workflow-owner-address`: Workflow owner address (default: `0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266`) - `-e, --don-id`: DON ID (default: `1`) +- `-x, --compile`: Compile the workflow before deploying it +- `-l, --delete-workflow-file`: Delete the workflow artifact after deployment +- `--with-contracts-version`: Registry contract version (`v1` or `v2`) **Example:** ```bash @@ -402,9 +410,8 @@ go run . 
workflow delete [flags] **Key flags:** - `-n, --name`: Workflow name to delete (default: `exampleworkflow`) - `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-i, --chain-id`: Chain ID (default: `1337`) -- `-a, --workflow-registry-address`: Workflow registry address (default: `0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0`) -- `-d, --workflow-owner-address`: Workflow owner address (default: `0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266`) +- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) +- `--with-contracts-version`: Registry contract version (`v1` or `v2`) **Example:** ```bash @@ -421,9 +428,8 @@ go run . workflow delete-all [flags] **Key flags:** - `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-i, --chain-id`: Chain ID (default: `1337`) -- `-a, --workflow-registry-address`: Workflow registry address (default: `0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0`) -- `-d, --workflow-owner-address`: Workflow owner address (default: `0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266`) +- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) +- `--with-contracts-version`: Registry contract version (`v1` or `v2`) **Example:** ```bash @@ -926,7 +932,7 @@ For other workflows (v2/cron, v2/node-mode, v2/http), you can deploy them manual ```bash # Deploy v2 cron workflow -go run . env workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron-workflow +go run . workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron-workflow # Deploy v2 http workflow go run . 
workflow deploy -w ./examples/workflows/v2/http/main.go --compile -n cron-workflow diff --git a/system-tests/tests/smoke/cre/README.md b/system-tests/tests/smoke/cre/README.md index 8c06217f95b..bb974ac7a83 100644 --- a/system-tests/tests/smoke/cre/README.md +++ b/system-tests/tests/smoke/cre/README.md @@ -93,10 +93,12 @@ The TOML config defines how Chainlink node images are used: ### Environment Variables -Only if you want to run the tests on non-default topology you need to set following variables before running the test: +You usually do not need extra environment variables for the default local flow. The helpers default to `core/scripts/cre/environment/configs/workflow-gateway-capabilities-don.toml`. -- `CTF_CONFIGS` -- either `configs/workflow-gateway-don.toml` or `configs/workflow-gateway-capabilities-don.toml` -- `CRE_TOPOLOGY` -- either `workflow-gateway` or `workflow-gateway-capabilities` +Set these only when you need to override the default behavior: + +- `CTF_CONFIGS` -- path to a specific topology TOML when you want a non-default topology +- `TOPOLOGY_NAME` -- optional label used in some test names and logs - `CTF_LOG_LEVEL=debug` -- to display test debug-level logs --- @@ -210,31 +212,20 @@ This section explains how to compile, upload, and register workflows in the CRE ### Workflow Compilation Process -The workflow compilation process follows these steps: +The tests compile workflow sources through `system-tests/lib/cre/workflow/compile.go`. -1. **Source Code Preparation**: Ensure your workflow source code is in Go and follows the CRE workflow structure -2. **Compilation**: Use `creworkflow.CompileWorkflow()` to compile Go code to WebAssembly -3. **Compression**: The compiled WASM is automatically compressed using Brotli and base64 encoded -4. **File Management**: Temporary files are cleaned up automatically +Current behavior: -#### Compilation Example +1. Workflow names must be at least 10 characters long. +2. 
Go workflows run `go mod tidy` in the workflow directory before build. +3. Go workflows are built with `GOOS=wasip1`, `GOARCH=wasm`, `CGO_ENABLED=0`. +4. The resulting `.wasm` artifact is Brotli-compressed and base64-encoded into a `.br.b64` file. +5. TypeScript workflows are compiled through `bun cre-compile ...`, so `bun` and the generated `package.json` from `go run . env setup` must be present. -```go -workflowFileLocation := "path/to/your/workflow/main.go" -workflowName := "my-workflow-" + uuid.New().String()[0:4] - -// Compile workflow to compressed WASM -compressedWorkflowWasmPath, compileErr := creworkflow.CompileWorkflow(ctx, workflowFileLocation, workflowName) -require.NoError(t, compileErr, "failed to compile workflow") - -// Cleanup temporary files -t.Cleanup(func() { - wasmErr := os.Remove(compressedWorkflowWasmPath) - if wasmErr != nil { - framework.L.Warn().Msgf("failed to remove workflow wasm file %s: %s", compressedWorkflowWasmPath, wasmErr.Error()) - } -}) -``` +Use the helper APIs that the tests already use instead of re-implementing the flow manually. The main path is: + +- `t_helpers.CompileAndDeployWorkflow(...)` for smoke/regression tests +- `creworkflow.CompileWorkflow(...)` or `CompileWorkflowToDir(...)` only when you intentionally need lower-level control #### Compilation Requirements @@ -242,13 +233,13 @@ Go workflows: - **Workflow Name**: Must be at least 10 characters long - **Go Environment**: Requires `go mod tidy` to be run in the workflow directory - **Target Platform**: Compiles for `GOOS=wasip1` and `GOARCH=wasm` -- **Output Format**: Produces `.wasm.br.b64` files (compressed and base64 encoded) +- **Output Format**: Produces `.br.b64` files containing Brotli-compressed, base64-encoded WASM TypeScript workflows: - **Workflow Name**: Must be at least 10 characters long -- **Bun installed**: Requires `Bun` (automatically installed by `go run . env setup`) +- **Bun installed**: Requires `bun` (automatically installed by `go run . 
env setup`) - **package.json**: Correct `package.json` must exist in `core/scripts/cre/environment` (automatically created by `go run . env setup`) -- **Output Format**: Produces `.wasm.br.b64` files (compressed and base64 encoded) +- **Output Format**: Produces `.br.b64` files containing Brotli-compressed, base64-encoded WASM ### Workflow Configuration @@ -295,7 +286,7 @@ After compilation, workflow files must be distributed to the appropriate contain ```go containerTargetDir := "/home/chainlink/workflows" -// Copy compiled workflow binary +// Copy workflow artifacts to workflow-node containers workflowCopyErr := creworkflow.CopyArtifactsToDockerContainers( containerTargetDir, "workflow-node", @@ -317,28 +308,12 @@ The framework automatically discovers containers by name pattern: Workflows are registered with the blockchain contract using the `RegisterWithContract` function: -#### Registration Process - -```go -workflowID, registerErr := creworkflow.RegisterWithContract( - t.Context(), - sethClient, // Blockchain client - workflowRegistryAddress, // Contract address - donID, // DON identifier - workflowName, // Unique workflow name - "file://" + compressedWorkflowWasmPath, // Binary URL - ptr.Ptr("file://" + workflowConfigFilePath), // Config URL - nil, // Secrets URL (optional) - &containerTargetDir, // Container artifacts directory -) -require.NoError(t, registerErr, "failed to register workflow") -``` - #### Registration Parameters - **Context**: Test context for timeout handling - **Seth Client**: Blockchain client for contract interaction - **Registry Address**: Workflow Registry contract address +- **Registry Version**: Required so the helper can select the v1/v2 registration path - **DON ID**: Decentralized Oracle Network identifier - **Workflow Name**: Unique identifier for the workflow - **Binary URL**: Path to the compiled workflow binary on the host machine (used to read and calculate workflow ID) @@ -346,6 +321,13 @@ require.NoError(t, registerErr, 
"failed to register workflow") - **Secrets URL**: Path to encrypted secrets on the host machine (optional) - **Artifacts Directory**: Container directory where workflow files are stored (e.g., `/home/chainlink/workflows`) +The exact helper signature changes over time. Before adding a new manual registration flow, check: + +- `system-tests/lib/cre/workflow/workflow.go` +- `system-tests/tests/test-helpers/t_helpers.go` + +For most smoke tests, prefer `t_helpers.CompileAndDeployWorkflow(...)` instead of calling the lower-level helpers directly. + #### URL Resolution Process The `RegisterWithContract` function processes URLs as follows: @@ -370,63 +352,20 @@ This ensures that the Chainlink nodes can locate and load the workflow files fro ### Complete Workflow Setup Example -Here's a complete example of setting up a workflow: - -```go -func setupWorkflow(t *testing.T, workflowSourcePath, workflowName string, config *portypes.WorkflowConfig) { - // 1. Compile workflow - compressedWorkflowWasmPath, compileErr := creworkflow.CompileWorkflow(workflowSourcePath, workflowName) - require.NoError(t, compileErr, "failed to compile workflow") - - // 2. Create configuration file (optional) - var configFilePath string - if config != nil { - configData, err := yaml.Marshal(config) - require.NoError(t, err, "failed to marshal config") - - configFilePath = workflowName + "_config.yaml" - err = os.WriteFile(configFilePath, configData, 0644) - require.NoError(t, err, "failed to write config file") - } - - // 3. 
Copy files to containers - containerTargetDir := "/home/chainlink/workflows" - err := creworkflow.CopyArtifactsToDockerContainers(compressedWorkflowWasmPath, "workflow-node", containerTargetDir) - require.NoError(t, err, "failed to copy workflow binary") +For new smoke tests, use the existing helper flow instead of reproducing compilation, copy, and registration logic inline: - if configFilePath != "" { - err = creworkflow.CopyArtifactsToDockerContainers(configFilePath, "workflow-node", containerTargetDir) - require.NoError(t, err, "failed to copy config file") - } +1. Build or reuse a test environment with `t_helpers.SetupTestEnvironmentWithConfig(...)` or `SetupTestEnvironmentWithPerTestKeys(...)`. +2. Create a workflow name that stays within the current 64-character limit. Prefer `t_helpers.UniqueWorkflowName(...)` when you need uniqueness. +3. Prepare a typed workflow config value if the workflow needs configuration. +4. Call `t_helpers.CompileAndDeployWorkflow(...)`. +5. Assert execution using the helper patterns already used by nearby smoke tests. - // 4. Register with contract - var configURL *string - if configFilePath != "" { - configURL = ptr.Ptr("file://" + configFilePath) - } +Examples to copy from current tests: - workflowID, registerErr := creworkflow.RegisterWithContract( - t.Context(), - sethClient, - workflowRegistryAddress, - donID, - workflowName, - "file://" + compressedWorkflowWasmPath, - configURL, - nil, // secrets URL (optional) - &containerTargetDir, - ) - require.NoError(t, registerErr, "failed to register workflow") - - // 5. 
Cleanup - t.Cleanup(func() { - os.Remove(compressedWorkflowWasmPath) - if configFilePath != "" { - os.Remove(configFilePath) - } - }) -} -``` +- EVM capability flows in `v2_evm_capability_test.go` +- HTTP action flows in `v2_http_action_test.go` +- gRPC source flows in `v2_grpc_source_test.go` +- Aptos and Solana flows in the corresponding `v2_*_capability_test.go` files --- @@ -470,44 +409,17 @@ secretsNames: #### Using Secrets in Workflows -```go -// 1. Set environment variables -os.Setenv("API_KEY_ENV_VAR_ALL", "your-api-key-here") -os.Setenv("DB_PASSWORD_ENV_VAR_ALL", "your-db-password") - -// 2. Prepare encrypted secrets -secretsFilePath := "path/to/secrets.yaml" -encryptedSecretsPath, err := creworkflow.PrepareSecrets( - sethClient, - donID, - capabilitiesRegistryAddress, - workflowOwnerAddress, - secretsFilePath, -) -require.NoError(t, err, "failed to prepare secrets") +Use the current `PrepareSecrets(...)` helper only after checking its live signature in `system-tests/lib/cre/workflow/secrets.go`. It currently requires: -// 3. Copy encrypted secrets to containers -err = creworkflow.CopyArtifactsToDockerContainers( - encryptedSecretsPath, - "workflow-node", - "/home/chainlink/workflows", -) -require.NoError(t, err, "failed to copy secrets to containers") - -// 4. Register workflow with secrets -workflowID, registerErr := creworkflow.RegisterWithContract( - ctx, - sethClient, - workflowRegistryAddress, - donID, - workflowName, - "file://" + compressedWorkflowWasmPath, - configURL, - &secretsURL, // Pass the encrypted secrets file path - &containerTargetDir, -) -require.NoError(t, registerErr, "failed to register workflow") -``` +- seth client +- DON ID +- capabilities registry address +- capabilities registry version +- workflow owner address +- secrets config path +- secrets output path + +This helper is lower-level and intentionally coupled to the current registry implementation. 
If you only need secrets in a new smoke test, copy an up-to-date example from the current test tree instead of relying on old README snippets. #### Secrets Encryption Process @@ -557,56 +469,12 @@ The generated encrypted secrets file contains: #### Complete Example -```go -func setupWorkflowWithSecrets(t *testing.T, workflowSourcePath, workflowName, secretsConfigPath string) { - // Set environment variables with your secrets - os.Setenv("API_KEY_ENV_VAR_ALL", "your-actual-api-key") - os.Setenv("DB_PASSWORD_ENV_VAR_ALL", "your-actual-db-password") - - // Compile workflow - compressedWorkflowWasmPath, err := creworkflow.CompileWorkflow(workflowSourcePath, workflowName) - require.NoError(t, err, "failed to compile workflow") - - // Prepare encrypted secrets - encryptedSecretsPath, err := creworkflow.PrepareSecrets( - sethClient, - donID, - capabilitiesRegistryAddress, - workflowOwnerAddress, - secretsConfigPath, - ) - require.NoError(t, err, "failed to prepare secrets") - - // Copy files to containers - containerTargetDir := "/home/chainlink/workflows" - err = creworkflow.CopyArtifactsToDockerContainers(compressedWorkflowWasmPath, "workflow-node", containerTargetDir) - require.NoError(t, err, "failed to copy workflow") - - err = creworkflow.CopyArtifactsToDockerContainers(encryptedSecretsPath, "workflow-node", containerTargetDir) - require.NoError(t, err, "failed to copy secrets") - - // Register workflow with secrets - secretsURL := "file://" + encryptedSecretsPath - workflowID, registerErr := creworkflow.RegisterWithContract( - t.Context(), - sethClient, - workflowRegistryAddress, - donID, - workflowName, - "file://" + compressedWorkflowWasmPath, - nil, // config URL (optional) - &secretsURL, - &containerTargetDir, - ) - require.NoError(t, registerErr, "failed to register workflow") - - // Cleanup - t.Cleanup(func() { - os.Remove(compressedWorkflowWasmPath) - os.Remove(encryptedSecretsPath) - }) -} -``` +Avoid copying a static README example for secrets setup. 
The relevant helper signatures have changed more than once. When adding a secrets-based test: + +1. Start from the current helper implementations in `system-tests/lib/cre/workflow`. +2. Verify the registry version you are targeting. +3. Reuse the same artifact copy flow as other current smoke tests. +4. Keep the secrets example in the test itself, close to the code that depends on it. --- From da9dc4b7ec75961e4b09a7c3a66e50b63dac74a6 Mon Sep 17 00:00:00 2001 From: Bartek Tofel Date: Thu, 2 Apr 2026 17:22:20 +0200 Subject: [PATCH 2/3] move local CRE docs do /docs and prepare them integration with central docs --- core/scripts/cre/environment/README.md | 2110 +---------------- docs/index.md | 25 + docs/local-cre/_category_.yaml | 7 + docs/local-cre/environment/_category_.yaml | 7 + docs/local-cre/environment/advanced.md | 445 ++++ docs/local-cre/environment/index.md | 133 ++ docs/local-cre/environment/topologies.md | 111 + docs/local-cre/environment/workflows.md | 115 + .../local-cre/getting-started/_category_.yaml | 7 + docs/local-cre/getting-started/index.md | 92 + docs/local-cre/index.md | 40 + docs/local-cre/reference/_category_.yaml | 7 + docs/local-cre/reference/index.md | 55 + docs/local-cre/system-tests/_category_.yaml | 7 + .../system-tests/ci-and-suite-maintenance.md | 87 + docs/local-cre/system-tests/index.md | 43 + docs/local-cre/system-tests/running-tests.md | 151 ++ .../system-tests/workflows-in-tests.md | 99 + system-tests/tests/smoke/cre/README.md | 867 +------ 19 files changed, 1456 insertions(+), 2952 deletions(-) create mode 100644 docs/index.md create mode 100644 docs/local-cre/_category_.yaml create mode 100644 docs/local-cre/environment/_category_.yaml create mode 100644 docs/local-cre/environment/advanced.md create mode 100644 docs/local-cre/environment/index.md create mode 100644 docs/local-cre/environment/topologies.md create mode 100644 docs/local-cre/environment/workflows.md create mode 100644 docs/local-cre/getting-started/_category_.yaml 
create mode 100644 docs/local-cre/getting-started/index.md create mode 100644 docs/local-cre/index.md create mode 100644 docs/local-cre/reference/_category_.yaml create mode 100644 docs/local-cre/reference/index.md create mode 100644 docs/local-cre/system-tests/_category_.yaml create mode 100644 docs/local-cre/system-tests/ci-and-suite-maintenance.md create mode 100644 docs/local-cre/system-tests/index.md create mode 100644 docs/local-cre/system-tests/running-tests.md create mode 100644 docs/local-cre/system-tests/workflows-in-tests.md diff --git a/core/scripts/cre/environment/README.md b/core/scripts/cre/environment/README.md index d222168d6bf..b55cc760350 100644 --- a/core/scripts/cre/environment/README.md +++ b/core/scripts/cre/environment/README.md @@ -1,2111 +1,23 @@ # Local CRE environment -The local CRE is developer environment for full stack development of the CRE platform. It deploys and configures DONs, capabilities, contracts, and observability, and optional features. It is built on Docker. +The long-form Local CRE documentation now lives in [`docs/local-cre/`](../../../../docs/local-cre/index.md). -## Contact Us -Slack: #topic-local-dev-environments +Use these pages instead of this legacy README: -## Table of content +- [Getting Started](../../../../docs/local-cre/getting-started/index.md) +- [Environment](../../../../docs/local-cre/environment/index.md) +- [Workflow Operations](../../../../docs/local-cre/environment/workflows.md) +- [Topologies and Capabilities](../../../../docs/local-cre/environment/topologies.md) +- [CRE System Tests](../../../../docs/local-cre/system-tests/index.md) -[QUICKSTART](#quickstart) -1. [Using the CLI](#using-the-cli) - - [Installing the binary](#installing-the-binary) - - [Prerequisites (for Docker)](#prerequisites-for-docker) - - [Setup](#setup) - - [Start Environment](#start-environment) - - [Using a pre-built Chainlink image](#using-a-pre-built-chainlink-image) - - [Beholder](#beholder) - - [Beholder vs. 
ChIP Test Sink](#beholder-vs-chip-test-sink-port-conflict-and-using-both-together) - - [Storage](#storage) - - [Purging environment state](#purging-environment-state) - - [Stop Environment](#stop-environment) - - [Restart Environment](#restarting-the-environment) - - [Debugging core nodes](#debugging-core-nodes) - - [Debugging capabilities (mac)](#debugging-capabilities-mac) - - [Workflow Commands](#workflow-commands) - - [Additional Workflow Sources](#additional-workflow-sources) - - [Overview](#additional-sources-overview) - - [Configuration](#additional-sources-configuration) - - [File Source JSON Format](#file-source-json-format) - - [Helper Tool: generate_file_source](#helper-tool-generate_file_source) - - [Deploying a File-Source Workflow](#deploying-a-file-source-workflow) - - [Mixed Sources (Contract + File)](#mixed-sources-contract--file) - - [Pausing and Deleting File-Source Workflows](#pausing-and-deleting-file-source-workflows) - - [Key Behaviors](#additional-sources-key-behaviors) - - [Debugging Additional Sources](#debugging-additional-sources) - - [Further use](#further-use) - - [Advanced Usage](#advanced-usage) - - [Testing Billing](#testing-billing) - - [DX Tracing](#dx-tracing) -2. [Job Distributor Image](#job-distributor-image) -3. [Example Workflows](#example-workflows) - - [Available Workflows](#available-workflows) - - [Deployable Example Workflows](#deployable-example-workflows) - - [Manual Workflow Deployment](#manual-workflow-deployment) -4. 
[Adding a New Standard Capability](#adding-a-new-standard-capability) - - [Capability Types](#capability-types) - - [Step 1: Define the Capability Flag](#step-1-define-the-capability-flag) - - [Step 2: Create the Capability Implementation](#step-2-create-the-capability-implementation) - - [Step 3: Optional Gateway Handler Configuration](#step-3-optional-gateway-handler-configuration) - - [Step 4: Optional Node Configuration Modifications](#step-4-optional-node-configuration-modifications) - - [Step 5: Add Default Configuration](#step-5-add-default-configuration) - - [Step 6: Register the Capability](#step-6-register-the-capability) - - [Step 7: Add to Environment Configurations](#step-7-add-to-environment-configurations) - - [Configuration Templates](#configuration-templates) - - [Important Notes](#important-notes) -5. [Multiple DONs](#multiple-dons) - - [Supported Capabilities](#supported-capabilities) - - [DON Types](#don-types) - - [TOML Configuration Structure](#toml-configuration-structure) - - [Example: Adding a New Topology](#example-adding-a-new-topology) - - [Configuration Modes](#configuration-modes) - - [Port Management](#port-management) - - [Important Notes](#important-notes-1) -6. [Enabling Already Implemented Capabilities](#enabling-already-implemented-capabilities) - - [Available Configuration Files](#available-configuration-files) - - [Capability Types and Configuration](#capability-types-and-configuration) - - [Capability availability](#capability-availability) - - [Enabling Capabilities in Your Topology](#enabling-capabilities-in-your-topology) - - [Configuration Examples](#configuration-examples) - - [Custom Capability Configuration](#custom-capability-configuration) - - [Important Notes](#important-notes-2) - - [Troubleshooting Capability Issues](#troubleshooting-capability-issues) -7. 
[Hot swapping](#hot-swapping) - - [Chainlink nodes' Docker image](#chainlink-nodes-docker-image) - - [Capability binary](#capability-binary) - - [Automated Hot Swapping with fswatch](#automated-hot-swapping-with-fswatch) -8. [Telemetry Configuration](#telemetry-configuration) - - [OTEL Stack (OpenTelemetry)](#otel-stack-opentelemetry) - - [Chip Ingress (Beholder)](#chip-ingress-beholder) - - [Expected Error Messages](#expected-error-messages) -9. [Using a Specific Docker Image for Chainlink Node](#using-a-specific-docker-image-for-chainlink-node) -10. [Using Existing EVM & P2P Keys](#using-existing-evm--p2p-keys) -11. [TRON Integration](#tron-integration) - - [How It Works](#how-it-works) - - [Example Configuration](#example-configuration) -12. [Connecting to external/public blockchains](#connecting-to-externalpublic-blockchains) -13. [Kubernetes Deployment](#kubernetes-deployment) - - [Prerequisites](#prerequisites-for-kubernetes) - - [Configuration](#kubernetes-configuration) - - [Config and Secrets Overrides](#config-and-secrets-overrides) - - [Example Configuration](#kubernetes-example-configuration) -14. [Troubleshooting](#troubleshooting) - - [Chainlink Node Migrations Fail](#chainlink-node-migrations-fail) - - [Docker Image Not Found](#docker-image-not-found) - - [Docker fails to download public images](#docker-fails-to-download-public-images) - - [GH CLI is not installed](#gh-cli-is-not-installed) - -# QUICKSTART -Setup platform: allocate and configure the default environment with all dependencies. -``` -go run . env start --auto-setup -``` -Note: this allocates and configures the full stack. It may take a few minutes the first time. - -Deploy app: your first workflow -``` -go run . workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron_example -``` - - -# Using the CLI - -The CLI manages CRE test environments. It is located in `core/scripts/cre/environment`. 
It doesn't come as a compiled binary, so every command has to be executed as `go run . [subcommand]` (although check below!). - -## Installing the binary -You can compile and install the binary by running: -```shell -cd core/scripts/cre/environment -make install -``` - -It will compile local CRE as `local_cre`. With it installed you will be able to access interactive shell **with autocompletions** by running `local_cre sh`. Without installing the binary interactive shell won't be available. - -![image](./images/autocompletion.png) - -> Warning: Control+C won't interrupt commands executed via the interactive shell. - -## Prerequisites (for Docker) ### -1. **Docker installed and running** - - with usage of default Docker socket **enabled** - - with Apple Virtualization framework **enabled** - - with VirtioFS **enabled** - - with use of containerd for pulling and storing images **disabled** -2. **AWS SSO access to SDLC** or **Access to Git repositories** - AWS: - - REQUIRED: `staging-default` profile (with `DefaultEngineeringAccess` role) -> [See more for configuring AWS in CLL](https://smartcontract-it.atlassian.net/wiki/spaces/INFRA/pages/1045495923/Configure+the+AWS+CLI) - Git repositories: - - REQUIRED: read access to [Atlas](https://github.com/smartcontractkit/atlas) and [Capabilities](https://github.com/smartcontractkit/capabilities) and [Job Distributor](https://github.com/smartcontractkit/job-distributor) repositories - - Either AWS or Git access is required in order to pull/build Docker images for: - - Chip Ingress (Beholder stack) - - Chip Config (Beholder stack) - - Job Distributor - - Git access to plugin repositories (e.g. `capabilities`, `confidential-compute`) is required when building the Chainlink image from source, so plugins can be pulled during the Docker build. If you use a pre-built image with plugins already baked in, Git access is not required. - -# QUICKSTART (with ECR configured) -``` -# e.g. AWS_ECR=.dkr.ecr..amazonaws.com -AWS_ECR= go run . 
env start --auto-setup -``` -> You can find `PROD_ACCOUNT_ID` and `REGION` in the `[profile prod]` section of the [AWS CLI configuration guide](https://smartcontract-it.atlassian.net/wiki/spaces/INFRA/pages/1045495923/Configure+the+AWS+CLI#Configure). If for some reason you want to limit the AWS config to bare minimum, include only `staging-default` profile and `cl-secure-sso` session entries. - -If you are missing requirements, you may need to fix the errors and re-run. - -Refer to [this document](https://docs.google.com/document/d/1HtVLv2ipx2jvU15WYOijQ-R-5BIZrTdAaumlquQVZ48/edit?tab=t.0#heading=h.wqgcsrk9ncjs) for troubleshooting and FAQ. Use `#topic-local-dev-environments` for help. - -## Setup - -Environment can be setup by running `go run . env setup` inside `core/scripts/cre/environment` folder. Its configuration is defined in [configs/setup.toml](configs/setup.toml) file. It will make sure that: -- you have AWS CLI installed and configured -- you have GH CLI installed and authenticated -- you have required Job Distributor, Chip Ingress, and Chip Config images - -**Image Versioning:** - -Docker images for Beholder services (chip-ingress, chip-config) use commit-based tags instead of mutable tags like `local-cre`. This ensures you always know which version is running and prevents hard-to-debug issues from version mismatches. The exact versions are defined in [configs/setup.toml](configs/setup.toml). - -**Plugin installation during image build:** - -All plugins (private, public, local) are installed when the Chainlink Docker image is built. Plugin sources are defined in: -- [plugins/plugins.private.yaml](../../../../plugins/plugins.private.yaml) for private plugins -- [plugins/plugins.public.yaml](../../../../plugins/plugins.public.yaml) for public plugins - -Each plugin entry specifies a `moduleURI` and `gitRef` (commit, branch, or tag). To use a new version of a plugin: - -1. Push your changes to the remote Git repository -2. 
Update the `gitRef` in the YAML file to point to your new commit/branch/tag -3. Start the environment (or rebuild the image); plugins will be pulled and compiled during the Docker build - -Builds that access private repositories require `GITHUB_TOKEN` to be set (e.g. `export GITHUB_TOKEN=$(gh auth token)`). - -## Start Environment -```bash -# while in core/scripts/cre/environment -go run . env start [--auto-setup] - -# to start environment with the PoR v2 cron example workflow -go run . env start --with-example - -# to start environment with local Beholder -go run . env start --with-beholder - -# to start environment with legacy v1 contracts (default is v2) -go run . env start --with-contracts-version v1 -``` - -> Important! **Nightly** Chainlink images are retained only for one day and built at 03:00 UTC. That means that in most cases you should use today's image, not yesterday's. - -Optional parameters: -- `-a, --auto-setup`: Run setup before starting the environment -- `-w, --wait-on-error-timeout`: Time to wait before removing Docker containers if startup fails -- `-l, --cleanup-on-error`: Remove Docker containers if startup fails -- `-e, --extra-allowed-gateway-ports`: Extra allowed outgoing gateway ports -- `-x, --with-example`: Deploy and register the example workflow -- `-u, --example-workflow-timeout`: Time to wait for the example workflow to succeed -- `-b, --with-beholder`: Deploy Beholder (Chip Ingress + Red Panda) -- `-d, --with-dashboards`: Deploy the observability stack and Grafana dashboards -- `--with-observability`: Start the observability stack -- `--with-billing`: Deploy the billing platform service -- `-g, --grpc-port`: gRPC port for Chip Ingress -- `-p`: **DEPRECATED** Use `image` in TOML config instead. See [Using a pre-built Chainlink image](#using-a-pre-built-chainlink-image). 
-- `--with-contracts-version`: Version of workflow/capability registries to use (`v2` by default, use `v1` explicitly for legacy coverage) - -## Purging environment state -To remove all state and cache files used by the environment, execute: -```bash -# while in core/scripts/cre/environment -go run . env state purge -``` - -This can help if you suspect that the state files are corrupt and you're unable to start the environment. - -### Using a pre-built Chainlink image - -By default, the environment builds the Chainlink image from your local branch. To use a pre-built image instead (e.g. a nightly with all plugins), set the `image` field in your topology TOML for each node: - -```toml -[nodesets.node_specs.node] - image = ".dkr.ecr..amazonaws.com/chainlink:nightly--plugins" - # Leave docker_ctx and docker_file empty or omit them -``` - -Apply this to **all** nodes in the nodeset. Nightly images are built by the [Docker Build action](https://github.com/smartcontractkit/chainlink/actions/workflows/docker-build.yml). - -### Beholder - -When the environment is started with the `--with-beholder` (or `-b`) flag, once the DON is ready we boot up `Chip Ingress` and `Red Panda`, create a `cre` topic, and download and install workflow-related protobufs from the [chainlink-protos](https://github.com/smartcontractkit/chainlink-protos/tree/main/workflows) repository. - -Once everything is up and running you will be able to open the [CRE topic view](http://localhost:8080/topics/cre) to see workflow-emitted events. These include both standard events emitted by the Workflow Engine and custom events emitted from your workflow. - -#### Filtering out heartbeats -Heartbeat messages spam the topic, so it's highly recommended that you add a JavaScript filter that excludes them, using the following code: `return value.msg !== 'heartbeat';`. - -If the environment is already running you can start just the Beholder stack (and register protos) with: -```bash -go run . 
env beholder start -``` - -**Image Requirements:** - -Beholder requires `chip-ingress` and `chip-config` Docker images with specific versions defined in [configs/setup.toml](configs/setup.toml). The image tags use commit hashes for version tracking (e.g., `chip-ingress:da84cb72d3a160e02896247d46ab4b9806ebee2f`). - -When starting Beholder, the system will: -- **In CI (`CI=true`)**: Skip image checks (docker-compose will pull at runtime) -- **Interactive terminal**: Auto-build missing images from sources. If build fails and `AWS_ECR` is set, you'll be offered to pull from ECR instead -- **Non-interactive (tests, scripts)**: Auto-pull from ECR if `AWS_ECR` is set, otherwise fail with instructions - -To manually ensure images are available, run: -```bash -# Build from sources -go run . env setup - -# Or pull from ECR (requires AWS SSO access) -AWS_ECR=.dkr.ecr.us-west-2.amazonaws.com go run . env setup -``` - -#### Beholder vs. ChIP Test Sink: Port Conflict and Using Both Together - -Both the **real Beholder** (Chip Ingress + Red Panda) and the **ChIP Test Sink** (used by CRE system tests for assertions) bind to the same gRPC port by default (50051). Chainlink nodes are configured to send workflow telemetry to `host.docker.internal:50051`, so only one service can receive on that port at a time. - -**Default behavior in tests:** -- Most CRE smoke/regression tests use the **test sink** (`t_helpers.StartChipTestSink`). The sink listens on 50051, receives CloudEvents from nodes, and runs test assertions. No Kafka/Red Panda. -- Beholder-specific tests (e.g. `Test_CRE_V2_Suite` with Cron Beholder scenario, `Test_CRE_V1_Billing_Cron_Beholder`) use **real Beholder** via `t_helpers.StartBeholder`. They start Beholder on 50051, consume from Kafka, and run assertions. The test cleanup stops Beholder so subsequent tests can use the test sink. - -**To use both together** (test assertions + Red Panda/Kafka observability): - -1. **Start Beholder on a different port** (e.g. 
50052): - ```bash - go run . env beholder start --grpc-port 50052 - ``` - Or, when starting the full environment: - ```bash - go run . env start --with-beholder --grpc-port 50052 - ``` - -2. **Run the test sink on the default port (50051)** so it receives events from nodes. The test sink must listen on 50051 because node config is fixed to that port. - -3. **Configure the test sink to forward to Beholder** by setting `UpstreamEndpoint` in the sink config. The `chiptestsink` package supports this, but `t_helpers.StartChipTestSink` does not expose it. To use both: - - Use `chiptestsink.NewServer` directly with `Config{UpstreamEndpoint: "localhost:50052", ...}` instead of `StartChipTestSink`, or - - Extend the test helper to accept an optional upstream endpoint. - -4. **Resulting flow:** Nodes → test sink (50051) → assertions + forward → Beholder (50052) → Kafka/Red Panda. - -**Summary:** Use either Beholder or the test sink alone for simplicity. Use both only when you need test assertions and Red Panda observability in the same run; then run Beholder on a non-default port and configure the sink to forward to it. - -### Storage - -By default, workflow artifacts are loaded from the container's filesystem. The Chainlink nodes can only load workflow files from the local filesystem if `WorkflowFetcher` uses the `file://` prefix. Right now, it cannot read workflow files from both the local filesystem and external sources (like S3 or web servers) at the same time. - -The environment supports two storage backends for workflow uploads: -- Gist (requires deprecated CRE CLI, remote) -- S3 MinIO (built-in, local) - -Configuration details for the CRE CLI are generated automatically into the `cre.yaml` file -(path is printed after starting the environment). 
- -For more details on the URL resolution process and how workflow artifacts are handled, see the [URL Resolution Process](../../../../system-tests/tests/smoke/cre/guidelines.md#url-resolution-process) section in `system-tests/tests/smoke/cre/guidelines.md`. - -## Stop Environment -```bash -# while in core/scripts/cre/environment -go run . env stop - -# or... if you have the CTF binary -ctf d rm -``` ---- - -## Restarting the environment - -If you are using Blockscout and you restart the environment, **you need to restart the block explorer** as well if you want to see current block history. If you don't, you will see the stale state of the previous environment. To restart it, execute: -```bash -ctf bs r -``` ---- - -## Debugging core nodes -Before starting the environment, set the `CTF_CLNODE_DLV` environment variable to `true`: -```bash -export CTF_CLNODE_DLV="true" -``` -Nodes will open a Delve server on port `40000 + node index` (e.g. the first node will be on `40000`, the second on `40001`, etc.). You can connect to it using your IDE or the `dlv` CLI. - -## Debugging capabilities (mac) -Build the capability with debug symbols (this ensures the binary is not run via Rosetta, which prevents dlv from attaching): -```bash -GOOS=linux GOARCH=arm64 go build -gcflags "all=-N -l" -o -``` - -Use `env swap capability` to deploy the binary to running containers without restarting the environment: -```bash -go run . env swap capability -n -b /path/to/your/binary -``` - -Add or update the `custom_ports` entry in the topology file (e.g., `core/scripts/cre/environment/configs/workflow-gateway-don.toml`) to include the port mapping for the Delve debugger. For example: -```toml -custom_ports = ["5002:5002", "15002:15002", "45000:45000"] -``` - -Start the environment and verify that the container is exposing the new port. 
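One way to verify the mapping is to ask Docker which ports the container publishes. The sketch below is illustrative: the container name `workflow-node1` and port `45000` are assumptions taken from the example above, so adjust them to your topology.

```shell
# Hedged sketch: report whether a container publishes a given port.
# Container name and port are illustrative; adjust to your setup.
check_port() {
  container="$1"; port="$2"
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not available"
  elif docker port "$container" 2>/dev/null | grep -q "$port"; then
    echo "exposed"
  else
    echo "not exposed"
  fi
}

check_port workflow-node1 45000
```

If the port is not listed, re-check the `custom_ports` entry in your topology TOML and restart the environment.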
Start a shell session on the relevant container, e.g.: -```bash -docker exec -it workflow-node1 /bin/bash -``` - -In the shell session, list all processes (`ps -aux`) and identify the PID of the capability you want to debug. Also, verify -that Rosetta is not being used to run the capability binary that you want to debug. - -Attach dlv to the capability process using the PID you identified above, e.g.: -```bash -dlv attach --headless --listen=:45000 --api-version=2 --accept-multiclient -``` - -Attach your IDE to the dlv server on port `45000` (or whatever port you exposed). - - -## Workflow Commands - -The environment provides workflow management commands defined in `core/scripts/cre/environment/environment/workflow.go`: - -### `workflow deploy` -Compiles and uploads a workflow to the environment by copying it to the workflow nodes and registering it with the workflow registry. If a workflow with the same name already exists, it is deleted first. - -**Usage:** -```bash -go run . workflow deploy [flags] -``` - -**Key flags:** -- `-w, --workflow-file-path`: Path to a compiled base64 workflow file, or to a Go/TypeScript source file when `--compile` is used -- `-c, --config-file-path`: Path to the config file (optional) -- `-s, --secrets-file-path`: Path to the secrets file (optional) -- `-o, --secrets-output-file-path`: Output path for encrypted secrets (optional) -- `-t, --container-target-dir`: Path to target directory in Docker container (default: `/home/chainlink/workflows`) -- `-p, --container-name-pattern`: Pattern to match workflow node container names (default: `workflow-node`) -- `-n, --name`: Workflow name -- `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) -- `-b, --capabilities-registry-address`: Capabilities registry address (optional; taken from local CRE state if omitted) -- `-d, --workflow-owner-address`: Workflow owner address 
(default: `0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266`) -- `-e, --don-id`: DON ID (default: `1`) -- `-x, --compile`: Compile the workflow before deploying it -- `-l, --delete-workflow-file`: Delete the workflow artifact after deployment -- `--with-contracts-version`: Registry contract version (`v1` or `v2`) - -**Example:** -```bash -go run . workflow deploy -w ./my-workflow.go -n myworkflow -c ./config.yaml -``` - -### `workflow delete` -Deletes a specific workflow from the workflow registry contract (but doesn't remove it from Docker containers). - -**Usage:** -```bash -go run . workflow delete [flags] -``` - -**Key flags:** -- `-n, --name`: Workflow name to delete (default: `exampleworkflow`) -- `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) -- `--with-contracts-version`: Registry contract version (`v1` or `v2`) - -**Example:** -```bash -go run . workflow delete -n myworkflow -``` - -### `workflow delete-all` -Deletes all workflows from the workflow registry contract. - -**Usage:** -```bash -go run . workflow delete-all [flags] -``` - -**Key flags:** -- `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) -- `-a, --workflow-registry-address`: Workflow registry address (optional; taken from local CRE state if omitted) -- `--with-contracts-version`: Registry contract version (`v1` or `v2`) - -**Example:** -```bash -go run . workflow delete-all -``` - -### `workflow run-por-example` -Deploys and verifies the PoR v2 cron example workflow. - -**Usage:** -```bash -go run . workflow run-por-example -``` - -This command uses default values and is useful for testing the workflow deployment process. - ---- - -## Additional Workflow Sources - -The workflow registry syncer supports multiple sources of workflow metadata beyond the on-chain contract. 
This enables flexible deployment scenarios including pure file-based or GRPC-based workflow deployments. - -### Additional Sources Overview - -Three source types are supported: - -1. **ContractWorkflowSource** (optional): Reads from the on-chain workflow registry contract -2. **GRPCWorkflowSource** (additional): Fetches from external GRPC services -3. **FileWorkflowSource** (additional): Reads from a local JSON file - -**Key Features:** -- Contract source is optional - enables pure GRPC-only or file-only deployments -- All additional sources (GRPC and file) are configured via unified `AdditionalSources` config -- Source type is auto-detected by URL scheme (`file://` for file, otherwise GRPC) - -### Additional Sources Configuration - -All additional sources are configured via the `AdditionalSources` config in TOML. The source type is auto-detected based on the URL scheme: - -**File source (detected by `file://` prefix):** -```toml -[WorkflowRegistry] -Address = "0x1234..." # Optional - leave empty for pure file-only deployments - -[[WorkflowRegistry.AdditionalSources]] -Name = "local-file" -URL = "file:///tmp/workflows_metadata.json" -``` - -**GRPC source (URL without `file://` prefix):** -```toml -[WorkflowRegistry] -Address = "0x1234..." - -[[WorkflowRegistry.AdditionalSources]] -Name = "private-registry" -URL = "grpc.private-registry.example.com:443" -TLSEnabled = true -``` - -**Pure GRPC-only deployment (no contract):** -```toml -[WorkflowRegistry] -# No Address = no contract source - -[[WorkflowRegistry.AdditionalSources]] -Name = "private-registry" -URL = "grpc.private-registry.example.com:443" -TLSEnabled = true -``` - -### File Source JSON Format - -The file source reads from the path specified in the URL (e.g., `/tmp/workflows_metadata.json`). 
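The scheme-based auto-detection described in the configuration section above can be sketched as follows. This is an illustrative shell function, not the actual node implementation (the real detection happens internally when `AdditionalSources` entries are parsed); the URLs are the examples from the TOML snippets.

```shell
# Illustrative only: mirror how a source's type is chosen from its URL scheme.
# A "file://" prefix selects the file source; anything else is treated as GRPC.
detect_source_type() {
  case "$1" in
    file://*) echo "file" ;;
    *)        echo "grpc" ;;
  esac
}

detect_source_type "file:///tmp/workflows_metadata.json"
detect_source_type "grpc.private-registry.example.com:443"
```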
- -**JSON Schema:** -```json -{ - "workflows": [ - { - "workflow_id": "<32-byte hex string without 0x prefix>", - "owner": "", - "created_at": "", - "status": "<0=active, 1=paused>", - "workflow_name": "", - "binary_url": "", - "config_url": "", - "tag": "", - "attributes": "", - "don_family": "" - } - ] -} -``` - -**Example:** -```json -{ - "workflows": [ - { - "workflow_id": "0102030405060708091011121314151617181920212223242526272829303132", - "owner": "f39fd6e51aad88f6f4ce6ab8827279cfffb92266", - "created_at": 1733250000, - "status": 0, - "workflow_name": "my-file-workflow", - "binary_url": "file:///home/chainlink/workflows/my_workflow.wasm", - "config_url": "file:///home/chainlink/workflows/my_config.json", - "tag": "v1.0.0", - "don_family": "workflow" - } - ] -} -``` - -See [examples/workflows_metadata_example.json](./examples/workflows_metadata_example.json) for a reference file. - -### Helper Tool: generate_file_source - -A helper tool is provided to generate the workflow metadata JSON with the correct workflowID (which is a hash of the workflow artifacts): - -```bash -cd core/scripts/cre/environment -go run ./cmd/generate_file_source \ - --binary /path/to/workflow.wasm \ - --config /path/to/config.json \ - --name my-workflow \ - --owner f39fd6e51aad88f6f4ce6ab8827279cfffb92266 \ - --output /tmp/workflows_metadata.json \ - --don-family workflow -``` - -**Additional flags:** -- `--binary-url-prefix`: Prefix for the binary URL in the output (e.g., `file:///home/chainlink/workflows/`) -- `--config-url-prefix`: Prefix for the config URL in the output - -### Deploying a File-Source Workflow - -This walkthrough demonstrates deploying a workflow via file source in a local CRE environment. - -**Prerequisites:** -- Local CRE environment set up -- Docker running -- Go toolchain installed - -**Step-by-step:** +## Quickstart ```bash -# 1. Start the environment cd core/scripts/cre/environment go run . env start --auto-setup - -# 2. 
Deploy a workflow via contract first (this creates the compiled binary in containers) -go run . workflow deploy -w ./examples/workflows/v2/cron/main.go -n cron_contract - -# 3. Get the existing workflow binary from a container -docker cp workflow-node1:/home/chainlink/workflows/cron_contract.wasm /tmp/cron_contract.wasm - -# 4. Generate the file source metadata with a DIFFERENT workflow name -go run ./cmd/generate_file_source \ - --binary /tmp/cron_contract.wasm \ - --name file_source_cron \ - --owner f39fd6e51aad88f6f4ce6ab8827279cfffb92266 \ - --output /tmp/workflows_metadata.json \ - --don-family workflow \ - --binary-url-prefix "file:///home/chainlink/workflows/" \ - --config-url-prefix "file:///home/chainlink/workflows/" - -# 5. Copy the binary to all containers with new name -docker cp /tmp/cron_contract.wasm workflow-node1:/home/chainlink/workflows/file_source_workflow.wasm -docker cp /tmp/cron_contract.wasm workflow-node2:/home/chainlink/workflows/file_source_workflow.wasm -docker cp /tmp/cron_contract.wasm workflow-node3:/home/chainlink/workflows/file_source_workflow.wasm -docker cp /tmp/cron_contract.wasm workflow-node4:/home/chainlink/workflows/file_source_workflow.wasm -docker cp /tmp/cron_contract.wasm workflow-node5:/home/chainlink/workflows/file_source_workflow.wasm - -# 6. Create an empty config file and copy to all containers -echo '{}' > /tmp/file_source_config.json -docker cp /tmp/file_source_config.json workflow-node1:/home/chainlink/workflows/file_source_config.json -docker cp /tmp/file_source_config.json workflow-node2:/home/chainlink/workflows/file_source_config.json -docker cp /tmp/file_source_config.json workflow-node3:/home/chainlink/workflows/file_source_config.json -docker cp /tmp/file_source_config.json workflow-node4:/home/chainlink/workflows/file_source_config.json -docker cp /tmp/file_source_config.json workflow-node5:/home/chainlink/workflows/file_source_config.json - -# 7. 
Copy the metadata file to all nodes -docker cp /tmp/workflows_metadata.json workflow-node1:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata.json workflow-node2:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata.json workflow-node3:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata.json workflow-node4:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata.json workflow-node5:/tmp/workflows_metadata.json - -# 8. Wait for the syncer to pick up the workflow (default 12 second interval) -# Check logs for "Loaded workflows from file" messages -docker logs workflow-node1 2>&1 | grep -i "file" - -# 9. Verify the workflow is running -docker logs workflow-node1 2>&1 | grep -i "workflow engine" -``` - -### Mixed Sources (Contract + File) - -You can run both contract-deployed and file-source workflows simultaneously: - -```bash -# 1. Deploy workflow via contract -go run . workflow deploy -w ./examples/workflows/v2/cron/main.go -n contract_workflow - -# 2. Add a different workflow via file source (follow steps 3-7 from above) - -# 3. 
Verify both workflows are running -docker logs workflow-node1 2>&1 | grep -i "Aggregated workflows from all sources" -# Should show totalWorkflows: 2 -``` - -### Pausing and Deleting File-Source Workflows - -**Pausing a workflow** - Change the `status` field to `1`: - -```bash -# Create updated metadata with status=1 (paused) -cat > /tmp/workflows_metadata_paused.json << 'EOF' -{ - "workflows": [ - { - "workflow_id": "", - "owner": "f39fd6e51aad88f6f4ce6ab8827279cfffb92266", - "status": 1, - "workflow_name": "file_source_cron", - "binary_url": "file:///home/chainlink/workflows/file_source_workflow.wasm", - "config_url": "file:///home/chainlink/workflows/file_source_config.json", - "don_family": "workflow" - } - ] -} -EOF - -# Copy to all nodes -docker cp /tmp/workflows_metadata_paused.json workflow-node1:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata_paused.json workflow-node2:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata_paused.json workflow-node3:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata_paused.json workflow-node4:/tmp/workflows_metadata.json -docker cp /tmp/workflows_metadata_paused.json workflow-node5:/tmp/workflows_metadata.json - -# Wait for syncer to detect the change -docker logs workflow-node1 2>&1 | grep -i "paused" -``` - -**Deleting a workflow** - Remove it from the JSON file: - -```bash -# Create empty metadata file -echo '{"workflows":[]}' > /tmp/empty_metadata.json - -# Copy to all nodes -docker cp /tmp/empty_metadata.json workflow-node1:/tmp/workflows_metadata.json -docker cp /tmp/empty_metadata.json workflow-node2:/tmp/workflows_metadata.json -docker cp /tmp/empty_metadata.json workflow-node3:/tmp/workflows_metadata.json -docker cp /tmp/empty_metadata.json workflow-node4:/tmp/workflows_metadata.json -docker cp /tmp/empty_metadata.json workflow-node5:/tmp/workflows_metadata.json - -# Contract workflows continue running; file-source workflow is removed -``` - -### Additional Sources Key 
Behaviors - -**Source Aggregation:** -- Workflows from all sources are merged into a single list -- Only ContractWorkflowSource provides real blockchain head (block height/hash) -- For pure additional-source deployments, a synthetic head is created (Unix timestamp) -- If one source fails, others continue to work (graceful degradation) - -**Contract Source Optional:** -- If no contract address is configured, the contract source is skipped -- Enables pure GRPC-only or file-only workflow deployments -- Synthetic heads are used when no contract source is present - -**File Source Characteristics:** -- File is read on every sync interval (default 12 seconds) -- Missing file = empty workflow list (not an error) -- Invalid JSON entries are skipped with a warning -- File source is always "ready" (unlike contract source which needs initialization) - -**GRPC Source:** -- Supports JWT-based authentication -- Includes automatic retry logic with exponential backoff (max 2 retries, 100ms-5s delay) -- Only transient errors (Unavailable, ResourceExhausted) are retried - -**Source Tracking:** -- Each workflow includes a `Source` field identifying where it was deployed from -- Source identifiers: `ContractWorkflowSource`, `FileWorkflowSource`, `GRPCWorkflowSource` - -### Debugging Additional Sources - -**Check if file source is being read:** -```bash -docker logs workflow-node1 2>&1 | grep "Loaded workflows from file" -docker logs workflow-node1 2>&1 | grep "Workflow metadata file does not exist" -``` - -**Check aggregated workflows:** -```bash -docker logs workflow-node1 2>&1 | grep "Aggregated workflows from all sources" -docker logs workflow-node1 2>&1 | grep "fetching workflow metadata from all sources" -``` - -**Verify workflow engine started:** -```bash -docker logs workflow-node1 2>&1 | grep "Creating Workflow Engine for workflow spec" -``` - -**Key log messages:** -- `"Loaded workflows from file"` - File was successfully read -- `"Workflow metadata file does not exist"` - 
File doesn't exist (normal if not using file source) -- `"Source not ready, skipping"` - Contract source not yet initialized -- `"Aggregated workflows from all sources"` with `totalWorkflows` count - Sync completed -- `"All workflow sources failed - will retry next cycle"` (WARN) - All sources failed -- `"Failed to fetch workflows from source"` (ERROR) - Individual source failure - ---- - -## Further use -To manage workflows you will need the CRE CLI. You can either: -- download it from [smartcontract/dev-platform](https://github.com/smartcontractkit/dev-platform/releases/tag/v0.2.0) or -- using GH CLI: - ```bash - gh release download v0.2.0 --repo smartcontractkit/dev-platform --pattern '*darwin_arm64*' - ``` - -Remember that the CRE CLI version needs to match your CPU architecture and operating system. - ---- - -### Advanced Usage: -1. **Choose the Right Topology** - - For a single DON with all capabilities, but with a separate gateway and bootstrap node: `configs/workflow-gateway-don.toml` (default) - - For a full topology (workflow DON + capabilities DON + gateway DON): `configs/workflow-gateway-capabilities-don.toml` - - Use the topology CLI to inspect and compare configs before startup: - ```bash - # list available topology configs - go run . topology list - - # show compact DON + capability matrix view for one config - go run . topology show --config configs/workflow-gateway-capabilities-don.toml - - # regenerate topology docs - go run . topology generate - ``` - - `env start` now prints a compact topology summary with a capability matrix. -2. **Working on plugins** - - Plugins are installed during the Docker image build from the YAML configs (`plugins/plugins.private.yaml`, `plugins/plugins.public.yaml`). - - To use your plugin changes: push to the remote repo, update the `gitRef` in the YAML to your commit/branch/tag, then start the environment (image will rebuild). 
- - For quick iteration without full rebuilds, use `env swap capability` to hot-swap a locally built binary into running containers. -3. **Decide whether to build or reuse Chainlink Docker image** - - By default, the config builds the image from your local branch (plugins are pulled per the YAML `gitRef` during build). To use a pre-built image: - ```toml - [nodesets.node_specs.node] - image = ":" - ``` - - Make these changes for **all** nodes in the nodeset. Omit or clear `docker_ctx` and `docker_file` when using a pre-built image. - -4. **Decide whether to use Docker or k8s** - - Read [Docker vs Kubernetes in guidelines.md](../../../../system-tests/tests/smoke/cre/guidelines.md) to learn how to switch between Docker and Kubernetes -5. **Start Observability Stack (Docker-only)** - ```bash - # to start Loki, Grafana and Prometheus run: - ctf obs up - - # to start Blockscout block explorer run: - ctf bs u - ``` - - To download the `ctf` binary follow the steps described [here](https://smartcontractkit.github.io/chainlink-testing-framework/framework/getting_started.html) - -Optional environment variables used by the CLI: -- `CTF_CONFIGS`: TOML config paths. Defaults to [./configs/workflow-gateway-don.toml](./configs/workflow-gateway-don.toml) -- `PRIVATE_KEY`: Plaintext private key that will be used for all deployments (needs to be funded). Defaults to `ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80` -- `TESTCONTAINERS_RYUK_DISABLED`: Set to "true" to disable cleanup. Defaults to `false` - -When starting the environment in AWS-managed Kubernetes, make sure required ingress/domain environment variables are set before running commands. - -### Testing Billing -Spin up the billing service and necessary migrations in the `billing-platform-service` repo. -The directions there should be sufficient. If they are not, give a shout to @cre-business. - -So far, all we have working is the workflow run by: -```go -go run . 
env start --with-example -w 1m -``` - -I recommend increasing your docker resources to near max memory, as this is going to slow your local -machine down anyways, and it could mean the difference between a 5 minute and 2 minute iteration cycle. - -Add the following TOML config to `core/scripts/cre/environment/configs/workflow-gateway-don.toml`: -```toml -[Billing] -URL = 'host.docker.internal:2223' -TLSEnabled = false -``` - -The happy-path: -* workflow runs successfully -* no `switch to metering mode` error logs generated by the workflow run -* SubmitWorkflowReceipt in billing service does not have an err message - ---- - -## DX Tracing - -To track environment usage and quality metrics (success/failure rate, startup time) local CRE environment is integrated with DX. If you have `gh cli` configured and authenticated on your local machine it will be used to automatically setup DX integration in the background. If you don't, tracing data will be stored locally in `~/.local/share/dx/` and uploaded once either `gh cli` is available or valid `~/.local/share/dx/config.json` file appears. - -> Minimum required version of the `GH CLI` is `v2.50.0` - -To opt out from tracing use the following environment variable: -```bash -DISABLE_DX_TRACKING=true -``` - -### Manually creating config file - -Valid config file has the following content: -```json -{ - "dx_api_token":"xxx", - "github_username":"your-gh-username" -} -``` - -DX API token can be found in 1 Password in the engineering vault as `DX - Local CRE Environment`. - -Other environment variables: -* `DX_LOG_LEVEL` -- log level of a rudimentary logger -* `DX_TEST_MODE` -- executes in test mode, which means that data sent to DX won't be included in any reports -* `DX_FORCE_OFFLINE_MODE` -- doesn't send any events, instead saves them on the disk - ---- - -# Job Distributor Image - -Tests require a local Job Distributor image. By default, configs expect version `job-distributor:0.22.1`. 
- -To build locally: -```bash -git clone https://github.com/smartcontractkit/job-distributor -cd job-distributor -git checkout v0.22.1 -docker build -t job-distributor:0.22.1 -f e2e/Dockerfile.e2e . +go run . workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron_example ``` -If you pull the image from the PROD ECR, remember to either update the image name in the [TOML config](./configs/) for your chosen topology or to tag that image as `job-distributor:0.22.1`. - -## Example Workflows - -The environment includes several example workflows located in `core/scripts/cre/environment/examples/workflows/`: - -### Available Workflows - -#### V2 Workflows -- **`v2/cron/`**: Simple cron-based workflow that executes on a schedule -- **`v2/node-mode/`**: Node mode workflow example -- **`v2/http/`**: HTTP-based workflow example - -- **`v2/proof-of-reserve/cron-based/`**: Cron-based proof-of-reserve workflow example - -### Deployable Example Workflows - -The following workflow can be deployed using the `workflow run-por-example` command: - -#### Proof-of-Reserve Workflow -The proof-of-reserve workflow executes a proof-of-reserve-like scenario with the following steps: -- Call an external HTTP API and fetch the value of a test asset -- Reach consensus on that value -- Write that value to the consumer contract on-chain - -**Usage:** -```bash -go run . workflow run-por-example [flags] -``` - -**Key flags:** -- `-u, --example-workflow-timeout`: Time to wait for workflow execution (default: `5m`) -- `-d, --workflow-don-id`: Workflow DON ID from the registry (default: `1`) -- `-w, --workflow-registry-address`: Workflow registry address (default: `0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0`) -- `-r, --rpc-url`: RPC URL (default: `http://localhost:8545`) - -**Examples:** -```bash -# Deploy the PoR v2 cron example -go run . workflow run-por-example - -# Deploy the PoR v2 cron example with custom timeout -go run . 
workflow run-por-example -u 10m -``` - -#### Cron-based Workflow -- **Trigger**: Every 30 seconds on a schedule -- **Behavior**: Keeps executing until paused or deleted -- **Requirements**: External `cron` capability binary (must be manually compiled or downloaded and configured in TOML) -- **Source**: [`examples/workflows/v2/proof-of-reserve/cron-based/main.go`](./examples/workflows/v2/proof-of-reserve/cron-based/main.go) - -### Manual Workflow Deployment - -For other workflows (v2/cron, v2/node-mode, v2/http), you can deploy them manually using the `workflow deploy` command: - -```bash -# Deploy v2 cron workflow -go run . workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron-workflow - -# Deploy v2 http workflow -go run . workflow deploy -w ./examples/workflows/v2/http/main.go --compile -n cron-workflow - -# Deploy v2 node-mode workflow -go run . workflow deploy -w ./examples/workflows/v2/node-mode/main.go --compile -n cron-workflow -``` - - -## Adding a New Standard Capability - -This section explains how to add new standard capabilities to the CRE test framework. There are two types of capabilities: - -- **DON-level capabilities**: Run once per DON (e.g., cron triggers, HTTP actions) -- **Chain-level capabilities**: Run per chain (e.g., read contract, write EVM) - -### Capability Types - -**DON-level capabilities** are used when: -- The capability operates at the DON level (not chain-specific) -- Examples: cron triggers, HTTP actions, random number generators -- Configuration is shared across all nodes in the DON - -**Chain-level capabilities** are used when: -- The capability needs to interact with specific blockchain networks -- Examples: read contract, write EVM -- Configuration varies per chain (different RPC URLs, chain IDs, etc.) - -### Step 1: Define the Capability Flag - -Add a unique flag in `system-tests/lib/cre/types.go`: - -```go -const ( - // ... existing capabilities ... 
-
-    RandomNumberGeneratorCapability CapabilityFlag = "random-number-generator" // DON-level example
-    GasEstimatorCapability CapabilityFlag = "gas-estimator" // Chain-level example
-)
-```
-
-### Step 2: Create the Capability Implementation
-
-Create a new directory in `system-tests/lib/cre/capabilities/` with your capability name.
-
-#### DON-level Capability Example (Random Number Generator)
-
-```go
-// system-tests/lib/cre/capabilities/randomnumbergenerator/random_number_generator.go
-package randomnumbergenerator
-
-import (
-    capabilitiespb "github.com/smartcontractkit/chainlink-common/pkg/capabilities/pb"
-    kcr "github.com/smartcontractkit/chainlink-evm/gethwrappers/keystone/generated/capabilities_registry_1_1_0"
-    keystone_changeset "github.com/smartcontractkit/chainlink/deployment/keystone/changeset"
-
-    "github.com/smartcontractkit/chainlink/system-tests/lib/cre"
-    "github.com/smartcontractkit/chainlink/system-tests/lib/cre/capabilities"
-    factory "github.com/smartcontractkit/chainlink/system-tests/lib/cre/don/jobs/standardcapability"
-    donlevel "github.com/smartcontractkit/chainlink/system-tests/lib/cre/don/jobs/standardcapability/donlevel"
-    "github.com/smartcontractkit/chainlink/system-tests/lib/cre/flags"
-)
-
-const flag = cre.RandomNumberGeneratorCapability
-const configTemplate = `'{"seedValue": {{.SeedValue}}, "maxRange": {{.MaxRange}}}'`
-
-func New() (*capabilities.Capability, error) {
-    perDonJobSpecFactory := factory.NewCapabilityJobSpecFactory(
-        donlevel.CapabilityEnabler,
-        donlevel.EnabledChainsProvider,
-        donlevel.ConfigResolver,
-        donlevel.JobNamer,
-    )
-
-    return capabilities.New(
-        flag,
-        capabilities.WithJobSpecFn(perDonJobSpecFactory.BuildJobSpec(
-            flag,
-            configTemplate,
-            factory.NoOpExtractor, // all values are defined in TOML, no need to set any runtime ones
-            factory.BinaryPathBuilder,
-        )),
-        capabilities.WithCapabilityRegistryV1ConfigFn(registerWithV1),
-    )
-}
-
-func registerWithV1(donFlags []string, _ *cre.NodeSet) 
([]keystone_changeset.DONCapabilityWithConfig, error) {
-    var capabilities []keystone_changeset.DONCapabilityWithConfig
-
-    if flags.HasFlag(donFlags, flag) {
-        capabilities = append(capabilities, keystone_changeset.DONCapabilityWithConfig{
-            Capability: kcr.CapabilitiesRegistryCapability{
-                LabelledName: "random-number-generator",
-                Version: "1.0.0",
-                CapabilityType: 1, // ACTION
-            },
-            Config: &capabilitiespb.CapabilityConfig{},
-        })
-    }
-
-    return capabilities, nil
-}
-```
-
-#### Chain-level Capability Example (Gas Estimator)
-
-```go
-// system-tests/lib/cre/capabilities/gasestimator/gas_estimator.go
-package gasestimator
-
-import (
-    "errors"
-    "fmt"
-
-    capabilitiespb "github.com/smartcontractkit/chainlink-common/pkg/capabilities/pb"
-    kcr "github.com/smartcontractkit/chainlink-evm/gethwrappers/keystone/generated/capabilities_registry_1_1_0"
-    keystone_changeset "github.com/smartcontractkit/chainlink/deployment/keystone/changeset"
-
-    "github.com/smartcontractkit/chainlink/system-tests/lib/cre"
-    "github.com/smartcontractkit/chainlink/system-tests/lib/cre/capabilities"
-    factory "github.com/smartcontractkit/chainlink/system-tests/lib/cre/don/jobs/standardcapability"
-    chainlevel "github.com/smartcontractkit/chainlink/system-tests/lib/cre/don/jobs/standardcapability/chainlevel"
-    // NOTE: assumed helper package exposing FindLabelValue; adjust to its real location
-    node "github.com/smartcontractkit/chainlink/system-tests/lib/cre/don/node"
-)
-
-const flag = cre.GasEstimatorCapability
-const configTemplate = `'{"chainId": {{.ChainID}}, "defaultGasLimit": {{.DefaultGasLimit}}, "maxGasPrice": {{.MaxGasPrice}}, "rpcUrl": "{{.RPCURL}}"}'`
-
-func New() (*capabilities.Capability, error) {
-    perChainJobSpecFactory := factory.NewCapabilityJobSpecFactory(
-        chainlevel.CapabilityEnabler,
-        chainlevel.EnabledChainsProvider,
-        chainlevel.ConfigResolver,
-        chainlevel.JobNamer,
-    )
-
-    return capabilities.New(
-        flag,
-        capabilities.WithJobSpecFn(perChainJobSpecFactory.BuildJobSpec(
-            flag,
-            configTemplate,
-            func(chainID uint64, nodeMetadata *cre.NodeMetadata) (map[string]any, error) {
-                // assuming that RPC URL was somehow added to the 
metadata
-                // and that it is known only at runtime (otherwise it could have been defined in the TOML config, like DefaultGasLimit and MaxGasPrice)
-                rpcUrl, rpcUrlErr := node.FindLabelValue(nodeMetadata, node.RPCURL)
-                if rpcUrlErr != nil {
-                    return nil, fmt.Errorf("failed to find RPC URL in node labels: %w", rpcUrlErr)
-                }
-
-                return map[string]any{
-                    "ChainID": chainID,
-                    "RPCURL": rpcUrl,
-                }, nil
-            },
-            factory.BinaryPathBuilder,
-        )),
-        capabilities.WithCapabilityRegistryV1ConfigFn(registerWithV1),
-    )
-}
-
-func registerWithV1(_ []string, nodeSetInput *cre.NodeSet) ([]keystone_changeset.DONCapabilityWithConfig, error) {
-    capabilities := make([]keystone_changeset.DONCapabilityWithConfig, 0)
-
-    if nodeSetInput == nil {
-        return nil, errors.New("node set input is nil")
-    }
-
-    if nodeSetInput.ChainCapabilities == nil {
-        return nil, nil
-    }
-
-    if _, ok := nodeSetInput.ChainCapabilities[flag]; !ok {
-        return nil, nil
-    }
-
-    for _, chainID := range nodeSetInput.ChainCapabilities[flag].EnabledChains {
-        capabilities = append(capabilities, keystone_changeset.DONCapabilityWithConfig{
-            Capability: kcr.CapabilitiesRegistryCapability{
-                LabelledName: fmt.Sprintf("gas-estimator-evm-%d", chainID),
-                Version: "1.0.0",
-                CapabilityType: 1, // ACTION
-            },
-            Config: &capabilitiespb.CapabilityConfig{},
-        })
-    }
-
-    return capabilities, nil
-}
-```
-
-### Step 3: Optional Gateway Handler Configuration
-
-If your capability needs to handle HTTP requests through the gateway, add a gateway handler configuration:
-
-```go
-func New() (*capabilities.Capability, error) {
-    // ... existing code ... 
-
-    return capabilities.New(
-        flag,
-        capabilities.WithJobSpecFn(perDonJobSpecFactory.BuildJobSpec(
-            flag,
-            configTemplate,
-            factory.NoOpExtractor,
-            factory.BinaryPathBuilder,
-        )),
-        capabilities.WithGatewayJobHandlerConfigFn(handlerConfig), // Add this
-        capabilities.WithCapabilityRegistryV1ConfigFn(registerWithV1),
-    )
-}
-
-func handlerConfig(donMetadata *cre.DonMetadata) (cre.HandlerTypeToConfig, error) {
-    if !flags.HasFlag(donMetadata.Flags, flag) {
-        return nil, nil
-    }
-
-    return map[string]string{"your-handler-type": `
-ServiceName = "your-service-name"
-[gatewayConfig.Dons.Handlers.Config]
-maxRequestDurationMs = 5000
-[gatewayConfig.Dons.Handlers.Config.RateLimiter]
-globalBurst = 10
-globalRPS = 50`}, nil
-}
-```
-
-### Step 4: Optional Node Configuration Modifications
-
-If your capability requires node configuration changes (like write-evm), add a node config function:
-
-```go
-func New() (*capabilities.Capability, error) {
-    // ... existing code ...
-
-    return capabilities.New(
-        flag,
-        capabilities.WithJobSpecFn(perDonJobSpecFactory.BuildJobSpec(
-            flag,
-            configTemplate,
-            factory.NoOpExtractor,
-            factory.BinaryPathBuilder,
-        )),
-        capabilities.WithNodeConfigFn(nodeConfigFn), // Add this
-        capabilities.WithCapabilityRegistryV1ConfigFn(registerWithV1),
-    )
-}
-
-func nodeConfigFn(input cre.GenerateConfigsInput) (cre.NodeIndexToConfigOverride, error) {
-    configOverrides := make(cre.NodeIndexToConfigOverride)
-
-    // Add your custom node configuration here
-    // Example: Add custom TOML section
-
-    return configOverrides, nil
-}
-```
-
-**Note**: Some capabilities like `write-evm` need to modify node configuration outside of this encapsulated implementation. They add TOML strings to various sections of the config (like EVM chains, workflow settings, etc.). This is done in `system-tests/lib/cre/don/config/config.go` where the capability checks for its presence and modifies the configuration accordingly. 
- -### Step 5: Add Default Configuration - -Add default configuration to `core/scripts/cre/environment/configs/capability_defaults.toml`. For plugin-based capabilities, the binary is installed during image build; set `binary_name` (the full path is `/usr/local/bin/` + binary_name): - -```toml -[capability_configs.random-number-generator] - binary_name = "random-number-generator" - -[capability_configs.random-number-generator.values] - # Add default configuration values here - SeedValue = 42 - MaxRange = 1000 - -[capability_configs.gas-estimator] - binary_name = "gas-estimator" - -[capability_configs.gas-estimator.values] - # Add default configuration values here - DefaultGasLimit = 21000 - MaxGasPrice = 100000000000 # 100 gwei -``` - -`values` are meant for storing values that will be used for capability configuration (be that the job specs or capability registry contract). When you override them (either in the defaults file or per-DON), you must provide the entire structure because individual fields are not merged with previously defined values. - -### Step 6: Register the Capability - -Add your capability to the default set in `system-tests/lib/cre/capabilities/sets/sets.go`: - -```go -func NewDefaultSet(registryChainID uint64, extraAllowedPorts []int, extraAllowedIPs []string, extraAllowedIPsCIDR []string) ([]cre.InstallableCapability, error) { - capabilities := []cre.InstallableCapability{} - - // ... existing capabilities ... 
-
-    randomNumberGenerator, rngErr := randomnumbergeneratorcapability.New()
-    if rngErr != nil {
-        return nil, errors.Wrap(rngErr, "failed to create random number generator capability")
-    }
-    capabilities = append(capabilities, randomNumberGenerator)
-
-    gasEstimator, geErr := gasestimatorcapability.New()
-    if geErr != nil {
-        return nil, errors.Wrap(geErr, "failed to create gas estimator capability")
-    }
-    capabilities = append(capabilities, gasEstimator)
-
-    return capabilities, nil
-}
-```
-
-Don't forget to add the import at the top of the file:
-
-```go
-import (
-    // ... existing imports ...
-    randomnumbergeneratorcapability "github.com/smartcontractkit/chainlink/system-tests/lib/cre/capabilities/randomnumbergenerator"
-    gasestimatorcapability "github.com/smartcontractkit/chainlink/system-tests/lib/cre/capabilities/gasestimator"
-)
-```
-
-### Step 7: Add to Environment Configurations
-
-To actually use your capability in tests, add it to the `capabilities` array inside the relevant topology under `core/scripts/cre/environment/configs/`. All capabilities live in this single list:
-
-```toml
-# In workflow-gateway-don.toml, etc.
-capabilities = [
-    "ocr3", # DON-wide
-    "custom-compute",
-    "web-api-target",
-    "write-evm-1337", # chain-scoped (workflow DON on Anvil 1337)
-    "write-evm-2337",
-    "read-contract-2337" # another per-chain capability
-]
-```
-
-- Include the unsuffixed flag for capabilities that run globally (e.g., `ocr3`, `custom-compute`).
-- Append `-<chain_id>` to scope a capability to a specific chain. Each chain you want to target must appear as its own suffixed entry (e.g., `write-evm-1337`).
-
-Common configuration files:
-- `workflow-gateway-don.toml` - Workflow DON with gateway and bootstrap in a separate node
-- `workflow-gateway-capabilities-don.toml` - Multiple DONs with different capabilities
-
-### Configuration Templates
-
-- Use simple templates or empty strings for shared/global configuration. 
-- When a capability depends on chain context, reference template variables like `{{.ChainID}}` or `{{.NetworkFamily}}` inside its config and expose it via suffixed flags (`write-evm-1337`).
-
-### Important Notes
-
-- **Fake capabilities**: The examples above (random number generator, gas estimator) are fictional and don't exist in the actual codebase
-- **Plugin sources**: Capabilities are installed during image build from `plugins/plugins.private.yaml` and `plugins/plugins.public.yaml`; update `gitRef` to pick a version.
-- **Bootstrap nodes**: Don't run capabilities on bootstrap nodes - they only run on worker nodes
-
----
-
-## Multiple DONs
-
-The CRE system supports multiple DONs (Decentralized Oracle Networks) with complete configuration via TOML files. No Go code is required for DON topology setup.
-
-### Supported Capabilities
-
-All capabilities are declared in the `capabilities` array. Capabilities that operate once per DON are listed without a suffix. Capabilities that target a particular chain append `-<chain_id>` (e.g., `write-evm-1337`). 
-
-| Capability | Typical scope | Notes |
-| --- | --- | --- |
-| `ocr3` | DON-wide | Built-in consensus capability |
-| `consensus` | DON-wide | Consensus v2 |
-| `cron` | DON-wide | Requires binary; schedule triggers |
-| `custom-compute` | DON-wide | Built-in custom compute runtime |
-| `web-api-trigger` | DON-wide | HTTP trigger pipeline |
-| `web-api-target` | DON-wide | HTTP target pipeline |
-| `http-trigger` | DON-wide | Requires binary |
-| `http-action` | DON-wide | Requires binary |
-| `vault` | DON-wide | Vault integration |
-| `mock` | DON-wide | Mock/testing capability |
-| `evm-<chain_id>` | Chain-scoped | Add a suffixed entry per chain (e.g., `evm-1337`) |
-| `write-evm-<chain_id>` | Chain-scoped | Built-in write capability; suffix per chain |
-| `read-contract-<chain_id>` | Chain-scoped | Requires binary |
-| `log-event-trigger-<chain_id>` | Chain-scoped | Requires binary |
-
-### DON Types
-
-- `workflow` - Handles workflow execution (only one allowed)
-- `capabilities` - Provides specific capabilities (multiple allowed)
-- `gateway` - Handles external requests (only one allowed)
-
-### TOML Configuration Structure
-
-Each DON is defined as a `nodesets` entry in the TOML configuration:
-
-```toml
-[[nodesets]]
-    nodes = 5 # Number of nodes in this DON
-    name = "workflow" # Unique name for this nodeset
-    don_types = ["workflow"] # DON type(s) for this nodeset
-    override_mode = "all" # "all" for uniform config, "each" for per-node
-
-    # Capabilities configuration (mix DON-wide and chain-scoped entries)
-    capabilities = [
-        "ocr3",
-        "custom-compute",
-        "cron",
-        "write-evm-1337",
-        "write-evm-2337",
-        "read-contract-1337"
-    ]
-```
-
-### Example: Adding a New Topology
-
-Here's how to add a new capabilities DON to your configuration:
-
-```toml
-# Add this to your existing TOML configuration file
-
-[[nodesets]]
-    nodes = 3
-    name = "data-feeds"
-    don_types = ["capabilities"]
-    override_mode = "all"
-    http_port_range_start = 10400
-
-    # Enable DON-level capabilities (using hypothetical 
don-level capability)
-    capabilities = ["llo-streams"]
-
-    # Database configuration
-    [nodesets.db]
-        image = "postgres:12.0"
-        port = 13300
-
-    # Node specifications
-    [[nodesets.node_specs]]
-        roles = ["bootstrap"] # explicitly indicate the roles for each node, i.e. plugin, bootstrap or gateway
-        [nodesets.node_specs.node]
-            docker_ctx = "../../../.."
-            docker_file = "core/chainlink.Dockerfile"
-            user_config_overrides = """
-            [Log]
-            Level = 'debug'
-            JSONConsole = true
-
-            [Telemetry]
-            Enabled = true
-            Endpoint = 'host.docker.internal:4317'
-            ChipIngressEndpoint = 'chip-ingress:50051'
-            InsecureConnection = true
-            TraceSampleRatio = 1
-            HeartbeatInterval = '30s'
-
-            [CRE.WorkflowFetcher]
-            URL = "file:///home/chainlink/workflows"
-            """
-```
-
-### Configuration Modes
-
-- **`override_mode = "all"`**: All nodes in the DON use identical configuration
-- **`override_mode = "each"`**: Each node can have different configuration (useful for secrets, custom ports, etc.)
-
-### Port Management
-
-Each nodeset should use a different `http_port_range_start` to avoid port conflicts:
-
-- Workflow DON: `10100`
-- Capabilities DON: `10200`
-- Gateway DON: `10300`
-- Additional DONs: `10400`, `10500`, etc.
-
-### Important Notes
-
-- Only **one** `workflow` DON and **one** `gateway` DON are allowed
-- Multiple `capabilities` DONs are supported
-- Chain-scoped capabilities must be listed once per chain using the `<capability>-<chain_id>` suffix (e.g., `write-evm-1337`)
-- Bootstrap nodes can be part of any DON or form a separate DON
-- Gateway nodes are only needed for gateway DONs
-
----
-
-## Enabling Already Implemented Capabilities
-
-This section explains how to enable already implemented capabilities in existing CRE topologies. All capability configurations are stored in `core/scripts/cre/environment/configs/`, with default settings defined in `capability_defaults.toml`. 
-
-### Available Configuration Files
-
-The `configs/` directory contains several topology configurations:
-- `workflow-gateway-don.toml` - Workflow DON with gateway and bootstrap in a separate node (default)
-- `workflow-gateway-capabilities-don.toml` - Full topology with multiple DONs
-- `capability_defaults.toml` - Default capability configurations and binary paths
-
-### Capability Types and Configuration
-
-Declare every capability inside the `capabilities` array. The same list now covers both DON-wide and per-chain functionality:
-
-```toml
-[[nodesets]]
-    capabilities = [
-        "ocr3",
-        "custom-compute",
-        "web-api-target",
-        "web-api-trigger",
-        "vault",
-        "cron",
-        "write-evm-1337",
-        "write-evm-2337",
-        "read-contract-2337"
-    ]
-```
-
-- **DON-wide flags** (no suffix): `ocr3`, `consensus`, `custom-compute`, `web-api-target`, `web-api-trigger`, `vault`, `cron`, `http-trigger`, `http-action`, `mock`.
-- **Chain-scoped bases** (append `-<chain_id>`): `evm`, `write-evm`, `read-contract`, `log-event-trigger`, plus any new capability that needs per-chain overrides.
-
-### Capability availability
-
-Capabilities are installed into the Chainlink image during the Docker build. Plugin sources are defined in `plugins/plugins.private.yaml` and `plugins/plugins.public.yaml`; each entry has a `moduleURI` and `gitRef` (commit, branch, or tag).
-
-**Built-in capabilities** (no separate plugin):
-- `ocr3`, `consensus`, `custom-compute`, `web-api-target`, `web-api-trigger`, `vault`, `write-evm`
-
-**Plugin-based capabilities** (installed from YAML during image build):
-- `cron`, `http-trigger`, `http-action`, `read-contract`, `log-event-trigger`, `evm`, `mock`
-
-To use a new plugin version: push your changes, update `gitRef` in the YAML, then rebuild the image (or start the environment).
-
-### Enabling Capabilities in Your Topology
-
-#### 1. 
Declare Capabilities
-
-Edit your topology (e.g., `workflow-gateway-don.toml`) and list every capability the DON should run:
-
-```toml
-[[nodesets]]
-    name = "workflow"
-    don_types = ["workflow"]
-    capabilities = [
-        "ocr3",
-        "custom-compute",
-        "web-api-target",
-        "cron",
-        "http-action",
-        "write-evm-1337",
-        "write-evm-2337",
-        "read-contract-1337"
-    ]
-    # ... other configuration
-```
-
-- Add the unsuffixed flag once for DON-wide functionality.
-- Add one suffixed entry per chain for every chain-aware capability you need.
-
-#### 2. Ensure plugins are in your image
-
-Plugins are installed during the Docker image build. Either:
-
-**Option A: Build from source** (default) – the image is built from your local branch; plugins are pulled from the YAML `gitRef` during build. Ensure `GITHUB_TOKEN` is set for private repos.
-
-**Option B: Use a pre-built image** – set `image` in the TOML for each node to an image that already has the plugins (e.g. nightly):
-```toml
-[nodesets.node_specs.node]
-    image = "<account_id>.dkr.ecr.<region>.amazonaws.com/chainlink:nightly-<date>-plugins"
-```
-
-**Option C: Hot-swap for iteration** – use `env swap capability -n <capability_name> -b /path/to/binary` to deploy a locally built binary to running containers without restarting. 
- -### Configuration Examples - -#### Example 1: Basic Workflow with Cron - -```toml -[[nodesets]] - nodes = 5 - name = "workflow" - don_types = ["workflow"] - capabilities = [ - "ocr3", - "custom-compute", - "web-api-target", - "cron", - "write-evm-1337" - ] -``` - -#### Example 2: Full Capability Setup - -```toml -[[nodesets]] - nodes = 5 - name = "workflow" - don_types = ["workflow", "gateway"] - capabilities = [ - "ocr3", - "custom-compute", - "web-api-target", - "web-api-trigger", - "vault", - "cron", - "http-action", - "http-trigger", - "write-evm-1337", - "write-evm-2337", - "read-contract-1337", - "read-contract-2337", - "log-event-trigger-1337", - "evm-1337", - "evm-2337" - ] -``` - -#### Example 3: Multi-DON with Specialized Capabilities - -```toml -# Workflow DON -[[nodesets]] - name = "workflow" - don_types = ["workflow"] - capabilities = [ - "ocr3", - "custom-compute", - "write-evm-1337", - "evm-1337" - ] - -# Capabilities DON for data feeds -[[nodesets]] - name = "data-feeds" - don_types = ["capabilities"] - exposes_remote_capabilities = true # IMPORTANT: needs to be set to true if capabilities are to be accessible by the workflow DON - capabilities = [ - "http-action", - "cron", - "read-contract-1337", - "read-contract-2337", - "log-event-trigger-1337", - "evm-1337" - ] -``` - -### Custom Capability Configuration - -You can override default capability configurations by modifying the `capability_defaults.toml` file or adding custom configurations in your topology TOML: - -```toml -# Override default configuration for custom-compute -[capability_configs.custom-compute.values] - NumWorkers = 5 - GlobalRPS = 50.0 - GlobalBurst = 100 - -# Override configuration for cron -[capability_configs.cron.values] - CustomScheduleFormat = "extended" -``` - -To override capability configs for a specific DON, declare a `[nodesets.capability_configs]` table inside that nodeset. 
Keys must match the capability flag exactly, including any chain suffix, because the override replaces the full config for that capability. Partial overrides are not supported; you must provide every value (and the binary path if it differs). Examples:
-
-```toml
-[nodesets.capability_configs]
-    # Global override for a DON-level capability
-    [nodesets.capability_configs.web-api-target.values]
-        GlobalRPS = 2000.0
-        GlobalBurst = 1500
-        PerSenderRPS = 1000.0
-        PerSenderBurst = 1500
-
-    # Chain-specific override keyed by the suffixed flag
-    [nodesets.capability_configs.write-evm-1337.values]
-        GasLimitDefault = 700_000
-        AcceptanceTimeout = "60s"
-
-    # Override binary name (optional; path is /usr/local/bin/)
-    [nodesets.capability_configs.read-contract-1337]
-        binary_name = "readcontract"
-```
-
-### Important Notes
-
-- **Plugin availability**: Ensure required capabilities are in your Chainlink image (built from source with plugins YAML, or use a pre-built image)
-- **Chain IDs**: Chain-scoped capability flags (e.g., `write-evm-<chain_id>`) must use chain IDs that exist in your blockchain configuration
-- **Port conflicts**: Each nodeset should use different `http_port_range_start` values
-- **DON limitations**: Only one `workflow` DON and one `gateway` DON are allowed per environment
-- **Bootstrap nodes**: Capabilities typically don't run on bootstrap nodes
-
-### Troubleshooting Capability Issues
-
-**Capability binary not found:**
-- Solution: Ensure the capability is installed in your Chainlink image. Either build from source (plugins are pulled from YAML during build) or use a pre-built image with plugins. For quick iteration, use `env swap capability` to deploy a locally built binary. 
- -**Capability not supported:** -``` -Error: unsupported capability: unknown-capability -``` -- Solution: Check the list of available capabilities and ensure correct spelling - -**Chain not configured:** -``` -Error: chain 1337 not found for capability write-evm -``` -- Solution: Ensure the chain ID exists in your `[[blockchains]]` configuration - ---- - ---- - -## Hot swapping - -### Chainlink nodes' Docker image - -Swap the Docker images of all Chainlink nodes in the environment without completely restarting the environment. If the environment is configured to build Docker images from source, images will be rebuilt if changes are detected. - -**Usage:** -```bash -go run . env swap nodes [flags] -``` - -**Key flags:** -- `-f, --force`: Force removal of Docker containers (default: `true`). Set to `false` for graceful shutdown -- `-w, --wait-time`: Time to wait for container removal after failed restart (default: `2m`) - -**Example:** -```bash -# Force restart all nodes (faster) -go run . env swap n - -# Graceful restart with longer wait time -go run . env swap n --force=false --wait-time=5m -``` - -### Capability binary - -Swap individual capability binaries in running Chainlink nodes without restarting the entire environment. This targets only the DONs that have the specified capability enabled. - -**Usage:** -```bash -go run . env swap capability [flags] -``` - -**Key flags:** -- `-n, --name`: Name of the capability to swap (required) -- `-b, --binary`: Path to the new binary on the host machine (required) -- `-f, --force`: Force container restart (default: `true`) - -**Example:** -```bash -go run . env swap c --name cron --binary ./new-cron-binary -``` - -**Supported capabilities for hot swapping:** -Run `go run . env swap capability --name unsupported-cap` to see the current list of swappable capabilities. 
- -### Automated Hot Swapping with fswatch - -For development workflows, you can automate capability hot swapping using `fswatch` to monitor file changes: - -**Install fswatch:** -```bash -brew install fswatch -``` - -**Example usage:** -```bash -# Monitor cron capability source directory and auto-rebuild + hot swap -# (from within your capability source directory) -fswatch -o . | xargs -n1 sh -c ' - export PATH="$HOME/.asdf/shims:$PATH" && - GOOS="linux" GOARCH="amd64" CGO_ENABLED=0 go build -o /tmp/cron && - cd /path/to/chainlink/core/scripts/cre/environment && - go run . env swap c --name cron --binary /tmp/cron -' -``` - -**⚠️ Important:** Pay attention to the binary output location in the build command. In the example above, the binary is compiled to `/tmp/cron` (outside the source directory). If you compile to the current directory (`.`), it would trigger an infinite loop as `fswatch` would detect the newly created binary as a change. - -**⚠️ Important:** If you are using ASDF it is crucial to add `export PATH="$HOME/.asdf/shims:$PATH"` to avoid using system's Go binary and resulting download of all `go.mod` dependencies (both for the capability and the whole local CRE). - ---- - -## Telemetry Configuration - -Chainlink nodes are configured by default to send telemetry data to two services: - -### OTEL Stack (OpenTelemetry) -Nodes send telemetry to `host.docker.internal:4317` for metrics and tracing. Start the OTEL observability stack with: - -```bash -ctf obs u -``` - -This provides access to Grafana, Prometheus, and Loki for monitoring and log aggregation. - -### Chip Ingress (Beholder) -Nodes send workflow events to `chip-ingress:50051` for workflow monitoring. Start Chip Ingress either: - -**Option 1: Start with environment** -```bash -go run . env start --with-beholder -``` - -**Option 2: Start separately** -```bash -go run . 
env beholder start -``` - -### OTel Tracing Configuration - -To enable OpenTelemetry (OTel) tracing for workflow engines and see traces in Tempo/Grafana, **multiple configuration toggles must be set**: - -| Toggle | Location | Required Value | Purpose | -|--------|----------|----------------|---------| -| `Tracing.Enabled` | Node TOML | `true` | **Required.** Gates trace export; without this, no traces are exported even when `Telemetry.Enabled` is true | -| `Tracing.CollectorTarget` | Node TOML | e.g., `host.docker.internal:4318` | OTLP endpoint for trace export. Must differ from `Telemetry.Endpoint` when both [Tracing] and [Telemetry] are enabled | -| `Tracing.SamplingRatio` | Node TOML | `> 0` (e.g., `1.0`) | Sampling rate for the span exporter (0 = no traces, 1 = 100%) | -| `Telemetry.Enabled` | Node TOML | `true` | Enables the OTel/beholder exporter | -| `Telemetry.TraceSampleRatio` | Node TOML | `> 0` (e.g., `1.0`) | Sampling rate used by the beholder client (0 = no traces, 1 = 100%) | -| `CRE.DebugMode` | Node TOML | `true` | Enables detailed tracing in workflow engines and syncer | -| `OTEL_SERVICE_NAME` | Environment variable | e.g., `chainlink-node` | Sets the service name for traces in Tempo | -| `Pyroscope.LinkTracesToProfiles` | Node TOML | `true` | Enables traces-to-profiles linking in Grafana (requires Pyroscope) | - -**Sampling ratios:** When both [Tracing] and [Telemetry] are enabled, `Tracing.SamplingRatio` controls the span exporter and `Telemetry.TraceSampleRatio` is used by the beholder client. Set both to the same value (e.g., `1.0`) for consistent trace capture. 
- -**Example TOML configuration:** - -```toml -[Tracing] -Enabled = true -CollectorTarget = 'host.docker.internal:4318' # Must differ from Telemetry.Endpoint when both enabled -SamplingRatio = 1.0 # 100% sampling - adjust for production -Mode = 'unencrypted' # Use for local dev; use 'tls' in production - -[Telemetry] -Enabled = true -Endpoint = 'host.docker.internal:4317' # Must differ from Tracing.CollectorTarget -InsecureConnection = true -TraceSampleRatio = 1.0 # 100% sampling - adjust for production - -[CRE] -DebugMode = true # WARNING: Not suitable for production due to overhead - -[Pyroscope] -ServerAddress = 'http://host.docker.internal:4040' -LinkTracesToProfiles = true # Enables traces-to-profiles in Grafana -``` - -**Note:** When both [Tracing] and [Telemetry] are enabled, config validation requires `Tracing.CollectorTarget` and `Telemetry.Endpoint` to be different. For a single OTel collector, use different ports (e.g., gRPC on 4317, HTTP on 4318) if your collector exposes both. - -**Example environment variable (in nodeset config):** - -```toml -[[nodesets]] - env_vars = { OTEL_SERVICE_NAME = "chainlink-node" } -``` - -**Common issues:** - -| Symptom | Likely Cause | -|---------|--------------| -| No traces at all | `Tracing.Enabled = false`, `Telemetry.Enabled = false`, or sampling ratios set to `0` | -| No workflow engine traces | `CRE.DebugMode = false` | -| Traces show `unknown_service:chainlink` | Missing `OTEL_SERVICE_NAME` env var | -| Traces not exported | Telemetry endpoint unreachable (check `go run . obs up -f `) | -| No traces-to-profiles link in Grafana | `Pyroscope.LinkTracesToProfiles = false` or Pyroscope not running | - -**Important notes:** - -- `CRE.DebugMode` adds performance overhead and should only be enabled during development/debugging, not in production environments. -- **Tracing is only implemented for V2 components:** - - **V2 Syncer**: Only used when workflow registry contracts are v2.x. 
If you're using v1.x contracts, the V1 syncer is used and has no tracing.
-  - **V2 Engine (NoDAG)**: Only used by V2/NoDAG workflows. V1/DAG workflows use the V1 engine which has no tracing.
-- To use tracing, ensure your environment is configured with **v2 workflow registry contracts** and you're deploying **V2 workflows**.
-
-### Expected Error Messages
-
-If these telemetry services are not running, you will see frequent "expected" error messages in the logs due to connection failures:
-
-```
-failed to connect to telemetry endpoint: connection refused
-failed to send to chip-ingress: connection refused
-```
-
-These errors are harmless but can clutter logs. To avoid them, either start the telemetry services or disable telemetry in your node configuration.
-
----
-
-## Using a Specific Docker Image for Chainlink Node
-
-Default behavior builds an image from your current branch:
-
-```toml
-[[nodesets.node_specs]]
-    [nodesets.node_specs.node]
-        docker_ctx = "../../.."
-        docker_file = "core/chainlink.Dockerfile"
-```
-
-To use a prebuilt image:
-
-```toml
-[[nodesets.node_specs]]
-    [nodesets.node_specs.node]
-        image = "image-you-want-to-use"
-```
-
-Apply this to every node in your config.
-
----
-
-## Using Existing EVM & P2P Keys
-
-When using public chains with limited funding, use pre-funded, encrypted keys:
-
-TOML format:
-
-```toml
-[[nodesets.node_specs]]
-    [nodesets.node_specs.node]
-        test_secrets_overrides = """
-        [EVM]
-        [[EVM.Keys]]
-        JSON = '{...}'
-        Password = ''
-        ID = 1337
-        [P2PKey]
-        JSON = '{...}'
-        Password = ''
-        """
-```
-
-> Requires `override_mode = "each"` and the same keys across all chains
-
-These limitations come from the current CRE SDK logic and not Chainlink itself.
-
----
-
-## TRON Integration
-
-TRON blockchain support is integrated into the CRE environment by configuring TRON chains as EVM chains. 
The system wraps the TRON Transaction Manager (TXM) with the EVM chainWriter/write target, while all read operations remain identical to standard EVM chains. - -### How It Works - -- **Configuration**: TRON chains are configured as EVM chains in TOML files with `family = "tron"` -- **Read Operations**: All contract reads, balance queries, and data fetching work exactly like EVM chains -- **Write Operations**: Transaction broadcasting is handled by wrapping TRON's TXM with EVM chainWriter -- **Contract Deployments**: Use tron-specific deployment logic but the contracts are identical to EVM. -- **Docker Image**: Uses `tronbox/tre:dev` for TRON network simulation -- **Funding**: Nodes are automatically funded with 100 TRX (100,000,000 SUN) - -### Example Configuration - -```toml -[[blockchains]] - chain_id = "3360022319" # local network chain ID that corresponds to EVM selector - type = "tron" - port = "9090" # can use any open port - image = "tronbox/tre:dev" # this specific image works both locally and in CI - -... -[[nodesets]] -... - capabilities = [ - "ocr3", - "custom-compute", - "write-evm-1337", - "write-evm-3360022319", - "read-contract-1337", - "read-contract-3360022319" - ] - # Tron is configured as an EVM chain so we can reuse EVM capabilities by suffixing - # the TRON chain ID (3360022319). -``` - ---- - -## Connecting to external/public blockchains - -In order to connect to existing blockchains you need to take advantage of the "cached output" capability of the blockchain component. 
Assuming we want to connect to Sepolia via a public RPC we'd add the following bit to the TOML config: -```toml -[[blockchains]] - type = "anvil" # type here doesn't really matter as long as it's a valid one - chain_id = "11155111" - - [blockchains.out] - use_cache = true - type = "anvil" # type here doesn't really matter as long as it's a valid one - family = "evm" - chain_id = "11155111" - - [[blockchains.out.nodes]] - ws_url = "wss://sepolia.gateway.tenderly.co/" - http_url = "https://sepolia.gateway.tenderly.co/" - internal_ws_url = "wss://sepolia.gateway.tenderly.co/" - internal_http_url = "https://sepolia.gateway.tenderly.co/" -``` - -Now, unless you enable a chain capability for that chain, it won't be added to node TOML config. If you do want to use it anyway, without any capability accessing it you need to use `supported_evm_chains` key in a following way: -```toml -[[nodesets]] - nodes = 5 - name = "workflow" - don_types = ["workflow", "gateway"] - override_mode = "each" - http_port_range_start = 10100 - - supported_evm_chains = [1337, 2337, 11155111] # add all chains; ones used by capabilities and the extra chain -``` -> Important! When using `supported_evm_chains` you need to add there ALL chains that the node will connect to. It will take precedence over chain capabilities. - -EVM keys will only be generated for a chain that either is referenced by any chain capabilities or which is present in the `supported_evm_chains` array. - -Check [workflow-gateway-don.toml](configs/workflow-gateway-don.toml) for an example. - -## Kubernetes Deployment - -This section explains how to deploy and connect to Chainlink nodes running in an existing Kubernetes cluster. Unlike Docker (which starts containers locally), Kubernetes mode assumes nodes are **already running** in the cluster and generates the appropriate service URLs to connect to them. 
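The service URLs follow the naming conventions listed later in this section. As a rough sketch of that convention only (the helper names are illustrative, not the actual implementation):

```go
package main

import "fmt"

// Illustrative helpers mirroring the URL naming convention described in this
// README (namespace "my-namespace", domain "example.com"); they are not the
// real Local CRE implementation.
func internalNodeURL(nodeName string) string {
	// In-cluster service URL, e.g. http://workflow-bt-0:6688
	return fmt.Sprintf("http://%s:6688", nodeName)
}

func externalNodeURL(namespace, nodeName, domain string) string {
	// Ingress URL, e.g. https://my-namespace-workflow-bt-0.example.com
	return fmt.Sprintf("https://%s-%s.%s", namespace, nodeName, domain)
}

func main() {
	fmt.Println(internalNodeURL("workflow-bt-0"))
	fmt.Println(externalNodeURL("my-namespace", "workflow-bt-0", "example.com"))
}
```

This is why the namespace and external domain fields must match the actual cluster deployment: they are concatenated directly into the URLs the CLI uses to reach the nodes.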
- -The support for Kubernetes is designed to work with an internal platform for running and managing containerized applications. For more details on where to find domain names and how to configure a DON running on it, please check the internal docs. - -### Prerequisites for Kubernetes - -1. **Kubernetes cluster with Chainlink nodes deployed** - Nodes must already be running in the cluster -2. **Helm charts with overlay support** - The cluster deployment must support config and secrets overrides via Kubernetes ConfigMaps and Secrets -3. **External ingress configured** - For external access, ingress must be configured with a domain -4. **kubectl configured** - Your local kubectl must be configured to access the cluster -5. **Namespace** - All nodes should be deployed in a single namespace - -### Kubernetes Configuration - -Configure the Kubernetes infrastructure in your TOML file: - -```toml -[infra] - type = "kubernetes" - - [infra.kubernetes] - namespace = "my-namespace" # Kubernetes namespace where nodes are deployed - external_domain = "example.com" # Domain for external access (ingress) - external_port = 80 # External port for services - label_selector = "app=chainlink" # Label selector to identify Chainlink pods - node_api_user = "admin@chain.link" # API credentials for node authentication - node_api_password = "your-secure-password" # API password (use secrets in production) -``` - -**Configuration Fields:** - -| Field | Required | Description | -|-------|----------|-------------| -| `namespace` | Yes | Kubernetes namespace where DON nodes are deployed | -| `external_domain` | Yes | Domain for external ingress (e.g., `example.com`) | -| `external_port` | No | External port for services (default: 80) | -| `label_selector` | No | Label selector to identify Chainlink pods | -| `node_api_user` | No | Node API username (defaults to `admin@chain.link`) | -| `node_api_password` | No | Node API password (defaults to `password` for testing) | - -### Config and Secrets 
Overrides - -Kubernetes deployments support dynamic configuration overrides via ConfigMaps and Secrets. This allows you to modify node configurations without redeploying: - -**How it works:** - -1. The CLI generates node-specific TOML configuration based on your topology -2. Configuration is pushed to the cluster as Kubernetes ConfigMaps (for config) and Secrets (for sensitive data) -3. Nodes are configured via Helm chart overlays to mount these ConfigMaps/Secrets -4. When configuration changes, the CLI updates the ConfigMaps/Secrets and nodes pick up the changes - -**Helm Chart Requirements:** - -Your Helm chart must support the overlay pattern. - -**What gets created:** - -- **ConfigMap** (`-config-override`): Contains TOML configuration overrides -- **Secret** (`-secrets-override`): Contains sensitive configuration (database URLs, private keys, etc.) - -### Kubernetes Example Configuration - -Here's a complete example for connecting to a Kubernetes-deployed DON: - -```toml -[[blockchains]] - chain_id = "1337" - type = "anvil" - - # Use cached output to connect to existing blockchain - [blockchains.out] - use_cache = true - type = "anvil" - family = "evm" - chain_id = "1337" - - [[blockchains.out.nodes]] - ws_url = "wss://anvil-service-rpc.example.com" - http_url = "https://anvil-service-rpc.example.com" - internal_ws_url = "ws://anvil-service:8545" - internal_http_url = "http://anvil-service:8545" - -[infra] - type = "kubernetes" - - [infra.kubernetes] - namespace = "my-namespace" - external_domain = "example.com" - external_port = 80 - label_selector = "app=chainlink" - node_api_user = "admin@chain.link" - node_api_password = "secure-password-here" - -[jd] - csa_encryption_key = "d1093c0060d50a3c89c189b2e485da5a3ce57f3dcb38ab7e2c0d5f0bb2314a44" - -[[nodesets]] - nodes = 5 - name = "workflow" - don_types = ["workflow", "gateway"] - override_mode = "all" - - capabilities = [ - "ocr3", - "custom-compute", - "web-api-target", - "web-api-trigger", - "vault", - 
"write-evm-1337" - ] -``` - -**Service URL Generation:** - -For Kubernetes deployments, service URLs are generated based on naming conventions (using namespace `my-namespace` as example): - -| Node Type | Internal URL | External URL | -|-----------|--------------|--------------| -| Bootstrap | `http://workflow-bt-0:6688` | `https://my-namespace-workflow-bt-0.example.com` | -| Plugin | `http://workflow-0:6688` | `https://my-namespace-workflow-0.example.com` | -| Gateway | `http://my-namespace-gateway.example.com:80` | Same | - ---- - -## Troubleshooting - -### Chainlink Node Migrations Fail - -Remove old Postgres volumes: - -```bash -ctf d rm -``` - -Or remove volumes manually if `ctf` CLI is unavailable. - -### Docker Image Not Found - -If Docker build succeeds but the image isn’t found at runtime: - -- Restart your machine -- Retry the test - -### Docker fails to download public images -Make sure you are logged in to Docker. Run: `docker login` - -``` -Error: failed to setup test environment: failed to create blockchains: failed to deploy blockchain: create container: Error response from daemon: Head "https://ghcr.io/v2/foundry-rs/foundry/manifests/stable": denied: denied -``` -your ghcr token is stale. do -``` -docker logout ghcr.io -docker pull ghcr.io/foundry-rs/foundry -``` -and try starting the environment - -### GH CLI is not installed -Either download from [cli.github.com](https://cli.github.com) or install with Homebrew with: -```bash -brew install gh -``` - -Once installed, configure it by running: -```bash -gh auth login -``` +## Contact -For GH CLI to be used by the environment to download the CRE CLI you must have access to [smartcontract/dev-platform](https://github.com/smartcontractkit/dev-platform) repository. 
+Slack: `#topic-local-dev-environments` diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 00000000000..bb5a8fc1138 --- /dev/null +++ b/docs/index.md @@ -0,0 +1,25 @@ +--- +title: CRE Docs +sidebar_position: 0 +--- + +# CRE Docs + +This local docs bundle exposes documentation from the `chainlink` repository through the CRE Docusaurus site. + +## Main Sections + +- [Local CRE](local-cre/index.md) +- [Configuration Reference](CONFIG.md) +- [Secrets Reference](SECRETS.md) +- [Contributing](CONTRIBUTING.md) +- [Community](COMMUNITY.md) + +## Local CRE + +For Local CRE environment setup, workflow operations, topologies, and CRE smoke tests, start here: + +- [Overview](local-cre/index.md) +- [Getting Started](local-cre/getting-started/index.md) +- [Environment](local-cre/environment/index.md) +- [System Tests](local-cre/system-tests/index.md) diff --git a/docs/local-cre/_category_.yaml b/docs/local-cre/_category_.yaml new file mode 100644 index 00000000000..b675f33d608 --- /dev/null +++ b/docs/local-cre/_category_.yaml @@ -0,0 +1,7 @@ +position: 50 +label: "Local CRE" +collapsible: false +collapsed: false +link: + type: doc + id: CRE/local-cre/local-cre-index diff --git a/docs/local-cre/environment/_category_.yaml b/docs/local-cre/environment/_category_.yaml new file mode 100644 index 00000000000..9f55e355bcc --- /dev/null +++ b/docs/local-cre/environment/_category_.yaml @@ -0,0 +1,7 @@ +position: 2 +label: "Environment" +collapsible: false +collapsed: false +link: + type: doc + id: CRE/local-cre/environment/local-cre-environment-index diff --git a/docs/local-cre/environment/advanced.md b/docs/local-cre/environment/advanced.md new file mode 100644 index 00000000000..3a981c93195 --- /dev/null +++ b/docs/local-cre/environment/advanced.md @@ -0,0 +1,445 @@ +--- +id: local-cre-environment-advanced +title: Advanced Topics +sidebar_label: Advanced +sidebar_position: 3 +--- + +# Advanced Topics + +This page groups the advanced Local CRE topics in one place. 
+ +## Billing + +Local CRE can bootstrap the billing service with `--with-billing`, and the smoke package also includes billing coverage. Use this when validating workflow billing integrations locally rather than only running the base DON stack. + +## Adding a New Standard Capability + +New Local CRE extension work should be expressed as a feature, not through the older installable-capability path. + +At a high level, adding a feature means: + +1. choose the capability flag the feature is responsible for +2. add a feature implementation under `system-tests/lib/cre/features/...` +3. implement the feature lifecycle hooks such as `PreEnvStartup` and `PostEnvStartup` +4. register any required on-chain capability configuration from the feature hook +5. add any node configuration changes needed for that feature +6. include the feature in the default feature set +7. expose the corresponding flag through one or more topology configs + +In the current implementation: + +- `InstallableCapability` is deprecated for new work +- `Feature` is the primary interface for new capabilities and related setup +- features are applied before environment startup and can register capability config on-chain + +This is contributor-facing material. If you are only choosing or running topologies, use [Topologies and Capabilities](topologies.md) instead. + +## Hot Swapping + +Local CRE supports swapping: + +- the Chainlink node image +- a capability binary + +Use hot swapping when you want to refresh part of the running environment without doing a full `env stop` and `env start`. + +### Swapping Chainlink nodes + +To recreate the Chainlink node containers with updated images or rebuilt local code: + +```bash +go run . 
env swap nodes +``` + +This command: + +- reloads the saved Local CRE state +- removes the current node containers for each DON +- recreates those node sets +- rebuilds the Docker image if the topology is configured to build from source and the local source has changed + +Useful flags: + +- `--force` to force container removal +- `--wait-time` to control how long Local CRE waits before retrying after removal/startup issues + +Use `env swap nodes` when: + +- you changed Chainlink core code +- you changed the Docker build inputs for node images +- you want the running DONs to pick up a newly built node image + +### Swapping a capability binary + +The capability swap flow is still the fastest local loop when you need to rebuild a plugin without recreating the whole environment. + +Typical command: + +```bash +go run . env swap capability -n cron -b /path/to/cron +``` + +`env swap capability` supports: + +- `--name` +- `--binary` +- `--force` + +This command: + +- finds DONs that use the named capability flag +- cancels matching job proposals +- copies the new binary into the relevant containers +- re-approves the proposals so the jobs restart with the updated binary + +Use `env swap capability` when: + +- you rebuilt a swappable capability locally +- you only need to refresh one capability instead of the entire node image + +Only capabilities supported by the swappable capability provider can be hot-reloaded this way. + +## DX Tracing and Telemetry + +Use this part of the stack when you need to answer questions like: + +- did the workflow engine initialize successfully +- did a workflow emit the expected user logs or base messages +- are Beholder messages reaching Kafka and Loki +- are node and connector logs reaching Grafana/Loki +- do I need more than plain container logs to debug a failing run + +### What each option gives you + +- `--with-beholder` + Starts the Beholder stack used by the CRE tests for workflow-related messages and heartbeat validation. 
+- `--with-observability` + Starts the observability stack. +- `--with-dashboards` + Starts observability and provisions the Grafana dashboards used for inspection. +- `go run . obs up` + Manages the observability stack directly from the Local CRE CLI. + +### When to use which + +Use plain container logs first when: + +- startup fails early +- the problem is isolated to one container +- you do not need workflow-event correlation + +Use Beholder when: + +- the test expects specific workflow log messages +- you need to validate workflow heartbeats or emitted events +- you are debugging cron, HTTP action, DON time, or other scenarios that already rely on Beholder listeners in the test helpers + +Use observability and dashboards when: + +- you need aggregated logs in Grafana/Loki +- you want to inspect log streaming end to end +- you need a broader view across nodes, connectors, and supporting services + +### DX tracing + +In Local CRE, DX refers to usage tracking for the Local CRE tooling itself, not workflow/node telemetry. + +The CLI records events such as: + +- environment setup result +- environment startup result and startup time +- workflow deployment +- Beholder startup +- billing startup +- capability swaps +- node swaps + +The tracker configuration in the Local CRE code uses: + +- GitHub variable name: `API_TOKEN_LOCAL_CRE` +- product name: `local_cre` + +This is separate from observability, Loki, Grafana, Beholder logs, or any workflow-level tracing. + +If you are debugging Local CRE usage instrumentation, look at the tracking hooks in the environment commands. If you are debugging workflow execution, logs, or message flow, use the observability and Beholder paths described above instead. + +### Practical debugging flow + +1. start the environment normally +2. add `--with-beholder` if the scenario depends on workflow messages +3. add `--with-observability` or `--with-dashboards` when you need Grafana/Loki +4. rerun the workflow or smoke test +5. 
inspect Beholder output, Grafana, and Loki before changing topology or code + +## Using Specific Images and Existing Keys + +Use this path when you need reproducible node images or stable node identity instead of whatever Local CRE generates from the working tree. + +### Using a specific Chainlink image + +By default, most Docker-based topologies build the Chainlink node image from the local checkout: + +```toml +[nodesets.node_specs.node] + docker_ctx = "../../../.." + docker_file = "core/chainlink.Dockerfile" +``` + +To pin a prebuilt image instead, replace the build settings with an explicit image: + +```toml +[nodesets.node_specs.node] + image = "chainlink-tmp:latest" +``` + +Apply that change to every node spec in the nodeset that should use the pinned image. + +Use explicit images when: + +- you want repeatable runs across machines +- you want to test a nightly or CI-built image +- you are comparing behavior between two image versions +- you are running in Kubernetes, where runtime builds are not the normal path + +The example override file at `core/scripts/cre/environment/configs/examples/workflow-don-overrides.toml` shows this pattern in practice. + +### Reusing existing EVM and P2P keys + +Local CRE normally generates fresh node keys. The lower-level CRE types also support importing an existing node-secrets payload instead of generating new keys. + +That path is useful when you need: + +- stable peer IDs across restarts +- stable EVM addresses for repeatable tests +- a previously generated node identity restored into a new environment + +The implementation detail to be aware of is: + +- `NodeKeyInput.ImportedSecrets` bypasses key generation and imports existing encrypted node secrets + +This is a contributor or integrator workflow rather than a normal Local CRE quickstart path. If you need stable keys, treat that as a deliberate topology/configuration change and validate the resulting peer IDs and on-chain addresses after startup. 
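For the Docker-based flow, the environment README shows one concrete shape for supplying pre-existing encrypted keys via `test_secrets_overrides`; a minimal sketch of that pattern (the encrypted JSON payloads are intentionally elided, and `override_mode = "each"` is required):

```toml
[[nodesets.node_specs]]
  [nodesets.node_specs.node]
    test_secrets_overrides = """
    [EVM]
    [[EVM.Keys]]
    JSON = '{...}'   # encrypted EVM key JSON, as exported from a node
    Password = ''
    ID = 1337
    [P2PKey]
    JSON = '{...}'   # encrypted P2P key JSON
    Password = ''
    """
```

The same keys must be supplied for every chain; that limitation comes from the current CRE SDK logic rather than Chainlink itself.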
+ +## External and Public Blockchains + +The default Local CRE flow uses local Anvil chains. Switch to external or public blockchains only when the workflow or capability truly depends on a non-local RPC. + +### What actually changes + +When you stop using only local Anvil chains, you need all of the following to line up: + +- the `[[blockchains]]` entries must use the correct chain type and chain ID +- the topology must support that chain on the relevant DONs +- the node image and enabled features must support the chain family you are targeting +- the chain RPC endpoints must be reachable and healthy before Local CRE startup + +### Example: use a preconfigured external EVM RPC + +For a non-local chain, the practical pattern is to provide a blockchain entry whose output URLs are already known, instead of asking Local CRE to spin up a local chain container. + +Example: + +```toml +[[blockchains]] + chain_id = "11155111" + type = "anvil" + + [blockchains.out] + type = "anvil" + use_cache = true + + [[blockchains.out.nodes]] + ws_url = "wss://0xrpc.io/sep" + http_url = "https://0xrpc.io/sep" + internal_ws_url = "wss://0xrpc.io/sep" + internal_http_url = "https://0xrpc.io/sep" +``` + +Then make sure the DON that needs that chain supports it. For example: + +```toml +[[nodesets]] + name = "bootstrap-gateway" + supported_evm_chains = [1337, 11155111] +``` + +And if a workflow DON needs chain-specific capabilities on that chain, its capability list must include the matching flag, for example: + +```toml +capabilities = ["read-contract-11155111", "write-evm-11155111"] +``` + +### When to use this pattern + +Use it when: + +- a workflow must read from or write to a public testnet or mainnet +- a capability depends on chain-specific behavior that Anvil does not reproduce +- you are validating against an already-running external chain endpoint + +### Practical checks before startup + +Before you run `env start`, verify: + +1. 
the RPC URLs respond and match the expected chain ID +2. the relevant DONs support that chain +3. the capability flags include the chain-qualified capability names you need +4. the node image contains any required plugins for that chain family + +Compared with the default Anvil flow, this setup is much more sensitive to RPC health, endpoint latency, and mismatches between topology config and the actual remote chain. + +## Kubernetes Deployment + +Kubernetes is the alternative infra mode to the default Docker setup. + +Switch the topology to Kubernetes by setting: + +```toml +[infra] + type = "kubernetes" +``` + +Use Kubernetes when: + +- you need cluster-like execution rather than a single local Docker network +- your test environment depends on prebuilt images instead of local source builds +- you are validating behavior closer to remote or CI-style execution + +Unlike Docker mode, Kubernetes mode assumes the nodes are already running in the cluster and Local CRE connects to them by generating the expected service URLs. + +### Prerequisites + +Before using Kubernetes mode, make sure you have: + +1. a Kubernetes cluster with the Chainlink nodes already deployed +2. Helm charts or deployment overlays that support config and secret overrides +3. external ingress configured if you need external access +4. local `kubectl` access to the cluster +5. all DON nodes deployed in a single namespace + +### Kubernetes configuration + +The Kubernetes fields live under `infra.kubernetes`: + +```toml +[infra] + type = "kubernetes" + + [infra.kubernetes] + namespace = "my-namespace" + external_domain = "example.com" + external_port = 80 + label_selector = "app=chainlink" + node_api_user = "admin@chain.link" + node_api_password = "secure-password-here" +``` + +What these fields are for: + +- `namespace` + The namespace where the DON nodes are already running. +- `external_domain` + The domain used to derive externally reachable service URLs. 
+- `external_port` + The ingress port, usually `80`. +- `label_selector` + The selector used to discover the relevant Chainlink pods. +- `node_api_user` and `node_api_password` + The credentials Local CRE uses to talk to the nodes. + +### Main differences from Docker + +In Docker mode, many topologies use: + +```toml +docker_ctx = "../../../.." +docker_file = "core/chainlink.Dockerfile" +``` + +In Kubernetes mode, prefer explicit images instead: + +```toml +[nodesets.node_specs.node] + image = "chainlink:your-tag" + +[jd] + image = "job-distributor:your-tag" +``` + +Kubernetes is therefore the wrong choice for fast local code iteration and the right choice for image-based validation. + +### Config and secret overrides + +Kubernetes mode is designed to work with deployments that accept node-specific config overlays. + +The expected model is: + +1. Local CRE generates node-specific TOML config from the topology +2. that config is pushed into the cluster as ConfigMaps and Secrets +3. the running nodes mount those overlays through the deployment or Helm chart +4. updated configs are picked up without rebuilding the images + +The original Local CRE guidance called out the expected objects: + +- a ConfigMap such as `-config-override` +- a Secret such as `-secrets-override` + +If your chart or deployment setup does not support that overlay pattern, Kubernetes mode will not behave like the standard Local CRE flow. 
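The override object names above have their prefix elided in the docs; a small sketch of the naming pattern, assuming the prefix is the per-node release name (assumption, not confirmed by the source):

```go
package main

import "fmt"

// overrideObjectNames sketches the "<prefix>-config-override" /
// "<prefix>-secrets-override" naming pattern the docs describe. The prefix
// is assumed to be the per-node release name; illustrative only.
func overrideObjectNames(releaseName string) (configMapName, secretName string) {
	return releaseName + "-config-override", releaseName + "-secrets-override"
}

func main() {
	cm, sec := overrideObjectNames("workflow-0")
	fmt.Println(cm)
	fmt.Println(sec)
}
```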
+ +### Example configuration + +Representative Kubernetes-connected topology: + +```toml +[[blockchains]] + chain_id = "1337" + type = "anvil" + + [blockchains.out] + use_cache = true + type = "anvil" + family = "evm" + chain_id = "1337" + + [[blockchains.out.nodes]] + ws_url = "wss://anvil-service-rpc.example.com" + http_url = "https://anvil-service-rpc.example.com" + internal_ws_url = "ws://anvil-service:8545" + internal_http_url = "http://anvil-service:8545" + +[infra] + type = "kubernetes" + + [infra.kubernetes] + namespace = "my-namespace" + external_domain = "example.com" + external_port = 80 + label_selector = "app=chainlink" + node_api_user = "admin@chain.link" + node_api_password = "secure-password-here" + +[jd] + csa_encryption_key = "d1093c0060d50a3c89c189b2e485da5a3ce57f3dcb38ab7e2c0d5f0bb2314a44" + image = "job-distributor:your-tag" +``` + +### Service URL generation + +Local CRE derives service URLs in Kubernetes from naming conventions. Using `my-namespace` as an example: + +- bootstrap node internal URL: `http://workflow-bt-0:6688` +- bootstrap node external URL: `https://my-namespace-workflow-bt-0.example.com` +- plugin node internal URL: `http://workflow-0:6688` +- plugin node external URL: `https://my-namespace-workflow-0.example.com` + +This is why the namespace, external domain, and label selector matter so much in Kubernetes mode. + +### Practical guidance + +Before using Kubernetes: + +1. replace local build settings with explicit image tags +2. set the Kubernetes-specific infra fields required by your environment, such as namespace +3. confirm the referenced images already contain the plugins and binaries your topology needs +4. provide blockchain output URLs for chains that are already running remotely +5. 
reuse the same topology validation flow with `topology show` and generated topology docs diff --git a/docs/local-cre/environment/index.md b/docs/local-cre/environment/index.md new file mode 100644 index 00000000000..d6c4967f7bc --- /dev/null +++ b/docs/local-cre/environment/index.md @@ -0,0 +1,133 @@ +--- +id: local-cre-environment-index +title: Environment Operations +sidebar_label: Environment +sidebar_position: 0 +--- + +# Environment Operations + +The Local CRE CLI lives in `core/scripts/cre/environment`. The default invocation style is: + +```bash +cd core/scripts/cre/environment +go run . [subcommand] +``` + +You can also install the binary: + +```bash +make install +``` + +That produces `local_cre`, including the interactive shell (`local_cre sh`). + +## Environment Lifecycle + +Start: + +```bash +go run . env start [--auto-setup] +``` + +Current `env start` flags from source code include: + +- `--auto-setup` +- `--wait-on-error-timeout` +- `--cleanup-on-error` +- `--extra-allowed-gateway-ports` +- `--with-example` +- `--example-workflow-timeout` +- `--with-beholder` +- `--with-dashboards` +- `--with-observability` +- `--with-billing` +- `--with-contracts-version` +- `--setup-config` +- `--grpc-port` + +Stop: + +```bash +go run . env stop +go run . env stop --all +``` + +Restart: + +```bash +go run . env restart +go run . env restart --with-beholder +``` + +Purge environment state: + +```bash +go run . env state purge +``` + +Use purge when the saved state or cached environment artifacts look inconsistent. + +## Setup and Images + +By default Local CRE builds the Chainlink image from the local branch. To use a pre-built image instead, set `image` in each node definition in the topology TOML and omit `docker_ctx` and `docker_file`. + +The deprecated `-p/--with-plugins-docker-image` flag still exists, but contributors should use TOML-based image selection instead. 
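A minimal sketch of that TOML change, following the same node-spec shape used elsewhere in these docs (the tag is a placeholder):

```toml
[nodesets.node_specs.node]
  # Pin a pre-built image; omit docker_ctx and docker_file so no local build runs.
  image = "chainlink-tmp:latest"
```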
+ +## Beholder and Observability + +Use `--with-beholder` when you need the ChIP ingress stack and Red Panda. Use `--with-observability` or `--with-dashboards` when you need the Grafana-based observability stack. + +Important related flags: + +- `--grpc-port` for ChIP ingress +- `--with-dashboards` to provision the dashboards on top of observability + +When dashboards are enabled, the CLI waits for Grafana at `http://localhost:3000`. + +## Storage and State + +Local CRE persists state to the repo-local state file that the system tests later reuse. This is why the smoke-test helpers can detect an existing environment and avoid recreating it. + +If you need to reset the environment completely, use state purge and then re-run setup/start. + +## Debugging + +For day-to-day debugging, the main patterns remain: + +- inspect core node logs and container state +- rebuild or swap capabilities +- use observability and Beholder when tracing workflow activity + +Hot-swapping guidance and workflow-specific commands are covered in: + +- [Workflow Operations](workflows.md) +- [Advanced Topics](advanced.md) + +## Telemetry and Tracing + +The Local CRE stack supports: + +- OTel-based observability +- Chip ingress / Beholder integration +- DX tracing + +If you need the full tracing stack for debugging or demos, enable observability during startup and follow the environment-specific tracing configuration described in the advanced page. + +## Troubleshooting + +The most common failures in this area are: + +- Chainlink node migrations fail +- Docker image not found +- Docker cannot download required public images +- `gh` is missing or unauthenticated + +When startup problems happen: + +1. rerun `go run . env setup` +2. confirm image access and authentication +3. purge state if the saved state is stale +4. restart with observability or Beholder if you need more signals + +For topology-specific issues, continue with [Topologies and Capabilities](topologies.md). 
diff --git a/docs/local-cre/environment/topologies.md b/docs/local-cre/environment/topologies.md new file mode 100644 index 00000000000..4747c5d09c7 --- /dev/null +++ b/docs/local-cre/environment/topologies.md @@ -0,0 +1,111 @@ +--- +id: local-cre-environment-topologies +title: Topologies and Capabilities +sidebar_label: Topologies +sidebar_position: 2 +--- + +# Topologies and Capabilities + +Topologies control the DON layout, chains, infra type, and capability placement for Local CRE. + +## Default Behavior + +The CLI shell defaults `CTF_CONFIGS` to `configs/workflow-gateway-don.toml`. During environment startup, Local CRE also ensures the default capabilities configuration is prepended. + +For smoke tests, the default local flow uses `configs/workflow-gateway-capabilities-don.toml` unless you override it. + +## Discovering Topologies + +Use the topology commands: + +```bash +go run . topology list +go run . topology show --config configs/workflow-gateway-capabilities-don.toml +go run . topology generate +``` + +Implementation-backed defaults: + +- `topology show --config` defaults to `configs/workflow-gateway-don.toml` +- `topology show --output-dir` defaults to `state` +- `topology generate --output-dir` defaults to `docs/topologies` +- `topology generate --index-path` defaults to `docs/TOPOLOGIES.md` + +## Generated Topology Docs + +Generated topology docs already live in the environment package: + +- `core/scripts/cre/environment/docs/TOPOLOGIES.md` +- `core/scripts/cre/environment/docs/topologies/*.md` + +Use them for: + +- topology class +- DON count +- capability placement +- per-DON node counts and chain assignments + +In particular, the generated matrix for `workflow-gateway-capabilities-don.toml` is the most useful reference for the default local smoke-test topology. + +## Multiple DONs + +Use a multi-DON topology when the workflow stack needs responsibilities split across separate DONs instead of running everything in one place. 
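A multi-DON layout in topology TOML might look like the following sketch; the names, node counts, and capability flags are illustrative examples patterned on the configs under `configs/`, not a real topology file:

```toml
# Hypothetical two-DON layout: one DON runs workflows and gateway traffic,
# a second DON exposes chain capabilities remotely.
[[nodesets]]
  nodes = 5
  name = "workflow"
  don_types = ["workflow", "gateway"]
  capabilities = ["ocr3", "custom-compute"]

[[nodesets]]
  nodes = 4
  name = "capabilities"
  don_types = ["capabilities"]
  capabilities = ["write-evm-1337", "read-contract-1337"]
```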
+ +In practice, the common DON roles are: + +- `workflow` for workflow execution +- `capabilities` for capabilities exposed to other DONs +- `bootstrap` for DON bootstrapping +- `gateway` for connector and gateway traffic + +When you inspect or change a multi-DON topology, verify these questions: + +- which DON should execute the workflow +- which capabilities must stay local to that DON +- which capabilities must be remotely exposed from a separate DON +- which chains and ports those DONs need to reach + +Keep these mental rules: + +- workflow-only topologies are simpler for quick local iteration +- capability-enabled topologies are the standard path for realistic smoke coverage +- sharded and specialized topologies should be chosen only when the test or feature needs them + +## Enabling Existing Capabilities + +Enabling a capability is not one step. All of the following need to line up: + +1. the topology TOML must place that capability on the correct DON +2. the node image used by that DON must actually contain the plugin or binary +3. if the workflow needs outbound access through the gateway to services outside the default setup, the environment must allow the required gateway egress ports +4. the generated topology docs should confirm that the final placement matches what you intended + +As a rule of thumb: + +- use a simpler topology when the workflow only needs local capabilities such as cron or consensus +- use the capability-enabled topology when the workflow needs remotely exposed capabilities such as EVM, read-contract, vault, or web API targets + +After changing capability placement, re-run: + +```bash +go run . topology show --config .toml +go run . topology generate +``` + +Then check the generated capability matrix before starting the environment. + +## Adding or Modifying Topologies + +When introducing a new topology: + +1. add or update the TOML config under `configs/` +2. regenerate the topology docs with `go run . topology generate` +3. 
use `go run . topology show --config <path-to-topology>.toml` to sanity-check the result +4. update test guidance if the new topology is intended for smoke coverage + +## Related Pages + +- [Environment](index.md) +- [Advanced Topics](advanced.md) +- [Running Tests](../system-tests/running-tests.md) diff --git a/docs/local-cre/environment/workflows.md b/docs/local-cre/environment/workflows.md new file mode 100644 index 00000000000..ee0ae5d19f8 --- /dev/null +++ b/docs/local-cre/environment/workflows.md @@ -0,0 +1,115 @@ +--- +id: local-cre-environment-workflows +title: Workflow Operations +sidebar_label: Workflows +sidebar_position: 1 +--- + +# Workflow Operations + +Local CRE includes CLI helpers for compiling, deploying, deleting, and testing workflows. + +## Core Commands + +Deploy a workflow: + +```bash +go run . workflow deploy -w ./path/to/workflow/main.go --compile -n my_workflow_name +``` + +Delete a workflow from the registry: + +```bash +go run . workflow delete -n my_workflow_name +``` + +Delete all workflows from the registry: + +```bash +go run . workflow delete-all +``` + +Run the proof-of-reserve example verifier: + +```bash +go run . workflow run-por-example +``` + +## Deploy Flags + +The deploy command supports the following implementation-backed flags: + +- `--workflow-file-path` +- `--config-file-path` +- `--secrets-file-path` +- `--secrets-output-file-path` +- `--container-target-dir` +- `--container-name-pattern` +- `--rpc-url` +- `--workflow-owner-address` +- `--workflow-registry-address` +- `--capabilities-registry-address` +- `--don-id` +- `--name` +- `--delete-workflow-file` +- `--compile` +- `--with-contracts-version` + +`--workflow-file-path` and `--name` are required. 
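One practical note on `--name`: the shared workflow compiler rejects names shorter than 10 characters, so the value you pass here must already satisfy that rule. A minimal shell sketch of the length check (the variable name and messages are illustrative, not the compiler's actual code):

```shell
# Illustrative pre-flight check mirroring the documented 10-character
# minimum for workflow names; only the length rule comes from the docs.
name="cron_example"
if [ "${#name}" -ge 10 ]; then
  echo "name ok: ${name}"
else
  echo "name too short: ${name}"
fi
```

The same rule applies to the short `-n` form used in the quickstart examples.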
+ +## Workflow Compilation + +When you pass `--compile`, Local CRE uses the shared workflow compiler: + +- Go workflows run `go mod tidy` +- Go builds use `CGO_ENABLED=0`, `GOOS=wasip1`, and `GOARCH=wasm` +- TypeScript workflows compile through `bun cre-compile` +- the output artifact is compressed into a `.br.b64` file +- workflow names must be at least 10 characters long + +These same rules are used by the system-test helpers. + +## Workflow Configuration and Secrets + +Configuration files are optional and workflow-specific. When you use secrets: + +- `--secrets-file-path` points to the unencrypted input mapping +- `--secrets-output-file-path` controls the encrypted output path + +This matches the registration path used by the shared workflow package in `system-tests/lib/cre/workflow`. + +## Example Workflows + +Common example deployment patterns include: + +- PoR v2 cron example +- cron-based workflows +- HTTP workflows +- node-mode workflows + +Use the examples under `core/scripts/cre/environment/examples/workflows/` as the fastest way to validate an environment after startup. + +## Additional Workflow Sources + +Local CRE supports both contract-backed and file-backed workflow sources. + +Key ideas for this mode: + +- you can deploy via contract first and then reuse the compiled artifact +- you can generate metadata for file-source workflows +- you can mix contract and file-backed workflows in the same environment +- file-source workflows can be paused or removed without removing contract workflows + +Use this mode when you need to iterate quickly on workflow packaging or test workflows without re-registering each time through the contract path. + +## Manual Deployment + +For lower-level control: + +1. start the environment +2. compile a workflow or reuse an existing `.br.b64` +3. provide config and optional secrets +4. deploy through `workflow deploy` +5. 
inspect the registry and workflow containers if verification fails + +For test-specific deployment behavior, see [Workflows in Tests](../system-tests/workflows-in-tests.md). diff --git a/docs/local-cre/getting-started/_category_.yaml b/docs/local-cre/getting-started/_category_.yaml new file mode 100644 index 00000000000..302e7dddceb --- /dev/null +++ b/docs/local-cre/getting-started/_category_.yaml @@ -0,0 +1,7 @@ +position: 1 +label: "Getting Started" +collapsible: false +collapsed: false +link: + type: doc + id: CRE/local-cre/getting-started/local-cre-getting-started-index diff --git a/docs/local-cre/getting-started/index.md b/docs/local-cre/getting-started/index.md new file mode 100644 index 00000000000..09e070d0c01 --- /dev/null +++ b/docs/local-cre/getting-started/index.md @@ -0,0 +1,92 @@ +--- +id: local-cre-getting-started-index +title: Getting Started +sidebar_label: Getting Started +sidebar_position: 0 +--- + +# Getting Started + +This page gives you the shortest path from a clean checkout to a running Local CRE environment and a local smoke-test run. + +## Prerequisites + +For the standard Docker-based flow, Local CRE expects: + +- Docker installed and running +- access to required images through AWS SSO or direct repository access +- `gh` authenticated if you build images that need private plugin access +- `go` available locally + +The existing environment setup flow is still the canonical prerequisite bootstrap: + +```bash +cd core/scripts/cre/environment +go run . env setup +``` + +`env setup` is driven by `configs/setup.toml` and can also be run with: + +```bash +go run . env setup --config configs/setup.toml --no-prompt +``` + +If you need to rebuild or repull prerequisites, `env setup` also supports `--purge`. Billing assets can be included with `--with-billing`. + +## Quickstart + +Start the default Local CRE stack: + +```bash +cd core/scripts/cre/environment +go run . env start --auto-setup +``` + +Deploy a first workflow: + +```bash +go run . 
workflow deploy -w ./examples/workflows/v2/cron/main.go --compile -n cron_example +``` + +The environment command writes Local CRE state to the repo-local state file, which is what the test helpers later consume. + +## Common Startup Variants + +Start with the example workflow: + +```bash +go run . env start --with-example +``` + +Start with Beholder: + +```bash +go run . env start --with-beholder +``` + +Start against v1 contracts: + +```bash +go run . env start --with-contracts-version v1 +``` + +Set extra gateway ports when your workflow needs outbound access to local services: + +```bash +go run . env start --extra-allowed-gateway-ports 8080,8171 +``` + +## First Smoke Test Run + +Once the environment is up, run the CRE smoke package: + +```bash +go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_' +``` + +The smoke tests default to the capability-enabled topology when you do not override `CTF_CONFIGS`, and the test helpers can start Local CRE automatically if the state file does not exist yet. + +Continue with: + +- [Environment](../environment/index.md) for lifecycle and debugging +- [Running Tests](../system-tests/running-tests.md) for local, Kubernetes, and CI execution modes diff --git a/docs/local-cre/index.md b/docs/local-cre/index.md new file mode 100644 index 00000000000..f351f849d9c --- /dev/null +++ b/docs/local-cre/index.md @@ -0,0 +1,40 @@ +--- +id: local-cre-index +title: Local CRE +sidebar_label: Overview +sidebar_position: 0 +--- + +# Local CRE + +Local CRE is the developer environment for building, running, and testing CRE workflows locally. It covers the environment lifecycle in `core/scripts/cre/environment` and the smoke-test flows in `system-tests/tests/smoke/cre`. 
+ +Use this doc set when you need to: + +- bootstrap a local CRE stack on Docker or Kubernetes +- choose or inspect a topology +- deploy or debug workflows +- run or extend CRE smoke tests +- understand how the test helpers interact with a running Local CRE environment + +## Start Here + +- New to Local CRE: [Getting Started](getting-started/index.md) +- Running the environment day to day: [Environment](environment/index.md) +- Writing and debugging workflows: [Workflow Operations](environment/workflows.md) +- Choosing a topology: [Topologies and Capabilities](environment/topologies.md) +- Running or extending smoke tests: [System Tests](system-tests/index.md) +- Generated artifacts and references: [Reference](reference/index.md) + +## Scope + +This section is intentionally limited to Local CRE and the CRE system tests. It does not attempt to document the entire repository. + +## Structure + +This section is organized around the main Local CRE workflows: + +- environment setup and lifecycle +- workflow deployment and debugging +- topology and capability selection +- CRE smoke-test execution and maintenance diff --git a/docs/local-cre/reference/_category_.yaml b/docs/local-cre/reference/_category_.yaml new file mode 100644 index 00000000000..ad0960a5ffc --- /dev/null +++ b/docs/local-cre/reference/_category_.yaml @@ -0,0 +1,7 @@ +position: 4 +label: "Reference" +collapsible: false +collapsed: false +link: + type: doc + id: CRE/local-cre/reference/local-cre-reference-index diff --git a/docs/local-cre/reference/index.md b/docs/local-cre/reference/index.md new file mode 100644 index 00000000000..06eadfb49f2 --- /dev/null +++ b/docs/local-cre/reference/index.md @@ -0,0 +1,55 @@ +--- +id: local-cre-reference-index +title: Reference +sidebar_label: Reference +sidebar_position: 0 +--- + +# Reference + +This page collects the most useful implementation-backed references for Local CRE contributors. 
+ +## Key Source Anchors + +- `core/scripts/cre/environment/main.go` +- `core/scripts/cre/environment/environment/environment.go` +- `core/scripts/cre/environment/environment/workflow.go` +- `core/scripts/cre/environment/environment/topology.go` +- `system-tests/tests/smoke/cre/cre_suite_test.go` +- `system-tests/tests/test-helpers/before_suite.go` +- `system-tests/tests/test-helpers/t_helpers.go` +- `system-tests/lib/cre/workflow/compile.go` + +## Generated Artifacts + +Generated topology docs live under: + +- `core/scripts/cre/environment/docs/TOPOLOGIES.md` +- `core/scripts/cre/environment/docs/topologies/` + +They are generated by: + +```bash +cd core/scripts/cre/environment +go run . topology generate +``` + +## Main Environment Commands + +```bash +go run . env setup +go run . env start +go run . env stop +go run . env restart +go run . workflow deploy +go run . workflow delete +go run . topology list +go run . topology show +go run . topology generate +``` + +## Related Pages + +- [Getting Started](../getting-started/index.md) +- [Environment](../environment/index.md) +- [System Tests](../system-tests/index.md) diff --git a/docs/local-cre/system-tests/_category_.yaml b/docs/local-cre/system-tests/_category_.yaml new file mode 100644 index 00000000000..a25dfa5542d --- /dev/null +++ b/docs/local-cre/system-tests/_category_.yaml @@ -0,0 +1,7 @@ +position: 3 +label: "System Tests" +collapsible: false +collapsed: false +link: + type: doc + id: CRE/local-cre/system-tests/local-cre-system-tests-index diff --git a/docs/local-cre/system-tests/ci-and-suite-maintenance.md b/docs/local-cre/system-tests/ci-and-suite-maintenance.md new file mode 100644 index 00000000000..6423b60f24c --- /dev/null +++ b/docs/local-cre/system-tests/ci-and-suite-maintenance.md @@ -0,0 +1,87 @@ +--- +id: local-cre-system-tests-ci-and-suite-maintenance +title: CI and Suite Maintenance +sidebar_label: CI and Suite Maintenance +sidebar_position: 3 +--- + +# CI and Suite Maintenance + +The smoke 
suite is designed to be discoverable and repeatable across multiple topologies. + +## Auto-Discovery + +At a high level, the CI flow works like this: + +- CI discovers tests in `system-tests/tests/smoke/cre` +- it builds a matrix across supported topologies and test entrypoints +- each test runs with configuration appropriate to that topology + +In practice, this means you do not need to manually register every new CRE smoke test in a bespoke list. + +## Naming and Placement + +For discoverability and consistency: + +- put the test in `system-tests/tests/smoke/cre` +- use the `Test_CRE_` prefix +- follow the existing package patterns for setup and helper usage + +## Architecture Pattern + +The suite uses a separated pattern: + +- environment creation happens once per topology +- multiple tests reuse that environment +- deployed contracts and nodes are shared within the topology run + +This keeps the suite cheaper and faster than recreating Local CRE for every individual test. + +## Parallelism and Shared Resources + +Parallel execution is controlled by `CRE_TEST_PARALLEL_ENABLED`. + +That flag only permits parallelism. Each test still decides whether it is safe to call `t.Parallel()`. In `cre_suite_test.go`: + +- some scenarios parallelize immediately +- some scenarios parallelize only when `CRE_TEST_CHIP_SINK_FANOUT_ENABLED=1` +- some scenarios stay serial because they depend on non-shareable infrastructure + +The fanout flag matters for tests that share the ChIP test sink. In fanout mode, the helper starts one sink server and distributes events to per-test subscribers. That keeps parallel cases isolated without forcing a separate sink process per test. + +## Supported Topologies in CI + +By default, the CRE workflow runs tests against: + +- `workflow-gateway-capabilities` + +Some tests must replace that default topology set with explicit per-test overrides in `.github/workflows/cre-system-tests.yaml`. 
Current examples are: + +- `Test_CRE_V2_Aptos_Suite` -> `workflow-gateway-aptos` +- `Test_CRE_V2_Solana_Suite` -> `workflow` +- `Test_CRE_V2_Sharding` -> `workflow-gateway-sharded` + +If a new test only works with a non-default topology, adding the test code is not enough. You must also add an explicit override in the workflow matrix so CI runs the test with the matching `topology` and `configs` pair. + +Keep tests topology-agnostic where possible. Use a per-test topology override only when the workflow genuinely depends on a different chain family or topology layout. + +## Adding a New Test + +When adding a new smoke test: + +1. place it in the smoke package +2. follow the existing naming convention +3. prefer shared helpers +4. decide whether it belongs in an existing bucket or needs a new entrypoint +5. if it needs a non-default topology, add the explicit workflow-matrix override +6. verify it works with the expected topology matrix +7. keep external dependencies explicit + +## Troubleshooting Discovery + +If CI does not pick up a test: + +1. confirm the function name starts with `Test_` +2. confirm the file is in the smoke package +3. confirm the package compiles +4. confirm the test does not depend on local-only assumptions absent in CI diff --git a/docs/local-cre/system-tests/index.md b/docs/local-cre/system-tests/index.md new file mode 100644 index 00000000000..445d961b11b --- /dev/null +++ b/docs/local-cre/system-tests/index.md @@ -0,0 +1,43 @@ +--- +id: local-cre-system-tests-index +title: CRE System Tests +sidebar_label: System Tests +sidebar_position: 0 +--- + +# CRE System Tests + +The CRE smoke suite lives in `system-tests/tests/smoke/cre`. These tests are designed to run against a Local CRE environment and reuse the same workflow, topology, and registry concepts documented in the environment pages. 
+ +## Smoke vs Regression + +The package-level rule is: + +- smoke tests cover happy-path and sanity-check behavior +- edge cases and negative conditions belong in `system-tests/tests/regression/cre` + +## How the Tests Use Local CRE + +The helper flow is implementation-backed: + +- if `CTF_CONFIGS` is empty, the helpers set it to the requested config +- if the Local CRE state file does not exist, the helpers start Local CRE with `go run . env start` +- after environment creation, `CTF_CONFIGS` is switched to the local CRE state file so the tests use the deployed environment rather than the original topology TOML + +This is why you can either: + +- start Local CRE manually, then run tests +- or let the helpers bootstrap it for you + +## Main Topics + +- [Running Tests](running-tests.md) +- [Workflows in Tests](workflows-in-tests.md) +- [CI and Suite Maintenance](ci-and-suite-maintenance.md) + +## Recommended Local Flow + +1. bring up Local CRE +2. confirm the desired topology +3. run the relevant smoke tests +4. use Beholder or observability only when the scenario needs extra signals diff --git a/docs/local-cre/system-tests/running-tests.md b/docs/local-cre/system-tests/running-tests.md new file mode 100644 index 00000000000..6a747ee5046 --- /dev/null +++ b/docs/local-cre/system-tests/running-tests.md @@ -0,0 +1,151 @@ +--- +id: local-cre-system-tests-running-tests +title: Running CRE System Tests +sidebar_label: Running Tests +sidebar_position: 1 +--- + +# Running CRE System Tests + +This page covers local, Kubernetes, and CI-facing execution concerns for the CRE smoke suite. + +## Local Run Flow + +The usual local path is: + +```bash +cd core/scripts/cre/environment +go run . env setup +go run . env start --with-beholder +``` + +Then run the tests: + +```bash +go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_' +``` + +The comments in `cre_suite_test.go` also call out the pattern of starting Local CRE first and then running the smoke package. 
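If `CTF_CONFIGS` is unset when the tests start, the helpers fall back to the default topology config for the local flow. A minimal shell sketch of that fallback, assuming the documented default path (the parameter-expansion idiom is illustrative, not the helpers' actual Go code):

```shell
# Start from a clean slate so the demo is deterministic.
unset CTF_CONFIGS

# If CTF_CONFIGS is empty or unset, fall back to the default topology config,
# mirroring what the test helpers do before environment startup.
: "${CTF_CONFIGS:=core/scripts/cre/environment/configs/workflow-gateway-capabilities-don.toml}"
echo "$CTF_CONFIGS"
```

After startup the helpers switch `CTF_CONFIGS` to the generated state file, so this fallback only matters for the initial bootstrap.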
+ +## Environment Variables + +The main variables used by the smoke suite are: + +- `CTF_CONFIGS` points to the topology TOML before startup and to the generated Local CRE state file after startup. The helpers in `system-tests/tests/test-helpers/before_suite.go` switch to the state file automatically for local runs. +- `TOPOLOGY_NAME` is used in test names, bucket labels, and log output so results stay tied to the topology under test. +- `CTF_LOG_LEVEL=debug` enables more verbose framework logs during setup and test execution. +- `CTF_JD_IMAGE` pins the Job Distributor image when you do not want the default local image selection. +- `CTF_CHAINLINK_IMAGE` pins the Chainlink node image. `system-tests/lib/cre/environment/dons.go` checks this variable directly when selecting node images. + +## Parallel Execution + +Parallel test execution is opt-in. Set: + +```bash +CRE_TEST_PARALLEL_ENABLED=1 +``` + +The smoke suite does not blindly parallelize every case. The runner in `cre_suite_test.go` enables `t.Parallel()` only for scenarios that are safe to run together. + +Current behavior: + +- tests such as Proof of Reserve, Vault DON, and HTTP Trigger Action can run in parallel when `CRE_TEST_PARALLEL_ENABLED=1` +- tests such as HTTP Action CRUD, DON Time, Consensus, EVM Write, and EVM LogTrigger require both parallel mode and ChIP sink fanout mode +- Cron Beholder stays serial because it uses the real ChIP ingress stack instead of the shared fanout helper + +## ChIP Sink Fanout Mode + +Some workflows depend on the ChIP test sink. Running those cases in parallel requires the fanout server mode: + +```bash +CRE_TEST_PARALLEL_ENABLED=1 \ +CRE_TEST_CHIP_SINK_FANOUT_ENABLED=1 \ +go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_V2' +``` + +`CRE_TEST_CHIP_SINK_FANOUT_ENABLED` starts a singleton sink and fans events out to per-test subscribers. 
That allows multiple tests to share the sink without one test consuming another test's events or closing shared channels underneath another run. + +Use fanout mode when parallelizing: + +- HTTP Action CRUD +- DON Time +- Consensus +- EVM Write +- EVM LogTrigger + +Without fanout mode, those cases still run, but they stay serial even if `CRE_TEST_PARALLEL_ENABLED=1`. + +## Topology Defaults + +For the default local flow, use: + +`core/scripts/cre/environment/configs/workflow-gateway-capabilities-don.toml` + +Override `CTF_CONFIGS` only when you intentionally need a different topology such as sharded, gateway-only, or chain-specific coverage. + +## Timeouts and Debugging + +Practical timeout guidance: + +- allow about 20 minutes when the image is built from source +- expect much shorter runs when using prebuilt images + +For local debugging, a useful pattern is: + +```bash +CTF_LOG_LEVEL=debug \ +go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_V2_Suite_Bucket_A$' +``` + +That keeps the run narrow while preserving the topology and workflow setup used by the full suite. + +Example VS Code launch configuration: + +```json +{ + "name": "Launch CRE V2 Bucket A", + "type": "go", + "request": "launch", + "mode": "test", + "program": "${workspaceFolder}/system-tests/tests/smoke/cre", + "args": ["-test.run", "^Test_CRE_V2_Suite_Bucket_A$"] +} +``` + +## Bucketed Test Selection + +The larger CRE smoke suites are split into runtime-balanced buckets instead of one oversized test entrypoint. 
+ +The old V2 suite is split into: + +- `Test_CRE_V2_Suite_Bucket_A` +- `Test_CRE_V2_Suite_Bucket_B` +- `Test_CRE_V2_Suite_Bucket_C` + +Those buckets are defined in `system-tests/tests/smoke/cre/v2suite/config/bucketing.go`: + +- `suite-bucket-a`: ProofOfReserve, HTTPTriggerAction, DONTime, Consensus +- `suite-bucket-b`: VaultDON +- `suite-bucket-c`: CronBeholder, HTTPActionCRUD + +The EVM read suite uses a separate bucket registry in `system-tests/tests/smoke/cre/evm/evmread/config/bucketing.go`: + +- `Test_CRE_V2_EVM_Read_HeavyCalls` +- `Test_CRE_V2_EVM_Read_StateQueries` +- `Test_CRE_V2_EVM_Read_TxArtifacts` + +Use bucketed entrypoints when you want: + +- shorter local feedback loops +- more stable CI runtimes +- a controlled way to rebalance scenario runtimes as the suite grows + +## Minimal Troubleshooting Checklist + +If a test run fails before test logic starts: + +1. confirm `CTF_CONFIGS` +2. confirm the Local CRE state file is valid +3. confirm required images exist +4. rerun `go run . env setup` +5. rerun with debug logging diff --git a/docs/local-cre/system-tests/workflows-in-tests.md b/docs/local-cre/system-tests/workflows-in-tests.md new file mode 100644 index 00000000000..9e282540f97 --- /dev/null +++ b/docs/local-cre/system-tests/workflows-in-tests.md @@ -0,0 +1,99 @@ +--- +id: local-cre-system-tests-workflows-in-tests +title: Workflows in Tests +sidebar_label: Workflows in Tests +sidebar_position: 2 +--- + +# Workflows in Tests + +The smoke tests should usually use the shared helpers rather than re-implementing workflow compilation, artifact copying, and registration. + +## Recommended Helper + +Prefer: + +```go +t_helpers.CompileAndDeployWorkflow(...) 
+``` + +That helper: + +- copies artifacts to workflow DONs by default +- supports additional artifact-copy targets when needed +- creates workflow artifacts +- resolves the workflow registry from the deployed environment +- registers the workflow with the current registry version + +Use `WithArtifactCopyDONTypes(...)` when a test intentionally needs artifact copy to additional DON types. + +## Choosing the Right Test Environment Helper + +The more important authoring choice is usually not workflow compilation. It is which environment helper the test uses. + +Use: + +```go +t_helpers.SetupTestEnvironmentWithPerTestKeys(...) +``` + +for workflow-plane tests that may run in parallel or perform independent on-chain writes. This path: + +- creates a fresh funded key pair for the test +- swaps the EVM clients and deployer key to use that test-specific signer +- authorizes the signer on the v2 workflow registry when needed +- avoids nonce collisions and shared-key coupling between parallel tests + +Use: + +```go +t_helpers.SetupTestEnvironmentWithConfig(...) +``` + +for admin, control-plane, or ownership-sensitive tests that must use the shared root signer. 
This is the safer choice for flows such as: + +- V1 registry tests +- sharding and ownership-admin operations +- tests that intentionally act as the environment owner + +As a rule: + +- default to `SetupTestEnvironmentWithPerTestKeys(...)` for v2 workflow execution tests +- use `SetupTestEnvironmentWithConfig(...)` only when the test needs shared owner privileges or intentionally avoids per-test signer isolation + +## Compilation Rules + +The shared compiler in `system-tests/lib/cre/workflow/compile.go` applies these rules: + +- workflow names must be at least 10 characters long +- Go workflows run `go mod tidy` before build +- Go workflows compile with `CGO_ENABLED=0`, `GOOS=wasip1`, and `GOARCH=wasm` +- TypeScript workflows compile via `bun cre-compile` +- the final artifact is Brotli-compressed and base64-encoded to `.br.b64` + +## Config Files, Secrets, and YAML Workflows + +Workflow config files are optional and specific to the workflow under test. + +Secrets support is provided by the shared workflow package, which prepares encrypted secrets for registration against the current DON and capabilities registry. + +This area also includes: + +- workflow secrets handling +- YAML/DSL workflows +- direct JD job proposal flows + +Keep those patterns only for tests that truly need lower-level control. The standard smoke path should still prefer `CompileAndDeployWorkflow`. + +## Price Data Sources + +The PoR-related tests use a shared `PriceProvider` abstraction with two main implementations: + +- `TrueUSDPriceProvider` +- `FakePriceProvider` + +`TrueUSDPriceProvider` uses the live TrueUSD reserve endpoint and mainly validates that prices become non-zero. + +`FakePriceProvider` starts a shared fake HTTP server once, generates a bounded sequence of test prices for each feed, enforces auth headers, and tracks both expected and actual prices for stricter assertions. + +Use the fake provider for local and repeatable smoke coverage. 
Use the live provider only when a scenario intentionally validates the integration path against live data. diff --git a/system-tests/tests/smoke/cre/README.md b/system-tests/tests/smoke/cre/README.md index bb974ac7a83..cfaed5a4bfc 100644 --- a/system-tests/tests/smoke/cre/README.md +++ b/system-tests/tests/smoke/cre/README.md @@ -1,864 +1,25 @@ # Test Modification and Execution Guide -## Table of Contents +The long-form CRE smoke-test documentation now lives in [`docs/local-cre/system-tests/`](../../../../docs/local-cre/system-tests/index.md). -- [Test Modification and Execution Guide](#test-modification-and-execution-guide) -- [0. Smoke vs Regression suites](#0-smoke-vs-regression-suites) -- [1. Running the Test](#1-running-the-test) - - [Chainlink Node Image](#chainlink-node-image) - - [Environment Variables](#environment-variables) - - [`cron` Capability Binary](#cron-capability-binary) - - [Test Timeout](#test-timeout) - - [Visual Studio Code Debug Configuration](#visual-studio-code-debug-configuration) -- [2. Using the CLI](#2-using-the-cli) -- [3. Docker vs Kubernetes (k8s)](#3-docker-vs-kubernetes-k8s) -- [5. Setting Docker Images for Kubernetes Execution](#5-setting-docker-images-for-kubernetes-execution) - - [Notes](#notes) - - [Job Distributor (JD) Image in Kubernetes](#job-distributor-jd-image-in-kubernetes) -- [11. Using a New Workflow](#11-using-a-new-workflow) - - [Workflow Compilation Process](#workflow-compilation-process) - - [Workflow Configuration](#workflow-configuration) - - [File Distribution to Containers](#file-distribution-to-containers) - - [Workflow Registration](#workflow-registration) - - [Complete Workflow Setup Example](#complete-workflow-setup-example) - - [12. Workflow Secrets](#12-workflow-secrets) - - [13. YAML Workflows (Data Feeds DSL)](#13-yaml-workflows-data-feeds-dsl) -- [14. 
Adding a New Test to the CI](#14-adding-a-new-test-to-the-ci) - - [How Auto-Discovery Works](#how-auto-discovery-works) - - [Test Architecture Pattern](#test-architecture-pattern) - - [What You Need to Do](#what-you-need-to-do) - - [CI Configuration Details](#ci-configuration-details) - - [Environment Setup](#environment-setup) - - [Test Execution](#test-execution) - - [Important Notes](#important-notes) - - [Troubleshooting](#troubleshooting) -- [15. Price Data Source](#15-price-data-source) - - [PriceProvider Interface](#priceprovider-interface) - - [Live Price Source (TrueUSDPriceProvider)](#live-price-source-trueusdpriceprovider) - - [Mocked Price Source (FakePriceProvider)](#mocked-price-source-fakepriceprovider) - - [Mock Server Implementation](#mock-server-implementation) - - [Price Validation Logic](#price-validation-logic) - - [Configuration](#configuration) - - [Usage in Tests](#usage-in-tests) - - [Key Benefits](#key-benefits) +Start with: ---- +- [CRE System Tests](../../../../docs/local-cre/system-tests/index.md) +- [Running Tests](../../../../docs/local-cre/system-tests/running-tests.md) +- [Workflows in Tests](../../../../docs/local-cre/system-tests/workflows-in-tests.md) +- [CI and Suite Maintenance](../../../../docs/local-cre/system-tests/ci-and-suite-maintenance.md) +- [Local CRE Overview](../../../../docs/local-cre/index.md) -## Test Modification and Execution Guide - -This guide explains how to set up and run system tests for Chainlink workflows using the CRE (Composable Runtime Environment) framework. It includes support for Docker and Kubernetes, multiple capabilities, and integration with Chainlink nodes and job distributor services. - ---- - -For more information about the local CRE check its [README.md](../../../../core/scripts/cre/environment/README.md). - ---- - -## 0. Smoke vs Regression suites - -In practice, everything what is not a "happy path" functional system-tests (i.e. 
edge cases, negative conditions) should go to a `regression` package. - -## 1. Running the Test - -Before starting, you’ll need to configure your environment correctly. To do so execute the automated setup function: - -```bash -cd core/scripts/cre/environment && go run . env setup -``` - -### Chainlink Node Image - -The TOML config defines how Chainlink node images are used: - -- **Default behavior**: Builds the Docker image from your current branch. - - ```toml - [nodesets.node_specs.node] - docker_ctx = "../../../.." - docker_file = "core/chainlink.Dockerfile" - ``` - -- **Using a pre-built image**: Replace the config with: - - ```toml - [nodesets.node_specs.node] - image = "my-docker-image:my-tag" - ``` - - Apply this to every `nodesets.node_specs.node` section. - -**Minimum required version**: Commit [e13e5675](https://github.com/smartcontractkit/chainlink/commit/e13e5675d3852b04e18dad9881e958066a2bf87a) (Feb 25, 2025) - ---- - -### Environment Variables - -You usually do not need extra environment variables for the default local flow. The helpers default to `core/scripts/cre/environment/configs/workflow-gateway-capabilities-don.toml`. - -Set these only when you need to override the default behavior: - -- `CTF_CONFIGS` -- path to a specific topology TOML when you want a non-default topology -- `TOPOLOGY_NAME` -- optional label used in some test names and logs -- `CTF_LOG_LEVEL=debug` -- to display test debug-level logs - ---- - -### `cron` Capability Binary - -This binary is needed for tests using the cron capability. - -**Option 1**: Use a CL node image that already includes the binary (e.g. via `CTF_CHAINLINK_IMAGE`). The image has it at `/usr/local/bin/cron`. - -**Option 2**: Build the capability locally and use `env swap capability` to deploy it to running containers: -`go run . env swap capability -n cron -b /path/to/your/cron` from `core/scripts/cre/environment`. - ---- - -### Test Timeout - -- If building the image: Set Go test timeout to **20 minutes**. 
-- If using pre-built images: Execution takes **4–7 minutes**. - ---- - -### Visual Studio Code Debug Configuration - -Example `launch.json` entry: - -```json -{ - "name": "Launch Capability Test", - "type": "go", - "request": "launch", - "mode": "test", - "program": "${workspaceFolder}/system-tests/tests/smoke/cre", - "args": ["-test.run", "Test_CRE_Suite"] -} -``` - -**CI behavior differs**: In CI, workflows and binaries are uploaded ahead of time, and images are injected via: - -- `CTF_JD_IMAGE` -- `CTF_CHAINLINK_IMAGE` - ---- - -## 2. Using the CLI - -Local CRE environment and documentation were migrated to [core/scripts/cre/environment/README.md](../../../../core/scripts/cre/environment/README.md). - ---- - -## 3. Docker vs Kubernetes (k8s) - -The environment type is set in your TOML config: - -```toml -[infra] - type = "kubernetes" # Options: "docker" or "kubernetes" -``` - -To run tests in Kubernetes, use the `kubernetes` option. - -- **AWS cloud provider** - -Example TOML for Kubernetes: - -```toml -[infra.kubernetes] - namespace = "kubernetes-local" -``` - ---- - -## 5. Setting Docker Images for Kubernetes Execution - -Kubernetes mode does **not** build Docker images during test runtime. - -❌ Not allowed: - -```toml -[nodesets.node_specs.node] - docker_ctx = "../../../.." - docker_file = "core/chainlink.Dockerfile" -``` - -✅ Required: - -```toml -[nodesets.node_specs.node] - image = "chainlink:112b9323-plugins-cron" -``` - -### Notes - -- All nodes in a single nodeset **must** use the same image. -- You must specify an image tag explicitly (e.g., `:v1.2.3`). - -### Job Distributor (JD) Image in Kubernetes - -```toml -[jd] - image = "/job-distributor:0.22.1" -``` - -Replace `` placeholder with the actual value. - ---- - -## 11. Using a New Workflow - -This section explains how to compile, upload, and register workflows in the CRE testing framework. 
The process involves compiling Go workflow source code to WebAssembly, copying files to containers, and registering with the blockchain contract. - -### Workflow Compilation Process - -The tests compile workflow sources through `system-tests/lib/cre/workflow/compile.go`. - -Current behavior: - -1. Workflow names must be at least 10 characters long. -2. Go workflows run `go mod tidy` in the workflow directory before build. -3. Go workflows are built with `GOOS=wasip1`, `GOARCH=wasm`, `CGO_ENABLED=0`. -4. The resulting `.wasm` artifact is Brotli-compressed and base64-encoded into a `.br.b64` file. -5. TypeScript workflows are compiled through `bun cre-compile ...`, so `bun` and the generated `package.json` from `go run . env setup` must be present. - -Use the helper APIs that the tests already use instead of re-implementing the flow manually. The main path is: - -- `t_helpers.CompileAndDeployWorkflow(...)` for smoke/regression tests -- `creworkflow.CompileWorkflow(...)` or `CompileWorkflowToDir(...)` only when you intentionally need lower-level control - -#### Compilation Requirements - -Go workflows: -- **Workflow Name**: Must be at least 10 characters long -- **Go Environment**: Requires `go mod tidy` to be run in the workflow directory -- **Target Platform**: Compiles for `GOOS=wasip1` and `GOARCH=wasm` -- **Output Format**: Produces `.br.b64` files containing Brotli-compressed, base64-encoded WASM - -TypeScript workflows: -- **Workflow Name**: Must be at least 10 characters long -- **Bun installed**: Requires `bun` (automatically installed by `go run . env setup`) -- **package.json**: Correct `package.json` must exist in `core/scripts/cre/environment` (automatically created by `go run . env setup`) -- **Output Format**: Produces `.br.b64` files containing Brotli-compressed, base64-encoded WASM - -### Workflow Configuration - -Workflows may require configuration files that define their runtime parameters. 
Configuration is optional and depends on the specific workflow implementation: - -#### Creating Configuration Files (Optional) - -```go -func createConfigFile(feedsConsumerAddress common.Address, workflowName, feedID, dataURL, writeTargetName string) (string, error) { - workflowConfig := portypes.WorkflowConfig{ - ComputeConfig: portypes.ComputeConfig{ - FeedID: feedID, - URL: dataURL, - DataFeedsCacheAddress: feedsConsumerAddress.Hex(), - WriteTargetName: writeTargetName, - }, - } - - configMarshalled, err := yaml.Marshal(workflowConfig) - if err != nil { - return "", errors.Wrap(err, "failed to marshal workflow config") - } - - outputFile := workflowName + "_config.yaml" - if err := os.WriteFile(outputFile, configMarshalled, 0644); err != nil { - return "", errors.Wrap(err, "failed to write output file") - } - - outputFileAbsPath, err := filepath.Abs(outputFile) - if err != nil { - return "", errors.Wrap(err, "failed to get absolute path of the config file") - } - - return outputFileAbsPath, nil -} -``` - -### File Distribution to Containers - -After compilation, workflow files must be distributed to the appropriate containers: - -#### Copying Files to Containers - -```go -containerTargetDir := "/home/chainlink/workflows" - -// Copy workflow artifacts to workflow-node containers -workflowCopyErr := creworkflow.CopyArtifactsToDockerContainers( - containerTargetDir, - "workflow-node", - compressedWorkflowWasmPath, workflowConfigFilePath, -) -require.NoError(t, workflowCopyErr, "failed to copy workflow to docker containers") -``` - -#### Container Discovery - -The framework automatically discovers containers by name pattern: - -- Uses Docker API to list running containers -- Matches container names against the provided pattern -- Copies files to all matching containers -- Creates target directories if they don't exist - -### Workflow Registration - -Workflows are registered with the blockchain contract using the `RegisterWithContract` function: - -#### Registration 
Parameters - -- **Context**: Test context for timeout handling -- **Seth Client**: Blockchain client for contract interaction -- **Registry Address**: Workflow Registry contract address -- **Registry Version**: Required so the helper can select the v1/v2 registration path -- **DON ID**: Decentralized Oracle Network identifier -- **Workflow Name**: Unique identifier for the workflow -- **Binary URL**: Path to the compiled workflow binary on the host machine (used to read and calculate workflow ID) -- **Config URL**: Path to the workflow configuration file on the host machine (optional, used to read and calculate workflow ID) -- **Secrets URL**: Path to encrypted secrets on the host machine (optional) -- **Artifacts Directory**: Container directory where workflow files are stored (e.g., `/home/chainlink/workflows`) - -The exact helper signature changes over time. Before adding a new manual registration flow, check: - -- `system-tests/lib/cre/workflow/workflow.go` -- `system-tests/tests/test-helpers/t_helpers.go` - -For most smoke tests, prefer `t_helpers.CompileAndDeployWorkflow(...)` instead of calling the lower-level helpers directly. - -#### URL Resolution Process - -The `RegisterWithContract` function processes URLs as follows: - -1. **Host Paths**: Binary URL, Config URL, and Secrets URL are paths on the host machine -2. **File Reading**: The function reads these files to calculate the workflow ID and validate content -3. **Container Path Construction**: If `artifactsDirInContainer` is provided, the function constructs container paths by: - - Extracting the filename from the host path using `filepath.Base()` - - Joining it with the artifacts directory: `file://{artifactsDir}/{filename}` -4. 
**Contract Registration**: The constructed container paths are registered with the blockchain contract - -**Important**: The `Artifacts Directory` must match the `CRE.WorkflowFetcher.URL` configuration in your TOML file: - -```toml -[CRE.WorkflowFetcher] -URL = "file:///home/chainlink/workflows" -``` - -This ensures that the Chainlink nodes can locate and load the workflow files from the correct container path. - -> The Chainlink node can only load workflow files from the local filesystem if `WorkflowFetcher` uses the `file://` prefix. Right now, it cannot read workflow files from both the local filesystem and external sources (like S3 or web servers) at the same time. - -### Complete Workflow Setup Example - -For new smoke tests, use the existing helper flow instead of reproducing compilation, copy, and registration logic inline: - -1. Build or reuse a test environment with `t_helpers.SetupTestEnvironmentWithConfig(...)` or `SetupTestEnvironmentWithPerTestKeys(...)`. -2. Create a workflow name that stays within the current 64-character limit. Prefer `t_helpers.UniqueWorkflowName(...)` when you need uniqueness. -3. Prepare a typed workflow config value if the workflow needs configuration. -4. Call `t_helpers.CompileAndDeployWorkflow(...)`. -5. Assert execution using the helper patterns already used by nearby smoke tests. - -Examples to copy from current tests: - -- EVM capability flows in `v2_evm_capability_test.go` -- HTTP action flows in `v2_http_action_test.go` -- gRPC source flows in `v2_grpc_source_test.go` -- Aptos and Solana flows in the corresponding `v2_*_capability_test.go` files - ---- - -### 12. Workflow Secrets - -Workflow secrets provide a secure way to pass sensitive data (like API keys, private keys, or database credentials) to workflows running on Chainlink nodes. The secrets are encrypted using each node's public encryption key and can only be decrypted by the intended recipient nodes. - -#### How Secrets Work - -1. 
**Configuration**: Define which environment variables contain your secrets -2. **Encryption**: Secrets are encrypted using each DON node's public encryption key -3. **Distribution**: Encrypted secrets are distributed to the appropriate nodes -4. **Decryption**: Each node decrypts only the secrets intended for it - -#### Creating Secrets Configuration - -Create a YAML file that maps secret names to environment variables: - -```yaml -# secrets.yaml -secretsNames: - API_KEY_SECRET: - - API_KEY_ENV_VAR_ALL - DATABASE_PASSWORD: - - DB_PASSWORD_ENV_VAR_ALL - PRIVATE_KEY: - - PRIVATE_KEY_ENV_VAR_ALL -``` - -#### Environment Variable Naming - -- Use `_ENV_VAR_ALL` suffix for secrets shared across all nodes in the DON -- Use `_ENV_VAR_NODE_{NODE_INDEX}` suffix for node-specific secrets (where `NODE_INDEX` is the sequential position of the node in the DON: 0, 1, 2, etc.) -- Environment variables must be set before running the workflow registration - -**Note**: `NODE_INDEX` refers to the node's position in the DON (0-based indexing), not the P2P ID. For example: - -- `API_KEY_ENV_VAR_NODE_0` for the first node in the DON -- `API_KEY_ENV_VAR_NODE_1` for the second node in the DON -- `API_KEY_ENV_VAR_NODE_2` for the third node in the DON - -#### Using Secrets in Workflows - -Use the current `PrepareSecrets(...)` helper only after checking its live signature in `system-tests/lib/cre/workflow/secrets.go`. It currently requires: - -- seth client -- DON ID -- capabilities registry address -- capabilities registry version -- workflow owner address -- secrets config path -- secrets output path - -This helper is lower-level and intentionally coupled to the current registry implementation. If you only need secrets in a new smoke test, copy an up-to-date example from the current test tree instead of relying on old README snippets. - -#### Secrets Encryption Process - -The `PrepareSecrets` function performs these steps: - -1. **Load Configuration**: Parses the secrets YAML file -2. 
**Read Environment Variables**: Loads secret values from environment variables -3. **Get Node Information**: Retrieves node public keys from the Capabilities Registry contract -4. **Filter DON Nodes**: Identifies nodes that belong to the specific DON -5. **Encrypt Secrets**: Encrypts secrets using each node's public encryption key -6. **Generate Metadata**: Creates metadata including encryption keys and node assignments -7. **Save Encrypted File**: Outputs a JSON file with encrypted secrets and metadata - -#### Encrypted Secrets File Structure - -The generated encrypted secrets file contains: - -```json -{ - "encryptedSecrets": { - "node_p2p_id_1": "encrypted_secret_for_node_1", - "node_p2p_id_2": "encrypted_secret_for_node_2" - }, - "metadata": { - "workflowOwner": "0x...", - "capabilitiesRegistry": "0x...", - "donId": "1", - "dateEncrypted": "2024-01-01T00:00:00Z", - "nodePublicEncryptionKeys": { - "node_p2p_id_1": "public_key_1", - "node_p2p_id_2": "public_key_2" - }, - "envVarsAssignedToNodes": { - "node_p2p_id_1": ["API_KEY_ENV_VAR_ALL"], - "node_p2p_id_2": ["API_KEY_ENV_VAR_ALL"] - } - } -} -``` - -#### Security Considerations - -- **Node-Specific Encryption**: Each node can only decrypt secrets intended for it -- **DON Isolation**: Secrets are encrypted per DON and cannot be shared across different DONs -- **Environment Variables**: Secrets are loaded from environment variables, not hardcoded -- **Temporary Files**: Encrypted secrets files are automatically cleaned up after registration - -#### Complete Example - -Avoid copying a static README example for secrets setup. The relevant helper signatures have changed more than once. When adding a secrets-based test: - -1. Start from the current helper implementations in `system-tests/lib/cre/workflow`. -2. Verify the registry version you are targeting. -3. Reuse the same artifact copy flow as other current smoke tests. -4. Keep the secrets example in the test itself, close to the code that depends on it. 
- ---- - -### 13. YAML Workflows (Data Feeds DSL) - -No compilation required. Define YAML workflow inline and propose it like any job: - -```toml -type = "workflow" -schemaVersion = 1 -name = "df-workflow" -externalJobID = "df-workflow-id" -workflow = """ -name: df-workflow -owner: '0xabc...' -triggers: - - id: streams-trigger@1.0.0 - config: - maxFrequencyMs: 5000 - feedIds: - - '0xfeed...' -consensus: - - id: offchain_reporting@1.0.0 - ref: ccip_feeds - inputs: - observations: - - $(trigger.outputs) - config: - report_id: '0001' - key_id: 'evm' - aggregation_method: data_feeds - encoder: EVM - encoder_config: - abi: (bytes32 FeedID, uint224 Price, uint32 Timestamp)[] Reports -targets: - - id: write_geth@1.0.0 - inputs: - signed_report: $(ccip_feeds.outputs) - config: - address: '0xcontract...' - deltaStage: 10s - schedule: oneAtATime -""" -``` - -Then propose the job using JD, either directly: - -```go -offChainClient.ProposeJob(ctx, &jobv1.ProposeJobRequest{NodeId: nodeID, Spec: workflowSpec}) -``` - -Or using the `CreateJobs` helper: - -```go -createJobsInput := keystonetypes.CreateJobsInput{ - CldEnv: env, - DonTopology: donTopology, - DonToJobSpecs: donToJobSpecs, -} -createJobsErr := libdon.CreateJobs(testLogger, createJobsInput) -``` - -## 14. Adding a New Test to the CI - -The CRE system tests use **auto-discovery** to automatically find and run all tests in the `system-tests/tests/smoke/cre` directory. This means you don't need to manually register your test in any CI configuration files. - -### How Auto-Discovery Works - -The CI workflow (`.github/workflows/cre-system-tests.yaml`) automatically: - -1. **Discovers Tests**: Uses `go test -list .` to find all test functions in the package -2. **Creates Test Matrix**: Generates a matrix with each test and supported DON topologies -3. 
**Runs Tests**: Executes each test with different configurations automatically - -### Test Architecture Pattern - -The CRE system tests follow a **separated architecture pattern** where: - -- **Environment Creation**: The environment (DONs, contracts, nodes) is created once per topology -- **Test Execution**: Multiple tests run on the same environment instance -- **Shared State**: Tests can leverage the same deployed contracts and node infrastructure - -This pattern allows for efficient resource usage and enables running the same test logic across different DON topologies without recreating the entire environment for each test. - -#### Supported DON Topologies - -Each test is automatically run with these two topologies: - -- **workflow-gateway**: Uses `configs/workflow-gateway-don.toml,configs/ci-config.toml` -- **workflow-gateway-capabilities**: Uses `configs/workflow-gateway-capabilities-don.toml,configs/ci-config.toml` - -### What You Need to Do - -#### 1. Create Your Test Function - -Simply add your test function to any `.go` file in the `system-tests/tests/smoke/cre` directory: - -```go -func Test_CRE_My_New_Workflow(t *testing.T) { - // Your test implementation - // The CI will automatically discover and run this test -} -``` - -#### 2. Follow Test Naming Convention - -Use the `Test_CRE_` prefix for your test functions to ensure they're properly identified: - -```go -func Test_CRE_MyWorkflow(t *testing.T) // ✅ Good -func Test_CRE_AnotherWorkflow(t *testing.T) // ✅ Good -func TestMyWorkflow(t *testing.T) // ❌ Will be discovered but not recommended -``` - -#### 3. Use Standard Test Structure - -Follow the existing test patterns in the directory. Note that the environment is created separately and shared across tests: - -```go -func Test_CRE_My_New_Workflow(t *testing.T) { - // 1. Set configuration if needed - confErr := setConfigurationIfMissing("path/to/config.toml", "topology") - require.NoError(t, confErr, "failed to set configuration") - - // 2. 
Load existing environment (created by CI) - in, err := framework.Load[environment.Config](nil) - require.NoError(t, err, "couldn't load environment state") - - // 3. Your test logic using the shared environment - // The environment (DONs, contracts, nodes) is already set up and ready to use - // ... -} -``` - -**Important**: Your test should be designed to work with any of the supported DON topologies. The same test logic should ideally be compatible with: - -- `workflow` topology -- `workflow-gateway` topology -- `workflow-gateway-capabilities` topology - -This ensures maximum test coverage and validates your workflow across different deployment configurations. - -### CI Configuration Details - -The auto-discovery process works as follows: - -```yaml -# From .github/workflows/cre-system-tests.yaml -- name: Define test matrix - run: | - tests=$(go test github.com/smartcontractkit/chainlink/system-tests/tests/smoke/cre -list . | grep -v "ok" | grep -v "^$" | jq -R -s 'split("\n")[:-1] | map([{"test_name": ., "topology": "workflow-gateway", "configs":"configs/workflow-gateway-don.toml,configs/ci-config.toml"}, {"test_name": ., "topology": "workflow-gateway-capabilities", "configs":"configs/workflow-gateway-capabilities-don.toml,configs/ci-config.toml"}]) | flatten') -``` - -### Environment Setup - -The CI automatically sets up the test environment: - -- **Dependencies**: Downloads required capability binaries -- **Local CRE**: Starts the CRE environment with the specified topology (once per topology) -- **Configuration**: Uses the appropriate TOML configuration files -- **Artifacts**: Handles test logs and artifacts automatically -- **Shared Infrastructure**: All tests within the same topology share the same environment instance - -This approach ensures that: - -- Environment creation overhead is minimized -- Tests can leverage shared contracts and node infrastructure -- The same test logic can be validated across different DON configurations - -### Test Execution - 
-Each test runs with: +## Quickstart ```bash -go test github.com/smartcontractkit/chainlink/system-tests/tests/smoke/cre \ - -v -run "^(${TEST_NAME})$" \ - -timeout ${TEST_TIMEOUT} \ - -count=1 \ - -test.parallel=1 \ - -json -``` - -### Important Notes - -- **No Manual Registration**: You don't need to add your test to any CI configuration files -- **Automatic Matrix**: Each test runs with both DON topologies automatically -- **Standard Configurations**: Uses the existing TOML configuration files -- **Dependency Management**: Capabilities and dependencies are handled automatically -- **Logging**: Test logs are automatically captured and uploaded on failure - -### Troubleshooting - -If your test isn't being discovered: - -1. **Check Function Name**: Ensure it starts with `Test_CRE_` -2. **Check Location**: Ensure it's in the `system-tests/tests/smoke/cre` directory -3. **Check Syntax**: Ensure the test function compiles without errors -4. **Check Dependencies**: Ensure all required dependencies are available - -> **Note**: The auto-discovery system eliminates the need for manual CI configuration, making it much easier to add new tests to the CI pipeline. - ---- - -## 15. Price Data Source - -The CRE system supports both **live** and **mocked** price feeds through a unified `PriceProvider` interface. This allows for flexible testing scenarios while maintaining consistent behavior across different data sources. 
- -### PriceProvider Interface - -The system uses a common interface that abstracts price data source logic: - -```go -type PriceProvider interface { - URL() string - NextPrice(feedID string, price *big.Int, elapsed time.Duration) bool - ExpectedPrices(feedID string) []*big.Int - ActualPrices(feedID string) []*big.Int - AuthKey() string -} -``` - -### Live Price Source (TrueUSDPriceProvider) - -For integration testing with real data: - -```go -// Create a live price provider -priceProvider := NewTrueUSDPriceProvider(testLogger, feedIDs) - -// The provider uses the live TrueUSD API -// URL: https://api.real-time-reserves.verinumus.io/v1/chainlink/proof-of-reserves/TrueUSD -``` - -**Characteristics:** - -- Uses real-time data from the TrueUSD API -- No authentication required -- Validates that prices are non-zero -- Tracks actual prices received by the workflow -- Limited validation capabilities (can only check for non-zero values) - -### Mocked Price Source (FakePriceProvider) - -For local testing and controlled scenarios: - -```go -// Create a fake price provider -fakeInput := &fake.Input{Port: 8171} -authKey := "your-auth-key" -priceProvider, err := NewFakePriceProvider(testLogger, fakeInput, authKey, feedIDs) -require.NoError(t, err, "failed to create fake price provider") -``` - -**Characteristics:** +cd core/scripts/cre/environment +go run . env setup +go run . env start --with-beholder -- Generates random prices for testing -- Provides controlled price sequences -- Validates exact price matches -- Supports authentication -- Tracks both expected and actual prices - -### Mock Server Implementation - -The fake price provider sets up a mock HTTP server that: - -1. **Generates Random Prices**: Creates random price values between 1.00 and 200.00 -2. **Supports Authentication**: Validates Authorization headers -3. **Responds to Feed Queries**: Handles `GET` requests with `feedID` parameter -4. 
**Returns Structured Data**: Provides JSON responses in the expected format: - -```json -{ - "accountName": "TrueUSD", - "totalTrust": 123.45, - "ripcord": false, - "updatedAt": "2024-01-01T00:00:00Z" -} -``` - -### Price Validation Logic - -Both providers implement smart validation: - -#### Live Provider Validation - -```go -func (l *TrueUSDPriceProvider) NextPrice(feedID string, price *big.Int, elapsed time.Duration) bool { - // Wait for non-zero price - if price == nil || price.Cmp(big.NewInt(0)) == 0 { - return true // Continue waiting - } - // Price found, stop waiting - return false -} -``` - -#### Mock Provider Validation - -```go -func (f *FakePriceProvider) NextPrice(feedID string, price *big.Int, elapsed time.Duration) bool { - // Check if this is a new price we haven't seen - if !f.priceAlreadyFound(feedID, price) { - // Record the new price - f.actualPrices[feedID] = append(f.actualPrices[feedID], price) - - // Move to next expected price - f.priceIndex[feedID] = ptr.Ptr(len(f.actualPrices[feedID])) - - // Continue if more prices expected - return len(f.actualPrices[feedID]) < len(f.expectedPrices[feedID]) - } - return true // Continue waiting -} -``` - -### Configuration - -#### TOML Configuration - -The price provider is **not** configured directly in TOML. 
Instead, the TOML only configures the fake server port: - -```toml -[fake] - port = 8171 -``` - -#### Programmatic Configuration - -Price providers are created programmatically in your Go test code: - -```go -// For live price provider (no TOML configuration needed) -priceProvider := NewTrueUSDPriceProvider(testLogger, feedIDs) - -// For fake price provider (uses port from TOML [fake] section) -fakeInput := &fake.Input{Port: 8171} // or use in.Fake from loaded config -authKey := "your-auth-key" -priceProvider, err := NewFakePriceProvider(testLogger, fakeInput, authKey, feedIDs) -require.NoError(t, err) +go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_' ``` - -### Usage in Tests - -```go -func Test_CRE_Price_Feed(t *testing.T) { - feedIDs := []string{ - "018e16c39e000320000000000000000000000000000000000000000000000000", - "018e16c38e000320000000000000000000000000000000000000000000000000", - } - - // Choose your provider - var priceProvider PriceProvider - var err error - - if useLiveProvider { - priceProvider = NewTrueUSDPriceProvider(testLogger, feedIDs) - } else { - fakeInput := &fake.Input{Port: 8171} - priceProvider, err = NewFakePriceProvider(testLogger, fakeInput, "auth-key", feedIDs) - require.NoError(t, err) - } - - // Use the provider in your workflow configuration - workflowConfig := &portypes.WorkflowConfig{ - ComputeConfig: portypes.ComputeConfig{ - FeedID: feedIDs[0], - URL: priceProvider.URL(), - // ... other config - }, - } - - // Validate price updates - startTime := time.Now() - assert.Eventually(t, func() bool { - price := getLatestPrice(feedIDs[0]) - return !priceProvider.NextPrice(feedIDs[0], price, time.Since(startTime)) - }, timeout, interval) -} -``` - -### Key Benefits - -1. **Unified Interface**: Same API for both live and mocked sources -2. **Flexible Testing**: Easy switching between real and fake data -3. **Controlled Validation**: Mock provider enables precise price validation -4. **Authentication Support**: Mock server supports auth for realistic testing -5. 
**Price Tracking**: Both providers track actual prices received by workflows +## Rule of Thumb ---- +Happy-path and sanity checks belong in `smoke`; edge cases and negative conditions belong in `regression`. From 4893c375e3c827fe6247cbee3f053491c1a50e3e Mon Sep 17 00:00:00 2001 From: Bartek Tofel Date: Thu, 2 Apr 2026 19:01:50 +0200 Subject: [PATCH 3/3] CR fixes --- docs/local-cre/_category_.yaml | 2 +- docs/local-cre/environment/_category_.yaml | 2 +- docs/local-cre/getting-started/_category_.yaml | 2 +- docs/local-cre/getting-started/index.md | 8 ++++++++ docs/local-cre/reference/_category_.yaml | 2 +- docs/local-cre/system-tests/_category_.yaml | 2 +- .../local-cre/system-tests/ci-and-suite-maintenance.md | 1 + docs/local-cre/system-tests/running-tests.md | 10 +++++++++- 8 files changed, 23 insertions(+), 6 deletions(-) diff --git a/docs/local-cre/_category_.yaml b/docs/local-cre/_category_.yaml index b675f33d608..9a096fa23c6 100644 --- a/docs/local-cre/_category_.yaml +++ b/docs/local-cre/_category_.yaml @@ -4,4 +4,4 @@ collapsible: false collapsed: false link: type: doc - id: CRE/local-cre/local-cre-index + id: local-cre-index diff --git a/docs/local-cre/environment/_category_.yaml b/docs/local-cre/environment/_category_.yaml index 9f55e355bcc..9d92335186b 100644 --- a/docs/local-cre/environment/_category_.yaml +++ b/docs/local-cre/environment/_category_.yaml @@ -4,4 +4,4 @@ collapsible: false collapsed: false link: type: doc - id: CRE/local-cre/environment/local-cre-environment-index + id: local-cre-environment-index diff --git a/docs/local-cre/getting-started/_category_.yaml b/docs/local-cre/getting-started/_category_.yaml index 302e7dddceb..7eec05c0642 100644 --- a/docs/local-cre/getting-started/_category_.yaml +++ b/docs/local-cre/getting-started/_category_.yaml @@ -4,4 +4,4 @@ collapsible: false collapsed: false link: type: doc - id: CRE/local-cre/getting-started/local-cre-getting-started-index + id: local-cre-getting-started-index diff --git 
a/docs/local-cre/getting-started/index.md b/docs/local-cre/getting-started/index.md index 09e070d0c01..4893cdac758 100644 --- a/docs/local-cre/getting-started/index.md +++ b/docs/local-cre/getting-started/index.md @@ -84,6 +84,14 @@ Once the environment is up, run the CRE smoke package: go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_' ``` +For the default smoke-test flow, start Local CRE without `--with-beholder`. Most tests start the ChIP test sink on the default gRPC port (`50051`), and Beholder uses that same port through Chip Ingress. + +Enable Beholder only when: + +- you are running Beholder-specific tests +- you intentionally need the Beholder stack for debugging +- you move Beholder to a different port with `--grpc-port` so it does not conflict with the test sink + The smoke tests default to the capability-enabled topology when you do not override `CTF_CONFIGS`, and the test helpers can start Local CRE automatically if the state file does not exist yet. Continue with: diff --git a/docs/local-cre/reference/_category_.yaml b/docs/local-cre/reference/_category_.yaml index ad0960a5ffc..d01d98d6a4c 100644 --- a/docs/local-cre/reference/_category_.yaml +++ b/docs/local-cre/reference/_category_.yaml @@ -4,4 +4,4 @@ collapsible: false collapsed: false link: type: doc - id: CRE/local-cre/reference/local-cre-reference-index + id: local-cre-reference-index diff --git a/docs/local-cre/system-tests/_category_.yaml b/docs/local-cre/system-tests/_category_.yaml index a25dfa5542d..ea225219b15 100644 --- a/docs/local-cre/system-tests/_category_.yaml +++ b/docs/local-cre/system-tests/_category_.yaml @@ -4,4 +4,4 @@ collapsible: false collapsed: false link: type: doc - id: CRE/local-cre/system-tests/local-cre-system-tests-index + id: local-cre-system-tests-index diff --git a/docs/local-cre/system-tests/ci-and-suite-maintenance.md b/docs/local-cre/system-tests/ci-and-suite-maintenance.md index 6423b60f24c..a313c962283 100644 --- 
a/docs/local-cre/system-tests/ci-and-suite-maintenance.md +++ b/docs/local-cre/system-tests/ci-and-suite-maintenance.md @@ -59,6 +59,7 @@ Some tests must replace that default topology set with explicit per-test overrid - `Test_CRE_V2_Aptos_Suite` -> `workflow-gateway-aptos` - `Test_CRE_V2_Solana_Suite` -> `workflow` +- `Test_CRE_V1_Tron` -> `workflow` - `Test_CRE_V2_Sharding` -> `workflow-gateway-sharded` If a new test only works with a non-default topology, adding the test code is not enough. You must also add an explicit override in the workflow matrix so CI runs the test with the matching `topology` and `configs` pair. diff --git a/docs/local-cre/system-tests/running-tests.md b/docs/local-cre/system-tests/running-tests.md index 6a747ee5046..1bec93bbb46 100644 --- a/docs/local-cre/system-tests/running-tests.md +++ b/docs/local-cre/system-tests/running-tests.md @@ -16,7 +16,7 @@ The usual local path is: ```bash cd core/scripts/cre/environment go run . env setup -go run . env start --with-beholder +go run . env start ``` Then run the tests: @@ -27,6 +27,14 @@ go test ./system-tests/tests/smoke/cre -timeout 20m -run '^Test_CRE_' The comments in `cre_suite_test.go` also call out the pattern of starting Local CRE first and then running the smoke package. +Do not enable `--with-beholder` for the default smoke-test flow. Most CRE smoke tests start the ChIP test sink on the default gRPC port (`50051`), and Beholder starts Chip Ingress on that same port. If both try to use the default port, the test sink startup fails. + +Enable Beholder only when: + +- you are running Beholder-specific coverage +- you need the Beholder stack for debugging +- you start it on a different `--grpc-port` so the smoke-test sink can still bind its default port + ## Environment Variables The main variables used by the smoke suite are: