This document provides a comprehensive guide to all environment variables used by the Dealbot. Understanding these variables is essential for proper configuration in development, testing, and production environments.
| Category | Variables |
|---|---|
| Application | NODE_ENV, DEALBOT_PORT, DEALBOT_HOST, DEALBOT_RUN_MODE, DEALBOT_METRICS_PORT, DEALBOT_METRICS_HOST, DEALBOT_ALLOWED_ORIGINS, ENABLE_DEV_MODE |
| Database | DATABASE_HOST, DATABASE_PORT, DATABASE_POOL_MAX, DATABASE_USER, DATABASE_PASSWORD, DATABASE_NAME |
| Blockchain | NETWORK, RPC_URL, WALLET_ADDRESS, WALLET_PRIVATE_KEY, SESSION_KEY_PRIVATE_KEY, CHECK_DATASET_CREATION_FEES, USE_ONLY_APPROVED_PROVIDERS, PDP_SUBGRAPH_ENDPOINT |
| Dataset Versioning | DEALBOT_DATASET_VERSION |
| Scheduling | PROVIDERS_REFRESH_INTERVAL_SECONDS, DATA_RETENTION_POLL_INTERVAL_SECONDS, DEALBOT_MAINTENANCE_WINDOWS_UTC, DEALBOT_MAINTENANCE_WINDOW_MINUTES |
| Jobs (pg-boss) | DEALBOT_PGBOSS_SCHEDULER_ENABLED, DEALBOT_PGBOSS_POOL_MAX, DEALS_PER_SP_PER_HOUR, DATASET_CREATIONS_PER_SP_PER_HOUR, RETRIEVALS_PER_SP_PER_HOUR, JOB_SCHEDULER_POLL_SECONDS, JOB_WORKER_POLL_SECONDS, PG_BOSS_LOCAL_CONCURRENCY, JOB_CATCHUP_MAX_ENQUEUE, JOB_SCHEDULE_PHASE_SECONDS, JOB_ENQUEUE_JITTER_SECONDS, DEAL_JOB_TIMEOUT_SECONDS, RETRIEVAL_JOB_TIMEOUT_SECONDS, IPFS_BLOCK_FETCH_CONCURRENCY |
| Dataset | DEALBOT_LOCAL_DATASETS_PATH, RANDOM_PIECE_SIZES |
| Timeouts | CONNECT_TIMEOUT_MS, HTTP_REQUEST_TIMEOUT_MS, HTTP2_REQUEST_TIMEOUT_MS, IPNI_VERIFICATION_TIMEOUT_MS, IPNI_VERIFICATION_POLLING_MS |
| Piece Cleanup | MAX_DATASET_STORAGE_SIZE_BYTES, TARGET_DATASET_STORAGE_SIZE_BYTES, JOB_PIECE_CLEANUP_PER_SP_PER_HOUR, MAX_PIECE_CLEANUP_RUNTIME_SECONDS |
| SP Blocklist | BLOCKED_SP_IDS, BLOCKED_SP_ADDRESSES |
| Prometheus Metrics | PROMETHEUS_WALLET_BALANCE_TTL_SECONDS, PROMETHEUS_WALLET_BALANCE_ERROR_COOLDOWN_SECONDS |
| Web Frontend | VITE_API_BASE_URL, VITE_PLAUSIBLE_DATA_DOMAIN, DEALBOT_API_BASE_URL |
### NODE_ENV

- Type: `string`
- Required: No
- Default: `development`
- Valid values: `development`, `production`

Role: Determines the runtime environment mode. Affects logging verbosity, error handling, and optimization behaviors.

When to update:
- Set to `production` when deploying to production environments
- Keep as `development` for local development
Database migrations won't run if `NODE_ENV` is set to `development`. TypeORM handles entity schema changes automatically.
### DEALBOT_PORT

- Type: `number`
- Required: No
- Default: `3000` (config) / `8080` (recommended in .env.example)

Role: The port on which the Dealbot backend HTTP server listens.

When to update:
- When the default port conflicts with another service
- When deploying behind a reverse proxy that expects a specific port
- When running multiple Dealbot instances on the same machine

Example scenario: Running Dealbot alongside another service that uses port 8080:
`DEALBOT_PORT=9000`

### DEALBOT_RUN_MODE

- Type: `string`
- Required: No
- Default: `both`
- Valid values: `api`, `worker`, `both`

Role: Controls which components run in the process:
- `api`: API server + scheduler (no workers)
- `worker`: workers + `/metrics` only (no API)
- `both`: API server + scheduler + workers

Example: `DEALBOT_RUN_MODE=worker`

### DEALBOT_METRICS_PORT

- Type: `number`
- Required: No
- Default: `9090`

Role: Port used for the metrics-only HTTP server when `DEALBOT_RUN_MODE=worker`.

Example: `DEALBOT_METRICS_PORT=9090`

### DEALBOT_METRICS_HOST

- Type: `string`
- Required: No
- Default: `0.0.0.0`

Role: Host/interface used for the metrics-only HTTP server when `DEALBOT_RUN_MODE=worker`.

Example: `DEALBOT_METRICS_HOST=0.0.0.0`

### DEALBOT_HOST

- Type: `string`
- Required: No
- Default: `127.0.0.1` (localhost only)

Role: The network interface/host the server binds to.

When to update:
- Set to `0.0.0.0` to accept connections from any network interface (required for containerized deployments)
- Keep as `127.0.0.1` for local-only access during development

Example scenario: Deploying in Kubernetes where the service needs to be accessible from other pods:
`DEALBOT_HOST=0.0.0.0`

### DEALBOT_ALLOWED_ORIGINS

- Type: `string` (comma-separated URLs)
- Required: No
- Default: Empty (CORS disabled)

Role: Configures Cross-Origin Resource Sharing (CORS) to allow the web frontend to make API requests to the backend.

When to update:
- When the web frontend URL changes
- When adding additional frontend deployments (staging, preview environments)
- When developing locally with a different frontend port

Example scenario: Allowing both local development and a staging frontend:
`DEALBOT_ALLOWED_ORIGINS=http://localhost:5173,http://127.0.0.1:5173,https://staging.dealbot.example.com`

### ENABLE_DEV_MODE

- Type: `boolean`
- Required: No
- Default: `false`

Role: Enables the `/api/dev/*` endpoints for manually triggering deals and retrievals during local development.

When to update:
- Set to `true` only for local development or isolated test environments
- Keep as `false` for any deployed Dealbot instance unless separate gating or security measures are in place

Security warning: This flag exposes unauthenticated dev-only endpoints that bypass normal scheduling and safeguards. Do not enable in production or shared environments without additional access controls.

Example: `ENABLE_DEV_MODE=true`

### DATABASE_HOST

- Type: `string`
- Required: Yes

Role: Hostname or IP address of the PostgreSQL database server.

When to update:
- When connecting to a remote database instead of localhost
- When using a managed database service (AWS RDS, Cloud SQL, etc.)
- When the database is in a different Kubernetes namespace or cluster

Example scenarios:
- Local development: `DATABASE_HOST=localhost`
- Kubernetes internal service: `DATABASE_HOST=dealbot-postgres.dealbot.svc.cluster.local`
- Managed database: `DATABASE_HOST=dealbot-db.abc123.us-east-1.rds.amazonaws.com`

### DATABASE_PORT

- Type: `number`
- Required: No
- Default: `5432`

Role: Port number for the PostgreSQL connection.

When to update:
- When using a non-standard PostgreSQL port
- When connecting through a port-forwarded tunnel
### DATABASE_POOL_MAX

- Type: `number`
- Required: No
- Default: `1`

Role: Maximum number of connections in the TypeORM pool per process. Lower this when using a session-mode pooler with a low `pool_size`.

Example: `DATABASE_POOL_MAX=1`

### DATABASE_USER

- Type: `string`
- Required: Yes

Role: Username for PostgreSQL authentication.

When to update:
- When using a different database user for security isolation
- When connecting to a managed database with specific credentials
### DATABASE_PASSWORD

- Type: `string`
- Required: Yes
- Security: SENSITIVE - Never commit to version control

Role: Password for PostgreSQL authentication.

When to update:
- When rotating database credentials
- When connecting to a different database environment

Note: The bundled local PostgreSQL uses `dealbot_password` by default. Only set this when using an external database or a non-default password.
### DATABASE_NAME

- Type: `string`
- Required: Yes
- Default: `filecoin_dealbot`

Role: Name of the PostgreSQL database to connect to.

When to update:
- When using a different database name for environment isolation (e.g., `postgres`, `dealbot_staging`, `dealbot_prod`)
### NETWORK

- Type: `string`
- Required: No
- Default: `calibration`
- Valid values: `mainnet`, `calibration`

Role: Determines which Filecoin network to interact with. This affects contract addresses, RPC endpoints, and token economics.

When to update:
- Set to `calibration` for testing with test FIL/USDFC tokens
- Set to `mainnet` for production deployments with real FIL/USDFC

Warning: `mainnet` uses real FIL/USDFC tokens. Ensure your wallet is funded and you understand the costs involved.
### RPC_URL

- Type: `string` (HTTP/HTTPS URL)
- Required: No
- Default: Uses the default public RPC for the configured network

Role: Custom Filecoin RPC endpoint URL. When set, all on-chain calls (Synapse SDK, viem) use this endpoint instead of the default public RPC. Use an authenticated endpoint to avoid rate limiting on shared public infrastructure.

Providers like Glif/Chain.Love support passing the API key as a query parameter:
`RPC_URL=https://filecoin.chain.love/rpc/v1?token=YOUR_API_KEY`

When to update:
- When Dealbot is hitting 429 rate limits on the default public RPC
- When switching RPC providers
- When rotating API keys

Security: Treat as a secret if the URL contains an API key.
### WALLET_ADDRESS

- Type: `string` (Ethereum-style address)
- Required: Yes
- Security: Public, but should match `WALLET_PRIVATE_KEY`

Role: The Ethereum/FEVM address used for signing transactions and paying for storage deals.

When to update:
- When switching to a different wallet
- When setting up a new Dealbot instance
- When rotating keys for security

Example: `WALLET_ADDRESS=0x1234567890abcdef1234567890abcdef12345678`

### WALLET_PRIVATE_KEY

- Type: `string`
- Required: Yes
- Security: HIGHLY SENSITIVE - Never commit to version control; use secrets management

Role: Private key for signing blockchain transactions. Required in direct key mode. Not needed when `SESSION_KEY_PRIVATE_KEY` is set (session key mode), since the session key handles all signing. If both are set, `SESSION_KEY_PRIVATE_KEY` takes precedence and `WALLET_PRIVATE_KEY` is ignored.

When to update:
- When rotating keys for security
- When setting up a new Dealbot instance
- When switching wallets

Security best practices:
- Use Kubernetes Secrets or a secrets manager (Vault, AWS Secrets Manager)
- Never log or expose this value
### SESSION_KEY_PRIVATE_KEY

- Type: `string` (0x-prefixed hex private key)
- Required: No
- Security: HIGHLY SENSITIVE - Treat like `WALLET_PRIVATE_KEY`

Role: When set, Dealbot uses session key authentication. The session key must be registered on the SessionKeyRegistry contract from the `WALLET_ADDRESS` (typically a Safe multisig). Storage operations (create dataset, add pieces) are signed with this key instead of `WALLET_PRIVATE_KEY`.

Session keys are scoped (only storage operations, not deposits or withdrawals) and time-limited (expiry set during registration). See runbooks/wallet-and-session-keys.md for the full setup process.

When to update:
- When rotating session keys
- When switching to session key mode from direct key mode
- When the session key has been compromised
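The precedence rule between the two key variables can be sketched as follows. This is an illustrative sketch only, not the actual Dealbot implementation; the function name and error message are assumptions.

```typescript
// Hypothetical sketch of the key-selection rule: SESSION_KEY_PRIVATE_KEY
// takes precedence, WALLET_PRIVATE_KEY is the fallback, and at least one
// of the two must be set.
function selectSigningKey(env: {
  SESSION_KEY_PRIVATE_KEY?: string;
  WALLET_PRIVATE_KEY?: string;
}): { key: string; mode: "session" | "direct" } {
  if (env.SESSION_KEY_PRIVATE_KEY) {
    // Session key mode: WALLET_PRIVATE_KEY is ignored even if set.
    return { key: env.SESSION_KEY_PRIVATE_KEY, mode: "session" };
  }
  if (env.WALLET_PRIVATE_KEY) {
    return { key: env.WALLET_PRIVATE_KEY, mode: "direct" };
  }
  throw new Error("Set WALLET_PRIVATE_KEY or SESSION_KEY_PRIVATE_KEY");
}
```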
### CHECK_DATASET_CREATION_FEES

- Type: `boolean`
- Required: No
- Default: `true`

Role: When enabled, validates that the wallet has sufficient balance to cover dataset creation fees plus 100 GiB of storage costs.

When to update:
- Set to `false` to skip including dataset creation fees in the storage-cost check
### USE_ONLY_APPROVED_PROVIDERS

- Type: `boolean`
- Required: No
- Default: `true`

Role: Restricts deal-making to storage providers approved by the Filecoin Warm Storage Service (FWSS).

When to update:
- Set to `false` to test with any available storage provider (providers that support the "PDP" product in ServiceProviderRegistry)
### PDP_SUBGRAPH_ENDPOINT

- Type: `string` (URL)
- Required: No
- Default: Empty string (feature disabled)

Role: The Graph API endpoint for querying PDP (Proof of Data Possession) subgraph data. This endpoint is used to retrieve data retention info for provider data.

When to update:
- When switching between different Graph API endpoints

Example: `PDP_SUBGRAPH_ENDPOINT=https://api.thegraph.com/subgraphs/filecoin/pdp`

### DEALBOT_DATASET_VERSION

- Type: `string`
- Required: No
- Default: Not set (no versioning)

Role: Creates versioning for datasets, allowing multiple dataset versions without changing wallet addresses. Useful for separating test data from production data or managing dataset migrations.

When to update:
- When you want to create a fresh set of datasets
- When separating environments (e.g., `dealbot-v1`, `dealbot-staging`)

Example scenario: Creating a new dataset version for testing:
`DEALBOT_DATASET_VERSION=dealbot-v2`

These variables control when and how often the Dealbot runs its automated jobs.

Note: Dealbot uses pg-boss for rate-based scheduling (see Jobs (pg-boss)).
### PROVIDERS_REFRESH_INTERVAL_SECONDS

- Type: `number`
- Required: No
- Default: `14400` (4 hours, recommended)

Role: How often the providers refresh job runs, in seconds.

When to update:
- Increase for less frequent providers refresh (reduces costs, slower testing)
- Decrease for more aggressive testing (higher costs, faster feedback)

Example scenario: Running providers refresh every 4 hours (default):
`PROVIDERS_REFRESH_INTERVAL_SECONDS=14400`

### DATA_RETENTION_POLL_INTERVAL_SECONDS

- Type: `number`
- Required: No
- Default: `3600` (1 hour)

Role: How often the data retention polling job runs, in seconds. This job checks and manages data retention stats of providers for stored datasets.

When to update:
- Increase for less frequent data retention checks
- Decrease for more frequent monitoring of data retention policies

Example scenario: Running data retention checks every 2 hours:
`DATA_RETENTION_POLL_INTERVAL_SECONDS=7200`

- Type: `number`
- Required: No
- Default: `0`

Role: Delay before the first deal creation job runs after startup.

When to update:
- Increase to allow other services to initialize first
- Keep at `0` for immediate deal creation on startup
- Type: `number`
- Required: No
- Default: `600` (10 minutes) / `300` (5 minutes in .env.example)

Role: Delay before the first retrieval test runs after startup. This offset prevents retrieval tests from running concurrently with deal creation.

When to update:
- Adjust to stagger job execution and prevent resource contention
- Increase if deal creation takes longer than expected
### DEALBOT_MAINTENANCE_WINDOWS_UTC

- Type: `string` (comma-separated HH:MM times in UTC)
- Required: No
- Default: `07:00,22:00`

Role: Daily maintenance windows (UTC) during which deal creation and retrieval checks are skipped.

Notes:
- Times must be in 24-hour `HH:MM` format.
- Applies to both cron and pg-boss modes.

Example: `DEALBOT_MAINTENANCE_WINDOWS_UTC=06:30,21:30`

### DEALBOT_MAINTENANCE_WINDOW_MINUTES

- Type: `number`
- Required: No
- Default: `20`
- Minimum: `20`
- Maximum: `360` (6 hours). With two daily windows, this keeps maintenance time ≤ runtime.

Role: Duration (minutes) of each maintenance window in `DEALBOT_MAINTENANCE_WINDOWS_UTC`.

Example: `DEALBOT_MAINTENANCE_WINDOW_MINUTES=30`

In this mode, scheduling is rate-based (per hour) and persisted in Postgres so restarts do not reset timing.
### DEALS_PER_SP_PER_HOUR

- Type: `number`
- Required: No
- Default: `4`

Role: Target deal creation rate per storage provider.

Limits: Config schema caps this at 20 to avoid excessive on-chain activity.

Notes: Fractional values are supported. For example, `0.25` means one deal every 4 hours per storage provider.
### RETRIEVALS_PER_SP_PER_HOUR

- Type: `number`
- Required: No
- Default: `2`

Role: Target retrieval test rate per storage provider.

Limits: Config schema caps this at 20 to avoid overloading providers.

Notes: Fractional values are supported. For example, `0.25` means one retrieval every 4 hours per storage provider.
### DATASET_CREATIONS_PER_SP_PER_HOUR

- Type: `number`
- Required: No
- Default: `1`

Role: Target dataset creation rate per storage provider.

Limits: Config schema caps this at 20 to avoid excessive dataset generation.

Notes: Fractional values are supported. For example, `0.5` means one dataset creation every 2 hours per storage provider.
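The fractional-rate semantics for these per-hour variables follow from a simple rate-to-interval conversion. A sketch (not the actual scheduler code):

```typescript
// Convert a jobs-per-hour rate into a per-SP scheduling interval in seconds.
// Fractional rates simply produce intervals longer than an hour.
function intervalSeconds(ratePerHour: number): number {
  if (ratePerHour <= 0) throw new Error("rate must be positive");
  return 3600 / ratePerHour;
}
```

For example, `intervalSeconds(0.5)` gives 7200 (one job every 2 hours) and `intervalSeconds(4)` gives 900 (one job every 15 minutes).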
### JOB_SCHEDULER_POLL_SECONDS

- Type: `number`
- Required: No
- Default: `300`

Role: How often the scheduler polls Postgres for due jobs.

Notes: Minimum is 60 seconds to avoid excessive polling; default is 300 seconds.
### JOB_WORKER_POLL_SECONDS

- Type: `number`
- Required: No
- Default: `60`

Role: How often pg-boss workers check for new jobs.

Notes: Minimum is 5 seconds. Lower values reduce job pickup latency but increase DB chatter.
### PG_BOSS_LOCAL_CONCURRENCY

- Type: `number`
- Required: No
- Default: `20`
- Minimum: `1`

Role: Per-instance pg-boss worker concurrency for the sp.work queue (`localConcurrency`). This is the total concurrency budget shared by deal and retrieval jobs.

When to update:
- Increase for faster throughput (more concurrent jobs; higher load)
- Decrease to reduce load or for more conservative testing

Example: `PG_BOSS_LOCAL_CONCURRENCY=20`

Sizing note: A rough estimate of the required concurrency is `(providers * jobs_per_hour_per_provider * avg_duration_seconds) / 3600`. Use the p95 duration for a more conservative estimate.
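The sizing estimate can be expressed directly. A worked sketch, with illustrative input values:

```typescript
// Estimate the worker concurrency needed to keep up with the configured
// per-SP job rates, per the formula in the sizing note above.
function estimateConcurrency(
  providers: number,
  jobsPerHourPerProvider: number,
  avgDurationSeconds: number,
): number {
  return Math.ceil(
    (providers * jobsPerHourPerProvider * avgDurationSeconds) / 3600,
  );
}

// Illustrative inputs: 50 providers, 6 jobs/hour each (deals + retrievals),
// 120 s average job duration -> 10 concurrent workers needed.
const needed = estimateConcurrency(50, 6, 120);
```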
### DEALBOT_PGBOSS_SCHEDULER_ENABLED

- Type: `boolean`
- Required: No
- Default: `true`

Role: Enables/disables the pg-boss scheduler loop that enqueues due jobs. Set to `false` for worker-only pods that should only process existing jobs.

Example: `DEALBOT_PGBOSS_SCHEDULER_ENABLED=false`

### DEALBOT_PGBOSS_POOL_MAX

- Type: `number`
- Required: No
- Default: `1`

Role: Maximum number of pg-boss connections per instance. Lower this when running through a session-mode pooler (e.g. Supabase) to avoid exceeding the pooler's `pool_size`.

Example: `DEALBOT_PGBOSS_POOL_MAX=2`

### JOB_CATCHUP_MAX_ENQUEUE

- Type: `number`
- Required: No
- Default: `10`

Role: Maximum number of jobs to enqueue per schedule row per poll. Any remaining backlog is handled by future polls.
### JOB_SCHEDULE_PHASE_SECONDS

- Type: `number`
- Required: No
- Default: `0`

Role: Per-instance schedule phase offset (seconds) applied when initializing schedules. Use this to stagger multiple Dealbot deployments that are not sharing a database.
### JOB_ENQUEUE_JITTER_SECONDS

- Type: `number`
- Required: No
- Default: `0`

Role: Random delay (seconds) applied when enqueuing jobs to avoid synchronized bursts.
### DEAL_JOB_TIMEOUT_SECONDS

- Type: `number`
- Required: No
- Default: `360` (6 minutes)
- Minimum: `120` (2 minutes)
- Enforced: Yes (config validation)

Role: Maximum runtime for data storage jobs before forced abort. When a deal job exceeds this timeout, it is actively cancelled using AbortController.

When to update:
- Increase if deal uploads consistently take longer than the default (e.g., slower networks, IPNI delays)
- Decrease if you want to fail fast on stuck jobs

Note: This is independent of HTTP-level timeouts. The job timeout enforces the end-to-end execution time of a Data Storage Check job, including all operations (provider lookup, upload, IPNI verification, etc.).
### RETRIEVAL_JOB_TIMEOUT_SECONDS

- Type: `number`
- Required: No
- Default: `60` (1 minute)
- Minimum: `60`
- Enforced: Yes (config validation)

Role: Maximum runtime for retrieval test jobs before forced abort. When a retrieval job exceeds this timeout, it is actively cancelled using AbortController.

When to update:
- Increase if retrieval tests consistently take longer than the default
- Decrease to detect and fail stuck retrievals faster

Note: This is independent of HTTP-level timeouts. The job timeout enforces the end-to-end execution time of a Retrieval Check job.
### IPFS_BLOCK_FETCH_CONCURRENCY

- Type: `number`
- Required: No
- Default: `6`
- Minimum: `1`
- Enforced: Yes (config validation)

Role: Maximum number of parallel block fetches when validating IPFS retrievals via DAG traversal.

When to update:
- Increase to speed up validation on fast networks and responsive gateways
- Decrease to reduce pressure on slower storage providers or constrained environments

Note: This affects the number of concurrent `/ipfs/<cid>` requests per retrieval.
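One way such a concurrency cap can work is batching, sketched below. The real validator may use a streaming limiter instead; `fetchBlock` here is a hypothetical stand-in for the actual block fetcher.

```typescript
// Sketch: fetch blocks in batches of at most `concurrency`, so no more than
// that many /ipfs/<cid> requests are in flight at once.
async function fetchBlocksInBatches<T>(
  cids: string[],
  concurrency: number,
  fetchBlock: (cid: string) => Promise<T>,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < cids.length; i += concurrency) {
    const batch = cids.slice(i, i + concurrency);
    // Promise.all preserves input order, so results stay aligned with cids.
    results.push(...(await Promise.all(batch.map(fetchBlock))));
  }
  return results;
}
```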
These variables control the automatic cleanup of old pieces from storage providers to prevent unbounded data growth. Cleanup runs as a periodic pg-boss job per SP.
The cleanup flow checks live provider data first (via filecoin-pin's calculateActualStorage()) to determine how much data an SP is storing. When usage exceeds the high-water mark (MAX_DATASET_STORAGE_SIZE_BYTES), the cleanup job deletes the oldest pieces until usage drops below the low-water mark (TARGET_DATASET_STORAGE_SIZE_BYTES). This high-water/low-water approach prevents thrashing near the threshold.
If the live query fails, cleanup falls back to DB-based SUM(piece_size) for the quota decision. Deal creation continues regardless of cleanup state.
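The high-water/low-water selection described above can be sketched as follows. The data shapes are assumptions for illustration, not the actual cleanup job code; pieces are assumed to arrive sorted oldest-first.

```typescript
interface Piece {
  id: string;
  sizeBytes: number;
}

// Decide which pieces to delete: nothing while usage is at or below the
// high-water mark; otherwise delete oldest-first until projected usage
// drops below the low-water mark.
function piecesToDelete(
  usageBytes: number,
  maxBytes: number,    // high-water mark (MAX_DATASET_STORAGE_SIZE_BYTES)
  targetBytes: number, // low-water mark (TARGET_DATASET_STORAGE_SIZE_BYTES)
  oldestFirst: Piece[],
): Piece[] {
  if (usageBytes <= maxBytes) return []; // under quota: nothing to do
  const doomed: Piece[] = [];
  let remaining = usageBytes;
  for (const piece of oldestFirst) {
    if (remaining <= targetBytes) break; // reached the low-water mark
    doomed.push(piece);
    remaining -= piece.sizeBytes;
  }
  return doomed;
}
```

The gap between the two marks is what prevents thrashing: once usage falls to the target, growth has to cover the whole MAX − TARGET headroom before cleanup triggers again.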
### MAX_DATASET_STORAGE_SIZE_BYTES

- Type: `number` (integer, bytes)
- Required: No
- Default: `25769803776` (24 GiB)
- Minimum: `1`

Role: High-water mark. Maximum total stored data per SP (in bytes) before cleanup kicks in. When live storage for a provider exceeds this value, the cleanup job triggers and deletes the oldest pieces until usage drops below `TARGET_DATASET_STORAGE_SIZE_BYTES` (the low-water mark).

When to update:
- Increase for a longer runway before cleanup kicks in (e.g. months vs. weeks)
- Decrease if SP storage is constrained or costs are a concern

Example: `MAX_DATASET_STORAGE_SIZE_BYTES=12884901888` (12 GiB per SP)

### TARGET_DATASET_STORAGE_SIZE_BYTES

- Type: `number` (integer, bytes)
- Required: No
- Default: `21474836480` (20 GiB)
- Minimum: `1`

Role: Low-water mark. When cleanup triggers (live usage exceeds `MAX_DATASET_STORAGE_SIZE_BYTES`), pieces are deleted until usage drops below this target. The gap between MAX and TARGET creates headroom so cleanup doesn't re-trigger immediately.

Headroom math: At 4 deals/SP/hour × 10 MiB, data grows by roughly 960 MiB/day. With 4 GiB of headroom (24 GiB MAX − 20 GiB TARGET), each cleanup run provides about 4 days of breathing room, which aligns with the daily default cadence.

When to update:
- Decrease for more aggressive cleanup (a larger gap means more headroom)
- Increase toward MAX for minimal cleanup (a smaller gap means less headroom)
- Must be less than `MAX_DATASET_STORAGE_SIZE_BYTES` for cleanup to take effect

Example: `TARGET_DATASET_STORAGE_SIZE_BYTES=16106127360` (15 GiB per SP, 9 GiB headroom)

### JOB_PIECE_CLEANUP_PER_SP_PER_HOUR

- Type: `number`
- Required: No
- Default: `0.0417` (~1/24, approximately once per day)
- Minimum: `0.001`
- Maximum: `20`

Role: Target number of piece cleanup runs per storage provider per hour. Controls how frequently the cleanup job runs for each SP. The rate is converted to an interval internally (e.g. 1/hr = every 3600 s; 1/24 per hour ≈ every 86400 s = once per day).

Only used when `DEALBOT_JOBS_MODE=pgboss`.

When to update:
- Increase to run cleanup more frequently when SPs are frequently over quota
- Decrease to reduce scheduling overhead

Examples:
- Once per hour (more aggressive): `JOB_PIECE_CLEANUP_PER_SP_PER_HOUR=1`
- Once per week (very conservative): `JOB_PIECE_CLEANUP_PER_SP_PER_HOUR=0.006`

### MAX_PIECE_CLEANUP_RUNTIME_SECONDS

- Type: `number`
- Required: No
- Default: `300` (5 minutes)
- Minimum: `60`

Role: Maximum runtime for a cleanup job before forced abort via AbortController. Prevents stuck cleanup jobs from blocking the SP work queue.

Only used when `DEALBOT_JOBS_MODE=pgboss`.

When to update:
- Increase if piece deletion calls to the Synapse SDK are known to be slow
- Decrease for faster abort detection on stuck jobs
### DEALBOT_LOCAL_DATASETS_PATH

- Type: `string` (file path)
- Required: No
- Default: `./datasets`

Role: Directory path where randomly generated dataset files are stored.

When to update:
- When using a different storage location
### RANDOM_PIECE_SIZES

- Type: `string` (comma-separated numbers in bytes)
- Required: No
- Default: `10485760` (10 MiB)

Role: Sizes of randomly generated content used for data-storage checks, in bytes (original content size before CAR conversion).

Note: For IPNI-enabled deals, the original content size is stored in deal metadata (`metadata.ipfs_pin.originalSize`), while `deals.file_size` stores the CAR size (bytes uploaded).

When to update:
- Add smaller sizes for quick tests
- Add larger sizes for stress testing
- Adjust based on storage provider capabilities

Example scenario: Testing with smaller files only:
`RANDOM_PIECE_SIZES=1024,10240,102400`

### CONNECT_TIMEOUT_MS

- Type: `number` (milliseconds)
- Required: No
- Default: `10000` (10 seconds)
- Minimum: `1000`

Role: Maximum time to wait for establishing a connection and receiving initial response headers.

When to update:
- Increase for slow networks or distant servers
- Decrease for faster failure detection
### HTTP_REQUEST_TIMEOUT_MS

- Type: `number` (milliseconds)
- Required: No
- Default: `240000` (4 minutes)
- Minimum: `1000`

Role: Maximum total time for HTTP/1.1 requests, including body transfer.

When to update:
- Increase for large file retrievals
- Decrease to fail faster on slow providers
### HTTP2_REQUEST_TIMEOUT_MS

- Type: `number` (milliseconds)
- Required: No
- Default: `240000` (4 minutes)
- Minimum: `1000`

Role: Maximum total time for HTTP/2 requests, including body transfer.

When to update:
- Typically kept in sync with `HTTP_REQUEST_TIMEOUT_MS`
### IPNI_VERIFICATION_TIMEOUT_MS

- Type: `number` (milliseconds)
- Required: No
- Default: `60000` (60 seconds)
- Minimum: `1000`

Role: Maximum time to wait for IPNI verification to confirm the provider for a root CID. Used by both data-storage and retrieval checks.

When to update:
- Increase if IPNI propagation is slow
- Decrease to fail faster on unresponsive indexers
### IPNI_VERIFICATION_POLLING_MS

- Type: `number` (milliseconds)
- Required: No
- Default: `2000` (2 seconds)
- Minimum: `250`

Role: Polling interval for IPNI verification. Used by both data-storage and retrieval checks.

When to update:
- Increase to reduce IPNI query load
- Decrease to detect results faster
Both variables are optional and default to an empty list (no providers blocked). Values are comma-separated lists of provider IDs or addresses. Addresses are matched case-insensitively.
A blocked provider is excluded from all scheduled check types: data-storage, retrieval, and data-retention. Blocking applies to scheduled automation only — manual/dev-triggered checks (via dev-tools endpoints) are not affected.
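The parsing and matching semantics above can be sketched as follows. The data shapes are assumptions for illustration, not the actual Dealbot code: comma-separated lists, numeric IDs matched exactly, addresses matched case-insensitively.

```typescript
// Split a comma-separated env value into trimmed, non-empty entries.
function parseList(raw: string): string[] {
  return raw.split(",").map((s) => s.trim()).filter((s) => s.length > 0);
}

// Check a provider against both blocklists. IDs match exactly;
// addresses match case-insensitively.
function isBlocked(
  provider: { id: number; address: string },
  blockedIds: string,       // BLOCKED_SP_IDS
  blockedAddresses: string, // BLOCKED_SP_ADDRESSES
): boolean {
  const ids = new Set(parseList(blockedIds));
  const addrs = new Set(parseList(blockedAddresses).map((a) => a.toLowerCase()));
  return ids.has(String(provider.id)) || addrs.has(provider.address.toLowerCase());
}
```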
### BLOCKED_SP_IDS

- Type: `string` (comma-separated provider IDs)
- Required: No
- Default: `""` (empty; no providers blocked)

Role: Global blocklist by provider numeric ID. Providers listed here are excluded from all scheduled check types (data-storage, retrieval, and data-retention).

Example: `BLOCKED_SP_IDS=1234,5678`
### BLOCKED_SP_ADDRESSES

- Type: `string` (comma-separated provider Ethereum addresses)
- Required: No
- Default: `""` (empty; no providers blocked)

Role: Global blocklist by provider address. Providers listed here are excluded from all scheduled check types (data-storage, retrieval, and data-retention). Matching is case-insensitive.

Example: `BLOCKED_SP_ADDRESSES=0xAbCd...,0x1234...`
### PROMETHEUS_WALLET_BALANCE_TTL_SECONDS

- Type: `number` (seconds)
- Required: No
- Default: `3600` (1 hour)

Role: Cache time-to-live for wallet balance collection. Wallet balances are cached and only refreshed when this TTL expires, even when Prometheus scrapes the /metrics endpoint.

When to update:
- Increase to reduce blockchain RPC calls (slower balance updates, lower load)
- Decrease for more frequent balance updates (higher RPC load, faster visibility)

Example scenario: Increasing the cache TTL to 2 hours:
`PROMETHEUS_WALLET_BALANCE_TTL_SECONDS=7200`

### PROMETHEUS_WALLET_BALANCE_ERROR_COOLDOWN_SECONDS

- Type: `number` (seconds)
- Required: No
- Default: `60` (1 minute)

Role: Cooldown period after a failed wallet balance fetch before retrying. After an error, the cache is considered expired, but a new fetch will only be attempted after this cooldown.

When to update:
- Increase to reduce retry pressure on failing RPC endpoints
- Decrease to recover from transient errors faster

Example scenario: Increasing the cooldown to 5 minutes:
`PROMETHEUS_WALLET_BALANCE_ERROR_COOLDOWN_SECONDS=300`

### VITE_API_BASE_URL

- Type: `string` (URL)
- Required: No
- Default: `http://localhost:8080`
- Location: `apps/web/.env` (dev) and container runtime (production)

Role: Base URL for the backend API, used by the Vite development server to proxy API requests. In production containers, this is read from runtime-config.js, which is generated at container startup from environment variables.

Runtime wiring (Docker/K8s):
- The container entrypoint writes `/srv/runtime-config.js` from `DEALBOT_API_BASE_URL`
- Fallback: `VITE_API_BASE_URL` if `DEALBOT_API_BASE_URL` is not set

When to update:
- When `DEALBOT_PORT` is changed in the backend
- When the backend is running on a different host

Example scenario: Backend running on a different port:
`VITE_API_BASE_URL=http://localhost:9000`

### VITE_PLAUSIBLE_DATA_DOMAIN

- Type: `string` (domain)
- Required: No
- Default: Empty (Plausible disabled)
- Location: `apps/web/.env` (dev) and container runtime (production)

Role: Enables Plausible analytics for the web frontend when set. The value should match the Plausible site domain you want to attribute events to (e.g., dealbot.filoz.org or staging.dealbot.filoz.org).

Runtime wiring (Docker/K8s):
- The container entrypoint writes `/srv/runtime-config.js` from `VITE_PLAUSIBLE_DATA_DOMAIN`

When to update:
- Set to the production domain for production deployments
- Set to the staging domain for staging deployments
- Leave empty to disable analytics (local development or privacy-sensitive environments)

Example: `VITE_PLAUSIBLE_DATA_DOMAIN=dealbot.filoz.org`

Docker run example:

```
docker run \
  -e DEALBOT_API_BASE_URL=http://dealbot-api:3130 \
  -e VITE_PLAUSIBLE_DATA_DOMAIN=dealbot.filoz.org \
  -p 8080:80 \
  dealbot-web:latest
```

### DEALBOT_API_BASE_URL

- Type: `string` (URL)
- Required: No
- Default: Empty (uses relative URLs)
- Location: Container runtime env (production)

Role: Runtime override for the web frontend API base URL. Used to populate `/srv/runtime-config.js` on container startup.

When to update:
- Set in production to point the frontend at your backend service
- Leave empty to use relative `/api` paths
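The entrypoint's runtime-config generation can be sketched as below. This is an assumption-heavy illustration: the global name `window.__RUNTIME_CONFIG__` and the property names are hypothetical, and the real entrypoint script may differ.

```typescript
// Sketch: render the contents of /srv/runtime-config.js from environment
// variables, applying the documented fallback order.
function renderRuntimeConfig(env: {
  DEALBOT_API_BASE_URL?: string;
  VITE_API_BASE_URL?: string;
  VITE_PLAUSIBLE_DATA_DOMAIN?: string;
}): string {
  // DEALBOT_API_BASE_URL wins; VITE_API_BASE_URL is the fallback; an empty
  // value means the frontend falls back to relative /api paths.
  const apiBaseUrl = env.DEALBOT_API_BASE_URL ?? env.VITE_API_BASE_URL ?? "";
  const plausibleDomain = env.VITE_PLAUSIBLE_DATA_DOMAIN ?? "";
  return `window.__RUNTIME_CONFIG__ = ${JSON.stringify({ apiBaseUrl, plausibleDomain })};`;
}
```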
| File | Purpose |
|---|---|
| `.env.example` (root) | Kubernetes secrets template (wallet credentials only) |
| `apps/backend/.env.example` | Full backend configuration template |
| `apps/web/.env.example` | Frontend configuration template |
For local Kubernetes development, see DEVELOPMENT.md for setup instructions.