This document records the current automated test coverage, the manual smoke tests used during development, and the expected verification flow after future changes.
Current backend tests live in:
- backend/tests/test_chat_service.py
- backend/tests/test_comparison_engine.py
- backend/tests/test_dataset_service.py
- backend/tests/test_data_manager.py
- backend/tests/test_dependency_service.py
- backend/tests/test_permission_manager.py
- backend/tests/test_run_service.py
These tests cover:
- column mapping and data normalization
- dataset preview and save behavior
- comparison mismatch logic
- permission grant and access checks
- dependency status reporting
- replay and live run behavior
- chat fallback and sanitization behavior
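The mismatch-detection behavior can be illustrated with a small standalone sketch; the `find_mismatches` helper below is hypothetical and is not the project's actual comparison-engine API:

```python
# Hypothetical sketch of a series-mismatch check, in the spirit of
# test_comparison_engine.py; the real engine's API may differ.

def find_mismatches(python_series, pine_series, tolerance=1e-6):
    """Return indices where the two series differ beyond the tolerance."""
    mismatches = []
    for i, (a, b) in enumerate(zip(python_series, pine_series)):
        if a is None or b is None:
            if a is not b:  # a value missing on only one side counts as a mismatch
                mismatches.append(i)
        elif abs(a - b) > tolerance:
            mismatches.append(i)
    return mismatches

# Example: index 2 differs by more than the tolerance
print(find_mismatches([1.0, 2.0, 3.0], [1.0, 2.0, 3.5]))  # [2]
```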
Run the backend test suite:

```powershell
cd "D:\python , pine script\backend"
.\run-tests.ps1
```

Build the frontend:

```powershell
cd "D:\python , pine script\frontend"
npm run build
```

Runs all 6 built-in indicator Pine scripts through the PineTS engine and writes
docs/builtin-parity-pine.json.

```powershell
cd "D:\python , pine script\frontend"
npm run test:parity
```

Prerequisites: the backend and frontend dev servers do not need to be running.
Requires a dataset CSV (uses %APPDATA%\TradingStrategyComparator\cache\datasets\index.json
if available, falls back to data/cache/datasets/dataset-demo-5m.csv).
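The resolution order described above can be sketched as follows; the helper name and the assumed shape of index.json (a mapping of dataset ids to entries with a "path" field) are illustrative assumptions, not the real script's implementation:

```python
import json
import os

def resolve_dataset_csv(appdata_dir,
                        repo_fallback="data/cache/datasets/dataset-demo-5m.csv"):
    """Prefer a dataset listed in the APPDATA cache index; else the repo CSV."""
    index_path = os.path.join(appdata_dir, "TradingStrategyComparator",
                              "cache", "datasets", "index.json")
    if os.path.exists(index_path):
        with open(index_path, "r", encoding="utf-8") as f:
            index = json.load(f)
        # Assumption: index maps dataset ids to entries carrying a "path" field.
        for entry in index.values():
            if "path" in entry:
                return entry["path"]
    return repo_fallback
```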
Certifies all 6 built-in indicators via the Python execution engine against the first non-demo saved dataset. Fails if only the demo dataset is available.
```powershell
cd "D:\python , pine script"
C:\Users\sakth\Desktop\vayu\.venv\Scripts\python.exe scripts\certify_builtins.py --strict
```

Runs Pine certification first, then combines Pine results with Python results to
produce a unified docs/builtin-parity-report.json and docs/builtin-parity-summary.md
with per-indicator python_status, pine_status, and parity_status.

```powershell
cd "D:\python , pine script\frontend"
npm run test:parity

cd "D:\python , pine script"
C:\Users\sakth\Desktop\vayu\.venv\Scripts\python.exe scripts\certify_builtins.py --include-pine
```

Prerequisites — both servers must be running before executing:
```powershell
# Terminal 1 — backend
cd "D:\python , pine script\backend"
PYTHONPATH="D:\python , pine script" python -m uvicorn app.main:app --port 8000

# Terminal 2 — frontend dev server
cd "D:\python , pine script\frontend"
npm run dev

# Terminal 3 — run smoke tests
cd "D:\python , pine script\frontend"
npm run test:smoke
```

One-time browser install (run once after npm install):

```powershell
cd "D:\python , pine script\frontend"
npx playwright install chromium
```

Smoke test coverage: all 6 routes render, Workspace demo banner + canvases present, Imports dataset list, Runs seeded entry, Library load into Workspace, Settings model list.
- Start the backend with `.\run-backend.ps1`
- Open `http://127.0.0.1:8000/health`
- Confirm `status = ok`
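The health check can also be scripted. A minimal standard-library sketch; the expected `{ status: "ok", frontend_ready: true }` payload shape is taken from the release checklist below:

```python
import json
from urllib.request import urlopen

def health_ok(payload_text):
    """Return True if a /health response body reports status ok."""
    payload = json.loads(payload_text)
    return payload.get("status") == "ok"

# Against a running backend (uncomment when the server is up):
# with urlopen("http://127.0.0.1:8000/health") as resp:
#     assert health_ok(resp.read().decode("utf-8"))

print(health_ok('{"status": "ok", "frontend_ready": true}'))  # True
```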
- Start the frontend with `npm run dev -- --host 127.0.0.1 --port 5173`
- Open `/imports`
- Confirm the page loads without a blank screen
- Use `C:\Users\sakth\Downloads\SBIN_5.xlsx`
- Click `Preview source`
- Confirm:
  - sheet `Sheet1`
  - `18850` rows
  - columns `t, o, h, l, c, v, dt`
  - inferred mapping `dt/o/h/l/c/v`
- Click `Save mapping`
- Confirm the new dataset appears in the saved datasets list
- Confirm the app navigates to `/workspace`
- Click `Run replay`
- Confirm candles appear on the Python chart
- Confirm a run appears in `/runs`
- Confirm mismatch analysis is populated if Pine data is attached
- Click `Start live run`
- Confirm the run lifecycle becomes `live`
- Confirm progress increases over time
- Confirm `/runs/{run_id}/stream` emits updates
- Open `/settings`
- Paste a valid bridge JSON payload
- Click `Save bridge artifact`
- Return to `/workspace`
- Select the saved artifact
- Re-run replay and confirm Pine series are available
- Open `/settings`
- Confirm local models appear
- Confirm `nomic-embed-text:latest` is labeled non-chat
- Open `/workspace`
- Click `Ask LLM`
- Confirm a clean human-readable response or a structured fallback message appears
Run this checklist after every installer build before distributing.
- Delete `%APPDATA%\TradingStrategyComparator` entirely
- Run `dist-installer\Trading Strategy Comparator Setup 1.0.0.exe` and install
- Launch from the Desktop or Start Menu shortcut
- Confirm the loading screen appears and clears within 10 seconds
- Confirm the app opens directly on the Workspace tab (not Imports)
- Confirm the green "Showing bundled demo data" banner is visible
- Confirm the Pine pane shows candles — indicators should appear within a few seconds as Pine auto-runs
- Confirm the Python pane shows candles and EMA overlays immediately (from the seeded run)
- Open Runs tab — `run-demo-ema` should be listed with `completed` status
- Open Imports tab — `Demo Dataset (EMA 5m · 300 bars)` should be listed
- Close the app and relaunch (without deleting APPDATA)
- Confirm the app opens on Workspace and the same demo data is shown
- Confirm no duplicate demo entries appear in Imports or Runs
- Open Imports and load a real Excel/CSV workbook (e.g. SBIN_5.xlsx)
- Click Preview source — confirm row count and inferred mapping
- Click Save mapping — confirm the new dataset appears and the app navigates to Workspace
- Confirm the demo banner disappears and the new dataset candles replace the demo candles
- Open Library tab
- Click Load to Workspace on any built-in (e.g. RSI)
- Confirm the Pine and Python editors update
- Click Run Pine — confirm Pine pane renders RSI indicator
- Click Run Python — confirm Python pane renders RSI values
- Uninstall via Add/Remove Programs
- Confirm the shortcut is removed
- Confirm `%APPDATA%\TradingStrategyComparator` is preserved (user data survives uninstall)
Run each item before tagging a release or distributing an installer to others.
[ ] npm run build passes without TypeScript errors
[ ] build-installer.ps1 completes all 3 stages with exit code 0
[ ] Installer file is present: dist-installer/Trading Strategy Comparator Setup 1.0.0.exe
[ ] Clean first-run smoke test: APPDATA wiped, app launches, demo data visible
[ ] GET /health returns { status: "ok", frontend_ready: true }
[ ] GET /data-sources contains dataset-demo-5m with row_count 300
[ ] GET /runs contains run-demo-ema with 300 candles and python_series ema_fast + ema_slow
[ ] Python pane shows chart immediately without user input
[ ] Pine pane shows chart after auto-run (< 10 s)
[ ] npm run test:parity exits 0 (6/6 Pine built-ins pass)
[ ] scripts/certify_builtins.py exits 0 (6/6 Python built-ins pass)
[ ] scripts/certify_builtins.py --include-pine exits 0 (all parity_status = pass)
[ ] No "JavaScript error in the main process" popup on launch or close
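The seeded-data items in this checklist can also be verified programmatically. A minimal sketch, assuming response payload shapes implied by the checklist (the real API schema may differ):

```python
# Sketch of programmatic checks for the seeded demo dataset and demo run;
# the field names ("id", "row_count", "python_series") are assumptions
# derived from the checklist items above, not the real API schema.

def demo_dataset_seeded(data_sources):
    """True if /data-sources contains dataset-demo-5m with 300 rows."""
    return any(
        d.get("id") == "dataset-demo-5m" and d.get("row_count") == 300
        for d in data_sources
    )

def demo_run_seeded(runs):
    """True if /runs contains run-demo-ema with both EMA series."""
    for r in runs:
        if r.get("id") == "run-demo-ema":
            series = r.get("python_series", [])
            return "ema_fast" in series and "ema_slow" in series
    return False

print(demo_dataset_seeded([{"id": "dataset-demo-5m", "row_count": 300}]))  # True
```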
Repeatable certification validates all 6 built-in indicators via both the Python execution engine and the PineTS engine. Results are written to:
- `docs/builtin-parity-pine.json` — Pine results (written by `npm run test:parity`)
- `docs/builtin-parity-report.json` — combined Python + Pine results
- `docs/builtin-parity-summary.md` — human-readable summary table
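A minimal sketch of a programmatic pass/fail gate over the combined report; the exact JSON layout assumed here (a top-level "indicators" list whose entries carry a "parity_status" field) is an illustration, not the report's documented schema:

```python
import json

def all_parity_pass(report_text):
    """True only if every indicator in the report has parity_status == "pass"."""
    report = json.loads(report_text)
    indicators = report.get("indicators", [])
    # An empty indicator list is treated as a failure, not a pass.
    return bool(indicators) and all(
        i.get("parity_status") == "pass" for i in indicators
    )

sample = '{"indicators": [{"name": "RSI", "parity_status": "pass"}]}'
print(all_parity_pass(sample))  # True
```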
| Flag | Effect |
|---|---|
| (none) | Python-only certification, demo dataset allowed |
| `--strict` | Fails if only demo dataset available (canonical dataset required) |
| `--include-pine` | Merges `docs/builtin-parity-pine.json` into report for Pine + parity status |
| Indicator | Python status | Pine status | Series produced (Python) |
|---|---|---|---|
| EMA Crossover | ✅ pass | ✅ pass | ema_fast, ema_slow, long_condition |
| RSI | ✅ pass | ✅ pass | rsi, long_condition, short_condition |
| MACD | ✅ pass | ✅ pass | macd_line, signal_line, histogram, long_condition |
| Super Trend | ✅ pass | ✅ pass | supertrend, direction, long_condition |
| Bollinger Bands | ✅ pass | ✅ pass | bb_middle, bb_upper, bb_lower, long_condition |
| VWAP 3-Band | ✅ pass | ✅ pass | vwap, vwap_upper1–3, vwap_lower1–3, long_condition |
Parity is also validated visually via the Alignment tab in the running app.
Last explicitly verified during development:
- backend health endpoint reachable
- frontend routes render
- SBIN Excel preview works
- dataset save works
- replay and live run creation work
- bridge artifact upload works
- chat model list works
- app chat can return a clean Ollama response
- no automated coverage for sub-pane indicator rendering (PineTS Vitest tests cover series output, not canvas rendering)
- no real provider-backed live mode test because provider ingestion is not implemented yet
- Playwright smoke tests require both servers running manually (no CI entrypoint yet)
After touching imports or data code:
- preview SBIN workbook
- save dataset
- run replay once
After touching Python execution or comparison code:
- run backend tests
- run replay with default Python strategy
- inspect mismatch panel
After touching bridge code:
- save a bridge artifact
- attach it in `/workspace`
- run replay again
After touching chat code:
- refresh model list
- send one real Ollama prompt
- send one missing-model prompt and confirm fallback behavior
After touching frontend routing or layout:
- run `npm run test:smoke` (requires both servers running) — covers all 6 routes
- confirm no blank page and no obvious layout breakage
After touching built-in indicator code or PineTS integration:
- run `npm run test:parity` — 6/6 Pine indicators must pass
- run `scripts/certify_builtins.py --include-pine` — Python + parity must all pass