Data Explorer is a high-fidelity, high-performance database management and visualization IDE. It provides a unified, intelligent interface for developers and data engineers to explore, query, and visualize databases across multiple engines, all supercharged by a context-aware AI.
- Context-Aware SQL Generation: Describe complex data needs in natural language and the AI generates SQL based on your live schema and foreign key relationships.
- Vision Integration: Upload screenshots of DB diagrams or whiteboards for AI-assisted schema reconstruction and query help.
- Global Assistant Panel: A resizable, toggleable sidebar available across major modules with SSE-based streaming responses.
- Provider-Aware Routing: Gemini remains the premium lane, while Cerebras and OpenRouter can be configured as lower-cost or fallback lanes.
- Intelligent Model Fallback: The AI layer can iterate through providers and models when a requested lane fails, helping keep generation more resilient.
- Surgical Precision Autocomplete: Inline AI suggestions prioritize exact SQL syntax completion without unnecessary explanations.
- Chain Tabbing: Accept multi-line AI suggestions incrementally with the `Tab` key for a fast editing flow.
- Chat Persistence: Conversation history is stored in app state for continuity across sessions.
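The assistant panel's SSE-based streaming boils down to reading `data:` lines off a text stream chunk by chunk. A minimal sketch of the parsing step, assuming a plain-text payload (the app's real wire format may well be JSON):

```typescript
// Extract the `data:` payload lines from one Server-Sent Events chunk.
// The "[DONE]" sentinel is a common convention, assumed here for illustration.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim())
    .filter((data) => data.length > 0 && data !== "[DONE]");
}
```

In the browser, chunks like these would typically arrive via `fetch` with a `ReadableStream` reader or an `EventSource`, and each parsed payload would be appended to the visible chat message.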
- Unified Engine Architecture: Connect to multiple engines through a strategy-based backend:
- PostgreSQL (Neon, Supabase, RDS, Local)
- MySQL (PlanetScale, Local, TiDB)
- SQL Server (Azure SQL, Local MSSQL)
- MongoDB and MongoDB Atlas (SRV)
- Enterprise-Grade Credential Protection: Saved connection passwords are encrypted using AES-256-GCM before persistence.
- Connection Safety Controls: Each saved connection can now be configured as read-only and can independently allow or block query execution, schema changes, and import/export flows.
- Connection Health Visibility: Saved connections track their last health-check result, latency, and last successful connection timestamp so operators can spot broken credentials or degraded endpoints faster.
- Centralized Persistence: Connections are stored in the app database so they survive client refreshes and redeployments.
- Real-time Discovery: Fast schema inspection for tables, columns, views, indexes, and metadata.
- Cross-DB Browsing: Jump between databases and schemas in a single workspace.
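The AES-256-GCM credential encryption described above can be sketched with Node's built-in `crypto` module. This is an illustration of the algorithm only; the app's actual payload layout, encoding, and key handling are not documented here, so the `iv:authTag:ciphertext` hex format below is an assumption:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt a password with AES-256-GCM. The key must be exactly 32 bytes.
// Payload layout (iv:authTag:ciphertext, hex-encoded) is a hypothetical choice.
function encryptPassword(plain: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit IV, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const enc = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), enc].map((b) => b.toString("hex")).join(":");
}

function decryptPassword(payload: string, key: Buffer): string {
  const [iv, tag, enc] = payload.split(":").map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM is authenticated: tampering or a wrong key throws
  return Buffer.concat([decipher.update(enc), decipher.final()]).toString("utf8");
}
```

Because GCM authenticates the ciphertext, decrypting with the wrong key fails loudly instead of silently returning garbage, which is what makes the `LEGACY_ENCRYPTION_KEYS` fallback described later in this document practical.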
- Intelligent Auto-Layout: Dynamic graph generation using React Flow with automatic node positioning.
- Schema Exploration: Visualize table structures and relationships in an ERD-style workspace.
- Visual Intelligence: Primary and foreign keys are highlighted for fast structural understanding.
- Heuristic Dashboard: Surface connection health, storage distribution, and table usage metrics.
- Visual Intelligence: Interactive charts powered by Recharts for operational visibility.
- Table Distribution: Heatmaps and row-count style analytics to spot bottlenecks quickly.
- Monaco Engine: VS Code-grade editing experience with advanced syntax highlighting and multi-cursor support.
- Schema-Aware IntelliSense: Deep autocompletion that understands your tables, columns, and custom types.
- Smart Safeguards and Optimization: Automatic limit injection, pagination support, and per-engine query guards help prevent accidental OOM scenarios.
- Backend-Enforced Guardrails: Read-only and restricted connections are enforced server-side, so blocked updates, deletes, DDL, and imports cannot be bypassed by skipping the UI.
- Keyboard Shortcuts: Use `Ctrl+Enter` to run, `Ctrl+S` to save, and `Ctrl+H` to open history.
- Execution Analytics: Query plans and execution timing are surfaced in the workspace.
- Tabbed Results: Separate panes for data, messages, and explain output.
- Export Capabilities: Export result data as CSV, JSON, or SQL from the grid tooling.
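The automatic limit injection and read-only guards above can be approximated with simple statement inspection. A simplified sketch under obvious assumptions — a production guard (like the backend's per-engine strategies) needs a real SQL parser to handle comments, CTEs, subqueries, and dialect differences:

```typescript
// Append a LIMIT to bare SELECTs so an unbounded query cannot blow up the client.
// The default of 1000 rows is an illustrative value, not the app's actual cap.
function injectLimit(sql: string, maxRows = 1000): string {
  const trimmed = sql.trim().replace(/;$/, "");
  const isSelect = /^select\b/i.test(trimmed);
  const hasLimit = /\blimit\s+\d+/i.test(trimmed);
  return isSelect && !hasLimit ? `${trimmed} LIMIT ${maxRows}` : trimmed;
}

// Rough heuristic for rejecting mutating statements on read-only connections.
function isMutation(sql: string): boolean {
  return /^\s*(insert|update|delete|drop|alter|create|truncate)\b/i.test(sql);
}
```

Enforcing checks like these server-side (rather than only disabling UI buttons) is what the "Backend-Enforced Guardrails" bullet refers to: a blocked statement is rejected even if a client bypasses the editor entirely.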
- Fully Responsive Architecture: Works across mobile and desktop layouts.
- Mobile-Adaptive Interface:
- Animated menu and drawer patterns
- Touch-friendly controls
- Overlay navigation for constrained screens
- Horizontal tab management for dense workflows
- High-Density Desktop Mode: Designed for productivity with compact, information-rich layouts.
- Modern UI System: Uses Radix UI, Tailwind, motion, and a componentized feature-driven structure.
- Standardized API Communication: Consistent response shapes keep frontend handling predictable.
- Uptime Monitoring Ready: Exposes `/api/health` for monitoring and host keep-alive setups.
| Layer | Technologies |
|---|---|
| Core Framework | React 19, Vite, TypeScript |
| State Management | Zustand, TanStack Query v5 |
| UI Components | Radix UI, Shadcn UI, Tailwind CSS |
| Visualizations | React Flow, Recharts, Lucide Icons |
| Editor | Monaco Editor |
| Animations | Framer Motion, CSS Keyframes, Intersection Observer |
| Layer | Technologies |
|---|---|
| Architecture | NestJS |
| ORM / Persistence | Prisma |
| AI Engine | Google Generative AI (Gemini API), SSE Streaming |
| Engines Support | pg, mysql2, mssql, mongodb |
| Security | JWT, Passport.js, AES-256-GCM encryption |
Data Explorer/
├── client/ # React + Vite frontend application
│ ├── src/
│ │ ├── assets/ # Static frontend assets and bundled icons
│ │ ├── core/ # Client-side domain layer and shared app logic
│ │ │ ├── adapters/ # API/database adapter implementations
│ │ │ ├── config/ # Frontend environment and runtime config
│ │ │ ├── domain/ # Core entities and interfaces
│ │ │ ├── services/ # Zustand store, API services, orchestration
│ │ │ └── utils/ # Client-side utility helpers
│ │ ├── infrastructure/ # Infra-specific frontend plumbing and bootstrapping
│ │ ├── lib/ # Small shared utilities such as class merging helpers
│ │ ├── presentation/ # UI layer and feature composition
│ │ │ ├── components/ # Reusable UI pieces, docs primitives, code editor pieces
│ │ │ ├── hooks/ # UI/data hooks such as schema sync and media queries
│ │ │ ├── modules/ # Feature-driven application modules
│ │ │ │ ├── Connection/ # Connection dialog and connection management flows
│ │ │ │ ├── Dashboard/ # Insights and analytics dashboards
│ │ │ │ ├── DataGrid/ # Result grid, export, import, and table data tools
│ │ │ │ ├── Explorer/ # Sidebar tree, navigation, and database explorer UX
│ │ │ │ ├── LandingPage/ # Marketing/landing page sections
│ │ │ │ ├── Layout/ # AppShell, navbar, profile dialog, and shell composition
│ │ │ │ ├── Migration/ # Migration workflow UI and progress surfaces
│ │ │ │ ├── NoSqlExplorer/ # MongoDB-focused workspace and JSON exploration
│ │ │ │ ├── Query/ # SQL editor, AI assistant, history, results, explain plans
│ │ │ │ └── Visualization/ # ERD and visual exploration workspaces
│ │ │ └── pages/ # Route-level entry pages such as login, docs, admin
│ │ └── test/ # Frontend tests and test setup
├── server/ # NestJS backend API
│ ├── prisma/ # Prisma schema, migrations/config, and generated setup inputs
│ ├── src/
│ │ ├── ai/ # Gemini integration, prompts, chat, streaming, autocomplete
│ │ ├── audit/ # Audit logging and audit history APIs
│ │ ├── auth/ # JWT auth, OAuth, token exchange, guards, roles
│ │ ├── common/ # Shared decorators, filters, middlewares, interceptors, utils
│ │ ├── connections/ # Saved connection lifecycle and persistence
│ │ ├── database-strategies/ # Per-engine query/metadata/export strategy implementations
│ │ ├── mail/ # Mail sending services and templates
│ │ ├── metadata/ # Schema discovery and metadata aggregation
│ │ ├── migration/ # Cross-database migration orchestration and progress streaming
│ │ ├── otp/ # OTP flows for verification and recovery
│ │ ├── prisma/ # Prisma service/module wiring for NestJS
│ │ ├── query/ # Query execution, DML helpers, and result APIs
│ │ ├── seed/ # Seed/bootstrap services
│ │ ├── tests/ # Strategy and service unit tests
│ │ ├── users/ # User profile, settings, roles, billing, onboarding
│ │ └── utils/ # Encryption, SQL guards, and backend utility helpers
├── docker-compose.yml # Local container orchestration for db + backend + frontend
├── package.json # Root dev scripts for running client and server together
├── .env.example # Root Docker-oriented example values
├── server/.env.example # Backend env template for local/dev/prod setup
└── README.md            # Project overview and setup guide
- Universal Drivers: Click the `+` in the Explorer sidebar to add a new connection. The main UI currently exposes PostgreSQL, MySQL, SQL Server, MongoDB, and MongoDB Atlas (SRV).
- Metadata Sync: After connecting, the app crawls your schema and builds the tree view.
- Switching Context: Move between saved connections and databases from the same workspace.
- Multi-Tab Editing: Open multiple SQL tabs and execute with Ctrl/Cmd + Enter.
- Smart IntelliSense: Schema-aware suggestions are available inside the editor.
- Explain and Analyze: Use the explain action to inspect query plans directly in the results panel.
- Saved Queries and History: Save reusable queries and reopen recent executions quickly.
- Contextual Knowledge: The assistant is aware of your active connection, schema context, and current workspace state.
- Vision Features: Drop in screenshots or reference material for AI-assisted schema and SQL help.
- Prompt Engineering: Ask practical questions like:
  - "Summarize the relationship between orders and customers"
  - "Find the top 5 customers with high churn risk based on transaction volume"
- ERD Exploration: Open the ERD module to inspect table relationships visually.
- Insights Dashboard: Review health signals, charts, and high-level usage indicators.
- NoSQL Workspace: MongoDB-oriented flows are available through the NoSQL explorer modules.
- Node.js 20+
- npm
- Google Gemini API Key (optional, only needed for AI features)
- Docker and Docker Compose (optional, for the containerized path)
This is the fastest way to run Data Explorer locally with PostgreSQL, the backend API, and an Nginx-served frontend.
- Environment Setup
  - Copy `server/.env.example` to `server/.env`.
  - Fill in at least `JWT_SECRET`, `ENCRYPTION_KEY`, and any optional AI or OAuth variables you want to use.
  - Sync the Prisma schema before first boot:

    ```bash
    npx prisma db push
    ```

  - If you are serving the frontend through Docker on port `80`, set `FRONTEND_URL=http://localhost`.
- Launch

  ```bash
  docker-compose up --build -d
  ```

  - Web Interface: http://localhost
  - Backend API: http://localhost:3001/api/health
  - Primary Database: `localhost:5435`
    - user: `postgres`
    - password: `postgres`
    - database: `data_explorer`

Note: `docker-compose.yml` reads backend env vars from `server/.env`, not from the root `.env.example`.
- Clone and Install

  ```bash
  git clone https://github.com/KasierBach/Data-Explorer-Editor.git
  cd Data-Explorer-Editor
  npm install
  ```

- Backend Configuration

  ```bash
  cd server
  npm install
  cp .env.example .env
  npx prisma generate
  npx prisma db push
  npm run start:dev
  ```

  Recommended local values in `server/.env`:

  ```
  DATABASE_URL=postgresql://postgres:postgres@localhost:5435/data_explorer
  FRONTEND_URL=http://localhost:5173
  JWT_SECRET=replace-with-a-random-secret-at-least-32-bytes-long
  ENCRYPTION_KEY=replace-with-exactly-32-random-characters
  ```

- Frontend Configuration

  ```bash
  cd ../client
  npm install
  npm run dev
  ```

  Optional `client/.env`:

  ```
  VITE_API_URL=http://localhost:3001/api
  ```

- Launch: Visit http://localhost:5173 to start exploring.
- Run Both from Root

  ```bash
  npm run dev
  ```
The backend reads configuration from `server/.env`. The frontend reads `VITE_API_URL` from `client/.env` or from the Docker build arg.
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | Connection string for the app's central PostgreSQL database. Docker default: `postgresql://postgres:postgres@db:5432/data_explorer`. |
| `GEMINI_API_KEY` | No | Google Gemini API key used to enable AI features. |
| `AI_PROVIDER_TIMEOUT_MS` | No | Timeout in milliseconds for AI provider requests. Default: `15000`. |
| `AI_STREAM_IDLE_TIMEOUT_MS` | No | Idle timeout in milliseconds for streaming AI responses. Default: `15000`. |
| `CEREBRAS_API_KEY` | No | Optional lower-cost AI provider key used by AI routing in Auto / Fast mode. |
| `CEREBRAS_BASE_URL` | No | Base URL for Cerebras' OpenAI-compatible API. Default: `https://api.cerebras.ai/v1`. |
| `CEREBRAS_CHAT_MODEL` | No | Preferred Cerebras chat model for cheaper AI routing. |
| `OPENROUTER_API_KEY` | No | Optional fallback AI provider key used when you want more free/cheap routing options. |
| `OPENROUTER_BASE_URL` | No | Base URL for OpenRouter. Default: `https://openrouter.ai/api/v1`. |
| `OPENROUTER_CHAT_MODEL` | No | OpenRouter model slug to use in Auto / Fast mode. |
| `JWT_SECRET` | Yes | Strong secret used to sign access tokens. Placeholder values are rejected. |
| `REFRESH_TOKEN_SECRET` | No | Recommended separate secret for refresh-token cookies. If omitted, the app falls back to `JWT_SECRET`. |
| `ENCRYPTION_KEY` | Yes | Exactly 32 characters, used to encrypt saved database connection passwords. |
| `LEGACY_ENCRYPTION_KEYS` | No | Comma-separated list of older keys used to decrypt connections saved before the hardening update. |
| `PORT` | No | Server port. Local default: `3001`. |
| `FRONTEND_URL` | Yes | Frontend URL used for CORS and OAuth redirects. Use `http://localhost:5173` for Vite local dev, or `http://localhost` when serving the frontend through Docker on port 80. |
| `ALLOW_INTERNAL_IPS` | No | Recommended default is `false`. Local development still allows `localhost`, `127.0.0.1`, and `::1`. |
| `GOOGLE_CLIENT_ID` | No | Google login client ID. |
| `GOOGLE_CLIENT_SECRET` | No | Google login client secret. |
| `GOOGLE_CALLBACK_URL` | No | Google OAuth callback URL. Local default: `http://localhost:3001/api/auth/google/callback`. |
| `GITHUB_CLIENT_ID` | No | GitHub login client ID. |
| `GITHUB_CLIENT_SECRET` | No | GitHub login client secret. |
| `GITHUB_CALLBACK_URL` | No | GitHub OAuth callback URL. Local default: `http://localhost:3001/api/auth/github/callback`. |
| `MAIL_USER` | No | Email address or sender used for system mail. |
| `MAIL_PASS` | No | App password or provider API key. |
| `VITE_API_URL` | No | Frontend backend API URL. Default: `http://localhost:3001/api`. |
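`ENCRYPTION_KEY` must be exactly 32 characters, which maps to the 32-byte key AES-256 requires. One convenient way to generate a compliant value is to hex-encode 16 random bytes, which always yields exactly 32 characters (a sketch, not the project's official key-generation procedure):

```typescript
import { randomBytes } from "node:crypto";

// 16 random bytes → 32 hex characters, satisfying the exactly-32-character rule.
function generateEncryptionKey(): string {
  return randomBytes(16).toString("hex");
}
```

Run once, paste the output into `server/.env`, and keep the value safe: losing it makes previously saved connection passwords unrecoverable unless the old key is listed in `LEGACY_ENCRYPTION_KEYS`.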
Recent hardening introduced a few important changes:
- Safer OAuth flow: Google and GitHub login no longer return bearer tokens in the query string. The frontend now receives `/login#code=...` and exchanges it for a normal token.
- Stronger secret enforcement: Weak or placeholder `JWT_SECRET` values prevent the backend from starting.
- Encrypted connection credentials: Saved DB connection passwords are encrypted with AES-256-GCM, and the backend supports `LEGACY_ENCRYPTION_KEYS` to preserve older saved connections.
- Localhost still works in development: The SSRF validator is fail-closed for unsafe hosts, while local development still allows `localhost`, `127.0.0.1`, and `::1`.
- Reduced XSS exposure: AI markdown no longer renders raw HTML in the client.
- Safer session handling: Refresh tokens now live in `httpOnly` cookies, while access tokens stay in memory and are re-bootstrapped through `/auth/refresh`.
- AI request guardrails: Provider calls now use request timeouts and streaming idle timeouts so a slow lane cannot hang the backend indefinitely.
If you configure Google or GitHub login, make sure:
- `FRONTEND_URL` matches the frontend origin you are actually using
- callback URLs in your OAuth provider match your local or production backend
- after login, the app redirects in this format: `/login#code=...`
The frontend then calls the exchange endpoint to convert the short-lived code into a normal access token.
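On the client, this flow amounts to parsing the fragment and POSTing the code to the exchange endpoint. A sketch of that sequence — the endpoint path `/auth/exchange` and the `accessToken` response field are assumptions for illustration, not the app's documented API:

```typescript
// Pull the one-time code out of a location hash like "#code=abc123".
function extractAuthCode(hash: string): string | null {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  return params.get("code");
}

// Hypothetical exchange call; the real endpoint name and payload may differ.
async function exchangeCode(apiUrl: string, code: string): Promise<string> {
  const res = await fetch(`${apiUrl}/auth/exchange`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include", // allows the httpOnly refresh cookie to be set
    body: JSON.stringify({ code }),
  });
  if (!res.ok) throw new Error(`exchange failed: ${res.status}`);
  const body = await res.json();
  return body.accessToken; // assumed field name; kept in memory, never localStorage
}
```

Using a fragment (`#code=...`) rather than a query string keeps the one-time code out of server access logs and `Referer` headers, which is the point of this hardening.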
If older saved DB connections stopped working after changing `ENCRYPTION_KEY`, you can add:

```
LEGACY_ENCRYPTION_KEYS=12345678901234567890123456789012,abcdefghijklmnopqrstuvwxyz123456
```

Notes:
- Each key must be exactly 32 characters
- In local development, the system automatically tries the old fallback key to reduce the chance that you need to re-enter passwords
- If an old password was encrypted with a key you no longer have, it cannot be recovered from the stored data
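The legacy-key behavior amounts to trying the current key first and falling back through older keys until one succeeds. A sketch of that loop with the decryption routine injected (in the real backend it would be the AES-256-GCM utility, whose auth-tag check throws on a wrong key instead of returning garbage, which is what makes this fallback safe):

```typescript
// Try each configured key in order until one decrypts the payload.
// `decrypt` must throw on a wrong key, as authenticated ciphers like
// AES-256-GCM do when the auth tag fails to verify.
function decryptWithFallback(
  payload: string,
  keys: string[],
  decrypt: (payload: string, key: string) => string,
): string {
  for (const key of keys) {
    try {
      return decrypt(payload, key);
    } catch {
      // wrong key: authentication failed, try the next candidate
    }
  }
  throw new Error("no configured key can decrypt this connection password");
}
```

This mirrors the documented behavior: if none of the current or legacy keys match, the stored password is unrecoverable and must be re-entered.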
```bash
cd client
npm run build
cd ../server
npx nest build
```

```bash
cd client
npm test
cd ../server
npm test
```

- Google / GitHub login is not working
  - Check `JWT_SECRET`
  - Check the callback URLs in your OAuth provider
  - Check `FRONTEND_URL`
- Older saved connections no longer work
  - Check `ENCRYPTION_KEY`
  - Add `LEGACY_ENCRYPTION_KEYS` if you previously used another key
  - Restart the backend after updating env values
- Localhost databases do not connect
  - Local development already allows `localhost`, `127.0.0.1`, and `::1`
  - Only enable `ALLOW_INTERNAL_IPS=true` if you intentionally need broader private-network access
- Prisma build fails with `EPERM` on Windows
  - This usually happens because the backend dev server is still locking the Prisma engine file
  - Stop the running backend process and build again
- Bug Reports: Open an issue on the GitHub Repo.
- Custom Adapters: New database engine support can be added via the `database-strategies` module.
This project is licensed under the MIT License.