
TaskNebula

The open-source project management system with a real AI copilot


A self-hosted issue tracker that feels like Linear, scales like Jira, and drafts work with you like a pair programmer. Bring your own OpenAI / Anthropic key — or run fully offline with the native planner.

Docker Hub · Quick start · AI · Features · Report a bug


Screenshots: TaskNebula home · Draft issues from a prompt · Per-issue AI assist · Real-time Kanban board · Workspace dashboard


⚡ Quick start

One curl, then open your browser:

```shell
curl -fsSL https://raw.githubusercontent.com/neuraparse/tasknebula/main/scripts/quickstart.sh | bash
```

The script pulls neuraparse/tasknebula:latest, spins up Postgres + Redis + LiveKit via Docker Compose, generates a strong AUTH_SECRET, and opens http://localhost:3000. The first-run wizard creates your admin account.
Build from source instead

```shell
git clone https://github.com/neuraparse/tasknebula.git
cd tasknebula
cp .env.example .env
echo "AUTH_SECRET=$(openssl rand -base64 32)" >> .env
docker compose up -d --build
```
Pin a specific version

```shell
# default is :latest
TASKNEBULA_IMAGE=neuraparse/tasknebula:0.2.4 docker compose up -d
```
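The pin works through standard Compose variable interpolation. The relevant line in docker-compose.yml presumably looks something like this (a sketch of the mechanism, not the repo's actual file):

```yaml
# Sketch (assumed): Compose falls back to :latest when TASKNEBULA_IMAGE is unset
services:
  web:
    image: ${TASKNEBULA_IMAGE:-neuraparse/tasknebula:latest}
```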

Release notes live on the GitHub releases page.


🧠 AI Assistant & Agents

AI is off by default. Enable it per workspace from Settings → AI & Agents in under 30 seconds. Everything is DB-managed — no env vars to redeploy when you rotate a key.

What you can do

| Feature | Where | What it does |
| --- | --- | --- |
| Draft-with-AI | Backlog → Draft with AI | Type one prompt; the LLM decides whether it's one ticket or a whole checklist and returns structured, editable drafts. Select which to create in bulk. |
| Per-issue assist | Issue detail sidebar → AI assist | Summarise an issue with its comments, rewrite the description, suggest next steps, or propose labels. One click to Apply, no copy-paste. |
| Native fallback | Built-in | If no LLM credential is configured, TaskNebula still ships a deterministic heuristic planner so the buttons are never dead. |
| Platform keys | Admin → Agent control | Super-admins drop in an OpenAI / Anthropic key that all workspaces fall back to. AES-256-GCM encrypted, redacted previews, audit-logged rotations. |
| Workspace keys | Settings → AI & Agents → Quick setup | Each workspace can override the platform default with its own key. A single Quick-Setup button writes provider + model + key + toggle in one transaction. |
| Model profiles | Settings → AI & Agents → Your model profiles | Save reusable provider + model + tuning combos (temperature, max tokens, reasoning effort) with full revision history. Saved profiles appear inline in the Quick Setup dropdown. |

Supported providers & models

| Provider | Models out of the box |
| --- | --- |
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo, o1, o1-mini, plus the speculative gpt-5.x family |
| Anthropic | claude-opus-4-7, claude-sonnet-4-6, claude-haiku-4-5, plus claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus |
| Native | Built-in heuristic planner (no API calls) |

Fails gracefully

Every failure — bad key, rate limit, model not available — becomes an in-app notification with a Sparkles/Bot icon, a one-line action hint ("Open Settings → AI & Agents to add a key"), and a deep-link straight to the relevant project AI settings. Full audit trail in Admin → Audit logs.


📦 Features

Project management

  • Kanban board with drag-and-drop
  • Stories, tasks, bugs, epics, subtasks
  • Sprints, burndown, velocity
  • Custom fields, issue links, attachments
  • Custom workflows, transition rules
  • Backlog grooming + roadmap view

Collaboration

  • Real-time presence & live activity feed
  • @mentions, watchers, reactions
  • Threaded comments, email digests
  • LiveKit voice rooms for ad-hoc calls
  • Project-scoped chat + issue threads
  • In-app notification bell with deep-links

Admin & governance

  • 30+ granular permission types
  • Role-based access + issue security levels
  • 63+ audit-log action types
  • API keys + signed webhooks
  • OAuth (GitHub / Google / custom)
  • Multi-org, per-org plan + feature flags
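The README does not document the webhook signing scheme, so the sketch below assumes the common pattern: a hex HMAC-SHA256 of the raw request body, carried in a signature header. The header name X-TaskNebula-Signature is hypothetical; check the actual webhook docs before relying on this.

```shell
#!/bin/sh
# Hypothetical webhook verification sketch: hex HMAC-SHA256 of the raw body (scheme assumed).
payload='{"event":"issue.created","issue":{"id":42}}'   # raw request body, byte-for-byte
secret='whsec_example_secret'                           # the webhook signing secret

# Compute the expected signature with openssl; $NF grabs the hex digest
# regardless of whether openssl prints "(stdin)= ..." or "SHA2-256(stdin)= ...".
expected=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# In a real receiver this would come from the X-TaskNebula-Signature header (name assumed).
received="$expected"

if [ "$received" = "$expected" ]; then
  echo "signature ok"        # prints "signature ok"
else
  echo "signature mismatch" >&2
fi
```

Always compare against the raw body before any JSON parsing or re-serialisation, since re-encoding changes the bytes and breaks the digest.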

AI (opt-in)

  • Draft-with-AI (auto single/multi)
  • Per-issue summarise / rewrite / suggest
  • Platform + workspace credential chain
  • Saved model profiles w/ revisions
  • OpenAI + Anthropic first-class
  • Failure → notification w/ action hint

Analytics

  • Burndown, velocity, cycle time
  • Throughput + lead-time distributions
  • Project health scorecard
  • CSV / JSON export
  • Sprint retrospectives view
  • Cross-project rollups

Developer experience

  • Cmd+K command palette
  • Keyboard shortcuts everywhere
  • Dark mode + mobile responsive
  • Route-level skeletons (no blank loads)
  • SSE-based real-time sync
  • One-command production deploy

🛠 Tech stack

Frontend

  • Next.js 15 (App Router, RSC)
  • React 19
  • Tailwind CSS + shadcn/ui
  • TanStack Query for client state
  • SSE for realtime fan-out

Backend

  • Next.js API routes
  • Drizzle ORM
  • PostgreSQL 16 + pgvector
  • Redis 7 for cache + presence
  • LiveKit for voice rooms
  • Auth.js v5 (NextAuth)

🚢 Production

```shell
# .env
APP_URL=https://tasks.yourcompany.com
AUTH_SECRET=your-secret-here   # openssl rand -base64 32
```
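A slightly fuller sketch, using only variables that appear elsewhere in this README (all values are placeholders):

```shell
# .env sketch; every variable here is mentioned in this README, values are placeholders
APP_URL=https://tasks.yourcompany.com
AUTH_SECRET=change-me                                   # generate with: openssl rand -base64 32
TASKNEBULA_IMAGE=neuraparse/tasknebula:0.2.4            # optional: pin a release instead of :latest
NEXT_PUBLIC_LIVEKIT_URL=wss://livekit.yourcompany.com   # only needed for voice rooms behind HTTPS
```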

Put any reverse proxy in front for SSL. Caddy is the easiest:

```
tasks.yourcompany.com {
    reverse_proxy localhost:3000
}
```

See .env.example for the full list — SMTP, OAuth, LiveKit tuning, optional platform LLM keys, etc. Everything AI-related can be configured through the UI after first boot; env vars are only a fallback for dev.

Voice rooms behind HTTPS

LiveKit ships plain ws:// on port 7880. A browser loading TaskNebula over https:// blocks that plain WebSocket as mixed content, so LiveKit needs TLS in front. Put nginx (or Caddy) on its own subdomain:

  1. DNS: add livekit.your-domain.example.com → same server IP as the app.
  2. nginx: copy the ready template at nginx/tasknebula-livekit.conf, replace the hostname, then:
    sudo certbot --nginx -d livekit.your-domain.example.com
    sudo nginx -t && sudo nginx -s reload
  3. .env:
    NEXT_PUBLIC_LIVEKIT_URL=wss://livekit.your-domain.example.com
  4. Rebuild so the build-time env is baked into the Next bundle:
    docker compose up -d --build web

The template terminates TLS, forwards to 127.0.0.1:7880, and keeps the WebSocket upgrade headers + 24h idle timeout LiveKit needs.
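The repo's template is the source of truth; based purely on the description above, its essential shape is roughly the following (a sketch, not the actual nginx/tasknebula-livekit.conf):

```nginx
# Sketch matching the behaviour described above; use the repo's template in production.
server {
    listen 443 ssl;
    server_name livekit.your-domain.example.com;

    # ssl_certificate / ssl_certificate_key lines are added by certbot

    location / {
        proxy_pass http://127.0.0.1:7880;

        # WebSocket upgrade headers LiveKit needs
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        # Long-lived signalling connections: 24h idle timeout
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
```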


🧰 Commands

```shell
docker compose up -d                       # Start (pulls :latest)
docker compose down                        # Stop
docker compose logs -f web                 # Tail the web service
docker compose pull web && \
  docker compose up -d                     # Update to the newest image
docker compose up -d --build               # Rebuild from local source
git pull && docker compose up -d --build   # Pull + rebuild
```

docker-compose.yml defaults to image: neuraparse/tasknebula:latest. Set TASKNEBULA_IMAGE=neuraparse/tasknebula:0.2.4 in .env to pin.


👩‍💻 Development

```shell
pnpm install
docker compose up -d postgres redis
cp .env.example .env
cp apps/web/.env.example apps/web/.env.local
cd packages/db && pnpm tsx scripts/migrate.ts && cd ../..
pnpm dev
```

Run the full test suite:

```shell
pnpm --filter @tasknebula/web exec jest
```

AI-related tests live under apps/web/src/lib/ai/__tests__, apps/web/src/lib/agents/__tests__, and apps/web/src/app/api/ai/**/__tests__.


🤝 Contributing

Pull requests welcome. See CONTRIBUTING.md for the branch model, commit conventions, and how to run the linter + tests locally before pushing.

📄 License

MIT — see LICENSE.


Built by Neura Parse · Powered by open source
