Feedback Funnel is a high-performance, asynchronous data pipeline designed to ingest, analyze, and visualize customer feedback at scale. Instead of performing heavy AI tasks synchronously in request handlers, the system pushes work to a Redis-backed job queue and processes it asynchronously with a worker pool, ensuring sub-second ingestion times and high availability.
## Key features
- Asynchronous ingestion with Redis-backed FIFO queue (BLPop)
- Producer-consumer worker pool to offload AI analysis from request cycle
- Secure GitHub webhook handling with HMAC SHA-256 validation
- AI analysis via OpenAI (sentiment, category, and summarization)
- Dashboard built with Next.js + Recharts for visualization
## Tech stack
- Backend: Go (Gin) — HTTP API, webhook endpoints, worker
- Frontend: TypeScript, Next.js 14 (App Router), Tailwind CSS
- Database: PostgreSQL
- Queue / Broker: Redis
- AI: OpenAI GPT-4 API
## Architecture & workflow
1. **Ingestion layer:** The API accepts feedback from manual API calls and GitHub webhooks. Incoming requests are validated and stored in the database, then the feedback ID is pushed to a Redis list for processing.
2. **Immediate response:** The API returns HTTP 202 Accepted after enqueueing the job, so clients are not blocked by AI latency.
3. **Worker pool:** A separate Go process polls Redis with `BLPop` and processes items using goroutines. Workers call OpenAI to run:
   - Sentiment analysis (Positive / Neutral / Negative)
   - Categorization (Bug / Feature Request / Praise)
   - Short summarization (one-line actionable summary)
4. **Results:** Analysis results are saved back to the database and surfaced to the frontend dashboard.
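The worker-pool step is a standard producer-consumer pattern in Go. The sketch below is illustrative, not the project's actual worker: `analyze` is a stand-in for the OpenAI call, and the `jobs` channel is fed from a slice, where the real worker would feed it from a `BLPop` loop against Redis.

```go
package main

import (
	"fmt"
	"sync"
)

// analyze stands in for the OpenAI call. In the real worker this is where
// sentiment, category, and summary would be produced for a feedback ID.
func analyze(id string) string {
	return "analyzed:" + id
}

// runPool fans a stream of feedback IDs out to n worker goroutines and
// collects their results. In Feedback Funnel the jobs channel would be fed
// by a BLPop loop instead of a fixed slice.
func runPool(ids []string, n int) []string {
	jobs := make(chan string)
	results := make(chan string)

	// Start n workers that drain the jobs channel.
	var wg sync.WaitGroup
	for w := 0; w < n; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				results <- analyze(id)
			}
		}()
	}

	// Producer: enqueue every ID, then close the channel so workers exit.
	go func() {
		for _, id := range ids {
			jobs <- id
		}
		close(jobs)
	}()

	// Close results once all workers have finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := runPool([]string{"42", "43", "44"}, 2)
	fmt.Println(len(out)) // prints 3
}
```

Because workers read from a shared channel, adding throughput is just a matter of raising `n`; result order is not guaranteed, which is fine here since each result is keyed by feedback ID in the database.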
## Getting started
### Prerequisites
- Go 1.21+
- Node.js 18+
- PostgreSQL
- Redis (local or Docker)
- OpenAI API key
### Backend (local)
- Change into the backend folder and create a `.env` file from the example:

  ```bash
  cd backend
  cp .env.example .env
  # Edit .env to set real values
  ```

- Run the server:

  ```bash
  go run .
  ```

### Frontend (local)
- Install dependencies and run:

  ```bash
  cd frontend
  npm install
  npm run dev
  ```

- Open `http://localhost:3000` (frontend) and `http://localhost:8080` (backend) by default.
## Configuration notes
- Keep real secrets (OpenAI key, DB credentials, webhook secret) in local env files such as `.env` or `.env.local`, and ensure those are ignored via `.gitignore`.
- The project includes `backend/.env.example` as a template for the required variables.
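As an illustrative sketch only, a local `.env` might look like the following. The variable names below are assumptions, except `GITHUB_WEBHOOK_SECRET`; `backend/.env.example` is the authoritative list.

```bash
# Illustrative only — see backend/.env.example for the real variable names
DATABASE_URL=postgres://user:pass@localhost:5432/feedback_funnel
REDIS_ADDR=localhost:6379
OPENAI_API_KEY=sk-...
GITHUB_WEBHOOK_SECRET=change-me
```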
## Security
- GitHub webhook handling validates the `X-Hub-Signature-256` header using `GITHUB_WEBHOOK_SECRET` when it is set.
## Future roadmap
- Slack integration: forward high-priority / negative sentiment feedback to a triage channel
- Batch processing: aggregate feedback and batch calls to OpenAI to reduce cost
- Support for self-hosted / custom models (e.g., Llama 3 via Ollama)
## License
This project is licensed under the MIT License — see the LICENSE file.