An intelligent customer support chatbot using RAG (Retrieval-Augmented Generation) with OrkaJS, Next.js and NestJS.
- Overview
- Features
- Architecture
- Prerequisites
- Installation
- Configuration
- Usage
- Docker
- Project structure
- Deployment
This project is a complete example of a Support Bot application using RAG technology to provide accurate, knowledge-based answers. It demonstrates the integration of Orka AI into a modern, full-stack architecture.
Frontend:
- Next.js 14 (App Router)
- React 18
- TypeScript
- TailwindCSS
- Lucide Icons
Backend:
- NestJS
- OrkaJS (RAG & LLM)
- OpenAI GPT-4
- TypeScript
Infrastructure:
- Docker & Docker Compose
- NPM Workspaces (Mono-repo)
- 💬 Real-time chat - Intuitive conversation interface
- 🧠 Intelligent RAG - Answers based on a knowledge base
- 🎨 Modern UI - Responsive design with smooth animations
- 🔒 Type-safe - TypeScript end-to-end
- 🐳 Dockerized - Easy deployment with Docker
- 📦 Mono-repo - Simplified management with NPM Workspaces
```
┌─────────────────┐
│    Frontend     │
│  (Next.js 14)   │
│   Port: 3000    │
└────────┬────────┘
         │
         │ HTTP/REST
         │
┌────────▼────────┐
│     Backend     │
│     (NestJS)    │
│   Port: 3001    │
└────────┬────────┘
         │
         │
    ┌────▼─────┐
    │ Orka AI  │
    │   RAG    │
    └────┬─────┘
         │
    ┌────▼─────┐
    │  OpenAI  │
    │  GPT-4   │
    └──────────┘
```
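The flow in the diagram can be sketched in framework-free TypeScript. All names here (`OrkaRagService`, `LlmClient`) are illustrative stand-ins, not the real OrkaJS or NestJS API, and the keyword match is a placeholder for real vector retrieval:

```typescript
// Illustrative sketch of the RAG request flow, assuming hypothetical names.
interface LlmClient {
  complete(prompt: string): Promise<string>;
}

class OrkaRagService {
  constructor(private llm: LlmClient, private docs: string[]) {}

  // Retrieval step: a naive keyword match stands in for real vector search.
  retrieve(question: string): string[] {
    const words = question
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => w.length > 3);
    return this.docs.filter((d) =>
      words.some((w) => d.toLowerCase().includes(w))
    );
  }

  // Generation step: retrieved context is prepended to the LLM prompt.
  async ask(question: string): Promise<string> {
    const context = this.retrieve(question).join("\n");
    return this.llm.complete(`Context:\n${context}\n\nQuestion: ${question}`);
  }
}
```

In the actual backend, the NestJS service on port 3001 plays the caller role while OrkaJS handles retrieval and the GPT-4 call.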
- Node.js >= 18.0.0
- npm >= 9.0.0
- Docker & Docker Compose (optional)
- OpenAI API key
📖 For a detailed installation guide, see INSTALLATION.md
```bash
# 1. Install dependencies
npm install

# 2. Configure the backend
cp apps/backend/.env.example apps/backend/.env
# Edit apps/backend/.env and add your OPENAI_API_KEY

# 3. Start the application
npm run dev

# 4. Access http://localhost:3000
```

Backend environment (`apps/backend/.env`):

```env
NODE_ENV=development
PORT=3001
OPENAI_API_KEY=your_openai_api_key_here
```

Frontend environment:

```env
NEXT_PUBLIC_API_URL=http://localhost:3001
```

Development:

```bash
# Start backend and frontend simultaneously
npm run dev

# Or separately:
npm run dev:backend   # Backend on http://localhost:3001
npm run dev:frontend  # Frontend on http://localhost:3000
```

Production:

```bash
# Build
npm run build

# Start
npm start
```

- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
- Health Check: http://localhost:3001/api/support/health
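The backend can also be called from custom code. The helper below builds the HTTP request; note that the `/api/support/ask` route is an assumption inferred from the documented health-check path, so check the NestJS controller for the real route:

```typescript
// Build the HTTP request for asking the support bot a question.
// NOTE: the /api/support/ask path is assumed (only /api/support/health
// is documented); adjust it to match the actual controller route.
function buildAskRequest(
  apiUrl: string,
  question: string,
  conversationId?: string
) {
  return {
    url: `${apiUrl}/api/support/ask`,
    init: {
      method: "POST" as const,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(
        conversationId ? { question, conversationId } : { question }
      ),
    },
  };
}

// Usage (Node 18+ ships a global fetch):
// const { url, init } = buildAskRequest(
//   "http://localhost:3001",
//   "How do I reset my password?"
// );
// const { answer } = await (await fetch(url, init)).json();
```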
```bash
# Start in development mode
docker-compose -f docker-compose.dev.yml up

# With rebuild
docker-compose -f docker-compose.dev.yml up --build
```

```bash
# Build and start
npm run docker:build
npm run docker:up

# Or directly
docker-compose up --build

# Stop
npm run docker:down
```

Create a `.env` file at the root:

```env
OPENAI_API_KEY=your_openai_api_key_here
```

Ask a question to the support bot.
Request:

```json
{
  "question": "How do I reset my password?",
  "conversationId": "optional-conversation-id"
}
```

Response:

```json
{
  "answer": "To reset your password, follow these steps...",
  "conversationId": "conv_1234567890_abc123"
}
```

Check the service status.

Response:

```json
{
  "status": "healthy",
  "timestamp": "2024-03-06T15:30:00.000Z"
}
```

Edit `apps/backend/src/support-bot/orka.service.ts` in the `initializeKnowledgeBase()` method:
```typescript
const supportDocuments = [
  {
    content: `# Your content here`,
    metadata: { category: 'category-name', topic: 'topic-name' },
  },
  // Add more documents...
];
```

Modify the `systemPrompt` in `apps/backend/src/support-bot/orka.service.ts`:

```typescript
systemPrompt: `You are a personalized support assistant...`
```

In `apps/backend/src/support-bot/orka.service.ts`:

```typescript
const llmAdapter = new OpenAIAdapter({
  apiKey,
  model: 'gpt-4-turbo-preview', // Change here
  temperature: 0.7,
});
```

```bash
# Backend
cd apps/backend
npm run test

# Frontend
cd apps/frontend
npm run test
```

- Remove `node_modules` and build files
- Remove all `.env.local` files (contain API keys)
- Verify that `.gitignore` is correct
- Test installation from scratch
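As the knowledge base grows, malformed entries are easy to miss. A hypothetical validator for the `supportDocuments` shape shown in the Customization section (this helper is not part of OrkaJS):

```typescript
// Hypothetical validator for supportDocuments entries; not an OrkaJS API.
interface SupportDocument {
  content: string;
  metadata: { category: string; topic: string };
}

// Returns a list of human-readable problems; empty means all docs are valid.
function validateDocuments(docs: SupportDocument[]): string[] {
  const errors: string[] = [];
  docs.forEach((doc, i) => {
    if (!doc.content.trim()) errors.push(`doc ${i}: empty content`);
    if (!doc.metadata.category) errors.push(`doc ${i}: missing category`);
    if (!doc.metadata.topic) errors.push(`doc ${i}: missing topic`);
  });
  return errors;
}
```

Running it once during `initializeKnowledgeBase()` turns silent bad entries into a loud start-up failure.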
MIT
Contributions are welcome! Consult CONTRIBUTING.md for more details.
For any questions, consult:
Developed with ❤️ using OrkaJS