A powerful chatbot application that uses Agentic RAG (Retrieval Augmented Generation) to intelligently answer questions about uploaded PDF documents. The system combines the power of LangChain, OpenAI's GPT models, and vector storage to provide accurate, context-aware responses.
Features • Technology Stack • API Documentation • Setup • Usage • Security

## Features
- 📄 PDF Document Upload & Processing
- 💬 Interactive Chat Interface
- 🔍 Intelligent Document Search using RAG
- 🧠 Context-Aware Responses
- 📊 Conversation History Management
- 🎭 Customizable Assistant Personalities
- 🔒 Secure Document Handling
- 🚀 RESTful API Architecture
## Technology Stack

- FastAPI - Modern web framework for building APIs
- LangChain - Framework for developing applications powered by language models
- OpenAI GPT-4o - Advanced language model for generating responses
- FAISS - Vector storage for efficient document retrieval
- PyPDF - PDF document processing
- Streamlit - Interactive web interface
- Python Requests - HTTP client for API communication
## API Documentation

The backend provides a RESTful API with the following endpoints:
### POST /conversations/new

Creates a new conversation session and returns a unique conversation ID.
Response:

```json
{
  "conversation_id": "uuid-string",
  "message": "New conversation created successfully"
}
```

### POST /conversations/{conversation_id}/upload

Upload PDF documents for processing in a specific conversation.
Parameters:

- `conversation_id` (path): UUID of the conversation
- `files` (form-data): List of PDF files to upload
Response:

```json
{
  "message": "Documents uploaded and processed successfully."
}
```

### POST /conversations/{conversation_id}/chat

Send a question and receive an AI-generated response based on the uploaded documents.
Request Body:

```json
{
  "question": "Your question here",
  "assistant_name": "AI Assistant",
  "assistant_behavior": "Professional",
  "custom_instructions": ""
}
```

Response:

```json
{
  "answer": "AI-generated response based on document context"
}
```

### Other Endpoints

- `GET /conversations` - List all conversations
- `GET /conversations/{conversation_id}/history` - Get chat history
- `GET /conversations/{conversation_id}/files` - List uploaded files
- `GET /health` - API health check
- `POST /conversations/{conversation_id}/load` - Load conversation documents
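Putting the endpoints together, a typical client session can be sketched with Python Requests (part of the stack above). This is a minimal sketch, not code from the repository: the base URL assumes the backend runs locally on uvicorn's default port 8000, and the file name in the example is hypothetical.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: local backend on uvicorn's default port


def new_conversation() -> str:
    """Create a conversation and return its unique ID."""
    resp = requests.post(f"{BASE_URL}/conversations/new")
    resp.raise_for_status()
    return resp.json()["conversation_id"]


def upload_pdfs(conversation_id: str, paths: list) -> None:
    """Upload one or more PDF files into the conversation as multipart form data."""
    files = [("files", open(p, "rb")) for p in paths]
    try:
        resp = requests.post(
            f"{BASE_URL}/conversations/{conversation_id}/upload", files=files
        )
        resp.raise_for_status()
    finally:
        for _, fh in files:
            fh.close()


def ask(conversation_id: str, question: str) -> str:
    """Send a question with the documented request body and return the answer."""
    payload = {
        "question": question,
        "assistant_name": "AI Assistant",
        "assistant_behavior": "Professional",
        "custom_instructions": "",
    }
    resp = requests.post(
        f"{BASE_URL}/conversations/{conversation_id}/chat", json=payload
    )
    resp.raise_for_status()
    return resp.json()["answer"]


# Example session (requires the backend to be running):
#   cid = new_conversation()
#   upload_pdfs(cid, ["report.pdf"])
#   print(ask(cid, "What is the main conclusion of the report?"))
```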
## Setup

- Clone the repository:

```bash
git clone https://github.com/mostafa-ghaith/agentic-rag-app.git
cd agentic-rag-app
```

- Set up environment variables:

Create a `.env` file in the root directory with the following variables:

```
OPENAI_API_KEY=your_openai_api_key
```

- Using Docker (Recommended):

```bash
docker-compose up --build
```

- Manual Setup:
Backend:

```bash
cd backend
pip install -r requirements.txt
uvicorn api.main:app --reload
```

Frontend:

```bash
cd frontend
pip install -r requirements.txt
streamlit run app.py
```

## Usage

- Access the web interface at `http://localhost:8501`
- Upload PDF documents using the sidebar and click on "Process Documents"
- Optionally customize the chatbot's name, behaviour, and instructions
- Start chatting with the bot about your documents
- You can navigate to previous conversations and continue them from the sidebar
## Contributing

To contribute to the project:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
## Security

- API keys and sensitive data are managed through environment variables
- Document storage is temporary and session-based
- Secure file handling and validation
- Rate limiting on API endpoints
- Input sanitization and validation
- CORS policy implementation
- Regular security updates and dependency scanning
## License

MIT License

Copyright (c) 2025 Mostafa Ghaith

## Author

Mostafa Ghaith

- linkedin.com/in/mostafa-ghaith
- github.com/mostafa-ghaith

