A Retrieval-Augmented Generation (RAG) system for querying Laravel documentation using Ollama and ChromaDB.
This project lets you chat with the Laravel documentation. It ingests the documentation's markdown files, chunks them, stores their embeddings in a vector database (ChromaDB), and uses a local LLM (via Ollama) to answer questions grounded in the retrieved context.
## Prerequisites

- Python 3.8+
- Ollama installed and running

You need to pull the following models:

```bash
ollama pull nomic-embed-text
ollama pull llama3.2
```

## Installation

- Clone the repository.
- Install the dependencies:

```bash
pip install -r requirements.txt
```

## Ingestion

The project expects the Laravel documentation markdown files in the `laravel-docs/` directory.
Run the ingestion script to process the documents and create the vector database:

```bash
python ingest.py
```

This script will (see the sketch after this list):

- Read markdown files from `laravel-docs/`.
- Chunk the text.
- Generate embeddings using `nomic-embed-text`.
- Store the embeddings in a local ChromaDB instance (`./chroma_db`).
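For orientation, the flow boils down to something like the minimal sketch below. This is an illustration, not the actual contents of `ingest.py`; the collection name `laravel_docs`, the chunk size, and the overlap are assumptions.

```python
# Sketch of the ingestion flow. Collection name and chunking
# parameters are assumptions, not taken from ingest.py.
from pathlib import Path

import chromadb
import ollama

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("laravel_docs")

def chunk_text(text: str, size: int = 1000, overlap: int = 200):
    # Fixed-size character windows with overlap, so context isn't
    # cut off at chunk boundaries.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

for path in sorted(Path("laravel-docs").glob("*.md")):
    for i, piece in enumerate(chunk_text(path.read_text(encoding="utf-8"))):
        # nomic-embed-text runs locally via the Ollama server.
        embedding = ollama.embeddings(model="nomic-embed-text", prompt=piece)["embedding"]
        collection.add(
            ids=[f"{path.stem}-{i}"],           # unique id per chunk
            documents=[piece],                   # raw chunk text for retrieval
            embeddings=[embedding],
            metadatas=[{"source": path.name}],   # lets the query side cite sources
        )
```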
## Querying

Start the interactive CLI to ask questions:

```bash
python query.py
```

Type your question when prompted. The system will (see the sketch after this list):

- Search for relevant documentation chunks.
- Use `llama3.2` to generate an answer based on the retrieved context.
- Display the answer and the source files used.
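A minimal sketch of that loop follows, assuming the same collection name and embedding model as the ingestion sketch above. The prompt wording, the number of retrieved chunks (`n_results=4`), and the exit convention are assumptions, not the actual contents of `query.py`.

```python
# Sketch of the query loop; prompt text and retrieval depth are assumptions.
import chromadb
import ollama

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("laravel_docs")

while True:
    question = input("\nQuestion (blank to exit): ").strip()
    if not question:
        break

    # Embed the question with the same model used at ingestion time.
    q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
    hits = collection.query(query_embeddings=[q_emb], n_results=4)

    # Stitch the retrieved chunks into a context block and collect sources.
    context = "\n\n".join(hits["documents"][0])
    sources = sorted({m["source"] for m in hits["metadatas"][0]})

    reply = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": "Answer using only the provided Laravel documentation excerpts."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    print(f"\n{reply['message']['content']}\n\nSources: {', '.join(sources)}")
```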
## Project Structure

- `ingest.py`: Script to process documentation and populate the vector database.
- `query.py`: Script to run the interactive query interface.
- `requirements.txt`: Python dependencies.
- `laravel-docs/`: Directory containing the Laravel documentation markdown files.
- `chroma_db/`: (Created after ingestion) Directory storing the vector database.
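For reference, the dependencies likely reduce to the two official Python clients; the exact contents of `requirements.txt` are an assumption here, so treat the file itself as authoritative:

```text
# Assumed dependencies; check requirements.txt for the real list.
chromadb
ollama
```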