RAG implemented from scratch, without LangChain or LangGraph, designed for processing and querying PDF documents, with support for visual content such as tables, charts, and mathematical formulas.
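The core of a from-scratch RAG pipeline like this one is just retrieval plus prompt assembly. A minimal sketch, with a toy bag-of-words "embedding" standing in for a real encoder model (swap in any function that returns fixed-size vectors):

```python
# Minimal from-scratch RAG retrieval step (no LangChain/LangGraph).
# NOTE: embed() is a toy bag-of-words stand-in, not a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble retrieved chunks into a grounded prompt for the LLM."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Table 3 reports accuracy for each model.",
    "The chart shows latency versus batch size.",
    "Installation requires Python 3.10 or newer.",
]
print(build_prompt("What does the chart show?", chunks))
```

The resulting prompt string is what gets sent to the generation model; everything library-specific in the repos below (ChromaDB, FAISS, E5) replaces `embed` and `retrieve` with production equivalents.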
Production-ready multilingual RAG system for scientific PDFs. Supports 10+ Indic languages with E5 embeddings, ChromaDB vector store, Gemini 2.5 Flash LLM, and NLLB-200 translation. Ask questions in any supported language and get answers with citations.
AI-powered PDF Q&A chatbot. Upload any document and have a real conversation with it. Built with a RAG architecture using LangChain, Groq (Llama 3.3-70B), ChromaDB, and HuggingFace embeddings; completely free to run.
A high-performance Speculative RAG pipeline designed to reduce latency by combining fast draft generation with accurate verification, using Groq Llama models, local HuggingFace embeddings, ChromaDB vector search, and end-to-end observability with Langfuse.
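The speculative pattern this pipeline uses can be sketched in a few lines: a cheap draft model answers first, and a stronger model only runs when the draft fails verification. Both model calls are stubbed here as plain callables; in the repo above they would be Groq Llama endpoints (an assumption about its internals, not its exact API):

```python
# Hedged sketch of the speculative RAG draft-then-verify pattern.
# draft/verify/fallback are stand-ins for real LLM calls.
from typing import Callable

def speculative_answer(
    question: str,
    context: str,
    draft: Callable[[str], str],          # fast, cheap drafter
    verify: Callable[[str, str], bool],   # strong model judges the draft
    fallback: Callable[[str], str],       # strong model answers directly
) -> str:
    prompt = f"{context}\n\nQ: {question}"
    candidate = draft(prompt)
    if verify(prompt, candidate):
        return candidate        # accept the cheap draft -> low latency
    return fallback(prompt)     # pay full cost only when the draft is rejected

# Toy stand-ins to make the sketch runnable:
answer = speculative_answer(
    "What is 2 + 2?",
    "Basic arithmetic facts.",
    draft=lambda p: "4",
    verify=lambda p, a: a == "4",
    fallback=lambda p: "four",
)
print(answer)  # -> 4
```

The latency win comes from the accept path: verification of a short draft is usually cheaper than full generation by the strong model.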
Enables context-aware question answering over PDFs using retrieval-augmented generation with vector embeddings. Built with Next.js App Router and OpenAI models for low-latency document search and response generation.
An AI-powered research assistant that answers academic questions from uploaded PDFs or links (arXiv, PubMed) and returns context-rich answers with citation support using LangChain, LLaMA 3 (Groq), and FAISS.
🧠 Hands-on RAG workshop using InterSystems IRIS & LLMs - Build PDF Q&A systems, natural language-to-SQL interfaces, and learn AI agent architecture with local/cloud options