LET (LLM Explanation Tool) is a comprehensive web-based platform for generating, evaluating, and comparing natural language explanations from large language models (LLMs). Built for researchers and practitioners in explainable AI, LET addresses the growing need to understand and assess the quality of AI-generated explanations across multiple dimensions.
While most existing explainability frameworks focus on feature attribution methods (e.g., LIME, SHAP), LET emphasizes self-explanations and post-hoc explanations expressed in natural language. This reflects the growing importance of LLMs in human-AI interaction and the need for explanations that are both faithful (accurately reflecting model reasoning) and plausible (convincing to human users).
LET enables:
- Multi-provider LLM integration (OpenAI, Gemini, DeepSeek, Groq, Ollama)
- Traditional transformer classifiers (BERT) with SHAP-based explanations
- Systematic evaluation of explanation quality using the LExT framework
- Interactive and batch processing of benchmark datasets
- Side-by-side comparison of explanation types and providers
✅ Provider-agnostic design: Automatically supports new models from connected providers
✅ Dual explanation modes: Self-explanations and post-hoc explanations
✅ Traditional baselines: BERT + SHAP for comparison
✅ Rigorous evaluation: LExT framework for faithfulness and plausibility
✅ Flexible datasets: Built-in benchmarks + custom upload support
✅ Chain-of-Thought prompting: Elicit step-by-step reasoning
✅ Interactive exploration: Both batch processing and instance-level analysis
✅ User rating system: Collect human feedback on explanation quality
✅ Privacy-preserving option: Local deployment with Ollama
The easiest way to run the application is using Docker Compose. This method automatically sets up MongoDB, backend, and frontend with a single command.
- Docker Desktop installed and running
- Clone the repository (if you haven't already):
git clone <your-repo-url>
cd thesisXNLP
- Start all services:
docker compose watch
Or for standard mode:
docker compose up --build
- Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5000
- MongoDB: localhost:27017
- Stop services:
docker compose down
- MongoDB: Automatically started in a container with persistent data storage
- Backend: Flask application with all dependencies installed
- Frontend: React development server with hot reload
- Public Datasets: Automatically seeded on backend startup (casehold.csv, imdb.csv, etc.)
Using docker compose watch provides:
- ✅ Automatic file syncing (no rebuild needed for code changes)
- ✅ Flask auto-reload on Python file changes
- ✅ React hot-reload on frontend changes
- ✅ Automatic rebuilds when dependencies change
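The watch behavior above is configured through a `develop.watch` section in the compose file. A minimal sketch of what such a section can look like — the service name, paths, and target here are illustrative assumptions, not LET's actual compose file:

```yaml
services:
  backend:
    build: ./backend
    develop:
      watch:
        # Sync copies changed source files into the running container
        # without a rebuild (picked up by Flask's auto-reload).
        - action: sync
          path: ./backend
          target: /app
        # Rebuild the image when the dependency manifest changes.
        - action: rebuild
          path: ./backend/requirements.txt
```

The `sync` action covers ordinary code edits, while `rebuild` handles changes that require reinstalling dependencies.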
# View all logs
docker compose logs -f
# View backend logs (Flask HTTP requests)
docker logs -f backend
# View frontend logs
docker logs -f frontend
Create a .env file in the project root to customize settings:
FLASK_SECRET_KEY=your-secure-secret-key-here
For more Docker details, see DOCKER.md.
If you prefer to run services manually without Docker:
- Python 3.10+
- Node.js 16+
- MongoDB 4.4+
If you don't have Conda installed:
macOS / Linux:
# Download Miniconda (recommended) or Anaconda
# Miniconda: https://docs.conda.io/en/latest/miniconda.html
# Anaconda: https://www.anaconda.com/products/distribution
# After installation, restart your terminal or run:
source ~/.bashrc  # or source ~/.zshrc for zsh
Windows:
- Download and install Miniconda or Anaconda using the links above
- Use Anaconda Prompt for terminal commands
conda env create -f environment.yml # Run once to create the environment
conda activate let # Run in every new shell before using the backend
cd backend
cp .env.example .env # On Windows use: copy .env.example .env
# Edit .env with your MongoDB URI and other settings
macOS
- Install MongoDB Community Edition:
brew tap mongodb/brew
brew install mongodb-community@7.0
- Start the service:
brew services start mongodb/brew/mongodb-community
- Verify it is running:
mongosh
Ubuntu / Debian
- Import the MongoDB public key and add the repository (example for 7.0):
curl -fsSL https://pgp.mongodb.com/server-7.0.asc | \
  sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
echo "deb [signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] \
  https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
- Start and enable the service:
sudo systemctl start mongod
sudo systemctl enable mongod
- Check status:
sudo systemctl status mongod
Windows
- Download the MSI installer from https://www.mongodb.com/try/download/community (choose the latest stable version).
- Run the installer and select Install MongoDB as a Service (default settings are fine).
- After installation, open Command Prompt and run:
mongosh
If it opens the shell, the server is running. If not, start the service via Services → MongoDB Server → Start.
Optional seeding (all platforms)
- Import seed data with:
mongorestore --uri "<MONGO_URI>" dump/
or execute scripts via mongosh.
Create or update backend/.env so Flask points to the correct database:
FLASK_SECRET_KEY=change-me
MONGO_URI=mongodb://localhost:27017/auth_app # local development
SESSION_COOKIE_NAME=let_session
UPLOAD_FOLDER=uploads
Flask initializes the MongoDB client through mongo.init_app(app) on startup, so as long as the URI is reachable the database and collections will be created automatically on first write. Make sure the MongoDB service is running before launching Flask.
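Because the client is initialized lazily, a stopped MongoDB service only surfaces as an error on the first database operation. A quick preflight check before launching Flask can catch this earlier. A minimal stdlib-only sketch — the `mongo_reachable` helper is hypothetical and not part of LET:

```python
import os
import socket
from urllib.parse import urlparse

def mongo_reachable(uri: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the MongoDB host/port succeeds."""
    parsed = urlparse(uri)
    host = parsed.hostname or "localhost"
    port = parsed.port or 27017  # MongoDB's default port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False

if __name__ == "__main__":
    uri = os.environ.get("MONGO_URI", "mongodb://localhost:27017/auth_app")
    print("MongoDB reachable:", mongo_reachable(uri))
```

This only verifies the TCP port is open; authentication or database-level problems would still surface when Flask first writes.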
conda activate let
cd explainable-nlp
npm install
Configure provider API keys either:
- During registration via the Settings panel
- In the Settings page after login
At least one provider key is required to run classifications or generate explanations.
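A startup-time sanity check for this requirement can be sketched in a few lines. The environment-variable names below are illustrative assumptions (LET itself stores keys through the Settings panel, not necessarily the environment):

```python
import os

# Hypothetical env-var names for each supported provider.
PROVIDER_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "Gemini": "GEMINI_API_KEY",
    "DeepSeek": "DEEPSEEK_API_KEY",
    "Groq": "GROQ_API_KEY",
}

def configured_providers(env=None) -> list[str]:
    """Return provider names whose key variable is set and non-empty."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_ENV_VARS.items()
            if env.get(var, "").strip()]

if __name__ == "__main__":
    providers = configured_providers()
    if not providers:
        print("No provider keys found; at least one is required.")
    else:
        print("Configured providers:", ", ".join(providers))
```

(Ollama is omitted here because a local deployment needs no API key.)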
To enable Ollama for local model deployment:
# Install Ollama (see https://ollama.ai)
ollama pull llama2 # Or any other model
# Ensure sufficient GPU memory for your chosen model
Note: Ollama is only available in local deployments, not hosted versions.
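Once a model is pulled, Ollama serves a local REST API on port 11434; a non-streaming generation call goes to its /api/generate endpoint. A stdlib-only sketch of building such a request — the helper name is hypothetical, and sending it requires a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint (non-streaming)."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_generate_request("llama2", "Explain this prediction in one sentence.")
    # Requires a running Ollama server; uncomment to send:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.loads(resp.read())["response"])
    print(req.full_url)
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full generated text.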
If you installed using Docker, simply run:
docker compose watch
This starts all services (MongoDB, backend, frontend) automatically. Access the app at http://localhost:3000.
If you installed manually, make sure the Conda environment is active (conda activate let) and run the appropriate script from the project root:
- macOS / Linux:
./start.sh
- Windows (Anaconda Prompt / PowerShell):
start_windows.bat
The script opens a new terminal window for the frontend (npm start) and keeps the backend (python app.py) in the current window. Stop the backend with Ctrl+C and close the frontend window when you are done.
The result scripts/ directory contains a comprehensive analysis pipeline for processing experimental data and generating statistical results. This includes:
- Data processing: Combines raw data from Prolific and friends/family sources
- Hypothesis testing: 16 hypotheses with cluster-robust standard errors
- Demographic analyses: Age, gender, CS/AI expertise, education level, and NLP experience
- Visualization: Automated generation of plots and statistical summaries
Quick Start:
cd "result scripts"
pip install -r requirements.txt
python main.py  # Run all analyses
For detailed documentation, usage examples, and workflow instructions, see result scripts/README.md.
For comprehensive technical documentation, see PROTOTYPE_DOCUMENTATION.md which includes:
- System architecture
- Supported models and datasets
- Evaluation methodology
- User interface design
- Complete prompt templates
For Docker-specific documentation, see DOCKER.md.
If you use LET in your research, please cite:
@mastersthesis{eren2025let,
author = {Yarkin Eren},
title = {LET: LLM Explanation Tool for Evaluating Faithfulness and Plausibility},
school = {Technical University of Munich},
year = {2025}
}