Duration: 3-5 minutes
Goal: Demonstrate the end-to-end flow of fine-tuning a small LLM on CPU.
- Ensure LM Studio is running with a model loaded (port 1234).
- Ensure Backend is running (`python -m uvicorn main:app --reload`).
- Ensure Frontend is running (`npm run dev`).
- Have a small sample dataset ready (e.g., `sample_data.json` with 10-20 examples; see the sketch below).
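The exact schema depends on what the upload validator expects; a minimal `sample_data.json` might look like this (the instruction/response field names are an assumption, adjust to the real format):

```json
[
  {"instruction": "What is LoRA?", "response": "LoRA (Low-Rank Adaptation) fine-tunes a model by training small low-rank matrices instead of all of its weights."},
  {"instruction": "Why train on CPU?", "response": "CPU training is slower, but it works on any laptop with no GPU or cloud costs."}
]
```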
- Say: "This is FineTuneLite, a local studio for fine-tuning LLMs on CPU."
- Show: Dashboard page. Point out the "System Status" (CPU/RAM usage).
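If anyone asks what powers the System Status panel, it can be narrated in a few lines; a minimal sketch assuming the backend uses `psutil` (the `/api/system/status` route is hypothetical):

```python
import psutil
from fastapi import FastAPI

app = FastAPI()

@app.get("/api/system/status")  # hypothetical route name; match it to the real backend
def system_status():
    # cpu_percent(interval=0.1) samples CPU usage over a short window
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "ram_percent": psutil.virtual_memory().percent,
    }
```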
- Action: Navigate to Datasets.
- Action: Click "Upload Dataset". Select
sample_data.json. - Show: The dataset appears in the list with row count and size.
- Say: "We support JSON and CSV. The system automatically parses and validates the data."
- Action: Navigate to Fine-tune.
- Step 1: Select `ibm-granite-4.0-h-tiny` as the base model.
- Step 2: Select the uploaded dataset. Set Epochs=1, Batch Size=1 (for speed).
- Step 3: Review settings and click "Start Training".
- Say: "We use LoRA (Low-Rank Adaptation) to make training efficient enough for a laptop CPU."
- Action: You are redirected to Training Jobs.
- Show: The job status changes to "Running".
- Show: The loss graph (if implemented) or status updates.
- Say: "The backend runs the training loop in a background thread, logging loss metrics to SQLite."
- Action: Navigate to Playground.
- Action: Select a model from the dropdown (defaults to IBM Granite 4.0 H Tiny).
- Action: Type "Hello, who are you?".
- Show: The model responds.
- Say: "Inference is handled via LM Studio, ensuring fast local responses with any model you have loaded."
- Action: Toggle "Teacher/Critic Mode" ON.
- Action: Select a student model (e.g., a smaller model) and a teacher model (e.g., Granite).
- Action: Send a message and click "Ask Teacher for Feedback".
- Show: The teacher model provides critique and improved answer.
- Say: "This feature lets you use a stronger model to evaluate and improve responses from smaller models."
- "FineTuneLite makes custom LLMs accessible to anyone with a standard laptop, preserving privacy and eliminating cloud costs."