Flm Compagnon is a modern GUI that accompanies and manages the FastFlowLM (FLM) project. It offers a smooth experience for interacting with your local AI models, monitoring the server, and managing your configurations.
Important
This application requires FastFlowLM (FLM) to be installed on your system. Without it, Flm Compagnon will not function.
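A minimal pre-flight check could verify that the FLM CLI is reachable before the GUI proceeds. This is a hypothetical sketch, not the app's actual startup code; the `--version` flag is an assumption about the FLM CLI.

```typescript
// Hypothetical pre-flight check: is the `flm` CLI on PATH?
// Assumption: `flm --version` exits successfully when FLM is installed.
import { spawnSync } from "node:child_process";

function commandExists(cmd: string): boolean {
  // spawnSync sets `error` (e.g. ENOENT) when the binary cannot be started.
  const result = spawnSync(cmd, ["--version"], { encoding: "utf8" });
  return result.error === undefined;
}

// Example: warn early instead of failing later with a cryptic error.
if (!commandExists("flm")) {
  console.warn("FastFlowLM (flm) not found; Flm Compagnon cannot start the server.");
}
```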
- Models: Model manager (download, delete, inspect details).
- Server: Configuration and management of the FLM server instance.
- Resource Monitor: Real-time NPU and RAM usage monitoring, with historical charts while the server is running.
- Presets: Save and manage custom server configurations as presets for quick access.
- System Tray: Quick access to server controls, model selection, presets, and status directly from the notification area.
- Auto-Updates: Automatic update check at startup for both FLM Companion and FastFlowLM with integrated installer.
- Auto-start: Option to launch the application automatically at Windows startup.
- Start Minimized: Option to launch the application minimized to the system tray (configurable in settings).
- Settings: Application customization.
- About: View application version, hardware information, FLM changelog, and check for updates.
- Multilingual: Interface available in English, French, and Japanese.
- Theme: Light and dark themes.
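For reference, a "run at startup" option on Windows typically maps to a value under the per-user Run registry key. The sketch below is an illustration of that mechanism, not necessarily how Flm Compagnon implements it; the value name and use of `reg.exe` are assumptions.

```typescript
// Sketch of a "run at startup" toggle via the HKCU Run key and reg.exe.
// Hypothetical helper, not Flm Compagnon's actual implementation.
import { spawnSync } from "node:child_process";

const RUN_KEY = String.raw`HKCU\Software\Microsoft\Windows\CurrentVersion\Run`;

function autostartArgs(appName: string, exePath: string, enable: boolean): string[] {
  // Build the reg.exe argument list: add a REG_SZ value, or delete it.
  return enable
    ? ["add", RUN_KEY, "/v", appName, "/t", "REG_SZ", "/d", exePath, "/f"]
    : ["delete", RUN_KEY, "/v", appName, "/f"];
}

function setAutostart(appName: string, exePath: string, enable: boolean): void {
  // Only meaningful on Windows; spawns reg.exe with the built arguments.
  spawnSync("reg", autostartArgs(appName, exePath, enable));
}
```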
Completed
- Add NPU and RAM usage monitoring and display (real-time stats)
- Code cleanup and optimization
- Fix the server management design for consistency
- Add a version check for the companion application (+ changelog)
- Add caching for the list of models, CPU version, and RAM
- Force an update of the model list on the server configuration side when models are modified (download, delete)
- Add menus to the notification area icon (server management, models)
- Complete the translation of all texts for multilingual support
- FLM update 0.9.21 → add the option to launch the server without a model, using ASR (Whisper)
- FLM update 0.9.22 → add the option to launch the server with host parameters
- Add a startup check when FLM is launched (verify model availability and server prerequisites)
- Add an automatic update check at application startup with integrated installer
- Finalize saving and loading of custom usage configuration (persist user presets)
- Add preset management system (save, delete, and quick access to server configurations)
- Display FLM changelog in About view
- Mitigate GitHub API rate limiting with caching to prevent 403 errors
- Ensure "Run at startup" setting is preserved across updates and installer actions
- Add an in-app memory / resource calculator for chosen model + server configuration
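The resource calculator mentioned above can be approximated with a standard back-of-the-envelope formula: weight memory plus KV-cache memory. This is a hedged sketch with generic parameter names, not FLM's exact accounting.

```typescript
// Rough LLM memory estimate: weights + KV cache, in GiB.
// weights ≈ parameterCount × bytesPerWeight (e.g. 0.5 for 4-bit quantization)
// kvCache ≈ 2 (K and V) × layers × hiddenSize × contextLength × bytesPerKV
function estimateMemoryGiB(model: {
  parameterCount: number;
  bytesPerWeight: number;
  layers: number;
  hiddenSize: number;
  contextLength: number;
  bytesPerKV: number;
}): number {
  const weights = model.parameterCount * model.bytesPerWeight;
  const kvCache =
    2 * model.layers * model.hiddenSize * model.contextLength * model.bytesPerKV;
  return (weights + kvCache) / 1024 ** 3;
}
```

For example, an 8B-parameter model quantized to 4 bits with a 4096-token fp16 KV cache lands around 5–6 GiB; FLM's own calculator may count additional buffers.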
This project is open-source and open to contributions! Feel free to propose improvements via Pull Requests or report issues.
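The GitHub rate-limit mitigation in the list above relies on caching. A common approach is conditional requests: GitHub returns 304 Not Modified for a matching `If-None-Match` header, and 304 responses do not count against the primary rate limit. The sketch below uses hypothetical function names and an in-memory cache.

```typescript
// Sketch of ETag-based caching for GitHub release checks (names hypothetical).
type CacheEntry = { etag: string; body: unknown };
const releaseCache = new Map<string, CacheEntry>();

function conditionalHeaders(cached: CacheEntry | undefined): Record<string, string> {
  // Send If-None-Match only when we have a previous ETag to revalidate.
  return cached ? { "If-None-Match": cached.etag } : {};
}

async function fetchLatestRelease(url: string): Promise<unknown> {
  const cached = releaseCache.get(url);
  const res = await fetch(url, { headers: conditionalHeaders(cached) });
  if (res.status === 304 && cached) return cached.body; // unchanged: reuse cache
  const body = await res.json();
  const etag = res.headers.get("etag");
  if (etag) releaseCache.set(url, { etag, body });
  return body;
}
```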
To run the project in development mode, you will need Rust and Node.js.
- Install JavaScript dependencies:

  ```sh
  npm install
  ```

- Run the application in development mode:

  ```sh
  npm run tauri dev
  ```
To build the project in release mode, you will need Rust and Node.js.
- Install JavaScript dependencies:

  ```sh
  npm install
  ```

- Build the application in release mode:

  ```sh
  npm run tauri build
  ```

