This project provides example chatbots that demonstrate the integration of the TrueConf ChatBot Connector with the corporate video conferencing system TrueConf Server. Each bot is designed for a specific task, from monitoring server status to working with local LLM models that simulate the capabilities of ChatGPT and similar AI services.
The bots are easy to adapt to closed infrastructures and align well with the concept of TrueConf Server as an autonomous solution for secure networks.
Key features:
- Echo Bot: a simple example of handling incoming messages, serving as a template for building custom scenarios.
- Sick Leave Bot: collects employees' messages in a group chat and automatically forwards them to the HR chat, preserving the sender info and content.
- Monitoring Bot: tracks key TrueConf Server metrics via the API, sends alerts on changes, and enables fast response to incidents.
- GPT Bot: works with a local LLM model (in GGUF format), does not require internet access, and can be deployed in isolated environments.
- Flexible Configuration: all parameters are managed through config.toml, including servers, users, and models.
- Localization Support: bots can reply in different languages, including Russian, English, and any manually added ones.
The bots run as a single process and can be enabled or disabled through the configuration. The project's architecture allows extending functionality, connecting new modules, and tailoring behavior to your organization's needs.
For easier development, the project is divided into logical modules.
app/
├── bots/
│   ├── _echo_bot.py         # Echo Bot: replies with the same message
│   ├── _gpt_bot.py          # GPT-based bot (AI)
│   ├── _hospital_bot.py     # Bot for automating sick leave reports
│   ├── _monitoring_bot.py   # Server monitoring bot
│   ├── __init__.py          # Package initializer for bots
│   ├── config.py            # Pydantic model configurator
│   └── utils.py             # Utility functions
│
config.toml # Project configuration constants
main.py # Main entry point: launches all bots
The constants used in this project are defined in config.toml. The configuration format is TOML, which is convenient for storing structured data. This section describes the global parameters shared across all bots. Settings for individual bots are covered in their respective sections.
Example:
bots_language = "ru"            # Bots' response language
server_address = "10.140.0.33"  # IP address or FQDN of TrueConf Server

Caution

Do not rename the parameters, as they are referenced directly in the code. You may only change their values.
When the project starts, a special configurator automatically converts config.toml into config_models.py, a module of Pydantic models.
Warning
You must manually delete config_models.py if new parameters are added to config.toml; otherwise, the changes will not take effect.
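For illustration, the core of such a TOML-to-Pydantic step can look like the sketch below. It assumes Python 3.11+ (for the built-in tomllib) and Pydantic v2; the model name GlobalSettings is hypothetical, and the real configurator in app/bots/config.py additionally writes the generated models out to config_models.py.

```python
# Illustrative sketch only; the actual configurator may differ.
import tomllib  # Python 3.11+; use the "tomli" package on 3.10
from pydantic import BaseModel

class GlobalSettings(BaseModel):  # hypothetical model name
    bots_language: str = "en"
    server_address: str

with open("config.toml", "rb") as f:
    raw = tomllib.load(f)

# Keep only top-level scalars; per-bot tables like [echo_bot]
# would be validated by their own models.
settings = GlobalSettings(**{k: v for k, v in raw.items() if not isinstance(v, dict)})
print(settings.server_address)
```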
The project supports internationalization: bots can respond in a specified language. By default, English (en) is used.
To add a new language:
- Copy app/locales/en.yml into a new file, e.g., de.yml.
- Translate the strings inside the new file.
- Set the new language code in the bots_language parameter of config.toml.
You can also edit the strings in existing language files to adjust default phrasing.
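As a rough sketch of how a bot can pick up these strings at runtime (assuming PyYAML is available; the greeting key is hypothetical, so check app/locales/en.yml for the real structure):

```python
# Hypothetical example of loading the locale selected by bots_language.
from pathlib import Path
import yaml  # PyYAML

def load_locale(language: str) -> dict:
    locale_file = Path("app/locales") / f"{language}.yml"
    with locale_file.open(encoding="utf-8") as f:
        return yaml.safe_load(f)

strings = load_locale("en")
# "greeting" is an assumed key, used here only for illustration.
print(strings.get("greeting", "Hello!"))
```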
The bot requires a dedicated account on TrueConf Server. Create one through the admin panel, e.g., named echo_bot, and provide the corresponding login and password in the configuration file:
[echo_bot]
username = "echo_bot"
password = "verystrongpassword"Note
The Echo Bot replies with the same message it receives from the user. Itβs useful for testing connectivity and debugging.
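As a conceptual illustration of the pattern (not the actual TrueConf ChatBot Connector API, whose class and method names are not reproduced here), an echo handler boils down to returning the incoming text unchanged; see app/bots/_echo_bot.py for the real implementation:

```python
# Conceptual stand-in only: Message and EchoBot are hypothetical types,
# NOT the TrueConf ChatBot Connector API. The real handler lives in
# app/bots/_echo_bot.py.
from dataclasses import dataclass

@dataclass
class Message:
    chat_id: str
    text: str

class EchoBot:
    def handle(self, incoming: Message) -> Message:
        # Echo: reply to the same chat with the same text.
        return Message(chat_id=incoming.chat_id, text=incoming.text)

bot = EchoBot()
print(bot.handle(Message(chat_id="chat-1", text="ping")).text)  # -> ping
```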
This bot also requires a dedicated account on TrueConf Server. Create one in the admin panel, e.g., named monitoring_bot, and specify the login and password in the configuration file:
[monitoring_bot]
username = "monitoring_bot"
password = "verystrongpassword"Since the monitoring bot accesses server statistics via API, you need to:
- Create an OAuth application in the TrueConf Server admin panel with the following scopes: statistics:read and server.license:read.
- Copy the client_id and client_secret values and paste them into the configuration file:
[monitoring_bot]
client_id = "your_client_id"
client_secret = "your_client_secret"Note
You donβt need to manually obtain an access_token. The script automatically requests and refreshes the token whenever the bot interacts with the server.
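Under the hood this is a standard OAuth 2.0 client-credentials exchange. The sketch below uses the requests library; the /oauth2/v1/token path follows the usual TrueConf Server API convention but is an assumption here, so verify it against your server's API documentation.

```python
# Sketch of the token request the monitoring bot performs automatically.
# ASSUMPTION: endpoint path /oauth2/v1/token; confirm with your server docs.
import requests

SERVER = "10.140.0.33"  # server_address from config.toml

def get_access_token(client_id: str, client_secret: str) -> str:
    response = requests.post(
        f"https://{SERVER}/oauth2/v1/token",
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# The token is then attached to statistics/license API calls until it
# expires, at which point the bot simply requests a new one.
```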
The bot requires a dedicated account on TrueConf Server. Create one through the admin panel, e.g., named hospital_bot, and specify the login and password in the configuration file:
[hospital_bot]
username = "hospital_bot"
password = "verystrongpassword"The bot also requires a dedicated account on TrueConf Server. Create one in the admin panel, e.g., named gpt_bot, and provide the corresponding login and password in the configuration file:
[gpt_bot]
username = "gpt_bot"
password = "verystrongpassword"The gpt_bot works with a local LLM model (e.g., LLaMA, Mistral, etc.) via the llama-cpp-python library. The model runs on your machine β locally, on CPU or GPU, without connecting to cloud services.
This approach enables:
- Building functionality similar to ChatGPT, Claude, Gemini, DeepSeek, and others inside a closed system;
- Ensuring maximum data privacy: no request ever leaves the local network;
- Deploying an AI bot in offline or fully isolated infrastructures;
- Integrating AI into corporate systems, including TrueConf Server, which is also designed for on-premise isolated networks.
Tip
This setup is ideal for enterprises, government organizations, defense, education, and healthcare institutions where external APIs and cloud LLMs are prohibited due to security requirements.
To use gpt_bot with a local LLM model (e.g., LLaMA or GGUF-based alternatives), configure the parameters in the [gpt_bot.llama] section of the configuration file.
- Model Parameters
[gpt_bot.llama]
repo_id = "Qwen/Qwen2.5-7B-Instruct-GGUF"
filename = "qwen2.5-7b-instruct-q3_k_m.gguf"

repo_id – the model identifier on Hugging Face, e.g., Qwen/Qwen2.5-7B-Instruct-GGUF.
filename – the model file in .gguf format that must be downloaded from the repository.
- Local Storage
[gpt_bot.llama]
local_dir = "/path/to/models"

local_dir – the directory on your machine where the model is stored after the first download. Once downloaded, the model is reused from this location, allowing the bot to run without constant internet access.
- Token Settings
[gpt_bot.llama]
n_ctx = 2048
max_tokens = 512

n_ctx – maximum number of input tokens (context window size).
max_tokens – maximum number of tokens the model can generate in a response.
Tip
Increasing n_ctx allows processing longer inputs but requires more resources.
The max_tokens value directly affects response length and performance.
- GPU Usage
[gpt_bot.llama]
n_gpu_layers = -1

n_gpu_layers – the number of layers to offload to the GPU (if available). Options:
- -1 – use all possible layers on the GPU (recommended if you have sufficient VRAM).
- 0 – disable the GPU (run entirely on the CPU).
- >0 – offload only the specified number of layers to the GPU and run the rest on the CPU (useful with limited VRAM).
Tip
Use -1 for powerful GPUs (8GB+ VRAM).
If you encounter memory errors, try lowering the value to 16, 8, 4, or even 0.
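To see how all these parameters fit together, here is a minimal usage sketch with llama-cpp-python, mirroring the values above: Llama.from_pretrained downloads the GGUF file from Hugging Face into local_dir on the first call and reuses it afterwards.

```python
# How the [gpt_bot.llama] settings map onto llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-7B-Instruct-GGUF",     # repo_id
    filename="qwen2.5-7b-instruct-q3_k_m.gguf",  # filename
    local_dir="/path/to/models",                 # local_dir
    n_ctx=2048,                                  # context window size
    n_gpu_layers=-1,                             # offload all layers to GPU
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    max_tokens=512,                              # max_tokens
)
print(reply["choices"][0]["message"]["content"])
```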
Final structure of the [gpt_bot.llama] section:
[gpt_bot.llama]
repo_id = "Qwen/Qwen2.5-7B-Instruct-GGUF"
filename = "qwen2.5-7b-instruct-q3_k_m.gguf"
local_dir = "/path/to/models"
n_ctx = 2048
max_tokens = 512
n_gpu_layers = -1

To run this project on your machine, you need the following:
- Python – tested on version 3.13.2; compatible with versions 3.10 and higher.
- Pipenv – virtual environment and dependency manager. Install it with:

pip install pipenv
After installing pipenv, navigate to the project directory and run:
pipenv install

This installs all dependencies listed in the Pipfile, including llama-cpp-python, which gpt_bot uses to run local LLM models. The package is built from source during installation, so a C compiler is required (or use the prebuilt wheels):
- Linux – gcc or clang (most distros come with gcc preinstalled);
- Windows – Visual Studio or MSYS2;
- macOS – Xcode (available in the App Store).
If you don't need GPU support and prefer a CPU-only build, install a prebuilt package:
- Install llama-cpp-python:

pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

- Then install the remaining packages:

pipenv install

Note
This method is best for quick setup and testing. Performance will be lower compared to GPU versions.
To install a prebuilt package with CUDA (NVIDIA GPU) support, the requirements are:
- CUDA version 12.1, 12.2, 12.3, 12.4, or 12.5
- Python 3.10, 3.11, or 3.12
pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/<cuda-version>

Where <cuda-version> can be:
- cu121: CUDA 12.1
- cu122: CUDA 12.2
- cu123: CUDA 12.3
- cu124: CUDA 12.4
- cu125: CUDA 12.5
Example for CUDA 12.1:
pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121

Then install the remaining dependencies:

pipenv install

To install a prebuilt package with Metal (Apple Silicon) support, the requirements are:
- macOS 11.0 or later;
- Python 3.10, 3.11, or 3.12.
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal

If you use another GPU backend (e.g., AMD ROCm) or the installation fails on Windows, build the package manually.
Tip
Check supported backends in the official repository.
- Install MSYS2 in the default directory C:\msys64.
- Launch the MSYS2 UCRT64 terminal.
- Install the packages:

pacman -S mingw-w64-ucrt-x86_64-toolchain mingw-w64-ucrt-x86_64-cmake mingw-w64-ucrt-x86_64-make git mingw-w64-ucrt-x86_64-python mingw-w64-ucrt-x86_64-python-pip

- Verify the versions:
gcc --version
cmake --version
git --version
python --version
pip --version

- Build llama-cpp-python with AMD ROCm (hipBLAS) support:

CMAKE_ARGS="-DGGML_HIPBLAS=on" pipenv run pip install llama-cpp-python

- Install the remaining dependencies:
pipenv install

After configuring and installing the dependencies, start the bots with:

pipenv run python main.py

Tip
First Run
- A special configurator generates the config_models.py file required by the project. Do not delete it!
- The model is downloaded from Hugging Face, after which the bots launch automatically. On subsequent runs, the bots start immediately.
For proper operation, the bot must be added to a group chat. On first addition, the bot automatically saves the chat_id and chat_name in the configuration and sends a welcome message.
Caution
After that, do not add the bot to other group chats: doing so resets the current binding, and the bot will forward messages to the new chat instead. If necessary, you can extend this mechanism by forking the repository.