Demonstration of Chatbot Functionality in TrueConf Server


Introduction

This project provides example chatbots that demonstrate the integration of TrueConf ChatBot Connector with the corporate video conferencing system TrueConf Server. Each bot is designed for a specific task: from monitoring server status to working with local LLM models, simulating the capabilities of ChatGPT and similar AI services.

The bots are easy to adapt to closed infrastructures and align well with the concept of TrueConf Server: an autonomous solution for secure networks.

Key features:

  • 💬 Echo Bot: a simple example of handling incoming messages, serving as a template for building custom scenarios.
  • 🏥 Sick Leave Bot: collects employees' messages in a group chat and automatically forwards them to the HR chat, preserving sender info and content.
  • 📊 Monitoring Bot: tracks key TrueConf Server metrics via API, sends alerts on changes, and enables fast response to incidents.
  • 🤖 GPT Bot: works with a local LLM model (in GGUF format), does not require internet access, and can be deployed in isolated environments.
  • ⚙️ Flexible Configuration: all parameters are managed through config.toml, including servers, users, and models.
  • 🌐 Localization Support: bots can reply in different languages, including Russian, English, and any manually added ones.

The bots run as a single process and can be enabled or disabled through configuration. The project's architecture allows extending functionality, connecting new modules, and tailoring behavior to your organization's needs.
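For illustration, a single-process launcher can be as small as the sketch below. This is a simplified stand-in, not the project's actual main.py: the bot coroutines are stubs, and the real bots exchange messages through TrueConf ChatBot Connector.

import asyncio

async def echo_bot():
    # Stub work loop; the real bot receives and answers chat messages.
    while True:
        await asyncio.sleep(1)

async def monitoring_bot():
    # Stub work loop; the real bot polls server metrics via the API.
    while True:
        await asyncio.sleep(5)

async def main():
    # Only the bots enabled in config.toml would be gathered here.
    await asyncio.gather(echo_bot(), monitoring_bot())

if __name__ == "__main__":
    asyncio.run(main())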

Project Structure

For easier development, the project is divided into logical modules.

app/
├── bots/
│   ├── _echo_bot.py           # Echo Bot: replies with the same message
│   ├── _gpt_bot.py            # GPT-based bot (AI)
│   ├── _hospital_bot.py       # Bot for automating sick leave reports
│   ├── _monitoring_bot.py     # Server monitoring bot
│   ├── __init__.py            # Package initializer for bots
│   ├── config.py              # Pydantic model configurator
│   └── utils.py               # Utility functions
config.toml                    # Project configuration constants
main.py                        # Main entry point: launches all bots

Configuration

The constants used in this project are defined in config.toml. The configuration format is TOML, which is convenient for storing structured data. This section describes the global parameters shared across all bots. Settings for individual bots are covered in their respective sections.

Example:

bots_language = "ru" # Bots' response language
server_address = "10.140.0.33" # IP address or FQDN of TrueConf Server

Caution

Do not rename parameters, as they are directly referenced in the code. You may only change their values.

Configuration Conversion

When the project starts, a special configurator automatically converts config.toml into config_models.py with Pydantic objects.

Warning

You must manually delete config_models.py if new parameters are added to config.toml; otherwise, the changes will not take effect.
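As a rough illustration of the conversion (the generated config_models.py may be structured differently), turning a TOML section into a Pydantic object looks like this:

import tomllib  # standard library in Python 3.11+

from pydantic import BaseModel

# Hypothetical model mirroring the [echo_bot] section shown later in this README.
class EchoBotConfig(BaseModel):
    username: str
    password: str

with open("config.toml", "rb") as f:
    raw = tomllib.load(f)

echo_cfg = EchoBotConfig(**raw["echo_bot"])
print(echo_cfg.username)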

Localization

The project supports internationalization: bots can respond in a specified language. By default, English (en) is used.

To add a new language:

  1. Copy app/locales/en.yml into a new file, e.g., de.yml.
  2. Translate the strings inside the new file.
  3. Set the new language code in the bots_language parameter of config.toml.

You can also edit the strings in existing language files to adjust default phrasing.
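As an illustration of how a bot might pick up these strings (the greeting key is hypothetical; see app/locales/en.yml for the real keys):

import yaml  # PyYAML

# Load the locale file that matches the bots_language setting.
lang = "de"
with open(f"app/locales/{lang}.yml", encoding="utf-8") as f:
    strings = yaml.safe_load(f)

print(strings["greeting"])  # hypothetical key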

Bot Configuration

Echo Bot

The bot requires a dedicated account on TrueConf Server. Create one through the admin panel, e.g., named echo_bot, and provide the corresponding login and password in the configuration file:

[echo_bot]
username = "echo_bot"
password = "verystrongpassword"

Note

The Echo Bot replies with the same message it receives from the user. It's useful for testing connectivity and debugging.
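Conceptually, the handler is just "send back what was received". The sketch below shows that shape with a stub transport; the names are illustrative and do not reflect the TrueConf ChatBot Connector API:

# Echo logic with a stub transport; names are illustrative only.
def on_message(chat_id, text, send):
    # Reply to the same chat with the exact text that was received.
    send(chat_id, text)

sent = []
on_message("chat42", "hello", lambda cid, msg: sent.append((cid, msg)))
assert sent == [("chat42", "hello")]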

Server Monitoring Bot

This bot also requires a dedicated account on TrueConf Server. Create one in the admin panel, e.g., named monitoring_bot, and specify the login and password in the configuration file:

[monitoring_bot]
username = "monitoring_bot"
password = "verystrongpassword"

OAuth Access to Statistics

Since the monitoring bot accesses server statistics via API, you need to:

  1. Create an OAuth application in the TrueConf Server admin panel with the following scopes:

    • statistics:read;
    • server.license:read.
  2. Copy the client_id and client_secret values and paste them into the configuration file:

[monitoring_bot]
client_id = "your_client_id"
client_secret = "your_client_secret"

Note

You don't need to manually obtain an access_token. The script automatically requests and refreshes the token whenever the bot interacts with the server.
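For reference, a client-credentials token request with the requests library might look like the sketch below. The /oauth2/v1/token path is an assumption based on the TrueConf Server API and should be verified against your server's documentation:

import requests

SERVER = "10.140.0.33"  # server_address from config.toml

# Token endpoint path is an assumption; check your TrueConf Server API docs.
resp = requests.post(
    f"https://{SERVER}/oauth2/v1/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "your_client_id",
        "client_secret": "your_client_secret",
    },
    verify=False,  # self-signed certificates are common in closed networks
)
resp.raise_for_status()
access_token = resp.json()["access_token"]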

Hospital Bot

The bot requires a dedicated account on TrueConf Server. Create one through the admin panel, e.g., named hospital_bot, and specify the login and password in the configuration file:

[hospital_bot]
username = "hospital_bot"
password = "verystrongpassword"

GPT Bot

The bot also requires a dedicated account on TrueConf Server. Create one in the admin panel, e.g., named gpt_bot, and provide the corresponding login and password in the configuration file:

[gpt_bot]
username = "gpt_bot"
password = "verystrongpassword"

Using a Local LLM Model

The gpt_bot works with a local LLM model (e.g., LLaMA, Mistral, etc.) via the llama-cpp-python library. The model runs on your machine, locally on CPU or GPU, without connecting to cloud services.

This approach enables:

  • Building functionality similar to ChatGPT, Claude, Gemini, DeepSeek, and others inside a closed system;
  • Ensuring maximum data privacy: no request ever leaves the local network;
  • Deploying an AI bot in offline or fully isolated infrastructures;
  • Integrating AI into corporate systems, including TrueConf Server, which is also designed for on-premise isolated networks.

Tip

This setup is ideal for enterprises, government organizations, defense, education, and healthcare institutions where external APIs and cloud LLMs are prohibited due to security requirements.

LLM Model Configuration

To use gpt_bot with a local LLM model (e.g., LLaMA or GGUF-based alternatives), configure the parameters in the [gpt_bot.llama] section of the configuration file.

  1. Model Parameters
[gpt_bot.llama]
repo_id = "Qwen/Qwen2.5-7B-Instruct-GGUF"
filename = "qwen2.5-7b-instruct-q3_k_m.gguf"
  • repo_id: the model identifier on Hugging Face, e.g., Qwen/Qwen2.5-7B-Instruct-GGUF.
  • filename: the model file in .gguf format that must be downloaded from the repository.
  2. Local Storage
[gpt_bot.llama]
local_dir = "/path/to/models"
  • local_dir: the directory on your machine where the model will be stored after the first download. Once downloaded, the model is reused from this location, allowing the bot to run without constant internet access.
  3. Token Settings
[gpt_bot.llama]
n_ctx = 2048
max_tokens = 512
  • n_ctx: maximum number of input tokens (context window size).
  • max_tokens: maximum number of tokens the model can generate in response.

Tip

Increasing n_ctx allows processing longer inputs but requires more resources. The max_tokens value directly affects response length and performance.

  4. GPU Usage
[gpt_bot.llama]
n_gpu_layers = -1
  • n_gpu_layers: number of layers to offload to GPU (if available). Options:

    • -1: use all possible layers on GPU (recommended if you have sufficient VRAM).
    • 0: disable GPU (run entirely on CPU).
    • >0: offload only the specified number of layers to GPU and run the rest on CPU (useful for limited VRAM).

Tip

Use -1 for powerful GPUs (8GB+ VRAM). If you encounter memory errors, try lowering the value to 16, 8, 4, or even 0.

💡 Final structure of the [gpt_bot.llama] section:

[gpt_bot.llama]
repo_id = "Qwen/Qwen2.5-7B-Instruct-GGUF"
filename = "qwen2.5-7b-instruct-q3_k_m.gguf"
local_dir = "/path/to/models"
n_ctx = 2048
max_tokens = 512
n_gpu_layers = -1
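
These parameters map directly onto llama-cpp-python. Below is a minimal sketch of loading and querying the model with the values above; the bot's actual wiring is simplified away:

from llama_cpp import Llama

# Downloads the GGUF file from Hugging Face on first use,
# then reuses it from local_dir on later runs.
llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-7B-Instruct-GGUF",
    filename="qwen2.5-7b-instruct-q3_k_m.gguf",
    local_dir="/path/to/models",
    n_ctx=2048,
    n_gpu_layers=-1,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])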

Environment Setup

Dependencies

To run this project on your machine, you need the following:

  • Python: tested on version 3.13.2; compatible with versions 3.10 and higher.

  • Pipenv: virtual environment and dependency manager. Install with:

    pip install pipenv

Installing Project Dependencies

After installing pipenv, navigate to the project directory and run:

pipenv install

This installs all dependencies listed in Pipfile, including llama-cpp-python, which gpt_bot uses to run local LLM models. The package is built from source during installation, so a C/C++ compiler is required (or use the prebuilt wheels described below):

  • Linux: gcc or clang (most distros come with gcc preinstalled);
  • Windows: Visual Studio or MSYS2;
  • macOS: Xcode (available on the App Store).

⚡ Installing a Prebuilt llama-cpp-python Wheel (.whl)

CPU-only

If you don't need GPU support and prefer a CPU-only build, install a prebuilt package:

  1. Install llama-cpp-python:
pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
  2. Then install the remaining packages:
pipenv install

Note

This method is best for quick setup and testing. Performance will be lower compared to GPU versions.

GPU (CUDA, Metal)
CUDA

Requirements:

  • CUDA version 12.1, 12.2, 12.3, 12.4, or 12.5
  • Python 3.10, 3.11, or 3.12
pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/<cuda-version>

Where <cuda-version> can be:

  • cu121: CUDA 12.1
  • cu122: CUDA 12.2
  • cu123: CUDA 12.3
  • cu124: CUDA 12.4
  • cu125: CUDA 12.5

Example for CUDA 12.1:

pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121

Then install the remaining dependencies:

pipenv install

Metal (macOS)

Requirements:

  • macOS 11.0 or later;
  • Python 3.10, 3.11, or 3.12.
pipenv run pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal

Then install the remaining dependencies:

pipenv install

🔧 Manual Build for Other GPU Backends

If you use another GPU backend (e.g., AMD ROCm) or installation fails on Windows, build the package manually.

Tip

Check supported backends in the official repository.

  1. Install MSYS2 in the default directory C:\msys64.
  2. Launch MSYS2 UCRT64 terminal.
  3. Install packages:
pacman -S mingw-w64-ucrt-x86_64-toolchain mingw-w64-ucrt-x86_64-cmake mingw-w64-ucrt-x86_64-make git mingw-w64-ucrt-x86_64-python mingw-w64-ucrt-x86_64-python-pip
  4. Verify versions:
gcc --version
cmake --version
git --version
python --version
pip --version
  5. Build llama-cpp-python with AMD ROCm (hipBLAS):
CMAKE_ARGS="-DGGML_HIPBLAS=on" pipenv run pip install llama-cpp-python
  6. Install the remaining dependencies:
pipenv install

Running the Project

After configuring and installing dependencies, start the bots with:

pipenv run python main.py

Tip

First Run

  1. A special configurator generates the config_models.py file required by the project. Do not delete it!
  2. The LLM model is downloaded from Hugging Face. After that, the bots launch automatically. On subsequent runs, the bots start immediately.

Adding the Hospital Bot to a Chat

For proper operation, the bot must be added to a group chat. On first addition, the bot automatically saves the chat_id and chat_name in the configuration and sends a welcome message.

Caution

After adding, do not add the bot to other group chats; this resets the current binding, and the bot will forward messages to the new chat. If necessary, you can extend this mechanism by forking the repository.
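As a rough sketch of the binding logic (simplified; the actual bot persists these values back into its configuration rather than keeping them in memory):

# First-add binding sketch; the real bot writes chat_id/chat_name to config.
state = {"chat_id": None, "chat_name": None}

def on_added_to_chat(chat_id, chat_name):
    # The latest addition overwrites the binding, which is why the
    # caution above warns against adding the bot to other chats.
    state["chat_id"] = chat_id
    state["chat_name"] = chat_name

on_added_to_chat("hr_chat_01", "HR Department")
assert state["chat_id"] == "hr_chat_01"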
