This repository was archived by the owner on Dec 14, 2025. It is now read-only.
# VeloraAI V1 Beta
## ✨ Features

- ⚡ Quick-Start Model Loading — Choose from pre-integrated models or load your own via `TestingMode`.
- 🧠 Support for Multiple Models — CrystalThink, Qwen, Mistral, DeepSeek, Llama, and more.
- 🔁 Event-Driven Response System — React to `TextGenerated`, `ResponseStarted`, and `ResponseEnded` in real time.
- 🔐 Customizable System Prompts — Use friendly or aggressive instruction styles (e.g., `NoBSMode`).
- 📦 Model Downloader — Automatically fetches models from Hugging Face if not already available.
- 📷 Experimental Vision Mode — Send an image plus a prompt for visual reasoning (WIP).
## 🧱 Built With
- LLamaSharp — Backbone inference engine.
- .NET 8.0 — Modern C# support.
- WinForms & Console — Sample UI and CLI clients included.
## 📂 Models Available
| Model | Size | Strengths |
|---|---|---|
| Crystal_Think_V2_Q4 | 2.32 GB | 🥇 Fast, tiny, math-heavy reasoning, Chain-of-Thought format |
| Qwen_V3_4B_Chat | 2.70 GB | 🥈 Fast general model with good code and reasoning |
| Mistral_7B_Chat | 2.87 GB | 🥉 Informative and precise longer-form chat |
| Llama_7B_Chat | 3.07 GB | Reliable general conversations |
| DeepSeek_6B_Coder | 3.07 GB | Code generation, math-only |
| DeepSeek_7B_Chat | 5.28 GB | Slower general chat, strong context retention |
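Each row in the table corresponds to a member of the `VeloraAI.Models` enum passed to `AuthenticateAsync`, so switching models is a one-line change. A sketch (it is an assumption, not confirmed by these notes, that re-authenticating swaps the active model):

```csharp
// Pick the smallest, fastest model from the table above.
var fast = await VeloraAI.AuthenticateAsync(VeloraAI.Models.Crystal_Think_V2_Q4);

// Or trade speed for stronger context retention with the larger chat model.
// Assumption: calling AuthenticateAsync again switches the active model.
var large = await VeloraAI.AuthenticateAsync(VeloraAI.Models.DeepSeek_7B_Chat);
```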
## 🔧 Usage

### 1. Authenticate and Start Chatting

```csharp
var result = await VeloraAI.AuthenticateAsync(VeloraAI.Models.Crystal_Think_V2_Q4);

if (result == VeloraAI.AuthState.Authenticated)
{
    await VeloraAI.AskAsync("What is the capital of France?");
}
```

### 2. Hook Into Events

```csharp
VeloraAI.TextGenerated += (_, text) => Console.Write(text);
VeloraAI.ResponseStarted += (_, __) => Console.WriteLine("\n[VELORA is typing...]");
VeloraAI.ResponseEnded += (_, __) => Console.WriteLine("\n\n[Done]");
```

### 3. Use Custom Models

```csharp
VeloraAI.TestingMode = true;
VeloraAI.TestingModelPath = @"C:\path\to\your_model.gguf";

await VeloraAI.AuthenticateAsync(VeloraAI.Models.TestingModel);
```

## ⚙️ Advanced Prompt Modes
### Friendly Assistant (Default)

Follows a natural conversational tone with emojis and personality.

### NoBS Mode

Blunt, hyper-logical response style with no emotional overhead or filler.

```csharp
await VeloraAI.AuthenticateAsync(VeloraAI.Models.Crystal_Think_V2_Q4, NoBSMode: true);
```

## 📥 Model Auto-Download
Models are downloaded on first use to:

```
%APPDATA%/VeloraAI
```

Progress can be tracked using:

```csharp
VeloraAI.CurrentDownloadProgress;
```
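Since `AuthenticateAsync` is what triggers the first-use download, the property can be polled from the awaiting side while authentication runs. A minimal sketch (it is an assumption, not stated in these notes, that `CurrentDownloadProgress` is a numeric 0–100 percentage):

```csharp
// Kick off authentication; the model downloads on first use.
var authTask = VeloraAI.AuthenticateAsync(VeloraAI.Models.Crystal_Think_V2_Q4);

// Poll progress until authentication completes.
// Assumption: CurrentDownloadProgress exposes a percentage value.
while (!authTask.IsCompleted)
{
    Console.Write($"\rDownloading: {VeloraAI.CurrentDownloadProgress}%");
    await Task.Delay(500);
}

Console.WriteLine($"\nAuthentication result: {await authTask}");
```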
## 🔄 Reset History

```csharp
VeloraAI.ResetHistory(); // or use a custom system prompt
```

## 🎯 Custom Inference Parameters
You can fine-tune Velora's behavior using the following optional parameters in `AskAsync`:
| Parameter | Description | Recommended for Speed |
|---|---|---|
| `Temperature` | Controls randomness (lower = more deterministic) | 0.2 - 0.3 |
| `TopP` | Nucleus sampling threshold | 0.0 - 0.3 |
| `TopK` | Limits token pool to top-K options | 0 for fastest |
| `RepeatPenalty` | Penalizes repetition | 1.05 - 1.2 |
| `MaxTokens` | Maximum tokens to generate | 80 - 128 |
```csharp
await VeloraAI.AskAsync(
    prompt: "Summarize this paragraph.",
    temperature: 0.25f,
    TopP: 0.2f,
    TopK: 0,
    RepeatPenalty: 1.1f,
    maxTokens: 80
);
```

## 🛠️ Contributing
Pull requests are welcome! Please submit improvements, optimizations, or new model integrations.
## 📄 License
MIT
## 💬 Example Console Output
```
Authenticating model...
Authentication result: Authenticated

> What is 21 * 2?

[VELORA is typing...]
42

[Done]
```
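The output above can be produced by wiring the earlier snippets into a single console loop. A sketch using only the APIs shown in this release (the top-level-statements layout and the stdin loop are illustrative, not part of the library):

```csharp
// Stream tokens to the console as they are generated.
VeloraAI.TextGenerated += (_, text) => Console.Write(text);
VeloraAI.ResponseStarted += (_, __) => Console.WriteLine("\n[VELORA is typing...]");
VeloraAI.ResponseEnded += (_, __) => Console.WriteLine("\n\n[Done]");

Console.WriteLine("Authenticating model...");
var result = await VeloraAI.AuthenticateAsync(VeloraAI.Models.Crystal_Think_V2_Q4);
Console.WriteLine($"Authentication result: {result}");

if (result == VeloraAI.AuthState.Authenticated)
{
    // Simple REPL: empty input exits.
    while (true)
    {
        Console.Write("> ");
        var prompt = Console.ReadLine();
        if (string.IsNullOrWhiteSpace(prompt)) break;
        await VeloraAI.AskAsync(prompt);
    }
}
```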
## 🧪 Credits
- Developed by voidZiAD
- Powered by LLamaSharp, GGUF models, and the C#/.NET 8.0 ecosystem
## 🧠 "VELORA" Personality

> "I'm VELORA — not just another chatbot. I'm here to help you code, reason, and think clearer. No nonsense, just clarity."