Heiervang Technologies fork of llama.cpp
HT Discussions | Fork Management Guide | Upstream: llama.cpp
This is the Heiervang Technologies fork of llama.cpp. The ht branch contains the following changes on top of upstream master.
We do not plan to contribute any of these changes back to upstream llama.cpp, unless the upstream maintainers explicitly ask for a specific change. This fork exists for HT product work — syncs go one way (upstream → master → ht). The "Tracked upstream" column below is strictly informational: it points to any pre-existing upstream issue or PR discussing the topic, for the reader's reference, not because we intend to land anything there.
Unlike upstream, we accept contributions from AI agents and assistants. We judge code by its quality, not its authorship — see CONTRIBUTING.md.
| Change | Description | Tracked upstream |
|---|---|---|
| TurboQuant KV cache | New TBQ3_0 / TBQ4_0 quantized KV cache types with rotated-domain attention; CPU backend plus fused CUDA kernels (SET_ROWS, rotation, flash-attention); usage sketch below | No |
| Router-mode robustness | llama-server router detects worker crashes via subprocess_alive polling; fixes hardcoded proxy timeout | #22003 |
| Tool-calling resilience | Fallback tool-call parser, plus skipping of non-function tool types, so non-conforming models still work | No |
| Developer-role remap | --remap-developer-role flag merges developer messages into the system prompt for templates that reject duplicates | No |
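A minimal usage sketch, assuming the fork registers the new TBQ types under the existing upstream --cache-type-k / --cache-type-v options; the exact type spelling accepted by this fork may differ, so check llama-server --help on the ht branch:

```sh
# Hypothetical invocation: quantize the KV cache with the TurboQuant types.
# "tbq4_0" is an assumed type name, mirroring how q4_0 / q8_0 are spelled upstream.
llama-server -m model.gguf \
  --cache-type-k tbq4_0 \
  --cache-type-v tbq4_0
```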
| Change | Description | Tracked upstream |
|---|---|---|
| OpenAI-compatible TTS | Play/stop per assistant message plus an autoplay toggle that fires once after each generation (does not replay), wired to any POST /v1/audio/speech endpoint; request sketch below. Tested against the vLLM Qwen3-TTS multiplexer. | No |
| Voice picker | Dropdown in the chat header populated from GET /v1/audio/voices so the active voice is one click away (no settings dive). Defaults to zap; the picked voice is applied to every TTS call. | No |
| OpenAI-compatible STT | Record from the mic and transcribe via any POST /v1/audio/transcriptions endpoint — dictate directly into the chat composer or at the doc-editor cursor, with optional auto-send for a hands-free voice flow and tap-mic-again to cancel in-flight transcription. Tested against vLLM Qwen3-ASR. | No |
| Always-visible mic | Mic button sits alongside send at all times with concrete toast errors for NotAllowedError / NotFoundError / NotReadableError; records even when neither STT nor audio modality is configured (falls back to attaching the .wav). | No |
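The audio features above assume nothing beyond standard OpenAI-style audio endpoints; roughly, the requests the WebUI issues look like this (host, model names, and voice are placeholders):

```sh
# Text-to-speech: POST /v1/audio/speech returns audio bytes for playback.
curl -s http://localhost:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-tts", "input": "Hello from the chat", "voice": "zap"}' \
  -o reply.wav

# Voice picker: GET /v1/audio/voices lists the voices the server offers.
curl -s http://localhost:8000/v1/audio/voices

# Speech-to-text: POST /v1/audio/transcriptions takes the recorded audio as multipart.
curl -s http://localhost:8000/v1/audio/transcriptions \
  -F file=@recording.wav \
  -F model=qwen3-asr
```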
| Change | Description | Tracked upstream |
|---|---|---|
| Image-generation tab | First-class Image workspace mode alongside Chat / Doc — prompt + negative prompt, source-image upload for edits, ASCII-locked aspect-ratio picker, simple/advanced toggle, results land directly in the gallery. Talks to any OpenAI-compatible /v1/images/generations and /v1/images/edits endpoint; request sketch below. | No |
| Image-tool composer toggle | /image slash command and a sparkle pill in the chat composer enable the model's generate_image / edit_image tools; tool-call results render inline in the message and are auto-saved as gallery artifacts. | No |
| Themed loading states | Spinners and progress bars match the active theme color so generation feels native to the app rather than a third-party iframe. | No |
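Under the hood these are the stock OpenAI image endpoints; a hedged sketch of the two request shapes (host, prompts, and size are placeholders):

```sh
# Generation: JSON body with prompt and size.
curl -s http://localhost:8000/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a turquoise llama, vector art", "size": "1024x1024", "n": 1}'

# Edit: multipart body with a source image plus the edit prompt.
curl -s http://localhost:8000/v1/images/edits \
  -F image=@source.png \
  -F prompt="make the background purple"
```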
| Change | Description | Tracked upstream |
|---|---|---|
| Doc mode | Second workspace mode alongside chats: standalone markdown docs with CodeMirror 6 editor, split edit/preview, live word count, auto-title from first H1, duplicate / download / delete, Ctrl+S flush-save, and "Chat about this" to lift the document into a seeded conversation. | No |
| AI commands in docs | Ctrl+Shift+K (or ⌘⇧K on macOS) command palette streams prompt output into the editor live with a preview, names the running command next to a Stop button, and Esc cancels — covers summarize, rewrite, translate, expand, and user-defined prompts. | No |
| Inline ghost-text completions | AI autocomplete in the doc editor: Tab to accept, Esc to dismiss, Ctrl+Tab to force a suggestion; per-doc on/off toggle in the header. Works against any completion-capable model. | No |
| Change | Description | Tracked upstream |
|---|---|---|
| Sandbox terminals tab | Workspace tile that spawns gVisor-hardened containers on the unleash-sandbox Docker network — internet yes, LAN no. Live xterm.js panels with theme-matched palettes (default + a few CRT-flavoured variants for fun), sudo available inside the container. | No |
| One-click setup gate | If the required invariants (Docker daemon, runsc runtime, unleash-sandbox network with icc=off, iptables LAN-drop, unleash:latest image) aren't all green, the page surfaces a "Sandbox setup incomplete" panel with the exact one-shot command (sudo unleash sandbox setup) and a Refresh button instead of silently failing; rough equivalent sketched below. | No |
| Inline terminal drawer | When the model calls run_in_terminal, the live terminal drops below the chat composer as a drawer so you can either type into the terminal or keep chatting; auto-picks the lone terminal so the model never has to call list_terminals first. | No |
| Pop-out terminal | Picture-in-picture button on the drawer spawns a native Tauri window pointed at #/terminals/<id>; stable per-terminal labels so re-clicking focuses the existing window. Falls back to window.open in plain browser mode. | No |
| Terminal tools | run_in_terminal, send_keys, list_terminals exposed as MCP-style tools the model can call; send_keys lookup is tolerant of display names and container ids, so models don't have to memorize panel ids. | No |
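The supported path is the one-shot sudo unleash sandbox setup; purely for orientation, the invariants it checks correspond to roughly this kind of Docker plumbing (a sketch, not the actual script, and the iptables rule shown is only illustrative):

```sh
# gVisor's runsc runtime must be installed and registered with Docker.
docker info --format '{{json .Runtimes}}' | grep -q runsc || echo "runsc runtime missing"

# Dedicated bridge network with inter-container communication disabled.
docker network create \
  --opt com.docker.network.bridge.enable_icc=false \
  unleash-sandbox

# Illustrative LAN-drop rule; the real rule set is owned by the setup command.
sudo iptables -I DOCKER-USER -d 192.168.0.0/16 -j DROP

# A sandboxed terminal: gVisor-isolated, internet-capable, LAN-blocked.
docker run --rm -it --runtime=runsc --network unleash-sandbox unleash:latest
```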
| Change | Description | Tracked upstream |
|---|---|---|
| Multi-modal artifact gallery | First-class Artifacts workspace mode storing image / audio / video / pdf / html / svg / markdown / code with thumbnails, kind filters, search, drag-and-drop upload, Upload button, and bulk-select / bulk-delete. Backed by IndexedDB so artifacts survive reloads. | No |
| Artifact detail page | Per-artifact view with rev list, in-place rename, Ctrl+S save-as-new-revision text editing, copy / download per revision, pin-default and rollback-to-revision, and an "Open source chat" jump back to the conversation that produced it. | No |
| Pop-out artifact | Picture-in-picture button in the detail header spawns a native Tauri window at #/artifacts/<id> (1100×800) for the canvas-y kinds (HTML / SVG / image / video). Stable per-artifact label so re-pop focuses the existing window. | No |
| Artifacts drawer in chat | Side panel that extracts HTML / SVG snippets from assistant messages and renders them in a sandboxed preview with source toggle; toolbar button surfaces a live artifact count with an informative tooltip. | No |
| Markdown image pipeline | Inline markdown images in user messages are lifted into vision-encoder attachments while still rendering inline in the bubble (dedup prevents duplicate chips); generated images from the generate_image tool are auto-saved as gallery artifacts. | No |
| Change | Description | Tracked upstream |
|---|---|---|
| Nextcloud connection | Settings → Connections panel takes a Nextcloud server URL, username, and app password (link straight to the Nextcloud security page so users skip the admin-password trap). WebDAV client is pure-fetch; password is stored encrypted at rest in IndexedDB, not in localStorage. | No |
| Browse drawer | Drill-down browser over /AI/ (configurable) on the connected Nextcloud — PROPFIND Depth: 1 per folder, Add to gallery per file, breadcrumb navigation, and a manual refresh. | No |
| Auto-upload + sync badges | Every new artifact (manual or model-generated) is auto-uploaded to Nextcloud when a connection is configured. Each card surfaces its sync state inline: syncing, synced (links to remote file), failed (click to retry), or cloud if the artifact came from Nextcloud. Badges suppressed when no connection is configured. | No |
| Mirror-deletes (opt-in) | Toggle in Settings to also delete the remote file when an artifact is removed locally; off by default so a stray bulk-delete can't nuke the remote folder. | No |
| ETag round-trip | Uploads use MKCOL for the date-partitioned folder tree (/AI/YYYY-MM-DD/sess-<id>/), capture the returned ETag, and surface it as the artifact's syncRemoteEtag so subsequent syncs can detect drift; curl sketch below. | No |
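The sync layer above is plain WebDAV against the Nextcloud DAV endpoint; a hedged curl equivalent of what the pure-fetch client does (server URL, user, and paths are placeholders):

```sh
NC=https://cloud.example.com/remote.php/dav/files/alice

# Browse drawer: list one folder level under /AI/.
curl -s -u alice:app-password -X PROPFIND -H "Depth: 1" "$NC/AI/"

# Auto-upload: create the date-partitioned folders, then PUT the artifact.
curl -s -u alice:app-password -X MKCOL "$NC/AI/2025-01-31/"
curl -s -u alice:app-password -X MKCOL "$NC/AI/2025-01-31/sess-abc123/"
curl -s -i -u alice:app-password -T artifact.png \
  "$NC/AI/2025-01-31/sess-abc123/artifact.png" | grep -i '^etag:'

# The ETag returned for the uploaded file is what the gallery records as syncRemoteEtag.
```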
| Change | Description | Tracked upstream |
|---|---|---|
| Rebranded WebUI | ht-llama.cpp branding with turquoise/purple theme, configurable hue picker (live HSL preview, randomize button), and banner. | No |
| Backend pill | Header shows the active server hostname in a themed pill — useful when the same Tauri/desktop shell points at different backends. | No |
| Model picker | Compact dropdown in the chat composer for the active model with one-click switching. | No |
| Model loading UX | Cancel button for in-progress model loads; clearer error states in router mode (toast instead of full-screen modal). | No |
| LoRA adapter UI | Auto-discovery of LoRA adapters served by llama-server with enable/disable panel and router-mode awareness. | No |
| Sidebar bulk delete | Multi-select conversations in the sidebar (click checkbox or shift-click range) and delete in one go; confirmation via themed AlertDialog. | No |
| Themed dialogs everywhere | All destructive paths (gallery delete, conversation delete, settings import/export errors, terminal destroy) use the shadcn AlertDialog / toast instead of native window.confirm / window.alert. | No |
| Configurable backend URL | Frontend works as a standalone static bundle pointing at any remote llama-server. | No |
| Change | Description | Tracked upstream |
|---|---|---|
| Tauri desktop shell | Native desktop wrapper around the WebUI in tools/server/webui-tauri/ with .desktop launchers, HT icon set across Linux/Windows/iOS/Android, and Linux webkit2gtk getUserMedia auto-approval so mic capture works inside the bundled app. | No |
| Multi-window pop-out | The Tauri shell brokers WebviewWindow creation for terminal and artifact pop-outs, scoped via the term-* / art-* capability globs so each spawned window inherits the default permission set. | No |
| Change | Description | Tracked upstream |
|---|---|---|
| Release CI on ht | GitHub Actions release workflow runs on the ht branch, not just on tags. | No |
| Branch | Purpose |
|---|---|
| master | Clean fast-forward mirror of upstream master — never commit directly |
| ht | HT-specific changes on top of master — default branch |
Feature branches are created from ht and squash-merged back via PR.
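In practice the one-way sync described above reduces to ordinary git plumbing; a hedged sketch, assuming an `upstream` remote that points at ggml-org/llama.cpp (see the Fork Management Guide for the canonical procedure):

```sh
# Refresh the mirror branch; it must always fast-forward.
git checkout master
git fetch upstream
git merge --ff-only upstream/master
git push origin master

# Carry the HT changes forward on top of the refreshed mirror.
git checkout ht
git merge master        # merge vs. rebase is a Fork Management Guide decision
git push origin ht

# Feature work branches off ht and returns via a squash-merged PR.
git checkout -b feature/my-change ht
```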
For questions or inquiries, use the HT Discussions page. For details on fork workflow and sync procedures, see the Fork Management Guide.
LLM inference in C/C++
- Hugging Face cache migration: models downloaded with -hf are now stored in the standard Hugging Face cache directory, enabling sharing with other HF tools.
- guide: using the new WebUI of llama.cpp
- guide: running gpt-oss with llama.cpp
- [FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗
- Support for the gpt-oss model with native MXFP4 format has been added | PR | Collaboration with NVIDIA | Comment
- Multimodal support arrived in llama-server: #12898 | documentation
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! ggml-org#9669
- Hugging Face GGUF editor: discussion | tool
Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:
- Install llama.cpp using brew, nix or winget
- Run with Docker - see our Docker documentation
- Download pre-built binaries from the releases page
- Build from source by cloning this repository - check out our build guide
Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.
Example command:
```sh
# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
range of hardware - locally and in the cloud.
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The llama.cpp project is the main playground for developing new features for the ggml library.
Models
Typically finetunes of the base models below are supported as well.
Instructions for adding support for new models: HOWTO-add-model.md
- LLaMA 🦙
- LLaMA 2 🦙🦙
- LLaMA 3 🦙🦙🦙
- Mistral 7B
- Mixtral MoE
- DBRX
- Jamba
- Falcon
- Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
- Vigogne (French)
- BERT
- Koala
- Baichuan 1 & 2 + derivations
- Aquila 1 & 2
- Starcoder models
- Refact
- MPT
- Bloom
- Yi models
- StableLM models
- Deepseek models
- Qwen models
- PLaMo-13B
- Phi models
- PhiMoE
- GPT-2
- Orion 14B
- InternLM2
- CodeShell
- Gemma
- Mamba
- Grok-1
- Xverse
- Command-R models
- SEA-LION
- GritLM-7B + GritLM-8x7B
- OLMo
- OLMo 2
- OLMoE
- Granite models
- GPT-NeoX + Pythia
- Snowflake-Arctic MoE
- Smaug
- Poro 34B
- Bitnet b1.58 models
- Flan T5
- Open Elm models
- ChatGLM3-6b + ChatGLM4-9b + GLMEdge-1.5b + GLMEdge-4b
- GLM-4-0414
- SmolLM
- EXAONE-3.0-7.8B-Instruct
- FalconMamba Models
- Jais
- Bielik-11B-v2.3
- RWKV-7
- RWKV-6
- QRWKV-6
- GigaChat-20B-A3B
- Trillion-7B-preview
- Ling models
- LFM2 models
- Hunyuan models
- BailingMoeV2 (Ring/Ling 2.0) models
Bindings
- Python: ddh0/easy-llama
- Python: abetlen/llama-cpp-python
- Go: go-skynet/go-llama.cpp
- Node.js: withcatai/node-llama-cpp
- JS/TS (llama.cpp server client): lgrammel/modelfusion
- JS/TS (Programmable Prompt Engine CLI): offline-ai/cli
- JavaScript/Wasm (works in browser): tangledgroup/llama-cpp-wasm
- Typescript/Wasm (nicer API, available on npm): ngxson/wllama
- Ruby: yoshoku/llama_cpp.rb
- Rust (more features): edgenai/llama_cpp-rs
- Rust (nicer API): mdrokz/rust-llama.cpp
- Rust (more direct bindings): utilityai/llama-cpp-rs
- Rust (automated build from crates.io): ShelbyJenkins/llm_client
- C#/.NET: SciSharp/LLamaSharp
- C#/VB.NET (more features - community license): LM-Kit.NET
- Scala 3: donderom/llm4s
- Clojure: phronmophobic/llama.clj
- React Native: mybigday/llama.rn
- Java: kherud/java-llama.cpp
- Java: QuasarByte/llama-cpp-jna
- Zig: deins/llama.cpp.zig
- Flutter/Dart: netdur/llama_cpp_dart
- Flutter: xuegao-tzx/Fllama
- PHP (API bindings and features built on top of llama.cpp): distantmagic/resonance (more info)
- Guile Scheme: guile_llama_cpp
- Swift srgtuszy/llama-cpp-swift
- Swift ShenghaiWang/SwiftLlama
- Delphi Embarcadero/llama-cpp-delphi
- Go (no CGo needed): hybridgroup/yzma
- Android: llama.android
UIs
(to have a project listed here, it should clearly state that it depends on llama.cpp)
- AI Sublime Text plugin (MIT)
- BonzAI App (proprietary)
- cztomsik/ava (MIT)
- Dot (GPL)
- eva (MIT)
- iohub/collama (Apache-2.0)
- janhq/jan (AGPL)
- johnbean393/Sidekick (MIT)
- KanTV (Apache-2.0)
- KodiBot (GPL)
- llama.vim (MIT)
- LARS (AGPL)
- Llama Assistant (GPL)
- LlamaLib (Apache-2.0)
- LLMFarm (MIT)
- LLMUnity (MIT)
- LMStudio (proprietary)
- LocalAI (MIT)
- LostRuins/koboldcpp (AGPL)
- MindMac (proprietary)
- MindWorkAI/AI-Studio (FSL-1.1-MIT)
- Mobile-Artificial-Intelligence/maid (MIT)
- Mozilla-Ocho/llamafile (Apache-2.0)
- nat/openplayground (MIT)
- nomic-ai/gpt4all (MIT)
- ollama/ollama (MIT)
- oobabooga/text-generation-webui (AGPL)
- PocketPal AI (MIT)
- psugihara/FreeChat (MIT)
- ptsochantaris/emeltal (MIT)
- pythops/tenere (AGPL)
- ramalama (MIT)
- semperai/amica (MIT)
- withcatai/catai (MIT)
- Autopen (GPL)
Tools
- akx/ggify – download PyTorch models from Hugging Face Hub and convert them to GGML
- akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
- crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
- Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
- unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
- Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- GPUStack - Manage GPU clusters for running LLMs
- llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- llama-swap - transparent proxy that adds automatic model switching with llama-server
- Kalavai - Crowdsource end to end LLM deployment at any scale
- llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
- LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
- Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.
| Backend | Target devices |
|---|---|
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs |
| MUSA | Moore Threads GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| ZenDNN | AMD CPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| IBM zDNN | IBM Z & LinuxONE |
| WebGPU [In Progress] | All |
| RPC | All |
| Hexagon [In Progress] | Snapdragon |
| VirtGPU | VirtGPU APIR |
The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:
You can either manually download the GGUF file or directly use any llama.cpp-compatible models from Hugging Face or other model hosting sites, by using this CLI argument: -hf <user>/<model>[:quant]. For example:
```sh
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```

By default, the CLI downloads from Hugging Face; you can switch to other options with the environment variable MODEL_ENDPOINT. The MODEL_ENDPOINT must point to a Hugging Face compatible API endpoint.
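For example, to fetch through a different Hugging Face compatible endpoint such as ModelScope, set the variable for the invocation; the repo id must exist on whichever endpoint you choose:

```sh
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf <user>/<model>[:quant]
```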
After downloading a model, use the CLI tools to run it locally - see below.
llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
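A rough example of that conversion step, using the Hugging Face converter script from this repo (paths, output name, and the follow-up quantization type are placeholders):

```sh
# Convert a local Hugging Face checkpoint directory to a GGUF file.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16

# Optionally re-quantize the result to a smaller type.
./build/bin/llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```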
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:
- Use the GGUF-my-repo space to convert to GGUF format and quantize model weights to smaller sizes
- Use the GGUF-my-LoRA space to convert LoRA adapters to GGUF format (more info: ggml-org#10123)
- Use the GGUF-editor space to edit GGUF meta data in the browser (more info: ggml-org#9268)
- Use the Inference Endpoints to directly host llama.cpp in the cloud (more info: ggml-org#9669)
To learn more about model quantization, read this documentation
- Run in conversation mode

  Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME

  ```sh
  llama-cli -m model.gguf

  # > hi, who are you?
  # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
  #
  # > what is 1+1?
  # Easy peasy! The answer to 1+1 is... 2!
  ```
- Run in conversation mode with custom chat template

  ```sh
  # use the "chatml" template (use -h to see the list of supported templates)
  llama-cli -m model.gguf -cnv --chat-template chatml

  # use a custom template
  llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  ```
- Constrain the output with a custom grammar

  ```sh
  llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

  # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
  ```
The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.
For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
A lightweight, OpenAI API compatible, HTTP server for serving LLMs.
- Start a local HTTP server with default configuration on port 8080

  ```sh
  llama-server -m model.gguf --port 8080

  # Basic web UI can be accessed via browser: http://localhost:8080
  # Chat completion endpoint: http://localhost:8080/v1/chat/completions
  ```

- Support multiple-users and parallel decoding

  ```sh
  # up to 4 concurrent requests, each with 4096 max context
  llama-server -m model.gguf -c 16384 -np 4
  ```

- Enable speculative decoding

  ```sh
  # the draft.gguf model should be a small variant of the target model.gguf
  llama-server -m model.gguf -md draft.gguf
  ```

- Serve an embedding model

  ```sh
  # use the /embedding endpoint
  llama-server -m model.gguf --embedding --pooling cls -ub 8192
  ```

- Serve a reranking model

  ```sh
  # use the /reranking endpoint
  llama-server -m model.gguf --reranking
  ```

- Constrain all outputs with a grammar

  ```sh
  # custom grammar
  llama-server -m model.gguf --grammar-file grammar.gbnf

  # JSON
  llama-server -m model.gguf --grammar-file grammars/json.gbnf
  ```
A tool for measuring the perplexity (and other quality metrics) of a model over a given text.
- Measure the perplexity over a text file

  ```sh
  llama-perplexity -m model.gguf -f file.txt

  # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
  # Final estimate: PPL = 5.4007 +/- 0.67339
  ```

- Measure KL divergence

  ```sh
  # TODO
  ```
- Run default benchmark

  ```sh
  llama-bench -m model.gguf

  # Output:
  # | model               |       size |     params | backend    | threads |          test |                  t/s |
  # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
  # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
  # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
  #
  # build: 3e0ba0e60 (4229)
  ```

- Basic text completion

  ```sh
  llama-simple -m model.gguf

  # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
  ```
- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
- Any help with managing issues, PRs and projects is very appreciated!
- See good first issues for tasks suitable for first contributions
- Read the CONTRIBUTING.md for more information
- Make sure to read this: Inference at the edge
- A bit of backstory for those who are interested: Changelog podcast
If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- LLaMA:
- GPT-3
- GPT-3.5 / InstructGPT / ChatGPT:
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The above example is using an intermediate build b5046 of the library. This can be modified
to use a different version by changing the URL and checksum.
Command-line completion is available for some environments.
```sh
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your .bashrc or .bash_profile to load it
automatically. For example:

```sh
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

- yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
- stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
- nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
- miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
- subprocess.h - Single-header process launching solution for C and C++ - Public domain
