
feat: add native LLaMA.cpp local provider support #1346

Merged
bytecii merged 7 commits into eigent-ai:main from it-education-md:feat/333-llamacpp-local-provider
Mar 3, 2026

Conversation

it-education-md (Contributor) commented Feb 23, 2026

Related Issue

Closes #333

Description

This PR adds native LLaMA.cpp support as a dedicated Local Model provider in Settings.

It includes:

  • LLaMA.cpp provider entry in Local Models (selection/default/save/reset flow)
  • llama-server integration with /v1/health and /v1/models
  • backend platform compatibility mapping (llama.cpp -> openai-compatible-model)
  • model suggestion updates + tests
  • docs updates for Local Model / Quick Start
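The platform-compatibility mapping in the bullets above can be sketched roughly as follows. This is a minimal illustration of the idea (llama.cpp has no dedicated backend platform type, so it is routed through the generic OpenAI-compatible one); the names below are hypothetical, not the PR's actual identifiers:

```python
# Hypothetical sketch of the backend platform-compatibility mapping:
# llama-server speaks the OpenAI-compatible API, so the llama.cpp
# provider resolves to the generic openai-compatible-model platform.
# All identifiers here are illustrative.

LOCAL_PROVIDER_TO_PLATFORM = {
    "ollama": "ollama",
    "llama.cpp": "openai-compatible-model",
}

def resolve_platform(provider: str) -> str:
    """Map a Local Model provider name to its backend platform type."""
    try:
        return LOCAL_PROVIDER_TO_PLATFORM[provider.lower()]
    except KeyError:
        raise ValueError(f"Unsupported local provider: {provider}")
```

For example, `resolve_platform("llama.cpp")` would yield `"openai-compatible-model"`, which the backend already knows how to talk to.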

Testing Evidence (REQUIRED)

  • [x] I have included human-verified testing evidence in this PR.
  • [x] This PR includes frontend/UI changes, and I attached screenshot(s) or screen recording(s).
  • [ ] No frontend/UI changes in this PR.

Evidence:

  • pnpm -s eslint src/pages/Agents/Models.tsx (pass)
  • cd backend && uv run pytest tests/app/model/test_model_platform.py tests/app/component/test_model_suggestions.py -q (pass)
  • Manual verification with local llama-server:
    • /v1/health reachable
    • /v1/models returns model list
    • Save + set default in Settings > Models > Local > LLaMA.cpp works
    • Screenshot attached
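The manual checks above can be reproduced with a small script along these lines; a sketch assuming a local llama-server on its default address (the base URL, timeout, and function names are illustrative, not part of the PR):

```python
# Minimal sketch of the manual verification steps: probe llama-server's
# /v1/health endpoint and list models from the OpenAI-compatible
# /v1/models endpoint. Names and defaults here are illustrative.
import json
import urllib.error
import urllib.request

def is_server_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the server's /v1/health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def list_models(base_url: str, timeout: float = 2.0) -> list[str]:
    """Return model ids from the OpenAI-compatible /v1/models endpoint."""
    with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]
```

With llama-server running locally, `is_server_healthy("http://127.0.0.1:8080")` followed by `list_models(...)` mirrors the two reachability checks listed above.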

Video recording:

eigent-llamacpp.mp4

What is the purpose of this pull request?

  • [ ] Bug fix
  • [x] New Feature
  • [x] Documentation update
  • [ ] Other

Contribution Guidelines Acknowledgement

it-education-md force-pushed the feat/333-llamacpp-local-provider branch from c137f2a to 98c5ac1 on March 2, 2026 at 07:39
bytecii force-pushed the feat/333-llamacpp-local-provider branch from babce62 to 2ccfcd3 on March 2, 2026 at 09:45
bytecii (Collaborator) commented Mar 2, 2026

Did some refactoring and tested it on Ollama (screenshot attached).

@it-education-md Can you help test on llama.cpp again to see whether it works? If there are no other issues, I will approve and merge the PR. Thanks.

it-education-md (Contributor, Author) commented Mar 2, 2026

> @it-education-md Can you help test on llama.cpp again to see whether it works? If there are no other issues, I will approve and merge the PR. Thanks.

Sure, checking now.

it-education-md (Contributor, Author) commented

Screencast.from.2026-03-02.06-18-18.mp4

@bytecii Cool, it worked for me!

bytecii merged commit d606fae into eigent-ai:main on Mar 3, 2026
8 checks passed

Development

Successfully merging this pull request may close these issues.

[Feature Request] Support llama.cpp for self-hosted models