llmix is a unified Python interface for multiple LLM providers.
It now includes:
- Python 3.9+ support
- prompt and low-level messages APIs
- request-level structured JSON output controls
- OpenAI, Groq, Gemini, Claude/Anthropic, and Ollama adapters
- Ollama native and OpenAI-compatible transport modes
- lightweight local relevance-based RAG for code/docs repositories
- normalized provider error metadata for UI integrations
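The local RAG support listed above is relevance-based. As a rough illustration of the general idea only (this is a hypothetical sketch, not llmix's actual indexing or scoring, which docs/integration.md covers), a minimal token-overlap retriever over a set of documents could look like:

```python
# Hypothetical sketch of relevance-based retrieval over in-memory documents.
# llmix's real indexer and scorer may differ substantially.

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of word tokens."""
    return set(text.lower().split())

def top_k(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the k document names with the largest token overlap with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda name: len(q & tokenize(docs[name])), reverse=True)
    return ranked[:k]

docs = {
    "adapters.md": "openai groq gemini anthropic ollama adapters",
    "errors.md": "normalized provider error metadata for ui integrations",
    "rag.md": "local relevance based retrieval for code and docs repositories",
}
print(top_k("how does local relevance retrieval work", docs, k=1))  # ['rag.md']
```

Real retrievers typically replace raw token overlap with TF-IDF or embedding similarity, but the retrieve-then-rank shape is the same.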
Install with Poetry:

```shell
poetry install
```

For local development and tests:

```shell
poetry install -E dev
```

Quickstart:

```python
from __future__ import annotations

from llmix import LLMix

lm = LLMix(
    {
        "default_provider": "ollama",
        "default_model": "llama3.2:latest",
        "providers": {
            "ollama": {
                "base_url": "http://localhost:11434",
                "transport": "openai",
            }
        },
    }
)

response = lm.chat(
    "Return a JSON object with keys provider and summary.",
    expect_json=True,
)
print(response.content)
```

See docs/integration.md for:
- one-shot JSON calls
- streaming calls
- low-level messages calls
- Claude/Anthropic alias usage
- Ollama OpenAI-compatible mode
- indexing and retrieving from code/docs repositories
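With `expect_json=True`, the quickstart's `print(response.content)` suggests the structured output still arrives as a string, so a typical follow-up is to parse it. The sketch below uses a stubbed response string rather than a live call, and the exact response shape is an assumption:

```python
import json

# Stubbed stand-in for response.content from the quickstart above;
# a live call would return provider-generated JSON text.
content = '{"provider": "ollama", "summary": "JSON mode round-trip works."}'

data = json.loads(content)
assert {"provider", "summary"} <= data.keys()
print(data["provider"])  # ollama
```

Validating the expected keys right after parsing catches providers that return well-formed but differently shaped JSON.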
Run the test suite:

```shell
poetry run pytest -q
```