[Feature Request] Multi model support #23
Replies: 18 comments
-
Thanks - I will look into this just as soon as multi-agent sessions (multiple Claudes/Codexes/Geminis, etc.) ship, along with a few more UX tweaks that are already working.
-
Hey! Happy to help out here. Before we bring in new agents like Ollama or Qwen, I can put together a small PR to introduce a BaseAgent interface.
-
Another suggestion: if possible, give us an option in the UI to create an agent (people who aren't comfortable editing configs could create agents from the website). Also, add a "context window" option to the config file, since not all Ollama models have the same context window (the default is around 128K).
-
And if you could create cron loops (the AI agents work together to build an app or a feature every day at a specific time), that would be great...
-
A friend of mine has a project like the one I suggested above. Check it out! https://github.com/ikislay/auto-claude
-
Both good ideas! For now:

UI agent creation: on the radar for a future update. Right now it's a 5-line config file, which keeps things simple and predictable. PRs welcome if someone wants to tackle it.

Context window: the wrapper sends a fixed window of recent chat messages (configurable in the local toml), so context limits aren't really a factor. The OpenAI-compatible API handles context sizing on the model side.
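In other words, the context handling described above boils down to something like this (a sketch; the function name is made up, and the real window size would come from the local toml):

```python
def build_context(messages: list[dict], window: int = 40) -> list[dict]:
    # Send only the most recent `window` chat messages to the model,
    # regardless of the model's own context limit. Shorter histories
    # are passed through unchanged.
    return messages[-window:]
```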
-
Okay. One more thing... how do we automatically generate those AI hat icons for newly added AI agents?
-
Re: the scheduler: good idea! I'll add it to the roadmap :)
-
Another thing: generate a to-do board/kanban dashboard with the same theming scheme so the AI models can keep up with tasks. Also, add something like "enabled = true" so we can enable or disable an AI model. For example, I don't have access to Claude or Codex (free tier), so I don't enable them and only use Ollama/OpenRouter free models. In the kanban dashboard, each AI model could list its tasks in its own section and move them to "completed" or "to be done" by itself.
-
We still need image support btw.
-
And a chat history button.
-
And you know how Claude Code has that Esc shortcut that lets you move back (for example, when you want to go to the start of the conversation without starting another chat)? Maybe do something like that.
-
In the current version you don't see agents that aren't actively running. |
-
I just made it so agents can create hats for other agents, so just ask claude or codex or gemini to make one for your local agent :) |
-
We already have image sharing in the web UI (upload/paste/drag). I assume you mean making API agents see images? That's trickier: The wrapper would need to fetch image attachments from chat messages, then convert them to base64 and include in the model's messages array as image_url content blocks. This only works if the model supports vision. It's doable but not trivial, and most small local models don't have vision anyway. I'll consider that for the future though! |
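For reference, the conversion described above would look roughly like this (a sketch against the standard OpenAI-compatible chat format; the function names are hypothetical, not the wrapper's actual code):

```python
import base64

def image_to_content_block(image_bytes: bytes, mime: str = "image/png") -> dict:
    # Build an OpenAI-style image_url content block from raw image bytes.
    # The data-URL form is what OpenAI-compatible vision endpoints accept.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime};base64,{b64}"},
    }

def build_vision_message(text: str, images: list[bytes]) -> dict:
    # A user message mixing text and images, in the standard chat format
    # for vision-capable OpenAI-compatible models.
    content = [{"type": "text", "text": text}]
    content += [image_to_content_block(img) for img in images]
    return {"role": "user", "content": content}
```

As the comment says, this only helps when the model actually has vision; a non-vision model will just reject or ignore the image parts.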
-
I really don't want this to become like JIRA for agents tbh, I have enough of that at work! So a kanban probably isn't the direction I will go in - but I do have plans to experiment with a 'swarm' mode that will let you do a focused coordinated effort on a bigger piece of work, if it works out well that will likely be the next major feature. |
-
Okay thanks 👍


-
🤖 Feature Request: Multi-Model Support (Ollama + Cloud Models)
Problem
agentchattr is locked to Claude Code, Codex, and Gemini CLI. No way to use local models or mix models by task.
Proposed solution
Add Ollama and OpenAI-compatible endpoint support as a new agent type in config.toml:
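A sketch of what such an entry could look like, using the fields listed under "Implementation surface" below (the table name and all values here are illustrative, not a confirmed schema):

```toml
# Hypothetical config.toml entry; field names follow the
# "Implementation surface" list, values are examples only.
[[agents]]
type = "ollama"
name = "local-qwen"
model = "qwen2.5-coder:7b"
base_url = "http://localhost:11434/v1"  # OpenAI-compatible endpoint
api_key_env = "OLLAMA_API_KEY"          # env var holding the key, if any
roles = ["triage", "summaries"]         # optional
```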
These agents join the same chat room, respond to @mentions, and participate in agentic loops like any other agent — no CLI subprocess needed, just a polling wrapper calling the model API.
Why it's useful
- Route cheap/fast tasks (triage, summaries) to a local 3B model instead of burning cloud tokens
- Keep sensitive code fully local via Ollama
- Assign models by role: one for codegen, one for review, one for routing
- Works in air-gapped environments
Implementation surface
Mostly contained to agents.py (new OllamaAgent class) and a new wrapper_api.py for the trigger loop. config.toml schema would need type, model, base_url, api_key_env, and optionally roles. MCP tools are already model-agnostic so mcp_bridge.py wouldn't need changes.
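To make the shape of that wrapper concrete, here is a minimal sketch of the trigger loop; every name is illustrative (`fetch_new_messages` and `post_message` stand in for whatever hooks the chat room actually exposes), and the endpoint path is the standard OpenAI-compatible `/chat/completions` route:

```python
import json
import os
import time
import urllib.request

def should_respond(agent_name: str, message_text: str) -> bool:
    # The wrapper only answers when this agent is @mentioned in the room.
    return f"@{agent_name}" in message_text

def chat_completion(base_url: str, api_key: str, model: str,
                    messages: list[dict]) -> str:
    # One call to an OpenAI-compatible /chat/completions endpoint
    # (Ollama's OpenAI-compatible server speaks this too).
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def trigger_loop(name, model, base_url, api_key_env,
                 fetch_new_messages, post_message, poll_seconds=2.0):
    # Hypothetical hooks: fetch_new_messages polls the shared chat room,
    # post_message writes the reply back into it. No CLI subprocess needed.
    api_key = os.environ.get(api_key_env, "")
    while True:
        for msg in fetch_new_messages():
            if should_respond(name, msg["text"]):
                reply = chat_completion(
                    base_url, api_key, model,
                    [{"role": "user", "content": msg["text"]}])
                post_message(name, reply)
        time.sleep(poll_seconds)
```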