
feat: Add custom OpenAI-compatible AI provider#196

Open
r-pedraza wants to merge 1 commit into master from feat/create-custom-AI-provider

Conversation

@r-pedraza
Contributor

Pull Request

📝 Summary

Adds support for custom OpenAI-compatible AI providers, enabling integration with self-hosted LLM servers (e.g., vLLM, Ollama, LM Studio) or alternative OpenAI-compatible APIs. The new CustomProvider class extends the AI provider ecosystem with configurable base URLs, flexible authentication, and comprehensive error handling.

🔧 Changes Made

  • Added CustomProvider class in titan_cli/ai/providers/custom.py supporting OpenAI-compatible endpoints
  • Integrated OpenAI Python SDK (openai>=1.0.0,<2.0.0) as a new dependency
  • Implemented configurable base URL and model selection for custom endpoints
  • Added support for optional API key authentication (a placeholder key is sent to servers that don't enforce auth, since the SDK requires one)
  • Implemented comprehensive error mapping (authentication, rate limit, API errors)
  • Added streaming support for real-time response generation
  • Created extensive test suite (tests/ai/providers/test_custom_provider.py) with 247 lines covering initialization, generation, error handling, and streaming scenarios
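The changes above can be sketched roughly as follows. This is an illustrative outline, not the PR's actual code: the class name matches the description, but the method names, the `PLACEHOLDER_KEY` constant, and the validation details are assumptions. It shows the key ideas: required base URL validation, an optional API key with a placeholder fallback, and streaming via the OpenAI SDK's `stream=True` flag.

```python
# Illustrative sketch of a CustomProvider for OpenAI-compatible endpoints.
# Names other than CustomProvider are hypothetical, not the PR's actual code.
from typing import Optional
from urllib.parse import urlparse


class CustomProvider:
    """AI provider for self-hosted OpenAI-compatible servers (vLLM, Ollama, LiteLLM)."""

    # Placeholder for servers that don't enforce authentication;
    # the OpenAI SDK requires a non-empty api_key string.
    PLACEHOLDER_KEY = "not-needed"

    def __init__(self, base_url: str, model: str, api_key: Optional[str] = None):
        # base_url is required and must be a usable HTTP(S) URL.
        parsed = urlparse(base_url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            raise ValueError(f"Invalid base_url for custom provider: {base_url!r}")
        self.base_url = base_url
        self.model = model
        self.api_key = api_key or self.PLACEHOLDER_KEY

    def generate(self, prompt: str, stream: bool = False):
        # Imported lazily so the provider can be constructed and validated
        # without the optional 'openai' dependency installed.
        from openai import OpenAI

        client = OpenAI(base_url=self.base_url, api_key=self.api_key)
        response = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            stream=stream,
        )
        if stream:
            # Yield text chunks as they arrive from the server.
            return (chunk.choices[0].delta.content or "" for chunk in response)
        return response.choices[0].message.content
```

The placeholder key matters because the OpenAI SDK rejects an empty `api_key`, while many local servers (e.g. a default vLLM or Ollama setup) ignore the header entirely.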

🧪 Testing

  • Unit tests added/updated (poetry run pytest)
  • All tests passing (make test)
  • Manual testing with titan-dev

Comprehensive unit test coverage includes:

  • Provider initialization with various parameter combinations (valid/invalid base URLs, with/without API keys)
  • Successful generation with token usage tracking
  • Error handling for authentication failures, rate limits, and API errors
  • Streaming response generation with proper chunk handling
  • Edge cases like missing content and empty responses
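The error-mapping tests can be mocked without hitting a real endpoint. The sketch below uses only stdlib mocks; the function and exception names are illustrative stand-ins (the real suite presumably catches `openai.AuthenticationError` and friends), but the pattern is the same: the mocked client raises the SDK error, and the test asserts that the provider surfaces its own error type instead.

```python
# Illustrative error-mapping test using stdlib mocks only.
# Exception and function names are stand-ins for the PR's actual code.
from unittest.mock import MagicMock


class ProviderAuthError(Exception):
    """Provider-level error surfaced on authentication failure."""


class SDKAuthenticationError(Exception):
    """Stand-in for openai.AuthenticationError."""


def generate_with_mapping(client, model, prompt):
    # Translate SDK-specific exceptions into the provider's own error types,
    # mirroring the PR's "comprehensive error mapping".
    try:
        response = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
    except SDKAuthenticationError as exc:
        raise ProviderAuthError(str(exc)) from exc
    return response.choices[0].message.content


# A mocked client that raises the SDK error should surface the
# provider's error, not the SDK's.
client = MagicMock()
client.chat.completions.create.side_effect = SDKAuthenticationError("bad key")
try:
    generate_with_mapping(client, "llama-3", "hello")
    mapped = False
except ProviderAuthError:
    mapped = True
assert mapped
```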

📊 Logs

  • No new log events

✅ Checklist

  • Self-review done
  • Follows the project's logging rules (no secrets, no content in logs)
  • New and existing tests pass
  • Documentation updated if needed

Add support for custom OpenAI-compatible endpoints including LiteLLM,
vLLM, and other on-premise deployments.

Key features:
- New CustomProvider class using OpenAI SDK
- Optional API key support (some endpoints don't require auth)
- Required base_url validation for custom endpoints
- UI wizard integration with custom provider option
- Comprehensive test coverage (18 tests, 446 lines)

Changes:
- Add CustomProvider to provider registry
- Update AIClient to handle optional API keys for custom provider
- Add custom provider constants and defaults
- Enhance AI config wizard with custom provider flow
- Add examples for LiteLLM, vLLM, and custom deployments

Supported use cases:
- LiteLLM proxy servers (http://localhost:4000)
- vLLM inference servers
- Custom on-premise OpenAI-compatible APIs
- Any OpenAI SDK-compatible endpoint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
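All of the use cases above speak the same wire format, which is what makes a single provider cover them. As a minimal sketch (the proxy URL and model alias are illustrative, taken from the LiteLLM example above), every listed backend accepts a POST to `/v1/chat/completions` with an OpenAI-style payload:

```python
# Sketch of the OpenAI-compatible request shape shared by LiteLLM,
# vLLM, and other on-premise deployments. URL and model are examples.
import json

BASE_URL = "http://localhost:4000"  # e.g. a LiteLLM proxy
endpoint = f"{BASE_URL}/v1/chat/completions"
payload = {
    "model": "gpt-4o",  # whatever alias the proxy/server routes
    "messages": [{"role": "user", "content": "ping"}],
}
body = json.dumps(payload)
```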
@r-pedraza r-pedraza self-assigned this Apr 7, 2026
