- Project Goal: Build a Slack bot that interacts with external tools and data sources via the Model Context Protocol (MCP), implemented in Go.
- Architecture: See `README.md` for the high-level design. The core components are the Slack integration, the Go application (`slack-mcp-client`), and external MCP servers.
- Slack Integration: Full-featured Slack client using `slack-go/slack` with Socket Mode for secure communication, supporting mentions, direct messages, and rich Block Kit formatting.
- MCP Client Configuration: Uses a flexible configuration approach for multiple MCP servers through `mcp-servers.json`, following the schema defined in `mcp-schema.json`.
- LLM Integration: Multi-provider LLM support through a factory pattern with LangChain (v0.1.14) as the gateway, supporting OpenAI, Anthropic, and Ollama providers.
- Configuration System (`internal/config/`)
  - Configuration loaded from environment variables for Slack credentials and LLM settings
  - MCP server configurations loaded from a JSON file following the schema, keyed under the `mcpServers` property
  - Each server defined with `command`, `args`, `mode`, and optional `env` properties
  - Support for both HTTP/SSE and stdio transport modes
  - LLM provider configuration with factory pattern support
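The server schema described above might map onto Go structs along these lines. This is a minimal sketch: the struct and function names are illustrative, not the actual types in `internal/config/`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ServerConfig mirrors one entry under "mcpServers". Field names here are
// invented for illustration; the real structs live in internal/config/.
type ServerConfig struct {
	Command                  string            `json:"command,omitempty"`
	Args                     []string          `json:"args,omitempty"`
	Mode                     string            `json:"mode,omitempty"` // "http" or default stdio
	URL                      string            `json:"url,omitempty"`
	Env                      map[string]string `json:"env,omitempty"`
	InitializeTimeoutSeconds int               `json:"initialize_timeout_seconds,omitempty"`
}

// Config is the top-level document with the required "mcpServers" property.
type Config struct {
	MCPServers map[string]ServerConfig `json:"mcpServers"`
}

// LoadServers parses an mcp-servers.json document into the server map.
func LoadServers(data []byte) (map[string]ServerConfig, error) {
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return cfg.MCPServers, nil
}

func main() {
	doc := []byte(`{"mcpServers":{"github":{"command":"github-mcp-server","args":["stdio"]}}}`)
	servers, err := LoadServers(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(servers["github"].Command) // github-mcp-server
}
```

Keeping the schema in plain structs like this lets `json.Unmarshal` do the validation of shape, while semantic checks (e.g. `url` required in `http` mode) happen afterwards.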
- Slack Client (`internal/slack/`)
  - Connects to Slack using Socket Mode for secure, firewall-friendly communication
  - Handles app mentions and direct messages
  - Processes user prompts and forwards them to LLM providers
  - Advanced message formatting with Block Kit support through `internal/slack/formatter/`
  - Returns responses and tool results to Slack channels/DMs with rich formatting
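The mention/DM handling above can be sketched with two small helpers (hypothetical names; the real client wires this into `slack-go/slack` Socket Mode event handling). Slack IM channel IDs begin with `D`, and an app mention arrives with a `<@USERID>` token that should be stripped before the prompt is forwarded to the LLM:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// isDirectMessage reports whether a Slack channel ID belongs to a DM
// conversation; Slack IM channel IDs start with "D".
func isDirectMessage(channelID string) bool {
	return strings.HasPrefix(channelID, "D")
}

// mentionPattern matches a Slack user mention such as <@U0123ABC>.
var mentionPattern = regexp.MustCompile(`<@[A-Z0-9]+>`)

// stripMention removes the bot's mention from an app_mention event so only
// the user's actual prompt is forwarded to the LLM provider.
func stripMention(text string) string {
	return strings.TrimSpace(mentionPattern.ReplaceAllString(text, ""))
}

func main() {
	fmt.Println(isDirectMessage("D024BE91L")) // true
	fmt.Println(stripMention("<@U0123ABC> summarize this thread"))
}
```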
- MCP Client (`internal/mcp/`)
  - Support for multiple transport protocols:
    - HTTP/SSE (Server-Sent Events): for real-time communication with web-based MCP servers
    - stdio: for local development with command-line tools
  - Dynamic initialization with proper command-line argument parsing
  - Runtime discovery of available tools from MCP servers
  - Uses mcp-go v0.42.0 with enhanced HTTP transport and session management
- LLM Provider System (`internal/llm/`)
  - Factory pattern for provider registration and initialization
  - Registry system for managing multiple LLM providers
  - Configuration-driven provider setup
  - LangChain (v0.1.14) as the unified gateway for all providers
  - Support for OpenAI, Anthropic, and Ollama providers with enhanced stability
  - Streaming memory and goroutine leak fixes (v0.1.14)
  - Improved agent parsing and error handling
  - Availability checking and fallback mechanisms
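The factory-plus-registry pattern described here can be sketched as follows. All names are illustrative stand-ins for whatever `internal/llm/` actually exports.

```go
package main

import (
	"fmt"
	"sort"
)

// Provider is the minimal interface every LLM backend implements.
type Provider interface {
	Name() string
	Available() bool
}

// factory builds a provider from its configuration map.
type factory func(cfg map[string]string) (Provider, error)

// registry maps provider type names ("openai", "anthropic", "ollama")
// to their factories.
var registry = map[string]factory{}

func register(name string, f factory) { registry[name] = f }

// newProvider instantiates a registered provider by type name.
func newProvider(name string, cfg map[string]string) (Provider, error) {
	f, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown provider %q", name)
	}
	return f(cfg)
}

// registered lists the known provider types, sorted for stable output.
func registered() []string {
	names := make([]string, 0, len(registry))
	for n := range registry {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

// stub is a trivial Provider used only for this demonstration.
type stub struct{ name string }

func (s stub) Name() string    { return s.name }
func (s stub) Available() bool { return true }

func main() {
	register("openai", func(cfg map[string]string) (Provider, error) {
		return stub{name: "openai/" + cfg["model"]}, nil
	})
	p, err := newProvider("openai", map[string]string{"model": "gpt-4.1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Name()) // openai/gpt-4.1
}
```

The registry is also where availability checking and fallback naturally hook in: iterate `registered()` and pick the first provider whose `Available()` returns true.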
- Handler System (`internal/handlers/`)
  - Interface-based design for tool handlers
  - LLM-MCP Bridge for detecting tool invocation patterns
  - Registry for centralized handler management
  - Support for both structured JSON tool calls and natural-language detection
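The structured-JSON path of the bridge might look like this: scan the LLM response for an embedded JSON object and try to parse it as a tool call. The `ToolCall` shape and function name are assumptions for illustration, not the bridge's actual wire format.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ToolCall is an illustrative shape for a structured tool invocation.
type ToolCall struct {
	Tool string                 `json:"tool"`
	Args map[string]interface{} `json:"args"`
}

// detectToolCall scans an LLM response for an embedded JSON object that
// parses as a tool invocation; it returns nil when none is found.
func detectToolCall(response string) *ToolCall {
	start := strings.Index(response, "{")
	end := strings.LastIndex(response, "}")
	if start < 0 || end <= start {
		return nil
	}
	var call ToolCall
	if err := json.Unmarshal([]byte(response[start:end+1]), &call); err != nil {
		return nil
	}
	if call.Tool == "" {
		return nil // JSON, but not a tool call
	}
	return &call
}

func main() {
	reply := `Sure, let me check: {"tool": "filesystem_list", "args": {"path": "/tmp"}}`
	if call := detectToolCall(reply); call != nil {
		fmt.Println(call.Tool) // filesystem_list
	}
}
```

A detected call must still be validated against the tools actually discovered from the MCP servers before execution.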
- Common Utilities (`internal/common/`)
  - Structured logging with hierarchical loggers (`internal/common/logging/`)
  - Standardized error handling (`internal/common/errors/`)
  - HTTP client with retry logic (`internal/common/http/`)
  - Shared types and utilities
MCP servers are configured using a JSON file following this structure:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/tuannvm/Projects",
        "/Users/tuannvm/Downloads"
      ],
      "env": {
        "DEBUG": "mcp:*"
      }
    },
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-github-token"
      }
    },
    "web-server": {
      "mode": "http",
      "url": "http://localhost:8080/mcp",
      "initialize_timeout_seconds": 30
    }
  }
}
```

LLM providers are configured in the main configuration with factory pattern support:
```yaml
llm_provider: "openai" # Which provider to use
llm_providers:
  openai:
    type: "openai"
    model: "gpt-4.1" # Latest: 21.4% improvement in coding (Oct 2025)
    # api_key loaded from OPENAI_API_KEY env var
  ollama:
    type: "ollama"
    model: "llama3.3" # Latest: 70B state-of-the-art (2025)
    base_url: "http://localhost:11434"
  anthropic:
    type: "anthropic"
    model: "claude-sonnet-4.5" # Latest: Best for coding and agents (Sept 2025)
    # api_key loaded from ANTHROPIC_API_KEY env var
```

- Start-up Sequence:
  - Parse command-line arguments with debug flags
  - Load environment variables and the configuration file
  - Initialize the structured logging system
  - Initialize the LLM provider registry with configured providers
  - Initialize MCP clients for each configured server
  - Connect to Slack using Socket Mode
- Message Processing:
  - Receive messages from Slack (mentions or DMs)
  - Forward messages to the configured LLM provider through the registry
  - Process the LLM response through the LLM-MCP Bridge with enhanced parsing (v0.1.14)
  - If a tool invocation is detected:
    - Execute the appropriate MCP tool call
    - Format tool results with Slack-compatible formatting
  - Return the final response to Slack with Block Kit formatting
  - Streaming responses now have memory and goroutine leak protection (v0.1.14)
- Tool Detection & Execution:
  - JSON pattern matching for structured tool invocations
  - Regular-expression matching for natural-language requests
  - Validate detected calls against available tools discovered from MCP servers
  - Execute the tool call with appropriate parameters
  - Process and format tool results with rich Slack formatting
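The regular-expression side of detection might look like the sketch below. The patterns and tool names are invented for illustration; the real bridge validates any match against the tools discovered at runtime.

```go
package main

import (
	"fmt"
	"regexp"
)

// nlPatterns map simple natural-language requests to tool names.
// These example patterns and tool names are hypothetical.
var nlPatterns = []struct {
	re   *regexp.Regexp
	tool string
}{
	{regexp.MustCompile(`(?i)\blist (?:the )?files? in (\S+)`), "filesystem_list"},
	{regexp.MustCompile(`(?i)\bsearch github for (.+)`), "github_search"},
}

// matchNaturalLanguage returns the tool name and captured argument for the
// first matching pattern, or ("", "") when nothing matches.
func matchNaturalLanguage(text string) (tool, arg string) {
	for _, p := range nlPatterns {
		if m := p.re.FindStringSubmatch(text); m != nil {
			return p.tool, m[1]
		}
	}
	return "", ""
}

func main() {
	tool, arg := matchNaturalLanguage("please list files in /tmp")
	fmt.Println(tool, arg) // filesystem_list /tmp
}
```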
The application includes a comprehensive Slack formatting system:
- Automatic Format Detection:
  - Plain text with mrkdwn formatting
  - JSON Block Kit structures
  - Structured data converted to Block Kit
- Markdown Support:
  - Automatic conversion from standard Markdown to Slack mrkdwn
  - Support for bold, italic, strikethrough, code blocks, lists, and links
  - Conversion of quoted strings to inline code blocks
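The core of such a converter is a handful of regex rewrites, since mrkdwn uses `*bold*`, `~strike~`, and `<url|text>` where Markdown uses `**bold**`, `~~strike~~`, and `[text](url)`. This is a simplified sketch; the real converter in `internal/slack/formatter/` also handles italics, lists, and code blocks.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	boldRe   = regexp.MustCompile(`\*\*(.+?)\*\*`)
	strikeRe = regexp.MustCompile(`~~(.+?)~~`)
	linkRe   = regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)
	quoteRe  = regexp.MustCompile(`"([^"\n]+)"`)
)

// toMrkdwn converts a few common Markdown constructs to Slack mrkdwn:
// **bold** -> *bold*, ~~strike~~ -> ~strike~, [text](url) -> <url|text>,
// and "quoted strings" -> `inline code`, as described above.
func toMrkdwn(md string) string {
	out := boldRe.ReplaceAllString(md, `*$1*`)
	out = strikeRe.ReplaceAllString(out, `~$1~`)
	out = linkRe.ReplaceAllString(out, `<$2|$1>`)
	out = quoteRe.ReplaceAllString(out, "`$1`")
	return out
}

func main() {
	fmt.Println(toMrkdwn(`**Done**: see [the docs](https://example.com)`))
}
```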
- Block Kit Support:
  - Headers, sections, fields, actions, and dividers
  - Automatic field truncation to stay within Slack's limits
  - Rich interactive components
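The truncation logic can be sketched as below. Slack caps a section block's text object at 3000 characters (field texts have a lower 2000-character cap); counting runes rather than bytes keeps multibyte text safe.

```go
package main

import "fmt"

// truncateForSlack trims text to a Slack per-block limit, appending an
// ellipsis when anything was cut. Counting runes rather than bytes avoids
// splitting a multibyte character at the boundary.
func truncateForSlack(text string, limit int) string {
	runes := []rune(text)
	if len(runes) <= limit {
		return text
	}
	return string(runes[:limit-1]) + "…"
}

func main() {
	fmt.Println(truncateForSlack("hello world", 8)) // hello w…
}
```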
- Logging System:
  - Structured logging with levels (Debug, Info, Warn, Error)
  - Component-specific loggers for better tracking
  - Log level configurable via environment variable
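A hierarchical, env-configured logger in the spirit of `internal/common/logging/` might look like this. All names here (including the `LOG_LEVEL` variable) are illustrative assumptions, not the package's actual API.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Level ordering: Debug < Info < Warn < Error.
type Level int

const (
	Debug Level = iota
	Info
	Warn
	Error
)

// Logger is a minimal component-scoped logger.
type Logger struct {
	component string
	min       Level
}

// New reads the minimum level from LOG_LEVEL (debug|info|warn|error),
// defaulting to Info.
func New(component string) *Logger {
	min := Info
	switch strings.ToLower(os.Getenv("LOG_LEVEL")) {
	case "debug":
		min = Debug
	case "warn":
		min = Warn
	case "error":
		min = Error
	}
	return &Logger{component: component, min: min}
}

// Child derives a sub-logger whose name extends the parent's, giving the
// hierarchical "slack/formatter"-style component prefixes.
func (l *Logger) Child(name string) *Logger {
	return &Logger{component: l.component + "/" + name, min: l.min}
}

// log formats the line, prints it when the level passes the threshold,
// and returns it either way (which keeps this sketch easy to test).
func (l *Logger) log(lv Level, label, format string, args ...interface{}) string {
	line := fmt.Sprintf("[%s] %s: %s", label, l.component, fmt.Sprintf(format, args...))
	if lv >= l.min {
		fmt.Println(line)
	}
	return line
}

func (l *Logger) Debugf(format string, args ...interface{}) string {
	return l.log(Debug, "DEBUG", format, args...)
}

func (l *Logger) Infof(format string, args ...interface{}) string {
	return l.log(Info, "INFO", format, args...)
}

func main() {
	root := New("slack")
	fmtr := root.Child("formatter")
	fmtr.Infof("converted markdown to mrkdwn")
}
```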
- MCP Transport Issues:
  - Resolved stdio transport issues by initializing servers sequentially
  - Proper timeout handling for server initialization
  - Support for both HTTP/SSE and stdio transports
- Configuration Validation:
  - Automatic fallback to available providers
  - Server enable/disable functionality
  - Environment variable overrides
- Enhanced Tool Discovery:
  - Better caching of discovered tools
  - Dynamic tool refresh capabilities
  - Tool usage analytics
- Advanced LLM Features:
  - Function calling support for compatible providers
  - Conversation context management
  - Multi-turn conversation support
- Monitoring & Observability:
  - Metrics collection for tool usage
  - Performance monitoring
  - Health check endpoints
- Security Enhancements:
  - Tool permission controls
  - User-based access restrictions
  - API key rotation support