
(feat) Add tidewave for debugging dev#30

Merged
kyle-neal merged 1 commit into main from feat/tidewave
Mar 18, 2026

Conversation

@kyle-neal
Owner

Inspired by an article in today's Elixir Radar email.

What was set up

Three files were changed:

mix.exs — added {:tidewave, "~> 0.5", only: :dev}. The only: :dev option ensures it never runs in production (it would expose raw code evaluation and DB access).
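In context, the entry sits in the deps/0 list in mix.exs — a minimal sketch (the surrounding deps are illustrative placeholders, not this project's actual list):

```elixir
# mix.exs
defp deps do
  [
    # ... existing deps ...
    # Dev-only: excluded from :test and :prod builds entirely
    {:tidewave, "~> 0.5", only: :dev}
  ]
end
```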

endpoint.ex — added the Tidewave plug before the code-reloading block:

if Code.ensure_loaded?(Tidewave) do
  plug Tidewave
end

The Code.ensure_loaded? guard means in test/prod (where the dep doesn't exist) this is a no-op.
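The guard's behavior is easy to see in isolation — Code.ensure_loaded?/1 returns true only when the module is compiled and on the code path, so the plug line is skipped cleanly (rather than crashing) wherever the dep is absent:

```elixir
# True for any module on the code path
Code.ensure_loaded?(Enum)
#=> true

# False (not an error) for modules that don't exist —
# which is exactly the situation in :test and :prod builds
Code.ensure_loaded?(Tidewave)
```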

dev.exs — added:

config :phoenix_live_view,
  debug_heex_annotations: true,
  debug_attributes: true

This enriches LiveView components with source location metadata that Tidewave surfaces to agents.
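Concretely, with these flags on, rendered HTML gains comments and attributes pointing back at the template source — roughly like this (module name and path are illustrative, and the exact format varies by LiveView version):

```html
<!-- <RevstackWeb.Layouts.app> lib/revstack_web/components/layouts.ex:5 -->
<main data-phx-loc="5">
  ...
</main>
<!-- </RevstackWeb.Layouts.app> -->
```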

When you run mix phx.server, Tidewave starts an MCP server at http://localhost:4000/mcp. That's the endpoint you point AI agents at.


Integrating with AI tools

GitHub Copilot (VS Code)

Add to .vscode/mcp.json in this workspace (create it if it doesn't exist):

{
  "servers": {
    "revstack": {
      "type": "sse",
      "url": "http://localhost:4000/mcp"
    }
  }
}

Then enable MCP in VS Code settings. Copilot will automatically list Tidewave's tools (project_eval, execute_sql_query, get_logs, get_docs, etc.) when you open Copilot Chat.

Claude Desktop

Open Settings → Developer → Edit Config (claude_desktop_config.json) and add:

{
  "mcpServers": {
    "revstack": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "http://localhost:4000/mcp"]
    }
  }
}

Restart Claude Desktop — the Tidewave tools will appear in the tool panel.

ChatGPT (Custom GPT / Operator API)

ChatGPT does not support SSE-based MCP directly yet. Use a bridge like mcp-proxy to expose it as an OpenAPI spec, then add it as a Custom GPT action.

Codex (OpenAI Codex CLI)

Codex CLI supports MCP natively. Add to ~/.codex/config.toml:

[mcp_servers.revstack]
type = "sse"
url  = "http://localhost:4000/mcp"

Then use codex --approval-mode full-auto so it can drive the full observe→hypothesize→act loop.


Load test + observe workflow

Start your server (mix phx.server), run wrk or k6 in one terminal, then prompt the agent:

"A load test is running against http://localhost:4000. Please observe the system — check top processes by memory and message queue length, look at ETS table sizes, query the database for slow query stats, and identify any bottlenecks. Then suggest optimizations."

The agent will autonomously execute expressions like:

# Top 10 processes by memory
:erlang.processes()
|> Enum.map(fn pid ->
  case Process.info(pid, [:memory, :message_queue_len, :registered_name, :current_function]) do
    nil -> nil
    info -> {pid, info}
  end
end)
|> Enum.reject(&is_nil/1)
|> Enum.sort_by(fn {_, i} -> Keyword.get(i, :memory) end, :desc)
|> Enum.take(10)

# Message queue backlog
:erlang.processes()
|> Enum.flat_map(fn pid ->
  case Process.info(pid, [:message_queue_len, :registered_name]) do
    nil -> []
    info -> if info[:message_queue_len] > 100, do: [{pid, info}], else: []
  end
end)

# DB pool status — returns the Repo supervisor's internal state.
# (Pool-level metrics are better read via telemetry or pg_stat_activity.)
:sys.get_state(Revstack.Repo)

via the project_eval tool — no copy-pasting required. It can then follow up with execute_sql_query to check pg_stat_statements for slow queries and get_logs to correlate log output, all in one conversation loop.
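For the pg_stat_statements step, the agent typically issues something like the following through execute_sql_query — a sketch that assumes the pg_stat_statements extension is enabled and PostgreSQL 13+ column names:

```sql
-- Ten slowest statements by average execution time
SELECT query,
       calls,
       mean_exec_time,
       total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```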

@kyle-neal kyle-neal merged commit 7922e81 into main Mar 18, 2026
1 check passed
@kyle-neal kyle-neal deleted the feat/tidewave branch March 18, 2026 17:13