Conversation
Add Respan as a monitoring tool integration for tracing Haystack pipelines. Includes RespanConnector for automatic tracing, RespanGenerator/RespanChatGenerator for gateway routing, and configuration reference. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Adds documentation for a new Respan observability/monitoring integration within the Haystack integrations docs, describing tracing via a connector component and gateway-based generators.
Changes:
- Adds a new integration page:
integrations/respan.md - Documents installation, environment configuration, and example pipelines (basic + RAG)
- Adds configuration/options and custom-attributes guidance
- Add Respan logo to logos/ directory
- Fix author indentation to match repo conventions (4 spaces)
- Fix RespanConnector description: correctly describe as a pass-through component that connects to the generator, not a standalone tracer
- Remove broken markdown tables with escaped pipes; use bullet lists
- Remove unverified RAG example; keep examples from official docs
- Clarify OPENAI_API_KEY is only needed for OpenAIGenerator examples
- Add RespanGenerator/RespanChatGenerator config documentation
- Add Respan-managed prompts section from official docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace GitHub org avatar with official 512x512 branding asset. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
bilgeyucel
left a comment
Thank you for the contribution @drPod! I left some minor comments. Also, can you update the README on the PyPI page? It highlights keywordsai instead of respan.
```python
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from respan_exporter_haystack import RespanConnector

pipeline = Pipeline()
pipeline.add_component("respan", RespanConnector(api_key="your-api-key"))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))
pipeline.connect("respan", "llm")

result = pipeline.run({"respan": {"prompt": "Tell me a joke about AI"}})
print(result)
```
Can you rewrite this example with OpenAIChatGenerator?
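A sketch of what the requested rewrite might look like. The Haystack components (`OpenAIChatGenerator`, `ChatMessage`) are real, but whether `RespanConnector` accepts chat messages, and the exact shape of its run payload, are assumptions not verified against the package:

```python
from haystack import Pipeline
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from respan_exporter_haystack import RespanConnector  # assumed to pass chat messages through

pipeline = Pipeline()
pipeline.add_component("respan", RespanConnector(api_key="your-api-key"))
pipeline.add_component("llm", OpenAIChatGenerator(model="gpt-4o-mini"))
pipeline.connect("respan", "llm")

# Assumed input shape; the connector's chat interface may differ.
result = pipeline.run(
    {"respan": {"messages": [ChatMessage.from_user("Tell me a joke about AI")]}}
)
print(result)
```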
```python
from respan_exporter_haystack import RespanGenerator

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template="Tell me about {{topic}}"))
```
Can you rewrite this example code with ChatPromptBuilder? This would also require you to replace RespanGenerator with RespanChatGenerator
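A possible shape for that rewrite. `ChatPromptBuilder` and `ChatMessage` are standard Haystack 2.x components; the `RespanChatGenerator` constructor arguments here are assumptions based on the snippets in this PR, not a verified signature:

```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from respan_exporter_haystack import RespanChatGenerator  # constructor args assumed

pipeline = Pipeline()
pipeline.add_component(
    "prompt_builder",
    ChatPromptBuilder(template=[ChatMessage.from_user("Tell me about {{topic}}")]),
)
pipeline.add_component("llm", RespanChatGenerator(api_key="your-api-key"))
# ChatPromptBuilder outputs a list of ChatMessage on "prompt";
# a chat generator consumes it via "messages".
pipeline.connect("prompt_builder.prompt", "llm.messages")

result = pipeline.run({"prompt_builder": {"topic": "observability"}})
print(result)
```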
### Use Respan-managed prompts

You can use prompts managed in the Respan platform by passing a `prompt_id` and `prompt_version` to `RespanGenerator`:
this is interesting! Can you put a screenshot of this feature on the platform?
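For context, a minimal sketch of how that feature would be used, based only on the parameter names mentioned above (`prompt_id`, `prompt_version`); the exact signature and run-input shape are assumptions:

```python
from haystack import Pipeline
from respan_exporter_haystack import RespanGenerator  # signature assumed

pipeline = Pipeline()
pipeline.add_component(
    "llm",
    RespanGenerator(
        api_key="your-api-key",
        prompt_id="your-prompt-id",  # ID of the prompt managed on the Respan platform
        prompt_version=1,            # assumed: pins a specific prompt version
    ),
)

# Assumed: the prompt text itself is fetched from the platform,
# so no local prompt needs to be supplied at run time.
result = pipeline.run({"llm": {}})
print(result)
```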
Summary

- `RespanConnector` for automatic pipeline tracing
- `RespanGenerator` and `RespanChatGenerator` for gateway routing

About Respan

Respan is an observability platform for monitoring and tracing LLM applications. The `respan-exporter-haystack` package is available on PyPI.

References
🤖 Generated with Claude Code