
feat: upgrade langchain callback handler to OTLP span format#51

Draft
haroldorquesta wants to merge 5 commits into main from feat/langchain-callback-upgrade

Conversation

@haroldorquesta
Contributor

Rewrite the langchain integration to convert callback events into OTLP spans
sent to /v2/traces/v1/traces, reusing the existing OTEL pipeline instead of
the limited /v2/langchain endpoint.

Key changes:
- Split monolithic __init__.py into focused submodules
- Add global setup()/teardown() with register_configure_hook for automatic tracing
- Add async handler (AsyncOrqLangchainCallback)
- Send OTLP spans with gen_ai.*/langsmith.* attributes
- Fix response role bug (was SYSTEM, now assistant)
- Fix class-level mutable dict bug (now instance-level)
- Fix llm_output=None crash with robust token extraction
- Fix memory leak (events cleaned up after send)
- Replace print() with logger.debug()
- Handle all message types (AIMessage, ToolMessage, FunctionMessage, ChatMessage)
- Add error callbacks (on_llm_error, on_chain_error, on_tool_error, on_retriever_error)
- Add streaming token accumulation (on_llm_new_token)
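One of the fixes above replaces a crash on `llm_output=None` with robust token extraction. A minimal sketch of that defensive pattern, assuming the usual LangChain `llm_output["token_usage"]` layout (the helper name `extract_token_usage` is hypothetical, not the PR's actual code):

```python
# Hypothetical sketch of the "llm_output=None" fix: tolerate a missing
# llm_output dict or a missing token_usage key instead of raising.
from typing import Any, Optional


def extract_token_usage(llm_output: Optional[dict[str, Any]]) -> dict[str, int]:
    """Pull token counts out of an LLM result without crashing on None."""
    usage = (llm_output or {}).get("token_usage") or {}
    return {
        "prompt_tokens": int(usage.get("prompt_tokens", 0)),
        "completion_tokens": int(usage.get("completion_tokens", 0)),
        "total_tokens": int(usage.get("total_tokens", 0)),
    }
```

With this shape, `extract_token_usage(None)` returns all-zero counts rather than raising `AttributeError`, which matches the bug the PR describes.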

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@haroldorquesta haroldorquesta self-assigned this Apr 2, 2026
haroldorquesta and others added 4 commits April 3, 2026 15:41
Rewrite span builder and client to produce the exact OTLP JSON envelope
(resourceSpans > scopeSpans > spans) that the backend's LangSmith adapter
expects, instead of the custom {"spans": [...]} format.

Key changes:
- OTLP envelope with scope.name="langsmith" for adapter detection
- Nanosecond timestamps (startTimeUnixNano/endTimeUnixNano)
- OTLP attribute encoding (stringValue/intValue/doubleValue/etc)
- Input/output as gen_ai.prompt and gen_ai.completion JSON strings
- Token usage as gen_ai.usage.* attributes
- Root spans with empty parentSpanId -> SpanTypes.Trace
- UUID-based IDs (hyphens stripped) for trace_id and span_id
- Batch flush via timer to send all spans in one request
- Post to /v2/otel/v1/traces (OTEL endpoint, not traced endpoint)
- Add resolve_span_name utility for name resolution from multiple sources

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
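The envelope shape described in this commit can be sketched as follows. Only the `resourceSpans > scopeSpans > spans` nesting, `scope.name="langsmith"`, the OTLP typed-value encoding, nanosecond timestamp fields, hyphen-stripped UUID IDs, and empty `parentSpanId` for roots come from the commit message; helper names and the example attribute values are illustrative assumptions:

```python
# Sketch of the OTLP JSON envelope the backend's LangSmith adapter expects.
import json
import uuid


def otlp_attribute(key, value):
    """Encode one attribute in OTLP JSON form (stringValue/intValue/doubleValue/boolValue)."""
    if isinstance(value, bool):      # check bool before int: bool subclasses int
        typed = {"boolValue": value}
    elif isinstance(value, int):
        typed = {"intValue": str(value)}  # OTLP JSON encodes 64-bit ints as strings
    elif isinstance(value, float):
        typed = {"doubleValue": value}
    else:
        typed = {"stringValue": str(value)}
    return {"key": key, "value": typed}


def build_envelope(spans):
    """Wrap spans in resourceSpans > scopeSpans with scope.name for adapter detection."""
    return {
        "resourceSpans": [{
            "resource": {"attributes": []},
            "scopeSpans": [{
                "scope": {"name": "langsmith"},
                "spans": spans,
            }],
        }]
    }


span = {
    "traceId": uuid.uuid4().hex,           # UUID-based ID, hyphens stripped
    "spanId": uuid.uuid4().hex[:16],
    "parentSpanId": "",                    # empty => root span
    "name": "ChatOpenAI",
    "startTimeUnixNano": "1712000000000000000",
    "endTimeUnixNano": "1712000001000000000",
    "attributes": [otlp_attribute("gen_ai.usage.total_tokens", 42)],
}
payload = json.dumps(build_envelope([span]))
```

A batch flush would collect several such span dicts and POST one `payload` per timer tick, rather than one request per span.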
…case fixes

Add debug logging for span lifecycle events, pass chain name from kwargs,
handle None serialized data in provider extraction, and update default
fallback name to LangGraph.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
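The None-handling and fallback-name fixes in this commit might look roughly like the sketch below. The helper name `extract_provider` and the `serialized["id"]` path layout are assumptions based on LangChain's serialized callback payloads; only the None guard and the `LangGraph` default come from the commit message:

```python
# Hypothetical sketch: best-effort provider/class name from a LangChain
# `serialized` dict, tolerating None instead of crashing.
def extract_provider(serialized):
    if not serialized:
        return "LangGraph"  # updated default fallback name
    ids = serialized.get("id") or []
    # LangChain serializes class paths like ["langchain", ..., "ChatOpenAI"];
    # the last segment is the class name.
    return ids[-1] if ids else serialized.get("name", "LangGraph")
```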
