The MS JS distro (0.1.0-beta) LangChainTraceInstrumentor writes the LLM response model into the gen_ai.request.model attribute and never sets gen_ai.response.model. The actual request deployment alias is also lost in some flows. The bug reproduces across three LangChain code paths and also affects span naming.
Affected scenarios
1. LangChain Node.js – AzureChatOpenAI – Chat Completions API (CAPI)
Span name: chat gpt-3.5-turbo
gen_ai.request.model: o4-mini-2025-04-16 ← wrong (this is the response model)
gen_ai.response.model: (empty)
Prompt tokens: 18 / Completion tokens: 0
Root cause: The instrumentor reads LLMResult.llmOutput.model_name (the response model) and writes it to gen_ai.request.model, conflating the two attributes. The actual request deployment alias is dropped entirely. The span name falls back to LangChain's default model field (gpt-3.5-turbo) because no model kwarg is passed to AzureChatOpenAI.
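Schematically, the mis-mapping looks like this (a sketch based on the field names in this report, not the distro's actual source):

```typescript
import type { LLMResult } from "@langchain/core/outputs";
import type { Span } from "@opentelemetry/api";

// Sketch of the buggy mapping described above (illustrative only).
function onLLMEnd(span: Span, result: LLMResult): void {
  const responseModel = result.llmOutput?.model_name; // e.g. "o4-mini-2025-04-16"
  if (responseModel) {
    // BUG: the *response* model lands in the *request* attribute...
    span.setAttribute("gen_ai.request.model", responseModel);
  }
  // ...and gen_ai.response.model is never set.
}
```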
2. LangChain Node.js – ChatOpenAI (Foundry) – Chat Completions API (CAPI)
Span name: chat deployment-o4-mini
gen_ai.request.model: o4-mini-2025-04-16 ← wrong (this is the response model)
gen_ai.response.model: (empty)
Prompt tokens: 5 / Completion tokens: 0
Root cause: Same as scenario 1 — the response model is mis-attributed into gen_ai.request.model. Here the span name picks up the deployment string (deployment-o4-mini) because it is passed as the model kwarg.
3. LangChain Node.js – ChatOpenAI (Foundry, useResponsesApi) – Responses API (RAPI)
Span name: chat deployment-o4-mini
gen_ai.request.model: deployment-o4-mini
gen_ai.response.model: (empty)
Root cause: The LangChain callback path doesn't extract a model from the Responses-API result object (the field shape differs from Chat Completions), so it falls back to the request deployment for gen_ai.request.model and leaves gen_ai.response.model empty.
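A defensive extraction would handle both shapes instead of silently returning nothing. A minimal sketch — model_name is the Chat Completions field named in this report; the alternate model key is an assumption, since the exact Responses-API field is not confirmed here:

```typescript
import type { LLMResult } from "@langchain/core/outputs";

// Try the Chat Completions field first, then a hypothetical Responses-API key.
function extractResponseModel(result: LLMResult): string | undefined {
  return result.llmOutput?.model_name ?? result.llmOutput?.model; // `model` is assumed
}
```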
Expected Behavior
gen_ai.request.model should be the model/deployment requested by the caller (the deployment alias for Azure/Foundry, or the model kwarg passed to the client).
gen_ai.response.model should be populated from LLMResult.llmOutput.model_name (Chat Completions) or the equivalent field in the Responses-API result object.
Span name should follow GenAI semantic conventions and use the request model.
Across all three scenarios above, the request and response model attributes should be distinct and both populated; a concrete example follows.
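For scenario 2, a corrected span would carry both attributes distinctly (values taken from the observations above):

```
Span name: chat deployment-o4-mini
gen_ai.request.model: deployment-o4-mini
gen_ai.response.model: o4-mini-2025-04-16
```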
Steps to Reproduce
Install @microsoft/opentelemetry-distro-javascript (or the appropriate LangChain instrumentation package) at version 0.1.0-beta.
Configure the distro to instrument LangChain.
Reproduce each of the three scenarios (a combined sketch follows this list):
a. Build an AzureChatOpenAI client (CAPI) without passing a model kwarg, and invoke it.
b. Build a ChatOpenAI client pointed at Foundry (CAPI) with the deployment passed as the model kwarg, and invoke it.
c. Build a ChatOpenAI client pointed at Foundry with useResponsesApi: true (RAPI), and invoke it.
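A combined sketch of steps a–c, assuming an ESM entry point, placeholder endpoint/deployment values, and Azure credentials in the usual AZURE_OPENAI_* environment variables (the distro's own registration call is omitted since its exact API isn't shown here):

```typescript
import { AzureChatOpenAI, ChatOpenAI } from "@langchain/openai";

// a. AzureChatOpenAI over Chat Completions, no `model` kwarg.
const azureCapi = new AzureChatOpenAI({
  azureOpenAIApiDeploymentName: "deployment-o4-mini", // placeholder alias
  azureOpenAIApiVersion: "2024-10-21",                // placeholder version
});

// b. ChatOpenAI pointed at Foundry, deployment alias passed as `model`.
const foundryCapi = new ChatOpenAI({
  model: "deployment-o4-mini", // placeholder
  configuration: { baseURL: process.env.FOUNDRY_ENDPOINT },
});

// c. Same as (b) but over the Responses API.
const foundryRapi = new ChatOpenAI({
  model: "deployment-o4-mini", // placeholder
  useResponsesApi: true,
  configuration: { baseURL: process.env.FOUNDRY_ENDPOINT },
});

// One `chat …` span per invocation; inspect gen_ai.request.model and
// gen_ai.response.model on each.
for (const llm of [azureCapi, foundryCapi, foundryRapi]) {
  await llm.invoke("ping");
}
```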
Observe that gen_ai.request.model contains the response model (scenarios 1 and 2) or the request deployment with no response model set (scenario 3), and that gen_ai.response.model is empty in all three cases.
Component
general
Environment
@microsoft/opentelemetry-distro-javascript 0.1.0-beta
Additional Context
Suggested fix
Stop writing LLMResult.llmOutput.model_name to gen_ai.request.model; route it to gen_ai.response.model instead.
Derive gen_ai.request.model from the invocation params / client configuration (e.g., the deployment alias for Azure/Foundry, or the model kwarg).
Ensure gen_ai.response.model is also populated when useResponsesApi is enabled.
Add coverage for all three paths (AzureChatOpenAI CAPI, ChatOpenAI Foundry CAPI, ChatOpenAI Foundry RAPI). A sketch of the corrected mapping follows.
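A minimal sketch of the corrected mapping — illustrative only, not the distro's actual source; requestModel stands for whatever the instrumentor can read from the invocation params / client config, and the alternate model key on the response side is an assumption as noted above:

```typescript
import type { LLMResult } from "@langchain/core/outputs";
import type { Span } from "@opentelemetry/api";

function recordModelAttributes(
  span: Span,
  requestModel: string | undefined, // deployment alias or `model` kwarg
  result: LLMResult
): void {
  // Request side: taken from invocation params, never from the result.
  if (requestModel) {
    span.setAttribute("gen_ai.request.model", requestModel);
    span.updateName(`chat ${requestModel}`); // span name follows the request model
  }

  // Response side: from the result object, with a fallback for the
  // Responses-API shape (`model` is an assumed key).
  const responseModel = result.llmOutput?.model_name ?? result.llmOutput?.model;
  if (responseModel) {
    span.setAttribute("gen_ai.response.model", responseModel);
  }
}
```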