Fix reasoning effort loss in cross-provider thinking conversion (#120)
Open
Matt Perpick (clutchski) wants to merge 1 commit into main from
Conversation
…ersion

When converting Anthropic `thinking.budget_tokens` to universal `ReasoningConfig`, the budget→effort heuristic used `DEFAULT_MAX_TOKENS` (4096) instead of the actual request `max_tokens`. This caused incorrect effort levels when `max_tokens` differed from the default (e.g., budget=1024 with max_tokens=1024 → High, not Low).

Added a `From<(&Thinking, Option<i64>)>` impl that accepts `max_tokens` context, and updated both Anthropic and Bedrock adapters to pass the actual `max_tokens`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
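The fix described above can be sketched as follows. The type and function names (`Thinking`, `ReasoningEffort`, `budget_to_effort`, `DEFAULT_MAX_TOKENS`) mirror those mentioned in the PR, but the exact threshold ratios are illustrative assumptions, not the crate's actual values:

```rust
// Assumed default matching the PR's DEFAULT_MAX_TOKENS (4096).
const DEFAULT_MAX_TOKENS: i64 = 4096;

#[derive(Debug, PartialEq)]
enum ReasoningEffort {
    Low,
    Medium,
    High,
}

struct Thinking {
    budget_tokens: i64,
}

// Heuristic: classify effort by what fraction of max_tokens the
// thinking budget consumes. The 0.25 / 0.75 cutoffs are assumptions
// chosen so that the PR's example behaves as described.
fn budget_to_effort(budget: i64, max_tokens: i64) -> ReasoningEffort {
    let ratio = budget as f64 / max_tokens as f64;
    if ratio <= 0.25 {
        ReasoningEffort::Low
    } else if ratio <= 0.75 {
        ReasoningEffort::Medium
    } else {
        ReasoningEffort::High
    }
}

// The fix: accept the request's max_tokens as context instead of
// always measuring the budget against the fixed default.
impl From<(&Thinking, Option<i64>)> for ReasoningEffort {
    fn from((thinking, max_tokens): (&Thinking, Option<i64>)) -> Self {
        let max = max_tokens.unwrap_or(DEFAULT_MAX_TOKENS);
        budget_to_effort(thinking.budget_tokens, max)
    }
}

fn main() {
    let t = Thinking { budget_tokens: 1024 };
    // With the actual max_tokens=1024 the budget fills the whole
    // window, so effort is High.
    assert_eq!(ReasoningEffort::from((&t, Some(1024))), ReasoningEffort::High);
    // Measured against the 4096 default (the old buggy behavior,
    // and the fallback when max_tokens is absent), it looks Low.
    assert_eq!(ReasoningEffort::from((&t, None)), ReasoningEffort::Low);
    println!("ok");
}
```

Under these assumptions, the same `budget_tokens` value maps to different effort levels depending on the `max_tokens` it is measured against, which is exactly why passing the actual request value matters.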
Force-pushed from 31b981d to b46ccdd
Summary
- Fixes reasoning effort loss when converting Anthropic `thinking.budget_tokens` to universal format. The `budget_to_effort` heuristic was using `DEFAULT_MAX_TOKENS` (4096) instead of the actual request `max_tokens`, causing effort "high" or "medium" to become "low" when `max_tokens` was small.
- Added a `From<(&Thinking, Option<i64>)>` impl that accepts `max_tokens` context; updated both Anthropic and Bedrock adapters.
- Added a `thinkingEnabledParam` test case to payload snapshots and a cross-provider test to reproduce the bug.

Test plan

- Cross-provider test: `test_anthropic_thinking_to_openai_effort_with_small_max_tokens`
- `From` impl unit test: `test_from_anthropic_thinking_without_max_tokens_uses_default`
- Payload snapshot: `thinkingEnabledParam` case

🤖 Generated with Claude Code