fix: Enhance LLM tip generation robustness #56
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough: Added a module logger and improved error handling.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@kaizen/llm/tips/tips.py`:
- Around line 152-156: Wrap the call to
TipGenerationResponse.model_validate(json.loads(clean_response)) in an
additional except block that catches pydantic.ValidationError (import
ValidationError from pydantic alongside other imports), log a warning similar to
the JSONDecodeError case using logger.warning with the error and a truncated
repr(clean_response[:500]), and return [] so the function gracefully handles
schema validation failures; ensure the new except references
TipGenerationResponse.model_validate, logger.warning, and clean_response.
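The handling described in the prompt above can be sketched roughly as follows. This is a minimal illustration, not the actual `tips.py` code: `TipGenerationResponse` is reduced to a stand-in schema with a single `tips` field, and the wrapper function name `parse_tips` is hypothetical.

```python
import json
import logging

from pydantic import BaseModel, ValidationError

logger = logging.getLogger(__name__)


class TipGenerationResponse(BaseModel):
    # Stand-in schema; the real model lives in kaizen/llm/tips/tips.py
    tips: list[str]


def parse_tips(clean_response: str) -> list[str]:
    """Return [] on malformed JSON or schema mismatch instead of raising."""
    try:
        parsed = TipGenerationResponse.model_validate(json.loads(clean_response))
    except json.JSONDecodeError as e:
        logger.warning("Malformed JSON in tip response: %s; payload=%r", e, clean_response[:500])
        return []
    except ValidationError as e:
        logger.warning("Schema validation failed for tip response: %s; payload=%r", e, clean_response[:500])
        return []
    return parsed.tips
```

With this shape, both failure modes (unparseable JSON and valid JSON that does not match the schema) degrade to an empty tip list with a truncated payload logged for debugging.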
🧹 Nitpick comments (2)
kaizen/llm/tips/tips.py (2)
78-80: Consider logging when skipping non-empty assistant content. The previous code raised an exception for unrecognized content, which served as an alert for unexpected data shapes. Silently continuing is fine for empty/`None` content, but if `content` is a non-empty, non-string, non-list value (e.g., a dict or unexpected type), this will now silently discard it. A debug-level log would help surface unexpected content shapes without breaking the flow.

💡 Suggested improvement
```diff
         else:
-            # Skip empty assistant messages (common from tool-calling patterns)
-            continue
+            # Skip empty assistant messages (common from tool-calling patterns)
+            if content:
+                logger.debug(f"Skipping unexpected assistant content type: {type(content).__name__}")
+            continue
```
5-5: Logger name `"entities-mcp"` looks like it belongs to a different module. The conventional Python pattern is `logging.getLogger(__name__)`, which would yield `"kaizen.llm.tips.tips"` and integrate naturally with hierarchical logger configuration. Using a hardcoded name from an unrelated namespace makes filtering and log routing confusing.

💡 Suggested fix
```diff
-logger = logging.getLogger("entities-mcp")
+logger = logging.getLogger(__name__)
```
…d validate trajectory data input.
LGTM.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@kaizen/llm/tips/tips.py`:
- Line 16: The module-level logger is created with a hardcoded name
"entities-mcp" which is incorrect; update the logger instantiation in tips.py
(the logger variable) to use the module's __name__ (logging.getLogger(__name__))
so logs inherit the package logger hierarchy and are attributable to
kaizen.llm.tips.tips instead of a copy-pasted name.
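The difference the prompt describes can be demonstrated with only the standard library. Here `"kaizen.llm.tips.tips"` is a literal string standing in for what `__name__` would be inside the module:

```python
import logging

# Stand-in for what __name__ evaluates to inside kaizen/llm/tips/tips.py
module_name = "kaizen.llm.tips.tips"

hardcoded_logger = logging.getLogger("entities-mcp")  # copy-pasted name from another project
module_logger = logging.getLogger(module_name)        # what getLogger(__name__) would return

# Configure the package root; children inherit the level through the dotted hierarchy
logging.getLogger("kaizen").setLevel(logging.DEBUG)

print(module_logger.getEffectiveLevel() == logging.DEBUG)     # True: inherits from "kaizen"
print(hardcoded_logger.getEffectiveLevel() == logging.DEBUG)  # False: only sees the root logger
```

A hardcoded name from an unrelated namespace never participates in the `kaizen.*` hierarchy, so package-level log configuration silently misses it.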
Enhance LLM tip generation robustness by skipping empty assistant messages and handling malformed/empty responses, and add validation for trajectory data.
Summary by CodeRabbit
Bug Fixes
Chores
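The "add validation for trajectory data" item in the description could look something like the following sketch. The expected shape (a list of message dicts with a `role` key) and the function name `validate_trajectory` are assumptions for illustration, not taken from the PR:

```python
def validate_trajectory(trajectory) -> list[dict]:
    """Hypothetical validation sketch: reject non-list input and malformed messages.

    The real validation in the PR may differ; this only shows the general pattern
    of failing fast with a descriptive error before tip generation runs.
    """
    if not isinstance(trajectory, list):
        raise TypeError(f"trajectory must be a list, got {type(trajectory).__name__}")
    for i, msg in enumerate(trajectory):
        if not isinstance(msg, dict) or "role" not in msg:
            raise ValueError(f"trajectory[{i}] must be a dict with a 'role' key")
    return trajectory
```

Validating at the boundary keeps the downstream tip-generation code free of defensive type checks.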