Gemini SDK #1187
base: testing
Conversation
/windsurf-review
Other comments (12)
- src/services/commonServices/Google/geminiCall.py (98-98) The variable `tools` is used in the response formatter on line 98, but it's not defined in this code path. It should be initialized to an empty dictionary when there are no function calls.
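  A minimal sketch of the suggested fix: initialize `tools` to an empty dict so the response formatter never sees an undefined name. The shape of `response` and the key names below are assumptions for illustration, not the project's actual code.

  ```python
  def collect_tools(response):
      # Default for the no-function-call path, so `tools` is always defined
      tools = {}
      for call in response.get('function_calls') or []:
          tools[call['name']] = call.get('args', {})
      return tools
  ```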
- src/services/commonServices/Google/gemini_modelrun.py (22-22) The `model_dump()` method might not be available in the Gemini SDK response object. According to Google's Generative AI SDK documentation, you should use `.text` for simple text responses or access specific properties of the response object. Consider checking the SDK documentation for the correct way to serialize the response.
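  One defensive option, sketched under the assumption that the response object may or may not expose `model_dump()` depending on SDK version: probe for the common serializers before falling back to `.text`. The attribute names are assumptions about the SDK's shape.

  ```python
  def serialize_response(resp):
      # Prefer a pydantic-style serializer if the object provides one
      if hasattr(resp, 'model_dump'):
          return resp.model_dump()
      if hasattr(resp, 'to_dict'):
          return resp.to_dict()
      # Fall back to the plain text accessor
      return {'text': getattr(resp, 'text', str(resp))}

  class _TextOnly:  # stand-in for a response exposing only `.text`
      text = 'hello'
  ```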
- src/services/commonServices/Google/geminiCall.py (61-61) There's a variable name mismatch here: this line should use `self.image_data` rather than `image_url`, since `self.image_data` is what is checked for the `.png` suffix.
  ```python
  user_parts.append(types.Part.from_uri(file_uri=self.image_data, mime_type="image/png" if self.image_data.endswith(".png") else "image/jpeg"))
  ```
- src/services/commonServices/baseService/baseService.py (154-157) The exception handling for JSON parsing in the Gemini case doesn't log errors. Consider adding error logging to help with debugging:
  ```python
  try:
      function_response['content'] = json.loads(function_response['content'])
  except Exception as e:
      logger.error(f"Error parsing function response content as JSON: {str(e)}")
  ```
- src/services/commonServices/baseService/baseService.py (154-157) Using a bare `except:` clause is not recommended, as it catches all exceptions, including KeyboardInterrupt and SystemExit. Specify the exception types you expect to handle:
  ```python
  try:
      function_response['content'] = json.loads(function_response['content'])
  except (ValueError, TypeError):
      pass
  ```
- src/configs/serviceKeys.py (127-128) The parameter naming convention has changed from the previous style, e.g. 'logprobs' → 'response_logprobs' and 'topK' → 'top_k'. Make sure all code that references these parameters has been updated accordingly to prevent runtime errors.
- src/services/commonServices/Google/gemini_modelrun.py (11-12) The token validation code is commented out but should be updated to work with the Gemini SDK. Token counting and limits are likely different between OpenAI and Gemini models, so the validation logic may need to be adjusted.
- src/configs/serviceKeys.py (127-132) Several parameters have been removed from the Gemini service configuration without replacements: 'echo_input', 'input_text', 'best_response_count', and 'response_suffix'. Is this intentional? If these parameters are needed for the Gemini SDK, they should have equivalent mappings.
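  One way to keep such renames auditable is an explicit legacy-to-Gemini key map, sketched here as a hypothetical: the map below contains only the two renames mentioned in this review, not the repository's real parameter table.

  ```python
  # Hypothetical mapping; extend with the project's full parameter table
  LEGACY_TO_GEMINI = {
      'logprobs': 'response_logprobs',
      'topK': 'top_k',
  }

  def translate_config(old_config):
      # Unknown keys pass through unchanged
      return {LEGACY_TO_GEMINI.get(k, k): v for k, v in old_config.items()}
  ```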
- src/services/commonServices/createConversations.py (286-286) The role check uses 'tools_calls' (plural) but other methods in this file use 'tools_call' (singular). This inconsistency might cause messages with role 'tools_call' to be incorrectly processed:
  ```python
  if role not in {'tools_call', "tools"}:
  ```
- src/services/commonServices/baseService/baseService.py (139-139) The `google.genai.types` module is imported multiple times in different functions. Consider moving this import to the top of the file with the other imports, following the Python convention of keeping all imports at module level.
- src/services/commonServices/Google/geminiCall.py (46-46) The `system_instruction` variable is defined on line 46 but never used in the code. If this is meant to be included in the contents, you should add it.
- src/services/utils/token_calculation.py (53-53) Consider simplifying the usage_metadata initialization to avoid potential issues with None values:
  ```python
  usage_metadata = model_response.get('usage_metadata') or {}
  ```
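  Why the `or {}` matters: `dict.get`'s default only applies when the key is absent, not when the key is present with a `None` value. A minimal demonstration:

  ```python
  model_response = {'usage_metadata': None}

  with_default = model_response.get('usage_metadata', {})  # key exists, so the default is ignored
  with_or = model_response.get('usage_metadata') or {}     # None is falsy, so this yields {}
  ```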
💡 To request another review, post a new comment with "/windsurf-review".
```python
threads.append({'role': 'user', 'content': 'provide the summary of the previous conversation stored in the memory?'})
threads.append({'role': 'assistant', 'content': f'Summary of previous conversations : {memory}' })
contents.append(types.Content(role='user', parts=[types.Part(text='Please Provide the summary of the previous conversation stored in the memory.')]))
contents.append(types.Content(role='model', parts=[types.Part(text='Summary of previous conversations: {memory}')]))
```
The memory summary message is missing proper f-string formatting. The variable memory won't be interpolated correctly.
```diff
- contents.append(types.Content(role='model', parts=[types.Part(text='Summary of previous conversations: {memory}')]))
+ contents.append(types.Content(role='model', parts=[types.Part(text=f'Summary of previous conversations: {memory}')]))
```
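A minimal demonstration of the bug: without the `f` prefix, the braces are kept literally instead of interpolating `memory`.

```python
memory = 'user prefers concise answers'

plain = 'Summary of previous conversations: {memory}'          # literal braces, no substitution
interpolated = f'Summary of previous conversations: {memory}'  # f-string interpolates the variable
```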
```python
if url.lower().endswith('.pdf'):
    parts.append(types.Part.from_url(file_uri=url, mime_type="application/pdf"))
else:
    mime_type = "image/png" if url.lower().endswith(".png") else "jpeg"
```
The mime_type for JPEG images is missing the 'image/' prefix, which is inconsistent with PNG handling and may cause issues with the Gemini API.
```diff
- mime_type = "image/png" if url.lower().endswith(".png") else "jpeg"
+ mime_type = "image/png" if url.lower().endswith(".png") else "image/jpeg"
```
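The corrected selection, sketched as a small helper; only the three extensions visible in this diff (`.pdf`, `.png`, anything-else-as-JPEG) are handled, and the function name is an illustration, not the project's code.

```python
def guess_mime_type(url):
    lowered = url.lower()
    if lowered.endswith('.pdf'):
        return 'application/pdf'
    # Both branches now carry the full 'image/' prefix
    return 'image/png' if lowered.endswith('.png') else 'image/jpeg'
```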
```diff
  "fallback" : response.get('fallback') or False,
  "firstAttemptError" : response.get('firstAttemptError') or '',
- "finish_reason" : finish_reason_mapping(response.get("choices", [{}])[0].get("finish_reason", ""))
+ "finish_reason": finish_reason_mapping(candidates[0].get("finish_reason").value.lower() if candidates and candidates[0].get("finish_reason") else "")
```
The finish_reason handling could cause a runtime error if candidates[0].get("finish_reason") is None. Consider adding a safer check:
```diff
- "finish_reason": finish_reason_mapping(candidates[0].get("finish_reason").value.lower() if candidates and candidates[0].get("finish_reason") else "")
+ "finish_reason": finish_reason_mapping(candidates[0].get("finish_reason").value.lower() if candidates and candidates[0].get("finish_reason") and hasattr(candidates[0].get("finish_reason"), "value") else "")
```
Force-pushed from ef07841 to 9ee8fce.
```python
contents = []
system_instruction = self.configuration['prompt']

contents.extend(conversation)
```
Why use `extend` here?
```python
for image_url in self.image_data:
    user_parts.append(types.Part.from_uri(file_uri=image_url, mime_type="image/png" if image_url.endswith(".png") else "image/jpeg"))
if isinstance(self.image_data, str):
    user_parts.append(types.Part.from_uri(file_uri=image_url, mime_type="image/png" if image_data.endswith(".png") else "image/jpeg"))
```
??? In the `isinstance(self.image_data, str)` branch, `image_url` and `image_data` are not defined; this presumably should use `self.image_data`.
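A hedged guess at what the string branch likely intends, factored into a helper: use `self.image_data` itself, since `image_url` only exists inside the list branch's loop. The returned dict is a stand-in for `types.Part.from_uri`, and the field names are assumptions.

```python
def image_part(image_data):
    # Pick the mime type from the URI suffix, as the list branch does
    mime = 'image/png' if image_data.endswith('.png') else 'image/jpeg'
    return {'file_uri': image_data, 'mime_type': mime}
```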
```python
return {'success': True, 'response': chat_completion.to_dict()}
chat_completion = await client.aio.models.generate_content(
    **config
)
```
Do this in one line.
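The one-line version the reviewer asks for, sketched with a fake client so it is runnable: call the model and serialize in a single `return`. The fake mirrors only the attributes used in the diff (`client.aio.models.generate_content`, `.to_dict()`); it is not the real SDK.

```python
import asyncio

class _FakeResult:
    def to_dict(self):
        return {'ok': True}

class _FakeModels:
    async def generate_content(self, **config):
        return _FakeResult()

class _FakeAio:
    models = _FakeModels()

class _FakeClient:
    aio = _FakeAio()

async def run_model(client, config):
    # Call and serialize in a single expression, as suggested
    return {'success': True, 'response': (await client.aio.models.generate_content(**config)).to_dict()}

result = asyncio.run(run_model(_FakeClient(), {'model': 'gemini'}))
```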
```python
if isinstance(function_response['content'], str):
    try:
        function_response['content'] = json.loads(function_response['content'])
    except:
```
Why parse here?
```python
'top_p': new_config.get('top_p'),
'top_k': new_config.get('top_k'),
'max_output_tokens': new_config.get('max_output_tokens'),
'stop_sequences': new_config.get('stop_sequences'),
```
why?
```python
codes_mapping[part['function_call']['id']] = {
    'name': name,
    'args': args,
    "error": False
```
error
No description provided.