@quinneydavid
Issue Fixed
This pull request addresses an issue where the application would attempt to use empty API keys for LLM services, causing runtime errors. Previously, the code only checked whether the environment variables were set (`os.getenv("OPENAI_API_KEY")`) but didn't validate whether they contained actual usable values.
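The failure mode is easy to reproduce. A quick sketch of why a presence-only check passes for unusable keys:

```python
import os

# The variable is *set* but empty -- e.g. a stray `OPENAI_API_KEY=` line
# left in a .env file or shell profile.
os.environ["OPENAI_API_KEY"] = ""

# os.getenv distinguishes "unset" (None) from "set but empty" (""),
# so an `is not None` presence check passes for an unusable key:
print(os.getenv("OPENAI_API_KEY") is None)  # False -- looks "present"

# And a whitespace-only value is even truthy:
os.environ["OPENAI_API_KEY"] = "   "
print(bool(os.getenv("OPENAI_API_KEY")))    # True
```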

Changes Made
The fix implements proper validation and fallback mechanisms in the `LLMService` class:

- Added validation to check whether API keys are empty, `None`, or whitespace-only in all three main methods:
  - `get_artist_recommendations()`
  - `get_track_recommendations()`
  - `generate_playlist_name()`
- Implemented a robust fallback mechanism:
  - When the OpenAI API key is empty or invalid, the service now logs a warning and automatically falls back to using Anthropic
  - Added model-switching logic to ensure the appropriate model provider is used when falling back
- Added a final validation check before API calls to ensure no empty API keys are used, raising a clear error message if no valid keys are available

Testing
The changes have been tested with various environment configurations:

- With a valid OpenAI API key
- With an empty OpenAI API key (falls back to Anthropic)
- With no API keys set (raises an appropriate error)
This fix improves application stability by preventing runtime errors when API keys are technically present in the environment but contain no usable value.
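The three configurations above can be exercised as plain assertions. The `resolve_provider` function below is a self-contained stand-in for the service's key-resolution logic, written for testability (it takes an env dict instead of reading `os.environ`); it is hypothetical, not the PR's test code.

```python
def resolve_provider(env):
    """Hypothetical stand-in for the LLMService key resolution, for testing."""
    if (env.get("OPENAI_API_KEY") or "").strip():
        return "openai"
    if (env.get("ANTHROPIC_API_KEY") or "").strip():
        return "anthropic"
    raise RuntimeError("No valid API key available")


# Configuration 1: valid OpenAI key -> OpenAI is used.
assert resolve_provider({"OPENAI_API_KEY": "sk-test"}) == "openai"

# Configuration 2: empty OpenAI key -> falls back to Anthropic.
assert resolve_provider(
    {"OPENAI_API_KEY": "", "ANTHROPIC_API_KEY": "sk-ant-test"}
) == "anthropic"

# Configuration 3: no keys at all -> a clear error is raised.
try:
    resolve_provider({})
except RuntimeError as err:
    print(err)
```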
