Fix playlist import race condition on large libraries (#127) · Import playlists issue #192
Open
Conversation
This fixes a race condition where playlist imports would fail because the song database query returned stale cached data from a StateFlow before it had updated after song insertion. The issue was particularly noticeable with large libraries (7000+ songs), where the StateFlow emission delay was more pronounced.

Changes:
- Add a getSongsDirect() method to the SongRepository interface that bypasses the StateFlow cache and queries the database directly
- Implement getSongsDirect() in LocalSongRepository to query fresh data from the Room DAO
- Update importPlaylists() to use getSongsDirect() instead of getSongs() so that fresh song data is available when matching playlist songs

This ensures playlist imports can reliably match songs regardless of library size or timing of StateFlow updates.
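A minimal Kotlin sketch of this first approach, with names simplified from the PR (the in-memory SongDao stands in for the real Room DAO, and Song's fields are illustrative):

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.firstOrNull

data class Song(val id: Long, val path: String)

// Stand-in for the Room DAO; the real implementation queries SQLite.
class SongDao {
    private val rows = mutableListOf<Song>()
    fun insertAll(songs: List<Song>) { rows += songs }
    fun getAll(): List<Song> = rows.toList()
}

class LocalSongRepository(private val dao: SongDao) {
    // Reactive cache; in the app it is refreshed asynchronously by Room's
    // InvalidationTracker, which is the source of the emission delay.
    private val songsRelay = MutableStateFlow<List<Song>?>(null)

    // Cached read: may return stale (or null) data right after an insert.
    suspend fun getSongs(): List<Song>? = songsRelay.firstOrNull()

    // First-pass fix: bypass the cache and read the database directly,
    // so importPlaylists() always sees rows that were just inserted.
    fun getSongsDirect(): List<Song> = dao.getAll()
}
```

importPlaylists() would then call getSongsDirect() when matching imported entries against the library, at the cost of a second, parallel query path on the interface (the code smell the follow-up commit removes).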
This improves upon the previous fix by removing the code smell of exposing getSongsDirect() in the repository interface. Instead of adding a parallel query method, we now ensure that insertUpdateAndDelete() properly synchronizes the StateFlow cache before returning. This maintains the clean reactive architecture while guaranteeing that subsequent reads will see the updated data.

Changes:
- Remove getSongsDirect() from the SongRepository interface
- Add cache synchronization to insertUpdateAndDelete() in LocalSongRepository by waiting for the StateFlow to emit updated data
- Revert importPlaylists() to use the standard getSongs() flow

This approach:
- Maintains single responsibility and clean interfaces
- Ensures write operations are truly complete before returning
- Preserves the reactive Flow-based architecture
- Follows the principle of least surprise
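A sketch of the revised approach, assuming a songsRelay StateFlow as described in the PR; the delayed launch simulates Room's asynchronous invalidation callback, and the in-memory list stands in for the database:

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.launch

data class Song(val id: Long)

class LocalSongRepository {
    private val database = mutableListOf<Song>()

    // Starts as null until the first emission arrives from the (simulated)
    // invalidation tracker.
    private val songsRelay = MutableStateFlow<List<Song>?>(null)

    suspend fun insertUpdateAndDelete(inserts: List<Song>) {
        coroutineScope {
            database += inserts
            // Simulated InvalidationTracker: pushes fresh rows into the
            // cache after a lag, as Room does on a background thread.
            launch {
                delay(50)
                songsRelay.value = database.toList()
            }
            // The fix: suspend until the cache has emitted post-write data,
            // so callers reading getSongs() immediately afterwards see it.
            songsRelay.first { it != null }
        }
    }

    fun getSongs(): List<Song>? = songsRelay.value
}
```

One design note: first { it != null } (the predicate named in the PR) distinguishes "never emitted" from "emitted"; a repeat write against an already-populated cache would need a stricter predicate (e.g. comparing against the expected post-write state) to avoid resuming on a stale emission.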
Summary
Fixes #127 - Playlist imports were failing on large libraries due to a race condition between database writes and StateFlow cache updates.
This PR implements a clean architectural solution by ensuring write operations fully complete (including cache synchronization) before returning.
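The race is easy to reproduce in isolation, because firstOrNull() on a StateFlow resolves against the current value immediately rather than waiting for the next emission. A self-contained sketch (the delay stands in for Room's invalidation lag, which grows with library size; file names are illustrative):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.firstOrNull
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Cache last emitted the pre-insert state of the library.
    val songsRelay = MutableStateFlow<List<String>>(emptyList())

    // Simulated InvalidationTracker: the post-insert emission arrives late.
    launch {
        delay(100)
        songsRelay.value = listOf("song1.mp3", "song2.mp3")
    }

    // firstOrNull() returns the CURRENT value, so the importer matches
    // playlist entries against a stale, empty song list and fails.
    val snapshot = songsRelay.firstOrNull()
    println(snapshot)  // prints [] — the stale read behind the bug
}
```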
Root Cause
When importing large music libraries (7000+ songs):
- The StateFlow cache in LocalSongRepository is updated asynchronously via Room's InvalidationTracker
- firstOrNull() on the StateFlow returned stale, pre-insert data before the cache had caught up
Solution
Made write operations synchronous with cache updates:
- Updated insertUpdateAndDelete() in LocalSongRepository to wait for the StateFlow cache to synchronize before returning
- Used songsRelay.first { it != null } to suspend until the cache reflects database changes

Why this approach:
- Maintains single responsibility and clean interfaces
- Ensures write operations are truly complete before returning
- Preserves the reactive Flow-based architecture
Testing
The fix ensures that:
- Once insertUpdateAndDelete() returns, the StateFlow cache is synchronized
- Subsequent calls to getSongs().firstOrNull() return fresh data

Changes
LocalSongRepository.kt:71-84
- Updated insertUpdateAndDelete() to wait for the StateFlow cache to synchronize before returning

No changes to:
- SongRepository interface (maintains clean abstraction)
- MediaImporter logic (no workarounds needed)