⚡️ Speed up method _PartitionerLoader._load_partitioner by 266%
#69
📄 266% (2.66x) speedup for `_PartitionerLoader._load_partitioner` in `unstructured/partition/auto.py`
⏱️ Runtime: 2.33 milliseconds → 635 microseconds (best of 250 runs)
📝 Explanation and details
The optimization adds `@lru_cache(maxsize=128)` to the `dependency_exists` function, providing a 266% speedup by eliminating redundant dependency checks.

**Key optimization:** The original code repeatedly calls `importlib.import_module()` for the same dependency packages during partition loading. Looking at the line profiler results, `dependency_exists` was called 659 times and spent 97.9% of its time (9.33ms out of 9.53ms) in `importlib.import_module()`. The optimized version reduces this to just 1.27ms of total time for dependency checks.

**Why this works:** `importlib.import_module()` is expensive because it performs filesystem operations, module compilation, and import resolution. With caching, subsequent calls for the same dependency name return immediately from memory rather than re-importing. A cache size of 128 is sufficient for typical use cases, where the same few dependencies are checked repeatedly.

**Performance impact by test case:**

**Trade-offs:** A small memory overhead for the cache and a slight performance penalty for first-time dependency checks, but these are negligible compared to the gains in repeated-usage scenarios.
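For reference, the cached helper could look something like the minimal sketch below. This is an illustration only, assuming `dependency_exists` simply attempts an import and reports success; the actual function in the `unstructured` package may differ in signature and error handling.

```python
import importlib
from functools import lru_cache


# Minimal sketch, not the exact unstructured implementation: cache the result
# of each import probe so repeated checks for the same package return from memory.
@lru_cache(maxsize=128)
def dependency_exists(dependency: str) -> bool:
    """Return True if `dependency` can be imported, caching the result."""
    try:
        importlib.import_module(dependency)
    except ImportError:
        return False
    return True
```

Because `lru_cache` keys on the argument, the first check for each package still pays the full import cost; only subsequent checks for the same name hit the cache. If a dependency were installed mid-session, `dependency_exists.cache_clear()` would be needed before the new package is detected.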
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, `git checkout codeflash/optimize-_PartitionerLoader._load_partitioner-mjebngyb` and push.