Current State
- BaseClient (singleton) wraps an ETMSession.
- AsyncBatchRunner.batch_requests runs a list of loosely-typed request dicts concurrently with asyncio.gather and returns a list of ServiceResult.
- A sync wrapper (batch_requests_sync) uses run_coroutine_threadsafe on a private event loop.
- A free function make_batch_requests duplicates the sync path.
- Error handling is broad; request specs are untyped dicts; results are only available after every request completes (see the sketch after this list).
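For context, a rough illustration of the call pattern described above. This is a hypothetical reconstruction: the constructor/accessor names and the request-dict keys are assumptions, not the actual code.

```python
# Hypothetical illustration of the current shape; names and signatures assumed.
client = BaseClient()                    # singleton: every caller shares this state
runner = AsyncBatchRunner(client)

requests = [                             # loosely-typed dicts; nothing validates the keys
    {"method": "GET", "url": "/api/v3/scenarios/1"},
    {"method": "GET", "url": "/api/v3/scenarios/2"},
]

# All-or-nothing: the call returns only after every request has finished,
# handing back a flat list of ServiceResult objects.
results = runner.batch_requests_sync(requests)
```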
Drawbacks
- Singleton pattern reduces flexibility (shared global state).
- Batch API is all-or-nothing: no streaming / incremental consumption.
- Uses private session loop attribute; brittle.
- Request specification is an unvalidated dict → easy to introduce mistakes.
- The free helper make_batch_requests feels out of place.
- No concurrency limiting, progress callbacks, or structured retry/parse strategy.
- Error classification is coarse; no partial yield or cancellation friendliness.
- Hard to extend (no hooks / observers).
Ideal State
- Instantiable (optionally cacheable) client with context manager support (sync + async).
- Typed RequestSpec (dataclass) defining method, url, kwargs, optional id.
- Streaming batch API (e.g. an async iterator using asyncio.as_completed) plus a convenience collector; see the first sketch after this list.
- Optional concurrency limit, progress/observer callbacks, and retry/backoff hooks.
- Clear error classification (auth/client/server/parse) with structured data in ServiceResult; see the second sketch after this list.
- Sync adapter that doesn’t rely on private loop internals (fails fast if misused inside a running loop).
- Integrated logging/tracing; optional response parser injection.
- Deprecation and eventual removal of the free helper in favor of a client.batch(...) method.
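A minimal sketch of how the typed spec, the streaming batch iterator, and the concurrency limit could fit together. All of it is illustrative: ETMClient, the awaitable session.request(...) call, and the exact ServiceResult fields are assumptions, not the final design.

```python
from __future__ import annotations

import asyncio
from dataclasses import dataclass, field
from typing import Any, AsyncIterator, Optional


@dataclass(frozen=True)
class RequestSpec:
    """Typed request specification replacing the loose dicts."""
    method: str
    url: str
    kwargs: dict[str, Any] = field(default_factory=dict)
    id: Optional[str] = None


@dataclass
class ServiceResult:
    """Result carrier; `spec` ties each result back to its request."""
    spec: RequestSpec
    ok: bool
    data: Any = None
    error: Optional[Exception] = None


class ETMClient:
    """Instantiable client (no singleton) with async context-manager support."""

    def __init__(self, session, max_concurrency: int = 10):
        self._session = session                       # injected dependency
        self._sem = asyncio.Semaphore(max_concurrency)

    async def __aenter__(self) -> "ETMClient":
        return self

    async def __aexit__(self, *exc) -> None:
        await self._session.close()

    async def _one(self, spec: RequestSpec) -> ServiceResult:
        async with self._sem:                         # optional concurrency limit
            try:
                resp = await self._session.request(spec.method, spec.url, **spec.kwargs)
                return ServiceResult(spec, ok=True, data=resp)
            except Exception as exc:                  # retry/parse hooks would slot in here
                return ServiceResult(spec, ok=False, error=exc)

    async def batch(self, specs: list[RequestSpec]) -> AsyncIterator[ServiceResult]:
        """Streaming batch: yield each result as soon as it completes."""
        tasks = [asyncio.ensure_future(self._one(s)) for s in specs]
        try:
            for fut in asyncio.as_completed(tasks):
                yield await fut
        finally:                                      # cancellation friendliness
            for task in tasks:
                task.cancel()

    async def batch_all(self, specs: list[RequestSpec]) -> list[ServiceResult]:
        """Convenience collector over the streaming API."""
        return [result async for result in self.batch(specs)]
```

A caller gets incremental consumption with `async for result in client.batch(specs): ...`, while `batch_all` keeps the old collect-everything behaviour for callers that want it; progress callbacks and retry/backoff hooks would naturally attach around `_one`.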
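For the error-classification and sync-adapter points, one possible shape, again with the caveat that the ErrorKind names, the status-code cutoffs, and the run_sync helper are illustrative assumptions rather than settled design:

```python
import asyncio
import enum
from typing import Optional


class ErrorKind(enum.Enum):
    AUTH = "auth"
    CLIENT = "client"
    SERVER = "server"
    PARSE = "parse"


def classify(status: Optional[int], parse_failed: bool = False) -> Optional[ErrorKind]:
    """Map an HTTP status (or a parse failure) onto a coarse error class
    that ServiceResult can carry as structured data."""
    if parse_failed:
        return ErrorKind.PARSE
    if status is None:
        return None
    if status in (401, 403):
        return ErrorKind.AUTH
    if 400 <= status < 500:
        return ErrorKind.CLIENT
    if status >= 500:
        return ErrorKind.SERVER
    return None


def run_sync(coro):
    """Sync adapter: run a coroutine on its own loop, and fail fast instead of
    deadlocking when called from inside an already-running event loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)   # no loop running: safe to own one
    raise RuntimeError(
        "run_sync() called from inside a running event loop; "
        "await the coroutine directly instead."
    )
```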