First of all, thank you for the great work on this package — it's been incredibly helpful and well-designed.
I’m currently working with stream mode in the OpenAI API and noticed that the response seems to be fully buffered before it is delivered, rather than arriving chunk by chunk as the data becomes available. From what I can tell, this might be related to how the Swoole HTTP client handles responses.
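For reference, here is a minimal sketch (in Python, just to illustrate the idea) of the incremental behavior I'd expect from stream mode: each server-sent-event `data:` payload is yielded as soon as its event is complete, instead of waiting for the full response body. The chunks below are made-up examples, not real API output, and `iter_sse_data` is a hypothetical helper, not part of any package.

```python
def iter_sse_data(chunks):
    """Yield each SSE `data:` payload as soon as a complete event arrives."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # An SSE event is terminated by a blank line ("\n\n").
        while "\n\n" in buffer:
            event, buffer = buffer.split("\n\n", 1)
            for line in event.splitlines():
                if line.startswith("data: "):
                    yield line[len("data: "):]

# Simulated network chunks arriving over time (made-up payloads):
chunks = [
    'data: {"delta": "Hel"}\n\n',
    'data: {"delta": "lo"}\n\ndata: [DONE]\n\n',
]
for payload in iter_sse_data(chunks):
    print(payload)  # each payload should appear as soon as it arrives
```

With true streaming, the consumer sees each payload as it completes; with the buffering I'm observing, everything shows up at once at the end.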
Has anyone encountered this behavior? Is there a known workaround or configuration that could help ensure the response is truly streamed as data becomes available?
Any insights or shared experience would be greatly appreciated — thanks in advance!