I'm trying to use the vLLM backend for serving/rollout, but I'm encountering an issue where the service hangs during initialization.
Problem Description
When the AsyncvLLMServer actor is being initialized, the process gets stuck and never proceeds. I see the following key warning message in the logs, which points to a potential deadlock:
Using blocking ray.get inside async actor. This blocks the event loop. Please use await on object ref with asyncio.gather if you want to yield execution to the event loop instead.
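For anyone hitting the same warning, the general failure mode can be reproduced without Ray: a blocking call inside an async method stalls the whole event loop, while awaiting yields control so other coroutines keep running. This is a minimal asyncio sketch of that pattern (the handler names are illustrative, not from the actual AsyncvLLMServer code); `time.sleep` stands in for a blocking `ray.get`, and `await asyncio.sleep` stands in for `await object_ref`.

```python
import asyncio
import time

async def blocking_handler():
    # Stand-in for a blocking ray.get() inside an async actor method:
    # time.sleep blocks the event loop, so concurrent coroutines stall.
    time.sleep(0.2)

async def yielding_handler():
    # Stand-in for `await object_ref`: control returns to the event loop
    # while waiting, so other coroutines continue to make progress.
    await asyncio.sleep(0.2)

async def run_two(handler):
    # Run two handlers concurrently and measure total wall-clock time.
    start = time.monotonic()
    await asyncio.gather(handler(), handler())
    return time.monotonic() - start

blocking_time = asyncio.run(run_two(blocking_handler))
yielding_time = asyncio.run(run_two(yielding_handler))
print(f"blocking: {blocking_time:.2f}s, yielding: {yielding_time:.2f}s")
```

With the blocking handler the two 0.2 s waits run back-to-back (~0.4 s total); with the yielding handler they overlap (~0.2 s total), which is why Ray suggests awaiting the object ref instead of calling `ray.get` inside the async actor.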
By the way, is vllm currently not supported?