Conversation
Signed-off-by: haic0 <149741444+haic0@users.noreply.github.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request expands the GLM-V documentation to add support for AMD GPUs: it provides guidance on installing vLLM in ROCm environments and outlines the steps and configuration needed to run GLM models on AMD MI300x-series hardware, making these models accessible on a broader range of hardware platforms.
Code Review
This pull request updates the documentation to include instructions for running on AMD GPUs (ROCm). The changes are clear and add valuable information. I've made a few suggestions to improve the consistency and readability of the instructions.
### CUDA

```bash
uv venv
source .venv/bin/activate
uv pip install -U vllm --torch-backend auto  # vllm>=0.12.0 is required
```

### ROCm

```bash
uv venv
source .venv/bin/activate
uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/
```
To improve clarity and avoid repetition, the virtual environment setup commands, which are common to both CUDA and ROCm, should appear only once, before the platform-specific instructions. This makes the guide easier to follow.
Suggested change:

First, create and activate a virtual environment:

```bash
uv venv
source .venv/bin/activate
```

Then, install vLLM for your specific hardware:

### CUDA

```bash
uv pip install -U vllm --torch-backend auto  # vllm>=0.12.0 is required
```

### ROCm

```bash
uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/
```

* vLLM conservatively uses 90% of GPU memory; you can set `--gpu-memory-utilization=0.95` to maximize the KV cache.
* Make sure to follow the command-line instructions to ensure the tool-calling functionality is properly enabled.
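As a rough illustration of what raising `--gpu-memory-utilization` buys, here is a sketch; the 192 GB total is an assumed MI300x HBM size, not something stated in this PR — substitute your GPU's capacity:

```shell
# Back-of-the-envelope: per-GPU memory vLLM may claim at a given utilization.
# TOTAL_GB is an assumption (MI300x has 192 GB HBM3); adjust for your hardware.
TOTAL_GB=192
DEFAULT_BUDGET=$(( TOTAL_GB * 90 / 100 ))  # default --gpu-memory-utilization=0.9
RAISED_BUDGET=$(( TOTAL_GB * 95 / 100 ))   # with --gpu-memory-utilization=0.95
echo "default: ${DEFAULT_BUDGET} GB, raised: ${RAISED_BUDGET} GB"
```

The difference between the two budgets is extra room for the KV cache, which is why raising the flag helps long-context serving when the weights already fit.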
## Running GLM-4.5V / GLM-4.6V with FP8 or BF16 on 4xMI300x/MI325x/MI355x
```
  --mm-encoder-tp-mode data
```

* Please run `pip install "transformers>=5.0.0"` to upgrade before serving.
* Please run `pip install "transformers>=5.0.0"` to upgrade before serving.
* You can set `--max-model-len` to preserve memory; `--max-model-len=65536` is usually good for most scenarios. Note that GLM-4.5V only supports a 64K context length, while GLM-4.6V supports a 128K context length.
* You can set `--max-num-batched-tokens` to balance throughput and latency: higher means higher throughput but also higher latency. `--max-num-batched-tokens=32768` is usually good for prompt-heavy workloads, but you can reduce it to 16K or 8K to lower activation memory usage and decrease latency.
* vLLM conservatively uses 90% of GPU memory; you can set `--gpu-memory-utilization=0.95` to maximize the KV cache.
This list of tips is missing an important point about ensuring tool-calling functionality is enabled, which is present in the CUDA section. For consistency and to provide complete guidance, it should be added here as well.
Suggested change:

* vLLM conservatively uses 90% of GPU memory; you can set `--gpu-memory-utilization=0.95` to maximize the KV cache.
* Make sure to follow the command-line instructions to ensure the tool-calling functionality is properly enabled.
> Note: The vLLM wheel for ROCm requires Python 3.12, ROCm 7.0, and glibc >= 2.35. If your environment does not meet these requirements, please use the Docker-based setup described in the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/#pre-built-images).
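One quick way to check the glibc requirement from that note before installing the wheel is a version comparison with `sort -V`; the `version_ge` helper below is our own sketch, not part of vLLM or ROCm:

```shell
# Sketch: check the glibc >= 2.35 requirement for the ROCm vLLM wheel.
# version_ge A B: succeeds if version A >= version B (GNU sort -V ordering).
version_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

glibc=$(ldd --version | head -n1 | grep -oE '[0-9]+\.[0-9]+$' || echo 0)
if version_ge "$glibc" 2.35; then
  echo "glibc $glibc: OK for the ROCm wheel"
else
  echo "glibc $glibc: use the Docker-based setup instead"
fi
```

Python and ROCm versions can be checked the same way against `python3 --version` and your ROCm install.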
## Running GLM-4.5V / GLM-4.6V with FP8 or BF16 on 4xH100
Remove "on 4xH100" from this heading and add "4xH100" to the `### CUDA` subsection instead.
Run tensor-parallel like this:

### CUDA
* vLLM conservatively uses 90% of GPU memory; you can set `--gpu-memory-utilization=0.95` to maximize the KV cache.
* Make sure to follow the command-line instructions to ensure the tool-calling functionality is properly enabled.
## Running GLM-4.5V / GLM-4.6V with FP8 or BF16 on 4xMI300x/MI325x/MI355x
No need to repeat the header; merge this content with the existing section.
```
  --mm-encoder-tp-mode data \
  --mm-processor-cache-type shm
```
### ROCm (4xMI300x/MI325x/MI355x)

```bash
# Start the server with the FP8 model on 4 GPUs; the model can also be changed to BF16 as zai-org/GLM-4.5V
SAFETENSORS_FAST_GPU=1 \
VLLM_ROCM_USE_AITER=1 \
vllm serve zai-org/GLM-4.5V-FP8 \
  --tensor-parallel-size 4 \
  --tool-call-parser glm45 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --enable-expert-parallel \
  --allowed-local-media-path / \
  --mm-encoder-tp-mode data
```
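Once a server like the one above is up, requests go to vLLM's OpenAI-compatible API (port 8000 is the vLLM default; adjust if you pass `--port`). As a sketch, a chat request body can be built and sanity-checked locally before sending; the prompt text and file path here are illustrative:

```shell
# Sketch: write a chat request body (model name matches the serve command above)
# and validate that it is well-formed JSON before POSTing it to the server.
cat > /tmp/glm_req.json <<'EOF'
{
  "model": "zai-org/GLM-4.5V-FP8",
  "messages": [{"role": "user", "content": "Describe this image in one sentence."}]
}
EOF
python3 -m json.tool < /tmp/glm_req.json > /dev/null && echo "request body is valid JSON"
```

With the server running, send it with `curl -s http://localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d @/tmp/glm_req.json`.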