Add AMD GPU support for DeepSeek-V3, DeepSeek-R1 (MI300X/MI325X/MI355X + AITER)#278

Open
GoldenGrapeGentleman wants to merge 1 commit into vllm-project:main from GoldenGrapeGentleman:amd-deepseek-v3-r1

Conversation

@GoldenGrapeGentleman

Summary

This PR adds AMD GPU support for DeepSeek-V3 and DeepSeek-R1 on MI300X/MI325X/MI355X GPUs.

Changes

  • Step 1: uv-based vLLM ROCm installation guide
  • Step 2: vLLM server launch command with AITER and AITER_MOE enabled
  • Step 3: Benchmark script
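
Step 1 can be sketched as follows. This is a minimal, hypothetical outline assuming a standard `uv` workflow; the exact ROCm wheel index and pinned versions used in the PR's guide are not reproduced here.

```bash
# Hypothetical sketch of the uv-based vLLM ROCm install (Step 1).
# The actual guide's ROCm-specific wheel index and version pins may differ.
uv venv --python 3.12        # create an isolated virtual environment
source .venv/bin/activate
uv pip install vllm          # plus the ROCm wheel index from the guide
```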

Hardware Tested

| Hardware | Status |
| --- | --- |
| 8x AMD MI300X + AITER | ✅ Verified |
| 8x AMD MI355X + AITER | ✅ Verified |

Related

Closes the AMD GPU support gap originally started in #144.

…X + AITER)

- Add Step 1: uv-based vLLM ROCm installation
- Add Step 2: vLLM server startup with AITER and AITER_MOE enabled
- Add Step 3: benchmark script

Tested and verified on 8x AMD MI300X and 8x AMD MI355X GPUs with AITER.

Signed-off-by: Yuan Yue <yueyuan@amd.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the hardware compatibility for DeepSeek-V3 and DeepSeek-R1 models by integrating official support for AMD MI300X, MI325X, and MI355X GPUs. It provides users with a complete workflow, from environment setup and vLLM server deployment to performance benchmarking, enabling broader accessibility and deployment options for these advanced AI models on AMD infrastructure.

Highlights

  • AMD GPU Support Added: Comprehensive instructions have been added to enable DeepSeek-V3 and DeepSeek-R1 models to run on AMD MI300X/MI325X/MI355X GPUs.
  • vLLM ROCm Installation Guide: A step-by-step guide for installing vLLM with the ROCm backend using 'uv' for AMD GPUs is now included.
  • vLLM Server Launch Command: A sample command for launching the vLLM server with AITER and AITER_MOE enabled for AMD hardware has been provided.
  • Benchmark Script: Instructions and a command for running a benchmark script to test DeepSeek-V3 performance on AMD GPUs have been added.


Changelog
  • DeepSeek/DeepSeek-V3.md
    • Added a new top-level section titled 'AMD GPU Support'.
    • Included detailed steps for installing vLLM with the ROCm backend using 'uv'.
    • Provided a sample command for starting the vLLM server with specific environment variables for AMD GPUs (SAFETENSORS_FAST_GPU, VLLM_USE_TRITON_FLASH_ATTN, VLLM_ROCM_USE_AITER, VLLM_ROCM_USE_AITER_MOE).
    • Added a benchmark script command for evaluating model performance on AMD hardware.
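
Putting the environment variables listed above together with the server arguments suggested later in the review, the Step 2 launch command is implied to look along these lines. This is a sketch, not the PR's exact command; the flags are taken from the review's suggested benchmark invocation.

```bash
# Sketch of the Step 2 server launch on 8x MI300X-class GPUs; flags follow
# the review's suggested command and may not match the PR's exact wording.
SAFETENSORS_FAST_GPU=1 \
VLLM_USE_TRITON_FLASH_ATTN=0 \
VLLM_ROCM_USE_AITER=1 \
VLLM_ROCM_USE_AITER_MOE=1 \
vllm serve deepseek-ai/DeepSeek-V3 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --trust-remote-code
```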
Activity
  • No specific activity (comments, reviews, etc.) has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds documentation for running DeepSeek-V3 and DeepSeek-R1 models on AMD MI300X series GPUs. The changes are well-structured and provide clear installation and server launch instructions. However, I've identified a few areas for improvement, mainly concerning the benchmark instructions which are currently confusing and incomplete. I've also suggested minor formatting changes to improve the document's readability.

Comment thread DeepSeek/DeepSeek-V3.md
Comment on lines +195 to +208
### Step 3: Run Benchmark

Open a new terminal and run the following command to execute the benchmark script.

```bash
vllm bench serve \
--model deepseek-ai/DeepSeek-V3 \
--dataset-name random \
--random-input-len 8000 \
--random-output-len 1000 \
--request-rate 10000 \
--num-prompts 16 \
--ignore-eos
```
Contributor


Severity: high

The instructions for running the benchmark are confusing and the command is incomplete. The vllm bench serve command starts its own server, so the instruction 'Open a new terminal' might confuse users into thinking they need to connect to the server from Step 2. The benchmark command is also missing the necessary environment variables and parallelization arguments from Step 2 to run correctly on AMD GPUs.

I recommend rewriting this section for clarity and correctness. Here is a suggestion:

### Step 3: Run Benchmark

You can benchmark the model using the `vllm bench serve` command. This command will start a server, run the benchmark, and then exit. Note that the environment variables and server arguments from Step 2 must be included for the benchmark to run with the correct configuration.

```bash
SAFETENSORS_FAST_GPU=1 \
VLLM_USE_TRITON_FLASH_ATTN=0 \
VLLM_ROCM_USE_AITER=1 \
VLLM_ROCM_USE_AITER_MOE=1 \
vllm bench serve \
  --model deepseek-ai/DeepSeek-V3 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --trust-remote-code \
  --dataset-name random \
  --random-input-len 8000 \
  --random-output-len 1000 \
  --request-rate 10000 \
  --num-prompts 16 \
  --ignore-eos
```

Comment thread DeepSeek/DeepSeek-V3.md
Comment on lines +160 to +161


Contributor


Severity: medium

These extra blank lines are unnecessary. One blank line is sufficient to separate sections in Markdown.

Comment thread DeepSeek/DeepSeek-V3.md

Recommended approaches by hardware type are:

MI300X/MI325X/MI355X
Contributor


Severity: medium

For better document structure and readability, consider making this line a sub-heading. This will help users quickly identify the hardware section.

Suggested change
MI300X/MI325X/MI355X
#### MI300X/MI325X/MI355X

