diff --git a/inference/a4/single-host-serving/tensorrt-llm/README.md b/inference/a4/single-host-serving/tensorrt-llm/README.md
new file mode 100644
index 00000000..8c884dd7
--- /dev/null
+++ b/inference/a4/single-host-serving/tensorrt-llm/README.md
@@ -0,0 +1,402 @@
+# Single Host Model Serving with NVIDIA TensorRT-LLM (TRT-LLM) on A4 GKE Node Pool
+
+This document outlines the steps to serve and benchmark various Large Language Models (LLMs) using the [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) framework on a single [A4 GKE Node pool](https://cloud.google.com/kubernetes-engine).
+
+This guide walks you through setting up the necessary cloud infrastructure, configuring your environment, and deploying a high-performance LLM for inference.
+
+
+## Table of Contents
+
+* [1. Test Environment](#test-environment)
+* [2. High-Level Flow](#architecture)
+* [3. Environment Setup (One-Time)](#environment-setup)
+ * [3.1. Clone the Repository](#clone-repo)
+ * [3.2. Configure Environment Variables](#configure-vars)
+ * [3.3. Connect to your GKE Cluster](#connect-cluster)
+ * [3.4. Get Hugging Face Token](#get-hf-token)
+ * [3.5. Create Hugging Face Kubernetes Secret](#setup-hf-secret)
+* [4. Run the Recipe](#run-the-recipe)
+ * [4.1. Supported Models](#supported-models)
+ * [4.2. Deploy and Benchmark a Model](#deploy-model)
+* [5. Monitoring and Troubleshooting](#monitoring)
+ * [5.1. Check Deployment Status](#check-status)
+ * [5.2. View Logs](#view-logs)
+* [6. Cleanup](#cleanup)
+
+
+## 1. Test Environment
+
+[Back to Top](#table-of-contents)
+
+The recipe uses the following setup:
+
+* **Orchestration**: [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine)
+* **Deployment Configuration**: A [Helm chart](https://helm.sh/) is used to configure and deploy a [Kubernetes Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). This deployment encapsulates the inference of the target LLM using the TensorRT-LLM framework.
+
+This recipe has been optimized for and tested with the following configuration:
+
+* **GKE Cluster**:
+ * A [regional standard cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/configuration-overview) version: `1.33.4-gke.1036000` or later.
+ * A GPU node pool with 1 [a4-highgpu-8g](https://docs.cloud.google.com/compute/docs/accelerator-optimized-machines#a4-vms) machine.
+ * [Workload Identity Federation for GKE](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) enabled.
+ * [Cloud Storage FUSE CSI driver for GKE](https://cloud.google.com/kubernetes-engine/docs/concepts/cloud-storage-fuse-csi-driver) enabled.
+ * [DCGM metrics](https://cloud.google.com/kubernetes-engine/docs/how-to/dcgm-metrics) enabled.
+ * [Kueue](https://kueue.sigs.k8s.io/docs/reference/kueue.v1beta1/) and [JobSet](https://jobset.sigs.k8s.io/docs/overview/) APIs installed.
+ * Kueue configured to support [Topology Aware Scheduling](https://kueue.sigs.k8s.io/docs/concepts/topology_aware_scheduling/).
+* A regional Google Cloud Storage (GCS) bucket to store logs generated by the recipe runs.
+
+> [!IMPORTANT]
+> To prepare the required environment, see the [GKE environment setup guide](../../../../docs/configuring-environment-gke-a4.md).
+> Provisioning a new GKE cluster is a long-running operation and can take **20-30 minutes**.
+
+
+## 2. High-Level Flow
+
+[Back to Top](#table-of-contents)
+
+Here is a simplified diagram of the flow that we follow in this recipe:
+
+```mermaid
+---
+config:
+ layout: dagre
+---
+flowchart TD
+ subgraph workstation["Client Workstation"]
+ T["Cluster Toolkit"]
+ B("Kubernetes API")
+ A["helm install"]
+ end
+ subgraph huggingface["Hugging Face Hub"]
+ I["Model Weights"]
+ end
+ subgraph gke["GKE Cluster (A4)"]
+ C["Deployment"]
+ D["Pod"]
+ E["TensorRT-LLM container"]
+ F["Service"]
+ end
+ subgraph storage["Cloud Storage"]
+ J["Bucket"]
+ end
+
+ %% Logical/actual flow
+ T -- Create Cluster --> gke
+ A --> B
+ B --> C & F
+ C --> D
+ D --> E
+ F --> C
+ E -- Downloads at runtime --> I
+ E -- Write logs --> J
+
+
+ %% Layout control
+ gke
+```
+
+* **helm:** A package manager for Kubernetes to define, install, and upgrade applications. It's used here to configure and deploy the Kubernetes Deployment.
+* **Deployment:** Manages the lifecycle of your model server pod, ensuring it stays running.
+* **Service:** Provides a stable network endpoint (a DNS name and IP address) to access your model server.
+* **Pod:** The smallest deployable unit in Kubernetes. The TensorRT-LLM container runs inside this pod on a GPU-enabled node.
+* **Cloud Storage:** A Cloud Storage bucket to store benchmark logs and other artifacts.
+
+
+## 3. Environment Setup (One-Time)
+
+[Back to Top](#table-of-contents)
+
+First, you'll configure your local environment. These steps are required once before you can deploy any models.
+
+
+### 3.1. Clone the Repository
+
+```bash
+git clone https://github.com/ai-hypercomputer/gpu-recipes.git
+cd gpu-recipes
+export REPO_ROOT=$(pwd)
+export RECIPE_ROOT=$REPO_ROOT/inference/a4/single-host-serving/tensorrt-llm
+```
+
+
+### 3.2. Configure Environment Variables
+
+This is the most critical step. These variables are used in subsequent commands to target the correct resources.
+
+```bash
+export PROJECT_ID=
+export CLUSTER_REGION=
+export CLUSTER_NAME=
+export KUEUE_NAME=
+export GCS_BUCKET=
+export TRTLLM_VERSION=1.3.0rc5
+
+# Set the project for gcloud commands
+gcloud config set project $PROJECT_ID
+```
+
+Replace the following values:
+
+| Variable | Description | Example |
+| --------------------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------- |
+| `PROJECT_ID` | Your Google Cloud Project ID. | `gcp-project-12345` |
+| `CLUSTER_REGION` | The GCP region where your GKE cluster is located. | `us-central1` |
+| `CLUSTER_NAME` | The name of your GKE cluster. | `a4-cluster` |
+| `KUEUE_NAME` | The name of the Kueue local queue. The default queue created by the cluster toolkit is `a4`. Verify the name in your cluster. | `a4` |
+| `GCS_BUCKET` | Name of your GCS bucket (do not include `gs://`). | `my-benchmark-logs-bucket` |
+| `TRTLLM_VERSION` | The tag/version of the TensorRT-LLM release Docker image. Other versions are listed at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release | `1.3.0rc5` |
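+
+Before continuing, you can optionally confirm that every required variable is set. A minimal bash sketch (not part of the recipe itself):
+
+```bash
+# Fail fast if any required variable is empty
+for var in PROJECT_ID CLUSTER_REGION CLUSTER_NAME KUEUE_NAME GCS_BUCKET TRTLLM_VERSION; do
+  [[ -n "${!var}" ]] || echo "ERROR: $var is not set"
+done
+```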
+
+
+
+### 3.3. Connect to your GKE Cluster
+
+Fetch credentials for `kubectl` to communicate with your cluster.
+
+```bash
+gcloud container clusters get-credentials $CLUSTER_NAME --region $CLUSTER_REGION
+```
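+
+To verify that the credentials work, you can list the cluster nodes and confirm that the Kueue local queue referenced by `KUEUE_NAME` exists. This assumes the queue lives in the default namespace, which is where this recipe runs:
+
+```bash
+# Confirm the cluster is reachable and the GPU node pool is visible
+kubectl get nodes
+
+# Confirm the Kueue local queue exists
+kubectl get localqueue ${KUEUE_NAME}
+```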
+
+
+### 3.4. Get Hugging Face token
+
+To access models through Hugging Face, you'll need a Hugging Face token.
+ 1. Create a [Hugging Face account](https://huggingface.co/) if you don't have one.
+ 2. For **gated models** like Llama 4, ensure you have requested and been granted access on Hugging Face before proceeding.
+ 3. Generate an Access Token: Go to **Your Profile > Settings > Access Tokens**.
+ 4. Select **New Token**.
+ 5. Specify a Name and a Role of at least `Read`.
+ 6. Select **Generate a token**.
+ 7. Copy the generated token to your clipboard. You'll use this later.
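+
+Optionally, you can sanity-check the token before creating the Kubernetes Secret in the next step. This sketch uses the public Hugging Face `whoami-v2` endpoint; replace `<your-token>` with the token you just copied:
+
+```bash
+# Prints your Hugging Face account details; an "Invalid credentials" error means the token is wrong
+curl -s -H "Authorization: Bearer <your-token>" https://huggingface.co/api/whoami-v2
+```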
+
+
+
+### 3.5. Create Hugging Face Kubernetes Secret
+
+Create a Kubernetes Secret with your Hugging Face token to enable the pod to download model checkpoints from Hugging Face.
+
+```bash
+# Paste your Hugging Face token here
+export HF_TOKEN=
+
+kubectl create secret generic hf-secret \
+--from-literal=hf_api_token=${HF_TOKEN} \
+--dry-run=client -o yaml | kubectl apply -f -
+```
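+
+To confirm the secret was created with the expected key, you can decode it locally and compare it with the token you generated:
+
+```bash
+# Prints the stored token value
+kubectl get secret hf-secret -o jsonpath='{.data.hf_api_token}' | base64 --decode
+```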
+
+
+## 4. Run the recipe
+
+[Back to Top](#table-of-contents)
+
+> [!NOTE]
+> After running the recipe with `helm install`, it can take **up to 30 minutes** for the deployment to become fully available. This is because the GKE node must first pull the Docker image and then download the model weights from Hugging Face.
+
+
+### 4.1. Supported Models
+
+[Back to Top](#table-of-contents)
+
+This recipe supports the following models. You can easily swap between them by changing the environment variables in the next step.
+
+Benchmarking these models with TRT-LLM has been tested and validated on A4 GKE nodes only for specific combinations of TP, PP, EP, number of GPUs, input and output sequence lengths, precision, and related settings.
+
+The example model configuration YAML files included in this repo cover only one such combination of parallelism settings, chosen for benchmarking purposes. Adjust the input and output lengths in `gpu-recipes/inference/a4/single-host-serving/tensorrt-llm/values.yaml` to match the model and its configuration.
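+
+For example, to benchmark a longer output sequence you could edit the `isl`/`osl` values under `workload.benchmarks.experiments` in `values.yaml`. The sketch below assumes [yq v4](https://github.com/mikefarah/yq) is installed; editing the file by hand works just as well:
+
+```bash
+# Illustrative values only; choose lengths appropriate for your model
+yq -i '.workload.benchmarks.experiments[0].isl = 1024 | .workload.benchmarks.experiments[0].osl = 4096' \
+  $RECIPE_ROOT/values.yaml
+```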
+
+| Model Name | Hugging Face ID | Configuration File | Release Name Suffix |
+| :--- | :--- | :--- | :--- |
+| **DeepSeek-R1 671B** | `nvidia/DeepSeek-R1-NVFP4-v2` | `deepseek-r1-nvfp4.yaml` | `deepseek-r1` |
+| **Qwen 3 235B A22B FP4** | `nvidia/Qwen3-235B-A22B-NVFP4` | `qwen3-235b-a22b-nvfp4.yaml` | `qwen3-235b-a22b` |
+| **Qwen 3 32B** | `Qwen/Qwen3-32B` | `qwen3-32b.yaml` | `qwen3-32b` |
+
+> [!TIP]
+> **DeepSeek-R1 671B** uses Nvidia's pre-quantized FP4 checkpoint. For more information, see the [Hugging Face model card](https://huggingface.co/nvidia/DeepSeek-R1-NVFP4-v2).
+
+> [!TIP]
+> You can use the [NVIDIA Model Optimizer](https://github.com/NVIDIA/Model-Optimizer/tree/main/examples/llm_ptq) to quantize these models to FP8 or NVFP4 for improved performance.
+
+
+### 4.2. Deploy and Benchmark a Model
+
+[Back to Top](#table-of-contents)
+
+The recipe uses [`trtllm-bench`](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/legacy/performance/perf-benchmarking.md), a command-line tool from NVIDIA for benchmarking the performance of a TensorRT-LLM engine.
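+
+Under the hood, the launcher (`src/launchers/trtllm-launcher.sh`) assembles the `trtllm-bench` invocation from the model's configuration file, so you never run it by hand. A simplified, illustrative sketch for the DeepSeek-R1 settings (the dataset file is generated inside the pod at runtime):
+
+```bash
+# Illustrative only; the launcher builds this command automatically
+trtllm-bench \
+  --model nvidia/DeepSeek-R1-NVFP4-v2 \
+  --model_path /ssd/nvidia/DeepSeek-R1-NVFP4-v2 \
+  throughput \
+  --dataset /ssd/token-norm-dist_DeepSeek-R1-NVFP4-v2_1024_4096_tp4.json \
+  --tp 4 --ep 4 --pp 1
+```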
+
+1. **Configure model-specific variables.** Choose a model from the [table above](#supported-models) and set the variables:
+
+ ```bash
+ # Example for DeepSeek-R1 NVFP4
+ export HF_MODEL_ID="nvidia/DeepSeek-R1-NVFP4-v2"
+ export CONFIG_FILE="deepseek-r1-nvfp4.yaml"
+ export RELEASE_NAME="$USER-serving-deepseek-r1"
+ ```
+
+2. **Install the helm chart:**
+
+ ```bash
+ cd $RECIPE_ROOT
+ helm install -f values.yaml \
+ --set-file workload_launcher=$REPO_ROOT/src/launchers/trtllm-launcher.sh \
+    --set-file serving_config=$REPO_ROOT/src/frameworks/a4/trtllm-configs/${CONFIG_FILE} \
+ --set queue=${KUEUE_NAME} \
+ --set "volumes.gcsMounts[0].bucketName=${GCS_BUCKET}" \
+ --set workload.model.name=${HF_MODEL_ID} \
+ --set workload.image=nvcr.io/nvidia/tensorrt-llm/release:${TRTLLM_VERSION} \
+ --set workload.framework=trtllm \
+ ${RELEASE_NAME} \
+    $REPO_ROOT/src/helm-charts/a4/inference-templates/deployment
+ ```
+
+3. **Check the deployment status:**
+
+ ```bash
+ kubectl get deployment/${RELEASE_NAME}
+ ```
+
+ Wait until the `READY` column shows `1/1`. See the [Monitoring and Troubleshooting](#monitoring) section to view the deployment logs.
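+
+    Alternatively, to block until the deployment is ready (or time out), you can use `kubectl wait`:
+
+    ```bash
+    # Waits up to 30 minutes for the rollout to complete
+    kubectl wait --for=condition=Available deployment/${RELEASE_NAME} --timeout=30m
+    ```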
+
+
+## 5. Monitoring and Troubleshooting
+
+[Back to Top](#table-of-contents)
+
+After the model is deployed via Helm as described in the sections [above](#run-the-recipe), use the following steps to monitor the deployment and interact with the model. Replace the deployment and service names with the appropriate names from the model-specific deployment instructions (e.g., `$USER-serving-deepseek-r1` and `$USER-serving-deepseek-r1-svc`).
+
+
+
+### 5.1. Check Deployment Status
+
+Check the status of your deployment. Replace the name if you deployed a different model.
+
+```bash
+# Example for DeepSeek-R1 671B
+kubectl get deployment/$USER-serving-deepseek-r1
+```
+
+Wait until the `READY` column shows `1/1`. If it shows `0/1`, the pod is still starting up.
+
+> [!NOTE]
+> In the GKE UI on Cloud Console, you might see a status of "Does not have minimum availability" during startup. This is normal and will resolve once the pod is ready.
+
+
+### 5.2. View Logs
+
+To see the logs from the TRTLLM server (useful for debugging), use the `-f` flag to follow the log stream:
+
+```bash
+kubectl logs -f deployment/$USER-serving-deepseek-r1
+```
+
+You should see logs showing the model being prepared and then the throughput benchmark running, similar to the following:
+
+```bash
+Running benchmark for nvidia/DeepSeek-R1-NVFP4-v2 with ISL=1024, OSL=4096, TP=4, EP=4, PP=1
+
+===========================================================
+= PYTORCH BACKEND
+===========================================================
+Model: nvidia/DeepSeek-R1-NVFP4-v2
+Model Path: /ssd/nvidia/DeepSeek-R1-NVFP4-v2
+Revision: N/A
+TensorRT LLM Version: 1.2
+Dtype: bfloat16
+KV Cache Dtype: FP8
+Quantization: NVFP4
+
+===========================================================
+= MACHINE DETAILS
+===========================================================
+NVIDIA B200, memory 178.35 GB, 4.00 GHz
+
+===========================================================
+= REQUEST DETAILS
+===========================================================
+Number of requests: 1000
+Number of concurrent requests: 752.9244
+Average Input Length (tokens): 1024.0000
+Average Output Length (tokens): 4096.0000
+===========================================================
+= WORLD + RUNTIME INFORMATION
+===========================================================
+TP Size: 4
+PP Size: 1
+EP Size: 4
+Max Runtime Batch Size: 128
+Max Runtime Tokens: 2048
+Scheduling Policy: GUARANTEED_NO_EVICT
+KV Memory Percentage: 85.00%
+Issue Rate (req/sec): 8.6889E+13
+
+===========================================================
+= PERFORMANCE OVERVIEW
+===========================================================
+Request Throughput (req/sec): X.XX
+Total Output Throughput (tokens/sec): X.XX
+Total Token Throughput (tokens/sec): X.XX
+Total Latency (ms): X.XX
+Average request latency (ms): X.XX
+Per User Output Throughput [w/ ctx] (tps/user): X.XX
+Per GPU Output Throughput (tps/gpu): X.XX
+
+-- Request Latency Breakdown (ms) -----------------------
+
+[Latency] P50 : X.XX
+[Latency] P90 : X.XX
+[Latency] P95 : X.XX
+[Latency] P99 : X.XX
+[Latency] MINIMUM: X.XX
+[Latency] MAXIMUM: X.XX
+[Latency] AVERAGE: X.XX
+
+===========================================================
+= DATASET DETAILS
+===========================================================
+Dataset Path: /ssd/token-norm-dist_DeepSeek-R1-NVFP4-v2_1024_4096_tp4.json
+Number of Sequences: 1000
+
+-- Percentiles statistics ---------------------------------
+
+ Input Output Seq. Length
+-----------------------------------------------------------
+MIN: 1024.0000 4096.0000 5120.0000
+MAX: 1024.0000 4096.0000 5120.0000
+AVG: 1024.0000 4096.0000 5120.0000
+P50: 1024.0000 4096.0000 5120.0000
+P90: 1024.0000 4096.0000 5120.0000
+P95: 1024.0000 4096.0000 5120.0000
+P99: 1024.0000 4096.0000 5120.0000
+===========================================================
+```
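+
+Once the benchmark has finished, you can pull the headline numbers out of the log stream without scrolling, for example:
+
+```bash
+# Filter the throughput and latency lines from the benchmark output
+kubectl logs deployment/$USER-serving-deepseek-r1 | grep -E 'Throughput|[Ll]atency'
+```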
+
+
+## 6. Cleanup
+
+To avoid incurring further charges, clean up the resources you created.
+
+1. **Uninstall the Helm Release:**
+
+ First, list your releases to get the deployed models:
+
+ ```bash
+ # list deployed models
+ helm list --filter $USER-serving-
+ ```
+
+ Then, uninstall the desired release:
+
+    ```bash
+    # uninstall the deployed model
+    helm uninstall <release-name>
+    ```
+
+    Replace `<release-name>` with one of the release names listed by the previous command.
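+
+    To remove every release created from this recipe at once, a sketch combining `helm list -q` with `xargs`:
+
+    ```bash
+    # Uninstall all releases whose names start with "$USER-serving-"
+    helm list --filter "^$USER-serving-" -q | xargs -r -n1 helm uninstall
+    ```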
+
+2. **Delete the Kubernetes Secret:**
+
+ ```bash
+ kubectl delete secret hf-secret --ignore-not-found=true
+ ```
+
+3. (Optional) Clean up files in your GCS bucket if benchmarking was performed.
+4. (Optional) Delete the provisioned [test environment](#test-environment), including the GKE cluster.
\ No newline at end of file
diff --git a/inference/a4/single-host-serving/tensorrt-llm/values.yaml b/inference/a4/single-host-serving/tensorrt-llm/values.yaml
new file mode 100644
index 00000000..114b54cf
--- /dev/null
+++ b/inference/a4/single-host-serving/tensorrt-llm/values.yaml
@@ -0,0 +1,67 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+queue:
+
+dwsSettings:
+ maxRunDurationSeconds:
+
+huggingface:
+ secretName: hf-secret
+ secretData:
+ token: "hf_api_token"
+
+volumes:
+ gcsVolumes: true
+ ssdMountPath: "/ssd"
+ gcsMounts:
+ - bucketName:
+ mountPath: "/gcs"
+
+service:
+ type: ClusterIP
+ ports:
+ http: 8000
+
+workload:
+ model:
+ name:
+ gpus: 8
+ image:
+ framework:
+ configFile: serving-args.yaml
+ configPath: /workload/configs
+ envs:
+ - name: HF_HUB_ENABLE_HF_TRANSFER
+ value: "1"
+ - name: LAUNCHER_SCRIPT
+ value: "/workload/launcher/launch-workload.sh"
+ - name: SERVER_ARGS_FILE
+ value: "/workload/configs/serving-args.yaml"
+ - name: HF_HOME
+ value: "/ssd"
+ - name: LD_LIBRARY_PATH
+ value: "/usr/local/nvidia/lib64:/usr/local/lib/"
+ benchmarks:
+ experiments:
+ - isl: 1024
+ osl: 4096
+ num_requests: 1000
+
+network:
+ subnetworks[]:
+ gibVersion: us-docker.pkg.dev/gce-ai-infra/gpudirect-gib/nccl-plugin-gib:v1.0.5
+ ncclSettings:
+ - name: NCCL_DEBUG
+ value: "VERSION"
\ No newline at end of file
diff --git a/inference/a4x/single-host-serving/tensorrt-llm/README.md b/inference/a4x/single-host-serving/tensorrt-llm/README.md
index 8bb4cb2d..1abec421 100644
--- a/inference/a4x/single-host-serving/tensorrt-llm/README.md
+++ b/inference/a4x/single-host-serving/tensorrt-llm/README.md
@@ -129,7 +129,7 @@ export CLUSTER_REGION=
export CLUSTER_NAME=
export KUEUE_NAME=
export GCS_BUCKET=
-export TRTLLM_VERSION=1.2.0rc2
+export TRTLLM_VERSION=1.3.0rc5
# Set the project for gcloud commands
gcloud config set project $PROJECT_ID
@@ -199,9 +199,15 @@ kubectl create secret generic hf-secret \
This recipe supports the following models. You can easily swap between them by changing the environment variables in the next step.
+Benchmarking these models with TRT-LLM has been tested and validated on A4X GKE nodes only for specific combinations of TP, PP, EP, number of GPUs, input and output sequence lengths, precision, and related settings.
+
+The example model configuration YAML files included in this repo cover only one such combination of parallelism settings, chosen for benchmarking purposes. Adjust the input and output lengths in `gpu-recipes/inference/a4x/single-host-serving/tensorrt-llm/values.yaml` to match the model and its configuration.
+
| Model Name | Hugging Face ID | Configuration File | Release Name Suffix |
| :--- | :--- | :--- | :--- |
-| **DeepSeek-R1 671B** | `nvidia/DeepSeek-R1-NVFP4-v2` | `deepseek-r1-nvfp4.yaml` | `deepseek-r1-model` |
+| **DeepSeek-R1 671B** | `nvidia/DeepSeek-R1-NVFP4-v2` | `deepseek-r1-nvfp4.yaml` | `deepseek-r1` |
+| **Llama 3.1 405B NVFP4** | `nvidia/Llama-3.1-405B-Instruct-NVFP4` | `llama-3.1-405b.yaml` | `llama-3-1-405b-nvfp4` |
+| **Llama 3.1 405B FP8** | `meta-llama/Llama-3.1-405B-Instruct-FP8` | `llama-3.1-405b.yaml` | `llama-3-1-405b-fp8` |
| **Llama 3.1 70B** | `meta-llama/Llama-3.1-70B-Instruct` | `llama-3.1-70b.yaml` | `llama-3-1-70b` |
| **Llama 3.1 8B** | `meta-llama/Llama-3.1-8B-Instruct` | `llama-3.1-8b.yaml` | `llama-3-1-8b` |
| **Qwen 3 32B** | `Qwen/Qwen3-32B` | `qwen3-32b.yaml` | `qwen3-32b` |
@@ -223,10 +229,10 @@ The recipe uses [`trtllm-bench`](https://github.com/NVIDIA/TensorRT-LLM/blob/mai
1. **Configure model-specific variables.** Choose a model from the [table above](#supported-models) and set the variables:
```bash
- # Example for Llama 3.1 70B
- export HF_MODEL_ID="meta-llama/Llama-3.1-70B-Instruct"
- export CONFIG_FILE="llama-3.1-70b.yaml"
- export RELEASE_NAME="$USER-serving-llama-3-1-70b"
+ # Example for DeepSeek-R1 NVFP4
+ export HF_MODEL_ID="nvidia/DeepSeek-R1-NVFP4-v2"
+ export CONFIG_FILE="deepseek-r1-nvfp4.yaml"
+ export RELEASE_NAME="$USER-serving-deepseek-r1"
```
2. **Install the helm chart:**
@@ -258,7 +264,7 @@ The recipe uses [`trtllm-bench`](https://github.com/NVIDIA/TensorRT-LLM/blob/mai
[Back to Top](#table-of-contents)
-After the model is deployed via Helm as described in the sections [above](#run-the-recipe), use the following steps to monitor the deployment and interact with the model. Replace `` and `` with the appropriate names from the model-specific deployment instructions (e.g., `$USER-serving-deepseek-r1-model` and `$USER-serving-deepseek-r1-model-svc`).
+After the model is deployed via Helm as described in the sections [above](#run-the-recipe), use the following steps to monitor the deployment and interact with the model. Replace the deployment and service names with the appropriate names from the model-specific deployment instructions (e.g., `$USER-serving-deepseek-r1` and `$USER-serving-deepseek-r1-svc`).
@@ -268,7 +274,7 @@ Check the status of your deployment. Replace the name if you deployed a differen
```bash
# Example for DeepSeek-R1 671B
-kubectl get deployment/$USER-serving-deepseek-r1-model
+kubectl get deployment/$USER-serving-deepseek-r1
```
Wait until the `READY` column shows `1/1`. If it shows `0/1`, the pod is still starting up.
@@ -282,7 +288,7 @@ Wait until the `READY` column shows `1/1`. If it shows `0/1`, the pod is still s
To see the logs from the TRTLLM server (useful for debugging), use the `-f` flag to follow the log stream:
```bash
-kubectl logs -f deployment/$USER-serving-deepseek-r1-model
+kubectl logs -f deployment/$USER-serving-deepseek-r1
```
You should see logs indicating preparing the model, and then running the throughput benchmark test, similar to this:
diff --git a/inference/a4x/single-host-serving/tensorrt-llm/values.yaml b/inference/a4x/single-host-serving/tensorrt-llm/values.yaml
index 2560ff83..f46116a5 100644
--- a/inference/a4x/single-host-serving/tensorrt-llm/values.yaml
+++ b/inference/a4x/single-host-serving/tensorrt-llm/values.yaml
@@ -51,8 +51,8 @@ workload:
value: "/workload/configs/serving-args.yaml"
benchmarks:
experiments:
- - isl: 128
- osl: 128
+ - isl: 1024
+ osl: 1024
num_requests: 1000
network:
diff --git a/src/frameworks/a4/trtllm-configs/deepseek-r1-nvfp4.yaml b/src/frameworks/a4/trtllm-configs/deepseek-r1-nvfp4.yaml
new file mode 100644
index 00000000..5ebdf9d4
--- /dev/null
+++ b/src/frameworks/a4/trtllm-configs/deepseek-r1-nvfp4.yaml
@@ -0,0 +1,35 @@
+tp_size: 4
+ep_size: 4
+pp_size: 1
+backend: pytorch
+kv_cache_free_gpu_mem_fraction: 0.85
+llm_api_args:
+ cuda_graph_config:
+ batch_sizes:
+ - 1
+ - 2
+ - 4
+ - 8
+ - 16
+ - 20
+ - 24
+ - 32
+ - 64
+ - 96
+ - 128
+ - 160
+ - 192
+ - 256
+ - 320
+ - 384
+ - 512
+ enable_padding: true
+ enable_attention_dp: true
+ enable_chunked_prefill: true
+ kv_cache_config:
+ dtype: auto
+ enable_block_reuse: false
+ free_gpu_memory_fraction: 0.85
+ moe_config:
+ backend: CUTLASS
+ print_iter_log: true
\ No newline at end of file
diff --git a/src/frameworks/a4/trtllm-configs/qwen3-235b-a22b-nvfp4.yaml b/src/frameworks/a4/trtllm-configs/qwen3-235b-a22b-nvfp4.yaml
new file mode 100644
index 00000000..2307321a
--- /dev/null
+++ b/src/frameworks/a4/trtllm-configs/qwen3-235b-a22b-nvfp4.yaml
@@ -0,0 +1,4 @@
+tp_size: 1
+pp_size: 1
+backend: pytorch
+kv_cache_free_gpu_mem_fraction: 0.90
\ No newline at end of file
diff --git a/src/frameworks/a4/trtllm-configs/qwen3-32b.yaml b/src/frameworks/a4/trtllm-configs/qwen3-32b.yaml
new file mode 100644
index 00000000..2307321a
--- /dev/null
+++ b/src/frameworks/a4/trtllm-configs/qwen3-32b.yaml
@@ -0,0 +1,4 @@
+tp_size: 1
+pp_size: 1
+backend: pytorch
+kv_cache_free_gpu_mem_fraction: 0.90
\ No newline at end of file
diff --git a/src/frameworks/a4x/trtllm-configs/llama-3-1-405b.yaml b/src/frameworks/a4x/trtllm-configs/llama-3-1-405b.yaml
new file mode 100755
index 00000000..206a27fd
--- /dev/null
+++ b/src/frameworks/a4x/trtllm-configs/llama-3-1-405b.yaml
@@ -0,0 +1,4 @@
+tp_size: 4
+pp_size: 1
+backend: pytorch
+kv_cache_free_gpu_mem_fraction: 0.90
diff --git a/src/helm-charts/a4/inference-templates/deployment/templates/serving-launcher.yaml b/src/helm-charts/a4/inference-templates/deployment/templates/serving-launcher.yaml
index ea3d1b86..0bce149f 100644
--- a/src/helm-charts/a4/inference-templates/deployment/templates/serving-launcher.yaml
+++ b/src/helm-charts/a4/inference-templates/deployment/templates/serving-launcher.yaml
@@ -171,6 +171,8 @@ spec:
{{- end }}
- name: NCCL_PLUGIN_PATH
value: /usr/local/gib/lib64
+ - name: LD_LIBRARY_PATH
+ value: /usr/local/gib/lib64:/usr/local/nvidia/lib64
{{- if $root.Values.network.gibVersion }}
- name: NCCL_INIT_SCRIPT
value: "/usr/local/gib/scripts/set_nccl_env.sh"
@@ -180,6 +182,8 @@ spec:
value: "{{ $root.Values.workload.model.name }}"
- name: MODEL_DOWNLOAD_DIR
value: "/ssd/{{ $root.Values.workload.model.name }}"
+ - name: TRTLLM_DIR
+ value: "/app/tensorrt_llm"
{{- if $root.Values.workload.envs }}
{{- toYaml .Values.workload.envs | nindent 12 }}
{{- end }}
@@ -189,6 +193,7 @@ spec:
args:
- |
#!/bin/bash
+ pip install pyyaml hf_transfer
if [[ -n "${NCCL_INIT_SCRIPT}" ]]; then
echo "Running NCCL init script: ${NCCL_INIT_SCRIPT}"
@@ -203,30 +208,46 @@ spec:
fi
ARGS=()
+ EXTRA_ARGS_FILE="/tmp/extra_llm_api_args.yaml"
- if [ -f "$SERVER_ARGS_FILE" ]; then
- echo "Loading server arguments from ConfigMap"
- while IFS=': ' read -r key value || [ -n "$key" ]; do
- [[ -z "$key" || "$key" == \#* ]] && continue
- key=$(echo "$key" | xargs)
- value=$(echo "$value" | xargs)
+ # Use Python to parse the main config file, extract llm_api_args,
+ # and generate the command-line arguments.
+ python -c "
+ import yaml
+ import sys
- if [ -n "$key" ]; then
- # Handle boolean values
- if [[ "$value" == "true" ]]; then
- # For true values, just add the flag without a value
- ARGS+=("--$key")
- elif [[ "$value" == "false" ]]; then
- ARGS+=("--$key" "false")
- elif [ -n "$value" ]; then
- # For non-boolean values, add both the flag and its value
- ARGS+=("--$key" "$value")
- else
- ARGS+=("--$key")
- fi
- fi
- done < "$SERVER_ARGS_FILE"
- fi
+ args = []
+ llm_api_args = {}
+ config_file = sys.argv[1]
+ extra_args_file = sys.argv[2]
+
+ try:
+ with open(config_file, 'r') as f:
+ config = yaml.safe_load(f)
+
+ if 'llm_api_args' in config:
+ llm_api_args = config.pop('llm_api_args')
+ with open(extra_args_file, 'w') as f:
+ yaml.dump(llm_api_args, f)
+
+ for key, value in config.items():
+ if value is True:
+ args.append(f'--{key}')
+ elif value is not False:
+ args.append(f'--{key}')
+ args.append(str(value))
+
+ # Print the arguments for the shell script to capture
+ print(' '.join(args))
+
+ except Exception as e:
+ print(f'Error parsing config file: {e}', file=sys.stderr)
+ sys.exit(1)
+ " "$SERVER_ARGS_FILE" "$EXTRA_ARGS_FILE" > /tmp/launcher_args.txt
+
+ # Read the generated arguments into the ARGS array
+ mapfile -t ARGS < <(tr ' ' '\n' < /tmp/launcher_args.txt)
+ rm /tmp/launcher_args.txt
{{ if eq $root.Values.workload.framework "trtllm" }}
{{- range $root.Values.workload.benchmarks.experiments }}
diff --git a/src/launchers/trtllm-launcher.sh b/src/launchers/trtllm-launcher.sh
index 5e8ee091..06c0426a 100644
--- a/src/launchers/trtllm-launcher.sh
+++ b/src/launchers/trtllm-launcher.sh
@@ -85,7 +85,7 @@ parse_serving_config() {
for ((index = 0; index < ${#SERVING_CONFIG[@]}; )); do
current_arg="${SERVING_CONFIG[$index]}"
- next_arg="${SERVING_CONFIG[$((index + 1))]}"
+ next_arg=${SERVING_CONFIG[$((index + 1))]:-}
# Handle --key=value format
if [[ "$current_arg" =~ ^--[^=]+=.+ ]]; then
@@ -180,6 +180,7 @@ run_benchmark() {
if [[ $backend == "pytorch" ]]; then
echo "Running throughput benchmark"
+ export NCCL_P2P_LEVEL=PHB
trtllm-bench \
--model $model_name \
--model_path /ssd/${model_name} throughput \