Description
During distributed inference I see little to no performance improvement when using InfiniBand versus not using it. I have followed the project's recommendations on distributed inference here. With IB enabled, the vLLM logs show NCCL sending and receiving via InfiniBand with GPUDirect RDMA (source):
```
deepseek-v2-5-0:5969:5969 [0] NCCL INFO Channel 00/0 : 1[0] -> 0[0] [receive] via NET/IB/0/GDRDMA
deepseek-v2-5-0:5969:5969 [0] NCCL INFO Channel 01/0 : 1[0] -> 0[0] [receive] via NET/IB/0/GDRDMA
deepseek-v2-5-0:5969:5969 [0] NCCL INFO Channel 00/0 : 0[0] -> 1[0] [send] via NET/IB/0/GDRDMA
deepseek-v2-5-0:5969:5969 [0] NCCL INFO Channel 01/0 : 0[0] -> 1[0] [send] via NET/IB/0/GDRDMA
```
This post on the vLLM blog also includes benchmarks, and it likewise shows no significant difference between running with and without IB: https://blog.vllm.ai/2024/07/23/llama31.html
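One way to take vLLM out of the picture entirely would be a raw NCCL bandwidth test between the two nodes. A minimal sketch, assuming nccl-tests and OpenMPI are available in the containers (they may not be in the stock image) and that the hostnames below are hypothetical placeholders for the two pods:

```bash
# Sketch only: assumes nccl-tests (github.com/NVIDIA/nccl-tests) is built in the
# containers and MPI can reach both pods; "leader-pod"/"worker-pod" are placeholders.
mpirun -np 16 -H leader-pod:8,worker-pod:8 \
  -x NCCL_DEBUG=INFO -x NCCL_IB_DISABLE=0 \
  ./build/all_reduce_perf -b 8 -e 1G -f 2 -g 1
# Re-run with -x NCCL_IB_DISABLE=1 to force the socket/Ethernet path and compare
# the reported bus bandwidth directly, independent of vLLM.
```

If the raw busbw differs dramatically between the two runs but the serving benchmarks do not, that would point at the workload (not the fabric) being the bottleneck.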
Report of performance regression
Here are the performance results:
Test name | GPU | # of req. | Tput (req/s) | Output Tput (tok/s) | Total Tput (tok/s) | Mean TTFT (ms) | Median TTFT (ms) | P99 TTFT (ms) | Mean TPOT (ms) | Median TPOT (ms) | P99 TPOT (ms) | Mean ITL (ms) | Median ITL (ms) | P99 ITL (ms) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
no_ib_tp8_pp2_qps_01 | 1xno-ib-Standard_ND96asr_v4 x 2 | 200 | 0.646012 | 137.901 | 290.344 | 236.677 | 215.085 | 1302.77 | 139.814 | 139.073 | 165.73 | 138.785 | 129.591 | 286.317 |
no_ib_tp8_pp2_qps_04 | 1xno-ib-Standard_ND96asr_v4 x 2 | 200 | 1.17664 | 250.984 | 528.643 | 237.706 | 223.34 | 703.65 | 173.128 | 170.547 | 232.624 | 158.083 | 141.287 | 451.196 |
no_ib_tp8_pp2_qps_16 | 1xno-ib-Standard_ND96asr_v4 x 2 | 200 | 1.46822 | 312.342 | 658.806 | 246.536 | 242.183 | 397.258 | 235.883 | 170.223 | 758.732 | 160.798 | 143.171 | 723.514 |
no_ib_tp8_pp2_qps_inf | 1xno-ib-Standard_ND96asr_v4 x 2 | 200 | 1.57831 | 332.795 | 705.238 | 1746.93 | 1643.29 | 2415.8 | 165.243 | 148.692 | 378.552 | 144.618 | 142.733 | 175.876 |
ib_tp8_pp2_qps_01 | 1xib-Standard_ND96asr_v4 x 2 | 200 | 0.651377 | 139.014 | 292.722 | 227.47 | 203.671 | 1241.54 | 137.99 | 137.408 | 162.498 | 136.986 | 128.114 | 278.476 |
ib_tp8_pp2_qps_04 | 1xib-Standard_ND96asr_v4 x 2 | 200 | 1.18058 | 250.23 | 528.818 | 227.527 | 219.97 | 449.453 | 167.116 | 164.781 | 235.652 | 155.036 | 139.355 | 430.007 |
ib_tp8_pp2_qps_16 | 1xib-Standard_ND96asr_v4 x 2 | 200 | 1.46607 | 311.019 | 656.974 | 239.256 | 233.958 | 342.987 | 228.171 | 169.589 | 628.591 | 158.357 | 141.627 | 776.336 |
ib_tp8_pp2_qps_inf | 1xib-Standard_ND96asr_v4 x 2 | 200 | 1.57666 | 335.703 | 707.755 | 1838.06 | 1882.46 | 2667.65 | 170.38 | 148.324 | 471.125 | 144.155 | 141.769 | 166.42 |
Visualization of the above table (color legend: green is with InfiniBand, red is without InfiniBand):
Throughput
TTFT
TPOT
ITL
Misc discussion on performance
Grafana-based benchmarking charts:
- Kubernetes / Networking
- Cluster
- Namespace (Pods)
- Ray Dashboard
- vLLM
- NVIDIA DCGM Exporter Dashboard
- NVIDIA DCGM Exporter
The only significant difference I see in the Grafana charts above is the network usage on the nodes. Without IB the network usage is higher, because all the data transfer happens over Ethernet; but it is not high enough to saturate either fabric, whether Ethernet alone or IB plus Ethernet.
Without IB

With IB

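A rough back-of-envelope supports this (these are my assumptions, not measurements: TP=8 stays inside each node over NVLink, so only the pipeline-parallel activations cross the network; DeepSeek-V2.5's hidden size is on the order of $h \approx 5120$; activations are bf16, i.e. 2 bytes each). The decode-time cross-node traffic would be roughly:

$$\text{BW} \approx h \times 2~\text{B/token} \times \text{Tput} \approx 5120 \times 2 \times 700~\text{tok/s} \approx 7~\text{MB/s}$$

Even a full 8192-token prefill would only move on the order of $8192 \times 10~\text{KB} \approx 80~\text{MB}$ per request across the pipeline boundary. If those assumptions hold, neither fabric gets anywhere near saturation, which would be consistent with IB and Ethernet performing the same here.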
Your current environment (if you think it is necessary)
I am running vLLM with Ray for distributed inference on Azure Kubernetes Service, with a node pool of two GPU machines of type Standard_ND96asr_v4. Each machine has 8 A100 GPUs, and the machines are connected via InfiniBand. I have deployed the GPU Operator and Network Operator on the Kubernetes cluster.
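As a sanity check that the HCAs are actually exposed inside the pods, something like the following should work (hedged: ibstat assumes the Mellanox user-space tools are present in the image; the pod name is taken from the NCCL log prefix above):

```bash
# List the HCAs visible in the leader pod and confirm their link state and rate.
kubectl exec -it deepseek-v2-5-0 -- ibstat | grep -E "CA '|State|Rate"
```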
Ray Head Environment
```
INFO 04-22 13:50:51 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1084-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7V12 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 0
BogoMIPS: 4890.86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip rdpid
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (24 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.4.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.51.0
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE SYS SYS SYS SYS SYS SYS SYS 24-47 1 N/A
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE SYS SYS SYS SYS SYS SYS SYS 24-47 1 N/A
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 SYS SYS NODE NODE SYS SYS SYS SYS NODE 0-23 0 N/A
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 SYS SYS NODE NODE SYS SYS SYS SYS NODE 0-23 0 N/A
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS SYS NODE NODE SYS SYS SYS 72-95 3 N/A
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS SYS NODE NODE SYS SYS SYS 72-95 3 N/A
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS SYS SYS SYS NODE NODE SYS 48-71 2 N/A
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS SYS SYS SYS NODE NODE SYS 48-71 2 N/A
NIC0 NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS SYS SYS SYS SYS
NIC1 NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS SYS SYS SYS SYS
NIC2 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS SYS NODE
NIC3 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS SYS NODE
NIC4 SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS
NIC5 SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS
NIC6 SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS
NIC7 SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS
NIC8 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NVIDIA_VISIBLE_DEVICES=GPU-1a18aca5-b383-a074-4bc0-f7025b0bff6c,GPU-a4d0a79c-d538-2e80-9988-e5093de93bc8,GPU-3e5f5fb6-3b78-42c5-d38e-cf0e0ce1bc2e,GPU-089f42e2-1c0a-dcfb-7ceb-d24edf08742b,GPU-8707c7b7-0bb6-4e9c-61ed-ee4817d4f1a3,GPU-703db7c6-320b-536d-353e-5fed1b01c811,GPU-4361b490-d47b-4c91-7ed2-fd9883567cfe,GPU-e659ab71-db2e-6ca1-8977-01f72a2e53eb
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
NCCL_NET_GDR_LEVEL=SYS
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NCCL_DEBUG=INFO
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VERSION=12.4.0
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_IB_DISABLE=0
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
Ray Worker Environment
```
INFO 04-22 13:52:29 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1084-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7V12 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 0
BogoMIPS: 4890.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip rdpid
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (24 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.4.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.51.0
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE SYS SYS SYS SYS SYS SYS SYS 24-47 1 N/A
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE SYS SYS SYS SYS SYS SYS SYS 24-47 1 N/A
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 SYS SYS NODE NODE SYS SYS SYS SYS NODE 0-23 0 N/A
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 SYS SYS NODE NODE SYS SYS SYS SYS NODE 0-23 0 N/A
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS SYS NODE NODE SYS SYS SYS 72-95 3 N/A
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS SYS NODE NODE SYS SYS SYS 72-95 3 N/A
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS SYS SYS SYS NODE NODE SYS 48-71 2 N/A
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS SYS SYS SYS NODE NODE SYS 48-71 2 N/A
NIC0 NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS SYS SYS SYS SYS
NIC1 NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS SYS SYS SYS SYS
NIC2 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS SYS NODE
NIC3 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS SYS NODE
NIC4 SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS SYS SYS
NIC5 SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS SYS SYS
NIC6 SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS X NODE SYS
NIC7 SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE X SYS
NIC8 SYS SYS NODE NODE SYS SYS SYS SYS SYS SYS NODE NODE SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NVIDIA_VISIBLE_DEVICES=GPU-68dda3c9-81cb-8b2e-13f8-1e1b0cf85f62,GPU-7becc8c6-c086-1686-1411-31e459854ca1,GPU-ca8b7d4d-66f3-79da-628a-86f381fb8ceb,GPU-6eee3dae-945c-4707-4e6d-e0d7a99ad57d,GPU-8e5bd307-d9b9-981a-9bbb-8bbcc43641f9,GPU-af06d3de-09a8-86d2-71d7-353c1072c111,GPU-81cc8c9b-6fda-5d36-d287-8495bcfa2029,GPU-2af6f3df-1c02-5f55-b5f8-5e03ed6e49ac
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
NCCL_NET_GDR_LEVEL=SYS
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NCCL_DEBUG=INFO
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VERSION=12.4.0
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_IB_DISABLE=0
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
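Both environments set `NCCL_IB_DISABLE=0` and `NCCL_NET_GDR_LEVEL=SYS`. To confirm which transport NCCL actually settles on beyond the per-channel lines quoted above, one option (hedged) is to add `NCCL_DEBUG_SUBSYS=INIT,NET` next to the existing `NCCL_DEBUG=INFO` in the pod env and grep the logs:

```bash
# "Using network IB" (vs. "Using network Socket") in the init logs shows the
# transport NCCL selected; the NET/IB lines show the per-channel path.
kubectl logs deepseek-v2-5-0 | grep -E "NCCL INFO (Using network|NET/)"
```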
The pods are deployed with the following configuration:
```yaml
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: deepseek-v2-5
  namespace: default
spec:
  replicas: 1
  leaderWorkerTemplate:
    size: 2
    restartPolicy: RecreateGroupOnPodRestart
    leaderTemplate:
      metadata:
        labels:
          role: leader
      spec:
        containers:
        - name: vllm
          image: ghcr.io/surajssd/llm-k8s/lws-vllm:0.8.3
          imagePullPolicy: Always
          command:
          - sh
          - -c
          - "set -x;
            bash /vllm-workspace/examples/online_serving/multi-node-serving.sh leader
            --ray_cluster_size=$LWS_GROUP_SIZE
            --dashboard-host=0.0.0.0
            --metrics-export-port=8080;
            python3 -m vllm.entrypoints.openai.api_server
            --port 8000
            --model deepseek-ai/DeepSeek-V2.5
            --tensor-parallel-size 8
            --pipeline-parallel-size 2
            --enable-prefix-caching
            --max-model-len 8192
            --enforce-eager
            --trust-remote-code"
          env:
          - name: NCCL_DEBUG
            value: INFO
          - name: NCCL_NET_GDR_LEVEL
            value: SYS
          - name: NCCL_IB_DISABLE
            value: "0"
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 60
            periodSeconds: 15
            timeoutSeconds: 720
          ports:
          # vLLM port
          - containerPort: 8000
          # Ray dashboard port
          - containerPort: 8265
          # Ray metrics port
          - containerPort: 8080
          # Ray cluster address
          - containerPort: 6379
          resources:
            limits:
              nvidia.com/gpu: "8"
              rdma/shared_ib: 1
            requests:
              nvidia.com/gpu: "8"
              rdma/shared_ib: 1
          securityContext:
            capabilities:
              add: [ "IPC_LOCK" ]
          volumeMounts:
          - name: shm
            mountPath: /dev/shm
          - name: model
            mountPath: /root/.cache/huggingface
        volumes:
        - name: shm
          emptyDir:
            medium: Memory
        - name: model
          hostPath:
            path: /mnt/model
            type: DirectoryOrCreate
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: leaderworkerset.sigs.k8s.io/name
                  operator: In
                  values:
                  - deepseek-v2-5
              topologyKey: "kubernetes.io/hostname"
    workerTemplate:
      metadata:
        labels:
          role: worker
      spec:
        containers:
        - name: vllm
          image: ghcr.io/surajssd/llm-k8s/lws-vllm:0.8.3
          imagePullPolicy: Always
          command:
          - sh
          - -c
          - "set -x;
            bash /vllm-workspace/examples/online_serving/multi-node-serving.sh worker
            --ray_address=$LWS_LEADER_ADDRESS
            --metrics-export-port=8080"
          env:
          - name: NCCL_DEBUG
            value: INFO
          - name: NCCL_NET_GDR_LEVEL
            value: SYS
          - name: NCCL_IB_DISABLE
            value: "0"
          ports:
          # vLLM port
          - containerPort: 8000
          # Ray metrics port
          - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: "8"
              rdma/shared_ib: 1
            requests:
              nvidia.com/gpu: "8"
              rdma/shared_ib: 1
          securityContext:
            capabilities:
              add: [ "IPC_LOCK" ]
          volumeMounts:
          - name: shm
            mountPath: /dev/shm
          - name: model
            mountPath: /root/.cache/huggingface
        volumes:
        - name: shm
          emptyDir:
            medium: Memory
        - name: model
          hostPath:
            path: /mnt/model
            type: DirectoryOrCreate
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: leaderworkerset.sigs.k8s.io/name
                  operator: In
                  values:
                  - deepseek-v2-5
              topologyKey: "kubernetes.io/hostname"
```
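A few hedged sanity checks that the `rdma/shared_ib` resource requested above actually lands in the pods (`ibv_devinfo` assumes rdma-core is installed in the image; the pod name follows the LWS naming seen in the NCCL logs):

```bash
# The shared RDMA device plugin should advertise the resource on both nodes...
kubectl describe node | grep rdma/shared_ib
# ...and the IB devices should be visible from inside the pod, with ports active.
kubectl exec deepseek-v2-5-0 -- ls /sys/class/infiniband
kubectl exec deepseek-v2-5-0 -- ibv_devinfo | grep -E "hca_id|state"
```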
How do I reproduce this issue?
Follow the steps here: https://github.com/surajssd/llm-k8s/blob/fair-benchmarks/deepseek-v2.5/reproduce-benchmark-results.sh