Topic | Replies | Views | Activity
The latest version of vllm is not compatible with local deployment of deepseek-v4 (0.20) | 2 | 289 | April 29, 2026
vLLM 0.10.1 v1 benchmark: only a part of the requests can be processed before it gets stuck | 1 | 165 | November 4, 2025
FlashMLA issue when running FP8 DeepSeek V8 model on H20 | 3 | 163 | September 9, 2025
Init DeepSeek-R1 using Offline Batched Inference | 3 | 268 | May 18, 2025
How to run DeepSeek OCR 2 in vLLM | 1 | 1185 | January 27, 2026