| Topic | Replies | Views | Activity |
|---|---|---|---|
| About the Features category | 0 | 32 | March 20, 2025 |
| Custom modality | 3 | 10 | November 14, 2025 |
| Asking 6-bit Quantization | 1 | 11 | November 11, 2025 |
| Expert offloading | 1 | 22 | November 11, 2025 |
| Raw tokens completion via online serving | 1 | 18 | November 3, 2025 |
| Deployment example for a qwen3 model with hybrid thinking | 8 | 481 | October 29, 2025 |
| vLLM extremely slow / no response with max_model_len=8192 and multi-GPU tensor parallel | 1 | 74 | October 26, 2025 |
| When using large batches, the Ray service crashes: ray.exceptions.RayChannelTimeoutError: System error: Timed out waiting for object available to read | 41 | 685 | October 26, 2025 |
| Why vllm does not support LMP? | 3 | 49 | October 23, 2025 |
| Is it possible to configure the order of the pipeline in multi-node deployments? | 3 | 14 | October 16, 2025 |
| Question on Advanced vLLM Use Case: Distributed Prefix Caching for a CAG Evaluation Framework | 1 | 44 | October 15, 2025 |
| A bit of frustration with Quantization | 5 | 227 | October 14, 2025 |
| DeepSeek-V3 tool_choice="auto" not working, but tool_choice="required" is working | 4 | 370 | October 13, 2025 |
| Can we reuse cuda graph across layers? | 2 | 34 | October 9, 2025 |
| MCP tool-server OpenAI responses API | 3 | 327 | September 25, 2025 |
| Pass instructions to Qwen Embedding / Reranker via OpenAI-compatible server? | 5 | 243 | September 11, 2025 |
| Is FCFS Scheduling Holding Back vLLM's Performance in Production? | 3 | 79 | September 11, 2025 |
| General questions on structured output backend | 9 | 306 | September 3, 2025 |
| Clarification: Does vLLM support concurrent decoding with multiple LoRA adapters in online inference? | 1 | 165 | August 29, 2025 |
| How to do KV cache transfer between a CPU instance and a GPU instance? | 1 | 134 | July 31, 2025 |
| Support for Deploying 4-bit Fine-Tuned Model with LoRA on vLLM | 13 | 304 | July 30, 2025 |
| Does vLLM support a draft model with tp>1 when using speculative decoding? | 1 | 81 | July 29, 2025 |
| Is there any roadmap to support prefix caching on DRAM and disk? | 1 | 76 | July 25, 2025 |
| Performance Degradation and Compatibility Issues with AWQ Quantization in vLLM (Qwen2.5-VL-32B) | 1 | 296 | July 23, 2025 |
| Multi-node K8s GPU pooling | 3 | 168 | July 17, 2025 |
| Error trying to handle streaming tool call | 3 | 223 | July 17, 2025 |
| Improving Speculative Decoding for Beginning Tokens & Structured Output | 1 | 91 | July 16, 2025 |
| Question: Specifying Medusa Choice Tree in vllm | 1 | 53 | July 11, 2025 |
| Disagg Prefill timeout | 1 | 58 | July 7, 2025 |
| MoE quantization | 9 | 862 | July 2, 2025 |