| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| When using large batches, the Ray service crashes: `ray.exceptions.RayChannelTimeoutError: System error: Timed out waiting for object available to read` | 41 | 1350 | October 26, 2025 |
| Why does vLLM not support LMP? | 3 | 131 | October 23, 2025 |
| Is it possible to configure the order of the pipeline in multi-node deployments? | 3 | 126 | October 16, 2025 |
| Question on Advanced vLLM Use Case: Distributed Prefix Caching for a CAG Evaluation Framework | 1 | 116 | October 15, 2025 |
| A bit of frustration with Quantization | 5 | 635 | October 14, 2025 |
| DeepSeek-V3: `tool_choice="auto"` not working, but `tool_choice="required"` is working | 4 | 714 | October 13, 2025 |
| Can we reuse CUDA graphs across layers? | 2 | 67 | October 9, 2025 |
| MCP tool server with the OpenAI Responses API | 3 | 859 | September 25, 2025 |
| Pass instructions to Qwen Embedding / Reranker via the OpenAI-compatible server? | 5 | 694 | September 11, 2025 |
| Is FCFS Scheduling Holding Back vLLM's Performance in Production? | 3 | 208 | September 11, 2025 |
| General questions on the structured output backend | 9 | 862 | September 3, 2025 |
| Clarification: Does vLLM support concurrent decoding with multiple LoRA adapters in online inference? | 1 | 466 | August 29, 2025 |
| How to do KV cache transfer between a CPU instance and a GPU instance? | 1 | 237 | July 31, 2025 |
| Support for Deploying a 4-bit Fine-Tuned Model with LoRA on vLLM | 13 | 789 | July 30, 2025 |
| Does vLLM support draft models with tp>1 when using speculative decoding? | 1 | 156 | July 29, 2025 |
| Is there any roadmap to support prefix caching on DRAM and disk? | 1 | 118 | July 25, 2025 |
| Performance Degradation and Compatibility Issues with AWQ Quantization in vLLM (Qwen2.5-VL-32B) | 1 | 545 | July 23, 2025 |
| Multi-node K8s GPU pooling | 3 | 400 | July 17, 2025 |
| Error trying to handle a streaming tool call | 3 | 458 | July 17, 2025 |
| Improving Speculative Decoding for Beginning Tokens & Structured Output | 1 | 144 | July 16, 2025 |
| Question: Specifying the Medusa Choice Tree in vLLM | 1 | 93 | July 11, 2025 |
| Disagg Prefill timeout | 1 | 112 | July 7, 2025 |
| MoE quantization | 9 | 1222 | July 2, 2025 |
| Why are CUDA graph capture sizes limited by max_num_seqs? | 1 | 790 | June 29, 2025 |
| Scheduler in vLLM | 1 | 313 | June 26, 2025 |
| prompt_embeds usage in the vLLM OpenAI completion API | 4 | 176 | June 17, 2025 |
| Is there a detailed introduction to the two W8A8 quantization methods? | 1 | 192 | June 15, 2025 |
| Sequence Parallelism Support - Source Code Location | 0 | 41 | June 10, 2025 |
| Something weird about the reading procedure of q_vecs in the paged attention kernel | 3 | 25 | June 9, 2025 |
| Computation time remains consistent across chunks in chunked prefill despite linearly growing attention complexity? | 1 | 65 | June 2, 2025 |