| Topic | Replies | Views | Activity |
|---|---|---|---|
| Can vLLM be built for an old GPU (GT 630M)? It may use CUDA 9.1.85 | 1 | 138 | August 4, 2025 |
| How to deploy vllm-ascend in AutoDL's 910B instance? | 7 | 336 | August 2, 2025 |
| GPU Time Slicing | 0 | 155 | July 16, 2025 |
| How to modify the CUDA graph capture sizes via a vLLM plugin | 1 | 321 | July 1, 2025 |
| Can't use Ampere features | 1 | 141 | June 10, 2025 |
| KV Cache quantizing? | 3 | 631 | June 2, 2025 |
| Does vLLM support inference or service startup for small models on CPU? | 3 | 178 | May 30, 2025 |
| Struggling with my dual GPU setup and getting chat template errors | 2 | 170 | May 30, 2025 |
| How to get torch-npu >= 2.5.1.dev20250308 | 3 | 387 | May 28, 2025 |
| Question about vllm-ascend performance on a server with 8*910B3 | 5 | 431 | May 28, 2025 |
| Why is this not working? I corrected it but still | 1 | 742 | May 8, 2025 |
| Can anyone help me? Why is this not working? It used 😭 | 1 | 984 | May 8, 2025 |
| Docker explosion this morning after it worked fine for a long while | 6 | 443 | May 6, 2025 |
| 32GB vs 48GB VRAM | 1 | 687 | May 3, 2025 |
| Run on B200/5090 without building from source? | 1 | 229 | May 1, 2025 |
| Failure running Gemma 3 on multi-chip TPU | 5 | 490 | May 1, 2025 |
| How to set up an AMD GPU as the default in a dual-GPU stack? | 10 | 548 | April 21, 2025 |
| Is there any working Colab notebook using vLLM with TPU v5e? | 2 | 303 | April 16, 2025 |
| Making best use of varying GPU generations | 2 | 667 | April 11, 2025 |
| Improving computing power at home for n00bs | 7 | 136 | April 2, 2025 |
| Question about vLLM and vLLM Ascend versioning policy | 4 | 292 | April 1, 2025 |
| Jetson Orin, CUDA error: no kernel image is available for execution on the device | 0 | 443 | March 29, 2025 |
| Can Ascend officially draft documentation on the vLLM-Ascend adaptation for graph mode? | 6 | 342 | March 26, 2025 |
| About clean_up in the example on the vllm-ascend official website | 2 | 263 | March 20, 2025 |