Help: how do I install vLLM with CUDA 13.0?

My device is a Jetson AGX Thor, which currently only supports CUDA 13.0. Whenever I try to install vLLM, it keeps telling me to install CUDA 12.x. Question: how can I install vLLM against CUDA 13.0? Thanks in advance.

vLLM's official prebuilt wheels currently target CUDA 12.8/12.9 and below; there is no wheel yet that directly supports CUDA 13.0. If your device can only run CUDA 13.0, the recommended path is to build vLLM from source so it compiles against your CUDA 13.0 environment. When building, make sure PyTorch is also built for CUDA 13.0 and that the CUDA_HOME environment variable is set correctly; otherwise you will hit version-mismatch or "CUDA not found" errors. For the detailed source-build procedure, see the official documentation and the related issue discussions.
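The version-match requirement above can be stated concretely: the CUDA release PyTorch was compiled against must share a major version with the toolkit that CUDA_HOME points to. The helper below is a minimal sketch of that check (the function name `cuda_majors_match` is mine for illustration, not part of vLLM's build system):

```python
def cuda_majors_match(torch_cuda_version: str, toolkit_version: str) -> bool:
    """Return True when both CUDA version strings share the same major release.

    A PyTorch wheel built for CUDA 12.x loaded against a CUDA 13.0 toolkit is
    exactly the mismatch that produces "requires CUDA 12.x"-style errors.
    """
    return torch_cuda_version.split(".")[0] == toolkit_version.split(".")[0]
```

In a live environment you would feed it `torch.version.cuda` on one side and the toolkit version under CUDA_HOME (e.g. from `nvcc --version`) on the other, and refuse to build if it returns False.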

In addition, some community users have already deployed vLLM with CUDA 13.0 on aarch64 hardware (such as GH200/Thor) via a custom Dockerfile or prebuilt images; you can refer to that Dockerfile or pull a community image directly. If you need detailed build steps or run into a specific error, let me know and I can break it down further.


Hello, could you explain in detail how to download and use it? Thanks for the reply.

Still not working; it errors out:
(llm) nvidia@localhost:/data/lqy/vllm/vllm$ python -m vllm.entrypoints.openai.api_server \
    --model /data/lqy/qwen/Qwen3-VL-8B-Instruct \
    --served-model-name qwen3-vl-8b \
    --trust-remote-code \
    --host 0.0.0.0 \
    --port 8000

INFO 11-11 10:36:08 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=2280009) INFO 11-11 10:36:11 [api_server.py:1839] vLLM API server version 0.11.0
(APIServer pid=2280009) INFO 11-11 10:36:11 [utils.py:233] non-default args: {'host': '0.0.0.0', 'model': '/data/lqy/qwen/Qwen3-VL-8B-Instruct', 'trust_remote_code': True, 'served_model_name': ['qwen3-vl-8b']}
(APIServer pid=2280009) The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] Error in inspecting model architecture 'Qwen3VLForConditionalGeneration'
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] Traceback (most recent call last):
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 966, in _run_in_subprocess
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] returned.check_returncode()
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/conda/envs/llm/lib/python3.12/subprocess.py", line 502, in check_returncode
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] raise CalledProcessError(self.returncode, self.args, self.stdout,
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] subprocess.CalledProcessError: Command '['/data/conda/envs/llm/bin/python', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548]
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] The above exception was the direct cause of the following exception:
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548]
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] Traceback (most recent call last):
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 546, in _try_inspect_model_cls
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] return model.inspect_model_cls()
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/logging_utils/log_time.py", line 22, in _wrapper
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] result = func(*args, **kwargs)
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 509, in inspect_model_cls
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] mi = _run_in_subprocess(
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 969, in _run_in_subprocess
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] raise RuntimeError(f"Error raised in subprocess:\n"
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] RuntimeError: Error raised in subprocess:
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] :128: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] Traceback (most recent call last):
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 198, in _run_module_as_main
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 88, in _run_code
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 990, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] _run()
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 983, in _run
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] result = fn()
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 510, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/registry.py", line 521, in load_model_cls
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] mod = importlib.import_module(self.module_name)
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/conda/envs/llm/lib/python3.12/importlib/__init__.py", line 90, in import_module
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] return _bootstrap._gcd_import(name[level:], package, level)
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 1387, in _gcd_import
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 1360, in _find_and_load
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 1331, in _find_and_load_unlocked
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 935, in _load_unlocked
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 999, in exec_module
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "", line 488, in _call_with_frames_removed
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/models/qwen3_vl.py", line 57, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.model_loader.weight_utils import default_weight_loader
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/model_loader/__init__.py", line 12, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.model_loader.bitsandbytes_loader import (
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/model_loader/bitsandbytes_loader.py", line 25, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.fused_moe import FusedMoE
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/fused_moe/__init__.py", line 8, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.fused_moe.layer import (
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/fused_moe/layer.py", line 28, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.fused_moe.fused_moe import (
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 16, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] import vllm.model_executor.layers.fused_moe.modular_kernel as mk
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 13, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.fused_moe.utils import ( # yapf: disable
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/fused_moe/utils.py", line 9, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.quantization.utils.fp8_utils import (
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/quantization/utils/fp8_utils.py", line 18, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] from vllm.model_executor.layers.quantization.utils.w8a8_utils import (
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/quantization/utils/w8a8_utils.py", line 72, in
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] CUTLASS_FP8_SUPPORTED = cutlass_fp8_supported()
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/model_executor/layers/quantization/utils/w8a8_utils.py", line 49, in cutlass_fp8_supported
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] return ops.cutlass_scaled_mm_supports_fp8(capability)
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/lqy/vllm/vllm/vllm/_custom_ops.py", line 614, in cutlass_scaled_mm_supports_fp8
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] return torch.ops._C.cutlass_scaled_mm_supports_fp8(cuda_device_capability)
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] File "/data/conda/envs/llm/lib/python3.12/site-packages/torch/_ops.py", line 1353, in __getattr__
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] raise AttributeError(
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548] AttributeError: '_OpNamespace' '_C' object has no attribute 'cutlass_scaled_mm_supports_fp8'
(APIServer pid=2280009) ERROR 11-11 10:36:18 [registry.py:548]
(APIServer pid=2280009) Traceback (most recent call last):
(APIServer pid=2280009) File "", line 198, in _run_module_as_main
(APIServer pid=2280009) File "", line 88, in _run_code
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/entrypoints/openai/api_server.py", line 1953, in
(APIServer pid=2280009) uvloop.run(run_server(args))
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/site-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=2280009) return __asyncio.run(
(APIServer pid=2280009) ^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=2280009) return runner.run(main)
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=2280009) return self._loop.run_until_complete(task)
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/site-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=2280009) return await main
(APIServer pid=2280009) ^^^^^^^^^^
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/entrypoints/openai/api_server.py", line 1884, in run_server
(APIServer pid=2280009) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/entrypoints/openai/api_server.py", line 1902, in run_server_worker
(APIServer pid=2280009) async with build_async_engine_client(
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=2280009) return await anext(self.gen)
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client
(APIServer pid=2280009) async with build_async_engine_client_from_engine_args(
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=2280009) return await anext(self.gen)
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/entrypoints/openai/api_server.py", line 206, in build_async_engine_client_from_engine_args
(APIServer pid=2280009) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/engine/arg_utils.py", line 1142, in create_engine_config
(APIServer pid=2280009) model_config = self.create_model_config()
(APIServer pid=2280009) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/lqy/vllm/vllm/vllm/engine/arg_utils.py", line 994, in create_model_config
(APIServer pid=2280009) return ModelConfig(
(APIServer pid=2280009) ^^^^^^^^^^^^
(APIServer pid=2280009) File "/data/conda/envs/llm/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=2280009) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=2280009) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=2280009) Value error, Model architectures ['Qwen3VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details. [type=value_error, input_value=ArgsKwargs((), {'model': …rocessor_plugin': None}), input_type=ArgsKwargs]
(APIServer pid=2280009) For further information visit Redirecting...
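The root cause is the AttributeError near the top of the log: `torch.ops._C` has no `cutlass_scaled_mm_supports_fp8`. The Python side of vLLM imported fine, but the compiled C++/CUDA extension that registers those custom ops was never built (or a stale, incomplete build is on the path). A quick stdlib check is to look for a compiled `_C` extension module inside the source tree; the helper below is only a sketch (the function name `find_compiled_ext` is mine for illustration):

```python
from pathlib import Path


def find_compiled_ext(tree: str, stem: str = "_C") -> list:
    """List compiled extension modules (e.g. vllm/_C.abi3.so) under `tree`.

    An empty result means the C++/CUDA kernels were never built, which is
    why torch.ops._C has no cutlass_scaled_mm_supports_fp8 attribute.
    """
    return sorted(Path(tree).rglob(f"{stem}*.so"))
```

If nothing turns up under your vLLM checkout, the compiled step was skipped; rebuilding in-place from the repo root with CUDA_HOME pointing at your CUDA 13.0 toolkit (and a CUDA 13.0 PyTorch installed) is usually the fix.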