vllm serve fails to start: is one of the arguments wrong?

Launch command:

source /usr/local/lib/miniconda3/bin/activate SenseRL && \
vllm serve /mnt/afs/share_models/git_models/Qwen/Qwen3-32B \
--port 6366 \
--tensor-parallel-size 8 \

Log:

[2025-07-30 17:13:26] reset container password ...
[2025-07-30 17:13:26] check and install necesary softwares for tione ...
[2025-07-30 17:13:27] tikit installed.
[2025-07-30 17:13:28] sshd started.
[2025-07-30 17:13:28] config password-free login from launcher node to all worker nodes ...
[2025-07-30 17:13:30] set environment variables ...
[2025-07-30 17:13:30] start training ...
[2025-07-30 17:13:38] INFO 07-30 17:13:37 [__init__.py:243] Automatically detected platform cuda.
[2025-07-30 17:13:42] INFO 07-30 17:13:41 [__init__.py:31] Available plugins for group vllm.general_plugins:
[2025-07-30 17:13:42] INFO 07-30 17:13:41 [__init__.py:33] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver
[2025-07-30 17:13:42] INFO 07-30 17:13:41 [__init__.py:36] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
[2025-07-30 17:13:44] usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env} ...
[2025-07-30 17:13:44] vllm: error: unrecognized arguments: \
[2025-07-30 17:13:45] train failed: exit status 2

The log shows exactly which argument vllm rejected: "vllm: error: unrecognized arguments: \". The culprit is the trailing line-continuation backslash after --tensor-parallel-size 8. Because nothing follows it (the next line is blank, and the training launcher passes the command along verbatim), the backslash no longer escapes a newline and instead reaches vllm as a literal "\" argument, which argparse rejects with exit status 2. Remove the final backslash so the command ends after the last option, for example:

vllm serve /mnt/afs/share_models/git_models/Qwen/Qwen3-32B --port 6366 --tensor-parallel-size 8

If the problem persists, run vllm serve --help to list all supported options and check for spelling or formatting mistakes. The official vLLM CLI documentation covers the options in detail.
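You can see how a stray trailing backslash turns into an extra argument without starting vllm at all. This is a minimal sketch: printf prints each argument it receives on its own line, making the leftover "\" visible.

```shell
# Each argument printf receives is wrapped in <...> on its own line.
# The quoted '\' stands in for the stray backslash that reached vllm:
# it is passed as a real argument, not as a line continuation.
printf '<%s>\n' --tensor-parallel-size 8 '\'
```

Running this prints `<--tensor-parallel-size>`, `<8>`, and `<\>`; that last argument is what vllm's argument parser rejects.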

Do you need more detailed troubleshooting steps?
