Hit a problem deploying DeepSeek-V3.2 with vllm-ascend.

Error message: "As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one."

Hardware environment: 800I A2 * 2

Ascend 910B4, 1*16

Deployed DeepSeek-V3.2 following the official vllm-ascend documentation.

The model starts and serves, but a curl request immediately returns an error!
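The failing request is an ordinary chat-completion call, along these lines (a sketch: port 8077 and the served model name deepseek_v3_2 are taken from the serve command later in this thread; no running server is assumed, so a connection failure is tolerated):

```shell
# Sketch of the failing request. Port 8077 and model name deepseek_v3_2
# come from the serve command later in this thread; connection errors are
# tolerated since no server is assumed to be running here.
payload='{"model": "deepseek_v3_2", "messages": [{"role": "user", "content": "hello"}]}'
curl -s http://localhost:8077/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || true
```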

Per transformers v4.44 and later, vLLM no longer falls back to a default chat template when the model's tokenizer does not define one: you must provide a chat template yourself, otherwise chat requests fail. Specify a template file with the --chat-template flag, or adapt one of the templates in vLLM's official examples directory. Otherwise, curl requests fail immediately with "default chat template is no longer allowed" (see the official documentation for details).
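To confirm whether the tokenizer actually ships its own template, one can look for a "chat_template" key in tokenizer_config.json. A sketch, using the model path from the serve commands in this thread (some models store the template in a separate file instead, so treat a miss as a hint, not proof):

```shell
# Check for a "chat_template" key in tokenizer_config.json (sketch).
# MODEL_PATH is the model directory from the serve commands in this thread.
MODEL_PATH=/root/.cache/modelscope/hub/models/vllm-ascend/DeepSeek-V3.2-W8A8
if grep -qs '"chat_template"' "$MODEL_PATH/tokenizer_config.json"; then
    echo "tokenizer defines its own chat template"
else
    echo "no chat_template key found; pass --chat-template explicitly"
fi
```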

You can start the service like this (for example):

vllm serve <model_path> --chat-template <path_to_template.jinja>
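If the model directory lacks a template, a minimal one can be written to a file and passed via --chat-template. The template below is a generic sketch, not DeepSeek's real prompt format; adapt it before serving real traffic:

```shell
# Write a minimal, generic chat template (sketch -- NOT DeepSeek's actual
# prompt format; adapt it to the model before serving real traffic).
cat > /tmp/chat_template.jinja <<'EOF'
{%- for message in messages -%}
{{ message['role'] }}: {{ message['content'] }}
{% endfor -%}
{%- if add_generation_prompt -%}assistant:{%- endif -%}
EOF
echo "template written to /tmp/chat_template.jinja"
```

Then launch with `vllm serve <model_path> --chat-template /tmp/chat_template.jinja`.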

For details on writing and using chat templates, see the official documentation.

Would you like more detailed steps or a template example?


Hint: Mention @RunLLM in the post for followups.

Hi, why doesn't your official documentation mention this?

Node 0 (master) startup script:

# Obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip on the current node
nic_name="xxx"
local_ip="xxx"

# The value of node0_ip must match the local_ip set on node0 (the master node)
node0_ip="xxxx"

export HCCL_OP_EXPANSION_MODE="AIV"
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_ENABLE_MLAPO=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export VLLM_ASCEND_ENABLE_FLASHCOMM1=0
export HCCL_CONNECT_TIMEOUT=120
export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/DeepSeek-V3.2-W8A8 \
  --host 0.0.0.0 \
  --port 8077 \
  --data-parallel-size 2 \
  --data-parallel-size-local 1 \
  --data-parallel-address $node0_ip \
  --data-parallel-rpc-port 13389 \
  --tensor-parallel-size 8 \
  --quantization ascend \
  --seed 1024 \
  --served-model-name deepseek_v3_2 \
  --enable-expert-parallel \
  --max-num-seqs 16 \
  --max-model-len 8192 \
  --max-num-batched-tokens 4096 \
  --trust-remote-code \
  --no-enable-prefix-caching
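Since GLOO_SOCKET_IFNAME, TP_SOCKET_IFNAME, and HCCL_SOCKET_IFNAME all bind to nic_name, it is worth sanity-checking that the interface really carries local_ip before launching. A sketch with placeholder values standing in for the xxx above (run it separately so it does not clobber the script's variables):

```shell
# Sanity check (sketch): confirm that the chosen NIC carries the chosen IP.
# "nic_xxx" and "10.0.0.1" are placeholders, like the xxx values above.
nic_name="nic_xxx"
local_ip="10.0.0.1"
if ip -o -4 addr show "$nic_name" 2>/dev/null | grep -q "$local_ip"; then
    echo "ok: $local_ip is bound to $nic_name"
else
    echo "warning: $local_ip not found on $nic_name"
fi
```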

Node 1 (worker) startup script:

# Obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip on the current node
nic_name="xxx"
local_ip="xxx"

# The value of node0_ip must match the local_ip set on node0 (the master node)
node0_ip="xxxx"

export HCCL_OP_EXPANSION_MODE="AIV"
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_ENABLE_MLAPO=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export VLLM_ASCEND_ENABLE_FLASHCOMM1=0
export HCCL_CONNECT_TIMEOUT=120
export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/DeepSeek-V3.2-W8A8 \
  --host 0.0.0.0 \
  --port 8077 \
  --headless \
  --data-parallel-size 2 \
  --data-parallel-size-local 1 \
  --data-parallel-start-rank 1 \
  --data-parallel-address $node0_ip \
  --data-parallel-rpc-port 13389 \
  --tensor-parallel-size 8 \
  --quantization ascend \
  --seed 1024 \
  --served-model-name deepseek_v3_2 \
  --enable-expert-parallel \
  --max-num-seqs 16 \
  --max-model-len 8192 \
  --max-num-batched-tokens 4096 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --gpu-memory-utilization 0.92 \
  --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY", "cudagraph_capture_sizes": [3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48]}' \
  --speculative-config '{"num_speculative_tokens": 2, "method": "deepseek_mtp"}'

@RunLLM