RTX PRO 6000 fails to serve Qwen3-32B-FP8

Environment:
vLLM 0.9.2.dev (built from source), PyTorch 2.7.1, CUDA 12.8. Machine: HP Z6 G5, 192 GB RAM, GPU: RTX PRO 6000. OS: Ubuntu 24.04.
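Before digging into vLLM itself, it helps to confirm which architecture the GPU reports, since the error below is architecture-specific. The helper sketched here is my own, not part of vLLM; Blackwell workstation cards such as the RTX PRO 6000 report CUDA compute capability 12.0, Hopper reports 9.0, and Ada Lovelace 8.9. On a live machine you would feed it `torch.cuda.get_device_capability()`.

```python
# Hypothetical helper (not a vLLM API): map a CUDA compute-capability
# tuple to an architecture name. With PyTorch installed, call it as
#   arch_name(torch.cuda.get_device_capability())
def arch_name(capability: tuple) -> str:
    major, minor = capability
    if major >= 10:                  # Blackwell (sm_100 / sm_120)
        return "blackwell"
    if (major, minor) == (9, 0):     # Hopper (sm_90)
        return "hopper"
    if (major, minor) == (8, 9):     # Ada Lovelace (sm_89)
        return "ada"
    return "other (sm_%d%d)" % (major, minor)

print(arch_name((12, 0)))  # what an RTX PRO 6000 reports
```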

Launch command and the resulting error:
VLLM_ATTENTION_BACKEND=FLASHINFER vllm serve ~/models/Qwen3-32B-FP8 \
    --served-model-name Qwen3-32B \
    --api-key htxt-ai \
    --seed 3407 \
    --disable-log-requests \
    --gpu-memory-utilization 0.90 \
    --host 0.0.0.0 --port 6006 \
    --max-model-len 64000 \
    --dtype bfloat16 \
    --trust-remote-code \
    --max-num-seqs 32 \
    --enforce-eager
INFO 07-04 23:35:03 [__init__.py:244] Automatically detected platform cuda.
INFO 07-04 23:35:11 [api_server.py:1388] vLLM API server version 0.1.dev7385+ge30f82c.d20250704
INFO 07-04 23:35:11 [cli_args.py:314] non-default args: {'host': '0.0.0.0', 'port': 6006, 'api_key': 'htxt-ai', 'model': '/home/blackfog/models/Qwen3-32B-FP8', 'trust_remote_code': True, 'dtype': 'bfloat16', 'seed': 3407, 'max_model_len': 64000, 'enforce_eager': True, 'served_model_name': ['Qwen3-32B'], 'max_num_seqs': 32, 'disable_log_requests': True}
INFO 07-04 23:35:17 [config.py:853] This model supports multiple tasks: {'embed', 'generate', 'reward', 'classify', 'score'}. Defaulting to 'generate'.
INFO 07-04 23:35:17 [config.py:1467] Using max model len 64000
INFO 07-04 23:35:17 [config.py:2267] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 07-04 23:35:17 [cuda.py:102] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 07-04 23:35:20 [__init__.py:244] Automatically detected platform cuda.
INFO 07-04 23:35:22 [core.py:459] Waiting for init message from front-end.
INFO 07-04 23:35:22 [core.py:69] Initializing a V1 LLM engine (v0.1.dev7385+ge30f82c.d20250704) with config: model='/home/blackfog/models/Qwen3-32B-FP8', speculative_config=None, tokenizer='/home/blackfog/models/Qwen3-32B-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=64000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=3407, served_model_name=Qwen3-32B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":0,"local_cache_dir":null}
WARNING 07-04 23:35:23 [utils.py:2782] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7410faf20b30>
INFO 07-04 23:35:24 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 07-04 23:35:24 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling.
INFO 07-04 23:35:24 [gpu_model_runner.py:1751] Starting to load model /home/blackfog/models/Qwen3-32B-FP8…
INFO 07-04 23:35:24 [gpu_model_runner.py:1756] Loading model from scratch…
INFO 07-04 23:35:24 [cuda.py:238] Using FlashInfer backend on V1 engine.
Loading safetensors checkpoint shards: 0% Completed | 0/7 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 14% Completed | 1/7 [00:01<00:11, 1.85s/it]
Loading safetensors checkpoint shards: 29% Completed | 2/7 [00:03<00:09, 1.88s/it]
Loading safetensors checkpoint shards: 43% Completed | 3/7 [00:05<00:07, 1.91s/it]
Loading safetensors checkpoint shards: 57% Completed | 4/7 [00:07<00:05, 1.92s/it]
Loading safetensors checkpoint shards: 71% Completed | 5/7 [00:09<00:03, 1.93s/it]
Loading safetensors checkpoint shards: 86% Completed | 6/7 [00:11<00:01, 1.93s/it]
Loading safetensors checkpoint shards: 100% Completed | 7/7 [00:13<00:00, 1.92s/it]
Loading safetensors checkpoint shards: 100% Completed | 7/7 [00:13<00:00, 1.92s/it]
INFO 07-04 23:35:38 [default_loader.py:272] Loading weights took 13.47 seconds
INFO 07-04 23:35:38 [gpu_model_runner.py:1782] Model loading took 32.0633 GiB and 13.745091 seconds
ERROR 07-04 23:35:39 [core.py:519] EngineCore failed to start.
ERROR 07-04 23:35:39 [core.py:519] Traceback (most recent call last):
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/engine/core.py”, line 510, in run_engine_core
ERROR 07-04 23:35:39 [core.py:519] engine_core = EngineCoreProc(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 394, in __init__
ERROR 07-04 23:35:39 [core.py:519] super().__init__(vllm_config, executor_class, log_stats,
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 82, in __init__
ERROR 07-04 23:35:39 [core.py:519] self._initialize_kv_caches(vllm_config)
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/engine/core.py”, line 142, in _initialize_kv_caches
ERROR 07-04 23:35:39 [core.py:519] available_gpu_memory = self.model_executor.determine_available_memory()
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/executor/abstract.py”, line 76, in determine_available_memory
ERROR 07-04 23:35:39 [core.py:519] output = self.collective_rpc(“determine_available_memory”)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py”, line 57, in collective_rpc
ERROR 07-04 23:35:39 [core.py:519] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/utils.py”, line 2716, in run_method
ERROR 07-04 23:35:39 [core.py:519] return func(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/utils/_contextlib.py”, line 116, in decorate_context
ERROR 07-04 23:35:39 [core.py:519] return func(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py”, line 210, in determine_available_memory
ERROR 07-04 23:35:39 [core.py:519] self.model_runner.profile_run()
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py”, line 2257, in profile_run
ERROR 07-04 23:35:39 [core.py:519] = self._dummy_run(self.max_num_tokens, is_profile=True)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/utils/_contextlib.py”, line 116, in decorate_context
ERROR 07-04 23:35:39 [core.py:519] return func(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py”, line 2038, in _dummy_run
ERROR 07-04 23:35:39 [core.py:519] outputs = model(
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1751, in _wrapped_call_impl
ERROR 07-04 23:35:39 [core.py:519] return self._call_impl(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1762, in _call_impl
ERROR 07-04 23:35:39 [core.py:519] return forward_call(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py”, line 303, in forward
ERROR 07-04 23:35:39 [core.py:519] hidden_states = self.model(input_ids, positions, intermediate_tensors,
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 173, in __call__
ERROR 07-04 23:35:39 [core.py:519] return self.forward(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py”, line 354, in forward
ERROR 07-04 23:35:39 [core.py:519] hidden_states, residual = layer(
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1751, in _wrapped_call_impl
ERROR 07-04 23:35:39 [core.py:519] return self._call_impl(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1762, in _call_impl
ERROR 07-04 23:35:39 [core.py:519] return forward_call(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py”, line 216, in forward
ERROR 07-04 23:35:39 [core.py:519] hidden_states = self.self_attn(
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1751, in _wrapped_call_impl
ERROR 07-04 23:35:39 [core.py:519] return self._call_impl(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1762, in _call_impl
ERROR 07-04 23:35:39 [core.py:519] return forward_call(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py”, line 135, in forward
ERROR 07-04 23:35:39 [core.py:519] qkv, _ = self.qkv_proj(hidden_states)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 1751, in _wrapped_call_impl
ERROR 07-04 23:35:39 [core.py:519] return self._call_impl(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 07-04 23:35:39 [core.py:519] return forward_call(*args, **kwargs)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py”, line 487, in forward
ERROR 07-04 23:35:39 [core.py:519] output_parallel = self.quant_method.apply(self, input_, bias)
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/fp8.py”, line 404, in apply
ERROR 07-04 23:35:39 [core.py:519] return torch.ops.vllm.apply_w8a8_block_fp8_linear(
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/_ops.py", line 1158, in __call__
ERROR 07-04 23:35:39 [core.py:519] return self._op(*args, **(kwargs or {}))
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/fp8_utils.py”, line 176, in apply_w8a8_block_fp8_linear
ERROR 07-04 23:35:39 [core.py:519] output = w8a8_blockscale_func(q_input, weight, x_scale, weight_scale,
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/fp8_utils.py”, line 41, in cutlass_scaled_mm
ERROR 07-04 23:35:39 [core.py:519] return ops.cutlass_scaled_mm(A,
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] File “/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/vllm/_custom_ops.py”, line 713, in cutlass_scaled_mm
ERROR 07-04 23:35:39 [core.py:519] torch.ops._C.cutlass_scaled_mm(out, a, b, scale_a, scale_b, bias)
ERROR 07-04 23:35:39 [core.py:519] File "/home/blackfog/miniconda3/envs/vllm_dev/lib/python3.12/site-packages/torch/_ops.py", line 1158, in __call__
ERROR 07-04 23:35:39 [core.py:519] return self._op(*args, **(kwargs or {}))
ERROR 07-04 23:35:39 [core.py:519] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-04 23:35:39 [core.py:519] RuntimeError: Currently, block scaled fp8 gemm is not implemented for Blackwell
Process EngineCore_0:
Traceback (most recent call last):
[... stack identical to the ERROR traceback above, omitted ...]
RuntimeError: Currently, block scaled fp8 gemm is not implemented for Blackwell

The error "RuntimeError: Currently, block scaled fp8 gemm is not implemented for Blackwell" means that this vLLM build has no block-scaled FP8 GEMM kernel for the Blackwell architecture, which the RTX PRO 6000 uses, so a block-quantized FP8 model cannot be served on it. Qwen3-32B-FP8 uses block-wise FP8 weight quantization, so every linear layer dispatches through apply_w8a8_block_fp8_linear to the CUTLASS scaled-mm kernel; that kernel raises the error during the engine's memory-profiling pass, right after the weights finish loading, which is why the server never starts. Multiple vLLM GitHub issues confirm that block-scale FP8 support for Blackwell had not yet landed at the time, and the "not implemented for Blackwell" message is raised explicitly by vLLM itself.
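You can tell whether a checkpoint will take this failing code path from its config.json: block-quantized FP8 checkpoints like Qwen3-32B-FP8 carry a quantization_config with quant_method "fp8" and a weight_block_size entry (typically [128, 128]). A minimal stdlib-only sketch, using a synthetic config for illustration (on disk you would json.load the real file):

```python
import json

def uses_block_fp8(config: dict) -> bool:
    """True if the checkpoint declares block-scaled FP8 weights."""
    q = config.get("quantization_config") or {}
    return q.get("quant_method") == "fp8" and "weight_block_size" in q

# Synthetic stand-in for ~/models/Qwen3-32B-FP8/config.json;
# the fields shown are typical of block-quantized FP8 checkpoints.
cfg = json.loads("""{
  "model_type": "qwen3",
  "quantization_config": {
    "quant_method": "fp8",
    "fmt": "e4m3",
    "weight_block_size": [128, 128]
  }
}""")

print(uses_block_fp8(cfg))  # block-scaled FP8 -> hits the missing kernel
```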

Workarounds: switch to a quantization scheme that already works on Blackwell, such as INT8 (W8A8), AWQ, or GPTQ, or wait for a vLLM release that implements the Blackwell FP8 kernels. Several users report that AWQ/GPTQ variants of the model run fine on Blackwell. The official vLLM docs at this point list FP8 (W8A8) as supported on Ada and Hopper only; Blackwell is not in the support matrix.
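As a rough decision aid, the compatibility picture from this thread and the docs can be encoded like this. This table is my own summary, not a vLLM API, and it reflects the state at the time of writing; support will change as kernels land upstream.

```python
# Hypothetical summary (not a vLLM API) of which quantization schemes
# were reported working per GPU architecture in this thread.
SUPPORTED = {
    "hopper":    {"fp8-block", "fp8", "int8", "awq", "gptq"},
    "ada":       {"fp8", "int8", "awq", "gptq"},
    "blackwell": {"int8", "awq", "gptq"},  # fp8-block kernel not implemented
}

def can_serve(arch: str, quant: str) -> bool:
    """True if the scheme was reported working on that architecture."""
    return quant in SUPPORTED.get(arch, set())

print(can_serve("blackwell", "fp8-block"))  # the failing combination here
print(can_serve("blackwell", "awq"))        # a reported-working fallback
```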

Do you need more detailed troubleshooting steps, or notes on the alternative quantization options?
