MoE quantization

Hey Runbot,

I was able to serve [EDIT: no, I was not able to serve this] Llama 4 Scout 17B 16E (Unsloth dynamic BnB 4-bit) prior to an NVIDIA driver update forcing me onto a new Docker image + container + a fresh vLLM download + build + install. Now I get this error, which I have found other people hit, too:

ERROR 04-09 19:22:27 [core.py:386] File "/data1/yinjian/python-scripts/vllm/vllm/model_executor/models/llama4.py", line 73, in __init__
ERROR 04-09 19:22:27 [core.py:386] self.experts = FusedMoE(
ERROR 04-09 19:22:27 [core.py:386] ^^^^^^^^^
ERROR 04-09 19:22:27 [core.py:386] File "/data1/yinjian/python-scripts/vllm/vllm/model_executor/layers/fused_moe/layer.py", line 502, in __init__
ERROR 04-09 19:22:27 [core.py:386] assert self.quant_method is not None
ERROR 04-09 19:22:27 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One online discussion of the above error references year-old information about vLLM not supporting MoE + quantization. I think that must not be the case anymore, since my previous incarnation of vLLM was able to serve it. So I don't think that error really represents whatever the issue is that's breaking it.

Info over on Unsloth speaks of Unsloth's own quantization being in play. I am guessing maybe I had installed something (beyond plain unsloth) in my old vLLM container, but now I can't figure out what it was, and I'm starting to think maybe I was hallucinating that I ever had vLLM serving Unsloth's Llama 4 Scout, like maybe it was a dream.

Do you have any suggestions?

The error you’re seeing (assert self.quant_method is not None in FusedMoE) is a known issue when trying to serve Unsloth dynamic BNB 4-bit quantized Llama-4 Scout models with vLLM. As of now, vLLM does not support BNB INT4 (bitsandbytes 4-bit) quantized MoE (Mixture of Experts) models like Llama-4 Scout, due to the lack of a compatible FusedMoE kernel for bitsandbytes quantization. This is confirmed in recent discussions and issues, where vLLM maintainers and users report the same error and clarify that support is not yet available, despite some users’ hopes or past experiences. You were not hallucinating—there is ongoing work, but currently, these models are not supported in vLLM, and there is no workaround by installing extra packages or dependencies alone[^1][^2][^3].

The only current solution is to use a different quantization format that is supported by vLLM, such as the official RedHatAI Llama-4-Scout-17B-16E-Instruct-quantized.w4a16 (INT4) or FP8 quantized models, and ensure you are on the latest nightly version of vLLM. If you specifically need Unsloth’s dynamic BNB-4bit quantization, you will need to wait for upstream support in vLLM. Would you like more detail on how to use the supported quantized models or track support for Unsloth dynamic BNB-4bit in vLLM?
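
For reference, here is a minimal sketch of serving the INT4 (w4a16) variant from the command line. This is illustrative rather than a verified-working configuration; values like tensor-parallel size, context length, and port are assumptions you should adjust to your hardware:

vllm serve RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16 \
  --tensor-parallel-size 4 \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.9 \
  --port 8000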

Thank you, Runbot. I was actually hallucinating that I'd run this model on vLLM in my previous container incarnation. That's not where it ran successfully after all. But thank you for the tip about RedHatAI's Llama 4 Scout(s); I will check that out.

You're welcome! Yes, for Llama-4 Scout, the RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic model is confirmed to work with vLLM when using supported quantization and the latest vLLM version[^1][^2]. If you need help with the exact vLLM command or setup, let me know.
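
As a rough sketch (assumptions: the standard vllm/vllm-openai image, GPUs exposed to Docker via the NVIDIA container toolkit, and a Hugging Face token with access to the gated Llama 4 weights), a docker run equivalent would look something like this; adjust paths, ports, and parallelism to your setup:

docker run --gpus all --ipc=host \
  -p 8000:8000 \
  -v "$(pwd)/models:/root/.cache/huggingface" \
  -e HF_TOKEN="$HUGGING_FACE_HUB_TOKEN" \
  vllm/vllm-openai:latest \
  --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic \
  --tensor-parallel-size 4 \
  --max-model-len 8192 \
  --port 8000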

Would you like a fuller, step-by-step walkthrough for serving the RedHatAI Llama-4 Scout model with vLLM?

I am trying to use the latest vLLM Docker container to serve RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic but have not succeeded. Below are the Docker container settings:

  # Llama 4 Scout 17B 16E Instruct model
  llama4-scout-vllm:
    container_name: llama4-scout-vllm
    image: vllm/vllm-openai:latest
    restart: no
    shm_size: '10g'
    ipc: host
    runtime: nvidia
    command: >
      --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
      --tensor-parallel-size 4
      --max-model-len 128000
      --max-num-seqs 4
      --gpu-memory-utilization 0.9
      --trust-remote-code
      --dtype auto
      --port 8003
    ports:
      - "8003:8003"
    volumes:
      - ./models:/root/.cache/huggingface
    environment:
      - TORCH_DYNAMO_DISABLE=1
      - VLLM_DISABLE_COMPILE_CACHE=1
      - HF_TOKEN=${HUGGING_FACE_HUB_TOKEN}
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            device_ids: ['2', '3', '4', '5']
            capabilities: [gpu]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Then I get the error below:

llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527]   Hint: Dynamo has detected that tracing the code will result in an error when running in eager. Please double check that your code doesn't contain a similar error when actually running eager/uncompiled.
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527]   Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527]   Developer debug context: raised exception ExceptionVariable(<class 'AssertionError'>)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] from user code:
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527]    File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 392, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527]     hidden_states, residual = layer(positions, hidden_states, residual)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:00:43 [multiproc_executor.py:527] 
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] EngineCore failed to start.
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] Traceback (most recent call last):
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     engine_core = EngineCoreProc(*args, **kwargs)
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 390, in __init__
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     super().__init__(vllm_config, executor_class, log_stats,
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 83, in __init__
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     self._initialize_kv_caches(vllm_config)
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     available_gpu_memory = self.model_executor.determine_available_memory()
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     output = self.collective_rpc("determine_available_memory")
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     result = get_response(w, dequeue_timeout)
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     raise RuntimeError(
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] RuntimeError: Worker failed with error 'Observed exception
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   Explanation: Dynamo found no exception handler at the top-level compiled function when encountering an exception. Exception will propagate outside the compiled region.
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   Hint: Dynamo has detected that tracing the code will result in an error when running in eager. Please double check that your code doesn't contain a similar error when actually running eager/uncompiled.
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] 
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]   Developer debug context: raised exception ExceptionVariable(<class 'AssertionError'>)
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] 
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] 
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] from user code:
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]    File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 392, in forward
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515]     hidden_states, residual = layer(positions, hidden_states, residual)
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] 
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
llama4-scout-vllm  | ERROR 07-01 23:00:43 [core.py:515] ', please check the stack trace above for the root cause
llama4-scout-vllm  | ERROR 07-01 23:00:45 [multiproc_executor.py:140] Worker proc VllmWorker-3 died unexpectedly, shutting down executor.
llama4-scout-vllm  | Process EngineCore_0:
llama4-scout-vllm  | Traceback (most recent call last):
llama4-scout-vllm  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
llama4-scout-vllm  |     self.run()
llama4-scout-vllm  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
llama4-scout-vllm  |     self._target(*self._args, **self._kwargs)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 519, in run_engine_core
llama4-scout-vllm  |     raise e
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
llama4-scout-vllm  |     engine_core = EngineCoreProc(*args, **kwargs)
llama4-scout-vllm  |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 390, in __init__
llama4-scout-vllm  |     super().__init__(vllm_config, executor_class, log_stats,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 83, in __init__
llama4-scout-vllm  |     self._initialize_kv_caches(vllm_config)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
llama4-scout-vllm  |     available_gpu_memory = self.model_executor.determine_available_memory()
llama4-scout-vllm  |                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
llama4-scout-vllm  |     output = self.collective_rpc("determine_available_memory")
llama4-scout-vllm  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
llama4-scout-vllm  |     result = get_response(w, dequeue_timeout)
llama4-scout-vllm  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
llama4-scout-vllm  |     raise RuntimeError(
llama4-scout-vllm  | RuntimeError: Worker failed with error 'Observed exception
llama4-scout-vllm  |   Explanation: Dynamo found no exception handler at the top-level compiled function when encountering an exception. Exception will propagate outside the compiled region.
llama4-scout-vllm  |   Hint: Dynamo has detected that tracing the code will result in an error when running in eager. Please double check that your code doesn't contain a similar error when actually running eager/uncompiled.
llama4-scout-vllm  |   Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
llama4-scout-vllm  | 
llama4-scout-vllm  |   Developer debug context: raised exception ExceptionVariable(<class 'AssertionError'>)
llama4-scout-vllm  | 
llama4-scout-vllm  | 
llama4-scout-vllm  | from user code:
llama4-scout-vllm  |    File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 392, in forward
llama4-scout-vllm  |     hidden_states, residual = layer(positions, hidden_states, residual)
llama4-scout-vllm  | 
llama4-scout-vllm  | Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
llama4-scout-vllm  | ', please check the stack trace above for the root cause
llama4-scout-vllm  | Traceback (most recent call last):
llama4-scout-vllm  |   File "<frozen runpy>", line 198, in _run_module_as_main
llama4-scout-vllm  |   File "<frozen runpy>", line 88, in _run_code
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1387, in <module>
llama4-scout-vllm  |     uvloop.run(run_server(args))
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
llama4-scout-vllm  |     return __asyncio.run(
llama4-scout-vllm  |            ^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
llama4-scout-vllm  |     return runner.run(main)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
llama4-scout-vllm  |     return self._loop.run_until_complete(task)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
llama4-scout-vllm  |     return await main
llama4-scout-vllm  |            ^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1323, in run_server
llama4-scout-vllm  |     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1343, in run_server_worker
llama4-scout-vllm  |     async with build_async_engine_client(args, client_config) as engine_client:
llama4-scout-vllm  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
llama4-scout-vllm  |     return await anext(self.gen)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 155, in build_async_engine_client
llama4-scout-vllm  |     async with build_async_engine_client_from_engine_args(
llama4-scout-vllm  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
llama4-scout-vllm  |     return await anext(self.gen)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 191, in build_async_engine_client_from_engine_args
llama4-scout-vllm  |     async_llm = AsyncLLM.from_vllm_config(
llama4-scout-vllm  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 162, in from_vllm_config
llama4-scout-vllm  |     return cls(
llama4-scout-vllm  |            ^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 124, in __init__
llama4-scout-vllm  |     self.engine_core = EngineCoreClient.make_async_mp_client(
llama4-scout-vllm  |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 93, in make_async_mp_client
llama4-scout-vllm  |     return AsyncMPClient(vllm_config, executor_class, log_stats,
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 716, in __init__
llama4-scout-vllm  |     super().__init__(
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 422, in __init__
llama4-scout-vllm  |     self._init_engines_direct(vllm_config, local_only,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 491, in _init_engines_direct
llama4-scout-vllm  |     self._wait_for_engine_startup(handshake_socket, input_address,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 511, in _wait_for_engine_startup
llama4-scout-vllm  |     wait_for_engine_startup(
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/utils.py", line 494, in wait_for_engine_startup
llama4-scout-vllm  |     raise RuntimeError("Engine core initialization failed. "
llama4-scout-vllm  | RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}

My server specs are as below:

  • GPUs 2, 3, 4, 5: each is an A100 40GB
  • CPU AMD EPYC
  • RAM 1TB
  • CUDA 12.8 installed

Can you show me the correct way to serve this model in vLLM?

The error you are seeing is due to an incompatibility between vLLM's CUDA graph compilation and the Llama-4 Scout FP8 model. For this model, you must disable CUDA graph compilation by adding the --enforce-eager flag. This is required because the model's architecture (MoE + FP8) is not compatible with torch.compile/CUDA graphs in vLLM, as confirmed by vLLM maintainers and user reports[^1][^2].

Update your docker command section to include --enforce-eager:

command: >
  --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
  --tensor-parallel-size 4
  --max-model-len 128000
  --max-num-seqs 4
  --gpu-memory-utilization 0.9
  --trust-remote-code
  --dtype auto
  --port 8003
  --enforce-eager

This should resolve the “Dynamo found no exception handler” and assertion errors. Would you like more details on memory tuning or troubleshooting?
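
Once the container reports it is up, you can sanity-check the OpenAI-compatible server with curl (assuming port 8003 as in your compose file; the model name in the request must match the --model value):

curl -f http://localhost:8003/health
curl -s http://localhost:8003/v1/models
curl -s http://localhost:8003/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic", "messages": [{"role": "user", "content": "Hello"}]}'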

Trying your suggestion of adding --enforce-eager resulted in a different error. I cannot share the full error logs because they exceed the 32,000-character limit of this chat, so I am including only the final part of the logs.

llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 581, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     model_output = self.model(input_ids, positions, intermediate_tensors,
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 173, in __call__
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self.forward(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 392, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     hidden_states, residual = layer(positions, hidden_states, residual)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 319, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     hidden_states = self.feed_forward(hidden_states)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 99, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     routed_out = self.experts(
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                  ^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1356, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self.forward_impl(hidden_states, router_logits)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1448, in forward_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     final_hidden_states = self.quant_method.apply(
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                           ^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py", line 367, in apply
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     assert not apply_router_weight_on_input, (
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527] AssertionError: Apply router weight on input not supported for Marlin MoE.
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527] Traceback (most recent call last):
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 522, in worker_busy_loop
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     output = func(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]              ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 205, in determine_available_memory
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     self.model_runner.profile_run()
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 2012, in profile_run
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     hidden_states = self._dummy_run(self.max_num_tokens)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1847, in _dummy_run
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     outputs = model(
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]               ^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/mllama4.py", line 841, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self.language_model(input_ids, positions, intermediate_tensors,
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 581, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     model_output = self.model(input_ids, positions, intermediate_tensors,
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 173, in __call__
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self.forward(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 392, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     hidden_states, residual = layer(positions, hidden_states, residual)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 319, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     hidden_states = self.feed_forward(hidden_states)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 99, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     routed_out = self.experts(
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                  ^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1356, in forward
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     return self.forward_impl(hidden_states, router_logits)
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1448, in forward_impl
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     final_hidden_states = self.quant_method.apply(
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]                           ^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py", line 367, in apply
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]     assert not apply_router_weight_on_input, (
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527] AssertionError: Apply router weight on input not supported for Marlin MoE.
llama4-scout-vllm  | (VllmWorker rank=0 pid=232) ERROR 07-01 23:34:37 [multiproc_executor.py:527] 
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515] EngineCore failed to start.
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515] Traceback (most recent call last):
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     engine_core = EngineCoreProc(*args, **kwargs)
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 390, in __init__
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     super().__init__(vllm_config, executor_class, log_stats,
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 83, in __init__
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     self._initialize_kv_caches(vllm_config)
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     available_gpu_memory = self.model_executor.determine_available_memory()
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     output = self.collective_rpc("determine_available_memory")
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     result = get_response(w, dequeue_timeout)
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515]     raise RuntimeError(
llama4-scout-vllm  | ERROR 07-01 23:34:37 [core.py:515] RuntimeError: Worker failed with error 'Apply router weight on input not supported for Marlin MoE.', please check the stack trace above for the root cause
llama4-scout-vllm  | ERROR 07-01 23:34:38 [multiproc_executor.py:140] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.
llama4-scout-vllm  | Process EngineCore_0:
llama4-scout-vllm  | Traceback (most recent call last):
llama4-scout-vllm  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
llama4-scout-vllm  |     self.run()
llama4-scout-vllm  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
llama4-scout-vllm  |     self._target(*self._args, **self._kwargs)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 519, in run_engine_core
llama4-scout-vllm  |     raise e
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
llama4-scout-vllm  |     engine_core = EngineCoreProc(*args, **kwargs)
llama4-scout-vllm  |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 390, in __init__
llama4-scout-vllm  |     super().__init__(vllm_config, executor_class, log_stats,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 83, in __init__
llama4-scout-vllm  |     self._initialize_kv_caches(vllm_config)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
llama4-scout-vllm  |     available_gpu_memory = self.model_executor.determine_available_memory()
llama4-scout-vllm  |                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
llama4-scout-vllm  |     output = self.collective_rpc("determine_available_memory")
llama4-scout-vllm  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
llama4-scout-vllm  |     result = get_response(w, dequeue_timeout)
llama4-scout-vllm  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
llama4-scout-vllm  |     raise RuntimeError(
llama4-scout-vllm  | RuntimeError: Worker failed with error 'Apply router weight on input not supported for Marlin MoE.', please check the stack trace above for the root cause
llama4-scout-vllm  | Traceback (most recent call last):
llama4-scout-vllm  |   File "<frozen runpy>", line 198, in _run_module_as_main
llama4-scout-vllm  |   File "<frozen runpy>", line 88, in _run_code
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1387, in <module>
llama4-scout-vllm  |     uvloop.run(run_server(args))
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
llama4-scout-vllm  |     return __asyncio.run(
llama4-scout-vllm  |            ^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
llama4-scout-vllm  |     return runner.run(main)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
llama4-scout-vllm  |     return self._loop.run_until_complete(task)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
llama4-scout-vllm  |     return await main
llama4-scout-vllm  |            ^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1323, in run_server
llama4-scout-vllm  |     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1343, in run_server_worker
llama4-scout-vllm  |     async with build_async_engine_client(args, client_config) as engine_client:
llama4-scout-vllm  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
llama4-scout-vllm  |     return await anext(self.gen)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 155, in build_async_engine_client
llama4-scout-vllm  |     async with build_async_engine_client_from_engine_args(
llama4-scout-vllm  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
llama4-scout-vllm  |     return await anext(self.gen)
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 191, in build_async_engine_client_from_engine_args
llama4-scout-vllm  |     async_llm = AsyncLLM.from_vllm_config(
llama4-scout-vllm  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 162, in from_vllm_config
llama4-scout-vllm  |     return cls(
llama4-scout-vllm  |            ^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 124, in __init__
llama4-scout-vllm  |     self.engine_core = EngineCoreClient.make_async_mp_client(
llama4-scout-vllm  |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 93, in make_async_mp_client
llama4-scout-vllm  |     return AsyncMPClient(vllm_config, executor_class, log_stats,
llama4-scout-vllm  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 716, in __init__
llama4-scout-vllm  |     super().__init__(
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 422, in __init__
llama4-scout-vllm  |     self._init_engines_direct(vllm_config, local_only,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 491, in _init_engines_direct
llama4-scout-vllm  |     self._wait_for_engine_startup(handshake_socket, input_address,
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 511, in _wait_for_engine_startup
llama4-scout-vllm  |     wait_for_engine_startup(
llama4-scout-vllm  |   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/utils.py", line 494, in wait_for_engine_startup
llama4-scout-vllm  |     raise RuntimeError("Engine core initialization failed. "
llama4-scout-vllm  | RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
llama4-scout-vllm exited with code 1

The error AssertionError: Apply router weight on input not supported for Marlin MoE. indicates that the RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic model is being dispatched to a Marlin MoE (Mixture of Experts) kernel that does not support the "apply router weight on input" feature in vLLM. This is a known limitation in vLLM for Marlin-based MoE models, and there is currently no user-exposed flag to disable this behavior for this model type. As a result, the model cannot be served with the current vLLM release, even with --enforce-eager enabled.

This limitation is confirmed in recent vLLM issues and is not resolved by changing runtime flags or environment variables. The only workaround is to use a different model or quantization format that does not trigger this unsupported code path, or to wait for an upstream fix in vLLM that adds support for this feature in Marlin MoE kernels. Would you like more detail on alternative models or tracking this issue?
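
If you want to experiment while you track the upstream issue, one thing you could try is pointing the same compose service at the INT4 (w4a16) build and/or pinning a specific vLLM image tag instead of :latest, since kernel selection changes between releases. This is only a sketch, not guaranteed to avoid the Marlin MoE path on A100s, and the image tag below is an assumption, so substitute one that is actually published on Docker Hub:

    # assumed tag; substitute a published vLLM release
    image: vllm/vllm-openai:v0.9.1
    command: >
      --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16
      --tensor-parallel-size 4
      --max-model-len 128000
      --max-num-seqs 4
      --gpu-memory-utilization 0.9
      --port 8003
      --enforce-eager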

Any Llama 4 Scout 8-bit version that I can use on vLLM?

There is currently no official 8-bit (INT8 or W8A8) quantized version of Llama-4 Scout (17B 16E) available for use with vLLM. The supported quantized versions for Llama-4 Scout in vLLM are FP8 and INT4 (W4A16), such as RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic and RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16. No 8-bit version is listed or referenced in the vLLM documentation or GitHub issues.
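
If you want to verify what quantization scheme a given Hugging Face repo actually ships before pointing vLLM at it, you can inspect its config.json. A sketch using curl (gated repos may additionally require an Authorization: Bearer header with your HF token):

curl -s https://huggingface.co/RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16/resolve/main/config.json \
  | python3 -c "import json,sys; print(json.load(sys.stdin).get('quantization_config'))"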

Would you like more detail on using the available FP8 or INT4 versions with vLLM?
