Does anyone know whether I can run the Alibaba Wan2.2 model on a Mac with M3/M4/M5?

Hi there. I want to know whether I can run vLLM in Docker to serve the Wan2.2 text-to-video model.
I'm considering buying a MacBook Pro with an M4 Pro or M5.
Any suggestions are appreciated :slightly_smiling_face:

It seems likely that vLLM does not currently support running on a MacBook Pro with M4/M5 (Apple Silicon): both the official documentation and user reports indicate that vLLM is not supported on Apple Silicon/macOS, even inside Docker, since Docker on macOS runs containers in a Linux VM with no passthrough to the Apple GPU. The data sources make no mention of the Wan2.2 text-to-video model specifically, and vLLM's multimodal support focuses on certain vision and video-input models rather than diffusion-based video generators, with no support for Apple GPUs/CPUs. In short, running vLLM in Docker on a MacBook Pro (M4/M5) is not supported at this time; you would likely hit compatibility issues and lose hardware acceleration for LLM inference and serving, and the vLLM documentation has no explicit support or example for Wan2.2.
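
If the goal is simply to run Wan2.2 locally on Apple Silicon rather than to serve it with vLLM specifically, a more realistic path today is Hugging Face Diffusers with PyTorch's MPS backend. The sketch below is an assumption-laden illustration, not a confirmed recipe: it assumes the Diffusers `WanPipeline` integration covers Wan2.2, assumes the `Wan-AI/Wan2.2-T2V-A14B-Diffusers` checkpoint name (verify it on the Hugging Face Hub), and assumes your Mac has enough unified memory for the model.

```python
# Hedged sketch: Wan2.2 text-to-video on Apple Silicon via Diffusers + MPS,
# NOT via vLLM (which does not target Apple GPUs). The model ID below is an
# assumption -- check the Hugging Face Hub before relying on it.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# PyTorch exposes the Apple GPU through the "mps" backend; fall back to CPU otherwise.
device = "mps" if torch.backends.mps.is_available() else "cpu"

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,          # bf16 reduces memory; needs a recent PyTorch/macOS
)
pipe.to(device)

result = pipe(
    prompt="A cat surfing a wave at sunset, cinematic lighting",
    num_frames=33,
    num_inference_steps=30,
)
export_to_video(result.frames[0], "wan22_demo.mp4", fps=16)
```

Even then, the A14B variant is large (a mixture-of-experts model with roughly 27B total parameters), so expect it to be slow and memory-hungry on a laptop; the smaller Wan2.2 TI2V-5B variant, if the same Diffusers integration covers it, would be a safer first test.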

Would you like more detail on supported hardware or alternative deployment options for text-to-video models?

Hint: Mention @RunLLM in the post for follow-ups.

@RunLLM Yes, please give some details.