I rented a 910B instance on AutoDL, with CANN 8.1.RC1, torch 2.5.1, and torch-npu 2.5.1 installed.
When installing vllm 0.9.1, pip prompts to uninstall torch 2.5.1 and install torch 2.7.x. After that, installing vllm-ascend 0.9.1rc1 fails with an error that torch 2.5.1 cannot be found.
How should this problem be handled? Thanks.
It seems likely that vLLM's official releases pin a specific PyTorch version: installing vLLM 0.9.1 will uninstall your existing torch 2.5.1 and install torch 2.7.x, which is incompatible with your current CANN and torch-npu setup. This kind of strict version pinning is a known issue for vLLM, especially for hardware backends like Ascend, where the required PyTorch version may lag behind mainline CUDA support. The vllm-ascend plugin is maintained outside the main vLLM repo, and version mismatches between torch and torch-npu are not directly addressed in the official vLLM documentation or issues as of now. You may need to build vLLM from source against your existing torch version, or use the vllm-ascend plugin and align all dependencies manually; there is no official, streamlined solution documented for this scenario yet. See the vLLM hardware backend deprecation policy and the vllm-ascend PR for context.
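A rough sketch of the source-build route, assuming vLLM's build honors the `VLLM_TARGET_DEVICE=empty` switch to produce a device-neutral install (the branch tag and flags here are illustrative, not an official recipe):

```shell
# Build vLLM from source without letting pip replace the existing torch.
git clone --branch v0.9.1 https://github.com/vllm-project/vllm.git
cd vllm
# VLLM_TARGET_DEVICE=empty builds vLLM without a CUDA backend so a
# hardware plugin such as vllm-ascend can supply it; --no-build-isolation
# keeps the already-installed torch 2.5.1 visible during the build.
VLLM_TARGET_DEVICE=empty pip install -e . --no-build-isolation
```

If the build still tries to resolve torch 2.7.x from vLLM's requirements files, the dependency pins would need to be edited by hand before installing.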
Would you like more detail on how to attempt a manual build or workaround for your environment?
For a quick fix, you can install vLLM first, then install vllm-ascend; vllm-ascend will downgrade torch to 2.5.1.
For the long-term roadmap, vllm-ascend will support torch 2.7 in the next 1–2 months.
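The quick fix above comes down to install order. A minimal sketch, with the package pins taken from this thread (your environment may need an extra index URL for Ascend wheels):

```shell
# Install vLLM first; pip may temporarily pull in torch 2.7.x at this step.
pip install vllm==0.9.1
# Installing the Ascend plugin afterwards downgrades torch back to 2.5.1
# to match torch-npu 2.5.1.
pip install vllm-ascend==0.9.1rc1
# Confirm the final torch version is the one CANN/torch-npu expect.
python -c "import torch; print(torch.__version__)"
```

The reverse order fails, as described above, because vLLM's resolver removes torch 2.5.1 before vllm-ascend has a chance to pin it.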
My understanding is that your suggested quick fix (install vLLM first, then vllm-ascend, which downgrades torch to 2.5.1) matches the current workaround for version conflicts between vLLM and vllm-ascend on Ascend NPUs. There is no official documentation confirming this as a recommended approach, but user discussions in the vllm-ascend PR and related issues indicate that vllm-ascend is expected to support torch 2.7 in the near future, which should resolve these conflicts in future releases. See the vllm-ascend PR and the Ascend NPU RFC.
Would you like more detail on how to implement this workaround or track the roadmap for torch 2.7 support?
Currently I use vllm and vllm-ascend 0.7.3 with CANN 8.1.RC1, torch 2.5.1, and torch_npu 2.5.1.dev20250320, and everything works fine.
I am not sure whether 0.9.1rc1 is a stable version, or whether deploying 0.9.1rc1 requires updating torch and torch_npu to 2.7.0.
It seems likely that vllm-ascend 0.9.1rc1 is a release candidate rather than a fully stable release. According to the discussion above, vllm-ascend 0.9.1rc1 currently requires torch and torch_npu 2.5.1, not 2.7.0, and support for torch 2.7.x is planned for a release in the next 1–2 months. There is no official documentation stating that you must upgrade torch/torch_npu to 2.7.0 for 0.9.1rc1; your current versions should keep working for now. See the vllm-ascend PR.
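As a sanity check before and after any upgrade, a small stdlib-only sketch can verify that the installed torch/torch_npu still match the 2.5.1 pin (the `pinned_to` helper is hypothetical, written for this thread; package names are as they appear on pip):

```python
from importlib import metadata

def pinned_to(version: str, pin: str) -> bool:
    """Return True if an installed version string matches the pin prefix,
    e.g. '2.5.1.dev20250320' or '2.5.1+cpu' both match pin '2.5.1'."""
    return (version == pin
            or version.startswith(pin + ".")
            or version.startswith(pin + "+"))

def check(package: str, pin: str) -> str:
    """Report whether `package` is installed at a version matching `pin`."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"{package}: not installed"
    status = "OK" if pinned_to(installed, pin) else "MISMATCH"
    return f"{package} {installed}: {status} (expected {pin})"

# Versions reported as working in this thread.
for pkg, pin in [("torch", "2.5.1"), ("torch-npu", "2.5.1")]:
    print(check(pkg, pin))
```

Running this after installing vllm-ascend should show both packages at 2.5.1; a MISMATCH on torch (e.g. 2.7.0) would mean vLLM's resolver replaced it again.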
Would you like more detail on version compatibility or upgrade recommendations?