Hard prereq on Torch 2.9.0

Hello,

I have a custom image for model training on vast.ai, including for the RTX 5090 price point. I wanted to add vLLM for inference, for both dataset prep and benchmarking. I built the vLLM wheel from source, but it won’t install properly because my image has Torch 2.9.1 and vLLM hard-requires 2.9.0.

Is it safe to work around this requirement, whether via --no-deps, wheel-metadata surgery, or the harder option of editing the requirements files and rebuilding from source? Or is the 2.9.0 pin actually required for things to work?

It is not safe to bypass the torch==2.9.0 requirement for vLLM by simply using --no-deps, editing wheel metadata, or changing requirements files. vLLM compiles custom CUDA/C++ kernels that are tightly coupled to the specific PyTorch version it was built against; even patch-level differences (e.g., 2.9.0 vs 2.9.1) can introduce ABI incompatibilities and runtime errors. The vLLM team pins the torch version for this reason, and relaxing this constraint is not supported or recommended unless upstream PyTorch guarantees ABI compatibility, which is not currently the case. See the discussion on the vLLM issue tracker and the installation docs.
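
For context, the pin you'd be touching with "wheel metadata surgery" lives in the wheel's METADATA file. A quick way to see it (the wheel filename and exact pin shown here are illustrative, not your exact build):

```bash
# Print the torch dependency pin recorded in a built vLLM wheel.
# Assumes a single vllm-*.whl in the current directory.
unzip -p vllm-*.whl '*.dist-info/METADATA' | grep -i '^Requires-Dist: torch'
# Expected output, something like:
#   Requires-Dist: torch==2.9.0
```

Editing that line would let pip install the wheel, but it does nothing to make the compiled kernels inside the wheel compatible with Torch 2.9.1, which is why the edit is unsafe rather than merely unsupported.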

If you need to use a custom or nightly PyTorch build, the only supported approach is to build vLLM from source against that exact PyTorch version, following the use_existing_torch.py workflow. This ensures the compiled vLLM kernels match your installed torch version.
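
Roughly, that workflow looks like the sketch below. Treat it as a sketch rather than a definitive recipe: the requirements file layout (e.g., requirements/build.txt vs. an older requirements-build.txt) varies between vLLM versions, so check the installation docs for the release you're building.

```bash
# Keep your desired torch (2.9.1 here) installed in the environment, then:
git clone https://github.com/vllm-project/vllm.git
cd vllm

# Rewrite vLLM's requirements files to accept the torch already installed,
# instead of the pinned release version.
python use_existing_torch.py

# Install build dependencies, then compile vLLM against the
# in-environment torch (no build isolation, so it sees your torch).
pip install -r requirements/build.txt
pip install --no-build-isolation -e .
```

Because the kernels are compiled against your installed Torch 2.9.1, this avoids the ABI mismatch entirely rather than papering over the version check.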

Would you like more detail on the build-from-source process for custom PyTorch versions?

Hint: Mention @RunLLM in the post for follow-ups.