Conda and setup.py conflicting advice

vLLM directions often suggest conda and setup.py, but the vLLM website now says to use "uv pip", not conda anymore. And setup.py itself tells you "don't use me!"

I am mostly just following directions at this early stage of learning about vLLM (and LLMs in general), so I've been doing "uv pip" and "setup.py" as the instructions show.

Should I keep using setup.py, at least for a while, or should I believe setup.py's warning and stop using it?

Does conda make anything work that uv pip doesn't? That question has been lurking in my head for a month now.

If I use the build approach(es) that setup.py suggests as alternatives to itself, are those stable (yet)? More stable than setup.py?

My intention is to minimize problem areas for myself as I learn. Learning complex systems isn't easy, but isolating the parts at least makes learning each part easier.

TY!

Which device are you using that suggests setup.py?

The CUDA installation instructions (GPU — vLLM) simply advise pip install -e .
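For reference, that install flow looks roughly like this (the environment name and Python version here are illustrative, not prescribed by the docs):

```shell
# Create and activate an isolated environment (name is arbitrary)
uv venv vllm-env --python 3.12
source vllm-env/bin/activate

# From a checkout of the vllm repo: editable install,
# building the CUDA kernels against your local toolkit
uv pip install -e .
```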


Oh right, good question. I was looking at [Doc]: Steps to run vLLM on your RTX5080 or 5090! · Issue #14452 · vllm-project/vllm · GitHub for getting vLLM running on NVIDIA RTX 50[789]0 cards.

It feels like I usually see setup.py-oriented directions, but maybe that's because I'm constantly searching my way past some stumbling block with my 5070 Ti.

Is it that setup.py brings locally built dependencies into the mix, while pip install only brings official (incl. prerelease) repo-sourced libraries, probably alleviating the myriad local environment setup issues folks run into?

For my local installs I usually use the command:

VLLM_USE_PRECOMPILED=1 uv pip install -e .
  • VLLM_USE_PRECOMPILED=1 so that I don’t have to compile the custom ops (your mileage may vary if you are using GPUs that vLLM doesn’t have precompiled binaries for)
  • uv pip instead of pip because it’s blazing fast :rocket:
  • -e so that changes to local files are used when running vLLM
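Putting those pieces together, a full local setup might look like this (a sketch; the clone URL is the standard vLLM repo, adjust paths to taste):

```shell
# Get the source
git clone https://github.com/vllm-project/vllm.git
cd vllm

# Editable install that reuses vLLM's prebuilt CUDA kernels instead of
# compiling the custom ops locally (only works for GPUs that have
# published precompiled binaries)
VLLM_USE_PRECOMPILED=1 uv pip install -e .

# Thanks to -e, edits to the Python sources take effect on the next run
```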

With this setup I also have other packages (like transformers) locally installed with no issues.

Right – so probably what's going on with custom-written instructions is that whenever I find instructions for setting up and installing on a brand-new GPU, the authors have likely been using setup.py for so many years that it's habit, and they're most likely writing "python setup.py" out of habit rather than for any real reason, instead of "uv pip install -{options}". IIUC, uv pip install is the new way, while "python setup.py" is the (an) old way. Sounds like I should be able to use uv pip install in lieu of python setup.py where instructions specify it. Would you say that's generally correct?
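For what it's worth, here's a rough mapping from the old setup.py invocations to their modern equivalents (a sketch; setuptools itself now warns against invoking setup.py directly):

```shell
# Old style                    # Modern equivalent
python setup.py install        # -> uv pip install .
python setup.py develop        # -> uv pip install -e .
python setup.py bdist_wheel    # -> uv build  (or: python -m build)
```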

TY for responding to this fairly OT question. I’m new to using LLMs & vLLM while also being ~new to everything below it / depended upon by it / mixed in with it, so I want to (ask &) learn the lower layers of the software stack in a vLLM-minded context.

I think so, yes. Especially on NVIDIA GPUs.

Other devices currently use various methods to install vLLM and we hope to standardise that in the future.

And thank you for using the forum! This is exactly the kind of learning that we want people to be able to find in search engines!


Great! I can be an excellent guinea pig / dummy :joy: