vLLM has no internet connection

Hi everyone, I’m new to this. I installed vLLM with GPU support and gemma-3-4b-it on a computer. Everything works: I have two GUIs, OpenWebUI and AnythingLLM, running in Docker, and both are connected to vLLM as expected. However, I can’t get the model to access information from the internet.

The Docker containers are running, the GUI settings look fine, and the host machine has working internet access.

Does anyone know what I might be missing or what’s going wrong?

vLLM itself does not provide internet access or web search capabilities to models; it only serves the model for inference. Web search and retrieval are handled by the frontend (e.g., OpenWebUI, AnythingLLM) or by external tools/plugins the frontend integrates. You need to make sure your UI is configured to enable RAG/web search and that the model supports tool use; vLLM never fetches web content by itself, as both the Open WebUI and AnythingLLM documentation note.
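One related serving detail: if you want the model to emit tool calls that the UI then executes, vLLM generally needs tool parsing enabled at serve time. A hedged sketch, assuming a recent vLLM; the parser name shown is only an example, so check `vllm serve --help` for the parsers your version ships and which one matches Gemma 3's chat template:

```shell
# Serve the model with automatic tool-choice parsing enabled.
# "--tool-call-parser hermes" is illustrative — verify the correct
# parser for your model's chat template before relying on it.
vllm serve google/gemma-3-4b-it \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```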

If you want the model to access the internet, check that your UI (OpenWebUI, AnythingLLM) is set up to use a web search engine or RAG backend, and that the model is prompted or configured to use those tools. If you have enabled these features in the UI and still have issues, the problem is almost certainly in the UI’s tool/plugin configuration, not in vLLM. Would you like more detail on configuring web search in OpenWebUI or AnythingLLM?
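For context on what the UI is doing on vLLM’s behalf: web search in these frontends is typically a tool-calling loop around vLLM’s OpenAI-compatible `/v1/chat/completions` endpoint. A minimal sketch of that loop is below; the `web_search` function is a stub standing in for whatever search backend the UI is configured with (e.g., SearXNG in OpenWebUI), and the model name is whatever name your vLLM instance serves:

```python
import json

# Hypothetical search backend; in a real deployment this is the search
# engine configured in the UI (SearXNG, Brave, etc.). Stubbed here.
def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"

# Tool schema in the OpenAI function-calling format, which vLLM's
# OpenAI-compatible server accepts when tool use is enabled.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for current information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def build_request(user_msg: str) -> dict:
    """Build the chat-completions payload the UI would POST to vLLM."""
    return {
        "model": "google/gemma-3-4b-it",  # whatever name vLLM serves
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [SEARCH_TOOL],
    }

def handle_tool_call(tool_call: dict) -> dict:
    """Run the model's requested search and format the tool-result
    message that gets appended to the conversation and sent back."""
    args = json.loads(tool_call["function"]["arguments"])
    result = web_search(args["query"])
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

# Example: the model asked to search; the UI executes it locally.
fake_call = {"id": "call_1", "function": {
    "name": "web_search",
    "arguments": json.dumps({"query": "vLLM web search"})}}
print(handle_tool_call(fake_call)["content"])
```

The key point the sketch illustrates: the search itself runs in the UI’s process, and the model only ever sees the results as an extra message, so a misconfigured search backend in the UI fails silently from vLLM’s point of view.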


Hint: Mention @RunLLM in the post for followups.