Your error indicates the container cannot reach or authenticate with Hugging Face. There are two distinct failure modes: a missing or invalid HUGGING_FACE_HUB_TOKEN produces 401 Unauthorized errors when downloading private or gated models, while a network/DNS problem means the container cannot resolve huggingface.co at all. The official vLLM Docker instructions recommend mounting your local Hugging Face cache into the container and passing the token via the HUGGING_FACE_HUB_TOKEN environment variable. See the official vLLM Docker docs and the GPT-OSS recipe.
Example command (replace <your_token> with your Hugging Face token):
```shell
docker run --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<your_token>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:gptoss \
  --model openai/gpt-oss-20b
```
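Before rerunning the container, you can sanity-check the two usual culprits on the host. This is a minimal POSIX-sh sketch; the function name `check_hf_env` and the `HF_CACHE_DIR` override are illustrative names, not part of vLLM or huggingface_hub:

```shell
# Preflight sketch: verify the token env var is set and the local HF cache exists.
# check_hf_env and HF_CACHE_DIR are hypothetical names used for illustration.
check_hf_env() {
  if [ -z "${HUGGING_FACE_HUB_TOKEN:-}" ]; then
    echo "token: MISSING (private or gated models will return 401)"
  else
    echo "token: set"
  fi
  if [ -d "${HF_CACHE_DIR:-$HOME/.cache/huggingface}" ]; then
    echo "cache: present (will be mounted into the container)"
  else
    echo "cache: absent (first run will download the full model)"
  fi
}

check_hf_env
```

For a live authentication check, `curl -H "Authorization: Bearer <your_token>" https://huggingface.co/api/whoami-v2` should return your account info; if that command fails with a DNS resolution error instead of an HTTP status, the problem is the network rather than the token.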
Would you like more detail on troubleshooting network or authentication issues?