INFO: 127.0.0.1:47190 - "POST /generate HTTP/1.1" 500 Internal Server Error

I am building vLLM from source to test a few things, but I am getting this error when I send a request to the vLLM server:
`INFO: 127.0.0.1:47190 - "POST /generate HTTP/1.1" 500 Internal Server Error`

Is there a way to get a detailed error message?

To get detailed error logs from vLLM, set the environment variable `VLLM_LOGGING_LEVEL=DEBUG` before starting the server. For even more detail, also set `VLLM_TRACE_FUNCTION=1`. These increase log verbosity and help identify the root cause of the 500 Internal Server Error. Also make sure you are hitting the right endpoint: `/generate` is not exposed by the OpenAI-compatible server, so if you are running that mode, use `/v1/completions` or `/v1/chat/completions` instead. See the troubleshooting and logging-configuration docs for more.
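As a minimal sketch of both points (assuming a local install of vLLM on the path and the placeholder model `facebook/opt-125m`), the script below starts the OpenAI-compatible server with debug logging enabled and then queries `/v1/completions`; the exact launch command and model are only examples and may differ in your setup:

```python
import os
import subprocess
import time

import requests

MODEL = "facebook/opt-125m"        # placeholder model for illustration
BASE_URL = "http://127.0.0.1:8000"

# 1. Start the OpenAI-compatible server with verbose logging enabled.
env = os.environ.copy()
env["VLLM_LOGGING_LEVEL"] = "DEBUG"  # detailed error logs
env["VLLM_TRACE_FUNCTION"] = "1"     # per-function tracing (very verbose)
server = subprocess.Popen(
    ["python", "-m", "vllm.entrypoints.openai.api_server",
     "--model", MODEL, "--port", "8000"],
    env=env,
)

try:
    # 2. Wait until the server reports healthy.
    for _ in range(120):
        try:
            if requests.get(f"{BASE_URL}/health", timeout=1).status_code == 200:
                break
        except requests.RequestException:
            time.sleep(1)

    # 3. Query /v1/completions (the OpenAI-compatible server has no /generate).
    resp = requests.post(
        f"{BASE_URL}/v1/completions",
        json={"model": MODEL, "prompt": "Hello, my name is", "max_tokens": 16},
        timeout=60,
    )
    print(resp.status_code, resp.json())
finally:
    server.terminate()
```

Any traceback behind the 500 error should then appear in the server's terminal output at DEBUG level.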

Would you like a step-by-step guide or example commands?
