When sending a request, I only get `INFO: 127.0.0.1:49598 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error` in the server log, with no stack trace. I'm wondering if there is a way (like a flag or env var) to get the stack trace printed on the server side.
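
For reference, this is a minimal sketch of the kind of request that triggers the 500, assuming the default local vLLM OpenAI-compatible endpoint (`http://127.0.0.1:8000/v1`) and a placeholder model name; on the client side only the generic error is visible:

```python
# Minimal reproduction sketch: send a chat completion request to a local
# vLLM OpenAI-compatible server. Endpoint and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",  # assumed default vLLM server address
    api_key="EMPTY",                       # vLLM does not check the key by default
)

try:
    response = client.chat.completions.create(
        model="my-model",  # placeholder; whatever model the server was started with
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except Exception as exc:
    # Only this client-side error is visible; the server log shows just
    # the 500 line quoted above, with no traceback.
    print(f"Request failed: {exc}")
```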