DeepSeek-V3 tool_choice="auto" not working, but tool_choice="required" is working

vLLM server launch configuration:

```yaml
args:
  - --model
  - deepseek-ai/DeepSeek-V3
  - --port
  - "8080"
  - --tensor-parallel-size
  - "8"
  - --served-model-name
  - deepseek_v3
  - --trust-remote-code
  - --enable-auto-tool-choice
  - --tool-call-parser
  - deepseek_v3
  - --chat-template
  - /vllm-workspace/examples/tool_chat_template_deepseekv3.jinja
command:
  - python3
  - -m
  - vllm.entrypoints.openai.api_server
```

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Retrieve the weather for a specific location. The user must provide a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        },
    },
]

response = openai.chat.completions.create(
    model="deepseek_v3",
    messages=[
        {
            "role": "user",
            "content": "get weather in San Francisco",
        }
    ],
    tools=tools,
    tool_choice="auto",
    temperature=0.5,
    max_tokens=5000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=1,
    stream=False,
)
```

Output with `tool_choice="auto"` (the model emits the tool-call markup, but it is not parsed, so it ends up in `content` and `tool_calls` comes back empty):

```
ChatCompletion(id='chatcmpl-5d61e3e0-47b5-4a0d-9819-bbddfb7b0e12', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='```json\n{"location": "San Francisco, CA"}\n```\n<|tool▁call▁end|>', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[], reasoning_content=None), stop_reason=None)], created=1751608114, model='deepseek_v3', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=15, prompt_tokens=209, total_tokens=224, completion_tokens_details=None, prompt_tokens_details=None), prompt_logprobs=None, kv_transfer_params=None)
```
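When "auto" mode fails this way, the arguments are not lost: they sit as raw text in `message.content`. As a stopgap, a client-side fallback can recover them. The sketch below is a hypothetical helper, not a vLLM or OpenAI API; the sample string mimics the `content` field in the response above:

```python
import json
import re

def extract_unparsed_tool_args(content: str):
    """Fallback for when the server-side tool-call parser misses a call:
    pull the first JSON object out of the raw markup left in `content`."""
    match = re.search(r"\{.*?\}", content, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None

# Mimics the content field of the "auto"-mode response above.
raw_content = 'json\n{"location": "San Francisco, CA"}\n<|tool\u2581call\u2581end|>'
print(extract_unparsed_tool_args(raw_content))
# {'location': 'San Francisco, CA'}
```

This only helps for simple single-call responses; a real fix has to come from the chat template / parser side.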

For DeepSeek-V3 with vLLM, tool_choice="required" works, but tool_choice="auto" may not trigger tool calls as expected. This is a known issue: in "auto" mode the model often fails to produce parsed tool calls, while "required" forces the model to use the provided tools and works reliably. This behavior is confirmed in user reports and discussions, and is not unique to your setup; others have observed that DeepSeek-V3 and DeepSeek-R1 models only reliably call tools when tool_choice="required" is set, not "auto" (source, source).

The root cause is that, in "auto" mode, the model itself decides whether to use a tool, and DeepSeek-V3's tool-calling logic is not always triggered unless forced. There is no official fix or workaround in the vLLM documentation as of now; using "required" is the recommended approach for consistent tool calling (source).
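Switching to the recommended workaround is a one-line change to the request above. `build_request` below is a hypothetical helper (model name and prompt taken from the report) that defaults to forcing tool use:

```python
def build_request(tools, force_tools=True):
    """Build kwargs for openai.chat.completions.create, forcing tool use
    ("required") since "auto" does not reliably trigger tool calls with
    DeepSeek-V3 on vLLM."""
    return {
        "model": "deepseek_v3",
        "messages": [{"role": "user", "content": "get weather in San Francisco"}],
        "tools": tools,
        "tool_choice": "required" if force_tools else "auto",
        "temperature": 0.5,
        "max_tokens": 5000,
    }

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Retrieve the weather for a specific location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

print(build_request(tools)["tool_choice"])
# required
```

With `response = openai.chat.completions.create(**build_request(tools))`, the parsed call should then appear in `response.choices[0].message.tool_calls` instead of being dumped into `content`.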
