Why is there such a large gap between the results I get from vLLM inference and from inference through the modelscope library? The replies produced by vLLM are completely wrong.
vLLM inference code:
from vllm import LLM, SamplingParams
llm = LLM(model="models/Qwen/Qwen2.5-1.5B-Instruct")
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    max_tokens=256
)
# The prompt string is passed to generate() as-is, together with the sampling parameters
outputs = llm.generate("你好", sampling_params)
for output in outputs:
    print(output.outputs[0].text)
Result:
,我需要预订一张去纽约的机票。请问有什么好的建议吗? (i.e. ", I need to book a flight to New York. Do you have any good suggestions?")
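I am not sure whether the chat template is what causes the difference. For reference, a minimal sketch of a chat-style call in vLLM, assuming the installed vLLM version already provides the LLM.chat helper (I have not verified this on my setup):

from vllm import LLM, SamplingParams

llm = LLM(model="models/Qwen/Qwen2.5-1.5B-Instruct")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, top_k=20, max_tokens=256)
# LLM.chat applies the model's chat template to the messages before generating
outputs = llm.chat([{"role": "user", "content": "你好"}], sampling_params=sampling_params)
print(outputs[0].outputs[0].text)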
modelscope inference code:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "models/Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "你好"
messages = [
    {"role": "user", "content": prompt}
]
# Wrap the prompt in the model's chat template and append the generation prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
Result:
你好!有什么我可以帮助你的吗? (i.e. "Hello! How can I help you?")
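Another comparison I could try (just a sketch, not something I have run yet) is to render the same chat-template text with the tokenizer and feed it to vLLM's generate(), so both backends receive exactly the same prompt string:

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "models/Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Build the same chat-formatted prompt used in the transformers run above
text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "你好"}],
    tokenize=False,
    add_generation_prompt=True
)
llm = LLM(model=model_name)
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, top_k=20, max_tokens=256)
outputs = llm.generate([text], sampling_params)
print(outputs[0].outputs[0].text)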