Multimodal inference guideline?

This code still fails:

def predict(self, image, question: str) -> str:
    """Generate an answer for the given image and question."""
    prompt = self.build_prompt(question)
    if isinstance(prompt, list):
        prompt = prompt[0]  # or ''.join(prompt)

    outputs = self.llm.generate(
        {"prompt": prompt, "multi_modal_data": {"image": image}},
        sampling_params=self.sampling_params,
    )

Your code is almost correct, but the error likely comes from self.build_prompt(question) returning a list of dictionaries (e.g., chat template messages), not a string. For models like LLaVA-OneVision, you must use the tokenizer’s apply_chat_template to convert the message list to a string prompt (see example).

Correct approach:

# Suppose self.tokenizer is your model's tokenizer
messages = self.build_prompt(question)  # returns a list of dicts
prompt = self.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = self.llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=self.sampling_params,
)

Do not pass a list directly as the prompt—always convert it to a string with the tokenizer.

Would you like a full code example for your model?

Can you explain in simple terms what max_model_len and max_num_seqs do? What do they represent in the models?

max_model_len sets the maximum number of tokens (prompt plus generated output) a single request may use; it is effectively the context window the engine reserves per request.
max_num_seqs sets the maximum number of sequences (requests) the engine processes in parallel in one scheduling iteration, i.e. the batch size per step.
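
For illustration, both limits are ordinary constructor arguments of vLLM's LLM class; the model name and values below are placeholders, not recommendations:

from vllm import LLM

llm = LLM(
    model="openbmb/MiniCPM-Llama3-V-2_5",  # placeholder model name
    max_model_len=4096,   # per-request token budget (prompt + generated output)
    max_num_seqs=2,       # how many requests are batched together per step
    trust_remote_code=True,
)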

What is the correct way to run inference with the MiniCPM models (both V and O)? I get the following error:

Failed to evaluate openbmb/MiniCPM-Llama3-V-2_5: Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates! This is likely because you forgot to include input placeholder tokens (e.g., <image>, <|image_pad|>) in the prompt. If the model has a chat template, make sure you have applied it before calling LLM.generate.

You must use the correct prompt template with image placeholders and apply the chat template before calling LLM.generate. For MiniCPM-V and MiniCPM-O, use the tokenizer’s apply_chat_template and include (<image>./</image>) in the user message content (vision_language.py example, multimodal inputs doc).

Example:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True)
messages = [{"role": "user", "content": "(<image>./</image>)\nWhat is in this image?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = llm.generate({"prompt": prompt, "multi_modal_data": {"image": image}})

Would you like a full example for MiniCPM-O as well?

I do the following, which still returns the same error:

def _prompt_minicpm(self, question: str) -> str:
    """Prompt format for MiniCPM models."""
    messages = {"role": "user", "content": f"(./)\n{question}"}
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

and

def predict(self, image, question: str) -> str:
    """Generate an answer for the given image and question."""
    prompt = self.build_prompt(question)
    outputs = self.llm.generate(
        {"prompt": prompt, 
         "multi_modal_data": {"image": image}},
        sampling_params=self.sampling_params,
    )
    return outputs[0].outputs[0].text.strip()

Your error persists because you are not using the correct image placeholder for MiniCPM models. You must use (<image>./</image>) (not just (./)) in the prompt, and you must pass a list of messages (not a single dict) to apply_chat_template (vision_language.py example).

Corrected code:

def _prompt_minicpm(self, question: str) -> str:
    messages = [{
        "role": "user",
        "content": f"(<image>./</image>)\n{question}"
    }]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

This will resolve the “Expected there to be 1 prompt updates corresponding to 1 image items” error.

Would you like a full end-to-end example?

INFO 07-02 07:06:12 [worker.py:294] Memory profiling takes 8.16 seconds
INFO 07-02 07:06:12 [worker.py:294] the current vLLM instance can use total_gpu_memory (23.64GiB) x gpu_memory_utilization (0.95) = 22.46GiB
INFO 07-02 07:06:12 [worker.py:294] model weights take 15.95GiB; non_torch_memory takes 0.06GiB; PyTorch activation peak memory takes 0.69GiB; the rest of the memory reserved for KV Cache is 5.76GiB.
INFO 07-02 07:06:13 [executor_base.py:113] # cuda blocks: 2948, # CPU blocks: 2048
INFO 07-02 07:06:13 [executor_base.py:118] Maximum concurrency for 4096 tokens per request: 11.52x
INFO 07-02 07:06:14 [llm_engine.py:428] init engine (profile, create kv cache, warmup model) took 10.39 seconds
Adding requests: 0%| | 0/1 [00:04<?, ?it/s]
Failed to evaluate openbmb/MiniCPM-Llama3-V-2_5: Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates! This is likely because you forgot to include input placeholder tokens (e.g., <image>, <|image_pad|>) in the prompt. If the model has a chat template, make sure you have applied it before calling LLM.generate.

“”"Utilities for running inference with various VLM models.

This module exposes the :class:VLMInference class which can be used with
different multimodal model families. The class automatically selects the
correct prompting scheme based on the provided model name or an explicit
family argument.
“”"
import os

os.environ[“CUDA_VISIBLE_DEVICES”] = “0,1”
#os.environ[“VLLM_USE_V1”] = “1” # Ensure VLLM uses the correct version: set to zero for blip, Gemma, internVL, Ovis

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

class VLMInference:
“”“Run inference for different multimodal model families.”“”

def __init__(self, model_name: str, family: str | None = None) -> None:
    self.model_name = model_name
    # Infer the family if not explicitly provided
    self.family = family or self._detect_family(model_name)

    # Set up the LLM and tokenizer once
    self.llm = LLM(
        model=model_name,
        gpu_memory_utilization=0.95,
        enforce_eager=True,
        #max_model_len=1024*4,
        max_num_seqs= 1,
        limit_mm_per_prompt= {"image": 1}, #disable for gemma
        trust_remote_code=True,
        tensor_parallel_size=1,  #add to two for prometheus
        # dtype="float16", 
    )
    
    self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    
    # Conservative generation parameters for deterministic output
    self.sampling_params = SamplingParams(
        temperature=0.0,
        max_tokens=500,
        top_p=1.0,
        top_k=1,
    )

# ------------------------------------------------------------------
# Prompt helpers
# ------------------------------------------------------------------
def _prompt_internvl(self, question: str) -> str:
    """Prompt format for InternVL models."""
    messages = [{"role": "user", "content": f"<image>\n{question}"}]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

def _prompt_gemma(self, question: str) -> str:
    """Prompt format for Gemma models."""
    return (
        "<bos><start_of_turn>user\n"
        f"<start_of_image>{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def _prompt_minicpm(self, question: str) -> str:
    messages = [{
        "role": "user",
        "content": f"(<image>./</image>)\n{question}"
    }]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)


def _prompt_blip(self, question: str) -> str:
    """Prompt format for BLIP/BLIP2 models."""
    return f"Question: {question} Answer:"

def _prompt_ovis(self, question: str) -> str:
    """Prompt format for Ovis models."""
    return f"USER: <image>\n{question}\nASSISTANT:"

def _prompt_llama(self, question: str) -> str:
    """Prompt format for Llama-based models (LLaVA, etc.)."""
    return f"USER: <image>\n{question}\nASSISTANT:"

def _prompt_llava(self, question: str) -> str:
    """Prompt format for Llava models."""
    return f"USER: <image>\n{question}\nASSISTANT:"

_PROMPT_FUNCS = {
    "internvl": _prompt_internvl,
    "gemma": _prompt_gemma,
    "minicpm": _prompt_minicpm,
    "blip2": _prompt_blip,
    "ovis": _prompt_ovis,
    "llama": _prompt_llama,
    "llava": _prompt_llava,
}

def _detect_family(self, model_name: str) -> str:
    """Best-effort detection of the model family from its name."""
    name = model_name.lower()
    if "internvl" in name:  
        print(f"Detected family: internvl for model {model_name}")
        return "internvl"
    if "llama" in name:
        print(f"Detected family: llama for model {model_name}")
        return "llama"
    if "llava" in name:
        print(f"Detected family: llava for model {model_name}")
        return "llava"
    if "gemma" in name:
        print(f"Detected family: gemma for model {model_name}")
        return "gemma"
    if "minicpm" in name:
        print(f"Detected family: minicpm for model {model_name}")
        return "minicpm"
    if "blip2" in name:
        print(f"Detected family: blip2 for model {model_name}")
        return "blip2"
    if "ovis" in name:
        print(f"Detected family: ovis for model {model_name}")
        return "ovis"
   
    return "internvl"

def build_prompt(self, question: str) -> str:
    """Return the correct prompt for question."""
    func = self._PROMPT_FUNCS.get(self.family)
    if func is None:
        raise ValueError(f"No prompt function found for model family: {self.family}")
    return func(self, question)

def predict(self, image, question: str) -> str:
    """Generate an answer for the given image and question."""
    prompt = self.build_prompt(question)
    outputs = self.llm.generate(
        {"prompt": prompt, 
         "multi_modal_data": {"image": image}},
        sampling_params=self.sampling_params,
    )
    return outputs[0].outputs[0].text.strip()

Your code for MiniCPM is almost correct, but the error persists because the placeholder must be exactly (<image>./</image>) (no extra whitespace or line breaks), and the prompt must be passed as a string after applying the chat template to a list of messages. Also, ensure the image is passed as a PIL Image, not a tensor.

Correct MiniCPM prompt function:

def _prompt_minicpm(self, question: str) -> str:
    messages = [{
        "role": "user",
        "content": f"(<image>./</image>)\n{question}"
    }]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

Key points:

  • The placeholder must be (<image>./</image>) (no extra line breaks or spaces).
  • The message list must be passed to apply_chat_template.
  • The image must be a PIL Image object.

This will resolve the “Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates!” error (vision_language.py example).
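
On the last key point, here is a minimal sketch of preparing the image as a PIL object before calling predict; the file path and the runner variable are hypothetical:

from PIL import Image

image = Image.open("example.jpg").convert("RGB")  # hypothetical path; load as an RGB PIL image
answer = runner.predict(image, "What is in this image?")  # runner: a VLMInference instance
print(answer)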

Would you like a full end-to-end example for MiniCPM inference?

I think it has to do with the stop token id’s? how do i incorporate those in my code?

Yes, for some models (including MiniCPM), you must explicitly set the correct stop_token_ids in your generation call to ensure proper stopping (vision_language.py example). You can get the stop token IDs from the tokenizer:

stop_tokens = ["<|im_end|>", "<|endoftext|>"]  # Example for MiniCPM
stop_token_ids = [self.tokenizer.convert_tokens_to_ids(tok) for tok in stop_tokens]

outputs = self.llm.generate(
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image}
    },
    sampling_params=self.sampling_params,
    stop_token_ids=stop_token_ids,
)

This ensures generation stops at the correct tokens. Would you like a full code example?

Why is it still not working!!!
The image type is <class 'PIL.Image.Image'>
Failed to evaluate openbmb/MiniCPM-Llama3-V-2_5: LLM.generate() got an unexpected keyword argument 'stop_token_ids'

CODE:

"""Utilities for running inference with various VLM models.

This module exposes the :class:`VLMInference` class which can be used with
different multimodal model families. The class automatically selects the
correct prompting scheme based on the provided model name or an explicit
``family`` argument.
"""
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
#os.environ["VLLM_USE_V1"] = "1"  # Ensure VLLM uses the correct version: set to zero for blip, Gemma, internVL, Ovis

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

class VLMInference:
    """Run inference for different multimodal model families."""

    def __init__(self, model_name: str, family: str | None = None) -> None:
        self.model_name = model_name
        # Infer the family if not explicitly provided
        self.family = family or self._detect_family(model_name)

        # Set up the LLM and tokenizer once
        self.llm = LLM(
            model=model_name,
            gpu_memory_utilization=0.95,
            enforce_eager=True,
            #max_model_len=1024*4,
            max_num_seqs= 1,
            limit_mm_per_prompt= {"image": 1}, #disable for gemma
            trust_remote_code=True,
            tensor_parallel_size=1,  #add to two for prometheus
            # dtype="float16", 
        )
        print("initializing tokenizer for model:", model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
        
        # Conservative generation parameters for deterministic output
        self.sampling_params = SamplingParams(
            temperature=0.0,
            max_tokens=500,
            top_p=1.0,
            top_k=1,
        )

    # ------------------------------------------------------------------
    # Prompt helpers
    # ------------------------------------------------------------------
    def _prompt_internvl(self, question: str) -> str:
        """Prompt format for InternVL models."""
        messages = [{"role": "user", "content": f"<image>\n{question}"}]
        return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    def _prompt_gemma(self, question: str) -> str:
        """Prompt format for Gemma models."""
        return (
            "<bos><start_of_turn>user\n"
            f"<start_of_image>{question}<end_of_turn>\n"
            "<start_of_turn>model\n"
        )

    def _prompt_minicpm(self, question: str) -> str:
        messages = [{
            "role": "user",
            "content": f"(<image>./</image>)\n{question}"
        }]
        return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)


    def _prompt_blip(self, question: str) -> str:
        """Prompt format for BLIP/BLIP2 models."""
        return f"Question: {question} Answer:"

    def _prompt_ovis(self, question: str) -> str:
        """Prompt format for Ovis models."""
        return f"USER: <image>\n{question}\nASSISTANT:"

    def _prompt_llama(self, question: str) -> str:
        """Prompt format for Llama-based models (LLaVA, etc.)."""
        return f"USER: <image>\n{question}\nASSISTANT:"
    
    def _prompt_llava(self, question: str) -> str:
        """Prompt format for Llava models."""
        return f"USER: <image>\n{question}\nASSISTANT:"
    
    _PROMPT_FUNCS = {
        "internvl": _prompt_internvl,
        "gemma": _prompt_gemma,
        "minicpm": _prompt_minicpm,
        "blip2": _prompt_blip,
        "ovis": _prompt_ovis,
        "llama": _prompt_llama,
        "llava": _prompt_llava,
    }

    def _detect_family(self, model_name: str) -> str:
        """Best-effort detection of the model family from its name."""
        name = model_name.lower()
        if "internvl" in name:  
            print(f"Detected family: internvl for model {model_name}")
            return "internvl"
        if "minicpm" in name:
            print(f"Detected family: minicpm for model {model_name}")
            return "minicpm"
        if "llama" in name:
            print(f"Detected family: llama for model {model_name}")
            return "llama"
        if "llava" in name:
            print(f"Detected family: llava for model {model_name}")
            return "llava"
        if "gemma" in name:
            print(f"Detected family: gemma for model {model_name}")
            return "gemma"
        
        if "blip2" in name:
            print(f"Detected family: blip2 for model {model_name}")
            return "blip2"
        if "ovis" in name:
            print(f"Detected family: ovis for model {model_name}")
            return "ovis"
       
        return "internvl"

    def build_prompt(self, question: str) -> str:
        """Return the correct prompt for question."""
        func = self._PROMPT_FUNCS.get(self.family)
        if func is None:
            raise ValueError(f"No prompt function found for model family: {self.family}")
        return func(self, question)

    def predict(self, image, question: str) -> str:
        """Generate an answer for the given image and question."""
        print("The image type is", type(image))
        prompt = self.build_prompt(question)
        
        stop_tokens = ["<|im_end|>", "<|endoftext|>"]  # Example for MiniCPM
        stop_token_ids = [self.tokenizer.convert_tokens_to_ids(tok) for tok in stop_tokens]

        
        outputs = self.llm.generate(
            {"prompt": prompt, 
             "multi_modal_data": {"image": image}},
            sampling_params=self.sampling_params,
            stop_token_ids=stop_token_ids,

        )
        return outputs[0].outputs[0].text.strip()



The error occurs because stop_token_ids is not a valid argument to LLM.generate()—it must be set inside the SamplingParams object, not as a separate argument (vision_language.py example, SamplingParams docs).

How to fix:

Move stop_token_ids into your SamplingParams:

stop_tokens = ["<|im_end|>", "<|endoftext|>"]
stop_token_ids = [self.tokenizer.convert_tokens_to_ids(tok) for tok in stop_tokens]
sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=500,
    top_p=1.0,
    top_k=1,
    stop_token_ids=stop_token_ids,  # <-- put it here
)
outputs = self.llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=sampling_params,
)

Do not pass stop_token_ids directly to generate().

Would you like a full working example for MiniCPM?

It works with 2.6, so what am I doing wrong with the stop tokens?

Failed to evaluate openbmb/MiniCPM-Llama3-V-2_5: Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates! This is likely because you forgot to include input placeholder tokens (e.g., <image>, <|image_pad|>) in the prompt. If the model has a chat template, make sure you have applied it before calling LLM.generate.

self.llm = LLM(
    model=model_name,
    gpu_memory_utilization=0.9,
    enforce_eager=True,
    max_model_len=2500, #1024,
    max_num_seqs= 1,
    limit_mm_per_prompt= {"image": 1}, #disable for gemma
    trust_remote_code=True,
    tensor_parallel_size=1,  #add to two for prometheus
    #dtype="float32", #use float32 for deepseek
)

self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
#self.stop_token_ids = [self.tokenizer.eos_id]
self.stop_token_ids = [self.tokenizer.eos_id, self.tokenizer.eot_id]        #added
#self.stop_tokens = ["<|im_end|>", "<|endoftext|>"]
#self.stop_token_ids = [self.tokenizer.convert_tokens_to_ids(tok) for tok in self.stop_tokens]

# self.stop_token_ids = [self.tokenizer.convert_tokens_to_ids(i) for i in self.stop_tokens]
# Conservative generation parameters for deterministic output
# Conservative generation parameters for deterministic output
self.sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=500,
    top_p=1.0,
    top_k=1,
    stop_token_ids = self.stop_token_ids, # added
)

def _prompt_minicpm(self, question: str) -> str:
    messages = [{
        "role": "user",
        "content": f"(./)\n{question}"
    }]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

def predict(self, image, question: str) -> str:
    """Generate an answer for the given image and question."""
    prompt = self.build_prompt(question)
    outputs = self.llm.generate(
        {"prompt": prompt,
         "multi_modal_data": {"image": image}},
        sampling_params=self.sampling_params,
    )
    return outputs[0].outputs[0].text.strip()

Your error is because your prompt is missing the required image placeholder for MiniCPM. You wrote (./), but it must be exactly (<image>./</image>) in the prompt string (vision_language.py example). Also, make sure you pass a list of messages to apply_chat_template.

Corrected code:

def _prompt_minicpm(self, question: str) -> str:
    messages = [{
        "role": "user",
        "content": f"(<image>./</image>)\n{question}"
    }]
    return self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

This will resolve the “Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates!” error.

Would you like a full end-to-end example?

It works for V2.6, so it seems to be related to the stop tokens?

If your code works for MiniCPM-V-2_6 but not for MiniCPM-Llama3-V-2_5, the issue is likely with the stop tokens. MiniCPM-Llama3-V-2_5 uses both eos_id and eot_id as stop tokens, while V-2_6 uses only eos_id (see example). Make sure you set stop_token_ids=[tokenizer.eos_id, tokenizer.eot_id] in your SamplingParams for V-2_5.

Would you like a code snippet showing this for both models?
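
For reference, a minimal sketch of that version split, based only on the claim above (verify the exact stop tokens against the vision_language.py example for your model); model_name and tokenizer are assumed to be already defined:

from vllm import SamplingParams

# Assumption (per the answer above): V-2_5 stops on eos_id and eot_id, V-2_6 on eos_id only
if "2_5" in model_name:
    stop_token_ids = [tokenizer.eos_id, tokenizer.eot_id]   # MiniCPM-Llama3-V-2_5
else:
    stop_token_ids = [tokenizer.eos_id]                     # e.g. MiniCPM-V-2_6

sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=500,
    stop_token_ids=stop_token_ids,
)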

I am doing that!

self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
self.stop_token_ids = [self.tokenizer.eos_id, self.tokenizer.eot_id]  #added

So what can I still be doing wrong?

If you are already setting self.stop_token_ids = [self.tokenizer.eos_id, self.tokenizer.eot_id] and still get the “Expected there to be 1 prompt updates corresponding to 1 image items, but instead found 0 prompt updates!” error, the problem is not with the stop tokens. This error is caused by the prompt missing the required image placeholder.

Checklist:

  • Your prompt for MiniCPM-Llama3-V-2_5 must contain (<image>./</image>) exactly (not (./) or any other variant).
  • You must use the tokenizer’s apply_chat_template on a list of messages, not a single dict or string.
  • The image you pass must be a valid PIL Image.

Example:

messages = [{
    "role": "user",
    "content": f"(<image>./</image>)\nWhat is in this image?"
}]
prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = self.llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=SamplingParams(
        temperature=0.0,
        max_tokens=500,
        stop_token_ids=[self.tokenizer.eos_id, self.tokenizer.eot_id],
    ),
)

If you do all of the above and still get the error, double-check for typos, extra whitespace, or incorrect message formatting.
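
One quick way to rule out the placeholder issue is to print the rendered prompt right before generation and confirm the literal placeholder survived the chat template (a minimal sanity check, assuming the MiniCPM prompt helper above is in use):

prompt = self.build_prompt("What is in this image?")
print(repr(prompt))  # inspect the exact string that will be sent to LLM.generate
assert "(<image>./</image>)" in prompt, "image placeholder missing from rendered prompt"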

Would you like a minimal, copy-paste-ready working example?
