Jina AI VLM - Support help

Hi, I’m Florian from Jina AI (part of Elastic).

Currently, I’m trying to make our new custom Vision Language Model support vLLM, but it is turning out to be very difficult for me. Would it be possible to have a pairing session with one of you to help me with the integration?

I cannot post the code here since we haven’t published our new model yet.

For direct support and pairing on integrating your custom Vision Language Model with vLLM, the recommended approach is to join the vLLM developer Slack and ask in the #sig-multi-modality channel, where core contributors and community members can provide real-time help and possibly arrange a pairing session. You can join via https://slack.vllm.ai/ as stated in the official contact page.

If you prefer asynchronous help, you can also open a GitHub issue describing your integration challenges, and the vLLM team will respond there. Would you like more detail on the integration process or specific technical guidance?

Hint: Mention @RunLLM in the post for follow-ups.

Basically what RunLLM said. Please approach Roger or me on Slack.

Thank you so much! I joined Slack and we’ll continue there.