ImportError: The model architecture 'mobilenetv5_300m_enc' is not supported in your version of timm (0.9.10). Please upgrade timm to a more recent version with pip install -U timm.
(VllmWorker TP0 pid=152) INFO 09-22 14:18:21 [multiproc_executor.py:520] Parent process exited, terminating worker
(VllmWorker TP1 pid=153) INFO 09-22 14:18:21 [multiproc_executor.py:520] Parent process exited, terminating worker
[rank0]:[W922 14:18:22.767945607 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see Distributed communication package - torch.distributed — PyTorch 2.8 documentation (function operator())
The error occurs because the timm version in your Docker image (0.9.10) does not support the mobilenetv5_300m_enc architecture that Gemma 3n models require. You need timm upgraded to at least 1.0.17, and transformers to at least 4.55.4, for full Gemma 3n support. This is a known issue that has been fixed in newer Docker images and in the codebase. As a workaround, you can run pip install --upgrade timm==1.0.19 transformers==4.55.4 inside your container before starting vLLM, or rebuild your Docker image with these versions preinstalled. See the official fix and workaround details in this issue and this PR.
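As a sketch of the workaround, assuming your running container is named vllm_container (substitute your actual container name or ID), the in-place upgrade and the rebuild option look like this:

```shell
# Option 1: upgrade inside the already-running container, then restart vLLM.
# "vllm_container" is a placeholder for your container's name or ID.
sudo docker exec -it vllm_container \
  pip install --upgrade timm==1.0.19 transformers==4.55.4

# Option 2: bake the fix into your own image (minimal Dockerfile sketch):
#   FROM vllm/vllm-openai:latest
#   RUN pip install --upgrade timm==1.0.19 transformers==4.55.4
# Then build and run it:
#   sudo docker build -t vllm-openai-gemma3n .
```

Option 1 is lost when the container is recreated, so Option 2 is the more durable fix if you redeploy often.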
Would you like step-by-step instructions for updating your Docker image or container?
how can I get the latest docker image where these are fixed?
sudo docker pull vllm/vllm-openai:latest
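To confirm what the pulled image actually ships, you can check the package versions and the image build date directly (tag assumed to be :latest):

```shell
# Show the timm and transformers versions inside the image;
# overriding the entrypoint lets us run pip directly.
sudo docker run --rm --entrypoint '' vllm/vllm-openai:latest \
  pip show timm transformers | grep -E '^(Name|Version)'

# Show when the local :latest tag was built, to see how fresh it is.
sudo docker inspect -f '{{.Created}}' vllm/vllm-openai:latest
```

If timm still reports 0.9.x, your local :latest tag predates the fix and you need a newer tag or a rebuilt image.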
Have done that, but it looks like it's not the latest?