Free AMD GPU Access for vLLM Developers

Dear vLLM Developers,

AMD has announced a Developer Cloud initiative to provide community developers with complimentary access to AMD GPU instances. This program aims to empower contributors to accelerate their work on open-source AI projects, including vLLM.
If you’re interested in participating, please email:

bingqing.guo@amd.com
haichzha@amd.com

with the following details:

  • Full Name
  • Professional/Affiliation Title
  • Key vLLM Contributions (e.g., PRs, features, optimizations)
  • Development Plans (e.g., proposed features, enhancements, or research goals for vLLM)

Eligible developers will receive free GPU hours upon approval. This is a unique opportunity to leverage AMD infrastructure to advance vLLM and the broader AI ecosystem.

We appreciate your dedication to the vLLM community!


About the AMD Developer Cloud:
(Introducing AMD Instinct™ MI300X GPU Droplets | DigitalOcean)

It seems likely that the AMD Developer Cloud provides access to AMD Instinct MI300X GPUs for developers, including for AI and LLM workloads. However, the vLLM documentation does not mention the AMD Developer Cloud specifically, nor does it reference the DigitalOcean or AMD blog posts linked above. vLLM does support AMD GPUs, including the MI210, MI250, MI250X, MI300, and Radeon RX 7900/9000 series, and it can be installed on them via ROCm 6.3. There is no explicit mention of AMD Developer Cloud integration in the official docs or issues I have ingested so far.
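Before installing vLLM on an AMD instance, it can help to confirm that the PyTorch build in the environment is actually a ROCm build. The sketch below is a minimal pre-flight check, assuming (as in current ROCm builds of PyTorch) that `torch.version.hip` is set to a version string on ROCm builds and is `None` or absent otherwise; it is an illustration, not an official vLLM utility.

```python
# Minimal pre-flight check for a ROCm-backed PyTorch install.
# Assumption: ROCm builds of PyTorch set torch.version.hip to a version
# string; CUDA/CPU builds leave it None (or the attribute is absent).
import importlib.util


def rocm_pytorch_available() -> bool:
    """Return True only if PyTorch is installed and built against ROCm/HIP."""
    # Avoid an ImportError when torch is not installed at all.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch

    return getattr(torch.version, "hip", None) is not None


if __name__ == "__main__":
    print("ROCm PyTorch detected:", rocm_pytorch_available())
```

If this prints `False` on an AMD GPU instance, the usual cause is a CUDA or CPU-only PyTorch wheel, which vLLM's ROCm build cannot use.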

Would you like more detail on how to set up vLLM on AMD GPUs in general, or are you looking for information about using vLLM specifically within the AMD Developer Cloud environment?