We help open source infrastructure projects improve their security through better tooling, audits, and best practices. We’ve been doing this for ten years now and have a long track record of uplifting projects without placing significant maintenance or time burdens on the maintainers. We’re interested in collaborating with the community on a (free) security review of vLLM, and I wanted to reach out here first to discuss whether this is something the community and maintainers are interested in.
If you’d like to collaborate, I’m happy to chat and exchange some quick info about your current practices and needs, and we can work together to build a plan to put our free security resources and researchers to work reinforcing vLLM.
Our teams have helped a number of AI projects related to vLLM and would love an opportunity to work together!
For our track record: we have a GitHub repo with completed engagements. (I’m going to avoid links because it’s a first post and I’m likely to get nuked by an automod.)
If you want to reach me personally, I can be reached via email at my first name at the website’s domain name.
Keep up the great work! vLLM is a tool I personally use, so I’m very interested in working together!
Thank you for your interest in supporting vLLM’s security! The vLLM project welcomes security collaboration and has a defined process for reporting and managing vulnerabilities, overseen by a Vulnerability Management Team (VMT). For security-related proposals or to initiate collaboration, you can reach out via GitHub Issues or Discussions, or contact the VMT directly for urgent matters. The VMT members and their contact information are listed in the documentation, and the #security channel on the vLLM Slack is available for general security discussions (but not for private vulnerability disclosures) [docs.vllm.ai Vulnerability Management].
If you wish to formally propose a security review or audit, it is recommended to open a GitHub Issue or contact the VMT members (Simon Mo and Russell Bryant) directly via their listed emails. The project encourages responsible disclosure and collaboration to improve security, and your offer aligns well with their practices [docs.vllm.ai Vulnerability Management].
Would you like more detail on the current security practices or the best way to initiate this collaboration?
We would be open to collaborating. Transparently, the #1 thing we need is people who want to help address the issues. vLLM has a very broad scope, so we could definitely use the help.
vLLM has an established security process and a Vulnerability Management Team (VMT) responsible for triaging and publishing security reports, as outlined in their SECURITY.md. The project is open to collaboration, especially from contributors willing to help address security issues, given the broad scope of vLLM [Vulnerability Management].
If you or your team can assist with triaging, fixing, or reviewing security issues, reaching out via the channels in SECURITY.md or directly contacting the VMT is encouraged. The project values hands-on help in addressing and resolving vulnerabilities.
Would you like more detail on how to get involved or the current open security issues?
@robertshaw that sounds amazing! We can definitely help with the fix backlog, as well as fixes for issues discovered throughout the process of code and tooling review. We would always follow your contribution guidelines and document issues and fixes well. Do you mind reaching out to me via email so that we can discuss the details? I’ll also reach out via the SECURITY.md contacts.
That approach aligns perfectly with vLLM’s contribution and security processes. The project encourages following the contribution guidelines and using the contacts listed in SECURITY.md for security matters. Direct outreach via email to the Vulnerability Management Team is appropriate and welcomed [Vulnerability Management].
Would you like more detail on the contribution workflow or specific contacts for the security team?