Async version of LLM.chat()?

Is there an async version of the LLM.chat() method? I checked the source code but may have missed it. If it doesn't exist, is there a simple way to implement it on top of AsyncLLMEngine without redoing everything chat() does?
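
For context, here's roughly what I have in mind — a minimal sketch that applies the chat template manually via transformers and feeds the result to AsyncLLMEngine.generate(). The model name is just a placeholder, and I'm aware this skips things the real chat() handles (multi-modal content, tool formatting, etc.):

```python
import asyncio
import uuid

from transformers import AutoTokenizer
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model

engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model=MODEL))
tokenizer = AutoTokenizer.from_pretrained(MODEL)


async def chat_async(messages, sampling_params: SamplingParams) -> str:
    # Apply the model's chat template ourselves, since AsyncLLMEngine
    # only accepts plain prompts, not chat message lists.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    # generate() is an async generator that streams partial RequestOutputs;
    # keep only the last one, which holds the completed text.
    final_output = None
    async for output in engine.generate(prompt, sampling_params, str(uuid.uuid4())):
        final_output = output
    return final_output.outputs[0].text


async def main():
    reply = await chat_async(
        [{"role": "user", "content": "Hello!"}],
        SamplingParams(max_tokens=64),
    )
    print(reply)


asyncio.run(main())
```

Is something like this the recommended approach, or is there a built-in way I'm missing?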

Thanks!