Running large open-weight models such as Kimi K2 on the Groq platform with queued execution can significantly improve inference throughput, since the queue keeps the hardware saturated with work rather than leaving it idle between requests.
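As a rough illustration, here is a minimal sketch of client-side queued execution against Groq's OpenAI-compatible chat completions endpoint. The model ID `moonshotai/kimi-k2-instruct`, the worker count, and the prompt list are assumptions for illustration only; check Groq's documentation for the currently hosted models and any server-side batch options.

```python
import os
import queue
import threading

import requests

# Groq exposes an OpenAI-compatible REST API at this base path.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"
MODEL = "moonshotai/kimi-k2-instruct"  # assumed model ID for Kimi K2 on Groq
NUM_WORKERS = 4  # illustrative; tune to your rate limits

# Fill a queue with pending prompts (illustrative examples).
prompts: "queue.Queue[str]" = queue.Queue()
for p in [
    "Summarize the benefits of queued inference.",
    "Explain throughput versus latency in one paragraph.",
    "List three uses for open-weight language models.",
]:
    prompts.put(p)


def worker() -> None:
    """Drain the queue, sending one request per prompt."""
    headers = {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"}
    while True:
        try:
            prompt = prompts.get_nowait()
        except queue.Empty:
            return  # queue drained; worker exits
        resp = requests.post(
            GROQ_URL,
            headers=headers,
            json={
                "model": MODEL,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])
        prompts.task_done()


threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The queue plus worker threads keeps several requests in flight at once, which is where the throughput gain comes from; per-request latency is still governed by the serving platform itself.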