Strange that they don't just use an open-weights model; several now surpass GPT-3.5, and that level is probably good enough for what they need.
Mixtral 8x7B just came out. It codes better than ChatGPT in the few prompts I've tried so far, and I can run it at 2 to 3 tokens per second on my GPU-less laptop.
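
For anyone curious how that works CPU-only: a minimal sketch using llama-cpp-python with a quantized GGUF build of Mixtral. The model filename, context size, and thread count here are assumptions, not what I actually used; point them at whatever quantized file and core count you have.

    # Minimal CPU-only sketch (pip install llama-cpp-python).
    # Model path and n_threads are placeholders -- adjust for your machine.
    from llama_cpp import Llama

    llm = Llama(
        model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # assumed local quantized build
        n_ctx=4096,      # context window
        n_threads=8,     # CPU threads; match your core count
    )

    out = llm(
        "Write a Python function that reverses a linked list.",
        max_tokens=256,
    )
    print(out["choices"][0]["text"])

The 4-bit-ish quantizations are what make this feasible in laptop RAM; speed scales roughly with memory bandwidth and thread count, which is where the 2 to 3 tokens per second comes from.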