Every time I've asked ChatGPT to code something, it seems to lose the thread halfway through and starts producing nonsensical code. I asked it to do something simple in HP-41C calculator code and it invented functions out of whole cloth.
I asked it for something in PowerShell and it did the same thing. When I asked how it came up with that function, it admitted the function doesn't exist, but said that if it did, that's how it would work.
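One cheap sanity check against invented functions is to verify the name actually exists in the real library before trusting the snippet. A minimal sketch in Python (the helper `function_exists` is my own illustration, not anything ChatGPT-specific):

```python
import importlib


def function_exists(module_name: str, func_name: str) -> bool:
    """Return True if module_name really exposes a callable func_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))


# A real function passes the check:
print(function_exists("math", "sqrt"))       # True
# A hallucinated one fails it:
print(function_exists("math", "hyperbolate"))  # False
```

PowerShell has an analogous check built in: `Get-Command SomeName` errors out if no such cmdlet or function is defined.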
Quality of output depends a lot on how common the code is in its training data. I'd guess it's best at something like Python, given the wealth of teaching materials and examples out there.
It depends on how common the language is and how novel the idea is. It cannot create something new. It isn't creative. It spits out what is predictable based on what other people have written before. It isn't intelligent. It's glorified auto-complete.
When it starts going off the rails like that, I also ask it to "check its work when it's done," and that seems to extend the amount of usable time before it loses the plot and suggests I use VBA or something.