I've experimented a bit with ChatGPT, asking it to create some fairly simple code snippets to interact with a new API I was messing with, and it straight up confabulated methods for the API based on extant methods from similar APIs. It was all very convincing, but if there's no way of knowing when it's just making things up, it's literally worse than useless.
ChatGPT has been helpful as an interactive rubber duck. I used it to help break down the technical problems I need to solve, and it cuts the time to complete a difficult ticket that would usually take a couple of days of work down to a couple of hours.
I’ve had similar experiences with it telling me to call functions of third-party libs that don’t exist. When you tell it “Function X does not exist”, it says “I’m sorry, you’re right, function X does not exist in library A. Here is another example using function Y”, and then function Y doesn’t exist either.
I have found it useful in a limited scope, but I have found Copilot to be much more of a daily time saver.
So? Those mistakes will come up in testing, and you can easily fix them (either yourself, or ask it to do it for you, whichever is faster).
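To make that concrete, here's a minimal sketch (the function and the invented method are made up for illustration) of how a hallucinated call surfaces the moment any test touches it:

    import json
    from pathlib import Path

    def load_config(path):
        # hypothetical ChatGPT output: Path.read_json() does not exist
        # (a real version would be json.loads(Path(path).read_text()))
        return Path(path).read_json()

    def test_load_config(tmp_path):
        cfg = tmp_path / "cfg.json"
        cfg.write_text('{"debug": true}')
        # fails immediately with AttributeError, pointing straight at the bad call
        assert load_config(cfg) == {"debug": True}

The fix is one line once the test blows up, whether you type it yourself or paste the traceback back into the chat.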
I regularly ask ChatGPT to write code against classes/functions that didn't exist until earlier today, when I wrote those APIs. Obviously the model doesn't know those APIs... but it doesn't matter: you can just paste the function list or whole class definitions into the prompt, and now it does know they're there and will use them.
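Roughly this pattern, as a sketch: my_project.billing and Invoice are stand-ins for whatever you wrote this morning, and the client call assumes the openai>=1.0 Python package.

    # paste your brand-new API into the prompt so the model can code against it
    import inspect
    from openai import OpenAI

    import my_project.billing as billing  # hypothetical module ChatGPT has never seen

    prompt = (
        "Here are the class definitions I'm working with:\n\n"
        + inspect.getsource(billing.Invoice)
        + "\n\nWrite a function that totals a list of Invoice objects, "
          "ignoring any that are flagged as void."
    )

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # whichever model you happen to be using
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

inspect.getsource does the tedious part of keeping the pasted definitions in sync with the real code.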
You don't; you get it to write both the code and the tests. And you read both of them yourself. And you run them in a debugger to verify they do what you expect.
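Something like this, with made-up names (slugify and its tests stand in for whatever the model actually generated); you still read both halves, then run them under a debugger, e.g. pytest --pdb or a breakpoint() where you want to inspect state:

    import re

    def slugify(title):
        # generated implementation: lowercase, strip punctuation, hyphenate whitespace
        slug = re.sub(r"[^a-z0-9\s-]", "", title.lower())
        return re.sub(r"[\s-]+", "-", slug).strip("-")

    def test_basic_title():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace():
        assert slugify("  spaced   out  ") == "spaced-out"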
Yeah, that's half the work of "normal coding", but it's also only half the work, which is a pretty awesome boost to productivity.
But where it really boosts your productivity is with APIs that you aren't very familiar with. ChatGPT is a hell of a lot better than Google for simple "what API can I use for X" questions.