I'm super happy that people share stuff like this to discredit AI and say that it doesn't help with code.
And then I correct it and say "hey yeah there's two Ls in a row" and it apologises for its mistake and gives me perfect code.
Yeah, it can't give you perfectly working code 100% of the time for every single thing on the first try. What it can do is get you 75% of the way there, and then get you to 99% with a short conversation.
The issue I mostly have is that it corrects the error I pointed out but introduces a new one. I then point out the new error and it reintroduces the old one.
But overall it's still very helpful, so helpful that I actually pay for Plus. The web plugin is awesome for quickly searching through documentation.
I wouldn't say I'm discrediting it, given that I use it on a regular basis.
But I have seen companies choose the generative mess from ChatGPT over actually experienced devs or writers, and based on just the first response at that. They won't even correct it, because they don't even read it fully.
I know it sounds like a rare occurrence, but boy oh boy do I have news for you.
Most times it works like that, but sometimes it gets stubborn and can't give me a correct answer. Try asking the AI something like: which variables are unused in this code?
...
I knew at least some were unused, but ChatGPT refused to acknowledge it, until it finally did and also included a bunch of other variables that were actually used.
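This is the kind of question where a deterministic check beats guessing. As a rough sketch (a simplified version of what linters like pyflakes do, ignoring scopes and other edge cases), Python's `ast` module can flag names that are assigned but never read:

```python
import ast

def find_unused_variables(source: str) -> set[str]:
    """Names assigned somewhere but never loaded anywhere.

    A rough sketch: real linters also track scopes, imports,
    augmented assignment, del, etc.
    """
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)   # name being written
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)       # name being read
    return assigned - used

code = """
x = 1
y = 2
z = x + 1
print(z)
"""
print(find_unused_variables(code))  # {'y'}
```

Unlike the chatbot, this answer doesn't change when you push back on it.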
The original prompt was a description of a use case where my client needed to be able to identify if a cell contains a repeated letter. I can't give you the exact prompt since I don't have access to their instance. This is a screenshot from them.
But I'll ask them to give it a whirl in GPT-4 and report back 🫡.
You would be making a huge mistake to look at this and say "AI isn't there yet. We have nothing to worry about."
What we have here is just a misunderstanding about how GPT-3.5 sees words and to what degree it can examine its own output. The user has found a double blind spot, where GPT-3.5 appears to be weakest, because we assume that it "sees" the interface as we do and thinks in the same way we do.
A couple of things to keep in mind: you can easily plug GPT into additional functions, which will give it whatever language-parsing abilities you need. Secondly, you can simply ask GPT to review its answers, and you'll get a big improvement on just the first pass.
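For the client's use case above (does a cell contain a repeated letter?), this is exactly the kind of deterministic helper you'd hand to the model as a tool instead of asking it to eyeball letters it literally cannot see. A minimal sketch, assuming "repeated letter" means the same letter twice in a row:

```python
import re

def has_repeated_letter(cell: str) -> bool:
    """True if the string contains the same letter twice in a row."""
    # ([A-Za-z]) captures one letter; \1 requires that same letter immediately after.
    return re.search(r"([A-Za-z])\1", cell) is not None

print(has_repeated_letter("balloon"))  # True ("ll", "oo")
print(has_repeated_letter("orange"))   # False
```

Because tokenization hides individual characters from the model, routing the question to a three-line function like this is far more reliable than any amount of prompting.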
Moreover, GPT-4 is a completely different animal. Much smarter right out of the box.
Lastly, it's really important to think about large language models like GPT-3 and GPT-4 as if they were the language centers of the brain... not the complete brain.
They are capable of extremely advanced reasoning and creativity. And with just a little bit of know-how, even an amateur programmer can plug these things into additional tools which turn them into, essentially, AGIs... like Jarvis from Iron Man, or HAL from 2001.