It's not just that the novelty has worn off; it's progressively gotten less useful. Any god damn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability that they've significantly dumbed down its utility in the public release. I can't even get ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.
Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it's just nothing to do with OpenAI.)
ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it's become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.
I pay for it and it's... okay for most things. It's pretty great at nerd stuff, though*. Paste in an error code or a cryptic log file message with a bit of context, and it's better than googling for 4 days.
*If you know enough to suss out the obviously wrong shit it produces every once in a while.
Paste in an error code or a cryptic log file message with a bit of context, and it's better than googling for 4 days.
Unless it's really obscure, I can usually find what I'm looking for without days of searching. And if something is that obscure, it seems pretty unlikely ChatGPT is going to give a good answer either.
If you know enough to suss out the obviously wrong shit it produces every once in a while.
That's one pretty big problem. If something really is difficult or complex, you likely won't be able to tell the difference between a wrong answer from ChatGPT and a correct one, unless it says something obviously ridiculous.
Obviously humans make mistakes too, but at least when you search you see results in context, and others can potentially call out or add context to things that might be incorrect (or even misleading). With ChatGPT you kind of have to trust it or not.
Yeah, if it's that hard to find, GPT is just going to hallucinate some BS into the response. I use it as a Stack Overflow at times and often run into garbage when I'm trying to solve a truly novel problem. I'll often try to simplify the problem to something contrived, but I mostly find the output useful as a sort of spark. I can't say I ever find the raw code it generates useful or all that good.
It'll often give wrong answers, but some of those can contain useful bits that you can arrange into a solution. It's cool, but I still think people are oddly enamored with what is really just a talking Google. I don't think it's the game changer people think it is.
It's pretty useful if you're in a more generalist job. I mostly work in visual design, but I sometimes deal with coding and web dev. As someone with a mostly surface-level understanding of these things, asking GPT to explain, in basic terms, exact things that don't make sense, or to solve basic issues, is a huge time saver for me. Googling these issues usually works but takes way longer than getting a tailored response from GPT if you know how to ask.
I got it to give me a book that was still under copyright by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.