I also noticed that ChatGPT can't actually correct itself. It just says "oh sorry, here's something different" and gives you another crap answer. I noticed it with code specifically. If I remember correctly, it was better when it was brand new.
The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it is basically impossible to get ChatGPT to stop apologizing: if you ask it to stop, it will apologize for apologizing.
I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.
It's because humans have rated potential responses and ChatGPT has been trained (via reinforcement learning from human feedback) to generate the kind of responses that raters most consistently prefer. You can imagine how an AI trained to say what people want to hear would become a people pleaser.
That's what frustrates me the most whenever I try to use it. I tell it to be less verbose, to stop over-explaining and apologizing every time I correct it, and it just spits out another four paragraphs explaining why it's sorry.
The only solution I can think of is to use the API from Python and make a second call with the final reply, asking it to remove the apologies from the text, though that increases token usage.
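A minimal sketch of that two-pass idea. The `ask_model` parameter here is a hypothetical stand-in for whatever chat-completion client you actually use; only the message construction is shown concretely.

```python
# Two-pass cleanup: get the model's reply first, then send a follow-up
# request asking it to strip apologies. ask_model is a hypothetical
# stand-in for a real chat-completion call.

STRIP_PROMPT = (
    "Rewrite the following text with all apologies and filler removed. "
    "Keep the content unchanged.\n\n{reply}"
)

def build_cleanup_messages(reply: str) -> list[dict]:
    """Build the second-pass request that removes apologies from a reply."""
    return [
        {"role": "system", "content": "You edit text. Output only the edited text."},
        {"role": "user", "content": STRIP_PROMPT.format(reply=reply)},
    ]

def deapologize(reply: str, ask_model) -> str:
    """Second round trip: this is where the extra token cost comes from."""
    return ask_model(build_cleanup_messages(reply))
```

Roughly doubling the calls is the trade-off mentioned above, so it only makes sense when the apologies actually get in the way.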
I do something similar when I need the model to keep the language of a text while performing a task on it. I send the model a chunk of text and ask it to respond with a single word indicating the language of the text, and then I include that in the next prompt, like "Your output must be in SPANISH", or whatever.
It cannot read. It doesn't see words or letters. It works with tokens that words are converted into. It can't count the number of letters in a word because it can't see them. OpenAI has a tokenizer you can plug a prompt into to see how it's broken up, but you're asking a fish to fly.
Is there a workaround to "trick" it into understanding letters? I'd love to use it to play with language and brainstorm some riddles or other wordplay, but if it literally can't understand language on a human level, that's a fool's errand.
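One workaround people try (it helps sometimes, no guarantees) is to spell the word out with separators, so the tokenizer is more likely to give each letter its own token and the model can actually "see" them:

```python
def spell_out(word: str, sep: str = "-") -> str:
    """Separate letters so a tokenizer tends to emit one token per letter."""
    return sep.join(word)

# Then prompt with something like:
# "How many times does the letter R appear in s-t-r-a-w-b-e-r-r-y?"
```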
Y'all seem to gloss over the word "artificial" when reading "artificial intelligence". That, or you're leaning too hard on the first definition:
made or produced by human beings rather than occurring naturally, especially as a copy of something natural.
"her skin glowed in the artificial light"
(of a person or their behavior) insincere or affected.
"an artificial smile"
It's just so counterintuitive for a layman to have this tool that can write long flowing passages of text and theoretically pass a rudimentary Turing test, but it can't even begin to work with language on the level most toddlers can. We humans typically have to learn letters before we move up to words, sentences, paragraphs, and finally whole compositions. But this thing skipped right over the first several milestones and has no mechanism for reverse engineering that capability.
ChatGPT doesn't understand letters, or phonetics, or most other aspects of speech. I tried for an hour to train it to understand what a palindrome is, with the hopes of getting it to generate some new ones. Nothing stuck. It was like trying to teach a dog to write its name.
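A more workable division of labor than training it: ask the model for candidate phrases, then verify them in code, since the letter-level check is trivial for a program even though the model can't see letters. A minimal checker:

```python
def is_palindrome(text: str) -> bool:
    """Check letters only, ignoring case, spaces, and punctuation."""
    letters = [c.lower() for c in text if c.isalpha()]
    return letters == letters[::-1]

# Filter model-generated candidates down to the real palindromes.
candidates = ["A man, a plan, a canal: Panama", "Not a palindrome"]
palindromes = [c for c in candidates if is_palindrome(c)]
```

The model is still doing the creative part (proposing phrases); the code just does the letter-counting it structurally can't.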
It has not. ChatGPT has been a monumental achievement and has been capable of performing previously impossible and highly impressive tasks. This is new behavior for it.