what's extremely funny to me is that this exact phrase was used when I was in college, in courses on AI and natural language processing, to explain why you shouldn't do exactly what the OpenAI team later did. we were straight up warned not to do it, with a discussion on ethics centered on "what if it works and you don't wind up with a model that spews unintelligible gibberish?" (the latter was mostly how it went back then - neural nets were extremely hard to train). there were a couple of kids who were like "...but it worked..." and the professor pointedly made them address the consequences.
this wasn't even some liberal arts school - it was an engineering school with barely a couple of profs qualified to teach philosophy and ethics. it just used to be the normal way the subject was taught, back when it was still normal to discourage the use of neural nets for practical and ethical reasons (like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage).
I went back when the ML boom hit and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was because the students complained about it and the new department had told him to focus on teaching them how to write/use the tech, and they'd add an ethics class later.
instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can't remember what it said a paragraph ago. I feel old.
> (like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage)
Microsoft Tay, after one day exposed to internet nazis
> I went back when the ML boom hit and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was because the students complained about it and the new department had told him to focus on teaching them how to write/use the tech, and they'd add an ethics class later.
> instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can't remember what it said a paragraph ago
And like the LLMs themselves, they'll confidently be wrong and assume knowledge and mastery that they simply don't have, as seen in this thread.
> Microsoft Tay, after one day exposed to internet nazis
even less coherent. a neural net trained on a bad corpus won't even produce words. it's like mashing your face on the keyboard in a way that produces things that sound like words, inserted into and around actual, incomprehensible text. honestly, reading what gpt3 produced, I think that's what was happening to a degree, and they were doing postprocessing to extract usable text.
> And like the LLMs themselves, they'll confidently be wrong and assume knowledge and mastery that they simply don't have, as seen in this thread.
did they get banned? I expected more angry, nonsense replies. "did chatgpt write this?" is such a fun and depressing game.