People who use it to solve problems where writing a sufficient prompt takes as much effort as just solving the problem directly, without AI at all, are for sure AI folk.
Are you implying that Dream is an AI? Or do people, like, upload AI copies of him?
If it’s the former, that’s absurd. If it’s the latter, that doesn’t prove consciousness. AI voice replication is done by text to speech. A conscious human would type out what the voice should say.
I work in the tech industry so I am forced to understand new technologies. This one is unique in that it plays on the human brain’s innate tendency to anthropomorphize things.
It also does a great job of presenting information.
I've seen some people on Twitter complain that their coworkers use ChatGPT to write emails or summarize text. To me this just echoes the complaints made by previous generations against phones and calculators. There's a lot of vitriol directed at anyone who isn't staunchly anti-AI and dares to use a convenient tool that's available to them.
I think my main issue with that use case is that it's a "solution" to a relatively minor problem (one with a far simpler solution), and it actually compounds the problem.
Let's say I don't want to write prose for my email; I just have a list of bullet points I want to get across. Awesome, I feed it into the chat gippity and boom, my points are (hopefully) properly represented in prose.
Now, the recipient doesn't want to read prose, ESPECIALLY the fluffy, wordy, internet-recipe-preamble style the chat gippity tends to produce. They want a bullet-point summary. So they feed it into the chat gippity to get what is (hopefully) a properly condensed bullet-point summary.
So suddenly we've introduced a fallible translation layer in the middle, for no real reason.
Just write the clear bullet-point email in the first place. Save everyone the time. Save everyone from the two chances for the chat gippity to fuck it up.