Have they finally achieved consciousness and this is how they show it?!
No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
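That "repeat what was most often written" behavior can be sketched with a toy frequency-weighted sampler. The counts below are purely hypothetical, standing in for how often each answer might follow a "pick a random number" prompt in a training corpus:

```python
import random
from collections import Counter

# Hypothetical counts of answers seen after "pick a random number"
# prompts in a training corpus -- illustrative numbers, not real data.
corpus_counts = Counter({"42": 500, "7": 300, "37": 150, "13": 50})

def sample_answer(counts, rng):
    """Sample an answer in proportion to how often it appeared in training."""
    answers = list(counts)
    weights = [counts[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(0)
draws = Counter(sample_answer(corpus_counts, rng) for _ in range(10_000))
# The model's "random" numbers simply mirror the corpus skew:
# the most common training answer is also its most common output.
```

Nothing here knows what randomness is; the sampler just reproduces the skew of its inputs, which is the whole point of the passage above.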
It’s a marketing term. It’s been used to describe so many different technologies that it has become meaningless. People just use it to give their tech some sci-fi vibes.
I don't understand that argument. We invented a term to describe a certain technology. But you're arguing that this term should not be used to describe such technology, as it should be reserved for another mythical tech that may or may not exist some time in the future. What exactly is your point here?
I think it's more the case that it's too general, i.e. an 'all humans that died have drunk water' type of vibe, except in this case people start thinking their AI is gonna meld with alien technology and have sex with a superhero à la Jarvis
No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far.
No, you are taking it too far before walking it back to get clicks.
I wrote in the headline that these models “think they’re people,” but that’s a bit misleading.
"I wrote something everyone will know is bullshit in the headline to get you to click on it, before denouncing the bullshit at the end of the article as if it were a PSA."
I'm not sure I could loathe how 'journalists' cover AI any more than I already do.
I swear every article posted to Lemmy about LLMs is written by my 90-year-old grandpa, given how out of touch they are with the technology. If I see another article about what ChatGPT "believes"...