The public versions of the AIs used in Turing tests usually have less computing power than their commercial counterparts. The test itself is also often highly restrictive in what questions can be asked and how.
This is hardly news: models have passed the test before, and each time the test was made more difficult as a result. Passing it says nothing about intelligence, only about the ability to convincingly simulate a human conversation.
In the 1970s, psychiatrists couldn't distinguish between PARRY, a chatbot simulating paranoia, and a human patient suffering from schizophrenia. So this 'bot convinces judges that it's a human' phenomenon is not new, and it tells us more about how humans think than about the bots. #AI #DeceptionDetection #Chatbots #TuringTest #LLMs #GPT4
Forget psychotherapy, it can barely understand anything:
> Hello, I am Eliza. I'll be your therapist today.
* Hello! How are you?
> Would you prefer if I were not ?
* Not what?
> Have you asked such questions before?
* No
> You are being a bit negative and that won't help us move forward.
* What are you smoking?
> Perhaps in your fantasies I am smoking?
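The transcript above is exactly what ELIZA-style "understanding" amounts to: a keyword pattern, a captured fragment with pronouns flipped, and a canned template. A minimal sketch in Python (illustrative only; the rules and wording are assumptions, not Weizenbaum's original 1966 script):

```python
import re

# A few ELIZA-style rules: a regex with one captured fragment, and a
# response template the (pronoun-reflected) fragment is pasted into.
RULES = [
    (re.compile(r"i am ([^?.!]*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"are you ([^?.!]*)", re.I), "Perhaps in your fantasies I am {0}?"),
    (re.compile(r"\bno\b", re.I), "You are being a bit negative."),
]

# Swap first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    # First matching rule wins; no state, no semantics, no memory.
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # content-free fallback when nothing matches
```

That the "What are you smoking?" exchange is reproducible from three regexes is the whole point: the bot matches surface strings, nothing more.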