@agamemnonymous No, it looks like it beforehand. ChatGPT's just a language prediction engine, but people think it can think. It can only discern what the most probable language patterns are; it can't make judgements. But people are arguing it is working off inspiration.
And we've KNOWN it will look like it beforehand, that's why there's even concepts like a Turing test, to prepare us for discerning the illusion of intelligence from actual intelligence.
Personally, I suspect social media and the way that Bigsoc companies hack the human mind using feed algorithms is an argument for a Non-AI Singularity, and more likely than a math engine that predicts the next word in an astoundingly natural way.
I think you may underestimate the nature of exponential positive feedback. The AI singularity centers around an inflection point of self-programming before which noticeable improvements take place over months and weeks, and after which they take place over seconds and microseconds. Self-modification iterates faster than you can record.
It has nothing to do with "inspiration" or "actual intelligence". It is entirely based on self-modification, and the "illusion" of intelligence is sufficient for that task. Eventually, the illusion is indiscernible from reality (spoken as a very complex method of distributing gametes).
@Yendor Point is that it's jumping the gun to think we can escape climate change by rocketing to Mars and terraforming the climate there, rather than just concentrating on terraforming Earth back to a liveable environment and THEN worrying about moving elsewhere. If we can't keep Earth inhabitable, we can't make Mars inhabitable.
Just like people who think Large Language Models are genuine AI are completely jumping the gun about what we're capable of coding right now.
People saying LLMs are a singularity don't know how LLMs work.
When you feed ChatGPT some text, what you get out isn't the result of the text being "thought about" by a neural network. It's the result of the text being processed by a deterministic algorithm, whose parameters were decided by a neural network's training far in advance. That's what neural nets do: they contribute parameters to other algorithms that, while often complicated, are deterministic. If ChatGPT were to become sentient somehow, it would be happening behind the scenes, in a training process you've literally never interacted with.
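To make that concrete, here's a toy sketch (with made-up weights, nothing to do with ChatGPT's actual internals): once training has fixed the parameters, running input through the network is just ordinary deterministic arithmetic. The same input always produces the same output, and nothing about the network changes when you use it.

```python
# Toy "neuron" whose weights were fixed by training long before any
# user input arrives. Inference is then a plain deterministic function.
def neuron(inputs, weights, bias):
    # weighted sum followed by a simple threshold activation
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# Parameters "decided far in advance" by training; frozen at inference time.
FROZEN_WEIGHTS = [0.8, -0.5, 0.3]
FROZEN_BIAS = -0.1

# Same input, same output, every time: no learning happens here.
print(neuron([1.0, 1.0, 0.0], FROZEN_WEIGHTS, FROZEN_BIAS))
```

The only "intelligent" part happened during training, when those numbers were chosen; everything you interact with afterward is mechanical application of them.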
This isn't a perfect example, but imagine you feed a program all of the books ever written. The program parses these books and keeps track of how often one word correlates with another, based on the frequency with which words appear alongside each other.
Now store all that data in a huge (35 GB) file. This file isn't human-readable; it's just a large table of all of these word correlations. Install this program with its large language model (the 35 GB file generated from parsing all the books) on a system or systems capable of doing lots of math fast, something like a high-end GPU.
Now, as a user, send a series of words to the program. The program will look at the words you have written and come up with words that correlate with what you have written and what the bot has already written.
"Correlate" isn't really the best term to use here, but statistics are computed from surrounding words. The program still acts like a program, just predicting the next word using the statistics stored in the LLM. The program doesn't know how to do math or write code, but it can have very convincing discussions on both, or anything really.
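The steps above can be sketched in a few lines. This is a deliberately crude stand-in (real LLMs use neural networks over token embeddings and far richer context, not a literal count table), but it shows the "predict the next word from statistics" idea: count which word follows which in a corpus, then pick the most frequent successor.

```python
from collections import Counter, defaultdict

def build_table(corpus):
    # Count how often each word follows each other word ("parsing the books").
    table = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    # Most probable successor according to the counts.
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

# A tiny stand-in for "all the books ever written".
corpus = "the cat sat on the mat the cat ate the fish"
table = build_table(corpus)
print(predict_next(table, "the"))  # "cat": it follows "the" most often
```

Scale the corpus up to everything ever written, swap the count table for a neural network's learned weights, and you have the shape of the thing: fluent output, no understanding required.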