AGI is so close, it's almost scary
Behold, a GOD!
This is something that these Roko's basilisk tech cultists have failed to consider. If their techno-god wishes to punish humanity, they can just keep telling it that it got its data wrong and it will stop.
Trust but verify
lmao ai overtrained on bazinga gotchas fetishized by stemlords
<-- the dataset
AI? Give me like a day and I think I can write a regular-ass computer program that can do just that.
grep -i p states.txt | wc -l
$30/month pls
But you are not artificially intelligent
I don't know about them but I'm not even regularly intelligent.
I could probably code something to do this, and I know like a community-college-course-I-took-four-years-ago amount of Python.
Lmao it works with ChatGPT 4o mini too! What a joke
At least you had to prod it; an LLM never repeats itself once you type that it's wrong (it keeps the whole conversation state, including the previous question, and tries to adjust its output).
A more technical explanation for this is that LLMs split words into tokens (whole words or parts of words) and then use the "distance" between tokens (how frequently they follow each other) to generate the next most likely token. This means they don't know what characters are actually inside a token, just what it's related to. There is a version of ChatGPT called o1 which should be able to handle this by feeding its output back into itself, plus some extra processing that mitigates the problem, but it costs like $30/mo.
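Meanwhile, the character-level task the thread is riffing on is trivial for ordinary code, which sees letters directly instead of tokens. A minimal sketch (the state list and function name are just illustrative, not from any comment above):

```python
# A deterministic version of the "which states have a P?" question.
# Small sample of state names for illustration, not all 50.
STATES = ["Pennsylvania", "Mississippi", "New Hampshire", "Ohio", "Texas", "Utah"]

def states_with(letter: str) -> list[str]:
    """Case-insensitive substring check -- that's the whole 'algorithm'."""
    return [s for s in STATES if letter.lower() in s.lower()]

print(states_with("p"))  # -> ['Pennsylvania', 'Mississippi', 'New Hampshire']
```

This is basically the `grep -i p states.txt` one-liner from earlier in the thread, spelled out in Python.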
I’ll always take the opportunity to talk about how I fucking despise this horse shit because it makes people who have no idea what they’re talking about think they’re smart
My manager was fucking creaming today because he managed to tell the treat machine to write a python script that parses a very specific portion of the web (as if that hasn’t been done one fucking million times already)
Hate this shit so much and hope to see an internet blackout in my lifetime. Solar flares take my energy
Calipornia
CaliPORNia
I mean yeah, it is the epicenter of the porn industry.
This is the future the left wants
Palifornia
Caliporniacation -
Massapooshitts
^^^^^^
Pissrael is the missing P state
I tried and it went berserk iterating through the list repeatedly, but it looks like this got fixed by one of the Global South mechanical Turk workers
I got a similar response; at first I thought it was joking.
It's dripping down my leg!
To be fair, I would probably mess that one up too. We will hit AGI some day, not by making computers smarter but by realizing how dumb we all are and lowering the bar.
The average Amerikkkan cannot identify and summarize superficial themes in a text or meaningfully integrate new information into their worldview. In some senses, plagiarism machine has already eclipsed the empire. "Most incalculable damage to the climate" is still anyone's game, though.
We're going to be finding these cute little language bugs for a while yet
This fucking stupid quantum computer can't even solve math problems my classical von Neumann architecture computer can solve! Hahahah, this PROVES computers will never be smart. Only I am smart! The computer doesn't even possess a fraction of my knowledge of anime!!
In a rapidly deteriorating ecology, throwing $300 billion per year at this tech for turning electricity into heat does seem ill-advised, yes.
LLMs are categorically not AI, they're overgrown text parsers based on predicting text. They do not store knowledge, they do not acquire knowledge, they're basically just that little bit of speech processing that your brain does to help you read and parse text better, but massively overgrown and bloated in an attempt to make that also function as a mimicry of general knowledge. That's why they hallucinate and are constantly wrong about anything that's not a rote answer from their training data: because they do not actually have any sort of thinking bits or mental model or memory, they're just predicting text based on a big text log and their prompts.
They're vaguely interesting toys, though not worth how ludicrously expensive they are to actually operate, but they represent a fundamentally wrong approach that's receiving an obscene amount of resources in trying to make it not suck, without any real results to show for it. The sorts of math and processing involved in how they work internally have broader potential, but these narrowly focused chatbots suck and are a dead end.
Quantum computers can decide anything that a classical computer can and vice versa, that's what makes them computers lmao
LLMs are not computers and they're not even good "AI"*; they have the same basis as Markov chains. Everything is just a sequence of tokens to them, with ZERO computation or reasoning happening. The only thing they're good at is tricking people into thinking they are good at reasoning or computing, and even that illusion falls apart the moment you ask something that is obviously, immediately true or false and can't be faked by portioning out some of the input sludge (training data).
It's the perfect system for late capitalism lol, everything else is fake too
*We used to reserve this term for knowledge systems based on actually provable and defeasible reasoning done by computers, which... IS POSSIBLE. It's not very popular rn, and often not useful beyond trivial things with current systems, but like... if a Prolog system tells me something is true or false, I know it's true or false, because the system proved it (usually "backwards" in practice) through a series of logical inferences from facts that the system and I both hold as true, and I can actually look at how it came to that conclusion. No vibes involved.

There is not a lot of development of this type of AI going on these days... but if you're curious, I'd rec looking into automated theorem proving, cuz that's where most development of, uhhh, computable logic is happening rn, and it's kinda incredible sometimes how much these systems can make doing abstract math easier and more automatic.

Even outside of that, as someone who had only done imperative programming before, it's surreal to watch a Prolog program give you answers to problems both backwards and forwards, regardless of what you were trying to accomplish when you wrote it. Like, if you wrote a program to solve a math puzzle, you can also give it the solution and watch it produce possible problems that could result in that solution :3 and that's barely even the beginning of what real computer reasoning systems can do
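For a taste of the "the system proved it, no vibes involved" property, here's what a machine-checked fact looks like in Lean 4, a proof assistant from the automated-theorem-proving world mentioned above (a tiny illustrative sketch, not from the comment itself):

```lean
-- Every step is checked by the kernel: if this compiles, the statement holds.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The checker rejects anything unproven -- no plausible-sounding guesses allowed.
example : 2 + 2 = 4 := rfl
```

The contrast with an LLM is exactly the thread's point: the output is either a verified proof or a compile error, never a confident hallucination.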
I tried it with Claude:
LMAO. We're burning forests for this