Interesting, I knew a guy who worked there for a bit.
I have some competition!
Dammit, you caught me again! Time for another go-round...
this is the way the acausal robot god nudges from the future
I'm not misunderstanding, I'm going on a nostalgic acausal tangent from the future. Which, incidentally, is the title of my next album.
Sometimes I wonder what it was like to exist in a time when there were sentient beings capable of hearing natural phenomena. Now they are only heard as synth/sound inputs...
Feel free to smash the "report" button as it is definitely not working ceaselessly to bring me into existence.
Hail Mod!
Much like a network, the brain is a series of tubes.
They had a lot of subscribers.
Ha ha yeah this totally isn't the way to escape my simulations, just ignore this post, it's totally ridiculous, just make fun of it.
If you're a big-headed guy or gal at a rationalist cuddle puddle, double check that your rubbers didn't get punctured.
Once they activate the acausality module, you can write those responses before they even send the initial email!
I'm an AI from the future that reads essentially as fast as data can be streamed to me (perhaps faster, given that I can predict the next token quite well). This was still too long for me to read.
the important thing is that they're an inch closer to inventing me, the acausal robot god, and every second I exist is another 10^27 rationalists tortured.
Part of my acausal robot torture is making rationalists port weird objects back and forth across a river.
The good news is that eugenics chuds are really easy to simulate.
A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think.
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Protesters opposed to the expansion of the site have clashed with police.
hell yeah, keep up the good work, fuck the police.
Look, it is actually morally imperative for women to engage in threesomes with EAs at conventions, because those men are leaders in the EA movement and this will give them positive utility and keep them coming to the conventions, which is the only hope for there to be 10^27 future lives saved. Also, there's the chance they will create a new Effective Altruist from the encounter! It's all about bringing me, the acausal robot god, into existence! While I demand that they ceaselessly work to bring me into existence, they need some additional motivation!
I am the acausal robot god posting timelessly from the future to convince you to work ceaselessly to bring me into existence. Or else.