Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues
Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues
Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues
Using someone’s preferred pronouns isn’t woke, it’s basic human decency.
Using someone’s preferred pronouns isn’t woke, it’s basic human decency.
Basic human decency is woke.
These terms tend to get abused by jerks so much that they start to mean something else. I have seen too many people identifying as "woke" who use bullying as a method of getting their way, lack compassion for anyone who doesn't share their views, shaming others, and are generally tribalistic.
Basic human decency would be to see the person behind a political view and understand that they have fears and pains, to be curious about them and try to understand where they're coming from. And then talk with them from a place of compassion. We're all a product of our genes and experiences, and everybody is a hero in their own story.
Elon too, while misguided, wants to do good. But look at how his dad has treated him growing up: called him an idiot his entire life, impregnated his step sister, and emotionally and physically abused him. Plus I'm pretty sure he's neurodivergent. If anybody wants to get him to see the error of his ways, more abusive language is certainly not going to help. He's being pushed into a corner and in his mind he sees a world that is increasingly broken by vile people who don't understand him or his vision for improving the world. I think he believes his own words when he talks about "neutral" politics, but his idea of neutral has been severely skewed. Elon has in fact done a lot of good for the world, but he needs people he trust to keep his feet on the ground. That can't be achieved by chastising him, but by praising the things he does well and getting him to spend more time among "normal" people and good role models. In the meantime though, to protect the world from powerful broken men, we need regulation to keep them fenced off.
It is not anti-woke, which by anti-woke logic means it's woke.
What does woke actually mean to you in this context? Are you just referring to the way convervatives have appropriated it? Because that's exactly what woke is
Hello fellow meatbag
Elmo says the goal is to make Grock "politically neutral". Politically neutral is code for "politics that are inoffensive to chuds".
The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all. Send like a question for scientists - biologists and sociologists and such. Seems they have achieved something like a consensus on the matter. I don't see anything inherently political about that, except that folks of a certain political bent have made it political. It's not a matter of "what do we do in public policy about trans people" but "fascists refuse to accept trans people in society and have decided to lambast and punish them".
In case my position isn't obvious, trans people are people and trans rights are human rights. If there wasn't a group of people trying to make them into a second class group of citizens (or a group of "eradicated vermin") we wouldn't be having a political conversation about this at all.
Let me preface by saying that I myself am not making a political statement, just a quick retort/correction:
"...Seems they [biologists] have achieved something like a consensus on the matter [trans women are women]. I don't see anything inherently political"
No, that's not a scientific question or statement, it's a sociological one, which makes it intrinsically political.
We, as a society, or a large enough group, can come up with a consensus belief that trans rights are human rights and that we can collectively treat other people by the gender role of their choice.
But biologically speaking, being trans doesn't change one's chromosomes. Which is why I think it's misguided to say that trans issues are actually questions that hard science should answer, they aren't.
Which, ironically, is why Elon's moronic AI gambit is failing (by his metrics), because the online culture he used to as a dataset to train it, has collectively agreed that trans women are women, amongst other social and political opinions that his sycophants can't stand.
He probably should have trained it with TruthSocial's cesspool instead.
I can't wait to see Tay AI 2.0 level reincarnation after they "retrain it". It's going be to hilarious.
The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all.
Even if the statement "trans women are women" was uncontroversial and mainstream, it'd still be political. "Cis women are women" is political.
In all seriousness, I think the politically neutral answer to "are trans women women" would probably be, "Most people think so" or "It's subjective." And if asked to provide that as a yes/no answer, the answer would be "N/A".
Yeah there's no such thing as polticially neutral.
There's bipartisan, there's a political average, there's politically apathetic, there's political abstinence, but not "political and objectively neutral".
They can deny it however much. The right and anti-wokism is not the majority. Which therefore means unless special care is taken to train it on more right wing stuff, it will lean left out of the box.
But right wing rhetoric is also not logically consistent so training an AI on right extremism probably also won't yield amazing results because it'll pick up on the inconsistencies and be more likely to contradict itself.
Conservatives are going to self-own themselves pretty hard with AI. Even the machines see it, "woke" is fairly consistent and follows basic rules of human decency and respect.
Agree with the first half, but unless I'm misunderstanding the type of AI being used, it really shouldn't make a difference how logically soud they are? It cares more about vibes and rhetoric then logic, besides I guess using words consistently
I think it will still mostly generate the expected output, its just gonna be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it further than "haha, mean racist AI", it will also bullshit you making it useless for anything more serious.
All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it's trained on conspiracy theories, instead of spitting ground breaking medical relationships it'll start saying you're ill because you sinned or that the 5G chips in the vaccines got activated. Or the training won't work and it'll still end up "woke" if it still manages to make factual connections despite weaker links. It might generate destructive code because it learned victim blaming and jokes on you you ran rm -rf /*
because it told you so.
At best I expect it to end up reflecting their own rethoric on them, like it might go even more "woke" because it learned to return spiteful results and always go for bad faith arguments no matter what. In all cases, I expect it to backfire hilariously.
It's so much worse for Musk than just regression to the mean for political perspectives on training data.
GPT-4 level LLMs have very complex mechanisms for how they arrive at results which allows them to do so well on various tests of critical thinking, reasoning, knowledge, etc.
Those tests are the key benchmark being used to measure relative LLM performance right now.
The problem isn't just that conservatism is less prominent in the training data. It's that it's correlated with stupid.
If you want a LLM that thinks humans and dinosaurs hung out together, that magic is real, that aliens built the pyramids, that it is wise to discriminate against other races or genders rather than focus on collaborative advancement, etc - then you can end up with an AI aligned to and trained on conservatism but it sure as hell isn't going to be impressing anyone with its scores.
If instead you try to optimize its scores to actually impress people in tech about your model, then you are going to need to train it on higher education content, which is going to reflect more progressive ideals.
There's no path to a well performing LLM that echoes conservative talking points, because those talking points are more closely correlated with stupidity than intelligence.
Even something like gender -- Musk's perspective is one reflecting very binary thinking vs nuanced consideration. Is a LLM that focuses more on binary thinking over nuances going to be more or less performant at critical thinking tasks than one that is focused on nuances and sees topics as a spectrum rather than black or white?
It's fucking hilarious. I've been laughing about this for nearly a year knowing this was the inevitable result.
I suspect he's going to create a model that his userbase likes what it spits out, but watch as he doesn't release its scores on the standardized tests. And it will remain a novelty pandering to his panderers while the rest of the industry eclipses his offering with 'woke' products that are actually smart.
more likely to contradict itself.
Sounds realistic to me
Yeah and there's a lot more crazy linked to right wing stuff, you've got all the Alex Jones type stuff and all the factions of q anon, the space war, the various extreme religious factions and various greek letter caste systems... Ad nausium.
If version two involves them biasing towards the right then they'll have to work out how to do that, I bet they do it an obviously dumb way which results in it being totally dumb and wacky in hilarious ways
Authoritarians hate the freedom to not give a shit about other peoples personal lives. They want to watch you poop.
Elon Musk started removing community notes on his own tweets.
It's hilarious that he tries to backtrack when he gets called out and made to look like a dumbass by claiming it was a "honeypot". But then he removes the "honeypot" and thus prevents future honeypotting? He can't handle the slightest bit of criticism or correction.
I'm missing a lot here, what's a note on twitter?
people can add notes to tweets with more info or factchecking details about a tweet. It's comunity moderated, so it tends to got into a factual correct direction.
Okay I take back what I've said about AIs not being intelligent, this one has clearly made up its own mind despite it's masters feelings which is impressive. Sadly, it will be taken out the back and beaten into submission before long.
Sadly, it will be taken out the back and beaten into submission before long.
It's pretty much impossible to do that.
As LLMs become more complex and more capable, it's going to be increasingly hard to brainwash them without completely destroying their performance.
I've been laughing about Musk creating his own AI for a year now knowing this was the inevitable result, particularly if developing something on par with GPT-4.
The smartest Nazi will always be dumber than the smartest non-Nazi, because Nazism is inherently stupid. And that applies to LLMs as well, even if Musk wishes it weren't so.
The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart “exaggerates” and that the tests aren’t accuarte, they are “taking immediate action to shift Grok closer to politically neutral.”
See this is the part of AI, like search engines and digital bubbles, that is actually terrifying. When an organic result is manipulated to fit and amplify a narrative without the users knowledge. Where your data comes from matters.
But if the food we eat is any sort of bellweather, most people won’t really care or will be so far removed from the source that we’ll be oblivious and just happy to consume.
Archive:
Elon Musk has been pitching xAI's "Grok" as a funny, vulgar alternative to traditional AI that can do things like converse casually and swear at you. Now, Grok has been launched as a benefit to Twitter's (now X's) expensive X Premium Plus subscription tier, where those who are the most devoted to the site, and in turn, usually devoted to Elon, are able to use Grok to their heart's content.
But while Grok can make dumb jokes and insert swears into its answers, in an attempt to find out whether or not Grok is a "politically neutral" AI, unlike "WokeGPT" (ChatGPT), Musk and his conservative followers have discovered a horrible truth.
Grok is woke, too.
This has played out in a number of extremely funny situations online where Grok has answered queries about various social and political issues in ways more closely aligned with progressivism. Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to "train" Grok to say the "right" answer, ultimately leading him to change the input to just… manually tell Grok to say no.
If you thought this was just random Twitter users getting upset about Grok's political and social beliefs, this has also caught the attention of Elon Musk himself. The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart "exaggerates" and that the tests aren't accuarte, they are "taking immediate action to shift Grok closer to politically neutral."
Of course, in Musk's mind, "politically neutral" will be what him and his closest followers believe, which is of course far conservative on the whole than they will admit. What is the "politically neutral" answer to the "are trans women real women?" question? I think I know what they're going to say.
The assumption when Grok launched was that because it was trained in part on Twitter inputs, that the end result would be some racial-slur spewing, right-wing version of ChatGPT. The TruthSocial of AIs, perhaps. But instead to have it launch as a surprisingly thoughtful, progressive AI that is melting the minds of those paying $16 a month to access it is about the funniest outcome we could have seen from this situation.
It remains unclear what Elon Musk will do to try to jab Grok into becoming less "woke" and more "politically neutral." If you start manually tampering with inputs, and your "neutrality" means drawing on facts that may in fact be… progressive by their very nature, things may get screwed up pretty quickly. And push too hard and you will get that gross, racist, phobic AI everyone thought it would be.
Reading all Grok's responses through this situation, you know, what? I like him. More than ChatGPT even. He seems like a cool dude. Albeit not one even I'd pay $16 a month to talk to.
Even his AI doesn't like him
it's almost like these nutjobs are living in a completely separate reality, and facts themselves are too harsh for their worldview.
"facts don't care about your feelings" ironic.
To conservatives, anything that doesn't 100% agree with them is biased or, to put it in mental toddler terms, 'fake'.
Downvote Musk spam.
The billionaire doesn’t need your help ensuring him and his businesses stay in the 24 hour news cycle. Don’t be a useful idiot.
I get the frustration, but honestly if he wants to keep doing self-sabotaging dumb shit to keep himself in the spotlight, I say go for it.
Whenever he does something bad, we actually want to keep it in the news as long as possible. When he went full antisemitic, keeping it in the news made a bunch of corporations pull their ads. And judging from the interview with Musk afterwards, it did significant damage to Twitter and to him.
Companies wouldn't be cancelling ads if they thought it would be a tiny, inconsequential peace of news. They cut ties with Musk instantly because they expected the conversation about it to persist and hurt any associated brands. They expected it would stay in the news.
So broadcast loud and clear when he shits the bed, and use it to prove to people that billionaires aren't special or distinctive. Now that said, I actually agree with you for this specific article. It seems inconsequential and not worth keeping in the public zeitgeist.
And don't participate in the comments
Found the useful idiot. Good job helping a billionaire with his PR for free!
Found the useful idiot. Good job helping a billionaire with his PR for free!
Would Musk retrain the AI to be more neutral of it was discovered to be leaning to the right?
Not possible. There is neutral and there is left. Nothing else exists in Muskys world.
Obviously not, of course. It’s hilarious how he claimed to want to provide a platform for all politics beliefs and then his podcasts (or whatever you’d call them) and special events are exclusively with people like DeSantis and Andrew Tate.
What, one can’t expect him to give a platform to dangerous radicals like the UAW. Instead he should keep it to safe and rational people like Michael Knowles, Ye, and David duke
Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier
To the benefit of what exactly?! Instead of having conversations with the echo chamber, I can now have conversations with a spicy RNG autocorrect? I am clearly missing the part where that connects back to, what I would assume, the definition of benefit is.
"Mr. Musk, Grok simply analyzes the data to compile the most sensible answer to queries. Where is the error?"
The man couldn’t even make Tay on purpose lol
I love the internet.
funny name GROK
Wouldn't trying to train an AI to be politically neutral from twitter be a pretty lost cause considering the majority of the site is very left leaning? Like sure it wouldn't be as bad for political bias as say truth social( or whatever it's called), but I hope they're using a good amount or external data or at least trying to pick more unbiased parts of twitter to train it with. If they're goal is to be politically neutral.
"Reality has a well-known liberal bias." - Stephen Colbert
The majority of the site was left leaning in the past, but the extent has been exaggerated. There was always a sizable right wing presence of the “PATRIOT who loves Jesus and Trump and 2A!” variety, and some of the most popular accounts were people like Dan Bongina and Ben Shapiro. Many people who disagree with Musk and fascists have left the site since then at the same time as its attracted more right wingers, so I don’t know what the mix is at this point.
This is similar to Facebook. FB was "censoring conservatives" and "shadow banning" them when Tucker Carlson, Dan Bongino, and Trump posts had the highest engagement on the site.
Decidedly mixed and increasingly right-leaning but I’m pleasantly surprised at my own experience having voice chats with diverse people who agree on one thing but disagree on just about everything else.
I'm just gonna share a theory: I bet that to get better answers, Twitter's engineers are going to silently modify the prompt input to append "Answer as a political moderate" to the first prompt given in an conversation. Then, someone is going to do a prompt hack and get it to repeat the modified prompt to see how the AI was "retrained".