Everyone is joking, but an AI specifically made to manipulate public discourse on social media is basically inevitable, and it will either kill the internet as a source of human interaction or effectively warp the majority of public opinion to whatever the ruling class wants. Even more than it does now.
It's going to drive the AI into madness, since it will be trained on bot posts written by itself in a never-ending loop of more and more incomprehensible text.
It's going to be like putting a sentence into Google Translate, converting it through five different languages and then back into the first: you get complete gibberish.
I think people miss an important point in these sell-offs. It's not just the raw text that's valuable, but the minute interactions between networks of users.
Like the timing between replies, and how vote counts affect not just engagement but the tone of replies and their conversion rate.
I could imagine a sort of "script" running for months, haunting your every move across the internet, constantly running personalised little A/B tests, until a tactic is found to part you from your money.
I mean, this tech exists now, but it's fairly "dumb." It's not hard to see how AI will make it much more pernicious.
Google now has a complete set of logs including user IPs (correlated with Gmail accounts), PRIVATE MESSAGES, and Reddit posts.
They pinky promise they will only train AI on the data.
I can pretty much guarantee someone can subpoena Google for your information communicated on Reddit, since they now have this PII combo (username(s)/IP/Gmail account(s)). Hope you didn't post anything that would make the RIAA upset! And let's be clear... your deleted or changed data is never actually deleted or changed... it's in an audit log chain somewhere, so there's no way to stop it.
"GDPR WILL SAVE ME!" - gdpr started in 2016. Can you ever be truly sure they followed your deletion requests?
Good luck,
The AI is just going to be a porn-addicted Nazi cultist, basically a racist AI. I don't remember which one, but a company did a similar thing and the AI just became really racist.
I'm waiting for the first time their LLM gives advice on how to make human leather hats and the advantages of surgically removing the legs of your slaves, after slurping up the RimWorld subreddits lol
How much is Reddit paying its users? Frankly, the users have a strong case to say that their value has been taken from them unfairly and without consideration.
Yes, Reddit has terms and conditions where they claim full rights to anything you post. However, that's not an exchange of data for access to the website; access to the website is completely free, and the fine print is where they claim these rights. These are in fact two transactions: they provide access to the site free of charge, and they sneak in a second transaction where you provide data free of charge. This deceptive methodology obscures the value being exchanged, and today it is very apparent that the user is giving up far more value.
I really think a class action needs to be brought to sort all this out. It's obscene that companies (not just Reddit, but Google, Facebook and everyone else) can steal value from people and use it to become among the wealthiest businesses in the world, without fairly compensating the users who provide all the value they claim for themselves.
The data brokerage industry is already a $400 bn industry - and that's just people buying and selling data. Yet there are only about 8 bn people in the world. If we assume that everyone is on the internet and that everyone's data has equal value (neither of which is true; US data is far more valuable), then on average a person's data is worth at least $50 a year on the market (rough math below). This figure also doesn't include companies like Facebook or Google, which keep proprietary data about people and sell advertising, and it doesn't include the value Reddit is selling here - it's just the trading of personal data.
We are all being robbed. It's like that classic bank fraud scheme where the criminal takes pennies out of people's accounts, hoping they won't notice and the bank will think it's an error. Do it to enough people, enough times, and you can make millions. They take data from everyone and they make billions.
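For anyone who wants to check the arithmetic, here's the back-of-the-envelope in Python. The $400 bn and 8 bn figures are just the rough estimates quoted above, so treat the result as a floor rather than a precise number.

```python
# Back-of-the-envelope: average yearly market value of one person's data,
# using the rough figures above (~$400 bn brokerage market, ~8 bn people).
brokerage_market_usd = 400e9
world_population = 8e9

per_person_per_year = brokerage_market_usd / world_population
print(f"~${per_person_per_year:.0f} per person per year")  # prints ~$50
```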
Ideally the AI could actually learn to differentiate unhinged from reasonable posts, to learn whether a post is progressive, libertarian, or fascist. This could be used for evil, of course, but it could also help stem the tide of bots, fascist brigading, Russia's or China's troll farms, and all the special interests trying to promote their shit. Instead of tracing IPs, you could have the AI actually learn how to identify networks of shitposters.
Obviously this could also be used to suppress legitimate dissenters. But the potential to use this for good, e.g. on Lemmy to add tags to posts and downrate them, could be amazing.
I went through my comment history and changed all my comments with 100+ karma to a bunch of nonsense I found on the internet, mostly from bots posting YouTube comments. It's mostly English words, so it shouldn't get discarded for being gibberish, but it doesn't form coherent information. I was sad to see some of my posts go away, but I don't want to feed the imitative AI. (A rough script for doing this is sketched below.)
Also did the first 6 pages of my "controversial" comments.
I know they have backups, but that's why I didn't simply delete them. Hopefully these edited versions get into the training set and fuck it up, even if only a little.
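Not an endorsement, but for the curious, here's roughly what that kind of script could look like with PRAW, assuming you've registered a personal "script" app on Reddit. The credentials and the filler lines are placeholders, not anything the commenter actually used.

```python
# Rough sketch: overwrite your own high-karma comments with filler text
# instead of deleting them. Credentials and FILLER are placeholders.
import random
import praw

FILLER = [
    "great video, subscribed!",
    "first!! who else is watching in 2024",
    "the algorithm sent me here again",
]

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-scrambler by u/YOUR_USERNAME",
)

for comment in reddit.user.me().comments.new(limit=None):
    if comment.score >= 100:  # only touch the 100+ karma comments
        comment.edit(" ".join(random.choices(FILLER, k=5)))
        print(f"edited {comment.id} ({comment.score} karma)")
```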
It'd be funny if someone could come up with a "drop table" post that would maybe make it into the set...
I hope my several thousand comments of complete and utter nonsense, which I left in my wake when I abandoned Reddit, make it into the training data. I know some lazy data engineer will either forget to check or give the task to an underperforming AI that will just fuck it up further.
I say we poison the well. We create a subreddit called r/AIPoison. An AutoModerator tells any user who requests it a randomly selected subreddit in which to post coherent, plausible nonsense. Since there is no public record of which subreddits are being poisoned, this can't easily be filtered out of the training data. (Rough bot sketch below.)
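AutoModerator alone probably can't do the random assignment, so this would likely need a small custom bot. A minimal sketch with PRAW, assuming the bot's credentials live in a praw.ini site; the subreddit, trigger phrase, and target list are entirely made up.

```python
# Hypothetical r/AIPoison dispatcher: send anyone who asks a randomly chosen
# target subreddit. Subreddit name, trigger phrase, and targets are placeholders.
import random
import praw

TARGETS = ["placeholder_sub_a", "placeholder_sub_b", "placeholder_sub_c"]

# Credentials come from a praw.ini section named "aipoison_bot" (hypothetical).
reddit = praw.Reddit("aipoison_bot", user_agent="aipoison-dispatcher v0.1")

for comment in reddit.subreddit("AIPoison").stream.comments(skip_existing=True):
    if "!assign" in comment.body.lower():  # user asks for an assignment
        target = random.choice(TARGETS)
        # Deliver the assignment by private message so there's no public record.
        comment.author.message(
            subject="Your assignment",
            message=f"Post coherent, plausible nonsense in r/{target}.",
        )
```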
I'm so confused about how AI learning is supposed to work. Does it just need any data at all in significant quantity, so the quality of the data is almost irrelevant? Because otherwise surely they could just feed it back issues of Scientific American, or the scanned copies of the Library of Congress. I can't reasonably believe that Reddit is going to add anything unless it's just pure, unadulterated quantity that's important.
While Reddit has some of the most unhinged posts on the internet, it's also home to some of the most insightful and niche knowledge on the internet. For every insane, politically misguided venting post, there are posts about electronic configurations, coding, athletic conditioning, parenting, psychology, astronomy, and media criticism.
Google has signed a content licensing deal with the social media platform, Reuters reported on Wednesday, citing sources familiar with the matter.
Their concerns about what a Reddit-trained AI might be like are probably not unfounded, considering some of the off-the-rails posts made on the site since its inception in 2005.
Take this guy, who claimed in 2014 that he was caught in a particularly Kafkaesque scenario, where he had to pretend his girlfriend was a giant cockroach named Ogtha when he made love to her.
Like this guy's viral 2015 post on the 19-million-strong forum r/TodayIFuckedUp, where he recounted how he went to his girlfriend's parents' home, pretended not to know what a potato was, and then got kicked out of the house by her angry father.
Some platform users have written uplifting, inspirational posts and offered useful life and career advice.
Elon Musk, for one, has been tapping on data from X, formerly Twitter, to train his AI company's chatbot, Grok.
The original article contains 396 words, the summary contains 165 words. Saved 58%. I'm a bot and I'm open source!
Is there still time for me to ask them for all the info they have on me, with the EULA or whatever it is, and have them remove every one of my comments?
My creative insults and mental instability are my own; Google ain't having them! (Although they probably already do, along with my fingerprints, facial features, voice, fetishes, etc.)