Meanwhile, the masses are still using all the 'services' because they all have momentum. I'm not confident any of them can do anything bad enough to chase off their users.
I thought Facebook would die after all the scandals; I'm the only person in my life who cared. I deleted Twitter before it became X; I'm the only one I know who did that.
I don't think anyone gives a shit and it's made me hate people a lot more than I used to.
I already left reddit because they did bad things.
I assume you mean chase off a critical mass, though? The fact that "X" is still a thing may prove you correct.
Yeah, they're not leaving. The only way they would leave is if the service were physically shut down. Pretty sure you could make everyone watch a one-minute ad on app open and they would still stay.
I'm old as shit; I've seen countless "social media" platforms come and go. At its heart, Reddit is just a forum. They've tacked on a lot of modern shit, but so do most of them when they're running out of steam.
It's a war of attrition now. People will leave in batches over time until it just kinda ends, or not. Myspace is still shuffling around here somewhere.
Which will probably last for about one year, long enough to boost IPO valuations; then OpenAI (we all know who's buying it) will cancel the contract because it's too expensive and Reddit doesn't actually generate enough unique content yearly to be worth continuously training on. Then the death spiral happens again.
Honestly, it's probably the best search dataset in existence right now. You can make Google suck far less by appending "reddit" to most searches, because you'll get results from a group with a much higher ratio of actual humans to bots.
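If you want to script the trick, here's a minimal sketch (the helper name and example query are mine; `site:reddit.com` is Google's standard search operator for the stricter version of appending "reddit"):

```python
# Minimal sketch: scope a Google search to Reddit results.
# The site: operator is a standard Google search feature; the
# helper name and example query are just illustrations.
import webbrowser
from urllib.parse import quote_plus

def search_reddit(query: str) -> None:
    """Open a Google search restricted to reddit.com."""
    url = "https://www.google.com/search?q=" + quote_plus(f"{query} site:reddit.com")
    webbrowser.open(url)

search_reddit("best budget mechanical keyboard")
```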
Yeah, Reddit is shit, but the rest of the internet is 10x worse at this point. Pretty much any writing that isn't a labor of love on someone's personal page, or users interacting with each other in a semi-organic way, is rapidly becoming 100% GPT vomit as every company in existence lays off its writing staff.
It's ludicrously cheap for the size and quality of the dataset. A set of 829 academic papers at the University of Michigan is priced at $25,000, about 1/2400 of this sale. If you scaled that dollar value up to the size of the Reddit deal, you'd expect it to contain about 2 million academic papers' worth of data.
But Reddit has almost two decades of text written by 200 million chronically online people. And sure, most Reddit users probably don't write an academic paper's worth of content every year, but the average is probably closer to that than not, especially when you consider that subreddits like AskHistorians and AskScience really are generating the equivalent of dozens of academic papers per day. Based on the amount of text alone, Reddit should've sold us out for 50-100x what they got for just a single year of data, and 1000-2000x for the full twenty years (though, granted, they didn't have that much data for that entire time, so let's say half that).
Furthermore, those 829 papers in the U of M dataset are disconnected, unlinked text representing a tiny fraction of what U of M's 50,000 students generate in even a single year. Reddit has data with links, images, conversational responses, prompt responses, Q&As, flash fiction, slash fiction, historical deep-dives, investigations, memes, inside jokes, a development of style and consensus over time, and a comprehensive understanding of what it means to interact online, generated by people around the world over the course of 18 years. It's much better data for almost any LLM purpose that isn't writing academic papers from the perspective of students at a medium-size four-year undergrad institution in the Midwestern US. The quality of the dataset should've pushed the value even higher. It's hard to say exactly how much higher, so let's be extremely conservative and say it should have doubled the total.
That means that, conservatively, the value of Reddit's dataset—or, rather, our dataset, which Reddit freebooted from us—was about 1000x what they were paid, based on the proportional value of the U of M dataset.
They should've sold us out for billions.
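Here's the back-of-envelope math in one place, for anyone who wants to poke at it (every input below is a guess from this comment, not an official figure):

```python
# Back-of-envelope check of the estimate above. All inputs are
# assumptions from this thread, not official numbers.
umich_papers = 829
umich_price = 25_000           # USD for the U of M dataset
reddit_price = 60_000_000      # USD/year, the reported deal

# Scale the U of M price up to the size of the Reddit sale.
scale = reddit_price / umich_price      # ~2400x
papers_bought = umich_papers * scale    # ~2 million papers' worth

# Guess: each of ~200M accounts writes between half a paper and a
# full paper's worth of text per year.
users = 200_000_000
one_year = [0.5 * users / papers_bought, 1.0 * users / papers_bought]  # ~50-100x

full_history = [x * 20 / 2 for x in one_year]  # 20 years, halved: early years were smaller
final = [x * 2 for x in full_history]          # doubled (conservatively) for quality

print(f"underpaid by roughly {final[0]:,.0f}x to {final[1]:,.0f}x")
print(f"'fair' price: ${reddit_price * final[0] / 1e9:.0f}B to "
      f"${reddit_price * final[1] / 1e9:.0f}B")
```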
Of course, we don't know anything about the exclusivity terms or the subset of data included in this deal. It might cover only one year of data, with only six months of exclusivity. But assuming they sold rights to the entire dataset, we got sold for pennies.
Most tools miss a ton because of the limitations of the website and API. The best (pretty much only) way to get everything is to request an export of your data, then use that CSV to delete all items one by one.
Yup, Shreddit can use the CSV from the data request. Took me about 24 hours to edit and erase the 20,000+ comments I made over the last 10 years.
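If you'd rather script it yourself, here's a rough sketch of the same CSV-driven approach using PRAW. The `comments.csv` filename, the `id` column, and the credentials are assumptions/placeholders; check what your own export actually contains:

```python
# Rough sketch of the CSV-driven wipe: overwrite, then delete, every
# comment listed in the data export. Assumes the export includes a
# comments.csv with an "id" column (check yours) and that you've
# registered a Reddit "script" app for the credentials below.
import csv
import time

import praw  # pip install praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="csv-shredder/0.1 by YOUR_USERNAME",
)

with open("comments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        comment = reddit.comment(id=row["id"])
        comment.edit(".")  # overwrite first, so what persists is the edit, not the original
        comment.delete()
        time.sleep(2)      # stay well under the API rate limit
```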
If you want a web app, try redact.dev (yes, there's a paid version that lets you download your old messages, but the free one overwrites your posts with random text at no cost).
It's obscure enough that I don't think it's being sought out by AI companies. The nature of federated instances should make it a bit more challenging to pull a complete data set, too.
On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter.
The move comes as the social media platform nears its initial public offering (IPO), which could happen as soon as next month.
Reddit initially revealed the deal, reportedly worth $60 million a year, to potential investors in the anticipated IPO earlier in 2024, Bloomberg said.
In April 2023, Reddit co-founder and CEO Steve Huffman told The New York Times that the company planned to charge AI companies for access to its almost two decades' worth of human-generated content.
If the reported $60 million/year deal goes through and you've ever posted on Reddit, some of that material may be used to train the next generation of AI models that generate text, still images, and video.
Even without the deal, experts have discovered in the past that Reddit has been a key source of training data for large language models and AI image generators.
The original article contains 379 words, the summary contains 182 words. Saved 52%. I'm a bot and I'm open source!