Wordfreq shuts down because "I don't think anyone has reliable information about post-2021 language usage by humans."
The creator of an open source project that scraped the internet to track the ever-changing popularity of words in human language says they are sunsetting the project because generative AI spam has poisoned the internet to the point where the project no longer has any utility.
Wordfreq is a program that tracked how people used more than 40 languages by analyzing millions of sources: Wikipedia, movie and TV subtitles, news articles, books, websites, Twitter, and Reddit. It could be used to study how language habits shifted as slang and popular culture evolved, and it was a resource for academics who study such things. In a note on the project's GitHub, creator Robyn Speer wrote that the project "will not be updated anymore."
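The core idea behind a tool like wordfreq is simple: tally how often each word appears across a large corpus and normalize to relative frequencies. A minimal sketch of that kind of tallying, using only Python's standard library (the function name and tokenization here are illustrative, not wordfreq's actual internals, which handle 40+ languages and multiple corpus types):

```python
from collections import Counter
import re

def word_frequencies(corpus: list[str]) -> dict[str, float]:
    """Tally relative word frequencies across a list of documents."""
    counts = Counter()
    for doc in corpus:
        # Naive English tokenizer; real multilingual tokenization is much harder.
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

freqs = word_frequencies([
    "the cat sat on the mat",
    "the dog chased the cat",
])
print(round(freqs["the"], 3))  # 4 of 11 tokens are "the" → 0.364
```

The hard part was never the counting: it was assembling clean, human-written text at scale, which is exactly what AI-generated spam now undermines.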
wordfreq was built by collecting a whole lot of text in a lot of languages. That used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it's very likely because they are creating a plagiarism machine that will claim your words as its own.
So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI.
OpenAI and Google can collect their own damn data. I hope they have to pay a very high price for it, and I hope they're constantly cursing the mess that they made themselves.
Imagine being an author whose sole income is writing books.
Here comes an AI that stole ("indexed") your work and is asked by an OpenAI customer to summarise your books. It does so perfectly, and the customer can use the result freely, since they think it's AI-generated and doesn't require attribution.
Don’t worry. Someone will soon come by to remind us that it’s pointless to regulate AI, and also harmful to do it, and it’s actually a good thing for everyone, and also we’ll be shoveling shit until we die if we don’t get on board, and please oh please just let me get off to one more deepfake of my classmate before you take away my toy it’s not faiiiiir.
At least in theory you could still do NLP from online sources, but the sheer amount of work necessary to ensure that you got the bots out makes it unfeasible.
So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI.
Even if I like the idea behind generative A"I", and have found some use cases for it... yeah, I can't help but sympathise with Speer. Those businesses are collecting our data for free, without consent, so they can sell us a product built on it.
At least in theory you could still do NLP from online sources, but the sheer amount of work necessary to ensure that you got the bots out makes it unfeasible.
Not just that: the growing number of sites blocking scrapers, or deploying countermeasures against the tools they use, adds even more work and makes the job harder still.
Several years ago, it would have been easy and cheap to noodle up a quick Twitter or Reddit bot to churn through posts and spit out the data on the other side. These days, you need to pay for that, and in some cases, pay quite a lot.
X (formerly known as Twitter), for example, wants to charge $100/month, and Reddit wants $0.24 per 100 API calls.
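Those per-call prices add up fast at corpus scale. A back-of-the-envelope sketch, where the number of API calls is a made-up figure purely for illustration (wordfreq's actual collection volumes aren't stated here):

```python
# Rough cost estimate for re-collecting a text sample via paid APIs.
# The call count below is a hypothetical assumption, not a real corpus size.
REDDIT_RATE = 0.24 / 100     # dollars per API call, per the quoted pricing
CALLS_NEEDED = 1_000_000     # hypothetical number of API calls for one refresh
X_MONTHLY = 100              # dollars per month for X's quoted API tier

reddit_cost = CALLS_NEEDED * REDDIT_RATE
print(f"Reddit: ${reddit_cost:,.2f} for one refresh")  # $2,400.00
print(f"X: ${X_MONTHLY}/month just for access")
```

A million calls is cheap by web-scale standards, and it already runs to thousands of dollars per source, per refresh, before any deduplication or bot filtering.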
You can scrape, of course, but that risks getting you banned, if you don't run into barriers first. The website formerly known as Twitter no longer lets you see parent tweets, or replies, unless you're logged in, for example.