
r/BotDefense is shutting down - I hope Reddit likes spam and malicious actors

Post contents (and a mirror):

BotDefense is wrapping up operations

TL;DR below.

When we announced the BotDefense project in 2019, we had no idea how large the project would become. Our initial list of bots was just 879 accounts. Most of them were annoying rather than outright malicious.

Since then, we've witnessed the rise of malicious bots being used to farm karma for the purpose of spamming and scamming users across Reddit, and we've done our best to help communities stem the tide. We spent countless hours finding and reviewing accounts, writing code to automate detections, and reviewing appeals (mostly from outright criminals and karma farmers who are definitely running bots; we typically unban about 4 accounts per month, and unlike similar bots, an unban means we unban the account everywhere we banned it).

Along the way, we've struggled with the scope of the problem, rewriting our back-end code multiple times and figuring out how to scale to the 3,650 subreddits that BotDefense now moderates. We came up with new algorithms to identify content theft, reduce the number of times we accidentally ban an innocent account, and more. In January of 2023, we added an incredible 10,070 bots to our ban list, which now stands at 144,926 accounts.

Like many anti-abuse projects on Reddit, we've done all of this for free while putting up with Reddit's penchant for springing detrimental changes on developers and moderators (e.g., adding API limits without advance notice and blocking Pushshift) and figuring out workarounds for numerous scalability issues that Reddit never seems to fix. Without Pushshift, the number of malicious bots we were able to ban dropped to 5,517 in May.

Now, Reddit has changed the Reddit API terms to destroy third-party apps and harm communities. A group of developers and moderators tried to convince Reddit to not continue down this path and communities protested like never before, but that was all in vain. Reddit is so brazenly hostile to moderators and developers that the CEO of Reddit has referred to us as "landed gentry".

With these changes and in this environment, we no longer believe we can effectively perform our mission. The community of users and moderators submitting accounts to us depends on Pushshift, the API, and third-party apps. And we would be deluding ourselves if we believed any assurances from Reddit, given its track record of broken promises. Investing further resources into Reddit as a platform presents significant risks, and it's safer to allocate one's time, energy, and passions elsewhere.

Therefore, we have already disabled submissions of new accounts and our back-end analytics, and we will be disabling future actions on malicious and annoying bots. We will continue to review appeals and process unbans for a minimum of 90 days, or until Reddit breaks the code running BotDefense.

We'd rather be figuring out how to combat the influx of ChatGPT bots flooding Reddit, Temu bots flooding subreddits with fake comments, and every other malicious bot out there, of course.

At this time, we advise keeping BotDefense as a moderator through October 3rd so any future unbans can be processed. We will provide updates if the situation changes or if we have any other news to share.

Finally, I want to thank all of the users and moderators who have contributed accounts, my co-moderators who have helped review countless accounts, and all of the communities that have trusted us with helping moderate their subreddits.

Regards.

— dequeued

TL;DR With the API changes now in place, we no longer believe we can effectively perform our mission so we are sunsetting BotDefense. We recommend keeping BotDefense on as a moderator through October 3rd so any unbans can be processed.

Posted to Reddit@lemmy.ml by GoodKingElliot@feddit.uk
BotDefense is wrapping up operations on Reddit, due to the infeasibility of doing their work effectively in the aftermath of Reddit's API changes


70 comments
  • How do you scam users with farmed karma?

    • Having 0 or negative karma is a barrier to posting. Some subs completely disallow posting if you don't meet a threshold, but even outside of those subs you still run into things like having your comments held up in the spam queue until they are manually reviewed.

      So having a bit of karma allows them to post their scam and/or spam links and have a chance of being visible.

      • I struggled with that when I first started on reddit... Couldn't post because no karma and no karma because no post.

        LPT for anyone living in the past prior to reddit imploding: you can get a lot of karma by going to AITA or another relationship advice sub and making a quick spicy and/or sympathetic comment, especially in an interesting thread that was recently posted so you are one of the first comments. I'd say that >80% of all my karma came from my infrequent dipping into those subs, maybe <5% of my contributions, with the other 95% being arguably more useful and constructive but having less mass appeal.

        I wouldn't make a very good robot. :/

    • Let’s say I have an account with lots of positive karma. If I take that account and make it look nice, I can look like a paragon of a community, a customer service account, or anything I want. Now let’s say I go into an MMO community and use that nice-looking account to run a scam where I get people to send me passwords and 2FA codes; now I’m running off with their MMO gold and selling it.

      Let’s say I set up an account that seems to be related to a crypto wallet company. You post to a subreddit asking for help, and I come along and convince you to send me your crypto, or to screenshot something that compromises your seed phrase without you realizing it, or I send you to a webpage where it looks like you’re signing in but you’re really signing a transaction.

      Basically, if karma is a metric of community trust, someone will use that trust against the community.

    • I think the term used in the post, "spam or scam", might not be that accurate? My understanding is that the vast majority of bots on reddit are there to influence opinions.

      I'm pretty sure everyone (absolutely including myself) is heavily influenced by reading the opinions of others, especially if it's repetitive.

      Additionally, I'm also sure that most people (yes, probably me included) tend to post opinions, or at least phrase their own opinions, in a way that they hope will gather more upvotes.

      Also, you don't need to change people's strongly held opinions - you only need to tip the scale in your favour.

      With that in mind, imagine that you had infinite voting power. You can give a comment the 2,000 upvotes it needs for visibility, or give another comment however many downvotes it needs to fade into obscurity. It would be pretty easy to support a particular opinion or idea.

      Now, as to why karma is required - if you're going to direct your botnet to descend on a single comment and downvote it to oblivion, you need those accounts to look and behave the way people do rather than like bots. A few comments here and there, a little karma, general meandering engagement.

      That's my take anyway - makes more sense than attempting direct scams on reddit.

      • I am especially upvoting you because you are willing to admit that the upvotedness of prior comments influences how you craft future contributions.

        And I am articulating it in the hopes that others will see it and have the fortitude to stop pretending they are immune to it. I promise to upvote you when I see it.

    • Scam and spam bots always start out by farming karma, because most subs have automoderation that prevents posting by low-post, low-karma accounts. Having a high amount of karma means that you have access to all subs on reddit.
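
As an illustration of the automoderation thresholds described in these comments, here is a minimal sketch of how a moderation bot might hold posts from new, low-karma accounts for manual review. This is not BotDefense's actual code; it assumes the PRAW library, and the credentials, subreddit name, and thresholds are all placeholders.

    # Minimal sketch of an AutoModerator-style karma/age threshold check using PRAW.
    # Credentials, the subreddit name, and the thresholds below are placeholders.
    import time

    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        username="example_mod_account",
        password="example_password",
        user_agent="karma-threshold-sketch/0.1",
    )

    MIN_COMBINED_KARMA = 50    # hypothetical threshold
    MIN_ACCOUNT_AGE_DAYS = 7   # hypothetical threshold

    def below_threshold(author):
        """Return True if the account looks too new or too low-karma to post freely."""
        combined_karma = author.link_karma + author.comment_karma
        age_days = (time.time() - author.created_utc) / 86400
        return combined_karma < MIN_COMBINED_KARMA or age_days < MIN_ACCOUNT_AGE_DAYS

    # Watch new submissions and pull low-karma/new accounts out for human review,
    # mirroring the "held up in the spam queue until manually reviewed" behaviour.
    for submission in reddit.subreddit("examplesubreddit").stream.submissions(skip_existing=True):
        author = submission.author
        if author is None:  # account deleted
            continue
        if below_threshold(author):
            submission.mod.remove(spam=False)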
