
Soliciting Feedback for Improvements to the Media Bias Fact Checker Bot

Hi all!

As many of you have noticed, a number of Lemmy.World communities have introduced a bot: @MediaBiasFactChecker@lemmy.world. We introduced it because modding can be pretty tough work at times and we are all just volunteers with regular lives. The bot has been helpful and we would like to keep it around in one form or another.

The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.

Another common concern is with MBFC’s definitions of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have said that MBFC’s process for rating reliability and credibility feels opaque and/or subjective. To address this, we have discussed creating our own open source system for scoring news sources. We would essentially start with third-party ratings, including MBFC’s, and combine them into an aggregate rating. We could also open a path for users to vote, so that ratings reflect our instance’s opinion of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
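
For anyone curious what the aggregation could look like in practice, here is a minimal sketch, assuming each third-party rater's score can be normalized to a common 0–100 reliability scale. The rater names, weights, and scale below are purely illustrative, not a commitment to any particular sources or methodology:

```python
# Hypothetical sketch of an aggregate reliability score.
# The rater names, the 0-100 scale, and the weights are all illustrative;
# the real sources and methodology would be decided by the community.

# Each third-party rater's score for a single outlet, normalized to 0-100.
ratings = {
    "MBFC": 75,
    "RaterB": 82,  # placeholder for a second third-party rater
    "RaterC": 68,  # placeholder for a third
}

# Optional weights, e.g. to favor raters with more transparent methodologies.
weights = {
    "MBFC": 1.0,
    "RaterB": 1.0,
    "RaterC": 0.5,
}


def aggregate(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of whatever ratings are available for an outlet."""
    total_weight = sum(weights[name] for name in ratings)
    return sum(score * weights[name] for name, score in ratings.items()) / total_weight


print(f"Aggregate reliability: {aggregate(ratings, weights):.1f}/100")
```

A user-voting component could be folded in the same way, as one more weighted input alongside the third-party raters.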



  • I'm frankly rather concerned about the idea of crowdsourcing or voting on "reliability", because - let's be honest here - Lemmy's population can have highly skewed perspectives on what constitutes "accurate", "unbiased", or "reliable" reporting of events. I'm concerned that opening this to influence by users' preconceived notions would result in a reinforced echo chamber, where only sources that already agree with their perspectives are listed as "accurate". It would effectively turn this into a bias bot rather than a bias fact checking bot.

    Aggregating ratings from a number of rigorous, widely accepted outside sources seems like a more suitable solution, although I can't comment on how much programming it would take to produce an aggregate result. Perhaps just briefly listing results from a number of fact checkers?
