Soliciting Feedback for Improvements to the Media Bias Fact Checker Bot

Hi all!

As many of you have noticed, many Lemmy.World communities introduced a bot: @MediaBiasFactChecker@lemmy.world. This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.
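For reference, Lemmy-flavored markdown supports a spoiler container, which is the mechanism a bot comment can use to collapse details behind a click. A minimal sketch of what such a comment might look like (the summary line and the ratings shown are illustrative, not the bot's actual output):

```
::: spoiler Media Bias Fact Check
Bias: Left-Center
Credibility: High
Factual Reporting: High
:::
```

Only the `spoiler` summary line is visible until the reader expands it, which keeps long rating details out of the way by default.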

Another common concern people have is with MBFC’s definition of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have expressed that they feel MBFC’s process of rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open source system of scoring news sources. We would essentially start with third-party ratings, including MBFC, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
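As a rough illustration of the aggregation idea described above, here is a minimal sketch in Python. It assumes each third-party rater's textual bias label can be mapped onto a common numeric scale and averaged; the rater names, labels, and scale are all hypothetical, not an actual design:

```python
# Hypothetical sketch: combine bias labels from several third-party
# raters into one aggregate score. Labels, raters, and the -2..+2
# scale are illustrative assumptions, not any rater's real output.

LABEL_SCALE = {
    "far left": -2.0,
    "lean left": -1.0,
    "center": 0.0,
    "lean right": 1.0,
    "far right": 2.0,
}

def aggregate_bias(ratings):
    """Average the numeric equivalents of each rater's label.

    `ratings` maps rater name -> textual label; labels not in the
    scale are skipped. Returns None if no usable labels remain.
    """
    scores = [LABEL_SCALE[label] for label in ratings.values()
              if label in LABEL_SCALE]
    if not scores:
        return None
    return sum(scores) / len(scores)

# Example with made-up ratings for a fictional outlet:
example = {"MBFC": "lean left", "RaterB": "center", "RaterC": "lean left"}
print(aggregate_bias(example))  # prints approximately -0.667
```

User votes could later be folded in as one more entry in the same dictionary, weighted however the community decides.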


150 comments
  • My personal view is that the bot provides a net negative, and should be removed.

    Firstly, I would argue that there are few, if any, users whom the bot has helped avoid misinformation or a skewed perspective. If you know what bias is and how it influences an article then you don't need the bot to tell you. If you don't know or care what bias is then it won't help you.

    Secondly, the existence of the bot implies that sources can be reduced to true or false, left or right. Lemmy users already tend to deal in absolutes of right and wrong, but the world exists in the nuance, in the conflict between differing perspectives. The only way to mitigate misinformation is for people to develop their own skeptical curiosity, and I think the bot is more of a hindrance than a help in this regard.

    Thirdly, even if it's only misleading 1% of the time, it's doing harm. I don't see how sources can be rated at all when their quality often varies between articles. It's so reductive that it's misleading.

    As regards an open database of bias, it doesn't solve any of the issues listed above.

    In summary, we should be trying to promote a healthy sceptical curiosity among users, not trying to tell them how to think.

    • Thanks for the feedback. I've had similar thoughts about it feeling like mods trying to tell people how to think, although I think crowdsourcing an open source solution might make that slightly better.

      One thing that’s frustrating with the MBFC API is that it reduces “far left” and “lean left” to just “left.” I think that gets to your point about binaries, but it is a MBFC issue, not an issue in how we have implemented it. Personally, I think it is better on the credibility/reliability bit, since it does have a range there.

      • That's perhaps a small part of what I meant about binaries. My point is, the perspective of any given article is nuanced, and categorising bias implies that perspectives can be reduced to one of several.

        For example, select a contentious issue like abortion. Collect 100 statements from 100 people covering various related questions: health concerns, ethics, when an embryo becomes a fetus, fathers' rights. Finally, label each statement as either pro-choice or pro-life.

        For somebody trying to understand the complex issues around abortion, the labels are not helpful, and they imply that the entire argument can be reduced to a binary choice. In a word, it's reductive. It breeds a culture of adversity rather than one of understanding.

        In addition, I can't help but wonder how much "look at this cool thing I made" is present here. I love playing around with web technologies and code, and love showing off cool things I make to a receptive audience. Seeking feedback from users is obviously a healthy process, and I praise your actions in this regard. However, if I were you I would find it hard not to view that feedback through the prism of wanting users to find my bot useful.

        As I started off by saying, I think the bot provides a net negative, as it undermines a culture of curious scepticism.

      • Just a point of correction, it does distinguish between grades. There is "Center-Left," "Left," and "Extreme Left."
