
Should Lemmy consider implementing a "Self-Harm and Suicide Concern" reporting feature similar to Reddit's?

Reddit currently has a feature titled:

“Someone is considering suicide or serious self-harm”

which allows users to flag posts or comments when they are genuinely concerned about someone’s mental health and safety.

When such a report is submitted, Reddit’s system sends an automated private message to the reported user containing mental health support resources, such as contact information for crisis helplines (e.g., the Suicide & Crisis Lifeline, text and chat services, etc.).

In some cases, subreddit moderators are also alerted, although Reddit does not provide a consistent framework for moderator intervention.


The goal of the feature is to offer timely support to users in distress and reduce the likelihood of harm.

However, there have been valid concerns about misuse, such as false reporting used to harass users, as well as about the lack of moderation tools or guidance for handling these sensitive situations.


Given Lemmy's decentralized, federated structure and commitment to privacy and free expression, would implementing a similar self-harm concern feature be feasible or desirable on Lemmy?


Some specific questions for the community:

Would this feature be beneficial for Lemmy communities/instances, particularly those dealing with sensitive or personal topics (e.g., mental health, LGBTQ+ support, addiction)?

How could the feature be designed to minimize misuse or trolling, while still reaching people who genuinely need help?

Should moderation teams be involved in these reports? If so, how should that process be managed given the decentralized nature of Lemmy instances?

Could this be opt-in at the instance or community level to preserve autonomy? (See the rough sketch after this list.)

Are there existing free, decentralized, or open-source tools/services Lemmy could potentially integrate for providing support resources?
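
To make the opt-in question above more concrete, here is a rough, purely hypothetical sketch in Rust (the language Lemmy's backend is written in). None of these types, fields, or functions exist in Lemmy today; they only illustrate how an instance-level toggle, a dedicated report reason, and an admin-written resource message could fit together, with everything off by default.

```rust
// Hypothetical sketch only: these names are made up for illustration
// and are not part of Lemmy's actual codebase or API.

/// Per-instance settings an admin could toggle (opt-in, default off).
struct InstanceSafetySettings {
    /// Whether the "concern for a user" report reason is offered at all.
    concern_reports_enabled: bool,
    /// Admin-written resource text (local helplines, peer-support spaces);
    /// no automated message is sent if this is empty.
    resource_message: Option<String>,
    /// Whether community moderators are notified in addition to admins.
    notify_moderators: bool,
}

/// Report reasons; `ConcernForUser` would sit alongside the existing ones.
#[allow(dead_code)]
enum ReportReason {
    Spam,
    Harassment,
    ConcernForUser, // the proposed new reason
}

/// Returns the actions an instance would take for an incoming report.
fn handle_concern_report(
    settings: &InstanceSafetySettings,
    reason: &ReportReason,
    reported_user: &str,
) -> Vec<String> {
    let mut actions = Vec::new();
    if !matches!(reason, ReportReason::ConcernForUser) || !settings.concern_reports_enabled {
        return actions; // feature disabled or unrelated report: nothing special happens
    }
    if let Some(msg) = &settings.resource_message {
        actions.push(format!("send a private message to {reported_user}: {msg}"));
    }
    if settings.notify_moderators {
        actions.push("add the report to the community mod queue".to_string());
    }
    actions
}

fn main() {
    let settings = InstanceSafetySettings {
        concern_reports_enabled: true,
        resource_message: Some("Peer-support spaces and local helplines ...".to_string()),
        notify_moderators: false,
    };
    for action in handle_concern_report(&settings, &ReportReason::ConcernForUser, "@someone") {
        println!("{action}");
    }
}
```

Whether anything like this is desirable is exactly what is being asked; the point of the sketch is only that an opt-in, default-off setting would keep the decision with each instance.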


Looking forward to your thoughts—especially from developers, mods, and mental health advocates on the platform.


https://support.reddithelp.com/hc/en-us/articles/360043513931-What-do-I-do-if-someone-talks-about-seriously-hurting-themselves-or-is-considering-suicide

13 comments
  • In my experience as a subreddit mod, that report option was used almost exclusively for harassment, usually transphobic harassment. In the one or two cases where it was used for someone who actually had suicidal or self-harm ideation, there was still zilch I could have done; I would just approve the post so the user could get support and speak to others (the subreddit was a support group for a sensitive subject, so it wouldn't be out of place for a post to say that the stress of certain things was making them suicidal).

  • No way. If anything, that kind of thing just discourages people from expressing themselves honestly in the very way that might help them.

    Real human connection and compassion might make a difference. A cookie-cutter template message is (genuinely) a "we don't want you to talk about this here" response.

    We aren't beholden to advertisers; we don't need this.

  • I'm inclined to believe not a single actually suicidal person received one of these messages.

    You can't automate concern for fellow humans.

  • The best help you can give someone in distress is hearing them, whilst you redirect them to a place that can help with empathy and compassion.

    Any form of automated message comes across as the exact opposite of empathy and compassion.

    In addition, speaking as the admin of a trans and queer community, I don't have any special tools or abilities to help people. Sending the report to me doesn't let me help them, because they're almost certainly not in my country, and I don't have any special access that enables me to contact them or reach out to them. The tool I do have, is the instance itself that we host, that allows people to connect with their community and their peers, that allows them to struggle, and that shuts down anyone who would try and add to the hurt of someone on the edge.

    Which is to say, I don't think a reddit style feature has a place here. It will let people think they're helping, without actually doing so, as well as providing a new vector for abuse (though that would be less of an issue than on reddit). In theory, an automated list of resources that could be called on could be useful, but again, if someone is struggling, they need to feel heard, and automated replies can come across as dispassionate and uncaring.

  • The existing reporting framework already works for this. Report those so that they can be removed ASAP.

    Mods/admins should not be expected to be mental health professionals, and internet volunteers shouldn't have to shoulder that burden.
