Still being federated with exploding-heads does not mean an instance supports them.
But it does mean that shit-heads are allowed to take part in discussions in its communities.
They poison the discussions we have.
They will attack minorities.
Maybe you will not be harassed by them, but more vulnerable people will be.
They are allowed to moderate communities on these instances.
Exploding-heads members actively guide young and unknowing people to their hateful instance.
This was a link to exploding-heads: when someone copies the permalink of one of their comments while browsing another instance, the link still points to their instance.
Are we building a place where vulnerable people are welcome and safe or are we building a place where nazis are welcome and safe?
They want you to block them; they comment that everywhere. They block people who are in favor of defederating them: https://sh.itjust.works/post/225714
Once their opponents no longer see them, they can harass and recruit without being noticed.
Blocking will not solve the problem. They will spread even when you close your eyes.
Maybe this was not done intentionally, but now there is a post with a list of 'health communities on Lemmy' whose first entry points to exploding-heads. https://lemmy.world/post/396561
It is still possible that some of the mentioned instances do support them.
The owner of sh.itjust.works says that because of “free speech” all other instances are allowed. It is suspicious to me that his line is drawn only at lemmygrad: https://sh.itjust.works/comment/130474
The decision to block the Lemmygrad instance was less a question of censorship, and more an issue of personal conviction. As a volunteer dedicating my personal resources and time to facilitating a space for users to create, discover, and discuss - not just on this instance, but across the Fediverse - I admit that this choice was made alone, selfishly, without the consent or thoroughly considering the collective opinion of the community. With the above said, sh.itjust.works has had the lemmygrad instance blocked from its first day.
I’d like to also point out that the lemmygrad instance has far more blocked instances than what is currently blocked here. Maybe you can create a post on that instance to see what that’s all about and report back?
Since The_Donald was removed, I have not found more racist content on the mentioned servers. That is part of their tactics: they act harmless, but recruit to their instance, attract like-minded people, and chase others away. Discussions about such topics on sh.itjust.works are already very toxic; soon they might be able to do more harm.
(My research is very limited, as I could not search for all exploding-heads member content on other instances by entering their domain. I know there are nice communities on sh.itjust.works.)
What can be done?
On joinmastodon.org there is a Mastodon Server Covenant with a few useful rules; one of them is "Active moderation against racism, sexism, homophobia and transphobia". https://joinmastodon.org/covenant
It would be a first step to adopt these rules for join-lemmy.org.
At the very least, instances that want to be recommended there should have to agree to that rule.
More action should be taken now. Please make suggestions. Things will only become more complicated. The next Reddit wave is incoming.
There’s no feasible way to stop people of a specific ideology from using an open-source piece of software. You say blocking isn’t the answer, but I’m not sure what more you can expect. Even if you invent some kind of automated Nazi detector, what action can it enforce beyond banning or blocking?
The most realistic solution I can think of is letting instances or users subscribe to curated blacklists, something akin to Adblock filter lists or email server spam blacklists.
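To make the curated-blacklist idea concrete, here is a minimal sketch of how such a subscription could work. Everything in it is an assumption for illustration: the URL, the file names, and the one-domain-per-line format are made up, and a real Lemmy instance would apply the result through its admin settings rather than a flat text file.

```python
# Sketch: subscribe to a curated blocklist of fediverse domains and merge it
# into a local blocklist. The source URL, file format (one domain per line,
# '#' for comments), and local file are hypothetical.
import urllib.request

BLOCKLIST_URL = "https://example.org/fediverse-blocklist.txt"  # hypothetical curated list
LOCAL_BLOCKS_FILE = "local_blocks.txt"                         # hypothetical local list


def fetch_blocklist(url: str) -> set[str]:
    """Download the curated list and return the set of blocked domains."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {
        line.strip().lower()
        for line in lines
        if line.strip() and not line.lstrip().startswith("#")
    }


def load_local_blocks(path: str) -> set[str]:
    """Read the instance's existing blocklist, if any."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()


def merge_and_save(path: str, local: set[str], curated: set[str]) -> set[str]:
    """Union both lists and write the result back to disk."""
    merged = sorted(local | curated)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(merged) + "\n")
    return set(merged)


if __name__ == "__main__":
    curated = fetch_blocklist(BLOCKLIST_URL)
    local = load_local_blocks(LOCAL_BLOCKS_FILE)
    merged = merge_and_save(LOCAL_BLOCKS_FILE, local, curated)
    print(f"{len(merged)} blocked domains ({len(curated)} from the curated list)")
```

The technical side is the easy part; the hard part is who curates and maintains the list, which is exactly where the trust question from this thread comes back.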
"Even if you invent some kind of automated Nazi detector"
As interesting a problem as this might be (my final-year undergrad project was similar in nature), the nuance of the written word can be significant and difficult to detect. Building a good tool and then getting multiple communities to adopt it is not realistic in 2023.
What is realistic, however, is a zero-tolerance policy against such individuals. What we should be doing is ostracizing them. You take away their platform, you take away their power. They can only do so much damage if they're all gathered up in a single community with no ties to or influence over the rest of the world.
It keeps them from participating by demoting them to the kids' table, but you're still in a glass house to some extent.
I think this is the right answer, but that structure is going to produce recurring drama just like this. You can keep an open federation policy until an instance is proven malicious, or you can vet partners up front, but I don't see a way around these discussions.