All services except Lemmy are up and running again via the new domain: https://fmhy.net
Lemmy itself will require a significant database cleanup to get users transferred; migrating posts and communities may not be possible
Cleanup will have to be done either manually or with a custom script
And now to answer some common questions I've seen floating around:
Why is lemmy.ml not affected?
WE DON'T KNOW. My assumption is that it's popular enough (or lucky enough) not to be affected by this change. The only difference between our two domains is that lemmy.ml doesn't use CF, and AFAIK multiple other domains (with and without CF) are still unreachable.
Why can't you just change the domain?
That's not how federation works. Most services (including Mastodon and Lemmy) do not properly support changing URLs, as doing so could potentially break the whole network.
Why is it taking so long to get back up?
Not all staff members have access to the server, CF, or the domain registrar. Not to mention that this is a complicated task that requires a lot of effort, and one that we didn't even know we had to do until the following morning. (We were under the assumption that the .ml TLD hadn't just exploded.)
Not saying it was a coordinated attack, per your edit, but anything popular is a prime target for various types of attacks, especially DDoS. But with every attack, the developers and the various instance admins and owners learn something new about how to mitigate it. So while it's annoying, it's just as much a blessing as a curse: better to patch things quickly than leave an exploitable hole open for who knows how long.
I run my own single user instance, and it was down as well. Not sure why someone would target a single user instance. Not ruling it out, but it seems unlikely.
All federated instances are publicly visible, and it would be simple to script an attack against all of us.
However, it's even easier to just attack Lemmy.world, since almost everyone is there and it will cause maximum disruption for everyone. People have centralized on one server. :)
But it's just some denial-of-service attacks right now. Eventually they'll probably get tired of it too. There's no real point to it.
There is a GitHub issue on it, and I experienced the exact same thing with my instance. A timeout occurs, and the only way to fix it, it seems, is to restart. Like everyone else, I find it strange that it all happened at the same time.
It's not that strange. A timeout occurs on several servers overnight, and maybe a bunch of Lemmy instances are all run in the same timezone, so all their admins wake up around the same time and fix it.
Well, it's a timeout, so by fixing it at the same time, the admins have "synchronized" when timeouts across their servers are likely to occur again, since it's tangentially related to time. They're likely to all fail again around the same moment.
It's kind of similar to the thundering herd problem, where a bunch of things getting errors will synchronize their retries into a giant herd and strain the server. It's why good clients add exponential backoff AND jitter (a little bit of randomness to when the retry is done, not just every x^2 seconds). That way, if you have a million clients, it's less likely that all 1,000,000 of them will attempt a retry at the exact same time just because they all got an error from your server at the same moment it failed.
Edit: looked at the ticket and it's not exactly the kind of timeout I was thinking of.
This timeout might be caused by something that's loosely a function of time or resource usage. If it's resource usage, then because the servers are federated, those spikes might happen across servers as everything pushes events to subscribers. So failure gets synchronized.
Or it could just be a coincidence. We as humans like to look for patterns in random events.
This probably makes more sense, although the issue I was experiencing earlier had logs similar to the issue I linked, and others have commented on it around the same time too. I'm guessing they're related.
I get that message pretty much every time I visit a community that I haven't visited before (or maybe one that no one else from my instance has visited before).
It gets fixed on a refresh (like the message suggests), so I'm guessing it's a problem with timeout settings when my instance has to pull in a lot of new data.
Everything is back up now. I use Connect for Lemmy, and it actually tells you why an instance is not working. Lemmy.world was under maintenance, and the other one was just down.
For those affected by these outages on the larger servers who'd be interested in helping spread the load around: my instance will be free for the first 250 registrations. There are ~220 spots still left.
The catch is that my registration process is (purposefully) difficult to avoid squatters/spammers/bots. So you need to do one of the following:
Sign up to my main portal site first. If you do that, please give your email address so that I can send a confirmation with your first credentials.
Sign up to communick.news directly, and send me a DM on reddit (/u/rglullis), here, or on Mastodon with the username you used to sign up. If you don't send the DM, I will assume it's a bot and will deny the application.
Edit: downvoters, please don't be so cynical. I've been offering this since before the reddit blackout. What is so bad about it?
When vlemmy.net disappeared (and it was the only one I had registered to), I registered at lemmy.world and sh.itjust.works, and while I was trying to register at lemmy.ml, I registered at feddit.nl by mistake. (Then I requested a login at lemmy.ml, but never got a confirmation.)
At the moment, feddit.nl is the only instance I have a login at that I can use.
That was a happy mistake :-)
Maybe the problem is me. If feddit.nl goes down, I'll know for sure.