State of the shork!

So it's been a few days, where are we now?

I also thought, given the technical inclination of a lot of our users, that you might be somewhat interested in the what, how and why of our decisions here, so I've included a bit of the more techy side of things in my update.

Bandwidth

So one of the big issues we had was the heavy bandwidth usage caused by a massive amount of downloaded content (not in terms of storage space, but in terms of many people downloading the same content).

In terms of raw numbers, we were seeing the top 10 single images resulting in around 600GB+ of downloads in a 24-hour period.

This has been resolved by setting up a frontline caching server at pictrs.blahaj.zone, which sits on a small, unlimited 400Mbps connection and runs a tiny Caddy cache. That cache reverse proxies to the actual lemmy server and caches the images locally in a file store on its 10TB drive. The nginx in front of lemmy 301 redirects internet-facing static image requests to the new caching server.
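For the curious, the request flow now looks roughly like this: a client asks lemmy for an image, nginx answers with a 301 to the cache, and the cache either serves its local copy or fetches it once from lemmy. The sketch below (Python with the requests library; the image path is a made-up placeholder, not a real file) just shows how you can watch those two hops from outside:

```python
# Rough sketch of checking the new image request flow from the outside.
# The image path is a made-up placeholder; only the hostnames match the post.
import requests

image_path = "/pictrs/image/example.webp"  # hypothetical static image path

# 1. nginx in front of lemmy should answer with a 301 to the caching server.
first_hop = requests.get(
    "https://lemmy.blahaj.zone" + image_path,
    allow_redirects=False,
    timeout=10,
)
print(first_hop.status_code)               # expect 301
print(first_hop.headers.get("Location"))   # expect https://pictrs.blahaj.zone/...

# 2. The caching server serves from its local 10TB file store when warm,
#    otherwise it reverse proxies to lemmy once and keeps a copy.
second_hop = requests.get("https://pictrs.blahaj.zone" + image_path, timeout=10)
print(second_hop.status_code)              # expect 200
```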

This one step alone is saving over $1,500/month.
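For a rough sense of where that number comes from, assuming a typical AWS egress rate of about $0.09/GB (an assumption for illustration, not our exact bill), the top 10 images alone work out to:

```python
# Back-of-the-envelope estimate of the egress cost the cache now absorbs.
# The per-GB rate is an assumed typical AWS egress price, not our invoice.
top10_daily_gb = 600        # ~600GB/day of downloads for the top 10 images
aws_egress_per_gb = 0.09    # assumed USD per GB of AWS data transfer out
days_per_month = 30

monthly_cost = top10_daily_gb * aws_egress_per_gb * days_per_month
print(f"~${monthly_cost:,.0f}/month")   # ~$1,620/month, in line with the $1,500+ saving
```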

Alternate hosting

The second step is to get away from RDS and our current fixed instance hosting to stand-alone, self-healing infrastructure. This is what I've been doing over the last few days: setting up the new servers and configuring the new cluster.

We could be doing this cheaper with a lower-cost hosting provider and a less resilient configuration, but I'm pretty risk averse, and I'm comfortable that this will be a safe configuration.

I wouldn't normally recommend this setup to anyone hosting a small or single-user instance, as it's a bit overkill for us at this stage, but in this case I have decided to spin up a full production-grade kubernetes cluster with stacked etcd inside a dedicated HA control plane.

We have rented two bigger dedicated servers (64GB RAM, 8 CPUs, 2TB RAID 1, 1Gbps bandwidth) to run our 2 databases (main/standby), redis, etc. on. The control plane is running on 3 smaller instances (2GB RAM, 2 CPUs each).
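Once the cluster is up, you can eyeball that topology with a few lines against the Kubernetes API. This sketch uses the official kubernetes Python client and assumes standard kubeadm-style role labels:

```python
# Sketch: list the cluster's nodes and show which ones form the HA control plane.
# Assumes the official kubernetes Python client and kubeadm-style role labels.
from kubernetes import client, config

config.load_kube_config()      # read the local kubeconfig
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    role = "control-plane" if "node-role.kubernetes.io/control-plane" in labels else "worker"
    cpu = node.status.capacity.get("cpu")
    mem = node.status.capacity.get("memory")
    print(f"{node.metadata.name:<24} {role:<13} cpu={cpu} mem={mem}")

# Expected shape here: 3 small control-plane nodes (2 CPU / 2GB each, each also
# running a stacked etcd member) and 2 big workers (8 CPU / 64GB) for the
# postgres main/standby, redis, and the other workloads.
```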

All up, this new infrastructure will cost around $9.20/day ($275/m).

Current infrastructure

The current AWS infrastructure is still running at full spec and (minus the excess bandwidth charges) is still costing around $50/day ($1500/m).

Migration

Apart from setting up kubernetes, nothing has been migrated yet. This will be next.

The first step will be to get the databases off the AWS infrastructure, which will be the biggest bang for buck, as the RDS is costing around $34/day ($1,000/m).
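One plausible way to do that first hop is a plain dump and restore during a maintenance window. The sketch below is not necessarily how we'll actually run it, and the hostnames, user and database name are placeholders:

```python
# Sketch of a straightforward dump-and-restore from RDS to the new postgres
# primary. All hostnames, the user and the database name are placeholders,
# and a real migration would happen with lemmy stopped.
import subprocess

RDS_HOST = "old-rds.example.amazonaws.com"   # placeholder
NEW_HOST = "db1.internal.example"            # placeholder
DB_NAME = "lemmy"                            # placeholder
DB_USER = "lemmy"                            # placeholder

# Dump from RDS in custom format (compressed, restorable with pg_restore).
subprocess.run(
    ["pg_dump", "-h", RDS_HOST, "-U", DB_USER, "-Fc", "-f", "lemmy.dump", DB_NAME],
    check=True,
)

# Restore into the new primary; the standby then catches up via replication.
subprocess.run(
    ["pg_restore", "-h", NEW_HOST, "-U", DB_USER, "-d", DB_NAME, "--no-owner", "lemmy.dump"],
    check=True,
)
```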

The second step will be the next biggest machine, which is our Hajkey instance at Blåhaj Zone, currently costing around $8/day ($240/m).

Then the pictrs installation, and lemmy itself.

And finally everything else will come off and we'll shut the AWS account down.


20 comments
  • I tried to update my profile picture for the first time since the migration and now I don't have a profile picture at all. Has anyone else noticed issues? The image upload was attempted from the webui settings page at lemmy.blahaj.zone.

    • Yeah, same here, assuming it's just a migration hiccup

      • @NoStressyJessie@lemmy.blahaj.zone Just log files filling up a partition. It should be good to go again now

        • I'm trying. Maybe I messed up when I converted the file, but it shows as a broken image. When I go to the web address where the image should be hosted, it says

          {"msg":"Error in MagickWand, ImproperImageHeader `/data/pict-rs/files/jhLII3k5jz.png' @ error/png.c/ReadPNGImage/4286"}

          Edit: Same kind of error for jpg

          {"msg":"Error in MagickWand, InsufficientImageDataInFile `/data/pict-rs/files/gnUPYJkCuT.jpg' @ error/jpeg.c/ReadJPEGImage_/1112"}

          The first image I exported from GIMP; the 2nd picture was converted online. It seems unlikely I botched 2 separate conversion attempts using separate utilities.
