Upgrading a self-hosted server (episode 2)
- Episode 1: Introduction and plans
- Episode 2: Hardware upgrades and installing Debian stable
- Episode 3: Installing Docker and basic containers (multimedia, files, printing)
Adding a secondary disk to the machine
I'd like to keep the old system around while tinkering with the new one, just in case something goes south. It's also full of config files and scripts that are still useful.
I was planning to use a spare SSD I had lying around, but it turned out to be a bit more involved than I'd expected. The machine has two M.2 slots, with the old system disk occupying one of them, and 6 SATA ports on the motherboard, all of them in use by the 6 HDDs. The spare SSD would need a 7th SATA port.
I could go get another M.2 but filling the second M.2 takes away one SATA channel, so I would be back to being one port short. 🤦
The solution was a PCI SATA expansion card which I happened to have around. Alternatively I could've disconnected one of the RAID arrays and freed up some SATA ports that way.
Taking a system backup
So at this point I finally have two usable SSDs, one of which has the current system on it and is bootable.
What I'm proposing to do is make a 1-to-1 copy to the spare SSD and make it bootable too, then install Debian to the main SSD (the M.2).
It's simple because I used a single partition for the old system, so there are no separate partitions for things like /home, /var, /tmp, swap etc.
- Used fdisk to wipe the partition table on the backup SSD and create one primary Linux partition across the whole disk. In fdisk that basically means hitting Enter-Enter-Enter and accepting all defaults.
- Used mkfs.ext4 -m5 to create an Ext4 filesystem on the backup SSD.
- Mounted the backup SSD partition and used cp -avx to copy the root filesystem to it.
I've seen many people recommend dd or Clonezilla for the copy, but that also duplicates the filesystem UUID, which you then need to change or there will be trouble. Others go for rsync or piping through tar, but a simple cp -ax is perfectly fine for this.
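Putting those steps together, the whole copy might look something like this (/dev/sdX1 and /mnt/backup are just placeholders; check lsblk for the backup SSD's actual device name):
// after creating a single partition on /dev/sdX with fdisk
# mkfs.ext4 -m5 /dev/sdX1
# mkdir /mnt/backup
# mount /dev/sdX1 /mnt/backup
// -x keeps cp on the root filesystem, so it skips /proc, /sys and the backup mount itself
# cp -avx /. /mnt/backup/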
Making the backup disk bootable
- Used blkid to find the UUID of the backup SSD partition and changed it in the backup's /etc/fstab.
- If you have other partitions like /home, /var etc. you would do the same for each of them.
- For swap I happen to use one that needs no special handling. It's a /swapfile in the root partition which I created with dd, formatted with mkswap, and mounted with /swapfile swap swap defaults 0 0 in /etc/fstab. That file was simply copied by cp so there's nothing else needed.
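For reference, a swap file like that gets created with something along these lines (the 2G size here is just an example, not what I actually used):
# dd if=/dev/zero of=/swapfile bs=1M count=2048
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
...plus the /swapfile swap swap defaults 0 0 line in /etc/fstab mentioned above.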
Next is installing grub on the backup SSD. You need to chroot into the SSD for this one, as well as mount --bind /dev, /sys and /proc into the chroot.
Here's an example for that last point, taken from this nice reddit post:
# mkdir /mnt/chroot
# mount /dev/SSD-partition-1 /mnt/chroot
# mount --bind /dev /mnt/chroot/dev
# mount --bind /proc /mnt/chroot/proc
# mount --bind /sys /mnt/chroot/sys
# mount --bind /tmp /mnt/chroot/tmp
# chroot /mnt/chroot
// Verify that `/etc/grub.d/30_os-prober` or similar is installed and executable
# update-grub
# grub-install /dev/SSD-whole-disk
# update-initramfs -u
I verified that I could actually boot into the backup disk before proceeding.
Installing Debian stable on the main disk
(I've decided to go with a regular install rather than debootstrap after all.)
Grabbed an amd64 ISO off Debian's website and put it on a flash stick. I used the graphical gnome-disk-utility app for that, but you can also do dd bs=4M if=image.iso of=/dev/stick-device. Usual warnings apply – make sure it's the right disk, umount any pre-existing flash partition first, etc.
Booting the flash stick should have been uneventful, but the machine would not see it in the BIOS or in the quick-boot BIOS menu, so I had to poke around and disable some kind of secure feature (which will probably be different on your BIOS, so good luck with that).
During the install I selected the usual things like keymap, timezone, the disk to install to (the M.2, not the SSD backup with the old system), chose to use the whole disk in one partition as before, created a user, disallowed login as root, and requested explicit installation of an SSH server. Networking works via DHCP so I didn't need to configure it.
First access
SSH'd into the machine using user + password, copied the public SSH key from my desktop machine into the new Debian's ~/.ssh/authorized_keys, then disabled password logins in /etc/ssh/sshd_config and did a service ssh restart. Made sure I could log in with the public key before disconnecting that root session.
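The relevant line in /etc/ssh/sshd_config is just this:
PasswordAuthentication no
(And for copying the key over, ssh-copy-id user@machine does the same job as editing ~/.ssh/authorized_keys by hand.)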
Some other things I installed directly on the OS: ntpdate, joe, mc, apt-file, net-tools, htop, vainfo, curl, smartmontools, hdparm, rsync.
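In one go, that's something like:
# apt install ntpdate joe mc apt-file net-tools htop vainfo curl smartmontools hdparm rsync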
Mounting RAID arrays
The RAID arrays are MD (the Linux software RAID driver), so they should already have been detected by the Linux kernel. A quick look at /proc/mdstat and /etc/mdadm/mdadm.conf confirms that the arrays have indeed been detected and configured and are running fine.
All that's left is to mount the arrays. After creating the mount point directories under /mnt I added them to /etc/fstab with entries such as this:
UUID=array-uuid-here /mnt/nas/array ext4 rw,nosuid,nodev 0 0
...and then systemctl daemon-reload to pick up the new fstab right away, followed by mount -a as root to mount the arrays.
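For one array the whole sequence is something like this (md0 being whatever name shows up in /proc/mdstat):
# mkdir -p /mnt/nas/array
// find the UUID to put in /etc/fstab
# blkid /dev/md0
// after adding the fstab entry
# systemctl daemon-reload
# mount -a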
Publishing the arrays over NFS
Last step in restoring basic functionality to my LAN is to publish the mounted arrays over NFS. My desktop machine can function fine without NFS, but it's nice to have all the arrays available. I used nfs-kernel-server for this.
Please note that Debian and Ubuntu have a long-standing issue with NFS where it fails to come up properly after a reboot. According to this discussion the most reliable solution is to make the service explicitly dependent on rpcbind by creating /etc/systemd/system/nfs-kernel-server.service.d/10-dep.conf with the following content:
[Unit]
Requires=rpcbind.service
After=rpcbind.service
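Being a systemd drop-in, it needs a reload to take effect:
# systemctl daemon-reload
# systemctl restart nfs-kernel-server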
To publish the arrays over NFS you add them to /etc/exports with entries such as this:
/mnt/nas/array desktop.lan(rw,async,secure,no_subtree_check,mp,no_root_squash)
Please note that each export names its allowed client explicitly, either by hostname or by IP (subnets and wildcards also work). After a service nfs-kernel-server restart the NFS shares are ready for use.
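To double-check what the server is actually exporting, something like this works:
# exportfs -v
# showmount -e localhost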
Now on the client machine side you would most likely install nfs-utils (that's nfs-common on Debian/Ubuntu clients) and define the NFS shares in /etc/fstab like this:
nas:/mnt/nas/array /mnt/nas/array nfs vers=4,rw,hard,intr,noatime,timeo=10,retrans=2,retry=0,rsize=1048576,wsize=1048576 0 0
The first path (with the nas: prefix) is the remote dir, the second is the local mount point. I usually mirror the locations on the server and clients but you can of course use different ones.
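With the fstab entry in place, the client just needs the mount point created and a mount (assuming the server resolves as nas on the LAN):
# mkdir -p /mnt/nas/array
# mount /mnt/nas/array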
Goals for the next episode
In the next episode I will attempt to install something non-essential, like Emby or Deluge, in a docker container. I intend to keep the system installed on the metal very basic: just ssh, nfs and docker on top of the barebones Debian install. Everything else should go into docker containers.
When working with docker I'd like to:
- Keep the docker images on the system disk but all configurations on a RAID array.
- Map all file access to a RAID array and network ports to the host. This includes things like databases and other persistent files, as long as they're important and not just cache or runtime data.
Basically the idea is that if I ever lose the system disk I should be able to simply reinstall a barebones Debian and redo the containers using the stored configs, and all the important mutable files would be on RAID.
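Just to illustrate the kind of mapping I have in mind, a hypothetical container for next episode could be started with something like this (image name, paths and port are only examples, loosely following the linuxserver.io convention of keeping state under /config):
# docker run -d --name deluge \
    -p 8112:8112 \
    -v /mnt/nas/array/appdata/deluge:/config \
    -v /mnt/nas/array/downloads:/downloads \
    linuxserver/deluge
If the system disk dies, the -v paths on the RAID array survive and the container can be recreated from them.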
See you next time, and meanwhile I appreciate any comments about what I could've done better as well as suggestions for next steps!