What's (are) the funniest/stupidest way(s) you've broken your linux setup?
Tinkering is all fun and games, until it's 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you're about to execute... And then all you have is a kernel panic and one thought bouncing in your head: "damn, what did I expect to happen?".
Off the top of my head I remember 2 of those. Both happened a while ago, so I don't remember all the details, unfortunately.
For the warmup, removing PAM. I was trying to convert my Artix install to a regular Arch without reinstalling everything. Should be kinda simple: change repos, install systemd, uninstall dinit and its units, profit. Yet after doing just that I was left with some PAM errors... So, I -Rdd'ed libpam instead of just using --overwrite. Needless to say, I had to go looking for a live USB yet again.
And the one that, at least, I find quite funny. After about a year of using Arch I was considering myself a confident enough user, and it so happened that I wanted to install something that was packaged for Debian. A reasonable person would, perhaps, write a PKGBUILD that would unpack the .deb and install its contents properly along with all the necessary dependencies. But not me, I installed dpkg. The package refused to either work or install, complaining that the version of glibc was incorrect... So, I installed glibc from Debian's repos. After the few seconds my poor PC probably spent staring in disbelief at the sheer stupidity of the meatbag behind the keyboard, I was met with a reboot, a kernel panic, and a need to find another PC to flash an archiso to a flash drive ('cause ofc I didn't have one at the time).
Perhaps not the same definition of "broken" that you're looking for, but when I first started using Linux, I was using Kubuntu as my first distro, having done some brief experimenting with Manjaro.
Anyway, back then, I for some reason had the Skype snap installed. Can't recall why I had it to begin with, but I decided later on that ofc I didn't need Skype, and of course uninstalled the snap.
A few days later, I was met with some storage issues, where I had a limited amount of storage left on my SSD. I'm sitting there a little confused since I swore I was using less storage, but I did a thorough cleaning of my computer by deleting files I didn't necessarily need and uninstalling any programs that I hardly ever used. That seemed to do the job, even if it freed up less space than I expected...
Until the next day, when the storage was full again. After getting some help from someone, I found that Skype, despite being uninstalled, was still running in the background, and found that there were residual files. The residual stuff running in the background was trying to communicate with what I had uninstalled, and logged multiple errors per second in a plaintext file that ended up being 176GB.
Whether I did something wrong or if there was something up with the snap, I still don't know as this was over a year ago and I was still learning the ropes of Linux at the time.
One I can remember from many years ago: the classic, trying to do something to a flash drive and dd'ing my main HDD instead.
Funny thing, since this was a 5400 rpm drive and I noticed relatively quickly (say 1-2 minutes), I could ctrl-c the dd, make a backup of most of my personal files (being very careful not to reboot), and after that I could safely reformat and reinstall.
To this day it amazes me how Linux managed to not crash with a half-broken root filesystem (I mean, sure, things were crashing right and left, but given the situation, having enough working to back up most things was like magic).
Which is why I like to add a set -u at the beginning of a script.
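For anyone who hasn't seen it, a minimal sketch of what set -u buys you (variable names made up for the example):

    #!/usr/bin/env bash
    set -u                        # abort on any reference to an unset variable
    backup_dir="$HOME/old-backups"
    # deliberate typo: $backup_dri was never set. With set -u, bash stops here
    # with an "unbound variable" error; without it, this expands to "/old" and runs.
    rm -rf "$backup_dri/old"

set -e and set -o pipefail are the usual companions.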
The second one is not with a Linux box but with a mainframe running AIX:
On Linux, killall java kills all Java processes; on AIX, it just ignores the arguments and kills every process the user is allowed to kill.
Adios, CICS region 😬 (on the test env, thankfully).
I can't even remember how I did this, but I somehow overwrote the partition table on the main production server at our small startup (back when "the server" would usually live on the premises of the startup). I remember my boss starting to hyperventilate from panic while I reconstructed it from memory / notes, and all the filesystems came back and he calmed down.
Same job, they gave me a little embedded-systems unit for me to use to build a prototype on. I hooked it up, nothing worked. I brought it back to them.
Hey, this one doesn't work.
Huh... that's weird, it was working before. Did you break it?
I don't think so. Can I have one that works?
They literally told me, as they were handing me the second one: Okay, here's another one. Don't break it.
I figured it out literally seconds after breaking the second one... I was hooking it up to 12 volts of power when it needed 5. Second dead computer. Explaining that and that I needed a third one now was fun.
I thoroughly backed up my slow NVMe drive before installing a new, faster one. I actually didn't even want to reuse the installation, just the files in /home.
So I mounted it at /mnt/backupnvme0n1, 2, etc. and rsynced.
The first few dry runs showed a lot of the data was redundant, so, in a stroke of genius, I thought "wow, I should delete some of these". And that's when I did a classic sudo rm -rf in the /mnt root folder instead of /mnt/dirthathadthoseredundantfiles.
I'm in the process of switching my main server over from Windows to Linux.
I went with Debian 12 and it all works smoothly, but I don't have enough room to back up the data to change the drive formats, so they're still NTFS. I was looking at my main media HDD and thought "oh, I'll at least delete those Windows partitions and leave the main partition intact."
I found out the hard way that NTFS partitions can't just reclaim space like that. It shuffles all the data when you change the partition. It's currently 23 hours into the job and it's 33% done.
I did this to reclaim 30 MB of space on a 14 TB drive.
Tinkering is all fun and games, until it’s 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you’re about to execute… And then all you have is a kernel panic and one thought bouncing in your head: “damn, what did I expect to happen?”.
Nah, that's when the fun really starts! ;)
The package refused to either work or install complaining that the version of glibc was incorrect… So, I installed glibc from Debian’s repos.
:D That one is a classic. Most distributions don't include package managers from other distros because 99% of the time it's a bad idea. But with Arch you can do whatever you want, of course.
My two things:
I've heard about some new coreutils (rm, cp, cat... this time the name really fits the contents :D) and I decided to test it out. Of course it was conflicting with my current coreutils package and I couldn't just replace it because deleting the old package would break requirements. So without thinking I forced the package manager to delete it "I'll install a new one in just a second". Turns out it's hard to install a package without cp, etc :D
I don't remember what I was doing, but I overwrote the first bytes of the HDD. Meaning my partition table disappeared. Nothing could be mounted, no partitions found. Seemingly a brick.
Turns out, if you run a rescue ISO and ask it to try to recognize the partitions and recreate the table without formatting, Linux will come back to life as if nothing happened.
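If anyone needs this someday: one common tool for that on rescue ISOs is TestDisk; roughly (device name made up, and I'm going from memory on the menu names):

    sudo testdisk /dev/sdX
    # in the menus: Analyse -> Quick Search (-> Deeper Search if needed) -> Write,
    # then reboot so the kernel rereads the recovered partition table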
First time trying Linux I went with an Arch install, because I googled "best version of Linux" and went with Arch. I followed a guide to the point of drive formatting and decided to go with a setup with drive encryption. I didn't understand what I was doing, ended up locking myself out of my hard drives, and couldn't get Windows to reinstall on them. I used a MacBook for a week until I installed Ubuntu and managed to wipe and reset my drives and reinstall. Needless to say I am going to read up a little more before I try that again.
Before installing Arch on a USB flash drive, I disabled ext4 journaling in order to reduce disk reads and writes, being fully aware of the implications (file corruption after unexpected power loss). I was confident that I would never have to pull the plug or the drive without issuing a normal shutdown first. Unfortunately, there was one possibility I hadn't considered: sometimes, there's that one service preventing your PC from turning off, and at that stage there's no way to kill it (besides waiting for systemd to time out, but I was impatient).
So I pulled the plug. The system booted fine, but was missing some binaries. Unfortunately, I couldn't use pacman to restore them because some of the files it relied on were also destroyed.
This was not the last time I went through this. Luckily I've learned my lesson by now
Found out the hard way that if you edit /etc/sudoers with anything other than visudo you best be absolutely sure the syntax is correct, otherwise sudo will refuse to read it and you'll be locked out.
Also learned to add -rf to the rm command at the end, after I've re-read it to make sure it does what it should do. Something like rm /path -rf instead of rm -fr /path. That protects you from your fat fingers hitting the enter key halfway through.
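For reference, a small sketch of both habits (paths made up):

    # visudo refuses to save a sudoers file with broken syntax
    sudo visudo
    # or at least syntax-check a hand-edited file before trusting it
    sudo visudo -c -f /etc/sudoers
    # "flags last": if Enter slips out before the flags, rm won't recurse into a directory
    rm /path/to/junk -rf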
I was playing around with Pygame of all things, and it wasn't behaving as the (apparently out of date) documentation was saying it should, so I figured I'd just uninstall and reinstall Python.
I've done plenty of stupid stuff, from dd'ing disks I was using, to forcefully uninstalling dependencies of the package manager. But the one that takes the cake for me happened back in 2012. I was working at a research lab in the university and was sharing a computer with another intern. That other intern used Gentoo, so we agreed that the machine should be Gentoo; I'd installed it at home on my own PC and gotten comfortable with it before we shared that computer. One thing I learnt when installing Gentoo is that the /dev folder is created on boot: you don't populate it when installing, instead you mount the one from the host system you're using to install.
The computer had an issue with a device, can't even remember what it was, so I thought I'd run rm -rf /dev, that should take care of the issue, and after a reboot it would be repopulated... It might have worked, but what I actually ran was rm -rf /etc.
Years ago I was dual-booting with Ubuntu just to try out whatever this Linux thing was that all the nerds were talking about. Liked it and played around with it, but for whatever reason I wanted to go back to just Windows; I needed the space I had partitioned off or something, can't remember why. So I just uninstalled or deleted the bootloader somehow (maybe I just deleted the Linux partition and expected the space to clear up like normal).
Go to restart the computer… oh shit. Ohshitohshitohshitohshit.
I wanted to use fio to benchmark my root drive. I had seen a tutorial saying that the file= parameter should point to the device file, so I pointed it at /dev/sda. As you might expect, the write test didn't go so well.
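For anyone else benchmarking: a sketch of safer ways to run it (file names and sizes made up); fio also has a --readonly safety switch for when you really want to point it at a device:

    # benchmark against a throwaway file on the filesystem, not the raw root device
    fio --name=randread --filename=$HOME/fio-scratch --size=1G \
        --rw=randread --bs=4k --iodepth=16 --direct=1 --runtime=30 --time_based
    # if it has to be a block device, --readonly makes fio refuse any write workload
    fio --name=devread --filename=/dev/sdX --readonly --rw=read --bs=1M --runtime=30 --time_based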
I don't know if that counts, but on a fresh default Debian Stable system, my cat walked across the keyboard and the DE crashed.
I could still switch to another TTY and reboot via command line.
After the reboot, I was greeted by a blinking cursor and nothing else. Had to reinstall.
My own classic was fiddling with the nvidia PRIME config to try and get rid of some very mildly irritating screen tearing. No graphics output at all. Now this is fixable of course, but it's a pig.
And I'd decided to do this 2 hours before an incredibly important progress review meeting for my PhD.
Got it back with about 10 mins to spare and decided just to leave the driver config alone after that.
Bonus round
Also a friend managed to bork his ubuntu 16 laptop by trying to switch from unity to gnome and ending up with sort of neither. That was reinstall territory right there.
I have a stupid one, but far from funny.. I've been using and building computers for a very long time so I'm far from a noob, but I'm still quite cautious, bordering on paranoid, so I like to unplug all other drives when re/installing an OS just to avoid stupid mistakes. I go through the installer on the livecd, there's only one drive to choose from so I don't think much about it, select that it should erase everything, I set up the new partition structure, and start the process. After about a minute I begin wondering "why is it taking so long?", and "what is that ticking noise? SSDs shouldn't be making any sounds when written to", when I realize that I had unplugged the wrong drives and that I was currently overwriting my main storage drive. Of course I had backups of the most important things like photos and code, though not really synced for a couple of months so I lost some stuff permanently.
This was pre-linux for me but something you can still do in most distros so I think it's a valid story.
In 1999 I was using Napster on a computer running MS-DOS. I was 12 years old and an aspiring open media enthusiast/stupid script kiddie. I was using the file explorer interface in Napster and accidentally gave access to my entire C drive. I also had opened ports to share certain media and to fuck with my friends using daemon tools (back then you could do stupid stuff like control a friend's desktop with certain versions of daemon tools). Immediately I started receiving packages called things like "sleep.tight.tiny.mite" and I knew I was fucked, so I clicked in the Napster interface, clicked "delete", and deleted my entire active drive.
I panicked and installed the only operating system we had which was a random copy of Red Hat. When my dad came home I pretended like it had always had Linux on it. I do think he was more impressed than mad.
I copied a program into the /bin/ folder while in a file browser with sudo permissions and somehow overwrote every file except the one I was moving.
It, of course, couldn't boot, but copying the bins from a live ISO made it at least bootable. Reinstalled Linux after that, of course.
This one took a stupid amount of time to debug; on the other hand, when GRUB failed, it did so with "can't find any bootable thingy" and not "missing configuration file", as, in my later opinion, it should have.
Oh, I just remembered another one or three. So, resizing the partitions. My install at the time had a swap partition that I didn't need anymore. Should be simple, right? Remove the partition and the corresponding fstab entry, resize root, profit. Well, the superblock disagreed. Fortunately, I was lucky enough to be able to re-create the scheme as it was, and then take my time to read the wiki and do the procedure properly (e2fsck, resize2fs and all that stuff).
Some people I've met since, unfortunately, weren't so lucky (as far as I remember, both tried to shrink and were past mkfs already) and had to reinstall. The moral is, one does not simply mess with superblocks; read the wiki first!
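For posterity, the shrink order the wiki walks you through, roughly (device name and size made up):

    # shrinking ext4: filesystem first, partition second, never the reverse
    umount /dev/sdXN
    e2fsck -f /dev/sdXN            # filesystem must be clean before resizing
    resize2fs /dev/sdXN 100G       # shrink the filesystem to (or below) the target size
    # then shrink the partition in fdisk/parted, keeping it at least as large as
    # the filesystem, and run resize2fs again with no size to fill the partition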
Not the installation strictly speaking, but my most "funny" fuckup was setting up XFree86. There was a configuration for the CRT monitor scan frequency that you had to set up. I messed something up and the monitor started to squeal like crazy, and I quickly hit hard reset in a panic.
The monitor didn't die, but it had a slight high-pitched noise to it afterwards.
Nooo I have so many.. This one I can explain in English:
Xubuntu but blind
So, this is ~2016. Ubuntu is hip and a handful of my students use it. On my PCs I only use Debian and SUSE. So to help them better I take out an old ASUS laptop and install Ubuntu on it; I decide to try out Xubuntu instead.
At that time I was also huge into alternative keyboard layouts. I had a slightly modified Neo keyboard layout installed when I switched to Xubuntu.
Here the fun starts because the obscure internal graphics card built into the laptop didn't have driver support under Xubuntu. Black screen but I could hear it working. This was the hardest driver fix I ever did. No monitor and a keyboard layout I wasn't used to, under a Linux distro I wasn't used to. And I also was at the university library, so no hardware support or Debian stick in reach.
I once deleted the network system in Alpine. I'd been having some trouble with the default one (I think wpa_supplicant), so I decided to try the other one (I think iwctl). But I thought that there might be problems with having both of them, so before I installed iwctl I deleted wpa_supplicant (thinking that it was more of a config utility than the whole network system), only to find that I couldn't connect to the internet to install iwctl.
On openSUSE, in the YaST bootloader tool, there is a checkbox to do something like locking the bootloader (it has been a while, I don't remember the exact thingy).
Rebooted and, oh, surprise, the bootloader was locked... Which meant GRUB didn't load.
A few years ago I spent a lot of time converting .flac files into .ogg files to put on my oldschool iPod.
As I did a lot of repetitive typing - entering $dir / for file in *.flac ; do convert etc / mkdir -p $somewhere/$artist/$album / mv $somewhere/*.ogg -> $new_dir/
and so on - I thought:
"hm, let's just write a loop over loops for all the artists here and then all the albums, and at the same time create the nested directories somewhere else... hm, actually in the home directory.... and later move everything onto the iPod at once."
So I was in my music folder with the artist folders I wanted to convert, and I did my complicated script directly in the shell. I got something wrong, and instead of creating a folder "~/artist/album" I created 3 folders in my current working directory: "~", "artist" and "album".
Hmph, dammit, gotta try again... but first: I have to clean up these useless folders in the current dir.
So of course I type this:
"$ rm -r ~ artist album "
After about 5 seconds of wondering why it took so long I realized my error. o_O
I stopped the running command, but it was (of course) too late and I bricked my current installation. All the half-deleted config files made it impossible to start normally and extremely tedious to repair by hand, so I reinstalled.
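For the record, roughly what I was going for, with the destination built as a single quoted path so mkdir only ever sees one argument (paths made up, ffmpeg standing in for whatever converter was actually used):

    #!/usr/bin/env bash
    set -u
    src="$HOME/Music"
    dest="$HOME/ipod-staging"
    for artist in "$src"/*/; do
      for album in "$artist"*/; do
        out="$dest/$(basename "$artist")/$(basename "$album")"
        mkdir -p "$out"                      # one quoted argument: the full nested path
        for f in "$album"*.flac; do
          [ -e "$f" ] || continue            # skip albums with no flacs
          ffmpeg -i "$f" -c:a libvorbis -q:a 5 "$out/$(basename "${f%.flac}").ogg"
        done
      done
    done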
I set up 2FA via a hardware security key (a YubiKey) for login, sudo, etc. I then tried to switch security keys, removing the old pam files and adding a new one. But I didn't tidy the pam files up before logging in, and there was effectively no way to log in, since editing the pam files required sudo access in the first place. So basically the whole system required access to a pluggable authentication module that it no longer had any ability to recognize. It was honestly pretty funny. I did manage to recover my data by booting from a live system and decrypting my drive from there.
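The lesson I took from it, sketched out (the pam_u2f line is just an example, from Yubico's pam-u2f module): keep one authenticated root shell open and prove the new stack works from a second terminal before logging out.

    # example line for /etc/pam.d/sudo (module name from the pam-u2f package)
    # auth    sufficient    pam_u2f.so cue
    # then, in a *second* terminal, while the first one still has a root shell:
    sudo -k        # throw away cached sudo credentials
    sudo true      # this has to succeed with the new PAM stack before you log out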
I've also accidentally removed my desktop environment twice while trying to update Python versions and then cleaning up old packages, but that's kinda not that big a deal and is just a facepalm moment.
Installed python3 before it was made the native Python on the distro. Half broke everything, including apt & Python. So I uninstalled it, and then everything was broken. Finally got python3 reinstalled, and lived with it kind of working & awful distribution updates.
I finally freed myself of that prison last month, by nuking everything and starting fresh.
Linux Mint: removed all taskbars from the desktop. I was hoping it would just allow me to reset them to the default. But in reality, it breaks the GUI and it's very hard to reset from the GUI. Suddenly my keystrokes weren't being detected and I couldn't open up applications with any sort of regularity. After a lot of dicking around, I got the terminal working so I could reset Cinnamon.
It's not the worst way I've broken a machine. But it was one of the most annoying.
I don't remember what I was trying to achieve, but it was a bad idea. I also didn't (and still don't) know how to fix the outcome of this, so - since my home was on a separate partition anyway - I just reinstalled Debian, which was much quicker.
I didn't break anything, but there was this one time I was setting up a new LXC container I had just spun up. I installed nginx and a bunch of other packages, started writing new config files.... Then I noticed my prompt was user@desktop$ instead of user@server$.
Whoops... I was in the wrong terminal window, typing commands into my desktop instead of the container I was setting up.
I don't remember what I did when I was stoned. The next day I tried to do a normal sudo dnf install and it didn't recognize any command anymore. I tried restarting, and then I couldn't log in anymore because the login scripts didn't work. Not that funny, but it just happened, and it's the weirdest way I've broken it.
Man, this was a few months back. I've got Fedora Asahi Linux (Linux on an ARM Mac) and I was trying to install PyCharm to play a bit with Python. Unfortunately, they did not have it packaged for ARM, so I had to download a pre-compiled tar or zip archive. I test it, see that it is an assortment of bin folders and the like, and decide to put it all elsewhere so it wouldn't get lost. So I put it on the root and merge the folders. I think immediately "wait, this is stupid" and decide to get PyCharm out of there (I was on Nautilus with root privileges), so I simply Ctrl-Z outa there. It shows a warning asking whether I want to delete 4000 files, but because I am an idiot, I didn't realise what that meant. So I did it. I then continue on with my life, and find myself unable to open apps. I was fairly confused, as the apps I already had open still worked. I decide to try to restart the laptop. It is when I see that there is no restart button anymore that I realise what I did, and I just think to myself: I'll be damned if this survives a restart, I'm already screwed so it doesn't matter. (It didn't survive the reboot, had to install from scratch. At least it was an excuse to use the K Desktop Environment.)
At one point I had the coolest Ventoy USB; CyberRe, LABEL=hakr. But then I got a new computer and apparently the SSD was /dev/nvme0n1 instead of /dev/sda. While I was installing Arch, when I created a new GPT partition on /dev/sda, it wiped my beautiful Ventoy 😢
So I am sort of an embedded developer, and I like to mess around with weird configurations. The craziest experiment I did was trying to reflash a Raspberry Pi from a system running in the Pi's RAM. It honestly might have worked, but during the prep work I forgot to resize the filesystem before mucking with the partitions, and had to reflash the normal way before I could try again. Ended up just turning it into a Pi-hole instead, but I still learned a lot about pivot_root.
One day on my main Arch installation I created a container inside a directory and "booted" into it using systemd-nspawn. When I was done with it I decided to do a rm -rf / inside the container, just to be funny. Then I noticed that my DE on the host froze and I couldn't do anything. Then I realized that systemd-nspawn mounts some important host directories into the container, and I had deleted those when I did the rm -rf /. I didn't lose anything, but it was scary.
It was my first time using a Linux GUI. I was comfortable with CLI, but it was my first time having it installed on a laptop instead of just sshing into a server somewhere.
So naturally, instead of learning how the GUI worked, I tried changing it to be exactly like Windows. I was doing things like making it so I could double click shell scripts and other code files and they would run instead of opening them up in an editor. I think you see where this is going, but I sure as hell didn't.
Well, one of my coworkers comes over and asks me to run this code on this device we were developing. We were still in the very early stages of development, we didn't even have git set up, so he brought the code over on a USB stick. I pop it into my laptop and go to check the code by double-clicking on it, expecting it to open in an editor... Only it ran the code that was written for our device on my laptop instead of opening in an editor.
To this day, I have no idea what it did to fuck my laptop so bad. I spent maybe an hour trying to figure out what was wrong, but I was so inexperienced with Linux, that I decided to just reinstall the OS. I had only installed it the day before anyway, so I wasn't losing much.
Back when I used Ubuntu, Unity was stuck with old GNOME packages. This meant that the version of gnome-terminal packaged with Ubuntu (up to at least 18.04) didn't have text reflow on window size changes.
You could add the upstream sources, upgrade the specific text reflow package only, and then disable the sources.
I forgot to disable the sources, or typed dist-upgrade (this happened multiple times...). Broke the whole desktop/lightdm setup with half-upgraded packages and half-removed packages (in preparation for installing new versions). Way easier to reinstall the OS than to disentangle. Unity was a mess then anyway.
Moral: Actually read the package change summaries when doing updates/removes/installs, and [ y/N ] means actually check what the fuck you think you're agreeing to.
BtrFS snapshots for idiots
I've also run automated snapshots on my btrfs partition, then run out of space doing a multi-hop system upgrade on Fedora (dnf has a plugin that creates a snapshot every time it kicks in).
You can imagine there were many changes happening per snapshot, and I effectively could have rolled back 4 major Fedora versions... 'til I ran out of space.
I couldn't get a replacement drive in time, and I had an hour to rebuild my laptop before needing to be on a customer site, so sadly I couldn't preserve my drive for later investigation. My best guess is the high-water-mark was configured incorrectly, and somehow it was able to 'write' data past the extents of the filesystem.
Rollback did work for my home partition, but I had to mount it from another OS to get it to work - so no data loss!
By that time I'd already reinstalled the os to the root partition/subvolume however, so I couldn't determine the exact cause of failure :(
Moral: Snapshots are not backups, and 'working' is not 'tested'
Built a new desktop, backed up everything on my old laptop, next step was to format an Arch installer USB. Instead of formatting the USB, I formatted my laptop's /boot partition. No big loss since I had the backup and was done with that old toaster, but oops.
I once did an apt-get upgrade right in the middle of Debian testing recompiling all packages and moving to a new GCC version. I get it, using testing invites stuff like this. But come on, there should at least be a way to warn people beforehand.
@fl42v I have thousands from my early days, but my only recent-ish one was pretty funny.
On an Arch install that hadn't been updated for a while, in a rush, had an app that needed OpenSSL 3. Instead of updating the whole system, I just updated the openssl package.
*Everything* broke immediately. Turns out a lot of stuff depends on openssl. Who knew?
To fix, booted to the arch installer, chrooted into my env, and reverted to the previous version of the package — then updated properly.
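In case it helps someone, that fix looked roughly like this (device names and the cached package filename are made up):

    # from the Arch installer
    mount /dev/nvme0n1p2 /mnt
    mount /dev/nvme0n1p1 /mnt/boot
    arch-chroot /mnt
    # roll back to the previous version still sitting in the package cache...
    pacman -U /var/cache/pacman/pkg/openssl-<previous-version>-x86_64.pkg.tar.zst
    # ...then do the full update so everything moves together
    pacman -Syu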
I ran firejail config or something, which replaces a lot of home directory app files. Not sure if binaries or desktop entries.
But things broke randomly: screenshots not working, not even inside Firefox, etc. I reinstalled the system and imported the home folder... and the problem was there again!
I was trying to extract some files from a Linux image for one of those ARM boards. It was packed in the cpio format, and I had never used the format before. Of course I was trying to extract to a root-owned directory and I sudo'ed it. I effed up the command and overwrote all my system directories (/bin, /usr, /lib, etc...). Thankfully I had backed up my system recently and was able to get it working again.
I set up a progressive backup of my home folder... to my home folder. By the time I got home that day it was impossible to log in because there was no room to create a login record. Had to fix that by deleting the backup file using a live CD.
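If the backup really has to live under the home folder, rsync can at least be told to skip its own destination; a sketch (paths made up):

    # exclude the destination so the backup doesn't recursively eat itself
    rsync -a --exclude='/backups/' "$HOME/" "$HOME/backups/home-$(date +%F)/"
    # better: put it on a different filesystem entirely
    rsync -a "$HOME/" /mnt/external/home-backup/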
I was new to Linux and made the not-so-calculated decision to use Manjaro as my daily. I deleted Xorg in an attempt to reinstall it and hopefully fix the stuttering. Everything went wrong: no display, obviously, and the /boot/ files were corrupt. I now use Arch and am wiser.
Once I succumbed to a proprietary software's allure, post-usage, I felt like a digital pariah! To rid myself of the taint, I wiped my system clean – reinstall time!
I can't remember anymore... Let me explain... My first computer came with the at-the-time-very-new Windows XP, used primarily for games; after some time it got bloated with stuff, so I had to reinstall again and again over time. Then I discovered Red Hat, CentOS and Debian... I started heavily distro hopping. My passion for software grew to the point that I was installing new software on a daily basis, just to explore new things. But nothing seemed stable enough; Ubuntu, Fedora, Sabayon, Gentoo, Arch... and their derivatives all broke under my fingers, to the point that I had to do more fixing than discovering new software. I took it as a challenge and continued. At around the time of university I discovered NixOS, and as with any new technology I went head on with it. It took a lot of trial and error, since at the time there was no documentation for any of it. I spent months reading the code, but I never gave up, since what I had found was a gem. I found the OS that is resistant to my curiosity; I just can't seem to be able to break it. Now I use NixOS everywhere that I can, even on my work computer. I do not need to reinstall after the initial installation. Well... only when hardware fails...
It was only in a container on a Chromebook, but I'll share it anyway. One time, I had installed Android Studio but found it mildly annoying that when using apt I got a line about Android Studio and some error on a certain line of this one file. I believe the file was something related to dpkg, and after changing some things within the file, I seemed to have broken apt. Luckily, I had a backup, but it was a few days old, so I had to reinstall some apps.
I had issues with a new version of glibc that prevented me from working on music in Ardour on Manjaro. I then proceeded to force-downgrade glibc (in the hopes of letting me get back to work) and that broke sudo and some other things, which I found out after rebooting. That was an interesting learning experience. Now I snapshot before I do stupid stuff. :]
I'm not sure how funny this will be, but here's how I broke my system twice in a single case.
Step by step:
Migrated from Manjaro KDE to EndeavourOS KDE. Kept the previous home directory.
After a few updates, there was a problem with Plasma. Applications were not starting from the panels or the .desktop files (they worked from the terminal. The terminal emulator was in startup and worked that way)
After a few google searches, found out that downgrading glibc would do something, so downgraded... Worked for a while
While using pacman -Syu, I always checked for warnings (foolishly thinking that the downgraded and ignored glibc would cause a pacman warning if it broke dependencies), and there were none. So, the updated OS stopped working due to the mismatched glibc. BREAK 1
To fix it, I opened one of my multiple boots (another EndeavourOS) and made a script using pacman -Ql and cp to copy the new glibc-related files into the broken system (because I was too lazy to learn how to do it the correct way with pacman, and chroot didn't work because bash needs glibc).
Turned out the script I made was wrong and I hadn't checked the intermediate output from pacman -Ql, which was telling cp to copy the whole /etc, /usr and other directories. (If only I hadn't given the -r to cp.) BREAK 2
In the end, I just made a new installation, this time with a new home and hand-picked whatever settings I wanted from the previous home, Viva la multi-HDD
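For anyone in the same glibc hole: since chroot needs a working glibc, the trick I've seen, instead of hand-copying files with cp, is to unpack the package straight into the broken root from the healthy install and then reinstall it properly; a rough sketch (mount point made up):

    # from the working install, with the broken root mounted at /mnt/broken
    pacman -Sw glibc                     # make sure the package is in the cache
    # unpack it directly into the broken root (needs a tar with zstd support)
    tar -C /mnt/broken -xpf /var/cache/pacman/pkg/glibc-*.pkg.tar.zst \
        --exclude=.PKGINFO --exclude=.BUILDINFO --exclude=.MTREE --exclude=.INSTALL
    # bash in the chroot should work again, so finish the job cleanly
    arch-chroot /mnt/broken pacman -S glibc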
I've had the typical disasters with partition tables and boot loader mixups, but the one I keep coming back to is updating my Nvidia drivers too eagerly. Whether something gets messed up with an external monitor, or the laptop starts resisting switching away from the integrated GPU, or an electron app I use regularly that makes heavy use of 3D acceleration breaks, or I just need to bump the driver version in a reproducible system state record... it's just bad news.
About a year ago I somehow fucked up installing a new window manager on my tablet so badly I had to start from scratch - to this day I have no idea what happened there, but it just wouldn't boot properly or anything after that 🤷 I needed it for school pretty quickly though so my top priority was getting it working again, so I set up a fresh install instead of continuing to fuck around.
Not the same level of destruction, but I fucked up my first ever install a couple months in trying to resolve dependencies related to python and wine, which is why I'm more interested in sandboxing whenever feasible these days. After only two months I guess I had been fucking around with linux long enough to have a little too much unearned confidence, lol
The first time I wanted to try Linux I did it by installing elementary OS in dual-boot mode (with Windows), and everything went well; I played with it a bit and then I returned to Windows..
So, a few days after that, I realized that I had a lot of space in the Linux partition and I didn't have plans to use it anymore, so I went to the drive & partition manager on Windows to delete my elementary OS partition..
Oh Lord, when I restarted my PC, GRUB was showing nonsense and I couldn't boot into Windows again. I was in a panic and spent the rest of the day trying to fix GRUB to boot Windows. At the end of the day I did it, saved all my files and uninstalled GRUB properly, but what a day 😂
Back when I started using Linux, I really wanted something that was super different from windows (I used Gnome 3 for like 3 years). I decided one day to try out Fedora cause, hey, I can live on the bleeding edge.
Second day I had it installed, I was having issues with the audio. Decided to try reinstalling pulse. Apt autoremoved it and somehow completely nuked the entire GUI. Stuck in terminal mode, I found that I had no ethernet to connect to, nor could I figure out how to connect to a wifi network with a password or download packages to a USB. After a couple hours, I gave up, wiped the drive, and went back to Mint.
I used to work at this place that had a gigantic QNX install. I don't know if the QNX we used back then has any relation to QNX now; they certainly don't look very close.
It was in the '90s and they had it set up so that particular nodes handled particular jobs. One node handled boot images and served as a net boot provider, one node handled all of the ARCNET-to-Ethernet communication, one node handled all the serial to the mainframe, and a number of the nodes were main worker nodes that collected data and operated machinery and diverters. All of these primary systems were on upper-end 386s or 486s; they all had local hard disks.
The last class of node they called slave nodes. They were mainly designed for user data ingest, data scanning stations, touch screen terminals, simple things that weren't very high priority.
These nodes could have hard disks in them, and if they did, they would attempt to boot from them, saving the net boot server a few cycles.
If for some reason they were unable to boot from their local hard drive, they would netboot, format their local hard drive, and rewrite their local filesystem.
If they were unable to rewrite their local filesystem, they could still operate perfectly fine purely off the net boot. The Achilles heel of the system was that you had no idea they had net booted unless you looked into the log files. If you booted off your local hard drive, of course, your root filesystem would be on your local hard drive. If you had net booted, and it could not rebuild your local filesystem, your local root / was actually the literal partition on the boot server. Because of the design of the network boot, nothing looked like it was remotely mounted.
SOP for problems on one of the slave nodes was to wipe the hard disk and reboot; in the process it would format the hard drive and either fix itself or show up as unreliable, and you could then replace the disk or just leave the disk out of it. Of course, if the local disk had failed and the box had already rebooted off netboot without a technician standing there to witness it, rm -Rf would wipe out the master boot node.
I wasn't the one that wiped it, but I fully understand why the guy did.
Turns out we were on a really old version of QNX; we were kind of a remote, mostly automated warehouse. They just shut us down for about a week, flew a team out, rebuilt the system from newer software, and set up backups.
Not really a "breaking my Linux setup", but still fun as hell! Back in university, a friend of mine got a new notebook. We were spending the night at the university hacking, and they wanted to set the notebook up in the evening. They got to the point where they had to set up LUKS via the cryptsetup CLI.
But they got stuck, it just wouldn't work.
They tried for HOURS to debug why cryptsetup didn't let them setup LUKS on the drive.
At some point, in the middle of the night (literally something like 2 in the morning) they suddenly JUMPED from their seat and screamed "TYPE UPPERCASE 'YES' - FUCK!!!"
They debugged for about six hours and the conclusion was that cryptsetup asks "If you are sure you want to overwrite, type uppercase 'yes'". ... and they typed lowercase. For six hours. Literally.
The room was on the floor, holding their stomach laughing.
I can't remember what I did to break it, but back when I was in high school I was tinkering right before class and rendered my laptop unbootable. I booted into an Arch Linux USB, chrooted into my install, found the config file I messed with, then reverted it. I booted back into my system and started the bell ringer assignment as quickly as I could. I had one minute left when the teacher walks by, looks at it, and says that I did a really good job. She never knew my laptop was unbootable just 2 minutes earlier.
Actually, I have a story that I'd consider an achievement, even though it was extremely stupid and by all accounts should've bricked the system but didn't.
So I was on Windows and wanted to install Linux as a dual-boot on the main drive. The problem was that my mobo didn't like this particular (and only) flash drive I had, dropping it out mid-boot before I got any usable terminal, so the usual install method wasn't an option. So I had this crazy idea to start a VMware VM in Windows, pass the Linux ISO and the boot drive directly to it, and try to install it live over the running system. Unfortunately, the VMware guys thought of this and there's a check that disallows passing the boot drive to VMs. So I created a bunch of .vmdks for another drive and fiddled with them in notepad until I somehow managed to trick VMware, and at some point it started booting the same Windows copy that I was sitting on. I quickly powered it off, added the Linux ISO and proceeded to install like I usually would. It did involve some partition shuffling but, somehow, it went smoothly: Linux installed, GRUB caught on, and even Windows somehow survived, even though it was physically moved around on the disk. It seems that VMware later patched this out, because later, in an attempt to re-create the trick of running the same copy of Windows twice (after updates to both Windows and VMware), I was met with the same old "boot drive is not allowed" error when trying to add that same virtual drive I had lying around.
A few years ago I was having obscure audio problems on Ubuntu so I tried replacing pulseaudio with pipewire. I was feeling pretty cocky with using the package manager so I tried
sudo apt install pipewire
Installed successfully, realized nothing changed, figured maybe I had to get rid of pulseaudio to make it stick.
sudo apt remove pulseaudio
Just two commands. Instant black screen, PC reboots into the terminal interface. No GUI. Rebooting again just brings me back to the terminal.
I fixed it eventually, but I'm really not very computer literate despite using Linux, so I was sweating bullets for a minute that I might have bricked it irreversibly or something.
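The one habit I picked up from that: simulate the removal first and actually read the list (package name from the story, the rest is generic apt):

    # -s / --simulate shows what would be removed without touching anything
    sudo apt-get -s remove pulseaudio
    # if the list includes your desktop metapackage or display manager,
    # that's the "instant black screen" warning sign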
I suppose it doesn't quite qualify as breaking the system in a funny or stupid way, but it certainly was one of those stupid things that was easy to fix after a ton of troubleshooting, ignoring the issue for a while, and trying to fix it again.
So I had an old PC with a failed hard drive, which I replaced. Obviously, I also accidentally unplugged my optical disc drive and plugged it back in. Now, that failed drive was just a data drive, so the system should have booted up no problem since the OS was on an SSD, but instead it got a kernel panic and got stuck at boot. Since it was late, I left it at that and came back to it the next day, where it would still not boot. So I unplugged the disc drive and looked up what it could be. Tried a ton of different possible solutions, but every time I added that disc drive it would panic.
I eventually kind of gave up and just didn't use that disc drive at all, just had it as a paperweight in the system, unplugged and all that. When my replacement SSDs for my old data drive and backup drive came in, I tried again to get that optical drive working, but to no avail. So I unplugged it again, got it all set up, and ran into another issue where for some reason Linux couldn't properly use my backup SSD. So I investigated that as well, and through some miracle found a post on my Mainboard manufacturer's forum... Turns out that particular Mainboard had a data retention chip on it that didn't like Linux.
So naturally I just plugged everything into the data ports that were not controlled by that chip and it all worked as intended.
Stupid dumb chip on a Mainboard, all I had to do was try the simple idea of unplugging and trying a different connector but instead I did all that other stuff first that didn't work and cost me so much of my time.
Moral of the story, when in doubt try and put stuff on different connectors and see if that fixes it. Might just be a dead connector for all you know. Or an incompatible chip on the Mainboard.
FWIW I bought that Mainboard long before I switched to Linux and didn't plan at all to switch at the time. But that's a different story.
I was testing a custom initramfs that would load a full root into a ramdisk, and when I was going to shut down I tried to run rm -rf --no-preserve-root / to see what would happen, since I was on a ramdisk anyway. The computer would not boot after that because it nuked the UEFI options.
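Related note for anyone repeating that experiment: on UEFI systems the firmware variables are exposed read-write under /sys/firmware/efi/efivars, which is how an rm -rf / can reach them. A rough precaution is to remount it read-only first (newer kernels also mark many variables immutable, as far as I know, but I wouldn't rely on it):

    mount -o remount,ro /sys/firmware/efi/efivars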
and a need to find another PC to flash an archiso to a flash drive ('cause ofc I didn’t have one at the time).
You can do that from your phone using EtchDroid.
I don't remember ever breaking my system in a terrible way, but when I started using Linux (with Linux Mint) I uninstalled ca-certificates, and I think that uninstalled the whole DE.
I was running Fedora. Something like 27 or so. I needed drivers. I don't remember if it was AMD or Nvidia, but they were only available on RedHat.
So I downloaded the RedHat drivers for the GPU and forced it to install. It worked! It was great.
Then when I updated the distro to the next release... everything failed. It was dropping into grub, but no video was output. Ooof.
So I ended up enabling a terminal console and connecting to it via a serial port to debug. I had to completely uninstall that RPM, and I was never confident that it was properly gone. So a few months later I ended up reinstalling the whole OS.
On the plus side, I learned a lot about grub and serial consoles. Worth it.
Somehow I found ways to remove and break the GUI multiple times in multiple ways in multiple distros.
Different scenarios, different times, different issues trying to "fix". My usual fix after this was always to copy what I think I still had important and then move on with a reinstall.
Recently I have been playing with ZorinOS and broke it in the same way by fidgeting with pipewire. Distro hopped to Fedora Silverblue for the immutable filesystem. I wonder if I will break this one in a way I cannot revert easily with rpm-ostree. I almost feel challenged.
Before I understood how to properly build and test Mesa (the graphics driver), I compiled it and then proceeded to manually symlink the files in the lib and lib32 directories.
When I pressed enter on that ln command, the UI immediately crashed and X would no longer start after rebooting the computer. Reinstalling mesa from a virtual terminal wouldn't fix it so I just reinstalled the system.
Good times :)
I wanted to move my Arch VM to bare metal, so I copied out all the important bits. Then I wanted to move that copy to a new drive so I could boot into it.
I THOUGHT I'd mv all the files in the Arch install's etc directory using sudo mv /etc ...
I also (somehow) mashed my install's etc with Arch's and bungled both, with no live CD to help.
I learned a thing or two about absolute file paths...
Can't say I have any interesting stories. Most of mine are just the head-scratching "I don't know why that didn't work; guess I need to reinstall" kind of story. Like enabling encrypted LVM on install and suddenly nothing is visible to UEFI. Or trying to switch desktop environments using tasksel and now I have a blank screen on next reboot. That lame kind of stuff.
My coworker though... he was mindlessly copy/pasting commands and did the classic rm -rf $UNSETVARIABLE while in / and nuked months of migrated data on his newly built system. He hadn't even set up backups yet. Management was upset but lenient.
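Besides set -u, there's also the parameter-expansion guard, which makes that exact classic impossible (variable name made up):

    # ${VAR:?msg} aborts the command with an error if VAR is unset or empty,
    # so rm never sees a bare "/" when the variable didn't get set
    rm -rf "${STAGING_DIR:?is not set}/"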
Wanted a cool boot screen on my NixOS machine - commented out the bootloader to troubleshoot why my meme boot picture wouldn't show - after rebooting, it loaded straight into the BIOS, and I finally realized what I had done... Was able to fix it, thankfully.
I had rEFInd and GRUB installed entirely by accident, and a botched update for Arch hosed my entire EFI setup making it impossible to boot Linux or Windows w/o a LiveCD. Thankfully it self repaired once I nuked rEFInd. I ended up going back to Ubuntu, but I hate snaps. I still would recommend Arch for most Linux users who want the power windows.
Relied on an AUR package for building and signing my unified kernel image... one day it was outdated and generating the image failed. I noticed that by the fact that the system refused to boot my OS. Fixing it was done in a few minutes, but boy, that was a shock :D
Guess who also checks the exact output of the kernel rebuild now before rebooting!
The first time I installed Fedora after like a decade: updated to a new minor version -> sudo reboot because I was already in the terminal -> reinstalled because it wouldn't boot anymore.