  • An isolated shingle spit nature reserve. We'd lost mains power in a storm some while back and were running on a generator. Fuel deliveries were hard to arrange. We'd finally got one. We were pretty much running on fumes and another storm was coming in. We really needed this delivery.

    To collect the fuel, I had to take the Unimog along a dump track and across 5 miles of loose shingle - including one low causeway stretch through a lagoon that was prone to wash out during storms. We'd rebuilt it a LOT over the years. On the way up, there was plenty of water around there, but it was still solid.

    I get up to the top OK and get the tank full - 2000L of red diesel - but the wind is pretty strong by the time I'm done. Halfway back, I drop down off the seawall and reach the causeway section. The water is just about topping over. If I don't go immediately, I won't get through at all and we will be out of fuel for days - maybe weeks. So I put my foot down and get through that section, only to find that 200 meters on, another section has already washed out. Oh shit.

    I back up a little, but sure enough the first section has also washed through now. I now have the vehicle and a full load of fuel marooned on a short section of causeway that is slowly washing out. Oh double shit. Probably more than double. I call it in on the radio; everyone else agrees and starts preparing for a pollution incident.

    In the end I find the firmest spot that I can in that short stretch and leave the Moggie there. Picking my route and my moment carefully, I can get off that 'island' on foot - no hope with the truck - BUT, due to the layout of the lagoons, only to the seaward ridge, where the waves are now crashing over into the lagoon with alarming force. I then spend one of the longest half-hours I can remember, freezing cold and drenched, scrambling yard by yard along the back side of that ridge, flattening myself and hoping each time a big wave hits.

    The firm bit of causeway survived and there was no washed away Unimog or pollution in the end - and I didn't drown either - but much more by luck than judgement.

    These days I am in a position where I am responsible for writing risk assessments and method statements for procedures like this. It was another world back then.

  • I used to work at a car dealership. One day I had to use a bay in a different building because my usual workplace was occupied. The other building had a lift that I hadn't used before.

    Anyways, I drove the car onto the lift, got out and placed the arms of the lift under the jacking points like I had done a thousand times before. I raised the lift a little and checked if the placement was still correct. It looked good, so I raised the car to a medium height. When I looked again, I realized that this lift had a central platform that was also raised and was set about 20 centimeters higher than the four arms that usually lift the car.

    This 90,000 Euro SUV was basically balancing on a 180 x 50 cm piece of metal right in the center. I managed to lower it down safely, but my pulse goes up just thinking about that day.

  • Sharing my story for posterity.

    I used to work at a medical center for old folks with varying disabilities. It was a great job all things considered, just didn't pay very well and the scheduling was a mess.

    Anyway, one day I'm cleaning tables in the dining room when I hear on my walkie-talkie that one of the new people needs help with a guy in the bathroom. Usually "they need help" means "something has gone awry, please unfuck the situation" and, since I was the supervisor on shift, my job frequently involved unfucking a situation.

    I arrive outside the bathroom door and the new employee tells me that she walked into a situation that she wasn't prepared for. I figured it was some poop, or the guy fell asleep on the toilet or something.

    I walk in and the walls were all painted with poop. The sink was painted with poop. The floor was painted with poop. The paper towel dispenser had poop all over the front of it.

    The poor guy had gone to the bathroom, got confused and tried to remember what toilet paper was. He saw me and knew I was there to help, but he was nonverbal. His way of saying thank you was to gently rest his hand under your chin.

    He did so, but his hand was also still covered in poop.

    I'm used to poop. It's a normal job hazard in that line of work. But something about having to clean myself and every surface in the room from caked poop while somebody else gave the poor guy a shower...that kind of story sticks with you. To this day I can't look at finger paints without feeling a little queasy.

    • I’m sorry, that sounds like a really shitty day.

    • Your story makes up for the non-work related stories in this thread. It's both work related and shitty lol. I'm sorry you had to go through that.

    • Normally I'm very much anti "let's use robots to replace jobs", but this is one case where I think it would be a win for everybody. The robot won't care, the elderly person won't feel their dignity is lost, and it's all taken care of behind closed doors.

      My grandma started losing control of herself towards the end, and my mother did overtime in taking care of her and cleaning her. This sounds sweet, but it was a bad situation for everyone. My mother essentially started treating her own mother like a baby, often in front of us, and my grandmother (a proud and strong woman my entire life) essentially lost her sense of dignity and independence. I still remember her as the strong and proud woman she was, and I do my best to forget her last year.

      We need robot caretakers.

  • My first salaried job was also my first proper IT job, and I was a "junior technician" ... the only other member of IT staff was my supervisor, who had been a secretary, had taken a one-week sysadmin course, and knew very little.

    The server room was a complete rat's nest and I resolved to sort it out. It was all going very well until I tripped over the loose SCSI-3 cable between the AIX server and its RAID array. While it was in use.

    It took me 2 days to restore everything from tape. My supervisor was completely useless.

    A few months later I was "made redundant", leaving behind me everything working perfectly and a super tidy server room. I got calls from the company asking for help for the following 6 months, which I politely declined.

  • Strap in friends, because this one is a wild ride.

    I had stepped into the role of team lead of our IS dept with zero training on our HP mainframe system (early 90s).
    The previous team lead wasn't very well liked and was basically punted out unceremoniously.
    While I was still getting up to speed, we had an upgrade on the schedule to have three new hard drives added to the system.

    These were SCSI drives back then and required a bunch of pre-wiring and configuration before they could be used. Our customer engineer (CE) came out the day before installation to do all that work in preparation for coming back the next morning to get the drives online and integrated into the system.

    Back at that time, drives came installed on little metal sleds that fit into the bays.
    The CE came back the next day, shut down the system, did the final installations and powered back up. ... Nothing.
    Two of the drives would mount but one wouldn't. Did some checking on the wiring and tried again. Still nothing. Pulled the drive sleds out and just reseated them in different positions on the bus. Now the one drive that originally didn't mount did, and the other two didn't. What the hell... Check the configs again, reboot again and... success. Everything finally came up as planned.

    We had configured the new drives to be a part of the main system volume, so data began migrating to the new devices right away. Because there was so much trouble getting things working, the CE hung around just to make sure everything stayed up and running.

    About an hour later, the system came crashing down hard. The CE says, "Do you smell something burning?" Never a good phrase.
    We pull the new drives out and then completely apart. One drive, the first one that wouldn't mount, had been installed on the sled a bit too low. Low enough for metal to metal contact, which shorted out the SCSI bus, bringing the system to its knees.

    Fixed that little problem, plugged everything back in and ... nothing. The drives all mounted fine, but access to the data was completely fucked.
    Whatever... Just scratch the drives and reload from backup, you say.

    That would work...if there were backups. Come to find out that the previous lead hadn't been making backups for about six months and no one knew. I was still so green at the time that I wasn't even aware how backups on this machine worked, let alone how to make any.

    So we have no working system, no good data and no backups. Time to hop a train to Mexico.

    We take the three new drives out of the system and reboot, crossing all fingers that we might get lucky. The OS actually booted, but that was it. The data was hopelessly gone.

    The CE then started working the phone, calling every next-level support contact he had. After a few hours of pulling drives, changing settings, whimpering, plugging in drives, asking various deities for favors, we couldn't do any more.

    The final possibility was to plug everything back in and let the support team dial in via the emergency 2400 baud support modem.
    For the next 18 hours or so, HP support engineers used debug tools to access the data on the new drives and basically recreate it on the original drives.
    Once they finished, they asked to make a set of backup tapes. This backup took about 12 hours to run. (Three times longer than normal as I found out later.)
    Then we had to scratch the drives and do a reload. This was almost the scariest part because up until that time, there was still blind hope. Wiping the drives meant that we were about to lose everything.
    We scratched the drives, reloaded from the backup and then rebooted.

    Success! Absolute fucking success. The engineers had restored the data perfectly. We could even find the record that happened to be in mid-write when the system went down. Tears were shed and backs were slapped. We then declared the entire HP support team to be literal gods.

    40+ hours were spent in total fixing this problem and much beer was consumed afterwards.

    I spent another five years in that position and we never had another serious incident. And you can be damn sure we had a rock solid backup rotation.

    (Well, there actually was another problem involving a nightly backup and an inconveniently placed, and accidentally pressed, E-stop button, but that story isn't nearly as exciting.)

  • Happy ending story, but it's still gross.

    I do workplace safety and hazardous material handling (instructions, plans, regulations, etc.) for all sorts of facilities, from contaminated ground to lab waste.

    Hospitals have a number of types of dangerous waste, among them stuff that gets disinfected in bags in an autoclave (oven) and stuff that shouldn't be in a bag, like needles, scalpel blades, etc.

    I was giving some on-site instructions, which included how to dispose of things. So I tell the people never to assume someone does everything right, because we've all thrown trash in the wrong bag at some point, and you don't want to find out someone left a scalpel in the autoclave bag by jamming the bag into the hole and pulling a needle out of your hand.

    My eye drifts slightly left, to one of my students, currently assisting another worker, doing literally that: stuffing a second bag into the autoclave and then shouting "OW, fuck" before dripping blood on the ground.

    Now, nobody knows what's in the bag. Some moron threw sharps in with the bio waste, who knows where it's from. For all I know, they just caught zombie-ebola, and it's my fault for talking slightly too slow.

    Thankfully, after some antibiotics and fervent prayer, everything turned out to be OK.

  • I know most of these stories are going to be IT or food service, so I'll chime in with mine to change it up.

    TLDR: We caused some explosions on a transformer because someone didn't read test results.

    I work for a power utility. One night, we were energizing a new transformer. It fed a single strip mall complex anchored by a major grocery chain, which is why we did it at night - we couldn't affect their service while they were open.

    Anyways, we go to energize, close the primary switches and one of the lightning arrestors blows up. And I mean blows up, like an M80 just went off. Lit up the sky bright as day for a couple moments at 1 in the morning. The protection opened the switches and everybody is panicking, making sure nobody was hurt.

    Well, after everybody settled down and the arrestor was replaced, they decide to throw it in again. The switches come closed, and explosion #2 happens. A second arrestor blows spectacularly. I tried to convince the one supervisor on site to go for a third time, because why not, but he didn't want to do it again. Whatever.

    A few days go by and we find out what the issue was. This transformer was supposed to be a 115kV-to-13.2kV unit. Come to find out, an internal tap selection was set for a 67kV primary, not 115kV. So the voltage was only being stepped down about half as much as needed, putting something like 28kV on the secondary instead of 13.2kV - well over the lightning arrestors' ratings, which is why they were blowing up (a rough sketch of the arithmetic follows below). The transformer had to have its oil drained, and guys had to go inside it and physically rewire it to the correct ratio.

    We had a third-party company do the acceptance testing on this transformer, and our engineering department just saw all the green checkmarks but didn't pay attention to the actual values in the test results. Nobody expected to run into this because we don't have any of this type of transformer in our system, but that's certainly no excuse.

    Moral of the story: read your acceptance test results carefully.
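
    A rough back-of-the-envelope sketch of that ratio arithmetic, assuming an ideal transformer and using only the nominal figures quoted above; anything else here is an assumption, and the exact secondary voltage depends on tap details the comment doesn't give:

      # Ideal-transformer sketch of the mis-set tap described above.
      # 115 kV, 67 kV and 13.2 kV come from the story; the rest is assumed.
      nominal_primary_kv = 115.0    # actual system voltage applied to the primary
      tap_primary_kv = 67.0         # primary voltage the internal tap was wired for
      nominal_secondary_kv = 13.2   # intended secondary voltage

      turns_ratio = tap_primary_kv / nominal_secondary_kv     # ~5.1:1 instead of the ~8.7:1 needed
      actual_secondary_kv = nominal_primary_kv / turns_ratio  # ~22.7 kV under these ideal assumptions

      # Well above the 13.2 kV design value (the author recalls ~28 kV), and far
      # beyond what arrestors rated for a 13.2 kV system are meant to see.
      print(f"secondary ~= {actual_secondary_kv:.1f} kV")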

  • Several years ago, when I was more just the unofficial office geek, our email was acting up. Though we had Internet access as normal. At the time, email (Exchange) was hosted on-prem on our server. Anything server related, I'd contact our MSP to handle it. Which usually meant they'd simply reboot the server. Easy enough, but I was kinda afraid and hesitant to touch the server unless the MSP explicitly asked/told me to do something.

    I reported it to our MSP, expecting a quick response, but nothing. Not even an acknowledgment of the issue. This had already been going on for like an hour, so I decided to take matters into my own hands. I went to the server, turned on the monitor...and it was black. Well, shit. Couldn't even do a proper shutdown. So I emailed again, waited a bit, and again no response.

    Well, if the server was being unresponsive, I figured a hard shutdown and reboot would be fine. I knew that's what the MSP would (ask me to) do. What difference was them telling me to do it versus just me doing it on my own? I was going to fix email! I was going to be the hero! So I did it.

    Server booted up, but after getting past the BIOS and other checks...it went back to black screen again. No Windows login. That's not so terrible, since that was the status quo. Except now, people were also saying Internet all of a sudden stopped working. Oh shit.

    Little did I know that the server was acting as our DNS server. So I essentially took down everything: email, Internet, even some server access (network drives, DBs). I was in a cold sweat now since we were pretty much dead in the water. I of course reached out AGAIN to the MSP, but AGAIN nothing. Wtf...

    So I told my co-workers and bosses, expecting to get in some trouble for making things worse. Surprisingly, no one cared. A couple people decided to go home and work. Some people took super long lunches or chitchatted. Our receptionist was playing games on her computer. Our CEO had his feet up on his desk and was scrolling Facebook on his phone. Another C-suite decided to call it an early day.

    Eventually, at basically the end of the day, the MSP reached out. They sent some remote commands to the server and it all started working again. Apparently, they were dealing with an actual catastrophe elsewhere: one of their clients' offices had burned down, so they had been focused on BCDR (business continuity and disaster recovery) over there all day.

    So yeah, I took down our server for half a day. And no one cared, except me.

  • Setting up a GNU Mailman mailing-list with the 'reply-to-all' setting still set to on (as per the defaults).

    It was intended to be a distribution list.

    The list had 8,000 recipients.

    .

    I found out what an ohnosecond is, as the blood chilled a little in my veins.

    Turned out not too bad: 6 innocuous replies were sent out to everyone. It was fully open for about 25 minutes before I set 'reply-to-all' to off. Could have been much worse.

    Will never ever forget to do that with a mailing-list again.

  • no work related but am overfill sink with water changer because forgot to remove drain cover

    is flood

    am get in trouble also ..
