Optus says "changes to routing information" after a "routine software upgrade" was behind last week's nationwide outage, affecting 10.2 million Australians and impacting 400,000 businesses.
So it turns out the cause was indeed a rogue change they couldn't roll back, as many of us had been speculating.
Weird that this issue didn't show up in their test environment before they deployed to production. I wonder why that is.
How else do you explain to the layman “catastrophic failure in the configuration update of core network infrastructure and its preceding, meant-to-be-foolproof, processes”?
This happens in my business all the time: the test FTP IP address gets left in the code, shit falls apart, and it costs us millions. They hold a PIR (post-incident review), and then it happens again.
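That hardcoded-endpoint failure mode is usually avoided by refusing to bake any address into the code at all. A minimal Python sketch of the idea, assuming a hypothetical `FTP_HOST` environment variable (the name and the pattern are illustrative, not anything from the comment's actual system):

```python
import os

def ftp_host() -> str:
    """Return the FTP host for the current environment.

    Reads the endpoint from configuration instead of the source code, so a
    test address can't be left behind in a deploy. A missing setting fails
    loudly here rather than silently pointing production at a test server.
    """
    host = os.environ.get("FTP_HOST")
    if host is None:
        raise RuntimeError("FTP_HOST is not set; refusing to guess")
    return host
```

With this pattern, test and production differ only in deployment configuration, and "the test IP was left in the code" stops being a possible incident class.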
Optus says "changes to routing information" after a "routine software upgrade" was behind last week's nationwide outage, affecting 10.2 million Australians and impacting 400,000 businesses.
"These routing information changes propagated through multiple layers in our network and exceeded preset safety levels on key routers which could not handle these," the company said.
Before Monday's disclosure by Optus, experts had theorised the outage was likely a "regular software upgrade gone wrong".
"The problem is too widespread to be due to a cable break or equipment failure," said Tom Worthington, a senior lecturer in computer science from the Australian National University in Canberra.
The software upgrade theory surmised by telecommunications analysts and experts last Wednesday was put to Optus CEO Kelly Bayer Rosmarin, who rejected those suggestions.
The explanation for the outage follows the federal government's announcement earlier on Monday that it would require telecommunications companies in Australia to report their cybersecurity measures, to avoid a repeat of last year's Optus cyberattack.
The original article contains 528 words, the summary contains 159 words. Saved 70%. I'm a bot and I'm open source!
If this is how they do their routine updates, they have had an extremely lucky run so far. Inadequate understanding of what the update would/could do, inadequate testing prior to deployment, no rollback capability, no disaster recovery plan. Yeah nah, you can’t get that lucky for that long. Maybe they have cut budget or sacked the people who knew what they were doing? Let’s hope they learn from this.
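The rollback gap called out above is often closed by staging every change behind a health check with an automatic revert. A minimal sketch of that pattern in Python (the function names, the health check, and the config keys are all hypothetical):

```python
import copy

def apply_with_rollback(state: dict, change: dict, healthy) -> dict:
    """Apply a config change, reverting to a snapshot if it proves unhealthy.

    Hypothetical sketch: snapshot the known-good state, apply the change,
    run a health check, and restore the snapshot on failure.
    """
    snapshot = copy.deepcopy(state)
    state.update(change)
    if not healthy(state):
        return snapshot  # revert to the last known-good configuration
    return state

config = {"route_limit": 500_000}
bad_change = {"route_limit": 0}
# The bad change fails the health check, so the snapshot is returned.
result = apply_with_rollback(config, bad_change, lambda s: s["route_limit"] > 0)
```

Real network deployments do this with staged rollouts and canary devices rather than a dict, but the principle is the same: no change goes out without a tested path back.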