Five whys

At 3:30 in the morning of January 10th, 2008, a shrill chirping woke up our system administrator, Michael Gorsuch, asleep at home in Brooklyn. It was a text message from Nagios, our network monitoring software, warning him that something was wrong.

He swung out of bed, accidentally knocking over (and waking up) the dog, who had been sleeping soundly in her dog bed and who, angrily, staggered out to the hallway, peed on the floor, and then returned to bed. Meanwhile, Michael logged onto his computer in the other room and discovered that one of the three data centers he runs, the one in downtown Manhattan, was unreachable from the Internet.

This particular data center is in a secure building in downtown Manhattan, in a large facility operated by Peer 1. It has backup generators, several days of diesel fuel, and racks and racks of batteries to keep the whole thing running for the few minutes it takes the generators to start. It has massive amounts of air conditioning, multiple high-speed connections to the Internet, and the kind of “right stuff” down-to-earth engineers who always do things the boring, plodding, methodical way instead of the flashy cool trendy way, so everything is pretty reliable.

Internet providers like Peer 1 like to guarantee the uptime of their services in terms of a Service Level Agreement, otherwise known as an SLA. A typical SLA might state something like “99.99% uptime.” When you do the math, let’s see, there are 525,949 minutes in a year (or 525,600 if you are in the cast of Rent), so that allows them 52.59 minutes of downtime per year. If they have any more downtime than that, the SLA usually provides for some kind of penalty, but honestly, it’s often rather trivial… like, you get your money back for the minutes they were down. I remember once getting something like $10 off the bill from a T1 provider because of a two-day outage that cost us thousands of dollars. SLAs can be a little bit meaningless that way, and given how low the penalties are, a lot of network providers just started advertising 100% uptime.
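
(If you want to check that arithmetic, here is a quick sketch of it in Python. The constants are the same ones from the paragraph above; nothing about it is specific to any particular provider.)

    # A quick sketch of the SLA arithmetic: minutes of downtime per year
    # allowed by a given uptime guarantee.
    MINUTES_PER_YEAR = 525949  # 525,600 if you are in the cast of Rent

    def downtime_budget_minutes(uptime_percent):
        return MINUTES_PER_YEAR * (1 - uptime_percent / 100.0)

    for uptime in (99.9, 99.99, 99.999, 99.9999):
        print("%.4f%% uptime allows %.2f minutes of downtime per year"
              % (uptime, downtime_budget_minutes(uptime)))

    # 99.99%   -> about 52.59 minutes per year
    # 99.9999% -> about 0.53 minutes (roughly 30 seconds) per year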

Within 10 minutes everything seemed to be back to normal, and Michael went back to sleep.

Until about 5:00 a.m., when it happened again. This time Michael called the Peer 1 Network Operations Center (NOC) in Vancouver. They ran some tests, started investigating, couldn’t find anything wrong, and by 5:30 a.m. things seemed to be back to normal, but by this point, he was as nervous as a porcupine in a balloon factory.

At 6:15 a.m. the New York site lost all connectivity. Peer 1 couldn’t find anything wrong on their end. Michael got dressed and took the subway into Manhattan. The server seemed to be up. The Peer 1 network connection was fine. The problem was something with the network switch. Michael temporarily took the switch out of the loop, connecting our router directly to Peer 1’s router, and lo and behold, we were back on the Internet.

By the time most of our American customers got to work in the morning, everything was fine, but our European customers had already started emailing us to complain. Michael spent some time doing a post-mortem and discovered that the problem was a simple configuration problem on the switch. Every switch port has a speed (10, 100, or 1000 megabits/second) and a duplex setting. You can either configure these manually, or you can let the switch automatically negotiate the fastest settings that both sides can work with. The switch that failed had been set to autonegotiate. This usually works, but not always, and on the morning of January 10th, it didn’t.

Michael knew this could be a problem, but when he installed the switch, he had forgotten to set the speed, so the switch was still in the factory-default autonegotiate mode, which seemed to work fine. Until it didn’t.
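
Incidentally, speed and duplex mismatches are the kind of thing a monitoring system like Nagios can be taught to watch for. Here is a minimal sketch of such a check in the style of a Nagios plugin. It is purely illustrative, not our actual monitoring: it assumes a Linux host where sysfs exposes the negotiated link settings, the interface name and expected values are made up, and it only checks the host’s own link (watching the switch’s uplink port would take a similar query against the switch itself).

    #!/usr/bin/env python
    # Illustrative Nagios-style check (hypothetical): did this host's NIC
    # negotiate the link speed and duplex we expect? The interface name and
    # expected values below are assumptions for the example.
    import sys

    IFACE = "eth0"
    EXPECTED_SPEED = 1000    # megabits/second
    EXPECTED_DUPLEX = "full"

    def read_link_setting(attr):
        # On Linux, sysfs exposes the negotiated settings for most NIC drivers.
        with open("/sys/class/net/%s/%s" % (IFACE, attr)) as f:
            return f.read().strip()

    try:
        speed = int(read_link_setting("speed"))
        duplex = read_link_setting("duplex")
    except (IOError, OSError, ValueError) as e:
        print("UNKNOWN - could not read link settings for %s: %s" % (IFACE, e))
        sys.exit(3)  # Nagios exit code for UNKNOWN

    if speed != EXPECTED_SPEED or duplex != EXPECTED_DUPLEX:
        print("CRITICAL - %s negotiated %d Mb/s %s duplex, expected %d Mb/s %s"
              % (IFACE, speed, duplex, EXPECTED_SPEED, EXPECTED_DUPLEX))
        sys.exit(2)  # Nagios exit code for CRITICAL

    print("OK - %s is running at %d Mb/s, %s duplex" % (IFACE, speed, duplex))
    sys.exit(0)      # Nagios exit code for OK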

Michael wasn’t happy. He sent me an email:

I know that we don’t officially have an SLA for On Demand, but I would like us to define one for internal purposes (at least). It’s one way that I can measure whether I and the (eventual) sysadmin team are meeting the general goals for the business. I was in the slow process of writing up a plan for this, but I want to expedite it in light of this morning’s mayhem.

An SLA is generally defined in terms of ‘uptime’, so we need to define what ‘uptime’ is in the context of On Demand. Once that is made clear, it’ll get translated into policy, which will then be translated into a set of monitoring / reporting scripts, and will be reviewed on a regular interval to see if we are ‘doing what we say’.

Good idea!

But there are some problems with SLAs. The biggest one is the lack of statistical meaningfulness when outages are so rare. We’ve had, if I remember correctly, two unplanned outages, including this one, since going live with FogBugz on Demand six months ago. Only one was our fault. Most well-run online services will have two, maybe three, outages a year. With so few data points, the length of each outage starts to become really significant, and that’s one of those things that’s wildly variable. Suddenly, you’re talking about how long it takes a human to get to the equipment and swap out a broken part. To get really high uptime, you can’t wait for a human to switch out failed parts. You can’t even wait for a human to figure out what went wrong: you have to have thought, in advance, of every possible thing that could go wrong, which is vanishingly improbable. It’s the unexpected unexpecteds, not the expected unexpecteds, that kill you.

Really high availability becomes extremely costly. The proverbial “six nines” availability (99.9999% uptime) means about 30 seconds of downtime per year. That’s really kind of ridiculous. Even the people who claim that they have built some big multi-million-dollar, superduper, ultra-redundant six nines system are gonna wake up one day, I don’t know when, but they will, and something completely unusual will have gone wrong in a completely unexpected way (three EMP bombs, one at each data center), and they’ll smack their heads and have fourteen days of outage.

Think of it this way: If your six nines system goes down mysteriously just once and it takes you an hour to figure out the cause and fix it, well, you’ve just blown your downtime budget for the next century. Even the most notoriously reliable systems, like AT&T’s long-distance service, have had long outages (six hours in 1991) that put them at a rather embarrassing three nines… and AT&T’s long-distance service is considered “carrier grade,” the gold standard for uptime.

Keeping Internet services online suffers from the problem of black swans. Nassim Taleb, who invented the term, defines it thus: “A black swan is an outlier, an event that lies beyond the realm of normal expectations.” Almost all Internet outages are unexpected unexpecteds: extremely low-probability outlying surprises. They’re the kind of things that happen so rarely it doesn’t even make sense to use normal statistical methods like “mean time between failure.” What’s the “mean time between catastrophic floods in New Orleans”?

Measuring the number of minutes of downtime per year does not predict the number of minutes of downtime you’ll have the next year. It reminds me of commercial aviation today: the NTSB has done such a great job of eliminating all the common causes of crashes that nowadays, each commercial crash they investigate seems to be a crazy, one-off, black-swan outlier.

Somewhere between the “extremely unreliable” level of service, where it feels like stupid outages occur again and again and again, and the “extremely reliable” level of service, where you spend millions and millions of dollars getting an extra minute of uptime a year, there’s a sweet spot, where all the expected unexpecteds have been taken care of. A single hard drive failure, which is expected, doesn’t take you down. A single DNS server failure, which is expected, doesn’t take you down. But the unexpected unexpecteds might. That’s really the best we can hope for.

To reach this sweet spot, we borrowed an idea from Sakichi Toyoda, the founder of what eventually became the Toyota group of companies. He called it Five Whys. When something goes wrong, you ask why, again and again, until you ferret out the root cause. Then you fix the root cause, not the symptoms.

Since this fit well with our idea of fixing everything two ways, we decided to start using five whys ourselves. Here’s what Michael came up with:

  • Our link to Peer1 NY went down
  • Why? – Our switch appears to have put the port in a failed state
  • Why? – After some discussion with the Peer1 NOC, we speculate that it was quite possibly caused by an Ethernet speed / duplex mismatch
  • Why? – The switch interface was set to auto-negotiate instead of being manually configured
  • Why? – We were fully aware of problems like this, and have been for many years. But we do not have a written standard and verification process for production switch configurations.
  • Why? – Documentation is often thought of as an aid for when the sysadmin isn’t around or for other members of the operations team, whereas it should really be thought of as a checklist.

“Had we produced a written standard prior to deploying the switch and subsequently reviewed our work to match the standard, this outage would not have occurred,” Michael wrote. “Or, it would occur once, and the standard would get updated as appropriate.”
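
To give a flavor of what “reviewed our work to match the standard” could look like once the standard is written down, here is a sketch of a verification script. It is a hypothetical illustration, not our actual tooling: the config file name and the specific required and forbidden lines are invented; the point is only that a written standard can be something a script checks, not just something a human reads.

    # Hypothetical sketch: verify a saved switch configuration against the
    # written standard. The config file name and the specific lines below are
    # invented examples, not our actual standard.
    REQUIRED_LINES = [
        "speed 1000",   # production ports get an explicit speed...
        "duplex full",  # ...and an explicit duplex setting
    ]
    FORBIDDEN_LINES = [
        "speed auto",   # never leave a production port on autonegotiate
        "duplex auto",
    ]

    def check_config(path):
        """Return a list of ways this configuration violates the standard."""
        with open(path) as f:
            lines = set(line.strip() for line in f)
        problems = []
        for required in REQUIRED_LINES:
            if required not in lines:
                problems.append("missing required setting: %s" % required)
        for forbidden in FORBIDDEN_LINES:
            if forbidden in lines:
                problems.append("forbidden setting present: %s" % forbidden)
        return problems

    if __name__ == "__main__":
        for problem in check_config("switch-ny-01.conf"):
            print(problem)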

After some internal discussion, we all agreed that rather than imposing a statistically meaningless measurement and hoping that the mere act of measuring would cause things to get better, what we really needed was a process of continuous improvement. Instead of setting up an SLA for our customers, we set up a blog where we would document every outage in real time, provide complete post-mortems, ask the five whys, get to the root cause, and tell our customers what we’re doing to prevent that problem in the future. In this case, the change is that our internal documentation will include detailed checklists for all operational procedures in the live environment.

Our customers can look at the blog to see what caused the problems and what we’re doing to make things better, and, hopefully, they can see evidence of steadily improving quality.

In the meantime, our customer service folks have the authority to credit customers’ accounts if they feel like they were affected by an outage. We let the customer decide how much they want to be credited, up to a whole month, because not every customer is even going to notice the outage, let alone suffer from it. I hope this system will improve our reliability to the point where the only outages we suffer are really the extremely unexpected black swans.

P.S. Yes, we want to hire another system administrator so Michael doesn’t have to be the only one to wake up in the middle of the night.
