I’m going on vacation for a week. Now behave and don’t give the babysitter a hard time or I’ll dock your allowance.
January 2001
Daily Builds Are Your Friend
In 1982, my family took delivery of the very first IBM-PC in Israel. We actually went down to the warehouse and waited while our PC was delivered from the port. Somehow, I convinced my dad to get the fully decked-out version, with two floppy disk drives, 128K of memory, and both a dot-matrix printer (for fast drafts) and a Brother Letter-Quality Daisy Wheel printer, which sounds exactly like a machine gun when it is operating, only louder. I think we got almost every accessory available: PC-DOS 1.0, the $75 technical reference manual with a complete source code listing of the BIOS, Macro Assembler, and the stunning IBM Monochrome display with a full 80 columns and … lower case letters! The whole thing cost about $10,000 including Israel’s then-ridiculous import taxes. Extravagant!
Now, “everybody” knew that BASIC was a children’s language that requires you to write spaghetti code and turns your brain into Camembert cheese. So we shelled out $600 for IBM Pascal, which came on three floppy diskettes. The compiler’s first pass was on the first diskette, the second pass was on the second diskette, and the linker was on the third diskette. I wrote a simple “hello, world” program and compiled it. Total time elapsed: 8 minutes.
Hmm. That’s a long time. I wrote a batch file to automate the process and shaved it down to 7 1/2 minutes. Better. But when I tried to write long programs like my stunning version of Othello which always beat me, I spent most of the time waiting for compiles. “Yep,” a professional programmer told me, “we used to keep a sit-up board in the office and do sit-ups while we were doing compiles. After a few months of programming I had killer abs.”
One day, a spiffy program called Compas Pascal appeared from Denmark, which Philippe Kahn bought and renamed Borland Turbo Pascal. Turbo Pascal was sort of shocking, since it basically did everything that IBM Pascal did, only it ran in about 33K of memory including the text editor. This was nothing short of astonishing. Even more astonishing was the fact that you could compile a small program in less than one second. It’s as if a company you had never heard of introduced a clone of the Buick LeSabre which could go 1,000,000 MPH and drive around the world on so little gasoline that an ant could drink it without getting sick.
Suddenly, I became much more productive.
That’s when I learned about the concept of the REP loop. REP stands for “Read, Eval, Print”, and it describes what a lisp interpreter does for a living: it “reads” your input, evaluates it, and prints the result. An example of the REP loop is shown below: I type something, and the lisp interpreter reads it, evaluates it, and prints the result.
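A toy version of the same loop, sketched here in Python purely for illustration (Python’s eval standing in for a real lisp reader and evaluator):

```python
# A toy read-eval-print loop: read a line, evaluate it as an expression,
# print the result, repeat. The same rhythm as a lisp REPL.

def rep_loop() -> None:
    while True:
        try:
            expression = input("> ")   # Read
        except EOFError:
            break                      # end of input ends the session
        try:
            print(eval(expression))    # Eval, then Print
        except Exception as exc:       # it's a toy, so just report and continue
            print(f"error: {exc}")

if __name__ == "__main__":
    rep_loop()   # typing (2 + 3) * 7 at the prompt prints 35
```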
On a slightly larger scale, when you’re writing code, you are in a macro-version of the REP loop called the Edit-Compile-Test loop. You edit your code, compile it, test it, and see how well it works.
A crucial observation here is that you have to run through the loop again and again to write a program, and so it follows that the faster the Edit-Compile-Test loop, the more productive you will be, down to a natural limit of instantaneous compiles. That’s the formal, computer-science-y reason that computer programmers want really fast hardware and compiler developers will do anything they can to get super-fast Edit-Compile-Test loops. Visual Basic does it by parsing and lex-ing each line as you type it, so that the final compile can be super-quick. Visual C++ does it by providing incremental compiles, precompiled headers, and incremental linking.
But as soon as you start working on a larger team with multiple developers and testers, you encounter the same loop again, writ larger (it’s fractal, dude!). A tester finds a bug in the code, and reports the bug. The programmer fixes the bug. How long does it take before the tester gets the fixed version of the code? In some development organizations, this Report-Fix-Retest loop can take a couple of weeks, which means the whole organization is running unproductively. To keep the whole development process running smoothly, you need to focus on getting the Report-Fix-Retest loop tightened.
One good way to do this is with daily builds. A daily build is an automatic, daily, complete build of the entire source tree.
Automatic – because you set up the code to be compiled at a fixed time every day, using cron jobs (on UNIX) or the scheduler service (on Windows).
Daily – or even more often. It’s tempting to do continuous builds, but you probably can’t, because of source control issues which I’ll talk about in a minute.
Complete – chances are, your code has multiple versions. Multiple language versions, multiple operating systems, or a high-end/low-end version. The daily build needs to build all of them. And it needs to build every file from scratch, not relying on the compiler’s possibly imperfect incremental rebuild capabilities.
Here are some of the many benefits of daily builds:
- When a bug is fixed, testers get the new version quickly and can retest to see if the bug was really fixed.
- Developers can feel more secure that a change they made isn’t going to break any of the 1024 versions of the system that get produced, without actually having an OS/2 box on their desk to test on.
- Developers who check in their changes right before the scheduled daily build know that they aren’t going to hose everybody else by checking in something which “breaks the build” — that is, something that stops everyone else from being able to compile. This is the equivalent of the Blue Screen of Death for an entire programming team, and it happens a lot when a programmer forgets to add a new file they created to the repository. The build runs fine on their machine, but when anyone else checks out, they get linker errors and are stopped cold from doing any work.
- Outside groups like marketing, beta customer sites, and so forth who need to use the immature product can pick a build that is known to be fairly stable and keep using it for a while.
- By maintaining an archive of all daily builds, when you discover a really strange, new bug and you have no idea what’s causing it, you can use binary search on the historical archive to pinpoint when the bug first appeared in the code (a sketch of this search follows this list). Combined with good source control, you can probably track down which check-in caused the problem.
- When a tester reports a problem that the programmer thinks is fixed, the tester can say which build they saw the problem in. Then the programmer looks at when the fix was checked in and figures out whether that build actually included it.
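The binary search over the build archive is mechanical enough to automate. Here is a minimal sketch in Python, assuming one date-coded directory per build and a hypothetical bug_present() check that installs a build, runs the repro case, and reports whether the bug shows up; both the layout and the check are assumptions, not anything prescribed above:

```python
import os

def bug_present(build_dir: str) -> bool:
    """Hypothetical: install this build, run the repro case, report the result."""
    raise NotImplementedError

def first_bad_build(archive_root: str) -> str:
    """Find the build that introduced a bug, assuming the oldest archived
    build is good and the newest is bad."""
    builds = sorted(os.listdir(archive_root))  # date-coded names sort chronologically
    good, bad = 0, len(builds) - 1
    while good + 1 < bad:
        mid = (good + bad) // 2
        if bug_present(os.path.join(archive_root, builds[mid])):
            bad = mid    # bug already there: the culprit is this build or earlier
        else:
            good = mid   # still clean: the culprit came later
    return builds[bad]   # the first build in which the bug appears
```

With the first bad build in hand, the source control log between it and the build before it usually names the guilty check-in.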
Here’s how to do them. You need a daily build server, which will probably be the fastest computer you can get your hands on. Write a script which checks out a complete copy of the current source code from the repository (you are using source control, aren’t you?) and then builds, from scratch, every version of the code that you ship. If you have an installer or setup program, build that too. Everything you ship to customers should be produced by the daily build process. Put each build in its own directory, coded by date. Run your script at a fixed time every day.
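A bare-bones build script along those lines might look like the sketch below. Everything specific in it (the repository URL, the output root, the make targets) is a placeholder, not a recommendation:

```python
import datetime
import pathlib
import subprocess

REPO = "https://example.com/repo/myproduct"   # placeholder: your repository
BUILD_ROOT = pathlib.Path("/builds")          # one date-coded directory per build

def daily_build() -> None:
    stamp = datetime.date.today().isoformat() # e.g. "2001-01-27"
    src_dir = BUILD_ROOT / stamp / "src"
    src_dir.parent.mkdir(parents=True)        # fails loudly if today is already built

    # 1. Check out a complete, fresh copy of the source; never an incremental update.
    subprocess.run(["svn", "checkout", REPO, str(src_dir)], check=True)

    # 2. Build every version you ship, from scratch, including the installer.
    for target in ["release", "debug", "installer"]:  # placeholder target names
        subprocess.run(["make", "-C", str(src_dir), target], check=True)

if __name__ == "__main__":
    # Run at a fixed time every day, e.g. from a crontab line like:
    #   0 12 * * *  /usr/bin/python /builds/daily_build.py
    daily_build()
```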
- It’s crucial that everything it takes to make a final build is done by the daily build script, from checking out the code up to and including putting the bits up on a web server in the right place for the public to download (although during the development process, this will be a test server, of course). That’s the only way to ensure that there is nothing about the build process that is only “documented” in one person’s head. You never get into a situation where you can’t release a product because only Shaniqua knows how to create the installer, and she was hit by a bus. On the Juno team, the only thing you needed to know to create a full build from scratch was where the build server was, and how to double-click on its “Daily Build” icon.
- There is nothing worse for your sanity than when you are trying to ship the code, and there’s one tiny bug, so you fix that one tiny bug right on the daily build server and ship it. As a golden rule, you should only ship code that has been produced by a full, clean daily build that started from a complete checkout.
- Set your compilers to maximum warning level (-W4 in Microsoft’s world; -Wall in gcc land) and set them to stop if they encounter even the smallest warning.
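In the build script, that amounts to one extra flag per compiler. A tiny sketch (the flags are real; the file list is made up):

```python
import subprocess

sources = ["main.c", "util.c"]  # placeholder file list

# gcc: -Wall turns on the warnings, -Werror makes any warning fatal.
subprocess.run(["gcc", "-Wall", "-Werror", "-o", "myapp", *sources], check=True)

# Microsoft's cl, same idea: /W4 is the maximum warning level, /WX stops on any warning.
# subprocess.run(["cl", "/W4", "/WX", *sources], check=True)
```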
- If a daily build is broken, you run the risk of stopping the whole team. Stop everything and keep rebuilding until it’s fixed. Some days, you may have multiple daily builds.
- Your daily build script should report failures, via email, to the whole development team. It’s not too hard to grep the logs for “error” or “warning” and include that in the email, too. The script can also append status reports to an HTML page visible to everyone so programmers and testers can quickly determine which builds were successful.
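That reporting step is a few lines of script. A sketch (the addresses, paths, and local mail server are all placeholders):

```python
import smtplib
from email.message import EmailMessage

def report(log_path: str, build_ok: bool) -> None:
    # Pull every line mentioning an error or warning out of the build log.
    with open(log_path) as log:
        noise = [line for line in log
                 if "error" in line.lower() or "warning" in line.lower()]

    msg = EmailMessage()
    msg["Subject"] = "Daily build " + ("succeeded" if build_ok else "FAILED")
    msg["From"] = "build@example.com"      # placeholder
    msg["To"] = "dev-team@example.com"     # placeholder
    msg.set_content("".join(noise) or "Clean build: no errors, no warnings.")

    with smtplib.SMTP("localhost") as server:   # assumes a local mail server
        server.send_message(msg)

    # Append a row to the shared status page so anyone can see at a glance
    # which builds succeeded.
    with open("/builds/status.html", "a") as page:
        page.write(f"<p>{msg['Subject']}</p>\n")
```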
- One rule we followed on the Microsoft Excel team, to great effect, was that whoever broke the build became responsible for babysitting builds until somebody else broke it. In addition to serving as a clever incentive to keep the build working, it rotated almost everybody through the job of buildmaster so everybody learned about how builds are produced.
- If your team works in one time zone, a good time to do builds is at lunchtime. That way everybody checks in their latest code right before lunch, the build runs while they’re eating, and when they get back, if the build is broken, everybody is around to fix it. As soon as the build is working everybody can check out the latest version without fear that they will be hosed due to a broken build.
- If your team is working in two time zones, schedule the daily build so that the people in one time zone don’t hose the people in the other time zone. On the Juno team, the New York people would check things in at 7 PM New York time and go home. If they broke the build, the Hyderabad, India team would get into work (at about 8 PM New York Time) and be completely stuck for a whole day. We started doing two daily builds, about an hour before each team went home, and completely solved that problem.
For Further Reading:
- Some discussion on tools for daily builds
- Making daily builds is important enough that it’s one of the 12 steps to better code.
- There’s a lot of interesting stuff about the builds made (weekly) by the Windows NT team in G. Pascal Zachary’s book Showstopper.
- Steve McConnell writes about daily builds here.
2001/01/27
When you’re using source control, sometimes one programmer accidentally checks in something that breaks the build. For example, they’ve added a new source file, and everything compiles fine on their machine, but they forgot to add the source file to the code repository. So they lock their machine and go home, oblivious and happy. But nobody else can work, so they have to go home too, unhappy.
Breaking the build is so bad (and so common) that it helps to make daily builds, to ensure that no breakage goes unnoticed. On large teams, one good way to ensure that breakages are fixed right away is to do the daily build every day at, say, lunchtime. Everyone does as many checkins as possible before lunch. When they come back, the build is done. If it worked, great! Everybody checks out the latest version of the source and goes on working. If the build failed, you fix it, but everybody can keep on working with the pre-build, unbroken version of the source.
On the Excel team we had a rule that whoever broke the build, as their “punishment”, was responsible for babysitting the builds until someone else broke it. This was a good incentive not to break the build, and a good way to rotate everyone through the build process so that everyone learned how it worked.
Read more about daily builds in my new article Daily Builds are Your Friend.
2001/01/23
The Onion says it better than I did…
2001/01/20
How exciting, my book is already on Amazon!
It’s not shipping until March, though. And the cover image they have on Amazon is just a mockup, it’s not the real thing.
2001/01/19
ADP has the worst web programmers in the world.
Jared was trying to check his 401K info at this site.
Enter social security number, press tab, enter PIN, press tab, and press Space, expecting to press the “Login” button.
Because the morons who wrote that page are using the “tabindex” attribute without understanding it, this actually activates a button way down at the bottom of the page which locks you out of your account and mails a new PIN to you via snail mail.
This kind of horrific programming shows evidence of no usability testing whatsoever. The most trivial usability test would uncover this. (Not to mention the other horrific JavaScript on that page, for example, the JavaScript that pops up a dialog box if you put a space in the social security number).
Now, based on one web page, I wouldn’t condemn a whole company. But ADP’s entire web site is crawling with these kinds of bugs. I tried using ADP’s EasyPayNet to run the payroll for my company. The UI was full of stupid JavaScript bugs that screwed up my payroll every single time I tried to pay people. It copied one employee’s data on top of another employee’s data. It locked me out for days at a time. Way-too-clever JavaScript validation routines made it impossible to change values on certain forms (because they would not let me tab off of a control that was inconsistent with another control, so there was no way to change them both at once to make them consistent).
That’s why Fog Creek stopped using ADP for our Payroll – now we use Intuit QuickBooks and it’s a million times better.
And while I’m complaining…
Thinking of wireless networking? Stay away from SMC Networks. Their products are terrible. We tried a bunch of their 802.11 wireless networking PC cards and discovered that they barely worked at 10 feet; one wall was enough to stop them cold. Serves me right for buying the cheapest brand.
We switched to the Lucent/Orinoco stuff, which works much, much better.
Big Macs vs. The Naked Chef
Mystery: why is it that some of the biggest IT consulting companies in the world do the worst work?
Why is it that the cool upstart consulting companies start out with a string of spectacular successes, meteoric growth, and rapidly degenerate into mediocrity?
I’ve been thinking about this, and thinking about how Fog Creek Software (my own company) should grow. And the best lessons I can find come from McDonald’s. Yes, I mean the awful hamburger chain.
The secret of Big Macs is that they’re not very good, but every one is not very good in exactly the same way. If you’re willing to live with not-very-goodness, you can have a Big Mac with absolutely no chance of being surprised in the slightest.
The other secret of Big Macs is that you can have an IQ that hovers somewhere between “idiot” and “moron” (to use the technical terms) and you’ll still be able to produce Big Macs that are exactly as unsurprising as all the other Big Macs in the world. That’s because McDonald’s real secret sauce is its huge operations manual, describing in stunning detail the exact procedure that every franchisee must follow in creating a Big Mac. If a Big Mac hamburger is fried for 37 seconds in Anchorage, Alaska, it will be fried for 37 seconds in Singapore – not 36, not 38. To make a Big Mac you just follow the damn rules.
The rules have been carefully designed by reasonably intelligent people (back at McDonald’s Hamburger University) so that dumdums can follow them just as well as smart people. In fact the rules include all kinds of failsafes, like bells that go off if you keep the fries in the oil too long, which were created to compensate for more than a little human frailty. There are stopwatches and timing systems everywhere. There is a system to make sure that the janitor checks if the bathrooms are clean every half hour. (Hint: they’re not.)
The system basically assumes that everybody will make a bunch of mistakes, but the burgers that come out will be, um, consistent, and you’ll always be asked if you want fries with that.
Just for the sake of amusement, let’s compare a McDonald’s cook, who is following a set of rules exactly and doesn’t know anything about food, to a genius like The Naked Chef, the British cutie Jamie Oliver. (If you chose to leave this site now and follow that link to watch the MTV-like videos of The Naked Chef making basil aioli, you have my blessing. Go in good health.) Anyway, comparing McDonald’s to a gourmet chef is completely absurd, but please suspend disbelief for a moment, because there’s something to be learned here.
Now, the Naked Chef doesn’t follow no stinkin’ Operations Manual. He doesn’t measure anything. While he’s cooking, you see a flurry of food tossed around willy-nilly. “We’ll just put a bit of extra rosemary in there, that won’t hurt, and give it a good old shake,” he says. “Mash it up. Perfect. Just chuck it all over the place.” (Yes, it really looks like he’s just chucking it all over the place. Sorry, but if I tried to chuck it all over the place, it wouldn’t work.) It takes about 14 seconds and he’s basically improvised a complete gourmet meal with roasted slashed fillet of sea-bass stuffed with herbs, baked on mushroom potatoes with a salsa-verde. Yum.
Well, I think it’s pretty obvious that The Naked Chef’s food is better than you get at McDonald’s. Even if it sounds like a stupid question, it’s worth a minute to ask why. It’s not such a stupid question. Why can’t a big company with zillions of resources, incredible scale, access to the best food designers money can buy, and infinite cash flow produce a nice meal?
Imagine that The Naked Chef gets bored doing “telly” and opens a restaurant. Of course, he’s a brilliant chef, the food would be incredible, so the place is hopping with customers and shockingly profitable.
When you have a shockingly profitable restaurant, you quickly realize that even if you fill up every night, and even if you charge $19 for an appetizer and $3.95 for a coke, your profits reach a natural limit, because one chef can only make so much food. So you hire another chef, and maybe open some more branches, maybe in other cities.
Now a problem starts to develop: what we in the technical fields call the scalability problem. When you try to clone a restaurant, you have a choice: hire another great chef of your caliber (in which case, that chef will probably want and expect to keep most of the extra profits that he created, so why bother), or hire a cheaper, younger chef who’s not quite as good, in which case your patrons will soon figure that out and they won’t go to the clone restaurant.
The common way of dealing with the scalability problem is to hire cheap chefs who don’t know anything, and give them such precise rules about how to create every dish that they “can’t” screw it up. Just follow these here rules, and you’ll make great gourmet food!
Problem: it doesn’t work exactly right. There are a million things that a good chef does that have to do with improvisation. A good chef sees some awesome mangos in the farmer’s market and improvises a mango-cilantro salsa for the fish of the day. A good chef deals with a temporary shortage of potatoes by creating some taro chip thing. An automaton chef who is merely following instructions might be able to produce a given dish when everything is working perfectly, but without real talent and skill, will not be able to improvise, which is why you never see jicama at McDonald’s.
McDonald’s requires a very particular variety of potato, which they grow all over the world, and which they pre-cut and freeze in massive quantities to survive shortages. The precutting and freezing means that the french-fries are not as good as they could be, but they are certainly consistent and require no chef-skills. In fact, McDonald’s does hundreds of things to make sure that their product can be produced with consistent quality, by any moron you can get in the kitchen, even if the quality is “a bit” lower.
Summary, so far:
- Some things need talent to do really well.
- It’s hard to scale talent.
- One way people try to scale talent is by having the talent create rules for the untalented to follow.
- The quality of the resulting product is very low.
You can see the exact same story playing out in IT consulting. How many times have you heard this story?
Mike was unhappy. He had hired a huge company of IT consultants to build The System. The IT consultants he hired were incompetents who kept talking about “The Methodology” and who spent millions of dollars and had failed to produce a single thing.
Luckily, Mike found a youthful programmer who was really smart and talented. The youthful programmer built his whole system in one day for $20 and pizza. Mike was overjoyed. He recommended the youthful programmer to all his friends.
Youthful Programmer starts raking in the money. Soon, he has more work than he can handle, so he hires a bunch of people to help him. The good people want too many stock options, so he decides to hire even younger programmers right out of college and “train them” with a 6 week course.
The trouble is that the “training” doesn’t really produce consistent results, so Youthful Programmer starts creating rules and procedures that are meant to make more consistent results. Over the years, the rule book grows and grows. Soon it’s a six-volume manual called The Methodology.
After a few dozen years, Youthful Programmer is now a Huge Incompetent IT Consultant with a capital-M-methodology and a lot of people who blindly obey the Methodology, even when it doesn’t seem to be working, because they have no bloody idea whatsoever what else to do, and they’re not really talented programmers — they’re just well-meaning Poli Sci majors who attended the six-week course.
And Newly Huge Incompetent IT Consultant starts messing up. Their customers are unhappy. And another upstart talented programmer comes and takes away all their business, and the cycle begins anew.
I don’t need to name names here; this cycle has happened a dozen times. All the IT service companies get greedy and try to grow faster than they can find talented people, and they grow layers upon layers of rules and procedures which help produce “consistent,” if not very brilliant, work.
But the rules and procedures only work when nothing goes wrong. Various “data-backed Web site” consulting companies sprouted up in the last couple of years and filled their ranks by teaching rank amateurs the fourteen things you need to know to create a data-backed Web site (“here’s a select statement, kid, build a Web site”). But now that dotcoms are imploding and there’s suddenly demand for high-end GUI programming, C++ skills, and real computer science, the kids who only have select statements in their arsenal just have too steep a learning curve and can’t catch up. But they still keep trying, following the rules in chapter 17 about normalizing databases, which mysteriously don’t apply to The New World. The brilliant founders of these companies could certainly adjust to the new world: they are talented computer scientists who can learn anything, but the company they built can’t adjust because it has substituted a rulebook for talent, and rulebooks don’t adjust to new times.
What’s the moral of the story? Beware of Methodologies. They are a great way to bring everyone up to a dismal, but passable, level of performance, but at the same time, they are aggravating to more talented people who chafe at the restrictions that are placed on them. It’s pretty obvious to me that a talented chef is not going to be happy making burgers at McDonald’s, precisely because of McDonald’s rules. So why do IT consultants brag so much about their methodologies? (Beats me.)
What does this mean for Fog Creek? Well, our goal has never been to become a huge consulting company. We started out doing consulting as a means to an end — the long-term goal was to be a software company that is always profitable, and we achieved that by doing some consulting work to supplement our software income. After a couple of years in business our software revenues grew to the point where consulting was just a low-margin distraction, so now we only do consulting engagements that directly support our software. Software, as you know, scales incredibly well. When one new person buys FogBUGZ, we make more money without spending any more money.
More important is our obsession with hiring the best… we are perfectly happy to stay small if we can’t find enough good people (although with six weeks annual vacation, finding people doesn’t seem to pose a problem). And we refuse to grow until the people we already hired have learned enough to become teachers and mentors of the new crowd.
2001/01/18
Mystery: why is it that some of the biggest IT consulting companies in the world do the worst work?
Why is it that the cool upstart consulting companies start out with a string of spectacular successes, meteoric growth, and rapidly degenerate into mediocrity?
Big Macs vs. The Naked Chef tries to explain it.
2001/01/16
Remember how I said you should never sign non-compete agreements?