2004
News
Oh, goody, Firefox 0.9 is here. And it’s less than a 5 MB download. I have long since switched to Firefox for web browsing. I switched for the popup blocking but I stayed for the tabbed browsing.
Here are three reasons to switch web browsers today:
- You’ll get fewer viruses, and you won’t get annoying popups asking if you want to install lame spyware that will ruin your computer, forcing a complete reinstall.
- You can open all your bookmarks in tabs, all at once, and let them download in the background while you read them.
- You’ll help break the Microsoft Monopoly on web browsers. Microsoft took over the browser market fair and square by making a better product, but they were so afraid that Web-based applications would eliminate the need for Windows that they locked the IE team in a dark dungeon and they haven’t allowed improvements to IE for several years now. Now Firefox is the better product and there’s a glimmer of hope that one day DHTML will actually improve to the point where web-based applications are just as good as Windows-based applications.
How Microsoft Lost the API War
Here’s a theory you hear a lot these days: “Microsoft is finished. As soon as Linux makes some inroads on the desktop and web applications replace desktop applications, the mighty empire will topple.”
Although Linux is a huge threat to Microsoft, predictions of the Redmond company’s demise are, to say the least, premature. Microsoft has an incredible amount of cash in the bank and is still incredibly profitable. It has a long way to fall. It could do everything wrong for a decade before it started to be in remote danger, and you never know… they could reinvent themselves as a shaved-ice company at the last minute. So don’t be so quick to write them off. In the early 90s everyone thought IBM was completely over: mainframes were history! Back then, Robert X. Cringely predicted that the era of the mainframe would end on January 1, 2000, when all the applications written in COBOL would seize up, and rather than fix those applications, for which, allegedly, the source code had long since been lost, everybody would rewrite those applications for client-server platforms.
Well, guess what. Mainframes are still with us, nothing happened on January 1, 2000, and IBM reinvented itself as a big ol’ technology consulting company that also happens to make cheap plastic telephones. So extrapolating from a few data points to the theory that Microsoft is finished is really quite a severe exaggeration.
However, there is a less understood phenomenon which is going largely unnoticed: Microsoft’s crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft’s monopoly power and of the incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft’s income and cover up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody has noticed yet.
Now that I’ve said that, allow me to apologize for the grandiloquence and pomposity of that preceding paragraph. I think I’m starting to sound like those editorial writers in the trade rags who go on and on about Microsoft’s strategic asset, the Windows API. It’s going to take me a few pages, here, to explain what I’m really talking about and justify my arguments, so please don’t jump to any conclusions until then. This will be a long article. I need to explain what the Windows API is; I need to demonstrate why it’s the most important strategic asset to Microsoft; I need to explain how it was lost and what the implications of that are in the long term. And because I’m talking about big trends, I need to exaggerate and generalize.
Developers, Developers, Developers, Developers
Remember the definition of an operating system? It’s the thing that manages a computer’s resources so that application programs can run. People don’t really care much about operating systems; they care about those application programs that the operating system makes possible. Word Processors. Instant Messaging. Email. Accounts Payable. Web sites with pictures of Paris Hilton. By itself, an operating system is not that useful. People buy operating systems because of the useful applications that run on them. And therefore the most useful operating system is the one that has the most useful applications.
The logical conclusion of this is that if you’re trying to sell operating systems, the most important thing to do is make software developers want to develop software for your operating system. That’s why Steve Ballmer was jumping around the stage shouting “Developers, developers, developers, developers.” It’s so important for Microsoft that the only reason they don’t outright give away development tools for Windows is that they don’t want to inadvertently cut off the oxygen to competitive development-tools vendors (well, those that are left): having a variety of development tools available for their platform makes it that much more attractive to developers. But they really want to give away the development tools. Through their Empower ISV program you can get five complete sets of MSDN Universal (otherwise known as “basically every Microsoft product except Flight Simulator”) for about $375. Command line compilers for the .NET languages are included with the free .NET runtime… also free. The C++ compiler is now free. Anything to encourage developers to build for the .NET platform, stopping just short of wiping out companies like Borland.
Why Apple and Sun Can’t Sell Computers
Well, of course, that’s a little bit silly: of course Apple and Sun can sell computers, but not to the two most lucrative markets for computers, namely, the corporate desktop and the home computer. Apple is still down there in the very low single digits of market share and the only people with Suns on their desktops are at Sun. (Please understand that I’m talking about large trends here, and therefore when I say things like “nobody” I really mean “fewer than 10,000,000 people,” and so on and so forth.)
Why? Because Apple and Sun computers don’t run Windows programs, or, if they do, it’s in some kind of expensive emulation mode that doesn’t work so great. Remember, people buy computers for the applications that they run, and there’s so much more great desktop software available for Windows than Mac that it’s very hard to be a Mac user.
Sidebar: What is this “API” thing?
If you’re writing a program, say, a word processor, and you want to display a menu, or write a file, you have to ask the operating system to do it for you, using a very specific set of function calls which are different on every operating system. These function calls are called the API: it’s the interface that an operating system, like Windows, provides to application developers, like the programmers building word processors and spreadsheets and whatnot. It’s a set of thousands and thousands of detailed and fussy functions and subroutines that programmers can use, which cause the operating system to do interesting things like display a menu, read and write files, and more esoteric things like find out how to spell out a given date in Serbian, or extremely complex things like display a web page in a window. If your program uses the API calls for Windows, it’s not going to work on Linux, which has different API calls. Sometimes they do approximately the same thing. That’s one important reason Windows software doesn’t run on Linux. If you wanted to get a Windows program to run under Linux, you’d have to reimplement the entire Windows API, which consists of thousands of complicated functions: this is almost as much work as implementing Windows itself, something which took Microsoft thousands of person-years. And if you make one tiny mistake or leave out one function that an application needs, that application will crash.
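To make that concrete, here’s the same trivial task (create a file and write a line to it) expressed through two different operating systems’ APIs. This is just a minimal sketch of my own, not code from any real product; it shows how the function names, parameters, and types differ completely even though the task is identical:

```c
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <fcntl.h>
#include <unistd.h>
#endif

/* Create a file and write "hello" to it, using whichever
   operating system API the program was compiled against. */
int write_greeting(const char *path) {
#ifdef _WIN32
    /* Win32 API: Windows-specific names, handles, and types. */
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return -1;
    DWORD written = 0;
    WriteFile(h, "hello\n", 6, &written, NULL);
    CloseHandle(h);
#else
    /* POSIX (Linux, Mac): the same idea, completely different calls. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    write(fd, "hello\n", 6);
    close(fd);
#endif
    return 0;
}
```

A program built on the top branch will never run on Linux, and vice versa, and that is the whole point of the sidebar.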
And that’s why the Windows API is such an important asset to Microsoft.
(I know, I know, at this point the 2.3% of the world that uses Macintoshes are warming up their email programs to send me a scathing letter about how much they love their Macs. Once again, I’m speaking in large trends and generalizing, so don’t waste your time. I know you love your Mac. I know it runs everything you need. I love you, you’re a Pepper, but you’re only 2.3% of the world, so this article isn’t about you.)
The Two Forces at Microsoft
There are two opposing forces inside Microsoft, which I will refer to, somewhat tongue-in-cheek, as The Raymond Chen Camp and The MSDN Magazine Camp.
Raymond Chen is a developer on the Windows team at Microsoft. He’s been there since 1992, and his weblog The Old New Thing is chock-full of detailed technical stories about why certain things are the way they are in Windows, even silly things, which turn out to have very good reasons.
The most impressive things to read on Raymond’s weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility:
Look at the scenario from the customer’s standpoint. You bought programs X, Y and Z. You then upgraded to Windows XP. Your computer now crashes randomly, and program Z doesn’t work at all. You’re going to tell your friends, “Don’t upgrade to Windows XP. It crashes randomly, and it’s not compatible with program Z.” Are you going to debug your system to determine that program X is causing the crashes, and that program Z doesn’t work because it is using undocumented window messages? Of course not. You’re going to return the Windows XP box for a refund. (You bought programs X, Y, and Z some months ago. The 30-day return policy no longer applies to them. The only thing you can return is Windows XP.)
I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows, where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it was, ran the memory allocator in a special mode in which you could still use memory after freeing it.
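To sketch what a trick like that could look like, here’s a hypothetical allocator with a “compatibility mode”: when the flag is set, freed blocks go into a quarantine instead of being recycled, so a buggy program that reads memory right after freeing it keeps working. This is my own illustration, not Microsoft’s actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical compatibility-mode allocator. In compat mode, compat_free
   quarantines the block instead of freeing it, so the block's contents
   stay valid even after the application "freed" it. */

#define QUARANTINE_SLOTS 64

static void *quarantine[QUARANTINE_SLOTS];
static int quarantine_count = 0;
int compat_mode = 0;   /* set when a known-buggy app (say, SimCity) runs */

void compat_free(void *p) {
    if (compat_mode && quarantine_count < QUARANTINE_SLOTS) {
        /* Defer the real free: a use-after-"free" read still works. */
        quarantine[quarantine_count++] = p;
    } else {
        free(p);       /* strict mode: release immediately, as usual */
    }
}

void quarantine_flush(void) {
    /* Actually release the deferred blocks, e.g. when the app exits. */
    while (quarantine_count > 0)
        free(quarantine[--quarantine_count]);
}
```

With `compat_mode` set, a program that frees a buffer and then reads it anyway gets away with it, which is exactly the behavior the buggy application depended on.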
This was not an unusual case. The Windows testing team is huge and one of their most important responsibilities is guaranteeing that everyone can safely upgrade their operating system, no matter what applications they have installed, and those applications will continue to run, even if those applications do bad things or use undocumented functions or rely on buggy behavior that happens to be buggy in Windows n but is no longer buggy in Windows n+1. In fact if you poke around in the AppCompatibility section of your registry you’ll see a whole list of applications that Windows treats specially, emulating various old bugs and quirky behaviors so they’ll continue to work. Raymond Chen writes, “I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95.”
A lot of developers and engineers don’t agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It’s why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple’s official Bible of Macintosh programming, there was a tech note saying “you can’t do this,” they did it, and it worked, and their programs ran faster… until the next version of the OS came out and they didn’t run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.
By contrast, I’ve got DOS applications that I wrote in 1983 for the very original IBM PC that still run flawlessly, thanks to the Raymond Chen Camp at Microsoft. I know, it’s not just Raymond, of course: it’s the whole modus operandi of the core Windows API team. But Raymond has publicized it the most through his excellent website The Old New Thing, so I’ll name it after him.
That’s one camp. The other camp is what I’m going to call the MSDN Magazine camp, which I will name after the developer’s magazine full of exciting articles about all the different ways you can shoot yourself in the foot by using esoteric combinations of Microsoft products in your own software. The MSDN Magazine Camp is always trying to convince you to use new and complicated external technology like COM+, MSMQ, MSDE, Microsoft Office, Internet Explorer and its components, MSXML, DirectX (the very latest version, please), Windows Media Player, and SharePoint… SharePoint! which nobody has; a veritable panoply of external dependencies, each one of which is going to be a huge headache when you ship your application to a paying customer and it doesn’t work right. The technical name for this is DLL Hell. It works here: why doesn’t it work there?
The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere (well, on any Windows box). The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. The Raymond Chen camp is all about consolidation. Please, don’t make things any worse, let’s just keep making what we already have still work. The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.
Here’s why this matters.
Microsoft Lost the Backwards Compatibility Religion
Inside Microsoft, the MSDN Magazine Camp has won the battle.
The first big win was making Visual Basic.NET not backwards-compatible with VB 6.0. This was literally the first time in living memory that when you bought an upgrade to a Microsoft product, your old data (i.e. the code you had written in VB6) could not be imported perfectly and silently. It was the first time a Microsoft upgrade did not respect the work that users did using the previous version of a product.
And the sky didn’t seem to fall, not inside Microsoft. VB6 developers were up in arms, but they were disappearing anyway, because most of them were corporate developers who were migrating to web development. The real long-term damage was hidden.
With this major victory under their belts, the MSDN Magazine Camp took over. Suddenly it was OK to change things. IIS 6.0 came out with a different threading model that broke some old applications. I was shocked to discover that our customers with Windows Server 2003 were having trouble running FogBugz. Then .NET 1.1 was not perfectly backwards compatible with 1.0. And now that the cat was out of the bag, the OS team got into the spirit and decided that instead of adding features to the Windows API, they were going to completely replace it. Instead of Win32, we are told, we should now start getting ready for WinFX: the next generation Windows API. All different. Based on .NET with managed code. XAML. Avalon. Yes, vastly superior to Win32, I admit it. But not an upgrade: a break with the past.
Outside developers, who were never particularly happy with the complexity of Windows development, have defected from the Microsoft platform en-masse and are now developing for the web. Paul Graham, who created Yahoo! Stores in the early days of the dotcom boom, summarized it eloquently: “There is all the more reason for startups to write Web-based software now, because writing desktop software has become a lot less fun. If you want to write desktop software now you do it on Microsoft’s terms, calling their APIs and working around their buggy OS. And if you manage to write something that takes off, you may find that you were merely doing market research for Microsoft.”
Microsoft got big enough, with too many developers and too much addiction to upgrade revenues, that suddenly reinventing everything no longer seemed like too big a project. Heck, we can do it twice. The old Microsoft, the Microsoft of Raymond Chen, might have implemented things like Avalon, the new graphics system, as a series of DLLs that can run on any version of Windows and which could be bundled with applications that need them. There’s no technical reason not to do this. But Microsoft needs to give you a reason to buy Longhorn, and what they’re trying to pull off is a sea change, similar to the sea change that occurred when Windows replaced DOS. The trouble is that Longhorn is not a very big advance over Windows XP; not nearly as big as Windows was over DOS. It probably won’t be compelling enough to get people to buy all new computers and applications like they did for Windows. Well, maybe it will, Microsoft certainly needs it to be, but what I’ve seen so far is not very convincing. A lot of the bets Microsoft made are the wrong ones. For example, WinFS, advertised as a way to make searching work by making the file system be a relational database, ignores the fact that the real way to make searching work is by making searching work. Don’t make me type metadata for all my files that I can search using a query language. Just do me a favor and search the damned hard drive, quickly, for the string I typed, using full-text indexes and other technologies that were boring in 1973.
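For what it’s worth, the boring-since-1973 technology is just an inverted index: map each word to the set of documents containing it, so a search never has to scan the whole disk. Here’s a toy sketch of my own (fixed-size arrays, whitespace tokenizing, nothing fancy) to show how little machinery the idea needs:

```c
#include <string.h>

/* Toy inverted index: each word maps to a bitmask of the documents
   that contain it. Lookups never touch the documents themselves. */

#define MAX_WORDS 256

static char words[MAX_WORDS][32];
static unsigned doc_bits[MAX_WORDS];   /* bit i set = doc i has this word */
static int word_count = 0;

static int find_word(const char *w) {
    for (int i = 0; i < word_count; i++)
        if (strcmp(words[i], w) == 0) return i;
    return -1;
}

/* Split text on spaces and record each word as present in doc_id. */
void index_doc(int doc_id, const char *text) {
    char buf[256], *tok;
    strncpy(buf, text, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
        int i = find_word(tok);
        if (i < 0 && word_count < MAX_WORDS) {
            i = word_count++;
            strncpy(words[i], tok, 31);
            words[i][31] = '\0';
        }
        if (i >= 0) doc_bits[i] |= 1u << doc_id;
    }
}

/* Return the bitmask of documents containing word w (0 = no match). */
unsigned search(const char *w) {
    int i = find_word(w);
    return i < 0 ? 0 : doc_bits[i];
}
```

Index every file once, in the background, and “search the damned hard drive, quickly” is just a hash lookup away.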
Automatic Transmissions Win the Day
Don’t get me wrong… I think .NET is a great development environment and Avalon with XAML is a tremendous advance over the old way of writing GUI apps for Windows. The biggest advantage of .NET is the fact that it has automatic memory management.
A lot of us thought in the 1990s that the big battle would be between procedural and object oriented programming, and we thought that object oriented programming would provide a big boost in programmer productivity. I thought that, too. Some people still think that. It turns out we were wrong. Object oriented programming is handy dandy, but it’s not really the productivity booster that was promised. The real significant productivity advance we’ve had in programming has been from languages which manage memory for you automatically. It can be with reference counting or garbage collection; it can be Java, Lisp, Visual Basic (even 1.0), Smalltalk, or any of a number of scripting languages. If your programming language allows you to grab a chunk of memory without thinking about how it’s going to be released when you’re done with it, you’re using a managed-memory language, and you are going to be much more efficient than someone using a language in which you have to explicitly manage memory. Whenever you hear someone bragging about how productive their language is, they’re probably getting most of that productivity from the automated memory management, even if they misattribute it.
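To be concrete about what “the language manages memory for you” means mechanically, here’s a toy reference-counting sketch of my own, in C since C is the language this article keeps coming back to. The point is that no caller ever decides when the buffer gets freed; the count does:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal reference-counted string: the last owner to release it
   triggers the free, so no caller has to track ownership manually. */
typedef struct {
    int refs;
    char text[64];
} RcString;

RcString *rc_new(const char *s) {
    RcString *r = malloc(sizeof *r);
    r->refs = 1;                          /* creator is the first owner */
    strncpy(r->text, s, sizeof r->text - 1);
    r->text[sizeof r->text - 1] = '\0';
    return r;
}

RcString *rc_retain(RcString *r) {        /* a new owner appears */
    r->refs++;
    return r;
}

void rc_release(RcString *r) {            /* an owner goes away */
    if (--r->refs == 0)
        free(r);                          /* last owner gone: reclaim */
}
```

In a managed language the compiler or runtime inserts the retain/release bookkeeping (or a garbage collector replaces it entirely), which is exactly the work you no longer spend your day thinking about.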
Sidebar: Why does automatic memory management make you so much more productive? (1) Because you can write f(g(x)) without worrying about how to free the value that g returns; (2) because you don’t have to dedicate lines of code, or mental energy, to releasing memory at exactly the right moment; and (3) because you eliminate a whole class of crashing bugs, like dangling pointers, double frees, and memory leaks, in one stroke.
Racing car aficionados will probably send me hate mail for this, but my experience has been that there is only one case, in normal driving, where a good automatic transmission is inferior to a manual transmission. Similarly in software development: in almost every case, automatic memory management is superior to manual memory management and results in far greater programmer productivity.
If you were developing desktop applications in the early years of Windows, Microsoft offered you two ways to do it: writing C code which calls the Windows API directly and managing your own memory, or using Visual Basic and getting your memory managed for you. These are the two development environments I have used the most, personally, over the last 13 years or so, and I know them inside-out, and my experience has been that Visual Basic is significantly more productive. Often I’ve written the same code, once in C++ calling the Windows API and once in Visual Basic, and C++ always took three or four times as much work. Why? Memory management. The easiest way to see why is to look at the documentation for any Windows API function that needs to return a string. Look closely at how much discussion there is around the concept of who allocates the memory for the string, and how you negotiate how much memory will be needed. Typically, you have to call the function twice—on the first call, you tell it that you’ve allocated zero bytes, and it fails with a “not enough memory allocated” message and conveniently also tells you how much memory you need to allocate. That’s if you’re lucky enough not to be calling a function which returns a list of strings or a whole variable-length structure. In any case, simple operations like opening a file, writing a string, and closing it using the raw Windows API can take a page of code. In Visual Basic similar operations can take three lines.
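Here’s what that two-call convention looks like in practice. The function below, get_user_name, is a made-up stand-in of mine that mimics the style of the real Windows string-returning functions; it is not an actual Win32 API:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical Win32-style string-returning function: the caller owns
   the buffer. A too-small buffer makes the call fail, but *size is set
   to the number of bytes the caller should have allocated. */
int get_user_name(char *buf, size_t *size) {
    const char *name = "joel";            /* stand-in for the real lookup */
    size_t needed = strlen(name) + 1;
    if (buf == NULL || *size < needed) {
        *size = needed;                   /* report the required size... */
        return 0;                         /* ...and fail */
    }
    memcpy(buf, name, needed);
    return 1;                             /* success */
}
```

So the idiomatic caller goes: call once with a NULL buffer to learn the size, malloc exactly that much, call again, and remember to free it later. Three lines of Visual Basic become a dozen lines of C, all of them memory bookkeeping.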
So, you’ve got these two programming worlds. Everyone has pretty much decided that the world of managed code is far superior to the world of unmanaged code. Visual Basic was (and probably remains) the number one bestselling language product of all time, and developers preferred it over C or C++ for Windows development. But the fact that “Basic” was in the name of the product made hardcore programmers shun it, even though it was a fairly modern language with a handful of object-oriented features and very little leftover gunk (line numbers and the LET statement having gone the way of the hula hoop). The other problem with VB was that deployment required shipping a VB runtime, which was a big deal for shareware distributed over modems, and, worse, let other programmers see that your application was developed in (the shame!) Visual Basic.
One Runtime To Rule Them All
And along came .NET. This was a grand project, the super-duper unifying project to clean up the whole mess once and for all. It would have memory management, of course. It would still have Visual Basic, but it would gain a new language, one which is in spirit virtually the same as Visual Basic but with the C-like syntax of curly braces and semicolons. And best of all, the new Visual Basic/C hybrid would be called Visual C#, so you would not have to tell anyone you were a “Basic” programmer any more. All those horrid Windows functions with their tails and hooks and backwards-compatibility bugs and impossible-to-figure-out string-returning semantics would be wiped out, replaced by a single clean object oriented interface that only has one kind of string. One runtime to rule them all. It was beautiful. And they pulled it off, technically. .NET is a great programming environment that manages your memory and has a rich, complete, and consistent interface to the operating system and a rich, super complete, and elegant object library for basic operations.
And yet, people aren’t really using .NET much.
Oh sure, some of them are.
But the idea of unifying the mess of Visual Basic and Windows API programming by creating a completely new, ground-up programming environment with not one, not two, but three languages (or are there four?) is sort of like the idea of getting two quarreling kids to stop arguing by shouting “shut up!” louder than either of them. It only works on TV. In real life when you shout “shut up!” to two people arguing loudly you just create a louder three-way argument.
(By the way, for those of you who follow the arcane but politically-charged world of blog syndication feed formats, you can see the same thing happening over there. RSS became fragmented with several different versions, inaccurate specs and lots of political fighting, and the attempt to clean everything up by creating yet another format called Atom has resulted in several different versions of RSS plus one version of Atom, inaccurate specs and lots of political fighting. When you try to unify two opposing forces by creating a third alternative, you just end up with three opposing forces. You haven’t unified anything and you haven’t really fixed anything.)
So now instead of .NET unifying and simplifying, we have a big 6-way mess, with everybody trying to figure out which development strategy to use and whether they can afford to port their existing applications to .NET.
No matter how consistent Microsoft is in their marketing message (“just use .NET—trust us!”), most of their customers are still using C, C++, Visual Basic 6.0, and classic ASP, not to mention all the other development tools from other companies. And the ones that are using .NET are using ASP.NET to develop web applications, which run on a Windows server but don’t require Windows clients, which is a key point I’ll talk about more when I talk about the web.
Oh, Wait, There’s More Coming!
Now Microsoft has so many developers cranking away that it’s not enough to reinvent the entire Windows API: they have to reinvent it twice. At last year’s PDC they preannounced the next major version of their operating system, codenamed Longhorn, which will contain, among other things, a completely new user interface API, codenamed Avalon, rebuilt from the ground up to take advantage of modern computers’ fast display adapters and realtime 3D rendering. And if you’re developing a Windows GUI app today using Microsoft’s “official” latest-and-greatest Windows programming environment, WinForms, you’re going to have to start over again in two years to support Longhorn and Avalon. Which explains why WinForms is completely stillborn. Hope you haven’t invested too much in it. Jon Udell found a slide from Microsoft labeled “How Do I Pick Between Windows Forms and Avalon?” and asks, “Why do I have to pick between Windows Forms and Avalon?” A good question, and one to which he finds no great answer.
So you’ve got the Windows API, you’ve got VB, and now you’ve got .NET, in several language flavors, and don’t get too attached to any of that, because we’re making Avalon, you see, which will only run on the newest Microsoft operating system, which nobody will have for a loooong time. And personally I still haven’t had time to learn .NET very deeply, and we haven’t ported Fog Creek’s two applications from classic ASP and Visual Basic 6.0 to .NET because there’s no return on investment for us. None. It’s just Fire and Motion as far as I’m concerned: Microsoft would love for me to stop adding new features to our bug tracking software and content management software and instead waste a few months porting it to another programming environment, something which will not benefit a single customer and therefore will not gain us one additional sale, and therefore which is a complete waste of several months, which is great for Microsoft, because they have content management software and bug tracking software, too, so they’d like nothing better than for me to waste time spinning cycles catching up with the flavor du jour, and then waste another year or two doing an Avalon version, too, while they add features to their own competitive software. Riiiight.
No developer with a day job has time to keep up with all the new development tools coming out of Redmond, if only because there are too many dang employees at Microsoft making development tools!
It’s Not 1990
Microsoft grew up during the 1980s and 1990s, when the growth in personal computers was so dramatic that every year there were more new computers sold than the entire installed base. That meant that if you made a product that only worked on new computers, within a year or two it could take over the world even if nobody switched to your product. That was one of the reasons Word and Excel displaced WordPerfect and Lotus so thoroughly: Microsoft just waited for the next big wave of hardware upgrades and sold Windows, Word and Excel to corporations buying their next round of desktop computers (in some cases their first round). So in many ways Microsoft never needed to learn how to get an installed base to switch from product N to product N+1. When people get new computers, they’re happy to get all the latest Microsoft stuff on the new computer, but they’re far less likely to upgrade. This didn’t matter when the PC industry was growing like wildfire, but now that the world is saturated with PCs most of which are Just Fine, Thank You, Microsoft is suddenly realizing that it takes much longer for the latest thing to get out there. When they tried to “End Of Life” Windows 98, it turned out there were still so many people using it they had to promise to support that old creaking grandma for a few more years.
Unfortunately, these Brave New Strategies, things like .NET and Longhorn and Avalon, trying to create a new API to lock people into, can’t work very well if everybody is still using their good-enough computers from 1998. Even if Longhorn ships when it’s supposed to, in 2006, which I don’t believe for a minute, it will take a couple of years before enough people have it that it’s even worth considering as a development platform. Developers, developers, developers, and developers are not buying into Microsoft’s multiple-personality-disordered suggestions for how we should develop software.
Enter the Web
I’m not sure how I managed to get this far without mentioning the Web. Every developer has a choice to make when they plan a new software application: they can build it for the web or they can build a “rich client” application that runs on PCs. The basic pros and cons are simple: Web applications are easier to deploy, while rich clients offer faster response time enabling much more interesting user interfaces.
Web applications are easier to deploy because there’s no installation involved. Installing a web application means typing a URL in the address bar. Today I installed Google’s new email application by typing Alt+D, gmail, Ctrl+Enter. There are far fewer compatibility problems and problems coexisting with other software. Every user of your product is using the same version so you never have to support a mix of old versions. You can use any programming environment you want because you only have to get it up and running on your own server. Your application is automatically available at virtually every reasonable computer on the planet. Your customers’ data, too, is automatically available at virtually every reasonable computer on the planet.
But there’s a price to pay in the smoothness of the user interface. Here are a few examples of things you can’t really do well in a web application:
- Create a fast drawing program
- Build a real-time spell checker with wavy red underlines
- Warn users that they are going to lose their work if they hit the close box of the browser
- Update a small part of the display based on a change that the user makes without a full roundtrip to the server
- Create a fast keyboard-driven interface that doesn’t require the mouse
- Let people continue working when they are not connected to the Internet
These are not all big issues. Some of them will be solved very soon by witty JavaScript developers. Two new web applications, Gmail and Oddpost, both email apps, do a really decent job of working around or completely solving some of these issues. And users don’t seem to care about the little UI glitches and slowness of web interfaces. Almost all the normal people I know are perfectly happy with web-based email, for some reason, no matter how much I try to convince them that the rich client is, uh, richer.
So the Web user interface is about 80% there, and even without new web browsers we can probably get 95% there. This is Good Enough for most people and it’s certainly good enough for developers, who have voted to develop almost every significant new application as a web application.
Which means, suddenly, Microsoft’s API doesn’t matter so much. Web applications don’t require Windows.
It’s not that Microsoft didn’t notice this was happening. Of course they did, and when the implications became clear, they slammed on the brakes. Promising new technologies like HTAs and DHTML were stopped in their tracks. The Internet Explorer team seems to have disappeared; they have been completely missing in action for several years. There’s no way Microsoft is going to allow DHTML to get any better than it already is: it’s just too dangerous to their core business, the rich client. The big meme at Microsoft these days is: “Microsoft is betting the company on the rich client.” You’ll see that somewhere in every slide presentation about Longhorn. Joe Beda, from the Avalon team, says that “Avalon, and Longhorn in general, is Microsoft’s stake in the ground, saying that we believe power on your desktop, locally sitting there doing cool stuff, is here to stay. We’re investing on the desktop, we think it’s a good place to be, and we hope we’re going to start a wave of excitement…”
The trouble is: it’s too late.
I’m a Little Bit Sad About This, Myself
I’m actually a little bit sad about this, myself. To me the Web is great but Web-based applications with their sucky, high-latency, inconsistent user interfaces are a huge step backwards in daily usability. I love my rich client applications and would go nuts if I had to use web versions of the applications I use daily: Visual Studio, CityDesk, Outlook, Corel PhotoPaint, QuickBooks. But that’s what developers are going to give us. Nobody (by which, again, I mean “fewer than 10,000,000 people”) wants to develop for the Windows API any more. Venture Capitalists won’t invest in Windows applications because they’re so afraid of competition from Microsoft. And most users don’t seem to care about crappy Web UIs as much as I do.
And here’s the clincher: I noticed (and confirmed this with a recruiter friend) that Windows API programmers here in New York City who know C++ and COM programming earn about $130,000 a year, while typical Web programmers using managed code languages (Java, PHP, Perl, even ASP.NET) earn about $80,000 a year. That’s a huge difference, and when I talked to some friends from Microsoft Consulting Services about this they admitted that Microsoft had lost a whole generation of developers. The reason it takes $130,000 to hire someone with COM experience is because nobody bothered learning COM programming in the last eight years or so, so you have to find somebody really senior, usually they’re already in management, and convince them to take a job as a grunt programmer, dealing with (God help me) marshalling and monikers and apartment threading and aggregates and tearoffs and a million other things that, basically, only Don Box ever understood, and even Don Box can’t bear to look at them any more.
Much as I hate to say it, a huge chunk of developers have long since moved to the web and refuse to move back. Most .NET developers are ASP.NET developers, developing for Microsoft’s web server. ASP.NET is brilliant; I’ve been working with web development for ten years and it’s really just a generation ahead of everything out there. But it’s a server technology, so clients can use any kind of desktop they want. And it runs pretty well under Linux using Mono.
None of this bodes well for Microsoft and the profits it enjoyed thanks to its API power. The new API is HTML, and the new winners in the application development marketplace will be the people who can make HTML sing.
Mike Gunderloy’s Coder to Developer
Note: This is my foreword to Mike Gunderloy’s awesome new book, Coder to Developer. The book is now available from SYBEX.
You know what drives me crazy?
“Everything?” you ask. Well, OK, some of you know me a bit too well by now.
But seriously, folks, what drives me crazy is that most software developers don’t realize just how little they know about software development.
Take, for example, me.
When I was a teenager, as soon as I finished reading Peter Norton’s famous guide to programming the IBM-PC in Assembler, I was convinced that I knew everything there was to know about software development in general. Heck, I was ready to start a software company to make a word processor, you see, and it was going to be really good. My imaginary software company was going to have coffee breaks with free donuts every hour. A lot of my daydreams in those days involved donuts.
When I got out of the army, I headed off to college and got a degree in Computer Science. Now I really knew everything. I knew more than everything, because I had learned a bunch of computer-scientific junk about linear algebra and NP completeness and frigging lambda calculus which was obviously useless, so I thought they must have run out of useful things to teach us and were scraping the bottom of the barrel.
Nope. At my first job I noticed how many things there are that many Computer Science departments are too snooty to actually teach you. Things like software teamwork. Practical advice about user interface design. Professional tools like source code control, bug tracking databases, debuggers and profilers. Business things. Computer Science departments in the most prestigious institutions just won’t teach you this stuff because they consider it “vocational,” not academic; the kind of thing that high school dropouts learn at the local technical institute so they can have a career as an auto mechanic, or an air-conditioner repairman, or a (holding nose between thumb and forefinger) “software developer.”
I can sort of understand that attitude. After all, many prestigious undergraduate institutions see their goal as preparing you for life, not teaching you a career, least of all a career in a field that changes so rapidly any technologies you learn now will be obsolete in a decade.
Over the next decade I proceeded to learn an incredible amount about software development and all the things it takes to produce software. I worked at Microsoft on the Excel team, at Viacom on the web team, and at Juno on their email client. And, you know what? At every point in the learning cycle, I was completely convinced that I knew everything there was to know about software development.
“Maybe you’re just an arrogant sod?” you ask, possibly using an even spicier word than “sod.” I beg your pardon: this is my foreword; if you want to be rude write your own damn foreword, tear mine out of the book, and put yours in instead.
There’s something weird about software development, some mystical quality, that makes all kinds of people think they know how to do it. I’ve worked at dotcom-type companies full of liberal arts majors with no software experience or training who nevertheless were convinced that they knew how to manage software teams and design user interfaces. This is weird, because nobody thinks they know how to remove a burst appendix, or rebuild a car engine, unless they actually know how to do it, but for some reason there are all these people floating around who think they know everything there is to know about software development.
Anyway, the responsibility is going to fall on your shoulders. You’re probably going to have to learn how to do software development on your own. If you’re really lucky, you’ve had some experience working directly with top notch software developers who can teach you this stuff, but most people don’t have that opportunity. So I’m glad to see that Mike Gunderloy has taken it upon himself to write the book you hold in your hands. Here you will find a well-written and enjoyable introduction to many of the most important things that you’re going to need to know as you move from being a person who can write code to being a person who can develop software. Do those sound like the same thing? They’re not. That’s roughly the equivalent of going from being a six-year-old who can crayon some simple words, backwards N’s and all, to being a successful novelist who writes books that receive rave reviews and sell millions of copies. Being a software developer means you can take a concept, build a team, set up state-of-the-art development processes, design a software product, the right software product, and produce it. Not just any software product: a high quality software product that solves a problem and delights your users. With documentation. A web page. A setup program. Test cases. Norwegian versions. Bokmål and Nynorsk. Appetizers, dessert, and twenty-seven eight-by-ten color glossy photographs with circles and arrows and a paragraph on the back of each one explaining what each one was. (Apologies to Arlo Guthrie.)
And then, one day, finally, perhaps when it’s too late, you’ll wake up and say, “Hmm. Maybe I really don’t know what it really takes to develop software.” And on that day only, and not one minute before, but on that day and from that day forward, you will have earned the right to call yourself a software developer. In the meantime, all is not lost: you still have my blessing if you want to eat donuts every hour.
News
Perfectionism
If I was as much of a perfectionist as some here would have me be, I would never get out the door in the morning, I’d be so busy scrubbing the floors of my apartment until they sparkle and shaving every ten minutes and removing lint from my clothing with masking tape, and by the time I finished that I’d have to shave again and take out the trash because there was masking tape in the trash and re-scrub the floor because when I took the trash out I might have tracked in dust. And then I’d have to shave again.
I could go insane with the web page behind the discussion board. First I could make it 110% XHTML 1.1 + CSS. Heck, why not XHTML 2.0 just to be extra addictive-personality-disordered. Then I could neatly format all the HTML code so it’s perfectly indented. But the HTML is generated by a script, and the script has to be indented correctly so that it’s perfect too, and a correctly indented ASP script does not, by definition, produce correctly indented HTML. So I could write a filter that takes the output of the ASP script and reindents it so that if anybody does a View Source they would see neatly indented HTML and think I have great attention to detail. Then I would start to obsess about all the wasted bandwidth caused by meaningless whitespace in the HTML file, and I’d go back and forth in circles between compressed HTML and nicely laid out HTML, pausing only to shave.
I could spend the rest of my life perfecting the HTML behind every page on all of our sites, or I could do something that might actually benefit someone.
Perfectionism is a very dangerous quality in business and in life, because by being perfectionist about one thing you are, by definition, neglecting another. The three days I spent ensuring that all icons in CityDesk 3.0 are displayed with perfect alpha-blended effects came at the price of having a web site where the descender of the “g” is not a hyperlink. And both are at the price of working on my next book, or writing another article for Joel on Software, or making CityDesk publish really big sites faster.
If you’re noticing a recurring theme, it’s that I never like to talk about whether or not to do X. The question should never be “X, yes or no?” As long as you have limited time and resources, you always have to look at the cost and the benefit of X. The questions should be “Is X worth the time?” or “Will X or Y have a greater return on investment?”
Great Minds Think Alike
or, you can take the boy out of Microsoft but you can’t take Microsoft out of the boy
Raymond Chen: “In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.” (c.f. Joel on Exceptions)
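Chen’s point fits in a few lines of code. Here’s a quick sketch (the function names are mine, not from his post) of why the unhandled error is visible in one model and invisible in the other:

```python
# A sketch of the two models Raymond Chen is contrasting.

def parse_int_errcode(s):
    """Error-code style: returns (ok, value); the caller must check ok."""
    try:
        return True, int(s)
    except ValueError:
        return False, 0

def parse_int_throwing(s):
    """Exception style: returns the value or raises ValueError."""
    return int(s)

# Error-code model: failing to handle the error is visible at the call
# site. This caller throws `ok` away, and a reviewer sees that at a glance.
_, n = parse_int_errcode("not a number")

# Exception model: this call looks the same whether or not anything up the
# stack handles ValueError; the handling (or lack of it) is invisible here.
try:
    m = parse_int_throwing("42")
except ValueError:
    m = 0
```

In the first model the bug is local and greppable; in the second you have to trace every caller to know whether anyone catches the exception.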
Larry Osterman: “I’m not saying that metrics are bad. They’re not. But basing people’s annual performance reviews on those metrics is a recipe for disaster.” (c.f. Joel on Measurement, Joel on Incentive Pay, Why FogBugz isn’t a crutch for HR, etc.)
By the way, have you noticed how everyone at Microsoft is a blogger now? Dave Winer has managed to successfully and almost single-handedly pull off the most incredible Fire and Motion coup in the history of the software industry. His endless evangelism of blogging now has every Microsoft employee spending more time working on their blogs than working on software development or even picking out polo shirts. Brilliant! And that fifth column thing with Scoble — there are no words! Bravo!
The Best Thing on Television, Ever
We just finished watching Season 1 of the BBC television series The Office on DVD during our lunch breaks at Fog Creek. WOW! Incredibly funny, incredibly touching, and supernaturally realistic. But now I’m paranoid when nobody in the office laughs at my jokes. I’m an entertainer, first, really, then a boss. Also I’ll have to cut down on the army stories.
Hint to Americans: turn on the English subtitles and you’ll catch twice as many jokes.
News
Dogfood
The term “eating your own dogfood,” in the software industry, means using the code you’re developing for your own daily needs: basically, being a user as well as a developer, so the user empathy that is the hallmark of good software comes automatically.
This site is produced in CityDesk, and about half of my time is spent writing code for CityDesk, so it’s been my policy to edit Joel on Software using the current, debugging version of CityDesk running inside the debugger. The neat part is that if I’m writing a long essay for the site and the application crashes, I have a chance to debug it right there and then; in fact, if I haven’t saved in a while, I must debug it right there and then, because otherwise I won’t be able to save my work.
Anyway, for the last couple of weeks, the development version of CityDesk has been using a new, smaller database schema (it’s mostly the same as the old schema but with some redundancies removed to make it better normalized) and the truth is I was a little bit scared to upgrade the Joel on Software database so I could publish. But dogfood we must eat, so here you go.
Interviews
Eric Lippert writes: “Dev candidates: if you’ve done any reading at all, you know that most of your interviews will involve writing some code on a whiteboard. A word of advice: writing code on whiteboards is HARD. Practice!” Good advice. I’m wondering if we should stop giving advice on interviewing… my guerrilla guide is so well read that my old trick of looking for people who write their }’s immediately after their {’s doesn’t work any more. Everyone who interviews at Fog Creek always carefully does that now, and then they sort of look at me to make sure I noticed that they wrote their } immediately after their {. Tip: That’s not what I’m looking for any more.
Memetics and Email Viruses
Gary Cornell and I had an interesting conversation about how email viruses are getting cleverer and better written. It reminded me of Richard Dawkins and Oliver Goodenough (Nature, September 1, 1994) who realized that chain letters were a great example of the evolution of memes. Evolution requires:
- A genetic code, such as DNA
- Replication
- Mutation
- Natural Selection
In a chain letter, you have
- The text of the letter itself
- The letter requires you to copy it and send it to other people
- When the letter is copied by hand, everyone introduces slight changes, either intentionally, because they think their version is better, or unintentionally, by mistake.
- The letters that work best at convincing people to copy them get copied the most and thus those memes survive the longest.
The same thing happens with email viruses. The ones with the best fake letters, i.e., the ones that persuade the most people to open the attachment, will survive and reproduce. The ones that aren’t very convincing die out. The next stage, which may have already happened, would be for the virus to modify a couple of words at random in the text of the message before sending it out. Instead of blasting the same message to a million people, blast groups of 100 people the same message with a different random change for each group. Eventually random mutation will improve the ability of these messages to survive and reproduce by fooling people into opening the attachment.
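The mutation-plus-selection loop described above is easy to simulate. A toy Python sketch (every name and number here is invented for illustration; real fitness would be how often humans actually open the attachment):

```python
import random

random.seed(4)

# Toy model of the selection pressure on chain letters and email viruses.
# Each "letter" is reduced to one number, its convincingness. Letters
# reproduce in proportion to convincingness, and every copy mutates a little.

def evolve(population, generations=50):
    for _ in range(generations):
        # Natural selection: convincing letters get forwarded more often.
        parents = random.choices(population, weights=population,
                                 k=len(population))
        # Replication with mutation: each copy differs from its parent.
        population = [max(0.01, p + random.gauss(0, 0.05)) for p in parents]
    return population

start = [0.1] * 100                    # 100 copies of a mediocre letter
end = evolve(start)
print(round(sum(end) / len(end), 2))   # mean convincingness drifts upward
```

All four ingredients from the list are there: a code (the number), replication (`random.choices`), mutation (the Gaussian noise), and selection (the reproduction weights).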
I’ve said it before, and I’ll say it again … nobody knows more about marketing in the shrinkwrapped software industry than Rick Chapman, and the new fourth edition of his book is the only place you can go to find a complete encyclopedia of just about everything there is to know about marketing software. There’s really nothing else that compares, and if you’re trying to market software you really have to read this book.
Over the years and the editions Rick has added an awful lot of material, and a lot of it is starting to show its age. In particular a lot of the discussion of channel marketing may not be relevant: thanks to the Internet, plenty of software companies today are doing fine using 100% direct-to-customer without any traditional channel whatsoever. Don’t let that stop you from buying the book; it has plenty of useful data on Internet and direct sales, too. Before you try to sell software, you have to at least sit down and read this book cover to cover, if only to gain the humility to realize how much is involved in marketing.
Thanks to everyone who came to the open house last night. If you have pictures, send me a link!
We had an interesting conversation about the impedance mismatch between contemporary high-level programming languages (Java, C#, Python, VB) and relational databases. Since a huge percentage of code requires access to databases, the glue (a.k.a. the connecticazoint) between the RDBMS layer and the application code is very important, yet virtually every modern programming language assumes that RDBMS access is something that can be left to libraries. In other words, language designers never bother to put database integration features into their languages. As a tiny example of this, the syntax for “where” clauses is never identical to the syntax for “if” statements. And don’t get me started about data type mismatches: just the fact that columns of any type might be “null” leads to an incompatibility between almost every native data type and the database data types.
The trouble with this is that the libraries (think ADO, DAO, ODBC, JDBC, embedded SQL, and a thousand others) need to be general purpose to be reusable, and yet what you really want is a mapping between a native data structure and a table row or query result row. Inevitably, you have to hand roll this mapping and wire it up manually, which is error prone and frustrating.
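The hand-rolled glue looks something like this sketch, using Python’s built-in sqlite3 module (the schema and class are made up for illustration). Notice that the column list appears three times (in the schema, in the query, and in the mapping function), with nothing but discipline keeping them in sync, and that the nullable column forces an Optional field:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    id: int
    name: str
    email: Optional[str]   # the column is nullable, so the field must be too

def row_to_customer(row):
    # The error-prone part: positions here must match the SELECT below.
    return Customer(id=row[0], name=row[1], email=row[2])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Bob', NULL)")

customers = [row_to_customer(r)
             for r in conn.execute("SELECT id, name, email FROM customers")]
print(customers[0])   # Customer(id=1, name='Bob', email=None)
```

Add a column, and every one of those three places needs a matching edit; the compiler or interpreter catches none of the mismatches.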
I think this is a fatal flaw in language design, akin to the bad decision by the designers of C++ that it was not necessary to support a native string type. “Let a thousand CString/TString/String/string<char> types flourish,” they said, and then spent more than a decade adding new features to the language until it was marginally, but not completely, possible to implement a non-awful string class. And now we have a thousand string types (most large C++ bodies of code I’ve seen use three or four) and a bunch of really good books by Scott Meyers about why your personal hand-rolled string class is inadequate. It’s about time that a language designer admitted that RDBMS access is intrinsic to modern application implementation and supported it in a first-class way syntactically.
Now for all the disclaimers to prevent “but what about” emails. (1) In functional languages like Lisp the syntax layer is so light that you could probably implement very good RDBMS shims in ways that feel almost native. Especially if you have lazy evaluation of function parameters, it’s easy to see how you could build a “where” clause generator that used the same syntax as your “if” predicates. (2) Access Basic, later Access VBA, had a couple of features to make database access slicker, specifically the [exp] syntax and the rs!field syntax, but it’s really only 10%. There are probably other niche languages or languages by RDBMS vendors that do a nice job. (3) Attempts to solve this problem in the past have fallen into two broad groups: the people who want to make the embedded SQL programming languages better (PL/SQL, TSQL, et al.), and the people who want to persist objects magically using RDBMS backends (OODBMSes and object persistence libraries). Neither one fully bridges the gap: I don’t know of anyone who builds user interfaces in SQL or its derivatives, and the object persistence implementations I’ve seen never have a particularly good implementation of SELECT.
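Point (1) can be sketched in any language with first-class functions. Here is a toy Python version (all names mine) where one predicate serves as both the “if” test and the “where” clause; a real shim would compile the predicate down to SQL instead of filtering in memory:

```python
# A toy "database" of dicts, queried with the same predicate syntax
# that ordinary "if" statements use.

def where(rows, predicate):
    """Filter rows with an ordinary function, the same one 'if' would use."""
    return [r for r in rows if predicate(r)]

people = [{"name": "Ann", "age": 34}, {"name": "Bob", "age": 19}]

is_adult = lambda r: r["age"] >= 21   # one predicate...

filtered = where(people, is_adult)    # ...used as a "where" clause

for p in people:
    if is_adult(p):                   # ...and as an "if" test, unchanged
        pass

print([p["name"] for p in filtered])  # ['Ann']
```

The point is only that the two syntaxes coincide when predicates are first-class; actually pushing the predicate down to the RDBMS is the hard part the disclaimers are about.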
Save the date: Fog Creek Software will host an open house at our new office on March 24th, 2004, at 6:00 PM.
535 8th Ave. (bet. 36th and 37th), 18th Floor, New York
Top Twelve Tips for Running a Beta Test
Here are a few tips for running a beta test of a software product intended for large audiences — what I call “shrinkwrap.” These apply to commercial or open source projects; I don’t care whether you get paid in cash, eyeballs, or peer recognition, but I’m focused on products for lots of users, not internal IT projects.
- Open betas don’t work. You either get too many testers (think Netscape), in which case you can’t get good data from any of them, or too few reports from the testers you do have.
- The best way to get a beta tester to send you feedback is to appeal to their psychological need to be consistent. You need to get them to say that they will send you feedback, or, even better, apply to be in the beta testing program. Once they have taken some positive action such as filling out an application and checking the box that says “I agree to send feedback and bug reports promptly,” many more people will do so in order to be consistent.
- Don’t think you can get through a full beta cycle in less than eight to ten weeks. I’ve tried; lord help me, it just can’t be done.
- Don’t expect to release new builds to beta testers more than once every two weeks. I’ve tried; lord help me, it just can’t be done.
- Don’t plan a beta with fewer than four releases. I haven’t tried that because it was so obviously not going to work!
- If you add a feature, even a small one, during the beta process, the clock goes back to the beginning of the eight weeks and you need another 3-4 releases. One of the biggest mistakes I ever made was adding some whitespace-preserving code to CityDesk 2.0 towards the end of the beta cycle, which had some, shall we say, unexpected side effects that a longer beta would have flushed out.
- Even if you have an application process, only about one in five people will send you feedback anyway.
- We have a policy of giving a free copy of the software to anyone who sends any feedback, positive, negative, whatever. But people who don’t send us anything don’t get a free copy at the end of the beta.
- The minimum number of serious testers you need (i.e., people who send you three page summaries of their experience) is probably about 100. If you’re a one-person shop, that’s all the feedback you can handle. If you have a team of testers or beta managers, try to get 100 serious testers for every employee that is available to handle feedback.
- Even if you have an application process, only one out of five testers is really going to try the product and send you feedback. So, for example, if you have a QA department with 3 testers, you should approve 1500 beta applications to get 300 serious testers. Fewer than this and you won’t hear everything. More than this and you’ll be deluged with repeated feedback.
- Most beta testers will try out the program when they first get it, and then lose interest. They are not going to be interested in retesting it every time you drop them another build unless they really start using the program every day, which is unlikely for most people. Therefore, stagger the releases. Split your beta population into four groups, and with each new release add another group that gets the software, so there are fresh beta testers for each milestone.
- Don’t confuse a technical beta with a marketing beta. I’ve been talking about technical betas, here, in which the goal is to find bugs and get last-minute feedback. Marketing betas are prerelease versions of the software given to the press, to big customers, and to the guy who is going to write the Dummies book that has to appear on the same day as the product. With marketing betas you don’t expect to get feedback (although the people who write the books are likely to give you copious feedback no matter what you do, and if you ignore it, it will be cut and pasted into their book).
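The staggered-release bookkeeping from the retesting tip above can be sketched in a few lines (a hypothetical helper, assuming four groups and a 200-tester beta):

```python
# Split the beta population into four groups; each new build goes to one
# more group than the last, so every milestone reaches some fresh testers.

def groups_for_build(testers, build, num_groups=4):
    """Return the testers who should receive build `build` (1-based)."""
    size = len(testers) // num_groups
    groups = [testers[i * size:(i + 1) * size] for i in range(num_groups - 1)]
    groups.append(testers[(num_groups - 1) * size:])  # remainder to last group
    active = groups[:min(build, num_groups)]
    return [t for g in active for t in g]

testers = [f"tester{i}" for i in range(200)]
print(len(groups_for_build(testers, 1)))   # 50: first group only
print(len(groups_for_build(testers, 4)))   # 200: everyone by build 4
```

By build four everyone has the software, but builds one through three each landed on a group seeing the product for the first time.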

