2002/10/28

Countdown to FogBUGZ 3.0. We’re supposed to ship on November 4th, which is less than a week away. All that’s left is the new marketing stuff for the website, a couple of small bug fixes, and some improvements to the online store so we can sell support contracts and upgrades online.

2002/10/23

Where are they now? A couple of years ago, I posted a list of the big Silicon Alley companies and how their stocks had plummeted. Silicon Alley was the term used for New York City’s ephemeral dot-com industry, which lasted for maybe four years before collapsing under the weight of its own overhype.

I thought it might be interesting to revisit those companies and see what has happened since.

Company                 2 yrs Ago ($)   Today ($)
Juno Online Services         2.69          4.16
TheGlobe.com                 0.55          0.06
iTurf                        1.09          0.00
Priceline                    6.78          1.85
Register.com                 7.88          3.52
IVillage                     3.00          0.87
24/7 Media                   6.00          0.30
Razorfish                    5.47          0.05
Agency.com                  11.94          3.35
TheStreet.com                3.50          2.22
EarthWeb                     9.50          0.32
Doubleclick                 24.69          6.05
about.com                   22.88          4.26

Footnotes: The “today” column represents how much money you would have if you bought a share back then and held onto it. Many corporate entities have changed. Juno is now a part of United Online. TheGlobe still owns two tiny gaming web sites. iTurf has vanished, but your share might have been worth a few pennies when the shareholders sued the management of the typographically challenged company dELiAs*cOm, which merged with it and then shut it down. The only reason Agency.com is worth so much is that your share was bought for cash by a big advertising agency before it could go any lower. EarthWeb, which had started out as a Big List of Java Applets in the days when something like that was amusing, changed its name to Dice and dumped the old EarthWeb web site onto retro ubercontent superportal internet.com. Now they’re just a job-listing board. And about.com was acquired by a big Old Media company, which hasn’t done so well, either. Thankfully I personally own only stock index funds, no individual stocks in public companies.

Personally, the most interesting thing is that my old company, Juno Online Services, has actually been a great investment during a horrible two years. Probably because the old management has been completely replaced. My leaving probably helped a bit; in fact, shortly after I left, the stock surged momentarily to about $80 in celebration.

2002/10/21

VNC: Yes, I know about TightVNC. It’s faster than VNC but nowhere near as fast and usable as Terminal Services.

Other people peeked in their server logs and saw .NET CLR penetration at around 6–8%. Some very consumer-oriented sites are seeing even less (2–3%).

RSS: after much interesting discussion, the conclusion is that the best way to reduce this bandwidth is already part of the HTTP spec and just needs to be implemented by the aggregators. Kudos to Dave Winer, who implemented it this weekend in Radio UserLand. Hopefully other aggregators will soon get with the program — specifically NetNewsWire and FeedReader, which are the most popular and thus the biggest problem. Presumably their customers don’t want to waste bandwidth, either, so it’s in their best interests. (And I don’t want to have to hide the feed from bandwidth-hogging user agents.) I shall now shut up about this topic because I know it’s excruciatingly boring to the rest of the world.

Update 10/22: NetNewsWire fixed it. Good work Brent!
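For anyone wondering what the fix looks like on the aggregator side, here is a minimal sketch, assuming the HTTP mechanism in question is a conditional GET with If-Modified-Since and a 304 Not Modified response (the Python code and function name are mine, not from any particular aggregator):

```python
# Minimal sketch of a conditional GET: remember the server's Last-Modified
# value and send it back next time, so an unchanged feed costs almost nothing.
import urllib.request
import urllib.error

def fetch_feed(url, last_modified=None):
    """Fetch an RSS feed, skipping the download if it hasn't changed."""
    request = urllib.request.Request(url)
    if last_modified:
        # Ask the server to send the body only if it changed since last time.
        request.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # Nothing new; the server sent no body, so no bandwidth was wasted.
            return None, last_modified
        raise
```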

2002/10/19

Stuff

Here’s some stuff I’m thinking about, in no particular order.

VNC vs. Windows Terminal Services

We regularly use two pieces of software to access Windows computers remotely. One is Windows Terminal Services from Microsoft (now called “Remote Desktop” in Windows XP, and also marketed by Citrix); the other is WinVNC, an open source project that originated at AT&T’s lab in Cambridge, England.

If you have a choice in the matter, Windows Terminal Services is much, much better, for three reasons. First, it is incredibly fast, because the wire protocol is closer to the Windows GDI layer. You can remotely access a Windows computer over a modem quite comfortably, which surprised the heck out of me. By comparison, VNC’s network protocol basically consists of transmitting blocks of changed pixels across the wire. So, for example, when a string is written to the screen, Terminal Services won’t transmit much more than the string itself, letting the client find the font and do all the rendering; VNC has to first detect that a part of the screen has changed and then transmit a compressed bitmap across the wire. And smooth scrolling works perfectly and feels smooth when you use Terminal Services, even over a slow connection. VNC just messes up the screen.
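To make the difference concrete, here is a toy back-of-the-envelope calculation. It is not a model of either actual wire protocol (the region size, pixel depth, and header size are made-up guesses); it just shows why sending drawing operations beats sending changed pixels:

```python
# Toy comparison, not either real protocol: a GDI-style protocol can send the
# string and let the client render it, while a pixel-based protocol must send
# the changed screen region as a bitmap. All sizes below are made-up guesses.

text = "Hello, remote desktop"

# GDI-style: roughly the string plus a small header describing font/position.
gdi_style_bytes = len(text.encode("utf-8")) + 16   # 16-byte header is a guess

# Pixel-style: suppose the text dirties a 200x16 region at 16 bits per pixel,
# transmitted before any compression.
width, height, bytes_per_pixel = 200, 16, 2
pixel_style_bytes = width * height * bytes_per_pixel

print(gdi_style_bytes)    # 37 bytes
print(pixel_style_bytes)  # 6400 bytes, roughly 170x more before compression
```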

Empirically, it’s a lot faster and that makes all the difference in the quality of the experience.

Second big reason. VNC, for some reason, does not transmit Shift+Arrow keys; the Shift gets lost. OK, it’s a little bug, but it made me notice that I frequently correct typing errors with Shift+Ctrl+Left (select the previous word) and then type over the selection. Since VNC doesn’t transmit that keystroke, it is incredibly painful for me to type long text messages over VNC.

Third big reason. No matter how much I play with the settings, there are too many cases where VNC forgets to transmit a particular “damaged” region of the screen to the client. The two most common cases I’ve found are when you right-click to get a popup menu (the menu appears on the server but is not transmitted to the client, so you think something is wrong) and when you scroll in an application that uses smooth scrolling, which messes up the whole screen. It reminds me of Unix circa 1987, when your friends would write messages on your tty (old-school IM) and you had to hit Ctrl+L to get Emacs to clean up the screen.

The biggest disadvantage of Windows Terminal Services is that the server has to be a Windows machine. (There are all kinds of clients.) This is not a big deal for me; our Linux machines are servers and ssh is fine. If I really had to do GUI stuff with Linux servers I would just use Exceed or something, but I haven’t needed that for years.

AMD Hammer vs. Intel Itanium

Intel’s throw-it-all-away-and-start-over Itanium CPU project is turning out to be the Ishtar of the CPU world: way over budget, years late, and terrible. I have heard that running in 32-bit “backwards compatible” mode (required for 99.999% of the software that exists) it is about as fast as a Pentium II/366. Maybe one tenth the performance of the fastest Pentium 4. Meanwhile, AMD couldn’t afford to start from scratch, so their new Hammer CPU should be able to run 32-bit code just as fast as a 32-bit processor. Beginning to see a theme? AMD is not really shipping yet, so it’s not over until it’s over, but I’m betting it will shake out to have been a major mistake that Intel designed a new chip from scratch rather than extending the Pentium 4 with 64-bit features. Maybe some of my readers who follow the CPU world more closely can weigh in.

Meanwhile, back in Throw It All Away Land…

It may turn out that one of the biggest benefits to come out of the Mozilla project is XUL, which seems to be one of the first solid frameworks for true GUI portability (WORA: write once, run anywhere). Basically, you design your interface in XML, glue the events together with some JavaScript, and call binary XPCOM classes (virtually the same as Microsoft COM classes) when you need to do something fast in C++ that doesn’t need a UI. And UIs never need to be that fast, so this is a good division of labor.

Theoretically, you get cross-platform Nirvana. And thanks to a lot of hard work, all the little platform-specific touches (like Alt+Space N to minimize a window) are finally right; getting these details right was one of the biggest weaknesses of previous WORA efforts like AWT and Swing. If I had to start developing a new commercial app I would seriously look at XUL.

But Joel, You Said Netscape Was Stupid…

They were. They shouldn’t have rewritten from scratch. They should have done this all in steps. Big chunky steps, fine, but steps. For example, they could have rebuilt the rendering engine — without touching any of the other stuff — as a first step. Then ship. Was there anything wrong with the networking library? I don’t think there was. Even if there was, OK. So, fix it. One step at a time. Then ship. Then implement XUL and start converting some of the dialogs to XUL as another step. Then ship. Then port the existing UI — port, not rewrite — to XUL. Rather than argue about what the dialogs should have in them, you just recreate the existing dialogs exactly as is, only this time in XUL. Get that working. Ship. Then decide if it’s worth changing the dialog. Is it? OK, ship again. Yes, getting from a messy architecture to a nice architecture costs time, but it doesn’t cost as much as starting from scratch did. Over the period of time between Netscape 4 and Mozilla 1 they could have had three releases and still be where they are now. No, way ahead. And we’d have a real browser ecology instead of a monoculture.

Being virtually out of the market for 4 years was a catastrophe for Netscape’s browser market share and handed Microsoft a huge monopoly, and they don’t need another monopoly. (Some people think it is entirely Microsoft’s anticompetitive practices that got them 90% market share in the browser market. I just don’t buy that. People used IE instead of Netscape 4.0 over the last four years because they liked it better, not because they were tricked into it or because it was installed on their desktop by default. Give people some credit.)

None of this means that Mozilla is not a good work of engineering, and XUL may well be a real benefit to Apple and Linux, because application developers finally have a way to deliver to all three platforms for perhaps 110% of the cost of delivering to Windows alone.

Server Logs

I spent some time studying the Joel on Software server logs today. Here are some of the things I wanted to find out.

Do people have the .NET runtime? I’m trying to figure out when we can afford to port CityDesk to .NET without alienating all the download.com people who download CityDesk, try it, like it, and give me money.

Of regular Joel on Software readers, it seems like 32% have the .NET runtime. This seems really high to me compared to the population as a whole. We’re software developers. I really don’t know why anyone but a software developer would have the .NET runtime now; it doesn’t come with any popular software. So my data isn’t that useful. If someone has access to the server logs of a more mainstream website, tell me what kind of ratio you’re seeing.
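If you want to run the same check on your own logs, here is a rough sketch of the counting I have in mind, assuming a combined-format access log where the User-Agent is the last quoted field and the runtime shows up as a “.NET CLR” token (the file name and log format are assumptions about your setup, not a description of mine):

```python
# Rough sketch: estimate what fraction of distinct visitor IPs report the
# .NET runtime, assuming a combined-format access log where the user-agent
# string is the last double-quoted field.
import re

with_clr, total = set(), set()

with open("access.log") as log:
    for line in log:
        ip = line.split(" ", 1)[0]
        quoted = re.findall(r'"([^"]*)"', line)
        user_agent = quoted[-1] if quoted else ""
        total.add(ip)
        if ".NET CLR" in user_agent:
            with_clr.add(ip)

if total:
    print(f"{100 * len(with_clr) / len(total):.1f}% of IPs report the .NET CLR")
```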

Does anyone care about the RSS feed? About 12% of the IP addresses that came to Joel on Software yesterday came to get rss.xml. That’s more than I would have thought, but it was a slow news day; I would expect that when I actually post a long new article, I’d get the same number of RSS clients but way more normal web browser hits. I was also surprised to see that Ranchero NetNewsWire, created by former UserLand employee Brent Simmons, is more than twice as popular as his ex-boss’s product Radio (377 subscribers for NNW, 163 for Radio).

Most of the RSS subscribers are whacking me every hour, which is actually costing me cash money in excess bandwidth charges. How can I set it up so they only visit once a day? Is this an RSS option? I rarely post more than once or twice a day. Maybe I should change the RSS feed to just include headlines with links.

2002/10/18

Sneak preview: New in FogBUGZ 3.0. We’re still on schedule to ship the new version on November 4th. And no, we didn’t get rid of the cute orangutan.

Daniel Berlinger reviews 3.0. “If you aren’t already using a bug/feature database, and if you’d like to improve your customer service, project tracking, efficiency and more—purchase FogBugz.”

2002/10/17

Thanks to Mike Gunderloy for a nice review of my book: “I’m sure someone has written the thousand-page tome about usability, but I’ll take this one instead.”

Burningbird: “Why the blues, PHP, the other languages asked. All the languages that is but C, because all C ever said was ‘bite me’, being a rude language and hard to live with, but still respected because it was such a good worker.”

Herbert Meyer in the National Review: “Our intelligence services failed because their leaders and their top-level analysts just weren’t smart enough to figure things out early enough to make a difference. They had lots of energy and dedication, to be sure. What they didn’t have enough of, was brains.”

2002/10/16

I spent the day catching up on the Joel on Software translation effort.

There are new articles online in Danish, Dutch, French, German, Hungarian, Indonesian, Japanese, Korean, Portuguese (Brazilian and Iberian), and Romanian.

As my faithful volunteers have learned, it sometimes takes me a week to respond to their email; I usually let all the translation-related email pile up and then work through it once a week.

Right now there are a few articles stuck in the queue because they don’t have a copy editor — if you can edit in Farsi, Portuguese (Portugal), Russian, Spanish, or Turkish, please let me know!

2002/10/09

Ah! Danny Goodman has released the second edition of Dynamic HTML. It’s been a few years since the first edition, which is still the best reference on HTML despite being severely out of date. The new edition is 1400 packed pages that actually tell you what the web browsers that are actually in use actually do, which makes it invaluable. It has been brought up to date with all the latest browsers and the newest HTML specs. If you’re working with HTML in any way, shape, or form, this book is an absolute requirement.

2002/10/08

Feedback from my posting about FogBUGZ Setup fell into four categories.

“Why make Setup reversible? Instead you should collect all the information from the user and make all the changes in one batch at the end.” There are a couple of things to understand here. First of all, even if you do everything in one batch at the end, there’s always a possibility that some step in the middle of the batch will fail, and in that case a well-behaved setup program will back out the steps that were already done. There are well over 100 error messages in the string table for FogBUGZ Setup, so the number of things that can fail is not insignificant.

Second, it’s not nice to tell people about an error in their input three pages after they made the mistake. For example, early in the FogBUGZ setup process we prompt you to create an account for FogBUGZ to use:

[Screenshot: the FogBUGZ Setup account-creation page]

The account creation could fail for any of a myriad of reasons, none of which can be predicted before trying to create the account. For example, the password might not conform to the system password policy, and different national versions of Windows NT have different rules about accented letters in passwords (betcha didn’t know that!). It’s better to tell users about this problem right away, so they can correct their input, than to have a message come up later during the long install process and force them to back up and fix it. And even if you do force the user to back up and fix it, you still have to undo whatever work you did before the account creation failed; otherwise you’ve left their system in an indeterminate state.

In any case I need to write code to create the account and delete the account in case something later fails; I might as well call that code on this page of the wizard where I can display a useful error message.

And what are the kinds of things that need to be reversible? Well, in order to upgrade FogBUGZ without requiring a reboot (and we never, ever require a reboot), we have to shut down a couple of processes that might have been keeping FogBUGZ files pinned down, such as IIS (Microsoft’s web server). So part one of the batch is “Stop IIS.” Now if part two fails for some reason, it would be extremely rude to leave IIS not running. And anyway, it’s not like I don’t need to write the code for “Start IIS” for the end of the batch. So the code to roll back “Stop IIS” is already written. No big deal, I just need to call it at the right place.
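For the curious, the underlying pattern is nothing fancier than a list of steps where each step knows how to undo itself; here is a minimal sketch in Python (the real FogBUGZ Setup is C++/MFC, and the step names below are made up):

```python
# Minimal sketch of reversible setup steps: run each step in order; if one
# fails, undo the steps that already completed, most recent first.

class Step:
    def __init__(self, name, do, undo):
        self.name, self.do, self.undo = name, do, undo

def run_batch(steps):
    completed = []
    for step in steps:
        try:
            step.do()
            completed.append(step)
        except Exception as err:
            print(f"{step.name} failed: {err}; rolling back")
            for done in reversed(completed):
                done.undo()   # e.g. the undo of "Stop IIS" is "Start IIS"
            return False
    return True

def fail_copy():
    raise RuntimeError("disk full")   # simulate a failure in the middle

steps = [
    Step("Stop IIS", lambda: print("stopping IIS"), lambda: print("starting IIS")),
    Step("Copy files", fail_copy, lambda: print("restoring old files")),
]
run_batch(steps)   # stops IIS, fails on the copy, and automatically restarts IIS
```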

I think one reason people believe you should “gather all the info and then do all the work” is that with very large, very slow installation programs, this is a polite way to waste less of the user’s time. Indeed, even FogBUGZ Setup does 95% of its work at the very end. But the “create account” operation is so fast that this principle simply doesn’t apply here. Even our 95%-of-the-work phase takes well under a minute, most of which is spent waiting for IIS to stop and start.

“Why did you use VC++/MFC? Surely an advanced intelligence such as yourself has admitted by now that Delphi is more productive.” First of all, leave your programming-language religious fanaticism at the Usenet door. Somehow I managed to figure out in high school that language advocacy and religious arguments are unbelievably boring.

Second, even if Delphi were more productive, the only pertinent question, since I am writing the code, is what is more productive for Joel Spolsky. And I don’t know Delphi at all, but I know Win32, MFC, and VC++ really, really well. So while I might not outcode a good Delphi programmer, I would definitely outcode a completely inexperienced Delphi programmer (which is me), certainly over a short four-week project. Third, many of the things I needed to do in this setup program are things like “grant the Logon as Service privilege to an account.” This is rare enough that the only way to find out how to do it is to search the Microsoft knowledge base and the web in general. When you search the web for how to do fancy things with Windows NT, what you find is about 75% C code, maybe 20% VB code, and 5% everything else. Yes, I know, I could translate the C code into Delphi (assuming I were a sophisticated Delphi programmer, not a completely inexperienced one), but that costs as much productivity as I would supposedly gain from your supposedly more productive programming language. And fourth, I already had about 30% of the code I needed for Setup in MFC form, from FogBUGZ 2.0 Setup and from a library I’ve been using for years to make wizards.

“Why make Setup at all? You already have your customers’ money. Good Setup programs don’t increase sales.” This was actually the smartest question and made me think the hardest. I came up with three reasons:

  1. Decreased tech support cost. This setup program will pay for itself over the life of the code.
  2. Delight my customers. When I’m trying to get them to upgrade to 4.0, I want them to remember how painless the 3.0 installation was, so they won’t hesitate because they are afraid to upgrade. I’m still using an old version of SpamAssassin that is becoming increasingly ineffective, even though I know the new version is much better, because I just can’t bear the thought of another morning wasted. The very memory of the first SpamAssassin installation — all the little SSH windows, some su’ed, trying to scroll through man pages and Google Groups, accidentally hitting Ctrl+Z in Emacs to undo and having it suspend, trying to guess why we couldn’t get the MTA to run procmail — sorry, it’s too much. If SpamAssassin were making money off of upgrades, they would have lost my business because they don’t have a setup program.
  3. Win reviews. Software reviewers always cast about for some kind of standardized way to rate software, even when they are comparing apples and oranges and planets and 17th-century philosophers. They always have a meaningless list of things to review that can be applied to PC games, mainframe databases, web site auction software, and DNA sequencing software. And Setup is always on their list. A single flaw in setup is guaranteed to be mentioned in every review because every reviewer will see it and say “Aha!”

“How can we make WISE better?” Kudos to the product manager of WISE Installation System for calling me up and listening to my litany of all the reasons his product wasn’t adequate for typical IIS/ASP/SQL applications.