Anil Dash is the new CEO of Fog Creek Software

I have some huge news to share with you.

For the first time since Fog Creek Software was founded more than sixteen years ago, we have a new CEO, Anil Dash.

Who?

I’ve been friends with Anil since the earliest days of Fog Creek Software. He’s a pioneer of blogging (beat me to it by about five months), and has been a tech industry entrepreneur, activist, and writer for almost two decades.

You can read Anil’s full bio here. For those of you who want the top-level bullet points:

  • Blogging pioneer
  • Helped start Six Apart, the company behind Movable Type and TypePad
  • Founder of Expert Labs, ThinkUp, and MakerBase (which is joining Fog Creek)
  • Advisor and board member to a whole slew of companies and non-profits
  • Lives in New York City, with his wife Alaina Browne and their son Malcolm.

I’ve gotten to know Anil much better since he joined the Stack Overflow Board of Directors in 2011. He’s a remarkable creative thinker who really, really understands developers: how the work developers do fits into society, and thus how we can make technology more humane and ethical from our unique position of making things for software developers. There is no single person I would trust more to help Fog Creek figure out how to make something big, important, and valuable.

Fog Creek is a weird company, with unique values that you don’t find in a lot of other companies. That’s why we’re so successful, and that’s why we love working here. Some of the weird stuff we do is non-negotiable. We would never dream of bringing in just any competent person from outside the company, let alone giving them the CEO role, if we weren’t convinced that they were 100% fanatical about and excited by Fog Creek Software’s unique operating system. We’ve been friends with Anil for so long that we’re confident the combination of his talents and worldview with our quirky operating system will be stellar.

In short, we need Anil to help support us with ideas and leadership for HyperDev (now renamed Gomix) and any future products we come up with, and we need his soapbox and industry connections to continue to keep Fog Creek Software relevant. Thus I think the perfect position for him is as CEO of Fog Creek Software.

A typical startup is built around a single product, and some theory that people will pay money for that product. Eventually that theory becomes false, and the company goes away. But Fog Creek is different. We are simply a place where great people come together to build great things. People come here because of the other people who are here. And that makes the company fundamentally much stronger and longer lasting. We build new products every year, some of which work, and some of which don’t; we can spin off other companies; the individuals who work here can change; but as long as we remain dedicated to being a place where great people come together to build great things, we’re going to remain a highly respected and successful company for a long, long time.

What are you doing, Joel?

I’m the full-time CEO of Stack Overflow, which just hit 300 employees and really takes all my time now.

Where’s Michael Pryor?

He’s the full-time CEO of Trello, which is about to hit 100 employees and takes all of his time.

So, what’s going on at Fog Creek?

Fog Creek is focused on two things:

  • Fog Creek’s developer tool, FogBugz, is still going strong. We have a dedicated development team continuing to work on it and are still regularly releasing new features and enhancements, especially in the area of agile development.
  • Fog Creek’s newest project, Gomix (formerly “HyperDev”) is relaunching. This is a developer playground for building full-stack web-apps fast.

Anil as CEO will be assisted by COO Jordan Harris, and Michael and I are still heavily involved but now at the board level.

Introducing HyperDev

One more thing…

It’s been a while since we launched a whole new product at Fog Creek Software (the last one was Trello, and that’s doing pretty well). Today we’re announcing the public beta of HyperDev, a developer playground for building full-stack web-apps fast.

HyperDev is going to be the fastest way to bang out code and get it running on the internet. We want to eliminate 100% of the complicated administrative details around getting code up and running on a website. The best way to explain that is with a little tour.

Step one. You go to hyperdev.com.

Boom. Your new website is already running. You have your own private virtual machine (well, really it’s a container, but you don’t have to care about that or know what that means) running on the internet at its own custom URL. You can already give that URL to people, and they can already go there and see the simple code we started you out with.

All that happened just because you went to hyperdev.com.

Notice what you DIDN’T do.

  • You didn’t make an account.
  • You didn’t use Git. Or any version control, really.
  • You didn’t deal with name servers.
  • You didn’t sign up with a hosting provider.
  • You didn’t provision a server.
  • You didn’t install an operating system or a LAMP stack or Node or anything.
  • You didn’t configure the server.
  • You didn’t figure out how to integrate and deploy your code.

You just went to hyperdev.com. Try it now!

What do you see in your browser?

Well, you’re seeing a basic IDE. There’s a little button that says SHOW and when you click on that, another browser window opens up showing you your website as it appears to the world. Notice that we invented a unique name for you.

Over there in the IDE, in the bottom left, you see some client side files. One of them is called index.html. You know what to do, right? Click on index.html and make a couple of changes to the text.

Now here’s something that is already a little bit magic… As you type changes into the IDE, without saving, those changes are deploying to your new web server and we’re refreshing the web browser for you, so those changes are appearing almost instantly, both in your browser and for anyone else on the internet visiting your URL.

Again, notice what you DIDN’T do:

  • You didn’t hit a “save” button.
  • You didn’t commit to Git.
  • You didn’t push.
  • You didn’t run a deployment script.
  • You didn’t restart the web server.
  • You didn’t refresh the page on your web browser.

You just typed some changes and BOOM they appeared.

OK, so far so good. That’s a little bit like jsFiddle or Stack Overflow snippets, right? NBD.

But let’s look around the IDE some more. In the top left, you see some server-side files. These contain actual code that actually runs on the actual (virtual) server that we’re running for you. It’s running Node. If you go into the server.js file, you see a bunch of JavaScript. Now change something there, and watch your window over on the right.

Magic again… the changes you are making to the server-side JavaScript code are already deployed, and they’re already showing up live in the web browser you’re pointing at your URL.

Literally every change you make is instantly saved, uploaded to the server, the server is restarted with the new code, and your browser is refreshed, all within half a second. So now your server-side code changes are instantly deployed, and once again, notice that you didn’t:

  • Save
  • Do Git incantations
  • Deploy
  • Buy and configure a continuous integration solution
  • Restart anything
  • Send any SIGHUPs

You just changed the code and it was already reflected on the live server.

Now you’re starting to get the idea of HyperDev. It’s just a SUPER FAST way to get running code up on the internet without dealing with any administrative headaches that are not related to your code.

Ok, now I think I know the next question you’re going to ask me.

“Wait a minute,” you’re going to ask. “If I’m not using Git, is this a single-developer solution?”

No. There’s an Invite button in the top left. You can use that to get a link to give your friends. When they go to that link, they’ll be editing, live, with you, in the same documents. It’s a magical kind of team programming where everything shows up instantly, like Trello, or Google Docs. Collaborating with a team of two or three or four people banging away on different parts of the code at the same time, without a source control system, is remarkably productive: you can dive in and help each other, or you can each work on different parts of the code.

“This doesn’t make sense. How is the code not permanently broken? You can’t just sync everyone’s changes continuously!”

You’d be surprised just how well it does work, for most small teams and most simple programming projects. Listen, this is not the future of all software development. Professional software development teams will continue to use professional, robust tools like Git and that’s great. But it’s surprising how just having continuous merging and reliable Undo solves the “version control” problem for all kinds of simple coding problems. And it really does create an insanely addictive form of collaboration that supercharges your team productivity.

“What if I literally type ‘DELETE FROM USERS’ on my way to typing ‘DELETE FROM USERS WHERE id=9283’? Do I lose all my user data?”

Erm… yes. Don’t do that. This doesn’t come up that often, to be honest, and we’re going to add the world’s simplest “branch” feature so that optionally you can have a “dev” and “live” branch, but for now, yeah, you’d be surprised at how well this works in practice even though in theory it sounds terrifying.

“Does it have to be JavaScript?”

Right now the server we gave you is running Node so today it has to be JavaScript. We’ll add other languages soon.

“What can I do with my server?”

Anything you can do in Node. You can add any package you want just by editing package.json. So literally any working JavaScript you want to cut and paste from Stack Overflow is going to work fine.
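For example, adding a dependency might look like this (a sketch; the package name and version here are just placeholders):

```json
{
  "name": "my-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.14.0"
  }
}
```

Save the file and, in keeping with the no-deploy-step theme, the package is available to require() from your server-side code.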

“Is my server always up?”

If you don’t use it for a while, we’ll put your server to sleep, but it will never take more than a few seconds to restart. But yes, for all intents and purposes, you can treat it like a reasonably reliable, 24/7 web server. This is still a beta, so don’t ask me how many 9’s. You can have all the 8’s you want.

“Why would I trust my website to you? What if you go out of business?”

There’s nothing special about the container we gave you; it’s a generic container running Node. There’s nothing special about the way we told you to write code; we do not give you special frameworks or libraries that will lock you in. Download your source code, host it anywhere, and you’re back in business.

“How are you going to make money off of this?”

Aaaaaah! Why do you care?!

But seriously, the current plan is to have a free version for public / open source code you don’t mind sharing with the world. If you want private code, much like private repos, there will eventually be paid plans, and we’ll have corporate and enterprise versions. For now it’s all just a beta so don’t worry too much about that!

“What is the point of this Joel?”

As developers we have fantastic sets of amazing tools for building, creating, managing, testing, and deploying our source code. They’re powerful and can do anything you might need. But they’re usually too complex and too complicated for very simple projects. Useful little bits of code never get written because you dread the administration of setting up a new dev environment, source code repo, and server. New programmers and students are overwhelmed by the complexity of distributed version control when they’re still learning to write a while loop. Apps that might solve real problems never get written because of the friction of getting started.

Our theory here is that HyperDev can remove all the barriers to getting started and building useful things, and more great things will get built.

“What now?”

Really? Just go to HyperDev and start playing!

Town Car Version Control

The team at Fog Creek is releasing a major new version of Kiln today. Kiln is a distributed version control system.

One of the biggest new features is Kiln Harmony, which lets you operate on Kiln repositories using either Git or Mercurial. So you can push changes to a Kiln repo using Git and then pull them using Mercurial. This means that you never have to decide whether you want to use Git or Mercurial. Religious war: averted.

But, I’m getting ahead of myself!

For those of you who have been living under a rock, the single biggest change in developers’ lives in the last decade (besides Stack Overflow, natch) is Distributed Version Control. DVCS is such an important improvement over the previous generation of centralized version control (Subversion, CVS, etc.) that it’s a required upgrade, even though it’s honestly a bit harder to use.

The popular DVCS options are Git and Mercurial. Both are open source. They are very, very similar in capabilities and operation; in fact, they are so similar that Kiln Harmony hides all the differences, so you can use any Git or Mercurial tool to work with any Kiln repository.

If Git and Mercurial are open source, why are people making money selling them?

The short answer is that the open source tools are kind of raw. They’re dune buggies. Powerful, yes, and sufficient for a college project, but as it turns out, people buy Cadillacs, not dune buggies, to drive around in, because they like to have windshield wipers, 14-way power adjustable seats, and a way to start the engine from twenty feet away. Just in case you live in a Hollywood movie and the ignition has been hooked up to a bomb.

Fog Creek (and others, notably GitHub) are making money selling version control by providing a whole bunch of features that make the overall code management experience easier and more useful. For example, we both provide professional, secure hosting, a web management and administration interface, and somebody you can call for help.

Where we differ is that Kiln is more focused on the corporate market, while GitHub was designed for open source projects. I think of Kiln as the corporate Lincoln Town Car, while GitHub is kind of a VW Minibus. Both are eminently better choices than using raw Git.

So, specifically, Kiln gives you corporate things like:

  • code reviews
  • access control and permissions
  • fast code search
  • a news feed to follow code you care about

GitHub gives you things that match the sociology of open source projects:

  • public home pages
  • a social network, with profiles
  • fork and pull workflow

Since internal corporate projects have a very different sociology than open source projects, Kiln is very different from GitHub. On internal projects, almost all the code that gets developed is eventually used, although it needs to be reviewed first, so Kiln assumes that everything you do is most likely going to end up in the main code base, and we have a slick code review system.

On open source projects, contributions can come from volunteers all over the Internet, many of whom are happy to fork the code for their own needs. So GitHub provides a social network, emphasizes the ease of forking someone else’s code (something you’re unlikely to do in a closed corporate environment), and has a thing called a pull request that matches the way people tend to collaborate on open source projects without advance coordination.

ANYWAY, back to the new version of Kiln.

When Tyler and Ben built Kiln 1.0, they built it on Mercurial. Why? Well, Mercurial had pretty much all the same concepts as Git, but Git was historically unfriendly to Windows, which many of our corporate clients use. We also thought that the Mercurial command line (hg) was a bit closer to Subversion (svn), which a lot of programmers were already used to.

So, long story short, we decided Mercurial was about 1% better than Git and that’s the way we went. We didn’t want to start a holy war, and we liked Git, but we just had a feeling that all else being equal, Mercurial was marginally better than Git.

We still think that, but in the years since Kiln first shipped, GitHub has taken the world by storm, creating an ecosystem around Git that more than makes up for its minor failings. Today Git is without a doubt more popular. So we knew we needed to add Git to Kiln.

We could have done it the lazy way: support both kinds of repositories and make you choose which one to use. Maybe add some nice conversion tools.

But we are not lazy. We decided to do it the awesome way.

We decided that the awesome way would be to make Kiln fully bilingual. It stores every repo in both formats. It automatically converts everything back and forth, always. The translation is 1:1, reversible, and round-trippable. Whatever you do to a Kiln repository using Git will be immediately visible to Mercurial users and vice versa.

Every user of every Kiln repo can choose either Mercurial or Git, and everything always works.

You can push in Git, and pull in Mercurial. Or vice versa. Or both.
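Concretely, a session against a Kiln repo can mix the two tools freely. These are ordinary Git and Mercurial commands, nothing Kiln-specific, and the repository URL here is a placeholder:

```shell
# Alice pushes with Git (example.kilnhg.com is a placeholder URL)...
git clone https://example.kilnhg.com/Code/Repo myrepo-git
cd myrepo-git
echo "fix" >> app.js
git commit -am "Fix the thing"
git push

# ...and Bob pulls the very same changes with Mercurial.
hg clone https://example.kilnhg.com/Code/Repo myrepo-hg
cd myrepo-hg
hg pull -u    # Alice's Git commit arrives as a Mercurial changeset
```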

A team that uses Mercurial internally (and barely understands Git) can push their code to GitHub and interact with the GitHub community.

If your team likes Git but you prefer Mercurial yourself, you can use a different version control system than everybody else on your team and, honestly, they don’t even have to know.

If your team is using Mercurial today but you want to switch to Git, you can move over — one person at a time. If Joe in Accounting refuses to move, it doesn’t matter. He can keep using Mercurial.

Everything maps. Everything round-trips.

There are some other big improvements in the version of Kiln available today. Super-fast code search. SSH and IP-whitelisting for security. Project READMEs. A bunch of other improvements throughout the interface that will be a huge upgrade for anyone already using Kiln. If you’re interested, you can start a free trial online.

How Trello is different

Just a few months ago, we launched Trello, a super simple, web-based team coordination system. The feedback has been overwhelmingly positive and adoption has been very strong, even in its early, 1.0 state.

Trello is a new kind of development project for Fog Creek. It’s 100% hosted; there will never be an “installed software” version of Trello. That allowed us to modernize many aspects of our development process; I am happy to announce that there is absolutely no Visual Basic code involved in any part of Trello. What’s next, flying cars?

The biggest difference you’ll notice (compared to our previous products pitched solely at software developers) is that Trello is a totally horizontal product.

Horizontal means that it can be used by people from all walks of life. Word processors and web browsers are horizontal. The software your dentist uses to torture you with drills is vertical.

Vertical software is much easier to pull off and make money with, and it’s a good choice for your first startup. Here are two key reasons:

  • It’s easier to find customers. If you make dentist software, you know which conventions to go to and which magazines to advertise in. All you have to do is find dentists.
  • The margins are better. Your users are professionals at work and it makes sense for them to give you money if you can solve their problems.

Making a major horizontal product that’s useful in any walk of life is almost impossible to pull off. You can’t charge very much, because you’re competing with other horizontal products that can amortize their development costs across a huge number of users. It’s high risk, high reward: not suitable for a young bootstrapped startup, but not a bad idea for a second or third product from a mature and stable company like Fog Creek.

Forgive me if I now divert into telling you a quick story about my time spent on the Microsoft Excel team way back in 1991. (Yes, I know you were not born yet, but I assure you that computers had been invented. Just hop up here on my knee and shut up.)

Everybody thought of Excel as a financial modeling application. It was used for creating calculation models with formulas and stuff. You would put in your assumptions and then calculate things like “if interest rates go up by 0.00001% next year, what percentage of Las Vegas homeowners will plunge into bankruptcy?” For example.

Round about 1993 a couple of us went on customer visits to see how people were using Excel.

We found a fellow whose entire job consisted of maintaining the “number of injuries this week” spreadsheet for a large, highly-regulated utility.

Once a week, he opened an Excel spreadsheet listing ten facilities, containing the names of the facilities and the number 0, which indicated that there were 0 injuries that week. (They never had injuries.)

He typed the current date in the top of the spreadsheet, printed a copy, put it in a three-ring binder, and that was pretty much his whole, entire job. It was kind of sad. He took two lunch breaks a day. I would too, if that was my whole job.

Over the next two weeks we visited dozens of Excel customers, and did not see anyone using Excel to actually perform what you would call “calculations.” Almost all of them were using Excel because it was a convenient way to create a table.

(Irrelevant sidenote: the few customers we could find who were doing calculations were banks, devising explosive devices called “derivatives.” They used Excel to maximize the bankers’ bonuses on nine out of ten years, and to cause western civilization to nearly collapse every tenth year. Something about black swans. Probably just a floating point rounding error.)

What was I talking about? Oh yeah… most people just used Excel to make lists. Suddenly we understood why Lotus Improv, which was this fancy futuristic spreadsheet that was going to make Excel obsolete, had failed completely: because it was great at calculations, but terrible at creating tables, and everyone was using Excel for tables, not calculations.

Bing! A light went off in my head.

The great horizontal killer applications are actually just fancy data structures.

Spreadsheets are not just tools for doing “what-if” analysis. They provide a specific data structure: a table. Most Excel users never enter a formula. They use Excel when they need a table. The gridlines are the most important feature of Excel, not recalc.

Word processors are not just tools for writing books, reports, and letters. They provide a specific data structure: lines of text which automatically wrap and split into pages.

PowerPoint is not just a tool for making boring meetings. It provides a specific data structure: an array of full-screen images. 

Some people saw Trello and said, “oh, it’s Kanban boards. For developing software the agile way.” Yeah, it’s that, but it’s also for planning a wedding, for making a list of potential vacation spots to share with your family, for keeping track of applicants to open job positions, and for a billion other things. In fact Trello is for anything where you want to maintain a list of lists with a group of people.

There are millions of things that need that kind of data structure, and there hasn’t been a great “list-of-list” app before Trello. (There have been outliners, but outlines are, IMHO, one of the great dead ends in UI design: so appealing to programmers, yet so useless to civilians).
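The “list of lists” idea is simple enough to sketch as a data structure. This is purely illustrative, not Trello’s actual data model:

```javascript
// A board is just a named list of lists; each list holds cards.
// (Illustrative only -- not Trello's internal representation.)
function makeBoard(name) {
  return { name, lists: [] };
}

function addList(board, listName) {
  const list = { name: listName, cards: [] };
  board.lists.push(list);
  return list;
}

function addCard(list, title) {
  list.cards.push({ title });
}

// A wedding board has exactly the same shape as a sprint board:
const board = makeBoard('Wedding');
const todo = addList(board, 'To Do');
addCard(todo, 'Book the venue');
addCard(todo, 'Taste cakes');
```

Swap the strings and the same structure holds a hiring pipeline, a vacation plan, or a software backlog.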

Once you get into Trello, you’ll use it for everything. I use about thirty Trello boards regularly, and I use them with everyone in my life: the APs (Aged Parents), with whom I plan vacations; every team at work; and just about every project I’m involved in.

So, ok, that was the first big difference with Trello: horizontal, not vertical. But there are a bunch of other differences:

It’s delivered continuously. Rather than having major and minor releases, we pretty much just continuously push out new features from development to customers. A feature that you built and tested, but didn’t deliver yet because you’re waiting for the next major release, becomes inventory. Inventory is dead weight: money you spent that’s just wasting away without earning you anything. Sure, 100 years ago, we had these things called “CD-ROMs” and we shipped software that way, so there was an economic reason to bunch up features before we inflict ‘em on the world. But there’s no reason to work that way any more. You already knew that, of course. I’m just saying—I stopped using Visual Basic about five minutes ago. Brave New World.

It’s not exhaustively tested before being released. We thought we could get away with this because Trello is free, so customers are more forgiving. But to tell the truth, the real reason we get away with it is because bugs are fixed in a matter of hours, not months, so the net number of “bugs experienced by the public” is low.

We work in public. The rule on the Trello team is “default public.” We have a public Trello board that shows everything that we’re working on and where it’s up to. We use this to let customers vote and comment on their favorite features. By the way, while Trello was under development, it was secret. We had a lot of beta testers who gave us customer feedback so that the development team could use lean startup principles, but the nine months we spent building version 1.0 in secret gave us a significant lead in a competitive marketplace. But now that we’re shipping, there’s no reason not to talk about our plans.

This is a “Get Big Fast” product, not a “Ben and Jerry’s”  product. See Strategy Letter I. The business goal for Trello is to ultimately get to 100 million users. That means that our highest priority is removing any obstacles to adoption. Anything that people might use as a reason not to use Trello has to be found and eliminated. For example:

Trello is free. The friction caused by charging for a product is the biggest impediment to massive growth. In the long run, we think it’s much easier to figure out how to extract a small amount of money out of a large number of users than to extract a large amount of money out of a small number of users. Once you have 100 million users, it’s easy to figure out which of those users are getting the most value out of the product you built. The ones who are getting the most value will be happy to pay you. The others don’t cost much to support.

The API and plug-in architectures are the highest priority. Another way of putting that is:  never build anything in-house if you can expose a basic API and get those high-value users (the ones who are getting the most value out of the platform) to build it for you. On the Trello team, any feature that can be provided by a plug-in must be provided by a plug-in.

(The API is currently in very rudimentary form. You can already use it to do very interesting things. It is under rapid development.)

We use cutting edge technology. Often, this means we get cut fingers. Our developers bleed all over MongoDB, WebSockets, CoffeeScript and Node. But at least they’re having fun. And in today’s tight job market, great programmers have a lot of sway on what they’re going to be working on. If you can give them an exciting product that will touch millions of people, and let them dig deep into TCP-IP internals while they try to figure out why simple things aren’t working, they’ll have fun and they’ll love their jobs. Besides, we’re creating a product that we’ll be working on for the next ten years. Technology that’s merely “state of the art” today is going to be old and creaky in five years. We tried to go a little bit beyond “state of the art.” It’s a calculated risk.

None of this is very radical. TL;DR: Fog Creek Software develops an internet product using techniques that every Y Combinator startup has been using since spez was sleeping with his laptop so he could reboot Reddit when Lisp crashed in the middle of the night. If you haven’t tried Trello yet, try it, then tell me on Twitter if it worked.

Fruity treats, customization, and supersonics: FogBugz 7 is here

A year ago today, FogBugz development was in disarray.


The original roadmap was too complicated

We had done this big offsite at a beach house in the Hamptons and came up with a complicated roadmap that involved splitting FogBugz into two separate products and two separate teams. We had done a lot of work on the architecture that made the product much more modular, but we had this goofy plan to do a major release containing virtually no new features, just to let the new architecture shake out, a plan which nobody was very excited about.

So, on July 31, 2008, we reset our plans. We gave up on the idea of shipping a standalone Wiki product, and merged the Wiki team with the FogBugz team. And we nailed down a new vision for FogBugz 7 that’s a lot easier to understand and a much better product: something we could ship in one year.

Then the development team shipped it. Exactly on schedule. Well, maybe a week or two early. They used Evidence Based Scheduling religiously on this large one year project and it worked amazingly well. Yes, they had to cut and trim features as they went along, but the accuracy of the estimates also gave them the confidence to add a couple of major features (like Scrum support) that you’ll love.

One of the best things we did as a development team was to write a short, concise, comprehensible vision statement that got everybody exactly on the same page about what we were going to do over the course of a year. The vision statement made it easy to prioritize. Instead of just telling us what was in the product, it also gave us a way to know what was out.

Here’s the vision statement, in its entirety, which is a pretty good description of what we are actually shipping today. Please excuse the tone of voice; remember that this was an internal document to galvanize the team.

Fog Creek Confidential

FogBugz 7

As of August, 2008, the entire FogBugz and Weeble teams are working towards a single major new release of FogBugz that will blow away our customers (real and imaginary). When they see it they will grow weak in the knees. Competitors will shiver in fear at the monumental amount of win in this release. As customers evaluate the software, they will simply never find a reason not to use FogBugz to run their software teams. No matter what the grumpy people on their team come up with, they’ll find that not only have we implemented it in FogBugz, we’ve done it in a FULL-ASSED way. No more HALF-ASSED features (I’m looking at you, logo customization in FogBugz 6.1).

This release has three important focus areas with friendly catchnames.

  1. fruity treats
  2. customization
  3. supersonics

If it’s NOT ON THAT LIST it’s NOT IN THE PRODUCT. Get used to it.

fruity treats

FogBugz 7.0 will include a long list of simple improvements that will make life dramatically easier for people trying to get things done, especially when they want to do things just a wee bit differently than we do here in the Land of the Fog. Every little feature will be a delight for somebody, especially that person who keeps emailing us because he can’t believe that the feature he wants which is obviously only six lines of code hasn’t been implemented in FogBugz 1.0, 2.0, 3.0, 4.0, “4.5”, or 6.0, and if we don’t get it soon he JUST MIGHT HAVE TO GO OVER TO THE AUSTRALIANS.

Collectively, though, fruity treats will make FogBugz friggin’ amazing, and they’ll help us win more sales because we won’t have so many showstopper reasons why people choose another so-called bug “tracker.”

What’s a fruity treat? It must fit these three rules to get into the 7.0 orchard:

  1. It must be something customers and potential customers are asking about all the time
  2. There must not be a trivial, easy workaround in 6.1
  3. It must be relatively easy for us to implement. No big earth-shaking new features will sneak in.
  4. “Three rules,” I said. Not four. Why is there a 4 here?

Visit the shared filter FogBugz 7 Fruity Treats to see what’s coming up.

customization

FogBugz 7.0 will include our smashingly powerful new plug-in architecture, which, combined with the FogBugz API, will give people complete confidence that if there’s anything FogBugz can’t do out of the FogBox, you can write it yourself. No more will we tell customers “you get the source code, so you can modify it!” That’s BS. They know perfectly well that if they modify our source code, terrible tragedies will occur the minute we release a service pack. From now on, we can say, “there’s a great plug-in architecture and a whole online cornucopia of righteous plug-ins available for download.”

So you can trick out your FogBugz installation like a lowrider or an off-road dune buggy. You can make it into a Cadillac or a space shuttle. It’s up to you.

supersonics

Thanks to the newfangled, all-electronic compilation machine (“Wasabi”) that we had installed at great expense, FogBugz will be running on the CLR and Mono for greatly improved performance and compatibility. Whiz zip blip! bleep! You’ll be able to run 1000s of users on one server. Long queries will finish faster. Laundry will be brighter.

and that is it.

Nothing else. Go fix yourself an icy lemonade.

The team got pretty excited. Having a sharp focused vision statement like that, and having the whole team working towards a single shared goal, really helped us get our house in order. We scrubbed through thousands of backlogged ideas, feature requests, and comments, and came up with a set of fruity treats that will eliminate virtually every customer objection that we hear during the sales process. We developed a comprehensive plug-in architecture that’s pretty amazing, and had interns develop a slew of slick plug-ins. And the fact that Wasabi is now a genuine .NET language made for substantial performance improvements over running on the VBScript “runtime.”

I’ll let the team give you a comprehensive look at what’s new in FogBugz 7, but here are some of the highlights:

  • Subcases: organize your work hierarchically
  • EBS can track the schedule of developers who work on multiple projects
  • EBS also now has dependencies (work on X can’t start until Y is complete)
  • Scrum is fully supported, with project backlogs and EBS-powered burndown charts
  • Just about the slickest implementation of tags you’ve ever seen
  • Plug-ins, with comprehensive support throughout the product
  • Customizable workflow
  • Lots of visual improvements and small usability enhancements
  • A context menu in the grid saves steps
  • Easier case entry right from the grid
  • Auto complete in case fields, so you don’t have to remember case numbers
  • Custom fields (Yes. They tied me up in a closet.)
  • URL triggers (FogBugz will hit a URL you specify when certain events happen)
  • Easier administration, through an administrator dashboard, and a feature for cloning users and creating a list of new users all at once
  • Much better performance, including substantial caching that speeds up display of email, EBS calculations, and more
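One item on that list, URL triggers, is what we’d now call a webhook: FogBugz makes an HTTP request to a URL you specify when certain events happen. As a sketch of what the receiving end might look like (the payload shape and field names here are purely my assumptions for illustration; this post doesn’t describe the actual trigger format):

```python
# Hypothetical sketch of the receiving end of a URL trigger (a webhook).
# The field names (ixBug, sEvent) are assumptions, not the documented format.
from urllib.parse import parse_qs

def handle_trigger(body: str) -> str:
    """Parse a form-encoded trigger payload and summarize the event."""
    fields = parse_qs(body)
    case_id = fields.get("ixBug", ["?"])[0]   # which case the event concerns
    event = fields.get("sEvent", ["?"])[0]    # what happened to the case
    return f"case {case_id}: {event}"

print(handle_trigger("ixBug=42&sEvent=Resolved"))  # case 42: Resolved
```

In practice you’d hang a handler like this off a tiny HTTP server and have it notify a chat room, kick off a build, or whatever else your team wants to happen when a case changes.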

Those are just the big-ticket items. FogBugz 7 is rife with little areas where the development team put a ridiculous amount of attention to detail. For example, the signup process, which is actually very complicated on the backend, became much simpler on the front end, due to a heroic amount of work that every user will only see once. If you do nothing else, check out the signup process to see the effort that went into making signup just a tiny bit faster. Another example: we completely replaced the entire email processing infrastructure, just because there were tiny corner case bugs that simply could not be solved with the commercial class library we had been using.


I wish I could take more credit for it, but the truth is, Fog Creek has grown. We have a very professional team with testers, program managers, and developers, and I just sort of sit here agog at what a brilliant job they’re doing. All of the credit for this fantastic new product goes to them. I’m just the Michael Scott character who wastes everybody’s time whenever I venture out of my office.

FogBugz 7 is shipping today for Windows servers and on our own hosted infrastructure. The Mono version (for Macintosh and Linux) will be in beta soon. To try it, go to try.fogbugz.com.

If you’re currently using FogBugz on Demand, you’re already using 7.0.

If you run FogBugz on your own server and have an up-to-date support contract, the upgrade is free; otherwise, bring your support contract up to date and you’ll be good to go.

New, faster Copilot

Something I knew: if you just put traffic on the Internet, it’s not necessarily going to go by the most efficient route.

Something I didn’t know: that can make a pretty big difference. The default routes can be slow, clogged, and high latency. Think Cross-Bronx Expressway.

Akamai sent a couple of salespeople over to pitch us a service called IP Application Accelerator. According to the goofy pictures-with-clouds in the whitepaper, when you subscribe to this service, your packets go straight to the nearest Akamai node (they have them installed all over the world), and then they magically zip on a superfast superclean superhighway to the Akamai node nearest your destination, after which they hop off and take the city bus to their final destination.

I have to admit to being extremely skeptical. Isn’t that what the Internet is supposed to do anyway? When I heard about this I really didn’t think it would work. I mean, sure, I understand Akamai’s original product, whereby the big static files in your site would be copied to nodes all over the Internet for faster delivery, but I didn’t expect great speed improvements for an application like Fog Creek Copilot, which can’t cache anything.

Jason, on the Copilot team, wanted to try anyway. Performance is the biggest complaint about Copilot, so we were ready to try anything that increased the “speed of light.” Setting it up turned out to be pretty easy. The costs were reasonable and Akamai was more than happy to let us see if it worked as well as advertised before committing to spending anything. Setup consisted entirely of changing an IP address or two.

Well, the new Akamaized Copilot seems to get about 100% more throughput going from Boston to Los Angeles. More importantly, our exhaustive scientific experiments using beakers and chemicals and graph paper and slide rules proved that the usability of Copilot jumped from “tolerable” to “pretty snappy.”

My high school science teacher would be proud.

Last week in Munich I was staying in a hotel (Bayerischer Hof) with ridiculously bad internet connectivity (provided by Swisscom) that was bursty, had lengthy dropouts, surprisingly low bandwidth (I couldn’t watch YouTube movies of cats doing funny things, even at the lowest resolution), and was poorly managed (it literally could not route to many popular sites). So I tried the new Akamaized Copilot back to my desk in New York and was blown away… Copilot’s speed and reliability doing remote desktop was actually better than the native internet access in the hotel. This shows, I think, that Akamai managed to pull its traffic off of the crappy Swisscom network before Swisscom could do any more damage. Awesome.

It’s still too early in the experiment to decide conclusively that this was a good move. The internet is a huge place, and we’ve only done a handful of experiments. The final verdict will come from our customers, but so far I’m a believer.

The new Fog Creek office

Remember the Bionic Office? Fog Creek moved in there in 2003. After a couple of years we had outgrown the first office so we expanded to take over the whole floor. By the time our lease ran out in 2008 we had about 25 people in a space built for 18 and we knew we had to move. Besides, the grungy midtown location, perfect for startups, was starting to get us down after five years. We had a little bit more money, so we were looking for a place with about twice the space that cost about four times as much.

It bears repeating that at Fog Creek our goal is building the best possible place for software developers to work. Finding a great space was not easy. Our ideal of giving every developer a private office is unusual, so it’s almost impossible to find prebuilt office space set up that way. That means we didn’t have much choice but to find the best raw space and then do our own interior construction.

We knew it was going to take a while. After the first office, I knew that you should always plan on ten months from the day you start looking at space until the day you move in. And I also knew that if I wasn’t intimately involved in every detail of the construction, we’d end up with the kind of life-sucking dreary cubicle hellhole made famous by “Office Space.”

After a tedious search, we signed a lease for about 10,600 square feet on a high floor at 55 Broadway, almost all the way downtown, with fantastic views of the Hudson River, Governors Island, the Statue of Liberty, and Jersey City.

We found a landlord with his own construction crew who was willing to do the interior construction for us, at no charge. The only problem was that his idea of a nice office was a lot closer to Initech than Fog Creek. So we had to chip in about a half million dollars of our own to upgrade just about everything.

Building great office space for software developers serves two purposes: increased productivity, and increased recruiting pull. Private offices with doors that close protect programmers from interruptions, allowing them to concentrate on code without being forced to stop and listen to every interesting conversation in the room. And the nice offices wow our job candidates, making it easier for us to attract, hire, and retain the great developers we need to make software profitably. It’s worth it, especially in a world where so many software jobs provide only the most rudimentary and depressing cubicle farms.

Here are a few of the features of the new office:

Gobs of well-lit perimeter offices. Every developer, tester, and program manager is in a private office; all except two have direct windows to the outside (the two that don’t get plenty of daylight through two glass walls).

Desks designed for programming. Long, straight desks include a motorized height-adjustable work surface for maximal ergonomics and comfort, and so you can stand up for part of the day if you want. Standard 30” monitors. Desks are straight instead of L-shaped to make pair programming and code reviews more comfortable. There are 20 electrical outlets behind every desk and most developers have small hubs for extra computers. Our standard-issue chair is the Herman Miller Aeron. The guest chairs are the famous Series 7 by Arne Jacobsen. The pedestal storage is on wheels and incorporates a cushion-top for additional guest seating.

Glass whiteboards. Easy to erase, look great, and don’t stain.

Coffee bar and lunchroom. There’s an espresso machine, a big fridge full of beverages, a bottomless supply of snacks, and delicious catered lunch brought in every day. We all eat lunch together, which is one of the highlights of working here.

A huge saltwater aquarium, which brings light and color into the center of the office.

Plenty of meeting space. The lunch room has a projector and motorized screen (most frequently used to play Rock Band, thanks Jeff Atwood); there are several smaller meeting tables around, two conference rooms, and a big S-shaped couch.

A library, fully stocked with obsolete paper books and two reclining leather chairs, perfect for an after-lunch nap.

A shower (floor to ceiling marble), so you can bike to work or work out during the day.

Wood floors around the perimeter, so you can use scooters to get around. Carpet in the offices to make them quiet. Concrete in the lunch room because it’s bright and looks cool.

I can’t quite fit in enough pictures in this article to really give you a feel for the space, but I put a bunch of photos of the new Fog Creek office up on Picasa. If you’re interested in learning more about the rationale behind spending so much money on building a great workspace, read A Field Guide to Developers.

Five whys

At 3:30 in the morning of January 10th, 2008, a shrill chirping woke up our system administrator, Michael Gorsuch, asleep at home in Brooklyn. It was a text message from Nagios, our network monitoring software, warning him that something was wrong.

He swung out of bed, accidentally knocking over (and waking up) the dog, sleeping soundly in her dog bed, who, angrily, staggered out to the hallway, peed on the floor, and then returned to bed. Meanwhile Michael logged onto his computer in the other room and discovered that one of the three data centers he runs, in downtown Manhattan, was unreachable from the Internet.

This particular data center is in a secure building in downtown Manhattan, in a large facility operated by Peer 1. It has backup generators, several days of diesel fuel, and racks and racks of batteries to keep the whole thing running for the few minutes it takes the generators to start. It has massive amounts of air conditioning, multiple high-speed connections to the Internet, and the kind of “right stuff” down-to-earth engineers who always do things the boring, plodding, methodical way instead of the flashy cool trendy way, so everything is pretty reliable.

Internet providers like Peer 1 like to guarantee the uptime of their services in terms of a Service Level Agreement, otherwise known as an SLA. A typical SLA might state something like “99.99% uptime.” When you do the math, let’s see, there are 525,949 minutes in a year (or 525,600 if you are in the cast of Rent), so that allows them 52.59 minutes of downtime per year. If they have any more downtime than that, the SLA usually provides for some kind of penalty, but honestly, it’s often rather trivial… like, you get your money back for the minutes they were down. I remember getting something like $10 off the bill once from a T1 provider because of a two-day outage that cost us thousands of dollars. SLAs can be a little bit meaningless that way, and given how low the penalties are, a lot of network providers just started advertising 100% uptime.
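The arithmetic behind those uptime percentages is easy to check. A quick sketch, using the same 525,949-minute year as above:

```python
# Downtime budget implied by an uptime SLA, using a 365.2425-day year.
MINUTES_PER_YEAR = 365.2425 * 24 * 60   # ≈ 525,949 minutes

def downtime_budget_minutes(uptime_percent: float) -> float:
    """Minutes of downtime per year allowed by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

print(round(downtime_budget_minutes(99.99), 2))         # 52.59 minutes: "four nines"
print(round(downtime_budget_minutes(99.9999) * 60, 1))  # 31.6 seconds: "six nines"
```

That last line is the “six nines” figure that comes up later: roughly half a minute of allowed downtime per year.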

Within 10 minutes everything seemed to be back to normal, and Michael went back to sleep.

Until about 5:00 a.m. This time Michael called the Peer 1 Network Operations Center (NOC) in Vancouver. They ran some tests, started investigating, couldn’t find anything wrong, and by 5:30 a.m. things seemed to be back to normal, but by this point, he was as nervous as a porcupine in a balloon factory.

At 6:15 a.m. the New York site lost all connectivity. Peer 1 couldn’t find anything wrong on their end. Michael got dressed and took the subway into Manhattan. The server seemed to be up. The Peer 1 network connection was fine. The problem was something with the network switch. Michael temporarily took the switch out of the loop, connecting our router directly to Peer 1’s router, and lo and behold, we were back on the Internet.

By the time most of our American customers got to work in the morning, everything was fine. Our European customers had already started emailing us to complain. Michael spent some time doing a post-mortem, and discovered that the problem was a simple configuration problem on the switch. There are several possible speeds that a switch can use to communicate (10, 100, or 1000 megabits/second). You can either set the speed manually, or you can let the switch automatically negotiate the highest speed that both sides can work with. The switch that failed had been set to autonegotiate. This usually works, but not always, and on the morning of January 10th, it didn’t.

Michael knew this could be a problem, but when he installed the switch, he had forgotten to set the speed, so the switch was still in the factory-default autonegotiate mode, which seemed to work fine. Until it didn’t.

Michael wasn’t happy. He sent me an email:

I know that we don’t officially have an SLA for On Demand, but I would like us to define one for internal purposes (at least). It’s one way that I can measure if myself and the (eventual) sysadmin team are meeting the general goals for the business. I was in the slow process of writing up a plan for this, but want to expedite in light of this morning’s mayhem.

An SLA is generally defined in terms of ‘uptime’, so we need to define what ‘uptime’ is in the context of On Demand. Once that is made clear, it’ll get translated into policy, which will then be translated into a set of monitoring / reporting scripts, and will be reviewed on a regular interval to see if we are ‘doing what we say’.

Good idea!

But there are some problems with SLAs. The biggest one is the lack of statistical meaningfulness when outages are so rare. We’ve had, if I remember correctly, two unplanned outages, including this one, since going live with FogBugz on Demand six months ago. Only one was our fault. Most well-run online services will have two, maybe three outages a year. With so few data points, the length of the outage starts to become really significant, and that’s one of those things that’s wildly variable. Suddenly, you’re talking about how long it takes a human to get to the equipment and swap out a broken part. To get really high uptime, you can’t wait for a human to switch out failed parts. You can’t even wait for a human to figure out what went wrong: you have to have previously thought of every possible thing that can possibly go wrong, which is vanishingly improbable. It’s the unexpected unexpecteds, not the expected unexpecteds, that kill you.

Really high availability becomes extremely costly. The proverbial “six nines” availability (99.9999% uptime) means no more than 30 seconds downtime per year. That’s really kind of ridiculous. Even the people who claim that they have built some big multi-million dollar superduper ultra-redundant six nines system are gonna wake up one day, I don’t know when, but they will, and something completely unusual will have gone wrong in a completely unexpected way, three EMP bombs, one at each data center, and they’ll smack their heads and have fourteen days of outage.

Think of it this way: If your six nines system goes down mysteriously just once and it takes you an hour to figure out the cause and fix it, well, you’ve just blown your downtime budget for the next century. Even the most notoriously reliable systems, like AT&T’s long distance service, have had long outages (six hours in 1991) which put them at a rather embarrassing three nines … and AT&T’s long distance service is considered “carrier grade,” the gold standard for uptime.

Keeping internet services online suffers from the problem of black swans. Nassim Taleb, who invented the term, defines it thus: “A black swan is an outlier, an event that lies beyond the realm of normal expectations.” Almost all internet outages are unexpected unexpecteds: extremely low-probability outlying surprises. They’re the kind of things that happen so rarely it doesn’t even make sense to use normal statistical methods like “mean time between failure.” What’s the “mean time between catastrophic floods in New Orleans?”

Measuring the number of minutes of downtime per year does not predict the number of minutes of downtime you’ll have the next year. It reminds me of commercial aviation today: the NTSB has done such a great job of eliminating all the common causes of crashes that nowadays, each commercial crash they investigate seems to be a crazy, one-off, black-swan outlier.

Somewhere between the “extremely unreliable” level of service, where it feels like stupid outages occur again and again and again, and the “extremely reliable” level of service, where you spend millions and millions of dollars getting an extra minute of uptime a year, there’s a sweet spot, where all the expected unexpecteds have been taken care of. A single hard drive failure, which is expected, doesn’t take you down. A single DNS server failure, which is expected, doesn’t take you down. But the unexpected unexpecteds might. That’s really the best we can hope for.

To reach this sweet spot, we borrowed an idea from Sakichi Toyoda, the founder of Toyota. He calls it Five Whys. When something goes wrong, you ask why, again and again, until you ferret out the root cause. Then you fix the root cause, not the symptoms.

Since this fit well with our idea of fixing everything two ways, we decided to start using five whys ourselves. Here’s what Michael came up with:

  • Our link to Peer 1 NY went down
  • Why? – Our switch appears to have put the port in a failed state
  • Why? – After some discussion with the Peer 1 NOC, we speculate that it was quite possibly caused by an Ethernet speed / duplex mismatch
  • Why? – The switch interface was set to auto-negotiate instead of being manually configured
  • Why? – We were fully aware of problems like this, and have been for many years. But we do not have a written standard and verification process for production switch configurations.
  • Why? – Documentation is often thought of as an aid for when the sysadmin isn’t around or for other members of the operations team, whereas it should really be thought of as a checklist.

“Had we produced a written standard prior to deploying the switch and subsequently reviewed our work to match the standard, this outage would not have occurred,” Michael wrote. “Or, it would occur once, and the standard would get updated as appropriate.”

After some internal discussion we all agreed that rather than imposing a statistically meaningless measurement and hoping that the mere measurement of something meaningless would cause it to get better, what we really needed was a process of continuous improvement. Instead of setting up an SLA for our customers, we set up a blog where we would document every outage in real time, provide complete post-mortems, ask the five whys, get to the root cause, and tell our customers what we’re doing to prevent that problem in the future. In this case, the change is that our internal documentation will include detailed checklists for all operational procedures in the live environment.

Our customers can look at the blog to see what caused the problems and what we’re doing to make things better, and, hopefully, they can see evidence of steadily improving quality.

In the meantime, our customer service folks have the authority to credit customers’ accounts if they feel like they were affected by an outage. We let the customer decide how much they want to be credited, up to a whole month, because not every customer is even going to notice the outage, let alone suffer from it. I hope this system will improve our reliability to the point where the only outages we suffer are really the extremely unexpected black swans.

PS. Yes, we want to hire another system administrator so Michael doesn’t have to be the only one to wake up in the middle of the night.

A game of inches

“Did someone leave the radio on in the bathroom?” I asked Jared. There was a faint sound of classical music.

“No. It’s coming from outside. It started while you were away and happens every night.”

We live in an apartment building in New York. There are neighbors on all sides. We’re used to hearing TV shows from below, and the little kid in the apartment directly above us has a favorite game: throwing a bunch of marbles on the floor and then throwing himself around the room violently. I’m not sure how you keep score. As I write this, he’s running rapidly from room to room crashing into things. I can’t wait until he’s old enough for paintball.

Anyway. This classical-music-late-at-night thing had never happened before.

Worse, it was some kind of sturm-und-drang romantic crap that was making me angry right when I wanted to fall asleep.

Eventually, the music stopped, and I was able to drift off to sleep. But the next night, when the music resumed at midnight, I was really worn out, and it was more self-important Wagner rubbish, with pompous crescendos that consistently woke me up every time I finally drifted off to sleep, and I had no choice but to go sit in the living room and look at pictures of lolcats until it stopped, which it finally did, around 1 am.

The next night I had had enough. When the music started at about midnight, I got dressed and started exploring the apartment building. I crept around the halls, listening at every door, trying to figure out where the music came from. I poked my head out windows and found an unlocked door leading to an airshaft where the music was amazingly loud. I climbed up and down the stairs, and listened closely from the window on each and every landing, until I was pretty convinced that the problem was from dear old Mrs. C’s apartment, #2B, directly below us.

I didn’t think Mrs. C, who is probably in her 60s, was even awake that late, let alone listening to music loudly, but I briefly entertained the possibility that the local classical music station was doing the Ring Cycle or something and she was staying up late to hear it.

Not bloody likely.

One thing I had noticed was that the music seemed to go on at midnight every night, and off at 1:00 am. Somehow, that made me think it was a clock radio, which probably had the alarm factory set to 12:00.

I couldn’t bring myself to wake up an old lady downstairs on the mere suspicion that music was coming from her apartment. Frustrated, I went back to my apartment and caught up on xkcd. I was depressed and angry, because I hadn’t solved the problem. I festered and scowled all the next day.

The next evening, I knocked on Mrs. C’s door. The super had told me she was going away for the entire summer the next day so if the problem was coming from her apartment I better find out pronto.

“Sorry to bother you,” I said. “I’ve noticed that every night around midnight there’s loud classical music coming from the airshaft behind our apartments and it’s keeping me awake.”

“Oh no, it’s not me!” she insisted, as I suspected she would. Of course not: she probably goes to sleep at a completely normal hour and certainly never plays loud music that bothers the neighbors!

I thought she was a little hard of hearing and probably never noticed the thing blaring away from her spare room in the middle of the night. Or maybe she was a good sleeper.

It took a few minutes, but I finally convinced her to check if there was any kind of clock radio in the room below my window.

Turns out there was. Right in an open window beneath my own bedroom window. When I saw that it was tuned to 96.3, WQXR, I knew I had found the culprit.

“Oh, that thing? I have no idea how to use that thing. I never use it,” she said. “I’ll disconnect it completely.”

“Not necessary,” I said, and turned off the alarm, set the volume to zero, and, in my late-onset OCD, set the clock to the exact time.

Mrs. C was terribly apologetic, but it really wasn’t her fault. It took me—me!—quite a while to figure out how to operate the damn clock radio, and let me tell you, sonny boy, I know a thing or two about clock radios.  The UI was terrible. Your average little old lady didn’t stand a chance.

Is it the clock radio’s fault? Sort of. It was too hard to use. It had an alarm that continued to go off daily even if nobody touched it the day before, which is not the greatest idea. And there’s no reason to reset the alarm time to midnight after a power outage. 7:00 am would be a completely civilized default.

Somehow, over the last few weeks, I’ve become hypercritical. I’m always looking for flaws in things, and when I find them, I become single-minded about fixing them. It’s a particular frame of mind, actually, that software developers get into when they’re in the final debugging phase of a new product.

Over the last few weeks, I’ve been writing all the documentation for the next big version of FogBugz. As I write things, I try them out, either to make sure they work the way I think they should, or to get screenshots. And every hour or so, bells go off. “Wait a minute! What just happened? That’s not supposed to work like that!”

And since it’s software, I can always fix it. HA HA! Just kidding! I can’t make head or tail out of the code any more. I enter a bug and someone with a clue fixes it.

Dave Winer says, “To create a usable piece of software, you have to fight for every fix, every feature, every little accommodation that will get one more person up the curve. There are no shortcuts. Luck is involved, but you don’t win by being lucky, it happens because you fought for every inch.”

Commercial software—the kind you sell to other people—is a game of inches.

Every day you make a tiny bit of progress. You make one thing just a smidgen better. You make the alarm clock default to 7:00am instead of 12:00 midnight. A tiny improvement that will barely benefit anyone. One inch.

There are thousands and tens of thousands of these tiny things.

It takes a mindset of constant criticism to find them. You have to reshape your mind until you’re finding fault with everything. Your significant others go nuts. Your family wants to kill you. When you’re walking to work and you see a driver do something stupid, it takes all your willpower to resist going up to the driver and explaining to him why he nearly killed that poor child in the wheelchair.

And as you fix more and more of these little details, as you polish and shape and shine and craft the little corners of your product, something magical happens. The inches add up to feet, the feet add up to yards, and the yards add up to miles. And you ship a truly great product. The kind of product that feels great, that works intuitively, that blows people away. The kind of product where that one-in-a-million user doing that one-in-a-million unusual thing finds that not only does it work, but it’s beautiful: even the janitor’s closets of your software have marble floors and solid core oak doors and polished mahogany wainscoting.

And that’s when you know it’s great software.

Congratulations to the FogBugz 6.0 team, outlandishly good players of the game of inches, who shipped their first beta today, on track for final release at the end of the summer. This is the best product they’ve ever produced. It will blow you away.

Copilot 2.0 ships!

Hoorah! Fog Creek Copilot 2.0 is now online, with three, no wait, five, no, three new features.

Well, I guess it depends how you count. In a moment I’ll count ‘em. In the meantime, a little background.

Fog Creek Copilot is a remote tech support service that lets one person control another computer remotely, much like VNC or RDC, with the advantage that it requires zero configuration, works through firewalls, and installs nothing.

Two summers ago, we had four interns here who built it, all by themselves, over the course of their 12-week internship. The only thing we gave them was a spec, some desks, and computers. They put together the web site, the documentation, all the code for five major components, came up with the marketing plan, did usability testing, and demoed to the public at a trade show. They kept a blog, which you can still read, and someone even made a documentary movie about their summer.

(Sidebar: One of the reasons they were able to accomplish so much in one summer is that they used open source software as a starting point. Of course, everything they did, with the exception of our proprietary back end server code, is available under the GPL license.)

OK. New features!

1. Support for Mac! OMG! All versions of Mac OS X from 10.2 on are now supported. I’m fairly confident that our Mac remote desktop implementation is second to none.

Oh, wait. Interruption. You may be wondering, if the interns did the whole thing over a summer, who wrote all this new code?

The answer is that two of the interns accepted full-time jobs at Fog Creek, Tyler and Ben. Tyler extended his summer until December, and then headed off for a leave of absence to finish his Masters degree at Stanford. Ben graduated from Duke last summer and has been cranking away on 2.0 since then. Ben, by the way, is the only person I know who writes code in C, C++, C#, and Objective C all in the same day, while writing a book about Smalltalk at night. We also had a Mac programming guru, Daniel Jalkut, get us started with the Mac port.

OK, next new big feature:

2. Direct Connect! We’ve always done everything we can to make sure that Fog Creek Copilot can connect in any networking situation, no matter what firewalls or NATs are in place. To make this happen, both parties make outbound connections to our server, which relays traffic on their behalf. Well, in many cases, this isn’t necessary. So version 2.0 does something rather clever: it sets up the initial connection through our servers, so you get connected right away with 100% reliability. But then once you’re all connected, it quietly, in the background, looks for a way to make a direct connection. If it can’t, no big deal: you just keep relaying through our server. If you can make a direct peer-to-peer connection, it silently shifts your data onto the direct connection. You won’t notice anything except, probably, much faster communication.
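The relay-first, upgrade-later strategy can be sketched in a few lines. This is a toy illustration, not Copilot's actual code (which works at the network layer); the class and method names here are hypothetical, invented for the example.

```python
class Transport:
    """A stand-in for a network path: either the relay or a direct link."""
    def __init__(self, name):
        self.name = name


class Connection:
    """Starts out on the relay; silently swaps in a direct path if one is found."""

    def __init__(self, relay):
        self.transport = relay   # connect through the relay first: 100% reliable
        self.log = []

    def send(self, data):
        # Data always flows over whichever transport is current.
        self.log.append((self.transport.name, data))

    def try_upgrade(self, probe_direct):
        # In the background, probe for a peer-to-peer path.
        # If the probe fails, keep relaying; if it succeeds, shift
        # traffic onto the direct connection without interrupting anything.
        direct = probe_direct()
        if direct is not None:
            self.transport = direct


conn = Connection(Transport("relay"))
conn.send("screen update 1")                      # goes via the relay
conn.try_upgrade(lambda: Transport("direct"))     # probe succeeds here
conn.send("screen update 2")                      # now goes peer-to-peer
```

The key design point is that the upgrade is invisible to both parties: the relay guarantees the session starts instantly, and the direct path, when available, is purely an optimization.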

3. File transfer! An easy-to-use file upload and download feature makes this the PERFECT application for installing Firefox on all your relatives’ computers. It’s especially handy for tech support scenarios. Imagine this: your new software works great everywhere but this one guy has a wacky system that makes your software keep crashing. So you use Fog Creek Copilot to take over his system, and then you use the file transfer feature to copy new builds to his computer as you try to fix the problem.

4. Does this count as a feature? We lowered the price for day passes (24 hours of usage) from $10 to, no, not $9, not $8, not $7, would you believe it's only FIVE DOLLARS? That's right, unlimited usage for 24 hours for five lonely bucks.

I guess I should explain the reasoning behind that. First of all, the direct connect feature (#2) should reduce our bandwidth bills in many situations, so we can pass that savings on.

Second, we don’t want anyone to have an excuse not to use Fog Creek Copilot. To avoid paying $10, you might actually be crazy enough to try to talk your mom into uninstalling Norton Utilities, punching the appropriate holes in the Windows firewall, and setting up appropriate port-forwarding rules on her broadband router… but for $5, why go to the trouble? Or you might be willing to set up your own server outside the firewall, with VNC running as a listener, and walk your customers through setting up VNC and connecting back to you, but again, why bother for five bucks?

We think that’s a negligible price to pay to know that all you need to tell your mom, or your customer, is “Go to copilot.com, type in this number, and download and run the program you find there.” And to know that it will Just Work.

We’re betting that the lower price will lead to more users, which will lead to more corporate subscriptions, which will lead to higher total revenues.

So. Congratulations to Tyler and Ben for a fantastic upgrade!