Usability Testing with Morae

The last time I participated in formal usability testing was at a fancy lab in Colorado, custom built for the purpose at a cost of around $100,000. It was basically a television studio, complete with one-way glass, lots of special video gear, and a giant video console that would have been adequate to broadcast the Super Bowl. To do usability testing for Juno, a group of us flew out to Colorado, rented cars, stayed in a hotel, ate at expensive restaurants, and generally consumed massive amounts of money so we could watch people try to sign up for our online service, and, generally, succeed.

At the other extreme, I’ve long been an advocate of hallway usability tests and paper prototypes, which often find some of the biggest usability problems long before they show up in the field, for about fifty cents.

Now there’s a middle ground. My friends over at TechSmith in Okemos, Michigan recently released a software product called Morae which lets you use cheap webcams to set up a complete usability lab in your office without fancy equipment or one-way glass. I asked them if they would be willing to usability test their own product by running a usability test in the Fog Creek office for our new remote assistance service, and they graciously agreed.

Here’s how Morae works. You set up your usability testing subject in front of a computer with a webcam and a microphone:


Then any number of people can watch the subject from their own computers:


Here Tyler is watching two screens: one is showing the helper and the other is showing the person being helped. He can see their screens, hear everything they say, and see video of the subject in the corner. We happen to have windows between the offices at Fog Creek so he can actually see the helper through his window. Let’s see if I can zoom in on Tyler’s screens:


In my book on UI design, I wrote about a common problem with usability tests:

In most usability tests, you prepare a list of instructions for the user. For example, if you were usability testing an Internet access provider, you might have an instruction to “sign up for the service.” (I actually did this very usability test, several times, in my career.)

So far so good. The first user comes in, sits down, starts signing up for the service, and gets to the screen asking them how they want to pay. The user looks at me helplessly. “Do I gotta pay for this myself?”

“Oh wait,” you interrupt. “Here, use this fake credit card number.”

The sign up procedure then asks them if they would like to use a regular modem, a cable modem, or a DSL line.

“What do I put here?” asks the user. Possibly because they don’t know the answer, but possibly because they know the answer for their computer, but they’re not using their computer, they’re using your computer, which they’ve never seen before, in a usability lab, where they’ve never been before.

Lerone hides in the bushes with a video camera.

To work around this problem, usability testers have started trying to do field testing. Instead of giving the subject tasks to do in a highly contrived environment, you conspire to watch the subject doing their own work at their own desk while you hide in a nearby shrub and spy on them. Morae, by the way, would be perfect for that. This method is most useful when you already have a version n of your product and you’re trying to figure out how to improve version n+1.

The usability test worked great. Ours was a little unusual in that we had two subjects, since Fog Creek Copilot involves a helper and a helpee, and Morae only lets you hear one subject at a time from one computer. To work around this we just set up two computers with the Morae Remote Viewer so we could get the sound from both subjects.

So far this morning we’ve run two usability test sessions, with great results: we’ve already realized that 2 out of 2 helpers were confused about how to get reconnected, since the Fog Creek Copilot helper application deletes itself when you’re done with it. This is a classic example of the user model not conforming to the program model … most programs don’t delete themselves! … which is the source of virtually every usability problem. From the very first chapter of UI for Programmers:

The cardinal axiom of all user interface design:

A user interface is well-designed when the program behaves exactly how the user thought it would.

I should have known this. The program design violated a principle I wrote myself in big bold print in my own book: it didn’t do what you expected. The great thing about usability tests is that with a day of testing and a handful of subjects, even if you’re as senile as I am, you can find the biggest areas where the program’s behavior diverges from the user’s expectations without your ever having realized it.


I (Heart) Unicode

The contents of the office next door to us are being auctioned off as we speak. I hope that means we’ll be able to expand into there soon: we’ve totally outgrown our existing office space.

The interns have spent the last two weeks working on performance enhancements to Fog Creek Copilot. They put in support for automatically finding proxy servers and using them if necessary, and this week they’ve been working on raw speed. Tonight the whole company is going to sales training, in the form of the play Glengarry Glen Ross.

Thanks to David McNett for sending me the great I (heart) Unicode T-Shirt! He’s selling them, both in Mac and Windows format, on cafepress.



A small note from yesterday’s post. Although the compression algorithm is commonly called “LZW,” the original paper was credited to Ziv first, so I think it should properly be referred to as Ziv-Lempel-Welch or ZLW.
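Whatever you call it, the algorithm itself is compact enough to sketch. Here’s a minimal, illustrative Python version of the compression step; for simplicity it emits codes as a list of integers rather than packing them into variable-width bit codes the way a real compressor would.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Toy (Ziv-)Lempel-Welch compressor: returns a list of dictionary codes."""
    # The dictionary starts with every single byte, codes 0-255.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc  # keep extending the current match
        else:
            out.append(dictionary[w])       # emit the longest known string
            dictionary[wc] = next_code      # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_compress(b"ABABABA"))  # [65, 66, 256, 258]
```

The repeated "AB" pairs get replaced by dictionary codes, which is the whole trick.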

Hitting the High Notes

In March, 2000, I launched this site with the shaky claim that most people are wrong in thinking you need an idea to make a successful software company:

The common belief is that when you’re building a software company, the goal is to find a neat idea that solves some problem which hasn’t been solved before, implement it, and make a fortune. We’ll call this the build-a-better-mousetrap belief. But the real goal for software companies should be converting capital into software that works.

For the last five years I’ve been testing that theory in the real world. The formula for the company I started with Michael Pryor in September, 2000 can be summarized in four steps:

Best Working Conditions → Best Programmers → Best Software → Profit!

It’s a pretty convenient formula, especially since our real goal in starting Fog Creek was to create a software company where we would want to work. I made the claim, in those days, that good working conditions (or, awkwardly, “building the company where the best software developers in the world would want to work”) would lead to profits as naturally as chocolate leads to chubbiness or cartoon sex in video games leads to gangland-style shooting sprees.

For today, though, I want to answer just one question, because if this part isn’t true, the whole theory falls apart. That question is, does it even make sense to talk about having the “best programmers?” Is there so much variation between programmers that this even matters?

Maybe it’s obvious to us, but to many, the assertion still needs to be proven.

Several years ago a larger company was considering buying out Fog Creek, and I knew it would never work as soon as I heard the CEO of that company say that he didn’t really agree with my theory of hiring the best programmers. He used a biblical metaphor: you only need one King David, and an army of soldiers who merely had to be able to carry out orders. His company’s stock price promptly dropped from 20 to 5, so it’s a good thing we didn’t take the offer, but it’s hard to pin that on the King David fetish.

And in fact the conventional wisdom in the world of copycat business journalists and large companies who rely on overpaid management consultants to think for them, chew their food, etc., seems to be that the most important thing is reducing the cost of programmers.

In some other industries, cheap is more important than good. Wal*Mart grew to be the biggest corporation on Earth by selling cheap products, not good products. If Wal*Mart tried to sell high quality goods, their costs would go up and their whole cheap advantage would be lost. For example if they tried to sell a tube sock that can withstand the unusual rigors of, say, being washed in a washing machine, they’d have to use all kinds of expensive components, like, say, cotton, and the cost for every single sock would go up.

So, why isn’t there room in the software industry for a low cost provider, someone who uses the cheapest programmers available? (Remind me to ask Quark how that whole fire-everybody-and-hire-low-cost-replacements plan is working.)

Here’s why: duplication of software is free. That means that the cost of programmers is spread out over all the copies of the software you sell. With software, you can improve quality without adding to the incremental cost of each unit sold.

Essentially, design adds value faster than it adds cost.

Or, roughly speaking, if you try to skimp on programmers, you’ll make crappy software, and you won’t even save that much money.
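To put invented numbers on it (these are made up purely for illustration): suppose a great programmer costs three times what a mediocre one does.

```python
# Back-of-the-envelope sketch with invented numbers.
cheap_salary = 60_000    # yearly cost of a mediocre programmer
great_salary = 180_000   # yearly cost of a great one
copies_sold = 500_000    # copies of the software sold that year

# Because duplication is free, the salary difference is amortized
# over every copy sold.
extra_cost_per_copy = (great_salary - cheap_salary) / copies_sold
print(f"Extra cost per copy: ${extra_cost_per_copy:.2f}")  # $0.24
```

Pennies per copy for a dramatically better product: that’s the asymmetry that makes the software business different from tube socks.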

The same thing applies to the entertainment industry. It’s worth hiring Brad Pitt for your latest blockbuster movie, even though he demands a high salary, because that salary can be divided by all the millions of people who see the movie solely because Brad is so damn hot.

Or, to put it another way, it’s worth hiring Angelina Jolie for your latest blockbuster movie, even though she demands a high salary, because that salary can be divided by all the millions of people who see the movie solely because Angelina is so damn hot.

But I still haven’t proven anything. What does it mean to be “the best programmer” and are there really such major variations between the quality of software produced by different programmers?

Let’s start with plain old productivity. It’s rather hard to measure programmer productivity; almost any metric you can come up with (lines of debugged code, function points, number of command-line arguments) is trivial to game, and it’s very hard to get concrete data on large projects because it’s very rare for two programmers to be told to do the same thing.

The data I rely upon comes from Professor Stanley Eisenstat at Yale. Each year he teaches a programming-intensive course, CS 323, where a large proportion of the work consists of about five programming assignments, each of which takes about two weeks. The assignments are very serious for a college class: implement a Unix command-line shell, implement a ZLW file compressor, etc.

There was so much griping among the students about how much work was required for this class that Professor Eisenstat started asking the students to report back on how much time they spent on each assignment. He has collected this data carefully for several years.

I spent some time crunching the data; it’s the only data set I know of where we have dozens of students working on identical assignments using the same technology at the same time. It’s pretty darn controlled, as experiments go.

The first thing I did with this data was to calculate the average, minimum, maximum, and standard deviation of hours spent on each of twelve assignments. The results:

Project Avg Hrs Min Hrs Max Hrs StDev Hrs
CMDLINE99 14.84 4.67 29.25 5.82
COMPRESS00 33.83 11.58 77.00 14.51
COMPRESS01 25.78 10.00 48.00 9.96
COMPRESS99 27.47 6.67 69.50 13.62
LEXHIST01 17.39 5.50 39.25 7.39
MAKE01 22.03 8.25 51.50 8.91
MAKE99 22.12 6.77 52.75 10.72
SHELL00 22.98 10.00 38.68 7.17
SHELL01 17.95 6.00 45.00 7.66
SHELL99 20.38 4.50 41.77 7.03
TAR00 12.39 4.00 69.00 10.57
TEX00 21.22 6.00 75.00 12.11
ALL PROJECTS 21.44 4.00 77.00 11.16
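The summary statistics above are the ordinary ones. Here’s how you’d compute them in Python, using made-up hours rather than the actual CS 323 numbers, which aren’t reproduced here:

```python
import statistics

# Hypothetical per-student hours for one assignment (not the real data).
hours = [14.5, 8.25, 51.5, 22.0, 30.75, 18.5, 9.0, 27.25]

avg = statistics.mean(hours)
lo, hi = min(hours), max(hours)
sd = statistics.stdev(hours)  # sample standard deviation

print(f"Avg {avg:.2f}  Min {lo:.2f}  Max {hi:.2f}  StDev {sd:.2f}")
```

Even in this tiny invented sample, the slowest student takes six times as long as the fastest, which is the shape of the real data too.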

The most obvious thing you notice here is the huge variations. The fastest students were finishing three or four times faster than the average students and as much as ten times faster than the slowest students. The standard deviation is outrageous. So then I thought, hmm, maybe some of these students are doing a terrible job. I didn’t want to include students who spent 4 hours on the assignment without producing a working program. So I narrowed the data down and only included the data from students who were in the top quartile of grades… the top 25% in terms of the quality of the code. I should mention that grades in Professor Eisenstat’s class are completely objective: they’re calculated formulaically based on how many automated tests the code passes and nothing else. No points are deducted for bad style or lateness.

Anyway, here are the results for the top quartile:

Project Avg Hrs Min Hrs Max Hrs StDev Hrs
CMDLINE99 13.89 8.68 29.25 6.55
COMPRESS00 37.40 23.25 77.00 16.14
COMPRESS01 23.76 15.00 48.00 11.14
COMPRESS99 20.95 6.67 39.17 9.70
LEXHIST01 14.32 7.75 22.00 4.39
MAKE01 22.02 14.50 36.00 6.87
MAKE99 22.54 8.00 50.75 14.80
SHELL00 23.13 18.00 30.50 4.27
SHELL01 16.20 6.00 34.00 8.67
SHELL99 20.98 13.15 32.00 5.77
TAR00 11.96 6.35 18.00 4.09
TEX00 16.58 6.92 30.50 7.32
ALL PROJECTS 20.49 6.00 77.00 10.93

Not much difference! The standard deviation is almost exactly the same for the top quartile. In fact when you look closely at the data it’s pretty clear there’s no discernible correlation between the time and score. Here’s a typical scatter plot of one of the assignments… I chose the assignment COMPRESS01, an implementation of Ziv-Lempel-Welch compression assigned to students in 2001, because the standard deviation there is close to the overall standard deviation.

Scatter Plot showing hours vs. score

There’s just nothing to see here, and that’s the point. The quality of the work and the amount of time spent are simply uncorrelated.
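If you want to check a claim like that yourself, the Pearson correlation coefficient is easy to compute. This sketch uses invented (hours, score) pairs, not the real class data; an r near zero means no linear relationship.

```python
import statistics

# Invented (hours, score) pairs for illustration only.
hours  = [23.25, 40.0, 15.0, 31.5, 48.0, 19.75, 27.0, 36.25]
scores = [88, 72, 91, 85, 79, 95, 70, 83]

def pearson(xs, ys):
    """Pearson correlation coefficient, in [-1, 1]."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(hours, scores)
print(f"Pearson r = {r:.3f}")
```

A scatter plot with no visible trend is exactly what an r close to zero looks like.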

I asked Professor Eisenstat about this, and he pointed out one more thing: because assignments are due at a fixed time (usually midnight) and the penalties for being late are significant, a lot of students stop before the project is done. In other words, the maximum time spent on these assignments is as low as it is partially because there just aren’t enough hours between the time the assignment is handed out and the time it is due. If students had unlimited time to work on the projects (which would correspond a little better to the working world), the spread could only be higher.

This data is not completely scientific. There’s probably some cheating. Some students may overreport the time spent on assignments in hopes of gaining some sympathy and getting easier assignments the next time. (Good luck! The assignments in CS 323 are the same today as they were when I took the class in the 1980s.) Other students may underreport because they lost track of time. Still, I don’t think it’s a stretch to believe this data shows 5:1 or 10:1 productivity differences between programmers.

But wait, there’s more!

If the only difference between programmers were productivity, you might think that you could substitute five mediocre programmers for one really good programmer. That obviously doesn’t work. Brooks’ Law, “adding manpower to a late software project makes it later,” is why. A single good programmer working on a single task has no coordination or communication overhead. Five programmers working on the same task must coordinate and communicate. That takes a lot of time. There are added benefits to using the smallest team possible; the man-month really is mythical.

But wait, there’s even more!

The real trouble with using a lot of mediocre programmers instead of a couple of good ones is that no matter how long they work, they never produce something as good as what the great programmers can produce.

Five Antonio Salieris won’t produce Mozart’s Requiem. Ever. Not if they work for 100 years.

Five Jim Davis’s — creator of that unfunny cartoon cat, where 20% of the jokes are about how Monday sucks and the rest are about how much the cat likes lasagna (and those are the punchlines!) … five Jim Davis’s could spend the rest of their lives writing comedy and never, ever produce the Soup Nazi episode of Seinfeld.

The Creative Zen team could spend years refining their ugly iPod knockoffs and never produce as beautiful, satisfying, and elegant a player as the Apple iPod. And they’re not going to make a dent in Apple’s market share because the magical design talent is just not there. They don’t have it.

The mediocre talent just never hits the high notes that the top talent hits all the time. The number of divas who can hit the F6 in Mozart’s Queen of the Night aria is vanishingly small, and you just can’t perform the Queen of the Night without that famous F6.

Is software really about artistic high notes? “Maybe some stuff is,” you say, “but I work on accounts receivable user interfaces for the medical waste industry.” Fair enough. This is a conversation about software companies, shrinkwrap software, where the company’s success or failure is directly a result of the quality of their code.

And we’ve seen plenty of examples of great software, the really high notes, in the past few years: stuff that mediocre software developers just could not have developed.

Back in 2003, Nullsoft shipped a new version of Winamp, with the following notice on their website:

  • Snazzy new look!
  • Groovy new features!
  • Most things actually work!

It’s the last part… the “Most things actually work!” that makes everyone laugh. And then they’re happy, and so they get excited about Winamp, and they use it, and tell their friends, and they think Winamp is awesome, all because they actually wrote on their website, “Most things actually work!” How cool is that?

If you threw a bunch of extra programmers onto the Windows Media Player team, would they ever hit that high note? Never in a thousand years. Because the more people you added to that team, the more likely they would be to have one real grump who thought it was unprofessional and immature to write “Most things actually work” on your website.

Not to mention the comment, “Winamp 3: Almost as new as Winamp 2!”

That kind of stuff is what made us love Winamp.

By the time AOL Time Warner Corporate Weenieheads got their hands on that thing the funny stuff from the website was gone. You can just see them, fuming and festering and snivelling like Salieri in the movie Amadeus, trying to beat down all signs of creativity which might scare one old lady in Minnesota, at the cost of wiping out anything that might have made people like the product.

Or look at the iPod. You can’t change the battery. So when the battery dies, too bad. Get a new iPod. Actually, Apple will replace it if you send it back to the factory, but that costs $65.95. Wowza.

Why can’t you change the battery?

My theory is that it’s because Apple didn’t want to mar the otherwise perfectly smooth, seamless surface of their beautiful, sexy iPod with one of those ghastly battery covers you see on other cheapo consumer crap, with the little latches that are always breaking and the seams that fill up with pocket lint and all that general yuckiness. The iPod is the most seamless piece of consumer electronics I have ever seen. It’s beautiful. It feels beautiful, like a smooth river stone. One battery latch can blow the whole river stone effect.

Apple made a decision based on style, in fact, iPod is full of decisions that are based on style. And style is not something that 100 programmers at Microsoft or 200 industrial designers at the inaptly-named Creative are going to be able to achieve, because they don’t have Jonathan Ive, and there aren’t a heck of a lot of Jonathan Ives floating around.

I’m sorry, I can’t stop talking about the iPod. That beautiful thumbwheel with its little clicky sounds … Apple spent extra money putting a speaker in the iPod itself so that the thumbwheel clicky sounds would come from the thumbwheel. They could have saved pennies … pennies! by playing the clicky sounds through the headphones. But the thumbwheel makes you feel like you’re in control. People like to feel in control. It makes people happy to feel in control. The fact that the thumbwheel responds smoothly, fluently, and audibly to your commands makes you happy. Not like the other 6,000 pocket-sized consumer electronics bits of junk which take so long booting up that when you hit the on/off switch you have to wait a minute to find out if anything happened. Are you in control? Who knows? When was the last time you had a cell phone that went on the instant you pressed the on button?



Emotional appeal.

These are what make the huge hits, in software products, in movies, and in consumer electronics. And if you don’t get this stuff right, you may solve the problem but your product doesn’t become the #1 hit that makes everybody in the company rich so you can all drive stylish, happy, appealing cars like the Ferrari Spider F-1 and still have enough money left over to build an ashram in your back yard.

It’s not just a matter of “10 times more productive.” It’s that the “average productive” developer never hits the high notes that make great software.

Sadly, this doesn’t really apply in non-product software development. Internal, in-house software is rarely important enough to justify hiring rock stars. Nobody hires Dolly Parton to sing at weddings. That’s why the most satisfying careers, if you’re a software developer, are at actual software companies, not doing IT for some bank.

The software marketplace, these days, is something of a winner-take-all system. Nobody else is making money on MP3 players other than Apple. Nobody else makes money on spreadsheets and word processors other than Microsoft, and, yes, I know, they did anti-competitive things to get into that position, but that doesn’t change the fact that it’s a winner-take-all system.

You can’t afford to be number two, or to have a “good enough” product. It has to be remarkably good, by which I mean, so good that people remark about it. The lagniappe that you get from the really, really, really talented software developers is your only hope for remarkableness. It’s all in the plan:

Best Working Conditions → Best Programmers → Best Software → Profit!




JD has a report and pictures from the Fog Creek Open House. Thanks to everyone who came!

Also in attendance, deadprogrammer, whose website Deadprogrammer’s Cafe best illustrates, through beautiful photonarratives, my theory that “New York is the kind of place where ten things happen to you every day on the way to the subway that would have qualified as interesting dinner conversation in Bloomington, Indiana, and you don’t pay them any notice.”


One of the most common reactions we keep getting to Fog Creek Copilot is, “Please don’t tell my mom about this!”

Fog Creek Copilot is still in a limited beta. Yesterday we opened it to the first 50 beta testers and today we’re adding another 100.

We’re finding a lot of small bugs and making a lot of improvements. Over the last few days most of the bugs have been deployment issues. Since we’re deploying the service on a web farm with two servers, and most of the development has been done on a single server, we found a few tiny details that needed to be fixed. Nothing major.

We’re also putting a lot of work into the features it takes to make an online service with very high uptime. For example, when we upgrade the reflector part of the service, anybody still using the old reflector can continue to use it until they’re done, while the new reflector picks up the new traffic; this is called “draining.” And if one of the servers goes down, even while people are using it, the clients automatically reconnect to the other server.
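On the client side, the reconnect part can be as simple as walking a server list until something answers. This is only an illustrative sketch, not the actual Fog Creek Copilot code, and the hostnames are made up.

```python
import socket
import time

# Hypothetical failover list; the real service's endpoints are not public.
SERVERS = [("reflector1.example.com", 443), ("reflector2.example.com", 443)]

def connect_with_failover(servers, timeout=5.0, backoff=1.0, max_sweeps=10):
    """Try each server in turn; if one is down, fall through to the next,
    pausing briefly after every full sweep fails."""
    for _sweep in range(max_sweeps):
        for host, port in servers:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue  # this server is down; try the next one
        time.sleep(backoff)
    raise ConnectionError("no reflector reachable")

# In a real client you'd call connect_with_failover(SERVERS) and resume the session.
```

The point is that the caller never sees which server answered; the failover is invisible, which is what makes the uptime story work.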


If you’re in or around New York City today, you won’t want to miss Fog Creek’s Open House today!

  • Meet our summer interns, the Aardvarks, and see a demo of that copilot thing they’ve been cooking up.
  • Celebrate the first beta of Fog Creek Copilot with wine, cheese, and even rumored chocolate-covered strawberries
  • Check out the bionic office in living color before all the expansion-construction chaos begins
  • Meet the filmmaker, working on a documentary on software development. You might even be in the movie!
  • See the fish, the new server rack, the polarizing film, the treasure chest, etc.

No rsvp necessary.

THURSDAY, July 14, 5:30 pm – 7:00 pm.

Fog Creek Software, 535 8th Ave. (near 37th St.), 18th Floor

A, C, E to 34th Street

Project Aardvark Midterm Report

Project Aardvark, if you’ve been following along with the blog, is our summer interns’ new product. We’ve got four interns here (three in development, one in marketing) putting together a complete product from beginning to end. Now that they’ve officially announced what it’s all about and we’re about to start the first beta, I can bring you up to date on the project, which is more or less halfway done.

The Idea

If you’ve ever tried to help your technologically-challenged uncle fix his computer problems over the phone, you know what a pain in the butt it can be to try to walk him through the fix.

“Click START”


“Start. Click Start. It’s in the bottom left.”

“I have C – T – R – L in the bottom left.”

“The bottom left of your screen.”

“Oh. OK, I clicked it.”

“OK, now click RUN.”


“On the menu that came up. Click RUN.”

“It’s not there.”

“What do you mean it’s not there?”

“It’s not there. I don’t have a RUN.”

“What do you see? Read me everything you see.”

“Recycle Bin… My Computer… Anna Navratilova J P G…”

“No, on the menu.”

“What menu?”

“The menu that came up when you clicked start.”

“When I what?”

This is when you give up and realize that something that could take you 10 seconds to fix in person is about to become a two hour nightmare during which you’ll alienate your family, lose sleep, tie up the phone line while your Auntie Marge is stuck on the turnpike with no gas and can’t get through to your uncle to come rescue her, and curse your lot in life. Just because you’re a programmer doesn’t mean you have to be the help desk for a dozen friends, relatives, and the people in the apartment next door. Does it?

That’s the general idea behind the new Fog Creek CopilotSM service. In a nutshell, you go to the Fog Creek Copilot website and get an invitation code. You tell your uncle to go to the same site and type in that same invitation code. You each get a little program to download and run. When you run the program, your uncle’s computer screen shows up in a window. When you move your mouse, his mouse moves. When you type something, it appears on his computer. Etc. And now you fix the problem and log off, and peace is restored and your aunt gets home safely and your uncle dances at your wedding instead of boycotting it and holding up unpleasant signs across from the hotel where said wedding is taking place.

But but but…

Yes, similar services already exist. That never stopped me before. I’d like to point out that Fog Creek has been doubling in revenues every year mostly thanks to bug tracking software, and it’s not like we invented bug tracking software. There are a few things our product will do better than the competition, but mostly we just want the Fog Creek Copilot experience to be shockingly seamless. It’ll be totally secure, it’ll be cheap, it’ll be painless, it will work through firewalls on either side so you can help mom at home on her firewalled DSL from behind your NAT at work without a hiccup. We even made it so that the little software program you download is totally self-contained, totally pre-configured, and deletes itself when you’re done so you can feel more secure about the whole episode. There’s no commitment; you don’t have to sign up or create an account and remember a password; you can even make your uncle pay since, after all, he’s getting the benefit.

For the geeks in the audience, the service uses a highly customized and optimized version of VNC, but it also requires a customized “reflector” service that we’re building which sits outside of any firewalls. The idea is, since you can’t connect into mom’s computer which is behind a firewall, she’ll connect out to our server, you’ll connect out to our server too, and the reflector will forward data back and forth between the two of you.
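Stripped of the VNC specifics, a reflector is just a byte relay. Here’s an illustrative Python sketch of the core idea; the real service’s pairing by invitation code, encryption, draining, and scaling are all omitted.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes its side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def reflect_once(listener):
    """Accept two outbound connections and relay traffic both ways."""
    a, _ = listener.accept()  # e.g. the helper connects out to us
    b, _ = listener.accept()  # e.g. the person being helped connects out too
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    pipe(b, a)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # 0 = any free port; a real reflector uses a well-known one
listener.listen(2)
# reflect_once(listener)  # blocks until both sides have connected
```

Because both parties make outbound connections, neither firewall ever has to accept an incoming one; the reflector in the middle does nothing but shuttle bytes.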

How’d you get the idea?

For the last few years we’ve been supporting FogBugz customers using a similar scheme, although it’s a bit of a pain to set up. Our customers have to follow 7 steps to allow us to control their computers, and we’ve found that walking people through these steps on the phone takes an average of 5 minutes. With the Fog Creek Copilot service we’ll just tell them to check their email and click on a link and hey presto! we’re fixing their computer.

So the original idea was to use this for tech support. But when I told the idea to the interns, two out of four said, “yeah, I could use something like that to help my mom.” That’s when we realized there’s a huge world out there of Informal Tech Support… lots of people trying to help Uncle Leo who can’t use products like VNC because of the firewall problem. So we changed the focus of release 1 to be the casual and home user instead of tech support departments.

The Name

We had a list of important criteria for the name, but the most important one was that when someone read the name to someone else over the phone, it would be extremely likely that they would get it right. This ruled out names that are weirdly spelled, names which could be easily confused over the phone (for example “m” and “n” are almost impossible to distinguish), and names that have different possible spellings. We went rather too far along the process of investigating the name “Fixant” (complete with a very cool drawing of an ant holding an ethernet cable) before I just got everyone together in a room for a half hour of brainstorming, when we finally hit upon the idea of “Copilot.” I can’t even remember who thought of it. The idea of brainstorming is just that you shout out ideas, which stimulate other people to have ideas, and you put them all up on a whiteboard.

Well, there are a couple of dozen products named Copilot, many with registered trademarks, so our trademark lawyer advised us to use Fog Creek Copilot which would eliminate any possibility of confusion with those other Copilot brand products. The point of trademark laws is that what you’re not allowed to do is create any confusion or potential confusion as to the origin of your product, and sticking “Fog Creek” in front guarantees that, but we have to be religious about always using the full name. I didn’t really mind, having started my career working on products like Microsoft Excel, Microsoft Visual Basic for Applications for Microsoft Excel, etc. etc. After a few weeks on the Microsoft Excel team if you ever saw the word “Excel” without a “Microsoft” in front of it, it looked nekkid.

We bought the domain name for more money than we spent developing the first version of FogBugz, oy gevalt, but it is a really good name — easy to spell, pronounce, and it even sort of suggests what the product does, which makes it more memorable.

The Conference

For some reason, a long long time ago, I had agreed to give the keynote speech for CFUNITED, a conference about ColdFusion.

“I never used ColdFusion!” I protested.

“Don’t worry. Nobody has. The biggest sponsor of this conference is Microsoft, who have a huge presence trying to get the ColdFusion developers to switch to VB.NET,” the organizers told me.

As luck would have it, the conference timing was perfect for the first feature-complete version of the interns’ code. It gave the team a deadline to work to. At the conference itself, we set up a Fog Creek booth and the interns gave demos to several hundred attendees who wandered by.

This was the first trade show Fog Creek had ever attended. The truth is, a trade show is not a very cost-effective way to reach potential customers. Given the cost of travel, hotels, the booth, a thousand bucks for nice brochures, and everybody taking a week off of work, it’s a really expensive way to get in front of prospects, especially since I can write an article on my website and get in front of 1000 times as many people.

But that’s not really the point: the point is to have interactive experiences with your customers. You can try out lots of different pitches and really listen to how people respond to them, which is something you can’t do in non-interactive marketing like web sites and magazine ads. I learned this from Eric Sink, who wrote a great article on the topic, Going to a Trade Show.

We went down to Washington in two big SUVs with all four interns, one of the FogBugz developers, Brett, who gave demos of FogBugz, and me. Our booth probably looked just a little bit too much like a science-fair exhibit, but, whatever, it was our first attempt. Next time we’ll know to make backdrop posters that stretch right out to the edge of the backdrop, which looks a bit more professional, and I’ll remember to bring a lucite brochure dispenser instead of arranging the trifold brochures artfully in the shape of an Aardvark.

But that’s not really a big deal. What was a big deal is that we got to talk to hundreds of potential customers, and, wow! the response was just incredible. At the very best, the response we got was, “I need a thousand of these yesterday for my whole team.” Almost everyone was impressed by the product and knew that they wanted to use it. A very small number of people were aware of other competitors and other solutions to the problem, but mostly people gave us very positive feedback. More importantly, after spending two days pitching the product again and again to lots of different people we learned the most effective ways to present it. We learned that the best way to present it was not to start with, “You know VNC?” This drew glassy stares. The best way was to start with a typical remote support scenario. “Your mom calls you up. She says her screen is half grey. You have no idea what the heck she’s talking about.”

The Beta

We got back last Friday and immediately started working on the beta. The goal of this summer internship is to be shipping to paying customers by the end of the summer, and I didn’t want our interns to go home after “mostly” finishing the code, leaving us permanent employees to debug until next February, so we’re working on a really compressed schedule. Over the next few weeks, we’ve got to:

  • Deploy the site and service on our web farm
  • Launch a controlled, private beta so a few people can start trying the service and we can figure out if it works in the real world. Yaron is taking beta applications now.
  • Launch a wider public beta
  • Do a round of usability tests in the lab. The nice folks at TechSmith have a product for usability testing called Morae, and they’ll be coming out to New York and helping us organize and run the usability tests. Morae lets you set up a complete usability lab using nothing more than their software and a little webcam
  • Start serious QA.

So far we’ve really been hitting the schedule so I’m pretty confident. In the meantime, you can:

  • Follow along with the Project Aardvark blog!
  • Apply for the beta! If you’re accepted, you can try the Fog Creek Copilot service yourself!
  • If you’re in New York, come to the Project Aardvark Open House! You’ll get free wine and cheese and you can pepper the interns with questions about sockets programming in person. The open house is Thursday, July 14, 2005, from 5:30 PM – 7:00 PM, at Fog Creek Software, 535 8th Ave., 18th Floor, New York.