One big theme of FogBUGZ 3.0 was "listen to your customers." If you use FogBUGZ you can get feedback from your customers via email or the web. You can even add a menu item to send feedback in your application, or catch crashes from the field and submit them directly into FogBUGZ like we do with CityDesk. Customer suggestions can become feature items, prioritized and tracked alongside other development tasks.
Here at Fog Creek we started using the email integration component to handle all customer service email that comes into the company. Instead of using a lame single-user email client like Outlook, we now have a web interface that shows everybody the messages as they come in; anyone can assign, track, and prioritize them just like bugs. And a complete record of every customer email interaction is kept in FogBUGZ for all to see, so anybody in the company can pick up the thread of a conversation with a customer. If your company-wide email aliases ("email@example.com") are causing problems, you should look at FogBUGZ, even if you don't use it for bug tracking at all.
Inspired by the Extreme Programming idea of prioritized user stories, FogBUGZ lets you maintain lists of features, ideas, and bugs; keep them prioritized at all times; and always work off the top of your list. If 3x5 cards or big whiteboards don't cut it for you, FogBUGZ is a great way to do Extreme Programming. (Read about how the Six Degrees team did it.)
I'm also happy to say that FogBUGZ does not have certain features that seem sensible but which experience shows reduce the overall effectiveness of the product. I've already talked about custom fields, which are dangerous if used inappropriately, but there are some other things that FogBUGZ intentionally does not do.
For example, inspired by software testing guru Cem Kaner and of course Dr. Deming, FogBUGZ does not provide individual performance metrics for people. If you want a report of which programmer creates the most bugs, or the infamous "which programmer has the most bugs that they allegedly 'fixed' reopened by testing because they were not really fixed," FogBUGZ won't give it to you. Why? Because as soon as you start measuring people and compensating them based on things like this, they are going to start adjusting their behavior to optimize these numbers, and not in the way you intended. Every bug report becomes an argument. Programmers insist on recategorizing bugs as "features." Or they refuse to check in code until the performance review period is over. Testers are afraid to enter bugs -- why antagonize programmers? And pretty soon, the measurements give you what you "wanted": the number of bugs in the bug tracking system goes down to zero. Of course, there are just as many bugs in the code; bugs are an inevitable part of writing software, and they're simply not being tracked anymore. And the bug tracking software, hijacked as an HR crutch, becomes worthless for its original purpose.
We long ago decided that to make FogBUGZ successful, it needs to reflect and enforce as much as we know about good software development practices. That's why I'm very proud of the product we released today. Check it out, there's a free online demo. If you buy the software this month we're giving out free copies of Mary Romero Sweeney's great book. And if you're using any other bug tracking method -- commercial, open source, or Excel files on a server -- we'll let you take 30% off as a competitive upgrade.
I thought that I'd be relaxing this week, after shipping FogBUGZ 3.0 on Monday.
I figured I'd come in late, spend a couple of hours posting some articles I have backlogged for Joel on Software, catch up on the translation effort, and maybe take off the afternoons to see movies.
Murphy had other plans.
The number of users trying the free online version of FogBUGZ surged to unheard-of levels, and the server didn't handle it very well, necessitating a last-minute rearchitecting of the way the trial server creates private databases. Hopefully that is fully in place now and the trial server should be rock solid.
In a perfect world, should we have load-tested the trial server before going live? It seems like a "best practice," right? Maybe not. Let's assume load testing costs 4 engineer days, which seems about right to me. Fixing the server to handle the load actually did cost 4 engineer days.
I need a table.
COST            do load testing    don't load test
server OK       4 days             0 days
server NOT OK   8 days             4 days
Hmm, cheaper not to load test. That is, unless the cost of failure is higher than 4 days of work. The actual cost of failure was that some people couldn't get into their trial databases for about an hour before we noticed and kicked the server. Probably no big deal; not worth 4 engineer days.
I may have drawn the wrong conclusion; maybe one of those people who lost interest was considering a site license for 300,000 IBM employees. But still, you need some kind of economic model to decide where to spend your limited resources. You can't make sensible decisions reliably by saying things like "load testing is a no-brainer" or "the server will probably survive." Those are emotional brain droppings, not analysis. And in the long run we scientists will win.
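To make that kind of economic model concrete, here's a minimal sketch in Python. This is not anything we actually ran; the failure probability and the external cost of failing in public are assumptions you'd plug in yourself. The day figures come from the table above.

```python
def expected_cost(p, test_days, fix_days, failure_cost):
    """Expected engineer-days for each strategy, given an assumed
    probability p that the server can't handle the load."""
    # If we load test: we always pay for the test; if the server
    # would have failed, we also pay to fix it -- but no users are hurt.
    with_test = test_days + p * fix_days
    # If we skip testing: nothing up front; if the server fails we pay
    # for the fix plus whatever failing in public actually costs.
    without_test = p * (fix_days + failure_cost)
    return with_test, without_test

# With no external cost, skipping the test wins for any p < 1.
with_test, without_test = expected_cost(p=0.5, test_days=4, fix_days=4, failure_cost=0)
```

The decision only flips once the expected cost of a public failure outweighs the fixed cost of testing; that's the whole argument in four lines.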
At Fog Creek we do calculations like this all the time. For example, a lot of our internal utilities and databases are really pretty buggy. The bugginess causes pain, but fixing the bugginess would cost us actual money. It's not worth spending an engineering day to fix a problem that wastes 30 seconds of someone's time once a month. This only applies to software for internal consumption. With the software that we sell, those tiny incremental improvements are the whole reason our software is better and can compete in the marketplace.
In other words: with internal software, there are steeply diminishing marginal returns as you fix smaller and smaller bugs, so it is economically rational for internal software systems to be somewhat buggy, as long as they get the job done. After that, bug fixes become dead weight.
With commercial software in highly competitive markets (like software project tracking) virtually all your competitive advantage comes from fixing smaller and smaller bugs to attain a higher level of quality. That's one of the big differences between those two worlds.
FogBUGZ sales are through the roof, which is nice, of course, but it means that the usual 5% of people with weird configuration issues who need help are taking up a lot more time than usual.
And no matter how perfect your SETUP program or your web site, you always get phone calls from potential customers who just want to chat with a human for 10 minutes, and then they'll buy. OK.
All in all, it's been busy and hectic when I was hoping for a quiet week. Ah well.
Abstractions fail. Sometimes a little, sometimes a lot. There's leakage. Things go wrong. It happens all over the place when you have abstractions, and it's dragging us down. Read all about it in my latest feature article: The Law of Leaky Abstractions.
Joel on Software in Russian is now live with the first 7 articles. Many thanks to Alexander Bashkov, Alexander Shirshov, Alexey Simonov, Dmitriy Mayorov, Marat Zborovskiy, Marianna Evseeva, Michael Zukerman, Petr Gladkikh, Sergey Kalmakov, Simon Hawkin, and Yury Udovichenko who translated and edited these articles. More are on the way!
Bad Spam Filters
Spam is getting worse and worse. My incoming spam ratio is well over 50% by now. SpamAssassin catches and tags most of it; these are automatically shuttled into a "Spam" folder. About once a week, it takes me 15 seconds to make sure there's nothing important in there and throw it out.
On the other hand, overzealous system administrators are causing serious damage to the connectivity of the Internet by imposing draconian spam filters. The Joel on Software mailing list is operated by a legitimate email delivery company with strong anti-spam policies; it is double opt-in, of course. Increasingly, emails sent to the mailing list are getting bounced -- not tagged -- before they even get to the users. In the last half hour, five people tried to sign up, but the confirmation email didn't even get to them. Apparently my mailing list provider's IP address is now blacklisted by SpamCop. OK, fair enough. But if you or your ISP is using a spam filter that bounces mail, you're going to lose things you didn't want to lose. So don't do it. Use a tagging system instead -- have the spam filter add a tag like "***SPAM***" to the subject line, and let your email client shuttle those messages off to another folder.
Here's what I'd like to see: a system that delivers an email for one cent. Nobody has to use it, but if you want to get your messages through, you pay one cent and the system delivers it for you. Every spam filtering system on earth can safely whitelist all email that comes from the one cent server, because no spammer can afford the penny times the 19 million messages they send. I would use it for all my email. You could even give 3/4 of a cent to the recipient as a credit to use for sending their own mail, keeping the 1/4 cent to pay for the servers. Eventually, if it caught on, you wouldn't need a spam filter: just put all the free email in a suspect folder, and check it once a week in case some old-school holdouts insist on sending you email without paying.
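The back-of-the-envelope arithmetic behind that claim; the 19-million-message figure is from the paragraph above, and the 20-messages-a-day sender is a made-up example of a legitimate user:

```python
STAMP = 0.01                          # dollars to deliver one message

# For a legitimate user sending, say, 20 messages a day, the stamp is
# trivially cheap; for a spammer blasting 19 million messages, it isn't.
daily_user_cost = 20 * STAMP          # about $0.20 a day
spam_run_cost = 19_000_000 * STAMP    # about $190,000 per blast

# The proposed split: 3/4 of a cent credited back to the recipient,
# the remaining 1/4 of a cent to keep the servers running.
to_recipient = STAMP * 3 / 4
to_servers = STAMP - to_recipient
```

The asymmetry is the whole design: a price that rounds to zero for people and rounds to bankruptcy for spam.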
A few months ago I read The Goal, by Eliyahu M. Goldratt, mainly because it has become extremely popular at business schools and it looked fun. It was both. I didn't understand how the book's theory, called the Theory of Constraints, could possibly be applied to software development, but I figured if I ever found myself running a factory again, it would be helpful.
Last week I discovered his newer book, Critical Chain. This book applies the Theory of Constraints, introduced in The Goal, to project management, and it seems to really make sense.
Let's say you're creating Painless Software Schedules. Most people's intuition is to come up with conservative, padded estimates for each task, but they still find that their schedules always slip. Goldratt shows that the slip happens precisely because we pad the estimates for each step, which leads to three problems:

1. With so much padding in each task, people start late and let the work expand to fill the estimate (the "student syndrome," plus Parkinson's Law).
2. Delays in one task are passed down the chain, but early finishes almost never are; nobody reports being done ahead of schedule.
3. The padding ends up protecting individual task dates rather than the only date that matters, the delivery date of the whole project.
Goldratt's solution is to choose task estimates that are not padded: each individual task's estimate should be exactly in the middle of the probability curve, so there is a 50% chance you will finish early and a 50% chance you will be late. You should move all the padding to the end of the project (or milestone) where it won't do any harm.
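A quick simulation makes the point. This is my own sketch, not from the book; the lognormal task-duration distribution and every number in it are made up for illustration.

```python
import random

random.seed(1)
N_TASKS, TRIALS = 10, 10_000

def task_duration():
    # A skewed "true" duration: median ~5 days, occasionally much longer.
    return 5 * random.lognormvariate(0, 0.4)

# Strategy 1: pad every task to its own 90th percentile.
samples = sorted(task_duration() for _ in range(TRIALS))
p50, p90 = samples[TRIALS // 2], samples[int(TRIALS * 0.9)]
padded_schedule = N_TASKS * p90

# Strategy 2: schedule each task at its 50/50 estimate and put one
# shared buffer at the end, sized to the project's 90th percentile.
projects = sorted(sum(task_duration() for _ in range(N_TASKS))
                  for _ in range(TRIALS))
pooled_schedule = projects[int(TRIALS * 0.9)]
buffer = pooled_schedule - N_TASKS * p50

# Overruns and early finishes cancel out across tasks, so the pooled
# buffer buys the same 90% confidence with a much shorter schedule.
assert pooled_schedule < padded_schedule
```

Per-task padding pays the 90th-percentile price ten times over; the shared buffer pays it once, because variance pools across tasks.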
I can't do justice to an entire book in this short post, but if you're doing any kind of project scheduling or management, I highly recommend that you read both books (read The Goal first no matter what you do, both because it's more entertaining, and because it teaches you the foundation you need for Critical Chain).
Lots of new Spanish translations today.