As many of our regular readers know, product development at Latitude Geographics has taken on an Agile flavor over the last few months. While it hasn't been without its challenges, I think most participants would agree that, all things considered, it has been a good experience. We've been stretched and I think the effort is paying off. Here is an overview of a few of our Agile practices.

Short Development Iterations. We chose to run our products group on a three-week iteration. Feature development begins on a Monday and ends with a software release on the Friday three weeks later. The thinking was that two weeks would be too short to get anything meaningful done, while four weeks would be long enough to lose focus. This has worked well for us. It is always a challenge to finish everything we had planned, but arguably that would be true of any iteration length.

Unit Testing. Given the heavy GUI orientation of our product, it has been challenging to write good, comprehensive unit tests for many of our features. The strategy we've adopted is to factor business logic out of our GUI code and to write unit tests against that logic. We've also automated our User Acceptance Testing using a test tool called Selenium. We've found that once our tests are automated, we can re-run them as often as necessary, catching errors early while they are inexpensive to fix. (Finding an available developer to fix the bugs is another issue altogether.) We're still early in our unit testing experience; however, the value of writing unit tests, and of re-running them after they've been written, has already been demonstrated on a number of occasions.
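To make the factoring-out strategy concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the scale-snapping function stands in for a piece of map logic pulled out of a GUI event handler, and none of the names come from our actual codebase. The point is that once the logic takes plain values and returns plain values, it can be tested without instantiating a single widget.

    import unittest

    # Hypothetical business logic, factored out of the GUI layer so it can
    # be exercised without creating any widgets or a browser session.
    def snap_to_map_scale(requested_scale, scales=(100000, 50000, 25000, 10000)):
        """Return the index of the supported map scale closest to the request."""
        if requested_scale <= 0:
            raise ValueError("map scale must be positive")
        return min(range(len(scales)), key=lambda i: abs(scales[i] - requested_scale))

    class SnapToMapScaleTest(unittest.TestCase):
        def test_exact_scale_maps_to_its_own_level(self):
            self.assertEqual(snap_to_map_scale(50000), 1)

        def test_intermediate_scale_snaps_to_nearest_level(self):
            self.assertEqual(snap_to_map_scale(12000), 3)

        def test_non_positive_scale_is_rejected(self):
            with self.assertRaises(ValueError):
                snap_to_map_scale(0)

    if __name__ == "__main__":
        unittest.main()

The automated acceptance tests work the same way at a coarser grain. As a rough illustration of the shape of a Selenium check (this sketch assumes the Python WebDriver bindings, and the URL and element IDs below are invented), a script drives the browser through a scenario a stakeholder cares about and asserts on what it sees:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical acceptance check: search the viewer and confirm a result
    # appears. Every identifier here is made up for illustration.
    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost/viewer")
        driver.find_element(By.ID, "search-box").send_keys("Victoria, BC")
        driver.find_element(By.ID, "search-button").click()
        assert "Victoria" in driver.find_element(By.ID, "results").text
    finally:
        driver.quit()

Because both kinds of test run unattended, re-running them on every build costs nothing beyond machine time, which is what makes catching errors early so economical.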

Stakeholder Involvement. Before each iteration, we sit down with our stakeholders, examine a candidate list of features for the iteration, and talk about priorities. This has been an excellent exercise for everyone involved: the best way to save development dollars is to not build features no one cares about. Once the list of features is decided, we sit down with our stakeholders again and develop requirements for each feature with them, which minimizes the rework caused by misunderstandings. At the end of an iteration, we get our stakeholders together one more time, give them a demo, and solicit their feedback. While at first glance this level of involvement seems time consuming, and perhaps even a little wasteful, it has actually been a real time saver for us: we are now able to focus only on features that have a real business need behind them.

Code Reviews. For the most part, code reviews are simply hard work. There is a lot of benefit to them, including knowledge transfer and catching obvious issues before they become entrenched, but honestly, it is really difficult to review a feature consisting of several thousand lines of code scattered across a large number of files. I have yet to see an approach to code reviews that reliably finds most of the errors in the code. If anyone out there has had good experiences reviewing code, I'd be very interested in hearing about your approach.