Pride Goeth Before a Profit

My eye was caught by this quote in the Harvard Business School Working Knowledge newsletter, which describes how profits rise when teams can take pride in their work: "'Relationship, trust, and pride are all intertwined,' says General Motors' Rick Sutton, the site manager for two Saginaw, MI, power-train plants employing 3,000 people. 'The way I look at it is, in order to build pride, you have to have trust, and in order to have trust, you have to have a relationship. So you've got to figure out how to connect and spend time with people.'"

Do more. Do less.

One of my colleagues explained why he thought XP was so successful: "Do more of what is working. Do less of what is not working. Perform experiments instead of guessing, measure results." I think that this sums up nicely how a pragmatic and outcome-led Agile approach to software development works.

Cannot measure technical debt?

Martin Fowler has posted that technical debt cannot be measured. It probably is impossible to establish an exact measurement method for something so complex. But a team of developers always knows when its velocity is being undermined by technical debt that needs refactoring away. In general I find that talking openly with a development team provides a more useful and reliable picture of the state of affairs than any number-crunching ever could.
So instead of aiming for a precise metric, why not just ask the team how much technical debt they think has accumulated? The answers can be classified (e.g. None / Some - work mostly unaffected / Some - work often affected / Lots - work constantly affected) so that a fair decision can be made about how much debt to repay in the current iteration.
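For illustration only, here's a minimal sketch (in Python) of how such a poll might feed an iteration-planning decision. The category names come from the list above; the repayment percentages and the pessimistic tie-break are assumptions of mine, not part of any established method.

from collections import Counter

# The possible answers, worst first, each with an assumed fraction of the
# iteration to reserve for paying down debt.
DEBT_LEVELS = [
    ("Lots - work constantly affected", 0.30),
    ("Some - work often affected", 0.15),
    ("Some - work mostly unaffected", 0.05),
    ("None", 0.00),
]

def repayment_share(answers):
    """Return (consensus level, fraction of the iteration to spend on debt).

    Uses the most common answer; ties are broken towards the more
    pessimistic level because the levels are listed worst first.
    """
    counts = Counter(answers)
    top = max(counts.values())
    for level, share in DEBT_LEVELS:
        if counts[level] == top:
            return level, share

# Example: a six-person team answers the question at iteration planning.
answers = (["Some - work often affected"] * 3
           + ["Some - work mostly unaffected"] * 2
           + ["None"])
print(repayment_share(answers))  # ('Some - work often affected', 0.15)

Whether you follow the majority or the most pessimistic voice is the team's call; the point is simply that a rough, shared classification is enough to have the repayment conversation every iteration.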

Failure isn't always a dirty word

I'm trying to tease out connections between Hal Macomber's blog post about being a pessimist (maybe!) when it comes to the chances of project failure, Martin Fowler's thoughts on failure and productivity, and Dale Emery's human-centered definition of project success. I'm not sure I've succeeded but hopefully I can massage these wandering thoughts into something more concrete in the future.

On the whole I think that the word 'failure' has such a negative effect on those we apply it to that we should spray it around a little less. Is a 100% project success rate realistic? (And if not, then what is?)
- There are plenty of high-risk projects with a possible high return, e.g. where the technology is at a research stage. If we fail on one of these - when we were likely to fail - then is that 'bad'? (And when we succeed against the odds do we recognise that either?)
- Some percentage of projects will suffer mishaps due to special causes rather than common causes, so is it fair to label these teams as failures?

Sometimes calling an IT initiative a failure is just part of the corporate blame game. It can be politically expedient to have a scapegoat but perhaps some failures should be reframed in terms of the potential for future success?
- Throw one away: we didn't achieve X and Y but we still achieved Z (and learnt more about X and Y along the way), so we'll follow Fred Brooks's advice and start over. (And I know he really said we should plan to throw one away, but in the context of this reframing it still seems appropriate.)
- We live to fight another day: Project J didn't deliver, but the 11-person team have learnt more about this new technology, they aren't about to quit or take a month off sick, and they will be a real help to Project K next week. (The opposite of 'Success By Death March'.)
- The Law Of Unintended Consequences: version 1 of the application wasn't really what the customer wanted, but the business environment just changed and quite fortuitously that will make version 2 much easier to complete. (How likely is this? I once saw exactly this situation on a small scale, so it is possible - though I'll concede it's not going to happen often.) On a related note, the chequered early history of an application like Excel shows that a change in the technology environment can also usher in success for a one-time software 'failure'.

At the other end of the spectrum are situations where the spectre of cancellation has to be invoked before the project gets the attention it needs (because for some reason cancellation is often seen as the worst way to fail).
- "At the end of our first iteration it seems likely that you won't be able to rely on the SnakeOil technology you want to roll out world wide so we're not going to be able to deliver" (subtext: but we think we know of some alternatives so let's talk about them.)
- "Your business representatives are always too busy to talk to us, our team is still short the six developers the IS department promised us, we can't get Security to give us access to the building where the users are, and quite frankly I don't think we have a chance" (subtext: but if you want us to succeed, then please demonstrate it by helping to sort these problems out.)

It's easy to say that projects fundamentally fail because they weren't set up for success, but until we develop 20/20 foresight that's not a particularly useful statement. It's almost as trite as saying that if none of my projects have failed so far, then none of my projects will ever fail. (And what are the chances of that?!) Perhaps the only way forward for the whole failure / success debate is to state up-front (and adjust as we go) what the binary tests are that tell us whether we're succeeding or failing on this project.
I got thinking about this because Industrial Logic are doing some fascinating work with management tests. From what I can tell (at a distance), my concept of project tests is similar - though I can imagine a suite of project tests that collectively plot a point somewhere on a two-dimensional scale whose axes run from succeeding to not succeeding and from failing to not failing. (And if the project falls into the succeeding AND failing quadrant then I guess the tests must be contradictory!) Such tests would have a life (and usefulness) far beyond the duration of the initial software development effort.
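To make the idea a little less abstract, here's a hypothetical sketch (again in Python, and entirely my own invention rather than anything Industrial Logic have published): each project test is a yes/no question tagged with the axis it informs, and the proportion of tests holding true on each axis plots the project as a point in the succeeding / failing plane.

# Each 'project test' is a yes/no question, the axis it speaks to, and
# whether it currently holds true. The example questions are made up.
PROJECT_TESTS = [
    ("Customer accepted the stories from the last iteration", "succeeding", True),
    ("Velocity is within 20% of what we planned against", "succeeding", True),
    ("The release date has slipped by more than one iteration", "failing", False),
    ("A blocking issue has been open for more than two weeks", "failing", True),
]

def project_point(tests):
    """Return a point on the two-dimensional scale: the fraction of
    tests holding true on the 'succeeding' axis and on the 'failing' axis."""
    totals, trues = {}, {}
    for _question, axis, holds in tests:
        totals[axis] = totals.get(axis, 0) + 1
        trues[axis] = trues.get(axis, 0) + (1 if holds else 0)
    return {axis: trues[axis] / totals[axis] for axis in totals}

print(project_point(PROJECT_TESTS))  # {'succeeding': 1.0, 'failing': 0.5}
# A point that is high on both axes means the tests contradict each other,
# which is itself worth knowing about how the project is being judged.

The particular questions would of course be agreed up front with the project's sponsors and adjusted as the project goes on.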

Doing it all the time

One of the ideas underlying XP is "If it's worth doing, then we'll do it all the time". But doing something all the time can change the way that we feel about doing it.
I was reminded of this when discussing photography with Steve Purcell. He remarked that taking photos seems much easier when carrying a camera around all day: after an hour or so he no longer has to actively look out for worthwhile compositions as they seem to leap out by themselves. I think that's similar to how it feels to practise Test Driven Development or Pair Programming.
If you only do these XP practices occasionally then they seem odd or unnatural, and it takes an effort to stay focussed and achieve results. (The developer is focussing on the act of development, rather than on the code that she is trying to develop.) But if you do them all the time then you get into a rhythm: the tests seem to write themselves, it doesn't matter which of you is typing, and before you know it the card that you're working on is complete...

Postscript: a couple of people have mentioned the parallels between "the rhythm" described in this post and what sports people call "being in the zone" - the mind / body state that leads to heightened athletic performance. So now I'm wondering what a simultaneous PET brain scan of both members of a high-performance pair would look like. Any medical scientists out there looking for an off-the-wall research project?!

Software Descents

Back in 1992 American Programmer magazine published an article by Jim Highsmith that draws parallels between mountaineering and software development. Jim called his piece 'Software Ascents', and - much as I like the content - I think that the title misses the mark: reaching the top of a mountain is often the easiest part of a climb. The tired, cold, or dark descent to safety is the hazardous sting in the tail that truly tests mountaineers to the limit. (See for example the tragedy of Whymper's party on the Matterhorn, the epic of Doug Scott's broken legs on The Ogre, or Joe Simpson's experience in Touching the Void.)
So which part of a software project is like the descent from a mountain summit? Well, on a waterfall project I think that the testing phase represents that long, slow trek back to base camp: the developers reach their summit at the end of the coding phase only to find that there is a hard slog downwards before the project is finished. And what about an agile project? Unfortunately, agile projects can be even more treacherous: each release - especially the first one - can be mistaken for an opportunity to make the big push to the top. And then you stumble and fall on the descent down the other side.
Trying to 'peak' for each release diminishes your ability to successfully complete the next iteration, or even the next release. So think of the bigger picture and remember to keep some energy in reserve for the descent. You'll be glad you did.